Crashed deploy: ElizaOS template

sosi-fcfs
FREE

5 months ago

Need help deploying Eliza Starter on Railway (with OpenRouter / GPT-4.1-nano)

Hi everyone! I’m trying to deploy eliza-starter on Railway but I’m consistently running into a crash due to llama_local being selected as the model provider — even though I’m explicitly trying to use openai/gpt-4.1-nano via OpenRouter.

Context:

  • I’m using the original template https://railway.com/template/aW47_j.

  • I want to use OpenRouter’s GPT-4.1-nano, not LLaMA.

  • My environment variables:

    • OPENROUTER_API_KEY=...

    • OPENROUTER_MODEL=openai/gpt-4.1-nano

    • CHARACTER_JSON=... (inline-encoded, valid)

  • It works locally when I run pnpm build && pnpm start.

Problem:

  • Railway build completes, but runtime crashes with:

[INFO] Selected model provider: llama_local

[INFO] Initializing LlamaService...

[ERROR] Unhandled error in startAgents: code: "ERR_USE_AFTER_CLOSE"

  • Eliza ignores my OPENROUTER_MODEL and always falls back to llama_local.

Tried:

  • Injecting a full character via CHARACTER_JSON (works locally).

  • Verified all necessary OpenRouter vars are present.

  • Local runs are clean, only Railway fails.

What I need help with:

  1. How do I stop Eliza from defaulting to llama_local on Railway?

  2. Is there a way to override the model provider in production without editing character.ts?

  3. Any tips for injecting character.json without forking?

  4. Is there another reliable way to deploy the ElizaOS template with OpenRouter that I might have missed?

Would really appreciate it if someone has a working example or knows a way around this.

Thanks in advance!

$10 Bounty

3 Replies

sosi-fcfs
FREE

4 months ago

up


splatplays
HOBBY

4 months ago

  1. Editing character.ts

  2. Not that I could find

  3. Railway is generally built to prevent separate processes from interacting, so it would need to live in a fork. I could write a script that rewrites character.ts from an external variable at run time (rough sketch after this list).

  4. Not that I could find
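
Rough sketch of what I mean for 3 (hypothetical paths and variable names; it assumes the fork keeps the provider on one line in src/character.ts, and it has to run before pnpm build so the change lands in the compiled output):

    // scripts/patch-character.ts
    // Rewrites the modelProvider in character.ts from an env var, so you can switch
    // providers on Railway without editing the source by hand.
    import { readFileSync, writeFileSync } from "node:fs";

    const provider = process.env.MODEL_PROVIDER ?? "OPENROUTER"; // hypothetical var; must match a ModelProviderName member
    const file = "src/character.ts";
    const source = readFileSync(file, "utf8");

    // Naive replace; assumes the file contains a line like: modelProvider: ModelProviderName.LLAMALOCAL,
    const patched = source.replace(
      /modelProvider:\s*ModelProviderName\.\w+/,
      `modelProvider: ModelProviderName.${provider}`
    );

    writeFileSync(file, patched);
    console.log(`Patched ${file} to use ModelProviderName.${provider}`);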


lofimit
HOBBY

4 months ago

Maybe Railway isn’t loading your OPENROUTER_MODEL env var at runtime, so Eliza falls back to llama_local. Also, Eliza often reads the model provider from character.json and that might be forcing llama_local.
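
For instance, a throwaway check near the top of src/index.ts (a sketch; the exact file depends on the template, and you would remove it once confirmed):

    // Temporary startup logging: confirm the variables actually reach the container.
    console.log("OPENROUTER_API_KEY set:", Boolean(process.env.OPENROUTER_API_KEY));
    console.log("OPENROUTER_MODEL:", process.env.OPENROUTER_MODEL ?? "<not set>");
    console.log("CHARACTER_JSON length:", process.env.CHARACTER_JSON?.length ?? 0);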

You can check whether OPENROUTER_MODEL is actually set when the app starts on Railway (see the snippet above). You can also set the provider explicitly in your character.json ("modelProvider": "openrouter") and pass it via CHARACTER_JSON to force OpenRouter.
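
Hedged sketch of what I mean (the ElizaOS Character type calls the field modelProvider, settings.model is optional, and I'm assuming CHARACTER_JSON takes the raw JSON on one line; the agent name is illustrative):

    // Character pinned to OpenRouter; keep whatever bio/style/etc. you already use.
    const character = {
      name: "MyAgent", // illustrative
      modelProvider: "openrouter", // the field Eliza checks when picking a provider
      settings: {
        model: "openai/gpt-4.1-nano", // or omit and rely on OPENROUTER_MODEL at runtime
      },
    };

    // One-line JSON you can paste into the CHARACTER_JSON variable.
    console.log(JSON.stringify(character));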

If nothing works, you might have to change the code so it stops defaulting to llama_local.
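
If it comes to that, this is the kind of one-spot override I mean. A sketch only: it assumes an eliza-starter style src/index.ts where the loaded character object is in scope, and ModelProviderName comes from the Eliza core package (import path depends on the version you're pinned to):

    import { ModelProviderName } from "@elizaos/core"; // adjust to your core package/version

    // Before the agent runtime is created: if an OpenRouter key is present,
    // pin the provider so a missing or llama_local modelProvider can't win.
    if (process.env.OPENROUTER_API_KEY) {
      character.modelProvider = ModelProviderName.OPENROUTER; // "character" = the object the starter loads
    }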