10 days ago
My app keeps crashing and I am not sure what is causing the crash.
I have looked at all the relevant logs and nothing seems to be pointing towards my code.
Memory is fine (22-30 MB heap), the DB is fine (19 MB), and there are no app-level exceptions. It looks like Railway is sending SIGKILL to my container for a platform-level reason:
- Heap at 22-30 MB (limit 800 MB)
- DB at 19 MB
- No app errors before crash
- Container restarted 4 times in 1 minute with zero diagnostic output
Claude Code has also advised me to open a ticket with Railway about this.
5 Replies
10 days ago
This thread has been marked as public for community involvement, as it does not contain any sensitive or personal information. Any further activity in this thread will be visible to everyone.
Status changed to Open Railway • 10 days ago
10 days ago
Can you share more about your tech stack, network config, and healthchecks?
10 days ago
- Runtime: Node.js 20 (alpine), Fastify 5.7.4
- Database: Railway-hosted PostgreSQL
- Prisma ORM with connection pooling
- Docker: Multi-stage build, node:20-alpine, V8 heap capped at 800MB
- Frontend: Hosted separately on Vercel (Next.js) — not on Railway
- Health checks configured in the Railway UI
- No cron schedule
- Serverless: disabled
- Static Outbound IPs: disabled
- Replicas: 1
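For completeness, since pooling came up: Prisma reads its pool settings from query parameters on the connection string. The values below are illustrative placeholders, not my exact config (`connection_limit` and `pool_timeout` are the real Prisma parameter names; host and credentials are made up):

```shell
# Illustrative only: Prisma pool settings live on the DATABASE_URL
DATABASE_URL="postgresql://user:pass@postgres.railway.internal:5432/railway?connection_limit=5&pool_timeout=10"
```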
Browser → Vercel (Next.js SSR/static) → Railway proxy (HTTPS) → your container:3001 (Fastify) → postgres.railway.internal
10 days ago
What is shown for your container usage in the observability tab? If you have no charts there, add a simple one; it will give you memory and CPU usage. Does it really not hit the usage limit?
Do you see a "Stopping Container" log?
10 days ago
Attached screenshot for last 7 days.
No "Stopping Container" log, no shutdown log, no SIGTERM handler firing, nothing. The logs just go silent and then "Starting Container" / "Server started" appears.
10 days ago
I had a very similar issue before and it ended up not being app logic.
From what you shared (logs just go quiet, then the container starts again, no app exception), this looks more like the process dying at the native/runtime level than in your Fastify code itself.
One thing that bites a lot of people: Prisma's native engine on node:alpine (musl libc).
If you can, try a quick A/B test:
- switch image from node:20-alpine to node:20-bookworm-slim
- regenerate Prisma client in that environment
- redeploy and watch for 12-24h
If the random restarts stop, that’s probably it.
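Roughly, the Dockerfile change would look like this. This is a sketch, not your actual build: the stage layout, `npm run build` step, and `dist/server.js` path are assumptions you'd need to adapt:

```dockerfile
# Sketch: swap node:20-alpine (musl) for node:20-bookworm-slim (glibc)
FROM node:20-bookworm-slim AS build
WORKDIR /app
COPY package*.json ./
COPY prisma ./prisma
# Regenerate the Prisma client inside the glibc environment
RUN npm ci && npx prisma generate
COPY . .
RUN npm run build

FROM node:20-bookworm-slim
WORKDIR /app
COPY --from=build /app ./
# Keep your existing 800 MB V8 heap cap
CMD ["node", "--max-old-space-size=800", "dist/server.js"]
```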
Also, capturing the real process exit code in your entrypoint helps a lot:
- 137 = SIGKILL (128 + 9, usually a memory/system kill)
- 139 = SIGSEGV (128 + 11, segfault/native crash)
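A minimal sketch of such an entrypoint wrapper (the wrapped command is a placeholder; pass your real start command as arguments):

```shell
#!/bin/sh
# entrypoint.sh (sketch): run the app and report its real exit code
# before the container dies. Usage: entrypoint.sh node dist/server.js
run_and_report() {
  "$@"
  code=$?
  # Codes above 128 mean the process died from signal (code - 128):
  # 137 = 128 + 9  (SIGKILL, often the OOM killer)
  # 139 = 128 + 11 (SIGSEGV, native crash)
  echo "process exited with code $code" >&2
  return "$code"
}

run_and_report "$@"
```

With this in place, the last log line before a restart tells you whether the kernel or platform killed the process (137) or it crashed natively (139).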
Given that your memory chart looks low, I'd personally test the alpine -> debian base image swap first.