4 months ago
My worker (a Rust project) is failing shortly after startup with an OOM error. When I look at the Railway dashboard's memory usage for this process, I see about 10 GB of RAM in use at the time the process is killed.
That would seem to be within the allowed limits of the Pro plan I am on. Help?
4 Replies
4 months ago
Hey there! We've found the following might help you get unblocked faster:
🧵 Hey my app keeps breaking and the workers returns ERRORS or rather stop
🧵 Application Crashes on Deployment - ModuleNotFoundError for MySQLdb
If you find the answer from one of these, please let us know by solving the thread!
4 months ago
Hmmm,
Do you have anything in place that would kill the process if its memory went above a cap? I don't really see logs on our end for this.
Status changed to Awaiting User Response Railway • 4 months ago
4 months ago
I have logged memory usage at about the time it is killed, and it shows 11,720 KiB in use (about 11 MiB). I have nothing programmatic that would kill the process. Here is the very end of the log:
2025-11-11T22:11:09.119850Z INFO worker_swaps: heartbeat: enqueued=0 flushed=0 flush_errs=0 redis_errs=0
Killed
worker exited with code 137
and the startup command:
sh -c '/usr/local/bin/laserstream-worker-swaps; code=$?; echo "worker exited with code $code" >&2; sleep 600'
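For reference (my own note, not something from the logs): exit code 137 is 128 + 9, i.e. the process was terminated by SIGKILL, which is what the Linux kernel's OOM killer sends. You can reproduce that exit code without any memory pressure:

```shell
# 137 = 128 + signal 9 (SIGKILL): the exit code a shell reports
# for a child that was killed rather than exiting on its own.
sh -c 'kill -9 $$' && code=0 || code=$?
echo "exit code: $code"   # prints: exit code: 137
```

So the `worker exited with code 137` line in the wrapper is consistent with an external SIGKILL, not with the worker exiting on its own.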
Status changed to Awaiting Railway Response Railway • 4 months ago
4 months ago
Your worker process was terminated due to an out-of-memory (OOM) event. The reason you don't see this reflected in the memory usage graph is that our metrics polling interval is relatively long, so the process exceeded its memory allocation and was killed before the metrics system could capture and display the spike.
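To help rule out (or catch) a fast in-process spike between metrics polls, here is a minimal sketch of how the worker could log its own resident set size at a short interval. This assumes Linux and the `/proc` filesystem; `rss_kib` is a hypothetical helper, not part of the worker's existing code:

```rust
use std::{fs, thread, time::Duration};

// Read the resident set size (VmRSS) of the current process from
// /proc/self/status. Linux-only; returns the value in KiB.
fn rss_kib() -> Option<u64> {
    let status = fs::read_to_string("/proc/self/status").ok()?;
    status
        .lines()
        .find(|line| line.starts_with("VmRSS:"))
        .and_then(|line| line.split_whitespace().nth(1))
        .and_then(|kib| kib.parse().ok())
}

fn main() {
    // Sample every 250 ms so a fast allocation spike still shows up
    // in the worker's own logs even if the dashboard's longer
    // polling interval misses it.
    for _ in 0..4 {
        if let Some(kib) = rss_kib() {
            eprintln!("rss={} KiB", kib);
        }
        thread::sleep(Duration::from_millis(250));
    }
}
```

Note that this reports the process's RSS only; a container-level OOM kill can be triggered by total cgroup memory accounting, which may not match what the process sees for itself.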
Status changed to Awaiting User Response Railway • 4 months ago
3 months ago
This thread has been marked as solved automatically due to a lack of recent activity. Please re-open this thread or create a new one if you require further assistance. Thank you!
Status changed to Solved Railway • 3 months ago