17 days ago
We have a Node-based application that has been running in production for a long time, and its memory usage hovered between 100 MB and 140 MB.
But on 8th October 2025, there was a sudden spike in memory usage.
After investigating this issue within the company, we checked:
Whether any code changes that could affect the Node application were made on or before 8th October
Any dependency changes
Any changes in infra configuration
But there were no changes in any of the above three.
In our Node application we use the node cluster module to handle replication at the application code level.
Even so, before 8th October the memory usage never exceeded 140 MB, according to the metrics data we have available in the Railway dashboard.
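For context, our setup follows the standard cluster pattern, roughly like the sketch below (simplified and hypothetical, not our exact code): the primary forks one worker per CPU core, so total memory scales with the number of cores the machine exposes.

```js
// Simplified sketch of the usual uncapped cluster pattern (hypothetical, not our exact code).
// The primary forks one worker per CPU core, so total RSS grows with core count.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) { // isPrimary on Node 16+; isMaster on older versions
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Each worker runs its own copy of the HTTP server (port 3000 is just an example).
  http.createServer((req, res) => res.end('ok')).listen(3000);
}
```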
5 Replies
17 days ago
Hey there! We've found the following might help you get unblocked faster:
If you find the answer from one of these, please let us know by solving the thread!
17 days ago
Hello,
Your metrics show 140 MB of RAM being used as of right now. As this usage looks standard and normal to me, I'll open this thread to the community, who can help with lowering idle resource usage!
Status changed to Awaiting User Response Railway • 17 days ago
17 days ago
This thread has been marked as public for community involvement, as it does not contain any sensitive or personal information. Any further activity in this thread will be visible to everyone.
Status changed to Open noahd • 17 days ago
17 days ago
The reason there was a recent decline in memory usage is that I explicitly set a limit (cap) on the number of replicas the cluster can create in Node itself.
That's what makes me wonder: the node cluster package has been in our codebase for a very long time, and even before 8th October 2025 we never set any limit (cap) on the number of replicas it created in Node.js.
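For reference, the cap I added looks roughly like this (a minimal sketch; MAX_WORKERS is just an illustrative name, not our exact config):

```js
const cluster = require('cluster');
const os = require('os');

// Illustrative cap on worker count instead of forking one worker per core.
const MAX_WORKERS = Number(process.env.MAX_WORKERS || 2); // hypothetical env var

if (cluster.isPrimary) {
  const workerCount = Math.min(MAX_WORKERS, os.cpus().length);
  for (let i = 0; i < workerCount; i++) {
    cluster.fork();
  }
}
```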
---
My question is: can Railway help with somewhere I can see logs showing whether the "Resource Limit" was changed previously (if that's possible)?
I'm 99% sure this is not the case, but is there a 1% chance that this happened due to some internal configuration change at the Railway infra level?
17 days ago
hey, it's likely memory jumped because the process count or the heap per process jumped. you can prove this by logging os.cpus().length and the number of cluster.fork calls. you can also measure per-worker memory if you need to: reporting rss, heapUsed, and arrayBuffers from process.memoryUsage() will show how each worker is doing and whether there's a code-level issue. unfortunately, i don't think you have this info available if you didn't previously log any of these. you'll just have to implement them and see if it happens again.
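something like this rough sketch would do it (the worker count, the 30s interval, and the message names are just examples, not the only way to wire it up):

```js
const cluster = require('cluster');
const os = require('os');

if (cluster.isPrimary) {
  // Hypothetical worker count; use however many workers you actually fork today.
  const workerCount = Number(process.env.MAX_WORKERS || os.cpus().length);
  console.log('cpus:', os.cpus().length, 'forking workers:', workerCount);

  for (let i = 0; i < workerCount; i++) cluster.fork();

  // Collect per-worker memory reports in the primary process.
  cluster.on('message', (worker, msg) => {
    if (msg && msg.cmd === 'memory-report') {
      console.log(`worker pid=${worker.process.pid}`, msg.usage);
    }
  });
} else {
  // Each worker reports rss / heapUsed / arrayBuffers every 30s (interval is arbitrary).
  setInterval(() => {
    const { rss, heapUsed, arrayBuffers } = process.memoryUsage();
    process.send({ cmd: 'memory-report', usage: { rss, heapUsed, arrayBuffers } });
  }, 30_000);

  // ...your normal worker/server code goes here
}
```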
can i also ask you to clarify what you mean by a spike in memory? did it go from 140 MB to ~xx GB?