24 days ago
'I have a deployment that starts with normal memory usage, but throughout the day the memory usage gradually increases. The memory only decreases when I redeploy the app, but it rises again with continued use. On the other hand, when running the app locally, the memory metrics decrease regularly. I'm using VisualVM for Java. Because of this, I want to know: does memory usage in Railway decrease by itself at any point, or how does it work?'
I copied this text from another user who had a similar issue. I've tested the application in a Docker container on local machines using Java VisualVM and Eclipse MAT; locally there are no leaks and the GC works fine, but when I deploy it, the memory usage increases slowly and never goes down.
9 Replies
24 days ago
Railway
Hey there! We've found the following might help you get unblocked faster:
- [🧵 Spring Boot App Deploy Out Of Memory](https://station.railway.com/questions/spring-boot-app-deploy-out-of-memory-9477852e)
- [📚 Deploy a Spring Boot App](https://docs.railway.com/guides/spring-boot)
- [🧵 Deployment failed due to build failure for Spring Boot App](https://station.railway.com/questions/deployment-failed-due-to-build-failure-f-0459558a)

If you find the answer from one of these, please let us know by solving the thread!
24 days ago
It's not a problem with the deploy itself; the deploy works fine. I've been using Railway for about 3 months now and my application never crashed or anything, but recently I noticed that the memory usage just keeps going up slowly.
24 days ago
When you tested locally, how many requests did you send? My suggestion is to try some load-testing tools and send 25k/50k requests...
24 days ago
My project is a beta application; currently we have just a single user, so in production we get a very low number of requests, but memory increases anyway. I saw an article saying that Railway has some issue depending on the JDK image, so I tried changing my image to a different one and it seems to have fixed the problem. I will keep monitoring it to see if it increases again. Thanks for the response.
20 days ago
Actually it didn't work; the application still doesn't reduce its RAM usage in production. I would appreciate any other suggestions.
20 days ago
In production environments like Railway, your Spring Boot app runs inside a container with a fixed memory limit. Over time, memory usage may appear to keep increasing because JVM heap and GC behavior differ in containerized environments compared to local machines.

When your app starts, the JVM allocates part of the allowed memory for its heap. As traffic and activity increase, the JVM may request more heap from the container until it reaches its maximum allowed size. The garbage collector (GC) only frees memory inside the heap; it doesn't necessarily release that memory back to the operating system immediately. So even if objects are cleaned up, external metrics (like Railway's dashboard) may still show high memory usage.

When you redeploy, the container restarts and the JVM process ends, releasing all memory back to the host. That's why you see a sudden drop in memory after redeployment. The reason you don't observe this locally is that local JVMs often have more memory headroom and different GC tuning, so they can shrink the heap more aggressively when idle.

If you want to verify it's not a real memory leak:
- Use jmap, VisualVM, or Eclipse MAT on the deployed instance to check whether objects are being retained unnecessarily (see the sketch below for a quick way to compare heap numbers against the dashboard).
- Tune JVM memory flags (like -Xmx and -XX:+UseContainerSupport) and the GC type (G1GC, ZGC) for better behavior in containers.
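As a minimal sketch (assuming a HotSpot JVM; the HeapReport class name is just an example, not anything from this thread), you can log heap used vs. committed vs. max from inside the app and compare those numbers with what the Railway dashboard shows. If "used" stays flat while "committed" sits near the max, the dashboard figure mostly reflects memory the JVM has reserved, not a leak:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapReport {
    public static void main(String[] args) {
        // "used" = heap currently occupied by objects (including garbage not yet collected),
        // "committed" = memory the JVM has actually reserved from the OS for the heap,
        // "max" = the ceiling set by -Xmx / -XX:MaxRAMPercentage.
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memoryBean.getHeapMemoryUsage();
        System.out.printf("heap used=%d MiB, committed=%d MiB, max=%d MiB%n",
                heap.getUsed() / (1024 * 1024),
                heap.getCommitted() / (1024 * 1024),
                heap.getMax() / (1024 * 1024));
    }
}
```

Calling something like this periodically (e.g. from a scheduled task) and comparing it with the dashboard over a day should make it obvious whether the heap itself is actually growing.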
13 days ago
I have the exact same issue! Memory keeps rising slowly for my Spring Boot app until it runs out of memory, and I didn't make any changes to my deployed instance or push new code. The issue started on October 19th after I did a code change and a new deploy (the new code is not causing the memory leak, I can assure you). Memory usage kept rising until the 26th of October and then stabilized until the 2nd of November, when I did another code change and deploy (this code change is also not causing the memory leak). Memory slowly crept up again until it was full and crashed my application. From the 3rd of November until now, my memory looks stable. I have had these instances running for almost over a year without any memory issues, and the new code is not causing the memory issue; I even reverted my code changes to check whether it had something to do with my code, but the application still kept crashing. I'm 100% certain this issue has to do with Railway.
soumya-choudhury
In production environments like Railway, your Spring Boot app runs inside a container with a fixed memory limit. […]
13 days ago
I couldn't use VisualVM or Eclipse MAT against the application in production because Railway doesn't accept the necessary flags in the container; when I tried to deploy with the JMX flags in my Docker image, every build failed. It also isn't compatible with Spring Boot Actuator, which I had to remove from the application to be able to deploy. That's why it has been so difficult to find the error, since I can only test it locally.
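If the JMX flags are off the table, one possible workaround (a sketch, assuming a HotSpot JVM; the HeapDumper class name and the /tmp path are just examples) is to trigger a heap dump from inside the application itself via the HotSpot diagnostic MXBean, then copy the .hprof file out of the container and open it in Eclipse MAT locally:

```java
import java.io.IOException;
import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {

    // Writes an .hprof snapshot of the heap; live=true keeps only reachable objects,
    // which makes the dump smaller and easier to open in Eclipse MAT.
    public static void dump(String path) throws IOException {
        HotSpotDiagnosticMXBean diagnostics =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        diagnostics.dumpHeap(path, true);
    }

    public static void main(String[] args) throws IOException {
        // Dump to a writable path inside the container, then copy the file out and
        // analyze it on your machine. The exact path/trigger is up to you.
        dump("/tmp/heap-" + System.currentTimeMillis() + ".hprof");
    }
}
```

You could wire the dump call to whatever trigger your app already has (a scheduled task, an admin-only endpoint, etc.), so no extra JVM flags are needed at build time.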
4 hours ago
Hi, it looks like I’ve finally resolved the issue.
The root cause was an incompatibility related to the mysql-cj-abandoned-connection-cleanup mechanism. This cleanup thread was retaining references to the classloader, preventing classes from being released properly and causing them to stack up over time.
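In case it helps anyone hitting the same root cause, here is a minimal sketch of one way to keep that cleanup thread from pinning the classloader. The com.mysql.cj.disableAbandonedConnectionCleanup system property is my assumption for MySQL Connector/J 8.0.22 or newer, so verify it against your driver version; the DemoApplication class name is just a placeholder:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        // Assumption: on Connector/J 8.0.22+ this system property stops the driver from
        // starting its abandoned-connection cleanup thread at all. It must be set before
        // the driver class is loaded, so main() is a safe place. Alternatively, the driver
        // exposes com.mysql.cj.jdbc.AbandonedConnectionCleanupThread.checkedShutdown() to
        // stop the thread explicitly on application shutdown.
        System.setProperty("com.mysql.cj.disableAbandonedConnectionCleanup", "true");
        SpringApplication.run(DemoApplication.class, args);
    }
}
```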
Status changed to Solved by brody • about 4 hours ago