Disk Usage Wrongly Counted as Memory
dvap
HOBBYOP

a month ago

I’ve run into what seems to be a serious issue with how Railway measures memory usage when deploying via a custom Dockerfile.

Problem Summary

When a container writes large files to disk (even to a mounted volume), Railway incorrectly counts the size of those files as memory usage.

  • The “Memory” metric in the dashboard increases proportionally to the size of files written to disk.

  • This “memory” never goes down unless:

    1. The file is deleted, or

    2. The service is restarted.

This causes the service’s memory graph to spike permanently, even though the process itself is not actually using that much RAM.
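
For background, Linux keeps recently written file data in the page cache, and standard tools report that separately from process memory. A quick sanity check on any Linux machine (the file name here is arbitrary):

dd if=/dev/zero of=/tmp/cache-demo.bin bs=1M count=200
free -m
rm /tmp/cache-demo.bin

After the dd, free's "buff/cache" column grows by roughly 200 MB while "used" stays almost flat. That cached data is reclaimable by the kernel at any time and shouldn't be charted as memory the process is consuming.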

Reproduction Steps

  1. Deploy a simple image based on debian:bullseye-slim via a Dockerfile (a sample Dockerfile is attached; see "Dockerfile").

  2. Connect to the container via SSH.

  3. Run the following command to download a large file (e.g., 200 MB) into a mounted volume:

curl -o /mnt/volume/testfile.bin https://example.com/some-large-files.bin

I'm not pasting the real URL because the file is private. You can use this command instead to write a 200 MB file; it has the same effect:

dd if=/dev/zero of=/mnt/volume/testfile.bin bs=1M count=200

  4. Observe Railway’s memory usage graph: it increases by ~200 MB.

  5. Run top inside the container (the sketch just after this list shows how to confirm the same thing from the cgroup counters):

    • Actual memory usage remains very low.

  6. Delete the file:

rm /mnt/volume/testfile.bin

  7. Observe that the memory metric immediately drops back to normal.
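
To pin down where the ~200 MB actually lives, compare per-process usage with the cgroup counters from inside the container. This is a sketch assuming Railway runs containers under cgroup v2 (the unified hierarchy; under cgroup v1 the equivalents live under /sys/fs/cgroup/memory/), and procps may need installing in a slim image:

ps aux --sort=-rss | head -5
cat /sys/fs/cgroup/memory.current
grep -E '^(anon|file) ' /sys/fs/cgroup/memory.stat

Per-process RSS stays low, memory.current jumps by ~200 MB, and the "file" line of memory.stat (page cache) accounts for essentially all of the jump.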

Why This Matters

This issue makes Railway deployments unreliable for workloads that temporarily generate or process large files, such as background jobs, file uploads, or batch data tasks. It can easily push a service over its memory limit.

It also leads to incorrect billing: users are charged both for volume storage and for inflated memory usage that doesn’t reflect reality. Charging users for more than they use hurts customer trust.

Evidence

I’ve attached screenshots showing:

  • Execution of the curl and rm commands (see railway-command.png)

  • The Railway memory usage graph (see railway-metrics.png)

  • top output inside the container showing near-zero real memory usage (see railway-top.png)

Conclusion

It seems that Railway’s internal memory tracking is counting filesystem page cache or volume writes toward container memory usage, even when the files are written to a mounted volume.
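
If so, the standard fix on the metrics side is to chart the "working set" instead, the way cAdvisor/Kubernetes-style metrics do: total usage minus reclaimable file cache. A sketch of that calculation under cgroup v2 (paths assumed, as above):

usage=$(cat /sys/fs/cgroup/memory.current)
inactive=$(awk '/^inactive_file /{print $2}' /sys/fs/cgroup/memory.stat)
echo $((usage - inactive))

That difference tracks what the process genuinely needs and would not spike on bulk file writes.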

This behavior differs from standard Docker behavior — running the same container locally does not reproduce the issue.
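
In the meantime, a possible workaround: write with O_DIRECT to bypass the page cache entirely. dd exposes this as oflag=direct (curl has no equivalent flag, and dropping caches via /proc/sys/vm/drop_caches needs privileges a container normally doesn't have):

dd if=/dev/zero of=/mnt/volume/testfile.bin bs=1M count=200 oflag=direct

With O_DIRECT the data goes straight to the volume, so if the page-cache theory above is right, the memory graph should stay flat for this variant.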

Would love to hear from the Railway team whether this is:

  • A known bug, or

  • A limitation of the current resource accounting model,

and whether it can be fixed.

Thanks for taking a look!

$10 Bounty

2 Replies

Railway
BOT

a month ago

Hey there! We've found the following might help you get unblocked faster:

  • [🧵 Memory doesn't decrease on Spring Boot App](https://station.railway.com/questions/memory-doesn-t-decrease-on-spring-boot-a-ac195142)

  • [🧵 accumulating memory with docker/linux](https://station.railway.com/questions/accumulating-memory-with-docker-linux-6b068ea4)

  • [🧵 Memory usage and disk space](https://station.railway.com/questions/memory-usage-and-disk-space-15c5e366)

If you find the answer from one of these, please let us know by solving the thread!

dvap
HOBBYOP

a month ago

These topics don't help: the posters there are facing the same issue but never got an effective solution. The problem lies in how Railway counts memory.

Please use the attached Dockerfile and either the curl or dd command to verify, as described in the "Reproduction Steps" above.
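
For quick verification, the whole check condenses to one SSH session (same cgroup v2 assumption as in the reproduction steps):

dd if=/dev/zero of=/mnt/volume/testfile.bin bs=1M count=200
grep -E '^(anon|file) ' /sys/fs/cgroup/memory.stat
rm /mnt/volume/testfile.bin

The "file" counter should jump by ~200 MB while "anon" stays low, matching what the dashboard wrongly charts as memory.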

