18 days ago
My PostgreSQL database ran out of disk space. The database is now stuck in a crash-recovery loop:
1. PostgreSQL starts and successfully replays WAL (redo done at 0/3DFFFF28)
2. Fails when trying to write a checkpoint: FATAL: could not write to file "pg_wal/xlogtemp.30": No space left on device
3. Shuts down and restarts, repeating the cycle
I increased the volume size via the Railway dashboard, but the change doesn't seem to be taking effect - still getting "No space left on device" after redeploy.
Solutions greatly appreciated!!
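For anyone hitting the same loop, here is a minimal sketch of how to check whether a volume resize actually reached the filesystem and how much space WAL is eating, assuming shell access to the container; the data directory path below is the standard Debian-image default and is an assumption, so adjust it to your actual mount path:

```shell
# Sketch: verify whether a volume resize reached the filesystem.
# PGDATA defaults to the standard Debian-image location; this is an
# assumption and may differ on your deployment.
PGDATA="${PGDATA:-/var/lib/postgresql/data}"

# Total/used/available space on the volume backing the data directory
# (falls back to / if the path does not exist on this machine):
df -h "$PGDATA" 2>/dev/null || df -h /

# How much of that space WAL is consuming -- pg_wal filling the disk is
# what blocks the checkpoint write during crash recovery:
du -sh "$PGDATA/pg_wal" 2>/dev/null || echo "pg_wal not found under $PGDATA"
```

If df still reports the old size after a dashboard resize and redeploy, the resize has not propagated to the underlying filesystem.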
5 Replies
Status changed to Awaiting Railway Response Railway • 18 days ago
17 days ago
Volume live resizing is only available on the Pro plan and above, so the resize you applied in the dashboard has not taken effect. On the Hobby plan, volumes are capped at 5GB. Upgrading to Pro would allow the resize to apply and give you up to 50GB (expandable to 250GB).
Status changed to Awaiting User Response Railway • 17 days ago
17 days ago
2026-02-25 20:46:30.010 UTC [7] LOG: starting PostgreSQL 16.11 (Debian 16.11-1.pgdg13+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 14.2.0-19) 14.2.0, 64-bit
2026-02-25 20:46:30.011 UTC [7] LOG: listening on IPv4 address "0.0.0.0", port 5432
2026-02-25 20:46:30.011 UTC [7] LOG: listening on IPv6 address "::", port 5432
2026-02-25 20:46:30.033 UTC [7] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2026-02-25 20:46:30.080 UTC [30] LOG: database system was interrupted while in recovery at 2026-02-25 12:56:32 UTC
2026-02-25 20:46:30.080 UTC [30] HINT: This probably means that some data is corrupted and you will have to use the last backup for recovery.
2026-02-25 20:46:30.573 UTC [30] LOG: database system was not properly shut down; automatic recovery in progress
2026-02-25 20:46:30.603 UTC [30] LOG: redo starts at 0/2154EE10
2026-02-25 20:46:35.051 UTC [30] LOG: redo done at 0/26FFFFA0 system usage: CPU: user: 0.08 s, system: 0.13 s, elapsed: 4.44 s
2026-02-25 20:46:35.061 UTC [30] FATAL: could not write to file "pg_wal/xlogtemp.30": No space left on device
2026-02-25 20:46:35.067 UTC [7] LOG: startup process (PID 30) exited with exit code 1
2026-02-25 20:46:35.067 UTC [7] LOG: terminating any other active server processes
2026-02-25 20:46:35.067 UTC [7] LOG: shutting down due to startup process failure
2026-02-25 20:46:35.081 UTC [7] LOG: database system is shut down
Mounting volume on: /var/lib/containers/railwayapp/bind-mounts/b876240c-5306-4b14-9e18-7688c5ff7cde/vol_by5fz7ud1vwytedq
Mounting volume on: /var/lib/containers/railwayapp/bind-mounts/b876240c-5306-4b14-9e18-7688c5ff7cde/vol_by5fz7ud1vwytedq
2026-02-25 20:50:49.076 UTC [7] LOG: starting PostgreSQL 16.11 (Debian 16.11-1.pgdg13+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 14.2.0-19) 14.2.0, 64-bit
2026-02-25 20:50:49.076 UTC [7] LOG: listening on IPv4 address "0.0.0.0", port 5432
2026-02-25 20:50:49.076 UTC [7] LOG: listening on IPv6 address "::", port 5432
2026-02-25 20:50:49.129 UTC [7] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2026-02-25 20:50:49.279 UTC [30] LOG: database system was interrupted while in recovery at 2026-02-25 20:46:30 UTC
2026-02-25 20:50:49.279 UTC [30] HINT: This probably means that some data is corrupted and you will have to use the last backup for recovery.
2026-02-25 20:50:49.847 UTC [30] LOG: database system was not properly shut down; automatic recovery in progress
2026-02-25 20:50:49.888 UTC [30] LOG: redo starts at 0/2154EE10
2026-02-25 20:50:52.820 UTC [31] FATAL: the database system is not yet accepting connections
2026-02-25 20:50:52.820 UTC [31] DETAIL: Consistent recovery state has not been yet reached.
2026-02-25 20:50:53.346 UTC [32] FATAL: the database system is not yet accepting connections
2026-02-25 20:50:53.346 UTC [32] DETAIL: Consistent recovery state has not been yet reached.
Certificate will not expire
PostgreSQL Database directory appears to contain a database; Skipping initialization
2026-02-25 20:50:53.850 UTC [33] FATAL: the database system is not yet accepting connections
2026-02-25 20:50:53.850 UTC [33] DETAIL: Consistent recovery state has not been yet reached.
2026-02-25 20:50:54.333 UTC [30] LOG: redo done at 0/26FFFFA0 system usage: CPU: user: 0.00 s, system: 0.19 s, elapsed: 4.44 s
2026-02-25 20:50:54.343 UTC [30] FATAL: could not write to file "pg_wal/xlogtemp.30": No space left on device
2026-02-25 20:50:54.347 UTC [7] LOG: startup process (PID 30) exited with exit code 1
2026-02-25 20:50:54.347 UTC [7] LOG: terminating any other active server processes
2026-02-25 20:50:54.348 UTC [7] LOG: shutting down due to startup process failure
2026-02-25 20:50:54.356 UTC [7] LOG: database system is shut down
Mounting volume on: /var/lib/containers/railwayapp/bind-mounts/b876240c-5306-4b14-9e18-7688c5ff7cde/vol_by5fz7ud1vwytedq
moved to pro - still not working...
are you sure? my service has been unavailable for more than 20 hours, it's not a joke.
Status changed to Awaiting Railway Response Railway • 17 days ago
17 days ago
Hello!
We've escalated your issue to our engineering team.
We aim to provide an update within 1 business day.
Please reply to this thread if you have any questions!
Status changed to Awaiting User Response Railway • 17 days ago
17 days ago
Apologies for the earlier incorrect guidance about the Hobby plan limitation. We can confirm your workspace is now on Pro. The volume resize not taking effect after upgrading and redeploying is a known issue we're actively tracking, where the resize is accepted in the dashboard but not propagated to the underlying filesystem. We've escalated this to our platform engineering team for urgent intervention on your volume.
17 days ago
I don't know how to respond to this - my service has been unavailable for 36 hours already. I am not sure how to proceed from here.
Status changed to Awaiting Railway Response Railway • 17 days ago
16 days ago
So sorry for the delay. This should be resolved now. I resized the volume and redeployed. Let us know if you have any issues.
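A quick way to confirm the database came back cleanly after a fix like this (a sketch, assuming the postgresql-client tools are installed and DATABASE_URL holds the Railway-provided connection string):

```shell
# Sketch: confirm Postgres finished recovery and accepts connections.
# DATABASE_URL is assumed to be the Railway-provided connection string.
if command -v pg_isready >/dev/null 2>&1; then
  # pg_isready exits 0 when the server accepts connections and prints a
  # short status line either way; `|| true` keeps the script alive so
  # the message is readable even on failure.
  pg_isready -d "${DATABASE_URL:-postgresql://localhost:5432/postgres}" || true
else
  echo "pg_isready not installed; install postgresql-client first"
fi
```

"accepting connections" in the output means recovery completed and the checkpoint was written; "no response" means the crash loop is likely still ongoing.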
Status changed to Awaiting User Response Railway • 16 days ago
Status changed to Solved jake • 6 days ago