a month ago
What happened
I have a Railway Postgres service running PostgreSQL 16.x with a persistent volume.
I updated the Postgres source image (triggered by an available update in Railway).
The redeploy ran for ~15 minutes and then failed during the “Deploy” phase.
The Network Flow Logs during the failure show repeated internal TCP attempts with mixed statuses (OK and NO_SOCKET).
What I tried
1. Restore a backup to the same service
• The restore completed (or at least ran), but the service still failed to deploy, with the same deploy-failure behavior.
2. Create a brand-new Postgres service
• The new Postgres service deployed successfully with a new volume.
3. Mount the old volume to the new Postgres service
• The new service crashed immediately when using the old volume.
4. Run Postgres 16 on a fresh volume (new service), which hit a version-mismatch error
• Logs show the data directory was initialized by PostgreSQL 17, which is incompatible with 16.11:
FATAL: database files are incompatible with server
DETAIL: The data directory was initialized by PostgreSQL version 17, which is not compatible with this version 16.11 (Debian 16.11-1.pgdg13+1).
5. Mount the volume back to the original (primary) Postgres service and revert the image to v16
• Still fails after ~15 minutes at the Deploy phase (same behavior and flow-log pattern).
6. Temporarily run Postgres 17 against the existing volume to regain access, then perform a dump/restore into a fresh PG16/PG17 instance
• It crashed.
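For anyone debugging the same situation: one way to confirm whether the volume was actually touched by PG17 (rather than the deploy failing for an unrelated reason) is to read the PG_VERSION file inside the cluster's data directory. A minimal sketch; the PGDATA path below is an assumption, so adjust it to wherever the Railway volume is actually mounted in your service:

```shell
# Sketch: print the major version that initialized the cluster.
# The default path here is an assumption -- check your volume mount settings.
PGDATA="${PGDATA:-/var/lib/postgresql/data/pgdata}"
cat "$PGDATA/PG_VERSION"
```

If this prints 17, the data directory itself was initialized (or upgraded) by PG17, and no PG16 binary will start against it.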
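If step 6 can be made to boot (a PG17 image against a PG17-initialized volume should in principle start), the standard escape hatch is a logical dump from the old cluster and a restore into the fresh service. A hedged sketch: the two connection-string variables are placeholders I've invented for the old (PG17) and new services, and the pg_dump client must be at least as new as the source server:

```shell
# Hedged sketch: $OLD_DATABASE_URL and $NEW_DATABASE_URL are placeholders
# for the PG17-volume service and the fresh Postgres service.
pg_dump --format=custom --no-owner --no-privileges \
  --dbname="$OLD_DATABASE_URL" --file=/tmp/railway.dump

pg_restore --clean --if-exists --no-owner --no-privileges \
  --dbname="$NEW_DATABASE_URL" /tmp/railway.dump
```

The custom format plus pg_restore is version-tolerant in the downgrade direction (a PG17 dump generally restores into PG16 unless PG17-only features are in use), which is why dump/restore is the usual path when a major-version mismatch strands a volume.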
Current state
The original Postgres service no longer deploys.
The original volume appears to have been initialized/upgraded to PG17, but I’m trying to run it on PG16, which fails due to incompatibility.
Backups/restores didn’t recover the service.
Questions / help needed
Did the “image update” implicitly upgrade the major Postgres version (16 → 17) or re-initialize the data directory on the existing volume?
What’s the correct recovery path here on Railway?
Is there an official Railway-supported workflow for this scenario?
Why would a backup restore not resolve the deploy failure? Are restores tied to the same major version / image expectations?
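Related to the first question: the official Docker Hub postgres image moves its floating tags across major releases, so if the service's source image isn't pinned to a major-version tag, accepting an "available update" can silently pull in the next major. A sketch of the idea (these tag names are from the official postgres image; the template image Railway actually uses may differ):

```
# Pinned to a major-version tag: updates stay within 16.x
postgres:16

# Floating tag: can jump from 16 to 17 on update
postgres:latest
```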
Extra info I can provide
Service ID / project ID, region, exact image tag before/after update, backup timestamp(s), and full deploy logs (beyond flow logs) if needed.
Status changed to Awaiting User Response Railway • 27 days ago
Status changed to Closed brody • 27 days ago
