2 months ago
Hello,
I’m writing to report a recurring issue with my n8n instance deployed on Railway. For the second time, when accessing my instance I encounter the following error:
{"code":503,"message":"Database is not ready!"}

This error appears to indicate a PostgreSQL connection failure.
The first time this happened, I manually restarted all deployments (both n8n and PostgreSQL), which temporarily resolved the issue. However:
I could not find clear logs explaining the root cause of the failure.
There were no alerts or warnings indicating an upcoming problem.
The issue has now occurred again without any configuration changes on my side.
I would appreciate your help to try to understand:
What the root cause of the PostgreSQL connection issue is.
What actions or configuration changes I should apply (limits, health checks, timeouts, startup order, etc.) to prevent this from happening again.
Whether there are any recommended best practices for running n8n + PostgreSQL reliably on Railway in a production environment.
Thank you in advance for your support. I look forward to your guidance.
Best regards,
Geo
3 Replies
a month ago
This is typically caused by a temporary loss of connectivity between n8n and PostgreSQL, not a permanent database failure.
What usually happens is:
PostgreSQL briefly restarts or becomes unavailable (often due to resource pressure or maintenance).
n8n fails to reconnect and returns 503 Database is not ready.
Restarting the services works because the DB connection is re-established.
That’s why logs often don’t show a clear root cause on the n8n side — from its perspective, the database was simply not reachable.
Recommended steps to prevent this:
Ensure PostgreSQL has sufficient CPU/memory to avoid restarts.
Limit n8n’s database connection pool to prevent exhausting DB connections.
Make sure n8n starts only after PostgreSQL is fully ready.
Enable monitoring/alerts for PostgreSQL restarts or resource saturation.
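The startup-order and pool-limit steps above can be sketched as a shell entrypoint in front of n8n. This is only an illustration: `pg_isready` must be available in the image, and the `DB_POSTGRESDB_*` variables (including the pool-size one) should be verified against the n8n environment-variable docs for your version.

```shell
#!/bin/sh
# Sketch: gate n8n startup on PostgreSQL readiness and cap the pool size.
# Assumptions: pg_isready is installed, and DB_POSTGRESDB_POOL_SIZE is
# honored by your n8n version -- confirm both against your setup.

wait_for() {  # wait_for <retries> <delay_seconds> <command...>
  retries=$1; delay=$2; shift 2
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    [ "$tries" -ge "$retries" ] && return 1
    sleep "$delay"
  done
}

# Keep the pool small so n8n cannot exhaust PostgreSQL connection slots.
export DB_POSTGRESDB_POOL_SIZE=${DB_POSTGRESDB_POOL_SIZE:-2}

# Block (up to ~60s here) until PostgreSQL accepts connections, then start n8n:
# wait_for 30 2 pg_isready -h "$DB_POSTGRESDB_HOST" -p "${DB_POSTGRESDB_PORT:-5432}" \
#   && exec n8n start
```

The retry loop is generic, so the same `wait_for` helper can gate on any readiness command you prefer.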
If you can share the timestamps of when this happened, we can correlate them with PostgreSQL events to confirm the trigger.
gabztoo
a month ago
Thanks for your reply. As I said, the only way n8n responded was after I restarted the services manually, and the logs don't show anything related to resource exhaustion. I have now enabled monitoring so I can understand what happens in the future. If you know of specific best practices for an n8n + PostgreSQL stack, I would appreciate it.
geoom
a month ago
Hey, are you sure your PostgreSQL service didn't crash because it ran out of disk space? Sharing your PostgreSQL logs here would help us confirm that. Make sure to share logs from the deployment that had issues (the one before the restart). https://docs.railway.com/guides/logs
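One quick way to rule disk pressure in or out is to check percent-used on the filesystem backing the PostgreSQL volume. A sketch in shell (the data-directory path is an assumption; on Railway, point it at your mounted volume path):

```shell
#!/bin/sh
# Sketch: report percent-used for the filesystem holding a directory.
# The conventional PostgreSQL data dir /var/lib/postgresql/data is an
# assumption -- substitute your actual mounted volume path.

disk_used_pct() {
  dir=${1:-.}
  # POSIX df: second line, fifth column is capacity used, e.g. "43%"
  df -P "$dir" | awk 'NR == 2 { print $5 }'
}

# Inside the database, the logical size can also be checked with, e.g.:
#   psql "$DATABASE_URL" -c \
#     "SELECT pg_size_pretty(pg_database_size(current_database()));"

disk_used_pct /
```

If the number is near 100%, that would line up with the crash-and-recover pattern described earlier in the thread.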