18 days ago
My website is down. Initially, I believed it was due to going over my billing limit on the Hobby plan. I upgraded to Pro and it is still down. I have exhausted the causes I could address on my side.
I would appreciate your investigation as soon as possible. Thank you.
13 Replies
18 days ago
This thread has been marked as public for community involvement, as it does not contain any sensitive or personal information. Any further activity in this thread will be visible to everyone.
Status changed to Open Railway • 18 days ago
18 days ago
Have you tried redeploying the service?
Also, go to your workspace settings > usage, and make sure you don’t have any hard limits set up.
18 days ago
Hey, could you provide more information?
Are you not able to redeploy your service? Does it crash? Any error logs?
18 days ago
Redeploying now.
Previous error logs:
2026-02-27 03:27:04,707 - DataUpdater - INFO - Batch update complete: 0/1 successful, 0 records added in 1.09s
[2026-02-27 03:27:06 +0000] [1] [INFO] Handling signal: term
[2026-02-27 03:27:07 +0000] [4] [INFO] Worker exiting (pid: 4)
[2026-02-27 03:27:07 +0000] [1] [INFO] Shutting down: Master Stopping Container
Checking usage now
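To rule out an in-app cause, a minimal handler along these lines (a rough sketch, not my actual code) can timestamp any incoming SIGTERM so it can be correlated with Railway's deploy events. Note that Gunicorn workers install their own SIGTERM handling, so this is purely diagnostic and may interfere with graceful shutdown:

```python
import signal
import sys
from datetime import datetime, timezone

def log_sigterm(signum, frame):
    # Timestamp the signal so it can be lined up against Railway's event log.
    # flush=True so the line isn't lost when the process dies right after.
    print(f"[{datetime.now(timezone.utc).isoformat()}] received signal {signum} (SIGTERM)",
          flush=True)
    sys.exit(0)

# Register in the worker process (e.g. at the top of the Flask app module)
signal.signal(signal.SIGTERM, log_sigterm)
```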
18 days ago
No usage limits.
Website still down on redeploy.
Plan should be in limits.
Attachments
18 days ago
Situation: I made CSS changes to the website layout. Nothing integral to actual functionality.
I pushed the changes to git and the website crashed. I reverted to the previous commit and found I was over my Hobby plan allowance. I upgraded to Pro and redeployed. The website is still down.
18 days ago
This sounds like Railway is sending a termination request. Any chance your service is running out of memory?
The most likely cause is an active spending limit; I'd double-check that. Also, you don't need to upgrade to Pro, you can also just pay the difference on the Hobby plan.
18 days ago
My Flask/Gunicorn service (V6_EODHD_POpt, project: POpt) has been unable to stay running since approximately Feb 26-27. The container boots successfully, then Railway sends SIGTERM within 7 seconds — every deployment, every time.
What happens:
Gunicorn starts, listens on 0.0.0.0:8080, worker boots
App initializes (DB tables verified, rate limiter initialized, background updater starts)
~7 seconds later: Handling signal: term → container shuts down
No application errors in logs
Logs:
[2026-02-27 15:26:47] Starting gunicorn 21.2.0
[2026-02-27 15:26:47] Listening at: http://0.0.0.0:8080
[2026-02-27 15:26:47] Using worker: gthread
[2026-02-27 15:26:47] Booting worker with pid: 4
[2026-02-27 15:26:49] Database tables verified / created
[2026-02-27 15:26:49] Rate limiter initialized
[2026-02-27 15:26:54] Handling signal: term → Worker exiting → Shutting down
What I've verified:
No Cron Schedule set
Restart Policy: On Failure (with 5 retries)
Serverless: Disabled
No Healthcheck Path configured
Memory usage: ~500MB-1GB (flat, no spikes, well within Pro plan limits)
No resource limits configured
No application errors — app boots clean every time
What I've tried:
Removed and raised usage hard limits
Upgraded from Hobby to Pro plan
Redeployed multiple times
Restarted the container
Rolled back to previous working deployments
Removed health check endpoint
Nothing has worked. The SIGTERM is coming from outside the application. This pattern matches the Feb 11 incident where Railway's automated enforcement system incorrectly flagged legitimate workloads. Could my service still be in a "forced pause" state?
Service details:
Plan: Pro (upgraded from Hobby during troubleshooting)
Stack: Python/Flask/Gunicorn
Start command:
gunicorn webapp.app:app --bind 0.0.0.0:$PORT --workers 1 --threads 2
Previously running fine for months
Can a Railway team member please check if my service has been incorrectly flagged or paused?
shadow6-actual
18 days ago
Have you by any chance experienced any of the following:
A yellow banner?
A “Service Paused” label?
Any Trust & Safety notification?
Any email from Railway about enforcement?
That's what I'd assume you'd get if that were the case.
But yeah, overall it does seem to be an issue on their side.
xmrafonso
18 days ago
No, no other indication outside of exhausting all other options along with multiple Claude Opus 4.6 supporting robots....
shadow6-actual
18 days ago
It appears to be platform wide. See: https://station.railway.com/support/app-keeps-crashing-no-obvious-reason-6fa32f2c
xmrafonso
18 days ago
Thank you for sharing. That is exactly what I'm seeing as well.
Seems to be a recurring issue (Dec 16, Feb 11, Feb 18, Feb 26) where their automated systems killed legitimate workloads.
Hmm, much to think about. Thank you for helping me identify this.
15 days ago
Any updates on this? It's been three days and the site is still down. I've restarted and redeployed quite a few times to no avail.