3 months ago
Hello,
I deployed my Django backend on Railway. The container builds successfully, migrations run, and static files are collected. Gunicorn starts and logs:
[INFO] Starting gunicorn 23.0.0
[INFO] Listening at: http://0.0.0.0:8080 (1)
[INFO] Using worker: sync
[INFO] Booting worker with pid: 4
But a few seconds later, the container stops automatically:
[INFO] Handling signal: term
[INFO] Worker exiting (pid: 4)
[INFO] Shutting down: Master
So the app shuts down before I can access it.
What could be causing this? Do I need additional configuration for Gunicorn/Django to keep the container running on Railway?
Thanks! 
11 Replies
3 months ago
Hey there! We've found the following might help you get unblocked faster:
- [🧵 Application failed to respond](https://station.railway.com/questions/application-failed-to-respond-519486a1)
- [🧵 container closed error](https://station.railway.com/questions/container-closed-error-8f1fe36b)
- [📚 No Start Command Could be Found](https://docs.railway.com/reference/errors/no-start-command-could-be-found)
If you find the answer from one of these, please let us know by solving the thread!
3 months ago
Your container is shutting down because it's listening on port 8080 instead of Railway's dynamically assigned $PORT. Change your Gunicorn command from --bind 0.0.0.0:8080 to --bind 0.0.0.0:$PORT and add ALLOWED_HOSTS = ['.railway.app', 'localhost', '127.0.0.1'] to your Django settings.py. Railway requires apps to bind to the PORT environment variable it provides, not a hardcoded port. Once you make this change and redeploy, your container will stay running properly.
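For reference, with the start command changed to end in --bind 0.0.0.0:$PORT, the Django side is a one-line addition (a sketch only; any extra hosts depend on your setup):
# settings.py -- allow Railway's *.railway.app subdomain plus local development hosts
ALLOWED_HOSTS = ['.railway.app', 'localhost', '127.0.0.1']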
Railway
Hey there! We've found the following might help you get unblocked faster:
- [🧵 Application failed to respond](https://station.railway.com/questions/application-failed-to-respond-519486a1)
- [🧵 container closed error](https://station.railway.com/questions/container-closed-error-8f1fe36b)
- [📚 No Start Command Could be Found](https://docs.railway.com/reference/errors/no-start-command-could-be-found)
If you find the answer from one of these, please let us know by solving the thread!
3 months ago
Sorry, but it didn't work. I tried changing the default ports etc., but the error still persists.
dostogircse171
Your container is shutting down because it's listening on port 8080 instead of Railway's dynamically assigned $PORT. Change your Gunicorn command from --bind 0.0.0.0:8080 to --bind 0.0.0.0:$PORT and add ALLOWED_HOSTS = ['.railway.app', 'localhost', '127.0.0.1'] to your Django settings.py. Railway requires apps to bind to the PORT environment variable it provides, not a hardcoded port. Once you make this change and redeploy, your container will stay running properly.
3 months ago
Sorry, but it didn't work. I have already bound to $PORT in my Procfile.
My Procfile:
web: python manage.py migrate && python manage.py collectstatic --noinput && gunicorn yt_notes.wsgi --bind 0.0.0.0:$PORT --timeout 120 --keep-alive 2
I also tried updating the default ports, but I don't know what the issue is; the same error comes up and the container closes.
dostogircse171
Your container is shutting down because it's listening on port 8080 instead of Railway's dynamically assigned $PORT. Change your Gunicorn command from --bind 0.0.0.0:8080 to --bind 0.0.0.0:$PORT and add ALLOWED_HOSTS = ['.railway.app', 'localhost', '127.0.0.1'] to your Django settings.py. Railway requires apps to bind to the PORT environment variable it provides, not a hardcoded port. Once you make this change and redeploy, your container will stay running properly.
3 months ago
Railway requires apps to bind to the PORT environment variable it provides, not a hardcoded port
This is not completely aligned with the documentation (https://docs.railway.com/guides/public-networking): nothing prevents you from defining your own $PORT variable and making the app listen on it. Railway supplies the $PORT variable only if you don't define one yourself.
Also, the port the application listens on should not affect the application lifecycle as long as the port is bindable. Unless you have the Serverless option enabled on the service, which scales it down when no activity is detected for a long time, nothing should send the container a SIGTERM seconds after it boots.
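For example, a small gunicorn.conf.py (a sketch, not something from this project) can pick up whatever PORT is set and fall back to a port of your choosing when the variable is absent:
# gunicorn.conf.py -- Gunicorn loads this file automatically when it sits in the working directory
import os

# Bind to the injected PORT when present, otherwise to a local default
bind = f"0.0.0.0:{os.environ.get('PORT', '8080')}"
timeout = 120
keepalive = 2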
fardaan-mahdi
this is the log which I'm getting
3 months ago
According to your logs, your container process receives a SIGTERM, so something from the outside wants it stopped. Could you check whether the Deployments tab gives you any insight? Could it be that you're starting another deployment immediately after the first one completes, so that, as the docs state (https://docs.railway.com/reference/deployments#singleton-deploys), your old container gets a SIGTERM?
Also, by any chance, do you have any HEALTHCHECK instructions in your Dockerfile?
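If a healthcheck is configured (a HEALTHCHECK in the Dockerfile or a healthcheck path in the Railway service settings) and it never passes, the deploy can be marked as failed and stopped, so it's worth making sure the probed path exists and responds quickly. A minimal sketch in Django; the /health/ path and view name are just examples, not taken from this project:
# urls.py -- a hypothetical endpoint for the platform's healthcheck to probe
from django.http import HttpResponse
from django.urls import path

def health(request):
    # Return 200 without touching the database so the check stays cheap and reliable
    return HttpResponse("ok")

urlpatterns = [
    path("health/", health),
    # ... the project's other URL patterns
]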
vedmaka
According to your logs, your container process receives a SIGTERM, so something from the outside wants it stopped. Could you check whether the Deployments tab gives you any insight? Could it be that you're starting another deployment immediately after the first one completes, so that, as the docs state (https://docs.railway.com/reference/deployments#singleton-deploys), your old container gets a SIGTERM? Also, by any chance, do you have any HEALTHCHECK instructions in your Dockerfile?
3 months ago
fardaan-mahdi
Yeah, I did, but everything seems fine. Can you check if this is alright, and whether I made any errors with the deployment itself?
3 months ago
The build log and the settings look normal to me. You probably don't really need the pre-build command, as Nixpacks/Railpack should detect your Django app automatically and install the requirements anyway, but that's not the cause of the issue. The healthcheck also looks normal.
Could you check the Deployments tab, please? I'm mainly interested in whether you see lots of frequent redeployments there or just a few.
vedmaka
The build log and the settings look normal to me. You probably don't really need the pre-build command, as Nixpacks/Railpack should detect your Django app automatically and install the requirements anyway, but that's not the cause of the issue. The healthcheck also looks normal. Could you check the Deployments tab, please? I'm mainly interested in whether you see lots of frequent redeployments there or just a few.
3 months ago
here you are:
Attachments
2 months ago
Exact same issue here, have no idea how to fix it yet