6 months ago
Project ID: 11304579-8f04-4e8a-a865-d01537c78aa5
I am using Railway-hosted Django. I have a problem when my settings are:

DATABASES = {
    'default': dj_database_url.parse(os.getenv('DATABASE_URL'))
}

with the env variable DATABASE_URL=postgresql://postgres:password@postgres.railway.internal:5432/railway

I don't have the problem when my settings are:

DATABASES = {
    'default': dj_database_url.parse('postgresql://postgres:password@postgres.railway.internal:5432/railway')
}

It seems the Postgres database becomes very slow and causes timeouts when the DB URL is set as an env variable, but there is no timeout if it is set as a literal URL in settings.py. However, the URL with the password is not supposed to be in settings.py.
Error log is attached.
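For reference, a stdlib-only sketch of what dj_database_url.parse does with a DATABASE_URL-style value: split the URL into the fields Django's DATABASES['default'] dict expects. The helper name parse_db_url is hypothetical; the real project uses the dj-database-url package. Note that os.getenv returns None when the variable is unset, and stray whitespace in a copy-pasted env value can silently break hostname resolution, so the value is worth checking:

```python
import os
from urllib.parse import urlparse

def parse_db_url(url):
    """Split a postgres:// URL into Django-style connection settings."""
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,
        "PORT": parts.port,
    }

# Reading from the environment as in the problematic settings; .strip()
# guards against stray whitespace around a copy-pasted env value. The
# fallback URL here is only so the sketch runs without the env var set.
url = os.getenv(
    "DATABASE_URL",
    "postgresql://postgres:password@postgres.railway.internal:5432/railway",
).strip()
cfg = parse_db_url(url)
print(cfg["HOST"], cfg["PORT"], cfg["NAME"])
```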
6 months ago
I would recommend looking into running gunicorn with uvicorn workers instead of your current sync workers
but it happens even with only one user and one request. A large workload just means the data takes a few seconds to process. I know uvicorn is for async handling of multiple requests.
6 months ago
please try it anyway
When the problem happens, the code is running Django's StreamingHttpResponse, yielding results to the client intermittently over a span of a few seconds, like a video streaming service, except the stream is JSON data, not video. I am not familiar with gunicorn, but is there a gunicorn timeout setting that kicks in if the response takes too long?
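A minimal sketch of the streaming pattern described: a generator that yields JSON chunks one at a time. In the real view this iterator would be wrapped in django.http.StreamingHttpResponse; the item data and delay here are hypothetical stand-ins. The relevant point is that with gunicorn's sync workers, one long-running request can exceed the worker timeout and get the worker killed mid-stream, which matches the symptom described:

```python
import json
import time

def stream_json(items, delay=0.0):
    """Yield each item as a newline-terminated JSON line, simulating
    results that arrive intermittently over several seconds."""
    for item in items:
        time.sleep(delay)  # stand-in for slow per-chunk processing
        yield json.dumps(item) + "\n"

# In a Django view this would be:
#   return StreamingHttpResponse(stream_json(results), content_type="application/json")
chunks = list(stream_json([{"n": 1}, {"n": 2}], delay=0))
print(chunks)
```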
6 months ago
yes, it's 30 seconds IIRC
So how do I change it? My railway.json: {
"$schema": "https://railway.app/railway.schema.json",
"build": {
"builder": "NIXPACKS"
},
"deploy": {
"startCommand": "python manage.py collectstatic --noinput && gunicorn iconReadProd.wsgi"
}
}
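Since the question is where the setting goes: gunicorn reads --timeout from its command line (or a config file), so in this railway.json it would be appended to startCommand. A sketch, with an illustrative value of 300 seconds rather than a recommendation:

```json
{
  "$schema": "https://railway.app/railway.schema.json",
  "build": { "builder": "NIXPACKS" },
  "deploy": {
    "startCommand": "python manage.py collectstatic --noinput && gunicorn iconReadProd.wsgi --timeout 300"
  }
}
```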
6 months ago
please see gunicorn's docs -
you can change it via a Celery worker configuration update:
celery_app.conf.update(
broker_transport_options={
"visibility_timeout": 10800, # 3 hours as a backup in case of silent worker failures
},
task_acks_late=True, # Acknowledge tasks only after successful completion
task_reject_on_worker_lost=True, # Reject tasks if a worker crashes, re-queue them immediately
)
some of these configurations might help, but since you are using Postgres and I am using Redis, some of these options might not be available for you; there might be an equivalent for Postgres though.
6 months ago
I don't think they mentioned anything about celery?
6 months ago
gunicorn's sync workers
Thanks guys! This solved the problem: gunicorn project.wsgi --timeout 300 --keep-alive 65
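The same settings can live in a gunicorn.conf.py file instead of the command line; gunicorn picks this file up automatically from the directory it is started in. A sketch mirroring the flags above:

```python
# gunicorn.conf.py — loaded automatically by gunicorn from the working
# directory; equivalent to --timeout 300 --keep-alive 65 on the command line.
timeout = 300   # seconds a worker may spend on one request before being killed
keepalive = 65  # seconds to hold idle keep-alive connections open
```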
6 months ago
I would still recommend you look into using uvicorn workers; I would hate for you to have to spend more time debugging in the future
6 months ago
!s
Status changed to Solved brody • 6 months ago