2 years ago
this is my Project id: b37f2d5e-73cd-4af6-a19f-f9b6b9c17708
I recently moved my Django app, previously hosted on a GCP VM, to Railway. The deployment worked and my app is live, but the app has a function that runs a scraper. When I try to run this function I get:
[2024-05-24 05:38:33 +0000] [7] [CRITICAL] WORKER TIMEOUT (pid:64)
[2024-05-24 05:38:33 +0000] [64] [INFO] Worker exiting (pid: 64)
[2024-05-24 05:38:34 +0000] [7] [ERROR] Worker (pid:64) was sent SIGKILL! Perhaps out of memory?
[2024-05-24 05:38:34 +0000] [98] [INFO] Booting worker with pid: 98
[2024-05-24 05:39:28 +0000] [7] [CRITICAL] WORKER TIMEOUT (pid:98)
[2024-05-24 05:39:28 +0000] [98] [INFO] Worker exiting (pid: 98)
[2024-05-24 05:39:29 +0000] [7] [ERROR] Worker (pid:98) exited with code 1
[2024-05-24 05:39:29 +0000] [7] [ERROR] Worker (pid:98) exited with code 1.
[2024-05-24 05:39:29 +0000] [131] [INFO] Booting worker with pid: 131
I'm pretty new to Railway and trying to figure out how to increase the memory.
17 Replies
2 years ago
Generally, increasing your memory limit above 32GB would require a higher plan; I'll let the team give you the official answer.
2 years ago
That's the issue: it says I have 32GB of memory, but I get these logs at 350MB @brody
I have attached a screenshot as well
Attachments
2 years ago
Have you moved your project over to the Pro workspace and have you since redeployed?
2 years ago
No, when I created the project I was already in the Pro workspace.
2 years ago
There's the possibility of a very quick spike up to 32GB that wasn't captured by the metrics graph.
2 years ago
The redeploy was a success, but when I run the function it gives me the same issue again:
[2024-05-25 09:35:16 +0000] [7] [CRITICAL] WORKER TIMEOUT (pid:10)
[2024-05-25 09:35:17 +0000] [7] [ERROR] Worker (pid:10) was sent SIGKILL! Perhaps out of memory?
[2024-05-25 09:35:17 +0000] [54] [INFO] Booting worker with pid: 54
2 years ago
Maybe come Monday, Angelo would be willing to temporarily up your memory limit?
Railway is supposed to send an email alert if your deployment has run out of memory, but of all the memory-related issues I've seen so far, no one has reported actually receiving one. Could you check your email, and see whether you have all the applicable alerts enabled in your account settings?
2 years ago
Hi Brody, thanks for your prompt reply… I just checked: I have all the alerts enabled and have not received any mail about reaching memory limits… and I don't think that should be the issue… it's a very small script, and on GCP it was running with just 2 GB of max memory.
2 years ago
It's a little different on Railway: your application can actually see the full host memory of 256GB, so some things that work under a hard, lower memory limit might not work on Railway.
For example, this app runs on the Hobby plan -
2 years ago
When I add the project ID, nothing shows up :|
You can also copy and paste the affected service ID in here.
2 years ago
this is the project id: b37f2d5e-73cd-4af6-a19f-f9b6b9c17708
this is the service id: 9a23915e-f7e2-426d-b794-a834b887a404
2 years ago
This doesn't seem like a memory issue; I can confirm that you're not using anywhere near the plan limit (32GB).
What are your gunicorn settings? Does your function take a long time to run before it returns a response?
2 years ago
Yes, it takes approximately 8 to 10 minutes…
And this is my Procfile:
web: gunicorn 'article_project.wsgi'
2 years ago
A single request for this scraper can take 8 to 10 minutes? If so, you would have to redesign your application to put the work into a queue system, because a single request to any Railway-hosted service gets cancelled after 5 minutes.
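As a rough illustration only (not your actual code, and assuming Celery with a broker such as Redis, which you could run as a separate Railway service), the queue pattern would look something like this:

# tasks.py -- illustrative sketch; assumes Celery is installed and wired up for the Django project
from celery import shared_task

@shared_task
def run_scraper(target_url):
    # the long-running scrape happens here, in a background worker,
    # outside the web request/response cycle
    ...

# views.py -- the web view only enqueues the job and returns immediately
from django.http import JsonResponse

from .tasks import run_scraper

def start_scrape(request):
    job = run_scraper.delay(request.GET.get("url", ""))
    return JsonResponse({"task_id": job.id})

You'd then run the Celery worker as its own service (something like celery -A article_project worker, assuming the Celery app lives in article_project), so the web request finishes in milliseconds while the scrape runs in the background.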
2 years ago
To add to what Brody mentioned, you'd likely also want to tune gunicorn's timeout setting if you have long-running requests; otherwise gunicorn may kill the worker prematurely.
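For context, gunicorn's default worker timeout is 30 seconds, which lines up with the WORKER TIMEOUT entries in the logs above. A minimal sketch of raising it, assuming a gunicorn.conf.py placed next to the Procfile (recent gunicorn versions read this file automatically; otherwise pass it with -c gunicorn.conf.py):

# gunicorn.conf.py -- illustrative values only
timeout = 300          # seconds a worker may stay silent before being killed (default is 30)
graceful_timeout = 30  # seconds to let a worker finish up after a restart signal
workers = 2            # number of worker processes

Even with a longer gunicorn timeout, the 5-minute request limit Brody mentioned still applies, so the queue approach is the more robust fix for anything longer than that.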
Status changed to Solved Railway • over 1 year ago
2 years ago
Guys, I fixed it. My single request was taking 4 to 5 minutes max; I updated it with async and await as well, and updated my Procfile, and it all works well now… thanks a lot to you guys for all the help @brody @RC
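The exact changes aren't shown above, but one plausible shape of the async view part, assuming the scraper is a blocking function and the app is served in a way that supports async views (all names below are illustrative, not the poster's actual code):

# views.py -- hypothetical sketch of an async view
import asyncio

from django.http import JsonResponse

from .scraper import scrape_articles  # assumed blocking scraper function

async def run_scraper_view(request):
    # run the blocking scraper in a thread so the event loop isn't blocked
    results = await asyncio.to_thread(scrape_articles)
    return JsonResponse({"count": len(results)})

The Procfile would typically also need gunicorn's --timeout raised above the request duration for a 4 to 5 minute request to survive the worker timeout.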