a year ago
I have a docker container that succeeds locally, but fails to run correctly when deployed on railway.app.
I rewrote the software in the container to be self-contained, so that the container can be tested by simply running it (and looking at the logs). The container in question is: ghcr.io/toitware/tpkg_registry:repro
When it succeeds it prints on stderr: *************************** TEST SUCCESS - Loaded toitdoc ************************ 200 OK
You can run it yourself:
docker pull ghcr.io/toitware/tpkg_registry:repro
docker run ghcr.io/toitware/tpkg_registry:repro
For testing I ran the container locally with docker run --cpus=0.02 -m2GB ghcr.io/toitware/tpkg_registry:repro
Due to the severe CPU restriction that took a long time (20 minutes?), but it eventually finished. Without any --cpus limit, the test finishes in 17s on my machine.
Deployed on railway.app, the test just doesn't finish. After more than an hour it's still not doing anything (except for printing the periodic "synced registry" message).
There doesn't seem to be any way to shell into a deployed container, so being able to debug this locally would make things much easier. Are there other docker flags that I could use to get my local setup more similar to railway's?
11 Replies
a year ago
Could it be crashing due to the resource limitations of the Hobby plan?
Would you like me to attempt to run it on a Pro account?
a year ago
It might be crashing due to resource limitations, but I have been able to run it with --cpus=0.02 and -m2GB locally. What other resource limitations does railway have that I could be hitting?
Yes, please. It would be nice to know whether it would run in a Pro account. At least we could narrow it down to resource-limitation or not.
a year ago
but I have been able to run it with --cpus=0.02 and -m2GB locally. What other resource limitations does railway have that I could be hitting?
Unfortunately it's a little different on Railway: they don't specify CPU and memory limits when they run the resulting image, so the container can see the full host's hardware, but they do kill processes that use more than the plan allows for. This approach leads to a whole lot of issues, and it could be what we are seeing here.
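If it helps to approximate that locally, one rough sketch (assuming the kill behaviour is memory-driven; the 8GB figure is only a placeholder for whatever your plan actually allows) would be to drop the --cpus cap and keep just a hard memory limit, so the process sees all host CPUs but is killed if it exceeds memory:
docker run -m8GB ghcr.io/toitware/tpkg_registry:repro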
Yes, please. It would be nice to know whether it would run in a Pro account. At least we could narrow it down to resource-limitation or not.
Will do when I'm back at the computer, I'll update you when I've tried.
a year ago
Will do when I'm back at the computer, I'll update you when I've tried.
Did you have a chance to look into this?
a year ago
Yes I did deploy, but a bug on the Pro plan prevented me from adding a domain to the service so I was never able to make a request.
a year ago
Yes I did deploy, but a bug on the Pro plan prevented me from adding a domain to the service so I was never able to make a request.
You don't need to make any request. The container is configured to make a request to itself. No need to expose any service to the outside.
a year ago
It listens on two different ports though?
Yes. Normally, the container is used as a web page and API endpoint. However, for debugging, I modified the container so that it makes a request to itself after it has booted up. This way it's easier to reproduce (no additional steps except starting the container).
a year ago
Hmmm okay, well it doesn't print the desired message even when deployed on the Pro plan. I would recommend adding some verbose logging so you can figure out what is going wrong and where.
a year ago
Hmmm okay, well it doesn't print the desired message even when deployed on the Pro plan. I would recommend adding some verbose logging so you can figure out what is going wrong and where.
Thanks a lot for testing. I will see if I can find the time to add more logging. Are there any known incompatibilities with Docker that I could focus on?
a year ago
Aside from the fact that the V2 runtime uses podman and not docker, no, I can't think of anything; all signs are currently pointing to an issue with the code.
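If you want to rule out runtime-level differences locally, a quick sanity check (assuming Podman is installed; it accepts the same pull/run syntax as Docker) would be:
podman pull ghcr.io/toitware/tpkg_registry:repro
podman run ghcr.io/toitware/tpkg_registry:repro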