3 months ago
I’m experiencing consistent 502 upstream headers response timeout errors when my Rust backend (Actix Web) makes external API calls that take 1.5–2.5 seconds to respond.
Railway seems to cut the connection after 15 minutes, before my service can return headers, even though the request continues processing normally.
Example log:
{
  "httpStatus": 502,
  "path": "/api/purchase",
  "totalDuration": 900057,
  "upstreamRqDuration": 900001,
  "upstreamErrors": "[{\"duration\":900001,\"error\":\"upstream headers response timeout\"}]"
}
However, other requests do succeed when they finish faster or when the container is warm:
{"httpStatus":200,"totalDuration":3182}
So the behavior is inconsistent: some requests are allowed to finish in ~3 s, while others are killed at the 900,001 ms mark (i.e. 15 minutes, since these durations are in milliseconds).
My questions:
Does Railway enforce a 15-minute header timeout?
Is this configurable in any plan?
Is there a recommended approach for backend services that rely on external APIs taking >1 second to respond?
Is this expected behavior or a platform limitation?
6 Replies
3 months ago
Hey there!
We do not cut off your connection after ~900ms. Poking around I don't see anything at a glance indicating an issue on our end. I've opened this to the community who can help you debug this!
Status changed to Awaiting User Response Railway • 3 months ago
Status changed to Awaiting Conductor Response noahd • 3 months ago
3 months ago
For additional context, we also contacted the external payment provider, and this was their response:
Translation:
Railway applies a very short timeout for waiting for response headers.
In the example you sent us, the transaction took approximately 2 seconds, which is within normal processing times. However, if the reverse proxy is configured with a lower timeout, it closes the connection before receiving the complete response, generating the 502.
We recommend asking Railway what timeout is configured for HTTP connections in their reverse proxy.
Also check whether they allow that timeout to be adjusted.
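One way to act on that recommendation before going back and forth with support: measure how long the upstream takes to return response headers when called directly, and compare that against the upstreamRqDuration Railway logs. A minimal sketch using reqwest (the URL is a placeholder, not the real provider endpoint; requires the reqwest crate and an async runtime such as tokio):

```rust
use std::time::{Duration, Instant};

// Times how long the upstream takes to return response HEADERS.
// reqwest's future resolves as soon as the headers arrive, before the
// body is read, so this isolates the "headers response timeout" phase.
async fn time_to_headers(url: &str) -> Result<Duration, reqwest::Error> {
    let start = Instant::now();
    let resp = reqwest::get(url).await?;
    let elapsed = start.elapsed();
    println!("status = {}, headers after {:?}", resp.status(), elapsed);
    Ok(elapsed)
}
```

If this consistently shows headers arriving in ~2 s when the provider is called directly, the delay is being introduced somewhere between Railway's edge and your service.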
3 months ago
Do you have Serverless mode on? Railway services are always-on by default, and Serverless spins your service down after a few minutes without network traffic. That could explain the variability in response time you're seeing and the appearance of a "warm container".
From a quick search on request timeouts in Actix Web: if you're using https://crates.io/crates/awc, you may need to configure a longer request timeout, since the default is 5 seconds.
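For reference, if awc were in play, raising that 5-second default would look roughly like this (a sketch against awc's ClientBuilder API; the 30 s value is an example, not a recommendation):

```rust
use std::time::Duration;

// awc's ClientBuilder applies a 5-second request timeout by default,
// which covers the whole request, including waiting for headers.
fn build_client() -> awc::Client {
    awc::Client::builder()
        .timeout(Duration::from_secs(30)) // example value; default is 5 s
        .finish()
}
```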
3 months ago
I’m not using awc; I’m using reqwest with a shared Client and explicit per-request timeouts (12 s for Auth, 30 s for Purchase), so the timeouts on my side are larger than the ~2 s Plexo takes. The 502 upstream headers response timeout is still happening. I have Serverless ON because with Serverless OFF the issue still happens, and even more frequently; I don't really know why.
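For comparison, the setup described above (one shared reqwest Client plus per-request timeout overrides) would look roughly like this; the endpoint URL and the connect timeout are illustrative placeholders, not the real configuration:

```rust
use std::time::Duration;

// One shared Client so the connection pool is reused across requests.
fn build_client() -> reqwest::Client {
    reqwest::Client::builder()
        .connect_timeout(Duration::from_secs(5)) // illustrative value
        .build()
        .expect("failed to build reqwest client")
}

// Per-request timeout override, matching the 12 s / 30 s split above.
async fn purchase(client: &reqwest::Client) -> Result<reqwest::Response, reqwest::Error> {
    client
        .post("https://api.example.com/purchase") // placeholder URL
        .timeout(Duration::from_secs(30)) // purchase-specific timeout
        .send()
        .await
}
```

With this shape, neither the 12 s nor the 30 s client-side timeout can explain a connection that dies waiting on a ~2 s upstream, which is why the question goes back to what sits in front of the service.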
3 months ago
Again, Railway does not have any platform constraints that would be relevant here. I'm surprised that disabling Serverless makes the issue happen more often... do you have any proxies at play in your Railway project, like Caddy or NGINX, or an external one like Cloudflare?