4 months ago
Hi,
I’m running into a persistent latency issue on my Rust backend deployed on Railway. The API in question is:
https://stacks-wars-be-testnet.up.railway.app/user/stat?identifier=flames
What I observed
On Railway, the endpoint consistently shows ~22s time to first byte (TTFB) when tested with curl.
On localhost, pointing at the same production Redis URL, the response arrives in under 1ms.
My server logs show that the Redis lookups and handler logic complete in under a second, so the slowdown isn't in my code or DB.
Debug steps I tried
Measured with curl:

curl -o /dev/null -s -w "\nTotal: %{time_total}s\nConnect: %{time_connect}s\nTTFB: %{time_starttransfer}s\n" \
  "https://stacks-wars-be-testnet.up.railway.app/user/stat?identifier=flames"

Result on Railway:
Connect: ~0.35s
TTFB: ~22s
Total: ~22s

Result on localhost (same Redis URL):
TTFB: 0.000000s
Total: 0.000756s

Added tracing logs in my handler (get_user_stat_handler). Logs confirm that Redis queries and response building finish well under 1s.
The delay seems to occur after the handler returns, before data reaches the client.
Considered possible causes like JSON serialization or rate limiting middleware, but since localhost + production Redis work instantly, those don’t appear to be the problem.
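To make that elimination step concrete: if the handler itself is timed and the in-process numbers stay sub-millisecond while curl reports ~22s TTFB, the stall must sit between the app and the client. A minimal std-only sketch of that instrumentation (the `redis_lookup` stand-in and the response fields are hypothetical, not the thread's actual code):

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in for the real Redis lookup inside get_user_stat_handler.
fn redis_lookup(identifier: &str) -> String {
    // The real handler would query production Redis here.
    format!("wins:3,losses:1,user:{identifier}")
}

// Returns the response body plus the time spent inside the handler.
fn timed_handler(identifier: &str) -> (String, Duration) {
    let start = Instant::now();
    let stats = redis_lookup(identifier);
    let body = format!("{{\"identifier\":\"{identifier}\",\"stats\":\"{stats}\"}}");
    (body, start.elapsed())
}

fn main() {
    let (body, spent) = timed_handler("flames");
    // Logged on Railway: if `spent` stays sub-millisecond while external TTFB
    // is ~22s, the delay happens after the handler returns, outside app logic.
    println!("handler took {spent:?}: {body}");
}
```

Comparing this server-side number against curl's `time_starttransfer` is what separates "slow handler" from "slow path to the client".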
My conclusion so far
It looks like the issue is networking or container-level at Railway (egress slowness, resource throttling, or infra-related). Since the handler and Redis lookups are fast locally, the 22s delay seems external to the app logic.
Could you help me investigate whether:
There's known egress/networking latency from Railway in my region,
My container is hitting resource throttling, or
There's anything I can tweak (Nixpacks vs Railpack, config, region settings) to resolve this?
Thanks in advance for your help 
16 Replies
4 months ago
Hello,
Are you perhaps visiting with a VPN? If so, can you disable it and report back?
Best,
Brody
Status changed to Awaiting User Response Railway • 4 months ago
brody
> Hello, are you perhaps visiting with a VPN? If so, can you disable it and report back? Best, Brody
4 months ago
Hi Brody, I am not visiting with a VPN.
Status changed to Awaiting Railway Response Railway • 4 months ago
4 months ago
Hello!
We've escalated your issue to our engineering team.
We aim to provide an update within 1 business day.
Please reply to this thread if you have any questions!
Status changed to Awaiting User Response Railway • 4 months ago
4 months ago
So we looked into the container and found no network issue. We think it's an issue with your code: the Telegram API you are talking to is just rather slow.
4 months ago
Going to mark this as solved unless you have more information to share.
Status changed to Solved angelo-railway • 4 months ago
angelo-railway
> So we looked into the container and found no network issue. We think it's an issue with your code: the Telegram API you are talking to is just rather slow.
4 months ago
That is not accurate. Only one endpoint in the server actually calls the Telegram API, and that's within a POST request handler, where the call runs after the main execution path.
For example, here’s the client-side call to that same endpoint:
https://www.stackswars.com/u/flames
This consistently returns a timeout, which aligns with Vercel’s 10-second timeout limit, not a Telegram API delay.
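The pattern described above can be sketched to show why a slow Telegram API can't inflate TTFB when the call is pushed off the response path. A std-only illustration (names like `notify_telegram` and the 300 ms delay are illustrative, not the actual handler code):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

// Hypothetical stand-in for the slow external Telegram API call.
fn notify_telegram(msg: String, done: mpsc::Sender<()>) {
    thread::sleep(Duration::from_millis(300)); // simulate a slow external API
    let _ = msg; // real code would POST `msg` to the Telegram Bot API here
    let _ = done.send(());
}

// POST handler sketch: the response is produced first; the Telegram call is
// moved to a background thread, so it cannot add to the request's TTFB.
fn post_handler(done: mpsc::Sender<()>) -> (String, Duration) {
    let start = Instant::now();
    let response = String::from("{\"ok\":true}");
    thread::spawn(move || notify_telegram(String::from("new stat"), done));
    (response, start.elapsed())
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let (resp, spent) = post_handler(tx);
    println!("responded in {spent:?}: {resp}");
    rx.recv().unwrap(); // wait for the background notification before exiting
}
```

The response is ready in microseconds even though the simulated Telegram call takes 300 ms, so an external-API slowdown would never surface as handler TTFB under this structure.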
Status changed to Awaiting Railway Response Railway • 4 months ago
iflames1
> That is not accurate. Only one endpoint in the server actually calls the Telegram API, and that's within a POST request handler, which runs after the main execution. For example, here's the client-side call to that same endpoint: https://www.stackswars.com/u/flames This consistently returns a timeout, which aligns with Vercel's 10-second timeout limit, not a Telegram API delay.
4 months ago
I respect the downvote.
Our platform engineer and I did invest time in this. I can check whether it's an issue without a host set. With your permission, I can deploy to a new box to test this out. However, I am only relaying what we saw when we tested the connection to Redis: a 5 ms handshake.
Status changed to Awaiting User Response Railway • 4 months ago
angelo-railway
> I respect the downvote. Myself and our platform engineer did invest time into this. I can see if it's an issue without host set. With your permission I can deploy to a new box to test this out. However, I am only relaying what we saw when we tested the connection to the Redis, which we got 5 ms on the handshake.
4 months ago
Sure, go ahead and deploy to a new box to test it out. I've also noticed the timeout rate has dropped since I first raised the complaint, but ideally there shouldn't be any timeouts at all.
Status changed to Awaiting Railway Response Railway • 4 months ago
4 months ago
Hi, any update on this?
I am still getting timeouts
3 months ago
Could you please try to set up Horizontal Scaling? That will land your workload on multiple hosts, and requests will be routed by Railway.
See https://docs.railway.com/guides/optimize-performance#configure-horizontal-scaling
This will help us further investigate.
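If config-as-code is preferred over the dashboard, the replica count can likely also be set in a railway.json at the repo root; a minimal sketch, assuming the `deploy.numReplicas` field (verify the exact field name against the docs linked above):

```json
{
  "deploy": {
    "numReplicas": 2
  }
}
```

Each replica lands in a separate container, potentially on a different host, which is what makes this useful for isolating a single misbehaving box.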
Status changed to Awaiting User Response Railway • 4 months ago
christian
> Could you please try to set up Horizontal Scaling? That will land your workload on multiple hosts, and requests will be routed by Railway. See https://docs.railway.com/guides/optimize-performance#configure-horizontal-scaling This will help us further investigate.
3 months ago
I don't think my current plan supports that?
Status changed to Awaiting Railway Response Railway • 3 months ago
3 months ago
The Hobby plan supports Horizontal Scaling within the same region. The Pro plan further unlocks Horizontal Scaling across regions with Multi-Region Replicas.
Status changed to Awaiting User Response Railway • 3 months ago
Status changed to Awaiting Railway Response Railway • 3 months ago
3 months ago
You would find it in your service settings.
https://docs.railway.com/overview/the-basics#service-settings
Status changed to Awaiting User Response Railway • 3 months ago
3 months ago
🛠️ The ticket Network Performance Issue has been marked as in progress.
3 months ago
This thread has been marked as solved automatically due to a lack of recent activity. Please re-open this thread or create a new one if you require further assistance. Thank you!
Status changed to Solved Railway • 3 months ago
2 months ago
✅ The ticket Network Performance Issue has been marked as completed.

