4 months ago
Hi Railway Support Team,
I’m seeing a severe slowdown (roughly 20–30×) when running the same workload on Railway compared to localhost. Both environments use the exact same external services (Supabase and OpenAI), so the issue seems specific to Railway’s infrastructure.
Summary of the issue
Localhost: 750 parallel OpenAI API calls complete in ~20–30 seconds
Railway: Same workload takes 10–15 minutes (about 20–30× slower)
What I’ve ruled out
Not CPU throttling – Railway metrics show 0.0–0.4 vCPU usage (plenty of headroom)
Not memory issues – stable at 400–450 MB with no spikes
Not database constraints – same Supabase instance with 15 pooled connections
Not OpenAI rate limits – localhost handles identical volume without issue
Not code differences – identical deployment, same services and configuration
Observed metrics during slow periods
CPU: ~0.0 vCPU (appears idle, waiting on I/O)
Memory: 400–450 MB (steady)
Pattern: Fast for three batches → stalls for ~10 minutes → resumes briefly
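For context, this is roughly how I’m measuring per-batch timing (simplified; `batchFn` stands in for our real batch function, not actual code):

```javascript
// Log how long each batch takes so stalls are visible in the service logs.
// `batches` is an array of work items; `batchFn` is a placeholder for the
// real function that fires the parallel OpenAI calls for one batch.
async function runBatches(batches, batchFn) {
  for (const [i, batch] of batches.entries()) {
    const start = Date.now();
    await batchFn(batch);
    const seconds = ((Date.now() - start) / 1000).toFixed(1);
    console.log(`batch ${i + 1} took ${seconds} s`);
    // locally: every batch finishes in ~20-30 s
    // on Railway: batches 1-3 are fast, then a batch stalls for ~10 min
  }
}
```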
Possible explanations
Throttling on outbound connections or network egress after burst usage
Internal network or NAT-level limits on concurrent external requests
Geographic routing causing higher latency to OpenAI’s US endpoints
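To compare routing latency between localhost and the Railway container, I can run a quick probe like this from both environments (the endpoint is just an example; an unauthenticated request still measures the network round trip):

```javascript
// Time a few sequential HTTPS requests to estimate per-request network
// latency from the current host. Failed requests are logged and skipped.
async function probe(url, n = 5) {
  const timings = [];
  for (let i = 0; i < n; i++) {
    const start = performance.now();
    try {
      await fetch(url, { method: "HEAD" });
      const ms = performance.now() - start;
      timings.push(ms);
      console.log(`request ${i + 1}: ${ms.toFixed(0)} ms`);
    } catch (err) {
      console.error(`request ${i + 1} failed:`, err.message);
    }
  }
  return timings;
}

// usage: probe("https://api.openai.com/v1/models");
```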
Environment details
Plan: Pro (upgraded with additional RAM)
Region: US West (California)
Service: Node.js backend making parallel fetch() calls
External APIs: OpenAI (api.openai.com), Supabase
Workload: ~750 parallel requests per batch
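For reference, a stripped-down sketch of the request pattern (the model name and payload shape are illustrative, not our exact code):

```javascript
// Fire all requests in a batch at once with no client-side concurrency cap,
// which is the pattern that runs fine locally but stalls on Railway.
async function runBatch(prompts) {
  const results = await Promise.allSettled(
    prompts.map((p) =>
      fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "gpt-4o-mini", // placeholder model
          messages: [{ role: "user", content: p }],
        }),
      }).then((r) => r.json())
    )
  );
  return results; // one settled result per prompt, fulfilled or rejected
}
```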
Questions
Does Railway impose limits on concurrent outbound connections per service?
Are there egress or network throttling mechanisms after sustained bursts?
Could region-level routing affect latency to US-based APIs?
Are there any hidden limits I might be running into?
Request
Could you please:
Review my service’s network metrics from Nov 5 2025 (08:20–09:30 PST)
Confirm if throttling or rate limits were triggered during that window
Suggest best practices for handling large volumes of parallel external API calls on Railway
The fact that CPU usage remains near zero during these stalls suggests network blocking or throttling rather than compute saturation.
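In the meantime, one workaround I’m experimenting with is capping client-side concurrency instead of firing all 750 requests at once. A minimal, dependency-free sketch (names are mine, not a Railway API):

```javascript
// Run `fn` over `items` with at most `limit` requests in flight at a time.
// Results keep their original order; per-item errors are captured, not thrown.
async function mapWithConcurrency(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // safe: single-threaded, no await between read and increment
      results[i] = await fn(items[i], i).catch((err) => ({ error: err }));
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    worker
  );
  await Promise.all(workers);
  return results;
}

// usage: await mapWithConcurrency(prompts, 50, callOpenAI);
```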
Thank you for your help,
Ben
3 Replies
4 months ago
Hey, I've got a similar issue. In my case, I’m also using Supabase as an external service, and I noticed a significant increase in latency when interacting with it after deployment.
I host both the frontend and backend in Amsterdam, and Supabase is hosted in Frankfurt, so they’re relatively close to each other.
I’m still not sure what’s causing the issue, but even simple calls take much longer than seems justifiable, considering that CPU/RAM usage is really low.
I thought it might be related to the Railway free tier.
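In case anyone wants to reproduce the measurement, this is roughly how I timed the calls (the Supabase query in the usage comment is a placeholder, not my real schema):

```javascript
// Tiny timing wrapper: run any async function and log how long it took.
async function timed(label, fn) {
  const start = performance.now();
  const result = await fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(0)} ms`);
  return result;
}

// usage with supabase-js (placeholder table name):
// const { data } = await timed("select", () =>
//   supabase.from("profiles").select("*").limit(1)
// );
```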
3 months ago
My update on that matter: I switched to Fly.io, and the latency dropped significantly, back to acceptable levels. I’m still not sure what the issue was when it was hosted on Railway. Would love to use Railway, though...