2 months ago
Hi Railway Team and community,
I’m returning to a topic I previously posted about: high egress on some of my services.
Initially, I suspected the high egress was caused by publishing data to Redis (even though that connection goes over the internal network). After further testing, however, I’ve identified what I believe is the root cause: TCP ACK overhead. My services handle a high volume of incoming WebSocket messages, and the outbound ACKs sent for every received packet are driving up the egress costs.
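For scale, here is the back-of-envelope math behind that suspicion (assuming ~54 bytes per ACK on the wire and delayed ACKs batching roughly two inbound segments per ACK — both figures are assumptions, not measurements from Railway):

```python
# Rough egress estimate from bare TCP ACKs (assumed figures, not measured).
ACK_BYTES = 54            # assumed on-wire size of an empty ACK (headers only)
MSGS_PER_SECOND = 1_000   # hypothetical inbound WebSocket message rate
ACKS_PER_MESSAGE = 0.5    # delayed ACKs batch roughly 2 inbound segments per ACK

seconds_per_month = 30 * 24 * 3600
egress_bytes = MSGS_PER_SECOND * ACKS_PER_MESSAGE * ACK_BYTES * seconds_per_month
print(f"{egress_bytes / 1e9:.1f} GB/month")  # ~70 GB/month at 1k msg/s
```

At lower message rates the same math shrinks proportionally, which is why the ACK theory only holds at very high volume.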
What I have tried so far to optimize this:
Increasing the TCP read buffer.
Tuning TCP_QUICKACK.
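For reference, here is roughly how both tweaks can be applied per socket (a Linux-only sketch; the TCP_QUICKACK constant is looked up with a fallback to its Linux value, since not every Python build exposes it):

```python
import socket

# TCP_QUICKACK is Linux-only; fall back to its Linux constant (12) if this
# Python build doesn't expose it.
TCP_QUICKACK = getattr(socket, "TCP_QUICKACK", 12)

def tune_socket(sock: socket.socket, rcvbuf_bytes: int = 1 << 20) -> None:
    """Apply both tweaks: a larger read buffer and delayed (batched) ACKs."""
    # Enlarge the kernel receive buffer. The kernel may double the requested
    # value and caps it at net.core.rmem_max.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf_bytes)
    # 0 = delayed ACKs (kernel batches ACKs into fewer outbound packets).
    # Note: the kernel can re-enable quickack internally, so this is a hint
    # about the current state, not a permanent setting.
    sock.setsockopt(socket.IPPROTO_TCP, TCP_QUICKACK, 0)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tune_socket(sock)
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
sock.close()
```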
Unfortunately, these changes haven't produced a noticeable reduction in egress.
Does anyone have experience with this on Railway? Are there any other OS-level or application-level tweaks you would recommend to minimize ACK-related traffic?
Thanks in advance for any insights!
Pinned Solution
2 months ago
interesting issue. so i did some digging and here's what i found:
tcp acks are around 40-54 bytes each, so the overhead exists but it's pretty small; you'd need a ton of messages for it to really add up. that said, here's what might actually help:
first, double check you're using railway's private network for ALL internal connections. even if redis/postgres are in the same project, if you're using the public url variables you're getting charged. look for any variable with "private" in the name like REDIS_PRIVATE_URL or postgres.railway.internal addresses. private network = zero egress costs.
second, websocket responses from your app (confirmations, broadcasts, state updates) are usually way bigger than acks. check what your application is actually sending back over the websockets.
third, about tcp_quickack, just fyi: setting it to 1 actually sends acks immediately (more traffic), while 0 or leaving the default uses delayed acks, which batches them. so if you set it to 1, that might have made things worse.
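to make the first point concrete, here's a quick startup guard you could drop in. the variable names follow railway's convention mentioned above, but treat them as assumptions and adjust for your project:

```python
import os

def pick_redis_url() -> str:
    """Prefer the private-network URL so redis traffic isn't billed as egress."""
    private = os.environ.get("REDIS_PRIVATE_URL")
    if private:
        return private
    public = os.environ.get("REDIS_URL", "")
    if ".railway.internal" in public:
        return public  # already pointing at the private network
    raise RuntimeError("no private redis url found - this traffic would count as egress")
```

failing fast at startup beats silently paying for public-endpoint traffic.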
i'd suggest checking your railway metrics dashboard to see which specific service is generating the egress, that'll tell you where to focus. you can also add some logging to see outbound data volume vs inbound.
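for the logging part, something this simple is enough to compare outbound vs inbound volume (an in-process counter, purely illustrative — not railway's metrics):

```python
import time
from collections import defaultdict

class TrafficCounter:
    """Tally bytes per direction; call record() in your websocket send/receive paths."""

    def __init__(self) -> None:
        self.bytes: defaultdict[str, int] = defaultdict(int)
        self.started = time.monotonic()

    def record(self, direction: str, payload: bytes) -> None:
        self.bytes[direction] += len(payload)

    def summary(self) -> str:
        elapsed = max(time.monotonic() - self.started, 1e-9)
        return " ".join(
            f"{d}={n} bytes ({n / elapsed:.0f} B/s)"
            for d, n in sorted(self.bytes.items())
        )

counter = TrafficCounter()
counter.record("in", b"client message")
counter.record("out", b'{"type":"broadcast","payload":"..."}')
print(counter.summary())
```

log `counter.summary()` every minute or so; if "out" dwarfs "in", the egress is your app payloads, not acks.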
hope this helps 
Status changed to Solved brody • 2 months ago