Websockets
ahmedmajidgit
PRO OP

3 months ago

I have an application using Node.js and Socket.IO, but it's much slower on Railway compared to Heroku, which has affected the real-time communication performance. In contrast, my other app, hosted on Railway with Node.js and Express.js, has significantly faster API responses than it had on Heroku.

Could you explain the issue with WebSockets and how to overcome this?

$20 Bounty

4 Replies

3 months ago

This thread has been marked as public for community involvement, as it does not contain any sensitive or personal information. Any further activity in this thread will be visible to everyone.

Status changed to Open · brody · 3 months ago


domehane
FREE

3 months ago

there's actually some railway-specific stuff going on here that's probably causing your socket.io slowness.

first thing, if you're running multiple replicas on railway that's likely your main problem. railway doesn't support sticky sessions for load balancing, so your websocket connections might be getting bounced between different instances which totally breaks socket.io's stateful connections. your express app doesn't have this issue because regular http requests don't care which instance handles them.

try this:

-- scale down to a single replica if you're using multiple. i know it sucks but railway just doesn't handle websocket load balancing well right now. if you really need multiple instances you'll have to set up your own load balancing with redis or something

-- make sure your websockets are actually connecting properly. open browser dev tools and check the network tab - you should see a websocket connection upgrade, not a bunch of polling requests. if it's falling back to long-polling that's where your slowness is coming from

-- set keepalive pings to around 20-25 seconds. railway can have some tcp idle timeout issues and other users recommend sending traffic every 10-30 seconds to keep connections alive

-- double check your cors settings allow the upgrade header and your origins are configured right. sometimes websockets fail the upgrade silently and fall back to polling

-- make sure you're binding to process.env.PORT and 0.0.0.0 since railway uses dynamic ports

-- i want to add another point but i think this is largely enough :)
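putting the port/cors/ping bullets together, a rough server setup could look like this. just a sketch - it assumes express and socket.io are installed, and the origin url is a placeholder you'd swap for your own frontend:

```js
// sketch of a socket.io server configured along the lines of the bullets above
const http = require('http');
const express = require('express');

const app = express();
const server = http.createServer(app);

const io = require('socket.io')(server, {
  cors: {
    origin: 'https://your-frontend.example.com', // placeholder - use your real origin
    methods: ['GET', 'POST'],
  },
  pingInterval: 25000, // heartbeat every 25s to keep idle tcp connections alive
  pingTimeout: 20000,  // drop the connection if no pong comes back in time
});

// railway injects the port at runtime, so bind to process.env.PORT on 0.0.0.0
const PORT = process.env.PORT || 3000;
server.listen(PORT, '0.0.0.0', () => {
  console.log(`listening on ${PORT}`);
});
```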

once you get the websocket connections stable and running on a single replica, railway's infrastructure should handle the real-time communication much better. the platform works great, just needs the right config for websockets
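and if you ever do go multi-replica, the redis route from the first bullet would look roughly like this. untested sketch - assumes the @socket.io/redis-adapter and redis packages are installed and a REDIS_URL variable points at a redis instance:

```js
// rough sketch of multi-replica socket.io using the official redis adapter
const http = require('http');
const { Server } = require('socket.io');
const { createClient } = require('redis');
const { createAdapter } = require('@socket.io/redis-adapter');

const server = http.createServer();
const io = new Server(server);

// two redis clients: one for publishing events, one for subscribing
const pubClient = createClient({ url: process.env.REDIS_URL });
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  // with the adapter, broadcasts reach sockets on every replica,
  // though each client connection still lands on whichever replica it gets
  io.adapter(createAdapter(pubClient, subClient));
  server.listen(process.env.PORT || 3000, '0.0.0.0');
});
```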



ahmedmajidgit
PRO OP

3 months ago

I have a Socket.IO server and two HTTPS servers. In the Socket server, I maintain two environments (production and test). As far as I understand, having multiple environments does not count as running multiple replicas, so I don’t think that’s the cause of the slowness.

I’ll try the points you mentioned. I’d also be interested in hearing the other point you were going to add.


anarchistmanifesto
TRIAL

3 months ago

@domehane covered the big stuff super well

on the environments thing if they're separate services then yeah not replicas but double check your service settings for any horizontal scaling

that other point might be about forcing websocket transport in socket io

like in your server code add

`const io = require('socket.io')(server, { transports: ['websocket'] })`

that stops it from falling back to polling which kills speed

worked for my chat app last year

let us know if that fixes it


anarchistmanifesto
TRIAL

3 months ago

hey @ahmedmajidgit

yeah multiple environments arent the same as replicas at all

replicas mean horizontal scaling on one service which railway doesnt sticky load balance for websockets

if your prod and test are separate environments or services then connections should stick to their own instance fine

on the two https servers are those separate railway services too

if so maybe check if your socket io is trying to connect through one of them or directly

railway routes all through their proxy so wss should work but sometimes the upgrade fails if cors isnt set right

for that other point @domehane mentioned maybe its about the railway proxy timeout

some folks say set your socket io ping timeout higher like 60 seconds cause railway can drop idle tcp after 30

in code something like

```js
const io = require('socket.io')(server, { pingTimeout: 60000, pingInterval: 25000 })
```

helps keep it alive without slowness

also force websocket only if polling is the fallback issue
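something like this on the client side would do both at once - force websocket and log which transport actually connected. rough sketch, assumes socket.io-client is installed and the url is just a placeholder:

```js
// client-side sketch: force the websocket transport and verify the upgrade
const { io } = require('socket.io-client');

const socket = io('https://your-app.up.railway.app', { // placeholder url
  transports: ['websocket'], // skip the http long-polling fallback entirely
});

socket.on('connect', () => {
  // should print "websocket" - if you see "polling" the upgrade is failing
  console.log('transport:', socket.io.engine.transport.name);
});
```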

