API performance on Railway much slower compared to serverless (Next.js API routes)
jorgerodrigues
PROOP

2 years ago

Hi all,

I would like to share the results of an experiment I ran this week.

At work we have a full GraphQL API that runs out of a Next.js API route. Performance is generally very good, but it comes with side effects such as the need for connection pooling.

This week I decided to experiment with moving it to our Node server, which is hosted on Railway.
The server runs Node 20 and is built with a Dockerfile. The API is served with Express. From this point forward everything is the same on both Railway and Next.js (same queries, resolvers, server setup, database, etc.).

The setup went fine (CORS was a bit of a bitch, but worked out in the end). However, when testing performance I was surprised:

  • For lighter operations the performance was roughly the same

  • For heavier operations (lots of I/O), the Railway-hosted server was up to 5 times SLOWER

I did spend some time on performance tuning, but nothing really did the trick.

Does anyone have concrete ideas as to what causes such a difference and how to mitigate it? I'd really like to eventually migrate that specific API off the serverless function.
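Since the gap only shows up on I/O-heavy operations, one way to narrow it down could be timing each database round trip inside the resolvers and comparing the numbers between the two deployments. A minimal sketch (the `timeIt` helper and the labels are hypothetical, not part of our actual codebase):

```javascript
// Hypothetical timing helper: wraps an async operation (e.g. a DB query) and logs
// how long it took, so per-round-trip latency can be compared between the
// Next.js deployment and the Railway-hosted Express server.
async function timeIt(label, fn) {
  const start = process.hrtime.bigint();
  const result = await fn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${elapsedMs.toFixed(1)} ms`);
  return result;
}

// Stand-in async operation for the demo; in a resolver this would wrap
// something like pool.query('SELECT ...').
timeIt('demo-query', () => new Promise((resolve) => setTimeout(() => resolve('ok'), 25)))
  .then((value) => console.log(value));
```

If the per-query numbers are similar but the totals diverge, the slowdown is more likely in how many round trips the heavy operations make (and the network distance per trip) than in the queries themselves.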

Cheers,
Jorge

3 Replies

jorgerodrigues
PROOP

2 years ago

Another piece of info: the tests were run against a PostgreSQL database hosted on Google Cloud. The connection limits, pooling, and methods were all configured the same way in both setups.
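For anyone comparing, it may help to post the pool settings explicitly to rule out configuration drift between the two environments. Something along these lines, using node-postgres; the numbers are illustrative assumptions, not the actual production values:

```javascript
// Illustrative node-postgres pool settings — values are assumptions for comparison,
// not the real production config. A long-lived Express server can safely hold more
// open connections than a serverless function, where each instance often caps at 1.
const poolConfig = {
  connectionString: process.env.DATABASE_URL,
  max: 10,                        // upper bound on open connections for this process
  idleTimeoutMillis: 30_000,      // release idle connections after 30s
  connectionTimeoutMillis: 5_000, // fail fast if a connection cannot be acquired
};

// With the pg package this would be used as: new Pool(poolConfig)
console.log(poolConfig.max);
```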


brody
EMPLOYEE

2 years ago

we know Railway runs on GCP, but I would like to see if anything improves if these tests on the Railway-hosted service were re-run against a database hosted on Railway, connected to via the private network.
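If it helps, trying that suggestion should only require swapping the connection string: a Railway Postgres service exposes a `.railway.internal` hostname on the private network. A sketch of what that swap might look like (the hostname and credentials below are placeholders; the real value is the `DATABASE_URL` from the Railway dashboard):

```javascript
// Hypothetical: pointing the existing pool at a Railway-hosted Postgres over the
// private network. 'postgres.railway.internal' is the typical default private domain
// for a service named Postgres; the credentials here are placeholders only.
const privateConnectionString =
  process.env.DATABASE_URL ??
  'postgresql://user:password@postgres.railway.internal:5432/railway';

console.log(typeof privateConnectionString);
```

Everything else in the Express setup can stay the same, which keeps the comparison limited to the network path.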


jorgerodrigues
PROOP

2 years ago

Good point. Would be interesting to try. However, the other setup (the Next.js one) is also not using a private network, and it is even running in a different region. So I'd have expected more latency issues there too.

