8 months ago
Hiya!
I've been getting some timeout errors in my Mongo instance (and super early this morning in my Redis instance, but it seems fine now). I tried redeploying it, but I'm still getting these timeouts:
Error: NetworkTimeout: viaduct.proxy.rlwy.net:27880: timed out (configured timeouts: timeoutMS: 20000.0ms, connectTimeoutMS: 30000.0ms)
I even tried increasing the timeout values in my app as you can see, but it still fails sometimes (not always). The MongoDB logs are spamming this:
Successfully authenticated
Successfully authenticated
Connection accepted
client metadata
Auth metrics report
Successfully authenticated
Received first command on ingress connection since session start or auth handshake
Successfully authenticated
Connection accepted
client metadata
Auth metrics report
Successfully authenticated
Connection ended
Connection accepted
client metadata
Auth metrics report
Successfully authenticated
Connection accepted
client metadata
Auth metrics report
Successfully authenticated
Which I think is normal?
I haven't touched my app, and the Mongo instance has been working fine for months. I just moved it to the new METAL server like 10 days ago, but it started failing today. That's why I think this might be a Railway issue 🤔
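For anyone debugging a `NetworkTimeout` like the one above, it can help to rule out the app entirely and probe the proxy endpoint with a raw TCP connect. A minimal sketch (this helper is hypothetical, not from the thread; the host and port are the ones in the error message):

```python
import socket
import time

def tcp_reachable(host: str, port: int, timeout_s: float = 5.0):
    """Attempt a raw TCP connect; return (ok, elapsed_seconds).

    Any OSError (refused, DNS failure, timeout) counts as unreachable.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True, time.monotonic() - start
    except OSError:
        return False, time.monotonic() - start

# Example: probe the Railway TCP proxy endpoint from the error message.
# tcp_reachable("viaduct.proxy.rlwy.net", 27880, 5.0)
```

If this intermittently returns `False` (or takes close to the full timeout) while the app's own timeouts fire, the problem is likely at the network/proxy layer rather than in the client's `timeoutMS`/`connectTimeoutMS` settings.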
7 Replies
8 months ago
Can you get timestamps for those error logs and tell your timezone?
I have a lot of them 😅 but for example, 13:52:23. I'm in GMT-3
btw, these error logs are from an app hosted under a different Project ID; the one I shared is the MongoDB instance's Project ID. I'm sharing a screenshot of my OpenTelemetry frontend where I see my app's traces

It's been 2 hours without those error logs, so I guess everything is fine again? haha
8 months ago
No, it kept failing today. I'll try rolling back to the legacy server.
So, I fixed this by just creating a new Mongo instance and migrating all the data over, so I'm 99% sure this was a Railway issue. This is awkward; please take a look, because the same thing is happening now with an old Redis instance. I'll need to migrate everything again :/
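For reference, a migration like the one described is commonly done with `mongodump`/`mongorestore`. A sketch only; both connection strings are placeholders you'd replace with the old and new instances' URIs:

```shell
# Dump everything from the old instance (placeholder URI)
mongodump --uri="mongodb://USER:PASS@OLD_HOST:PORT" --out=./dump

# Restore the dump into the fresh instance (placeholder URI)
mongorestore --uri="mongodb://USER:PASS@NEW_HOST:PORT" ./dump
```

This copies data and indexes but not users/roles, so any credentials would need to be recreated on the new instance.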