a year ago
I am seeing a lot of these errors pop up randomly. Sometimes my instance just can't reach a host for some reason. I tried rebuilding and disabling some proxies the app was using, but things are still breaking intermittently.
Is there a way to check if this is a platform issue or something else?
0 Replies
a year ago
do you happen to have static IPs enabled?
I am not sure, so I am assuming I do not. At least it's not something I ever looked into/needed
a year ago
can you check in your service settings just to be sure please?
a year ago
thanks, Discord says you have a Pro badge so I thought I'd ask
a year ago
raising your thread directly to the team now
a year ago
an incident was just called <#846875565357006878>
Here is a full log as well
Get "https://api.wotblitz.com/wotb/account/list/?application_id=...": dial tcp 92.223.56.106:443: connect: no route to host
The IPs are different, but it has been randomly happening for around a week now
a year ago
you may not have static IPs enabled, but static IPs are shared, so you could have ended up with a service that was given the same IP
I don't think I have enough knowledge on networking :D
My setup is running 1 replica at a time that is trying to reach this, though some requests are made in parallel. The services I am trying to reach are from a 3rd party as well. Is there a way for static IPs to cause an issue here?
Not sure how relevant this is, but the same errors occurred when I had an HTTP proxy enabled for those requests
a year ago
well, that doesn't explain why you have had issues for the past week; the incident description says it started today
a year ago
but we can reassess this after the incident has been resolved
Here is the first log I am able to find on my end
— 05/29/2024 11:34 PM
Get "https://api.wotblitz.com/wotb/account/info/?account_id=1058430648&application_id=...": proxyconnect tcp: dial tcp 172.96.83.75:4444: connect: no route to host
a year ago
interesting, that's from quite a while ago
yeah, I thought this was an issue with the 3rd-party service, since their servers sometimes error out, so I did not report it and tried to debug on my own
a year ago
gotcha, we will come back to this once the team has confirmed the incident has been resolved
a year ago
the incident has been marked as resolved, let me know if you continue to see this issue
— Today at 9:20 PM
dial tcp 92.223.17.55:443: connect: no route to host
— Today at 9:38 PM
dial tcp 92.223.7.145:443: connect: no route to host
— Today at 11:10 PM
dial tcp 92.223.17.55:443: connect: no route to host
Still happening for me <:sadge:1244710822752813098>
a year ago
are these the only domains you're having issues calling?
a year ago
what region are you deployed to?
just noticed I am on Legacy runtime as well, gonna swap to V2 just in case it matters
a year ago
you're the second person I've seen report an issue with the logs, can I ask what logger you are using?
https://github.com/rs/zerolog
default settings across the board
I also have multiple services using the same logger, some of them seem to be logging fine-ish; the message is not being shown, but logs are there
a year ago
what do you see if you expand the context of a blank log?
a year ago
I have an idea of what's happening, will test
a year ago
logs that aren't json are fine, but your json logs are blank
a year ago
okay, I'll see if I can reproduce with Fiber
it looks like there are now a lot of logs during startup, like container started, etc. Seems like this prevents some of the service logs from being delivered right after container start
a year ago
interesting
a year ago
can you share your logger middleware config?
import "github.com/gofiber/contrib/fiberzerolog"
...
fiber.New(fiber.Config{Network: opts.Network})
app.Use(fiberzerolog.New())
a year ago
thanks!
a year ago
can reproduce
just updated a service and it looks like my guess was more or less correct - logs right after container start get lost. This would have logged a bunch of "file loaded" messages, but only the last one got captured
a year ago
I'll talk to the team about this Monday
Just got logs working, at least text ones on one replica e65c43cb-34b6-4519-8a01-6a13cdf03732
a year ago
is it using the v2 runtime?
a year ago
your logs are no longer blank?
just in this one service, the first start after Legacy > V2 switch also logged correctly it seems 0e7a89a8-f577-4121-9a22-bc187fb0eeef
a year ago
you can get the logs from the middleware back by doing this -
logger := zerolog.New(os.Stdout).Hook(zerolog.HookFunc(func(e *zerolog.Event, level zerolog.Level, message string) {
    e.Str("msg", message)
}))
app.Use(fiberzerolog.New(fiberzerolog.Config{
    Logger: &logger,
}))
zerolog used the "message" attribute, but it looks like the runtime v2 is only picking up "msg"
a year ago
doesn't fix the Fiber printout, but it's progress
a year ago
We can do now :0
a year ago
was the "message" attribute not being picked up a known issue prior to this?
a year ago
the other service is likely still on the legacy runtime
level, _ := zerolog.ParseLevel(os.Getenv("LOG_LEVEL"))
zerolog.SetGlobalLevel(level)
app := fiber.New(fiber.Config{
    Network: os.Getenv("NETWORK"),
})
app.Use(fiberzerolog.New())
a year ago
yeah this is a bug in how logs are picked up from the v2 runtime
a year ago
you can switch back to the legacy runtime in the service settings, or use my proposed temporary solution above
the tcp errors have not yet popped up since V2 upgrade, but they happen in small bursts every few hours
a year ago
tbh I'm thinking it may be an issue with the service you are calling, not Railway, but we will wait and see what happens
i'll just stay on V2, it's gotta be better in some ways right? at least the number is higher than V1 😂
a year ago
so true 🤣
yeah, I have a suspicion as well, but idk how to test that because it is so random
I'll also ask someone who uses this api on another project, maybe they have some similar issues
a year ago
is theirs hosted on railway?
a year ago
ah gotcha
it's also TS, so idk if I can even compare. but i'd imagine network issues like that would pop up anywhere
a year ago
you would hope, wouldn't be too good if this was a "only happens on railway" type of issue
a year ago
but you said you still got this error when connecting through a proxy, thus taking Railway's networking completely out of the equation
so far it is, but I am also working on a refactor that will move to fly.io. I should start using that api heavily next week, so I will be able to compare better
a year ago
oh that's not ideal, may I ask why you are moving to fly?
MongoDB is really hard to limit in memory when the container has a technical limit of 8GB, so it ends up using a lot of RAM, even when I set limits through command args
#1 led me to explore other options and I decided to try SQLite, but Railway has no way to make custom snapshots/recover the volume data. I think fly does snapshots automatically and I can restore a volume from it super quick
5GB volume size limit
since my apps consume almost no CPU and only a couple hundred mbs of RAM, Pro plan is just not worth it
a year ago
How does it work on fly? I'm not too familiar, can you choose your instance size? and what if mongo does need more memory, wouldn't it just crash on fly?
that's fair
also fair
Mongo resizes its in-memory cache based on available system memory; I can pick a 1GB instance on fly and it will just work with what it has, no matter how big the collections get.
On Railway, I had to reduce the amount of data stored in order to keep Mongo around 1GB, cutting some features. It works for now, but I am just really worried that it will keep growing as the dataset gets bigger, since I really have no control over it.
a year ago
Okay gotcha, it works since mongo plays nicely with the available memory
a year ago
all very good feedback, I will make sure the team sees this
yeah, I never had any issues with limiting mongo to some specific size. I believe you can also go below 1GB, it's just not recommended. So I could get an even smaller instance on fly if it's still expensive for what the project is
but yeah, it would be amazing if there was some way to get the files from the volume or at least make snapshots and copy/restore
a year ago
I think the main reason railway hasn't allowed for custom lower instance sizing would be because the majority of services will crash
a year ago
volume snapshots are something they would like to do and will do at some point
well, Railway's dynamic usage-based pricing is amazing for Go; it is practically free to run most of my projects :D while on other providers I would have to pay for some minimum-sized instance, like 256MB. But it is also quite scary to know that something can happen, or I can get spammed, and there is no way to limit the damage, like setting a RAM/CPU limit
a year ago
I've seen thousands of help threads for crashed services that people tried to run on the trial plan with 500mb of memory
a year ago
you can set a usage limit but it's not quite the same
a year ago
haha I know I'm a go dev too
yeah, service/container limits would be awesome. And a way to run migrations :D I had to bloat my final image size quite a bit to run migrations on SQLite; it would be nice to have the ability to interact with the volume before the main container starts up
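For reference, the pre-start migration step described here can be sketched as a small runner that applies pending migrations in order before the server comes up. A minimal sketch; the migration names and the in-memory applied set are illustrative, and a real version would execute the SQL against the SQLite file on the volume and record applied names in a table:

```go
package main

import (
	"fmt"
	"sort"
)

// applyPending applies every migration (name -> SQL) not yet in the
// applied set, in lexical name order, and returns the names it ran.
// The run callback stands in for executing SQL against the database.
func applyPending(migrations map[string]string, applied map[string]bool, run func(sql string) error) ([]string, error) {
	names := make([]string, 0, len(migrations))
	for name := range migrations {
		names = append(names, name)
	}
	sort.Strings(names) // 0001_..., 0002_..., ... order

	var ran []string
	for _, name := range names {
		if applied[name] {
			continue // already applied on a previous start
		}
		if err := run(migrations[name]); err != nil {
			return ran, fmt.Errorf("%s: %w", name, err)
		}
		applied[name] = true
		ran = append(ran, name)
	}
	return ran, nil
}

func main() {
	migs := map[string]string{
		"0002_add_index.sql":    "CREATE INDEX ...",
		"0001_create_users.sql": "CREATE TABLE ...",
	}
	applied := map[string]bool{"0001_create_users.sql": true}
	ran, _ := applyPending(migs, applied, func(string) error { return nil })
	fmt.Println(ran) // → [0002_add_index.sql]
}
```

Calling something like this at the top of main (before the HTTP server starts) gets the "touch the volume before serving traffic" behavior without a separate init container.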
a year ago
i wonder if the v2 builder (different from the v2 runtime) allows for interacting with the service's volume during build
a year ago
side note, you could also use libsql in place of sqlite?
plus I am kinda moving away from sharding everything into microservices 😂 so it's nice to just deploy a single binary
a year ago
if you used libsql daemon you could run your migrations during build
a year ago
thats also fair
a year ago
ah good point
well, I guess it's not too different from how I do it now. so it might be a solid option actually
I'll give it a try when some volume features get through the pipeline, rn I don't need anything more than sqlite, and I want to make the project super friendly to self host
a year ago
sounds good!
So far not a single error since I switched to V2 around 7 hours ago, so it might be solved 🤞
a year ago
how do you know if there are errors if you can't see the logs lol
a year ago
well that's good news
a year ago
you would have gotten errors within 7 hours on the legacy runtime?
a year ago
awesome, the logs are easy to fix but the networking problem likely isn't so I'm happy runtime V2 fixed it
0 errors overnight, so this is definitely fixed now. Weird that V1 had some obscure networking issue; time to move to V2 everywhere just in case :D goodbye logs 🫡
a year ago
I'm sure the team will fix the logs fast
a year ago
the team is hands on keyboard to fix the logging issue
10 months ago
update: one half of this problem has been fixed; structured logs with a message attribute will no longer be blank. The missing logs are still being worked on
10 months ago
update: the missing logs are fixed, but there's a new issue where they may not be shown in the correct order
10 months ago
sorry for the late reply, but all known logging issues on the v2 runtime have been fixed