V2 + App Sleep = first response always empty
pauldps
HOBBYOP

2 years ago

I recently deployed a new Bun API on V2 with App Sleep, and I've noticed that the first request to a sleeping app always returns an empty response. This hasn't happened on non-V2 Bun apps with App Sleep on.

The following are tests with `time curl -i` against the same URL/endpoint.

Normal request (non-sleeping app, {"status": "OK"} is the response from my API):

HTTP/2 200
content-type: application/json;charset=utf-8
date: Thu, 04 Jul 2024 06:03:09 GMT
server: railway-edge
x-request-id: 56xHFuw3QlCDX-2Zclvhkw_3165824431
content-length: 15

{"status":"OK"}
real    0m0.269s
user    0m0.016s
sys     0m0.000s

First request on the same app but sleeping:

HTTP/2 200
server: railway-edge
x-request-id: Z4G6GgaAQziEbf20vOO_UQ_3165824431
content-length: 0
date: Thu, 04 Jul 2024 06:02:56 GMT


real    0m1.275s
user    0m0.000s
sys     0m0.000s

Project ID: 34304961-2ebf-4d0b-b2ae-3585cf6b9353

405 Replies

2 years ago

can you also provide the same data for the same app running on the legacy runtime


pauldps
HOBBYOP

2 years ago

you mean change the runtime for that app, right?


pauldps
HOBBYOP

2 years ago

I tested a different app running on Legacy and the issue didn't happen


pauldps
HOBBYOP

2 years ago

but I'll change the runtime


2 years ago

testing a different app is not conclusive; when testing you need to change only one variable at a time, and a completely different app changes too many variables


pauldps
HOBBYOP

2 years ago

it was another Bun app, but I can see the variables


pauldps
HOBBYOP

2 years ago

I'm deploying the reported app on Legacy and have to wait for it to sleep 🙂


2 years ago

I'm not talking about environment variables


pauldps
HOBBYOP

2 years ago

yeah, by variables I didn't mean environment variables, but rather how the apps are different despite both being Bun APIs


pauldps
HOBBYOP

2 years ago

deploy is done, will report back in about 10 mins


pauldps
HOBBYOP

2 years ago

Test done, request on sleeping app worked fine:

HTTP/2 200
content-type: application/json;charset=utf-8
date: Thu, 04 Jul 2024 06:34:47 GMT
server: railway-edge
x-request-id: gLwN8r6fSpSLChJuzIS30g_3165824431
content-length: 15

{"status":"OK"}
real    0m1.795s
user    0m0.000s
sys     0m0.016s

pauldps
HOBBYOP

2 years ago

switching back to V2 to repeat the test


pauldps
HOBBYOP

2 years ago

btw that cold boot time = 🏆


2 years ago

1.795s is good?


pauldps
HOBBYOP

2 years ago

for a cold boot time? I'd say excellent


pauldps
HOBBYOP

2 years ago

I have another Rails app running on Railway that cold-boots in about 10s, kinda bad


pauldps
HOBBYOP

2 years ago

but that's mostly Rails to blame


2 years ago

similarly I have a feeling this is bun to blame


pauldps
HOBBYOP

2 years ago

I'm running a compiled executable, so very likely


2 years ago

isn't that the recommended way to run in production though


pauldps
HOBBYOP

2 years ago

it is, I'm following their guide on that


pauldps
HOBBYOP

2 years ago

for comparison, this is normal request time
```HTTP/2 200
content-type: application/json;charset=utf-8
date: Thu, 04 Jul 2024 06:38:27 GMT
server: railway-edge
x-request-id: vAXlgIj9Rlq8binHs09mAQ_603524580
content-length: 15

{"status":"OK"}
real 0m0.221s
user 0m0.000s
sys 0m0.000s
```


pauldps
HOBBYOP

2 years ago

the increased memory usage on V2 is a bit of a bummer though


2 years ago

how much of an increase.


pauldps
HOBBYOP

2 years ago

Legacy was running ~36MB
V2 is running 53~61MB


2 years ago

is this the exact same app, or are you comparing different apps again lol


pauldps
HOBBYOP

2 years ago

it's the same app


2 years ago

someone else reported higher memory usage on the v2 runtime, but I can't reproduce it just by purely allocating bytes


pauldps
HOBBYOP

2 years ago

just by looking at its memory metrics
looking further back (the app has been up only for a couple hours) the lowest it got on V2 was 42MB, but I only had one run of it on Legacy, so probably needs more data

(screenshot of memory metrics attached)


pauldps
HOBBYOP

2 years ago

but the difference is quite noticeable, maybe not in the image because the chart ceiling is a bit too high


pauldps
HOBBYOP

2 years ago

btw the app went to sleep again and curl returned an empty response


pauldps
HOBBYOP

2 years ago

if it matters, the service has a volume


2 years ago

remove the volume and try again?


pauldps
HOBBYOP

2 years ago

I'll try, that might break the app though


pauldps
HOBBYOP

2 years ago

app broke, trying to fix it


pauldps
HOBBYOP

2 years ago

alright it's back up, now waiting for sleep


pauldps
HOBBYOP

2 years ago

Network graph also wild on Legacy

(screenshot of network graph attached)


2 years ago

then it's a good thing the legacy runtime will be phased out


pauldps
HOBBYOP

2 years ago

got empty response on sleeping app, so volume is not it


2 years ago

okay can you provide a minimal reproducible bun app that sends an empty response on the v2 runtime


pauldps
HOBBYOP

2 years ago

I'll try


pauldps
HOBBYOP

2 years ago

I deployed a smaller app, and could not replicate the issue


pauldps
HOBBYOP

2 years ago

but here's the thing… the affected app also stops having the issue 👀


2 years ago

uh.. task failed successfully?


pauldps
HOBBYOP

2 years ago

ugh, technology these days 😛


2 years ago

ugh, bun these days


pauldps
HOBBYOP

2 years ago

now that I have both apps running, I'll try to replicate it again with the affected app


pauldps
HOBBYOP

2 years ago

then try to replicate it with the smaller app


pauldps
HOBBYOP

2 years ago

I'll try deploying to separate project in case of same project shenanigans


2 years ago

that's definitely minimal


pauldps
HOBBYOP

2 years ago

I was able to reproduce the issue with the minimal app in a separate project


pauldps
HOBBYOP

2 years ago

and the original app also started showing blank responses after I removed the minimal app from that project 👀


2 years ago

this is looking more like instabilities with bun


2 years ago

try the same code with node?


pauldps
HOBBYOP

2 years ago

added a node branch to the minimal app and deployed it, now waiting for sleep


2 years ago

just a question, why do you have the healthcheck timeout set to a low value like 30 seconds?


pauldps
HOBBYOP

2 years ago

because I want it to fail fast
usually if the first request fails, the deploy is likely busted, and I don't want to wait 5 minutes for the deployment to fail


2 years ago

makes sense


pauldps
HOBBYOP

2 years ago

I think for Rails apps with slower boot times I set a higher value


pauldps
HOBBYOP

2 years ago

got the empty response with the Node app as well


2 years ago

interesting


2 years ago

can you link the applicable deployment


pauldps
HOBBYOP

2 years ago

two requests

$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
server: railway-edge
x-request-id: 3OBw1saOQ2WUAAJCR4VCGw_603524580
content-length: 0
date: Thu, 04 Jul 2024 08:17:05 GMT


real    0m1.322s
user    0m0.000s
sys     0m0.000s


$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
content-type: application/json
date: Thu, 04 Jul 2024 08:17:29 GMT
server: railway-edge
x-request-id: 2SzMY7RyRVie9l27WpIy2Q_882434190
content-length: 19

{"status": "NODE"}

real    0m0.342s
user    0m0.000s
sys     0m0.000s

pauldps
HOBBYOP

2 years ago

the deployment? or the project?


2 years ago

the deployment


pauldps
HOBBYOP

2 years ago

oh, got it
7368d15e-ed13-4684-aab3-72e2b3bdaa74


2 years ago

full link please


pauldps
HOBBYOP

2 years ago

the url in the browser when the deployment is open?



2 years ago

would it be too much to ask you to also do an express app?


pauldps
HOBBYOP

2 years ago

lemme see if I can do it quickly, I never used express before lol


2 years ago

that's a crazy sentence; I'd never have imagined someone who uses bun and Elysia saying they've never used express


pauldps
HOBBYOP

2 years ago

when express was a thing I was mostly working with Rails
when I moved to Node, it was during a time when express was considered too slow compared to other libs, so I never touched it


pauldps
HOBBYOP

2 years ago

express app is up

$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
content-type: application/json; charset=utf-8
date: Thu, 04 Jul 2024 08:33:43 GMT
etag: W/"14-kjLmVQInBma0jJMTEoZwvPwAyY4"
server: railway-edge
x-powered-by: Express
x-request-id: F32SRDmRQFSyDPyKYfb06w_603524580
content-length: 20

{"status":"EXPRESS"}
real    0m0.388s
user    0m0.000s
sys     0m0.000s

waiting for sleep


pauldps
HOBBYOP

2 years ago

code in the express branch


pauldps
HOBBYOP

2 years ago

the express app responds correctly


2 years ago

you're still on the v2 runtime?


pauldps
HOBBYOP

2 years ago

yes, I just changed the branch and nothing else


pauldps
HOBBYOP

2 years ago

I wonder what's going on with Node's http server, which is probably what Bun servers are based off of


pauldps
HOBBYOP

2 years ago

using express is not an option for me though


2 years ago

well that seems like this isn't an issue with railway then


pauldps
HOBBYOP

2 years ago

hold on


pauldps
HOBBYOP

2 years ago

got an empty response with express


pauldps
HOBBYOP

2 years ago

I think my first test was too fast


pauldps
HOBBYOP

2 years ago

$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
server: railway-edge
x-request-id: xSTDdCMbTrexjKz3i8FsOg_1654200396
content-length: 0
date: Thu, 04 Jul 2024 08:54:27 GMT


real    0m1.298s
user    0m0.000s
sys     0m0.000s


$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
content-type: application/json; charset=utf-8
date: Thu, 04 Jul 2024 08:54:53 GMT
etag: W/"14-kjLmVQInBma0jJMTEoZwvPwAyY4"
server: railway-edge
x-powered-by: Express
x-request-id: 7r720LsRTEyZ8daihSwTQg_1654200396
content-length: 20

{"status":"EXPRESS"}
real    0m0.270s
user    0m0.000s
sys     0m0.000s

pauldps
HOBBYOP

2 years ago

it seems like an issue with V2 to me


pauldps
HOBBYOP

2 years ago

could potentially test with non-Javascript frameworks but that would be a bit too much for me to do atm


2 years ago

ill test with a go server


2 years ago

what happens if i dont experience the same issue?


pauldps
HOBBYOP

2 years ago

I can test with Ruby later, but for now I need to go sleep myself lol


pauldps
HOBBYOP

2 years ago

good question, I have a theory, but want to test a slow language first


pauldps
HOBBYOP

2 years ago

remember to deploy in a new project since it seems multiple services in a project can affect the results, I'd like to test more about that part too


2 years ago

i have indeed created a new project


pauldps
HOBBYOP

2 years ago

I have deployed a Ruby/Sinatra app, and was not able to replicate the issue on the first cold boot. But I'm seeing a pattern in the logs that I want to investigate


pauldps
HOBBYOP

2 years ago

these are the logs from the Express app. My first request did not trigger the problem, but my second did.

The second request was after the "container event container died" log entry that was absent from the first request. So I'm trying to get that log entry to show on the Sinatra app

(screenshot of logs attached)


pauldps
HOBBYOP

2 years ago

the "Stopping Container" spam seems to indicate there's a problem somewhere with V2


pauldps
HOBBYOP

2 years ago

was not able to replicate the issue with Ruby after 2 attempts. I'm going back to the main branch (Bun) to see if maybe the problem resolved itself


2 years ago

stopping container is it being put to sleep


pauldps
HOBBYOP

2 years ago

does it show even if the app is already sleeping?


pauldps
HOBBYOP

2 years ago

"Stopping Container" logs did not show up for the Ruby app 🤔


pauldps
HOBBYOP

2 years ago

but it did go to sleep


pauldps
HOBBYOP

2 years ago

(according to the dashboard)


2 years ago

maybe the ruby app is on the legacy runtime


pauldps
HOBBYOP

2 years ago

I will doublecheck after I test one more time with the Bun branch


pauldps
HOBBYOP

2 years ago

the problem is still there with the Bun app. The logs:

(screenshot of logs attached)


pauldps
HOBBYOP

2 years ago

no "Stopping Container" tho


pauldps
HOBBYOP

2 years ago

switching to the ruby branch for now to investigate more, made sure it's on V2


pauldps
HOBBYOP

2 years ago

got to reproduce the issue with the Sinatra app. It was a little worse as two requests gave empty responses before the third one returned the correct response

$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
server: railway-edge
x-request-id: OPED8JR1TW6SpSjny6blUg_882434190
content-length: 0
date: Thu, 04 Jul 2024 18:16:25 GMT


real    0m1.567s
user    0m0.016s
sys     0m0.000s



$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
server: railway-edge
x-request-id: PbIYVlwJQ-qkE6uGJUtMsw_882434190
content-length: 0
date: Thu, 04 Jul 2024 18:16:29 GMT


real    0m0.211s
user    0m0.000s
sys     0m0.000s



$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
content-type: application/json
date: Thu, 04 Jul 2024 18:16:35 GMT
server: railway-edge
server: WEBrick/1.8.1 (Ruby/3.2.4/2024-04-23)
x-content-type-options: nosniff
x-request-id: ZBY6eoJsS_qCym1oa23Yyg_3165824431
content-length: 20

{"status":"SINATRA"}
real    0m0.378s
user    0m0.000s
sys     0m0.000s

pauldps
HOBBYOP

2 years ago

logs

(screenshot of logs attached)


pauldps
HOBBYOP

2 years ago

(those are not errors btw, wtf Sinatra)


2 years ago

printed to stderr


2 years ago

I have requested my go app a few times when it has gone to sleep and was not able to get an empty response


pauldps
HOBBYOP

2 years ago

were the logs like the above? I let my app sleep for about an hour or so before making requests


pauldps
HOBBYOP

2 years ago

from observation the problem seems to be related to those "container died" and "stopping container" errors


2 years ago

those are regular event logs, nothing to be concerned about


pauldps
HOBBYOP

2 years ago

right, I meant logs, not errors 👍


2 years ago

yeah the container log stuff is perfectly normal


pauldps
HOBBYOP

2 years ago

I do think they seem to indicate the container is going into a state where it fails to render responses on wakeup
so far V2 is the common denominator; I've changed projects and languages, and the problem doesn't happen on Legacy. What else could we try?


2 years ago

not sure, I'll report it to the team anyway


pauldps
HOBBYOP

2 years ago

thanks, I'll keep the project up if the team wants to debug/investigate


2 years ago

So you can repro this on both bun and sinatra?


pauldps
HOBBYOP

2 years ago

correct, also Node-http and express


2 years ago

Ack and escalated


2 years ago

It should be triaged on Monday


2 years ago

fairly certain that the blank response should in fact be a 503 "application failed to respond" page, but railway is no longer sending that page at the moment due to what i believe to be a bug.

so let's assume your first response to a sleeping service is a 503 status code, meaning your app did not respond to the first request in time; that explains why a statically compiled go app did not exhibit this behavior.

when a request comes in for a slept app, the container is started and a tcp connection attempt is made in a loop every 30ms. once that succeeds, the request is forwarded to your app, but if your app is not ready to handle http traffic just yet, you will get the 503 "app failed to respond" page; the app's health check is not taken into account.

there's definitely some room for improvement here on the railway side of things for waking sleeping services, aside from fixing the blank page being sent instead of the 503.


2 years ago

Is this for the new proxy or?


2 years ago

yep all testing done with only the new proxy enabled


2 years ago

Great. Miguel merged a fix for this. Should be good to go


2 years ago

for clarity, the fix was for the blank response instead of the 503 application failed to respond page that should have been shown


pauldps
HOBBYOP

2 years ago

I can handle the 503 response better than a blank page, can set my client to retry or something
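A minimal client-side retry along those lines might look like this. It's a sketch, not a recommended production policy: the retry count and delay are arbitrary, and `fetchImpl` is injected (defaulting to the global fetch on Node 18+) so it can be stubbed in tests.

```javascript
// Retry a request when the edge answers with a wake-up error (502/503),
// which per this thread means the slept container isn't serving yet.
async function fetchWithWakeRetry(url, { retries = 3, delayMs = 500, fetchImpl = fetch } = {}) {
  let res;
  for (let i = 0; i <= retries; i++) {
    res = await fetchImpl(url);
    if (res.status !== 502 && res.status !== 503) return res; // app answered
    if (i < retries) await new Promise(r => setTimeout(r, delayMs)); // wait for wake-up
  }
  return res; // still failing after all retries; let the caller decide
}
```

This only works once the platform returns a distinguishable status code, which is exactly why a blank 200 was the worse failure mode.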


pauldps
HOBBYOP

2 years ago

(although it would be nice if the 503 didn't happen)


2 years ago

We now no longer return a 200


2 years ago

We should return a 500 as was the previous behavior


pauldps
HOBBYOP

2 years ago

just ran a test now and got a 502 with a long HTML error page


2 years ago

That's correct ye?


pauldps
HOBBYOP

2 years ago

$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 502
content-type: text/html
server: railway-edge
x-railway-fallback: true
x-request-id: Y6CCMwhzRo-9NqOjSW-vSw_882434190
content-length: 4689
date: Mon, 08 Jul 2024 18:19:26 GMT

... HTML ...

pauldps
HOBBYOP

2 years ago

that's better than a 200 for sure


2 years ago

What's "Best"


pauldps
HOBBYOP

2 years ago

Best would be 200 with my app returning the correct response


2 years ago

Well yes


2 years ago

Is your app returning the correct response?


pauldps
HOBBYOP

2 years ago

no, it's returning that huge HTML from Railway


2 years ago

No I mean like


pauldps
HOBBYOP

2 years ago

that's just the first request though


2 years ago

What happens? Is this intermittent, or does it always happen?


pauldps
HOBBYOP

2 years ago

the next requests work fine


pauldps
HOBBYOP

2 years ago

it's always when the app is sleeping


pauldps
HOBBYOP

2 years ago

"first request when app is sleeping"


2 years ago

"First request when app sleeping results in 503"


2 years ago

Gotchu


2 years ago

Escalating again


2 years ago

This is only on the V2 runtime ye?


pauldps
HOBBYOP

2 years ago

correct


pauldps
HOBBYOP

2 years ago

does not happen on Legacy


pauldps
HOBBYOP

2 years ago

I suspect it wasn't more widely noticed because it was returning a 200


2 years ago

Yep I suspect so too. Bubbled up


20k-ultra
EMPLOYEE

2 years ago

@Brody, is this application one of those app sleep healthcheck candidates?


20k-ultra
EMPLOYEE

2 years ago

The fact that the legacy proxy works but the new one does not makes me think the healthcheck feature we talked about wouldn't matter.

The new proxy is using the same timeouts as legacy, and these edge proxies are not aware of application sleep logic. There must be a timeout setting that is different.


2 years ago

yep, I feel like this could benefit from having the health check verified when waking up the service


20k-ultra
EMPLOYEE

2 years ago

oh, this is saying that this issue with app sleep waking only happens on the v2 runtime but works on the legacy runtime.

Correct?


20k-ultra
EMPLOYEE

2 years ago

We just happen to be on the new proxy and saw those 200s (sorry about that… will work on ensuring issues like that don't slip through)


20k-ultra
EMPLOYEE

2 years ago

I was able to reproduce and have a strong lead on the issue.


20k-ultra
EMPLOYEE

2 years ago

I'm continuing to work on this and will share updates here when I have some.


20k-ultra
EMPLOYEE

2 years ago

Hello, I'm still looking into this.


20k-ultra
EMPLOYEE

2 years ago

heads up, I saw a few reports of 502s where people's containers were stopped


20k-ultra
EMPLOYEE

2 years ago

if you don't exit with a non-0 code, the restart policy won't start it again.


20k-ultra
EMPLOYEE

2 years ago

unsure what stopped those applications, but pointing out that checking the logs for container exits and restarting the container resolved the issue.


2 years ago

noted, thanks for the heads up


20k-ultra
EMPLOYEE

2 years ago

I’m not sure if I’ll be digging into this more, since I found that in 2 examples of this the container was stopped.


20k-ultra
EMPLOYEE

2 years ago

Next time someone brings this up I’ll just ask to restart their container to see if it comes back


20k-ultra
EMPLOYEE

2 years ago

I’ll have to look in to it if more people report of course


2 years ago

OP did say it was reproducible between multiple deployments


20k-ultra
EMPLOYEE

2 years ago

ok I don't think it's stopped containers in that case.


20k-ultra
EMPLOYEE

2 years ago

!remind me to reproduce this in 1 hour


2 years ago

BTW I don't think we have the cycles ATM to repro stuff for people

They'll have to come to us with reproductions that we can have a look at


20k-ultra
EMPLOYEE

2 years ago

In production I have an app with app sleep and v2 runtime. On my first request it says the app is unavailable. The container starts though. Second request it works.

So easy to reproduce.


2 years ago

Okay then givr


pauldps
HOBBYOP

2 years ago

I don't understand this part: "the container was stopped". Do you mean the container was stopped manually? In my case the container stopped because it went to sleep. I didn't do anything to stop the container and my app is a long-running http server, so it doesn't stop on its own. The only reason it stops is the App Sleeping feature.


2 years ago

I can assure you applications can stop on their own


2 years ago

I think like, even if it does stop on its own, we should probably restart it?


2 years ago

Cause like if it crashes and another request or something comes in IDK


2 years ago

if it exits with an error code, yes, if it exits with a success code, maybe not? but yes if restart is set to always


20k-ultra
EMPLOYEE

2 years ago

Brody got it for the restart behaviour


20k-ultra
EMPLOYEE

2 years ago

The fact I could reproduce this though means we can disregard what I said about the container being stopped.

I think mentioning it might have been a mistake as I was jumping between a few threads debugging stuff.

I will dig into this more next week.


coden
HOBBY

2 years ago

I have the same problem...
It works well with the legacy runtime but not with the new V2.
The first HTTP(S) request via browser is always a 502; subsequent ones work normally.
Built with a custom Dockerfile based on alpine + nginx + php-fpm


20k-ultra
EMPLOYEE

2 years ago

hey folks, I spent some time on this and basically the v2 runtime wakes and forwards http requests differently than the v1 runtime. I have observed the success rate of starting and getting an HTTP response to be pretty flaky (sometimes it works, sometimes it does not). I believe this has something to do with how fast the container can start on the v2 runtime before the request times out.

I can't spend more time on this right now because the number of reports for this has been small and have to prioritize some other issues.

If you need app sleep right now I advise just using the v1 (legacy) runtime.


moafshar
HOBBY

2 years ago

Thanks for the update. I just want to say that I also faced the same issue with my Python instance, mainly observing that the first request wakes up the instance (but the request itself does not go through) while any subsequent requests work. Will try v1 for the time being, but it would be nice if this were resolved.


2 years ago

another report of it here -


2 years ago

We’re expressly not going to be able to prioritize this until the new proxy is out unfortunately


pauldps
HOBBYOP

a year ago

Has something changed with this issue? My Legacy apps are starting to show 502 errors after coming back from sleep
Most of my apps also no longer allow me to change between Legacy and V2


a year ago

We have indeed removed the ability for users to switch back to legacy


pauldps
HOBBYOP

a year ago

Is Legacy going to be removed soon?


a year ago

when we move to metal, legacy will not be supported; thus, in the interest of moving to metal faster, all deploys for all plan tiers use runtime v2


pauldps
HOBBYOP

a year ago

I really need App Sleep to work reliably 😦


a year ago

well then you will be pleased to know that the new proxy is indeed fully rolled out and 100% of the nearly half a million domains used on our platform now have their traffic served via the new proxy, thus we should be able to take a look at picking back up the 502 app sleeping issue.


pauldps
HOBBYOP

a year ago

That would be great, V2 working with App Sleep would be ideal


a year ago

i've also bumped the linear ticket on your behalf


pauldps
HOBBYOP

a year ago

Appreciate that, thanks!


a year ago

We will make sure this works reliably within the next 2 weeks


a year ago

!remind me to circle back in 2 weeks


a year ago

Just wanna say that we are actively working on a solution to this!


a year ago

Circling has been done


20k-ultra
EMPLOYEE

a year ago

We have a solution and hoping to have it out this week.


20k-ultra
EMPLOYEE

a year ago

Solution is being tested and trying to get it out tomorrow.


20k-ultra
EMPLOYEE

a year ago

I'll be sure to comment here when it is.


20k-ultra
EMPLOYEE

a year ago

A fix has been merged and in production now. @pauldps give it a try whenever you can and let me know. Current implementation allows your app to take up to 10 seconds to accept the incoming connection.


a year ago

They do at least have to redeploy for the new changes to take effect, right?


20k-ultra
EMPLOYEE

a year ago

oh yes. Please trigger a redeploy. This action applies some settings so the network is aware that your application has app sleep set.


20k-ultra
EMPLOYEE

a year ago

since this issue impacted applications that started slower than 100 ms, making something that backfilled existing applications did not seem worth it given a redeploy would fix it.

My go application for example never has this issue because it starts up fast enough to accept the connection before the host rejects it thinking there's no app listening.


20k-ultra
EMPLOYEE

a year ago

"started slower than 100 ms"

this number is a guess. I think it's roughly correct. Might be 30-100ms


pauldps
HOBBYOP

a year ago

So this is what I did:

  • Changed the app to V2

  • Triggered a redeploy (also made some code changes etc, it's a GraphQL Yoga API running in Bun)

  • Service starts fine (health check worked on first try) and runs fine in a browser

  • Service goes to sleep

  • I refresh the website

  • Got the error in the first image

Did I miss anything?

(two screenshots attached)


a year ago

request id please


pauldps
HOBBYOP

a year ago

n6zQAxIuT5ysSFbJ-GY0nA_3118653284


a year ago

ill let mig comment on this


a year ago

though, might be worth trying a newer version of bun, you're on 1.1.18


pauldps
HOBBYOP

a year ago

I'll do that soon, but I don't think it will fix the issue
the app is booting in about ~1s


a year ago

i was able to confirm app sleeping works with a node app that took 8 seconds to start, so this may just be bun being bun


pauldps
HOBBYOP

a year ago

maybe my project is stuck in some old/cached workflow?


a year ago

what region?


pauldps
HOBBYOP

a year ago

us-west, the default one


a year ago

same, we'll see if mig wants to work around bun's strange networking issues on monday


pauldps
HOBBYOP

a year ago

I can test with a Ruby/Sinatra app later


pauldps
HOBBYOP

a year ago

I tested it with another Bun app but in a different project and it worked
I think my project/service is borked somehow


pauldps
HOBBYOP

a year ago

the Sinatra app also worked fine on first try


pauldps
HOBBYOP

a year ago

just re-tested the project that had the issue and it still showed the error


pauldps
HOBBYOP

a year ago

This project works: 46548220-e0ba-4a16-b80a-706a55133413
This one does not: 34304961-2ebf-4d0b-b2ae-3585cf6b9353 (service: e2a687a5-9ce2-4694-81ae-12c6756b0bce)


a year ago

maybe try with the same code but in a new project?


pauldps
HOBBYOP

a year ago

for the project that's not working it'll be a bit more difficult since it has other dependencies inside that project that I'd have to deploy too, but I will do it if time permits


a year ago

you could create a template from the project and then create a new project from it


a year ago

it's in project settings


pauldps
HOBBYOP

a year ago

oh, didn't know that. I'll give it a try


pauldps
HOBBYOP

a year ago

will the new project use the same env variables and stuff?


20k-ultra
EMPLOYEE

a year ago

If someone gives me the source code for reproducible bug I will check it out!


pauldps
HOBBYOP

a year ago

I copied my services to another project and it seems to be working without issues, no 502s on wakeup


pauldps
HOBBYOP

a year ago

so it seems my old project is somehow bugged


pauldps
HOBBYOP

a year ago

I'll make one last test, as my old service had a volume attached that I wasn't using. I've deleted the volume, redeployed the service, and will wait for it to sleep


pauldps
HOBBYOP

a year ago

just did ☝️ and it errored the same, so it doesn't seem to be volume-related


a year ago

@pauldps - as mig said, we would need a reproducible example in order to look into it


pauldps
HOBBYOP

a year ago

I'm not sure that's reproducible
both projects are running the same code with the same env vars, one works, one does not


pauldps
HOBBYOP

a year ago

I have given the project IDs of both, feel free to look into them


a year ago

unfortunately we won't be able to spend time doing that; we would need a reproducible example


pauldps
HOBBYOP

a year ago

you're probably going to get other people with old projects facing the issue but some of them won't be able to do what I did (copy/move everything to a new project)


a year ago

i had a service i deployed 8 months ago, with the changes done it can now properly wake up from sleep


pauldps
HOBBYOP

a year ago

same-ish with my Sinatra project
that's why I think it's a problem with my project specifically
it's not something related to source code


pauldps
HOBBYOP

a year ago

something about my project, infrastructure/configuration-wise, not code-wise, might be causing the issue


pauldps
HOBBYOP

a year ago

but I can't look at infra/configs


a year ago

if you think that is the case you are welcome to try duplicating the gesund-api service


pauldps
HOBBYOP

a year ago

I already did that


pauldps
HOBBYOP

a year ago

… ah, the service, into the same project?


a year ago

yes


pauldps
HOBBYOP

a year ago

I duplicated the entire project


pauldps
HOBBYOP

a year ago

let me give that a try


pauldps
HOBBYOP

a year ago

done, will wait for it to sleep


pauldps
HOBBYOP

a year ago

also have an update:
the second project's first request failed on wake up, just tried now


pauldps
HOBBYOP

a year ago

that said, how do I actually send the code of this repo for reproduction?


pauldps
HOBBYOP

a year ago

Request ID: f5B_p5h0T5SopI4KU

(screenshot attached)


a year ago

bun too?


pauldps
HOBBYOP

a year ago

the exact same code as gesund-api. Bun as well


a year ago

just since its easy, I'd still recommend trying the latest bun version


pauldps
HOBBYOP

a year ago

Bun upgraded to 1.1.34


a year ago

and if this doesn't work, we would need that MRE


pauldps
HOBBYOP

a year ago

it's going to be very hard for me to have a MRE if my other Bun example (let's call that Project 3) isn't failing


pauldps
HOBBYOP

a year ago

I'll upgrade Bun there too and test it again


a year ago

project id?


pauldps
HOBBYOP

a year ago

46548220-e0ba-4a16-b80a-706a55133413
this one is just a plain Bun server
it doesn't seem to be affected by the issue
I have other branches where I have other servers like Sinatra for testing purposes


a year ago

also bun 1.1.18


pauldps
HOBBYOP

a year ago

as for that MRE, I mean how I can make it actually minimal; it's a graphql api with multiple endpoints, and I don't know what is making it fail, if anything.
this testing takes time, since I have to wait for the app to sleep and test the first request; it doesn't seem like I can make several code changes removing stuff until I find the culprit, if there is one


a year ago

it's up to you to remove as much code from the current app as you can while still retaining the issue


pauldps
HOBBYOP

a year ago

that might take me a very long time


a year ago

there is no rush


pauldps
HOBBYOP

a year ago

I'll see what I can do but I'm not happy about having to do this when the server is returning a 502 which seems to be out of my control


a year ago

what your app is doing is out of our control too, and thus we need that MRE to reproduce and patch around it


pauldps
HOBBYOP

a year ago

don't you think that if it was an error in my app we'd at least see it in the logs?


pauldps
HOBBYOP

a year ago

there are no logs for the failed request


a year ago

no, I don't think we would see logs for this


pauldps
HOBBYOP

a year ago

does Railway use the health check path during wakeup?


a year ago

no


pauldps
HOBBYOP

a year ago

how does it know it is up and ready to serve requests?


a year ago

we replay the incoming connection for up to 10 seconds


pauldps
HOBBYOP

a year ago

I have a MRE.
Instead of changing my project, I deployed a minimal Graphql-yoga+Bun server to Project 3.
Just tried the first request on sleep and it failed with a 502.
Here's the code: https://github.com/pauldps/bun-railway-v2-test/tree/graphql-yoga


a year ago

so the newer Bun version didn't help, it seems


pauldps
HOBBYOP

a year ago

yup, the previous version (no graphql) running Bun 1.1.18 also didn't have the issue


a year ago

alright, thank you


pauldps
HOBBYOP

a year ago

Strangely: I deployed the same app on Project 2 (a0aefb5f-15c4-49c6-a7ec-020b58d0cfc5)
same branch and everything. It's working fine!


pauldps
HOBBYOP

a year ago

I've checked it twice now and both requests worked fine. So I don't know what's going on


a year ago

well, I have your code deployed, so I'll let you know if I can reproduce it


pauldps
HOBBYOP

a year ago

I got an error on the app running on Project 2. The fact that it occasionally works seems to be a bit random.


a year ago

I got your bun MRE to cause a 502 on wake


a year ago

so yeah, some strange issue with bun


pauldps
HOBBYOP

a year ago

is Bun networking known to cause issues?


a year ago

yeah, it's not the first time


20k-ultra
EMPLOYEE

a year ago

since we got an MRE I could try it out tomorrow.

Thanks for the persistence on wanting to get this fixed. Sometimes it's a 50/50 effort for us to help when the issue is rare.


a year ago

here is their MRE repo in template form -


pauldps
HOBBYOP

a year ago

is it deploying the graphql-yoga branch? (can't tell)


a year ago

yes


pauldps
HOBBYOP

a year ago

👋 Any updates on this? I'm seeing another post reporting similar issues with other languages (https://discord.com/channels/713503345364697088/1313496536650616852). Hopefully they can provide a MRE.
Can confirm the issue is still happening with my Bun apps. I have set up client retry mechanisms to work around it in the meanwhile.


pauldps
HOBBYOP

a year ago

One thing I noticed that makes this difficult to work around: when the 502 is shown via XHR, it fails CORS (because CORS handling is in my app and the request doesn't reach it), so my client-side retry mechanism has to be aware of that too


pauldps
HOBBYOP

a year ago

Also, this is just a hunch, but my app boots nearly instantly. Maybe there's a race condition somewhere, where an app that boots too fast isn't covered?


a year ago

mig had to move his attention elsewhere for the time being.


a year ago

@pauldps - we pushed an additional fix, can you redeploy and then let us know if your app can wake up properly?


pauldps
HOBBYOP

a year ago

will do, thanks! will report in a while


maddsua
HOBBY

a year ago

Not sure if that is related, but when app sleeping is used with some chunky apps, the first request still always fails


a year ago

yeah that's what we hope we have just fixed, at least as long as the app responds within 10 seconds


maddsua
HOBBY

a year ago

Welp 10s seems not to be enough for some apps to come back after sleep


a year ago

we talked about it internally and think 10 seconds is plenty for most apps


a year ago

it's not like you'll have to run your migration commands as part of the start command for much longer 😉


pauldps
HOBBYOP

a year ago

Image 1:

  • Service ID ae4c8ca4-00b8-415b-ac10-4baa1612691f

  • 🔴 Received a 502 on wakeup, 200 afterwards

Image 2 (similar stack, but an MRE):

  • Service ID da0153da-2b2d-4ea2-aed8-e6791380c74a

  • 🟢 Received a 200 on wakeup and afterwards

1327014782783459300
1327014783009947600


pauldps
HOBBYOP

a year ago

The logs for both are without any errors


pauldps
HOBBYOP

a year ago

Previously, the MRE was also failing. It's succeeding now, so I think we have some progress


pauldps
HOBBYOP

a year ago

I'll keep testing to see if the MRE will randomly fail


a year ago

failed to forward request to upstream: failed to read from request body


pauldps
HOBBYOP

a year ago

what would that mean? something like the app is up and listening, but the request ultimately failed?


a year ago

something like that, an application (Bun) level issue, though out of your control. I wonder if we retry on that error, given that it means the initial connection did work


a year ago

maybe try a request logger on Bun to see what it actually responds?


pauldps
HOBBYOP

a year ago

sounds like there's a race condition somewhere, but it's weird that I don't get any errors in app logs


pauldps
HOBBYOP

a year ago

These are the logs from the failing service

1327017359789133800


pauldps
HOBBYOP

a year ago

There's a 9 second gap between Starting Container (Railway) and my shell script (The one starting with [Boot])


pauldps
HOBBYOP

a year ago

I use a shell script to start my app; once it starts the app boots in 1s.


pauldps
HOBBYOP

a year ago

Logs from the service that succeeded:

1327017807006662700


pauldps
HOBBYOP

a year ago

Much smaller gap (4s). What would explain it?


pauldps
HOBBYOP

a year ago

This is the failed request from Service 1:

1327018546743742500


pauldps
HOBBYOP

a year ago

The time matches the "Starting container" log.


20k-ultra
EMPLOYEE

a year ago

Clarification, we pushed a fix today for an edge condition where we didn't detect sleeping apps. We have not changed the logic when a slept app is detected.

I think I can try to reproduce today to see if I can identify the issue. Seems to be something with Bun.


pauldps
HOBBYOP

a year ago

This is the successful request from Service 2:

1327019142796021800


pauldps
HOBBYOP

a year ago

This happens 1s after "Starting Container" yet before the server is fully running (:36)


pauldps
HOBBYOP

a year ago

I reported that bug in another thread, but I haven't experienced it anymore with my apps after redeploying them.


20k-ultra
EMPLOYEE

a year ago

yeah, we fixed an issue a while ago but found this week an edge case where the code wouldn't know to apply that fix in some apps. This has been fixed today.


20k-ultra
EMPLOYEE

a year ago

I should be able to reproduce the Bun issue today though. I will look for some Bun example projects to try. If you have any info about how your project is set up so I can reproduce that 9-second start, that might help too


pauldps
HOBBYOP

a year ago

the MRE is a stripped-down version of my real app, which connects to a libsql database via a private URL. Adding that to the MRE would be a bit of work on my side that I can't commit at this time.
But if it helps, the MRE is here: https://github.com/pauldps/bun-railway-v2-test/tree/graphql-yoga
It's a Bun + GraphQL Yoga API


20k-ultra
EMPLOYEE

a year ago

I'll see if I can reproduce with that alone.


pauldps
HOBBYOP

a year ago

ah, some relevant info
Service 1 (consistently failing) is running on Metal
Service 2 (consistently succeeding) is on non-Metal
So the fix might have positively affected non-Metal apps


pauldps
HOBBYOP

a year ago

I'll try moving Service 2 (the MRE) to Metal if I can. (Edit: done, will report later)


pauldps
HOBBYOP

a year ago

Test done, both services are now giving 502s on first request


pauldps
HOBBYOP

a year ago

so you should be able to reproduce the issue with the MRE running on Metal


20k-ultra
EMPLOYEE

a year ago

I tried reproducing with that repo and the request worked.

1327046554753957899


20k-ultra
EMPLOYEE

a year ago

it says that the server is running in 1 second after the container starts though


20k-ultra
EMPLOYEE

a year ago

i'll try on metal


20k-ultra
EMPLOYEE

a year ago

same results with metal.


pauldps
HOBBYOP

a year ago

I just tested the MRE again with Metal and it worked
but it did fail before (see timestamps)

1327058830840037400


pauldps
HOBBYOP

a year ago

The non-MRE service consistently fails (so far):

1327059771077169200


pauldps
HOBBYOP

a year ago

going to run a higher sample with the MRE to see how often it fails


pauldps
HOBBYOP

a year ago

the MRE is no longer failing it seems. Failed only once after I moved to Metal 🤷
the other is still failing 100% of the time.


20k-ultra
EMPLOYEE

a year ago

What client are you using to make the requests ?


pauldps
HOBBYOP

a year ago

I'm using a browser. I go to the /graphql endpoint and prepare a query. When the app sleeps, I press the button to perform the query


pauldps
HOBBYOP

a year ago

I also deployed an exact copy of the affected service in the same project. Just tried once, and it woke up without errors.


pauldps
HOBBYOP

a year ago

the existing service must have something cached that's poisoning it, or something.


pauldps
HOBBYOP

a year ago

that would explain why your MRE is not getting errors.


pauldps
HOBBYOP

a year ago

nevermind, the copy is also returning 502. 😦


a year ago

okay now switch to node /j


pauldps
HOBBYOP

a year ago

that ship has sailed lol


pauldps
HOBBYOP

a year ago

In Bun We Trust


pauldps
HOBBYOP

a year ago

I'll move one of the copies to non-Metal


pauldps
HOBBYOP

a year ago

The MRE and both copies of the affected service failed for me today
Seems like the MRE failure rate is random, it fails once then works for several hours.
The copies are still failing consistently, both in Metal and non-Metal.


20k-ultra
EMPLOYEE

a year ago

you go to your site in the browser, get the HTML response, wait for the app to sleep, send a request via a website action, and the request fails.

Is this correct ?


pauldps
HOBBYOP

a year ago

yes. Example:

1327449248849072000


pauldps
HOBBYOP

a year ago

I press the "Play" button after the app sleeps.


pauldps
HOBBYOP

a year ago

the response in the image is for a failed request. If I press the button again it returns successfully


maddsua
HOBBY

a year ago

Yeah, because it takes your app longer than 10 seconds to resume from sleep; because of that, Railway gives you that generic "application failed to respond" screen


maddsua
HOBBY

a year ago

I have the same with my grafana instance


maddsua
HOBBY

a year ago

One way to solve that is to put your resource-intensive app behind a proxy service that would wait longer than those 10s


pauldps
HOBBYOP

a year ago

My app boots in ~1s


pauldps
HOBBYOP

a year ago

also Railway gives me the error after a couple of seconds, not 10 seconds


pauldps
HOBBYOP

a year ago

I've made a change to the affected service, thanks to the new preDeployCommand feature ✨ the app boots even faster now.
I just did this, so I'll be testing again over the next hours/days


a year ago

I'm sure you already know, but I just want to avoid any uncertainty, the pre-deploy command isn't run when waking the service


pauldps
HOBBYOP

a year ago

yes I'm aware, and that's great


pauldps
HOBBYOP

a year ago

when waking the service the app doesn't need to run any migrations, hence my "boots even faster" comment


pauldps
HOBBYOP

a year ago

Connecting to database... 2025-01-11T03:12:01.773Z
Using database: libsql://... 2025-01-11T03:12:01.913Z
Creating yoga instance... 2025-01-11T03:12:01.925Z
Starting server... 2025-01-11T03:12:01.956Z
Server is running on http://localhost:8080/graphql 2025-01-11T03:12:01.964Z

Added logs to some strategic places to de-facto measure the boot time of my app: ~191ms
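The timing above can be reproduced with a few strategic log lines. A minimal sketch, assuming a small helper added around the app's boot steps (the phase names here are placeholders, not the app's actual code):

```javascript
// Hypothetical boot instrumentation: print each startup phase with an
// ISO timestamp plus elapsed milliseconds since process start.
const bootStart = Date.now();

function logPhase(message) {
  const line = `${message} ${new Date().toISOString()} (+${Date.now() - bootStart}ms)`;
  console.log(line);
  return line; // returned so the format is easy to check in tests
}

// Example usage (placeholders for the real boot steps):
// logPhase("Connecting to database...");
// logPhase("Starting server...");
```

Comparing the elapsed offsets between the first and last phase gives the de-facto boot time, independent of when the container itself started.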


pauldps
HOBBYOP

a year ago

I have a hunch that booting too fast may be causing the issues.


a year ago

yolo, add a 2 second sleep haha


pauldps
HOBBYOP

a year ago

you know what? I'll actually try that lol


pauldps
HOBBYOP

a year ago

That didn't work 😦 (I was actually hopeful…)

Server is running on http://localhost:8080/graphql 2025-01-11T03:41:04.501Z

It's weird tho. It took longer to return the error (usually it's within ~2s but this one took 4), so it seems the infra is actually waiting for my server to come up, after the 2s sleep

1327482989365690400
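The "add a 2 second sleep" experiment can be sketched like this, assuming a wrapper around whatever function starts the server (the names are hypothetical; per the messages above, this did not fix the 502s):

```javascript
// Hypothetical sketch: delay server startup to test whether "booting too
// fast" races the platform's wake-up handling.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function startWithDelay(startServer, ms = 2000) {
  await delay(ms); // hold off before the port opens
  return startServer(); // e.g. a function that calls Bun.serve({ fetch, port })
}
```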


a year ago

was worth a shot


pauldps
HOBBYOP

a year ago

okay, new evidence:
GET requests all work fine waking up the apps and returning correct responses
only POST requests are returning 502s on first request
this is good because I'd imagine the majority of requests waking up apps will be GETs
that said, does the infra code handle non-GETs differently? or is it only running on GETs or something?


a year ago

you really have not been able to get a 502 from waking with a GET request?


pauldps
HOBBYOP

a year ago

I've tested a few times today and no 502s so far from GETs
POSTs are still failing tho


a year ago

interesting


pauldps
HOBBYOP

a year ago

I got one 502 today from the MRE, using POST. One out of a dozen.
Been testing that one using POST only, but it works most of the time.


20k-ultra
EMPLOYEE

a year ago

there's no difference with how we handle HTTP methods


maddsua
HOBBY

a year ago

GET requests could be cached by their client


maddsua
HOBBY

a year ago

So that they only get 502 on POSTs


20k-ultra
EMPLOYEE

a year ago

I am wondering if the issue is..

  1. your browser creates the connection to our edge network

  2. our edge makes a connection to your app and returns the HTML response to you

  3. you wait until your app goes to sleep

  4. the connection between your app and our proxy is now closed
    a. but the browser connection to our proxy is not because you remained active on your machine

  5. you send POST to your app

  6. the proxy has some bug with existing downstream connections and dead upstream connections. (I would be shocked).

What if the sleep command for your app prevents your application from closing the connection so our edge proxy thinks the connection to your app is actually active still.

You could do your test but wait 15 minutes just to be really safe that the connection between the proxy and your app is closed. Then retry. I assume you try the request as soon as the application goes to sleep.


20k-ultra
EMPLOYEE

a year ago

the error "failed to forward request to upstream: failed to read from request body" means an issue between our proxy and your app.

This error mentions body which GET requests do not have, just headers.

the issue not occurring with GET requests would explain why I couldn't ever reproduce the issue.


20k-ultra
EMPLOYEE

a year ago

try this test and let me know. (wait longer after your app is slept before sending POST to ensure connection between proxy and your app is closed)


20k-ultra
EMPLOYEE

a year ago

another test is: can you make the POST request without loading the HTML? I want you to go from no connection to the proxy to sending a POST which wakes up the app.


20k-ultra
EMPLOYEE

a year ago

it seems this app sleep issue has just been with Bun projects too, which aligns with a runtime issue: not closing connections when it gets terminated, thus making the proxy think those connections are still active.


20k-ultra
EMPLOYEE

a year ago

these connections between the proxy and your app have long timeouts to increase connection re-use across requests, so if we set some keep-alive / idle timeout to 5 seconds, that would impact our connection re-use statistics, which make for faster latency.


pauldps
HOBBYOP

a year ago

Yesterday I built a script (in Bun… the irony) that would use fetch on the GraphQL URL with a POST (similar to how I made requests using the interface).
Note: this is with the MRE.
Most requests worked, but there were some failures.
Would this qualify as "make the POST without loading HTML"?

1328598810036797511


pauldps
HOBBYOP

a year ago

Whenever it returned a 502, the script would retry in 1s.
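That test loop can be sketched roughly as follows. The endpoint URL and query body are placeholders (not from the thread), and the injectable fetchFn parameter exists only to make the sketch testable:

```javascript
// Hypothetical wake-up test: POST the GraphQL query, and while the edge
// returns a 502, retry once per second up to a limit.
async function postWithRetry(url, body, { retries = 10, delayMs = 1000, fetchFn = fetch } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const res = await fetchFn(url, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(body),
    });
    if (res.status !== 502) return res; // woke up (or failed some other way)
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`still 502 after ${retries} retries`);
}
```

Running this every 15 minutes (so the app has slept between attempts) and recording how many attempts each wake-up took would give the failure-rate data described above.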


pauldps
HOBBYOP

a year ago

with the non-MRE service, 100% of requests returned a 502, so I'm not posting it here.
the MRE had a rather varying error rate.


pauldps
HOBBYOP

a year ago

ah yes, the interval between requests was set to 15 minutes.


20k-ultra
EMPLOYEE

a year ago

thanks for doing this test. I think there's some testing I'll be able to perform also to see how the proxy handles upstream connections when the bun app is stopped.


pauldps
HOBBYOP

a year ago

I am now having this issue with a Crystal app that was automatically moved to Metal by Railway.
Problem only started after the move.
First request always 502 from sleeping, 100% of the time.
I've redeployed the service manually but issue persists.

Project ID: 3b1dbdc2-7f93-4dc1-a731-8f076713a724
Service ID: 9c2ae4bb-4e73-41f2-9be3-12d8c1a8a029
Deploy ID: aadcd8a6-bc3e-4340-9616-3c9a12eb286a

1354982962982289400


pauldps
HOBBYOP

a year ago

This is when the problem started happening. Those requests are all pings from cron-job.org that I set up to keep my app awake during certain hours of the day:

1354984318690398500


pauldps
HOBBYOP

a year ago

This move was not good. My app connects to a CockroachDB in the same region it was before, and now the first connection is very slow. I can't really move my database to be closer to the Metal instance, so I don't know what to do.


a year ago

the app does wake in under 10 seconds right?


pauldps
HOBBYOP

a year ago

let me double check but it should boot very fast


pauldps
HOBBYOP

a year ago

the migration command is separate from the start command, so it should be booting in ~1s

1355005137772478700


pauldps
HOBBYOP

a year ago

(this one is still not using pre-deploy commands)


a year ago

Do you have a MRE that could be made into a template?


pauldps
HOBBYOP

a year ago

Not at this time, I'm also inexperienced with template creation


a year ago

Then an MRE repo will suffice


pauldps
HOBBYOP

a year ago

I think I managed to fix it. The migration command was apparently taking longer than 10s, and it was part of the start command.
I moved it to a preDeployCommand and the app seems to wake up correctly now.
Will test a few more times to confirm
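For reference, that split between migration and start can also be expressed in Railway's config-as-code. A sketch assuming the deploy.preDeployCommand and deploy.startCommand fields, with placeholder commands:

```json
{
  "deploy": {
    "preDeployCommand": "bun run migrate",
    "startCommand": "bun run start"
  }
}
```

Since the pre-deploy command runs only on deploys, not on wake-ups (as confirmed earlier in the thread), the start command stays well under the 10-second replay window.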


a year ago

sounds good


Loading...