2 years ago
I recently deployed a new Bun API on V2 with App Sleep, and I've noticed that the first request to a sleeping app always returns an empty response. This hasn't happened on non-V2 Bun apps with App Sleep on.
The following are tests with curl using the same URL/endpoint.
Normal request (non-sleeping app, {"status": "OK"} is the response from my API):
```
HTTP/2 200
content-type: application/json;charset=utf-8
date: Thu, 04 Jul 2024 06:03:09 GMT
server: railway-edge
x-request-id: 56xHFuw3QlCDX-2Zclvhkw_3165824431
content-length: 15

{"status":"OK"}

real 0m0.269s
user 0m0.016s
sys 0m0.000s
```
First request on the same app but sleeping:
```
HTTP/2 200
server: railway-edge
x-request-id: Z4G6GgaAQziEbf20vOO_UQ_3165824431
content-length: 0
date: Thu, 04 Jul 2024 06:02:56 GMT

real 0m1.275s
user 0m0.000s
sys 0m0.000s
```
Project ID: 34304961-2ebf-4d0b-b2ae-3585cf6b9353
405 Replies
2 years ago
can you also provide the same data for the same app running on the legacy runtime
2 years ago
testing a different app is not conclusive; when testing you need to change only one variable at a time, and a completely different app changes too many variables
I'm deploying the reported app on Legacy and have to wait for it to sleep 🙂
2 years ago
I'm not talking about environment variables
yeah I meant variables as not in environment variables but how the apps are different despite both being Bun apis
Test done, request on sleeping app worked fine:
```
HTTP/2 200
content-type: application/json;charset=utf-8
date: Thu, 04 Jul 2024 06:34:47 GMT
server: railway-edge
x-request-id: gLwN8r6fSpSLChJuzIS30g_3165824431
content-length: 15

{"status":"OK"}

real 0m1.795s
user 0m0.000s
sys 0m0.016s
```
2 years ago
1.795s is good?
I have another Rails app running on Railway that cold-boots in about 10s, kinda bad
2 years ago
similarly I have a feeling this is bun to blame
2 years ago
isn't that the recommended way to run in production though
for comparison, this is normal request time
```HTTP/2 200
content-type: application/json;charset=utf-8
date: Thu, 04 Jul 2024 06:38:27 GMT
server: railway-edge
x-request-id: vAXlgIj9Rlq8binHs09mAQ_603524580
content-length: 15
{"status":"OK"}
real 0m0.221s
user 0m0.000s
sys 0m0.000s
```
2 years ago
how much of an increase?
2 years ago
is this the exact same app, or are you comparing different apps again lol
2 years ago
someone else reported higher memory usage on the v2 runtime, but I can't reproduce it just by purely allocating bytes
just by looking at its memory metrics
looking further back (the app has been up only for a couple hours) the lowest it got on V2 was 42MB, but I only had one run of it on Legacy, so probably needs more data

but the difference is quite noticeable, maybe not in the image because the chart ceiling is a bit too high
2 years ago
remove the volume and try again?
2 years ago
then it's a good thing the legacy runtime will be phased out
2 years ago
okay can you provide a minimal reproducible bun app that sends an empty response on the v2 runtime
2 years ago
uh.. task failed successfully?
2 years ago
ugh, bun these days
now that I have both apps running, I'll try to replicate it again with the affected app
2 years ago
that's definitely minimal
I was able to reproduce the issue with the minimal app in a separate project
and the original app also started showing blank responses after I removed the minimal app from that project 👀
2 years ago
this is looking more like instabilities with bun
2 years ago
try the same code with node?
added a node branch to the minimal app and deployed it, now waiting for sleep
2 years ago
just a question, why do you have the healthcheck timeout set to a low value like 30 seconds?
because I want it to fail fast
usually if the first request fails, the deploy is likely busted, and I don't want to wait 5 minutes for the deployment to fail
2 years ago
makes sense
2 years ago
interesting
2 years ago
can you link the applicable deployment
two requests
```
$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
server: railway-edge
x-request-id: 3OBw1saOQ2WUAAJCR4VCGw_603524580
content-length: 0
date: Thu, 04 Jul 2024 08:17:05 GMT

real 0m1.322s
user 0m0.000s
sys 0m0.000s

$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
content-type: application/json
date: Thu, 04 Jul 2024 08:17:29 GMT
server: railway-edge
x-request-id: 2SzMY7RyRVie9l27WpIy2Q_882434190
content-length: 19

{"status": "NODE"}

real 0m0.342s
user 0m0.000s
sys 0m0.000s
```
2 years ago
the deployment
2 years ago
full link please
2 years ago
would it be too much to ask you to also do an express app?
2 years ago
that's a crazy sentence, I never imagined someone who uses bun and Elysia saying they've never used express
when express was a thing I was mostly working with Rails
when I moved to Node it was during a time where express was considered too slow compared to other libs, so I never touched it
express app is up
```
$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
content-type: application/json; charset=utf-8
date: Thu, 04 Jul 2024 08:33:43 GMT
etag: W/"14-kjLmVQInBma0jJMTEoZwvPwAyY4"
server: railway-edge
x-powered-by: Express
x-request-id: F32SRDmRQFSyDPyKYfb06w_603524580
content-length: 20

{"status":"EXPRESS"}

real 0m0.388s
user 0m0.000s
sys 0m0.000s
```
waiting for sleep
2 years ago
you're still on the v2 runtime?
I wonder what's going on with Node's http server, which is probably what Bun servers are based off of
2 years ago
well that seems like this isn't an issue with railway then
```
$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
server: railway-edge
x-request-id: xSTDdCMbTrexjKz3i8FsOg_1654200396
content-length: 0
date: Thu, 04 Jul 2024 08:54:27 GMT

real 0m1.298s
user 0m0.000s
sys 0m0.000s

$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
content-type: application/json; charset=utf-8
date: Thu, 04 Jul 2024 08:54:53 GMT
etag: W/"14-kjLmVQInBma0jJMTEoZwvPwAyY4"
server: railway-edge
x-powered-by: Express
x-request-id: 7r720LsRTEyZ8daihSwTQg_1654200396
content-length: 20

{"status":"EXPRESS"}

real 0m0.270s
user 0m0.000s
sys 0m0.000s
```
could potentially test with non-Javascript frameworks but that would be a bit too much for me to do atm
2 years ago
ill test with a go server
2 years ago
what happens if i dont experience the same issue?
remember to deploy in a new project since it seems multiple services in a project can affect the results, I'd like to test more about that part too
2 years ago
i have indeed created a new project
I have deployed a Ruby/Sinatra app, and was not able to replicate the issue on the first cold boot. But I'm seeing a pattern in the logs that I want to investigate
these are the logs from the Express app. My first request did not trigger the problem, but my second did.
The second request was after the "container event container died" log entry that was absent from the first request. So I'm trying to get that log entry to show on the Sinatra app

the "Stopping Container" spam seems to indicate there's a problem somewhere with V2
was not able to replicate the issue with Ruby after 2 attempts. I'm going back to the main branch (Bun) to see if maybe the problem resolved itself
2 years ago
stopping container is it being put to sleep
2 years ago
maybe the ruby app is on the legacy runtime
switching to the ruby branch for now to investigate more, made sure it's on V2
got to reproduce the issue with the Sinatra app. It was a little worse as two requests gave empty responses before the third one returned the correct response
```
$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
server: railway-edge
x-request-id: OPED8JR1TW6SpSjny6blUg_882434190
content-length: 0
date: Thu, 04 Jul 2024 18:16:25 GMT

real 0m1.567s
user 0m0.016s
sys 0m0.000s

$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
server: railway-edge
x-request-id: PbIYVlwJQ-qkE6uGJUtMsw_882434190
content-length: 0
date: Thu, 04 Jul 2024 18:16:29 GMT

real 0m0.211s
user 0m0.000s
sys 0m0.000s

$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 200
content-type: application/json
date: Thu, 04 Jul 2024 18:16:35 GMT
server: railway-edge
server: WEBrick/1.8.1 (Ruby/3.2.4/2024-04-23)
x-content-type-options: nosniff
x-request-id: ZBY6eoJsS_qCym1oa23Yyg_3165824431
content-length: 20

{"status":"SINATRA"}

real 0m0.378s
user 0m0.000s
sys 0m0.000s
```
2 years ago
printed to stderr
2 years ago
I have requested my go app a few times when it has gone to sleep and was not able to get an empty response
were the logs like the above? I let my app sleep for about an hour or so before making requests
from observation the problem seems to be related to those "container died" and "stopping container" log entries
2 years ago
those are regular event logs, nothing to be concerned about
2 years ago
yeah the container log stuff is perfectly normal
I do think they seem to indicate the container is going into a state where it fails to render responses on wakeup
so far V2 is the common denominator; I've changed projects and languages, and the problem doesn't happen on Legacy. What else could we try?
2 years ago
not sure, I'll report it to the team anyway
2 years ago
So you can repro this on both bun and sinatra?
2 years ago
Ack and escalated
2 years ago
It should be triaged on Monday
2 years ago
fairly certain that the blank response should in fact be a 503 "application failed to respond" page, but railway is no longer sending that page at the moment due to what i believe to be a bug.
so let's assume your first response to a sleeping service is a 503 status code, meaning your app did not respond to the first request in time. that explains why a statically compiled go app did not exhibit this behavior.
when a request comes in for a slept app, the container is started and a tcp connection attempt is made in a loop every 30ms. once that succeeds, the request is forwarded to your app, but if your app is not ready to handle http traffic just yet, you will get the 503 "app failed to respond" page. the app's health check is not taken into account.
there's definitely some room for improvement here on the railway side of things for waking sleeping services, aside from fixing the blank page being sent instead of the 503.
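The wake-up flow described above can be sketched as a readiness-polling loop. This is a hypothetical illustration of the logic, not Railway's actual proxy code; `check` stands in for the proxy's TCP connection attempt, and the interval/timeout defaults follow the numbers mentioned in the thread:

```javascript
// Poll a readiness check every `intervalMs` until it succeeds or an overall
// deadline passes. Returns true if the app came up in time, false otherwise.
async function pollUntilReady(check, { intervalMs = 30, timeoutMs = 10_000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return true; // app accepted the connection
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false; // the caller would serve the 503 page at this point
}
```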
2 years ago
Is this for the new proxy or?
2 years ago
yep all testing done with only the new proxy enabled
2 years ago
Great. Miguel merged a fix for this. Should be good to go
2 years ago
for clarity, the fix was for the blank response instead of the 503 application failed to respond page that should have been shown
I can handle the 503 response better than a blank page, can set my client to retry or something
2 years ago
We now no longer return a 200
2 years ago
We should return a 500 as was the previous behavior
2 years ago
That's correct ye?
```
$ time curl -i https://bun-railway-v2-test-production.up.railway.app
HTTP/2 502
content-type: text/html
server: railway-edge
x-railway-fallback: true
x-request-id: Y6CCMwhzRo-9NqOjSW-vSw_882434190
content-length: 4689
date: Mon, 08 Jul 2024 18:19:26 GMT

... HTML ...
```
2 years ago
What's "Best"
2 years ago
Well yes
2 years ago
Is your app returning the correct response?
2 years ago
No I mean like
2 years ago
What happens is this intermittent is it always
2 years ago
"First request when app sleeping results in 503"
2 years ago
Gotchu
2 years ago
Escalating again
2 years ago
This is only on the V2 runtime ye?
2 years ago
Yep I suspect so too. Bubbled up
@Brody , is this application one of those app sleep healthcheck candidates ?
The fact the legacy proxy would work but new one does not makes me think the healthcheck feature we talked about wouldn't matter.
The new proxy is using the same timeouts as legacy and these edge proxies are not aware of application sleep logic. There must be a timeout setting that is different.
2 years ago
yep I feel like this could benefit from having the health check checked on waking up the service
oh this is saying that this issue with app sleep waking only happens on v2 runtime but works on legacy runtime.
Correct ?
We just happen to be on the new proxy and saw those 200s (sorry about that… will work on ensuring issues like that don't slip through)
I'm continuing to work on this and will share updates here when I have some.
heads up, I saw a few reports of 502s where the affected containers were stopped
unsure what stopped those applications, but pointing out that checking the logs for container exits and restarting the container resolved the issue.
2 years ago
noted, thanks for the heads up
I'm not sure if I'll be digging into this more, since in 2 examples of this the container was stopped.
Next time someone brings this up I'll just ask them to restart their container to see if it comes back
2 years ago
OP did say it was reproducible between multiple deployments
2 years ago
BTW I don't think we have the cycles ATM to repro stuff for people
They'll have to come to us with reproductions that we can have a look at
In production I have an app with app sleep and v2 runtime. On my first request it says the app is unavailable. The container starts though. Second request it works.
So easy to reproduce.
2 years ago
Okay then, give'r
I don't understand this part: "the container was stopped". Do you mean the container was stopped manually? In my case the container stopped because it went to sleep. I didn't do anything to stop the container and my app is a long-running http server, so it doesn't stop on its own. The only reason it stops is the App Sleeping feature.
2 years ago
I can assure you applications can stop on their own
2 years ago
I think like, even if it does stop on its own, we should probably restart it?
2 years ago
Cause like if it crashes and another request or something comes in IDK
2 years ago
if it exits with an error code, yes, if it exits with a success code, maybe not? but yes if restart is set to always
The fact I could reproduce this though means we can disregard what I said about the container being stopped.
I think mentioning it might have been a mistake as I was jumping between a few threads debugging stuff.
I will dig into this more next week.
2 years ago
I have the same problem...
It works well with the legacy runtime but not with the new V2.
The first http(s) request via browser is always a 502; the next ones work normally.
Built with a custom Dockerfile based on alpine+nginx+phpfpm
hey folks, I spent some time on this and basically, the v2 runtime wakes and forwards http requests differently than v1 runtime. I have observed the success rate of starting and getting an HTTP response to be pretty flakey (sometimes it works, sometimes it does not). I believe this is something to do with how fast the container can start in v2 runtime before the request times out.
I can't spend more time on this right now because the number of reports for this has been small and have to prioritize some other issues.
If you need app sleep right now I advise just using the v1 (legacy) runtime.
2 years ago
Thanks for the update, I just want to say that I also faced the same issue with my python instance. Mainly observing that the first request wakes up the instance (but the request does not go through), while any subsequent requests work. Will try v1 for the time being, but it would be nice if this were resolved.
2 years ago
another report of it here -
2 years ago
We’re expressly not going to be able to prioritize this until the new proxy is out unfortunately
Has something changed with this issue? My Legacy apps are starting to show 502 errors after coming back from sleep
Most of my apps also no longer allow me to change between Legacy and V2
a year ago
We have indeed removed the ability for users to switch back to legacy
a year ago
when we move to metal, legacy will not be supported, thus in the interest of moving to metal faster, all deploys for all plan tiers use runtime v2
a year ago
well then you will be pleased to know that the new proxy is indeed fully rolled out and 100% of the nearly half a million domains used on our platform now have their traffic served via the new proxy, thus we should be able to take a look at picking back up the 502 app sleeping issue.
a year ago
i've also bumped the linear ticket on your behalf
a year ago
We will make sure this works reliably within the next 2 weeks
a year ago
!remind me to circle back in 2 weeks
a year ago
Just wanna say that we are actively working on a solution to this!
a year ago
Circling has been done
A fix has been merged and in production now. @pauldps give it a try whenever you can and let me know. Current implementation allows your app to take up to 10 seconds to accept the incoming connection.
a year ago
They do at least have to redeploy for the new changes to take effect right?
oh yes. Please trigger a redeploy. This action applies some settings for the network to be aware of your application has application sleep set.
since this issue impacted applications that started slower than 100 ms, making something that backfilled the applications did not seem worth it, given a redeploy would fix it.
My go application for example never has this issue because it starts up fast enough to accept the connection before the host rejects it thinking there's no app listening.
started slower than 100 ms
this number is a guess. I think it's roughly correct. Might be 30-100ms
So this is what I did:
Changed the app to V2
Triggered a redeploy (also made some code changes etc, it's a GraphQL Yoga API running in Bun)
Service starts fine (health check worked on first try) and runs fine in a browser
Service goes to sleep
I refresh the website
Got the error in the first image
Did I miss anything?


a year ago
request id please
a year ago
ill let mig comment on this
a year ago
though, might be worth trying a newer version of bun, you're on 1.1.18
I'll do that soon, but I don't think it will fix the issue
the app is booting in about ~1s
a year ago
i was able to confirm app sleeping works with a node app that took 8 seconds to start, so this may just be bun being bun
a year ago
what region?
a year ago
same, we'll see if mig wants to work around bun's strange networking issues on monday
I tested it with another Bun app but in a different project and it worked
I think my project/service is borked somehow
This project works: 46548220-e0ba-4a16-b80a-706a55133413
This one does not: 34304961-2ebf-4d0b-b2ae-3585cf6b9353 (service: e2a687a5-9ce2-4694-81ae-12c6756b0bce)
a year ago
maybe try with the same code but in a new project?
for the project that's not working it'll be a bit more difficult since it has other dependencies inside that project that I'd have to deploy too, but I will do it if time permits
a year ago
you could create a template from the project and then create a new project from it
a year ago
its in project settings
If someone gives me the source code for reproducible bug I will check it out!
I copied my services to another project and it seems to be working without issues, no 502s on wakeup
I'll make one last test, as my old service had a volume attached that I wasn't using. I've deleted the volume, redeployed the service, and will wait for it to sleep
just did ☝️ and it errored the same, so it doesn't seem to be volume-related
a year ago
@pauldps - as mig said, we would need a reproducible example in order to look into it
I'm not sure that's reproducible
both projects are running the same code with the same env vars, one works, one does not
a year ago
unfortunately we wont be able to spend time doing that, we would need a reproducible example
you're probably going to get other people with old projects facing the issue but some of them won't be able to do what I did (copy/move everything to a new project)
a year ago
i had a service i deployed 8 months ago, with the changes done it can now properly wake up from sleep
same-ish with my Sinatra project
that's why I think it's a problem with my project specifically
it's not something related to source code
something about my project, infrastructure/configuration-wise, not code-wise, might be causing the issue
a year ago
if you think that is the case you are welcome to try duplicating the gesund-api service
a year ago
yes
also have an update:
the second project's first request failed on wake up, just tried now
a year ago
a year ago
bun too?
a year ago
just since its easy, I'd still recommend trying the latest bun version
a year ago
and if this doesn't work, we would need that MRE
it's going to be very hard for me to have a MRE if my other Bun example (let's call that Project 3) isn't failing
a year ago
project id?
46548220-e0ba-4a16-b80a-706a55133413
this one is just a plain Bun server
it doesn't seem to be affected by the issue
I have other branches where I have other servers like Sinatra for testing purposes
a year ago
also bun 1.1.18
as for that MRE, I mean: how can I make it actually minimal? it's a graphql api with multiple endpoints, and I don't know what is making it fail, if anything.
this testing takes time, I have to wait for the app to sleep and then test the first request. it doesn't seem feasible to make several code changes removing stuff until I find the culprit, if there is one
a year ago
its up to you to remove as much code from the current app while still retaining the issue
a year ago
there is no rush
I'll see what I can do but I'm not happy about having to do this when the server is returning a 502 which seems to be out of my control
a year ago
what your app is doing is out of our control too, and thus we need that MRE to reproduce and patch around it
don't you think if it was an error in my app we'd at least see it in the logs?
a year ago
no i dont think we would see logs for this
a year ago
no
a year ago
we replay the incoming connection for up to 10 seconds
I have a MRE.
Instead of changing my project, I deployed a minimal Graphql-yoga+Bun server to Project 3.
Just tried the first request on sleep and it failed with a 502.
Here's the code: https://github.com/pauldps/bun-railway-v2-test/tree/graphql-yoga
a year ago
so the newer bun version didnt help it seems
yup, the previous version (no graphql) running Bun 1.1.18 also didn't have the issue
a year ago
alright, thank you
Strangely: I deployed the same app on Project 2 (a0aefb5f-15c4-49c6-a7ec-020b58d0cfc5)
same branch and everything. It's working fine!
I've checked it twice now and both requests worked fine. So I don't know what's going on
a year ago
well i have your code deployed so ill let you know if i can reproduce it
I got an error on the app running on Project 2. The fact that it occasionally works seems to be a bit random.
a year ago
I got your bun MRE to cause a 502 on wake
a year ago
so yeah, some strange issue with bun
a year ago
yeah, it's not the first time
since we got a MRE I could try it out tomorrow.
Thanks for the persistence on wanting to get this fixed. Sometimes it's a 50/50 effort for us to help when the issue is rare.
a year ago
here is their MRE repo in template form -
a year ago
yes
👋 Any updates on this? I'm seeing another post reporting similar issues with other languages (https://discord.com/channels/713503345364697088/1313496536650616852). Hopefully they can provide a MRE.
Can confirm the issue is still happening with my Bun apps. I have set up client retry mechanisms to work around it in the meanwhile.
One thing I noticed that makes this difficult to work around is that when the 502 is shown via XHR, it fails CORS (b/c CORS handling is in my app and the request doesn't reach it), so my client-retry mechanism has to be aware of that too
Also this is just a hunch but my app boots nearly instantly. Maybe there's a race condition somewhere where the app boots too fast that's not covered?
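A client-side retry along the lines described above might look like the following. This is a hedged sketch, not the actual workaround used here; the function and parameter names are illustrative, and `fetchImpl` is injectable so the logic can be exercised without a network. Note that the 502 from the edge never reaches the app, so CORS headers are missing and the browser surfaces the failure as a thrown error rather than a response, which is why the catch branch also retries:

```javascript
// Retry a fetch on cold-start 502s (and on CORS/network throws, which is how
// a browser reports the missing-CORS-headers case described above).
async function fetchWithWakeRetry(url, options = {}, retries = 3, delayMs = 1000, fetchImpl = fetch) {
  for (let attempt = 0; ; attempt++) {
    try {
      const res = await fetchImpl(url, options);
      // Any status other than 502 is the app's real answer; pass it through.
      if (res.status !== 502 || attempt >= retries) return res;
    } catch (err) {
      if (attempt >= retries) throw err; // out of retries, surface the failure
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```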
a year ago
mig had to move his attention elsewhere for the time being.
a year ago
@pauldps - we push an additional fix, can you redeploy and then let us know if your app can wake up properly?
Not sure if that is related, but when app sleeping is used with some chunky apps, the first request still always fails
a year ago
yeah that's what we hope we have just fixed, at least as long as the app responds within 10 seconds
a year ago
we talked about it internally and think 10 seconds is plenty for most apps
a year ago
its not like you'll have to run your migration commands as part of the start command for much longer 😉
Image 1:
Service ID: ae4c8ca4-00b8-415b-ac10-4baa1612691f
🔴 Received a 502 on wakeup, 200 afterwards
Image 2 (similar stack, but a MRE):
Service ID: da0153da-2b2d-4ea2-aed8-e6791380c74a
🟢 Received a 200 on wakeup and afterwards


Previously, the MRE was also failing. It's succeeding now, so I think we have some progress
a year ago
failed to forward request to upstream: failed to read from request body
what would that mean? something like the app is up and listening, but the request ultimately failed?
a year ago
something like that, an application (bun) level issue, out of your control though, i wonder if we retry on that error, given that it means the initial connection did work
a year ago
maybe try a request logger on Bun to see what it actually responds?
sounds like there's a race condition somewhere, but it's weird that I don't get any errors in app logs
There's a 9 second gap between Starting Container (Railway) and my shell script (The one starting with [Boot])
Clarification, we pushed a fix today for an edge condition where we didn't detect sleeping apps. We have not changed the logic when a slept app is detected.
I think I can try to reproduce today to see if I can identify the issue. Seems to be something with Bun.
This happens 1s after "Starting Container" but yet before the server is fully running (:36)
I reported that bug in another thread, but I haven't experienced it anymore with my apps after redeploying them.
yeah, we fixed an issue awhile ago but found this week an edge case where the code wouldn't know to apply that fix in some apps. This has been fixed today.
I should be able to reproduce the Bun issue today though. I will look for some Bun example projects to try. IF you have any info about how your project is setup so I can reproduce that 9 second start might help too
the MRE is a stripped-down version of my real app, which connects to a libsql database via a private URL. Adding that to the MRE would be a bit of work on my side that I can't commit at this time.
But if it helps, the MRE is here: https://github.com/pauldps/bun-railway-v2-test/tree/graphql-yoga
It's a Bun + GraphQL Yoga API
ah, some relevant info
Service 1 (consistently failing) is running on Metal
Service 2 (consistently succeeding) is on non-Metal
So the fix might have positively affected non-Metal apps
I'll try moving Service 2 (the MRE) to Metal if I can. (Edit: done, will report later)
I tried reproducing with that repo and the request worked.
it says that the server is running in 1 second after the container starts though
I just tested the MRE again with Metal and it worked
but it did fail before (see timestamps)

the MRE is no longer failing it seems. Failed only once after I moved to Metal 🤷
the other is still failing 100% of the time.
I'm using a browser. I go to the /graphql endpoint and prepare a query. When the app sleeps, I press the button to perform the query
I also deployed an exact copy of the affected service in the same project. Just tried once, and it woke up without errors.
the existing service must have something cached poisoning them or something.
a year ago
okay now switch to node <:kekw:788259314607325204> /j
The MRE and both copies of the affected service failed for me today
Seems like the MRE failure rate is random, it fails once then works for several hours.
The copies are still failing consistently, both in Metal and non-Metal.
you go to your site in browser, get html response, wait for app to sleep, send request via website action, request fails.
Is this correct ?
the response in the image is for a failed request. If I press the button again it returns successfully
Ye, because it takes your app longer than 10 seconds to resume from sleep; because of that, railway gives you that generic "application failed to respond" screen
One way to solve that is to put ur resource intensive app behind a proxy service that would wait longer than those 10s
I've made a change to the affected service, thanks to the preDeployCommand new feature ✨ the app boots even faster now.
I just did this so I'll be testing again over the next hours/days
a year ago
I'm sure you already know, but I just want to avoid any uncertainty, the pre-deploy command isn't run when waking the service
when waking the service the app doesn't need to run any migrations, hence my "boots even faster" comment
Added logs to some strategic places to de facto measure the boot time of my app: ~191ms
```
Connecting to database... 2025-01-11T03:12:01.773Z
Using database: libsql://... 2025-01-11T03:12:01.913Z
Creating yoga instance... 2025-01-11T03:12:01.925Z
Starting server... 2025-01-11T03:12:01.956Z
Server is running on http://localhost:8080/graphql 2025-01-11T03:12:01.964Z
```
a year ago
yolo, add a 2 second sleep haha
That didn't work 😦 (I was actually hopeful…)
```
Server is running on http://localhost:8080/graphql 2025-01-11T03:41:04.501Z
```
It's weird tho. It took longer to return the error (usually it's within ~2s but this one took 4s), so it seems the infra is actually waiting for my server to come up after the 2s sleep

a year ago
was worth a shot
okay, new evidence:
GET requests all work fine waking up the apps and returning correct responses
only POST requests are returning 502s on first request
this is good because I'd imagine the majority of requests waking up apps will be GETs
that said, does the infra code handle non-GETs differently? or is it only running on GETs or something?
a year ago
you really have not been able to get a 502 from waking with a get request?
I've tested a few times today and no 502s so far from GETs
POSTs are still failing tho
a year ago
interesting
I got one 502 today from the MRE, using POST. One out of a dozen.
Been testing that one using POST only, but it works most of the time.
I am wondering if the issue is:
1. your browser creates the connection to our edge network
2. our edge makes a connection to your app and returns the HTML response to you
3. you wait until your app goes to sleep
4. the connection between your app and our proxy is now closed
   a. but the browser connection to our proxy is not, because you remained active on your machine
5. you send POST to your app
6. the proxy has some bug with existing downstream connections and dead upstream connections. (I would be shocked.)
What if the sleep command for your app prevents your application from closing the connection so our edge proxy thinks the connection to your app is actually active still.
You could do your test but wait 15 minutes just to be really safe that the connection between the proxy and your app is closed. Then retry. I assume you try the request as soon as the application goes to sleep.
the error failed to forward request to upstream: failed to read from request body means an issue between our proxy and your app.
This error mentions body which GET requests do not have, just headers.
the issue not occurring with GET requests would explain why I couldn't ever reproduce the issue.
try this test and let me know. (wait longer after your app is slept before sending POST to ensure connection between proxy and your app is closed)
another test is, can you make the POST request without loading the HTML. I want you to go from no connection to the proxy to sending a POST which wakes up the app.
it seems this app sleep issue has just been with Bun projects too which aligns with a runtime issue not closing connections when it gets terminated, thus making the proxy think those connections are still active.
these connections between the proxy and your app have long timeouts to increase connection re-use across requests so if we set some keep alive / idle timeout to 5 seconds that would impact our connection re-use statistics which make for faster latency.
Yesterday I built a script (in Bun… the irony) that would use fetch on the graphQL URL with a POST (similar to how I made requests using the interface)
Note: this is with the MRE.
Most requests worked, but there were some failures.
Would this qualify as "make the POST without loading HTML"?
with the non-MRE service, 100% of requests returned a 502, so I'm not posting it here.
the MRE had a rather varying error rate.
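The probe described above might have looked roughly like this. It is a reconstruction, not the original script; the URL and GraphQL query are placeholders, and `fetchImpl` is injectable so the status-classification logic can be tested without a live server:

```javascript
// Fire one GraphQL POST at the (possibly sleeping) app and report the status.
// Returns the HTTP status code, or 0 for a connection-level failure.
async function probeOnce(url, fetchImpl = fetch) {
  try {
    const res = await fetchImpl(url, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ query: '{ __typename }' }), // placeholder query
    });
    return res.status;
  } catch {
    return 0; // request never produced a response
  }
}
```

Running this in a loop with a pause between attempts, and tallying how many probes return 502 or 0 versus 200, gives the kind of failure rate reported above.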
thanks for doing this test. I think there's some testing I'll be able to perform also to see how the proxy handles upstream connections when the bun app is stopped.
I am now having this issue with a Crystal app that was automatically moved to Metal by Railway.
Problem only started after the move.
First request always 502 from sleeping, 100% of the time.
I've redeployed the service manually but issue persists.
Project ID: 3b1dbdc2-7f93-4dc1-a731-8f076713a724
Service ID: 9c2ae4bb-4e73-41f2-9be3-12d8c1a8a029
Deploy ID: aadcd8a6-bc3e-4340-9616-3c9a12eb286a

This is when the problem started happening. Those requests are all pings from cron-job.org that I set up to keep my app awake during certain hours of the day:

This move was not good. My app connects to a CockroachDB in the same region it was before, and now the first connection is very slow. I can't really move my database to be closer to the Metal instance, so I don't know what to do.
a year ago
the app does wake in under 10 seconds right?
the migration command is separate from the start command, so it should be booting in ~1s

a year ago
Do you have a MRE that could be made into a template?
a year ago
Then an MRE repo will suffice
I think I managed to fix it. The migration command was apparently taking longer than 10s and it was part of the start command
I moved it to a preDeployCommand and the app seems to wake up correctly now
Will test a few more times to confirm
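For reference, the split described above can be expressed in Railway's config-as-code file. This is a sketch assuming a typical setup; the actual migration and start commands for this project weren't shared, so the values below are placeholders:

```json
{
  "deploy": {
    "preDeployCommand": "bun run migrate",
    "startCommand": "bun run start"
  }
}
```

The point is that the migration runs once per deploy, before the new deployment goes live, instead of on every container start, so waking from sleep only pays the app's own boot time.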
a year ago
sounds good