Cloudflare not caching responses w/ cache-control
kylekz
PROOP

4 months ago

Almost done with a migration from Vercel but I'm struggling to get cache-control headers working for caching API endpoints.

On Vercel, I have a two-tiered cache:

  • Cache-Control: 30s -> cache in the browser for 30 seconds

  • CDN-Cache-Control: $value -> cache in the CDN for $value seconds

At some point I had this as a three-tier cache with Cloudflare in front, which worked fine.

Now, I have a TanStack Start app deployed as a Node server listening on port 8080 (which seems to be the Vite default?). I've pointed my custom domain at it and selected port 8080, and that all works great. The problem is that Cloudflare refuses to cache my API endpoints, always returning cf-cache-status DYNAMIC despite the cache-control headers being returned:

return Response.json(somePayload, {
  headers: {
    "Cache-Control": "public, max-age=30",
    "CDN-Cache-Control": `public, max-age=2592000, stale-while-revalidate=30`
  }
});
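For reference, the two tiers above can be factored into a small helper so the browser and CDN TTLs stay in one place (a sketch only; the function name and defaults are mine, and the values mirror the snippet above):

```typescript
// Sketch: build tiered cache headers in one place.
// Browser tier stays short so clients pick up changes quickly;
// CDN tier is long, with stale-while-revalidate for background refreshes.
function tieredCacheHeaders(
  browserTtl: number,
  cdnTtl: number,
  swr: number = 30,
): Record<string, string> {
  return {
    "Cache-Control": `public, max-age=${browserTtl}`,
    "CDN-Cache-Control": `public, max-age=${cdnTtl}, stale-while-revalidate=${swr}`,
  };
}

// Usage:
// return Response.json(somePayload, { headers: tieredCacheHeaders(30, 2592000) });
```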

The response headers in the browser do include these:

cache-control: public, max-age=30
cdn-cache-control: public, max-age=2592000, stale-while-revalidate=30
cf-cache-status: DYNAMIC
content-type: application/json
x-railway-edge: railway/asia-southeast1-eqsg3a
x-railway-request-id: yeQv3vmSTSGRSJK0AQeqjw

I'm just stumped as to why Cloudflare has decided this endpoint is uncacheable. Does the Railway edge network interfere with anything here?
project ID: bc0dbee7-3f1f-41f9-b5b1-b639fa883ca9

Solved · $20 Bounty

10 Replies

kylekz
PROOP

4 months ago

i guess this reply got deleted from the thread as i can't see it anymore, but the cache rules work. it's annoying since this used to work a few months ago on vercel/nextjs without them. kinda weird


4 months ago

I'm pretty sure that Railway isn't at fault here, since the cdn-cache-control header is returned correctly, as you showed. I recall Cloudflare has some settings on their dashboard that might override yours?

Also, if you just need this to work, I remember using their dashboard a while ago to set up caching for a documentation website I had.


4 months ago

Just a theory, but perhaps try passing the stale-while-revalidate directive directly in Cache-Control to see if it works? I'm not too familiar with CDN-Cache-Control, but I remember having no problems using SWR on Cloudflare with the plain Cache-Control header.
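For what it's worth, that variant would look something like this (a sketch only; the max-age and SWR values are just examples, and note this applies the directive to browsers as well, not just the CDN):

```typescript
// Sketch: fold stale-while-revalidate into the plain Cache-Control header
// instead of using CDN-Cache-Control.
function swrHeaders(maxAge: number, swr: number): Record<string, string> {
  return {
    "Cache-Control": `public, max-age=${maxAge}, stale-while-revalidate=${swr}`,
  };
}

// Usage:
// return Response.json(somePayload, { headers: swrHeaders(30, 30) });
```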


kylekz
PROOP

4 months ago

the idea is to have a tiered cache: Cache-Control controls the browser, CDN-Cache-Control controls any CDN (in this case cloudflare).
i used to have a three-tier cache where the browser would cache for 30s, vercel for a few hours, and cloudflare for days to weeks. the reasoning was that cloudflare would reduce outbound vercel bandwidth, and vercel's caches are easier to bust

i have it all working now with the cache rules. it just threw me off because this all worked fine before i turned cloudflare off on my vercel app.


kylekz
PROOP

4 months ago

next issue to solve is high response times. the app is hosted in us-west and i'm in new zealand, which means i should be seeing 150-210ms, but instead i'm seeing:

  • 400ms from NZ through the metal edge domain, bypassing cloudflare

  • 600-1200ms from NZ when hitting a cloudflare cache miss

  • 100ms from a friend in california when going through cloudflare

  • 50ms from a digitalocean server in california through the metal edge domain

  • 100ms from a digitalocean server in california when hitting a cloudflare cache miss

the california times i can live with; i just don't feel great about my own response times being so bad, since it also affects the root document and makes the whole app feel slow. if i could disable the railway edge proxy, that would likely help, as for me it routes through singapore first instead of going directly from NZ to california


4 months ago

That's definitely weird. Are you familiar with tools such as traceroute? If you're able to share one, it would help us.


kylekz
PROOP

4 months ago

yeah sure. to the cf domain it terminates in 30ms, so that's clearly just ending at an NZ PoP.
to the railway-provided domain:

  1     2 ms     1 ms     1 ms  192.168.1.254
  2     5 ms     5 ms     5 ms  222-152-80-1-fibre.sparkbb.co.nz [222.152.80.1]
  3     *       29 ms    29 ms  122.56.119.217
  4    29 ms    29 ms    29 ms  122.56.119.216
  5    54 ms    55 ms    69 ms  et5-0-2.sgbr3.global-gateway.net.nz [122.56.119.26]
  6    52 ms    55 ms    53 ms  103.13.80.125
  7    55 ms    63 ms    53 ms  ae-3.r21.sydnau06.au.bb.gin.ntt.net [129.250.3.158]
  8     *      149 ms     *     ae-10.r24.sngpsi07.sg.bb.gin.ntt.net [129.250.6.149]
  9   147 ms   145 ms   148 ms  ae-4.a02.sngpsi07.sg.bb.gin.ntt.net [129.250.6.63]
 10   145 ms   145 ms   145 ms  ce-2-0-0.a02.sngpsi07.sg.ce.gin.ntt.net [116.51.16.19]
 11   145 ms   145 ms   145 ms  66.33.22.111

so that's 145ms NZ->SG, plus roughly 165ms SG->LAX, plus a roughly 50-75ms render time for the root doc on top. it all checks out
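the back-of-the-envelope math, for anyone following along (numbers come from the traces above; the render figure is my estimate):

```typescript
// Rough RTT budget for NZ -> SG -> US-West through the edge proxy.
const nzToSg = 145;     // ms, from the traceroute above
const sgToUsWest = 165; // ms, rough transpacific leg
const renderTime = 65;  // ms, midpoint of the 50-75ms render estimate
const total = nzToSg + sgToUsWest + renderTime;
// total lands around 375ms, in line with the ~400ms observed
// on the metal edge domain from NZ
```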

when i traceroute to my own digitalocean server in california:

  1     2 ms     2 ms     2 ms  192.168.1.254
  2     6 ms     6 ms     5 ms  222-152-80-1-fibre.sparkbb.co.nz [222.152.80.1]
  3     *        *        *     Request timed out.
  4    30 ms    29 ms    30 ms  122.56.119.216
  5    31 ms    29 ms    30 ms  ae10-10.tkbr12.global-gateway.net.nz [202.50.232.29]
  6   155 ms   156 ms   155 ms  ae4-10.lebr7.global-gateway.net.nz [122.56.127.78]
  7   166 ms   165 ms   165 ms  ae0-10.lebr8.global-gateway.net.nz [202.50.232.42]
  8     *        *      159 ms  lag-14.ear2.lax1.sp.lumen.tech [4.68.37.89]
  9   161 ms   161 ms     *     ae1.3505.edge9.sanjose1.net.lumen.tech [4.69.219.61]
 10   158 ms   161 ms   159 ms  4.7.18.10
 11     *        *        *     Request timed out.
 12     *        *        *     Request timed out.
 13     *        *        *     Request timed out.
 14   163 ms   162 ms   162 ms  

4 months ago

I'll see if the team can do something about your route, but no promises


4 months ago

!t


4 months ago

This thread has been escalated to the Railway team.

Status changed to Awaiting Railway Response passos 4 months ago


4 months ago

Hey, can you please open a new issue for the routing?


Status changed to Awaiting User Response Railway 4 months ago


Status changed to Solved brody 4 months ago

