18 days ago
Calls to service hang forever per https://station.railway.com/questions/limited-access-a79785fe
18 Replies
18 days ago
curl -v https://ph9km42x.up.railway.app/health
* Host ph9km42x.up.railway.app:443 was resolved.
* IPv6: (none)
* IPv4: 66.33.22.204
* Trying 66.33.22.204:443...
* Connected to ph9km42x.up.railway.app (66.33.22.204) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=*.up.railway.app
* start date: Feb 4 14:01:41 2026 GMT
* expire date: May 5 14:01:40 2026 GMT
*  subjectAltName: host "ph9km42x.up.railway.app" matched cert's "*.up.railway.app"
* issuer: C=US; O=Let's Encrypt; CN=R13
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://ph9km42x.up.railway.app/health
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: ph9km42x.up.railway.app]
* [HTTP/2] [1] [:path: /health]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
> GET /health HTTP/2
> Host: ph9km42x.up.railway.app
> User-Agent: curl/8.7.1
> Accept: */*
>
* Request completely sent off
< HTTP/2 404
< content-type: application/json
< server: railway-edge
< x-railway-edge: railway/us-east4-eqdc4a
< x-railway-fallback: true
< x-railway-request-id: muOXK4qGQwWdN817g4a9AQ
< content-length: 101
< date: Wed, 18 Feb 2026 03:48:35 GMT
<
* Connection #0 to host ph9km42x.up.railway.app left intact
{"status":"error","code":404,"message":"Application not found","request_id":"muOXK4qGQwWdN817g4a9AQ"}
18 days ago
Sorry, that specific wording ("hostname changed") was the output of an LLM trying to determine what happened. I didn't mean to imply it actually happened, just that the LLM thought that's what happened.
The curl is directly to my service bypassing cloudflare.
{"status":"error","code":404,"message":"Application not found","request_id":"muOXK4qGQwWdN817g4a9AQ"}
18 days ago
Great, I'll let the team take a quick look as it's awfully close to an incident. Could you double-check that your service has that exact domain attached to it and that it has an active deployment?
18 days ago
How would I check? I use a custom domain, but the Cloudflare DNS record points to the target you provided. It worked right up until the incident; that's what I'm going off of. How would I confirm that this is the domain you gave me to point my CNAME entry at?
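One way to check the mapping from the DNS side is to query the record directly. This is a sketch with a hypothetical placeholder domain (`your-domain.com`); substitute your own:

```shell
# Show what your domain's CNAME record actually points at.
# Note: if the record is Cloudflare-proxied ("orange cloud"), public DNS
# returns Cloudflare A records instead of the CNAME, so you may also need
# to check the configured target in the Cloudflare DNS dashboard.
dig +short CNAME your-domain.com
```

The target shown should match the `*.up.railway.app` domain listed on the service's networking settings in Railway.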
18 days ago
Oh, what you shared is the CNAME target? Curling that won't work. You have to use your own domain instead. Railway identifies your service by the domain on the request, not by the CNAME target.
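A way to test that routing-by-domain claim while still bypassing Cloudflare is to pin the custom domain to the Railway edge IP that the earlier curl resolved. Sketch with a placeholder domain (`your-domain.com` is hypothetical; `66.33.22.204` is the edge IP from the curl output above):

```shell
# Send the request to the Railway edge directly, but with the custom
# domain as the SNI/Host, so the edge routes on that hostname.
curl -v --resolve your-domain.com:443:66.33.22.204 https://your-domain.com/health
```

If the custom domain is attached to the service, this should reach it; a 404 with `x-railway-fallback: true` (as in the output above) would mean the edge does not recognize the hostname.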
18 days ago
that fails as well
18 days ago
so I curled the host that Cloudflare is trying to reach
18 days ago
btw this curl did work yesterday
shallow-alchemy
that fails as well
18 days ago
Do you get any sort of logs on http logs when trying to reach it via Cloudflare?
18 days ago
none no
18 days ago
On my end
Seeing ~15s response times on all requests to my deployment, including endpoints that do zero I/O (auth rejection returns 401 but takes 15s). This started ~75 minutes ago per Sentry.
Health endpoint returning 502 through Cloudflare due to timeout, also ~15s when hit directly.
Single uvicorn worker, no recent deploys. Suspect this is related to the current platform incident even though it's listed as builds-only.
Cloudflare:
curl -I https://caucus-ai.com/health
HTTP/2 502
date: Wed, 18 Feb 2026 03:58:16 GMT
content-type: text/plain; charset=UTF-8
content-length: 15
cache-control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
expires: Thu, 01 Jan 1970 00:00:01 GMT
referrer-policy: same-origin
x-frame-options: SAMEORIGIN
server: cloudflare
cf-ray: 9cfa9a87590aba52-SEA
alt-svc: h3=":443"; ma=86400
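The ~15s latency can be broken down with curl's timing variables to see which phase is slow (this is a diagnostic sketch against the domain from the report above):

```shell
# -o /dev/null discards the body; -w prints per-phase timings.
curl -s -o /dev/null \
  -w 'dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  https://caucus-ai.com/health
```

A large gap between `tls` and `ttfb` points at the origin (or the edge's connection to it) rather than DNS or TLS setup, which would be consistent with a platform-side issue.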
jackwelty
18 days ago
Please open your own thread.
Status changed to Awaiting User Response Railway • 18 days ago
18 days ago
Why is my thread set to Awaiting User Response? Awaiting a response to what?
Status changed to Awaiting Railway Response Railway • 18 days ago
18 days ago
I think I'm experiencing this as well. Getting a 502 Bad Gateway from Cloudflare.
Status changed to Awaiting User Response Railway • 18 days ago
18 days ago
For redeploying, you can hit Ctrl + K -> "Deploy latest commit" with the service panel open.
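The same thing can be triggered from a terminal with the Railway CLI (a sketch; the exact subcommand name may vary by CLI version, so check `railway --help` if it differs):

```shell
# Link this directory to your project/service (interactive prompt).
railway link

# Trigger a redeploy of the service's latest deployment.
railway redeploy
```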
Status changed to Awaiting Railway Response Railway • 18 days ago
18 days ago
My service is back. To be clear, I had redeployed twice. Seems back up now.
Status changed to Solved shallow-alchemy • 18 days ago

