2 months ago
n8n cannot use private networking for Redis or Postgres — forced to use TCP proxy
Referencing ray-chen's post from 4 months ago about n8n connection issues and private networking fixes.
Setup
- Template: n8n with workers (UI + Worker + Redis + Postgres)
- Region: asia-southeast1
- n8n Version: 2.1.4
- Running stable for: 2 months before incident
What Happened
On Dec 31 2025 at ~3pm IST, all services crashed simultaneously. Logs showed:
```
[Redis client] connect ETIMEDOUT
Unable to connect to Redis after trying to connect for 10s
Exiting process due to Redis connection error
```
Postgres logs:
```
database system was not properly shut down; automatic recovery in progress
```
Resource metrics showed low usage (~100-150MB memory) before crash — not OOM.
Resolution
Switching both Redis and Postgres from private networking to TCP proxy fixed the issue.
Configuration Details
I have all recommended private networking flags enabled:
- ENABLE_ALPINE_PRIVATE_NETWORKING="true"
- N8N_LISTEN_ADDRESS="::"
- QUEUE_BULL_REDIS_DUALSTACK="true"
Working config (TCP proxy):
DB_POSTGRESDB_HOST="${{Postgres.RAILWAY_TCP_PROXY_DOMAIN}}"
DB_POSTGRESDB_PORT="${{Postgres.RAILWAY_TCP_PROXY_PORT}}"
QUEUE_BULL_REDIS_HOST="${{Redis.RAILWAY_TCP_PROXY_DOMAIN}}"
QUEUE_BULL_REDIS_PORT="${{Redis.RAILWAY_TCP_PROXY_PORT}}"
Failing config (private network):
DB_POSTGRESDB_HOST="${{Postgres.RAILWAY_PRIVATE_DOMAIN}}"
DB_POSTGRESDB_PORT="5432"
QUEUE_BULL_REDIS_HOST="${{Redis.RAILWAY_PRIVATE_DOMAIN}}"
QUEUE_BULL_REDIS_PORT="6379"
Error: Could not establish database connection within the configured timeout of 20,000 ms
The Confusing Part
I deployed a NocoDB instance in the same Railway project with the same Redis and Postgres services — and private networking works perfectly:
NC_DB="pg://${{Postgres.RAILWAY_PRIVATE_DOMAIN}}:${{Postgres.RAILWAY_TCP_APPLICATION_PORT}}?u=${{Postgres.PGUSER}}&p=${{Postgres.PGPASSWORD}}&d=${{Postgres.PGDATABASE}}"
NC_REDIS_URL="${{Redis.REDIS_URL}}?family=6"
Connection test results:
- n8n → Redis (private): Fails
- n8n → Redis (TCP proxy): Works
- n8n → Postgres (private): Fails
- n8n → Postgres (TCP proxy): Works
- NocoDB → Redis (private): Works
- NocoDB → Postgres (private): Works
Known Issue Supposedly Fixed
GitHub Issue #13117 documented this exact problem — ioredis not resolving IPv6 for Railway's private network. It was reportedly fixed in n8n 1.79.0 with the QUEUE_BULL_REDIS_DUALSTACK flag.
I'm on n8n 2.1.4 — well past the fix. It still doesn't work.
The n8n template with internal Redis states:
> IMPORTANT: If your deployment fails, make sure you've opted out of IPv4 Private Networks in your Railway Account Settings → Feature Flags.
This feature flag does not exist in my Railway account settings.
Documentation vs Reality
Railway docs state:
- TCP Proxy is for "access from outside the private network"
- Private networking provides "faster communication and increased throughput"
- Using private network avoids "service-to-service egress costs"
Yet I'm forced to use the TCP proxy for internal n8n ↔ Redis/Postgres communication, paying egress for what should be free internal traffic.
Questions
1. Why does NocoDB work on private networking but n8n doesn't, on the same Railway setup?
2. Is there a known issue with n8n's QUEUE_BULL_REDIS_DUALSTACK implementation?
3. The IPv4 Private Networks feature flag mentioned in templates doesn't exist — is this deprecated?
4. Is private networking in asia-southeast1 known to be less stable?
5. Was there an incident on Dec 31 not reflected on the status page?
Environment Info
n8nVersion: 2.1.4
platform: docker (self-hosted)
nodeJsVersion: 22.21.1
database: postgres
executionMode: scaling (single-main)
Happy to provide any additional logs or configuration details.
Pinned Solution
2 months ago
this isn’t a railway outage or region issue. it’s ipv6.
railway private networking is ipv6-first. nocodb works because it forces ipv6 (?family=6). n8n still uses ioredis/bull which often falls back to ipv4, and QUEUE_BULL_REDIS_DUALSTACK=true is unfortunately unreliable in real deployments even on 2.x.
tcp proxy works because it’s ipv4 → that’s why switching fixed everything instantly.
the “ipv4 private networks” feature flag is deprecated and no longer exists (ipv4 is GA now), the template warning is outdated.
asia-southeast1 isn’t special; this happens in other regions too.
tl;dr:
nocodb forces ipv6 → works
n8n redis/postgres clients don’t consistently → fails
tcp proxy is the only stable workaround today
not your config, not your fault 
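For reference, the equivalent of NocoDB's `?family=6` in raw ioredis connection options is the documented `family` field. n8n builds its own Redis client internally, so this is only a sketch of what the fix would look like if it were exposed; the hostname is a placeholder:

```javascript
// Sketch of the ioredis options that force IPv6 resolution on an
// IPv6-only private network. `family: 6` makes ioredis ask for AAAA
// records instead of its IPv4 default (family: 4).
const redisOptions = {
  host: 'redis.railway.internal', // placeholder private-network hostname
  port: 6379,
  family: 6, // same effect as NocoDB's ?family=6 query parameter
};

// With ioredis installed, this would become: new Redis(redisOptions)
console.log(JSON.stringify(redisOptions));
```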
5 Replies
2 months ago
This thread has been marked as public for community involvement, as it does not contain any sensitive or personal information. Any further activity in this thread will be visible to everyone.
Status changed to Open brody • 2 months ago
2 months ago
this isn’t a railway outage or region issue. it’s ipv6.
railway private networking is ipv6-first. nocodb works because it forces ipv6 (?family=6). n8n still uses ioredis/bull which often falls back to ipv4, and QUEUE_BULL_REDIS_DUALSTACK=true is unfortunately unreliable in real deployments even on 2.x.
tcp proxy works because it’s ipv4 → that’s why switching fixed everything instantly.
the “ipv4 private networks” feature flag is deprecated and no longer exists (ipv4 is GA now), the template warning is outdated.
asia-southeast1 isn’t special; this happens in other regions too.
tl;dr:
nocodb forces ipv6 → works
n8n redis/postgres clients don’t consistently → fails
tcp proxy is the only stable workaround today
not your config, not your fault 
ilyassbreth
> this isn’t a railway outage or region issue. it’s ipv6. […]
2 months ago
Thanks for the explanation
ilyassbreth
> this isn’t a railway outage or region issue. it’s ipv6. […]
2 months ago
> QUEUE_BULL_REDIS_DUALSTACK=true is unfortunately unreliable in real deployments even on 2.x.

Is this a known issue in n8n? AFAICT, there's nothing on our end that would cause this behaviour.
Status changed to Awaiting User Response Railway • 2 months ago
ray-chen
> Is this a known issue in n8n? AFAICT, there's nothing on our end that would cause this behaviour. […]
2 months ago
yeah, it’s a known but underdocumented n8n issue
there’s no misconfig on your side. the problem is that QUEUE_BULL_REDIS_DUALSTACK only affects ioredis DNS resolution, but in real deployments there are still edge cases where node/bull ends up preferring ipv4 sockets or failing during reconnects on ipv6-only networks (railway private net).
Status changed to Awaiting Railway Response Railway • 2 months ago
Status changed to Solved brody • 2 months ago
ilyassbreth
> yeah, it’s a known but underdocumented n8n issue […]
2 months ago
I have raised a GitHub issue with n8n
Status changed to Awaiting Railway Response Railway • 2 months ago
Status changed to Solved brody • 2 months ago
