Private networking still unreliable for Redis — forced to use TCP proxy for stability
thesobercoder
PRO · OP

a month ago

n8n cannot use private networking for Redis or Postgres — forced to use TCP proxy

Referencing ray-chen's post from 4 months ago about n8n connection issues and private networking fixes.

Setup

- Template: n8n with workers (UI + Worker + Redis + Postgres)

- Region: asia-southeast1

- n8n Version: 2.1.4

- Running stable for: 2 months before incident

What Happened

On Dec 31 2025 at ~3pm IST, all services crashed simultaneously. Logs showed:

```
[Redis client] connect ETIMEDOUT
Unable to connect to Redis after trying to connect for 10s
Exiting process due to Redis connection error
```

Postgres logs:

```
database system was not properly shut down; automatic recovery in progress
```

Resource metrics showed low usage (~100-150MB memory) before crash — not OOM.

Resolution

Switching both Redis and Postgres from private networking to TCP proxy fixed the issue.

Configuration Details

I have all recommended private networking flags enabled:

- ENABLE_ALPINE_PRIVATE_NETWORKING="true"

- N8N_LISTEN_ADDRESS="::"

- QUEUE_BULL_REDIS_DUALSTACK="true"

Working config (TCP proxy):

```
DB_POSTGRESDB_HOST="${{Postgres.RAILWAY_TCP_PROXY_DOMAIN}}"
DB_POSTGRESDB_PORT="${{Postgres.RAILWAY_TCP_PROXY_PORT}}"
QUEUE_BULL_REDIS_HOST="${{Redis.RAILWAY_TCP_PROXY_DOMAIN}}"
QUEUE_BULL_REDIS_PORT="${{Redis.RAILWAY_TCP_PROXY_PORT}}"
```

Failing config (private network):

```
DB_POSTGRESDB_HOST="${{Postgres.RAILWAY_PRIVATE_DOMAIN}}"
DB_POSTGRESDB_PORT="5432"
QUEUE_BULL_REDIS_HOST="${{Redis.RAILWAY_PRIVATE_DOMAIN}}"
QUEUE_BULL_REDIS_PORT="6379"
```

Error: Could not establish database connection within the configured timeout of 20,000 ms

The Confusing Part

I deployed a NocoDB instance in the same Railway project with the same Redis and Postgres services — and private networking works perfectly:

```
NC_DB="pg://${{Postgres.RAILWAY_PRIVATE_DOMAIN}}:${{Postgres.RAILWAY_TCP_APPLICATION_PORT}}?u=${{Postgres.PGUSER}}&p=${{Postgres.PGPASSWORD}}&d=${{Postgres.PGDATABASE}}"
NC_REDIS_URL="${{Redis.REDIS_URL}}?family=6"
```
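The key difference is the `?family=6` suffix: ioredis reads query parameters on a Redis URL as connection options, so `family=6` forces AAAA-only (IPv6) DNS lookups. The same transformation as a tiny sketch (the helper name is mine; NocoDB's config simply hardcodes the suffix):

```javascript
// Illustrative helper (not NocoDB code): force ioredis to IPv6 by setting
// the family=6 query parameter on a Redis connection URL.
// 6 = IPv6-only lookups; 4 (ioredis' default) = IPv4-only.
function forceRedisIpv6(url) {
  const u = new URL(url);
  u.searchParams.set("family", "6");
  return u.toString();
}
```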

Connection test results:

  • n8n → Redis (private): ❌ Fails

  • n8n → Redis (TCP proxy): ✅ Works

  • n8n → Postgres (private): ❌ Fails

  • n8n → Postgres (TCP proxy): ✅ Works

  • NocoDB → Redis (private): ✅ Works

  • NocoDB → Postgres (private): ✅ Works

Known Issue Supposedly Fixed

GitHub Issue #13117 documented this exact problem — ioredis not resolving IPv6 for Railway's private network. It was reportedly fixed in n8n 1.79.0 with the QUEUE_BULL_REDIS_DUALSTACK flag.
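For background, ioredis exposes a `family` connection option: 4 (its default) resolves A records only, 6 resolves AAAA only, and 0 lets Node look up both. A hedged sketch of how a dualstack-style flag plausibly maps onto that option — illustrative names only, not n8n's actual implementation:

```javascript
// Illustrative sketch (not n8n source): mapping a dualstack env flag onto
// ioredis' `family` connection option. ioredis defaults to family: 4
// (IPv4-only DNS lookups), which is exactly what fails on an IPv6-only
// private network; family: 0 asks Node to resolve both A and AAAA records.
function bullRedisOptions(env) {
  return {
    host: env.QUEUE_BULL_REDIS_HOST,
    port: Number(env.QUEUE_BULL_REDIS_PORT),
    family: env.QUEUE_BULL_REDIS_DUALSTACK === "true" ? 0 : 4,
  };
}
```

Even if a flag like this works for the initial lookup, forcing `family: 6` outright (as NocoDB's `?family=6` does) leaves no ambiguity on an IPv6-only network.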

I'm on n8n 2.1.4 — well past the fix. It still doesn't work.

The n8n template with internal Redis states:

> ⚠️ IMPORTANT: If your deployment fails, make sure you've opted out of IPv4 Private Networks in your Railway Account Settings → Feature Flags.

This feature flag does not exist in my Railway account settings.

Documentation vs Reality

Railway docs state:

- TCP Proxy is for "access from outside the private network"

- Private networking provides "faster communication and increased throughput"

- Using private network avoids "service-to-service egress costs"

Yet I'm forced to use TCP proxy for internal n8n ↔ Redis/Postgres communication — paying egress for what should be free internal traffic.

Questions

1. Why does NocoDB work on private networking but n8n doesn't, on the same Railway setup?

2. Is there a known issue with n8n's QUEUE_BULL_REDIS_DUALSTACK implementation?

3. The IPv4 Private Networks feature flag mentioned in templates doesn't exist — is this deprecated?

4. Is private networking in asia-southeast1 known to be less stable?

5. Was there an incident on Dec 31 not reflected on the status page?

Environment Info

n8nVersion: 2.1.4

platform: docker (self-hosted)

nodeJsVersion: 22.21.1

database: postgres

executionMode: scaling (single-main)

Happy to provide any additional logs or configuration details.

Solved · $10 Bounty

Pinned Solution

ilyassbreth
FREE

a month ago

this isn’t a railway outage or region issue. it’s ipv6.

railway private networking is ipv6-first. nocodb works because it forces ipv6 (?family=6). n8n still uses ioredis/bull which often falls back to ipv4, and QUEUE_BULL_REDIS_DUALSTACK=true is unfortunately unreliable in real deployments even on 2.x.

tcp proxy works because it’s ipv4 → that’s why switching fixed everything instantly.

the “ipv4 private networks” feature flag is deprecated and no longer exists (ipv4 is GA now); the template warning is outdated.

asia-southeast1 isn’t special; this happens in other regions too.

tl;dr:

  • nocodb forces ipv6 → works

  • n8n’s redis/postgres clients don’t consistently use ipv6 → fails

  • tcp proxy is the only stable workaround today

not your config, not your fault 👍

5 Replies

brody
EMPLOYEE

a month ago

This thread has been marked as public for community involvement, as it does not contain any sensitive or personal information. Any further activity in this thread will be visible to everyone.

Status changed to Open brody about 1 month ago





thesobercoder
PRO · OP

a month ago

Thanks for the explanation



ray-chen
EMPLOYEE

a month ago

> QUEUE_BULL_REDIS_DUALSTACK=true is unfortunately unreliable in real deployments even on 2.x.

Is this a known issue in n8n? AFAICT, there's nothing on our end that would cause this behaviour


Status changed to Awaiting User Response Railway about 1 month ago



ilyassbreth
FREE

a month ago

yeah, it’s a known but underdocumented n8n issue

there’s no misconfig on your side. the problem is that QUEUE_BULL_REDIS_DUALSTACK only affects ioredis DNS resolution, but in real deployments there are still edge cases where node/bull ends up preferring ipv4 sockets or failing during reconnects on ipv6-only networks (railway private net).


Status changed to Awaiting Railway Response Railway about 1 month ago


Status changed to Solved brody about 1 month ago



thesobercoder
PRO · OP

a month ago

I have raised a GitHub issue with n8n

https://github.com/n8n-io/n8n/issues/23787


Status changed to Awaiting Railway Response Railway about 1 month ago


Status changed to Solved brody about 1 month ago

