pgvector service — dashboard shows Volume Size 250 GB, but mounted filesystem is only 4.6 GB
elmasri-fathallah

a day ago

Project ID: cfa70aaf-ab41-4cb1-86a1-bab0a7f62c07

Service: pgvector (43629917-2dcb-47d4-916a-fa74dec28634)

Volume: pgvector-volume (7f23aec6-16cb-42b2-982f-f7831bf95c5a)

Issue: Postgres has been crashing in an infinite ENOSPC PANIC loop for several hours. The dashboard volume page reports Volume Size: 250 GB with ~5 GB usage. The Live Resize dialog also lists 500 / 750 / 1000 GB as the only resize options and is gated behind ">=185 GB used", which matches the dashboard view.

But inside the running container, df -h reports that the volume's actual filesystem is only 4.6 GB and 100% full:

/dev/zd2512    4.6G    4.5G    0    100%    /var/lib/postgresql

The data directory $PGDATA = /var/lib/postgresql/data/pgdata is consuming 4.5 GB, which is essentially the entire filesystem.
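For reference, this is roughly how I confirmed the mismatch from inside the container. The snippet below is a generic sketch: the mountpoint defaults to / so it runs anywhere, but on the affected service it would be /var/lib/postgresql, and the backing device resolves to /dev/zd2512 there.

```shell
# Mountpoint to inspect; on the pgvector service this would be
# /var/lib/postgresql (defaults to / so the snippet runs anywhere).
MOUNTPOINT="${MOUNTPOINT:-/}"

# Filesystem size in bytes as the kernel reports it (what df sees).
FS_BYTES=$(df -B1 --output=size "$MOUNTPOINT" | tail -1 | tr -d ' ')

# Backing block device (or pseudo-filesystem source) for that mountpoint.
DEVICE=$(findmnt -n -o SOURCE --target "$MOUNTPOINT")

echo "mountpoint=$MOUNTPOINT device=$DEVICE fs_bytes=$FS_BYTES"
```

On the broken volume this prints a filesystem size of roughly 4.6 GB, nowhere near the 250 GB the dashboard claims.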

Questions:

  1. Why does the dashboard show 250 GB while the kernel sees 4.6 GB? Is the volume thin-provisioned with a smaller initial allocation, or is the 250 GB just a maximum cap?

  2. Can you grow the actual filesystem to match the configured 250 GB (or some larger tier) without us needing to first reach the >=185 GB usage threshold? Postgres can't run, so we can't increase usage organically to qualify for the next tier.

  3. Is there an operator-side path to expand the filesystem now while the database is offline?
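For context on question 3: even if the block device itself is grown on your side, my understanding is the filesystem still needs an explicit grow step. A sketch of what I'd expect that to involve, using the device and mountpoint from the df output above. Which grow command applies depends on the filesystem type, so this only detects the type and prints the command rather than running anything destructive; the ext4 fallback is an assumption for environments where the mount doesn't exist.

```shell
# Device and mountpoint from the df output in the report above.
DEVICE="${DEVICE:-/dev/zd2512}"
MOUNTPOINT="${MOUNTPOINT:-/var/lib/postgresql}"

# Detect the filesystem type mounted at MOUNTPOINT; fall back to ext4
# (an assumption) if that mount doesn't exist in this environment.
FSTYPE=$(findmnt -n -o FSTYPE "$MOUNTPOINT" 2>/dev/null || echo ext4)

case "$FSTYPE" in
  ext4) CMD="resize2fs $DEVICE" ;;       # ext4 grows online via resize2fs
  xfs)  CMD="xfs_growfs $MOUNTPOINT" ;;  # XFS grows via its mountpoint
  *)    CMD="" ;;                        # unknown type: don't guess
esac

echo "fstype=$FSTYPE grow_command=${CMD:-unknown}"
```

Both resize2fs and xfs_growfs can grow a mounted filesystem in place, which matters here since we may not be able to unmount cleanly.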

Production is down. Any priority help appreciated.

Awaiting Railway Response

1 Reply

Status changed to Awaiting Railway Response (Railway, 1 day ago)


elmasri-fathallah

2 hours ago

While you're investigating, can you take a fresh volume snapshot of the CURRENT pgvector-volume (the broken one) before any resize or restore action? I want to preserve a recoverable copy of the latest state in case I need to fall back to the March 8 backup as an interim measure.

