High RAM usage in PostgreSQL
pazernykormoran
PRO · OP

a month ago

Hi. I have noticed that Postgres consumes a lot of RAM all the time. It can consume even a few gigabytes of RAM if I give it that much.

With 8 GB available:


When I decreased the instance's max RAM to 3 GB, everything still works fine, but Postgres consumes it all as cache:

After logging in through SSH, I see that PostgreSQL itself consumes only a few hundred MB of RAM. The rest is filesystem cache.

Can you describe what I am being billed for? Is it possible that I am being billed for filesystem cache?

How can I optimise it?

$30 Bounty

10 Replies

darseen
HOBBY · Top 5% Contributor

a month ago

If you have a large value for shared_buffers, this can happen. You can use a tool like PGTune to help you set the optimal settings.
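For reference, the current value can be checked from psql, and a PGTune-suggested value applied with `ALTER SYSTEM` (a sketch; the 256MB figure below is only a placeholder, not a recommendation for any particular workload):

```sql
-- Check the current setting
SHOW shared_buffers;

-- Apply a tuned value; shared_buffers only takes effect after a restart
ALTER SYSTEM SET shared_buffers = '256MB';
```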


darseen

If you have a large value for shared_buffers, this can happen. You can use a tool like PGTune to help you set the optimal settings.

pazernykormoran
PRO · OP

a month ago

shared_buffers is set to 128 MB.


kwozniakk
PRO

a month ago

Hello, any updates here? I have a similar problem.


dharmateja

Regarding billing:

Railway billing based on the memory what your app/service uses


pazernykormoran
PRO · OP

a month ago

If it is calculated by allocation, why is my Postgres generating 10x higher memory costs than other services deployed in containers with the same RAM allocation?


nearlabdotfun
HOBBY

a month ago

To lower your bill, you must tell Postgres and the OS to be less "greedy." You can do this by adjusting the internal memory buffers.

A. Adjust shared_buffers

This is the most important setting. It defines how much dedicated RAM Postgres reserves for itself.

  • Default: Usually very low (128MB).

  • Recommendation: Set this to 25% of your desired RAM limit. If you want a 2GB footprint, set this to 512MB.

B. Adjust effective_cache_size

This doesn't allocate memory; it tells the Postgres Query Planner how much total RAM (including OS cache) is available.

  • Optimization: If you want to force Postgres to operate in a smaller footprint, set this to 50-75% of your hard limit.

C. Control Connection Overhead

Every connection to Postgres consumes roughly 2-10MB of RAM.

  • Action: Reduce max_connections (e.g., from 100 down to 20) or use a connection pooler like PgBouncer. This prevents "Memory Bloat" from idle connections.
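Taken together, the three suggestions above would look something like this in postgresql.conf (example values for a hypothetical ~2 GB target, illustrative only, not tuned recommendations):

```ini
# Example postgresql.conf values for a ~2 GB memory target (illustrative only)
shared_buffers = 512MB          # ~25% of the target; requires a restart
effective_cache_size = 1536MB   # planner hint only; allocates no memory
max_connections = 20            # each connection uses roughly 2-10 MB
```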


pazernykormoran
PRO · OP

18 days ago

Thanks everyone for the responses. Let me address each suggestion and then share what I actually found.

@darseen — shared_buffers is already at 128MB (confirmed via SHOW shared_buffers), so that's not the cause.

@nearlabdotfun — I appreciate the detailed breakdown, but these settings won't help here. Increasing shared_buffers to 512MB would actually increase RAM usage, not decrease it. effective_cache_size allocates zero memory — it's purely a query planner hint.

@dharmateja — You said "Railway billing based on the memory what your app/service uses" — I'd like to challenge this with actual data from inside my container:

Total (what Railway bills): 3.88 GB

─────────────────────────────────────

anon (processes): 59 MB

shmem (shared_buffers): 143 MB

kernel: 47 MB

file cache (OS page cache): 3863 MB ← 94% of the bill

─────────────────────────────────────

Actual PostgreSQL footprint: ~250 MB

94% of what Railway bills is Linux OS page cache — memory the kernel uses to cache disk reads. This is completely normal Linux behavior, but it is freeable on demand and should not be billed as application memory. I also confirmed /proc/sys/vm/drop_caches is read-only in the container, so I can't drop the cache manually.
Note: @kwozniakk reported the same issue — so this affects multiple users.

Question for Railway team / admins:
Is Railway reading memory.current from cgroup for billing? If so, that includes OS page cache which inflates the number significantly. The correct metric for actual application memory would be memory.anon. This doesn't seem solvable with PostgreSQL configuration. It needs either:

1. A platform-level fix to exclude page cache from billing

2. The ability to set a hard memory limit on the container (so the kernel evicts cache under pressure)

3. Clarification on exactly how memory billing is calculated

I'm happy to share more diagnostic data if it helps. This seems like something a Railway admin needs to confirm.
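For anyone who wants to reproduce a breakdown like the one above, here is a minimal sketch that splits anon, shmem, file-cache, and kernel memory out of a cgroup v2 `memory.stat` dump (inside a container that file typically lives at `/sys/fs/cgroup/memory.stat`; the sample values below are made up to roughly match the figures in this post):

```python
def memory_breakdown(stat_text: str) -> dict:
    """Parse cgroup v2 memory.stat text into a per-category MB breakdown.

    Keeps only the top-level fields relevant here: 'anon' (process
    heap/stack), 'shmem' (shared memory, e.g. shared_buffers), 'file'
    (OS page cache), and 'kernel' (kernel data structures).
    """
    fields = {}
    for line in stat_text.splitlines():
        key, _, value = line.partition(" ")
        if value.strip().isdigit():
            fields[key] = int(value)  # memory.stat values are in bytes
    to_mb = lambda b: b // (1024 * 1024)
    return {k: to_mb(fields.get(k, 0)) for k in ("anon", "shmem", "file", "kernel")}

# Sample dump with made-up byte counts close to the numbers above:
sample = "anon 61865984\nshmem 149946368\nfile 4051173376\nkernel 49283072\nsock 0"
print(memory_breakdown(sample))
# → {'anon': 59, 'shmem': 143, 'file': 3863, 'kernel': 47}
```

Reading the real file instead of the sample is a one-liner (`open("/sys/fs/cgroup/memory.stat").read()`); if `file` dwarfs `anon` + `shmem`, the bill is dominated by page cache rather than by PostgreSQL itself.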



fantaztig
PRO

18 days ago

IMHO this is working as expected. The Linux FS cache helps the DB fetch data faster in this case, so it's fair to be billed for it.

If you don't want to get billed for the cache you can reduce the memory limit to what you expect the DB to use + some headroom.


pazernykormoran
PRO · OP

17 days ago

Fair to be billed 94% more for faster queries?

I can't reduce the limit much, because I need headroom for handling bigger queries from time to time.


kwozniakk
PRO

12 days ago

Any fix for that?

