3 months ago
Is there a way to clean up memory in Postgres without restarting it?
My use case is a really bursty workload: on a timed schedule I run millions of inserts plus some aggregations, which uses up to 5GB of memory. Once the jobs complete, memory stays pinned at 5GB used. If I restart Postgres it drops back to 20MB used.
I tried setting MALLOC_ARENA_MAX=2 and MALLOC_TRIM_THRESHOLD_=100000 but I don't think it did much.
I make available the default 32 vCPU and 32GB RAM. I could shrink that down, but Postgres would probably still hang on to whatever I give it. Also, I'd rather have the best performance possible and not risk OOM, since the amount of data in this workload will grow over time and I'd have to keep re-tuning it.
7 Replies
3 months ago
Hey there! We've found the following might help you get unblocked faster:
🧵 Workflow Not Executing – Worker Not Processing Jobs + MaxListenersExceededWarning
🧵 n8n service consuming excessive memory and unable to restart - Postgres in recovery mode
If you find the answer from one of these, please let us know by solving the thread!
3 months ago
I'm not sure there's a way to clean up memory with what you're describing. You can try to reduce how much accumulates by lowering work_mem per connection, using temp tables, and making sure autovacuum is running.
Your available 32 vCPU and memory is just capacity for your service to use; you aren't billed for the full allocation. You're billed for real usage, not the total provisioned quota, so having more headroom simply lets the service scale up when it needs to.
My thought is you could probably automate this through Railway's GraphQL API (https://docs.railway.com/guides/manage-deployments), which might save you some time. You'd need:
projectId
environmentId
serviceId (the Postgres service)
to be able to automate the process. A simple Node script can trigger it right after your insert workload finishes.
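A minimal sketch of that Node script, under stated assumptions: the public API endpoint (`https://backboard.railway.com/graphql/v2`) and the `serviceInstanceRedeploy` mutation name are taken from Railway's docs at the time of writing and should be verified against the current API reference; `token`, `environmentId`, and `serviceId` are placeholders you supply.

```javascript
// Hypothetical sketch: redeploy the Postgres service via Railway's public
// GraphQL API right after the batch workload finishes, which releases the
// accumulated memory the same way a manual restart does.
// Assumptions: endpoint URL and mutation name below match the current API.
const RAILWAY_API = "https://backboard.railway.com/graphql/v2";

// Build the GraphQL request body (kept pure so it's easy to test).
function buildRedeployRequest(environmentId, serviceId) {
  return {
    query: `mutation Redeploy($environmentId: String!, $serviceId: String!) {
      serviceInstanceRedeploy(environmentId: $environmentId, serviceId: $serviceId)
    }`,
    variables: { environmentId, serviceId },
  };
}

// Fire the mutation with a Railway API token (Node 18+ has global fetch).
async function redeployPostgres(token, environmentId, serviceId) {
  const res = await fetch(RAILWAY_API, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(buildRedeployRequest(environmentId, serviceId)),
  });
  if (!res.ok) throw new Error(`Railway API error: ${res.status}`);
  return res.json();
}
```

You'd call `redeployPostgres(...)` as the last step of the scheduled job, after the aggregations commit.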
3 months ago
I think it could be an OS level "issue" with glibc or whatever it's using where it has no reason to eagerly free up the memory since it thinks it has 32GB available. I was hoping to tune that somehow to free it. I realize I'm not billed for available memory, but I would be billed for 5GB that really is idle/not needed anymore. I'll keep investigating.
3 months ago
I wanted to try jemalloc but it looks like that's not installed on the server.
3 months ago
For now I'm going to reduce the available memory for my Postgres. I think it's the only option, since the other approaches would require OS-level configs that we can't apply on a shared host.
armsgora
For now I'm going to reduce the available memory for my Postgres. I think it's the only option, since the other approaches would require OS-level configs that we can't apply on a shared host.
3 months ago
I believe if you reduce the available memory for your Postgres, it'll OOM since it won't have enough resources.
3 months ago
No, it's working fine with 2GB set rather than 32GB. I believe the issue is that the OS file cache builds up and I'm getting billed for it even though it's unused and available to be reclaimed.
Running this:

    SELECT sum(total_bytes) / 1024 / 1024 AS "Total_MB_Used_By_Postgres"
    FROM (
        SELECT count(*) * 8192 AS total_bytes FROM pg_buffercache
        UNION ALL
        SELECT sum(used_bytes) FROM pg_backend_memory_contexts
    ) sub;
Returns 130MB, even while I'm being billed for 5GB+. Reducing my available memory at least caps my billing as a temporary fix.
