6 months ago
I have deployed my Medusa JS v2 application: 1 Redis, 1 Postgres, 1 Node application in server mode (handling API requests) and 1 in worker mode (handling events and async work). A Next.js storefront and a React back office connect to them, both deployed on Vercel.
I put a limit of 2 GB and 1 vCPU on each service. The issue is that both Node services report a constant 1 GB of RAM usage (I am the only one using the application). Previously, I didn't have this limit set up, and the constant usage for each was around 3.5 GB (I am on the Hobby plan).
I know that in cloud computing there is actual usage versus allocated/reserved capacity, but I want clarification on whether that is what is happening here. Especially since I have SSHed into the service and issued commands to check actual RAM usage, and I am well below 1 GB, so even if the memory is reserved, it seems excessive.
These are the commands and their results:
root@baf7f7c5f1bc:/app# ps aux --sort=%mem | head -10
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 123 0.0 0.0 2712 1200 pts/0 S+ 13:10 0:00 head -10
root 93 0.0 0.0 2808 1624 ? S May18 0:00 /bin/sh -c medusa start --types=false
root 117 0.0 0.0 4584 3732 pts/0 Ss 13:10 0:00 bash
root 122 0.0 0.0 7892 3912 pts/0 R+ 13:10 0:00 ps aux --sort=%mem
root 1 0.0 0.0 1341428 99512 ? Ssl May18 0:07 node /root/.nix-profile/bin/yarn run start
root 94 0.0 0.0 1640836 311680 ? Sl May18 0:28 /nix/store/fkyp1bm5gll9adnfcj92snyym524mdrj-nodejs-22.11.0/bin/node /app/apps/backend/.medusa/server/node_modules/.bin/medusa start --types=false
root@baf7f7c5f1bc:/app# free -h
total used free shared buff/cache available
Mem: 384Gi 252Gi 16Gi 14Gi 135Gi 131Gi
Swap: 0B 0B 0B
6 Replies
6 months ago
Hi Nicolas,
It's hard to be sure without knowing more details of your Medusa config and store size; however, I will say that 1 GB per instance of Medusa isn't out of the ordinary in my experience. It's the same on my instance (currently 800 MB) with a small number of subscription products.
My advice would be to consider whether your use case actually requires a second instance, as dropping it would roughly halve your usage. Personally, I've had no issues with performance running Medusa in shared mode, and I think it would scale well for 99.9% of online stores.
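(A rough sketch of what that switch could look like, assuming your medusa-config reads the mode from an environment variable, e.g. workerMode: process.env.MEDUSA_WORKER_MODE, as Medusa's v2 deployment examples do; if your config hardcodes the mode, adjust it there instead.)
# On the single remaining service, set the variable and start as usual:
MEDUSA_WORKER_MODE=shared medusa start --types=false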
Cheers,
Harry
6 months ago
Hey Harry,
I was considering switching to shared mode.
Regardless, I would really like to understand what I am seeing in Railway's RAM usage report, as it doesn't align with the actual usage of my app, as you can see from the output of the commands I issued above.
If you or anybody else has any input that would help me understand that, it would be super!
Greetings,
Nicolas
6 months ago
I'd guess there's a chance that intermittent processes are spiking memory usage, and Node is then lazy about releasing that allocation?
If you try limiting the heap, does it still run fine? Something like:
node --max-old-space-size=384 node_modules/.bin/medusa start
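An alternative sketch, if you'd rather not edit the start command: the same cap can be passed through NODE_OPTIONS, which every Node process spawned by yarn run start inherits (384 is just the figure from the example above, not a recommendation).
# Set as a Railway service variable, or export it where the app is launched:
export NODE_OPTIONS="--max-old-space-size=384"
medusa start --types=false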
6 months ago
The thing is, Medusa suggests having 2 GB of RAM available, so I don't want to limit the RAM when it is needed. I also tried setting the limit to 1 GB in the settings and deploying, and the server crashes from running out of memory. I monitored the RAM while using the site and it stays at around 1 GB for both services; it doesn't change at all.
Given my findings inside the pod, with actual RAM usage close to 300 MB for each service, I'm inclined to believe Railway is allocating more than necessary. Can anyone on the Railway team shed some light on this?
6 months ago
The amount of memory displayed in the metrics tab is correct; it is the memory the entire container is using.
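If you want to verify that from inside the container: the per-container figure lives in the cgroup accounting files, not in free, which reports the whole host. A minimal check, assuming a cgroup v2 runtime (the exact paths depend on Railway's setup):
cat /sys/fs/cgroup/memory.current   # bytes currently charged to this container
cat /sys/fs/cgroup/memory.max       # the configured limit, or "max" if none
# cgroup v1 equivalents:
# cat /sys/fs/cgroup/memory/memory.usage_in_bytes
# cat /sys/fs/cgroup/memory/memory.limit_in_bytes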
6 months ago
Hey Brody, thanks for the response. Can you help me understand a little bit more what is actually using that much memory, even when idle?
Especially since I issued these commands, as I mentioned above, which I am interpreting as showing far less actual usage than the 1 GB reported for each service:
root@baf7f7c5f1bc:/app# ps aux --sort=%mem | head -10
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 123 0.0 0.0 2712 1200 pts/0 S+ 13:10 0:00 head -10
root 93 0.0 0.0 2808 1624 ? S May18 0:00 /bin/sh -c medusa start --types=false
root 117 0.0 0.0 4584 3732 pts/0 Ss 13:10 0:00 bash
root 122 0.0 0.0 7892 3912 pts/0 R+ 13:10 0:00 ps aux --sort=%mem
root 1 0.0 0.0 1341428 99512 ? Ssl May18 0:07 node /root/.nix-profile/bin/yarn run start
root 94 0.0 0.0 1640836 311680 ? Sl May18 0:28 /nix/store/fkyp1bm5gll9adnfcj92snyym524mdrj-nodejs-22.11.0/bin/node /app/apps/backend/.medusa/server/node_modules/.bin/medusa start --types=false
root@baf7f7c5f1bc:/app# free -h
total used free shared buff/cache available
Mem: 384Gi 252Gi 16Gi 14Gi 135Gi 131Gi
Swap: 0B 0B 0B
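(For a like-for-like comparison, a sketch: sum the RSS of every process in the container and set it against the cgroup figure from the check above, rather than against free, which here reports the whole 384 Gi host. The remainder is typically page cache and tmpfs charged to the container's cgroup.)
ps -eo rss= | awk '{sum+=$1} END {printf "total process RSS: %.0f MiB\n", sum/1024}'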