21 days ago
helppp
12 Replies
21 days ago
Hey there! We've found the following might help you get unblocked faster:
If you find the answer from one of these, please let us know by solving the thread!
21 days ago
Could you give more detail?
21 days ago
Even if you reverted the code, make sure another deployment is triggered! You can also try reverting to a previous deployment to check.
21 days ago
hey, I tried to redeploy the currently running deployment and now I get this error. I don't understand why, because none of the configs changed... but now it says allocation failed.
<--- Last few GCs --->
[146:0x39a10b40] 177514 ms: Mark-Compact 4039.9 (4128.3) -> 4023.9 (4128.6) MB, 1540.58 / 0.00 ms (average mu = 0.134, current mu = 0.049) allocation failure; scavenge might not succeed
[146:0x39a10b40] 179095 ms: Mark-Compact 4039.7 (4128.6) -> 4024.1 (4128.8) MB, 1571.40 / 0.00 ms (average mu = 0.073, current mu = 0.006) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
1: 0xb76dc5 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [node]
2: 0xee6120 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
3: 0xee6407 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
4: 0x10f8055 [node]
5: 0x10f85e4 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [node]
6: 0x110f4d4 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::internal::GarbageCollectionReason, char const*) [node]
7: 0x110fcec v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
8: 0x10e5ff1 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
9: 0x10e7185 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
10: 0x10c47d6 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
11: 0x1520316 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
12: 0x7f9f7fed9ef6
Aborted (core dumped)
Dockerfile:16
21 days ago
Like, I have a deployment that is active right now, but whenever I try to redeploy it, it doesn't work. I get the above issue.
21 days ago
Hey @selen-pixel,
This “Ineffective mark-compacts near heap limit / JavaScript heap out of memory” error can happen on Railway. Usually the code isn't the problem: Node just runs out of heap during the build. Try these steps in order; they fixed it for me:
1) Check the logs & env first
Look for missing secrets or bad env vars (prerender/builds often fail if an API key is unset).
Confirm Railway is actually using your Dockerfile (Dockerfile must be in the repo root or explicitly referenced).
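For the env check, a tiny fail-fast guard at the top of the build or prerender entry point can surface missing secrets immediately, instead of letting the build die somewhere deep. A sketch (the variable names in the example call are illustrative, not from your project):

```javascript
// Sketch: fail fast when required env vars are missing,
// so a bad/missing secret is reported up front.
function requireEnv(names) {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}

// Call this before the build/prerender work starts, e.g.:
// requireEnv(["DATABASE_URL", "SOME_API_KEY"]);  // illustrative names
```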
2) Increase Node’s heap during build
Node’s default heap is small. Add a NODE_OPTIONS override so Node can use more memory during build:
Dockerfile (early in the builder stage, before the RUN npm run build step):
ENV NODE_OPTIONS="--max-old-space-size=8192"
Or add to your package.json build script (Linux container):
"scripts": {
  "build": "NODE_OPTIONS='--max-old-space-size=4096' npm run build-app"
}
(If you need cross-platform: use cross-env NODE_OPTIONS=--max-old-space-size=4096.)
Start with 4096 or 8192 and raise if needed. This tells Node to use up to 4–8GB for the build and often stops the OOM.
3) Confirm plan / container memory
Make sure your service has enough RAM on Railway (Pro plans give more). Node won’t use it automatically unless you raise --max-old-space-size.
4) Clear Railway build cache and force a fresh build
Railway sometimes reuses bad cache layers. Go to your project → deployments → settings → clear build cache, then redeploy.
5) Try a local Docker build
Build the same Docker image locally with similar Docker memory limits to confirm whether it’s a build-size/memory problem or something Railway-specific.
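A minimal local repro for this step might look like the sketch below. The image name "myapp" and the 8g cap are illustrative values; run it from the repo root so it picks up the same Dockerfile Railway uses:

```shell
# Sketch: rebuild the same image locally with a memory cap comparable
# to the Railway plan, to see if the OOM reproduces outside Railway.
local_repro() {
  if ! command -v docker >/dev/null 2>&1; then
    echo "docker is not installed; skipping local repro"
    return 0
  fi
  if [ ! -f Dockerfile ]; then
    echo "no Dockerfile in the current directory; run from the repo root"
    return 0
  fi
  # "myapp" and 8g are illustrative; match your service and plan.
  docker build -t myapp .
  docker run --rm --memory=8g myapp
}

local_repro
```

If the build OOMs locally too, the problem is the build itself (heap size, tsc, dependencies) rather than anything Railway-specific.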
6) If it still fails, share these things
Post the Dockerfile, package.json build script, the full build log (first OOM occurrence), and Node version. That makes root-cause debugging much faster.
If this fixes it for you, let me know — cheers!
20 days ago
hey! thanks for the suggestions... I tried all of them: upgraded to the Pro plan, set the 8192 heap increase. For the cache I set NIXPACKS_NO_CACHE=1 on Railway. I've uploaded the logs, please take a look.
Attachments
20 days ago
Hey @selen-pixel,
I went through the full logs, your Dockerfile, and package.json carefully.
From what I can see, the crash is a Node OOM during the builder stage. The React frontend build actually finishes fine, but then tsc runs and hits the heap limit. The reason your ENV NODE_OPTIONS=--max-old-space-size=8192 didn't help is that it's set in the runner stage, which only affects the final image, not the builder stage where npm run build runs.
Do this to fix the problem:
Move or add the NODE_OPTIONS setting to the builder stage, early on, so TypeScript can use more memory:
FROM node:20-bookworm AS builder
WORKDIR /app
# (increase if needed)
ENV NODE_OPTIONS="--max-old-space-size=8192"
COPY package.json package-lock.json ./
COPY prisma ./prisma
RUN npm ci --include=dev --loglevel=error
RUN npx prisma generate
COPY . .
RUN echo "Building TypeScript..." && npm run build
alternatively, if you prefer not to set it globally in the stage, you can also do it inline for the build step:
RUN NODE_OPTIONS="--max-old-space-size=8192" npm run build
I think this will work because:
NODE_OPTIONS needs to be visible to the Node process running tsc in the builder stage. If it’s only in the runner stage, the builder still uses the default small heap and will crash.
Your logs show the OOM happens right after the frontend build finishes. The TypeScript compile, especially with heavy dependencies like LangChain, needs more memory than the default allows.
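The stage scoping is the key point here. A minimal illustration (image tags match the Dockerfile above):

```dockerfile
FROM node:20-bookworm AS builder
# Visible to every RUN in this stage, including the tsc build.
ENV NODE_OPTIONS="--max-old-space-size=8192"
# ...build steps here see the larger heap...

FROM node:20-bookworm AS runner
# NODE_OPTIONS from the builder stage is NOT inherited here; if the
# runtime process also needs a larger heap, declare it again in this stage.
```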
How much memory to set: 8GB (8192) is usually enough. If it still crashes, try 12GB (12288) or 16GB (16384). Railway Pro builder containers can handle that, so pick the smallest value that consistently works.
More recommendations:
In package.json, switch the frontend install to npm ci for deterministic and faster installs:
"build": "npx prisma generate && cd frontend && npm ci && npm run build && cd .. && tsc"
Keep NIXPACKS_NO_CACHE=1 or clear the Railway build cache after updating the Dockerfile, to make sure old layers don't interfere.
Double-check that Railway is using the Dockerfile from your repo root (or explicitly point to it in the project settings).
If it still fails
Try building the same Docker image locally to see if it's specific to Railway.
If tsc still consumes too much memory, options include:
Add skipLibCheck: true and/or incremental: true to tsconfig.json.
Switch the backend compile to esbuild, which uses far less memory and is much faster.
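The tsconfig tweaks would sit under compilerOptions; a sketch (merge these flags into your existing tsconfig.json rather than replacing it):

```json
{
  "compilerOptions": {
    "skipLibCheck": true,
    "incremental": true
  }
}
```

skipLibCheck skips type-checking the .d.ts files in node_modules, which is usually the biggest memory saver with heavy dependency trees like LangChain; incremental caches build info between runs so later builds do less work.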
Once you move the NODE_OPTIONS into the builder stage and clear the cache, this should fix the OOM issue.
If this fixes it for you, and I hope it will, let me know!
20 days ago
@
it works now! the problem was that .dockerignore had package-lock.json in it... so we removed that entry and added package-lock.json to the Docker build directly... seemed to work!
selen-pixel
thank you so much!
20 days ago
Awesome, glad to hear it's working now. I'm happy to help!
20 days ago
Looks like everything is fully resolved now, so the thread should probably be marked as Solved!