Node.js Fatal Error

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory

V8 fatal crash — no err.code — process exits with code 134 (SIGABRT) or 137 (SIGKILL)

Complete reference — what it means, why V8 runs out of heap, and how to fix it with --max-old-space-size, NODE_OPTIONS, Docker, CI/CD pipelines, Angular, Next.js, webpack, TypeScript, streaming, batch processing, and memory leak diagnosis.

Quick Answer: Node.js is printing FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory because the V8 engine ran out of heap space. The fastest fix is node --max-old-space-size=4096 app.js (gives V8 4 GB). For npm scripts or build tools set NODE_OPTIONS=--max-old-space-size=4096. In Docker add ENV NODE_OPTIONS=--max-old-space-size=4096 to your Dockerfile. The root-cause fix is to stream large files, process data in batches, and eliminate memory leaks.

What is "FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory"?

When V8 — the JavaScript engine inside Node.js — cannot allocate memory for a new object because the heap has reached its configured maximum size, it prints a fatal error and terminates the process. Unlike most Node.js errors, this is not a JavaScript Error object. You cannot catch it with try/catch or process.on('uncaughtException'). The process simply dies.

The default V8 heap limit is approximately 1.5–2 GB on 64-bit systems (the exact figure depends on the Node.js version; since Node.js 12 the default scales with available system memory) and roughly 512 MB on 32-bit systems. When your application retains more live objects than this limit allows, V8 first runs garbage collection repeatedly (the "Last few GCs" lines in the crash output); if GC cannot free enough memory, V8 aborts the process.

Node.js 12 and later are container-aware: inside Docker or Kubernetes, V8 reads the cgroup memory limit and sizes the default heap from it rather than from the host's total RAM. This behavior has been refined in recent releases (Node.js 20+), so the effective default may be well below 1.5 GB when the container memory limit is small.

Exact error messages you will see:
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
<--- Last few GCs --->
<--- JS stacktrace --->
Killed (on Linux when the OS OOM killer terminates the process before V8 can print the message)

Full Error Example

<--- Last few GCs --->

[12345:0x...] 45123 ms: Mark-sweep (reduce) 1396.8 (1458.5) -> 1395.7 (1459.0) MB, ...
[12345:0x...] 47201 ms: Mark-sweep (reduce) 1397.1 (1459.0) -> 1396.8 (1459.5) MB, ...

<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0xb7c6e0 node::Abort() [node]
 2: 0xa9149b node::FatalError(char const*, char const*) [node]
 3: 0xd9b89e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
 4: 0xd9bbf7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
 5: 0xf2ae95  [node]
Aborted (core dumped)

No err.code — cannot be caught: This error is a V8-level crash, not a JavaScript exception. err.code does not exist. try/catch, Promise.catch(), and process.on('uncaughtException') will not intercept it. The process exits with code 134 (SIGABRT) or is killed by the OS OOM killer (exit code 137, SIGKILL).

Common Causes

Loading large datasets entirely in memory: Reading a multi-GB JSON or CSV file with fs.readFileSync or JSON.parse allocates the entire file contents as a string and then again as a parsed object simultaneously.
Memory leaks from retained event listeners or closures: Event listeners added with emitter.on() keep the enclosing scope alive. Without a corresponding off() call, every request or loop iteration accumulates more listeners, growing memory without bound.
Unbounded arrays, Maps, or caches: Appending to an array or Map inside a loop without a size cap or eviction policy causes the collection to grow until the heap is exhausted.
Build tools (webpack, Next.js, Angular CLI, TypeScript): webpack, esbuild, the TypeScript compiler (tsc), and next build / ng build hold the entire module graph in memory. Large monorepos or projects with many dependencies can easily exceed the 1.5 GB default.
Recursion that accumulates heap data: Recursion whose calls each retain intermediate results or closures keeps all of that data alive in the heap until it is exhausted. (Pure deep recursion without heap allocation usually fails with "Maximum call stack size exceeded" instead.)
Container or CI runner memory limits too small: Docker containers and CI runners (GitHub Actions, CircleCI) often have tight memory limits (1–2 GB). Even if Node.js would normally survive, the container OOM killer terminates it with exit code 137 before the V8 error message appears.
Default V8 heap limit too small: The ~1.5 GB default is intentionally conservative. Applications that legitimately process large amounts of data in memory simply need a higher limit set via --max-old-space-size.

Fix 1 – Increase heap size with --max-old-space-size

The --max-old-space-size flag sets the maximum size (in MB) of the V8 old-generation heap — where long-lived objects are stored. This is the fastest fix when you need to run a build or process a dataset that is genuinely large.

Command line

# Give Node.js 4 GB of heap
node --max-old-space-size=4096 app.js

# Give Node.js 8 GB of heap
node --max-old-space-size=8192 app.js

npm scripts in package.json

{
  "scripts": {
    "start": "NODE_OPTIONS=--max-old-space-size=4096 node app.js",
    "build": "NODE_OPTIONS=--max-old-space-size=4096 webpack --config webpack.config.js"
  }
}

Next.js build

{
  "scripts": {
    "build": "NODE_OPTIONS=\"--max-old-space-size=4096\" next build"
  }
}

# On Windows (use cross-env to avoid quoting issues):
# cross-env NODE_OPTIONS=--max-old-space-size=4096 next build

Heap size reference

--max-old-space-size | Heap limit | When to use
2048  | 2 GB  | Medium codebases, moderate data processing
4096  | 4 GB  | Large Next.js / webpack builds, large CSV/JSON processing
8192  | 8 GB  | Very large monorepos, Angular enterprise apps, data-heavy batch jobs
16384 | 16 GB | Massive TypeScript compilations, large monorepo builds on high-RAM servers

Important: --max-old-space-size raises the limit but does not solve a memory leak. If memory grows unboundedly, the process will eventually hit even a higher limit. Use this flag as a short-term fix while you diagnose the root cause.

Fix 2 – Angular CLI: ng build and ng serve out of memory

Angular CLI builds (ng build, ng serve, ng test) run entirely through Node.js. Large Angular applications with many modules, lazy-loaded routes, or complex shared libraries regularly exceed the default heap.

Option A — Call the Angular CLI node binary directly

{
  "scripts": {
    "build": "node --max-old-space-size=8192 node_modules/@angular/cli/bin/ng build",
    "serve": "node --max-old-space-size=8192 node_modules/@angular/cli/bin/ng serve",
    "test":  "node --max-old-space-size=8192 node_modules/@angular/cli/bin/ng test"
  }
}

Option B — Use NODE_OPTIONS

# In your shell or CI environment:
export NODE_OPTIONS=--max-old-space-size=8192
ng build --configuration production

Option C — Disable source maps in production

Source map generation is one of the most memory-intensive parts of an Angular build. Disabling them in angular.json can cut memory usage by 30–50%.

// angular.json — under projects > <project> > architect > build > configurations > production
{
  "configurations": {
    "production": {
      "sourceMap": false,
      "buildOptimizer": true,
      "optimization": true
    }
  }
}

Fix 3 – TypeScript compiler (tsc) out of memory

Running tsc directly can OOM on large projects because the TypeScript compiler loads all source files and type declarations into a single process. Incremental builds and project references help, but may not be enough without a higher heap limit.

# Increase heap for tsc directly
node --max-old-space-size=8192 node_modules/typescript/bin/tsc

# Or via NODE_OPTIONS
NODE_OPTIONS=--max-old-space-size=8192 npx tsc

# Enable incremental compilation (tsconfig.json)
# {
#   "compilerOptions": {
#     "incremental": true,
#     "tsBuildInfoFile": ".tsbuildinfo"
#   }
# }
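Project references take incremental compilation further by splitting the codebase into separately compiled sub-projects, so no single tsc process holds everything at once. A sketch of a root tsconfig.json, assuming hypothetical packages/core and packages/api directories whose own tsconfig.json files set "composite": true:

```json
{
  "files": [],
  "references": [
    { "path": "./packages/core" },
    { "path": "./packages/api" }
  ]
}
```

Compile with npx tsc --build; each referenced project is built on its own and cached in its .tsbuildinfo file, so later builds only recompile what changed.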

Fix 4 – Docker: JavaScript heap out of memory in containers

Docker containers have their own memory limit enforced by cgroups. When the Node.js process exceeds the container memory limit, the Linux OOM killer terminates it with exit code 137 (SIGKILL) — often without printing the V8 error message. This looks like a silent crash.

Dockerfile — set NODE_OPTIONS in the build stage

# Multi-stage build — increase heap only during the build stage
FROM node:20-alpine AS builder
WORKDIR /app

# Set heap to 75% of the build container's memory limit
ENV NODE_OPTIONS=--max-old-space-size=3072

COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage — no extra heap needed for serving
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package.json .
CMD ["node", "dist/server.js"]

docker run — set container memory limit

# Give the container 8 GB of RAM and set the heap to 6 GB (75%)
docker run -m 8g -e NODE_OPTIONS=--max-old-space-size=6144 myapp

# Check if the OOM killer killed a previous container
docker inspect --format='{{.State.OOMKilled}}' <container-id>

docker-compose.yml

services:
  app:
    image: myapp
    environment:
      NODE_OPTIONS: "--max-old-space-size=4096"
    mem_limit: 6g
    memswap_limit: 6g

Important — set --max-old-space-size to 75–80% of container memory: Node.js needs memory beyond the V8 heap (native buffers, thread stacks, loaded code), so the process RSS is always larger than the heap. If you set --max-old-space-size equal to the container limit, the OOM killer will terminate the process before V8 can even emit the heap OOM error. Always leave headroom.
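The 75–80% rule can be automated in a container entrypoint by deriving the heap size from the cgroup limit at startup. A sketch assuming cgroup v2 (under cgroup v1 the path is /sys/fs/cgroup/memory/memory.limit_in_bytes instead):

```shell
#!/bin/sh
# Derive a safe Node.js heap size: 75% of the container memory limit.
# "max" in memory.max means no limit is configured for this cgroup.
limit=$(cat /sys/fs/cgroup/memory.max 2>/dev/null || echo max)

if [ "$limit" = "max" ] || [ -z "$limit" ]; then
  heap_mb=2048   # no readable limit: fall back to a fixed default
else
  heap_mb=$(( limit * 75 / 100 / 1024 / 1024 ))   # bytes -> MB, 25% headroom
fi

echo "NODE_OPTIONS=--max-old-space-size=${heap_mb}"
```

In an entrypoint script you would export the computed value before starting the server, e.g. export NODE_OPTIONS="--max-old-space-size=${heap_mb}" followed by exec node dist/server.js (dist/server.js being this article's example entry file).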

Fix 5 – CI/CD pipelines: GitHub Actions, Vercel, CircleCI

CI runners typically have 2–7 GB of RAM. Build steps that run npm run build inside these constrained environments are a very common source of heap OOM crashes.

GitHub Actions

# .github/workflows/build.yml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm ci
      - name: Build
        env:
          NODE_OPTIONS: "--max-old-space-size=4096"
        run: npm run build

Vercel — via package.json build script

{
  "scripts": {
    "build": "NODE_OPTIONS=\"--max-old-space-size=4096\" next build"
  }
}

Vercel runs your build script directly; setting NODE_OPTIONS inline is the only reliable way to increase the heap inside the Vercel build environment.

CircleCI

# .circleci/config.yml
jobs:
  build:
    docker:
      - image: cimg/node:20.0
    resource_class: medium+   # upgrade to get more RAM
    steps:
      - checkout
      - run:
          name: Build
          environment:
            NODE_OPTIONS: "--max-old-space-size=4096"
          command: npm run build

Fix 6 – Stream large files instead of loading them into memory

The most reliable long-term fix for large-file processing is streaming: read and process one line (or one record) at a time instead of loading the entire file. Memory usage stays constant regardless of file size.

Stream a large text file line by line (readline)

const fs = require('fs');
const readline = require('readline');

async function processLargeFile(filePath) {
  const fileStream = fs.createReadStream(filePath);
  const rl = readline.createInterface({
    input: fileStream,
    crlfDelay: Infinity,
  });

  let lineCount = 0;
  for await (const line of rl) {
    // Process one line at a time — never more than one line in memory
    processLine(line);
    lineCount++;
  }

  console.log(`Processed ${lineCount} lines`);
}

function processLine(line) {
  // your per-line logic here
}

processLargeFile('./large-dataset.csv');

Stream a large CSV file with csv-parse

const fs = require('fs');
const { parse } = require('csv-parse');

fs.createReadStream('large.csv')
  .pipe(parse({ columns: true, trim: true }))
  .on('data', (record) => {
    // Each record is one row — processed and released before the next arrives
    processRecord(record);
  })
  .on('error', (err) => {
    console.error('CSV parse error:', err.message);
  })
  .on('end', () => {
    console.log('Done processing CSV');
  });

function processRecord(record) {
  // your per-record logic here
}

Fix 7 – Process data in batches

When streaming is not possible (e.g. you need random access or must aggregate results), process records in fixed-size chunks. Awaiting each batch before fetching the next allows the garbage collector to reclaim memory from the previous batch.

const BATCH_SIZE = 500;

async function processInBatches(getAllIds, processRecord) {
  const ids = await getAllIds(); // fetch IDs only — not full records

  for (let i = 0; i < ids.length; i += BATCH_SIZE) {
    const batch = ids.slice(i, i + BATCH_SIZE);

    // Fetch and process only this batch
    const records = await fetchRecords(batch);
    await Promise.all(records.map(processRecord));

    // records goes out of scope here and can be GC'd before the next iteration
    console.log(`Processed ${Math.min(i + BATCH_SIZE, ids.length)} / ${ids.length}`);
  }
}

async function fetchRecords(ids) {
  // return only the records for these IDs from DB or API
  return ids.map((id) => ({ id })); // placeholder so the sketch runs; replace with a real lookup
}

async function processRecord(record) {
  // your per-record logic here
}

Fix 8 – Fix memory leaks

Memory leaks are the most insidious cause of heap exhaustion because memory grows slowly and the crash only happens after extended runtime. The three most common patterns are unremoved event listeners, growing caches, and forgotten intervals.

Remove event listeners when done

const EventEmitter = require('events');
const emitter = new EventEmitter();

// BAD: listener is never removed — leaks if called in a loop or per-request
function badSetup() {
  emitter.on('data', handleData);
}

// GOOD: remove the listener when no longer needed
function goodSetup() {
  emitter.on('data', handleData);

  // Clean up when the associated resource closes
  return function cleanup() {
    emitter.off('data', handleData);
  };
}

function handleData(data) {
  // process data
}

Use WeakMap for caches (allows GC when key is no longer referenced)

// BAD: regular Map holds strong references — keys never GC'd
const cache = new Map();
function processRequest(req) {
  cache.set(req, computeExpensiveResult(req)); // req is retained forever
}

// GOOD: WeakMap holds weak references — entry is GC'd when req goes out of scope
const weakCache = new WeakMap();
function processRequestSafe(req) {
  if (!weakCache.has(req)) {
    weakCache.set(req, computeExpensiveResult(req));
  }
  return weakCache.get(req);
}

function computeExpensiveResult(req) {
  // expensive computation
  return { result: true }; // placeholder
}
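WeakMap only works when the keys are objects; caches keyed by strings or numbers need an explicit size cap instead. A minimal sketch with first-in, first-out eviction (a production system might prefer a proper LRU implementation):

```javascript
// Bounded cache: once the cap is reached, evict the oldest entry.
// Map iterates keys in insertion order, so the first key is the oldest.
const MAX_ENTRIES = 1000;
const boundedCache = new Map();

function cacheSet(key, value) {
  if (boundedCache.size >= MAX_ENTRIES) {
    const oldestKey = boundedCache.keys().next().value;
    boundedCache.delete(oldestKey);
  }
  boundedCache.set(key, value);
}

// Even after 5000 inserts the cache holds at most MAX_ENTRIES entries,
// so it can no longer grow until the heap is exhausted.
for (let i = 0; i < 5000; i++) cacheSet(`key-${i}`, i);
console.log('cache size:', boundedCache.size); // 1000
```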

Clear intervals and timeouts

// BAD: interval keeps its callback closure alive forever
function startPolling() {
  const data = loadLargeDataStructure();
  setInterval(() => {
    // data is captured in this closure and can never be GC'd
    processData(data);
  }, 1000);
}

// GOOD: store the handle and clear it when done
function startPollingManaged() {
  const data = loadLargeDataStructure();
  const intervalId = setInterval(() => {
    processData(data);
  }, 1000);

  // Return a cleanup function or store intervalId somewhere accessible
  return () => clearInterval(intervalId);
}

function loadLargeDataStructure() { return {}; }
function processData(data) { /* ... */ }

Fix 9 – Diagnose memory leaks with built-in tools

Before reaching for --max-old-space-size, confirm whether you have a genuine memory leak. Node.js has built-in diagnostics that require no additional npm packages.

Check current heap limit with v8.getHeapStatistics()

const v8 = require('v8');
const stats = v8.getHeapStatistics();

console.log({
  heapSizeLimit: `${Math.round(stats.heap_size_limit / 1024 / 1024)} MB`,
  totalHeapSize: `${Math.round(stats.total_heap_size / 1024 / 1024)} MB`,
  usedHeapSize: `${Math.round(stats.used_heap_size / 1024 / 1024)} MB`,
});
// heap_size_limit shows your actual current limit — useful for verifying
// that --max-old-space-size was applied correctly

Log memory usage over time

// Add this to your app to track heap growth
setInterval(() => {
  const mem = process.memoryUsage();
  console.log({
    heapUsed: `${Math.round(mem.heapUsed / 1024 / 1024)} MB`,
    heapTotal: `${Math.round(mem.heapTotal / 1024 / 1024)} MB`,
    rss: `${Math.round(mem.rss / 1024 / 1024)} MB`,
    external: `${Math.round(mem.external / 1024 / 1024)} MB`,
  });
}, 5000);

Take heap snapshots with Chrome DevTools

# Start the app with the inspector enabled
node --inspect app.js

# Then open Chrome and navigate to:
# chrome://inspect
# Click "Open dedicated DevTools for Node"
# Go to the "Memory" tab → "Heap snapshot" → "Take snapshot"
# Reproduce the leak, take another snapshot
# Compare snapshots to find objects that grew

Force garbage collection to isolate leaks

# Expose the global.gc() function
node --expose-gc app.js

// In your code, force GC and measure memory before/after
global.gc();
const before = process.memoryUsage().heapUsed;

// ... run the operation you suspect is leaking ...
runSuspectedLeakOperation();

global.gc();
const after = process.memoryUsage().heapUsed;
const leaked = after - before;

console.log(`Memory delta after GC: ${Math.round(leaked / 1024)} KB`);
// A positive value after GC means memory is genuinely retained (leaked)
// Zero or negative means the previous operation did not leak

Fix 10 – Offload to worker_threads or child_process

Memory-intensive operations can be moved to a separate V8 isolate. Each worker thread and child process has its own heap, so an OOM crash in the worker does not bring down the main process. For worker_threads, use resourceLimits.maxOldGenerationSizeMb to give the worker its own higher heap limit; V8 flags such as --max-old-space-size are not supported in a worker's execArgv, so resourceLimits is the supported mechanism.

// main.js — spawns a worker for memory-intensive processing
const { Worker } = require('worker_threads');
const path = require('path');

function runInWorker(filePath) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(path.join(__dirname, 'worker.js'), {
      workerData: { filePath },
      // Give the worker its own higher heap limit (in MB):
      resourceLimits: { maxOldGenerationSizeMb: 4096 },
    });

    worker.on('message', resolve);
    worker.on('error', reject);
    worker.on('exit', (code) => {
      if (code !== 0) reject(new Error(`Worker exited with code ${code}`));
    });
  });
}

runInWorker('./large-dataset.json')
  .then((result) => console.log('Result:', result))
  .catch((err) => console.error('Worker failed:', err));

// worker.js — runs in a separate thread with its own heap
const { workerData, parentPort } = require('worker_threads');
const fs = require('fs');

const rawData = fs.readFileSync(workerData.filePath, 'utf8');
const parsed = JSON.parse(rawData);

// Do memory-intensive processing here
const result = processData(parsed);

// Send result back to main thread and exit (freeing all worker memory)
parentPort.postMessage(result);

function processData(data) {
  // heavy computation
  return { count: data.length };
}

Debugging Checklist

  1. Run node -e "const v8=require('v8'); console.log(Math.round(v8.getHeapStatistics().heap_size_limit/1e6)+'MB')" to confirm your current heap limit before doing anything else.
  2. Check whether memory grows monotonically using process.memoryUsage() logged at intervals. Monotonic growth strongly indicates a leak.
  3. Run with node --expose-gc --inspect app.js, then use Chrome DevTools "Memory" tab to take and compare heap snapshots before and after suspected operations.
  4. Search for emitter.on() calls inside request handlers, loops, or factory functions without corresponding emitter.off() cleanup.
  5. Look for global or module-scope Map, Set, or plain object caches that are written to but never evicted.
  6. Check for setInterval calls whose handles are never cleared, especially those whose callbacks close over large data structures.
  7. If the crash happens only during npm run build, next build, ng build, or tsc, increase NODE_OPTIONS=--max-old-space-size=4096 — build tools have legitimately high memory requirements.
  8. In Docker or CI, run docker inspect --format='{{.State.OOMKilled}}' <id> to confirm whether the container OOM killer (exit 137) rather than V8 itself (exit 134) terminated the process.
  9. On Linux, check dmesg | grep -i "out of memory" to confirm whether the OS OOM killer terminated the process.

# Check if the Linux OOM killer killed the process
dmesg | grep -i "out of memory"
dmesg | grep -i "killed process"

# Monitor heap growth in real time (every 2 seconds)
node -e "
  setInterval(() => {
    const m = process.memoryUsage();
    process.stdout.write(\`heap: \${Math.round(m.heapUsed/1e6)}MB / \${Math.round(m.heapTotal/1e6)}MB  rss: \${Math.round(m.rss/1e6)}MB\r\`);
  }, 2000);
  require('./app'); // your app
"

# Verify heap limit is applied correctly
node --max-old-space-size=4096 -e "const v8=require('v8'); console.log(Math.round(v8.getHeapStatistics().heap_size_limit/1e6)+'MB')"

Do not just increase --max-old-space-size without investigating: Raising the heap limit on a leaking process only delays the crash. If heapUsed grows continuously over time even with GC running, you have a leak. Investigate with heap snapshots first and only increase the limit as a temporary measure while you fix the root cause.

Frequently Asked Questions

What causes "FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory"?

Node.js crashes with this error when the V8 JavaScript engine cannot allocate memory for a new object because the heap has reached its configured maximum. The default heap limit is roughly 1.5–2 GB on 64-bit systems, depending on the Node.js version. Common causes include loading large JSON or CSV files entirely into memory with fs.readFileSync or JSON.parse, memory leaks from retained event listeners or closures, unbounded arrays or Maps, and build tools like webpack, Next.js next build, Angular ng build, or TypeScript tsc processing large codebases.

How do I fix JavaScript heap out of memory in Node.js?

The quickest fix is node --max-old-space-size=4096 app.js which gives V8 4 GB. For npm scripts set NODE_OPTIONS=--max-old-space-size=4096 in the script. For Docker add ENV NODE_OPTIONS=--max-old-space-size=4096 to the Dockerfile. For GitHub Actions, set NODE_OPTIONS in the env block of the build step. The root cause fix is to stream large files, process data in fixed-size batches, and eliminate memory leaks by removing event listeners and using WeakMap for caches.

Can I catch "JavaScript heap out of memory" with try/catch?

No. This error is a fatal V8-level crash, not a JavaScript exception. There is no err.code property. try/catch, Promise.catch(), and process.on('uncaughtException') will all fail to intercept it. The process exits with code 134 (SIGABRT) when V8 aborts itself, or code 137 (SIGKILL) when the Linux OOM killer terminates the process before V8 can print the error message.

How do I fix "JavaScript heap out of memory" in an Angular ng build?

Call the Angular CLI binary directly through node: "build": "node --max-old-space-size=8192 node_modules/@angular/cli/bin/ng build" in package.json. Alternatively export NODE_OPTIONS=--max-old-space-size=8192 before running ng build. Also consider disabling source maps in production ("sourceMap": false in angular.json) which can reduce build memory usage by 30–50%.

How do I fix JavaScript heap out of memory in a Docker container?

Add ENV NODE_OPTIONS=--max-old-space-size=4096 to your Dockerfile. Also ensure the container itself has enough memory with docker run -m 8g or mem_limit: 8g in docker-compose.yml. Set --max-old-space-size to 75–80% of the container memory limit to leave headroom for RSS and native buffers. Run docker inspect --format='{{.State.OOMKilled}}' <id> to check if the OOM killer (exit 137) rather than V8 (exit 134) killed the container.

How do I fix JavaScript heap out of memory in GitHub Actions or Vercel?

In GitHub Actions, add NODE_OPTIONS: "--max-old-space-size=4096" to the env block of your build step. On Vercel, set NODE_OPTIONS inline in your package.json build script since Vercel runs that script directly: "build": "NODE_OPTIONS=\"--max-old-space-size=4096\" next build". In CircleCI, upgrade the resource class to medium+ or large for more RAM, and set NODE_OPTIONS in the environment block of the run step.

How do I diagnose a memory leak causing the heap OOM error?

First use v8.getHeapStatistics().heap_size_limit to confirm your heap ceiling. Then log process.memoryUsage().heapUsed at intervals — monotonically increasing values indicate a leak. Start the app with node --inspect app.js, open Chrome DevTools at chrome://inspect, go to the Memory tab, take two heap snapshots (before and after reproducing the suspected leak), and compare them. Run with --expose-gc to call global.gc() manually and confirm which objects survive forced garbage collection.

Related Errors