Error: EMFILE: too many open files means the Node.js process has hit the OS per-process file descriptor limit. Immediate fixes: (1) run ulimit -n 65536 in your shell to raise the limit, (2) on Linux, raise inotify watches: echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p, (3) replace require('fs') with require('graceful-fs'), (4) limit concurrent opens with p-limit instead of bare Promise.all, or (5) set WATCHPACK_POLLING=true as a quick dev-server workaround.
What is EMFILE?
EMFILE is the POSIX error code for "too many open files", raised when a process tries to open a new file descriptor (fd) after it has already reached the operating system's per-process limit. File descriptors are handles the kernel allocates for any I/O resource: regular files, directories, sockets, pipes, and fs.watch / inotify watchers all count against the same pool. Database connections count too, since each one is a socket.
Node.js surfaces this as an Error object with code: 'EMFILE'
on any fs operation — fs.open, fs.readFile,
fs.createReadStream, fs.watch — and indirectly on any npm
package or tool that opens files internally (webpack, vite, jest, nodemon, chokidar,
Rollup, esbuild, etc.).
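In your own code, test err.code === 'EMFILE' rather than parsing the message string. As a sketch (withEmfileRetry and its backoff values are illustrative, not a standard API), a minimal retry wrapper looks like this:

```javascript
// Sketch: retry an async fs operation when it fails with EMFILE, backing
// off exponentially so other operations get a chance to release their fds.
async function withEmfileRetry(operation, retries = 5, baseDelayMs = 50) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      // Only EMFILE is retryable here; rethrow everything else immediately.
      if (err.code !== 'EMFILE' || attempt >= retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Usage: const data = await withEmfileRetry(() => require('fs').promises.readFile(path, 'utf8')); — though for bulk workloads a concurrency limit (Fix 3) is the better tool.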
- Error: EMFILE: too many open files, open '/path/to/file'
- Error: EMFILE: too many open files, watch
- Error: EMFILE: too many open files, watch '/path/to/dir'
- Error: EMFILE, too many open files
Full Error Example
Error: EMFILE: too many open files, open '/home/user/project/src/index.js'
at Object.open (node:internal/fs/promises:567:10)
at async open (node:internal/fs/promises:549:15)
at async readFile (node:internal/fs/promises:252:14)
at async processFile (/home/user/project/scripts/build.js:42:18)
at async Promise.all (index 237) {
errno: -24,
code: 'EMFILE',
syscall: 'open',
path: '/home/user/project/src/index.js'
}
The errno: -24 is the negated POSIX errno value for EMFILE (EMFILE = 24 on Linux and macOS). The path field shows the file
that could not be opened — but the root cause is the total number of already-open fds,
not a problem with that specific file. On Windows, Node.js may report errno: -4066
(libuv's EMFILE code) for the same underlying condition.
Common Causes
| Cause | Why it happens |
|---|---|
| Opening many files in parallel without closing them | Code uses Promise.all(files.map(f => fs.readFile(f))) over hundreds of files simultaneously. Each readFile call holds an fd open until the read completes, exhausting the limit before earlier fds are released. |
| File watchers (webpack, vite, jest, nodemon, chokidar) | Watchers open an inotify/kqueue handle for every watched path. A project with thousands of files in node_modules or src/ easily exceeds the default macOS limit of 256 fds or the Linux inotify default of 8192 watches. |
| Exceeded Linux inotify.max_user_watches | On Linux, file watchers use inotify watches (a separate kernel resource from the fd table). The default limit is often 8192 watches. Monorepos and large projects exhaust this before hitting the ulimit. Running cat /proc/sys/fs/inotify/max_user_watches shows the current ceiling. |
| Too many concurrent HTTP/WebSocket connections | Every TCP socket (incoming or outgoing) consumes one file descriptor. A server under high load, a WebSocket server with many long-lived connections, or a crawler that opens hundreds of concurrent connections can hit the fd limit. |
| File descriptor leaks | Code opens files but never closes them — for example, calling fs.open() without a corresponding fs.close(), abandoning a stream without destroying it, or a callback-based API where the callback is never invoked (leaving internal fds open). |
| Low OS ulimit setting | macOS ships with a default soft limit of 256 file descriptors per process. Linux defaults to 1024. Docker containers may inherit host limits or have their own low defaults. All are far lower than what modern Node.js tooling needs. |
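The first cause, unbounded Promise.all, is easy to demonstrate without touching real files. This sketch replaces fs.readFile with a timer-based stand-in and records how many simulated opens are in flight at once under each pattern:

```javascript
// Simulated "file open": tracks how many operations are in flight at once.
let inFlight = 0;
let peak = 0;
async function simulatedOpen() {
  inFlight++;
  peak = Math.max(peak, inFlight);
  await new Promise((resolve) => setTimeout(resolve, 5));
  inFlight--;
}

async function main() {
  const files = Array.from({ length: 200 }, (_, i) => i);

  // Unbounded: map() starts all 200 operations before any completes.
  peak = 0;
  await Promise.all(files.map(simulatedOpen));
  const unboundedPeak = peak; // 200

  // Bounded: batches of 20 keep at most 20 operations open at a time.
  peak = 0;
  for (let i = 0; i < files.length; i += 20) {
    await Promise.all(files.slice(i, i + 20).map(simulatedOpen));
  }
  const boundedPeak = peak; // 20

  console.log({ unboundedPeak, boundedPeak });
  return { unboundedPeak, boundedPeak };
}
```

Calling main() logs { unboundedPeak: 200, boundedPeak: 20 }: with real fds, the unbounded version needs an fd limit above 200 while the bounded one never needs more than the batch size.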
Fix 1 – Check and increase ulimit
The fastest fix is to raise the per-process file descriptor limit in your shell session. This is the first thing to try when a bundler, test runner, or dev server suddenly throws EMFILE.
# Check the current soft limit
ulimit -n
# Check the hard limit (maximum you can raise the soft limit to without root)
ulimit -Hn
# Raise the limit for the current shell session
ulimit -n 65536
# Confirm the new value
ulimit -n
Permanent fix on macOS
# Create a launchd config to set the limit system-wide on macOS
sudo tee /Library/LaunchDaemons/limit.maxfiles.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>limit.maxfiles</string>
<key>ProgramArguments</key>
<array>
<string>launchctl</string>
<string>limit</string>
<string>maxfiles</string>
<string>65536</string>
<string>200000</string>
</array>
<key>RunAtLoad</key>
<true/>
</dict>
</plist>
EOF
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
Alternatively, add ulimit -n 65536 to your ~/.zshrc or ~/.bash_profile for a user-level persistent setting without needing root.
Permanent fix on Linux
# Add to /etc/security/limits.conf (applies after re-login)
echo "* soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 200000" | sudo tee -a /etc/security/limits.conf
# Raise the system-wide fd ceiling (all processes combined)
echo "fs.file-max = 2097152" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# For systemd services, set in the unit file instead:
# [Service]
# LimitNOFILE=65536
# Verify after re-login
ulimit -n
Fix 2 – Increase Linux inotify.max_user_watches
On Linux, file watchers (used by webpack, vite, jest, nodemon, and chokidar) consume
inotify watches — a separate kernel resource with its own limit, independent
of the process fd table. Even if your ulimit -n is high, exhausting inotify
watches produces the same EMFILE error for fs.watch operations.
# Check the current inotify watch limit
cat /proc/sys/fs/inotify/max_user_watches
# Check the per-user limit on inotify instances (a separate, related ceiling)
cat /proc/sys/fs/inotify/max_user_instances
# Raise the limit permanently
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# Verify the new value
cat /proc/sys/fs/inotify/max_user_watches
If the error object shows syscall: 'watch'
rather than syscall: 'open', inotify exhaustion is almost certainly the cause.
This is extremely common on Linux development machines running webpack or vite with large
node_modules trees. The fix is permanent across reboots via /etc/sysctl.conf.
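To see which processes are actually holding inotify resources, each inotify instance shows up as an fd symlinked to anon_inode:inotify under /proc/<pid>/fd. A rough accounting one-liner (a sketch using Linux-only paths; it prints nothing on other platforms):

```shell
# List processes holding inotify instances, busiest first (Linux only).
for pid in /proc/[0-9]*; do
  n=$(find "$pid/fd" -lname 'anon_inode:inotify' 2>/dev/null | wc -l)
  [ "$n" -gt 0 ] && echo "$n inotify instance(s): PID ${pid#/proc/} ($(cat "$pid/comm" 2>/dev/null))"
done | sort -rn | head
```

This counts instances, not individual watches; per-instance watch counts live in each fd's fdinfo file.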
Fix 3 – Process files in batches with a concurrency limit
Replacing Promise.all(files.map(...)) with a concurrency-limited queue
is the correct fix when your code intentionally opens many files. It keeps the number
of simultaneously open fds bounded regardless of how large the input array is.
Using p-limit
// npm install p-limit@3  (p-limit v4+ is ESM-only; v3 is the last CommonJS-compatible release)
const fs = require('fs').promises;
const pLimit = require('p-limit');
async function processAllFiles(filePaths) {
const limit = pLimit(20); // max 20 files open at once
const results = await Promise.all(
filePaths.map((filePath) =>
limit(async () => {
const content = await fs.readFile(filePath, 'utf8');
return processContent(content); // your per-file logic
})
)
);
return results;
}
function processContent(content) {
// ... transform/parse the file content
return content.toUpperCase();
}
Using async-pool (alternative to p-limit)
// npm install tiny-async-pool  (v2+ exposes an async-iterator API as the default export)
const asyncPool = require('tiny-async-pool');
const fs = require('fs').promises;
const fs = require('fs').promises;
async function processAllFiles(filePaths) {
const results = [];
for await (const result of asyncPool(20, filePaths, async (filePath) => {
const content = await fs.readFile(filePath, 'utf8');
return processContent(content);
})) {
results.push(result);
}
return results;
}
function processContent(content) {
return content.toUpperCase();
}
Manual batching without extra dependencies
const fs = require('fs').promises;
async function processInBatches(filePaths, concurrency = 20) {
const results = [];
for (let i = 0; i < filePaths.length; i += concurrency) {
const batch = filePaths.slice(i, i + concurrency);
const batchResults = await Promise.all(
batch.map(async (filePath) => {
const content = await fs.readFile(filePath, 'utf8');
return processContent(content);
})
);
results.push(...batchResults);
}
return results;
}
function processContent(content) {
return content.toUpperCase();
}
Fix 4 – Use graceful-fs for automatic EMFILE retries
The graceful-fs
package is a drop-in replacement for the built-in fs module. When an
operation fails with EMFILE, it queues the operation and retries it automatically
once a file descriptor becomes available. Many popular tools (npm, webpack, jest)
already use it internally. The fs-extra package also wraps graceful-fs
and provides the same EMFILE protection.
// npm install graceful-fs
// Replace: const fs = require('fs');
// With:
const fs = require('graceful-fs');
// All existing fs calls work identically — EMFILE is handled transparently.
// graceful-fs queues the failed open() call and retries when an fd is freed.
fs.readFile('/path/to/file', 'utf8', (err, data) => {
if (err) throw err;
console.log(data);
});
// Works with fs.promises too:
const fsp = fs.promises;
const content = await fsp.readFile('/path/to/file', 'utf8');
// fs-extra is an alternative that also handles EMFILE via graceful-fs:
// npm install fs-extra
const fse = require('fs-extra');
const data = await fse.readFile('/path/to/file', 'utf8');
Note that graceful-fs treats the symptom rather than the cause: it queues and retries instead of reducing load. For code you control, prefer an explicit concurrency limit such as p-limit so you understand
the actual load your code generates.
Fix 5 – Always close file handles (prevent fd leaks)
File descriptor leaks are a common cause of EMFILE in long-running processes. Every
fs.open() must have a matching fs.close(), and every
stream must be destroyed if not fully consumed.
Using fs.promises with try/finally
const fs = require('fs').promises;
async function readFileSafely(filePath) {
let fileHandle;
try {
fileHandle = await fs.open(filePath, 'r');
const content = await fileHandle.readFile({ encoding: 'utf8' });
return content;
} finally {
// Always close — even if readFile throws
if (fileHandle) await fileHandle.close();
}
}
Using streams with pipeline (auto-cleanup on error)
const fs = require('fs');
const { pipeline } = require('stream/promises');
const { createGzip } = require('zlib');
async function compressFile(input, output) {
// pipeline() automatically destroys all streams on error,
// releasing their file descriptors.
await pipeline(
fs.createReadStream(input),
createGzip(),
fs.createWriteStream(output)
);
}
// Avoid this pattern — if an error occurs mid-pipe,
// the read stream fd may never be released:
// fs.createReadStream(input).pipe(fs.createWriteStream(output));
Programmatic fd monitoring in Node.js
const fs = require('fs');
const path = require('path');
// Count open file descriptors for the current process (Linux only)
function countOpenFds() {
try {
return fs.readdirSync('/proc/self/fd').length;
} catch {
return -1; // Not on Linux
}
}
// Log fd count periodically to detect leaks
setInterval(() => {
const count = countOpenFds();
if (count > 0) {
console.log(`[fd-monitor] Open fds: ${count}`);
}
}, 5000);
Fix 6 – Limit file watchers in webpack, vite, jest, nodemon, and chokidar
Build tools and test runners that use file watching are a frequent source of EMFILE errors, especially on macOS (256 fd default) and Linux (8192 inotify default). The fix is to tell the watcher to ignore directories it does not need to monitor.
webpack.config.js
module.exports = {
// ...
watchOptions: {
// Do not watch node_modules — this alone can save thousands of fds
ignored: /node_modules/,
// Or use a glob pattern:
// ignored: ['**/node_modules/**', '**/dist/**', '**/.git/**'],
aggregateTimeout: 300,
poll: false, // use native fs events, not polling
},
};
vite.config.js
export default {
server: {
watch: {
// Exclude directories that don't need watching
ignored: ['**/node_modules/**', '**/dist/**', '**/.git/**'],
},
},
};
jest.config.js
module.exports = {
// ...
watchPathIgnorePatterns: [
'/node_modules/',
'/dist/',
'/build/',
'/.git/',
],
// For jest --watch mode, also consider:
// testPathIgnorePatterns to reduce the number of test files scanned
};
chokidar directly
const chokidar = require('chokidar');
const watcher = chokidar.watch('./src', {
ignored: /(^|[/\\])\..|(node_modules)/, // ignore dotfiles and node_modules
persistent: true,
// usePolling: false is the default and preferred — polling opens an fd per file
depth: 10, // limit directory traversal depth to reduce fd usage
});
watcher.on('change', (path) => console.log(`File changed: ${path}`));
WATCHPACK_POLLING environment variable
When native file-system events exhaust inotify/kqueue handles, switching to polling is a quick workaround that avoids opening persistent watch fds:
# Run the dev server with polling instead of native fs events
WATCHPACK_POLLING=true npm run dev
# Make it permanent for your terminal session
export WATCHPACK_POLLING=true
# Or add to .env / .bashrc
echo 'export WATCHPACK_POLLING=true' >> ~/.bashrc
Polling uses noticeably more CPU than native file-system events, so treat WATCHPACK_POLLING as a temporary workaround until you apply the inotify.max_user_watches or ulimit fix.
Fix 7 – Clear node_modules during npm install EMFILE
If Error: EMFILE: too many open files appears during npm install
or npm run build, a corrupted or partially-installed node_modules
tree can cause npm to open far more fds than usual. The full recovery sequence:
# Step 1: raise the fd limit first
ulimit -n 65536
# Step 2: clear everything and reinstall cleanly
rm -rf node_modules package-lock.json
npm cache clean --force
npm install
# Step 3: if still failing in CI (CircleCI, GitHub Actions, etc.)
# Add ulimit step before npm install in your pipeline config:
# - run: ulimit -n 65536 && npm install
Fix 8 – Detect file descriptor leaks with lsof and why-is-node-running
If your Node.js process gradually runs out of file descriptors over time (EMFILE appears
after running for hours, not immediately), you have a leak. Use lsof and
why-is-node-running to identify which files are being held open.
# Find your Node.js process PID
pgrep -a node
# Count total open file descriptors for a process
lsof -p <PID> | wc -l
# List all open files for a process (look for files that keep accumulating)
lsof -p <PID>
# Watch the fd count grow in real time
watch -n 2 'lsof -p <PID> | wc -l'
# Filter for regular files only (REG type)
lsof -p <PID> | grep REG
# On Linux: check /proc directly (no lsof required)
ls /proc/<PID>/fd | wc -l
ls -la /proc/<PID>/fd
Using why-is-node-running
// npm install -D why-is-node-running
const whyIsRunning = require('why-is-node-running');
// Call this when you suspect a handle leak
// It prints a list of all active handles keeping the event loop alive
setTimeout(() => {
whyIsRunning(); // logs all open handles with stack traces
}, 5000);
To confirm a leak, run lsof -p <PID> | wc -l
every few seconds while your application processes files. If the count grows without
returning to a baseline, you have a leak. Common causes: streams that are created but
never consumed or destroyed, fs.open() calls in error paths that skip
fs.close(), and third-party libraries with callbacks that are never invoked
(leaving internal fds open).
macOS vs Linux vs Docker vs Windows
| Platform | Default soft limit | Default hard limit | How to raise permanently |
|---|---|---|---|
| macOS (Ventura/Sequoia) | 256 | unlimited (kernel max ~12288) | launchd plist in /Library/LaunchDaemons/ or add ulimit -n 65536 to ~/.zshrc |
| Linux (most distros) | 1024 | 4096 (PAM default) | /etc/security/limits.conf, /etc/sysctl.conf fs.file-max, or systemd unit LimitNOFILE=65536 |
| Linux (inotify watches) | 8192 watches | Same as soft | echo fs.inotify.max_user_watches=524288 \| sudo tee -a /etc/sysctl.conf && sudo sysctl -p |
| Docker containers | Inherits host or 1048576 | 1048576 | --ulimit nofile=65536:65536 flag on docker run, or ulimits in docker-compose.yml |
| Windows | No ulimit equivalent | N/A | Use graceful-fs, configure build tools to watch fewer paths, set WATCHPACK_POLLING=true |
# Docker: set ulimit when starting the container
docker run --ulimit nofile=65536:65536 node:20 node app.js
# docker-compose.yml equivalent:
# services:
# app:
# image: node:20
# ulimits:
# nofile:
# soft: 65536
# hard: 65536
Debugging Checklist
- Run `ulimit -n` to check the current fd limit. If it is 256 (macOS default) or 1024 (Linux default), raise it immediately with `ulimit -n 65536`.
- On Linux, if the error syscall is `watch`, check `cat /proc/sys/fs/inotify/max_user_watches` and raise it to 524288.
- Check `err.syscall` and `err.path` in the error object — `syscall: 'open'` indicates a file open limit; `syscall: 'watch'` indicates an inotify/kqueue limit.
- Run `lsof -p $(pgrep -n node) | wc -l` to count open fds. If the number is near your `ulimit -n` value, you are hitting the limit.
- If the fd count grows over time without stabilising, you have a leak — search for `fs.open`, `createReadStream`, or `createWriteStream` calls that lack matching close/destroy calls.
- If EMFILE appears only during `npm install` or bundler startup, clear `node_modules`, run `npm cache clean --force`, then reinstall after raising the fd limit.
- If EMFILE appears only during dev server startup (webpack, vite), add `ignored: /node_modules/` to your watcher config, or set `WATCHPACK_POLLING=true`.
- Check whether `graceful-fs` is already a transitive dependency (`npm ls graceful-fs`). If so, ensure the top-level code also uses it.
- In CI environments (GitHub Actions, CircleCI), add a step to raise ulimit before running your build or install commands.
- On Windows, replace `require('fs')` with `require('graceful-fs')` and configure build tools to watch specific directories rather than entire project trees.
# Quick diagnostic script — run while your Node.js process is running
PID=$(pgrep -n node)
echo "PID: $PID"
echo "ulimit -n: $(ulimit -n)"
echo "Open fds: $(lsof -p $PID 2>/dev/null | wc -l)"
echo "inotify max_user_watches: $(cat /proc/sys/fs/inotify/max_user_watches 2>/dev/null || echo 'N/A (macOS)')"
echo "Top open file types:"
lsof -p $PID 2>/dev/null | awk '{print $5}' | sort | uniq -c | sort -rn | head -10
Frequently Asked Questions
What is EMFILE in Node.js?
EMFILE is the POSIX error code for "too many open files". Node.js throws it as Error: EMFILE: too many open files when the process attempts to open a new file descriptor (for a file, socket, pipe, or directory) but has already reached the operating system's per-process limit. On macOS the default limit is 256; on Linux it is typically 1024. Each open file, network socket, database connection, and file watcher consumes one file descriptor.
How do I fix Error: EMFILE: too many open files in Node.js?
The fix depends on the cause:
- Bulk file operations: use `p-limit` or `async-pool` instead of bare `Promise.all`
- Low OS limits: raise with `ulimit -n 65536`
- Linux inotify exhaustion: raise with `echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p`
- Dev server (webpack/vite): add `ignored: /node_modules/` to the watcher config, or set `WATCHPACK_POLLING=true`
- fd leaks: use `graceful-fs` and ensure every opened file is closed in a `finally` block
What is the difference between EMFILE and ENFILE?
EMFILE is a per-process limit: the current Node.js process has too many file descriptors open. ENFILE is a system-wide limit: the entire OS has exhausted its global file table (set by fs.file-max on Linux). EMFILE is far more common in Node.js development. Raising ulimit fixes EMFILE. ENFILE requires system-level tuning (sysctl fs.file-max on Linux).
Why does webpack, vite, or jest throw Error: EMFILE: too many open files, watch?
Bundlers and test runners use file watchers (typically chokidar or watchpack) to monitor the project directory tree. On Linux, each watched path consumes an inotify watch — a separate kernel resource. Check with cat /proc/sys/fs/inotify/max_user_watches and raise with echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p. On macOS, raise ulimit -n 65536. In both cases, configure watchOptions.ignored to exclude node_modules and build artifacts, or set WATCHPACK_POLLING=true as a quick workaround.
How do I detect a file descriptor leak causing EMFILE?
Run lsof -p $(pgrep -n node) | wc -l every few seconds. If the count grows without returning to baseline, you have a leak. Install why-is-node-running (npm install -D why-is-node-running) and call whyIsRunning() to list all active handles with stack traces. On Linux use ls /proc/$(pgrep node)/fd | wc -l. Common leak causes: fs.open() without fs.close() in error paths, streams created but never consumed or destroyed, and callbacks that are never invoked.
How do I fix EMFILE: too many open files during npm install?
First raise the fd limit: ulimit -n 65536. Then clear and reinstall: rm -rf node_modules package-lock.json && npm cache clean --force && npm install. On Linux, also raise inotify watches. In CI environments (CircleCI, GitHub Actions), add ulimit -n 65536 as a step before running npm install.
Does EMFILE: too many open files happen on Windows in Node.js?
Windows does not use POSIX file descriptors or ulimit, but Node.js on Windows can still throw EMFILE (sometimes with errno: -4066) when Windows handle limits are exceeded. Fixes: (1) use graceful-fs to queue and retry failed operations automatically, (2) configure build tools to watch specific directories rather than entire project structures, (3) clear node_modules and reinstall cleanly, (4) set WATCHPACK_POLLING=true to switch from native file-system events to polling.
Related Errors
- `ENOENT: no such file or directory` — the target path does not exist; different from EMFILE, which is about fd count, not path existence
- `ECONNRESET: connection reset by peer` — network sockets also consume fds; high concurrency can trigger both EMFILE and ECONNRESET
- `ECONNREFUSED: connection refused` — another network error that may appear alongside EMFILE when too many outgoing connections exhaust both fd and port resources
- `EPIPE: broken pipe` — often occurs alongside EMFILE when streams are improperly destroyed; fixing fd leaks can resolve both
- `ENFILE: file table overflow` — the system-wide fd table is full (affects all processes); rarer than EMFILE and requires `sysctl fs.file-max` tuning on Linux
- `EBADF: bad file descriptor` — an already-closed or invalid fd was used; often appears after incorrect close/destroy patterns introduced while trying to fix EMFILE