Error: write EPIPE means your code wrote to a stream or socket after the reading end was closed. The most common cause is piping Node.js output to `head` or `grep`; fix it by adding `process.stdout.on('error', (err) => { if (err.code === 'EPIPE') process.exit(0); });`. For HTTP responses, check `res.writableEnded` before writing and listen for the `'close'` event to detect client disconnects.
What is EPIPE?
EPIPE is the POSIX "broken pipe" error (errno 32 on Linux), raised when a process tries to write to a pipe, socket, or stream whose reading end has already been closed. Unlike ECONNRESET (a TCP RST sent by a remote host), EPIPE is the OS telling you the local write target is gone: there is no reader left to consume the data you are trying to send.
Node.js surfaces EPIPE as an Error object with code: 'EPIPE' and
syscall: 'write'. On streams and sockets this appears as an 'error' event.
An unhandled 'error' event crashes the process immediately.
Variations of this error:
- `Error: write EPIPE`
- `Error: write EPIPE` with `syscall: 'write'`, `code: 'EPIPE'`
- `events.js:292 throw er; // Unhandled 'error' event ... Error: write EPIPE`
- `write EPIPE` on `process.stdout` when piping to `head` or `grep`
Full Error Example
```
Error: write EPIPE
    at Socket.write (node:net:777:12)
    at write (node:internal/streams/writable:370:10)
    at Writable.write (node:internal/streams/writable:429:10)
    at process.stdout.write (/home/user/project/script.js:14:16)
    at Object.<anonymous> (/home/user/project/script.js:14:16) {
  errno: -32,
  code: 'EPIPE',
  syscall: 'write'
}
```
The `errno: -32` is the negated Linux EPIPE value (macOS also uses errno 32, so Node reports -32 there too). The `syscall: 'write'` confirms the error occurred during a write operation, distinguishing it from read-side errors like ECONNRESET.
Unhandled EPIPE crash (events.js)
If no 'error' listener is attached to the stream that emits EPIPE, Node.js
crashes the entire process with this output:
```
events.js:292
      throw er; // Unhandled 'error' event
      ^

Error: write EPIPE
    at Socket.write (node:net:777:12)
    ...
Emitted 'error' event on Socket instance at:
    at emitErrorNT (node:internal/streams/destroy:164:8) {
  errno: -32,
  code: 'EPIPE',
  syscall: 'write'
}
```
Common Causes
| Cause | Why it happens |
|---|---|
| Piping output to `head`, `grep`, or `less` | Running `node script.js \| head -10` causes `head` to exit after reading 10 lines, closing its stdin. Node.js then gets EPIPE on its next write to `process.stdout`. |
| Writing to an HTTP response after the client disconnected | The client closed the connection (navigated away, timeout, network drop). Calling `res.write()` or `res.end()` after the response stream is destroyed causes EPIPE. |
| Client disconnects during server-sent streaming | The server is streaming a large payload (SSE, file download, chunked response) and the client closes the connection before the stream completes. |
| Writing to a child process stdin after it exited | The spawned child process exited (or crashed) but the parent continues to write to `child.stdin`, which is now a broken pipe. |
| Database connection closed during result streaming | The underlying TCP socket or connection pool connection was closed while a database driver was still streaming query result rows to the application. |
| TCP socket closed by remote while still writing data | The remote end performed a graceful half-close (FIN) or the connection was dropped, and the local side received EPIPE on the next write to that socket. |
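To reproduce the most common cause locally, save the sketch below as, say, repro.js and run `node repro.js | head -1`. Without an stdout error handler, Node crashes with write EPIPE once `head` exits. (The filename and line count are illustrative; the output just has to exceed the OS pipe buffer, typically about 64 KB.)

```javascript
// repro.js — emit more data than the pipe buffer holds, so an
// early-exiting consumer such as `head -1` leaves later writes
// pointing at a broken pipe.
let written = 0;
for (let i = 0; i < 100000; i++) {
  process.stdout.write(`line ${i}\n`);
  written++;
}
```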
Fix 1 – Handle EPIPE on process.stdout (piping to head/grep)
This is the most common cause: running node script.js | head -10 or
node script.js | grep pattern. When the downstream command exits, your script
gets EPIPE on the next process.stdout.write(). This is expected behavior —
the correct response is to exit cleanly.
```javascript
// Add this near the top of your script (before any output)
process.stdout.on('error', (err) => {
  if (err.code === 'EPIPE') {
    // The consumer (head, grep, less) has closed stdin.
    // Exit cleanly — this is normal pipe behavior, not a bug.
    process.exit(0);
  }
  // Re-throw unexpected stdout errors
  throw err;
});

// process.stderr may also receive EPIPE in some shells
process.stderr.on('error', (err) => {
  if (err.code === 'EPIPE') process.exit(0);
  throw err;
});
```
Why process.exit(0) and not process.exit(1)?
When output is piped to head and the consumer closes the pipe, the writer
is expected to stop. Exiting with code 0 signals success. Using 1
would incorrectly signal an error to shell scripts or CI pipelines that check exit codes.
Fix 2 – Check res.writableEnded before writing to HTTP responses
Before calling res.write() or res.end(), check whether the response
has already been ended or the client has disconnected. This avoids EPIPE on HTTP response streams.
```javascript
const http = require('http');

const server = http.createServer((req, res) => {
  let i = 0;
  const interval = setInterval(() => {
    // Guard: stop if the response has already been finished or destroyed
    if (res.writableEnded || res.destroyed) {
      clearInterval(interval);
      return;
    }
    i++;
    res.write(`data chunk ${i}\n`);
    if (i >= 10) {
      clearInterval(interval);
      res.end();
    }
  }, 100);

  // Also clean up when the client disconnects
  res.on('close', () => {
    clearInterval(interval);
  });
});

server.listen(3000);
```
Fix 3 – Listen for the 'close' event to stop streaming
The 'close' event on an HTTP response fires when the underlying connection is
closed — whether by the server finishing the response or the client disconnecting. Use it to
cancel any ongoing work and prevent writing to a destroyed stream.
```javascript
const http = require('http');
const fs = require('fs');

const server = http.createServer((req, res) => {
  const fileStream = fs.createReadStream('/path/to/large-file.bin');

  // Track whether the client is still connected
  let clientConnected = true;

  res.on('close', () => {
    clientConnected = false;
    fileStream.destroy(); // Stop reading the file
  });

  fileStream.on('error', (err) => {
    if (!clientConnected) return; // Client already gone, ignore
    if (!res.headersSent) res.statusCode = 500;
    res.end('Internal Server Error');
  });

  fileStream.pipe(res);
});

server.listen(3000);
```
Fix 4 – Use stream.pipeline() instead of manual pipe()
The stream.pipeline() function (Node.js 10+) automatically destroys all streams
in the chain if any stream closes or errors. This prevents EPIPE from propagating unhandled
when a downstream consumer closes early.
```javascript
const { pipeline } = require('stream');
const { promisify } = require('util');
const fs = require('fs');
const http = require('http');

const pipelineAsync = promisify(pipeline);

const server = http.createServer(async (req, res) => {
  try {
    await pipelineAsync(
      fs.createReadStream('/path/to/large-file.bin'),
      res
    );
  } catch (err) {
    // pipeline() surfaces the error when the client disconnects mid-stream
    if (err.code === 'EPIPE' ||
        err.code === 'ERR_STREAM_PREMATURE_CLOSE' ||
        err.code === 'ERR_STREAM_DESTROYED') {
      // Client disconnected — nothing more to do
      return;
    }
    console.error('Streaming error:', err);
    if (!res.headersSent) {
      res.statusCode = 500;
      res.end('Internal Server Error');
    }
  }
});

server.listen(3000);
```
On Node.js 15+, you can use `require('stream/promises').pipeline()` directly instead of `promisify(pipeline)`. It returns a native Promise and has the same error-handling semantics.

```javascript
const { pipeline } = require('stream/promises');
```
Fix 5 – Handle EPIPE on writable streams gracefully
For custom writable streams or any stream you manage directly, attach an 'error'
listener and handle EPIPE explicitly. Without this listener, the first EPIPE event
crashes the process.
```javascript
const net = require('net');

const client = net.createConnection({ host: 'example.com', port: 9000 });

client.on('connect', () => {
  // Send data in a loop
  const interval = setInterval(() => {
    const canContinue = client.write('ping\n');
    if (!canContinue) {
      // Backpressure: wait for drain before writing more
      client.once('drain', () => { /* resume */ });
    }
  }, 100);
  client.on('close', () => clearInterval(interval));
});

// Always handle 'error' on writable streams
client.on('error', (err) => {
  if (err.code === 'EPIPE') {
    console.log('Remote closed the connection while writing.');
    client.destroy();
  } else {
    console.error('Socket error:', err);
    client.destroy(err);
  }
});
```
Fix 6 – Destroy the stream on EPIPE
When EPIPE is not expected (i.e., not a pipe-to-head scenario), the correct response is to destroy the stream immediately. Attempting to write to it again will keep generating errors.
```javascript
function createSafeWriter(destination) {
  destination.on('error', (err) => {
    if (err.code === 'EPIPE') {
      // Stop writing — the read end is gone
      console.warn('EPIPE: reader closed, destroying stream');
      destination.destroy();
    } else {
      // Unexpected error — propagate or handle
      console.error('Unexpected stream error:', err);
      destination.destroy(err);
    }
  });
  return destination;
}

// Example: write to a child process stdin safely
const { spawn } = require('child_process');
const child = spawn('some-command', ['--arg']);
const safeStdin = createSafeWriter(child.stdin);
safeStdin.write('input data\n');
safeStdin.end();
```
When is EPIPE Expected vs a Real Bug?
| Scenario | Expected? | Correct Action |
|---|---|---|
| `node script.js \| head -10` — script produces more than 10 lines | Yes — expected | Catch EPIPE on `process.stdout` and call `process.exit(0)` |
| `node script.js \| grep pattern` — no matching lines found, `grep` exits early | Yes — expected | Catch EPIPE on `process.stdout` and call `process.exit(0)` |
| HTTP client closes browser tab mid-download | Yes — expected | Listen for `'close'` on `res`, stop writing, clean up resources |
| Writing to `child.stdin` after the child exited with an error | No — real bug | Check the child's exit code before writing, or handle the `'exit'` event to stop writing |
| Writing to a database socket after connection pool timeout | No — real bug | Handle EPIPE in the database driver error handler, reconnect, and retry the operation |
| TCP socket EPIPE on a keep-alive connection | Sometimes | Treat like ECONNRESET: catch, destroy the socket, and retry with a new connection |
Debugging Checklist
- Check the stack trace for which stream threw EPIPE: `process.stdout`, an HTTP `res` object, a `net.Socket`, or a child process `stdin`.
- If the script is invoked via a shell pipe (`| head`, `| grep`, `| less`), EPIPE is expected — add the `process.stdout` error handler.
- Check `res.writableEnded` and `res.destroyed` before every write in long-running HTTP handlers.
- Use `stream.pipeline()` instead of `stream.pipe()` — it handles cleanup automatically when consumers close early.
- Enable `NODE_DEBUG=net,stream` to trace stream lifecycle events and see exactly when the write end is destroyed.
- Search for any writable stream that does not have an `'error'` event listener — any unhandled EPIPE will crash the process.
```bash
# Trace stream and socket events to find where EPIPE originates
NODE_DEBUG=net,stream node app.js 2>&1 | grep -i epipe
```
When output is piped to an early-exiting consumer (`head`, `grep`), catching EPIPE and exiting cleanly is correct.
But for unexpected EPIPE errors on database connections, child process stdin, or HTTP clients
that should still be connected, log the error and investigate. Silently swallowing it hides
connectivity bugs, resource leaks, and race conditions that worsen under load.
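One way to encode that policy is a small helper (hypothetical, not a built-in API) that treats EPIPE as benign only on streams you have explicitly marked as pipe-facing:

```javascript
// handleEpipe is a hypothetical helper: EPIPE on a stream marked
// "expected" (e.g. process.stdout under a shell pipe) ends the program
// cleanly; EPIPE anywhere else is logged so the bug stays visible.
function handleEpipe(stream, { expected = false } = {}) {
  stream.on('error', (err) => {
    if (err.code !== 'EPIPE') throw err;
    if (expected) {
      process.exitCode = 0; // consumer closed the pipe: normal shutdown
    } else {
      console.error('Unexpected EPIPE:', err.message);
    }
  });
}

// Usage: stdout EPIPE is expected, a database socket's would not be.
handleEpipe(process.stdout, { expected: true });
```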
Related Errors
- `ECONNRESET: connection reset by peer` — remote TCP peer sent a RST packet; differs from EPIPE, which is a local write to a closed pipe
- `ECONNREFUSED: connection refused` — the connection was never established; nothing is listening on the target port
- `ERR_STREAM_DESTROYED` — Node.js stream-level error thrown when writing to an already-destroyed stream; often accompanies EPIPE
- `ERR_STREAM_WRITE_AFTER_END` — writing to a stream after `end()` has been called on the writable side
- `ETIMEDOUT: connection timed out` — no response within the timeout; the connection was not actively closed, just silent