Error [ERR_STREAM_WRITE_AFTER_END]: write after end means stream.write() or res.write() was called after stream.end() or res.end() had already closed the writable. The most common fix is adding return before every res.end() / res.send() call so the handler exits. For async callbacks, guard with if (res.writableEnded || res.destroyed) return; before every write. For pipeline failures, replace stream.pipe() with stream.pipeline().
What is ERR_STREAM_WRITE_AFTER_END?
ERR_STREAM_WRITE_AFTER_END is a Node.js stream error raised when writable.write() is called on a Writable stream after writable.end() has already been called. end() signals to the stream that no more data will follow; calling write() afterward violates that contract, so Node.js rejects the write and emits an 'error' event on the stream, crashing the process if no 'error' listener is attached.
In HTTP servers this surfaces most often as res.write() or res.end()
being called a second time after the response has already been sent — frequently from an async
callback (database query, setTimeout, upstream proxy response) that fires after
the response was already closed.
The error appears in several forms depending on Node.js version and how it surfaces:

- Error [ERR_STREAM_WRITE_AFTER_END]: write after end (modern Node.js, with the error code)
- Error: write after end (older Node.js without the code prefix)
- events.js:292 throw er; // Unhandled 'error' event ... Error [ERR_STREAM_WRITE_AFTER_END]: write after end (unhandled, crashing the process on older Node.js)
- node:internal/streams/writable ... Error [ERR_STREAM_WRITE_AFTER_END]: write after end (stack-trace origin on modern Node.js)
ERR_STREAM_WRITE_AFTER_END vs ERR_HTTP_HEADERS_SENT
These two errors are often confused because they both arise from "responding twice" in an HTTP handler. They operate at different layers:
| Error | Layer | When it fires | Property checked |
|---|---|---|---|
| ERR_HTTP_HEADERS_SENT | HTTP layer | res.writeHead() or res.setHeader() called after headers were already flushed | res.headersSent |
| ERR_STREAM_WRITE_AFTER_END | Stream layer | res.write() or res.end() called after res.end() already closed the writable | res.writableEnded |
In practice: ERR_HTTP_HEADERS_SENT fires first if you try to change headers after
sending; ERR_STREAM_WRITE_AFTER_END fires if you try to write body data after the
stream is fully closed. Both require the same root fix — ensure only one code path sends a
response per request.
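Both guards can be combined into one defensive helper. This is a sketch of the pattern, not an official API; trySendError is a hypothetical name:

```javascript
// Hypothetical helper: attempt an error response only while it is still legal.
// Checks the stream layer (writableEnded/destroyed) and the HTTP layer (headersSent).
function trySendError(res, statusCode, message) {
  if (res.writableEnded || res.destroyed) return false; // stream closed: nothing can be written
  if (!res.headersSent) res.writeHead(statusCode);      // headers can still be set
  res.end(message);                                     // body write is still legal
  return true;
}
```

Call trySendError(res, 500, 'Internal Server Error') from any late error path; a false return means another code path already responded.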
Full Error Example
Error [ERR_STREAM_WRITE_AFTER_END]: write after end
at new NodeError (node:internal/errors:405:5)
at validChunk (node:internal/streams/writable:224:13)
at ServerResponse.write (node:_http_outgoing:842:3)
at /project/src/routes/users.js:42:9
at processTicksAndRejections (node:internal/process/task_queues:95:5) {
code: 'ERR_STREAM_WRITE_AFTER_END'
}
Unhandled — crashes the process
events.js:292
throw er; // Unhandled 'error' event
^
Error [ERR_STREAM_WRITE_AFTER_END]: write after end
at new NodeError (node:internal/errors:405:5)
at validChunk (node:internal/streams/writable:224:13)
at ServerResponse.write (node:_http_outgoing:842:3)
at Timeout._onTimeout (/project/app.js:18:7) {
code: 'ERR_STREAM_WRITE_AFTER_END'
}
Emitted 'error' event on ServerResponse instance at:
at emitErrorNT (node:internal/streams/destroy:164:8)
Error Object Properties
| Property | Value | Meaning |
|---|---|---|
| err.code | 'ERR_STREAM_WRITE_AFTER_END' | Node.js error code; use this to identify the error programmatically |
| err.message | 'write after end' | Human-readable message; the same across all Node.js versions that carry the code |
| err.name | 'Error' | At runtime err.name is plain 'Error'; the bracketed 'Error [ERR_STREAM_WRITE_AFTER_END]' form is how the terminal renders the name together with the code |
All Causes at a Glance
| Cause | Description | Fix |
|---|---|---|
| Missing return after res.end() | Handler continues executing after sending the response; subsequent write/end calls fire on a closed stream | Add return before every response-terminating call |
| Async callback fires after response was closed | Database query, fetch(), or setTimeout() resolves after an earlier code path already called res.end() | Guard with if (res.writableEnded \|\| res.destroyed) return; |
| Multiple res.send() / res.json() calls in Express | Each of these methods calls res.end() internally; calling them twice in one handler triggers the error | Ensure only one response path executes per request; use return |
| Middleware chain: next() after res.end() | A middleware calls next() after sending a response; a downstream middleware or error handler writes to the already-closed stream | Do not call next() after sending a response; or guard in downstream handlers |
| Pipeline failure: error handler writes to destroyed sink | Upstream stream errors; the error handler calls res.end('error') but the sink was already destroyed by the failing pipeline | Use stream.pipeline(); check res.writableEnded in the error handler |
| http-proxy / http-proxy-middleware: client disconnects mid-proxy | Proxy writes the upstream response to the client ServerResponse after the client has already disconnected and the response was destroyed | Attach res.on('close') to abort the proxied request; check res.writableEnded in the proxy error handler |
| Streaming aggregator / pub-sub (googleapis pubsub, kafkajs) | A message delivery callback writes to a stream that was already ended by a timeout or cancellation handler | Use cancellation tokens / AbortController; guard writes with writableEnded |
| Double stream.end() call | end() is called twice, e.g. once explicitly and once by a pipe() chain completing | Call end() exactly once; stream.pipeline() manages this automatically |
Cause 1 – Missing return after res.end()
The most frequent cause in Express and raw HTTP handlers. When you call res.end(),
res.send(), or res.json(), execution continues to the next line unless
you also return. If any subsequent line calls res.write() or
res.end() again — directly or through a helper — you get
ERR_STREAM_WRITE_AFTER_END.
Broken — no return
// Express route — BROKEN
app.get('/user/:id', async (req, res) => {
const user = await db.findUser(req.params.id);
if (!user) {
res.status(404).json({ error: 'Not found' }); // ends the response
// ↑ no return — code continues executing below
}
// This fires even when user is null — stream already ended!
res.json(user); // Error [ERR_STREAM_WRITE_AFTER_END]: write after end
});
Fixed — return on every exit path
// Express route — FIXED
app.get('/user/:id', async (req, res) => {
const user = await db.findUser(req.params.id);
if (!user) {
return res.status(404).json({ error: 'Not found' }); // return exits here
}
return res.json(user); // only reached when user exists
});
ESM version (same fix applies)
// ESM Express — same pattern
export const getUser = async (req, res) => {
try {
const user = await User.findById(req.params.id);
if (!user) return res.status(404).json({ error: 'Not found' });
return res.json(user);
} catch (err) {
// Without return, both this and the above can fire
return res.status(500).json({ error: err.message });
}
};
Cause 2 – Async callback fires after response was closed
An async operation (database query, external fetch(), setTimeout())
takes longer than expected. Meanwhile, a different code path (an error handler, a timeout, an
early return) already called res.end(). When the slow async operation finally
resolves, its callback tries to write to the now-closed stream.
This is the "works locally, breaks in production" pattern: locally everything is fast so the race never happens; in production with real database latency the timing gap opens up.
Broken — async callback writes after timeout already ended response
// Raw Node.js HTTP — BROKEN
const http = require('http');
http.createServer(async (req, res) => {
// Timeout guard — ends the response after 2 s
const timeout = setTimeout(() => {
res.writeHead(504);
res.end('Gateway timeout');
}, 2000);
// Slow DB query — takes 3 s in production
const data = await db.slowQuery();
clearTimeout(timeout);
// By the time this runs, the timeout already called res.end()!
res.writeHead(200);
res.end(JSON.stringify(data)); // Error [ERR_STREAM_WRITE_AFTER_END]
}).listen(3000);
Fixed — guard with writableEnded
// Raw Node.js HTTP — FIXED
const http = require('http');
http.createServer(async (req, res) => {
const timeout = setTimeout(() => {
if (res.writableEnded) return; // already responded — skip
res.writeHead(504);
res.end('Gateway timeout');
}, 2000);
try {
const data = await db.slowQuery();
clearTimeout(timeout);
if (res.writableEnded || res.destroyed) return; // timeout already fired — skip
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(data));
} catch (err) {
clearTimeout(timeout);
if (res.writableEnded || res.destroyed) return;
res.writeHead(500);
res.end('Internal Server Error');
}
}).listen(3000);
Fixed — AbortController cancels the async work
// Cancel in-flight async work when the client disconnects
const http = require('http');
http.createServer(async (req, res) => {
const controller = new AbortController();
const { signal } = controller;
// Cancel the query if the client disconnects early
res.on('close', () => controller.abort());
try {
// Pass signal to fetch, db drivers, etc. — they throw AbortError on cancel
const response = await fetch('https://api.example.com/data', { signal });
const data = await response.json();
if (res.writableEnded) return; // closed between fetch and this line
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(data));
} catch (err) {
if (err.name === 'AbortError') return; // client disconnected — ignore
if (res.writableEnded) return;
res.writeHead(500);
res.end('Error');
}
}).listen(3000);
Cause 3 – Multiple res.send() / res.json() in Express
Express's res.send(), res.json(), res.redirect(), and
res.render() all call res.end() internally. Calling any two of them
in one request handler triggers ERR_STREAM_WRITE_AFTER_END.
// Express — BROKEN: two res.json() calls
app.post('/process', async (req, res) => {
try {
const result = await process(req.body);
res.json({ success: true, result }); // ends the response
// Audit log call — triggers a second response accidentally
await auditLog(req.body);
res.json({ success: true }); // Error [ERR_STREAM_WRITE_AFTER_END]
} catch (err) {
res.status(500).json({ error: err.message }); // may also fire after first json()
}
});
// FIXED: return on every path, await audit log before responding
app.post('/process', async (req, res) => {
try {
const result = await process(req.body);
await auditLog(req.body); // do work before responding
return res.json({ success: true, result });
} catch (err) {
return res.status(500).json({ error: err.message });
}
});
Cause 4 – Middleware chain: next() after res.end()
In Express middleware, calling next() after a res.end()-equivalent
passes control to the next middleware. If that middleware also writes to res,
you get ERR_STREAM_WRITE_AFTER_END.
// Middleware — BROKEN
function authMiddleware(req, res, next) {
if (!req.headers.authorization) {
res.status(401).json({ error: 'Unauthorized' }); // ends response
next(); // ← BUG: passes control to next middleware on closed stream
return;
}
next();
}
// Middleware — FIXED: return without calling next() after sending
function authMiddleware(req, res, next) {
if (!req.headers.authorization) {
return res.status(401).json({ error: 'Unauthorized' }); // return prevents next()
}
next();
}
If you register custom error-handling middleware with the four-argument signature (err, req, res, next), make sure it does not call next(err) after res.json(): the default Express error handler will then also try to end the response. Always return after sending in error handlers.
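A sketch of such an error handler with the guard in place (in Express, the four-argument signature is what marks a function as error-handling middleware):

```javascript
// Express-style error-handling middleware: respond once, never write twice
function errorHandler(err, req, res, next) {
  if (res.writableEnded || res.destroyed) {
    // A response was already sent; hand off to Express's default handler,
    // which closes the connection instead of writing again.
    return next(err);
  }
  res.status(500).json({ error: err.message }); // ends the response
  // Deliberately no next(err) here: the response is complete.
}
```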
Cause 5 – Pipeline failure: upstream error writes to destroyed sink
A common pattern in file-serving and streaming APIs: readStream.pipe(res). If the
read stream errors (file not found, permission denied, network hiccup), you catch the error and
try to send a 500 response. But pipe() may have already ended
res, so the error-response write fails.
Broken — manual pipe with error handler writing after pipe ended res
// BROKEN — manual pipe()
const fs = require('fs');
const http = require('http');
http.createServer((req, res) => {
const stream = fs.createReadStream('/path/to/file');
stream.on('error', (err) => {
// By the time this fires, pipe() may have already written headers + partial
// body and called res.end(). Writing here throws ERR_STREAM_WRITE_AFTER_END.
res.writeHead(500);
res.end('Error reading file'); // Error [ERR_STREAM_WRITE_AFTER_END]
});
stream.pipe(res); // pipe auto-calls res.end() when stream finishes
}).listen(3000);
Fixed — stream.pipeline() handles cleanup automatically
// FIXED — stream.pipeline() (Node.js 10+)
const { pipeline } = require('stream');
const fs = require('fs');
const http = require('http');
http.createServer((req, res) => {
const stream = fs.createReadStream('/path/to/file');
pipeline(stream, res, (err) => {
if (!err) return; // success — res was ended cleanly by pipeline
if (res.writableEnded || res.destroyed) return; // already handled
// Headers not yet sent — we can still send an error response
if (!res.headersSent) {
res.writeHead(500);
}
res.end('Error reading file');
});
}).listen(3000);
Fixed — stream/promises pipeline (Node.js 15+)
// FIXED — async/await with stream/promises (Node.js 15+)
import { pipeline } from 'stream/promises';
import { createReadStream } from 'fs';
import { createServer } from 'http';
createServer(async (req, res) => {
try {
await pipeline(createReadStream('/path/to/file'), res);
} catch (err) {
// pipeline() destroys all streams on error — res is destroyed here
if (res.writableEnded || res.destroyed) return;
if (!res.headersSent) res.writeHead(500);
res.end('Error');
}
}).listen(3000);
Why pipeline() fixes this: stream.pipe() does not forward errors between streams and does not destroy the upstream stream when the downstream closes or fails. stream.pipeline() destroys every stream in the chain as soon as any one of them errors or closes, which eliminates the write-to-destroyed-stream race entirely.
Cause 6 – http-proxy / http-proxy-middleware: client disconnects mid-proxy
node-http-proxy and its Express wrapper http-proxy-middleware pipe
the upstream server's response to the client's ServerResponse. If the client
disconnects (browser tab closed, Cypress test reset, network timeout) before the upstream
reply arrives, the client's res is destroyed. The proxy's data and end
callbacks then attempt to write to the destroyed stream.
The same pattern appears in next-http-proxy-middleware,
create-react-app's dev proxy, and any custom proxy built on the
request or got libraries.
// http-proxy-middleware — BROKEN proxy error handler
const { createProxyMiddleware } = require('http-proxy-middleware');
app.use('/api', createProxyMiddleware({
target: 'http://backend:4000',
on: {
error: (err, req, res) => {
// If the client already disconnected, res is destroyed.
// Writing here throws ERR_STREAM_WRITE_AFTER_END.
res.writeHead(502);
res.end('Bad Gateway'); // Error [ERR_STREAM_WRITE_AFTER_END]
}
}
}));
// http-proxy-middleware — FIXED proxy error handler
const { createProxyMiddleware } = require('http-proxy-middleware');
app.use('/api', createProxyMiddleware({
target: 'http://backend:4000',
on: {
error: (err, req, res) => {
// Guard before writing — client may have already disconnected
if (res.writableEnded || res.destroyed || res.headersSent) return;
res.writeHead(502, { 'Content-Type': 'text/plain' });
res.end('Bad Gateway');
}
}
}));
// Raw node-http-proxy — guard and abort on client disconnect
const httpProxy = require('http-proxy');
const http = require('http');
const proxy = httpProxy.createProxyServer({});
proxy.on('error', (err, req, res) => {
if (res.writableEnded || res.destroyed) return;
if (!res.headersSent) res.writeHead(502);
res.end('Proxy error');
});
http.createServer((req, res) => {
// Cancel the proxy request if the client disconnects early
res.on('close', () => {
if (!res.writableEnded) {
req.destroy();
}
});
proxy.web(req, res, { target: 'http://backend:4000' });
}).listen(3000);
Cause 7 – Streaming aggregator / pub-sub (googleapis pubsub, kafkajs)
Libraries like @google-cloud/pubsub, kafkajs, and similar streaming
aggregators deliver messages through callbacks or async iterators. If your code routes those
messages to an HTTP response stream or a file write stream, and the stream is closed by a
timeout or cancellation before all messages are delivered, subsequent message callbacks will
try to write to the closed stream.
// googleapis pubsub — BROKEN: message arrives after res was ended
const { PubSub } = require('@google-cloud/pubsub');
app.get('/stream', (req, res) => {
const pubsub = new PubSub();
const subscription = pubsub.subscription('my-sub');
res.writeHead(200, { 'Content-Type': 'text/event-stream' });
subscription.on('message', (message) => {
// If the client disconnected and res was destroyed, this throws
res.write(`data: ${message.data}\n\n`); // Error [ERR_STREAM_WRITE_AFTER_END]
message.ack();
});
// No cleanup when client disconnects!
});
// googleapis pubsub — FIXED: cancel subscription on client disconnect
const { PubSub } = require('@google-cloud/pubsub');
app.get('/stream', (req, res) => {
const pubsub = new PubSub();
const subscription = pubsub.subscription('my-sub');
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive'
});
function onMessage(message) {
if (res.writableEnded || res.destroyed) {
// Stream closed — remove listener and ack/nack
subscription.removeListener('message', onMessage);
message.nack();
return;
}
res.write(`data: ${message.data}\n\n`);
message.ack();
}
subscription.on('message', onMessage);
// Clean up when client disconnects
res.on('close', () => {
subscription.removeListener('message', onMessage);
subscription.close().catch(() => {});
});
});
Safe Writable Guard Patterns
Reusable patterns to prevent ERR_STREAM_WRITE_AFTER_END in any context.
Universal write guard
/**
* Safely write to a Writable without throwing ERR_STREAM_WRITE_AFTER_END.
* Returns true if the write was performed, false if the stream was already ended.
*/
function safeWrite(writable, chunk) {
if (writable.writableEnded || writable.destroyed) return false;
writable.write(chunk);
return true;
}
/**
* Safely end a Writable — only if it hasn't been ended already.
*/
function safeEnd(writable, chunk) {
if (writable.writableEnded || writable.destroyed) return;
writable.end(chunk);
}
Close-event cancellation token (CJS)
const http = require('http');
http.createServer(async (req, res) => {
let cancelled = false;
res.on('close', () => { cancelled = true; });
function respond(fn) {
if (cancelled || res.writableEnded) return;
fn();
}
// Simulate slow async work (slowFetch stands in for any slow async call)
const data = await slowFetch();
respond(() => {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(data));
});
}).listen(3000);
Close-event cancellation token (ESM)
// ESM — identical logic with import syntax
import { createServer } from 'http';
createServer(async (req, res) => {
let cancelled = false;
res.on('close', () => { cancelled = true; });
const data = await slowFetch();
if (cancelled || res.writableEnded) return;
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(data));
}).listen(3000);
Checking writable state — property reference
const writable = getWritableSomehow();
// writable.writableEnded — true after .end() has been called
// Does NOT mean all data has been flushed. Use this to prevent write-after-end.
console.log(writable.writableEnded); // false initially; true after end()
// writable.writableFinished — true after 'finish' event (all data flushed)
console.log(writable.writableFinished);
// writable.destroyed — true after .destroy() or a fatal error
console.log(writable.destroyed);
// Correct guard: check both
if (writable.writableEnded || writable.destroyed) {
return; // do not attempt to write
}
writable.write(data);
Debugging Checklist
- Read the stack trace: the frame after ServerResponse.write or Writable.write names the exact file and line where write() was called after end().
- Search that file for all res.end(), res.send(), res.json(), and res.redirect() calls; confirm each one is preceded by return.
- Check every async callback in the handler (Promise .then(), async/await, setTimeout, event listeners) for a write after an earlier synchronous end.
- Check res.writableEnded at the top of every async callback that writes to res.
- Look for middleware that calls next() after a res.end()-equivalent and remove the next() call.
- Replace any readStream.pipe(res) with stream.pipeline(readStream, res, callback).
- For http-proxy, ensure the proxy error handler checks res.writableEnded before writing.
- For streaming / pub-sub, attach a res.on('close') handler that removes all message listeners and cancels any subscriptions.
- Run with NODE_DEBUG=stream to trace stream lifecycle events and find the exact sequence of end() and write() calls.
# Trace stream events to find the sequence of end() and write() calls
NODE_DEBUG=stream node app.js 2>&1 | grep -i "write\|end\|destroy"
A warning about the tempting non-fix: silencing the error with stream.on('error', () => {}) hides the underlying logic bug. The correct fix is to eliminate the write-after-end condition. A silent handler means your users get incomplete responses or no response at all, with no log evidence of what went wrong. Fix the cause; do not mute the symptom.
Frequently Asked Questions
What is ERR_STREAM_WRITE_AFTER_END in Node.js?
ERR_STREAM_WRITE_AFTER_END is thrown when stream.write() or res.write() is called on a Writable stream after stream.end() or res.end() has already been called. Calling end() permanently closes the writable side; any subsequent write is rejected. The full error message is: Error [ERR_STREAM_WRITE_AFTER_END]: write after end.
What is the difference between ERR_STREAM_WRITE_AFTER_END and ERR_HTTP_HEADERS_SENT?
ERR_HTTP_HEADERS_SENT fires at the HTTP layer when res.writeHead() or res.setHeader() is called after HTTP headers were already flushed. ERR_STREAM_WRITE_AFTER_END fires at the stream layer when res.write() or res.end() is called after the writable stream was fully closed. Both arise from "double-responding" in a handler. Check res.headersSent to guard against the first; check res.writableEnded to guard against the second.
How do I fix write after end in an Express route?
Add return before every res.json(), res.send(), or res.end() call so the handler function exits immediately. Example: if (!user) return res.status(404).json({ error: 'Not found' });. Without return, Express continues executing the rest of the function and hits another response call on the already-closed stream.
Why does ERR_STREAM_WRITE_AFTER_END only happen in production and not locally?
The bug is a timing race: an async callback fires after a previous code path already called res.end(). Locally, database and API responses are fast — the race window is too narrow to hit. In production, real latency widens the window and the bug surfaces under load or when queries are slow. The underlying write-after-end condition exists in both environments; production just exposes it.
Why does http-proxy throw ERR_STREAM_WRITE_AFTER_END?
node-http-proxy and http-proxy-middleware pipe the upstream response to the client's ServerResponse. If the client disconnects (browser navigates away, test framework resets the socket, timeout fires), the client res is destroyed before the proxied data arrives. The proxy's onError handler then calls res.end() on the already-destroyed stream. Fix by checking if (res.writableEnded || res.destroyed) return; at the top of the proxy error handler.
How do I fix ERR_STREAM_WRITE_AFTER_END in a stream pipeline?
Replace readStream.pipe(res) with stream.pipeline(readStream, res, callback). The pipeline() function automatically destroys all streams when any one errors or closes, preventing the race where a pipe() error handler tries to write to a stream that was already ended by the pipe completing. In Node.js 15+ you can use await pipeline(readStream, res) from 'stream/promises'.
What do writable.writableEnded and writable.destroyed mean?
writable.writableEnded becomes true the moment writable.end() is called, before all queued data is flushed. writable.destroyed becomes true after writable.destroy() is called or a fatal error occurs. Always guard with if (writable.writableEnded || writable.destroyed) return; before calling write() in asynchronous code. Checking only one of the two is insufficient: a stream that was destroyed without end() being called still reports writableEnded: false, so a writableEnded-only guard would let the doomed write through.
Related Errors
- EPIPE (write EPIPE): the OS-level equivalent; the reading end of a pipe or socket was closed, and it often occurs alongside ERR_STREAM_WRITE_AFTER_END in stream pipelines
- ECONNRESET (connection reset by peer): the remote TCP peer forcibly closed the connection; triggers similar cleanup issues in proxy and streaming code
- ERR_HTTP_HEADERS_SENT: the HTTP-layer complement; headers set after they were already sent, typically co-occurring with write-after-end when a handler sends two responses
- ERR_STREAM_DESTROYED: thrown when write() is called on a stream after destroy(); closely related but fires after destroy() rather than end()
- ERR_STREAM_CANNOT_PIPE: thrown when pipe() is called on a Writable stream, which has no readable side to pipe from