Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close means a stream inside stream.pipeline() (or a pipe chain) was destroyed before it emitted 'end' or 'finish'. This is expected when an HTTP client disconnects, AbortController fires, or a user cancels an LLM stream — catch the error and check err.code === 'ERR_STREAM_PREMATURE_CLOSE' to decide whether to suppress it or treat it as fatal. It is a real bug only when data was expected but the stream closed unexpectedly.
What is ERR_STREAM_PREMATURE_CLOSE?
ERR_STREAM_PREMATURE_CLOSE is a Node.js stream-layer error defined in the
official Node.js errors documentation.
It is thrown by the stream machinery whenever a readable or writable stream is destroyed or
closed before it naturally emits its terminal event ('end' for readables,
'finish' for writables).
stream.pipeline() — introduced in Node.js 10 — always propagates
ERR_STREAM_PREMATURE_CLOSE when any stream in the chain is closed early,
regardless of whether the closure was intentional. This means client disconnects, AbortController
aborts, zip parsers stopping after the first entry, and genuine mid-stream failures all produce
the same error code. Distinguishing intent from accident is your code's responsibility.
Depending on the runtime and wrapper library, the message appears in several forms:
- Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
- Error: Premature close (older Node.js or some wrapper libraries)
- FetchError: invalid response body while trying to fetch ...: Premature close (node-fetch v2/v3)
- APIConnectionError: Premature close (some OpenAI SDK versions)
- Error: Premature close with code: 'ERR_STREAM_PREMATURE_CLOSE'
Full Error Example
Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at new NodeError (node:internal/errors:405:5)
at ClientRequest.<anonymous> (node:_http_client:799:21)
at Object.onceWrapper (node:events:629:26)
at ClientRequest.emit (node:events:526:35)
at Socket.socketCloseListener (node:_http_client:381:11)
at Socket.emit (node:events:526:35)
at TCP.<anonymous> (node:net:313:12) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
Notice the error object has only code: 'ERR_STREAM_PREMATURE_CLOSE' — there is no
syscall, errno, or path property. This is a pure
Node.js stream error, not an OS-level error like EPIPE
or ECONNRESET.
node-fetch / undici variant
FetchError: invalid response body while trying to fetch https://api.example.com/stream:
Premature close
at /project/node_modules/node-fetch/src/index.js:98:13
at processTicksAndRejections (node:internal/process/task_queues:96:5) {
type: 'system',
code: 'ERR_STREAM_PREMATURE_CLOSE',
errno: ''
}
OpenAI SDK / Anthropic SDK variant
APIConnectionError: Connection error.
at /project/node_modules/openai/src/error.ts:...
Caused by: Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
at new NodeError (node:internal/errors:405:5)
at ClientRequest.<anonymous> (node:_http_client:799:21) {
code: 'ERR_STREAM_PREMATURE_CLOSE'
}
Diagnostic Fields
| Property | Value | Notes |
|---|---|---|
| err.code | 'ERR_STREAM_PREMATURE_CLOSE' | Always present. Use this to identify the error. |
| err.message | 'Premature close' | Fixed string. Older Node.js versions may omit the [ERR_STREAM_PREMATURE_CLOSE] prefix. |
| err.syscall | Not present | Unlike EPIPE or ECONNRESET, this is a stream-layer error with no OS syscall component. |
| err.errno | Not present (or '' in node-fetch) | No OS errno. Its absence distinguishes this from system errors. |
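These fields can be checked programmatically. A small sketch (the isStreamLayerClose helper below is hypothetical, not a library function):

```javascript
// Hypothetical helper: distinguish the stream-layer premature close
// from OS-level socket errors using the diagnostic fields above.
function isStreamLayerClose(err) {
  return err != null
    && err.code === 'ERR_STREAM_PREMATURE_CLOSE'
    && err.syscall === undefined; // OS errors like EPIPE carry a syscall
}

console.log(isStreamLayerClose({ code: 'ERR_STREAM_PREMATURE_CLOSE' })); // true
console.log(isStreamLayerClose({ code: 'EPIPE', syscall: 'write', errno: -32 })); // false
```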
All Causes at a Glance
| Cause | Context | Intentional? | Fix |
|---|---|---|---|
| HTTP client closes browser tab / cancels request | HTTP server streaming response via pipeline() | Yes | Catch ERR_STREAM_PREMATURE_CLOSE and return; listen for req.on('close') |
| AbortController aborts a fetch response body | Client-side fetch with AbortSignal | Yes | Check signal.aborted before treating as fatal; expect AbortError first |
| LLM API stream aborted (OpenAI, Anthropic) | Streaming chat completion, user navigates away | Yes | Pass abortSignal: req.signal; catch ERR_STREAM_PREMATURE_CLOSE as normal abort |
| Proxy / load balancer timeout mid-stream | Long-running SSE or download through nginx / AWS ALB | No (bug) | Tune proxy idle/read timeouts; add heartbeat pings to keep the connection alive |
| File write stream destroyed before flush | fs.createWriteStream() in a pipeline | Sometimes | Ensure stream.destroy() is not called before pipeline() finishes; listen for 'finish' |
| Zip / archive parser stops early | node-unzipper, unzip-stream in pipeline | Sometimes | Use the library's dedicated pipeline adapter or catch ERR_STREAM_PREMATURE_CLOSE when reading fewer entries than exist |
| stream.destroy(err) inside pipeline | Manual stream cancellation | Yes | Pipeline masks the original error; attach a direct 'error' listener on the source stream to capture the real cause |
| node-fetch v2 streaming response body | Calling .text() / .json() on a response that was aborted | Sometimes | Migrate to native fetch (Node.js 18+) or undici; catch and check err.code |
| Long-running OpenAI stream exceeds 10 minutes | Chat completion with stream: true and a slow model | No (bug) | Set an explicit request timeout; split into smaller requests; use SDK retry with timeout handling |
Cause 1 – HTTP Client Disconnects Mid-Stream (Browser Tab Closes)
When a Node.js server streams data (file download, SSE, chunked JSON) and the client closes
the browser tab or the network drops, the HTTP socket is destroyed. If the server is using
stream.pipeline() to write to res, the pipeline fires
ERR_STREAM_PREMATURE_CLOSE. This is expected and should not be logged as an error.
// CJS — HTTP server streaming a file with pipeline()
const http = require('http');
const fs = require('fs');
const { pipeline } = require('stream');
const server = http.createServer((req, res) => {
const src = fs.createReadStream('/var/data/large-dataset.csv');
pipeline(src, res, (err) => {
if (!err) return; // success
if (err.code === 'ERR_STREAM_PREMATURE_CLOSE') {
// Client disconnected — expected, not a bug
// src is already destroyed by pipeline; no cleanup needed
return;
}
// Unexpected error (disk read error, etc.)
console.error('Stream error:', err);
});
});
server.listen(3000);
// ESM — same pattern with async/await
import { createServer } from 'node:http';
import { createReadStream } from 'node:fs';
import { pipeline } from 'node:stream/promises';
const server = createServer(async (req, res) => {
try {
await pipeline(createReadStream('/var/data/large-dataset.csv'), res);
} catch (err) {
if (err.code === 'ERR_STREAM_PREMATURE_CLOSE') {
return; // client disconnected — not an error
}
console.error('Unexpected streaming error:', err);
}
});
server.listen(3000);
Also listen for req.on('close') or res.on('close') to set a flag and stop any background work (database queries,
expensive computation) as soon as the client leaves. pipeline() handles stream
cleanup automatically, but your application-level resources need manual cancellation.
// CJS — cancel application-level work when the client disconnects
const http = require('http');
const { pipeline } = require('stream/promises');
const server = http.createServer(async (req, res) => {
  let clientConnected = true;
  req.on('close', () => {
    clientConnected = false;
    // Cancel expensive work (e.g. abort a DB cursor) here
  });
  const dbStream = await db.query('SELECT * FROM large_table').stream();
  // The client may have left while the query was being prepared
  if (!clientConnected) {
    dbStream.destroy();
    res.end();
    return;
  }
  try {
    await pipeline(dbStream, res);
  } catch (err) {
    if (err.code === 'ERR_STREAM_PREMATURE_CLOSE') return;
    console.error('DB stream error:', err);
  }
});
Cause 2 – LLM Streaming APIs: OpenAI and Anthropic SDK Aborts
When you stream a chat completion
from the OpenAI or Anthropic Node.js SDKs and the client disconnects (browser tab closed,
user navigated away, frontend AbortController fired), the underlying HTTP response
stream is destroyed mid-read. The SDK surfaces this as ERR_STREAM_PREMATURE_CLOSE
(sometimes wrapped as APIConnectionError).
OpenAI SDK (v4+) — streaming with AbortSignal
// CJS — Express route streaming OpenAI chat completion
const express = require('express');
const OpenAI = require('openai');
const app = express();
app.use(express.json()); // required to read req.body.message below
const openai = new OpenAI();
app.post('/chat', async (req, res) => {
// Forward the request's AbortSignal to OpenAI
// When the browser tab closes, req.signal fires and cancels the API call
const controller = new AbortController();
req.on('close', () => {
if (!res.writableEnded) {
controller.abort(); // stop the OpenAI stream
}
});
res.setHeader('Content-Type', 'text/event-stream');
res.setHeader('Cache-Control', 'no-cache');
res.setHeader('Connection', 'keep-alive');
try {
const stream = openai.beta.chat.completions.stream(
{
model: 'gpt-4o',
messages: [{ role: 'user', content: req.body.message }],
},
{ signal: controller.signal }
);
for await (const chunk of stream) {
if (res.writableEnded) break;
const text = chunk.choices[0]?.delta?.content ?? '';
if (text) res.write(`data: ${JSON.stringify({ text })}\n\n`);
}
res.write('data: [DONE]\n\n');
res.end();
} catch (err) {
// AbortError: user navigated away — not a bug
if (err.name === 'AbortError' || err.code === 'ERR_STREAM_PREMATURE_CLOSE') {
return;
}
// Real API error
console.error('OpenAI stream error:', err);
if (!res.headersSent) {
res.status(500).json({ error: 'Stream failed' });
}
}
});
app.listen(3000);
Anthropic SDK (TypeScript/JS) — streaming with AbortSignal
// ESM — Next.js / Hono route streaming Anthropic Claude
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
export async function POST(request) {
const body = await request.json();
const encoder = new TextEncoder();
const stream = new ReadableStream({
async start(controller) {
try {
const anthropicStream = anthropic.messages.stream(
{
model: 'claude-opus-4-5',
max_tokens: 1024,
messages: [{ role: 'user', content: body.message }],
},
{ signal: request.signal } // forward the HTTP request's abort signal
);
for await (const event of anthropicStream) {
if (event.type === 'content_block_delta') {
const text = event.delta?.text ?? '';
controller.enqueue(encoder.encode(`data: ${JSON.stringify({ text })}\n\n`));
}
}
controller.enqueue(encoder.encode('data: [DONE]\n\n'));
controller.close();
} catch (err) {
// Client aborted — request.signal was fired
if (err.name === 'AbortError' || err.code === 'ERR_STREAM_PREMATURE_CLOSE') {
controller.close();
return;
}
controller.error(err);
}
},
});
return new Response(stream, {
headers: { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache' },
});
}
LangChain note: when a streaming call through the ChatOpenAI or
ChatAnthropic class throws ERR_STREAM_PREMATURE_CLOSE, the LangChain
retry mechanism (p-retry) may not catch it, causing the error to propagate
to your agent or chain instead of being retried. Wrap the chain invocation in your own
try/catch and handle ERR_STREAM_PREMATURE_CLOSE explicitly at the top level.
Cause 3 – AbortController / AbortSignal Aborting a Fetch Response Body
When you use AbortController to cancel a fetch() request mid-stream
— for example, a user-initiated cancel, a timeout, or switching to a different query —
the response body stream is destroyed. Node.js (using undici internally for
native fetch) will produce ERR_STREAM_PREMATURE_CLOSE as you
consume the body.
// ESM — aborting a streaming fetch response safely (top-level await requires ESM)
const controller = new AbortController();
const { signal } = controller;
// Cancel after 5 seconds (or on user action)
setTimeout(() => controller.abort(), 5000);
try {
const res = await fetch('https://api.example.com/large-data', { signal });
if (!res.ok) throw new Error(`HTTP ${res.status}`);
// Consume the body as a stream
for await (const chunk of res.body) {
process.stdout.write(chunk);
}
} catch (err) {
if (err.name === 'AbortError') {
// Normal timeout/cancel — not an error
console.log('Request aborted by user or timeout.');
return;
}
if (err.code === 'ERR_STREAM_PREMATURE_CLOSE') {
// The abort closed the body stream before it ended
// Also expected when aborting — check signal.aborted
if (signal.aborted) {
console.log('Stream closed due to abort signal.');
return;
}
}
// Real error
throw err;
}
Prefer AbortSignal.timeout(ms) over a
manual setTimeout(() => controller.abort(), ms). It is composable via
AbortSignal.any([signal1, signal2]) (Node.js 20.3+) and avoids leaked timers.
const res = await fetch(url, { signal: AbortSignal.timeout(10_000) });
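For example, combining a user-driven controller with a hard deadline (AbortSignal.any requires Node.js 20.3+):

```javascript
// Combine a user cancel signal with a hard deadline into one signal.
const userCancel = new AbortController();
const combined = AbortSignal.any([
  userCancel.signal,
  AbortSignal.timeout(10_000), // hard deadline
]);

userCancel.abort(new Error('user clicked cancel'));
console.log(combined.aborted); // true: either source aborts the combined signal
```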
node-fetch v2 / v3 specific behaviour
node-fetch wraps ERR_STREAM_PREMATURE_CLOSE in a
FetchError with type: 'system'. The underlying code
is still accessible via err.code:
// node-fetch v2 (CJS; v3 is ESM-only and must be imported).
// Run the code below inside an async function: CJS has no top-level await.
const fetch = require('node-fetch');
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 8000);
try {
const res = await fetch('https://api.example.com/stream', {
signal: controller.signal,
});
const text = await res.text();
console.log(text);
} catch (err) {
if (err.name === 'AbortError') {
console.log('Fetch aborted (timeout or user cancel).');
} else if (err.type === 'system' && err.code === 'ERR_STREAM_PREMATURE_CLOSE') {
// node-fetch wraps premature close as FetchError with type: 'system'
if (controller.signal.aborted) {
console.log('Body stream closed by abort.');
} else {
console.error('Unexpected premature close:', err);
}
} else {
throw err;
}
} finally {
clearTimeout(timeout);
}
Note that node-fetch is no longer actively maintained.
For Node.js 18+ projects, use the built-in fetch (powered by
undici) or install undici directly. Both handle abort signals
natively and emit AbortError reliably before the stream error.
Cause 4 – File Stream Destroyed Before Write Finishes
If a writable fs.createWriteStream is passed to pipeline() and
something calls .destroy() on it before all data is flushed to disk, pipeline
throws ERR_STREAM_PREMATURE_CLOSE. Common causes: process signal handlers,
incomplete error handling in transform streams, or concurrent operations on the same file.
// CJS — safe file write pipeline with cleanup
const { pipeline } = require('stream/promises');
const { createReadStream, createWriteStream } = require('fs');
const { unlink } = require('fs/promises');
const path = require('path');
async function downloadToFile(readableStream, destPath) {
const writer = createWriteStream(destPath);
try {
await pipeline(readableStream, writer);
// 'finish' fired — file is completely written
console.log('File written successfully:', destPath);
} catch (err) {
// Ensure the partial file is removed on any failure
try { await unlink(destPath); } catch {}
if (err.code === 'ERR_STREAM_PREMATURE_CLOSE') {
throw new Error(`Stream closed before file was fully written: ${path.basename(destPath)}`);
}
throw err;
}
}
// Usage
const https = require('https');
https.get('https://example.com/data.zip', (res) => {
  // Handle the returned promise; an unhandled rejection would crash the process
  downloadToFile(res, '/tmp/data.zip').catch(console.error);
});
Cause 5 – pipeline() Semantics: When It Throws vs. Swallows
Understanding when pipeline() propagates ERR_STREAM_PREMATURE_CLOSE
versus your original error is critical for correct error handling.
| Scenario | What pipeline() reports | How to get the real error |
|---|---|---|
| Source stream emits 'error' event | Forwards the original error | Normal; err.code is the source error code |
| Source stream is destroyed with no argument: src.destroy() | ERR_STREAM_PREMATURE_CLOSE from the downstream stream | Attach src.on('error', ...) before calling pipeline() |
| Source stream is destroyed with an error: src.destroy(err) | May report ERR_STREAM_PREMATURE_CLOSE from downstream, not err | Attach src.on('error', ...) directly to capture the original err |
| Writable (destination) stream destroyed early | ERR_STREAM_PREMATURE_CLOSE | Check res.destroyed / req.aborted to determine the cause |
| Transform stream throws inside _transform() | Forwards the transform error | Normal; err is whatever you threw in _transform |
| Client disconnects during pipeline to HTTP res | ERR_STREAM_PREMATURE_CLOSE | Expected; check req.socket.destroyed to confirm the client left |
// Capturing the real error when pipeline masks it
const { pipeline } = require('stream/promises');
async function safeStream(source, destination) {
let realError = null;
// Capture the actual source error before pipeline sees it
source.on('error', (err) => {
realError = err;
});
try {
await pipeline(source, destination);
} catch (err) {
if (err.code === 'ERR_STREAM_PREMATURE_CLOSE' && realError) {
// Pipeline masked the real error — use the original
throw realError;
}
throw err;
}
}
Cause 6 – Zip / Archive Parsers (node-unzipper)
Zip parsers like node-unzipper internally call stream.destroy()
when they have finished reading entries, which triggers ERR_STREAM_PREMATURE_CLOSE
in pipeline even though extraction completed successfully. Verify success by checking whether
all expected entries were processed before checking the error.
// CJS — unzipper with pipeline (handle premature close)
const unzipper = require('unzipper');
const { pipeline } = require('stream/promises');
const { createReadStream, createWriteStream } = require('fs');
async function extractZip(zipPath, outputDir) {
const zip = createReadStream(zipPath).pipe(unzipper.Parse({ forceStream: true }));
for await (const entry of zip) {
const { path: entryPath, type } = entry;
if (type === 'File') {
// Use pipeline per entry — catch premature close per entry
try {
await pipeline(entry, createWriteStream(`${outputDir}/${entryPath}`));
} catch (err) {
if (err.code === 'ERR_STREAM_PREMATURE_CLOSE') {
// unzipper closes the entry stream after it's consumed — expected
continue;
}
throw err;
}
} else {
entry.autodrain(); // skip directories
}
}
}
extractZip('/tmp/archive.zip', '/tmp/output').catch(console.error);
Safe AbortSignal-Aware Stream Consumption Pattern
The following pattern is reusable for any scenario where you consume a readable stream and
want clean cancellation via AbortSignal:
// ESM — AbortSignal-aware stream consumer
import { pipeline } from 'node:stream/promises';
import { Writable } from 'node:stream';
/**
* Consume a ReadableStream (Node.js or web) with abort support.
* Returns the concatenated Buffer, or throws if an unexpected error occurs.
* Resolves to null if aborted intentionally.
*/
async function consumeStreamSafely(readable, signal) {
const chunks = [];
const writer = new Writable({
write(chunk, _enc, cb) {
chunks.push(chunk);
cb();
},
});
try {
await pipeline(readable, writer, { signal });
return Buffer.concat(chunks);
} catch (err) {
// AbortError is thrown by pipeline when the signal fires (Node.js 16+)
if (err.name === 'AbortError') {
return null; // intentional cancel
}
// ERR_STREAM_PREMATURE_CLOSE can appear when aborting in some Node.js versions
if (err.code === 'ERR_STREAM_PREMATURE_CLOSE' && signal?.aborted) {
return null; // intentional cancel
}
throw err; // unexpected error
}
}
// Usage
const controller = new AbortController();
const result = await consumeStreamSafely(responseBodyStream, controller.signal);
Passing the signal option
to pipeline() (callback form) or stream/promises pipeline() causes it
to throw AbortError when the signal fires and cleanly destroys all streams.
This is the preferred cancellation mechanism: it avoids the ambiguity of
ERR_STREAM_PREMATURE_CLOSE in most cases.
await pipeline(src, dest, { signal: controller.signal });
Stream Lifecycle Event Debugging
To pinpoint which stream in a chain closed prematurely, attach listeners for all lifecycle events and log them in order. The sequence tells you exactly where things went wrong.
// CJS — debug stream lifecycle to diagnose premature close
function debugStream(stream, label) {
for (const event of ['close', 'end', 'finish', 'error', 'aborted']) {
stream.on(event, (...args) => {
console.log(`[${label}] event: ${event}`, args.length ? args[0] : '');
});
}
return stream;
}
const http = require('http');
const { pipeline } = require('stream/promises');
const server = http.createServer(async (req, res) => {
const readStream = debugStream(getSomeReadableStream(), 'source');
const writeStream = debugStream(res, 'response');
try {
await pipeline(readStream, writeStream);
} catch (err) {
console.error('Pipeline error:', err.code, err.message);
// The debug log above will show which stream fired 'close' first
}
});
server.listen(3000);
Expected output when a client disconnects mid-stream:
[response] event: close <-- client disconnected; socket closed
[source] event: close <-- pipeline destroyed source as cleanup
Pipeline error: ERR_STREAM_PREMATURE_CLOSE Premature close
Expected output when the stream completes normally:
[source] event: end <-- source finished emitting data
[response] event: finish <-- all data flushed to client
[response] event: close <-- TCP connection closed cleanly
got / undici Specific Patterns
got
// ESM — got streaming with abort and premature close handling (got v12+ is ESM-only)
import got from 'got';
import { pipeline } from 'node:stream/promises';
import { createWriteStream } from 'node:fs';
async function downloadWithGot(url, destPath, signal) {
const downloadStream = got.stream(url, { signal });
try {
await pipeline(downloadStream, createWriteStream(destPath));
} catch (err) {
if (err.name === 'AbortError' || err.code === 'ERR_STREAM_PREMATURE_CLOSE') {
if (signal?.aborted) {
console.log('Download cancelled.');
return;
}
}
// got wraps network errors — check err.response and err.code
throw err;
}
}
undici (Node.js built-in fetch engine)
// ESM — undici request() with abort
// (undici's stream() expects a factory that returns a *writable*;
// request() is the API that hands you the response body stream)
import { request } from 'undici';
import { createWriteStream } from 'node:fs';
import { pipeline } from 'node:stream/promises';
const controller = new AbortController();
try {
  const { body } = await request('https://api.example.com/large-file', {
    method: 'GET',
    signal: controller.signal,
  });
  await pipeline(body, createWriteStream('/tmp/output'));
} catch (err) {
if (err.name === 'AbortError' || err.code === 'ERR_STREAM_PREMATURE_CLOSE') {
if (controller.signal.aborted) {
console.log('Download aborted intentionally.');
return;
}
}
throw err;
}
Docker and CI Behavior
ERR_STREAM_PREMATURE_CLOSE appears more frequently in Docker and CI environments
due to:
- Container shutdown signals: Docker sends SIGTERM then SIGKILL during docker stop. If your process does not handle SIGTERM gracefully, active streams are destroyed mid-transfer, producing premature close errors in logs.
- CI job cancellation: GitHub Actions, GitLab CI, and Jenkins cancel jobs by sending signals to the process group. Long-running downloads or stream-based test fixtures will receive premature close.
- Proxy timeouts: Many CI environments route outbound traffic through proxies with short idle timeouts (30–60 s). Long-running SSE or streaming API calls stall and the proxy kills the connection.
// Graceful SIGTERM handling — finish in-flight streams before exit
process.on('SIGTERM', async () => {
console.log('SIGTERM received — shutting down gracefully');
server.close(async () => {
// Give in-flight pipeline() calls time to finish or catch premature close
await new Promise((r) => setTimeout(r, 1000));
process.exit(0);
});
});
Windows-Specific Behavior
On Windows, ERR_STREAM_PREMATURE_CLOSE behaves identically to Linux and macOS
at the Node.js stream layer — it is a pure JavaScript error with no OS-level difference.
However, Windows pipe semantics differ for named pipes and child process stdio:
- Windows named pipes close differently from POSIX pipes; scripts using pipeline() across child process stdio may see premature close when the child exits before the parent finishes writing.
- If you see ERR_STREAM_PREMATURE_CLOSE on child.stdin on Windows but not Linux, add a child.on('exit', ...) handler and stop writing when the child exits.
Debugging Checklist
- Check err.code === 'ERR_STREAM_PREMATURE_CLOSE': if this is the only error property (no syscall, no errno), it is a stream-layer close, not an OS error.
- Check whether the close was intentional: was AbortController.abort() called? Did the client disconnect (req.socket.destroyed)? Did your code call stream.destroy() manually?
- If using pipeline(), attach a direct stream.on('error', ...) listener to the source stream before calling pipeline to capture the real error if pipeline masks it.
- Add lifecycle event logging ('close', 'end', 'finish') to each stream to identify which one closed first and in what order.
- For LLM streaming (OpenAI, Anthropic), pass signal: req.signal or a controller's signal so aborts are clean AbortError events rather than stream errors.
- For proxy/load-balancer environments, verify the proxy idle timeout is longer than your maximum stream duration or add periodic keep-alive writes (e.g. SSE comment lines: `: heartbeat\n\n`).
- Enable NODE_DEBUG=stream for verbose stream lifecycle output: NODE_DEBUG=stream node app.js 2>&1 | grep premature
- For Docker/CI, ensure your process handles SIGTERM gracefully and waits for in-flight pipeline calls before exiting.
# Verbose stream debug output
NODE_DEBUG=stream node app.js 2>&1 | grep -i "premature\|close\|destroy"
Do not blanket-suppress this error: silently swallowing every ERR_STREAM_PREMATURE_CLOSE will
hide genuine failures, such as dropped downloads, truncated file writes, and incomplete LLM responses
caused by server-side bugs. Always classify first: was the close intentional
(signal.aborted, req.socket.destroyed) or unexpected?
Log unexpected occurrences with enough context (URL, user ID, stream size written so far)
to diagnose in production.
Frequently Asked Questions
What is ERR_STREAM_PREMATURE_CLOSE in Node.js?
ERR_STREAM_PREMATURE_CLOSE is thrown by Node.js when a stream is destroyed or closed before it emits its natural terminal event ('end' for readables, 'finish' for writables). stream.pipeline() always propagates this error when any stream in the chain closes early. It appears as Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close in the stack trace. The error has only code and message — no syscall or errno.
How do I fix ERR_STREAM_PREMATURE_CLOSE in Node.js?
Catch the error and classify intent: if (err.code === 'ERR_STREAM_PREMATURE_CLOSE') { if (signal?.aborted || req.socket?.destroyed) return; /* real error: */ throw err; }. For HTTP servers, listen for req.on('close') and stop producing data when the client leaves. For LLM streaming (OpenAI, Anthropic), pass an AbortSignal and catch AbortError first. For AbortController-aborted fetch bodies, check signal.aborted before treating the error as fatal.
Why does stream.pipeline() throw ERR_STREAM_PREMATURE_CLOSE instead of my original error?
When you call stream.destroy(myError) inside a pipeline, a downstream stream may close before it reads the original error, reporting ERR_STREAM_PREMATURE_CLOSE instead. To capture your original error, attach a direct source.on('error', (err) => { realError = err; }) listener before passing the stream to pipeline(). Then in the pipeline catch block, if err.code === 'ERR_STREAM_PREMATURE_CLOSE' and realError is set, throw realError instead.
Why does ERR_STREAM_PREMATURE_CLOSE happen with OpenAI or Anthropic streaming?
When the browser tab closes, the user cancels, or a proxy times out before the LLM response finishes, the HTTP response body stream is destroyed mid-read. The OpenAI and Anthropic SDKs surface this as ERR_STREAM_PREMATURE_CLOSE (sometimes wrapped as APIConnectionError). Fix by passing an AbortSignal tied to the request lifecycle to the SDK call. When the signal fires on client disconnect, the SDK aborts cleanly and throws AbortError, which is easier to classify as intentional.
How do I handle ERR_STREAM_PREMATURE_CLOSE in an Express or Fastify HTTP server?
Listen for req.on('close') to detect client disconnection. Set a flag (let aborted = false; req.on('close', () => { aborted = true; });). When using pipeline() to stream a response, catch the error and check the flag: if aborted is true when ERR_STREAM_PREMATURE_CLOSE fires, return silently. If aborted is false, it is an unexpected error worth logging.
Does ERR_STREAM_PREMATURE_CLOSE always mean something went wrong?
No. It is expected whenever a consumer intentionally stops reading before the stream ends — browser tab closed, AbortController fired, user cancelled a download, or a zip parser stopped after reading only some entries. In these cases the error is informational: clean up resources and move on. It is a real bug only when the stream closed unexpectedly and data may be lost or corrupted.
What is the difference between ERR_STREAM_PREMATURE_CLOSE and EPIPE?
EPIPE is an OS-level error (errno 32) thrown when a process writes to a pipe or socket whose read end is already closed. ERR_STREAM_PREMATURE_CLOSE is a Node.js stream-layer error thrown when a stream is destroyed before emitting 'end' or 'finish'. Both can appear when a client disconnects mid-stream, but EPIPE surfaces on the write syscall while ERR_STREAM_PREMATURE_CLOSE is generated by the stream machinery during cleanup after the socket is gone.
Why does ERR_STREAM_PREMATURE_CLOSE happen in production but not locally?
In production, real users close tabs and cancel requests, proxy/load balancers impose read timeouts (AWS ALB default: 60 s), and long-running LLM streams exceed proxy idle timeouts. Locally, requests complete before any timeout kicks in and there are no proxy hops. To reproduce locally: use AbortController with a short timeout, or test with nginx as a reverse proxy configured with a low proxy_read_timeout.
How do I suppress ERR_STREAM_PREMATURE_CLOSE when intentionally destroying a stream?
Pass your own error to stream.destroy(new MyIntentionalError()). Then in the pipeline() catch block, check for your error type first before falling back to the premature close check. Alternatively, use the signal option added in Node.js 16+: await pipeline(src, dest, { signal }). When the signal fires, pipeline() throws AbortError rather than ERR_STREAM_PREMATURE_CLOSE, making intent unambiguous.
Related Errors
- EPIPE: write EPIPE. OS-level broken pipe error; write to a pipe/socket whose read end has closed; often accompanies premature close on the write path.
- ECONNRESET: connection reset by peer. TCP RST from the remote peer; the underlying cause of premature close when a network connection drops mid-stream.
- ETIMEDOUT: connection timed out. No response within the timeout window; can precede premature close in proxy-heavy environments.
- ECONNREFUSED: connection refused. Target port not listening; the connection is never established (precedes any stream activity).
- UnhandledPromiseRejection. If ERR_STREAM_PREMATURE_CLOSE propagates from an async pipeline call without a catch, it becomes an unhandled rejection.