fix(http2): reject non-zero content-length with END_STREAM on HEADERS per RFC 9113 §8.6 (#29059)
cirospaciari wants to merge 3 commits into main
Conversation
… per RFC 9113 §8.6

A request with a non-zero content-length header and END_STREAM on the HEADERS frame declares a body it can never send. RFC 9113 §8.6 treats this as malformed; nghttp2 RSTs the stream with PROTOCOL_ERROR. Bun accepted such requests and delivered them to the 'stream' handler.

decodeHeaderBlock now records the parsed content-length on the stream and, before dispatching headers, RSTs with PROTOCOL_ERROR when END_STREAM is set and content-length > 0 (server-side only). Adds a focused test.

The upstream Node test test-http2-multi-content-length.js also exercises client-side error propagation for the resulting RST, which surfaces a separate Bun behavior (client emits 'end' before 'error' on RST); that is tracked separately.
Updated 9:46 PM PT - Apr 8th, 2026
❌ @cirospaciari, your commit f148ea3 has 2 failures in
🧪 To try this PR locally: `bunx bun-pr 29059`. That installs a local version of the PR runnable as `bun-29059 --bun`.
Walkthrough

Implements HTTP/2 content-length validation per RFC 9113 §8.6. The frame parser now tracks content-length headers per stream and terminates streams with PROTOCOL_ERROR when the END_STREAM flag is set while content-length exceeds zero. Adds test coverage.
🚥 Pre-merge checks: ✅ 1 passed | ❌ 1 failed (inconclusive)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Comment
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/bun.js/api/bun/h2_frame_parser.zig`:
- Around line 1989-1993: After calling this.endStream(stream,
ErrorCode.PROTOCOL_ERROR) when receiving HEADERS with END_STREAM on a
server-side stream with contentLength > 0, always return null immediately
instead of checking this.streams.getEntry(stream_id) and possibly returning a
pointer; this prevents handleHeadersFrame from continuing normal END_STREAM
processing on a stream left in this.streams. Update the block in the same
conditional (the code around endStream(...) and the subsequent return) so that
after endStream(...) you unconditionally return null, ensuring no further
END_STREAM bookkeeping or callbacks run for the closed/errored stream.
- Line 771: The RFC 9113 §8.6 content-length validation is being applied to
every HEADERS frame because stream.contentLength is persistent; restrict that
validation to the initial request header block only. Add or use a boolean
“initial headers” state (e.g., a flag on the stream like stream.initialHeaders
or a local isInitialHeaders) that is true while parsing the opening header block
and becomes false when the first DATA frame is seen or when trailers begin, or
alternatively clear stream.contentLength once body/trailers start; then update
the checks that reference contentLength (the places around the declaration
contentLength: i64 = -1 and the validation sites that consult
stream.contentLength) so they only run when in the initial headers state (or
when contentLength was set in that initial block), leaving trailing HEADERS and
later blocks exempt.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 56d0edfc-e93e-4761-9050-a4c189476c2c
📒 Files selected for processing (2)
- src/bun.js/api/bun/h2_frame_parser.zig
- test/js/node/http2/h2-content-length-mismatch.test.ts
```zig
waitForTrailers: bool = false,
closeAfterDrain: bool = false,
endAfterHeaders: bool = false,
contentLength: i64 = -1,
```
Restrict this check to the initial request headers, not later trailer HEADERS.
stream.contentLength now lives for the whole stream, and Line 1989 consults it on every later HEADERS block. That makes a valid request like content-length: 10 → 10 bytes of DATA → trailing HEADERS with END_STREAM get reset as PROTOCOL_ERROR, even though the declared body was already sent. Track whether you're still in the opening header block (or clear this state once body/trailers begin) before applying the RFC 9113 §8.6 check.
Also applies to: 1923-1925, 1989-1993
```zig
if (this.isServer and stream.contentLength > 0 and (flags & @intFromEnum(HeadersFrameFlags.END_STREAM)) != 0) {
    this.endStream(stream, ErrorCode.PROTOCOL_ERROR);
    if (this.streams.getEntry(stream_id)) |entry| return entry.value_ptr;
    return null;
}
```
Return null unconditionally after sending the RST here.

endStream() leaves the closed stream in this.streams, so Lines 1991-1992 return a non-null pointer and handleHeadersFrame() continues normal END_STREAM bookkeeping. That can mutate the closed stream again and emit onStreamEnd after onStreamError for the same malformed request.

Suggested fix:

```diff
 if (this.isServer and stream.contentLength > 0 and (flags & @intFromEnum(HeadersFrameFlags.END_STREAM)) != 0) {
     this.endStream(stream, ErrorCode.PROTOCOL_ERROR);
-    if (this.streams.getEntry(stream_id)) |entry| return entry.value_ptr;
     return null;
 }
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```zig
        }
    }

    if (this.isServer and stream.contentLength > 0 and (flags & @intFromEnum(HeadersFrameFlags.END_STREAM)) != 0) {
        this.endStream(stream, ErrorCode.PROTOCOL_ERROR);
        if (this.streams.getEntry(stream_id)) |entry| return entry.value_ptr;
        return null;
    }

    this.dispatchWith3Extra(.onStreamHeaders, stream.getIdentifier(), headers, sensitiveHeaders, jsc.JSValue.jsNumber(flags));
    // callbacks can change the Stream ptr in this case we always return the new one
    if (this.streams.getEntry(stream_id)) |entry| return entry.value_ptr;
```
🔴 After the new content-length check calls endStream(stream, PROTOCOL_ERROR), decodeHeaderBlock re-looks up the stream in this.streams (which freeResources never removes it from) and returns it as non-null, so handleHeadersFrame continues executing the endAfterHeaders block — dispatching onStreamEnd after onStreamError was already dispatched by endStream, and incorrectly reverting stream.state from .CLOSED back to .HALF_CLOSED_REMOTE. The fix is to return null unconditionally after calling endStream in the new check, rather than re-fetching the stream from the map.
Extended reasoning...
Root Cause
The new check added in decodeHeaderBlock (lines 1986-1997) correctly calls this.endStream(stream, ErrorCode.PROTOCOL_ERROR) to RST the stream when a non-zero content-length accompanies an END_STREAM flag. However, the code immediately after does:
this.endStream(stream, ErrorCode.PROTOCOL_ERROR);
if (this.streams.getEntry(stream_id)) |entry| return entry.value_ptr;
return null;This re-fetches the stream from this.streams and returns it as a non-null pointer.
Why This Is Wrong
endStream (line 1330-1363) does the following:
- Sets stream.state = .CLOSED (line 1352)
- Calls stream.freeResources(this, false) (line 1355) — clears jsContext/queue but does not remove the stream from this.streams
- Dispatches onStreamError (line 1359) — first dispatch
Because freeResources does not call this.streams.remove(stream_id), this.streams.getEntry(stream_id) succeeds and decodeHeaderBlock returns the now-closed stream pointer (non-null).
Double-Dispatch Trace
Back in handleHeadersFrame:
- Line 2397 (before decodeHeaderBlock): stream.endAfterHeaders = true (END_STREAM was set)
- Line 2398: decodeHeaderBlock returns non-null (stream still in map); the orelse { return content.end; } branch is not taken
- Line 2402: if (stream.endAfterHeaders) → true (set before the call, never cleared by endStream)
- Line 2410: if (stream.state == .HALF_CLOSED_LOCAL) → false (state is .CLOSED)
- Line 2414: stream.state = .HALF_CLOSED_REMOTE — incorrectly overwrites .CLOSED
- Line 2417: dispatchWithExtra(.onStreamEnd, ...) — second dispatch
Concrete Example
A client sends a POST with content-length: 10 and END_STREAM set (no body). The server:
- Dispatches onStreamError (PROTOCOL_ERROR) from endStream ✓
- Sets stream.state = .HALF_CLOSED_REMOTE (was .CLOSED) — WRONG
- Dispatches onStreamEnd with the now-cleared jsContext (numeric identifier only) — WRONG
The application sees both an error event and an end event on a stream that was already rejected.
Fix
Change the new block to return null unconditionally after endStream:

```zig
if (this.isServer and stream.contentLength > 0 and (flags & @intFromEnum(HeadersFrameFlags.END_STREAM)) != 0) {
    this.endStream(stream, ErrorCode.PROTOCOL_ERROR);
    return null; // stream is closed; do not re-fetch from map
}
```

This causes handleHeadersFrame to take the orelse { return content.end; } path, stopping all further processing for the rejected stream.
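The double dispatch is easy to model outside Zig. A minimal JavaScript sketch (hypothetical names mirroring the trace above, not Bun's actual code) showing why returning null after the RST matters:

```javascript
// Minimal model of the stream map and the two code paths traced above.
const streams = new Map();

function endStream(stream, events) {
  stream.state = 'CLOSED';
  events.push('onStreamError'); // first dispatch
  // Note: the stream is NOT removed from the map (mirrors freeResources).
}

// handleHeaders models decodeHeaderBlock's caller; when returnNullAfterRst
// is false it re-fetches the closed stream and keeps going (the bug).
function handleHeaders(id, returnNullAfterRst, events) {
  const stream = streams.get(id);
  stream.endAfterHeaders = true;
  endStream(stream, events);
  const refetched = returnNullAfterRst ? null : streams.get(id);
  if (refetched === null) return; // fixed path: stop here
  if (refetched.endAfterHeaders) {
    refetched.state = 'HALF_CLOSED_REMOTE'; // overwrites CLOSED
    events.push('onStreamEnd'); // second dispatch
  }
}

streams.set(1, { state: 'OPEN' });
const buggy = [];
handleHeaders(1, false, buggy);
console.log(buggy.join(','), streams.get(1).state);
// onStreamError,onStreamEnd HALF_CLOSED_REMOTE

streams.set(2, { state: 'OPEN' });
const fixed = [];
handleHeaders(2, true, fixed);
console.log(fixed.join(','), streams.get(2).state);
// onStreamError CLOSED
```

The fixed path reaches the handler exactly once, with the stream still in its terminal CLOSED state.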
```zig
        return null;
    }

    if (this.isServer and strings.eqlComptime(header.name, "content-length")) {
        stream.contentLength = std.fmt.parseInt(i64, header.value, 10) catch -1;
    }

    // RFC 7540 Section 6.5.2: Calculate header list size
    // Size = name length + value length + HPACK entry overhead per header
    headerListSize += header.name.len + header.value.len + HPACK_ENTRY_OVERHEAD;
```
🔴 The new RFC 9113 §8.6 check incorrectly RSTs valid POST requests that include trailer headers: stream.contentLength is set to the value from the initial HEADERS frame and never reset, so when trailer HEADERS arrive with END_STREAM (but no repeated content-length header), stream.contentLength is still non-zero, triggering the RST with PROTOCOL_ERROR on a perfectly valid request. The fix should either guard the check to only apply on the initial HEADERS frame (e.g. when stream.state indicates no data has been received yet) or reset stream.contentLength to -1 at the start of each decodeHeaderBlock call.
Extended reasoning...
What the bug is and how it manifests
The RFC 9113 §8.6 check added by this PR (lines 1986–1992 of h2_frame_parser.zig) correctly rejects a malformed HEADERS frame that simultaneously declares a non-zero content-length and sets END_STREAM. However, stream.contentLength is a field on the Stream struct that persists for the entire stream lifetime and is never cleared between the initial HEADERS frame and a subsequent trailer HEADERS frame. This causes a false positive: a valid POST request that sends trailers is rejected with PROTOCOL_ERROR.
The specific code path that triggers it
All HEADERS frames—both initial request headers and trailers—go through the same dispatch path into decodeHeaderBlock. Inside decodeHeaderBlock, the per-header loop at line ~1920 sets stream.contentLength whenever a content-length header is present. After the loop, the new check at line 1986 tests stream.contentLength > 0 AND (flags & END_STREAM) != 0.
Why existing code doesn't prevent it
handleHeadersFrame makes no distinction between initial HEADERS and trailer HEADERS before calling decodeHeaderBlock; both receive the same flags. stream.contentLength is initialized to -1 and written once (on the initial HEADERS), but there is no code that resets it before or during the trailer HEADERS call.
Step-by-step proof
1. Client sends initial HEADERS: content-length: 10, END_STREAM not set. → stream.contentLength = 10. The check does not fire (END_STREAM is absent). Stream state → OPEN.
2. Client sends DATA frame(s): 10 bytes, END_STREAM not set. → Data is processed normally. stream.contentLength is untouched (still 10).
3. Client sends trailer HEADERS: END_STREAM set, no content-length header. → decodeHeaderBlock is called; the per-header loop finds no content-length entry, so stream.contentLength stays at 10. → Check at line 1986: 10 > 0 ✓ AND (flags & END_STREAM) != 0 ✓ → this.endStream(stream, PROTOCOL_ERROR) fires. → A fully valid HTTP/2 request is rejected with RST_STREAM PROTOCOL_ERROR.
Impact
Any server-side HTTP/2 handler that expects to receive requests with trailers (common in gRPC, multipart streaming, etc.) will fail for all such requests. This is a regression introduced by this PR.
How to fix it
Option A (simplest): In the check at line 1986, additionally guard that the stream is processing its initial HEADERS, e.g. by checking that no DATA has been received yet or that the stream is in the IDLE/OPEN-before-data state.
Option B: Use a local variable for contentLength within decodeHeaderBlock instead of reading from stream.contentLength. Only persist to stream.contentLength (and then run the check) if this is the initial HEADERS frame.
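Option A can be sketched in a few lines of JavaScript (the `sawData` flag is hypothetical, standing in for whatever stream-state check the Zig code would actually use):

```javascript
// Hypothetical per-stream state mirroring the fields discussed above.
function makeStream() {
  return { contentLength: -1, sawData: false };
}

// Sketch of the RFC 9113 §8.6 check restricted to the initial header block:
// trailer HEADERS, which arrive only after DATA, are exempt.
function shouldReset(stream, endStream) {
  const isInitialHeaders = !stream.sawData;
  return isInitialHeaders && stream.contentLength > 0 && endStream;
}

// Initial HEADERS: content-length: 10, no END_STREAM -> no reset.
const s = makeStream();
s.contentLength = 10;
console.log(shouldReset(s, false)); // false

// DATA arrives; trailer HEADERS with END_STREAM must NOT be reset.
s.sawData = true;
console.log(shouldReset(s, true)); // false

// Malformed case: END_STREAM on the initial HEADERS with content-length > 0.
const s2 = makeStream();
s2.contentLength = 10;
console.log(shouldReset(s2, true)); // true
```

The same effect can be had by clearing `contentLength` when the first DATA frame is seen (Option B), at the cost of losing the declared length for any later validation.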
```zig
        return null;
    }

    if (this.isServer and strings.eqlComptime(header.name, "content-length")) {
        stream.contentLength = std.fmt.parseInt(i64, header.value, 10) catch -1;
    }
```
🔴 The RFC 9113 §8.6 enforcement introduced by this PR can be bypassed by sending duplicate content-length headers in the HEADERS frame: the second header unconditionally overwrites stream.contentLength, so a client that sends content-length: 10 followed by content-length: 0 (or an unparseable value like content-length: abc) reduces contentLength to 0 or -1, causing the contentLength > 0 guard to be false and allowing the malformed request through. RFC 9113 §8.3 / RFC 7230 §3.3.2 already require rejecting requests with conflicting content-length field values as malformed; the fix should detect and reject the stream on the second content-length header with a different value.
Extended reasoning...
What the bug is and how it manifests
The new check at line 1989 guards on stream.contentLength > 0, but stream.contentLength is set inside the header-iteration loop (lines 1923-1925) with a simple assignment that unconditionally overwrites on every content-length header encountered:

```zig
if (this.isServer and strings.eqlComptime(header.name, "content-length")) {
    stream.contentLength = std.fmt.parseInt(i64, header.value, 10) catch -1;
}
```

Because HPACK (lshpack) does no HTTP semantic validation, it happily delivers multiple decoded content-length headers to the application layer. The loop will overwrite contentLength as many times as the client sends the header.
The specific code path that triggers it
The attacker sends a single HEADERS frame with END_STREAM set, containing two content-length headers. The loop processes them in order:
- content-length: 10 → stream.contentLength = 10
- content-length: 0 → stream.contentLength = 0 ← overwrites

When the loop exits, stream.contentLength == 0. The guard at line 1989 evaluates 0 > 0 → false; endStream is never called, and the request is dispatched to the application stream handler unchanged.

A second variant works with an unparseable value: content-length: abc causes parseInt to return the catch -1 fallback. -1 > 0 is also false, so that path likewise bypasses enforcement.
Why existing code doesn't prevent it
There is no deduplication or conflict detection for duplicate content-length headers anywhere in h2_frame_parser.zig. The HPACK decoder is a pure bytes-to-name/value decoder and intentionally knows nothing about HTTP semantics. Nor does any other layer in the call stack reject a second content-length header.
Impact
A remote client can reliably bypass the new RFC 9113 §8.6 protection by simply adding a second content-length: 0 to every request. The server will accept and dispatch a HEADERS+END_STREAM request that declares a non-zero body, which can lead to desync in the request-processing pipeline or other protocol-level confusion. The attack requires only crafting a HEADERS frame directly (e.g. with nghttp2 CLI, raw h2spec, or any HTTP/2 library that allows full header control).
How to fix it
RFC 9113 §8.3 (citing RFC 7230 §3.3.2) requires the server to treat a request with multiple differing content-length values as malformed and respond with a stream error of type PROTOCOL_ERROR. The simplest compliant fix: if stream.contentLength has already been set (i.e. != -1) and the new value differs, call endStream with PROTOCOL_ERROR immediately. Alternatively, take the maximum over all observed values; that is slightly less conservative but still catches the 0-overwrite attack path.
Step-by-step proof
1. Client opens an HTTP/2 connection to a Bun server.
2. Client sends a HEADERS frame with END_STREAM set containing: content-length: 10, content-length: 0.
3. decodeHeaderBlock iterates the decoded headers. After iteration: stream.contentLength == 0.
4. Post-loop guard: this.isServer → true; stream.contentLength > 0 → 0 > 0 → false; the guard does not fire.
5. dispatchWith3Extra(.onStreamHeaders, ...) is called — the stream reaches the application handler. streamCount in the test would be incremented to 1, and req.rstCode would not be NGHTTP2_PROTOCOL_ERROR, demonstrating the bypass.
```zig
if (this.isServer and stream.contentLength > 0 and (flags & @intFromEnum(HeadersFrameFlags.END_STREAM)) != 0) {
    this.endStream(stream, ErrorCode.PROTOCOL_ERROR);
    if (this.streams.getEntry(stream_id)) |entry| return entry.value_ptr;
    return null;
}
```
🔴 The RFC 9113 §8.6 check in decodeHeaderBlock can be bypassed by splitting the header block across a HEADERS+CONTINUATION frame sequence. When handleContinuationFrame runs (line 2332), it unconditionally overwrites stream.endAfterHeaders with false (CONTINUATION frames never carry END_STREAM) before calling decodeHeaderBlock, and the RFC check itself also uses the local flags parameter—which is the CONTINUATION frame's flags and also lacks END_STREAM—so even though stream.contentLength is correctly set to a non-zero value, the condition (flags & END_STREAM) != 0 is always false for CONTINUATION frames. Fix: either preserve stream.endAfterHeaders from the original HEADERS frame in handleContinuationFrame, or use stream.endAfterHeaders as the END_STREAM indicator in the decodeHeaderBlock check instead of the flags parameter.
Extended reasoning...
Bug: RFC 9113 §8.6 bypass via HEADERS+CONTINUATION fragmentation
What the bug is
The new RFC 9113 §8.6 enforcement rejects requests that combine a non-zero content-length with END_STREAM on the HEADERS frame. The check at h2_frame_parser.zig lines 1986–1993 correctly catches single-frame header blocks, but can be bypassed when the header block is fragmented across a HEADERS frame followed by one or more CONTINUATION frames.
The specific code path
In handleContinuationFrame (line 2332), the very first thing done after receiving the payload is:

```zig
stream.endAfterHeaders = frame.flags & @intFromEnum(HeadersFrameFlags.END_STREAM) != 0;
stream = (try this.decodeHeaderBlock(payload[0..payload.len], stream, frame.flags)) ...
```

CONTINUATION frames never carry the END_STREAM bit per RFC 7540 §6.10, so this assignment always writes false to stream.endAfterHeaders, overwriting the true that was stored there by handleHeadersFrame at line 2397 for the original HEADERS frame. decodeHeaderBlock is then called with frame.flags (the CONTINUATION frame's flags), which also has END_STREAM unset.
Why existing code doesn't prevent it
The RFC check at line 1989:
if (this.isServer and stream.contentLength > 0 and (flags & @intFromEnum(HeadersFrameFlags.END_STREAM)) \!= 0) {uses the local flags parameter—the current frame's flags. For CONTINUATION frames, (flags & END_STREAM) \!= 0 is structurally always false. There is no escape: the check can never fire for the CONTINUATION invocation of decodeHeaderBlock regardless of what the original HEADERS frame said.
Concrete proof of the bypass
1. Client sends HEADERS(END_STREAM=1, END_HEADERS=0) with no content-length in this fragment.
   - handleHeadersFrame line 2397: stream.endAfterHeaders = true
   - decodeHeaderBlock called with HEADERS flags (END_STREAM set), but stream.contentLength is still -1
   - RFC check: contentLength > 0 is false → check does not fire
   - stream.isWaitingMoreHeaders = true, stream.state = HALF_CLOSED_REMOTE
2. Client sends CONTINUATION(END_HEADERS=1) with content-length: 10 in the fragment.
   - handleContinuationFrame line 2332: stream.endAfterHeaders = false (overwritten! CONTINUATION has no END_STREAM)
   - decodeHeaderBlock called with CONTINUATION flags (END_STREAM not set)
   - Line ~1924: stream.contentLength = 10 is parsed and stored
   - RFC check: contentLength > 0 is true, but (flags & END_STREAM) != 0 is false → check does not fire
   - dispatchWith3Extra(.onStreamHeaders, ...) dispatches the malformed request to the stream handler

The malformed request (non-zero content-length + END_STREAM) bypasses the RFC 9113 §8.6 rejection entirely.
Impact
A client can make the server accept and dispatch a request that declares a body (via content-length > 0) but simultaneously signals end-of-stream on the HEADERS frame, making it impossible to send that body. This is precisely the malformed request class that nghttp2 rejects with PROTOCOL_ERROR. Accepting such requests can confuse stream handlers and violates the RFC.
How to fix
The simplest fix: in handleContinuationFrame, do not overwrite stream.endAfterHeaders before calling decodeHeaderBlock. The field was already set correctly from the original HEADERS frame. Alternatively, change the RFC check in decodeHeaderBlock to use stream.endAfterHeaders instead of the local flags parameter. Either approach restores the invariant that END_STREAM context from the HEADERS frame is available when the full header block (including any CONTINUATION fragments) is finally validated.
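The "validate once the block is complete" approach can be sketched in JavaScript (a simplified model of the frame sequence, not Bun's parser):

```javascript
// Frame flag bits per RFC 9113 §6.2.
const END_STREAM = 0x1;
const END_HEADERS = 0x4;

// Sketch: remember END_STREAM from the opening HEADERS frame and apply the
// §8.6 check only once END_HEADERS completes the block, regardless of how
// many CONTINUATION frames carried the header fragments.
// frames: [{ type: 'HEADERS' | 'CONTINUATION', flags, headers: [[name, value]] }]
function processHeaderBlock(frames) {
  let endStream = false;
  let contentLength = -1;
  for (const frame of frames) {
    // Only the HEADERS frame's flags are meaningful for END_STREAM.
    if (frame.type === 'HEADERS') endStream = (frame.flags & END_STREAM) !== 0;
    for (const [name, value] of frame.headers) {
      if (name === 'content-length') contentLength = parseInt(value, 10);
    }
    if ((frame.flags & END_HEADERS) !== 0) {
      // Validate the completed block using the remembered END_STREAM bit.
      return endStream && contentLength > 0
        ? 'RST_STREAM(PROTOCOL_ERROR)'
        : 'dispatch';
    }
  }
  return 'need-more-frames';
}

// The bypass above: END_STREAM on HEADERS, content-length in CONTINUATION.
console.log(processHeaderBlock([
  { type: 'HEADERS', flags: END_STREAM, headers: [] },
  { type: 'CONTINUATION', flags: END_HEADERS, headers: [['content-length', '10']] },
])); // RST_STREAM(PROTOCOL_ERROR)
```

Deferring to END_HEADERS sidesteps the flag-overwrite entirely, since the check never runs against a CONTINUATION frame's flags.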
once(emitter, 'close') also listens for 'error' and rejects with it. The test expects an error (RST PROTOCOL_ERROR), so use a manual close promise instead. No source change needed; the prior 'crash' was the once() rejection, not an unhandled emitter error.
Re-drafting: test now passes but reviews identified 4 valid bypasses/issues (trailers false-positive, CONTINUATION split, duplicate content-length overwrite, freed-ptr re-lookup). The simple-case fix is correct; addressing the bypasses needs a more thorough rewrite (check stream.state to distinguish initial HEADERS from trailers, error on duplicate content-length, defer check to handleHeadersFrame after END_HEADERS). |
RFC 9113 §8.6: a request with a non-zero `content-length` and `END_STREAM` on the HEADERS frame is malformed (it declares a body it can never send). nghttp2 RSTs the stream with `PROTOCOL_ERROR`; Bun accepted these and dispatched them to the `'stream'` handler.
`decodeHeaderBlock` now records the parsed `content-length` on the stream and RSTs with `PROTOCOL_ERROR` when `END_STREAM` is set and `content-length > 0` (server side).
Adds a focused test. Node's `test-http2-multi-content-length.js` also asserts the client-side error event on the resulting RST, which surfaces a separate Bun behavior (client emits `'end'` before `'error'` on RST); tracked separately.