
tunnel: add pause/resume flow control and disconnect stream cleanup#19

Merged
jurassix merged 1 commit into main from clint/tunnel-flow-control
Apr 16, 2026
Conversation


@jurassix jurassix commented Apr 16, 2026

Problem

Large HTTPS assets loaded through the tunnel CONNECT path were being truncated mid-transfer, causing ERR_INCOMPLETE_CHUNKED_ENCODING in the browser. Two bugs contributed:

  1. dataCh overflow → stream close: when the relay's per-stream buffer filled, it closed the stream, silently truncating the response.
  2. Active sockets not closed on WS disconnect: TCP sockets stayed open after the WebSocket disconnected, leaking connections and causing silent data loss on reconnect.

Changes

1. Flow control: pause / resume wire messages

When the relay's dataCh buffer fills, it now sends a pause message. The client handles it by calling socket.pause() on the active TCP socket, so the kernel stops receiving from the target server. When the relay drains the buffer it sends resume; the client calls socket.resume() to restart reading.

```typescript
case 'pause':
  this.#handleStreamPause(msg);   // socket.pause()
  break;
case 'resume':
  this.#handleStreamResume(msg);  // socket.resume()
  break;
```
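The handler bodies aren't shown in the diff excerpt. A minimal sketch of what they might look like, assuming the client tracks active CONNECT sockets in a map keyed by stream id (all names here are illustrative, not the actual implementation):

```typescript
// Illustrative sketch only — assumes a Map of active CONNECT sockets
// keyed by stream id; real field and method names may differ.
interface PausableSocket {
  pause(): void;   // stop reading from the target server (net.Socket API)
  resume(): void;  // resume reading once the relay has drained
}

interface StreamControlMessage {
  type: 'pause' | 'resume';
  streamId: string;
}

class FlowControlledClient {
  #activeStreams = new Map<string, PausableSocket>();

  registerStream(id: string, socket: PausableSocket): void {
    this.#activeStreams.set(id, socket);
  }

  handleStreamPause(msg: StreamControlMessage): void {
    // Pausing stops the kernel from pulling more bytes off this socket,
    // so backpressure propagates to the target server via TCP.
    this.#activeStreams.get(msg.streamId)?.pause();
  }

  handleStreamResume(msg: StreamControlMessage): void {
    this.#activeStreams.get(msg.streamId)?.resume();
  }
}
```

An unknown streamId is deliberately a no-op here, since a pause can race with stream teardown.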

2. Stream cleanup on WebSocket disconnect

#onDisconnect() was calling #abortInflight() (cancels pending HTTP requests) but not #closeStreams() (closes active TCP sockets). Add the missing call:

```typescript
#onDisconnect(): void {
  this.#abortInflight();
  this.#closeStreams();  // ← added
  this.#clearStaleTimer();
  ...
}
```
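#closeStreams() itself isn't in the excerpt. A plausible shape, sketched under the same assumption of a stream-id → socket map (hypothetical names throughout):

```typescript
// Illustrative sketch only: assumes the client tracks active CONNECT
// sockets in a Map; the real field and method names may differ.
interface DestroyableSocket { destroy(): void; }

class StreamTable {
  #activeStreams = new Map<string, DestroyableSocket>();

  add(id: string, socket: DestroyableSocket): void {
    this.#activeStreams.set(id, socket);
  }

  // Close every active TCP socket and forget it, so nothing leaks
  // across a WebSocket disconnect/reconnect cycle.
  closeStreams(): number {
    let closed = 0;
    for (const socket of this.#activeStreams.values()) {
      socket.destroy();
      closed++;
    }
    this.#activeStreams.clear();
    return closed;
  }
}
```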

3. Test harness: tests/chunked-server.mjs

HTTPS test server that serves configurable-size chunked JS responses:

```sh
node tunnel/tests/chunked-server.mjs
# → https://chunked.fullstory.test:9443/?size=50mb
```

Serves /?size=<N> (e.g. 1mb, 50mb, 200mb) as chunked transfer encoding with no Content-Length, plus a sentinel window.__BUNDLE_LOADED = true at the end. Used to reproduce and validate the fix end-to-end in a real browser via tunnel.
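The server file itself isn't included in this description. As a rough sketch of the two pieces such a server needs — parsing the ?size= spec and splitting the payload into chunks — with parseSize and chunkSizes as hypothetical helper names:

```typescript
// Hypothetical helpers a server like chunked-server.mjs might use; the
// real file is not shown in this PR, so names and behavior are guesses.

// Parse a size spec like "1mb" or "50mb" into a byte count.
function parseSize(spec: string): number {
  const m = /^(\d+)\s*(kb|mb|gb)?$/i.exec(spec.trim());
  if (!m) throw new Error(`unrecognized size: ${spec}`);
  const units: Record<string, number> = { kb: 1024, mb: 1 << 20, gb: 1 << 30 };
  return Number(m[1]) * (units[(m[2] ?? '').toLowerCase()] ?? 1);
}

// Split a total byte count into chunk sizes. With no Content-Length set,
// each res.write() of one piece becomes one chunk of the chunked
// transfer encoding.
function chunkSizes(total: number, chunk = 64 * 1024): number[] {
  const out: number[] = [];
  for (let left = total; left > 0; left -= chunk) out.push(Math.min(chunk, left));
  return out;
}
```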

Proof of bug / fix

The Go-side regression test (TestRelayFlowControlSlowConsumer in the companion mn PR) definitively proves both sides:

Before fix (close-on-overflow, dataCh=4 slots):

FAIL: received 5/20 chunks — flow control not working (stream truncated at dataCh overflow)

After fix (pause/resume, dataCh=4 slots, slow 5ms consumer):

PASS: flow control OK: all 20/20 chunks delivered via slow consumer

The slow consumer (5ms per read) simulates a browser whose TCP receive window is backed up. With the old code only 5 chunks arrive before overflow closes the stream. With flow control all 20 arrive because the client pauses reading from the target server until the relay drains.
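The backpressure mechanics can be modeled with a small deterministic simulation (a toy model, not the actual relay or Go test code): a 4-slot bounded buffer, a producer with 20 chunks, and a consumer that drains one chunk only every 5th tick.

```typescript
// Toy model of the dataCh overflow: a bounded queue that either closes
// the stream on overflow (old behavior) or pauses the producer until the
// buffer drains (new pause/resume behavior). Purely illustrative.
interface Result { delivered: number; truncated: boolean; }

function run(total: number, capacity: number, flowControl: boolean): Result {
  const buffer: number[] = [];
  let delivered = 0;
  let truncated = false;
  let paused = false;
  let next = 0;
  for (let tick = 0; next < total || buffer.length > 0; tick++) {
    // Producer: push one chunk per tick unless paused or closed.
    if (next < total && !paused) {
      if (buffer.length < capacity) {
        buffer.push(next++);
      } else if (flowControl) {
        paused = true;      // relay sends `pause`; client stops reading
      } else {
        truncated = true;   // old behavior: close the stream on overflow
        buffer.length = 0;
        break;
      }
    }
    // Slow consumer: drain one chunk only every 5th tick.
    if (tick % 5 === 0 && buffer.length > 0) {
      buffer.shift();
      delivered++;
      if (paused && buffer.length < capacity) paused = false; // `resume`
    }
  }
  return { delivered, truncated };
}
```

Without flow control the buffer overflows almost immediately and most chunks are lost; with pause/resume the producer simply waits out the slow consumer and all 20 chunks arrive, mirroring the FAIL/PASS shape of the Go regression test.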

New types in types.ts: StreamPauseMessage and StreamResumeMessage are added to the RelayMessage union, keeping the switch in client.ts exhaustive.
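A sketch of how those types might slot into the union, assuming RelayMessage discriminates on a type field (the field names and the rest of the union are assumptions, not copied from types.ts):

```typescript
// Illustrative only: field names and the remaining union members are
// guesses, not the actual types.ts definitions.
interface StreamPauseMessage  { type: 'pause';  streamId: string; }
interface StreamResumeMessage { type: 'resume'; streamId: string; }
type RelayMessage = StreamPauseMessage | StreamResumeMessage; // | ...existing members

// With a `never` default, forgetting a union member is a compile error —
// this is what keeps the switch in client.ts exhaustive.
function assertNever(x: never): never {
  throw new Error(`unhandled message: ${JSON.stringify(x)}`);
}

function handle(msg: RelayMessage): string {
  switch (msg.type) {
    case 'pause':  return `pausing ${msg.streamId}`;
    case 'resume': return `resuming ${msg.streamId}`;
    default:       return assertNever(msg);
  }
}
```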


Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
jurassix merged commit 6795490 into main Apr 16, 2026
1 check passed