Fix SSE timeout hang by propagating transport exceptions to pending requests #2430

Open

widgetwalker wants to merge 3 commits into modelcontextprotocol:main from

Conversation

This fix ensures that ClientSession.request() does not hang indefinitely when the underlying SSE transport encounters a timeout or other fatal exception before the RPC response is received. It propagates the exception to all in-flight request streams, waking up waiters immediately.
Pull request overview
This PR targets issue #1401 by ensuring that transport-level exceptions (e.g., SSE read timeouts) don’t leave in-flight RPC calls hanging: when the read stream surfaces an Exception, the session should wake/fail all pending request waiters.
Changes:
- Extend `BaseSession._receive_loop` to broadcast transport exceptions to all pending `_response_streams`.
- Convert exceptions into `JSONRPCError` objects and send them to per-request waiters so `send_request()` unblocks.
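The broadcast pattern described in the changes above can be sketched with plain asyncio queues standing in for the session's per-request streams. Note this is a simplified illustration, not the actual mcp internals: `ErrorData`/`JSONRPCError` here are minimal stand-in dataclasses, and `response_streams` mirrors the role of the session's `_response_streams` mapping.

```python
import asyncio
from dataclasses import dataclass


# Hypothetical stand-ins for the real mcp types (ErrorData, JSONRPCError).
@dataclass
class ErrorData:
    code: int
    message: str


@dataclass
class JSONRPCError:
    jsonrpc: str
    id: int
    error: ErrorData


async def demo() -> list[JSONRPCError]:
    # One in-memory queue per in-flight request, keyed by request id,
    # mirroring the session's per-request response streams.
    response_streams: dict[int, asyncio.Queue] = {1: asyncio.Queue(), 2: asyncio.Queue()}

    async def waiter(req_id: int) -> JSONRPCError:
        # Each pending request blocks here until a response (or error) arrives.
        return await response_streams[req_id].get()

    tasks = [asyncio.create_task(waiter(req_id)) for req_id in response_streams]

    # Simulate the receive loop observing a fatal transport exception:
    exc = TimeoutError("SSE read timed out")
    error_data = ErrorData(code=0, message=str(exc))
    for req_id, queue in list(response_streams.items()):
        # Re-stamp the shared error with each request's own id so that
        # every individual waiter unblocks with a well-formed response.
        await queue.put(JSONRPCError(jsonrpc="2.0", id=req_id, error=error_data))

    return await asyncio.gather(*tasks)


results = asyncio.run(demo())
print([r.id for r in results])  # → [1, 2]
```

Without the broadcast step, both `waiter` coroutines would block forever once the transport died; with it, they all complete promptly with an error carrying their own request id.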
src/mcp/shared/session.py (Outdated)

Comment on lines +431 to +445

```python
# Fix #1401: Propagate exception to all pending requests
# This prevents waiters from hanging when the transport fails
error_data = (
    message.to_error_data()
    if isinstance(message, MCPError)
    else ErrorData(code=0, message=str(message))
)

# We must send an error to every individual waiter
for req_id, stream in list(self._response_streams.items()):
    # Send a response with the correct ID
    await stream.send(JSONRPCError(jsonrpc="2.0", id=req_id, error=error_data))
```
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
Fixed SSE timeout hang in `BaseSession._receive_loop`. This fix propagates transport-level exceptions (like SSE read timeouts) to all pending request streams, ensuring RPC calls don't hang if the underlying connection fails. This specifically addresses issue #1401.