Problem Statement
The current HTTP service layer of java-tron lacks a unified pre-ingress limit on request body size. The issues include:
- Lack of early rejection: validation occurs only after the full request body has been read (post-read validation).
- Memory amplification risk: oversized requests consume memory even when ultimately rejected.
- Scattered validation logic: e.g., JSON-RPC and broadcasthex call Util.checkBodySize, leading to inconsistent implementations that are easy to omit.
- Inconsistency between HTTP and gRPC: gRPC already enforces maxInboundMessageSize, while HTTP has no equivalent mechanism.
Proposed Solution
Introduce a unified HTTP body pre-ingress limit at the Jetty layer to enable early rejection, align with gRPC limit strategies, and provide independent configuration capabilities for HTTP and JSON-RPC.
Specification
Core mechanism: Introduce a SizeLimitHandler at the top of the Jetty handler chain, performing size checks during streaming (streaming enforcement). Requests exceeding the limit are rejected immediately without entering the application layer.
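The streaming check can be sketched with a plain InputStream wrapper. This is a simplified illustration of the enforcement idea, not Jetty's actual SizeLimitHandler; the class and message text are hypothetical:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Simplified sketch of streaming enforcement: bytes are counted as they are
// read, and the read aborts as soon as the limit is crossed, so the body is
// never fully buffered. Jetty's SizeLimitHandler applies the same idea at the
// handler-chain level; this class is illustrative only.
public class SizeLimitInputStream extends FilterInputStream {
  private final long maxBytes;
  private long count;

  public SizeLimitInputStream(InputStream in, long maxBytes) {
    super(in);
    this.maxBytes = maxBytes;
  }

  @Override
  public int read() throws IOException {
    int b = super.read();
    if (b != -1 && ++count > maxBytes) {
      throw new IOException("Request body exceeds limit of " + maxBytes + " bytes");
    }
    return b;
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    int n = super.read(buf, off, len);
    if (n > 0 && (count += n) > maxBytes) {
      throw new IOException("Request body exceeds limit of " + maxBytes + " bytes");
    }
    return n;
  }
}
```

Because the counter is updated inside each read call, memory usage stays bounded by the caller's buffer size no matter how large the incoming body is.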
Unified configuration: Introduce node.http.maxMessageSize and node.jsonrpc.maxMessageSize, applying independently to HTTP and JSON-RPC respectively; gRPC continues to use node.rpc.maxMessageSize, and the three are independent of one another. The default value of each new item is 4 times node.rpc.maxMessageSize: HTTP request bodies are generally 2–6 times larger than JSON-RPC bodies, and the midpoint of that range is used.
Application layer adjustment: Remove body size validation responsibility from the application layer; retain Util.checkBodySize as deprecated for compatibility.
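The configuration items above could look like the following fragment of a node config file. The byte values are illustrative only (assuming a 4 MB gRPC limit); only the key names come from this proposal:

```
# Illustrative config fragment -- values are examples, not shipped defaults.
node {
  rpc.maxMessageSize = 4194304        # 4 MB, existing gRPC limit (unchanged)
  http.maxMessageSize = 16777216      # default: 4 x rpc.maxMessageSize
  jsonrpc.maxMessageSize = 16777216   # default: 4 x rpc.maxMessageSize
}
```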
API Changes (if applicable)
Requests with Content-Length exceeding the limit return 413 Payload Too Large before servlet dispatch. For chunked transfer (no Content-Length), the behavior depends on the servlet exception handling chain:
| Request type | HTTP Servlets (132) | JSON-RPC Servlet |
| --- | --- | --- |
| Content-Length exceeds limit | 413 (Jetty rejects before dispatch) | 413 (same) |
| Chunked, exceeds limit | 200 + error JSON {"Error":"BadMessageException"} | 200 + empty body |
Root cause: SizeLimitHandler throws BadMessageException (a RuntimeException) during the streaming body read. HTTP servlets catch it in their own catch(Exception) → Util.processError() path and write error JSON. JsonRpcServlet delegates to jsonrpc4j, which catches exceptions internally during request parsing and returns an empty response.
OOM protection is effective in all cases: the body read is always truncated at the configured limit, regardless of the response status code. Returning 200 + empty body for malformed or oversized chunked JSON-RPC requests is acceptable: it does not affect normal requests and increases attacker difficulty. The alternative (pre-reading the request body to trigger the size limit, catching the exception to report the error, then wrapping the already-read body in an HttpServletRequestWrapper to continue the normal request flow) adds significant complexity for marginal benefit.
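The two exception-handling paths described above can be illustrated with a minimal sketch. The class and method names here are hypothetical; the real behavior lives in the HTTP servlets and inside jsonrpc4j:

```java
// Minimal illustration of the divergent error paths. Both handlers receive a
// BadMessageException-style RuntimeException thrown mid-read; the HTTP-style
// handler reports it as error JSON, while the JSON-RPC-style handler swallows
// it and returns an empty body. Names are hypothetical sketches.
public class ErrorPathSketch {

  // HTTP servlets: catch(Exception) -> Util.processError() -> 200 + error JSON
  static String httpStyleHandle(Runnable readBody) {
    try {
      readBody.run();
      return "{\"result\":\"ok\"}";
    } catch (RuntimeException e) {
      return "{\"Error\":\"" + e.getMessage() + "\"}";
    }
  }

  // jsonrpc4j: exceptions during request parsing are caught internally,
  // producing 200 + empty body
  static String jsonRpcStyleHandle(Runnable readBody) {
    try {
      readBody.run();
      return "{\"jsonrpc\":\"2.0\",\"result\":\"ok\",\"id\":1}";
    } catch (RuntimeException e) {
      return ""; // swallowed: empty response
    }
  }
}
```

In both branches the body read has already been aborted at the limit, which is why OOM protection holds even though the visible status codes differ.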
Configuration Changes (if applicable)
New configuration items: node.http.maxMessageSize, node.jsonrpc.maxMessageSize; node.rpc.maxMessageSize remains unchanged and is only used for gRPC.
Scope of Impact
Breaking Changes
Requests with Content-Length exceeding the configured protocol limits (node.http.maxMessageSize or node.jsonrpc.maxMessageSize) will be rejected early (413), and calls relying on large request bodies will no longer be available; mitigation: adjust configurations or split requests.
Backward Compatibility
Behavior is controlled independently per protocol configuration, allowing flexible operational tuning; Util.checkBodySize is temporarily retained (deprecated).
Implementation
Do you have ideas regarding the implementation?
Implementation: Construct SizeLimitHandler in HttpService.start() or its subclass methods. Select the corresponding limit based on request path or entry type (HTTP uses node.http.maxMessageSize, JSON-RPC uses node.jsonrpc.maxMessageSize), and insert it at the very beginning of the handler chain; it must execute before servlet/filter, enforce streaming processing to avoid full buffering, and intercept at an early connection stage.
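The per-entry limit selection could be sketched as follows. The class, the path prefix, and the constructor shape are assumptions for illustration, not the actual java-tron API:

```java
// Hypothetical sketch of selecting the size limit by request path, as
// proposed above: JSON-RPC entry points get node.jsonrpc.maxMessageSize,
// all other HTTP endpoints get node.http.maxMessageSize. The "/jsonrpc"
// prefix and all names here are assumptions.
public class LimitSelector {
  private final long httpMaxMessageSize;
  private final long jsonRpcMaxMessageSize;

  public LimitSelector(long httpMaxMessageSize, long jsonRpcMaxMessageSize) {
    this.httpMaxMessageSize = httpMaxMessageSize;
    this.jsonRpcMaxMessageSize = jsonRpcMaxMessageSize;
  }

  // Returns the body-size limit to enforce for the given request path.
  public long limitFor(String path) {
    return path != null && path.startsWith("/jsonrpc")
        ? jsonRpcMaxMessageSize
        : httpMaxMessageSize;
  }
}
```

A handler wrapping the chain head would call limitFor() once per request, before any servlet or filter runs, and feed the result to the streaming check.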
Are you willing to implement this feature?
Yes.
Estimated Complexity
Low to medium: changes are concentrated at the HTTP ingress layer; the multi-entry limit-selection logic keeps the risk controllable.
Testing Strategy
Test Scenarios
- Oversized requests: HTTP bodies larger than node.http.maxMessageSize or JSON-RPC bodies larger than node.jsonrpc.maxMessageSize return 413, do not reach the application layer, and cause no significant memory growth.
- Normal requests are unaffected.
- JSON-RPC is independently controlled by node.jsonrpc.maxMessageSize.
- Stress test with continuous large requests to verify stable memory usage without amplification.
Performance Considerations
Streaming-based enforcement avoids full loading, reduces memory allocation compared to application-layer validation, lowers GC pressure, and improves DoS resistance.
Alternatives Considered (Optional)
- Per-endpoint application-layer validation: highly redundant, easy to miss, and cannot prevent memory amplification.
- Servlet Filter: executes too late and cannot intercept at the connection layer.
- A single unified configuration: cannot satisfy independent governance requirements across different protocols.
Additional Context (Optional)
Essentially ingress hardening: validate after read → reject before read, scattered logic → centralized control, application-layer fallback → container-layer protection; also enables independent governance boundaries across multiple protocols, improving DoS resistance and reducing operational and ecosystem communication costs.
Related Issues/PRs
PR: #6658