Picklescan Bugs Allow Malicious PyTorch Models to Evade Scans and Execute Code
2025-12-03 09:30 · Source: thehackernews.com

Machine Learning / Vulnerability

Three critical security flaws have been disclosed in an open-source utility called Picklescan that could allow malicious actors to execute arbitrary code by loading untrusted PyTorch models, effectively bypassing the tool's protections.

Picklescan, developed and maintained by Matthieu Maitre (@mmaitre314), is a security scanner that's designed to parse Python pickle files and detect suspicious imports or function calls, before they are executed. Pickle is a widely used serialization format in machine learning, including PyTorch, which uses the format to save and load models.

But pickle files can also pose a serious security risk, as they can automatically trigger the execution of arbitrary Python code when loaded. This makes it essential that users and organizations load only trusted models, or use model weights in safer formats such as those used by TensorFlow and Flax.
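The danger is inherent to the format itself: pickle's `__reduce__` hook lets a serialized object name any callable to invoke at load time. A minimal illustration, using a harmless `eval` call where a real payload would use something like `os.system`:

```python
import pickle

class Malicious:
    # pickle calls __reduce__ to learn how to rebuild the object;
    # returning (callable, args) makes the *loader* invoke that callable
    def __reduce__(self):
        return (eval, ("6 * 7",))  # benign stand-in for os.system(...)

payload = pickle.dumps(Malicious())

# Merely deserializing the bytes runs the embedded call --
# no attribute access or method call on the result is needed
obj = pickle.loads(payload)
print(obj)  # 42 -- proof the code executed during unpickling
```

This is why `pickle.load` on untrusted data is equivalent to executing untrusted code, and why scanners like Picklescan exist at all.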

The issues discovered by JFrog essentially make it possible to bypass the scanner, present the scanned model files as safe, and enable malicious code to be executed, which could then pave the way for a supply chain attack.


"Each discovered vulnerability enables attackers to evade PickleScan's malware detection and potentially execute a large-scale supply chain attack by distributing malicious ML models that conceal undetectable malicious code," security researcher David Cohen said.

Picklescan, at its core, works by examining pickle files at the bytecode level and checking the results against a blocklist of known hazardous imports and operations. This blocklist approach, as opposed to allowlisting, means the tool cannot detect new attack vectors and requires its developers to anticipate every possible malicious behavior.

The identified flaws are as follows -

  • CVE-2025-10155 (CVSS score: 9.3/7.8) - A file extension bypass vulnerability that can be used to undermine the scanner and load the model when providing a standard pickle file with a PyTorch-related extension such as .bin or .pt
  • CVE-2025-10156 (CVSS score: 9.3/7.5) - A bypass vulnerability that can be used to disable ZIP archive scanning by introducing a Cyclic Redundancy Check (CRC) error
  • CVE-2025-10157 (CVSS score: 9.3/8.3) - A bypass vulnerability that can be used to undermine Picklescan's unsafe globals check, leading to arbitrary code execution by getting around a blocklist of dangerous imports

Successful exploitation of the aforementioned flaws could allow attackers to conceal malicious pickle payloads within files using common PyTorch extensions, deliberately introduce CRC errors into ZIP archives containing malicious models, or craft malicious PyTorch models with embedded pickle payloads to bypass the scanner.
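The extension-bypass case in particular shows why robust scanners should dispatch on file content rather than filename. A minimal content-sniffing sketch (the `classify` helper is hypothetical, not from Picklescan): a `.pt` or `.bin` file can be either a torch-style ZIP archive or a raw pickle, and the first few bytes distinguish them.

```python
import io
import pickle
import zipfile

def classify(data: bytes) -> str:
    """Dispatch on bytes, never on extension: a scanner that assumes
    .pt/.bin means 'ZIP archive' can be handed a raw pickle instead."""
    if data[:4] == b"PK\x03\x04":
        return "zip"          # torch's zip-based format starts like any ZIP
    if data[:1] == b"\x80":
        return "raw-pickle"   # PROTO opcode, pickle protocol 2+
    return "unknown"

# A raw pickle masquerading as model.bin
raw = pickle.dumps({"weights": [0.1, 0.2]}, protocol=2)

# A minimal in-memory ZIP, standing in for a torch-saved archive
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.pkl", raw)

print(classify(raw))             # raw-pickle
print(classify(buf.getvalue()))  # zip
```

Whatever `classify` reports, the scanner must then apply the matching analysis path; skipping or downgrading a path (as in the CRC-error bypass) reopens the hole.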


Following responsible disclosure on June 29, 2025, the three vulnerabilities have been addressed in Picklescan version 0.0.31 released on September 9.

The findings illustrate some key systemic issues, including reliance on a single scanning tool and discrepancies in file-handling behavior between security tools and PyTorch, which render security architectures vulnerable to attack.

"AI libraries like PyTorch grow more complex by the day, introducing new features, model formats, and execution pathways faster than security scanning tools can adapt," Cohen said. "This widening gap between innovation and protection leaves organizations exposed to emerging threats that conventional tools simply weren't designed to anticipate."

"Closing this gap requires a research-backed security proxy for AI models, continuously informed by experts who think like both attackers and defenders. By actively analyzing new models, tracking library updates, and uncovering novel exploitation techniques, this approach delivers adaptive, intelligence-driven protection against the vulnerabilities that matter most."



Article source: https://thehackernews.com/2025/12/picklescan-bugs-allow-malicious-pytorch.html