Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection
2025-12-26 09:27 · Author: thehackernews.com


A critical security flaw has been disclosed in LangChain Core that could be exploited by an attacker to steal sensitive secrets and even influence large language model (LLM) responses through prompt injection.

LangChain Core (i.e., langchain-core) is a core Python package that's part of the LangChain ecosystem, providing the core interfaces and model-agnostic abstractions for building applications powered by LLMs.

The vulnerability, tracked as CVE-2025-68664, carries a CVSS score of 9.3 out of 10.0. Security researcher Yarden Porat has been credited with reporting the vulnerability on December 4, 2025. It has been codenamed LangGrinch.

"A serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions," the project maintainers said in an advisory. "The functions do not escape dictionaries with 'lc' keys when serializing free-form dictionaries."


"The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data."

According to Cyata researcher Porat, the crux of the problem has to do with the two functions failing to escape user-controlled dictionaries containing "lc" keys. The "lc" marker represents LangChain objects in the framework's internal serialization format.
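The confusion can be illustrated with a toy serializer and deserializer. This is a minimal sketch of the mechanism, not LangChain's actual code: the function and tag names below are invented for illustration, but the core flaw is the same — free-form user data is serialized without escaping, and anything carrying the "lc" marker is later trusted as a framework object.

```python
import json

def toy_dumps(obj):
    # The flaw in miniature: free-form user dicts are serialized as-is,
    # so a user dict that already contains an "lc" key is not escaped
    # or distinguished from a genuine framework object.
    return json.dumps(obj)

def toy_loads(text):
    data = json.loads(text)
    if isinstance(data, dict) and "lc" in data:
        # Anything with the marker is treated as a framework object to
        # reconstruct, rather than as inert user data.
        return ("FRAMEWORK_OBJECT", data.get("id"))
    return data

# Attacker-supplied metadata that mimics the internal format:
user_metadata = {"lc": 1, "type": "constructor", "id": ["some", "Class"]}
result = toy_loads(toy_dumps(user_metadata))
# After a round trip, the user dict is interpreted as an object to build.
assert result[0] == "FRAMEWORK_OBJECT"
```

A correct serializer would escape or tag user-controlled dictionaries so that the round trip returns plain data, which is what the patched versions enforce.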

"So once an attacker is able to make a LangChain orchestration loop serialize and later deserialize content including an 'lc' key, they would instantiate an unsafe arbitrary object, potentially triggering many attacker-friendly paths," Porat said.

This could have several outcomes:

  • Secret extraction from environment variables when deserialization is performed with "secrets_from_env=True" (previously the default)
  • Instantiation of classes within pre-approved trusted namespaces, such as langchain_core, langchain, and langchain_community
  • Potentially even arbitrary code execution via Jinja2 templates
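The environment-variable risk can be sketched as follows. The {"lc": 1, "type": "secret", "id": [...]} shape mirrors LangChain's serialized-secret format as described in the advisory, but the resolver below is an illustrative toy, not the library's implementation:

```python
import os

# Demo secret planted in the environment for illustration only.
os.environ["DEMO_SECRET"] = "s3cr3t-value"

def resolve_secret(node, secrets_from_env=True):
    # A deserializer that honors "secret" nodes by reading the named
    # environment variable hands any env secret to whoever controls
    # the serialized payload -- the attacker picks the variable name.
    if isinstance(node, dict) and node.get("lc") == 1 and node.get("type") == "secret":
        key = node["id"][0]
        if secrets_from_env:
            return os.environ.get(key)
        raise KeyError(f"secret {key!r} must be supplied explicitly")
    return node

leaked = resolve_secret({"lc": 1, "type": "secret", "id": ["DEMO_SECRET"]})
assert leaked == "s3cr3t-value"
```

With secrets_from_env disabled (the new default), the same payload raises instead of silently reading the environment.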

What's more, the escaping bug enables the injection of LangChain object structures through user-controlled fields like metadata, additional_kwargs, or response_metadata via prompt injection.
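Applications that cannot upgrade immediately could defensively scan untrusted fields for the marker before serializing them. The helper below is an illustrative sketch, not an official LangChain API; additional_kwargs and response_metadata are real message fields, but the scan itself is an assumption about one possible mitigation:

```python
def contains_lc_marker(value):
    """Recursively check untrusted data for LangChain-style 'lc' markers,
    so user-controlled fields can be rejected before serialization."""
    if isinstance(value, dict):
        if "lc" in value:
            return True
        return any(contains_lc_marker(v) for v in value.values())
    if isinstance(value, list):
        return any(contains_lc_marker(v) for v in value)
    return False

# A payload injected into an LLM response field is caught by the scan:
payload = {"additional_kwargs": {"x": {"lc": 1, "type": "constructor",
                                       "id": ["pkg", "Cls"]}}}
assert contains_lc_marker(payload)
assert not contains_lc_marker({"additional_kwargs": {"note": "benign"}})
```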

The patch released by LangChain introduces new restrictive defaults in load() and loads() by means of an allowlist parameter "allowed_objects" that allows users to specify which classes can be serialized/deserialized. In addition, Jinja2 templates are blocked by default, and the "secrets_from_env" option is now set to "False" to disable automatic secret loading from the environment.
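The allowlist idea behind the fix can be sketched like this. The names (safe_load, ALLOWED_OBJECTS) and the string return value are illustrative stand-ins, not the patched library's API; the real load()/loads() take an allowed_objects parameter per the advisory:

```python
import json

# Only explicitly permitted class paths may be reconstructed; everything
# else is rejected by default. Example allowlist entry is hypothetical.
ALLOWED_OBJECTS = {("langchain_core", "messages", "HumanMessage")}

def safe_load(text, allowed_objects=ALLOWED_OBJECTS):
    data = json.loads(text)
    if isinstance(data, dict) and "lc" in data:
        path = tuple(data.get("id", []))
        if path not in allowed_objects:
            raise ValueError(f"refusing to instantiate {path}: not allowlisted")
        return ("CONSTRUCT", path)  # a real loader would build the object here
    return data

# A constructor reference outside the allowlist is blocked:
evil = json.dumps({"lc": 1, "type": "constructor", "id": ["os", "system"]})
blocked = False
try:
    safe_load(evil)
except ValueError:
    blocked = True
assert blocked
```

Deny-by-default allowlisting shifts the burden from "spot every dangerous class" to "name the few classes you actually need", which is why it pairs well with disabling Jinja2 templates and env-based secret loading.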

The following versions of langchain-core are affected by CVE-2025-68664 -

  • >= 1.0.0, < 1.2.5 (Fixed in 1.2.5)
  • < 0.3.81 (Fixed in 0.3.81)

It's worth noting that there exists a similar serialization injection flaw in LangChain.js that also stems from not properly escaping objects with "lc" keys, thereby enabling secret extraction and prompt injection. This vulnerability has been assigned the CVE identifier CVE-2025-68665 (CVSS score: 8.6).


It impacts the following npm packages -

  • @langchain/core >= 1.0.0, < 1.1.8 (Fixed in 1.1.8)
  • @langchain/core < 0.3.80 (Fixed in 0.3.80)
  • langchain >= 1.0.0, < 1.2.3 (Fixed in 1.2.3)
  • langchain < 0.3.37 (Fixed in 0.3.37)

In light of the criticality of the vulnerability, users are advised to update to a patched version as soon as possible.

"The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations," Porat said. "This is exactly the kind of 'AI meets classic security' intersection where organizations get caught off guard. LLM output is an untrusted input."



Source: https://thehackernews.com/2025/12/critical-langchain-core-vulnerability.html