Pentagon ditches Anthropic AI over “security risk” and OpenAI takes over

On Friday the US Pentagon cut ties with Anthropic, the company behind Claude AI. Defense Secretary Pete Hegseth designated the San Francisco-based company a “supply-chain risk to national security.”

The supply-chain risk designation means that no contractor, supplier, or partner doing business with the US military can deal with Anthropic. The label had previously been applied only to foreign adversaries like Huawei; using it against a US company marks a rare escalation in a government-industry dispute. According to reports, President Donald Trump also ordered every federal agency to stop using Anthropic’s technology.

What Anthropic wouldn’t budge on

Anthropic called the designation “unlawful and politically motivated” and said it intends to challenge it in court.

At the center of the dispute is how far Anthropic believes its models should be allowed to go inside military systems. Anthropic, which was the first frontier AI company deployed on the military’s classified networks, wanted two contractual restrictions on its AI model Claude, as outlined in its response to the Pentagon’s announcement. It forbade the Pentagon from using its tech for mass domestic surveillance of Americans and refused to allow its tech in fully autonomous weapons systems.

The Pentagon had previously demanded that all AI vendors agree to “all lawful purposes” language as part of their contracts. Anthropic told ABC that what the Pentagon finally offered left the door open for the government to violate the company’s no-surveillance and no-weapons clauses.

Defense Secretary Hegseth responded with a statement cancelling Anthropic’s $200m Pentagon contract, awarded last July. He accused Anthropic of attempting to seize veto power over military operations and called the company’s position fundamentally incompatible with American principles.

Anthropic’s CEO Dario Amodei called the government’s response retaliatory and punitive and promised to challenge the designation in court.

Legal scholars suggest that the AI company could have a strong case, questioning whether Hegseth can meet the statutory requirements for such a designation, which is intended to protect military systems from adversarial sabotage rather than to resolve a commercial disagreement over contract terms.

Dan W. Ball, senior fellow at the American Foundation for Innovation, called the Pentagon’s move “attempted corporate murder,” arguing that Google, Amazon, and NVIDIA would have to detach themselves from Anthropic if Hegseth got his way. Amazon is Anthropic’s primary cloud computing provider, but the AI company also uses Google’s data centers extensively. Both companies are investors in Anthropic, as is NVIDIA, which also partners with the AI company on GPU engineering. If the Pentagon’s designation restricts federal contractors from integrating Anthropic technology into defense-related systems, those partners could be required to separate or ringfence any federal-facing work involving the company.

OpenAI steps in

In a whirlwind of policy changes by the US military, the Pentagon also signed a deal with ChatGPT creator OpenAI on Friday evening, just a few hours after dropping Anthropic.

OpenAI CEO Sam Altman said the agreement preserved the same principles Anthropic had been blacklisted for defending.

The difference, according to Altman, is the enforcement mechanism. Instead of hard contractual prohibitions, OpenAI accepted the “all lawful purposes” framework but layered on architectural controls: cloud-only deployment, a proprietary safety stack the Pentagon agreed not to override, and cleared engineers embedded forward. OpenAI said these protections made the company confident that the Pentagon couldn’t cross the red lines it shares with Anthropic.

Altman reportedly said Anthropic’s approach differed because it relied on specific contract language rather than existing legal protections, adding Anthropic “may have wanted more operational control than we did.”

The morning after

The policy dispute did not immediately change how existing systems were operating. According to reporting by The Wall Street Journal and Axios, US Central Command used Anthropic’s AI during Operation Epic Fury, a coordinated US–Israeli operation targeting Iran. The outlets reported that the system was used for intelligence assessment, target analysis, and operational modeling.

Claude remained in use because it was already embedded in certain classified military systems. As a senior defense official previously told Axios:

“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”

Hegseth announced a six-month period during which the Pentagon will phase Anthropic’s AI out of its systems.

Consumers vote with their feet

The dispute has also prompted reactions from some AI industry employees and users. More than 875 employees across Google and OpenAI signed an open letter backing Anthropic’s stance. According to the letter:

“They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.”

A consumer boycott under the name QuitGPT is organizing a campaign to stop using ChatGPT, along with a protest at OpenAI’s HQ this week. Meanwhile, Claude rocketed to the top of Apple’s App Store over the weekend.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

About the author

Danny Bradbury has been a journalist specialising in technology since 1989 and a freelance writer since 1994. He covers a broad variety of technology issues for audiences ranging from consumers through to software developers and CIOs. He also ghostwrites articles for many C-suite business executives in the technology sector. He hails from the UK but now lives in Western Canada.


Source: https://www.malwarebytes.com/blog/news/2026/03/pentagon-ditches-anthropic-ai-over-security-risk-and-openai-takes-over