
KINGSTON, Wash. — On Friday afternoon, President Trump ordered every federal agency to stop using Anthropic’s AI technology. Defense Secretary Pete Hegseth followed by designating the company a “supply-chain risk to national security,” a label the government typically reserves for companies like Huawei.
Anthropic’s offense: refusing to remove contract provisions that prohibited the Pentagon from using its AI model, Claude, for mass domestic surveillance or fully autonomous weapons.
Within hours, OpenAI announced it had struck a deal to replace Claude on the Pentagon’s classified networks. CEO Sam Altman said his company shares the same red lines. The Pentagon apparently accepted those terms from OpenAI while punishing Anthropic for insisting on them.
If that sounds incoherent, it is. But only if you’re looking at this as a contract dispute. Viewed as the latest cycle in a pattern I’ve been covering for a decade, the logic comes into focus.
In October 2016, I watched a larger-than-life Edward Snowden appear via Google Plus Hangout on two massive screens at the Boulders Resort in Scottsdale, Arizona. About 250 attendees of IDT911’s Privacy XChange Forum sat watching a young American exile, beaming in from Russia, explain his rationale for handing classified NSA documents to journalists.
The irony was hard to miss: his image was delivered through Google’s globe-spanning data centers, the same infrastructure the NSA had tapped to conduct the mass surveillance Snowden had exposed. Three years after his disclosures, what he had revealed was not a rogue operation. It was architecture.
The NSA had built the plumbing to collect telephone metadata on virtually every American, authorized by secret court orders under Section 215 of the Patriot Act. The collection ran through the infrastructure itself: the cables, the switches, the carrier networks. The government didn’t need your phone. It needed your phone company.
The public response produced the USA FREEDOM Act of 2015, which ended the bulk telephone metadata program and added transparency requirements for the surveillance court. Reform happened. But surveillance adapted. Collection migrated to other legal authorities, namely Section 702 of FISA and Executive Order 12333, which Congress and the courts have been slower to constrain. The lesson was structural: public accountability didn’t stop surveillance. It pushed it into less visible channels.
Three years after the Snowden disclosures, the choke point moved from the cable to the device. In February 2016, the FBI sought a court order compelling Apple to build custom software that would bypass the passcode security on the iPhone recovered from the San Bernardino shooter. Apple CEO Tim Cook published an open letter calling it a backdoor and refused.
The FBI invoked the All Writs Act of 1789. Apple assembled a legal team led by former Solicitor General Ted Olson, who told CNN that compliance would lead to a police state. Major tech firms filed amicus briefs. Public opinion split roughly in half.
The case never produced a ruling. The FBI found a third party, reportedly an Australian firm called Azimuth Security, that cracked the phone using a zero-day vulnerability. The government dropped its demand the day before the hearing. Apple held its ground. No backdoor was built. No binding precedent was set. But the FBI paid more than $1.3 million for the exploit tool, and a federal court later ruled the agency could use it in future investigations. The state found a workaround, and the underlying capability question was never resolved.
Now the choke point has moved again. The Pentagon’s dispute with Anthropic is not about accessing a device or tapping a cable. It’s about who controls the behavioral boundaries of an AI model that operates inside classified military systems. Claude was already deployed on the Pentagon’s most sensitive networks.
It was reportedly used in the operation to capture Nicolás Maduro. Defense officials praised its capabilities. By all accounts, the two contested safeguards, the surveillance prohibition and the autonomous weapons restriction, had never been triggered in practice.
The issue was not operational. It was contractual. The Pentagon wanted the right to use Claude for “all lawful purposes” without a private company retaining the ability to define exceptions. Anthropic’s position was that certain uses fall outside what today’s AI models can safely do. The Pentagon’s position was that once the military buys a tool, its own standards govern how it gets used.
That framing is familiar. In the Snowden era, the government argued that metadata collection was legal under existing statute and therefore didn’t require additional constraints. With Apple, the government argued that the All Writs Act gave courts authority to compel technical assistance. In each case, the state asserted that existing legal frameworks already provided sufficient safeguards. In each case, the company or the whistleblower argued that the structural capability being sought would outlast any particular legal interpretation.
The escalation follows a line. The NSA harvested data by tapping infrastructure that carriers built and maintained. The FBI sought to compel a device manufacturer to weaken security features it had engineered. The Pentagon sought to compel an AI company to remove behavioral safeguards it had trained into the model itself. Cables, then devices, then models. Each cycle, the collection point moves closer to the layer where judgment and language live.
What makes this round different is the speed and the stakes. Snowden’s disclosures took years to produce legislative reform. The Apple case played out over six weeks and produced no binding law. The Anthropic confrontation went from negotiation to federal blacklisting in days. The compression of these cycles is itself part of the pattern. The window for public deliberation gets shorter each time.
Altman’s move deserves scrutiny. OpenAI announced the same red lines Anthropic had drawn and secured the contract Anthropic lost. If the Pentagon accepted those terms from one company while punishing another for identical terms, the dispute was never about the safeguards. It was about who gets to sit at the table and on what terms. Anthropic says it will challenge the supply-chain designation in court. Hundreds of employees at OpenAI and Google have signed petitions in support.
The pattern from the previous two cycles suggests what happens next. Some reform will follow. Some constraint will be imposed. And the surveillance capability will migrate to the next layer down, the one we haven’t built governance for yet. The question worth asking is not whether Anthropic’s stand was principled or strategic. It’s what the next choke point looks like, and whether we’ll recognize it before the architecture is already in place.
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor’s note: I used Claude and ChatGPT to assist with research compilation, source discovery, and early draft structuring. All interviews, analysis, fact-checking, and final writing are my own. I remain responsible for every claim and conclusion.)
February 28th, 2026 | My Take | Top Stories