In July 2025, Anthropic signed a $200 million contract with the U.S. Department of Defense, making Claude the first frontier AI system cleared for classified military networks. Eight months later, the same Pentagon had designated Anthropic a national security threat, Trump had ordered every federal agency to stop using its products, and OpenAI had stepped in to take its place.

Anthropic filed suit in federal court, arguing the government punished it for publicly stating its views on AI safety. On March 24, 2026, a federal judge in San Francisco heard oral arguments on whether to block the designation while the lawsuit proceeds. Whatever she decides will be a landmark for every AI company that does business with the U.S. government.

This is how it unraveled.


Two Lines Anthropic Would Not Cross

The dispute came down to two restrictions Anthropic had written into its contract and refused to remove.

  • No autonomous weapons. Anthropic's position, backed by internal testing, was that current AI models are not reliable enough to make lethal targeting decisions without a human in the loop. This was an engineering assessment, not a political objection to military AI.
  • No mass surveillance. Near the end of negotiations, the Pentagon demanded the ability to use Claude for what CEO Dario Amodei later described as "analysis of bulk acquired data," a phrase that, he said, "exactly matched this scenario we were most worried about."

The Pentagon's counter was simple: it wanted Claude available for "any lawful purpose" and argued private companies cannot dictate how the military uses technology in warfare. Officials said the Pentagon had no interest in surveillance or autonomous weapons. Anthropic's position was that "we have no interest" and "we will contractually agree not to" are meaningfully different statements.

On February 24, Defense Secretary Pete Hegseth met with Amodei and gave Anthropic a deadline: comply by 5:01 p.m. on Friday, February 27, or face consequences. He also threatened to invoke the Defense Production Act to compel compliance. Amodei said the threats did not change Anthropic's position.


The Truth Social Post, the Blacklist, and What Followed

Anthropic did not budge, and the consequences arrived before the deadline.

At 3:47 p.m. on February 27, Trump posted on Truth Social calling Anthropic "A RADICAL LEFT, WOKE COMPANY" and directing every federal agency to immediately stop using its products. Hegseth followed with a supply-chain risk designation.

Why the designation was legally unprecedented:

The supply-chain risk tool was designed for foreign adversaries, companies like Huawei that are controlled by hostile governments. It had never been applied to an American company. Federal law defines supply-chain risk as risk that "an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" a system, and Anthropic's refusal to remove safety guardrails does not meet that definition. Senator Kirsten Gillibrand called it "a dangerous misuse of a tool meant to address adversary-controlled technology."

Hegseth initially claimed the designation would bar all military contractors from any commercial activity with Anthropic whatsoever, a reading that would have pulled in Amazon, Microsoft, and Google, all of which have both Pentagon contracts and major financial stakes in Anthropic. The formal designation letter Anthropic eventually received was narrower: contractors could not use Claude in Pentagon work specifically, but Anthropic products could remain available on their platforms for all non-defense work. Microsoft, Google, and Amazon confirmed this interpretation publicly.

Hours later, OpenAI moved in:

Sam Altman announced a deal with the Pentagon the same evening, claiming OpenAI had secured the same safety restrictions the Pentagon had just punished Anthropic for demanding. Critics questioned whether the contract language actually achieved that, and Lawfare noted the comparison damaged the government's case: if OpenAI's restrictions were acceptable, what distinguished Anthropic's as a national security threat? Altman later admitted the deal was rushed and "looked opportunistic and sloppy."

Amodei sent a memo to staff the same night. It leaked. He called OpenAI's approach "safety theater," described Altman's statements as "straight up lies," and said one real cause of the dispute was that Anthropic "hasn't given dictator-style praise to Trump." He later apologized for the tone, calling it the product of "a chaotic series of announcements" that did not reflect his considered views. The public reaction nonetheless moved in Anthropic's favor: Claude surpassed ChatGPT in Apple's App Store in the days after the designation, and the company reported more than a million new signups per day.


The Inconvenient Operational Reality

One fact loomed over the entire dispute: Claude was not an abstract capability but was embedded in active combat operations. The Wall Street Journal reported Claude was used in the Maduro arrest operation. The Washington Post and Reuters reported it was being used for intelligence processing and targeting in the ongoing U.S. conflict with Iran. The commander of U.S. Central Command confirmed the military was using "advanced AI tools" to "sift through vast amounts of data in seconds" during Iran strikes.

The Pentagon was simultaneously blacklisting the technology, threatening to shut it down, and continuing to rely on it in combat. Trump's six-month phase-out period was a tacit acknowledgment that Claude was too embedded to cut off overnight. Amodei said Anthropic would cooperate and continue supplying models at nominal cost to ensure "warfighters won't be deprived of important tools in the middle of major combat operations." In effect, Anthropic was continuing to serve a government that had just declared it a national security threat.


The Lawsuits

On March 9, Anthropic filed two federal lawsuits: one in the Northern District of California and one in the U.S. Court of Appeals for the D.C. Circuit.

  • First Amendment claim: The government blacklisted Anthropic not for a genuine security risk but for its publicly stated views on AI safety. Trump's Truth Social post calling Anthropic "RADICAL LEFT" and "WOKE" was cited as evidence the designation was politically motivated rather than legally justified.
  • Fifth Amendment claim: Anthropic was denied due process, with no formal hearing, no independent review, and no procedural safeguards before a social media post and press conference imposed severe business penalties.

Anthropic asked for a preliminary injunction to block enforcement while the case proceeds. The hearing was originally set for April 3, but Judge Rita Lin moved it to March 24 after Anthropic's counsel argued the company was suffering "grave and irreparable injury" every day and DOJ lawyer James Harlow, asked whether the government would commit to no further retaliatory actions before the hearing, said he was "not prepared to offer any commitments."


Who Filed in Support of Anthropic

The coalition that formed ahead of the hearing was unusual.

  • Researchers from OpenAI and Google DeepMind, filing in a personal capacity: the designation could harm U.S. AI competitiveness and suppress critical public discussion.
  • 22 former senior U.S. military officials, including former Air Force, Army, and Navy secretaries: Hegseth's actions are "retribution against a private company that has displeased the leadership."
  • Microsoft: called for a pause, writing that "American AI should not be used to conduct domestic mass surveillance or start a war without human control."
  • Senator Elizabeth Warren: called the designation "retaliation" and argued the Pentagon could simply have terminated the contract normally.

The DOJ's March 17 response rejected Anthropic's framing entirely. Its position: Anthropic's refusal to comply was a business decision, not protected speech, and the designation was a straightforward national security call.


Two Things the Court Filings Revealed

Two sworn declarations filed ahead of the March 24 hearing added significant new detail to the public record.

1. The kill-switch argument was technically false.

The Pentagon claimed Anthropic could disable or alter Claude's behavior during warfighting operations. Thiyagu Ramasamy, Anthropic's head of public sector, declared under oath that this is not possible: once Claude is deployed inside a government-secured, air-gapped system operated by a third-party contractor, Anthropic has no remote access, no kill switch, no backdoor, and no mechanism to push unauthorized updates. Ramasamy also noted this concern never came up during months of negotiations and appeared for the first time in the government's court filings.

2. The Pentagon privately said the sides were "very close" after publicly declaring the relationship over.

Sarah Heck, Anthropic's head of policy and a former NSC official present at the February 24 meeting, attached a previously unreported email to her declaration. On March 4, the day after the formal designation, Under Secretary Emil Michael emailed Amodei saying the two sides were "very close" on autonomous weapons and surveillance. Yet on March 5, Michael posted publicly that "there is no active Department of War negotiation with Anthropic," and a week later he told CNBC there was "no chance" of renewed talks. The private email said something very different.


What This Case Will Actually Decide

The Anthropic dispute has always been about something larger than one contract. The core question is whether the U.S. government can use anti-espionage procurement law to punish a domestic AI company for the ethical positions it holds about how its technology should be used.

  • The Pentagon's argument: the government cannot be constrained by private companies' preferences in national security contexts. If a vendor cannot accept military terms, the government finds another vendor, and that is procurement rather than retaliation.
  • Anthropic's argument: there is a meaningful difference between the government choosing not to buy a product and using anti-espionage law to turn every defense contractor in the country into an enforcement mechanism against a domestic company because of that company's publicly stated views.

Lawfare put it plainly: the government has every right to stop buying from Anthropic through standard procurement channels. What it cannot do is bypass the system Congress built and use an anti-espionage statute to override the procurement authority of dozens of agencies at once. The stakes are large: Anthropic's CFO estimated the designation could reduce 2026 revenue by billions of dollars at a company valued at $380 billion with projected annual revenue of $14 billion. Whatever Judge Lin decides will establish the first legal precedent for what the U.S. government can and cannot do to an AI company that refuses to subordinate its ethics to a procurement contract.


Timeline

  • Jul 2025 – Anthropic signs $200M Pentagon contract; Claude becomes the first frontier AI on classified military networks
  • Feb 24, 2026 – Hegseth meets Amodei; demands removal of safety restrictions by 5:01 p.m. Friday
  • Feb 26 – Amodei publicly refuses: "cannot in good conscience accede"
  • Feb 27, 3:47 p.m. – Trump posts on Truth Social; Hegseth issues supply-chain risk designation
  • Feb 27, evening – OpenAI announces Pentagon deal
  • Mar 3–6 – Anthropic receives the formal supply-chain risk designation; Amodei apologizes for leaked memo; back-channel talks denied publicly
  • Mar 9 – Anthropic files two federal lawsuits; Claude overtakes ChatGPT in App Store
  • Mar 12 – Microsoft, 22 former military officials, and OpenAI and Google DeepMind researchers file amicus briefs
  • Mar 17 – DOJ files 40-page response rejecting First Amendment framing
  • Mar 20 – Court filings reveal no kill switch exists and that the Pentagon privately said the sides were "very close"
  • Mar 23 – Senator Warren calls designation "retaliation"
  • Mar 24 – Hearing before Judge Rita Lin, San Francisco, 1:30 p.m.

Frequently Asked Questions

Why did Anthropic refuse the Pentagon's demands?

Anthropic drew two red lines: no use of Claude for autonomous weapons making lethal decisions without a human in the loop, and no mass domestic surveillance of Americans. Its position was that current AI is not reliable enough for autonomous targeting and that mass surveillance violates fundamental rights. These were narrow exceptions rather than a blanket refusal; Anthropic had operated a classified Pentagon contract for eight months before negotiations collapsed.

What is a supply-chain risk designation and why is this unprecedented?

It is a tool designed for companies tied to foreign adversaries, such as Huawei, and it had never before been applied to a U.S. company. The statute covers risk that an adversary may "sabotage, maliciously introduce unwanted function, or otherwise subvert" a system, not a company's refusal to remove safety guardrails. Legal experts, former military officials, and senators widely called it a misuse of the authority.

Was Claude being used in active military operations?

Yes. It was reportedly used in the Maduro arrest operation and for intelligence and targeting in the U.S. conflict with Iran. The Pentagon granted itself a six-month phase-out, acknowledging Claude was too embedded to cut off overnight.

What are Anthropic's lawsuits arguing?

The First Amendment suit argues the government punished Anthropic for its publicly stated AI safety positions, which constitutes retaliation against protected speech. The Fifth Amendment suit argues Anthropic was denied due process. Both seek to block the supply-chain risk designation.

What did the court filings reveal?

Two key findings: Anthropic has no remote access to Claude once it is deployed in an air-gapped government system, making the Pentagon's kill-switch argument technically false. Additionally, Under Secretary Emil Michael privately emailed Amodei on March 4 saying the two sides were "very close," the day after the formal designation and days before Michael publicly denied any ongoing negotiations.

Why does this case matter beyond Anthropic?

It will determine whether the U.S. government can use anti-espionage law to punish domestic AI companies for their ethical positions. If the government wins, AI companies working with the Pentagon lose the ability to set limits on how their technology is used. If Anthropic wins, it establishes that constitutional protections apply in these procurement disputes.

