On the evening of February 27, 2026, Sam Altman posted a few lines on X that sent shockwaves through the AI industry: OpenAI had signed an agreement with the U.S. Department of Defense to deploy its models across classified military networks. The contract, worth up to $200 million, was announced within hours of the Pentagon designating OpenAI's chief rival, Anthropic, as a national security supply-chain risk.

The timing was not subtle. Neither was the backlash.

In the days that followed, Altman publicly admitted the deal had been rushed. A senior OpenAI executive resigned in protest. Claude overtook ChatGPT in Apple's App Store as users switched en masse. And legal experts picked apart contract language that, they argued, left more doors open than closed.


How It Started: Anthropic's Standoff with the Pentagon

Anthropic had been the first AI lab to deploy its Claude models on the Pentagon's classified networks, under a contract worth up to $200 million. That deal had been running for over a year. Then the contract came up for renewal and the relationship soured fast.

The dispute was straightforward. The Pentagon wanted to use Anthropic's technology for "any lawful purpose." Anthropic wanted two explicit carve-outs: no use for mass domestic surveillance of Americans, and no use for fully autonomous weapons without human oversight.

Pentagon officials said those restrictions were "jeopardizing critical military operations." Anthropic held firm. On February 26, CEO Dario Amodei said the company "cannot in good conscience accede" to the demands, citing two reasons: current AI models are not reliable enough for autonomous weapons, and mass domestic surveillance constitutes a fundamental rights violation.

The Pentagon responded by designating Anthropic a supply-chain risk, an unprecedented move that would bar any U.S. military contractor from doing business with the company. Then Donald Trump posted on Truth Social directing "EVERY Federal Agency" to immediately stop using Anthropic's technology, calling it "A RADICAL LEFT, WOKE COMPANY."


OpenAI's Move: Same Day, Hours Later

Altman had sent an internal memo to staff on Thursday evening saying that OpenAI was pursuing a deal with the same safety limits Anthropic had demanded and that it shared Anthropic's "red lines." When the deal was announced the next evening, many observers noticed the contract language told a more complicated story.

The critical difference, according to OpenAI, was architectural rather than contractual. Where Anthropic tried to enshrine restrictions explicitly in the text, OpenAI embedded them in the technical design: cloud-only deployment, its own safety stack in place, cleared personnel "in the loop." The contract still allowed the Pentagon to use the technology for "any lawful purpose" — but Altman said the two major restrictions were also "put into our agreement."

The problem was that the two claims were hard to square. If the Pentagon had just blacklisted Anthropic for demanding exactly these restrictions, how had OpenAI negotiated them in while also agreeing to "any lawful purpose"?

Critics pointed to the contract's reference to Executive Order 12333, a Reagan-era directive that U.S. intelligence agencies have historically used to justify collecting data on Americans by intercepting communications outside U.S. borders. Techdirt's Mike Masnick put it directly: the deal "absolutely does allow for domestic surveillance," because of how 12333 operates in practice.

The dispute might have been settled if the full contract had been released. It was not. When pressed to share specific language, OpenAI's national security chief Katrina Mulligan told one X user: "I do not agree that I'm obligated to share contract language with you." Brad Carson, a former under secretary of the Army, was blunter: "I've reluctantly come to the conclusion that this provision doesn't really exist, and they are just trying to fake it."


The Weekend: Backlash, Amendment, and a Resignation

By Saturday, Claude had overtaken ChatGPT in Apple's App Store. Chalk messages appeared outside OpenAI's San Francisco offices: "Where are your red lines?" Inside the company, employees were reportedly furious — not necessarily opposed to a Pentagon contract in principle, but frustrated by the speed and the opacity.

On Monday, Altman published an internal memo on X.

"We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy," he wrote. "In hindsight, we shouldn't have rushed."

He also announced amendments: new language explicitly stating the AI system "shall not be intentionally used for domestic surveillance of U.S. persons and nationals," and a clarification that intelligence agencies, including the NSA, were excluded from this specific contract. Many observers remained unpersuaded. Senator Ron Wyden called it "serious cause for alarm," and because the full contract text was never published, there was no independent way to verify that the amendments had any binding force.

On March 7, Caitlin Kalinowski, OpenAI's lead for robotics and consumer hardware, resigned. She said the decision was directly connected to the Pentagon deal: "Surveillance of Americans without judicial oversight and the use of lethal autonomous weapons without human authorization are lines that deserved more deliberation than they got."


The Bigger Question

Altman's underlying logic was not obviously wrong. The U.S. military will use AI regardless of whether safety-focused labs engage. Refusing to participate doesn't make that use safer; it may just mean less careful companies fill the gap. Altman framed the decision plainly: "If we are right and this leads to de-escalation, we will look like geniuses. If not, we will continue to be characterized as rushed and uncareful."

The harder question is whether any company can meaningfully constrain how the U.S. government uses technology once it is deployed inside classified systems. The history of surveillance programs, from PRISM to the NSA's bulk collection activities, suggests that company-level agreements are weak instruments against institutional drift. Only Congress can establish durable, enforceable limits — and it hasn't.

For now, OpenAI has the Pentagon contract. It also has the memory of a week that cost it a key executive, a significant chunk of its App Store standing, and the goodwill of much of the AI research community — all for a deal its own CEO said was handled badly.

Timeline

Feb 25, 2026: Hegseth tells Anthropic to drop its surveillance restrictions or face designation as a supply-chain risk
Feb 26: Anthropic refuses; Dario Amodei says the company "cannot in good conscience accede"
Feb 27: Trump bans all federal use of Anthropic; hours later, OpenAI announces its Pentagon deal
Feb 28: Critics dissect the contract language; Claude passes ChatGPT in the App Store
Mar 2: Altman admits the deal was rushed; the contract is amended with explicit anti-surveillance language
Mar 5: The Financial Times reports Anthropic is back at the negotiating table with the Pentagon
Mar 7: Caitlin Kalinowski resigns from OpenAI, citing the Pentagon deal
