On a Tuesday afternoon in late April, Pentagon AI chief Doug Matty went on the record to confirm something the defense industry had been quietly celebrating for weeks: Google's Gemini 3.1 was officially live on GenAI.mil, the Pentagon's classified AI platform, cleared for "any lawful purpose."

Weeks earlier, Anthropic had been declared a supply-chain risk: the same designation reserved for foreign adversaries.

The crime? Refusing to let the Pentagon use Claude for domestic mass surveillance and autonomous weapons targeting.

What Actually Happened

In March 2026, the Department of Defense and Anthropic broke off negotiations after Anthropic refused to remove the carve-outs in its usage policy that prohibit surveillance of US citizens and use in lethal autonomous weapons. The Pentagon wanted unrestricted access. Anthropic said no. The DoD then formally added Anthropic to its supply-chain risk list, a designation that effectively bars federal agencies from procuring its services and that is normally applied to entities controlled by foreign adversaries.

Google walked into the void. On April 28, the Pentagon expanded GenAI.mil to include Gemini 3.1 with classified-network access. According to Wall Street Journal reporting, Google's contract includes language saying it "does not intend" its AI to be used for domestic mass surveillance or autonomous weapons. That language is functionally identical to what OpenAI agreed to in its own contract in March. It is not a use restriction. It is a press release in legal clothing.

OpenAI signed its DoD deal in March. xAI signed in April. Anthropic remains the only frontier lab refusing.

The Letter Nobody Read In Time

On April 27, the day before Matty's confirmation, 950 Google employees published an open letter to Sundar Pichai. The letter asked Google to follow Anthropic's lead. It used the phrase "human lives are already being lost." It warned about lethal autonomous weapons. It cited the same ethical framework Google itself had spent the last decade publishing in research papers.

The contract was already signed.

Google also quietly exited a $100 million Pentagon drone-swarm competition the same week, which is the kind of optics maneuver you do when your PR team has read the letter and your legal team has already cashed the check.

My Opinion

I'll be blunt. The Pentagon just punished a company for having a usage policy and rewarded a company for not having one. That is the entire story. Everything else is a footnote.

What bothers me isn't that the Pentagon wants AI; of course it does, every military on earth does. What bothers me is that "supply-chain risk" used to mean "controlled by China." Now it means "won't let us use your product to kill people without a human in the loop." That is a meaningful definitional shift, and nobody at the DoD bothered to explain it. Anthropic's policy is not exotic. It is the same policy Google publicly endorsed in 2018, when 4,000 of its employees forced it to walk away from Project Maven. Eight years later, the same company won't even put a binding use restriction in its contract. That isn't progress. That is institutional amnesia with a quarterly earnings target.

The 950 Google employees who signed that letter are about to learn what the Project Maven 4,000 already know: when the contract is large enough, the principles are negotiable. Anthropic kept its principles and lost the contract. Google kept the contract and lost its employees' trust. Pick your trade. Both of them just did.


Author: Yahor Kamarou (Mark) / www.humai.blog / 29 Apr 2026