Last Monday, something happened that would have been unthinkable eighteen months ago. OpenAI, Anthropic, and Google—three companies that spend every waking hour trying to destroy each other's market share—sat down and agreed to share their most sensitive internal security data. The target: DeepSeek, Moonshot AI, and MiniMax, three Chinese AI labs that have been systematically cloning American frontier models through adversarial distillation.

The vehicle is the Frontier Model Forum, a nonprofit the three companies co-founded with Microsoft back in 2023. Until now, it existed mostly to issue safety pledges and look good in front of Congress. That changed on April 6.

Here's what triggered it. Anthropic's security team identified more than 16 million Claude exchanges generated by roughly 24,000 fraudulent accounts, all traced back to DeepSeek, Moonshot AI, and MiniMax. These weren't casual API calls. They were coordinated, high-volume extraction campaigns designed to pull out reasoning chains, specialized knowledge, and behavioral patterns. The kind of data you need to build a cheaper copy of someone else's $10 billion model.

OpenAI confirmed the same pattern. In their filing, they accused DeepSeek of deploying "increasingly sophisticated methods" to extract capabilities from GPT models, calling it an attempt to "free-ride on the capabilities developed by OpenAI and other U.S. frontier labs." Google hasn't released specific numbers but joined the intelligence-sharing framework without hesitation.

The three companies are now pooling detection methods: flagging abnormal traffic patterns, identifying repeated prompt sequences designed to extract reasoning chains, catching bot-like behavior routed through proxy networks, and correlating request timing and shared payment methods across supposedly independent accounts. It's a threat-intelligence operation, not a press release.
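None of these signals is exotic. Here's a minimal sketch of what account-level scoring might look like; everything in it (the field names, the thresholds, the score_account helper) is my illustration, not anything the Forum has published:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AccountActivity:
    account_id: str
    payment_fingerprint: str         # hashed billing identity
    request_timestamps: list[float]  # epoch seconds, one per API call
    prompt_prefixes: list[str]       # first N tokens of each prompt

def score_account(acct: AccountActivity,
                  volume_threshold: int = 10_000,
                  repeat_ratio: float = 0.5) -> float:
    """Crude 0-3 suspicion score. All thresholds are illustrative."""
    score = 0.0
    # Signal 1: abnormal volume, far beyond what an organic user sends.
    if len(acct.request_timestamps) > volume_threshold:
        score += 1.0
    # Signal 2: repeated prompt scaffolding. Extraction campaigns reuse
    # the same templates (fixed few-shot frames, "think step by step")
    # across huge numbers of calls.
    if acct.prompt_prefixes:
        top_count = Counter(acct.prompt_prefixes).most_common(1)[0][1]
        if top_count / len(acct.prompt_prefixes) > repeat_ratio:
            score += 1.0
    # Signal 3: bot-like cadence, inter-request gaps with near-zero variance.
    ts = sorted(acct.request_timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    if len(gaps) > 10:
        mean = sum(gaps) / len(gaps)
        variance = sum((g - mean) ** 2 for g in gaps) / len(gaps)
        if variance < 0.5:  # metronomic timing, almost never human
            score += 1.0
    return score

def shared_payment_clusters(accounts: list[AccountActivity]) -> dict[str, list[str]]:
    """Group 'independent' accounts that quietly share a payment method."""
    clusters: dict[str, list[str]] = {}
    for acct in accounts:
        clusters.setdefault(acct.payment_fingerprint, []).append(acct.account_id)
    return {fp: ids for fp, ids in clusters.items() if len(ids) > 1}
```

The hard part isn't any single heuristic. It's that each company only sees its own traffic, which is exactly why pooling the signals matters.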

This matters because adversarial distillation is cheap and it works. You don't need to train a model from scratch when you can systematically query a frontier model with millions of carefully designed prompts, collect the outputs, and fine-tune a smaller model to mimic the results. The Chinese labs reportedly spent a fraction of what it cost to build the original models—and got 80-90% of the performance.
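Mechanically, the collection half of that pipeline is almost trivially simple. A minimal sketch in Python, where query_teacher stands in for whatever API client the attacker wraps around the target model, and the JSONL output format is my assumption:

```python
import json
import time
from typing import Callable

def collect_distillation_pairs(
    prompts: list[str],
    query_teacher: Callable[[str], str],  # wraps the target model's API
    out_path: str = "distill_pairs.jsonl",
    delay_s: float = 0.1,                 # pace requests to stay under rate limits
) -> None:
    """Query the teacher model and log (prompt, response) pairs to JSONL.
    The pairs later become supervised fine-tuning data for a smaller student."""
    with open(out_path, "a", encoding="utf-8") as f:
        for prompt in prompts:
            response = query_teacher(prompt)
            f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")
            time.sleep(delay_s)
```

Fine-tune a smaller open-weights model on the resulting pairs with ordinary supervised training and you get the cheaper copy described above. The weights never leave the provider; only the behavior does.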

My Opinion

I'll be blunt: this alliance is overdue by at least a year. While OpenAI, Anthropic, and Google were busy suing each other's employees and writing blog posts about competitive moats, DeepSeek, Moonshot AI, and MiniMax were running 24,000 fake accounts against Claude alone. That's not a security incident. That's an industrial extraction operation running in broad daylight.

Here's what bugs me about the framing, though. The Forum is calling this "adversarial distillation" and treating it like a novel cyber threat. But let's be honest—the real failure is that these companies exposed their most valuable intellectual property through pay-per-query APIs with minimal authentication. DeepSeek didn't hack anything. They bought API access and used it at scale. The vulnerability was the business model itself.

The uncomfortable question nobody's asking: if $50 million worth of API calls can replicate a $10 billion training run, what does that say about the defensibility of frontier AI? The moat isn't the model weights. It's the data, the RLHF, the infrastructure to keep iterating faster than someone can copy you. This alliance buys time, but it doesn't solve the fundamental problem. The next DeepSeek won't use 24,000 accounts. It'll use 240,000 accounts across 50 providers, and no amount of traffic analysis will catch it in time.
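Run the numbers from the reporting above and the scaling problem is obvious. A back-of-envelope sketch; the query and account counts are from this article, the interpretation is mine:

```python
# Per-account volume: observed campaign vs. a hypothetical distributed one.
total_queries = 16_000_000                    # exchanges attributed to the campaign

per_acct_observed = total_queries / 24_000    # ~667 queries/account: anomalous
per_acct_future   = total_queries / 240_000   # ~67 queries/account: looks organic

providers = 50
queries_per_provider  = total_queries / providers  # each provider sees ~320,000
accounts_per_provider = 240_000 / providers        # spread over ~4,800 accounts

print(f"observed: {per_acct_observed:,.0f} queries per account")
print(f"future:   {per_acct_future:,.0f} queries per account")
print(f"each provider sees {queries_per_provider:,.0f} queries "
      f"across {accounts_per_provider:,.0f} accounts")
```

At roughly 67 queries per account, every account looks like an ordinary developer. Per-provider traffic analysis goes blind; only cross-provider correlation of payment and timing metadata has a chance, and even that signal thins out fast.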


Author: Yahor Kamarou (Mark) / www.humai.blog / 13 Apr 2026