On March 27, 2026, someone found Anthropic’s secrets in a publicly searchable database. Nobody hacked them. No nation-state intrusion. No zero-day exploit. A misconfigured content management system had exposed nearly 3,000 draft assets — including an unpublished blog post introducing a model the company had never announced publicly.

The model: Claude Mythos, codenamed Capybara.

The leaked document described Mythos as “by far the most powerful AI model we’ve ever developed.” More specifically, it called the model “currently far ahead of any other AI model in cyber capabilities” and warned it “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.” According to the draft, Mythos can identify previously unknown vulnerabilities in live production codebases, a capability Anthropic itself acknowledged as “dual-use.” Helpful to defenders. Equally useful to attackers.

The market didn’t wait for context. CrowdStrike dropped 7%. Palo Alto Networks fell 6%. Zscaler lost 4.5%. Okta, SentinelOne, and Fortinet each shed 3%. The iShares Cybersecurity ETF — a broad index of the sector — dropped 4.5% in a single session. Bitcoin slid alongside software stocks.

Anthropic confirmed the leak and said Mythos is currently being tested by “early access customers.” They’re restricting the initial rollout to organizations focused on cyber defense, giving defenders time to harden their systems before wider release. According to the draft, Mythos outperforms Claude Opus 4.6 “dramatically” on software coding, academic reasoning, and cybersecurity benchmarks, and it represents an entirely new capability tier above the existing Opus line, not just an incremental update.

Anthropic built its entire brand on being different. They’re the company that told the Pentagon no on mass surveillance and autonomous weapons. They published the Responsible Scaling Policy. They hired safety researchers before safety was fashionable in Silicon Valley.

And then they leaked their most dangerous model through a CMS misconfiguration. Not a sophisticated attack. A config error. The kind of mistake that gets a junior DevOps engineer fired in their first week.

The irony is structural, not accidental: a company whose entire pitch is “we think about AI risks more carefully than everyone else” failed to think carefully about the risks of an unsecured content management system. If your safety culture doesn’t extend to your infrastructure team, it’s a culture problem, not a research problem.
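Anthropic hasn’t said exactly what the misconfiguration was, but the failure mode is depressingly common with headless CMS platforms: a content API that happily serves unpublished entries to anyone who asks. As a purely hypothetical sketch (the endpoint, parameter name, and response shape below are invented for illustration, not taken from the incident), here’s the kind of one-minute check an infrastructure team could run against its own CMS:

```python
import requests

# Hypothetical CMS content API. The URL and the "status" filter are
# illustrative; real endpoint paths and parameter names vary by platform.
CMS_DRAFTS_URL = "https://cms.example.com/api/posts"

def drafts_publicly_readable(url: str) -> bool:
    """Probe the content API with no credentials and ask for drafts."""
    resp = requests.get(url, params={"status": "draft"}, timeout=10)
    # 401/403 means authentication is enforced; a 200 with entries in
    # the body means unpublished content is world-readable.
    if resp.status_code != 200:
        return False
    try:
        return bool(resp.json().get("data"))
    except ValueError:  # non-JSON response, so no draft payload leaked
        return False

if __name__ == "__main__":
    if drafts_publicly_readable(CMS_DRAFTS_URL):
        print("Drafts are exposed: lock down the content API.")
    else:
        print("Drafts require authentication.")
```

A check like that belongs in CI, not in a postmortem.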

My Opinion

I’ll be direct: Anthropic’s handling of this is less embarrassing than it first appears — and more embarrassing in a different way.

The leaked draft actually suggests they were planning a careful rollout: cybersecurity defenders getting access first, wider release later. That’s responsible sequencing. The accidental disclosure blew up a thoughtful launch strategy. To be fair, Anthropic was trying to do the right thing; they just couldn’t keep their own drafts folder private long enough to execute it.

Here’s what actually bothers me: the market’s reaction was irrational, and the companies that got hammered know it. CrowdStrike falling 7% because a new AI model can find vulnerabilities is like carpenters panicking over power tools. CrowdStrike will be one of the first enterprise customers to deploy Mythos-class capabilities into their platform. Every CISO who read this story now has Anthropic’s most powerful model on their procurement radar. The stocks that dropped will pop the day Mythos officially launches, because the demand signal just got broadcast to every security team on the planet for free.

The real lesson isn’t about AI safety or market panic. It’s this: if you’re building tools capable of breaking into the world’s most critical systems, lock down your drafts folder first. Anthropic didn’t get hacked. They handed out their own playbook by accident. That’s not a policy failure. It’s an infrastructure failure, and for a company betting its reputation on careful, methodical AI development, it’s the kind of mistake that should keep Dario Amodei up at night.


Author: Yahor Kamarou (Mark) / www.humai.blog / 28 Mar 2026