A 17-year-old bug in FreeBSD’s NFS implementation had been quietly sitting in production servers worldwide since 2009. Nobody knew. Then, last Tuesday, Anthropic’s Claude Mythos Preview found it in minutes.
That vulnerability—CVE-2026-4747—allows anyone to gain root access on a machine running NFS. Remote code execution. No authentication required. Seventeen years of silent exposure.
It wasn’t alone. Mythos also uncovered a 27-year-old denial-of-service flaw in OpenBSD’s TCP SACK implementation and demonstrated a browser exploit that chained four separate vulnerabilities, using a JIT heap spray as one link in a chain that escaped both the renderer and OS sandboxes. The total haul: thousands of zero-day vulnerabilities across every major operating system and every major web browser.
Over 99% of them remain unpatched right now.
Anthropic released Mythos Preview on April 8 under a new initiative called Project Glasswing. The coalition reads like a who’s who of tech: AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks all signed on. The premise is straightforward—use Mythos to find vulnerabilities before attackers do, then coordinate responsible disclosure.
The model doesn’t just scan code. It autonomously identifies vulnerabilities in production software, develops working exploits, and demonstrates them. Think of it as a red-team operator that never sleeps, never gets bored, and processes codebases at machine speed. During testing, it found critical flaws that human security researchers had walked past for decades.
The responsible disclosure angle matters here. Anthropic isn’t dumping CVEs on Twitter. They’re working through coordinated disclosure with affected vendors, which is why specific details remain under wraps. But the sheer volume—thousands of critical vulnerabilities, many of them one to two decades old—raises an uncomfortable question: if one AI model can find this many bugs this fast, what happens when a less scrupulous actor builds something similar?
My Opinion
I’ll be blunt: this is simultaneously the most exciting and most terrifying AI development of 2026 so far.
Here’s what bugs me. We’ve been running critical infrastructure on software riddled with exploitable holes for 17, 20, 27 years. Human security teams, billion-dollar bug bounty programs, government-funded audits—none of them caught what Mythos found in its first real deployment. That’s not a compliment to the AI. That’s an indictment of how we’ve been doing security.
The Project Glasswing coalition is impressive, but I think people are missing the real story. This isn’t about Anthropic being helpful. This is a land grab for the cybersecurity future. Whoever controls the best vulnerability-finding AI controls the security posture of the entire software industry. Anthropic just positioned itself as the gatekeeper, and every major tech company signed up because the alternative—not having access—is worse.
The dual-use problem is staring us in the face. Mythos proves that AI can be a devastating offensive weapon. The 99% unpatched stat isn’t reassuring—it’s a countdown. Every day those patches don’t ship is a day someone else might independently discover the same flaws. The race between disclosure and exploitation just got a lot faster, and I’m not convinced our patching infrastructure can keep up.
Author: Yahor Kamarou (Mark) / www.humai.blog / 13 Apr 2026