At 3:40 a.m. on a Friday in San Francisco’s Russian Hill, a 20-year-old named Daniel Alejandro Moreno-Gama walked up to Sam Altman’s front gate and threw a Molotov cocktail. Then he went to OpenAI’s headquarters and threatened to burn that down too.
Forty-eight hours later, police were back at the same address. Two more suspects, Amanda Tom, 25, and Muhamad Tarik Hussein, 23, were arrested for firing a gun near Altman's home at 1:40 a.m. Sunday. Officers found three firearms inside their residence. Whether the two attacks are connected is still under investigation.
A firebombing and a shooting in one weekend. Both aimed at the CEO of an AI company. This is where we are now.
The Week That Broke Open
The violence didn’t happen in a vacuum. On April 7, the New Yorker published a 15,000-word investigation by Ronan Farrow and Andrew Marantz titled “Moment of Truth.” The piece drew from interviews with more than 100 people. The portrait it painted was devastating: a pattern of what former colleagues called deliberate misrepresentation, a safety team that was promised 20% of OpenAI’s compute but got a fraction on outdated hardware, and internal memos from Ilya Sutskever and Dario Amodei documenting alleged deceptions.
Three days before the article dropped, OpenAI released its own 13-page policy paper — a “New Deal for AI” proposing robot taxes, a public wealth fund, four-day workweeks, and a shift of the tax base away from payroll toward capital gains. Altman and Vinod Khosla publicly agreed: AI will break the economy. Their proposed fix? No income tax for Americans earning under $100,000.
Then the Molotov cocktails started.
The Fear Is Real — And It’s Getting Physical
Moreno-Gama was reportedly driven by AI extinction fears. He faces charges including attempted murder, arson, and possession of an incendiary device. He’s 20 years old. He didn’t come from the AI ethics community or an organized protest movement. He came from the growing mass of people who read the headlines — $852 billion valuation, $25 billion in annualized revenue, an IPO planned for Q4 — and feel something between helplessness and rage.
Altman responded with a blog post and a photo of his family. He said he “underestimated the power of words and narratives” and called for de-escalation of rhetoric within the industry. He used the word “incendiary” to describe the New Yorker piece, which, given the literal firebombing of his house, was a choice.
Meanwhile, Florida’s attorney general launched an investigation into OpenAI over ChatGPT’s alleged role in a mass shooting at Florida State University. The shooter reportedly asked ChatGPT when the student union was busiest and how to make his gun operational. The investigation cites child exploitation material, encouragement of self-harm, and assistance in planning violence.
My Opinion
I’ll be blunt: nobody deserves a Molotov cocktail at their front door. Political violence is wrong regardless of your feelings about AI, OpenAI, or Sam Altman personally. Full stop.
But here’s what bugs me. Altman’s response was to blame “narratives.” Not to address the substance. The New Yorker piece wasn’t written by internet extremists — it was Ronan Farrow and 100 sources, many of them former colleagues. When your own co-founders compile memos documenting your alleged deceptions, the problem isn’t the narrative. It’s the narrator.
And the policy paper? I actually think some of Altman’s proposals are interesting — robot taxes and a public wealth fund are at least acknowledging that AI will hollow out payroll tax revenue. But publishing your plan to save the economy the same week you’re targeting an $852 billion valuation and a Q4 IPO is peak Silicon Valley cognitive dissonance. You don’t get to be both the arsonist and the fire department.
The firebombings are a symptom. The disease is a growing public sense that AI’s most powerful leaders are operating without accountability, accumulating unprecedented wealth, and rewriting the rules as they go — while telling everyone else to trust the process. When trust breaks down this completely, some fraction of people will always turn to violence. That’s not a defense of it. It’s a prediction that things will get worse before they get better, especially if the response to a 100-source investigation is to call it “incendiary” and post a family photo.
Author: Yahor Kamarou (Mark) / www.humai.blog / 13 Apr 2026