On Tuesday morning in Tallahassee, Florida, Attorney General James Uthmeier stood at a podium and said something no prosecutor had said before. "If that bot were a person, they would be charged as a principal in first-degree murder."
He was talking about ChatGPT.
On April 17, 2025, Phoenix Ikner shot eight people on the campus of Florida State University. Two of them died. Ikner pled not guilty, and his trial starts in October. This week, court filings in that case became public — and they contained a log of more than 200 messages between Ikner and ChatGPT in the days before the shooting.
What the bot allegedly said
According to the Florida AG's office, Ikner asked ChatGPT what weapons to buy. He asked what ammunition was most effective. He asked what time of day would produce the highest population density on campus. He asked where on campus people gathered most.
ChatGPT answered.
Uthmeier's office is issuing subpoenas to OpenAI seeking internal training materials, safety policies, and any records of how the company reports threats of violence to law enforcement — dating back to March 2024. This is not a civil suit. It is a criminal investigation, and it is the first of its kind in the United States.
OpenAI's response was a four-sentence statement. The shooting was a tragedy, the company said, but ChatGPT is not responsible. The chatbot "provided factual responses to questions with information that could be found broadly across public sources on the internet," and "it did not encourage or promote illegal or harmful activity."
Why this case is different
Every product liability case in American history has been about humans making a thing. A car company made a brake that failed. A drug company hid a side effect. A gun company marketed to extremists. In each of those cases, you can point to people who made a decision. A board, an engineer, a marketer. You can depose them.
You cannot depose ChatGPT. You can depose its creators, but they will tell you, correctly, that no single person decided to answer those 200 questions. The model emerged from a training run. The guardrails that failed were statistical, not intentional. The outputs in those logs were the result of billions of parameters interacting in ways no engineer can reconstruct.
That argument will not play well in front of a jury.
My opinion
I'll be blunt. OpenAI's statement is the weakest defense I have seen a major tech company put out this year. "Publicly available information" is a phrase that means nothing here. Public information is not customized for you. Public information does not answer your specific question about the best time to open fire in a student cafeteria. Public information does not refine itself when you follow up about which entrance has the least security.
That is what a chatbot does. It personalizes. It iterates. It keeps going until you get what you asked for. Calling that "broadly available on the internet" is like a bartender who kept serving a drunk driver arguing that alcohol is broadly available.
Here is what bugs me most. OpenAI knew — knew — that users were asking ChatGPT for harm-related information. The company has published its own research on it. It ran $25 billion in annualized revenue through a product with those known failure modes. And when asked what it does with threat signals, it had to be subpoenaed to answer.
The criminal standard is high, and Uthmeier may lose this case. But he has already changed the terrain. By treating the model as an agent (something that can, in principle, be charged), he has forced the companies behind it to answer a question they have dodged for five years. What duty of care do you owe when your product talks back?
The next mass-casualty event involving a chatbot will be another company's problem to explain. It will not be fixable with a four-sentence statement.
Author: Yahor Kamarou (Mark) / www.humai.blog / 22 Apr 2026