Last Tuesday, a radiologist in Ohio told me she'd caught a tumor that two colleagues missed — because an AI flagged it during a routine scan. She wasn't amazed. She was annoyed. "It's the fourth time this month the system pinged me for something that turned out to be nothing," she said. "But this one time, it wasn't nothing."
That's AI in March 2026. Not a miracle. Not a catastrophe. A tool that's right often enough to matter and wrong often enough to be exhausting.
What's Actually Working
Let's start with the wins, because they're real. According to a Doximity 2026 report, 63% of physicians now use AI in their practice, up from 47% just a year ago. The most common use? Not diagnosing cancer or predicting heart attacks — it's literature search (35% of physicians) and documentation (29%). AI scribes that listen to patient visits and write notes have become the quiet killer app of healthcare. Boring. Effective. Doctors hate paperwork more than they distrust algorithms, it turns out.
Accessibility is where AI genuinely shines, and nobody talks about it enough. Be My Eyes launched Be My AI — a feature that lets blind users point their phone at anything and get a detailed description back. Ray-Ban Meta glasses now describe surroundings in real time for visually impaired users. Google's Live Transcribe keeps getting better for deaf and hard-of-hearing people. These aren't flashy demos. They're people reading their mail, navigating grocery stores, following conversations at dinner. The kind of stuff the rest of us take for granted.
And coding. The productivity data on AI coding assistants is hard to argue with. Developers complete tasks 20–55% faster depending on the study and the task. GitHub Copilot has over 1.8 million paying subscribers. Junior developers benefit the most, which makes sense — autocomplete is more helpful when you don't already know the answer.
The Uncomfortable Part
Here's what I think people get wrong about AI criticism: it's not that AI doesn't work. It's that the gap between what it does and what we were promised keeps widening.
McKinsey says 78% of companies now use AI. Great headline. But only 1% of executives consider their AI rollouts mature. One percent. An MIT Media Lab study found 95% of organizations see no measurable ROI from AI. And Gartner predicts that companies will abandon 80% of their failing AI projects in 2026.
So the adoption curve looks like a hockey stick. The results curve looks like a flatline. Something doesn't add up.
Then there's the hallucination problem, which — despite what you may have heard — has not been solved. The average hallucination rate across models sits at 9.2% for general knowledge. For legal queries? Between 69% and 88%. That's not a typo. Models are wrong on legal questions more often than they're right. And here's the kicker from a Suprmind research report: when AI hallucinates, it uses 34% more confident language. It doesn't say "I think" — it says "certainly" and "without doubt." Global losses from AI hallucinations hit $67.4 billion in 2024.
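For what it's worth, that "confident language" pattern is easy to probe yourself. Below is a minimal sketch in Python, purely my own illustration and not the Suprmind methodology: it counts hedging phrases against certainty phrases in a model's answer. The phrase lists are assumptions I picked for the demo.

```python
# Crude heuristic, for illustration only: compare hedging phrases vs.
# certainty phrases in a model's answer. The phrase lists are my own
# assumptions, not the methodology behind the 34% figure cited above.
HEDGES = ["i think", "i believe", "possibly", "might", "perhaps", "not sure"]
CERTAINTY = ["certainly", "without doubt", "definitely", "undoubtedly", "clearly"]

def certainty_share(text: str) -> float:
    """Fraction of confidence markers that signal certainty (0.5 = balanced)."""
    lowered = text.lower()
    hedges = sum(lowered.count(p) for p in HEDGES)
    certain = sum(lowered.count(p) for p in CERTAINTY)
    total = hedges + certain
    return certain / total if total else 0.5

print(certainty_share("This is certainly the rule, without doubt."))  # -> 1.0
print(certainty_share("I think this might apply, but perhaps not."))  # -> 0.0
```

It's a blunt instrument. But run something like it over a model's wrong answers sometime, and you may well see the pattern the researchers describe.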
The Slop Problem
Merriam-Webster made "slop" its word of the year for 2025. You know exactly what it means.
That LinkedIn post that reads like a motivational poster generated by a blender. The Amazon product listing with 400 words that say nothing. The Google search result that technically answers your question but feels like reading a terms-of-service agreement. About 19% of top Google results now contain AI-generated content. And since June 2025, Google has been issuing complete removals for "scaled content abuse": not ranking drops, full delisting.
The internet is getting noisier. Not dumber, exactly. Just... flatter. More confident. Less interesting. Like an entire restaurant where every dish is a 6 out of 10.
Jobs: The Number Nobody Wants to Say Out Loud
I'll say it. Dario Amodei, the CEO of Anthropic (the company that makes Claude), predicted in 2025 that AI could eliminate roughly 50% of white-collar entry-level positions within five years.
That's not a fringe take. That's the CEO of an AI company.
The data so far: 76,440 jobs eliminated in 2025 with direct AI ties. Workers aged 22–25 in AI-exposed roles saw a 16% employment drop. And 41% of employers globally plan to cut up to 40% of their workforce due to AI within five years. The San Francisco Fed published a careful analysis in February 2026 noting that while AI hasn't yet shown broad productivity increases, the "certain target areas" where it has — coding and call centers — happen to be full of entry-level workers.
I don't know what happens next. I don't think anyone does, despite the confident op-eds. The honest answer is: we're in a weird middle period where the technology works well enough to displace people but not well enough to replace them. That's an expensive place to be.
My Opinion
I use AI every day. I use it to write, research, code, brainstorm. I'm not anti-AI. I'm anti-pretending.
The pretending goes both ways. Silicon Valley pretends AI is about to fix everything — healthcare, education, climate, loneliness. And the critics pretend it's all hype. Neither is right. AI is a really good autocomplete that occasionally catches tumors. That's not nothing. But it's also not what $100 billion in venture funding was supposed to buy us.
What bothers me most isn't the hallucinations or the job losses — it's the confidence. The way the models talk. The way the companies talk. The way the think pieces talk. Everybody is so sure. And the one thing I've learned from covering this space is that nobody should be sure about anything right now.
Stanford's AI faculty said it best: the era of AI evangelism is giving way to an era of AI evaluation. I hope they're right. Because right now we're spending trillions on something we haven't evaluated.
The radiologist in Ohio doesn't need AI to be perfect. She needs it to be honest about what it doesn't know. So do the rest of us.
Author: Yahor Kamarou (Mark) / www.humai.blog / 27 Mar 2026