AI coding tools were supposed to give engineers their evenings back, and that is not what happened.
Bloomberg's April 2026 Businessweek cover story documented what it called the "Great Productivity Panic of 2026," driven in large part by increasingly capable AI coding agents including Claude Code and OpenAI's Codex. The piece's framing was pointed: "Vibe coding was supposed to be chill. But one year later, the vibes, as they say, are off."
The problem is not that AI tools aren't working. It is that they are working well enough to raise everyone's expectations about how much should get done, and workers are staying later to meet those expectations.
What the Research Actually Shows

The UC Berkeley study Bloomberg cited found something counterintuitive at a 200-person technology company: employees using AI tools increased both the volume and variety of work they completed, but they also took on significantly more work as a result. AI made it easy to start tasks, and hours expanded to fill the available capacity.
One worker interviewed by the Berkeley researchers put it plainly: "You had thought that maybe, 'Oh, because you could be more productive with AI, then you save some time, you can work less.' But then really, you don't work less."
The study tracked engineers, product managers, designers, and operations staff over eight months through 40 in-depth interviews. The pattern was consistent: more output, more variety, more total hours.
Separate research from Multitudes, a workplace analytics firm, found measurable increases in working hours among engineers at organizations that had deployed AI coding tools. Interviews suggested the change was driven primarily by elevated productivity expectations rather than individual initiative: workers were not choosing to work longer, but responding to an environment that expected more of them because the tools were capable of more.
A Bloomberg Weekend piece published April 5, 2026 coined the framing that stuck: AI FOMO, the fear that not using AI tools constantly is itself a form of falling behind. The newsletter described "AI productivitymaxxing" and the possibility that the tools would keep workers working longer before they made anyone better off.
The Productivity Paradox Inside the Panic
The harder problem, documented across multiple studies, is that the productivity gains are real but the measurement frameworks have not caught up.
The ADP Research Today at Work 2026 report, based on survey responses from more than 39,000 workers across 36 countries, found a striking paradox at its center. Daily AI users were four times as likely as non-users to say they were not as productive as they could be.
The explanation ADP's chief economist Nela Richardson offered was psychological rather than technical. "AI does the things that we used to say make us feel productive," she told reporters. "It writes our emails for us. It responds. It summarizes documents for us. And so we have to kind of remeasure productivity in a different way; it's shifting from productivity based on volume of work to value of work."
In other words, AI has automated the checklist-style tasks that gave workers a tangible sense of daily accomplishment: answered emails, summarized documents, drafted first versions.
Without those small wins in the visible task log, workers feel they have done less, even when they have arguably done more and better work.
The ManpowerGroup 2026 Global Talent Barometer found a related pattern across nearly 14,000 workers in 19 countries: regular AI use increased 13% in 2025, but confidence in the technology's utility fell 18%. More adoption, less trust in the outcome.
The Vibe Coding Reality Check

The Bloomberg Businessweek story focused specifically on software engineers, and for good reason: coding is the domain where AI tools have moved furthest from promises to production.
Andrej Karpathy coined the term "vibe coding" in February 2025 to describe a style of software development where engineers chat with AI models to build things, "fully giving in to the vibes." The framing implied ease. What followed in practice was more complicated.
Google's DevOps Research and Assessment team surveyed nearly 5,000 technology professionals and found that 90% were using AI at work, with more than 80% reporting productivity gains. But the same DORA report found that as AI use increased, so did "software delivery instability," meaning more frequent code rollbacks and patches after release. More code shipped, more problems created.
A randomized controlled trial by METR (Model Evaluation and Threat Research) in 2025 produced what Scientific American called a sobering counterweight to vendor case studies claiming major speed improvements. Engineers in the study did not show the large productivity gains that AI-assisted development advocates typically claim for complex tasks. The gains were more modest and more variable than the marketing suggested.
Scientific American's March 2026 analysis of the Berkeley and DORA data connected the dots: developers using AI tools were working longer hours, in part because companies had adjusted their expectations upward, and in part because debugging and verifying AI-generated code introduced new overhead that the productivity gains did not fully offset.
"People were feeling additional pressure to get more work done, and it looks like that was contributing to them putting in more hours," said researcher Michelle Peate, whose firm Multitudes conducted the working hours analysis.
The Broader Workforce Picture
The anxiety is not limited to engineers. The pattern Bloomberg documented in software development appears, in varied forms, across white-collar work more broadly.
The ADP study found that among individual contributors, only 18% felt their jobs were secure. Frontline managers came in at 21%, middle managers at 23%, and even among C-suite executives, only 35% felt secure.
In an environment where job security is this low across all levels of the organizational hierarchy, the pressure to demonstrate continued value through visible output is intense.
A 2025 Stanford University study found that early-career workers in AI-exposed occupations had seen a 13% drop in employment since the introduction of tools like ChatGPT in 2022. The American Psychological Association's July 2025 survey found that 38% of workers worried AI would make some or all of their job duties outdated.
Therapists reported seeing more clients specifically anxious about AI, with the most common fear being not sudden job loss but gradual obsolescence: the sense, as one New York clinical psychologist described it, that "you are no longer needed."
This fear has a specific behavioral consequence that Built In documented in late 2025 and named "job hugging": workers staying in roles they would otherwise leave because the job market has contracted and because demonstrating indispensability through sustained output feels less risky than mobility. Many workers say that learning AI tools feels like a second job, and that it has driven productivity and efficiency standards to near-impossible expectations.
The term coined to describe the internal experience of these workers is FOBO: Fear of Becoming Obsolete. Unlike the fear of being laid off, FOBO is the creeping sense that skills are degrading in real time, that the window to stay relevant is closing while you are still trying to figure out what relevant means.
The Management Failure Underneath

The research is consistent on one point that gets less attention than the productivity data: most of what workers are experiencing is a leadership failure, not a technology failure.
ADP's researchers were direct: the anxiety gripping workers is not an inevitable consequence of AI deployment, but largely a consequence of how organizations are handling it.
Employees at companies undergoing comprehensive AI-driven redesign were significantly more worried about job security (46%) than those at less-advanced companies (34%), according to BCG research. The closer people get to actual AI deployment, the more threatened they feel, which is a communication and management problem rather than a technology one.
The Bloomberg piece on bosses destroying the engagement opportunity AI offers, published March 26, 2026, made a related observation: employers are using both the productivity gains and the threat of AI to push workers harder rather than to redesign work in ways that are sustainable. More output, same people, no additional support, no adjusted expectations about what a reasonable workload looks like now that AI handles more of the low-level tasks.
"Workers who clearly see the role their existing skill sets will play in an organization's future will be more engaged, productive, and have the confidence to thrive in this next era of work," ADP workforce strategist Heather Caldwell told reporters. The inverse is also true, and it describes most workplaces right now.
What This Means for Teams Using AI Tools
The Bloomberg framing of AI FOMO as a productivity hazard lands in a specific organizational context that team leaders should understand.
The hazard is not that AI tools fail. It is that when they succeed, the natural institutional response is to raise expectations rather than reduce workload. Engineers who ship 40% more code per month do not work 40% fewer hours; they work the same hours, or more, against a higher bar.
This creates a specific kind of burnout risk that is different from traditional overwork. Traditional overwork is caused by too much demand and not enough time. AI-driven overwork is caused by a mismatch between what the tools can produce and what organizations are doing with that additional output.
If the efficiency gain goes entirely into the company's output and nothing changes about the employee's experience of work, the tools have not made work better. They have made it more.
The practical question for managers is whether the productivity gains AI tools are delivering are being shared with the people using them, in the form of realistic workload expectations, development time, or reduced hours, or whether those gains are being immediately reinvested in more output. The evidence so far suggests most organizations are reinvesting rather than sharing.
Wrap Up
The Bloomberg Businessweek story about the productivity panic is not primarily a story about AI tools being bad. Claude Code and similar products are genuinely capable, genuinely useful, and genuinely increasing how much experienced engineers can ship.
The story is about what organizations do with that capability, and what it costs people to keep up with an environment that keeps raising the bar. Every productivity gain that becomes a new baseline is a gain that workers absorb rather than benefit from. At some point, the tools that were supposed to give everyone more time start taking it instead, not because they are failing but because the institutions surrounding them have not changed.
Karpathy's vibes were not wrong about what the tools could do. They were premature about everything that would happen around them.
Frequently Asked Questions
What is the "Great Productivity Panic of 2026"?
The term comes from Bloomberg Businessweek's April 2026 cover story on AI coding agents including Claude Code and OpenAI's Codex. The piece documented how increasingly capable AI tools had created pressure on engineers to work harder and longer rather than freeing them up, as the tools' productivity gains were absorbed into higher expectations rather than reduced workloads.
What is AI FOMO and how does it affect workers?
AI FOMO is the fear that not using AI tools constantly or not being visibly productive with them is itself a form of falling behind. Bloomberg's April 5, 2026 newsletter on vibe coding described it as a new kind of anxiety driving workers to work longer hours to demonstrate continued value in an environment where AI tools have raised baseline expectations.
Does research show that AI tools make workers more productive?
Yes, but with important caveats. Google's DORA survey of nearly 5,000 technology professionals found that 90% used AI at work and over 80% reported productivity gains. However, the same research found increased software delivery instability as AI use grew. A UC Berkeley study found that workers using AI tools increased output but also took on more work and worked longer hours. METR's randomized controlled trial found more modest productivity gains on complex tasks than vendor claims typically suggest.
Why do AI users feel less productive even when they do more work?
ADP's research across 39,000 workers in 36 countries found that daily AI users were four times as likely as non-users to say they were not as productive as they could be. ADP economist Nela Richardson explained that AI has automated the small, checklist-style tasks that gave workers a sense of daily accomplishment, such as answering emails and summarizing documents, so workers feel they have done less even when they have arguably done more.
What is FOBO and how is it different from job loss anxiety?
FOBO stands for Fear of Becoming Obsolete. It is distinct from the fear of being laid off. FOBO is the gradual sense that skills are degrading in real time, that you are falling behind faster than you can catch up, and that the window to stay relevant is closing while you are still figuring out what relevant means. It is particularly acute for younger professionals watching entry-level learning opportunities disappear as AI automates the foundational tasks that used to build judgment and expertise.
What should managers do about AI-driven productivity pressure?
Research consistently identifies leadership communication and expectation-setting as the primary levers. ADP's findings indicate that workers who clearly understand what role their existing skills play in the organization's future are significantly more engaged and confident. The key management question is whether productivity gains from AI tools are being shared with employees in the form of realistic workload adjustments, or reinvested entirely in additional output, effectively making AI adoption a cost that workers bear rather than a benefit they receive.