Lex Fridman asked Jensen Huang a straightforward question on March 22: how long before an AI can build a billion-dollar company from scratch? Huang didn't hesitate. "I think it's now," he said. "I think we've achieved AGI."
The clip spread in hours. Four words from the man whose chips power roughly 80% of AI training worldwide landed like a detonation across the industry. By the time episode #494 was trending across social media, researchers, investors, and regulators were all trying to figure out what to do with the claim.
Huang's definition doesn't match the one most people in the field use. He's not describing a machine that reasons across all human cognitive domains. He means something narrower and more economic: AI systems that can generate something of value at the scale of a billion-dollar business, even temporarily.
His example was OpenClaw, an open-source AI agent platform that developers are using to launch social apps, digital influencers, and creative experiments. Under Huang's framework, if one of those apps generates $1 billion in value, the bar has been cleared.
He immediately hedged. "You said a billion, and you didn't say forever," he told Fridman. By that logic, his AGI lasts only as long as the revenue does. Fridman's reaction: "You're gonna get a lot of people excited with that statement."
Researchers pushed back fast. Academic definitions of AGI require human-level performance across all cognitive tasks — novel physical reasoning, sustained strategy over months, genuine understanding built through experience. Current AI systems still hallucinate facts at scale. They can't navigate an unfamiliar kitchen, reason through a situation they've never encountered, or sustain a complex institution over decades. Passing the bar exam is not the same as being a general intelligence.
Huang's definition sidesteps every one of those limitations. That's either pragmatic brilliance or a convenient redefinition from the man who profits most from the belief that AGI has arrived. Nvidia captured a market cap north of $3 trillion largely because investors believe the race to AGI will keep demanding more chips. Every dollar of that valuation is a bet on perpetual AI acceleration. When Huang says AGI is here, he validates that bet directly.
My Opinion
I don't think Huang is lying. I think he's doing something more sophisticated: redefining the destination to match where we already are.
The original AGI concept — a machine that learns and reasons at human level across all domains — was always going to be slippery. The AI industry needed a finish line to justify the capital flowing into it. Now that the journey itself has become extraordinarily profitable, Huang has decided we've arrived. The destination was wherever we happened to stop.
Here's what bugs me most: declarations like this have real-world consequences. When the CEO of the company that makes the chips powering almost every AI system in existence says AGI is here, governments and regulators hear something different than a carefully hedged podcast answer. Policy gets written around that language. Investors price models as if they're general intelligences. Expectations reset in ways that current systems cannot meet.
AGI, as the researchers who've spent decades on the problem define it, isn't here. What is here are extraordinarily capable pattern-matching systems that fail in ways a genuine general intelligence wouldn't. Calling them AGI doesn't make them smarter. It makes the next high-profile failure harder to explain to the people who believed the headline.
Huang will keep selling shovels. The question is whether the rest of us keep buying the map that says we've already struck gold.
Author: Yahor Kamarou (Mark) / www.humai.blog / 30 Mar 2026