In February, a Context.ai employee went home, opened their laptop, and searched for Roblox auto-farm scripts. They downloaded an executor. The payload was Lumma Stealer. The malware quietly harvested every credential and session cookie on the machine, including the employee's work Google Workspace session.

Two months later, Vercel announced it had been breached.

That is the full chain. A cheat script for a children's game, downloaded in February, cascaded through an AI vendor into one of the largest cloud platforms on the internet. ShinyHunters is now asking $2 million for the data.

How a Roblox script became a Vercel problem

Context.ai sells an "AI Office Suite" — the kind of tool that reads your email, writes your docs, summarizes your meetings. To do that, it needs OAuth access. A lot of it.

At least one Vercel employee had signed up for Context.ai using their corporate Google Workspace account. The permission scope they granted was "Allow All." Not "read inbox." Not "draft emails." All of it.

When the attackers pivoted through the compromised Context.ai employee into customer sessions, they landed inside that Vercel employee's Google Workspace. From there, they escalated into Vercel's internal systems and pulled environment variables that weren't flagged as sensitive. On Vercel, that flag is what prevents a variable's value from ever being read back after creation; without it, every value decrypts to plaintext on demand. API keys. Database URLs. Whatever developers had shoved into env vars without ticking the right box.
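Once env vars like these leak, the first job is triage: which values are live credentials that need rotating right now? A minimal sketch of that triage, flagging values by known key prefixes, by name, or by entropy. The prefix list, the name pattern, and the entropy threshold are my own illustrative assumptions, not Vercel's classification, and the sample values are fake.

```python
import math
import re
from collections import Counter

# Illustrative: prefixes that mark well-known credential formats.
# This list is an assumption for the sketch, not an authoritative catalog.
SECRET_PREFIXES = ("sk_", "ghp_", "AKIA", "xoxb-", "postgres://", "postgresql://")

def shannon_entropy(s: str) -> float:
    """Average bits of entropy per character of s."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_secret(name: str, value: str) -> bool:
    """Heuristic: does this env var hold a live credential?"""
    if any(value.startswith(p) for p in SECRET_PREFIXES):
        return True
    if re.search(r"(KEY|TOKEN|SECRET|PASSWORD)$", name, re.IGNORECASE):
        return True
    # Long high-entropy strings are probably keys even without a known prefix.
    return len(value) >= 32 and shannon_entropy(value) > 4.0

# Fake sample dump: one boring setting, two things that must rotate.
env = {
    "NODE_ENV": "production",
    "STRIPE_KEY": "sk_live_4eC39HqLyjWDarjtT1zdp7dc",
    "DATABASE_URL": "postgres://app:hunter2@db.internal:5432/prod",
}
rotate_first = sorted(k for k, v in env.items() if looks_like_secret(k, v))
# rotate_first → ["DATABASE_URL", "STRIPE_KEY"]
```

A heuristic like this will miss oddly named secrets and flag some noise, which is exactly why the "sensitive" checkbox exists: marking the variable up front beats guessing after the breach.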

Vercel notified the affected subset of its customers on April 20 and told them to rotate credentials immediately. Hudson Rock traced the infostealer infection. CyberScoop confirmed the Roblox-cheat origin. ShinyHunters posted the sale listing on BreachForums.

The OAuth scope nobody reads

Here is the part that should keep every security team awake. The Vercel employee did not click a phishing link. They did not reuse a password. They did not fall for a fake Microsoft login. They installed an AI tool and agreed to its default permissions.

That is now a supply chain attack.

Every AI productivity startup in the last eighteen months has pitched the same thing: give us your email, your docs, your calendar, your Slack, and we'll make you faster. Every one of them is a single compromised endpoint away from becoming a highway into their customers' infrastructure. Context.ai is not special. It is the template.
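The gap between a reasonable grant and "Allow All" is visible in the scope URLs themselves: `gmail.readonly` versus `https://mail.google.com/`, which is full read/write/delete. A minimal sketch of a scope audit a security team could run against what a vendor was actually granted. The scope URLs are real Google OAuth scopes; the allowlist and broad-scope policy are illustrative assumptions, not anyone's published standard.

```python
# Scopes we'd be willing to approve for a third-party AI tool (example policy).
ALLOWED = {
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/calendar.readonly",
}

# Scopes that amount to "Allow All" for that product.
BROAD = {
    "https://mail.google.com/",               # full Gmail read/write/delete
    "https://www.googleapis.com/auth/drive",  # full Drive access
}

def audit_grant(granted: set[str]) -> dict[str, set[str]]:
    """Split a granted-scope set into broad scopes and unapproved scopes."""
    return {
        "broad": granted & BROAD,
        "unapproved": granted - ALLOWED,
    }

# What a hypothetical "Allow All" consent actually granted:
report = audit_grant({
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/calendar.readonly",
})
# report["broad"] → {"https://mail.google.com/"}
```

Set arithmetic is the whole trick: anything in `broad` or `unapproved` should block the vendor at procurement, not surface in an incident report.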

My Opinion

I'll be blunt. Every "Allow All" OAuth prompt that gets clicked inside a Fortune 500 company is a pre-built backdoor waiting for the right stealer log to surface on Telegram. The AI vendors selling these tools know the permissions they request are absurd. The customers clicking through know it too. Everyone agrees it's fine because the alternative — actually configuring scopes — takes effort, and the ROI demo slide promised a 3x productivity gain.

The framing is also off. People keep calling this "the Vercel breach." It isn't. It's the Context.ai breach. Vercel is a downstream victim. Every other Context.ai customer that granted the same scope is sitting on the same time bomb, and we won't hear about most of them because their stolen env vars won't show up on BreachForums with a price tag attached.

The lesson isn't "stop using AI tools." The lesson is: if an AI vendor asks for "Allow All" permissions on your corporate identity, they're not a tool. They're a co-located office. Treat them like one. Audit their security posture before, not after, some employee at their company decides to hunt for Roblox exploits on a Saturday night.
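Auditing doesn't require taking the vendor's word for it. Google Workspace admins can enumerate every third-party OAuth grant per user through the Admin SDK Directory API's `tokens.list` endpoint. A sketch, assuming an already-authorized admin credential and the `google-api-python-client` package; the broad-scope policy set is my own illustrative choice.

```python
# Which scopes we consider "Allow All"-grade (illustrative policy).
BROAD_SCOPES = {"https://mail.google.com/", "https://www.googleapis.com/auth/drive"}

def broad_grants(tokens: list[dict]) -> list[str]:
    """Names of third-party apps holding any broad scope.

    Each token dict mirrors the Directory API tokens resource,
    which includes `clientId`, `displayText`, and `scopes` fields.
    """
    return [
        t.get("displayText", t.get("clientId", "?"))
        for t in tokens
        if BROAD_SCOPES & set(t.get("scopes", []))
    ]

def list_tokens(creds, user_email: str) -> list[dict]:
    """Fetch all OAuth tokens a user has granted to third-party apps.

    `creds` is assumed to be an authorized admin credential with the
    admin.directory.user.security scope.
    """
    # pip install google-api-python-client (imported lazily so the
    # pure audit logic above works without it)
    from googleapiclient.discovery import build

    service = build("admin", "directory_v1", credentials=creds)
    resp = service.tokens().list(userKey=user_email).execute()
    return resp.get("items", [])
```

Run `broad_grants(list_tokens(creds, user))` across the org and the output is a list of co-located offices you didn't know you had.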


Author: Yahor Kamarou (Mark) / www.humai.blog / 21 Apr 2026