Vercel confirmed on April 19, 2026 that an attacker used a compromised third-party AI tool — Context.ai — as a pivot into a Vercel employee’s Google Workspace account, and from there into internal Vercel environments and a subset of customer environment variables. A threat actor operating under the ShinyHunters persona has listed the stolen data on BreachForums for $2 million.

This is not a vulnerability in Vercel’s code. It’s a SaaS-to-SaaS OAuth chain failure — the kind of incident infrastructure teams are about to see a lot more of.

Attack chain

The root compromise predates the Vercel incident by roughly two months:

  1. February 2026 — Context.ai employee infected with Lumma Stealer. Context.ai is an AI-powered conversation analytics product. A single employee’s workstation infection yielded the session tokens and credentials tied to Context.ai’s Google Workspace OAuth application.
  2. OAuth app takeover. Because Context.ai’s Google Workspace OAuth app was authorized by users across many customer tenants (not just inside Context.ai), the attacker inherited delegated access anywhere the app had been installed.
  3. Pivot into Vercel. A Vercel employee had authorized the Context.ai app for their Vercel-managed Google Workspace account. The attacker used that existing trust relationship to hijack the employee’s Workspace session.
  4. Access to internal environments. From the employee’s Workspace identity, the attacker reached internal Vercel environments and read environment variables that had not been flagged as sensitive.

What was and wasn’t exposed

Vercel’s advisory draws a sharp line between the two storage tiers:

  • Environment variables marked sensitive: encrypted at rest, write-only from the dashboard, and confirmed not accessed. These stayed safe.
  • Environment variables not marked sensitive: stored in a readable form and potentially exposed. Any secret that landed in a plain env var — API keys, tokens, database URLs — should be treated as compromised.
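The two-tier split suggests a simple triage: anything not stored as a write-only sensitive value goes on the rotation list. A minimal sketch of that triage, assuming an env-var inventory shaped like Vercel’s project env API response (a list of records with `key` and `type` fields, where `"sensitive"` denotes the encrypted write-only tier — the exact field values are an assumption, so check against your own API output):

```python
def find_exposed(envs):
    """Return keys of env vars NOT stored as write-only sensitive values.

    Per the advisory, only vars of the sensitive tier are confirmed safe;
    everything else should be treated as compromised and rotated.
    """
    return [e["key"] for e in envs if e.get("type") != "sensitive"]


# Hypothetical sample mirroring the advisory's two storage tiers:
sample = [
    {"key": "DATABASE_URL", "type": "encrypted", "target": ["production"]},
    {"key": "STRIPE_KEY",   "type": "sensitive", "target": ["production"]},
    {"key": "API_TOKEN",    "type": "plain",     "target": ["preview"]},
]

print(find_exposed(sample))  # → ['DATABASE_URL', 'API_TOKEN']
```

Everything the function returns is a secret to rotate, regardless of how harmless the key name looks — the breach exposed values, not just obviously-named keys.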

ShinyHunters has published a sample file containing 580 records of Vercel employee data (names, email addresses, account status, activity timestamps) and claims to hold “verified access keys” usable for a downstream supply-chain attack against Vercel customers. Vercel has not confirmed the access-key claim, but is treating it as credible.

No CVE is assigned — this is an identity-layer breach, not a software flaw.

Why this matters for infrastructure teams

Vercel runs the production frontend for a large fraction of modern web apps, including most crypto frontends and a meaningful share of the JAMstack ecosystem. If an attacker landed deployment-time environment variables for even a handful of high-traffic customer projects, the next stage looks like the Axios compromise from March: subtle payload injection into the build output, served to every user who loads the site.

The more general lesson: OAuth scopes granted to a third-party SaaS app are a credential you’re handing out. Context.ai’s Google Workspace OAuth grant was, effectively, a key to every customer tenant that installed it. One stealer infection ≈ hundreds of downstream breaches.
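That framing — OAuth grants as handed-out credentials — can be turned into a quick audit. The sketch below flags apps in a grant inventory that hold tenant-wide Workspace scopes; the scope URLs are real Google OAuth scopes, but the `BROAD_SCOPES` cut-off and the inventory shape (e.g., as exported from the Admin SDK Directory API `tokens.list` response) are illustrative assumptions, not a complete risk model:

```python
# Workspace scopes that effectively hand over the tenant.
# Illustrative, not exhaustive — extend for your own environment.
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/admin.directory.user",
}


def flag_risky_grants(grants):
    """grants: [{'app': str, 'scopes': [str]}]. Returns apps whose
    delegated access includes any tenant-wide scope."""
    return [g["app"] for g in grants if set(g["scopes"]) & BROAD_SCOPES]


# Hypothetical inventory:
inventory = [
    {"app": "Context.ai", "scopes": ["https://www.googleapis.com/auth/gmail.readonly"]},
    {"app": "Calendly",   "scopes": ["https://www.googleapis.com/auth/calendar.events"]},
]

print(flag_risky_grants(inventory))  # → ['Context.ai']
```

Any app this flags is a single point of failure on the Context.ai pattern: compromise of that vendor is compromise of your tenant.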

What to do right now

If you run anything on Vercel:

  • Rotate every secret that lives in a non-sensitive env var across all your Vercel projects. Assume they’ve leaked.
  • Flip all secrets to sensitive. Vercel has now made this the default and is retroactively enforcing it, but audit existing projects yourself.
  • Review Vercel audit logs for the window February 2026 through April 19, 2026 — look for deployments, env var reads, or team-membership changes you can’t account for.
  • Audit Google Workspace OAuth grants. In the admin console, review third-party apps with access to Drive, Gmail, or directory data. Revoke anything stale, anything installed by a single employee without review, and anything with scopes broader than its function requires.
  • Check for Context.ai specifically in your Workspace OAuth inventory and revoke it pending their own incident disclosure.
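For the audit-log review, a small filter over an exported log keeps the work mechanical. This sketch assumes events carry a millisecond `timestamp` field (the field name is an assumption — adapt it to your export format) and keeps only entries inside the compromise window:

```python
from datetime import datetime, timezone

# Compromise window from the advisory: Feb 2026 through April 19, 2026.
WINDOW_START = datetime(2026, 2, 1, tzinfo=timezone.utc)
WINDOW_END = datetime(2026, 4, 19, 23, 59, 59, tzinfo=timezone.utc)


def in_breach_window(events):
    """events: iterable of dicts with a millisecond 'timestamp' field.
    Returns the events that fall inside the compromise window."""
    hits = []
    for e in events:
        ts = datetime.fromtimestamp(e["timestamp"] / 1000, tz=timezone.utc)
        if WINDOW_START <= ts <= WINDOW_END:
            hits.append(e)
    return hits


# Hypothetical log entries:
sample = [
    {"timestamp": 1767225600000, "action": "deployment.created"},  # Jan 1 2026: outside
    {"timestamp": 1772452800000, "action": "env.read"},            # Mar 2 2026: inside
]

print([e["action"] for e in in_breach_window(sample)])  # → ['env.read']
```

Within the filtered set, the events worth escalating are the ones the bullet list calls out: deployments, env var reads, and team-membership changes that no one on your team can account for.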

Sensitive-flagged secrets are fine. Everything else, assume burned.

Sources