The call comes in on a Tuesday afternoon. Someone in your finance org just authorized a connected app called “Data Loader” against your Salesforce production tenant. They were on a help-desk callback: a page from “IT support” five minutes earlier claimed the company had just lost MFA registration for half the org. The voice on the line walked them through the OAuth consent screen — three checkboxes, click Allow — and reassured them this was the standard reset flow. Twenty minutes later, every Account, Contact, and Opportunity record in the tenant is being slurped down through Salesforce’s Bulk API. Twenty minutes after that, the exfiltrated CSVs are being grepped for AWS keys pasted into Case attachments.
You revoke the user’s password. You force a session reset. You require new MFA enrollment. The exfil keeps running.
You revoke the user’s active OAuth tokens through the Salesforce admin console. The exfil keeps running.
You finally find the connected app under Setup → Connected Apps OAuth Usage and uninstall it. The exfil stops. By that point you have lost three million records, and every plaintext credential that anyone on your customer support team ever pasted into a Case is now in a file shared on BreachForums.
This is the UNC6040 attack chain, and it has been running against Salesforce tenants worldwide for the better part of a year. ShinyHunters dumped three million Cisco CRM records on April 19, 2026 using exactly this technique. The same week, Vercel disclosed an unrelated incident with the same architectural cause: a third-party SaaS provider was compromised, the attacker inherited that provider’s OAuth grants, and used them to walk straight into the provider’s customers.
The pattern has a name now in the threat-intel literature, and it should have a name in your incident response runbook too. Your IR playbook was written when the attack surface ended at your VPN. It does not handle SaaS-to-SaaS OAuth pivots, and the gap is what’s driving 2026’s biggest breaches.
The Refresh Token Is the New Backdoor
OAuth was designed to be revocable. In the abstract, that’s true. In practice, the refresh token has become the most persistent attacker artifact in modern enterprise infrastructure — more persistent than a stolen password, more persistent than a rogue local account, frequently more persistent than a backdoored binary on disk.
The mechanics are straightforward and that’s the problem. When a user consents to a connected app — Salesforce, Google Workspace, Microsoft 365, GitHub, Slack, anywhere — the SaaS provider issues an access token (short-lived, typically an hour) and a refresh token (long-lived, typically until explicitly revoked). The refresh token’s only job is to mint new access tokens whenever the old one expires. It does not require the user to be logged in. It does not require MFA. It does not care whether the user’s password has been rotated. It cares only whether the connected app itself has been uninstalled, or the refresh token has been individually revoked through an admin surface that most organizations do not regularly audit.
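The lifecycle described above can be sketched as a toy model (illustrative only, not any vendor's implementation). The point it makes concrete: the classic containment actions never touch the grant, so refreshes keep succeeding until the grant itself is revoked.

```python
import time

class OAuthGrant:
    """Toy model of a connected-app grant: a long-lived refresh token
    that mints short-lived access tokens, per the RFC 6749 pattern."""
    ACCESS_TTL = 3600  # typical one-hour access token

    def __init__(self):
        self.revoked = False   # flipped only by explicit revocation
        self._counter = 0

    def refresh(self):
        """Mint a new access token. Note what is NOT checked here:
        the user's password, session state, or MFA enrollment."""
        if self.revoked:
            raise PermissionError("grant revoked")
        self._counter += 1
        return {"access_token": f"at-{self._counter}",
                "expires_at": time.time() + self.ACCESS_TTL}

grant = OAuthGrant()
grant.refresh()                  # attacker pulls data

# Classic containment steps - none of them touch the grant object:
password_rotated = True
sessions_killed = True
mfa_reenrolled = True
assert grant.refresh()           # exfil keeps running

grant.revoked = True             # the only action that matters
try:
    grant.refresh()
except PermissionError:
    print("exfil stopped")
```

The model is deliberately minimal, but the asymmetry it encodes is exactly the one the incident narrative above hinges on.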
That is why the Salesloft Drift incident from August 2025 hit more than 700 organizations and why every major tech vendor — Cloudflare, Google, PagerDuty, Palo Alto Networks, Proofpoint, Tanium, Zscaler — ended up in the victim list. The attacker, tracked as UNC6395, did not need credentials for any of those tenants. They needed Drift’s OAuth tokens. Drift had been granted Salesforce access by hundreds of customers as part of a normal chatbot integration. When Drift’s token store was compromised, the attacker inherited every grant simultaneously, and ran SOQL queries against 700 different Salesforce tenants over a ten-day window before anybody noticed.
Salesloft and Salesforce revoked all Drift OAuth tokens on August 20, 2025. That fix covered the live tokens. It did nothing for the credentials, AWS keys, Snowflake tokens, and VPN secrets that had already been combed out of those 700 tenants — because those secrets are not revoked by Salesforce’s revocation API. They are revoked when each victim, individually, audits their CRM data and rotates everything that ever ended up in a Case attachment. Most organizations have not finished that audit eight months later.
The Same Pattern, Two Different Entry Points
By April 2026, two distinct entry-point variants of this pattern were running concurrently against the same connected-app substrate.
Variant 1: Compromise the third-party SaaS. This is the Salesloft Drift template. An attacker breaches a small-to-medium SaaS provider — typically through a single employee infection — and harvests the OAuth tokens and API keys that provider holds against its customer tenants. Every customer that has installed the provider’s connected app is in scope, simultaneously, with no further interaction.
The Vercel incident disclosed on April 19, 2026 is a textbook example. The actual victim of the initial intrusion was Context.ai, an AI conversation analytics startup. A single Context.ai employee was infected with Lumma Stealer in February 2026. Lumma scraped the Google Workspace credentials and OAuth session material tied to Context.ai’s connected app — the application authorized on the Google Cloud project under the OAuth client ID 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com. Because Context.ai’s connected app had been installed by users at hundreds of customer organizations, the attacker did not need to phish those customers individually. They walked into a Vercel employee’s Google Workspace via the Context.ai trust relationship and from there into Vercel’s internal environments and a subset of customer environment variables.
Vercel’s published forensics make the failure mode embarrassingly clear: only environment variables explicitly flagged sensitive were encrypted at rest. Everything else — third-party API keys, integration tokens, anything a developer didn’t think to mark sensitive when they pasted it into a project’s settings — was readable to anyone with internal Vercel access. The Context.ai compromise gave the attacker that internal access. The flat sensitivity model made everything else free.
Variant 2: Vish the customer. This is the UNC6040 template. Skip the third-party provider entirely. Call the help desk directly, impersonate IT support, walk the victim through authorizing a malicious OAuth connected app under a name that mimics a legitimate one. Salesforce is the favorite target because the consent flow is well-known and because Bulk API 2.0 makes exfiltration of millions of rows trivial once the app is installed. Google Threat Intelligence Group has been tracking the cluster as UNC6040 since mid-2025, and ShinyHunters has been monetizing the harvested data through extortion and BreachForums dumps. Cisco’s three-million-record dump on April 19, 2026 was the highest-profile victim to date. McGraw-Hill, Adidas, Qantas, Allianz Life, Workday, Pandora, Chanel, TransUnion, and multiple LVMH brands are also on the named-victim list.
The two variants converge at the same architectural failure: the SaaS tenant cannot cleanly distinguish, in real time, between a legitimate connected-app data pull and an attacker-controlled one. Once consent has been granted, both look identical to the audit log. Bulk API 2.0 calls from “Data Loader” look like bulk data integration regardless of who is on the other end of the API.
Why Your IR Playbook Misses It
Most incident response playbooks I have read in the last five years have a section called “containment” that lists three actions: rotate credentials, force password reset, kill active sessions. None of those actions touch refresh tokens granted to connected apps. None of them touch third-party OAuth integrations at all. The OAuth surface is treated as somebody else’s problem — the SaaS provider’s, the marketplace’s, the IT team’s — until an attack walks through it and demonstrates that nobody actually owned it.
The result, in incident after incident, is the same observable: containment is declared, the analyst closes the ticket, and exfiltration continues for hours or days because the actual attacker artifact — a refresh token bound to a malicious or hijacked connected app — was never revoked. UNC6040 victims have repeatedly reported that data exfiltration kept running for the entire window between initial detection and the security team finding the right Connected Apps OAuth Usage page in Salesforce admin. That window has been measured in days for several named victims.
This is not a tooling problem. The revocation surfaces exist. They are simply not in the runbook. Most SOC analysts can find a domain admin’s password reset under duress at 3 a.m. Far fewer of them know how to navigate to Salesforce Setup → Connected Apps OAuth Usage, or Google Workspace Admin → Security → API Controls → App Access Control, or GitHub Organization Settings → Third-party access → OAuth app policy, while they are simultaneously triaging a live exfil event.
What to Audit Before Someone Else Does
If you have not done this in 2026, schedule it for this week. The work is mostly mechanical, takes a few hours, and reliably surfaces things that should not be there.
In Salesforce: Open Setup → Connected Apps OAuth Usage. Sort by user count and by recent grants. Anything called “Data Loader” that you did not explicitly install is suspect — UNC6040’s malicious app uses that name and small variants of it. Anything called “Salesforce Inspector,” “My Ticket Portal,” or generic-sounding integration names with consent dates in the last 90 days deserves direct verification with the user who granted it. Set every legitimate connected app’s “Permitted Users” to “Admin approved users are pre-authorized” and disable end-user OAuth consent organization-wide if your business processes can tolerate it.
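Once the OAuth Usage page is exported, the triage described above is mechanical enough to script. A hedged sketch, with the caveat that the grant records, allowlist, and lure-name list below are all illustrative and not Salesforce's actual schema:

```python
import difflib
from datetime import date, timedelta

# Hypothetical export of Setup -> Connected Apps OAuth Usage:
# (app name, consent date, user count). Field names are illustrative.
grants = [
    ("Dataloader.io",    date(2024, 3, 1),  42),   # known, long-standing
    ("Data Loader",      date(2026, 4, 12),  1),   # brand new, one user
    ("My Ticket Portal", date(2026, 4, 10),  1),
    ("Slack",            date(2023, 9, 5),  310),
]

APPROVED = {"dataloader.io", "slack"}              # your vetted allowlist
SUSPECT_NAMES = {"data loader", "salesforce inspector", "my ticket portal"}
RECENT = date(2026, 4, 20) - timedelta(days=90)    # 90-day review window

def flag(name, consented, users):
    """Return a review reason, or None if the grant looks routine."""
    n = name.lower()
    if n in APPROVED:
        return None
    # Exact or near-match against known UNC6040 lure names.
    if n in SUSPECT_NAMES or difflib.get_close_matches(n, SUSPECT_NAMES, cutoff=0.8):
        return "known UNC6040 lure name"
    if consented >= RECENT and users <= 2:
        return "recent low-population grant - verify with the user"
    return None

for name, consented, users in grants:
    reason = flag(name, consented, users)
    if reason:
        print(f"REVIEW {name!r}: {reason}")
```

The fuzzy match matters because the malicious apps use small variants of legitimate names; an exact-string allowlist alone would miss them.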
In Google Workspace: Admin Console → Security → API Controls → Manage Third-Party App Access. Pull the list of apps with domain-wide delegation or org-level grants. Any OAuth app you do not recognize, especially small AI tools that one team installed and forgot, should be either explicitly trusted or blocked. The Context.ai compromise hit Vercel because exactly one employee had granted the app full read access to their Drive — that single-user trust path was enough.
In GitHub: Organization → Settings → Third-party access → OAuth app policy. Set the policy to “Restrict OAuth apps to those approved for this organization” if you have not already. Audit personal access tokens (especially fine-grained ones) for anything that has not been used in 60+ days, and require all PATs to expire on a fixed cadence. The Shai-Hulud npm worm campaigns have been treating long-lived GitHub PATs as the highest-value loot for over a year now, because a stolen token lets the worm push itself into additional repositories and packages.
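Assuming you can export a token inventory with an owner, last-used date, and expiry flag (the field names here are illustrative, not GitHub's API schema), the staleness screen is a few lines:

```python
from datetime import datetime, timedelta, timezone

NOW = datetime(2026, 4, 20, tzinfo=timezone.utc)
STALE_AFTER = timedelta(days=60)

# Hypothetical PAT inventory; "expires" means the token has a fixed expiry.
tokens = [
    {"owner": "ci-bot", "last_used": NOW - timedelta(days=3),   "expires": True},
    {"owner": "jsmith", "last_used": NOW - timedelta(days=190), "expires": False},
    {"owner": "deploy", "last_used": None,                      "expires": False},
]

def needs_action(tok):
    """Flag tokens that are stale (unused 60+ days, or never used)
    or non-expiring - the long-lived PATs worm campaigns hunt for."""
    stale = tok["last_used"] is None or NOW - tok["last_used"] > STALE_AFTER
    return stale or not tok["expires"]

for tok in tokens:
    if needs_action(tok):
        print(f"revoke or re-issue: {tok['owner']}")
```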
In Microsoft 365: Entra ID → Enterprise applications → All applications. Filter by “Date created” descending. Anything created in the last 90 days by a non-admin should be reviewed. Use Conditional Access policies to require admin consent for any new application requesting more than basic sign-in scopes.
The work is repetitive. The artifact you produce — a current inventory of connected apps with their granted scopes, the user populations they were consented by, and the date of their most recent token use — is the single most useful document your security team can have when the next OAuth-pivot breach is announced and you need to determine, in twenty minutes, whether you are downstream.
The Detection Signals That Actually Work
Two telemetry sources catch SaaS-to-SaaS OAuth abuse early enough to matter. Most organizations are ingesting neither.
The first is OAuth grant audit logs from your IdP. Google Workspace, Microsoft Entra, and Okta all emit events when a user authorizes a new connected app. Every one of those events is worth a SOC review. Volume is low — most users grant a handful of apps per quarter — and the cost of investigating each is small. Set an alert on any new grant of high-impact scopes (Drive read, mail read, full Salesforce access, GitHub repo write) and make a human approve before the grant is allowed to function. This single control would have caught the Vercel-employee Context.ai grant in February 2026 and prevented the entire chain.
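A sketch of that triage decision, assuming grant events arrive with the list of requested scopes. The event shape and the policy set are illustrative, though the scope strings themselves are real Google, Salesforce, and GitHub scope names:

```python
# High-impact scopes that should hold a new grant for human approval.
HIGH_IMPACT = {
    "https://www.googleapis.com/auth/drive.readonly",   # Drive read
    "https://www.googleapis.com/auth/gmail.readonly",   # mail read
    "full",          # Salesforce full-access scope
    "repo",          # GitHub repo read/write
}

def triage(event):
    """Return 'hold-for-approval' when any requested scope is
    high-impact; otherwise auto-allow and just log the grant."""
    risky = set(event["scopes"]) & HIGH_IMPACT
    if risky:
        return ("hold-for-approval", sorted(risky))
    return ("allow-and-log", [])

# Hypothetical grant event of the kind that started the Vercel chain:
evt = {"user": "dev@example.com",
       "app": "Context.ai",
       "scopes": ["openid", "https://www.googleapis.com/auth/drive.readonly"]}
decision, reasons = triage(evt)
print(decision, reasons)   # the Drive read scope trips the hold
```

Because grant volume is low, the "hold" queue stays small; the control is cheap precisely where it matters most.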
The second is API usage telemetry on the SaaS side. Salesforce emits Bulk API 2.0 audit events with the requesting connected app. A baseline of “which apps run bulk queries, against which objects, at what volumes, on what schedule” is straightforward to compute and almost trivial to alert on. UNC6040’s exfil pattern — a brand-new connected app immediately running massive bulk queries against Account/Contact/Opportunity — would alert on the very first job under any reasonable baseline. The problem is not that the signal is missing. The problem is that no one is looking.
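The baseline is genuinely small to compute: group historical bulk jobs by (app, object), then alert on any pair with no history at all, or a pull far above its historical ceiling. Everything below, including the job log and the 5x factor, is an illustrative sketch:

```python
from collections import defaultdict

# Illustrative Bulk API job history: (connected app, object, rows pulled).
history = [
    ("Mulesoft Sync", "Account", 12_000),
    ("Mulesoft Sync", "Account", 11_500),
    ("Mulesoft Sync", "Contact", 30_000),
]

baseline = defaultdict(list)
for app, obj, rows in history:
    baseline[(app, obj)].append(rows)

def alert(app, obj, rows, factor=5):
    """Alert on a brand-new app running bulk jobs, or on a pull
    more than `factor` times the pair's historical maximum."""
    seen = baseline.get((app, obj))
    if not seen:
        return f"NEW app {app!r} bulk-querying {obj}: {rows:,} rows"
    if rows > factor * max(seen):
        return f"{app!r} pulled {rows:,} rows from {obj}, baseline max {max(seen):,}"
    return None

# UNC6040 pattern: fresh "Data Loader" app, massive first job.
print(alert("Data Loader", "Account", 2_400_000))
```

Under any baseline of this shape, the UNC6040 pattern alerts on the very first job, which is the whole point: the signal exists before the first million rows leave the tenant.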
What Has To Change in 2026
The structural fix for the OAuth supply chain problem is not new tooling. It is a change in who owns the OAuth surface. Today, in most organizations, OAuth grants are owned by individual users (who consent), the SaaS provider (who issues tokens), and nobody at the security or IT level (who enforces). That ownership gap is the entire vulnerability.
Three things have to happen, in this order:
The OAuth surface needs to live in your CMDB. Every connected app, in every SaaS tenant, with every set of granted scopes, owned and reviewed by a named human. This is unglamorous configuration management work. It is also the only durable defense.
End-user OAuth consent for high-impact scopes has to require admin approval. Every major SaaS platform supports this. Most enterprises do not turn it on because the friction is real. The friction is also lower than the cost of being the next ShinyHunters dump.
Refresh token revocation has to be a first-class IR action with a well-rehearsed runbook step, exactly the way “rotate AWS credentials” already is. When a third-party SaaS provider discloses a breach, the question your team should be answering in twenty minutes is: which of our tenants have that provider’s connected app installed, and do we revoke now or wait for the provider’s coordinated revocation? If you do not know how to answer that question right now, you have homework.
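The twenty-minute question is exactly what the connected-app inventory from the audit section exists to answer. A minimal sketch, assuming the inventory records tenant, app, scopes, and a named owner (all values illustrative):

```python
# A connected-app inventory of the kind the audit work produces.
inventory = [
    {"tenant": "salesforce-prod", "app": "Drift",      "scopes": ["api", "refresh_token"], "owner": "martech"},
    {"tenant": "gworkspace",      "app": "Context.ai", "scopes": ["drive.readonly"],       "owner": "ml-team"},
    {"tenant": "github-org",      "app": "ci-vendor",  "scopes": ["repo"],                 "owner": "platform"},
]

def downstream_of(provider):
    """Every tenant where the breached provider holds a live grant,
    plus the named owner of the revoke-now-or-wait decision."""
    return [r for r in inventory if r["app"].lower() == provider.lower()]

for hit in downstream_of("Context.ai"):
    print(f"revoke in {hit['tenant']} (owner: {hit['owner']}), scopes={hit['scopes']}")
```

If this lookup takes a SQL query or a list comprehension, you are ready for the next disclosure; if it takes a week of asking around, that is the homework.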
The OAuth supply chain pattern is not a 2026 surprise. Salesloft Drift was the wake-up call in August 2025. The intervening eight months should have been spent closing the gap. Instead they were spent watching UNC6040, ShinyHunters, and the Context.ai/Vercel chain prove the same architectural lesson, in public, to organization after organization. The next disclosure is already in progress. The only question is whether your team can answer “are we downstream” in time to matter.