LiteLLM — the 22,000-star open-source proxy that fronts OpenAI, Anthropic, Azure OpenAI, Bedrock and a long tail of model providers behind a single OpenAI-compatible API — shipped a fix on April 19 for CVE-2026-42208, a pre-authentication SQL injection in the proxy’s API-key verification path. The flaw carries a CVSS v3.1 score of 9.3. Sysdig’s threat-research team has now published telemetry showing the first targeted exploitation began 36 hours and seven minutes after the GitHub advisory was indexed, originating from a single operator at 65.111.27.132 using Python/3.12 aiohttp/3.9.1 and reaching directly for the upstream-credential tables. If you run an internet-exposed LiteLLM proxy on any version from 1.81.16 through 1.83.6, treat it as compromised and rotate every upstream provider key it ever held.

The Bug

The flaw is in the helper that validates the bearer token on every request. Affected versions concatenate the caller-supplied Authorization header value directly into a SELECT against the LiteLLM_VerificationToken table instead of binding it as a query parameter. A single quote in the bearer value closes the string literal, and everything after it is parsed as SQL, so the attacker can splice arbitrary clauses into the query.
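
In code terms, the broken lookup has roughly the following shape. This is a minimal sketch of the pattern the advisory describes, not LiteLLM's actual source; the table name comes from the advisory, and the function name, driver, and surrounding code are illustrative.

    import asyncpg

    async def lookup_token(conn: asyncpg.Connection, authorization: str):
        token = authorization.removeprefix("Bearer ").strip()
        # BUG: the caller-controlled bearer value is interpolated straight
        # into the SQL text instead of being bound as a parameter.
        query = (
            'SELECT * FROM "LiteLLM_VerificationToken" '
            f"WHERE token = '{token}'"
        )
        return await conn.fetchrow(query)

With this shape, a bearer containing a single quote rewrites the query itself instead of being compared as data.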

The query runs before LiteLLM has decided whether the caller is authenticated, so the injection is fully pre-auth — any HTTP client that can reach the proxy port is sufficient. That includes the entire population of internet-exposed LiteLLM deployments, which is non-trivial: the project is a popular drop-in for self-hosted “AI gateways” in front of corporate model providers, and the default deployment templates expose port 4000 over HTTP.

The fix in v1.83.7 replaces the string interpolation with a parameterized query — the trivial, correct version of the same lookup.
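
Under the same illustrative assumptions as the sketch above, the corrected lookup binds the token as a parameter, so the driver treats a quote as data rather than syntax. Again a sketch of the shape of the fix, not the literal patch:

    import asyncpg

    async def lookup_token(conn: asyncpg.Connection, authorization: str):
        token = authorization.removeprefix("Bearer ").strip()
        # The $1 placeholder is bound by the driver; the bearer value can
        # never alter the query structure.
        return await conn.fetchrow(
            'SELECT * FROM "LiteLLM_VerificationToken" WHERE token = $1',
            token,
        )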

How the Exploit Works

The reachable injection point is any LLM API route on the proxy. In the wild, Sysdig observed the operator hitting POST /chat/completions and POST /v1/chat/completions with either an empty body or a 75-byte stub. The Authorization header carries the payload — a quote-escaped bearer that closes the string, appends UNION selections against the schema’s interesting tables, and relies on LiteLLM’s error-handling path to surface results back to the caller.
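
If you want to check whether a proxy you operate is still running vulnerable code, a crude probe is to send a bearer containing a single quote and see whether the response is a clean authentication failure or a surfaced database error. The sketch below uses aiohttp; the error-string heuristic is an assumption based on the error-surfacing behavior described above, and you should only run this against infrastructure you own.

    import asyncio
    import aiohttp

    PROXY = "http://localhost:4000"  # adjust to your deployment

    async def probe() -> None:
        # A single quote in the bearer is harmless data on a patched proxy;
        # on a vulnerable one it can surface a database error.
        headers = {"Authorization": "Bearer sk-test'"}
        async with aiohttp.ClientSession() as session:
            async with session.post(
                f"{PROXY}/v1/chat/completions", headers=headers, json={}
            ) as resp:
                body = (await resp.text()).lower()
                markers = ("syntax error", "sql", "prisma")
                if any(m in body for m in markers):
                    print(resp.status, "response leaks a database error")
                else:
                    print(resp.status, "clean auth failure")

    asyncio.run(probe())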

The two tables the operator queried are exactly the ones you would pick if you wanted to monetize the access:

  • litellm_credentials.credential_values — encrypted upstream provider keys (OpenAI, Anthropic, Bedrock, Azure OpenAI, custom HTTP backends).
  • litellm_config — proxy runtime environment, including any secrets baked into LiteLLM’s own configuration.

A read-only outcome already leaks every upstream API key the proxy fronts. Because the vulnerable code path is a generic SQL injection rather than a constrained read primitive, writes to the same tables are equally reachable, including inserting new privileged virtual keys into the LiteLLM proxy itself.

Indicators of Compromise

From the Sysdig writeup:

  • Source IP: 65.111.27.132 (and an adjacent /22 block in the same AS, consistent with one operator rotating egress).
  • User-Agent: Python/3.12 aiohttp/3.9.1 on every injection request.
  • Request path: POST /chat/completions followed by POST /v1/chat/completions.
  • Body: empty or exactly 75 bytes.
  • Bearer values containing ' followed by SQL keywords (UNION, SELECT, FROM litellm_).

Database-side, look for unexpected reads against LiteLLM_VerificationToken, litellm_credentials, and litellm_config from the proxy service account, and any rows in LiteLLM_VerificationToken whose creation timestamp postdates the exposure window without a corresponding admin-API audit record.
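
As a starting point for that audit, the sketch below lists virtual keys created during the exposure window. It assumes LiteLLM's default Postgres backend and guesses at column names (token, key_alias, created_at); check them against your actual schema before running.

    import psycopg2

    EXPOSURE_START = "2026-04-20"

    conn = psycopg2.connect("dbname=litellm user=litellm host=localhost")
    with conn.cursor() as cur:
        cur.execute(
            'SELECT token, key_alias, created_at '
            'FROM "LiteLLM_VerificationToken" '
            "WHERE created_at >= %s ORDER BY created_at",
            (EXPOSURE_START,),
        )
        for token, alias, created in cur.fetchall():
            # Truncate the token so the audit output is not itself a secret.
            print(created, alias, (token or "")[:8] + "...")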

What To Do Now

If you run LiteLLM:

  1. Upgrade to 1.83.7-stable or later. Versions in the 1.81.16–1.83.6 range are vulnerable; older versions predate the regression that introduced the bug, but you should still upgrade.
  2. Rotate every upstream provider API key referenced by the proxy. The exposure includes encrypted values, but the encryption key lives next to the database in most default deployments.
  3. Audit LiteLLM_VerificationToken for virtual keys you did not create, particularly any with elevated permissions, no expiry, or generic team assignments.
  4. Pull access logs for POST /chat/completions and POST /v1/chat/completions between April 20 and your patch date and grep for the aiohttp/3.9.1 UA; a minimal triage sketch follows this list. Sysdig’s telemetry suggests the operator is targeted rather than mass-scanning, but absence of evidence is not evidence of absence: the same primitive works for anyone who reads the advisory.
  5. If the proxy was internet-exposed during the window, assume the upstream keys are burned. Cancel them at the provider; do not just rotate them inside LiteLLM.
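
A minimal triage sketch for step 4, assuming a plain-text access log with method, path, and User-Agent on each line; adapt the parsing to whatever your ingress actually writes. Note that most access logs do not record Authorization header values, so the UA-plus-path match is the primary signal and the payload regex only fires if your logging is unusually verbose.

    import re
    import sys

    UA = "aiohttp/3.9.1"
    PATHS = ("/chat/completions", "/v1/chat/completions")
    # Quote followed by SQL keywords, per the published indicators.
    SQLI = re.compile(r"'\s*(union|select|from\s+litellm_)", re.IGNORECASE)

    for line in open(sys.argv[1], encoding="utf-8", errors="replace"):
        if UA in line and any(p in line for p in PATHS):
            print("IoC hit:", line.rstrip())
        elif SQLI.search(line):
            print("possible injection payload:", line.rstrip())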

Why This Matters

LiteLLM is the load-bearing piece of glue in a lot of “internal AI platform” architectures. The proxy holds long-lived, expensive credentials to commercial model APIs and brokers them out to internal teams via short-lived virtual keys. Compromise of the proxy means compromise of every upstream contract behind it — and the tables Sysdig saw the operator query are exactly the tables that hold those upstream secrets.

The 36-hour disclosure-to-exploitation window is also among the shortest we have logged for an AI-infrastructure vulnerability this year. The pattern recurs: pre-auth SQLi in an auth helper, with a credential vault as the backing database. The AI-gateway category keeps re-implementing bearer-token lookup with hand-built SQL, a construction the rest of the industry retired a decade ago in favor of parameterized queries.

References

  • LiteLLM advisory: GHSA in the LiteLLM repository, fix in v1.83.7-stable (April 19, 2026).
  • Sysdig threat research: “CVE-2026-42208: Targeted SQL injection against LiteLLM’s authentication path discovered 36 hours following vulnerability disclosure.”
  • The Hacker News and BleepingComputer coverage of in-the-wild exploitation, April 28–29, 2026.