LiteLLM SQLi Flaw: What CVE-2026-42208 Means for Your AI Keys
litellm, cve-2026-42208, sql injection, cybersecurity, ai security, data exfiltration, pre-auth vulnerability, sysdig, openai, anthropic, aws bedrock, cloud security


The Incident: A Race Against the Clock

On April 20, 2026, advisory GHSA-r75f-5x8p-qvmc (CVE-2026-42208) was published on the LiteLLM GitHub repository, detailing a critical SQL injection flaw. Four days later, on April 24, 2026, it reached the global GitHub Advisory Database, making it visible to automated tools like Dependabot. Just 36 hours and seven minutes after that global publication, the first targeted SQL injection attempts against the flaw began.

The Sysdig Threat Research Team observed the activity. An operator, working from IPs at a German hosting provider, began probing. This was not an automated SQLmap spray: the attacker demonstrated prior knowledge of LiteLLM's Prisma-generated PostgreSQL identifier casing, retrying lowercase table names such as litellm_verificationtoken with the correctly cased LiteLLM_VerificationToken until the query succeeded. They moved directly to high-value tables and ran textbook column-count discovery sweeps, indicating a clear data-exfiltration objective.

Visual representation of the LiteLLM SQLi flaw architecture and attack vector.

Understanding the LiteLLM SQLi Flaw: How a Single Quote Unlocked Your AI Keys

The core problem, CVE-2026-42208, is a classic pre-authentication SQL injection affecting LiteLLM versions from 1.81.16 up to, but not including, 1.83.7. This LiteLLM SQLi flaw allows attackers to bypass authentication and access sensitive data. Here's a closer look at the exploit:

  • The Vulnerable Spot: LiteLLM uses the Authorization: Bearer header value directly in a SQL query.
  • No Parameterization: The application concatenates the value after Bearer into the SQL query string without sanitization or prepared statements.
  • Pre-Auth Access: This occurs during the proxy verification step, prior to any authentication decision. Valid credentials are not required. Any HTTP client capable of reaching the LiteLLM proxy port (typically 4000) can exploit this.
  • The Escape: An attacker escapes the string literal with a single quote (') and appends arbitrary SQL. For instance, Authorization: Bearer sk-litellm' UNION SELECT ... -- allows execution of arbitrary SELECT statements against the PostgreSQL backend.

This provides an attacker with direct access to database contents without authentication, enabling the exfiltration of sensitive data.
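To make the mechanics concrete, here is a minimal, self-contained sketch of the vulnerable pattern. This is not LiteLLM's actual code: it uses an in-memory SQLite table with hypothetical names purely to show how a single quote plus UNION turns a token lookup into an arbitrary SELECT.

```python
import sqlite3

# Illustrative sketch only, not LiteLLM's code: an in-memory SQLite table
# with hypothetical names stands in for the Prisma-managed PostgreSQL schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE verification_tokens (token TEXT, user_id TEXT)")
conn.execute("INSERT INTO verification_tokens VALUES ('sk-litellm-secret', 'admin')")

def lookup_token(bearer_value: str):
    # The bug: attacker-controlled header text is interpolated into the SQL.
    query = f"SELECT user_id FROM verification_tokens WHERE token = '{bearer_value}'"
    return conn.execute(query).fetchall()

# The quote closes the string literal, UNION appends an arbitrary SELECT,
# and -- comments out the trailing quote.
payload = "x' UNION SELECT token FROM verification_tokens --"
print(lookup_token(payload))  # leaks the stored token instead of no rows
```

A legitimate lookup returns the matching row; the payload returns rows the caller was never authorized to see, with no credentials presented at any point.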

The Impact: A Cloud Account in Your Database

This vulnerability's scope significantly exceeds that of a typical web application SQL injection. The LiteLLM SQLi flaw's potential impact, or 'blast radius,' is comparable to a full cloud-account compromise. AI gateways like LiteLLM centralize "cloud-grade credentials," making them high-value targets.

Consider the data held: virtual API keys (including the master key), upstream provider credentials for services like OpenAI, Anthropic, and AWS Bedrock, and proxy environment variables that may contain PostgreSQL DSNs or other sensitive configurations. Compromise of these assets grants access to the entire AI infrastructure, extending beyond the immediate LiteLLM instance.

Exfiltrated keys can be used against /chat/completions from any IP: LiteLLM does not bind keys to a source address by default, so a stolen key is usable from anywhere. This is a Tier-1 credential surface, and its compromise is a critical security event.

The observed exploitation specifically targeted LiteLLM_VerificationToken (virtual API keys), litellm_credentials (upstream provider keys), and litellm_config (environment variables). The attacker demonstrated precise knowledge of high-value data locations.

While schema enumeration was precise, we did not observe authenticated follow-through: the attacker did not appear to use the exfiltrated keys against the /chat/completions endpoint. This suggests the goal may not have been immediate traffic generation but the collection of credentials for later use or sale, or simply a temporary operational pause.

Diagram illustrating the flow of exfiltrated credentials after LiteLLM SQLi flaw exploitation.

The Response: Patch, Rotate, and Rethink

LiteLLM maintainers released v1.83.7, which replaces the string interpolation with a parameterized query, a technically sound fix that closes the injection path.
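For contrast with the vulnerable pattern, here is a hedged sketch of the class of fix v1.83.7 applies: binding the header value as a query parameter so the driver treats it as data, never as SQL text. The table and column names are illustrative, not LiteLLM's actual Prisma-managed schema.

```python
import sqlite3

# Sketch of the fix class, not the actual patch: an in-memory SQLite table
# with hypothetical names demonstrates parameter binding.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE verification_tokens (token TEXT, user_id TEXT)")
conn.execute("INSERT INTO verification_tokens VALUES ('sk-litellm-secret', 'admin')")

def lookup_token(bearer_value: str):
    # Placeholder plus bound parameter: the value is never parsed as SQL.
    query = "SELECT user_id FROM verification_tokens WHERE token = ?"
    return conn.execute(query, (bearer_value,)).fetchall()

payload = "x' UNION SELECT token FROM verification_tokens --"
print(lookup_token(payload))              # prints [] since the payload matches nothing
print(lookup_token("sk-litellm-secret"))  # prints [('admin',)]
```

The same payload that dumped rows against the interpolated query now matches nothing, because the entire header value is compared as a literal token string.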

Looking beyond the immediate patch, this incident fits a recurring pattern: fundamental flaws like unparameterized SQL keep surfacing in widely adopted open-source projects, including those entrusted with sensitive credentials, with direct implications for supply chain security.

Immediate action is therefore imperative. Update LiteLLM to v1.83.7 or later without delay. If your LiteLLM instance was internet-reachable on a vulnerable version, assume its database is compromised. Rotate all virtual API keys, master keys, and provider credentials stored within it.

For layered defense, if immediate patching is not feasible, place the proxy behind a reverse proxy or Web Application Firewall (WAF). Configure it to block Authorization header values containing single quotes, parentheses, or SQL keywords like UNION, SELECT, FROM, OR, or --.
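The header pre-filter described above can be sketched as a small check suitable for middleware or WAF rule prototyping. The regex and function name here are assumptions, and keyword matching like this can false-positive on legitimate keys containing substrings like "or", so treat it as a stopgap until the patch lands, not a durable control.

```python
import re

# Hypothetical pre-filter sketch: flag Authorization headers that contain
# the characters and keywords named in the mitigation advice.
SUSPICIOUS = re.compile(
    r"['()]"                          # quote or parenthesis escape attempts
    r"|--"                            # SQL comment sequence
    r"|\b(UNION|SELECT|FROM|OR)\b",   # common injection keywords
    re.IGNORECASE,
)

def is_suspicious_bearer(auth_header: str) -> bool:
    """Return True if an Authorization header looks like an injection attempt."""
    if not auth_header.startswith("Bearer "):
        return True  # reject malformed headers outright
    return bool(SUSPICIOUS.search(auth_header[len("Bearer "):]))

print(is_suspicious_bearer("Bearer sk-litellm-abc123"))              # False
print(is_suspicious_bearer("Bearer sk-litellm' UNION SELECT 1 --"))  # True
```

Blocking at the proxy layer buys time, but it does not replace patching: encoding tricks and keyword variants can slip past a deny-list like this one.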

Check upstream provider billing for /chat/completions traffic from unfamiliar IPs. Monitor webserver logs for indicators of compromise (IoCs): POST /chat/completions or /v1/chat/completions with empty or 75-byte bodies, a User-Agent of Python/3.12 aiohttp/3.9.1, and Authorization: Bearer headers starting with sk-litellm' or containing SQL keywords.
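The IoC sweep above can be scripted as a simple log scan. This is a hedged sketch: the log lines below assume a format where the Authorization and User-Agent values are logged, which is not the default for most webservers, so adapt the patterns to your own access-log layout.

```python
import re

# Hedged sketch: flag access-log lines matching the published IoCs.
# Assumes Authorization and User-Agent fields appear in the logged line.
SQLI_BEARER = re.compile(r"Bearer\s+sk-litellm'|\b(UNION|SELECT)\b", re.IGNORECASE)
SUSPECT_UA = "Python/3.12 aiohttp/3.9.1"  # user agent observed in the attacks

def flag_log_line(line: str) -> bool:
    """Return True if a log line matches any of the described IoCs."""
    if SUSPECT_UA in line:
        return True
    return bool(SQLI_BEARER.search(line))

lines = [
    'POST /chat/completions "Bearer sk-litellm-ok" "Mozilla/5.0"',
    'POST /v1/chat/completions "Bearer sk-litellm\' UNION SELECT 1 --" "Python/3.12 aiohttp/3.9.1"',
]
print([flag_log_line(l) for l in lines])  # prints [False, True]
```

Any hit warrants treating the instance's database as compromised and starting key rotation immediately.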

Restrict network access for LiteLLM proxies. They should operate within an internal network or behind a mutually-authenticated reverse proxy, not directly exposed to the internet without stringent controls. Ultimately, effective security hinges on a precise inventory of all AI proxy and gateway deployments.

This LiteLLM SQLi flaw is more than a typical SQL injection. It underscores that AI gateways are now critical infrastructure: they centralize access to powerful, cloud-grade AI services, so a single pre-authentication flaw in such a component can compromise an entire AI ecosystem. These gateways deserve the same rigorous security posture applied to core cloud accounts.

Daniel Marsh
Former SOC analyst turned security writer. Methodical and evidence-driven, breaks down breaches and vulnerabilities with clarity, not drama.