LiteLLM SQL Injection Exploited Within 36 Hours of Public Disclosure

A critical SQL injection vulnerability in BerriAI's LiteLLM Python package was exploited in the wild less than 36 hours after its disclosure, marking yet another example of attackers moving at machine speed against AI infrastructure tools.
The flaw, tracked as CVE-2026-42208 with a CVSS score of 9.3, allows unauthenticated attackers to manipulate the LiteLLM proxy database by sending a specially crafted Authorization header to any LLM API endpoint. The issue stems from the proxy's API key check, which concatenates the user-supplied key value directly into the query text rather than using a parameterized query.
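The report doesn't reproduce LiteLLM's actual query, but the bug class is straightforward to illustrate. The sketch below uses hypothetical table and column names (not LiteLLM's real schema) and SQLite for brevity; it contrasts the vulnerable concatenation pattern with a parameterized query, where the driver treats the attacker-controlled value strictly as data:

```python
import sqlite3

def lookup_key_vulnerable(conn, api_key):
    # ANTI-PATTERN: user input is spliced into the SQL text itself.
    # A crafted key such as "' OR '1'='1" rewrites the query's logic.
    query = "SELECT user_id FROM api_keys WHERE key = '" + api_key + "'"
    return conn.execute(query).fetchall()

def lookup_key_safe(conn, api_key):
    # FIX: parameterized query; the placeholder binds the value as data,
    # so injection payloads simply fail to match any row.
    return conn.execute(
        "SELECT user_id FROM api_keys WHERE key = ?", (api_key,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE api_keys (key TEXT, user_id TEXT)")
    conn.execute("INSERT INTO api_keys VALUES ('sk-real-key', 'alice')")

    payload = "' OR '1'='1"  # classic tautology injection
    print(lookup_key_vulnerable(conn, payload))  # leaks every row
    print(lookup_key_safe(conn, payload))        # matches nothing
```

The same principle applies regardless of the database driver: any query whose text is assembled from request-controlled strings, including HTTP header values, is injectable.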
Affecting versions 1.81.16 through 1.83.6, the vulnerability was patched in version 1.83.7-stable on April 19, 2026. By April 26, however, security researchers at Sysdig had observed the first exploitation attempt, just over 26 hours after the GitHub advisory was indexed.
Attackers targeted database tables containing upstream LLM provider credentials, including OpenAI organization keys with five-figure monthly spend limits, Anthropic console keys with workspace admin rights, and AWS Bedrock IAM credentials. Notably, they ignored user and team tables, focusing exclusively on high-value secrets. The operation used two different egress IPs in quick succession, suggesting a single, well-prepared operator.
Sysdig noted that a successful database extraction from LiteLLM represents far more than a typical web-app SQL injection—it effectively compromises the cloud accounts tied to those credentials. LiteLLM, an open-source AI gateway with over 45,000 GitHub stars, was also the target of a supply chain attack last month by the TeamPCP hacking group.
Organizations should update to version 1.83.7-stable immediately. If patching isn't possible, administrators can mitigate risk by setting disable_error_logs: true under general_settings to block the vulnerable query path.
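For administrators who cannot patch immediately, the mitigation named above would be applied in the proxy's YAML configuration. A minimal sketch (the filename and surrounding structure are illustrative; only the setting itself comes from the advisory):

```yaml
# config.yaml passed to the LiteLLM proxy
general_settings:
  disable_error_logs: true  # blocks the vulnerable query path per the advisory
```

Treat this as a stopgap only; upgrading to 1.83.7-stable remains the actual fix.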
Source: The Hacker News