Critical 10.0-Severity Vulnerability Identified in LiteLLM v1.61.15 AI Integration Library
A critical security flaw has been identified in the widely used LiteLLM library, a key tool developers use to interface with a range of large language model (LLM) APIs. The vulnerability, rated at the maximum severity score of 10.0, was discovered in the package file `litellm-1.61.15-py3-none-any.whl`. Flagged by automated security scanning, the finding indicates a severe risk for any application or service that has integrated this version of the library, potentially exposing systems to remote code execution or data compromise.
The vulnerable release is distributed via the official Python Package Index (PyPI). The affected file path points to a dependency declared in a project recipe (`/cookbook/litellm-ollama-docker-image/requirements.txt`), indicating use in containerized AI deployments. The issue was flagged at GitHub commit `6f1a800f9c1f6828abe5047f265402c425cff255` of the `snowdensb/litellm` repository. In total, four distinct vulnerabilities were reported against this release, with the highest reaching the critical severity threshold.
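To illustrate how a pinned dependency of this kind might be caught before it reaches a container image, the sketch below scans a requirements file for the affected LiteLLM version. The file path and version string come from the report above; the helper function itself is hypothetical and only one of many ways to audit a manifest.

```python
# Hypothetical audit helper: scan a requirements.txt for the affected LiteLLM pin.
from pathlib import Path

AFFECTED_VERSION = "1.61.15"  # release reported as vulnerable


def find_affected_pin(requirements_path: str) -> bool:
    """Return True if the requirements file pins litellm to the affected version."""
    for raw_line in Path(requirements_path).read_text().splitlines():
        line = raw_line.split("#", 1)[0].strip()  # drop inline comments and whitespace
        if not line:
            continue
        name, _, version = line.partition("==")
        if name.strip().lower() == "litellm" and version.strip() == AFFECTED_VERSION:
            return True
    return False


if __name__ == "__main__":
    # Path mirrors the cookbook recipe named in the report.
    path = "cookbook/litellm-ollama-docker-image/requirements.txt"
    if find_affected_pin(path):
        print(f"WARNING: litellm=={AFFECTED_VERSION} is pinned in {path}")
```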
This exposure places AI-powered applications and backend services built on the popular LiteLLM abstraction layer at immediate risk. Developers and organizations relying on the library in production should urgently review their dependencies, confirm they are not using the affected v1.61.15 wheel, and apply patches or version upgrades as they become available; a minimal runtime check is sketched below. Bundling a vulnerable component into Docker images and other deployment pipelines significantly amplifies the potential attack surface.
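As a minimal sketch of such a runtime check, assuming a standard pip-installed environment, a deployment could verify the installed LiteLLM version at startup. Because the report does not name a patched version, the check only flags the known-affected release rather than enforcing a minimum version.

```python
# Minimal sketch: refuse to start if the known-affected LiteLLM release is installed.
from importlib.metadata import PackageNotFoundError, version

AFFECTED_VERSION = "1.61.15"  # release reported as vulnerable


def assert_not_affected() -> None:
    """Raise if the installed litellm distribution matches the affected release."""
    try:
        installed = version("litellm")
    except PackageNotFoundError:
        return  # litellm is not installed in this environment
    if installed == AFFECTED_VERSION:
        raise RuntimeError(
            f"litellm {installed} is installed; this release is flagged as vulnerable. "
            "Upgrade to a patched version before deploying."
        )


if __name__ == "__main__":
    assert_not_affected()
    print("Installed litellm version is not the known-affected release.")
```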