Automated Scan Flags 113 CRITICAL Vulnerabilities in AI/ML Supply Chain, Including LangChain and Anthropic
An automated vulnerability database update has flagged 113 newly identified CRITICAL-severity security flaws, heavily concentrated in the AI and machine learning development ecosystem. The scan reveals a cluster of high-risk issues within popular frameworks, suggesting a systemic exposure point for applications built on these tools. This is not a routine update: the volume and severity of the findings point to a significant, active threat landscape for developers integrating these packages.
The data points to LangChain as a major focal point, with multiple distinct CVEs detailing vulnerabilities that could allow arbitrary code execution and SQL injection. Affected versions include 0.0.4, 0.0.68, and 0.0.197, among others, and the entries describe remote attackers able to execute code through prompt injection and other vectors in versions prior to 0.0.247. Alongside LangChain, the `anthropic` package is also listed with a critical CVE (CVE-2026-35022), though details are currently unspecified, broadening the potential impact across the AI toolchain.
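The practical check implied by these entries is whether a pinned dependency predates the first fixed release. A minimal sketch, assuming plain dot-separated version strings (the helper names are illustrative; a production audit should use `packaging.version` or a dedicated scanner rather than this hand-rolled comparison):

```python
# Sketch: flag dependency pins older than a patched release.
# The version numbers come from the advisory text above; the parsing
# here is a deliberate simplification that only handles "X.Y.Z" forms.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dot-separated version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str, patched: str) -> bool:
    """True if `installed` is strictly older than the patched release."""
    return parse_version(installed) < parse_version(patched)

PATCHED = "0.0.247"  # first LangChain release the entries report as fixed
for pin in ["0.0.4", "0.0.68", "0.0.197", "0.0.247"]:
    status = "VULNERABLE" if is_vulnerable(pin, PATCHED) else "ok"
    print(f"langchain=={pin}: {status}")
```

Note that tuple comparison alone mishandles pre-release and post-release suffixes (e.g. `0.0.247rc1`), which is one reason real tooling normalizes versions per PEP 440 instead.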
This mass disclosure places immediate pressure on development and security teams to audit their dependencies and apply patches. With this many critical flaws in foundational AI/ML libraries, unpatched deployments risk widespread exploitation, potentially compromising any application or service that integrates the vulnerable components. The situation demands urgent scrutiny of software supply chains, particularly for organizations deploying AI-powered features.
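For the audit step, a quick version-aware comparison can triage a single pin before a full scan. A minimal sketch using `sort -V` (the pinned version is a placeholder; substitute the one from your lockfile, and prefer a dedicated scanner such as `pip-audit` run against an up-to-date advisory database for real audits):

```shell
# Sketch: check whether a pinned langchain version predates 0.0.247,
# the first release the advisory entries report as fixed.
pinned="0.0.197"    # placeholder pin for illustration
patched="0.0.247"

# sort -V orders version strings numerically component by component,
# so the oldest of the two versions sorts first.
oldest=$(printf '%s\n' "$pinned" "$patched" | sort -V | head -n1)
if [ "$oldest" = "$pinned" ] && [ "$pinned" != "$patched" ]; then
    echo "langchain==$pinned predates patched $patched: upgrade required"
else
    echo "langchain==$pinned is at or above the patched release"
fi
```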