LangChain Core v1 Update Patches Critical SSRF Vulnerability in ChatOpenAI (CVE-2026-26013)
A major security update for LangChain Core patches a critical Server-Side Request Forgery (SSRF) vulnerability that could allow attackers to force AI applications to make unauthorized network requests. The flaw, tracked as CVE-2026-26013, resides in the `ChatOpenAI.get_num_tokens_from_messages()` method. This function, which counts the tokens in a list of chat messages before they are sent to a model, was found to fetch arbitrary `image_url` values from user-supplied messages without validation, creating a direct path for exploitation.
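To illustrate the attack surface, consider how an attacker-controlled message can point `image_url` at an internal endpoint: if a token counter fetches that URL server-side, the request originates from inside the application's network. The sketch below is hypothetical (it mirrors the OpenAI-style multimodal content format and makes no network request itself; `extract_image_urls` is an illustrative helper, not a LangChain API):

```python
# Hypothetical attacker-controlled message in the OpenAI-style
# multimodal content-part format. A vulnerable token counter that
# fetches image_url values server-side would request this URL
# from the application's own network context.
malicious_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "How many tokens is this?"},
        {
            "type": "image_url",
            # Classic SSRF target: the cloud instance metadata service.
            "image_url": {"url": "http://169.254.169.254/latest/meta-data/"},
        },
    ],
}


def extract_image_urls(message: dict) -> list[str]:
    """Collect every image_url a server-side fetcher would see."""
    return [
        part["image_url"]["url"]
        for part in message.get("content", [])
        if isinstance(part, dict) and part.get("type") == "image_url"
    ]


urls = extract_image_urls(malicious_message)
```

Nothing in the message looks anomalous to a schema validator; the danger lies entirely in what the server later does with the URL.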
The vulnerability specifically affects the `langchain-core` library, a foundational component for building applications with large language models (LLMs). The security advisory from LangChain AI details that the unsafe fetching of URLs could be weaponized to probe internal networks, access cloud metadata endpoints, or interact with internal services from the compromised application's context. The fix ships in version 1.2.11, part of the stable 1.x release line that succeeded the library's long-running pre-1.0.0 series, a transition that itself reflects significant underlying changes.
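Independent of the library upgrade, applications that accept image URLs from users can screen them before any server-side fetch. The following is a minimal stdlib-only sketch of such a filter; the function name and policy are illustrative and not part of LangChain's API:

```python
import ipaddress
from urllib.parse import urlparse


def is_safe_image_url(url: str) -> bool:
    """Reject URLs that could reach internal services.

    Illustrative policy: require http(s), and reject literal IPs in
    private, loopback, or link-local ranges (which covers the cloud
    metadata address 169.254.169.254). Real deployments should also
    resolve hostnames and re-check the resulting addresses, since a
    benign-looking domain can resolve to an internal IP.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        addr = ipaddress.ip_address(parsed.hostname)
    except ValueError:
        # Hostname rather than a literal IP: resolve-and-check in production.
        return True
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)
```

Such a check belongs at the trust boundary where user messages enter the system, so that no downstream component, token counter included, ever sees an unvetted URL.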
This patch underscores the escalating security scrutiny on the AI application stack, where foundational libraries are becoming high-value targets. The integration of an OpenSSF Security Scorecard badge in the update notice highlights a growing industry push for transparent security practices. For developers, this is a mandatory update; failure to apply it leaves AI chatbots and agents built on LangChain vulnerable to data exfiltration and internal network reconnaissance attacks initiated through seemingly benign image URL parameters.
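Applying the update is a one-line operation; pinning at or above the patched release cited in the advisory also guards against accidental downgrades:

```shell
# Upgrade langchain-core to the patched release noted in the advisory
pip install --upgrade "langchain-core>=1.2.11"

# Confirm the installed version
pip show langchain-core
```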