- Three security vulnerabilities (CVE-2026-34070, CVE-2025-68664, CVE-2025-67644) were disclosed in the LangChain and LangGraph frameworks, which together see more than 84 million downloads per week.
- The flaws could expose filesystem data, environment secrets, and conversation history, allowing attackers to drain sensitive data from enterprise deployments.
- Patches have been released, but a similar flaw in Langflow was exploited within 20 hours of its disclosure, leaving defenders little time to update.
Cybersecurity researchers disclosed three critical vulnerabilities in the widely used AI frameworks LangChain and LangGraph on March 27, 2026, potentially exposing sensitive enterprise data. According to Cyera security researcher Vladimir Tokarev, “Each vulnerability exposes a different class of enterprise data: filesystem files, environment secrets, and conversation history.” These frameworks form the backbone of countless Large Language Model (LLM) applications, with PyPI statistics showing they were downloaded over 84 million times last week alone. Consequently, a single flaw in this core infrastructure can ripple outward through hundreds of dependent libraries and integrations.
The specific vulnerabilities include a path traversal flaw (CVE-2026-34070) allowing arbitrary file access via the prompt-loading API, a deserialization issue (CVE-2025-68664, dubbed LangGrinch) that leaks API keys and other secrets, and an SQL injection (CVE-2025-67644) in LangGraph’s SQLite checkpoint feature. Cyera noted that the issues offer “three independent paths that an attacker can leverage to drain sensitive data” from any LangChain deployment. Patches have been issued in updated versions of langchain-core, langchain, and langgraph-checkpoint-sqlite.
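The advisories do not include proof-of-concept details, but the path traversal class behind CVE-2026-34070 is well understood: if a loader accepts an attacker-influenced template path, sequences like `../` can escape the intended directory. The sketch below is a generic defensive pattern, not LangChain’s actual patched logic; the `safe_prompt_path` helper and the `/srv/app/prompts` directory are illustrative assumptions.

```python
from pathlib import Path

# Illustrative base directory; the only location templates may be loaded from.
PROMPT_DIR = Path("/srv/app/prompts").resolve()

def safe_prompt_path(user_supplied: str) -> Path:
    """Hypothetical guard: resolve a user-influenced template name and refuse
    anything that escapes PROMPT_DIR (e.g. '../../etc/passwd')."""
    candidate = (PROMPT_DIR / user_supplied).resolve()
    # is_relative_to() checks containment after '..' and symlinks are resolved (Python 3.9+).
    if not candidate.is_relative_to(PROMPT_DIR):
        raise ValueError(f"template path escapes prompt directory: {user_supplied!r}")
    return candidate

# Usage: validate the path before handing it to any prompt-loading call.
# template_text = safe_prompt_path(request_param).read_text()
```

The same containment check applies to any file-loading API that consumes external input, which is why path traversal remains a recurring bug class in plugin and template systems.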
The incident is a reminder that AI development tools are not immune to classic security threats. It follows the rapid exploitation of a similar flaw in Langflow (CVE-2026-33017), which was reportedly attacked within 20 hours of its disclosure. As Naveen Sunkavally of Horizon3.ai pointed out, the speed of these attacks underscores the critical need for prompt patching. The findings demonstrate that securing the foundational plumbing of the AI ecosystem is essential to protecting the entire stack built on top of it.
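Given that narrow patching window, a quick first step is to audit which versions of the affected packages a deployment actually runs and compare them against the patched releases listed in each CVE record. A minimal sketch using only the Python standard library; the package names come from the advisories above, while the reporting logic is illustrative:

```python
from importlib import metadata

# Distributions named in the advisories; compare the output against the
# patched versions listed in each CVE record before redeploying.
AFFECTED = ["langchain-core", "langchain", "langgraph-checkpoint-sqlite"]

for dist in AFFECTED:
    try:
        print(f"{dist}=={metadata.version(dist)}")
    except metadata.PackageNotFoundError:
        print(f"{dist}: not installed")
```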
