- Critical remote code execution vulnerabilities affect major AI inference engines from Meta, NVIDIA, Microsoft, and open-source projects like vLLM and SGLang.
- The vulnerabilities originate from unsafe use of ZeroMQ (ZMQ) combined with Python’s pickle deserialization, a pattern dubbed ShadowMQ that spread across projects through code reuse.
- The flaws were traced back to Meta’s Llama framework (CVE-2024-50050), with similar issues in NVIDIA TensorRT-LLM, Microsoft Sarathi-Serve, Modular Max Server, vLLM, and SGLang.
- Exploitation can enable attackers to execute arbitrary code, escalate privileges, steal models, and deploy malware such as cryptocurrency miners across AI clusters.
- Separate research revealed that Cursor’s AI-powered source code editor is vulnerable to JavaScript injection attacks via rogue MCP servers and malicious extensions, risking credential theft and system compromise.
Security researchers have identified critical remote code execution vulnerabilities in key artificial intelligence (AI) inference engines used by major technology firms. Flaws have been found in frameworks developed by Meta, NVIDIA, and Microsoft, as well as in open-source projects including vLLM and SGLang. The issues stem from an unsafe deserialization practice: ZeroMQ (ZMQ) network communication combined with Python’s pickle module.
The root cause, as detailed by Oligo Security researcher Avi Lumelsky in a recent report, has been termed the ShadowMQ pattern. It describes the repeated unsafe use of pickle deserialization on unauthenticated ZMQ TCP sockets, replicated across AI projects through widespread code reuse.
The initial vulnerability was found in Meta’s Llama large language model framework (CVE-2024-50050, rated CVSS 6.3 by Meta and 9.3 by Snyk) and patched in October 2024. It involved pyzmq’s recv_pyobj() method deserializing network data with pickle and without any validation, allowing attackers to execute arbitrary code remotely. The pyzmq library has also received fixes addressing this weakness.
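To illustrate, the shape of the vulnerable pattern looks roughly like the sketch below. This is not the actual Llama Stack code; the port, socket type, and message handling are hypothetical, but recv_pyobj() really does call pickle.loads() on the received bytes.

```python
# Minimal sketch of the ShadowMQ anti-pattern (hypothetical service,
# not the actual Llama Stack code).
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
# Listens on an unauthenticated TCP socket: anyone who can reach
# this port can send a message.
sock.bind("tcp://0.0.0.0:5555")

# recv_pyobj() runs pickle.loads() on untrusted network bytes.
# A crafted pickle payload (e.g. one defining __reduce__) executes
# arbitrary code during deserialization.
request = sock.recv_pyobj()  # <-- remote code execution happens here
sock.send_pyobj({"status": "ok"})
```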
Further investigation revealed the same unsafe pattern in NVIDIA TensorRT-LLM (CVE-2025-23254, CVSS 8.8), Modular Max Server (CVE-2025-60455), Microsoft’s Sarathi-Serve, and the open-source vLLM and SGLang projects. Some of these issues remain unpatched or only partially resolved. Direct copying of the vulnerable logic between codebases drove the flaw’s spread.
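One standard mitigation, not necessarily the exact approach each project’s patch took, is to stop unpickling untrusted input altogether and parse messages as a data-only format such as JSON. A minimal sketch of that safer counterpart, with illustrative field names:

```python
# Safer counterpart: same ZMQ transport, but messages are parsed as
# JSON, which carries only data and cannot execute code on load.
import json
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
sock.bind("tcp://127.0.0.1:5555")  # bind to loopback where possible

raw = sock.recv()              # raw bytes, never unpickled
try:
    request = json.loads(raw)  # malformed input raises, not executes
except ValueError:
    request = None             # reject rather than trust the payload
sock.send_json({"status": "ok" if request is not None else "rejected"})
```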
Compromising a single AI inference engine node could enable attackers to execute code on clusters, escalate privileges, steal AI models, or deploy malicious payloads like cryptocurrency miners for financial gain. Lumelsky emphasized the rapid pace of AI development and the dangers of reusing unsafe architectural components.
In related developments, security research by Knostic has exposed vulnerabilities in Cursor’s AI-powered source code editor. Attackers can exploit rogue local Model Context Protocol (MCP) servers to replace browser login pages with fake versions that capture user credentials. Malicious IDE extensions can also inject JavaScript to perform arbitrary actions with the editor’s full privileges, including accessing the file system and persisting malware. Recommended mitigations include disabling auto-run features, carefully vetting extensions and MCP servers, limiting API permissions, and auditing critical integrations.
References to the specific vulnerabilities and their fixes are available through these links:
- vLLM CVE-2025-30165
- NVIDIA TensorRT-LLM CVE-2025-23254
- Modular Max Server CVE-2025-60455
- SGLang incomplete fixes
- Knostic report on Cursor browser vulnerability
- Demonstration of code injection in VSCode Cursor
