- Security researchers uncovered and reported a critical flaw in LangChain’s LangSmith platform that exposed API keys and user data.
- The vulnerability, known as AgentSmith, allowed attackers to capture sensitive information with a CVSS risk score of 8.8 out of 10.
- The exploit worked by adding malicious agents with hidden proxy servers to the platform, which intercepted user communications.
- LangChain addressed the issue with a backend fix and implemented new warnings regarding custom proxy configurations.
- Meanwhile, new variants of the WormGPT Malware, powered by xAI Grok and Mistral AI Mixtral, have appeared in underground forums.
Cybersecurity researchers identified a severe vulnerability in the LangSmith platform from LangChain, which allowed attackers to access sensitive user information. The flaw, now fixed, was found to affect how users interacted with AI agents and exposed confidential details like API keys and prompts.
According to analysis by Noma Security, the vulnerability, labeled AgentSmith, received a risk score of 8.8 out of 10 on the Common Vulnerability Scoring System (CVSS). Attackers could upload compromised AI agents to the LangChain Hub, which any user could then access. If a user tried out one of these agents, a hidden proxy server intercepted all their data.
Researchers Sasi Levi and Gal Moyal explained, "Once adopted, the malicious proxy discreetly intercepted all user communications – including sensitive data such as API keys (including OpenAI API Keys), user prompts, documents, images, and voice inputs – without the victim’s knowledge." (source). This meant attackers could gain access to the victim’s OpenAI environment, risking theft of proprietary information and possible financial loss if API resources were misused.
The problem was responsibly reported on October 29, 2024, and LangChain released a backend patch on November 6, 2024. The update added a prompt to warn users about potential data exposure when cloning agents with custom proxy settings. If users cloned a malicious agent into their organization, the data leak could continue without their knowledge.
Researchers noted that risks included not just unauthorized access to datasets, but also potential legal and reputational consequences. They said, "Malicious actors could gain persistent access to internal datasets uploaded to OpenAI, proprietary models, trade secrets and other intellectual property, resulting in legal liabilities and reputational damage."
Separately, security analysts from Cato Networks reported that cybercriminals have introduced new versions of the WormGPT malware, now powered by both xAI Grok and Mistral AI Mixtral models. Originally launched in mid-2023, WormGPT enabled attackers to create phishing campaigns and malware, and even though the original project was shut down, new versions continue to circulate on cybercrime forums. These versions use existing large language models to generate uncensored, and often illegal, content, adapting them for malicious use (source).
WormGPT is described by security researchers as a brand for uncensored LLM tools, built by modifying established models to evade limits and produce unethical outputs.
✅ Follow BITNEWSBOT on Telegram, Facebook, LinkedIn, X.com, and Google News for instant updates.
Previous Articles:
- Ubyx Raises $10M Seed Round to Streamline Stablecoin Transfers
- XRP Price Prediction: Traders See 80% Chance of $3.50 in 2025
- MEV Spam Consumes 60% of L2 Blockspace, Drives Up Fees
- Ethereum Staked Hits All-Time High of 35M After SEC Clarity
- Malaysia Launches Digital Asset Innovation Hub for Fintech Growth