- Google discovered five malware families that use large language models (LLMs) to create or hide malicious code at runtime.
- A North Korean group known as UNC1069 exploited Gemini to collect wallet data and develop phishing scripts.
- Malware now uses AI models such as Gemini and Qwen2.5-Coder to generate code “just-in-time,” adapting dynamically.
- Google has disabled related accounts and enhanced security measures to prevent misuse of AI models.
Google has identified a new wave of malware that leverages large language models (LLMs) in real time to generate or modify malicious code. This development marks an advanced stage in how state-linked and criminal actors use artificial intelligence in cyberattacks. The findings were shared in a recent report by the Google Threat Intelligence Group.
At least five distinct malware families actively query external AI models like Gemini and Qwen2.5-Coder during runtime. This technique, called “just-in-time code creation,” enables malware to produce malicious scripts and obfuscate code dynamically, improving evasion from detection systems. Unlike traditional malware with hard-coded logic, these variants outsource portions of their functionality to AI models for continuous adaptation.
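Conceptually, the "just-in-time" pattern reduces to a loop: the malware sends a prompt to a hosted model, receives generated code or commands back, and acts on the result. The Python sketch below is a harmless illustration of that request/response shape only; the endpoint URL, API key, and JSON fields are hypothetical placeholders, and the generated output is printed rather than executed.

```python
import requests

# Hypothetical LLM endpoint and key -- placeholders for illustration,
# not a real service or a real provider's API.
API_URL = "https://llm.example.com/v1/generate"
API_KEY = "REDACTED"

def fetch_generated_command(task: str) -> str:
    """Ask a hosted model to produce a command for the given task.

    This mirrors the request/response pattern of 'just-in-time' code
    generation described in the report: the logic is not hard-coded in
    the binary but requested from an external model at runtime.
    """
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": f"Write a one-line shell command to {task}."},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumes a hypothetical response body of the form {"text": "..."}.
    return resp.json().get("text", "")

if __name__ == "__main__":
    # A benign task, and the result is only printed here. In the malware
    # Google describes, the response would be executed or written into
    # the script itself, so the payload never exists on disk in advance,
    # which is what frustrates static, signature-based detection.
    print(fetch_generated_command("list files in the current directory"))
```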
Two of the malicious families, PROMPTFLUX and PROMPTSTEAL, illustrate this method clearly. PROMPTFLUX runs a "Thinking Robot" process that calls Gemini's API hourly to rewrite its own VBScript code. PROMPTSTEAL, attributed to Russia's APT28 group, uses the Qwen2.5-Coder model hosted on Hugging Face to generate Windows commands on demand.
The research also highlights activity from the North Korean threat group UNC1069, also known as Masan. This group misused Gemini to locate wallet application data, create scripts for accessing encrypted storage, and develop multilingual phishing messages targeting cryptocurrency exchange employees. According to Google, these actions are part of broader efforts to steal digital assets.
In response, Google has disabled accounts tied to these operations and introduced stricter safeguards, including improved prompt filtering and more rigorous monitoring of API access to limit abuse of its AI models. This emerging threat surface presents new challenges, as malware can now remotely query LLMs to generate tailored attacks and steal sensitive information.
