Google Warns of AI-Powered Malware Dynamically Altering Code

  • Google discovered five malware families that use large language models (LLMs) to create or hide malicious code during execution.
  • A North Korean group tracked as UNC1069 misused Gemini to collect wallet data and develop phishing scripts.
  • Malware now uses AI models such as Gemini and Qwen2.5-Coder to generate code “just-in-time,” adapting dynamically.
  • Google has disabled related accounts and enhanced security measures to prevent misuse of AI models.

Google has identified a new wave of malware that leverages large language models (LLMs) in real time to generate or modify malicious code. The development marks an advanced stage in how state-linked and criminal actors use artificial intelligence in cyberattacks. The findings were shared in a recent report by the Google Threat Intelligence Group.


At least five distinct malware families actively query external AI models such as Gemini and Qwen2.5-Coder at runtime. This technique, called “just-in-time code creation,” enables malware to produce malicious scripts and obfuscate its own code dynamically, improving evasion of detection systems. Unlike traditional malware with hard-coded logic, these variants outsource portions of their functionality to AI models for continuous adaptation.
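
To make the pattern concrete, here is a minimal sketch of what runtime querying can look like: a program sends a prompt to an external LLM endpoint and receives fresh script text on every call. This is illustration only, not code from the samples Google analyzed; the endpoint URL, model id, and response format are assumptions, and the sketch prints the result rather than executing it.

```python
# Illustrative sketch only, not code from the actual samples: it shows
# the generic "query an external LLM at runtime for fresh code" pattern
# the report describes. The endpoint URL, model id, API key, and
# response format are placeholder assumptions.
import json
import urllib.request

API_URL = "https://llm-provider.example/v1/generate"  # hypothetical endpoint
API_KEY = "REPLACE_ME"                                # placeholder credential

def fetch_generated_script(task_description: str) -> str:
    """Ask a remote model for script text; each call may return new code."""
    payload = json.dumps({
        "model": "example-code-model",  # hypothetical model id
        "prompt": f"Write a short script that {task_description}",
    }).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["text"]

if __name__ == "__main__":
    # Because the returned text can differ on every run, there is no
    # stable payload for signature-based scanners to match. This sketch
    # only prints the response; it never executes anything.
    print(fetch_generated_script("prints the current date"))
```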

Two of the malware families, PROMPTFLUX and PROMPTSTEAL, illustrate the method clearly. PROMPTFLUX runs a “Thinking Robot” process that calls Gemini’s API hourly to rewrite its own VBScript code. PROMPTSTEAL, attributed to Russia’s APT28 group, uses the Qwen2.5-Coder model hosted on Hugging Face to generate Windows commands on demand.
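
The PROMPTFLUX behavior can be pictured as a simple timer loop. The sketch below is a conceptual Python analogue (the actual sample is VBScript) under the same caveats as above: the prompt wording and the `query_model` callable are illustrative assumptions, and this harmless version only logs the output instead of persisting or running it.

```python
# Conceptual sketch of the hourly self-rewrite cycle described for
# PROMPTFLUX. This is not the actual VBScript sample; it only logs the
# model's output and never persists or executes it.
import time

REWRITE_PROMPT = (
    "Rewrite the following script so that it keeps the same behavior "
    "but no longer matches its previous form:\n{source}"
)

def rewrite_cycle(source: str, query_model, interval_seconds: int = 3600) -> None:
    """Periodically ask an LLM for a rewritten variant of `source`.

    `query_model` is any callable that sends a prompt to an LLM API and
    returns the response text (see the earlier sketch).
    """
    while True:
        variant = query_model(REWRITE_PROMPT.format(source=source))
        print("--- new variant received ---")
        print(variant)    # a real sample would overwrite itself here
        source = variant  # the next cycle mutates the latest variant
        time.sleep(interval_seconds)
```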

The research also highlights activity from the North Korean threat group UNC1069, also known as Masan. This group misused Gemini to locate wallet application data, create scripts for accessing encrypted storage, and develop multilingual phishing messages targeting cryptocurrency exchange employees. According to Google, these actions are part of broader efforts to steal digital assets.

In response, Google has disabled accounts tied to these operations and introduced stricter safeguards, including improved prompt filtering and more rigorous monitoring of API access, to limit abuse of its AI models. This emerging threat surface presents new challenges, as malware can now query LLMs remotely to generate tailored attacks and steal sensitive information.
