Google Warns of AI-Powered Malware Dynamically Altering Code

  • Google discovered five malware families that use large language models (LLMs) to generate or conceal malicious code during execution.
  • A North Korean group known as UNC1069 exploited Gemini to collect wallet data and develop phishing scripts.
  • Malware now uses AI models such as Gemini and Qwen2.5-Coder to generate code “just-in-time,” adapting dynamically.
  • Google has disabled related accounts and enhanced security measures to prevent misuse of AI models.

Google has identified a new wave of malware that leverages large language models (LLMs) in real time to generate or modify malicious code. This development marks an advanced stage in how state-linked and criminal actors use artificial intelligence in cyberattacks. The findings were shared in a recent report by the Google Threat Intelligence Group.


At least five distinct malware families actively query external AI models like Gemini and Qwen2.5-Coder during runtime. This technique, called “just-in-time code creation,” enables malware to produce malicious scripts and obfuscate code dynamically, improving evasion from detection systems. Unlike traditional malware with hard-coded logic, these variants outsource portions of their functionality to AI models for continuous adaptation.

Two of the malicious families, PROMPTFLUX and PROMPTSTEAL, illustrate this method clearly. PROMPTFLUX runs a “Thinking Robot” process that calls Gemini’s API hourly to rewrite its own VBScript code. PROMPTSTEAL, associated with Russia’s APT28 group, uses the Qwen2.5-Coder model hosted on Hugging Face to generate Windows commands as needed.

The research also highlights activity from the North Korean threat group UNC1069, also known as Masan. This group misused Gemini to locate wallet application data, create scripts for accessing encrypted storage, and develop multilingual phishing messages targeting cryptocurrency exchange employees. According to Google, these actions are part of broader efforts to steal digital assets.

In response, Google has disabled accounts tied to these operations and introduced stricter safeguards, including improved prompt filtering and more rigorous monitoring of API access, to limit abuse of its AI models. This emerging threat surface presents new challenges, as malware can now remotely query LLMs to generate tailored attacks and steal sensitive information.
