
Google Warns of AI-Powered Malware Dynamically Altering Code

  • Google discovered five malware families that use large language models (LLMs) to create or hide malicious code at runtime.
  • A North Korean group known as UNC1069 exploited Gemini to collect wallet data and develop phishing scripts.
  • Malware now uses AI models such as Gemini and Qwen2.5-Coder to generate code “just-in-time,” adapting dynamically.
  • Google has disabled related accounts and enhanced security measures to prevent misuse of AI models.

Google has identified a new wave of malware that leverages large language models (LLMs) in real time to generate or modify malicious code. This development marks an advanced stage in how state-linked and criminal entities use artificial intelligence in cyberattacks. The findings were shared in a recent report by the Google Threat Intelligence Group.


At least five distinct malware families actively query external AI models like Gemini and Qwen2.5-Coder during runtime. This technique, called “just-in-time code creation,” enables malware to produce malicious scripts and obfuscate code dynamically, improving evasion from detection systems. Unlike traditional malware with hard-coded logic, these variants outsource portions of their functionality to AI models for continuous adaptation.
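The report does not publish sample code, but the structural difference from traditional malware can be shown with a deliberately benign sketch: instead of shipping hard-coded logic, the program executes source text that only exists at runtime. Here the "model" is simulated with a local stub rather than a real LLM API call, so the example is self-contained; the point is that a static scanner has no fixed payload to fingerprint.

```python
# Benign illustration of "just-in-time" code generation.
# A real sample would query an external LLM API at runtime; here the
# model response is a hard-coded stub so the sketch stays harmless
# and self-contained.

def fake_model_response(prompt: str) -> str:
    """Stand-in for an LLM API call; returns source code as text."""
    return "result = sum(range(10))"

def run_generated_code(prompt: str) -> dict:
    code = fake_model_response(prompt)  # the logic exists only now, in memory
    namespace: dict = {}
    exec(code, namespace)               # nothing on disk for signatures to match
    return namespace

ns = run_generated_code("write code that sums the numbers 0 through 9")
print(ns["result"])  # → 45
```

Because each query can return differently worded (or freshly obfuscated) code, two runs of the same binary may execute different instructions, which is what defeats signature-based detection.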

Two of the malicious families, PROMPTFLUX and PROMPTSTEAL, illustrate this method clearly. PROMPTFLUX runs a “Thinking Robot” process that calls Gemini’s API hourly to rewrite its own VBScript code. PROMPTSTEAL, associated with Russia’s APT28 group, uses the Qwen model hosted on Hugging Face to generate Windows commands as needed.

The research also highlights activity from the North Korean threat group UNC1069, also known as Masan. This group misused Gemini to locate wallet application data, create scripts for accessing encrypted storage, and develop multilingual phishing messages targeting cryptocurrency exchange employees. According to Google, these actions are part of broader efforts to steal digital assets.

In response, Google has disabled accounts tied to these operations and introduced stricter safeguards. These include improved prompt filtering and more rigorous monitoring of API access to limit abuse of their AI models. This emerging threat surface presents new challenges, as malware can now remotely query LLMs to generate tailored attacks and steal sensitive information.
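Google has not published the details of its prompt filtering, so the following is only a rough, hypothetical sketch of the idea: screening incoming prompts for requests to produce evasive or self-rewriting code before they reach the model. Production systems would use trained classifiers rather than a keyword list; the patterns below are illustrative assumptions, not Google's implementation.

```python
import re

# Hypothetical server-side prompt filter. Flags prompts that ask a
# model for obfuscated or self-modifying code. A keyword heuristic
# like this only illustrates the concept of prompt filtering.
SUSPICIOUS_PATTERNS = [
    r"\bobfuscate\b",
    r"\brewrite (your|its) own\b",
    r"\bevade (antivirus|detection)\b",
]

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt matches any suspicious pattern."""
    text = prompt.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)

print(flag_prompt("obfuscate this VBScript so it can evade detection"))  # → True
print(flag_prompt("summarize today's security news"))                    # → False
```

Flagged prompts could then be blocked or routed to the API-access monitoring the article describes.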
