Google Warns of AI-Powered Malware Dynamically Altering Code

  • Google discovered five malware families that use large language models (LLMs) to create or hide malicious code during execution.
  • A North Korean group known as UNC1069 exploited Gemini to collect wallet data and develop phishing scripts.
  • Malware now uses AI models such as Gemini and Qwen2.5-Coder to generate code “just-in-time,” adapting dynamically.
  • Google has disabled related accounts and enhanced security measures to prevent misuse of AI models.

Google has identified a new wave of malware that leverages large language models (LLMs) in real time to generate or modify malicious code. This development marks an advanced stage in how state-linked and criminal entities use artificial intelligence in cyberattacks. The findings were shared in a recent report by the Google Threat Intelligence Group.


At least five distinct malware families actively query external AI models like Gemini and Qwen2.5-Coder during runtime. This technique, called “just-in-time code creation,” enables malware to produce malicious scripts and obfuscate code dynamically, improving evasion from detection systems. Unlike traditional malware with hard-coded logic, these variants outsource portions of their functionality to AI models for continuous adaptation.

Two of the malicious families, PROMPTFLUX and PROMPTSTEAL, illustrate this method clearly. PROMPTFLUX runs a “Thinking Robot” process that calls Gemini’s API hourly to rewrite its own VBScript code. PROMPTSTEAL, associated with Russia’s APT28 group, uses the Qwen2.5-Coder model hosted on Hugging Face to generate Windows commands as needed.
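Conceptually, the “just-in-time” pattern described above means the program ships with no fixed payload: at runtime it asks a model for code, then executes whatever comes back. The harmless sketch below illustrates only the control flow; the model call is stubbed with a local function (in the reported malware, this step would be a network request to a hosted LLM API such as Gemini or Qwen2.5-Coder), and all names here are illustrative, not taken from the actual samples.

```python
def query_model(prompt: str) -> str:
    """Stand-in for a network call to an LLM endpoint.

    Returns a trivial, benign snippet for demonstration; in the
    reported malware this is where the hourly API request happens.
    """
    return "result = sum(range(10))"


def just_in_time_step(task: str) -> int:
    # 1. Ask the model for code tailored to the current task.
    generated = query_model(f"Write code to: {task}")

    # 2. Execute the freshly generated code. Because the code is
    #    produced at runtime rather than hard-coded, each run can
    #    differ, which is what frustrates signature-based detection.
    scope: dict = {}
    exec(generated, scope)
    return scope["result"]


print(just_in_time_step("add the numbers 0 through 9"))  # prints 45
```

The key design point, from a defender’s perspective, is that the malicious logic never exists on disk in stable form: only the prompt and the outbound API traffic are observable, which shifts detection toward network monitoring and API-side abuse controls.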

The research also highlights activity from the North Korean threat group UNC1069, also known as Masan. This group misused Gemini to locate wallet application data, create scripts for accessing encrypted storage, and develop multilingual phishing messages targeting cryptocurrency exchange employees. According to Google, these actions are part of broader efforts to steal digital assets.

In response, Google has disabled accounts tied to these operations and introduced stricter safeguards, including improved prompt filtering and more rigorous monitoring of API access to limit abuse of its AI models. This emerging threat surface presents new challenges, as malware can now remotely query LLMs to generate tailored attacks and steal sensitive information.
