- Google’s threat intelligence team observed the North Korean hacking group UNC2970 using the generative AI model Gemini to profile high-value cybersecurity and defense targets.
- Multiple state-backed threat actors, including clusters from China and Iran, are using AI to automate reconnaissance, code exploits, and craft social engineering campaigns.
- Malicious tools like the HONESTCUE downloader weaponize Gemini’s API to generate and execute malicious C# code directly in memory, leaving minimal forensic traces.
- Attackers are conducting large-scale model extraction attacks, using over 100,000 queries to replicate proprietary AI model behavior, a risk highlighted by security researchers.
Google’s Threat Intelligence Group reported on Thursday that the North Korean UNC2970 hacking group weaponized its Gemini AI for malicious cyber reconnaissance. This state-backed actor used the model to synthesize public intelligence on defense firms and map technical job roles for targeted phishing.
According to the report, this activity blurs the line between professional research and malicious profiling, enabling the group to identify soft targets and craft convincing personas for initial compromise.
Multiple other threat actors have integrated Gemini into their workflows, according to the findings. The Iran-linked APT42 used it to develop a Python-based Google Maps scraper and to research a WinRAR vulnerability.
APT41 and UNC795 also employed the AI to troubleshoot exploit code and develop web shells. The financially motivated UNC5356 cluster was linked to an AI-generated phishing kit called COINBAIT that mimics a cryptocurrency exchange.
Google also identified the HONESTCUE malware, which calls Gemini’s API to generate its second-stage functionality. This fileless payload is compiled and executed directly in memory using .NET’s CSharpCodeProvider class.
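The forensic significance of that pattern is that the payload source never touches disk. As a minimal, benign sketch (a Python analogue of the technique, not HONESTCUE itself, whose real implementation uses .NET), the pattern looks like this: source code arrives at runtime as a string, is compiled in memory, and is executed without writing a file:

```python
# Benign illustration of "fileless" code execution (assumption: this is a
# generic analogue of the pattern described, not the actual malware logic).
# In the reported attack, the source string would come back from an AI API
# call; here it is hard-coded for the sake of a self-contained example.
fetched_source = '''
def stage_two():
    return "payload executed in memory"
'''

namespace = {}
# compile() builds a code object entirely in memory -- no artifact on disk,
# which is what leaves defenders with minimal forensic traces.
code_obj = compile(fetched_source, "<in-memory>", "exec")
exec(code_obj, namespace)

result = namespace["stage_two"]()
print(result)  # the second stage runs without ever existing as a file
```

Defenders consequently have to rely on memory forensics or API-call telemetry rather than file-based indicators to detect this class of loader.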
Meanwhile, the company disrupted model extraction attacks involving over 100,000 prompts aimed at Gemini. A proof-of-concept extraction achieved 80.1% accuracy with just 1,000 queries, demonstrating the threat. Security researcher Farida Shafik noted, “Every query-response pair is a training example for a replica.”
