- Researchers identified the earliest known Malware incorporating Large Language Model (LLM) technology, called MalTerminal.
- MalTerminal can use AI to generate Ransomware or create a reverse shell, but there is no evidence it has been widely used.
- Attackers are now embedding hidden prompts in phishing emails to bypass AI detection and deliver malicious attachments.
- Cybercriminals use AI-driven web tools to host fake CAPTCHA pages, making phishing attacks harder to detect.
- Security companies warn that the use of generative AI is rapidly increasing attack sophistication and scale.
A team at SentinelOne SentinelLABS has found what they call the earliest example of malware with built-in Large Language Model (LLM) features. The malware, known as MalTerminal, was studied by researchers and shared at the LABScon 2025 security conference. The tool uses OpenAI‘s GPT-4 to create ransomware or reverse shell code, techniques often used for controlling infected systems.
The group explained that MalTerminal included a now-deprecated OpenAI API endpoint, meaning it was likely created before November 2023. There is no evidence this malware has been released widely, so it may only be a test example or a tool for Cybersecurity teams. Some related Python scripts can also create ransomware or reverse shells, and a detection tool named FalconShield uses an LLM to check if code is malicious.
SentinelOne said, “The incorporation of LLMs into malware marks a qualitative shift in adversary tradecraft.” With LLMs able to generate new commands while running, defenders face new challenges in stopping attacks.
The report also highlights a new method where criminals hide prompts in phishing emails to fool AI-based email security. These hidden messages are concealed in email attachments using styles like “display:none” or “color:white” so users do not see them. For example, an email may look like a business invoice but contain instructions to trick AI-based systems into thinking it is safe.
When a recipient opens the attachment, an attack can begin by exploiting a known vulnerability called Follina (CVE-2022-30190) to run extra software, disable Microsoft Defender, and keep itself active. This technique, called LLM Poisoning, uses comments in web code to bypass AI scanners.
A new report from Trend Micro shows more social engineering scams since January 2025 using AI-powered Hosting platforms like Lovable, Netlify, and Vercel. These fake sites often show a CAPTCHA page, then redirect users to phishing sites to steal passwords and other information.
According to Trend Micro researchers, “Victims are first shown a CAPTCHA, lowering suspicion, while automated scanners only detect the challenge page, missing the hidden credential-harvesting redirect.” Analysts warn that free and easy-to-use AI platforms are making these attacks cheaper and faster to run than before.
✅ Follow BITNEWSBOT on Telegram, Facebook, LinkedIn, X.com, and Google News for instant updates.
Previous Articles:
- JPMorgan Ups Yuan Forecast as BRICS Push De-Dollarization Efforts
- BitGo Files for US IPO, Targets NYSE Listing Amid Crypto Custody Boom
- LastPass Alerts macOS Users to GitHub Malware Targeting Popular Apps
- Shiba Inu Needs Global Adoption, Burns to Achieve $0.01 Milestone
- Saylor: Bitcoin Stability Draws Institutions, Bores Retail Investors