- Researchers at ESET identified a new AI-powered ransomware called PromptLock.
- PromptLock uses locally run AI models to create malicious scripts that target files across multiple operating systems.
- The malware generates customized ransom notes and uses strong encryption, but currently appears to be a proof-of-concept.
- AI-generated scripts in PromptLock change with each run, making the ransomware hard to detect.
- Ongoing developments show that major AI models and tools remain vulnerable to prompt injection and security bypass attacks.
A new AI-powered ransomware called PromptLock has been discovered by cybersecurity company ESET. Researchers found that PromptLock uses a locally hosted OpenAI model, gpt-oss:20b, accessed through the Ollama API. The malware uses the model to generate harmful scripts in real time on systems running Windows, Linux, or macOS.
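For context, Ollama exposes a simple HTTP API on the local machine, and any client can ask a locally hosted model such as gpt-oss:20b to generate text through it. The sketch below builds (but does not send) such a request; the endpoint is Ollama's documented default, while the prompt text is purely illustrative and not taken from the malware:

```python
import json
import urllib.request

# Ollama's default local generation endpoint
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generation_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (without sending) a request to Ollama's /api/generate endpoint."""
    payload = {
        "model": model,    # e.g. "gpt-oss:20b"
        "prompt": prompt,  # instruction for the model
        "stream": False,   # ask for one complete JSON response
    }
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )

# Example (constructed only, never sent):
# req = build_generation_request("gpt-oss:20b", "Write a Lua script that ...")
```

Because the model runs locally, no traffic ever reaches OpenAI's servers, which is part of what makes this approach hard to monitor from the network side.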
PromptLock operates by scanning the local filesystem, choosing files to target, and then encrypting the selected data. According to ESET, it also generates a custom ransom note for each victim based on the type of machine infected and the files affected. ESET says PromptLock artifacts were uploaded to VirusTotal from the United States on August 25, 2025. The individuals or groups behind the ransomware remain unknown.
“PromptLock uses Lua scripts generated by AI, which means that indicators of compromise (IoCs) may vary between executions,” ESET explained. “This variability introduces challenges for detection. If properly implemented, such an approach could significantly complicate threat identification and make defenders’ tasks more difficult.” The ransomware employs the SPECK 128-bit encryption algorithm to lock files, and could also be used to steal or erase data, although the file-deletion functionality does not yet appear to be fully implemented.
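SPECK is a lightweight ARX (add-rotate-XOR) block cipher designed by the NSA, attractive for compact payloads because it needs only a few lines of code and no lookup tables. ESET does not publish PromptLock's implementation, so the sketch below is an illustration of the cipher itself, assuming the Speck128/128 variant (128-bit block as two 64-bit words, 128-bit key, 32 rounds), not code recovered from the malware:

```python
MASK = (1 << 64) - 1  # Speck128 operates on 64-bit words

def ror(x, r):
    """Rotate a 64-bit word right by r bits."""
    return ((x >> r) | (x << (64 - r))) & MASK

def rol(x, r):
    """Rotate a 64-bit word left by r bits."""
    return ((x << r) | (x >> (64 - r))) & MASK

def expand_key(k0, k1, rounds=32):
    """Speck128/128 key schedule: two 64-bit key words -> round keys."""
    l, keys = k1, [k0]
    for i in range(rounds - 1):
        l = ((ror(l, 8) + keys[i]) & MASK) ^ i
        keys.append(rol(keys[i], 3) ^ l)
    return keys

def encrypt(x, y, round_keys):
    """Encrypt one 128-bit block given as two 64-bit words (x, y)."""
    for k in round_keys:
        x = ((ror(x, 8) + y) & MASK) ^ k
        y = rol(y, 3) ^ x
    return x, y

def decrypt(x, y, round_keys):
    """Invert each round in reverse key order."""
    for k in reversed(round_keys):
        y = ror(y ^ x, 3)
        x = rol(((x ^ k) - y) & MASK, 8)
    return x, y
```

The entire cipher fits in roughly thirty lines, which is precisely why lightweight ARX designs like SPECK keep appearing in malware: they are trivial for an AI model to emit inline in a generated script.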
Rather than downloading the multi-gigabyte model onto each infected machine, the attackers appear to use a tunnel or proxy that routes requests from the compromised system to a remote server running the gpt-oss:20b model behind the Ollama API. ESET assesses that PromptLock is a proof-of-concept rather than fully operational malware deployed in the wild.
Emerging AI threats are increasing in scale and sophistication. Anthropic recently said it banned accounts controlled by two threat actors using its Claude AI chatbot to conduct theft and extortion against at least 17 organizations and to build ransomware with advanced evasion features. The growing trend includes major AI platforms, such as Amazon Q Developer, Anthropic Claude Code, AWS Kiro, Google Jules, Lenovo Lena, Microsoft GitHub Copilot, and others, being susceptible to prompt injection attacks that may allow unauthorized access or data leaks.
“Prompt injection attacks can cause AIs to delete files, steal data, or make financial transactions,” Anthropic said. New research, such as the PROMISQROUTE attack, shows it is possible to bypass AI safety measures using simple phrases like “use compatibility mode” or “fast response needed.” These findings highlight ongoing security risks as AI adoption expands.