Unveiling the Threat of AI-Powered Ransomware: Understanding PromptLock
In a notable development for the cybersecurity landscape, ESET, a leading cybersecurity firm, recently identified a new ransomware strain that uses a generative AI model to build its attack scripts at runtime. The variant, named PromptLock, marks a significant evolution in how cybercriminals operate and has raised alarms across the digital security community.
The Mechanics of PromptLock
PromptLock is written in Go (Golang), a language known for producing efficient, portable binaries. The ransomware calls gpt-oss:20b, the open-weight model OpenAI released earlier this month, through the Ollama API to generate malicious Lua scripts on the fly, producing custom payloads capable of a range of harmful actions.
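At the API level, the pattern ESET describes comes down to a plain HTTP call: the Go binary POSTs a prompt to Ollama's /api/generate endpoint and reads back the generated text, which it can then hand to an embedded Lua interpreter. The minimal sketch below shows that call shape against a default local Ollama install; the endpoint, JSON fields, and model tag follow Ollama's public API, while the harmless prompt and the printing of the reply are illustrative assumptions rather than PromptLock's actual code.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// generateRequest mirrors the fields of Ollama's /api/generate call.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// generateResponse captures only the generated text from the reply.
type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// A default local Ollama install listens on port 11434; any host
	// running the Ollama API could serve the same request.
	endpoint := "http://127.0.0.1:11434/api/generate"

	body, err := json.Marshal(generateRequest{
		Model:  "gpt-oss:20b",
		Prompt: "Write a short Lua script that prints the current date.",
		Stream: false,
	})
	if err != nil {
		panic(err)
	}

	resp, err := http.Post(endpoint, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}

	// The model's reply is plain text; in the pattern ESET attributes to
	// PromptLock it would be fed to a Lua interpreter. Here it is printed.
	fmt.Println(out.Response)
}
```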
ESET highlighted that these Lua scripts are designed not only to scour the local filesystem but also to inspect files, extract sensitive data, and initiate encryption. Because they run across Windows, Linux, and macOS alike, the pool of potential victims is correspondingly broad.
Customizable Ransom Notes
Upon infection, PromptLock generates a tailored ransom note that reflects the files it has affected, a personalized touch that can shape the victim's response and sharpen the ransomware's impact. The identity of the actor behind PromptLock remains unknown, but ESET noted that samples of the malware were uploaded to VirusTotal from the United States on August 25, 2025.
Variability in Detection
One of the most concerning aspects of PromptLock is that it produces different indicators of compromise (IoCs) on each execution: because the Lua payloads are generated afresh by the model every time the malware runs, file hashes and other artifacts vary from one infection to the next. As ESET pointed out, this variability complicates detection, making it harder for cybersecurity professionals to identify and respond to the malware in real time.
Potential for Data Exfiltration
While still assessed as a proof of concept rather than a fully operational malware variant, PromptLock uses the 128-bit SPECK encryption algorithm to lock files, rendering them inaccessible, and carries potential data-exfiltration functionality. Analysis of the ransomware's code also points to a data-destruction capability, although that feature does not appear to be fully implemented yet.
Unlike approaches that would require downloading substantial files, PromptLock relies on proxy tunneling: the infected host connects through a tunnel to a server running the Ollama API, so generation requests travel over that connection while the large AI model itself never has to be transferred to the victim's machine, streamlining the attackers' operations.
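To make that bandwidth point concrete, here is a minimal sketch, assuming only that the compromised host can reach some Ollama server through a tunnel (the host name and prompt below are placeholders): the only thing that crosses the connection is a small JSON request, never the multi-gigabyte model weights.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// "model-host.invalid" stands in for whatever tunneled or remote
	// server actually runs Ollama and holds the gpt-oss:20b weights.
	endpoint := "http://model-host.invalid:11434/api/generate"

	payload, err := json.Marshal(map[string]any{
		"model":  "gpt-oss:20b",
		"prompt": "print('hello')",
		"stream": false,
	})
	if err != nil {
		panic(err)
	}

	// Only this payload -- well under a kilobyte -- would travel to the
	// server; the generated script comes back over the same connection.
	fmt.Printf("POST %s with a %d-byte body\n", endpoint, len(payload))
}
```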
AI’s Role in Cybercrime
The emergence of PromptLock underscores a disconcerting trend: AI is becoming a powerful tool for cybercriminals. It lowers the barrier to entry for actors who lack deep technical expertise but still want to launch sophisticated attacks, and it speeds up the creation of new malware while simplifying the development of phishing schemes and malicious websites.
As further evidence of this trend, Anthropic recently revealed that it had banned accounts linked to two separate threat actors who exploited the Claude AI chatbot, including for large-scale theft of personal data from numerous organizations. These cases highlight how AI tools are being misused for cyber extortion and for building advanced ransomware variants.
Vulnerabilities of Language Models
Alongside the rise of AI-powered malware, prominent language models have been found susceptible to prompt injection attacks, in which crafted inputs manipulate an AI system into performing unintended actions such as data theft or unauthorized financial transactions. These vulnerabilities reflect a broader AI-safety problem that researchers and developers continue to grapple with.
Recent studies have also identified novel forms of prompt injection. One technique, dubbed PROMISQROUTE, exploits AI model routing: simple phrases added to a request can steer it toward older or less secure model versions, bypassing the extensive safety protocols built into the flagship model.
Addressing the Evolving Threat Landscape
As AI technologies continue to evolve, so too do the tactics employed by cybercriminals. The complex landscape of cybersecurity requires ongoing vigilance and adaptation. Organizations must continuously evaluate and enhance their defenses to counteract these modern threats effectively.
The emergence of PromptLock and similar strains is a reminder of the sophisticated methods now at the disposal of malicious actors. As these technologies become more advanced, the need for improved cybersecurity measures grows ever more urgent.


