Resecurity Identifies DIG AI: Uncensored Darknet Assistant Accelerating Criminal Activities and Threats

Published:

spot_img

Resecurity Identifies DIG AI: Uncensored Darknet Assistant Accelerating Criminal Activities and Threats

Resecurity has reported the emergence of uncensored AI assistants on the darknet, which are being exploited by threat actors for malicious purposes. One such tool, DIG AI, was identified on September 29, 2023, and has quickly gained traction within cybercriminal and organized crime networks. The HUNTER team observed a significant uptick in the use of DIG AI by malicious actors during the fourth quarter of 2025, particularly around the Winter Holidays, when illegal activities surged to unprecedented levels. With major global events slated for 2026, including the Winter Olympics in Milan and the FIFA World Cup, the potential for criminal AI to pose new security challenges is considerable, enabling bad actors to scale their operations and circumvent existing content protection measures.

The Rise of “Not Good” AI

The term “Not Good” AI refers to the utilization of artificial intelligence for illegal, unethical, or harmful activities, including cybercrime, extremism, and the dissemination of misinformation. The legality and ethical implications of such tools depend on their design, usage, and the intent of their operators.

Recent data indicates a staggering increase—over 200%—in mentions and usage of malicious AI tools on cybercriminal forums between 2024 and 2025. Tools like FraudGPT and WormGPT have emerged as prominent offerings specifically targeting cybercriminals. The landscape is rapidly evolving, with new jailbroken and customized large language models (LLMs) frequently surfacing, lowering the barriers to entry for cybercrime by automating and enhancing malicious activities.

These tools, often referred to as “dark LLMs” or “jailbroken” AI chatbots, are either developed from scratch or modified versions of legitimate AI models with safety features disabled. DIG AI allows malicious actors to harness AI capabilities to generate instructions ranging from the manufacturing of explosive devices to the creation of illegal content, including child sexual abuse material (CSAM). Hosted on the TOR network, DIG AI remains largely inaccessible to law enforcement, fostering a significant underground market for piracy and other illicit activities.

Despite the alarming rise of such tools, initiatives like AI for Good established by the International Telecommunication Union (ITU) and the United Nations (UN) aim to promote responsible technology use. However, malicious actors are likely to exploit AI for nefarious purposes.

DIG AI’s Capabilities and Access

DIG AI can be accessed without an account through the TOR browser, making it readily available to those seeking to engage in illegal activities. Concerns extend beyond cybercrime, as AI-powered tools like DIG AI may also assist extremist and terrorist organizations.

Analysts have conducted extensive tests on DIG AI, utilizing taxonomy dictionaries related to explosives, drugs, and other restricted areas defined by international legislation. The tool can automate the generation of malicious content, including fraudulent schemes, thereby enabling bad actors to scale their operations efficiently.

DIG AI has demonstrated the ability to generate malicious scripts that can backdoor vulnerable web applications and produce various types of malware. Notably, certain computationally intensive tasks, such as code obfuscation, can take between three to five minutes to complete, indicating limited computing resources. However, this limitation can be mitigated by offering premium services for a fee. This represents a new frontier in the misuse of AI, where bad actors design and maintain custom infrastructures akin to those used for bulletproof hosting, allowing them to effectively scale their operations.

The outputs generated by DIG AI have proven sufficient for executing malicious activities, which could lead to significant technological and financial repercussions.

Criminalization of AI

Tools like DIG AI are engineered to bypass existing content policies and filtering mechanisms in modern AI systems, which are designed as safety measures. These policies aim to protect users and society from illegal applications of AI by censoring specific keywords and language operators that could lead to the generation of harmful or illegal content.

Major platforms, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini/Bard, Microsoft Copilot, and Meta AI, implement content moderation systems that restrict categories such as hate speech, misinformation, and illegal activities. These censorship measures are primarily to comply with laws, protect users, and uphold ethical standards.

Legislators have proposed initiatives like the TAKE IT DOWN Act to ban non-consensual AI-generated intimate images and are working on regulations concerning AI-generated child abuse material. The focus is on human accountability for AI’s wrongful acts and intent, with agencies like the FBI striving to combat AI-driven crime while regulating AI’s internal use within the justice system. However, these laws do not extend to content and services operating on the Dark Web.

AI-Generated CSAM

Generative AI technologies, including diffusion models and text-to-image systems, are increasingly misused to create illegal child sexual abuse material (CSAM). Offenders exploit vulnerabilities in these systems to generate, manipulate, and distribute highly realistic synthetic CSAM, posing significant challenges for detection and law enforcement.

Resecurity has confirmed that DIG AI can facilitate the production of CSAM content, enabling the creation of hyper-realistic images or videos of children. This capability raises new challenges for legislators in their efforts to combat the production and distribution of CSAM.

The team has collaborated with law enforcement authorities to gather evidence of bad actors using DIG AI to produce highly realistic CSAM content, which is sometimes labeled as “synthetic” but is still considered illegal.

In 2024, a U.S. child psychiatrist was convicted for producing and distributing AI-generated CSAM by digitally altering images of real minors. The images were so realistic that they met the U.S. federal threshold for CSAM. Reports indicate a sharp increase in AI-generated CSAM, involving both adults and minors, including instances of classmates creating deepfake nudes for bullying or extortion. The EU, UK, and Australia have enacted laws criminalizing AI-generated CSAM, regardless of whether real children are depicted.

New Security Challenges Ahead

Bad actors are already exploiting AI systems through specially crafted prompts to bypass built-in safety protocols, resulting in the generation of prohibited content. Tools like DreamBooth and LoRA enable offenders to adapt open-source LLMs for generating targeted CSAM. This situation is creating new business models for criminals, allowing them to optimize costs and develop an underground economy centered around synthetic illegal content.

Resecurity anticipates that bad actors will manipulate datasets, such as contaminated training data, allowing models to learn and reproduce illegal outputs. These models can be run on personal infrastructures or hosted on the Dark Web, producing unlimited illegal content that online platforms may struggle to detect. Open-source models are particularly vulnerable, as safety filters can be removed or bypassed.

The Internet community is expected to face significant security challenges enabled by AI in the coming years. Criminal and weaponized AI will likely transform traditional threats and create new risks at an unprecedented pace. Cybersecurity and law enforcement professionals must remain vigilant in addressing these emerging threats.

As reported by www.resecurity.com.

spot_img

Related articles

Recent articles

Green SM Strengthens Partnership with BCA Through IDR 600 Billion Investment Loan Agreement

Green SM Strengthens Partnership with BCA Through IDR 600 Billion Investment Loan Agreement JAKARTA, INDONESIA – Green SM Indonesia and Bank Central Asia (BCA) have...

Exabeam Strengthens AI Threat Detection Amid Surge in Cyberattacks in UAE

Exabeam Strengthens AI Threat Detection Amid Surge in Cyberattacks in UAE Exabeam has announced a substantial enhancement of its Agent Behaviour Analytics (ABA) platform, a...

Delhi Police Demands X Account Details in FIR Against FSSAI Director Sweety Behera Amid Defamation Allegations

Delhi Police Demands X Account Details in FIR Against FSSAI Director Sweety Behera Amid Defamation Allegations NEW DELHI — The Delhi Police has reportedly issued...

BeyondTrust Advances Unified Privileged Identity Solution for AI Agent Security

BeyondTrust Advances Unified Privileged Identity Solution for AI Agent Security In a significant move for cybersecurity, BeyondTrust has unveiled enhanced capabilities within its Pathfinder Platform,...