In an alarming development, a new AI-driven tool called DIG AI is making waves among cybercriminals on the Dark Web. Researchers are voicing concerns that this uncensored AI assistant could dramatically intensify illicit activities as we approach 2026.
DIG AI enables even those with minimal technical expertise to create malware, scams, and other prohibited content. The situation highlights an unsettling trend: artificial intelligence is increasingly being repurposed for criminal ends, creating challenges that may outpace existing safeguards. Resecurity’s researchers have emphasized that this evolution of AI poses significant threats, allowing malicious actors to streamline their operations while evading conventional content moderation safeguards.
The Surge of Criminal AI
DIG AI epitomizes a new wave of tools engineered for criminal use, built to circumvent the content moderation systems integral to platforms like ChatGPT and Claude. Its swift adoption signals a likely surge in cybercrime, fraud, and extremist activity, one that often outpaces the evolution of defensive measures.
Resecurity first identified DIG AI in September 2025 and noted a substantial uptick in usage by the end of the year, particularly around the winter holidays. That timing corresponds with the seasonal spike in global illicit activity, and it comes just as organizations gear up for prominent 2026 events such as the Milan-Cortina Winter Olympics and the FIFA World Cup, both of which typically heighten security risks.
A Dark Web AI Crafted for Malice
What sets DIG AI apart from legitimate AI services is its lack of access barriers: it requires no user registration and can be reached easily through the Tor browser. Resecurity found that the tool can produce content spanning a range of illegal activities, from fraud schemes and malware creation to extremist propaganda.
This AI can generate functional malicious scripts capable of infiltrating vulnerable web applications and automating scams. By utilizing external APIs, cybercriminals can efficiently scale their operations—lowering costs while boosting their output of illicit content. Though some complex tasks, such as code obfuscation, may require considerable processing time, attackers can quickly bypass these limitations through premium service upgrades.
The operator of DIG AI, going by the alias “Pitch,” claims the service operates on ChatGPT Turbo with all safety protocols stripped away. Attention-grabbing advertisements for DIG AI have surfaced in underground marketplaces linked to drug trafficking and stolen financial information, underlining its allure for organized crime rings.
The Troubling Role of Criminal AI in CSAM
A particularly disturbing aspect of DIG AI involves its potential to facilitate the creation of AI-generated child sexual abuse material (CSAM). Utilizing advanced generative AI technologies—like diffusion models and generative adversarial networks (GANs)—criminals can produce highly realistic synthetic images that may meet legal thresholds for CSAM, even if entirely fabricated.
Resecurity has confirmed that DIG AI can assist in crafting or altering explicit content involving minors. Law enforcement agencies are already reporting a surge in cases involving AI-generated CSAM, including manipulated images of real children and synthetic content used for extortion. In response, multiple jurisdictions, including the EU, UK, and Australia, have enacted strict laws against AI-generated CSAM, though enforcement remains difficult given the anonymity these tools enjoy on the Dark Web.
The Governance Gap in the Dark Web
AI providers in the mainstream sector typically implement content moderation systems to prevent harmful outputs, guided by legal obligations and ethical standards. Unfortunately, these protective measures often fall short against Dark Web services like DIG AI, which operate outside established legal frameworks.
As criminals become adept at fine-tuning open-source AI models and removing safety filters, they create a thriving underground economy revolving around “AI-as-a-service” for criminal purposes. This emerging market mirrors legitimate business models but poses significantly higher risks to society.
Mitigating Risks from AI-Driven Threats
Organizations can implement several strategies to bolster their defenses against AI-powered threats:
- Enhance detection and monitoring for AI-assisted phishing, malware, and fraudulent activities across various platforms.
- Broaden threat intelligence programs to include insights from Dark Web marketplaces and early indicators of AI-driven targeting.
- Strengthen identity and access controls by enforcing phishing-resistant multi-factor authentication and least-privilege access policies.
- Conduct training sessions for employees, focusing on recognizing AI-generated scams and deepfake impersonations.
- Sharpen incident response readiness by integrating AI-enabled attack scenarios into training exercises and operational protocols.
- Reduce attack surfaces through network segmentation and proactive measures protecting public-facing assets.
These collective actions allow organizations to fortify their security posture against the multifaceted challenges posed by AI-enabled threats.
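To make the first item on that list concrete, here is a minimal, hedged sketch of the kind of heuristic triage a monitoring pipeline might layer on top of existing email security to flag AI-assisted phishing at scale. All phrase lists, weights, and the threshold idea are hypothetical illustrations for this example, not vetted detection indicators; production detection relies on dedicated email security platforms and trained models.

```python
# Illustrative sketch only: a toy heuristic scorer for triaging inbound email
# that may have been generated or scaled with AI tooling. The phrase lists,
# weights, and scoring scheme below are hypothetical placeholders, not vetted
# detection logic.
import re

URGENCY_PHRASES = ["act now", "account suspended", "verify immediately",
                   "unusual sign-in", "payment failed"]
CREDENTIAL_LURES = ["confirm your password", "update billing details",
                    "re-enter your credentials"]
SHORTENER_PATTERN = re.compile(r"https?://(bit\.ly|tinyurl\.com|t\.co)/\S+")

def score_message(subject: str, body: str, sender_domain: str,
                  trusted_domains: set[str]) -> int:
    """Return a rough risk score; higher means more suspicious."""
    text = f"{subject} {body}".lower()
    score = 0
    score += 2 * sum(phrase in text for phrase in URGENCY_PHRASES)
    score += 3 * sum(phrase in text for phrase in CREDENTIAL_LURES)
    if SHORTENER_PATTERN.search(text):
        score += 2                      # shortened links hide destinations
    if sender_domain not in trusted_domains:
        score += 1                      # unknown sender adds mild suspicion
    return score

if __name__ == "__main__":
    risk = score_message(
        subject="Account suspended - verify immediately",
        body="Confirm your password at https://bit.ly/xyz within 24 hours.",
        sender_domain="example-support.net",
        trusted_domains={"example.com"},
    )
    print(f"risk score: {risk}")  # route for review above a tuned threshold
```

Simple keyword scoring like this is easy to evade on its own; in practice it serves only as one cheap signal feeding broader detection and threat-intelligence workflows.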
AI’s Transformation of Cyber Threats
DIG AI marks a significant shift in how weaponized technologies are reshaping the cyber threat landscape. As criminal actors increasingly adopt autonomous AI systems, security teams confront threats that operate at a scale and efficiency beyond traditional human-driven attacks.
As the evolution of AI-enabled risks progresses, organizations are more frequently looking towards zero-trust principles as a foundational strategy to minimize risks and contain potential impacts.
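As a rough illustration of that principle, the sketch below shows a deny-by-default access check evaluated on every request, regardless of network location. The field names, roles, resource tiers, and policy rules are all hypothetical; real zero-trust deployments rest on dedicated identity, device-posture, and policy-engine platforms rather than hand-rolled checks.

```python
# Minimal sketch of a zero-trust-style authorization gate: deny by default
# and evaluate identity, device posture, and resource sensitivity on every
# request. All field names and policy rules here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str          # e.g. "analyst", "admin"
    mfa_verified: bool      # phishing-resistant MFA completed
    device_compliant: bool  # endpoint meets posture policy
    resource_tier: str      # "public", "internal", or "restricted"

def authorize(req: AccessRequest) -> bool:
    """Grant access only when every check passes; deny otherwise."""
    if not (req.mfa_verified and req.device_compliant):
        return False                      # baseline checks apply to everything
    if req.resource_tier == "restricted":
        return req.user_role == "admin"   # least privilege for sensitive data
    return req.resource_tier in {"public", "internal"}

if __name__ == "__main__":
    req = AccessRequest("analyst", mfa_verified=True,
                        device_compliant=True, resource_tier="internal")
    print("allowed" if authorize(req) else "denied")  # -> allowed
```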