AI for Secure India: Navigating Cyber Threats in the Digital Age
NEW DELHI — On February 17, a remarkable session titled “AI for Secure India: Combating AI-Enabled Cybercrime, Deepfakes, Darkweb Threats & Data Breaches” brought together a packed audience at Bharat Mandapam’s L1 Meeting Room No. 15. Hosted by the Future Crime Research Foundation (FCRF) during the India AI Impact Summit, the discussion aimed to address the intersection of artificial intelligence and cybersecurity.
Expert Panel Insights
The panel featured an impressive lineup of experts: Prof. Triveni Singh, a retired IPS officer and Chief Mentor at FCRF; Rakesh Maheshwari, a specialist in cyber law and data governance; Dr. Sapna Bansal from Shri Ram College of Commerce at the University of Delhi; Tarun Wig, Co-Founder and CEO of Innefu Labs; and Senior Advocate Vivek Sood, serving at the Supreme Court of India. The session was skillfully moderated by Navneethan M., Senior Vice President and Chief Information Security Officer.
Setting a compelling tone from the outset, the discussion opened with a provocative question: Is the integration of AI in cybersecurity merely a marketing gimmick, or is it essential?
The Necessity of AI in Cyber Defense
The conversation quickly turned to the role of AI in reshaping the cybersecurity landscape. The consensus was that artificial intelligence is no longer an optional enhancement in cyber defense. Instead, it has evolved into a fundamental component of both offense and defense in today’s digital warfare.
Machine-Speed Attacks
As the dialogue progressed, the focus shifted to the disturbing speed at which cyber attackers operate. The advent of AI has equipped adversaries with tools to execute automated phishing schemes that deliver hyper-personalized attacks at scale. Malware can now evolve rapidly, while social engineering tactics have grown significantly more advanced and harder to detect in real time.
Against this backdrop, a critical question emerged: Are we witnessing an actual rise in cyber incidents, or are our detection methods merely improving? This led to an intriguing paradox — while AI boosts defenders’ ability to detect threats and analyze vast amounts of data almost instantaneously, it simultaneously arms attackers with the means to conduct rapid reconnaissance and exploit vulnerabilities more efficiently. The timeline of a breach has compressed dramatically: what previously took weeks to unfold can now happen in hours.
Traditional defenses, including static passwords and manual audits, are proving inadequate against sophisticated adversaries who utilize adaptive AI. This reality makes it clear that our defensive mechanisms must evolve in tandem with the threats they are designed to neutralize.
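The panel’s point about machine-speed attacks outpacing manual audits can be made concrete with a toy detector. The sketch below is purely illustrative — the log format, window size, and threshold are assumptions, not anything the panel specified — but it shows the basic idea of flagging activity that moves faster than any human operator could:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_machine_speed_logins(events, window_seconds=10, threshold=5):
    """Flag source IPs attempting logins faster than a human plausibly could.

    `events` is a time-sorted list of (timestamp, source_ip) tuples.
    A source exceeding `threshold` attempts within a sliding window of
    `window_seconds` is flagged — a crude stand-in for the adaptive
    models the panel described.
    """
    recent = defaultdict(list)  # source_ip -> timestamps in current window
    flagged = set()
    for ts, ip in events:
        recent[ip].append(ts)
        # drop attempts that have fallen out of the sliding window
        cutoff = ts - timedelta(seconds=window_seconds)
        recent[ip] = [t for t in recent[ip] if t >= cutoff]
        if len(recent[ip]) >= threshold:
            flagged.add(ip)
    return flagged

# Example: one bot hammering logins, one human retrying twice
base = datetime(2026, 2, 17, 9, 0, 0)
events = [(base + timedelta(seconds=i), "203.0.113.7") for i in range(6)]
events += [(base, "198.51.100.2"),
           (base + timedelta(seconds=30), "198.51.100.2")]
events.sort()
print(flag_machine_speed_logins(events))  # → {'203.0.113.7'}
```

A static rule like this is exactly what adaptive attackers learn to evade, which is the panel’s argument for defenses that evolve alongside the threat rather than fixed thresholds.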
The Challenge of Deepfakes and Trust Erosion
The session took a fascinating turn as it addressed the rise of synthetic media and its potential to undermine digital trust. With the capability to fabricate convincing videos and audio, a critical question arose: how can organizations authenticate the veracity of content? Are deepfakes merely a financial risk, or do they extend to national security concerns?
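One answer to the panel’s authentication question is content provenance: publish a cryptographic fingerprint of the original media so that any altered copy fails verification. The minimal sketch below uses a plain SHA-256 hash — it detects tampering with known content, though it cannot identify a deepfake fabricated from scratch, which is where provenance standards such as C2PA come in. The byte strings here are placeholders, not real media:

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Return a SHA-256 fingerprint of a media file's raw bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

def verify(media_bytes: bytes, published_fingerprint: str) -> bool:
    """True only if the media is bit-identical to what was fingerprinted."""
    return fingerprint(media_bytes) == published_fingerprint

original = b"\x00\x01 raw video bytes"
record = fingerprint(original)  # publisher releases this alongside the clip

print(verify(original, record))                # → True
print(verify(original + b"tamper", record))    # → False
```

In practice the fingerprint itself must come from a trusted channel (a signed manifest, a registry), otherwise an attacker simply publishes a fingerprint of the forged clip.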
The panel also discussed the booming AI-infused dark web marketplaces that offer various forms of crime-as-a-service, allowing novices to engage in cybercrime. These marketplaces now provide easy access to phishing kits, synthetic identity generators, and tools for producing deepfakes, effectively lowering the barrier to entry for would-be cybercriminals while amplifying the potential for damage.
The conversation also touched on data governance, which is closely tied to the discussion on cybersecurity. Weak internal controls, fragmented compliance protocols, and delayed reporting can aggravate the fallout from AI-driven cyber incidents. The emphasis was that insufficient data management may present as significant a vulnerability as external attacks.
In examining the existing legal frameworks, the panel highlighted the challenges posed by AI-generated content in terms of accountability and the difficulty of proving authenticity in legal settings. This raised concerns about whether current laws are sufficient to tackle crimes where verification itself is in question.
The Path to Sovereign AI Security
As the session drew to a close, the discussion expanded from enterprise-level cybersecurity to the national perspective. A provocative question emerged: should AI security be regarded as critical national infrastructure? With AI systems increasingly woven into vital sectors—banking, healthcare, governance, and defense—the responsibility of securing these systems appears to shift toward a national obligation rather than a solely private sector issue.
The dialogue eventually pivoted toward potential solutions for the future. Can AI be effectively harnessed to track down criminals? Can AI-driven decisions be trusted in critical moments? What urgent actions should be prioritized to enhance India’s AI resilience in the coming year?
A common thread among the responses was clear: security measures must be embedded into AI systems from their inception. In an era defined by machine-speed threats, reactive responses will not suffice.
Curated by FCRF as a Knowledge Partner, the session offered a necessary counterpoint to the prevailing excitement around advancements in artificial intelligence. While many discussions focus on the capabilities of AI, this panel drew attention to the imperative of governance, accountability, and resilience in our cybersecurity frameworks.
Inside Room No. 15, it became evident that in the world of synthetic realities, the balance between innovation and trust is essential. As India accelerates into an AI-driven future, earning and maintaining that trust may prove to be its most critical infrastructure.