Preparing for AI Disruption: Insights from Fortinet’s CISO Predictions
In the rapidly evolving landscape of cybersecurity, the role of the Chief Information Security Officer (CISO) is becoming increasingly complex. Fortinet’s CISO, Carl Windsor, recently shared his predictions for 2026, emphasizing that AI is no longer just a technological tool but a high-risk capability that demands robust governance. As companies face increasing AI-fueled disruption, rethinking business resilience and response strategies is essential.
Rethinking Business Resilience in an AI World
Windsor’s insights underscore the urgency for CISOs to redefine their organization’s Minimum Viable Business (MVB). This involves a critical evaluation of AI-dependent systems to determine which are essential for operations. Key questions arise: Which automated decisions must be paused in the event of an incident? How do we proceed if a specific AI model or dataset becomes untrustworthy?
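To make that evaluation concrete, some teams keep a lightweight registry of AI-dependent systems alongside their continuity plans, recording who owns each system, which decisions it makes automatically, and how work continues if it is paused. The Python sketch below is a minimal illustration of that idea; the system names, criticality tiers, and fallbacks are hypothetical and are not drawn from Fortinet’s predictions.

```python
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    ESSENTIAL = "essential"      # part of the Minimum Viable Business
    IMPORTANT = "important"      # degraded operation is acceptable
    DEFERRABLE = "deferrable"    # can be paused with little impact

@dataclass
class AIDependentSystem:
    name: str
    owner: str                       # accountable team or individual
    criticality: Criticality
    automated_decisions: list[str]   # decisions made without a human in the loop
    manual_fallback: str             # how work continues if the system is paused

# Hypothetical entries for illustration only.
REGISTRY = [
    AIDependentSystem(
        name="fraud-scoring-model",
        owner="payments-risk",
        criticality=Criticality.ESSENTIAL,
        automated_decisions=["block transaction", "flag account"],
        manual_fallback="route flagged transactions to the fraud review queue",
    ),
    AIDependentSystem(
        name="marketing-copy-assistant",
        owner="growth",
        criticality=Criticality.DEFERRABLE,
        automated_decisions=["publish ad copy variants"],
        manual_fallback="pause campaign automation and publish manually",
    ),
]

def decisions_to_pause(registry: list[AIDependentSystem]) -> list[str]:
    """List automated decisions to halt first in an incident:
    anything a non-essential system decides without human review."""
    return [
        f"{system.name}: {decision}"
        for system in registry
        if system.criticality is not Criticality.ESSENTIAL
        for decision in system.automated_decisions
    ]

if __name__ == "__main__":
    for item in decisions_to_pause(REGISTRY):
        print(item)
```

Even this crude inventory forces the two questions above to be answered per system, rather than in the middle of an incident.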
In 2026, resilience requires a comprehensive understanding of AI’s role in amplifying failures. Traditional continuity plans often overlook how AI behaves under stress, so organizations must adapt them. Running tabletop exercises that include AI failure scenarios and corrupted data pipelines provides invaluable insight, and these drills must prepare teams for the rapid human intervention required when autonomous systems fail.
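One scenario such a drill might exercise is a corrupted data pipeline feeding an automated decision. The sketch below shows one possible integrity gate, with hypothetical field names and thresholds, that pauses automation and hands control to humans when incoming data no longer looks trustworthy.

```python
# Hypothetical schema and threshold, chosen only to illustrate the drill scenario.
EXPECTED_SCHEMA = {"transaction_id", "amount", "currency", "customer_id"}
MAX_NULL_RATIO = 0.05  # stop automation if more than 5% of records are incomplete

def pipeline_is_trustworthy(records: list[dict]) -> bool:
    """Crude integrity gate for a scoring pipeline: verify the schema and the
    share of incomplete records before allowing automated decisions to proceed."""
    if not records:
        return False
    schema_ok = all(EXPECTED_SCHEMA <= record.keys() for record in records)
    null_ratio = sum(
        any(record.get(field) in (None, "") for field in EXPECTED_SCHEMA)
        for record in records
    ) / len(records)
    return schema_ok and null_ratio <= MAX_NULL_RATIO

def handle_batch(records: list[dict]) -> str:
    if pipeline_is_trustworthy(records):
        return "automated scoring continues"
    # The tabletop scenario: data looks corrupted, so fail over to humans.
    return "pause automated decisions and page the on-call analyst"

if __name__ == "__main__":
    corrupted = [{"transaction_id": "t-1", "amount": None, "currency": "", "customer_id": ""}]
    print(handle_batch(corrupted))
```

The value of the drill is less in the check itself than in rehearsing who gets paged and which decisions stop when the check fails.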
AI: A High-Risk Capability
As AI systems are woven into more business operations, the associated risks grow with them. Many departments, such as marketing and development, use AI tools that operate outside the purview of traditional security controls. This invites potential threats: AI can inadvertently leak sensitive information, be sabotaged through adversarial inputs, or behave unsafely through prompt manipulation.
In light of these vulnerabilities, CISOs must treat AI as a high-risk capability. Governance should define ownership, enforce stringent access controls, secure training data, and rigorously monitor AI behavior in production. Under proper management, AI can significantly bolster resilience by improving detection and response times; without governance, it amplifies risk.
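As a rough illustration of what ownership, access control, and monitoring can look like in practice, the sketch below wraps an AI-backed function with a role check and an audit log. The tool name, roles, and logging setup are assumptions made for the example, not a prescribed implementation.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-governance")

# Hypothetical role mapping; a real deployment would source this from the IAM system.
ALLOWED_ROLES = {"summarize_ticket": {"support-agent", "support-automation"}}

def governed(tool_name: str):
    """Wrap an AI-backed function with a basic governance check:
    verify the caller's role and record every invocation for later review."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, caller_role: str, **kwargs):
            if caller_role not in ALLOWED_ROLES.get(tool_name, set()):
                log.warning("blocked %s call from role %s", tool_name, caller_role)
                raise PermissionError(f"{caller_role} may not invoke {tool_name}")
            log.info("allowed %s call from role %s", tool_name, caller_role)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@governed("summarize_ticket")
def summarize_ticket(ticket_text: str) -> str:
    # Placeholder for the actual model call.
    return ticket_text[:100]

if __name__ == "__main__":
    print(summarize_ticket("Customer reports login failures since Monday.",
                           caller_role="support-agent"))
```

The point is simply that every AI invocation has a named owner, a checked permission, and a log line someone can review.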
Enhancing Identity Security Across the Board
Identity management sits at the heart of modern cybersecurity strategy, especially as AI adds complexity. The 2026 CISO predictions highlight the growing threat posed by non-human identities, such as machines and AI agents, which now account for a significant share of the identities within organizations. A single compromised identity can trigger cascading failures across systems.
CISOs must ensure that identity controls remain consistent for users, machines, APIs, and AI agents alike. This includes employing continuous verification and implementing least-privilege access policies. As organizations continue to adopt automation, the approach to identity governance must evolve to accommodate speed and scale.
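One way to apply least privilege and continuous verification to non-human identities is to issue short-lived, narrowly scoped credentials and re-check them on every request rather than trusting a session once at login. The sketch below is an illustration under those assumptions; the TTL, scopes, and agent name are invented for the example.

```python
import secrets
import time
from dataclasses import dataclass

TOKEN_TTL_SECONDS = 900  # hypothetical lifetime: short-lived, 15-minute credentials

@dataclass
class ServiceToken:
    subject: str            # the non-human identity (service, agent, pipeline)
    scopes: frozenset[str]  # the narrow set of actions it may perform
    issued_at: float
    value: str

def issue_token(subject: str, scopes: set[str]) -> ServiceToken:
    """Mint a short-lived, narrowly scoped credential for a machine or AI agent."""
    return ServiceToken(
        subject=subject,
        scopes=frozenset(scopes),
        issued_at=time.time(),
        value=secrets.token_urlsafe(32),
    )

def verify(token: ServiceToken, required_scope: str) -> bool:
    """Continuous verification: re-check expiry and scope on every request."""
    not_expired = time.time() - token.issued_at < TOKEN_TTL_SECONDS
    return not_expired and required_scope in token.scopes

if __name__ == "__main__":
    agent_token = issue_token("invoice-triage-agent", {"read:invoices"})
    print(verify(agent_token, "read:invoices"))   # True within the TTL
    print(verify(agent_token, "write:payments"))  # False: least privilege holds
```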
Collaborative Efforts Are Key
As AI blurs the lines between departments, collaboration becomes paramount. Decisions once made by individuals are now shared among systems, teams, and automated workflows, which can hinder incident response if responsibilities aren’t clearly defined.
Building resilience cannot happen in a vacuum. It requires aligning security, IT, data science, and legal teams around a shared understanding of AI risk. External collaboration with partners, peers, and public organizations also becomes more crucial as AI threats grow on a global scale.
Embracing Continuous Adaptation
AI accelerates disruption at unprecedented rates. Malicious actors adapt swiftly, regulatory requirements change rapidly, and mistakes can cascade quickly through systems. Organizations need to adopt a mindset geared toward understanding and preparing for AI-accelerated disruptions.
This involves committing to ongoing testing, regularly reassessing how AI is employed, and fostering rapid feedback loops between security and business teams. Organizations that treat adaptation as an essential discipline—rather than a one-off evaluation—will be better positioned to navigate the challenges that lie ahead.
The Evolving Role of the CISO
The responsibilities of the CISO are expanding faster than ever. To be effective in 2026, leaders must recognize AI not only as a technological advancement but also as a transformative force shaping governance, risk management, and business continuity strategies.
Resilience in this new era will belong to those who stay ahead of AI-driven disruptions, rigorously test existing policies, and ensure seamless operational capabilities despite failures in automated systems. This proactive approach will define the future success of organizations navigating the complexities of AI in cybersecurity.