Cybersecurity Must Rethink Defense Strategies as Autonomous Agents Emerge in 2026
In March 2026, San Francisco once again took center stage in the cybersecurity landscape as the RSA Conference attracted thousands of industry professionals, vendors, and investors to the Moscone Center. The prevailing theme throughout the event was “Agentic AI,” emphasizing a paradigm shift where artificial intelligence is not merely a tool but an autonomous actor capable of making decisions and taking actions independently.
This evolution in AI technology encompasses a range of capabilities, from autonomous code generation to decision-making systems that operate without human oversight. A notable development in this arena is Mythos, a next-generation AI framework designed to orchestrate complex, multi-step cyber operations. This innovation underscores both the potential benefits and inherent risks associated with the rise of agentic AI.
The Cloud Security Alliance has issued warnings about an anticipated increase in simultaneous AI-powered attacks, urging cybersecurity defenders to leverage AI in their countermeasures. In response, OpenAI has expanded its Trusted Access for Cyber program, aiming to support thousands of verified defenders and numerous security teams. Gartner’s projections further highlight this trend, forecasting a staggering 44% growth in AI spending in 2026, with total investments reaching $47 trillion by 2029. This figure dwarfs the anticipated $238 billion allocated for information security and risk management solutions in the same year.
The Dual-Use Reality of Agentic AI
The emergence of technologies like Mythos reveals a critical truth: the capabilities that empower defenders can also be exploited by attackers. Adversaries are increasingly utilizing AI for various malicious activities, including:
- Autonomous reconnaissance and lateral movement
- Real-time adaptation to defensive measures
- Scalable, low-cost attacks requiring minimal human intervention
These developments are not hypothetical. Early rogue AI agents are actively probing systems, exploiting vulnerabilities, and mimicking legitimate user behaviors. Attackers no longer need to control every aspect of an operation; they can deploy agents that function as digital identities.
Historically, significant shifts in cybersecurity have led to an influx of point solutions, resulting in tool sprawl, siloed visibility, and operational complexity. These gaps often create opportunities for attackers. The risks associated with agentic AI are following a similar trajectory, with early indicators already apparent:
- AI security posture management tools
- AI runtime protection platforms
- AI-specific anomaly detection engines
- AI governance solutions
While each of these tools may offer value, piling on yet more point solutions increases operational friction. Organizations should focus less on adding dashboards and more on gaining context and control over the entities operating in their environments, whether human or machine.
During the AGC Cybersecurity Investor Conference, industry experts reached a consensus: organizations should treat AI as an identity. This perspective transcends the hype surrounding AI, positioning it within the established and critical domain of identity security rather than as a separate tool category requiring distinct security stacks.
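In practice, "AI as an identity" means an agent is provisioned, scoped, and expired like any other principal in the directory. A minimal sketch of that idea in Python (the class, field names, and example scopes are illustrative assumptions, not drawn from any vendor product):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentIdentity:
    """An AI agent registered as a first-class identity, like a human user."""
    agent_id: str
    owner: str                # accountable human or team
    scopes: frozenset[str]    # least-privilege permission grants
    expires_at: datetime      # expiry prevents orphaned, unmanaged agents

    def is_active(self) -> bool:
        return datetime.now(timezone.utc) < self.expires_at

    def authorize(self, scope: str) -> bool:
        """Grant access only if the agent is unexpired and the scope was issued."""
        return self.is_active() and scope in self.scopes

# Hypothetical triage bot, scoped to read and comment on tickets for 30 days.
agent = AgentIdentity(
    agent_id="svc-triage-bot",
    owner="secops-team",
    scopes=frozenset({"tickets:read", "tickets:comment"}),
    expires_at=datetime.now(timezone.utc) + timedelta(days=30),
)
print(agent.authorize("tickets:read"))    # in scope and unexpired -> True
print(agent.authorize("tickets:delete"))  # never granted -> False
```

The key design point is that nothing here is AI-specific: the same owner, scope, and expiry fields already govern service accounts, which is precisely the argument for folding agents into the existing identity layer.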
Identity Threat Detection as the Foundation
When AI is conceptualized as an identity, identity threat detection and risk mitigation solutions emerge as the logical control plane. This approach emphasizes the analysis of behavior across credentials and systems, integrating adaptive verification, behavioral analytics, device intelligence, and risk scoring into a cohesive platform.
Applied to AI, this framework facilitates:
- Behavioral visibility to identify anomalies such as unusual access patterns, privilege escalation, or data exfiltration
- Risk-based controls to modify access, enforce additional verification, or isolate suspicious agents
- Unified policy enforcement across both human and machine identities
- Lifecycle management to prevent orphaned or unmanaged agents
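The capabilities above can be sketched as a single risk-scoring loop: behavioral signals accumulate into a score, and the score selects a graduated control. The signal names, weights, and thresholds below are illustrative assumptions, not taken from any specific platform:

```python
# Illustrative weights for behavioral signals observed on a machine identity.
SIGNAL_WEIGHTS = {
    "unusual_access_pattern": 30,  # e.g., new geography or odd hours
    "privilege_escalation": 50,    # requesting scopes beyond its grant
    "bulk_data_read": 40,          # possible exfiltration precursor
}

def risk_score(signals: list[str]) -> int:
    """Sum the weights of observed signals; unknown signals score zero."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def control_action(score: int) -> str:
    """Map a risk score to a graduated, risk-based control."""
    if score >= 70:
        return "isolate"   # quarantine the agent and revoke its tokens
    if score >= 40:
        return "step_up"   # require re-verification by the agent's owner
    return "allow"

observed = ["unusual_access_pattern", "bulk_data_read"]
print(control_action(risk_score(observed)))  # 30 + 40 = 70 -> "isolate"
```

Because the same loop applies to a human account, a service account, or an AI agent, the enforcement point stays unified rather than becoming another silo.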
As rogue AI agents—whether compromised or malicious—become more prevalent, identity-driven security offers a practical defense mechanism. This approach enforces the principle of least privilege, continuously validates access, detects abnormal behavior, and automates response actions. These capabilities are already integrated into modern identity security frameworks and can be extended to encompass AI without creating additional silos.
The Future of Cybersecurity
The discussions in San Francisco this March underscored a pivotal reality: the future of cybersecurity will be shaped by entities capable of independent action. Some of these entities will be human, while many will not.
As technologies like Mythos continue to redefine the capabilities of AI, the cybersecurity industry must adapt its defensive strategies accordingly. The most effective approach may be straightforward: if an entity can act autonomously, it should be treated as an identity.
By anchoring AI security within identity threat detection and risk mitigation frameworks, organizations can safeguard against rogue agents without adding yet another fragmented tool to their already complex defense arsenal.
For further insights on the evolving landscape of cybersecurity, visit SecurityWeek.