The Rising Tide of Generative AI: Navigating New Frontiers in Enterprise Security
As organizations rapidly incorporate generative AI (genAI) into their operations, the landscape of enterprise technology is evolving at an unprecedented pace. Recent findings from Netskope reveal a 50% surge in enterprise use of genAI platforms in the three months leading up to May 2025. While this signifies a leap in innovative capability, it also heightens the risks associated with shadow AI: unsanctioned applications adopted by employees, which now account for over half of current app usage within organizations.
Shadow AI: The Double-Edged Sword
Netskope’s latest Cloud and Threat Report delves deep into this duality inherent in the rapid integration of genAI platforms. These platforms, with their user-friendly and flexible nature, have become the fastest-growing segment of shadow AI, making it imperative for organizations to identify who is creating and deploying AI applications within their ecosystems. This is not merely a technological challenge; it is a fundamental shift in how organizations will manage innovation and security.
“The rapid growth of shadow AI places the onus on organizations to identify who is creating new AI apps and AI agents,” explains Ray Canzanese, the Director of Netskope Threat Labs. His insights underscore a critical concern: while security teams must safeguard innovation, they also need effective strategies to prevent the potential breaches that shadow AI can introduce.
The Rise of Generative AI Platforms
In the evolving enterprise landscape, genAI platforms are not just tools; they serve as foundational infrastructures for custom AI applications. As organizations explore innovative AI solutions, the number of users on these platforms increased by 50% in recent months. Microsoft Azure OpenAI leads this charge with a significant share of adoption, followed closely by Amazon Bedrock and Google Vertex AI.
The implications are profound. A 73% increase in genAI-related network traffic points to rapid integration into daily operations. As organizations flock to these platforms, with 41% now using at least one genAI platform, there is a pressing need for robust data loss prevention (DLP) measures to guard against unmonitored use.
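To make the DLP idea concrete, here is a minimal sketch of a prompt gate that blocks obviously sensitive content before it reaches a genAI API. The rule names and regular expressions are illustrative assumptions, not Netskope's actual detection logic; a production DLP system would use far richer classifiers.

```python
import re

# Hypothetical DLP rules; a real policy would be far broader.
DLP_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security numbers
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b"),  # common key prefixes
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of DLP rules the prompt violates."""
    return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(text)]

def gate_prompt(text: str) -> str:
    """Raise before a violating prompt is forwarded to any genAI endpoint."""
    hits = scan_prompt(text)
    if hits:
        raise ValueError(f"Prompt blocked by DLP rules: {hits}")
    return text
```

In practice such a gate would sit in a forward proxy or API gateway, so every genAI request is inspected regardless of which app the employee is using.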
Evolving Technology, Emerging Challenges
The deployment of homegrown solutions, such as on-premises AI running on local GPU resources, reflects organizations' interest in controlling their AI environments. Large Language Model (LLM) interfaces are also gaining traction, with 34% of organizations adopting these technologies. However, these advancements bring new cybersecurity challenges, demanding an evolve-or-be-left-behind mindset among security professionals.
As employees increasingly engage with these cutting-edge tools—67% report downloading resources from Hugging Face, and 39% use GitHub Copilot—organizations find themselves in a balancing act of fostering innovation while protecting sensitive information.
An Expanding Toolkit: The Proliferation of AI Apps
The rapid expansion of the genAI application ecosystem is evident, with Netskope tracking over 1,550 distinct genAI SaaS applications, up from just 317 in February. This explosion in innovation not only raises adoption rates—organizations are now utilizing an average of 15 genAI apps—but also correlates with a higher volume of data being uploaded to these platforms.
Notably, while general-purpose tools like ChatGPT have seen a decline in enterprise popularity, niche applications like Anthropic Claude and Perplexity AI have gained ground. This trend toward specialized, purpose-built tools reflects a nuanced understanding of security and productivity needs within organizations.
Recommendations for Security Leaders
In light of these developments, Netskope emphasizes essential steps for Chief Information Security Officers (CISOs) and security leaders to effectively navigate the complexities of a rapidly changing AI landscape:
- Assess your genAI landscape: Identify which genAI tools are in use, who is using them, and how they are being applied within the organization.
- Bolster app controls: Create and enforce robust policies that permit only approved genAI applications.
- Inventory local controls: For organizations utilizing local genAI infrastructure, apply relevant security frameworks to protect sensitive information.
- Continuous monitoring: Implement vigilant monitoring systems to detect new instances of shadow AI and adapt to shifting regulatory and ethical landscapes.
- Address the risks of agentic shadow AI: Collaborate with internal adopters of agentic AI to formulate practical policies to manage its use effectively.
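The continuous-monitoring step above can be sketched as a simple check of web proxy logs against a known-genAI domain list. The domain sets and the `timestamp user domain` log format here are illustrative assumptions, not Netskope's detection method; real deployments would consume vendor telemetry and a maintained app catalog.

```python
# Hypothetical shadow-AI monitor: domain lists and log format are assumptions.
KNOWN_GENAI_DOMAINS = {
    "chat.openai.com", "claude.ai", "perplexity.ai",
    "huggingface.co", "bedrock.amazonaws.com",
}
APPROVED = {"bedrock.amazonaws.com"}  # sanctioned by policy in this example

def find_shadow_ai(log_lines: list[str]) -> set[str]:
    """Flag genAI domains seen in proxy logs that are not on the approved list.

    Assumes each log line is 'timestamp user domain' (a simplified format).
    """
    flagged = set()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        domain = parts[2].lower()
        if domain in KNOWN_GENAI_DOMAINS and domain not in APPROVED:
            flagged.add(domain)
    return flagged
```

A report built from these flags gives security teams the starting inventory the first recommendation calls for: which unsanctioned tools are in use and by whom.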
Conclusion
As the enterprise landscape continues to be reshaped by generative AI, the pace of innovation brings both opportunities and challenges. By proactively addressing the threats posed by shadow AI and embracing responsible AI adoption, organizations can not only safeguard their data but also unleash the full potential of these remarkable technologies. The journey ahead is demanding, yet it promises to redefine how we think about technology and security in the digital age.