The Rise of Generative AI in Enterprises: Navigating Gains and Growing Risks
The trajectory of generative AI (genAI) platforms within enterprise environments is not just ascending; it’s skyrocketing. Recent research indicates a 50% surge in enterprise adoption of these technologies over the three-month period ending in May 2025. While this rapid growth highlights a compelling shift towards digital transformation, it also raises significant concerns around security and governance, particularly the alarming rise of unregulated “shadow AI.”
Understanding Shadow AI: A Double-Edged Sword
As businesses embrace innovations that genAI offers, the prevalence of shadow AI—unsanctioned applications employed by employees for various tasks—has become a critical issue. This phenomenon is intricately detailed in Netskope Threat Labs’ latest Cloud and Threat Report, which explores the evolving landscape of genAI usage across organizations. Over half of all app adoption is now estimated to stem from these unofficial channels, signaling a pressing need for enterprises to recalibrate their security protocols.
Ray Canzanese, Director of Netskope Threat Labs, emphasized the challenge this presents. Security teams are in a conundrum: they must facilitate innovation while guarding against potential data breaches. “The rapid growth of shadow AI places the onus on organizations to identify who is creating new AI apps and where they are being deployed,” Canzanese noted. As the lure of personalized AI solutions beckons, the risks of insecure data connections expand exponentially, calling for robust data loss prevention (DLP) strategies and enhanced monitoring capabilities.
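To make the DLP idea concrete, a minimal sketch of prompt screening is shown below. The patterns and the `scan_prompt` helper are illustrative assumptions, not part of Netskope's product or the report; production DLP relies on far richer detection than a few regexes.

```python
import re

# Hypothetical patterns a DLP filter might screen outbound genAI prompts
# against; real deployments use much richer classifiers than these regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

# A prompt like this would be blocked or redacted before leaving the network.
hits = scan_prompt("Summarize: john.doe@example.com, SSN 123-45-6789")
```

The point is architectural rather than the specific patterns: prompts bound for genAI services pass through an inspection step before they leave the corporate network.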
The Surge of Generative AI Applications
The allure of generative AI platforms lies not only in their user-centric design but also in their foundational role for developing customized AI applications and agents. This adaptability has fueled a 50% increase in users, indicating a significant shift in how organizations leverage these technological advancements. Network traffic related to genAI saw a remarkable 73% jump in the preceding quarter, with 41% of organizations now utilizing at least one genAI platform. Leading the charge are Microsoft Azure OpenAI, Amazon Bedrock, and Google Vertex AI, which collectively underscore the competitive nature of this rapidly evolving sector.
However, the burgeoning popularity of these applications is not without its pitfalls. As enterprises increasingly connect their data stores directly to AI platforms, they expose themselves to new layers of data security risks. The continual influx of user-generated data represents not only an opportunity but also a gaping vulnerability that must be addressed through intentional policy modifications.
The Drive for Innovation
Organizations are eager to harness the power of genAI, exploring various routes for swift AI innovation, including local deployments and custom tool development. Large Language Model (LLM) interfaces, in particular, have gained traction, reportedly in use at 34% of organizations. Tools like Ollama dominate this space, although newer contenders such as LM Studio and Ramalama are emerging on the horizon.
Interestingly, employee enthusiasm for AI tools is palpable, evidenced by downloads of models and other AI resources from platforms like Hugging Face at 67% of organizations. The drive towards creating AI agents showcases a new frontier; GitHub Copilot has found a home in 39% of enterprises, while agent frameworks are facilitating deeper integrations into organizational workflows.
Monitoring and Managing the Shadow
Recent data reveals that Netskope is tracking over 1,550 distinct genAI SaaS applications—a marked increase from 317 in February—highlighting the rapid pace of app innovation and enterprise uptake. The average organization now utilizes approximately 15 genAI apps, up from 13, showcasing an upward trend in the monthly volume of data processed through these tools.
As enterprises consolidate their focus on purpose-built tools like Gemini and Copilot, security teams are compelled to ensure that their integration into existing frameworks is seamless and safe. Notably, ChatGPT has seen its first decline in enterprise popularity since tracking began in 2023, signaling a shift in user preference towards solutions that promise greater productivity without compromising security.
Steps Toward Secure Adoption
To navigate the complexities posed by both innovation and risk, cybersecurity leaders are encouraged to adopt a proactive approach. Recommendations include:
- Assessing the GenAI Landscape: Identifying which tools are being used and how they fit into existing operations is paramount.
- Strengthening App Controls: Establishing strict policies regarding the use of sanctioned applications, coupled with robust user education and real-time support systems.
- Inventorying Local Security Controls: For organizations utilizing local genAI infrastructure, applying established security frameworks ensures necessary protections are in place.
- Implementing Continuous Monitoring: Regular oversight of genAI usage is essential to detect emerging shadow AI instances and adapt to evolving risks associated with AI technology.
- Collaborating on Agentic AI Policies: Engaging key adopters to develop realistic strategies can curtail the risks posed by unregulated AI implementations.
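The continuous-monitoring step above can be sketched as a simple check of egress logs against an allowlist of sanctioned genAI services. Everything here is a hypothetical illustration: the domain names, the log format, and the `shadow_ai_report` helper are assumptions, and a real deployment would first classify destinations as genAI traffic (for example, via a URL-category feed) before applying the allowlist.

```python
from collections import Counter

# Illustrative allowlist of sanctioned genAI services (hypothetical domains).
SANCTIONED = {"copilot.example-corp.com", "gemini.google.com"}

# Hypothetical egress-log entries of genAI traffic: (user, destination domain).
egress_log = [
    ("alice", "gemini.google.com"),
    ("bob", "chat.unvetted-ai.io"),
    ("carol", "chat.unvetted-ai.io"),
]

def shadow_ai_report(log):
    """Count hits to genAI domains that are not on the sanctioned list."""
    return Counter(dom for _, dom in log if dom not in SANCTIONED)

report = shadow_ai_report(egress_log)  # surfaces the unsanctioned service
```

Run regularly, even a crude report like this surfaces new shadow AI instances early, giving security teams a chance to sanction, block, or educate before sensitive data flows through an unvetted service.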
As the generative AI landscape evolves continually, embracing its potential while safeguarding data integrity will remain a defining challenge for enterprises worldwide. The future is undoubtedly rich with possibilities, but it necessitates a commitment to mindful innovation—a balance that organizations must strive to achieve as they navigate this dynamic terrain.


