The Rise of Generative AI in Enterprises: Balancing Innovation and Security
The world of generative AI (genAI) is experiencing a seismic shift, with enterprise adoption skyrocketing by 50% in just a few months. New research charts this upward trajectory and shows that the rush to harness AI's potential brings with it a host of new cybersecurity challenges.
A Rapidly Evolving Landscape
Netskope Threat Labs’ latest Cloud and Threat Report illuminates the burgeoning interest in generative AI applications, both cloud-based and on-premises. This surge not only signifies growing excitement around the capabilities of AI but also introduces pressing security concerns in a landscape increasingly characterized by "shadow AI": unsanctioned applications that employees build without organizational oversight.
As businesses rush to adopt these innovative technologies, the implications for data security have become increasingly apparent. While genAI platforms provide user-friendly interfaces for developing custom AI applications, their growing adoption also means heightened risk, particularly concerning unauthorized access to enterprise data.
Understanding Shadow AI
While shadow AI offers opportunities for innovation, it raises alarms among security professionals: the report finds that over half of current genAI application adoptions fall into this category. As Ray Canzanese, Director of Netskope Threat Labs, points out, “The rapid growth of shadow AI places the onus on organizations to identify who is creating new AI apps and AI agents using genAI platforms.”
These AI applications connect enterprise data stores directly to dynamic AI environments, amplifying the risk of data leaks and other security threats. Continuous monitoring and rigorous data loss prevention (DLP) measures have become imperative, as network traffic associated with genAI jumped an astonishing 73% in three months.
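To make the DLP idea concrete, here is a minimal sketch of a pattern-based check that could run on prompt text before it leaves for a genAI app. The patterns and policy are illustrative assumptions for this article, not anything described in the Netskope report; production DLP engines use far richer detection.

```python
import re

# Illustrative patterns for data a DLP policy might block in genAI prompts.
# Real DLP systems add validation, context analysis, and ML classifiers.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Permit the prompt only if no sensitive pattern matches."""
    return not scan_prompt(prompt)
```

A real deployment would sit inline in a proxy or browser extension, logging and coaching the user rather than silently dropping traffic.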
The Numbers Behind the Trend
As of May 2025, approximately 41% of organizations have adopted at least one genAI platform. Leading the pack, Microsoft Azure OpenAI is utilized by 29% of enterprises, followed by Amazon Bedrock at 22%, and Google Vertex AI at 7.2%. Such widespread adoption underlines the urgent need for organizations to rethink their AI application controls and evolve their DLP policies.
As enterprises delve deeper into the world of AI, many are exploring diverse avenues for innovation. A significant trend is deploying genAI locally on on-premises GPU resources, allowing for enhanced interaction with SaaS applications. Large Language Model (LLM) interfaces have found favor with 34% of organizations; Ollama leads this niche with a 33% adoption rate.
Expanding Employee Engagement
Employee engagement with AI tools is also on the rise. An impressive 67% of organizations report users actively downloading resources from Hugging Face, while the search for intelligent AI agents drives experimentation with various frameworks. GitHub Copilot, now used by 39% of companies, is further evidence of this trend.
As companies increasingly invest in on-premises solutions, they are broadening their capability to access APIs beyond conventional browser interfaces. Notably, two-thirds of organizations are making API calls to platforms like OpenAI, showcasing an evolving approach to data retrieval and manipulation through AI.
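As a rough illustration of what such a direct API call looks like, the sketch below builds a request to OpenAI's public chat completions endpoint using only the Python standard library. The endpoint and payload shape follow OpenAI's documented API, but treat the model name and surrounding details as assumptions for this example.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Build a chat-completions request; payload shape per OpenAI's public API."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Key is read from the environment; never hard-code credentials.
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
    )

# Actually sending the request requires a valid OPENAI_API_KEY:
# with urllib.request.urlopen(build_request("Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

It is exactly this kind of programmatic access, bypassing the browser, that makes API traffic harder to observe with conventional web controls.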
Consolidation Around Purpose-Built Tools
Faced with a diverse array of applications, enterprises are trending toward purpose-built tools that integrate seamlessly into their productivity ecosystems. Although ChatGPT has experienced its first reported decline in usage since tracking began, other applications such as Anthropic’s Claude, Perplexity AI, and Grammarly have gained traction.
Interestingly, Grok has made its way into the top ten most-used generative AI applications for the first time, signaling a shift in user preferences toward more specialized tools.
Recommendations for Securing AI Innovations
As businesses navigate the unpredictable waters of generative AI, it’s crucial to implement proactive measures to ensure responsible usage. Netskope outlines several key actions for Chief Information Security Officers (CISOs) and security leaders to consider:
- Assess the GenAI Landscape: Identify and monitor which tools are in use, as well as who is using them and how.
- Bolster GenAI App Controls: Implement strict policies permitting only company-approved applications while enforcing robust blocking mechanisms and real-time user coaching.
- Inventory Local Controls: For those utilizing local genAI infrastructures, apply relevant security frameworks like the OWASP Top 10 for Large Language Model Applications.
- Continuous Monitoring: Maintain ongoing vigilance over genAI usage to detect unauthorized applications and remain informed on ethical considerations and regulatory changes.
- Address Emerging Risks: Collaborate closely with primary adopters of agentic AI to develop realistic policies that effectively mitigate security risks.
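The first two recommendations, inventorying genAI usage and enforcing an approved-app policy, can be sketched as a simple allowlist check over observed traffic. The domains and log format below are invented for illustration; a real deployment would draw on a secure web gateway's categorized traffic logs.

```python
# Hypothetical allowlist of company-sanctioned genAI endpoints.
# These example.com domains are placeholders, not real services.
APPROVED_GENAI_DOMAINS = {
    "oai.azure.example.com",   # assumed sanctioned Azure OpenAI deployment
    "bedrock.example.com",     # assumed sanctioned Amazon Bedrock proxy
}

def find_shadow_ai(traffic_log: list[dict]) -> list[dict]:
    """Return log entries for genAI traffic to unapproved domains."""
    return [entry for entry in traffic_log
            if entry.get("category") == "genai"
            and entry.get("domain") not in APPROVED_GENAI_DOMAINS]
```

Flagged entries would then feed the coaching and blocking controls described above, rather than being silently dropped.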
Conclusion
The rapid adoption of generative AI platforms marks a new frontier for enterprises, ripe with opportunities for innovation as well as inherent challenges surrounding security and risk management. As organizations endeavor to integrate these powerful tools, a balanced approach that prioritizes both innovation and safety will be essential for sustainable success in the AI landscape.