Safeguard User Data and Enhance Protection Against GenAI Loss

Published: Jun 06, 2025 · The Hacker News · Artificial Intelligence / Zero Trust

When generative AI tools burst onto the scene in late 2022, their impact was felt far beyond tech circles. Employees from various sectors quickly recognized how these tools could enhance productivity, improve communication, and streamline workflows. Similar to earlier waves of technological advancements—like cloud storage and collaboration platforms—generative AI was adopted by employees seeking smarter ways to work, often outside corporate restrictions.

In response to the risks of disclosing sensitive information through public AI platforms, many organizations acted swiftly and decisively by blocking access. While this initial strategy might seem reasonable, it is more of a temporary fix than a viable long-term solution. Blocking access can create an illusion of safety, but it often fails to address the underlying threats.

Shadow AI: The Hidden Danger

The Zscaler ThreatLabz team has been monitoring AI and machine learning traffic across organizations, revealing some eye-opening data. In 2024, they reported an astonishing 36-fold increase in AI and ML traffic compared to the previous year, uncovering over 800 distinct AI applications in use. This trend underscores the fact that simply blocking access doesn’t deter employees.

Employees often resort to workarounds—sending emails to personal accounts, using mobile devices, or capturing screenshots to transfer sensitive information into AI systems. These actions create a hidden realm known as Shadow AI, where sensitive data is at risk but eludes corporate oversight. So while it may appear that usage has ceased, organizations are often just blind to the activities taking place.

Lessons from SaaS Adoption

This situation isn’t unprecedented. Organizations faced similar challenges with the rise of Software as a Service (SaaS) tools, where IT departments struggled to manage unsanctioned use of cloud-based applications. The solution didn’t lie in outright bans but rather in offering secure, user-friendly alternatives that met employees’ needs for convenience and efficiency.

Today, the stakes are even higher. In the SaaS era, losing a file might be a setback, but exposing intellectual property to a public AI model can have far-reaching consequences: once data is submitted, it may be impossible to retract. Unlike simple software errors, the implications of using generative AI irresponsibly can be severe.

Visibility First, Then Governance

To govern AI usage effectively, organizations must first gain visibility into what is actually happening. Blocking traffic without understanding its nature is akin to erecting a fence without knowing where the boundaries lie.

Zscaler’s unique position in data traffic allows for real-time monitoring. They track access to applications, user interactions, and usage frequency. This insight is crucial for evaluating risks, shaping policies, and enabling safer AI adoption within organizations.
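As a rough illustration only (not Zscaler's actual telemetry pipeline), this kind of visibility can be sketched by aggregating proxy logs: count requests and distinct users per known AI domain. The domain list and log records below are hypothetical placeholders.

```python
from collections import Counter, defaultdict

# Illustrative watchlist of AI app domains (an assumption, not a real policy feed).
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def summarize_ai_traffic(log_records):
    """Aggregate (user, domain) proxy-log pairs into per-app usage stats."""
    requests = Counter()
    users = defaultdict(set)
    for user, domain in log_records:
        if domain in AI_DOMAINS:
            requests[domain] += 1
            users[domain].add(user)
    return {d: {"requests": requests[d], "distinct_users": len(users[d])}
            for d in requests}

# Hypothetical sample logs: two users on one AI app, one on another,
# and one non-AI request that is ignored.
logs = [
    ("alice", "chat.openai.com"),
    ("bob", "chat.openai.com"),
    ("alice", "claude.ai"),
    ("carol", "example.com"),
]
summary = summarize_ai_traffic(logs)
```

A real deployment would feed this from gateway logs continuously; the point is that risk evaluation starts from counting who uses which app, and how often.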

Traditional policy frameworks often reduce governance decisions to a binary choice of “allow” or “block.” A more nuanced approach is context-aware and aligns with zero-trust principles, which require continuous evaluation. Not all AI usage presents equal risks; therefore, policies must reflect this complexity.

For instance, organizations can permit cautious access to AI applications, allowing transactions only in controlled environments, such as browser-isolated modes. This prevents users from pasting sensitive data into applications. Redirecting users to enterprise-approved alternatives ensures productivity is maintained without compromising security. When employees have a secure and efficient way to use AI, they are less likely to seek unauthorized methods.
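The tiered approach above can be sketched as a small policy function. This is a hedged illustration of the idea, not Zscaler's policy engine; the risk tiers and decision order are assumptions made for the example.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    ISOLATE = "allow only in browser isolation (paste/upload disabled)"
    REDIRECT = "redirect to an enterprise-approved AI app"
    BLOCK = "block"

def decide(app_risk: str, has_approved_alternative: bool) -> Action:
    """Map an AI app's risk tier to an action, beyond binary allow/block.

    app_risk: "low", "medium", or "high" (illustrative tiers).
    has_approved_alternative: whether a sanctioned equivalent exists.
    """
    if app_risk == "low":
        return Action.ALLOW
    if has_approved_alternative:
        # Keep users productive by steering them to the sanctioned tool.
        return Action.REDIRECT
    if app_risk == "medium":
        # Permit cautious use in isolation so sensitive data can't be pasted in.
        return Action.ISOLATE
    return Action.BLOCK
```

The design point is that "block" is the last resort, not the default: isolation and redirection preserve productivity while still applying zero-trust-style continuous evaluation.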

Furthermore, Zscaler’s data protection tools enable organizations to allow the use of certain public AI applications while blocking the transmission of sensitive data. Their research highlights over 4 million instances where attempts to send sensitive enterprise information to an AI application were intercepted by Zscaler’s policies. This data underscores how critical protective measures are in a landscape where the risk of data loss is ever-present.
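In spirit, this kind of data-protection control inspects outbound prompts before they reach an AI app. A minimal sketch follows, assuming simple regex rules; real DLP engines use far richer classifiers, and these patterns are illustrative, not Zscaler's detection logic.

```python
import re

# Illustrative DLP rules (assumptions for this sketch, not a production ruleset).
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of DLP rules the prompt triggers (empty if clean)."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

def allow_submission(text: str) -> bool:
    """Permit the prompt to reach the AI app only if no rule matches."""
    return not scan_prompt(text)
```

Usage: a clean prompt like "summarize this meeting" passes, while text containing a token such as a credit-card number or a cloud access key is intercepted before transmission, which is the behavior the interception statistics above describe.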

Finding the Right Balance

Striking a balance between embracing AI and ensuring data security is essential. Organizations must recognize that supporting AI adoption doesn’t have to mean sacrificing protection. With the right mindset and tools, it’s possible to empower users while safeguarding critical information.

For further insights, visit zscaler.com/security.
