New report from Netskope finds regulated data at risk in generative AI apps


Netskope’s new report has uncovered a concerning trend in generative AI use: regulated data is being shared with genAI applications, exposing businesses to costly data breaches. The research, conducted by Netskope Threat Labs, found that more than a third of the sensitive data shared with genAI apps is regulated data, a significant risk for businesses.

The study also found that while 75% of businesses now block at least one genAI app to mitigate the risk of data exfiltration, fewer than half apply data-centric controls that prevent sensitive information from being shared in the first place. Without advanced data loss prevention (DLP) solutions, many organizations remain vulnerable to breaches.
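To make the distinction concrete, a data-centric control inspects the data itself rather than simply blocking an app. Below is a minimal sketch in Python of a pre-send DLP check that scans an outbound prompt for patterns resembling regulated data; the pattern set, function names, and blocking policy are illustrative assumptions for this article, not Netskope's implementation. Production DLP engines use far richer detection (validators, exact-match dictionaries, ML classifiers) than a few regular expressions.

```python
import re

# Illustrative patterns for regulated data (assumed for this sketch).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def allow_genai_request(prompt: str) -> bool:
    """Block the request if the prompt appears to contain regulated data."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt matched sensitive patterns {findings}")
        return False
    return True

if __name__ == "__main__":
    # Example: this prompt would be blocked because it matches the SSN pattern.
    allow_genai_request("Summarize this customer record: SSN 123-45-6789")
```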

The use of genAI apps has skyrocketed in the past year, with 96% of businesses now using them. On average, enterprises run nearly 10 genAI apps, up from just three the previous year. Alongside this growth, sharing of proprietary source code within genAI apps has surged, and it now accounts for a significant share of data policy violations.

Despite these challenges, there are positive signs of proactive risk management among enterprises. For example, 65% of organizations now use real-time user coaching to guide interactions with genAI apps, and 57% of users alter their actions after receiving a coaching alert.
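Real-time coaching differs from a hard block: the user is warned and then decides whether to proceed, edit, or cancel. As a rough sketch of that flow (the function names, prompts, and options are assumptions for illustration, not any vendor's actual product), building on the scan_prompt() helper from the earlier sketch:

```python
def coach_user(prompt: str) -> str:
    """Warn the user about risky content and let them decide.

    Returns the prompt to send, or an empty string if the user cancels.
    Reuses scan_prompt() from the DLP sketch above.
    """
    findings = scan_prompt(prompt)
    if not findings:
        return prompt  # nothing risky detected; send as-is
    print(f"Coaching alert: this prompt may contain {', '.join(findings)}.")
    choice = input("Send anyway, edit, or cancel? [s/e/c]: ").strip().lower()
    if choice == "s":
        return prompt                      # user overrides the warning
    if choice == "e":
        return input("Revised prompt: ")   # user alters their action
    return ""                              # user cancels the request
```

The "edit" branch is where the report's 57% figure lives: coached users who revise their prompt before sending it.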

James Robinson, Chief Information Security Officer at Netskope, emphasized the importance of investing in robust risk management strategies to safeguard data, reputation, and business continuity as genAI usage grows. As genAI continues to permeate enterprises, organizations must prioritize security and data loss prevention efforts to protect sensitive information and mitigate risk.
