
Netskope Threat Labs Research - GenAI Being Regulated

The Netskope Threat Labs research reveals that three-quarters of the surveyed businesses now completely block at least one GenAI app to limit the risk of sensitive data exfiltration.

DQC Bureau

Netskope has published new research showing that regulated data, which organizations have a legal duty to protect, makes up more than a third of the sensitive data being shared with generative AI (genAI) applications. This presents a potential risk of costly data breaches for businesses.


The Netskope Threat Labs research reveals that three-quarters of the surveyed businesses now completely block at least one genAI app to limit the risk of sensitive data exfiltration. However, fewer than half of the organizations apply data-centric controls to prevent sensitive information from being shared in prompts, indicating a lag in adopting the advanced data loss prevention (DLP) solutions needed to safely enable genAI.
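To make the idea of a data-centric control concrete, here is a minimal sketch of a prompt pre-check in Python. The patterns and function names are illustrative assumptions, not Netskope's implementation; production DLP engines use far richer detection (exact-data matching, document fingerprinting, ML classifiers) than simple regular expressions.

```python
import re

# Illustrative patterns only -- a real DLP product maintains far more
# sophisticated detectors than these example regexes.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Gate the prompt before it reaches a genAI app: block on any match."""
    return not scan_prompt(prompt)
```

A control like this would sit between the user and the genAI app, blocking or redacting the prompt (and optionally triggering a coaching alert) when sensitive data is detected.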

Using global data sets, the researchers found that 96% of businesses are now using genAI, a number that has tripled over the past 12 months. On average, enterprises now use nearly 10 genAI apps, up from three last year, with the top 1% of adopters now using an average of 80 apps, up from 14. This increased use has led to a surge in proprietary source code being shared within genAI apps, accounting for 46% of all documented data policy violations. These dynamics complicate how enterprises control risk and highlight the need for a more robust DLP effort.

Positive signs of proactive risk management are evident in the security and data loss controls organizations are applying. For example, 65% of enterprises now implement real-time user coaching to help guide user interactions with genAI apps. The research indicates that effective user coaching has played a crucial role in mitigating data risks, prompting 57% of users to alter their actions after receiving coaching alerts.


“Securing genAI needs further investment and greater attention as its use permeates through enterprises with no signs that it will slow down soon,” said James Robinson, Chief Information Security Officer, Netskope. “Enterprises must recognize that genAI outputs can inadvertently expose sensitive information, propagate misinformation, or even introduce malicious content. It demands a robust risk management approach to safeguard data, reputation, and business continuity.”

Netskope’s Cloud and Threat Report: AI Apps in the Enterprise finds the following:

  • ChatGPT is the most popular app, used by more than 80% of enterprises.
  • Microsoft Copilot has shown the most dramatic growth since its launch in January 2024, with a 57% increase in use.
  • 19% of organizations have imposed a blanket ban on GitHub Copilot.

Netskope recommends that enterprises review, adapt, and tailor their risk frameworks specifically to AI and genAI, drawing on resources such as the NIST AI Risk Management Framework. Specific tactical steps to address genAI risk include:

  • Know Your Current State: Assess existing uses of AI and machine learning, data pipelines, and genAI applications. Identify vulnerabilities and gaps in security controls.
  • Implement Core Controls: Establish fundamental security measures, such as access controls, authentication mechanisms, and encryption.
  • Plan for Advanced Controls: Develop a roadmap for advanced security controls. Consider threat modeling, anomaly detection, continuous monitoring, and behavioral detection to identify suspicious data movements to genAI apps across cloud environments that deviate from normal user patterns.
  • Measure, Start, Revise, Iterate: Regularly evaluate the effectiveness of security measures. Adapt and refine them based on real-world experiences and emerging threats.
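The anomaly-detection step above can be sketched very simply. The following Python example is a hypothetical illustration, assuming a per-user baseline of daily upload volume to genAI apps and a z-score threshold; real behavioral-detection systems model many more signals than one metric.

```python
from statistics import mean, stdev

def is_anomalous(history_mb: list[float], today_mb: float,
                 threshold: float = 3.0) -> bool:
    """Flag today's genAI upload volume (in MB) if it deviates more than
    `threshold` standard deviations from the user's historical baseline."""
    if len(history_mb) < 2:
        return False  # too little history to establish a baseline
    mu, sigma = mean(history_mb), stdev(history_mb)
    if sigma == 0:
        return today_mb != mu  # perfectly flat baseline: any change is unusual
    return abs(today_mb - mu) / sigma > threshold
```

A user who normally uploads around 10 MB a day would be flagged on a 200 MB day, the kind of deviation from normal patterns the report's guidance on behavioral detection is aimed at.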

