Generative AI is scraping vast amounts of regulated sensitive data from organisations

According to a new study published by Netskope, regulated data (data that organisations have a legal duty to protect) makes up more than a third of the sensitive data being shared with generative AI (genAI) applications, presenting businesses with the potential risk of costly data breaches. The new Netskope Threat Labs research claims to reveal that three-quarters of businesses surveyed now completely block at least one genAI app, reflecting enterprise technology leaders' desire to limit the risk of sensitive data exfiltration.

Using global data sets, the researchers claim that 96 percent of businesses are now using genAI, a number that has tripled over the past 12 months. On average, enterprises now use nearly 10 genAI apps, up from three last year, with the top 1 percent of adopters now using an average of 80 apps, up significantly from 14. With this increased use, enterprises have experienced a surge in proprietary source code being shared with genAI apps, which accounts for 46 percent of all documented data policy violations. These shifting dynamics complicate how enterprises control risk and prompt the need for a more robust data loss prevention (DLP) effort.
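To make the DLP point concrete, the sketch below shows the kind of content inspection a policy engine might run on an outbound genAI prompt. The patterns, the threshold, and the `inspect_outbound_prompt` helper are illustrative assumptions for this article, not Netskope's actual detection rules.

```python
import re

# Illustrative patterns suggesting a prompt contains proprietary source code.
# These heuristics and the threshold below are assumptions for the sketch,
# not the rules any real DLP product (including Netskope's) actually uses.
CODE_PATTERNS = [
    re.compile(r"\bdef\s+\w+\s*\("),        # Python function definitions
    re.compile(r"\bclass\s+\w+"),            # class declarations
    re.compile(r"#include\s*<\w+"),          # C/C++ includes
    re.compile(r"\bimport\s+[\w.]+"),        # import statements
    re.compile(r"[{};]\s*$", re.MULTILINE),  # statement/block terminators
]

def looks_like_source_code(prompt: str, threshold: int = 2) -> bool:
    """Flag a prompt if it matches several code-like patterns."""
    hits = sum(1 for pattern in CODE_PATTERNS if pattern.search(prompt))
    return hits >= threshold

def inspect_outbound_prompt(prompt: str, app: str) -> str:
    """Decide whether to allow or block a prompt bound for a genAI app."""
    if looks_like_source_code(prompt):
        return f"BLOCK: possible proprietary source code bound for {app}"
    return "ALLOW"

if __name__ == "__main__":
    snippet = "def rotate_keys(vault):\n    import secrets\n    ..."
    print(inspect_outbound_prompt(snippet, "chat.example-genai.com"))
```

Real DLP engines combine far richer signals, such as exact-match fingerprints and machine-learned classifiers, but the allow-or-block decision point is the same.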

There are positive signs of proactive risk management in the nuanced security and data loss controls organisations are applying: for example, 65 percent of enterprises now implement real-time user coaching to help guide user interactions with genAI apps. According to the research, effective user coaching has played a crucial role in mitigating data risks, prompting 57 percent of users to alter their actions after receiving coaching alerts.
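As an illustration of how real-time coaching can work, here is a minimal sketch of an inline coaching gate. The message wording and the injected `ask` callback are assumptions for the example; in production the dialog would come from a browser extension or endpoint agent.

```python
from dataclasses import dataclass

@dataclass
class CoachingDecision:
    user: str
    app: str
    proceeded: bool

def coach_user(user: str, app: str, ask) -> CoachingDecision:
    """Interrupt a risky genAI upload with a coaching message.

    `ask` is any callable that presents the message and returns True if
    the user chooses to proceed anyway; it is injected here for testing.
    """
    message = (
        f"You are about to send data to {app}. "
        "Sensitive or regulated data must not be shared with genAI apps. "
        "Do you want to continue?"
    )
    proceeded = ask(message)
    # The study found 57 percent of users change course after an alert
    # like this; logging each decision is what makes that measurable.
    return CoachingDecision(user=user, app=app, proceeded=proceeded)

if __name__ == "__main__":
    decision = coach_user("alice", "chat.example-genai.com", ask=lambda msg: False)
    print(decision)  # proceeded=False: the user altered their action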

Netskope’s Cloud and Threat Report: AI Apps in the Enterprise also suggests that:

  •   ChatGPT remains the most popular app, with more than 80 percent of enterprises using it
  •   Microsoft Copilot, launched in January 2024, has shown the most dramatic growth in use and is now used by 57 percent of enterprises
  •   19 percent of organisations have imposed a blanket ban on GitHub Copilot

Netskope recommends that enterprises review, adapt, and tailor their risk frameworks specifically to AI or genAI, drawing on efforts such as the NIST AI Risk Management Framework. Specific tactical steps to address risk from genAI include:

  • Know Your Current State: Begin by assessing your existing uses of AI and machine learning, data pipelines, and genAI applications. Identify vulnerabilities and gaps in security controls.
  • Implement Core Controls: Establish fundamental security measures, such as access controls, authentication mechanisms, and encryption (a brief sketch of these basics appears after this list).
  • Plan for Advanced Controls: Beyond the basics, develop a roadmap for advanced security controls: threat modeling, anomaly detection, continuous monitoring, and behavioral detection to spot data movements to genAI apps that deviate from normal user patterns (a minimal sketch of such a baseline check also follows).
  • Measure, Start, Revise, Iterate: Regularly evaluate the effectiveness of your security measures, and adapt and refine them based on real-world experiences and emerging threats.
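To ground the "Implement Core Controls" step, here is a minimal sketch pairing a role-based access check with encryption of data at rest using the Python `cryptography` library. The roles and the permission table are assumptions made for the example, not prescriptions from the report.

```python
from cryptography.fernet import Fernet

# Illustrative role-to-permission mapping; the roles and rules here are
# assumptions for the sketch, not recommendations from the Netskope report.
ROLE_PERMISSIONS = {
    "analyst": {"read"},
    "engineer": {"read", "write"},
    "admin": {"read", "write", "manage_keys"},
}

def is_authorised(role: str, action: str) -> bool:
    """Basic role-based access control check."""
    return action in ROLE_PERMISSIONS.get(role, set())

def store_secret(role: str, plaintext: bytes, key: bytes) -> bytes:
    """Encrypt data at rest, but only for roles allowed to write."""
    if not is_authorised(role, "write"):
        raise PermissionError(f"role {role!r} may not write sensitive data")
    return Fernet(key).encrypt(plaintext)

if __name__ == "__main__":
    key = Fernet.generate_key()
    token = store_secret("engineer", b"regulated customer record", key)
    print(Fernet(key).decrypt(token))  # b'regulated customer record'
```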
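And to ground the "Plan for Advanced Controls" step, the sketch below baselines each user's daily upload volume to genAI apps and flags large deviations. The z-score model and three-sigma threshold are deliberately simple assumptions; commercial behavioral detection draws on far richer signals.

```python
from collections import defaultdict
from statistics import mean, stdev

class GenAIUploadBaseline:
    """Track per-user daily upload volume to genAI apps and flag outliers.

    A toy z-score model: the three-sigma threshold and byte-count feature
    are illustrative assumptions, not a production detection design.
    """

    def __init__(self, threshold_sigma: float = 3.0, min_history: int = 7):
        self.history = defaultdict(list)  # user -> list of daily byte counts
        self.threshold_sigma = threshold_sigma
        self.min_history = min_history

    def record(self, user: str, bytes_uploaded: int) -> None:
        """Add one day's observed upload volume to the user's history."""
        self.history[user].append(bytes_uploaded)

    def is_anomalous(self, user: str, bytes_uploaded: int) -> bool:
        """Return True when today's volume deviates sharply from baseline."""
        past = self.history[user]
        if len(past) < self.min_history:
            return False  # not enough history to judge yet
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            return bytes_uploaded > mu  # flat history: any rise stands out
        return (bytes_uploaded - mu) / sigma > self.threshold_sigma

if __name__ == "__main__":
    baseline = GenAIUploadBaseline()
    for day in range(14):  # alice normally uploads ~50-63 KB per day
        baseline.record("alice", 50_000 + day * 1_000)
    print(baseline.is_anomalous("alice", 60_000))     # False: within the norm
    print(baseline.is_anomalous("alice", 5_000_000))  # True: a huge spike
```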