Netskope warns of shadow AI risks as unsanctioned tool usage grows in workplaces

Netskope reports a 50% surge in generative AI adoption, with over half being unsanctioned "shadow AI," raising security risks. Companies must strengthen AI governance and DLP policies.

DQC Bureau

Netskope has raised fresh concerns over the rapid growth of shadow AI (unsanctioned AI applications used by employees) amid a surge in generative AI (genAI) platform adoption across enterprises.


The company’s latest Netskope Threat Labs Cloud and Threat Report highlights a 50% spike in genAI platform usage by enterprise users in the three months ending May 2025. Yet, more than half of this adoption is estimated to fall under shadow AI, compounding potential security risks for organisations worldwide.

Netskope reports the rise of generative AI platforms

GenAI platforms are emerging as the fastest-growing category of shadow AI. Their simplicity and flexibility make them attractive for end users seeking to build custom AI apps and agents. Netskope reported a 73% increase in network traffic tied to genAI usage in the same period, with 41% of organisations now using at least one genAI platform.


Microsoft Azure OpenAI leads adoption at 29%, followed by Amazon Bedrock (22%) and Google Vertex AI (7.2%).

“The rapid growth of shadow AI places the onus on organisations to identify who is creating new AI apps and AI agents using genAI platforms and where they are building and deploying them,” said Ray Canzanese, Director of Netskope Threat Labs. “Security teams don’t want to hamper employee end users’ innovation aspirations, but AI usage is only going to increase. To safeguard this innovation, organisations need to overhaul their AI app controls and evolve their DLP policies to incorporate real-time user coaching elements.”

On-premises AI adoption gains momentum


Beyond SaaS solutions, organisations are also turning to on-premises AI innovation. Netskope found that 34% of organisations now use Large Language Model (LLM) interfaces. Ollama leads this segment with 33% adoption, far ahead of LM Studio (0.9%) and Ramalama (0.6%).

At the same time, AI marketplaces such as Hugging Face are gaining traction, with resources being downloaded at 67% of organisations. GitHub Copilot usage has climbed to 39%, while 5.5% of organisations now report employees running agents generated from on-premises frameworks.

The growing use of APIs underscores the trend: two-thirds of organisations have users making calls to api.openai.com, and 13% to api.anthropic.com.


Shifting SaaS AI landscape

Netskope is tracking more than 1,550 distinct GenAI SaaS applications, up from just 317 in February. Enterprise usage has also grown, with organisations now using an average of 15 GenAI apps compared to 13 in February.

The report notes that enterprise users are consolidating around integrated tools such as Google Gemini and Microsoft Copilot. Meanwhile, general-purpose chatbot ChatGPT has seen its first decline in enterprise usage since Netskope began tracking it in 2023. By contrast, Anthropic Claude, Perplexity AI, Grammarly and Gamma recorded gains. Grok also entered the top 10 most-used GenAI apps for the first time.


A call for stronger AI governance

Netskope urged security leaders to take immediate steps to manage the risks of shadow AI and agentic AI. Its recommendations include:

  • Assessing the genAI landscape: Map out which tools are being used, by whom, and for what purpose.

  • Bolstering GenAI app controls: Limit usage to company-approved applications and deploy real-time user coaching.

  • Reviewing local controls: Apply frameworks such as the OWASP Top 10 for LLM Applications to safeguard on-premises deployments.

  • Continuous monitoring: Detect new shadow AI instances and track regulatory or adversarial developments.

  • Managing agentic AI risks: Identify early adopters of AI agents and develop policies for secure and monitored usage.
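As a concrete illustration of the continuous-monitoring recommendation above, the sketch below flags outbound requests to known genAI API endpoints in web-proxy logs. The two domains come from the report itself (api.openai.com and api.anthropic.com); the log format and the `flag_shadow_ai` helper are illustrative assumptions, not part of Netskope's tooling.

```python
# Hypothetical sketch: detect shadow AI API traffic in proxy logs.
# Domain list is drawn from the report; the "user domain" log format
# is an assumption for illustration only.
GENAI_API_DOMAINS = {
    "api.openai.com",      # report: calls seen at two-thirds of organisations
    "api.anthropic.com",   # report: calls seen at 13% of organisations
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests hitting genAI endpoints.

    Assumes each log line is whitespace-separated: '<user> <domain>'.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in GENAI_API_DOMAINS:
            hits.append((parts[0], parts[1]))
    return hits

logs = [
    "alice api.openai.com",
    "bob example.com",
    "carol api.anthropic.com",
]
print(flag_shadow_ai(logs))
```

A real deployment would of course consume SIEM or secure-web-gateway exports rather than flat text, but the core control is the same: a maintained watchlist of AI endpoints matched against egress traffic, with new matches feeding the shadow AI inventory.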


James Robinson, Chief Information Security Officer, Netskope, stressed the urgency: “The newest form of shadow AI is the shadow AI agent. They are like a person coming into your office every day, handling your data, and taking actions on your systems, all while not being background checked or having security monitoring in place. Identifying who is using agentic AI and putting policies in place for their use should be an urgent priority for every organisation.”

The bigger picture

As organisations race to harness AI for innovation, the balance between enabling productivity and ensuring security has never been more critical. Netskope’s findings suggest that without stronger governance and controls, shadow AI could expose enterprises to new levels of risk at a pace that traditional security frameworks may struggle to keep up with.

