Netskope Expands AI Security Capabilities on Netskope One Platform
Netskope has expanded the capabilities of its Netskope One platform to support a broader range of AI security use cases, including advanced protections for private applications and enhancements to Data Security Posture Management (DSPM). The platform now addresses risks beyond basic AI application access by offering visibility into sensitive data inputs to LLMs and providing risk assessments to support informed application selection and policy configuration.
Netskope One, built on SkopeAI technology, delivers unified AI security across users, data, agents, and applications. The platform supports real-time contextual controls and visibility to manage AI adoption across enterprise environments.
The growing use of AI tools within enterprises—including public genAI applications, embedded AI features, private AI deployments, and agent-based AI—has expanded the overall attack surface. This complexity has increased the need for integrated security approaches.
Findings from Netskope Threat Labs’ 2025 Generative AI Cloud and Threat Report show a 30-fold increase in data shared with GenAI applications by internal users over the past year. Much of this traffic results from "shadow AI" usage, where employees access AI tools using personal accounts. The report notes that 72% of enterprise GenAI users still rely on personal accounts to access tools such as ChatGPT, Google Gemini, and Grammarly. These trends highlight the need for comprehensive AI governance across the enterprise.
Key Enhancements to Netskope One Include:
- Data Protection During Model Training: New DSPM capabilities help prevent sensitive or regulated data from being used to train public or private LLMs. This applies to direct training, retrieval-augmented generation (RAG), and fine-tuning scenarios.
- Risk Assessment With Contextual Insights: DSPM leverages Netskope’s DLP engine and exposure data to classify information and assess the associated AI-related risks, enabling security teams to prioritise issues and align policies accordingly.
- Policy-Based AI Governance: The platform supports automated enforcement based on data classification, origin, and usage patterns. Inline controls ensure that only approved data is used across AI-related workflows, including training, inference, and prompting (a minimal sketch of this pattern follows the list).
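To make the workflow concrete, the following minimal Python sketch illustrates the general pattern these enhancements describe: classify each record, then let a per-stage policy decide whether the record may enter training, RAG indexing, or prompting. The detection patterns, class names, and policy table are illustrative assumptions, not Netskope's actual DSPM or DLP implementation.

```python
import re

# Hypothetical detection patterns and policy table, for illustration only;
# Netskope's real DSPM/DLP classifiers and policy model are not public.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Which sensitive-data classes each AI workflow stage is allowed to contain.
POLICY = {
    "training": set(),           # nothing sensitive may enter model training
    "rag_indexing": {"email"},   # e.g. internal emails tolerated in a private RAG index
    "prompting": {"email"},
}

def classify(text: str) -> set:
    """Return the set of sensitive-data classes detected in a record."""
    return {label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)}

def allowed(text: str, stage: str) -> bool:
    """Admit a record into a workflow stage only if every detected class is permitted there."""
    return classify(text) <= POLICY[stage]

# Example: filter a corpus before it is used for fine-tuning.
corpus = [
    "Quarterly roadmap discussion notes.",
    "Customer card 4111 1111 1111 1111 is on file.",
]
training_set = [doc for doc in corpus if allowed(doc, "training")]
print(training_set)  # the credit-card record is filtered out
```

In practice the classification step would be backed by a full DLP engine rather than regular expressions, but the control flow, classify first, then gate each workflow stage by policy, is the point of the example.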
“Organisations need to know that the data feeding into any part of their AI ecosystem is safe throughout every phase of the interaction, recognising how that data can be used in applications, accessed by users, and incorporated into AI agents,” said Sanjay Beri, CEO, Netskope. “In conversations I’ve had with leaders throughout the world, I’m consistently answering the same question: ‘How can my organisation fast track the development and deployment of AI applications to support the business without putting company data in harm's way at any point in the process?’ Netskope One takes the mystery out of AI, helping organisations to take their AI journeys driven by the full context of AI interactions and protecting data throughout.”
Netskope Expands AI Governance Capabilities to Support Enterprise Adoption
Netskope announced new capabilities in its Netskope One platform to support the secure adoption of AI across various enterprise use cases. The platform now enables organisations to build a consistent AI readiness foundation, monitor AI-related activity, and enforce risk-adaptive controls, extending governance across both public and private AI deployments.
Many organisations are already leveraging Netskope One to support the business use of AI tools. The expanded capabilities allow all customers to advance their AI security strategies and manage associated risks throughout their AI lifecycle.
Key Capabilities of Netskope One for AI Security:
- Establishing AI Readiness: The platform helps organisations identify and control data entering large language models (LLMs), whether from public genAI tools or private AI models. Netskope One prevents sensitive data exposure and supports the implementation of Data Loss Prevention (DLP) policies to reduce the risk of data poisoning. With enhanced discovery, classification, and labelling tools, organisations can track and manage data interactions with LLMs, AI agents, and applications.
- Comprehensive Organisational Visibility: Security teams gain insight into AI usage across managed and unmanaged environments, with visibility into both personal and corporate accounts. Netskope’s Cloud Confidence Index (CCI) supports risk evaluation across more than 370 genAI applications and 82,000 SaaS platforms, providing assessments on data use, third-party sharing, and model training practices.
- Risk-Adaptive Enforcement: Netskope enables granular control over AI application interactions based on user behaviour and data classification. Security teams can guide users toward approved enterprise-grade AI tools, such as Microsoft Copilot and ChatGPT, while restricting unauthorised actions like uploading, copying, or downloading. Advanced DLP policies extend to AI-generated responses, helping prevent the leakage of sensitive or regulated information (a minimal enforcement sketch follows this list).
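The sketch below shows, in hypothetical Python, what risk-adaptive enforcement of this kind can look like: a verdict (allow, coach the user toward a sanctioned app, or block) derived from user risk, the requested action, whether the app is approved, and the DLP classification of the content. All field names, risk levels, and rules are assumptions made for illustration; they do not represent Netskope's policy engine.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_risk: str        # "low" | "medium" | "high", e.g. from behaviour analytics
    action: str           # "prompt", "upload", "copy", "download"
    app_sanctioned: bool  # True for approved enterprise-grade AI tools
    sensitive: bool       # True if DLP classification flags the content

def decide(req: Request) -> str:
    """Return an enforcement verdict: allow, coach (steer to a sanctioned app), or block."""
    if not req.app_sanctioned:
        # Redirect users toward approved tools instead of silently blocking everything.
        return "block" if req.sensitive else "coach"
    if req.sensitive and (req.user_risk == "high" or req.action in {"upload", "download"}):
        return "block"
    return "allow"

print(decide(Request("low", "prompt", app_sanctioned=True, sensitive=False)))    # allow
print(decide(Request("medium", "upload", app_sanctioned=True, sensitive=True)))  # block
print(decide(Request("low", "prompt", app_sanctioned=False, sensitive=False)))   # coach
```

The same decision logic could equally be applied to AI-generated responses before they are returned to the user, which is the response-side DLP scenario described above.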
Netskope will demonstrate the complete Netskope One platform, including these newly introduced AI capabilities, at the 2025 RSA Conference.