Zscaler ThreatLabz 2026 AI Security Report warns enterprises of machine-speed, AI-driven cyber threats
The ThreatLabz 2026 AI Security Report highlights a shift in how artificial intelligence is reshaping enterprise risk. Based on analysis of nearly one trillion AI and ML transactions observed on the Zscaler Zero Trust Exchange platform between January and December 2025, the report warns that enterprises are not fully prepared for the next phase of AI-driven cyber threats.
The research indicates that AI has moved beyond productivity enablement and is increasingly becoming a vector for autonomous, machine-speed attacks. AI and ML traffic was analysed together, reflecting how enterprise AI systems rely on machine learning models to operate at scale.
AI adoption accelerates across sectors
The report notes that Indian enterprises generated 82.3 billion AI and ML transactions between June and December 2025, ranking second globally after the United States. Within the Asia Pacific region, India accounted for 46.2 percent of AI and ML transactions during that period, with year-over-year growth of 309.9 percent.
Japan recorded 18.6 billion transactions with 122.8 percent year-over-year growth, while Australia logged 15.3 billion transactions and 104.1 percent growth.
Within India, AI and ML activity was concentrated in Technology and Communication; Manufacturing; Services; and Finance and Insurance. The findings suggest that AI is now embedded across core business functions, often expanding faster than governance mechanisms.
Suvabrata Sinha, CISO-in-Residence, India at Zscaler, said the scale of AI adoption in India is accelerating more quickly than many organisations’ ability to govern it. He stated that enterprises must understand where AI is being used, inspect shared data and enforce consistent controls under a Zero Trust framework.
Vulnerabilities emerge at machine speed
The report also details findings from red team testing of enterprise AI systems. Under simulated adversarial conditions, critical vulnerabilities were identified within minutes. The median time to first critical failure was 16 minutes, with 90 percent of systems compromised in under 90 minutes. In one case, defences were bypassed in a single second.
Deepen Desai, EVP Cybersecurity at Zscaler, said that in the era of Agentic AI, intrusions can progress from discovery to lateral movement and data theft in minutes. He stated that traditional defences are insufficient against AI-driven attacks and emphasised the need for intelligent Zero Trust architectures.
ThreatLabz warns that autonomous and semi-autonomous AI agents are likely to assume roles in reconnaissance, exploitation and lateral movement, enabling cyberattacks to scale at machine speed rather than human speed.
AI usage and supply chain exposure
AI and ML activity increased 91 percent year-over-year across more than 3,400 applications. The report cautions that rapid adoption has created visibility gaps, with many organisations lacking a comprehensive inventory of AI models interacting with enterprise data.
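Closing that visibility gap typically starts from traffic data the organisation already has. The sketch below is a minimal, hypothetical illustration of how an AI application inventory could be derived from outbound proxy logs; the log format, column name and application list are assumptions, not details from the report.

```python
from collections import Counter
import csv

def build_ai_inventory(log_path: str, ai_apps: set[str]) -> Counter:
    """Count outbound transactions per AI/ML application seen in a proxy log."""
    inventory = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):   # assumes one row per outbound request
            if row["app"] in ai_apps:   # the "app" column is an assumption
                inventory[row["app"]] += 1
    return inventory

# Example usage (log file and application names are illustrative only):
# print(build_ai_inventory("proxy_log.csv", {"ChatGPT", "Codeium"}).most_common(10))
```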
The AI supply chain, including shared model files, is identified as a growing attack surface. Weaknesses in these components may allow attackers to move laterally into core systems.
Standalone AI tools such as ChatGPT and Codeium recorded significant transaction volumes in 2025. At the same time, embedded AI features within enterprise SaaS platforms have emerged as a source of unmanaged risk, particularly when activated by default and not monitored by legacy security systems.
Among analysed platforms, Atlassian was cited as a leading source of embedded AI activity due to AI-enabled features within products such as Jira and Confluence.
Data concentration and DLP violations
Enterprise data transfers to AI and ML applications reached 18,033 terabytes in 2025, a 93 percent year-over-year increase. The report equates this volume to approximately 3.6 billion digital photos.
A significant number of Data Loss Prevention (DLP) policy violations were recorded in connection with AI platforms. ChatGPT alone was linked to 410 million DLP violations, including attempts to share sensitive information such as Social Security numbers, source code and medical records.
The report frames this scale of data movement as transforming AI applications into concentrated repositories of corporate intelligence and potential targets for cyber espionage.
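For illustration, the sketch below shows the kind of pattern matching a DLP policy might apply to an outgoing AI prompt before it leaves the enterprise. The rules shown are simplified assumptions; production DLP engines combine regular expressions with exact-data matching and ML-based classifiers.

```python
import re

# Hypothetical, simplified DLP-style rules.
DLP_RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # e.g. 123-45-6789
    "source_code": re.compile(r"\b(def|class|import)\s+|#include"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of DLP rules an outgoing prompt would violate."""
    return [name for name, rule in DLP_RULES.items() if rule.search(prompt)]

# Example: a prompt containing an SSN-like string is flagged before it
# reaches the AI application.
print(scan_prompt("Summarise the claim for SSN 123-45-6789"))  # ['ssn']
```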
Zero Trust as a security framework
The report concludes that legacy security tools such as firewalls and VPNs are insufficient in dynamic AI environments. It outlines Zero Trust principles, including continuous verification, least-privileged access, traffic inspection, data classification and AI-driven segmentation, as necessary controls.
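As a rough illustration of how two of those principles, continuous verification and least-privileged access, can be expressed in code, the sketch below uses a hypothetical deny-by-default policy check; none of the names reflect an actual Zscaler API or product.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    app: str              # destination AI application
    data_class: str       # e.g. "public", "internal", "restricted"
    device_verified: bool

# Hypothetical least-privilege policy: the data classes each application may receive.
ALLOWED = {
    "approved-ai-assistant": {"public", "internal"},
    "unsanctioned-ai-tool": {"public"},
}

def authorize(req: Request) -> bool:
    """Deny by default; verify the device and the data class on every request."""
    if not req.device_verified:                            # continuous verification
        return False
    return req.data_class in ALLOWED.get(req.app, set())   # least-privileged access

print(authorize(Request("analyst", "approved-ai-assistant", "internal", True)))   # True
print(authorize(Request("analyst", "unsanctioned-ai-tool", "restricted", True)))  # False
```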
The ThreatLabz 2026 AI Security Report draws on analysis of 989.3 billion AI and ML transactions generated by approximately 9,000 organisations across the Zscaler Zero Trust Exchange platform during 2025. The data provides insight into how AI is being used and restricted in enterprise environments worldwide.