Kaspersky maps AI-driven cybersecurity shifts in APAC
Asia Pacific is no longer a follower in the global artificial intelligence race. The region is outpacing global averages, with 78 per cent of surveyed professionals using AI at least weekly, compared to 72 per cent worldwide. This widespread adoption is reshaping daily workflows and accelerating digital transformation across markets.
What sets Asia Pacific apart is the way AI adoption is taking shape. Rather than being driven only by enterprise initiatives, AI is spreading from the ground up. Hyper-connected consumers, extensive device penetration, and younger, tech-savvy populations are integrating AI into everyday experiences well before formal enterprise rollouts occur.
This bottom-up momentum, supported by strong investment, CEO-led strategies, and fast-growing digital economies, is positioning Asia Pacific as a proving ground for what are described as “AI Frontier” companies. The region is increasingly where new enterprise transformation models are tested first.
AI adoption creates new cybersecurity realities
As AI adoption accelerates, its impact extends beyond productivity and customer experience. For cybersecurity leaders, Asia Pacific's pace of innovation presents both opportunity and risk. The same AI technologies driving transformation are also changing how cyber threats are created, automated, and deployed.
Kaspersky experts outline how AI development is reshaping the cybersecurity landscape in 2026 for both individuals and organisations. Large language models are influencing defensive capabilities while also expanding the toolkit available to threat actors.
Deepfakes move into the mainstream
Deepfakes are becoming a mainstream technology. Organisations are increasingly discussing synthetic-content risks and training employees to reduce exposure, and as deepfake volumes rise, so does the range of formats in which they appear.
Awareness is growing not only within enterprises but also among individual users, who are encountering fake content more frequently and gaining a better understanding of these threats. As a result, deepfakes are becoming a permanent part of security planning, requiring structured training and internal policies.
Quality improves as barriers fall
The quality of deepfakes is expected to improve further, particularly in audio. While visual realism is already high, realistic audio remains an area of rapid advancement.
Content generation tools are also becoming easier to use. Non-experts can now create mid-quality deepfakes with minimal effort. This accessibility is driving a steady rise in average quality and expanding the scope for misuse by cybercriminals.
Labelling AI-generated content remains unresolved
Efforts to develop reliable systems for labelling AI-generated content are expected to continue. There are still no unified standards for identifying synthetic content, and existing labels can be easily removed or bypassed, particularly when open-source models are used.
This gap is likely to prompt new technical and regulatory initiatives aimed at addressing identification and accountability challenges.
Online deepfakes evolve but remain specialised
Real-time face and voice swapping technologies continue to improve, but they still require advanced technical skills to deploy. While mass adoption remains unlikely, risks in targeted scenarios are increasing.
Rising realism and the ability to manipulate video through virtual cameras are making such attacks more convincing, particularly against focused, high-value targets.
Open models expand misuse potential
Open-weight AI models are approaching closed systems in many cybersecurity-related tasks. While closed models maintain stronger safeguards and controls, open-source alternatives are rapidly narrowing the performance gap.
Because open models circulate with fewer restrictions, they create additional opportunities for misuse. This is blurring the distinction between proprietary and open systems, both of which can be used effectively for malicious purposes.
Distinguishing real from fake grows harder
The boundary between legitimate and fraudulent AI-generated content is becoming increasingly unclear. AI can already produce convincing scam emails, realistic visual identities, and high-quality phishing pages.
At the same time, major brands are adopting synthetic content in advertising, making AI-generated material appear familiar and normal. This trend is complicating detection for both users and automated systems.
AI spreads across the attack chain
AI is becoming a cross-cutting tool in cyberattacks, supporting multiple stages of the kill chain. Threat actors are already using large language models to write code, build infrastructure, and automate operations.
Further advances are expected to extend AI use into preparation, communication, vulnerability probing, and deployment. Attackers are also working to hide signs of AI involvement, making analysis more difficult.
Security operations adapt to AI
AI is also becoming more common in security analysis. Agent-based systems are expected to continuously scan infrastructure, identify vulnerabilities, and collect contextual data, reducing manual workload.
As a result, security operations centre teams are shifting from searching for data to making decisions based on prepared insights. Security tools are also moving toward natural-language interfaces, replacing complex technical queries with prompts.
“AI is reshaping cybersecurity from both sides. Attackers are using it to automate attacks, exploit vulnerabilities, and create highly convincing fake content. At the same time, defenders are applying AI to scan systems, detect threats, and make faster, smarter decisions. AI is a powerful tool for both offense and defense, and the ability to manage it safely will definitely influence the future of cybersecurity,” said Vladislav Tushkanov, Research Development Group Manager at Kaspersky.
Adrian Hia, Managing Director for Asia Pacific at Kaspersky, said Asia Pacific is setting the global pace for AI adoption, with consumers and enterprises advancing faster than any other region. He noted that this momentum is creating opportunity while also redefining how cyber threats emerge and scale, adding that organisations can rely on Kaspersky's experience to strengthen their defences as they navigate this shift.
A defining phase for cybersecurity
The Kaspersky AI-driven cybersecurity trends in Asia Pacific highlight a region where innovation and risk are advancing together. As AI becomes embedded across digital ecosystems, managing its impact safely is emerging as a defining challenge for the future of cybersecurity.