/dqc/media/media_files/2025/11/24/how-ai-is-rewriting-the-ransomware-playbook-2025-11-24-11-25-31.png)
How AI is rewriting the ransomware playbook
Time for organisations in India to rethink their cybersecurity strategy
The era of slow, manually executed cyberattacks is disappearing. Ransomware today is becoming faster, cheaper, and significantly more scalable. What is driving this shift is the rapid adoption of generative AI by threat actors. GenAI is now woven into multiple stages of the attack lifecycle, accelerating everything from reconnaissance to extortion and reshaping the underlying economics of cybercrime. While agentic AI remains on the horizon, the groundwork being laid today signals a future where parts of a ransomware campaign could be conducted with minimal human intervention.
An IBM report suggests that roughly 16% of breaches in 2025 involved some form of attacker-used AI, most commonly in AI-generated phishing campaigns, which accounted for 37% of AI-related breaches, and deepfake-enabled impersonation attacks, which made up another 35%. At the same time, Tenable’s State of Cloud and AI Security 2025 report shows that many organisations are deploying AI internally without the guardrails needed to manage the associated risks. Most still lack strong access controls, review mechanisms or formal oversight for the AI tools spreading rapidly through their business. More than half rely primarily on compliance frameworks such as the NIST AI RMF or the EU AI Act to steer their approach. These frameworks provide important guidance, but they are not built to keep pace with the speed or complexity of modern AI adoption. This governance gap has become fertile ground for attackers, who routinely exploit misconfigurations, unsecured APIs and exposed credentials.
AI is reshaping ransomware economics
Ransomware-as-a-Service groups have embraced AI primarily as a force multiplier. They are not yet unleashing fully autonomous agents capable of independently navigating an entire kill chain, but they are using AI to amplify speed and volume. Tasks that once required specialised skills such as writing tailored phishing messages, crafting plausible lures, modifying malware variants or analysing large sets of public data, can now be executed at scale by operators with far less expertise. The ability to gather intelligence quickly, identify high-value targets and generate convincing social engineering content has significantly lowered the barrier to entry for new attackers. In parallel, established groups are using GenAI to shorten the time between initial access and impact, compressing intrusion timelines in ways that strain traditional incident response processes.
Shadow AI Is Becoming a Hidden Liability
As organisations rush to adopt AI for productivity and automation, shadow AI is proliferating at an alarming rate. Employees continue to experiment with AI tools outside the visibility of security teams, often pasting sensitive data into applications that have no formal approval or oversight. Misconfigured copilots can expose confidential information, while poorly secured plugins and integrations create new pathways for intruders. Attackers are already exploiting these weaknesses through prompt injection, manipulating AI systems into leaking data or taking unintended actions simply by feeding them crafted inputs or embedding malicious prompts in emails and documents. These issues are now appearing in breach reports as contributing factors. The challenge is not the AI technology itself, but the absence of governance, ownership and visibility around how it is being used.
Why exposure management is essential
In this environment, relying on traditional security controls is increasingly insufficient. Protecting a modern enterprise requires a precise understanding of the organisation’s true attack surface, including cloud environments, identities, APIs and new AI components threading through the business. Guesswork is no longer defensible. What security leaders need is continuous visibility into all assets, a way to understand which exposures could matter most to the business, and a mechanism to prioritise defensive action before attackers have the chance to exploit weaknesses.
Exposure management provides this capability. It brings together visibility across IT, cloud, identity, OT, IoT and AI systems in a unified view that highlights not just what is vulnerable but what is most important. It moves organisations from reacting to endless alerts to systematically reducing the attack paths that an AI-augmented adversary is most likely to exploit. Just as importantly, exposure management translates complex security issues into clear, measurable metrics that executives and boards can understand. This is a vital shift as regulatory expectations around AI, cyber resilience and operational accountability continue to rise.
AI is undoubtedly reshaping the ransomware playbook, but not in the way sensational narratives imply. GenAI has amplified the scale, speed and sophistication of attackers, while agentic AI is emerging slowly in the background, signalling a coming wave of more autonomous threats. Security leaders cannot settle for plugging holes one by one. They must understand their exposure end-to-end and reduce risk systematically, long before an automated adversary forces the issue. In a landscape where attackers are accelerating, visibility, context and proactive defence are now the only acceptable strategy.
Written by - Rajnish Gupta, MD & Country Manager, Tenable India
Read More:
How Vultr is redefining Cloud for SMBs, developers and AI workloads
Pelorus Technologies: Inside India’s forensics & cybersecurity engine
How Hitachi Vantara empowering green storage for AI-driven enterprises
GPT-5.1, a new chapter in Developer AI with agentic capabilities
/dqc/media/agency_attachments/3bO5lX4bneNNijz3HbB7.jpg)
Follow Us