The Truth About AI Cyber Attacks: Velocity, Not Novelty

Matt Bromiley
November 25, 2025

These days, the headlines are hard to ignore. AI is everywhere, and apparently it's being used maliciously far more often than for good. But we have to ask: is it, really? Let's look at some of the most recent examples:

  • In November 2025, Anthropic disclosed what it assessed to be the first documented AI-orchestrated cyber espionage campaign. Its research identified a China-nexus threat actor whose operation had AI perform 80-90% of the work, with human operators stepping in at a handful of decision points.
  • Google’s Threat Intelligence Group (GTIG) tracks malware families like PROMPTFLUX and PROMPTSTEAL that query large language models (LLMs) mid-execution, dynamically generating scripts and obfuscating their own code.
  • Underground marketplaces now offer tools like SpamGPT, WormGPT, and Xanthorox AI as turnkey solutions for phishing, malware development, and full attack chain management.

You’ll see the message everywhere: AI has fundamentally changed the threat landscape. 

Except, it hasn’t. Not in the way that headlines suggest.

Faster to the Bar - But the Bar Hasn’t Moved

What AI provides adversaries is speed and scale, not novelty. The tactics and techniques remain the same. The Anthropic case involved reconnaissance, vulnerability exploitation, credential harvesting, lateral movement, and data exfiltration. These phases have defined cyber intrusions for decades. Google's reporting on state-sponsored groups, including APT28, APT41, and APT42, shows threat actors leveraging AI across their operations, from crafting phishing lures to developing C2 infrastructure and obfuscating malicious code. These aren't new attack vectors. Rather, they're established techniques simply executed more efficiently.

The MITRE ATT&CK framework doesn’t need a new matrix for “AI-enabled adversaries”. The techniques are already there. What’s changed is the velocity at which threat actors can move through them.
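The point that ATT&CK already covers these operations can be made concrete. Below is a minimal sketch of a phase-to-technique lookup; the phase names and the mapping choices are illustrative, though the technique IDs themselves come from the Enterprise ATT&CK matrix:

```python
# Intrusion phases like those named in the reporting map directly onto
# existing Enterprise ATT&CK technique IDs -- no new matrix required.
PHASE_TO_ATTACK = {
    "reconnaissance": "T1595",              # Active Scanning
    "vulnerability exploitation": "T1190",  # Exploit Public-Facing Application
    "credential harvesting": "T1003",       # OS Credential Dumping
    "lateral movement": "T1021",            # Remote Services
    "data exfiltration": "T1041",           # Exfiltration Over C2 Channel
    "phishing": "T1566",                    # Phishing
}

def techniques_for(phases):
    """Return the ATT&CK technique IDs that already cover the observed phases."""
    return [PHASE_TO_ATTACK[p] for p in phases if p in PHASE_TO_ATTACK]
```

Whether the operator behind each phase was a human or a model, the detection content keyed to these techniques applies unchanged.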

Consider the Anthropic disclosure: at peak activity, the AI-driven operation generated thousands of requests, often multiple per second. That throughput would be impossible for human operators alone. But the attack still depended on exploiting vulnerabilities, harvesting credentials, and establishing persistence: the same objectives that defenders have been countering for years.
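That velocity is itself an observable. As a sketch (the event format, window, and threshold here are hypothetical, not taken from any of the cited reports), a defender could flag sessions issuing requests at machine speed rather than human pace:

```python
from collections import deque

def high_velocity_sessions(events, window_s=1.0, threshold=5):
    """Flag session IDs exceeding `threshold` requests in any sliding
    `window_s`-second window.

    `events` is an iterable of (timestamp_seconds, session_id) pairs,
    assumed sorted by timestamp. Sustained multi-request-per-second
    activity is one signal that separates automated operations from
    human-paced ones.
    """
    recent = {}        # session_id -> deque of timestamps inside the window
    flagged = set()
    for ts, sid in events:
        q = recent.setdefault(sid, deque())
        q.append(ts)
        while q and ts - q[0] > window_s:
            q.popleft()            # drop timestamps that fell out of the window
        if len(q) > threshold:
            flagged.add(sid)
    return flagged
```

A rate heuristic like this is crude on its own, but it illustrates the broader point: faster adversaries leave louder telemetry.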


The Underground Economy Has Matured

Google's research highlights a maturing marketplace for AI-enabled offensive tools. Among the offerings:

  • SpamGPT bundles AI-generated phishing content with SMTP cracking training modules,
  • PromptLock uses local LLMs to generate ransomware scripts dynamically, adapting to each target environment, and
  • Xanthorox AI provides a modular, offline platform for managing complete attack chains.

These tools lower the barrier to entry for less sophisticated actors. That’s the real concern; the talent floor for conducting effective attacks has dropped. But the attacks themselves follow established patterns: phishing for initial access, credential theft, privilege escalation, data exfiltration. The delivery mechanism got easier. The kill chain didn't change.

This is consistent with what we've seen historically when new enabling technologies emerge. Exploitation frameworks like Metasploit and Cobalt Strike democratized capabilities that previously required significant expertise. The defender's response wasn't to abandon fundamentals; it was to double down on detection, visibility, and response capabilities that addressed the underlying techniques regardless of the tool used to execute them.

The Fundamentals Still Win

If the tactics haven't changed, neither should our defensive priorities. The organizations best positioned to withstand AI-enabled adversaries are those with mature implementations of core security principles:

  • Defense in Depth remains essential. Layered controls ensure that no single point of failure, whether exploited by a human or an AI, results in complete compromise. The Anthropic case succeeded against some targets but not others. The difference wasn't whether the target had "AI defenses." It was whether they had sufficient depth in their security architecture to detect and contain the intrusion before objectives were achieved.
  • Least privilege limits blast radius. AI can accelerate credential harvesting and lateral movement, but it can only move through the access paths that exist. Tightly scoped permissions, segmented networks, and just-in-time access controls constrain what an attacker—human or automated—can reach even after initial compromise.
  • Detection and visibility matter more, not less. When adversaries move faster, defenders need to detect faster. This means investing in logging, telemetry, and detection engineering that surfaces malicious behavior regardless of how it was generated. AI-written malware still executes on endpoints, still generates network traffic, still touches files and registries. The observables are there if you're collecting them.
  • Identity hygiene is non-negotiable. Credential theft appeared in every major campaign referenced in these reports. Phishing, credential harvesting, and token abuse remain primary attack vectors. Strong authentication, credential rotation, and anomaly detection on identity systems directly counter these techniques—whether the phishing email was written by a human or an LLM.
  • Patching discipline closes the doors AI is trained to find. Vulnerability exploitation remains a core technique. AI may help adversaries identify and weaponize vulnerabilities faster, but it can't exploit a vulnerability that's been remediated. Reducing time-to-patch remains one of the highest-leverage defensive investments available.
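Patching discipline, like the other fundamentals above, is only as good as your ability to measure it. A minimal sketch of a time-to-patch metric, assuming a hypothetical record format with disclosure and remediation dates:

```python
from datetime import date

def mean_time_to_patch(records):
    """Mean days from disclosure to remediation across patched findings.

    `records` is a list of dicts with `disclosed` and `patched` dates,
    where `patched` is None for still-open findings. Hypothetical
    schema, shown only to make "reducing time-to-patch" a number a
    team can track and drive down.
    """
    deltas = [(r["patched"] - r["disclosed"]).days
              for r in records if r["patched"] is not None]
    return sum(deltas) / len(deltas) if deltas else None
```

Tracking this per asset tier (internet-facing first) keeps the highest-leverage remediation work visible.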

The Defender's Opportunity

There's a symmetry worth noting: the same AI capabilities that benefit attackers are available to defenders. Anthropic noted that their threat intelligence team used Claude extensively to analyze the data generated during their investigation. Google highlighted investments in AI-driven vulnerability discovery and automated patching. Security operations teams can leverage AI for alert triage, detection engineering, and response automation.

The organizations that will navigate this landscape successfully aren't those waiting for AI-specific defensive products. They're the ones investing in foundational security capabilities, and increasingly augmenting those capabilities with AI on the defensive side.
