These days, the headlines are hard to hear. AI is everywhere, and apparently it’s being used for malice far more often than for good. But we have to ask: is it, really? Let’s look at some of the most recent examples:
You’ll see the message everywhere: AI has fundamentally changed the threat landscape.
Except, it hasn’t. Not in the way that headlines suggest.
What AI provides adversaries is speed and scale, not novelty. The tactics and techniques remain the same. The Anthropic case involved reconnaissance, vulnerability exploitation, credential harvesting, lateral movement, and data exfiltration. These phases have defined cyber intrusions for decades. Google’s reporting on state-sponsored groups, including APT28, APT41, and APT42, shows threat actors leveraging AI across their operations, from crafting phishing lures to developing C2 infrastructure and obfuscating malicious code. These aren’t new attack vectors. Rather, they’re established techniques simply executed more efficiently.
The MITRE ATT&CK framework doesn’t need a new matrix for “AI-enabled adversaries”. The techniques are already there. What’s changed is the velocity at which threat actors can move through them.
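The point is easy to verify: every phase named above already has a tactic in the public ATT&CK Enterprise matrix. A minimal illustration (the mapping below covers only the phases discussed here, not the full matrix):

```python
# Intrusion phases from the Anthropic and Google reporting, keyed to
# existing MITRE ATT&CK Enterprise tactic IDs -- no new matrix required.
ATTACK_TACTICS = {
    "reconnaissance": "TA0043",
    "initial_access": "TA0001",       # e.g. phishing lures
    "credential_access": "TA0006",    # credential harvesting
    "persistence": "TA0003",
    "lateral_movement": "TA0008",
    "command_and_control": "TA0011",  # C2 infrastructure
    "exfiltration": "TA0010",
}

def tag_phase(phase: str) -> str:
    """Return the existing ATT&CK tactic ID for an observed intrusion phase."""
    return ATTACK_TACTICS[phase]
```

Nothing in the AI-enabled campaigns falls outside this lookup; only the tempo is new.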
Consider the Anthropic disclosure: at peak activity, the AI-driven operation generated thousands of requests, often multiple per second. That throughput would be impossible for human operators alone. But the attack still depended on exploiting vulnerabilities, harvesting credentials, and establishing persistence. Again: the same objectives that defenders have been countering for years.
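Machine-speed tempo is itself a detection signal. One common way to surface it is rate-based detection over a sliding window; a minimal sketch (the one-second window and five-event threshold are illustrative, not tuned values):

```python
from collections import deque

def burst_detector(window_seconds=1.0, threshold=5):
    """Flag activity whose request rate exceeds a per-window threshold.

    Thresholds here are illustrative; real detections are tuned per
    environment and per data source.
    """
    timestamps = deque()

    def observe(ts):
        timestamps.append(ts)
        # Drop events that have aged out of the sliding window.
        while timestamps and ts - timestamps[0] > window_seconds:
            timestamps.popleft()
        return len(timestamps) >= threshold  # True = machine-speed burst

    return observe

observe = burst_detector(window_seconds=1.0, threshold=5)
# Six requests inside half a second trips the detector; a human-paced
# stream (one request every few seconds) never would.
hits = [observe(t) for t in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5)]
```

The underlying events are ordinary exploitation and credential traffic; only the cadence distinguishes the AI-driven operator.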
{{ebook-cta}}
Google's research highlights a maturing marketplace for AI-enabled offensive tools. Offerings such as:
These tools lower the barrier to entry for less sophisticated actors. That’s the real concern; the talent floor for conducting effective attacks has dropped. But the attacks themselves follow established patterns: phishing for initial access, credential theft, privilege escalation, data exfiltration. The delivery mechanism got easier. The kill chain didn't change.
This is consistent with what we've seen historically when new enabling technologies emerge. Offensive frameworks like Metasploit and Cobalt Strike democratized capabilities that previously required significant expertise. The defender's response wasn't to abandon fundamentals; it was to double down on detection, visibility, and response capabilities that addressed the underlying techniques regardless of the tool used to execute them.
If the tactics haven't changed, neither should our defensive priorities. The organizations best positioned to withstand AI-enabled adversaries are those with mature implementations of core security principles:
There's a symmetry worth noting: the same AI capabilities that benefit attackers are available to defenders. Anthropic noted that their threat intelligence team used Claude extensively to analyze the data generated during their investigation. Google highlighted investments in AI-driven vulnerability discovery and automated patching. Security operations teams can leverage AI for alert triage, detection engineering, and response automation.
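What AI-assisted triage looks like in practice is mostly plumbing around a model score. A minimal sketch, assuming a hypothetical `model_score` field that stands in for whatever classifier or LLM interface a team actually runs (the weighting is illustrative, not a recommendation):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    technique: str          # ATT&CK technique ID, e.g. "T1078"
    asset_criticality: int  # 1 (low) .. 5 (crown jewels)
    model_score: float      # 0..1; supplied by the team's model of choice

def triage(alerts):
    """Order alerts for analyst review, highest-priority first.

    model_score is a stand-in for an AI classifier's output; combining
    it with asset criticality is one simple, illustrative policy.
    """
    return sorted(
        alerts,
        key=lambda a: a.model_score * a.asset_criticality,
        reverse=True,
    )

queue = triage([
    Alert("edr", "T1059", asset_criticality=2, model_score=0.4),
    Alert("idp", "T1078", asset_criticality=5, model_score=0.9),
])
```

The model accelerates prioritization, but the queue it produces still feeds the same detection-and-response workflow defenders already run.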
The organizations that will navigate this landscape successfully aren't those waiting for AI-specific defensive products. They're the ones investing in foundational security capabilities, and increasingly augmenting those capabilities with AI on the defensive side.
Get Gartner's guidance on evaluating and adopting AI SOC agents

