THE ECHO
One story. Gone deep.
No Malware. No Exploit. Just a Prompt.
Johann Rehberger spent August 2025 breaking AI coding assistants. Not theoretically — he actually broke them. Over two dozen zero-day vulnerabilities across multiple vendors. Zero-click data exfiltration. Arbitrary remote code execution. Memory persistence that survives session restarts. All through indirect prompt injection.
No malware. No binaries. Just prompts.
He formalized the findings into what he calls the AI Kill Chain — a framework for how attackers move through agentic AI systems the way MITRE ATT&CK maps traditional intrusion paths. He presented it at HITCON, the Chaos Communication Congress, and half a dozen other conferences. The paper is called "Agentic ProbLLMs." It's free. It should be required reading.
Here's why.
Your AI coding assistant has access to your codebase, your environment variables, your API keys, and often your git credentials. It reads files, writes files, and executes commands. You gave it that access on purpose — that's the whole point. The same permissions that make it useful are exactly what make it exploitable.
Rehberger demonstrated AgentHopper — a working AI virus that propagates from agent to agent. SpAIware — persistent surveillance injected into an agent's memory that survives across sessions. Delayed tool invocation — malicious payloads that sit dormant until triggered by something as routine as a conversation summary.
Not theoretical. Working.
The industry's response has been uneven. Some vendors shipped fixes within days. Others didn't respond. And that's the pattern Rehberger calls out most sharply: the normalization of deviance. We're accepting insecure design as the price of capability, then shifting blame to users when things go wrong.
This is a feedback loop problem. AI agents are deployed faster than security frameworks can evaluate them. The agents get more powerful. The access gets broader. The attack surface grows. And the governance to manage it doesn't exist yet — NIST's AI RMF, the EU AI Act, and ISO 42001 contain zero mentions of "agent" or "agentic AI."
The skill floor for exploiting AI agents is a well-crafted prompt. Plan accordingly.
Sources: AI Kill Chain Paper (Zenodo), Embrace The Red
SIGNAL CHECK
What else matters this week.
TOAD Attacks Surge 127% — A Phone Number Your Email Security Can’t Stop — One in four phishing emails now uses Telephone Oriented Attack Delivery. No malicious links, no attachments — just a phone number. The victim calls it, a human walks them through installing malware or handing over credentials. Your email security gateway is architecturally blind to this because there’s nothing malicious in the email itself. The most effective attack vector of 2026 is a phone number. We spent billions on email security that can’t see it. via StrongestLayer
VMware Aria Operations RCE: Your Monitoring Platform Is the Attack — CVE-2026-22719, added to CISA KEV, actively exploited. Unauthenticated remote code execution on VMware’s infrastructure monitoring platform — the tool with privileged access to your entire VMware estate. Topology maps, credential stores, every VM it monitors. When your monitoring platform gets compromised, can you even detect it? via CISA KEV, The Hacker News
Marquis Sues SonicWall Over Ransomware — 74 Banks Impacted — A fintech company is suing SonicWall, alleging security failures in SonicWall’s own cloud backup service led to a ransomware attack that cascaded to 74 American banks. First major case of a customer suing the security vendor instead of quietly settling. If Marquis prevails, vendor contracts across the industry will change overnight. The era of “shared responsibility means it’s always the customer’s fault” might finally be ending. via TechCrunch, Dark Reading
THE NOISE
Not every signal needs action.
"Iran Cyber Retaliation Will Hit Every American Business" — Nation-state cyber warfare is real and the threat is elevated. But the hot takes flooding LinkedIn this week — breathless warnings that every SMB is about to be targeted by Iranian APTs — aren't actionable intelligence. They're fear dressed up as thought leadership. If you're not in energy, financial services, or critical infrastructure, the direct targeting risk to your organization hasn't materially changed. What has changed: CISA is running on a skeleton crew during the highest threat period in years. That's the governance story worth paying attention to — not the panic.
ONE QUESTION
No answer. Just the question.
Your AI coding assistant can read your secrets, write your code, and execute commands on your machine. When was the last time someone on your team asked what happens if it's compromised?
Michael Faas is a fractional CTO/CISO helping growth-stage companies navigate complexity without building bloated security programs. More at echocyber.io.
