THE ECHO

One story. Gone deep.

The Side With the Cheapest Verifier Wins

Sergej Epp — a CISO who’s been thinking about security economics longer than most vendors have existed — published a concept this month that deserves to be read by every security leader and every executive who funds one.

He calls it Verifier’s Law. It’s simple enough to fit on a napkin, and it explains more about why security is hard than any framework, maturity model, or vendor pitch deck ever will.

Here it is: In any security contest, the side with the cheapest verifier wins.

Offense has the cheapest verifier in existence. Did the exploit work? Yes or no. Binary. Instant. No ambiguity. An attacker tries something and knows immediately whether it succeeded. Try again. Try different. The feedback loop is measured in seconds.

Defense has the most expensive verifier imaginable. Are we secure? Maybe. Probably. We passed the audit. We haven’t been breached that we know of. Nobody’s called from the FBI yet. There is no moment — ever — where a defender gets a clean signal that says "you’re safe." You’re proving a negative, on a rolling basis, forever.

That asymmetry has always existed. What changed is AI.

When AI assists offense, it accelerates the side that already had cheap verification. Vulnerability discovery gets faster. Exploit generation gets automated. A motivated amateur with a coding assistant can find real bugs in production software because the feedback loop is tight — try it, did it crash, try again. The cost of verification drops toward zero.

When AI assists defense, it helps the side with expensive, ambiguous verification. Better scanning. Faster correlation. More alerts. But it still can't answer the question that matters — are we actually secure? — because that question is structurally unanswerable. No amount of AI makes proving a negative cheap.

AI amplifies whichever side has the tighter feedback loop. And offense was already winning that race.

The numbers make this visceral.

Google’s OSS-Fuzz project added AI-assisted fuzzing last year. In a single month, it found 26 new vulnerabilities across open-source projects — including one in OpenSSL that had been hiding for years. Each one verified instantly: the program crashed or it didn’t. That’s the offense side. Cheap verification at machine speed.

Now the defense side. HackerOne's latest data shows a median resolution lifecycle of 34 days from the moment a vulnerability is reported to when it's actually fixed. Not found — reported. Someone already did the hard part. Handed it to you on a silver platter. And it still takes 34 days to close.

IBM’s 2025 Cost of a Data Breach Report puts the average breach lifecycle — time to identify and contain — at 241 days. That’s actually a nine-year low. The best we’ve ever done. And it’s still eight months.

Offense verifies in seconds. Defense verifies in months. AI is compressing one side of that equation toward zero. The other side barely moved.

The organizations that survive this aren’t the ones with the biggest security budgets. They’re the ones with the tightest feedback loops. The ones that can answer "what changed in our environment today?" and actually mean today — not last month when the scanner ran. Not last quarter when the auditor showed up. Today.
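Answering "what changed today?" doesn't require an exotic platform — the core of a tight defensive loop can be as simple as snapshotting your environment daily, hashing it, and diffing on mismatch. A minimal sketch (the inventory fields here are hypothetical; a real snapshot would come from your CMDB, cloud APIs, or scanner exports):

```python
import hashlib
import json

def snapshot_digest(inventory: dict) -> str:
    """Canonicalize a snapshot and hash it, so 'anything changed?'
    becomes a single cheap comparison."""
    canonical = json.dumps(inventory, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def what_changed(previous: dict, current: dict) -> dict:
    """Field-level diff: the actual answer to 'what changed today?'"""
    keys = previous.keys() | current.keys()
    return {
        k: (previous.get(k), current.get(k))
        for k in keys
        if previous.get(k) != current.get(k)
    }

# Hypothetical daily snapshots of one host's security-relevant state.
yesterday = {"open_ports": [22, 443], "admin_users": ["alice"]}
today = {"open_ports": [22, 443, 3389], "admin_users": ["alice", "bob"]}

# The cheap binary verifier: did anything change since yesterday?
if snapshot_digest(yesterday) != snapshot_digest(today):
    print(what_changed(yesterday, today))
```

The point isn't the fifteen lines of code. It's that the comparison runs today, against today's state — not whenever the quarterly scan comes back.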

Verifier’s Law doesn’t tell you what tool to buy. It tells you where to look. If your security program’s feedback loop is measured in weeks or months, you’re not defending anything. You’re documenting what already happened.

And that’s the most expensive verification of all.

SIGNAL CHECK

What else matters this week.

AWS Data Center Strikes: The DR/BCP Wake-Up Call

The details matter more than the headline. AWS reported "structural damage, disrupted power delivery, and fire suppression activities that resulted in additional water damage" across facilities in both the UAE and Bahrain. This wasn’t a single point of failure — it was a regional failure. AWS itself warned that "the broader operating environment in the Middle East remains unpredictable." If your disaster recovery plan treats cloud regions as independently resilient — and most do — the assumption that geopolitical conflict won’t take out multiple availability zones simultaneously just got disproven. For any organization with Middle East operations or customers, multi-provider and multi-region are no longer best practices. They’re requirements. via Reuters

Anthropic Sues the Federal Government

This one has legs. On March 9, Anthropic filed two federal lawsuits alleging the Pentagon illegally retaliated against the company for maintaining AI safety principles. The specifics: Anthropic refused to allow Claude to be used for domestic mass surveillance of Americans or entirely autonomous weapons without guardrails. The Pentagon’s response was a Friday-afternoon ultimatum — drop the restrictions by 5:01 PM ET or lose every federal contract. Anthropic chose principles over revenue. Whatever your politics, the precedent matters: the government just demonstrated it will punish vendors for maintaining safety guardrails. That changes the incentive structure for every AI company deciding how much access to give and how many guardrails to keep. via NPR, Reuters

CISA’s Acting Director and the ChatGPT Incident

Buried in the CISA staffing story is a detail that deserves its own spotlight. Acting CISA director Madhu Gottumukkala reportedly uploaded sensitive government documents to ChatGPT. The person responsible for coordinating America’s cybersecurity defense — the top of the org chart — violated the most basic data handling principles that CISA itself publishes guidance on. It’s not just ironic. It’s a signal about institutional decay. When the person in charge doesn’t follow the rules their agency writes, the rules have already stopped mattering. via TechCrunch

THE NOISE

Not every signal needs action.

"AI Will Replace Security Teams"

Every time a new AI capability drops, the hot take machine fires up: security analysts are obsolete, SOCs are dead, AI will handle it all. No. AI accelerates feedback loops — it doesn’t replace judgment. An AI can tell you a vulnerability exists faster. It can’t tell you whether patching it tomorrow or next quarter is the right business decision for your specific risk profile. The organizations replacing security judgment with AI automation are building the same blind spots they had before, just faster. The analyst isn’t obsolete. The analyst with a 30-day feedback loop is.

ONE QUESTION

No answer. Just the question.

If your adversary verifies success in seconds and your program verifies its defenses quarterly — who finds out first when something breaks?

Michael Faas is a fractional CTO/CISO helping growth-stage companies navigate complexity without building bloated security programs. More at echocyber.io.

Keep reading