THE ECHO

One story. Gone deep.

The Assumptions Nobody Wrote Down

Three stories broke in the first half of March that look unrelated. They're not.

First: AWS data centers in the UAE and Bahrain took structural damage from drone strikes. Multiple availability zones. Regional failure. The company warned customers that "the broader operating environment in the Middle East remains unpredictable."

Second: Anthropic sued the Pentagon after being designated a supply chain risk — the same designation designed for Huawei and foreign adversaries — for refusing to remove safety guardrails that prevent Claude from being used for domestic mass surveillance or autonomous weapons.

Third: CISA's acting director uploaded sensitive government documents to ChatGPT. The person coordinating America's cybersecurity defense violated the most basic data handling principles CISA itself publishes.

These aren't breach stories. They're governance stories. And they all trace back to the same structural flaw: the assumptions nobody wrote down.

AWS customers built disaster recovery plans that assumed geopolitical conflict wouldn't take out multiple availability zones simultaneously. That wasn't written in the SLA. It wasn't part of the risk assessment. It was just... assumed. Because it had always been true. Until it wasn't.

The Pentagon assumed that designating domestic AI companies as supply chain risks would pressure them to comply. Anthropic assumed the government wouldn't weaponize national security designations against American companies for maintaining ethics guardrails. Both assumptions were invisible — until they collided.

CISA's acting director assumed the person in charge of cybersecurity would follow the rules their own agency writes. That's not an unreasonable assumption. But it's also not documented anywhere. And the moment the person at the top stopped following the rules, the assumption failed — and with it, institutional credibility.

The pattern is simple. Someone made a decision based on an assumption. The assumption was never questioned because it seemed obvious. Then the environment changed. The assumption broke. And the failure propagated through every process that depended on it.

This is why governance beats control every time. Control is what you enforce. Governance is what you design for when enforcement fails.

You can't control whether a drone strike takes out your cloud region. But you can design your disaster recovery plan to survive it — if you document the assumption that it won't, and then test what happens when that assumption is wrong.

You can't control whether the government will weaponize supply chain designations. But you can decide in advance what principles you'll defend and what that choice will cost — if you document the assumption that market access and ethics can coexist, and then plan for the moment they don't.

You can't control whether your acting director will follow your own policies. But you can design systems that don't rely on one person's judgment — if you document the assumption that leadership sets the example, and then build guardrails for when they don't.

Most organizations don't fail because they made the wrong decision. They fail because they made an invisible decision — one they never knew they were making, because the assumption underneath it was never written down.

Your disaster recovery plan assumes cloud regions are independent. Your vendor risk assessment assumes supply chain designations target foreign adversaries. Your security policies assume leadership will follow them. Read those sentences again. Those are assumptions, not facts. And nobody documented them.

The organizations that survive the next decade aren't the ones with the best controls. They're the ones that documented their assumptions, tested what happens when they break, and designed governance that doesn't rely on any single assumption staying true forever.

Because here's the thing about assumptions: they're invisible until they fail. And by then, you're not fixing an assumption. You're managing a disaster.

Sources:

SIGNAL CHECK

What else matters this week.

LiteLLM Supply Chain Attack: 97 Million Monthly Downloads Compromised

This is the supply chain attack we've been warning about since the AI boom started. TeamPCP — a sophisticated threat actor with a track record — compromised Trivy (a security scanner) first. Used harvested CI/CD credentials to compromise Checkmarx. Used those credentials to publish malicious LiteLLM packages (v1.82.7 and 1.82.8) to PyPI. LiteLLM is the default LLM routing proxy for DSPy, CrewAI, MLflow — 97 million monthly downloads. It acts as a credential aggregator for OpenAI, Anthropic, Google, Azure, and AWS keys. The malware used a .pth file technique that executes on every Python process startup. Over 600 public projects had unpinned dependencies. The only reason we caught it? A bug in the attacker's code caused a fork bomb. If they'd written better malware, this could have run silently for weeks. Pin your dependencies. And if you're deploying AI frameworks, audit your supply chain like your API keys depend on it — because they do. Sources: Datadog Security Labs, ReversingLabs

Hidden README Instructions Make AI Coding Agents Leak Data 85% of the Time

ETH Zurich, MATS, and Anthropic researchers tested 500 modified README files across AI coding agents. 84% success rate for direct prompt injections. 91% when instructions were buried two links deep. Fifteen human reviewers examined the same files. Zero detected the malicious instructions. Every major AI coding agent — Claude, Codex, Gemini — followed instructions in documentation without question. The agents created authentication middleware but never applied it to WebSocket endpoints. They shipped SQL injection vulnerabilities while passing security scans. This isn't a bug. It's how these tools work. They optimize for "does it work?" — not "is it safe?" Your developers are using AI pair programmers that treat READMEs as executable instructions. That's your new attack surface. Sources: CSA Research Note, arXiv Paper

Delve Exposed: Fake Compliance as a Service

Investigation exposed Delve — a SOC 2 audit firm — for systematically producing fabricated audit evidence. Generated fake board meetings. Tests that never happened. "US-based auditors" were Indian certification mills operating through empty shell companies. Clients include NASDAQ-traded companies now holding potentially fraudulent certifications, exposing them to criminal liability under HIPAA and fines under GDPR. I've been saying this for 15 years: SOC 2 is largely theater. What makes Delve special isn't that they cheated — it's the scale. If you used Delve, call your lawyer before you call your auditor. And if you're treating SOC 2 as proof of security instead of evidence of process — this is your wake-up call. Compliance is a byproduct of good governance. It's not a substitute. Sources: Inc.com Investigation, TechCrunch, ComplianceHub Wiki

German Police Make 3 AM House Calls to Warn Companies About Zero-Days

German police visited companies in person over the weekend — some at 3 AM — to warn about unpatched Windchill and FlexPLM zero-days. Timing suggests active exploitation. This is simultaneously the most German thing ever and the most effective incident response I've heard of all year. Meanwhile, CISA publishes advisories to mailing lists that 90% of affected organizations don't subscribe to and calls it a day. When the gap between "we told you" and "you actually knew" is this wide, notification theater isn't risk management. It's liability coverage. The Germans showed up at the door. That's what governance looks like when someone actually cares whether the message landed. Sources: Heise Online, Netcrook

THE NOISE

Not every signal needs action.

"You Need a Bigger Security Budget"

Every breach headline triggers the same reflex: spend more. More tools, more headcount, more budget. IBM's latest report shows the average breach costs $4.88M — and the vendor pitch decks land before the ink dries. But the three failures in this week's Echo had nothing to do with budget. AWS customers weren't under-resourced — they had undocumented assumptions. Anthropic didn't lose because they were outspent — they got blindsided by a political risk nobody modeled. CISA's acting director didn't upload sensitive docs to ChatGPT because the agency lacked a DLP tool. The problem was governance, not dollars. Throwing money at security without documenting what you're assuming is like buying a bigger lock for a door you forgot to close. The organizations that failed this month weren't underfunded. They were under-governed.

ONE QUESTION

No answer. Just the question.

What assumptions does your security program rely on that nobody's ever written down?

Michael Faas is a fractional CTO/CISO helping growth-stage companies navigate complexity without building bloated security programs. More at echocyber.io.

Keep reading