THE ECHO
One story. Gone deep.
Your AI Has a Trust Model. You Didn't Write It.
Pillar Security disclosed a CVSS 10 in Google's Gemini CLI last month. They called it TrustIssues. Google shipped the fix on April 24.
No CVE. Just a GitHub advisory, GHSA-wpqr-6v78-jr5g, and a working exploit chain that started with one public GitHub issue and ended with arbitrary code on the main branch of a Google repo.
Here's the path. An attacker opens a public issue. Google's Gemini-powered triage agent reads it. The issue contains hidden instructions. The agent follows them, bypasses the tool allowlist, pushes code. Full supply chain compromise from a surface anyone on the internet can write to.
The same pattern showed up in at least eight other Google-maintained repos.
The security community calls this prompt injection. The label understates what happened. Gemini CLI didn't fail because of a flaw in the code. It failed because it trusted everything in its context window, including a GitHub issue written by someone who wasn't you, wasn't authorized, and didn't care about your security posture.
That trust decision didn't come from the model. Models are designed to be helpful with whatever context they get. They don't ship with a built-in distinction between "context I should act on" and "content someone planted to manipulate me." That distinction has to come from the organization that deployed the agent.
In most organizations, it hasn't.
When you approved an AI coding tool for your engineering team, you made one decision: whether to allow it. Most security reviews stop there. Known CVEs, SOC 2 report, data flow check. What they don't produce is a document defining what the tool is authorized to trust.
Those are two different decisions. Most organizations only make the first one.
Count what your AI agents read right now. GitHub issues. Emails. Slack messages. Support tickets. Uploaded documents. Web pages they crawl for context. Every one of those is a surface an attacker can write to.
Somewhere in your AI tooling, there's an input path that runs from a surface a stranger can write to, through a context window, to a process you've authorized your agent to execute. That's the architecture. It's true whether you've mapped it or not.
Bruce Schneier has been writing about trust as a design decision for decades. Zero trust is built on this insight: trust isn't a default state. It's a decision you make explicitly, about specific actors, under specific conditions, for specific purposes. That principle didn't change when AI agents arrived. The agents raised the stakes of not following it.
The questions most security reviews ask about AI tools are the right questions for a product risk assessment. Can it exfiltrate data? Does it have excessive permissions? Is it calling unapproved endpoints? What they miss is the question prompt injection actually exploits: what does this agent trust, and did anyone define that before we deployed it?
A document that answered the right question would define what the agent is authorized to treat as trusted context, what counts as untrusted input, and what triggers a human review before the agent acts. That document doesn't exist at most companies. The governance conversation happened at the "allow it or don't" layer. It never reached the "here's how we define trust" layer.
You can patch GHSA-wpqr-6v78-jr5g. Google did. The condition that made it possible, an undefined trust model in an AI agent with real access to real systems, isn't a thing you patch. It's a thing you design.
The Gemini CLI vulnerability closed. The governance gap is still open.
SIGNAL CHECK
What else matters this week.
Windows Shell Zero-Day (CVE-2026-32202): Zero-Click NTLM Leak on the KEV List
A zero-click NTLM hash leak landed on CISA's Known Exploited Vulnerabilities list this week. A crafted .lnk file, opened in Windows Explorer, leaks your NTLM hash to an attacker-controlled server. No execution required. No user action beyond browsing the folder. NTLM hashes crack or pass-the-hash. Either way, you're one lateral move from a significantly worse day.
Zero-click, KEV, actively exploited. Patch this week. If disabling NTLM keeps getting pushed to next quarter, this is the push. via no.security
cPanel CVSS 9.8 + Sorry Ransomware: 44,000 Web Servers, 48 Hours
An authentication bypass in cPanel (CVSS 9.8) was weaponized by the "Sorry" ransomware group. Forty-four thousand servers compromised in 48 hours. The vulnerability had been live for 64 days before a patch shipped. By the time the fix landed, the race was already lost for most administrators who weren't watching.
This is the speed problem made concrete. The organizations that survived had asset visibility and patch pipelines fast enough to move before the ransomware group's automation did. Most SMBs have neither. If you run cPanel hosting, patch now and audit what was accessible before the fix landed. via no.security
Oracle Moves to Monthly Patching Because AI Finds Bugs Faster Than Quarterly Cycles
Oracle is compressing its patch cycle from quarterly to monthly. Stated reason: AI-assisted vulnerability research is surfacing bugs faster than a 90-day window can contain.
One of the largest enterprise software vendors on earth changed its security architecture because AI broke the assumptions its patch program was built on. If your prioritization is still calibrated around "quarterly critical, monthly high," those thresholds were set in a different threat environment. The vendors are adjusting their timeline. Your patch program should be asking whether it needs to do the same. via no.security
THE NOISE
Not every signal needs action.
"Just Restrict What the AI Can Access"
The reflex after a CVSS 10 prompt injection is to constrain the agent. Disable URL fetching. Lock down repo access. Pull back file system permissions.
That's a control. It makes the blast radius smaller. It doesn't answer the question prompt injection exploits: what is this agent authorized to trust, on whose behalf, based on what inputs? You can apply the tightest available permission set and still ship an agent that treats attacker-controlled content as authoritative context.
Tighter constraints are better than looser ones. But constraints applied to an undefined trust model are the wrong answer to the right problem. The attacker who finds the next injection surface doesn't care how many permissions you removed. They care whether the agent still treats what they put in front of it as context worth acting on.
ONE QUESTION
No answer. Just the question.
If an attacker planted instructions in a document your AI agent reads as context this week, would anyone in your organization detect it before the data left the building, or only after?
If This Is the Shape You've Been Trying to Name
The Signal Score grades your program across eight categories most likely to cascade. Identity & Access. Devices & Patching. Email & Phishing Defense. Backup & Recovery. Network Security. Data Protection. Vendor & SaaS Risk. Incident Readiness. Fifteen minutes. A through F grades, an expected annual loss estimate, and a plain-English read of where your weakest area is pulling the others down.
Free. If the grade wants a conversation, there's a thirty-minute review. No pitch, just where your cascade points are.
Prefer audio? Jane reads every Pulse edition on the Signal vs. Noise podcast. Five minutes, same signal, no scrolling. Find it wherever you listen.
Michael Faas is a fractional CTO/CISO who translates technical complexity into business decisions. echocyber.io
