THE ECHO

One story. Gone deep.

Cybersecurity is a process, not a product.

Jim Langevin

Jim Langevin has been saying the same thing for more than a decade. Cybersecurity is a process, not a product. The Congressman who helped stand up CISA, who co-chaired the Cyberspace Solarium Commission, who wrote the bill that put a National Cyber Director in the White House — when he landed on one sentence to carry the whole argument, that was it.

The line gets quoted, nodded at, embroidered on vendor booth banners, and then the same organizations go back to buying SKUs and checking the box.

AI coding assistants are the cleanest example of that category error I've seen in years.

You bought them as a product. A faster IDE. Smarter autocomplete. Priced per seat, measured in tickets closed per sprint. What you actually deployed is a process: an autonomous actor with network egress, package-manager privileges, shell access, and the ability to execute code it wrote based on dependencies it chose. Running continuously, in parallel, across every engineer who uses it.

Nobody wrote that process down. Nobody governs it. Because the invoice said "tool."

Brian Krebs wrote it clean earlier this spring: AI assistants are moving the security goalposts. The threat model built around humans with credentials doesn't translate when the credential holder is an AI. Assistants move faster, scale wider, and have none of the friction humans have when an action feels off. When Krebs says the goalposts moved, the answer isn't another product. Another product assumes the goalposts are still where they were.

They moved while most boards were still debating whether to allow Copilot.

Here's the question I keep asking mid-market CEOs: who in your organization is accountable for the behavior of your AI coding agents? Not the vendor relationship. Not the seats. The process — what they're allowed to do, what they log, what decisions they can make without a human, and what happens when one does something your control plane never anticipated.
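Those four questions can be made concrete in code. Here's a minimal sketch of an action gate for a coding agent — every name in it (the action list, the policy sets, the logger) is hypothetical and illustrative, not any vendor's API. The shape is what matters: pre-approved actions proceed, sensitive actions pause for a human, and anything the control plane never anticipated is denied by default and escalated.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gate")

# Hypothetical policy sets: which agent actions run unattended,
# and which require a named human to approve.
AUTO_ALLOWED = {"read_repo", "run_tests", "open_draft_pr"}
NEEDS_HUMAN = {"install_package", "shell_exec", "push_to_main"}

@dataclass
class Decision:
    action: str
    allowed: bool
    needs_human: bool
    reason: str

def gate(action: str) -> Decision:
    """Decide whether an agent action proceeds, pauses for a human, or stops."""
    if action in AUTO_ALLOWED:
        d = Decision(action, True, False, "on the pre-approved list")
    elif action in NEEDS_HUMAN:
        d = Decision(action, False, True, "requires named-owner approval")
    else:
        # The control plane never anticipated this action:
        # default deny, and page the process owner.
        d = Decision(action, False, True, "unanticipated action; default deny")
    log.info("agent action=%s allowed=%s needs_human=%s (%s)",
             d.action, d.allowed, d.needs_human, d.reason)
    return d
```

The point isn't the code. It's that every branch logs, and the default branch names a human.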

The CISO assumes it's in engineering. Engineering assumes it's in security. The CEO assumes somebody is tracking it. None of them can name a person.

That's not a gap in your tool stack. That's a gap in your org chart.

Product purchases assume the problem is bounded. A process that runs across dozens of engineers, reaches into package registries that every attacker in the world is targeting, and commits code under a service account with more trust than any intern — that's not bounded. It propagates. It compounds. The failures don't stay where they started.

You can't buy your way into governing a process. You can buy a SIEM, an EDR, an AI governance platform with a dashboard that reads well in a board deck. None of them becomes a process on its own. Somebody has to design it, own it, and make judgment calls when it hits an edge the vendor didn't plan for.

That somebody is what Langevin's line is about.

Vendors sell products because that's what they know how to invoice. Boards defer process ownership because it looks like headcount they can't justify. So the process runs without an owner, and everyone acts surprised when the outcomes look like nobody was watching.

This is the honest shape of the fractional CISO/CTO market. Most growth-stage companies don't need another tool. They need a person accountable for the process. A person with a name. A person who gets paged. A person whose job is to say no, we're not letting the agent do that yet — here's what we put in place first.

Your AI coding agent isn't an IT problem. It's a governance surface you haven't drawn yet. You didn't buy a product. You adopted a process. The process is running.

The only question is whether anyone in your organization is accountable for it.

SIGNAL CHECK

What else matters this week.

ADT Breached Again — Third Time in Two Years

Home security giant ADT disclosed another breach this week — their third since August 2024. This time: an SSO vishing attack gave attackers access to Salesforce, and ShinyHunters claims 10 million records (names, phone numbers, addresses, partial SSNs) with an April 27 deadline before a full leak.

Three breaches in two years isn't bad luck. It's a pattern. When you get compromised through your identity layer, patch the tooling, and then get compromised through your identity layer again — the problem isn't in the tooling. It's in the process around the tooling. ADT keeps buying products. The process keeps failing. via no.security

ConsentFix v3: OAuth Phishing Goes Industrial

A new toolkit called ConsentFix v3 dropped on the XSS forum this week, automating the entire OAuth consent phishing chain. It bypasses MFA, passkeys, and Conditional Access by targeting Microsoft's Family of Client IDs (FOCI) to pivot across M365 apps after a single consent grant.

This is the product-vs-process gap in miniature. Passkeys were supposed to be the product that solved phishing. ConsentFix doesn't care — it works above the authentication layer, at the authorization layer. Another product assumption broken by an attacker who understood the process better than the defender. via Push Security
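Watching the authorization layer means reviewing what your users have already consented to. Here's a hedged sketch of the idea over a hypothetical export of consent-grant records — the record format and app names are invented for illustration; a real tenant would pull grants from its identity provider's audit log. The scope names are real Microsoft Graph delegated permissions that allow persistence or data access after a single consent:

```python
# Hypothetical consent-grant records, as if exported from an identity
# provider's audit log. Field names are illustrative, not Microsoft's schema.
GRANTS = [
    {"app": "Totally Legit PDF Viewer", "user": "cfo@example.com",
     "scopes": ["offline_access", "Mail.Read", "Files.ReadWrite.All"]},
    {"app": "Internal Wiki", "user": "dev@example.com",
     "scopes": ["profile", "openid"]},
]

# Scopes that let an app persist (refresh tokens) or read data at scale.
HIGH_RISK_SCOPES = {"offline_access", "Mail.Read", "Mail.Send",
                    "Files.ReadWrite.All", "Directory.ReadWrite.All"}

def risky_grants(grants):
    """Flag grants whose scope set includes persistence or broad-access scopes."""
    flagged = []
    for g in grants:
        hit = HIGH_RISK_SCOPES.intersection(g["scopes"])
        if hit:
            flagged.append((g["app"], g["user"], sorted(hit)))
    return flagged
```

A passkey never enters this picture. That's the point: the grant happens after authentication succeeds, so the review has to happen there too.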

THE NOISE

Not every signal needs action.

"We Should Just Ban AI Coding Assistants Until We Figure This Out."

Boards are asking whether to ban AI coding assistants until governance catches up. They shouldn't, and they won't stick to it if they try. Engineers are already using them — on personal devices, on side branches, on weekends — because the productivity delta is real. A ban moves the process into shadow IT, which is the worst possible outcome.

The answer isn't to ban the process. It's to design it: which agents, which repos, which privileges, which review gates, which logging. Companies that ban end up with the same agents and less visibility. Companies that govern end up with fewer agents and a process that works.
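Designing the process can start by writing those five answers down as reviewable data. A minimal sketch, with invented agent and repo names standing in for yours — the structure, not the specifics, is the suggestion:

```python
# Hypothetical governance policy: which agents, which repos, which
# privileges, which review gates, which logging. All names illustrative.
POLICY = {
    "copilot-ide": {
        "repos": ["web-frontend", "internal-tools"],
        "privileges": ["read", "suggest"],           # no shell, no package installs
        "review_gate": "human approves every commit",
        "logging": "prompts and completions to the audit store",
    },
    "autonomous-agent": {
        "repos": ["internal-tools"],
        "privileges": ["read", "branch", "open_pr"],  # never push_to_main
        "review_gate": "two named reviewers on every PR",
        "logging": "full action transcript, 90-day retention",
    },
}

def is_permitted(agent: str, repo: str, privilege: str) -> bool:
    """Default deny: an unlisted agent, repo, or privilege is out of scope."""
    entry = POLICY.get(agent)
    if entry is None:
        return False
    return repo in entry["repos"] and privilege in entry["privileges"]
```

A file like this lives in version control, gets reviewed like code, and — critically — has an owner in the CODEOWNERS sense: a person with a name.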

ONE QUESTION

No answer. Just the question.

If your AI coding agent made a decision this week that your organization would not have approved in writing, who — by name — would have the authority to stop it the next time?

Prefer audio? Jane reads every Pulse edition on the Signal vs. Noise podcast — five minutes, same signal, no scrolling. Find it wherever you listen, or at Signal vs. Noise - Listen.

Michael Faas is a fractional CTO/CISO who translates technical complexity into business decisions. echocyber.io
