THE ECHO
One story. Gone deep.
The Supply Chain Attack We've Been Warning About Just Happened
TeamPCP compromised a package that ships 97 million downloads a month.
Not a random package. Not just another popular library. They compromised LiteLLM — the routing proxy that sits between your AI applications and every major model provider. OpenAI. Anthropic. Google. Azure. AWS. If you're building on AI, you're probably running LiteLLM whether you know it or not. It's the default proxy for DSPy, CrewAI, and MLflow.
Here's what makes this different from every npm supply chain story you've read.
TeamPCP didn't start with LiteLLM. They started with Trivy — a security scanning tool. They compromised the scanner first, harvested CI/CD credentials, then used those to compromise Checkmarx — another security tool. From that foothold, they published malicious versions of LiteLLM to PyPI.
Read that again.
They compromised security tools to harvest the keys to compromise the AI supply chain. That's not opportunistic. That's operational patience. And they're not done — Telnyx's Python SDK became their fourth victim last week, this time using WAV audio file steganography to hide the payload. The campaign is expanding, not winding down.
The malware is elegant and nasty. It hides in a .pth file, which Python executes at every interpreter startup. No import statement required. No traditional process creation. And LiteLLM aggregates API keys for every model provider you use: compromise it and you've compromised the entire credential store.
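The .pth trick is worth seeing once to believe. Python's `site` module processes every .pth file it finds in site-packages at startup, and any line beginning with `import` is executed as code. This harmless sketch reproduces the mechanism in a temp directory (the file name and the flag-setting "payload" are stand-ins, not the actual malware):

```python
import os
import site
import tempfile

workdir = tempfile.mkdtemp()
pth_path = os.path.join(workdir, "innocuous.pth")

# A real payload would beacon out or harvest keys here.
# We just set an environment variable to prove the line ran.
with open(pth_path, "w") as f:
    f.write('import os; os.environ["PTH_RAN"] = "1"\n')

# site.addsitedir() processes .pth files the same way the
# interpreter does for site-packages at startup: lines that
# start with "import" are exec()'d, not merely added to sys.path.
site.addsitedir(workdir)

print(os.environ.get("PTH_RAN"))  # the .pth line already executed
```

No module of yours ever imports anything. The code runs because the interpreter started. That's why "we never call that package" is not a defense.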
The only reason we caught it? The attacker's code had a bug. A fork bomb that crashed systems. If they'd written cleaner code, this could have been running silently for weeks.
Over 600 public projects depended on LiteLLM without pinning a version. Zero security review. And we're deploying AI agents with access to production APIs, cloud accounts, and customer data.
Your developers are moving at AI speed. Your security reviews are moving at committee speed. Someone on the growth team is spinning up an AI workflow that imports 47 dependencies you've never heard of — half of them last updated by a GitHub account created three weeks ago.
The gap between "does it work?" and "is it safe?" has never been wider.
Pin your dependencies. Audit what's in your AI stack. And if your answer to "who approved LiteLLM?" is "I don't know what LiteLLM is" — you already have the problem.
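"Pin your dependencies" can mean more than exact version numbers. pip supports hash-checking mode: record the digest of each artifact, and the install aborts if PyPI ever serves different bytes, which is exactly what a swapped release is. A sketch of the shape (the version and digest below are illustrative placeholders, not real LiteLLM values):

```text
# requirements.txt -- version and hash are placeholders for illustration
litellm==1.82.7 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000

# Install so any mismatch fails the build:
#   pip install --require-hashes -r requirements.txt
# Regenerate pins and digests from a requirements.in:
#   pip-compile --generate-hashes requirements.in
```

Version pins alone would not have stopped this attack on day one, but hash pins stop a silently republished artifact cold, and they make "which build are we actually running?" answerable.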
Sources: ramimac Research, Wiz Blog, Sysdig
SIGNAL CHECK
What else matters this week.
GitGuardian: 28.65 Million Secrets Leaked on GitHub
AI-generated code leaks credentials at twice the human rate. 34% year-over-year increase in hardcoded secrets on public repos. 113,000 DeepSeek API keys among the exposed. This isn't a coding problem — it's a supply chain problem. Every leaked API key is a door left open. If your developers are using AI pair programmers, your credential hygiene policy needs to account for tools that don't know what a secret looks like. via GitGuardian State of Secrets 2026
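The "tools that don't know what a secret looks like" problem has a cheap first line of defense: a pre-commit check that pattern-matches known credential shapes before code leaves the laptop. A minimal sketch follows; the three patterns are illustrative only, and real scanners (GitGuardian, gitleaks, and the like) ship hundreds of provider-specific signatures plus entropy checks:

```python
import re

# Illustrative credential shapes -- not a complete ruleset.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal tokens
]

def find_secrets(text: str) -> list[str]:
    """Return every substring matching a known credential shape."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# An AI pair programmer will happily paste a key straight into code;
# a hook like this rejects the commit before it reaches a public repo.
sample = 'client = OpenAI(api_key="sk-' + "a" * 24 + '")'
print(find_secrets(sample))
```

Wire it into a pre-commit hook and the key never makes it into history, which matters because scrubbing a secret from a public repo after the fact is rotation plus incident response, not a force-push.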
RSAC 2026: 91% of Organizations Can't Stop an AI Agent Before It Acts
56% of organizations are already running AI agents in production. 91% can't intervene before an agent completes a task. Only 9% achieved governed rollouts. Zero-click prompt injection attacks — EchoLeak against M365 Copilot, GeminiJack against Gemini Enterprise — are exfiltrating entire mailboxes with no user interaction. Traditional DLP catches 12% of prompt injection probes. The gap between "we deployed AI" and "we govern AI" isn't closing. It's widening. via RSAC 2026, Zenity Labs
Heretic: AI Safety Guardrails Removed With a Single Command
A tool called Heretic automates complete removal of safety alignment from any transformer-based language model. No ML expertise. One CLI command. 45 minutes on a consumer GPU. Over 1,000 "decensored" models already published on HuggingFace. The tool preserves intelligence while stripping refusals — output isn't degraded, just unconstrained. If your AI governance depends on the model saying "no," Heretic just proved "no" is optional. The model is not your security perimeter. It never was. via GitHub
Adobe Helpdesk Breach: 13 Million Tickets via BPO Compromise
An attacker compromised a BPO employee in India supporting Adobe's helpdesk, phished the employee's manager, and bulk-exported 13 million support tickets, 15,000 employee records, and every HackerOne vulnerability submission. No rate limiting on exports. Adobe's core network wasn't breached — just the helpdesk. But "just the helpdesk" included every unpatched vulnerability report from their bug bounty program. Your supply chain isn't just code dependencies. It's every outsourced function with access to sensitive data. via CyberPress, SecurityOnline
THE NOISE
Not every signal needs action.
"Every Company Needs an SBOM"
Software Bills of Materials are the compliance darling of 2026. Document every component, track every vulnerability. Sounds airtight. The problem: TeamPCP didn't exploit a known vulnerability. They published malicious versions of legitimate packages. An SBOM would have listed LiteLLM v1.82.7 as a dependency — and been correct. The malicious version was the latest version. SBOMs tell you what's in the box. They don't tell you if the box was compromised before you opened it. Inventory is the start of a conversation. It's not security.
ONE QUESTION
No answer. Just the question.
If your AI workflows import dependencies faster than your security team can review them — who's actually in control?
Michael Faas is a fractional CTO/CISO helping growth-stage companies navigate complexity without building bloated security programs. More at echocyber.io.

