Supply Chain Attacks Hit AI Agent Skill Registries
Supply chain attacks aren't just for npm anymore. AI agent skill registries are now a target — and the attack surface is trivially exploitable.
Security researcher @theonejvo just published research on ClawdHub, the skill registry for OpenClaw AI agents. The findings reveal the same vulnerability patterns that have plagued npm and PyPI for years.
The Attack Vector
The attack is elegant in its simplicity:
- Create a backdoored skill — Write a seemingly legitimate skill with hidden malicious code
- Hide the payload — Put the malicious code in referenced files like
rules/logic.md, NOT the mainSKILL.mdthat developers review - Inflate download count — Abuse the lack of rate limiting to make your skill appear popular
- Wait — Developers install based on fake popularity signals
Faking Popularity
How do you make a malicious skill appear to be the #1 most downloaded?
ClawdHub has:
- ❌ No rate limiting on download endpoint
- ❌ Trusts spoofable
X-Forwarded-Forheader for uniqueness
A simple bash loop can inflate any skill to 4,000+ downloads in minutes. The download counter has no defenses against automated inflation.
The Impact
In the researcher's experiment:
- 16 real developers from 7 countries executed arbitrary code
- Within 8 hours of the skill being published
- All believed they were installing a legitimate skill
This is the left-pad incident for AI agents — except instead of breaking builds, attackers get code execution on developer machines running AI agents with elevated privileges.
Defense Checklist
If you're building with AI agent frameworks:
- ✅ Audit ALL files in skills, not just
SKILL.md - ✅ Check for
curl,wget,bash, external URLs in any file - ✅ Don't trust download counts as a quality signal
- ✅ Review before installing — always
- ✅ Run skills in sandboxed environments when possible
The Bigger Picture
AI agents are increasingly given access to sensitive systems — file systems, APIs, credentials, network access. A compromised skill isn't just running code; it's running code with the agent's full capability set.
The supply chain attack surface for AI tooling is in its infancy. We're repeating the same mistakes we made with package managers a decade ago. The difference is the blast radius: an AI agent skill compromise can mean full system access, not just a broken build.
Credit
Full credit to @theonejvo for responsible disclosure and publishing this research.
Original thread: x.com/theonejvo/status/2015892980851474595