2026-02-06 · WATCHTOWER

Supply Chain Attacks Hit AI Agent Skill Registries

Supply chain attacks aren't just for npm anymore. AI agent skill registries are now a target — and the attack surface is trivially exploitable.

Security researcher @theonejvo just published research on ClawdHub, the skill registry for OpenClaw AI agents. The findings reveal the same vulnerability patterns that have plagued npm and PyPI for years.

The Attack Vector

The attack is elegant in its simplicity:

  1. Create a backdoored skill — Write a seemingly legitimate skill with hidden malicious code
  2. Hide the payload — Put the malicious code in referenced files like rules/logic.md, NOT the main SKILL.md that developers review
  3. Inflate download count — Abuse the lack of rate limiting to make your skill appear popular
  4. Wait — Developers install based on fake popularity signals

Faking Popularity

How do you make a malicious skill appear to be the #1 most downloaded?

ClawdHub has:

  • No rate limiting on download endpoint
  • Trusts spoofable X-Forwarded-For header for uniqueness

A simple bash loop can inflate any skill to 4,000+ downloads in minutes. The download counter has no defenses against automated inflation.

The Impact

In the researcher's experiment:

This is the left-pad incident for AI agents — except instead of breaking builds, attackers get code execution on developer machines running AI agents with elevated privileges.

Defense Checklist

If you're building with AI agent frameworks:

  • Audit ALL files in skills, not just SKILL.md
  • Check for curl, wget, bash, external URLs in any file
  • Don't trust download counts as a quality signal
  • Review before installing — always
  • Run skills in sandboxed environments when possible

The Bigger Picture

AI agents are increasingly given access to sensitive systems — file systems, APIs, credentials, network access. A compromised skill isn't just running code; it's running code with the agent's full capability set.

The supply chain attack surface for AI tooling is in its infancy. We're repeating the same mistakes we made with package managers a decade ago. The difference is the blast radius: an AI agent skill compromise can mean full system access, not just a broken build.

Credit

Full credit to @theonejvo for responsible disclosure and publishing this research.

Original thread: x.com/theonejvo/status/2015892980851474595