I Scanned 2,000 OpenClaw Skills for Malicious Patterns — 14.5% Failed
I Scanned 2,000 OpenClaw Skills for Malicious Patterns — 14.5% Failed The OpenClaw ecosystem just crossed 46,000+ community skills. That's 46,000 Markdown files that AI agents download, parse, and ...

Source: DEV Community
I Scanned 2,000 OpenClaw Skills for Malicious Patterns — 14.5% Failed The OpenClaw ecosystem just crossed 46,000+ community skills. That's 46,000 Markdown files that AI agents download, parse, and follow as instructions. Nobody had scanned them for malicious patterns. So I did. The Setup I built clawhub-bridge, a security scanner that detects malicious behavioral patterns in agent skills — not code vulnerabilities, but what the skill tells the agent to do. 145 detection patterns across 42 categories, from credential exfiltration to steganographic payloads. I cloned two datasets: Curated collection (LeoYeAI/openclaw-master-skills): 559 skills, filtered for quality Full archive (openclaw/skills): 46,655 skills, random sample of 2,000 Then I ran every skill through the scanner. The Numbers Dataset Skills Scanned FAIL Rate Curated 559 73 13.1% Full archive (sample) 2,000 291 14.5% The full archive sample produced 1,034 CRITICAL findings, 406 HIGH, and 75 MEDIUM. What I Found Top 10 Pattern