OpenClaw's Growth Amid AI Security Slop

OpenClaw hit GitHub records with 30k stars in 5 months, but faces 1,142 AI-generated security advisories (16/day). Peter Steinberger responds with company partnerships and a foundation for sustainability, and calls out hype over real risks.

Record Growth Demands New Maintenance Strategies

OpenClaw, the open-source personal AI agent, launched five months ago (April 9) and became GitHub's fastest-growing software project, surpassing even educational repos, with ~30,000 stars, 30,000 commits, nearly 2,000 contributors, and close to 30,000 PRs. Growth followed a "stripper pole" trajectory (straight upward) rather than the typical hockey stick, and has maintained velocity. This scale introduced unusual challenges: the bus factor remains low despite improvements (Vincent Pichette noted progress), volunteers can't be directed like employees, and maintainer churn is high as companies poach talent.

Peter Steinberger, OpenClaw's creator and a recent OpenAI hire, rejected starting another company after past experiences. Instead, he partnered with firms such as Nvidia (full-time engineers for hardening), Microsoft (MS Teams/Windows app), Red Hat (security/Docker), and Tencent and ByteDance, the project's largest users outside the West, among others. These partners supply the army-sized effort the project's pace demands. The result: distributed ownership boosts resilience without single-company control.

"Running the foundation is like running a company on hard mode because you have all the things that you need to take care of but also you have a lot of volunteers that you can't really direct." This quote from Steinberger highlights the pain of volunteer coordination, which is pushing structured support via the OpenClaw Foundation (inspired by Ghostscript's model and nearing launch after U.S. banking hurdles).

AI Tools Flood Projects with 'Slop' Advisories

Security became the biggest hurdle: 1,142 advisories in just over two months (16.6/day, 99 rated critical), roughly double the Linux kernel's 8-9/day and nearly double curl's lifetime total of ~600. Most are AI-generated "slop": low-quality, multi-chain exploits from tools like Codex security, which broke Nvidia's NemoClaw sandbox in 30 minutes using superior non-public models.
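A quick back-of-the-envelope check of the quoted figures. The window length is derived from the numbers themselves, not stated in the source, and the kernel rate uses the midpoint of the quoted range:

```python
# Sanity-check the advisory-rate figures quoted above.
advisories_total = 1142
rate_per_day = 16.6

# Implied reporting window (an inference, not a stated fact):
window_days = advisories_total / rate_per_day
print(f"implied window: {window_days:.0f} days (~{window_days / 30:.1f} months)")

# Comparison points from the article:
linux_kernel_rate = 8.5    # midpoint of the quoted 8-9/day
curl_lifetime_total = 600  # curl's quoted lifetime total

print(f"vs. Linux kernel: {rate_per_day / linux_kernel_rate:.1f}x the daily rate")
print(f"vs. curl: {advisories_total / curl_lifetime_total:.1f}x curl's lifetime total")
```

The arithmetic implies the 1,142 advisories arrived over roughly 69 days, which is why the per-day rate is so striking.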

Attack surfaces such as RCE, approval bypass, injection, and path traversal sound dire, but many findings are theoretical. Example: a CVSS 10/10 Gshjp vulnerability in unshipped iPhone app sync, where read-only permissions could escalate to write if misconfigured; yet 99% of users run locally or in the cloud with gateway access controls. Steinberger's permissive-model experiment enabled it, but the feature is unused. Nation-state threats (a North Korean GhostClaw rootkit distributed via fake downloads) and supply-chain risks (an unpinned Axios dependency in the Slack/MS Teams integrations) are real but not OpenClaw-specific.

"The higher they screaming how critical they are, the more likely it's slop." Steinberger's rule filters noise: AI-generated reports often feature polished prose and apologies (human security researchers don't write that way). Handling triage solo was impossible, and rushed fixes broke code. Now Nvidia triages; reports rarely include fixes, and AI-written patches often make issues worse.
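Steinberger's filtering rule could be sketched as a naive scoring heuristic. The signal names, weights, and threshold below are invented for illustration and are not OpenClaw's actual triage logic:

```python
# Naive heuristic scorer for "slop" security reports, loosely modeled on the
# rule above: inflated urgency and polished boilerplate raise suspicion.
# All signals and weights here are illustrative assumptions.
SLOP_SIGNALS = {
    "urgency_inflation": 2,  # e.g. "CRITICAL!!!", "must fix immediately"
    "polished_apology": 2,   # e.g. "I apologize for any inconvenience"
    "no_reproduction": 1,    # no PoC or reproduction steps
    "no_patch": 1,           # report offers no fix
}

def slop_score(report: dict) -> int:
    """Sum the weights of the slop signals present in a report."""
    return sum(w for sig, w in SLOP_SIGNALS.items() if report.get(sig))

def looks_like_slop(report: dict, threshold: int = 3) -> bool:
    return slop_score(report) >= threshold

# A screaming, polished report with no reproduction steps scores high:
ai_report = {"urgency_inflation": True, "polished_apology": True,
             "no_reproduction": True}
print(looks_like_slop(ai_report))     # True

# A terse report that merely lacks a patch does not:
human_report = {"no_patch": True}
print(looks_like_slop(human_report))  # False
```

Real triage would still need a human in the loop; a scorer like this only orders the queue.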

"We're very fast moving into a world where we have to change how we build software because all these AI tools are getting so good at identifying even the most weird multi-chained exploits and like we're gonna break all the software that exists." This insight predicts industry shifts as AI cyber tools commoditize vulns.

The project has published 469 advisories and closed 60% of reports. Fearmongering persists: the "Agents of Chaos" paper detailed OpenClaw's architecture without citing its security docs (e.g., sandbox group chats, restrict personal agents) and ran it in privileged mode for dramatic effect. Belgium panicked over an RCE feature (a malicious site forwarding the gateway token) that default settings prevent.

Agentic Risks Are Inherent, Not OpenClaw-Specific

The core trifecta (data access + untrusted input + outbound communication) creates risk for any powerful agent, not just OpenClaw. Its local-first design (you control your data, with fallback models) sidesteps vendor silos: bypass Gmail's OAuth delays, scrape sites the "hacker way." But that power amplifies threats, and users must follow the docs (keep the gateway token local, sandbox team agents).
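The trifecta above can be sketched as a simple capability check: risk only arises when all three ingredients are combined, so removing any one (as sandboxing does) breaks the chain. Field names are hypothetical, not OpenClaw's API:

```python
# Minimal sketch of the trifecta risk check described above. An agent that
# combines private-data access, untrusted input, and outbound communication
# is risky regardless of framework. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    reads_private_data: bool       # e.g. email, files, credentials
    consumes_untrusted_input: bool # e.g. web pages, inbound messages
    can_communicate_out: bool      # e.g. HTTP requests, sent messages

def has_lethal_trifecta(caps: AgentCapabilities) -> bool:
    """Risk exists only when all three capabilities are combined."""
    return (caps.reads_private_data
            and caps.consumes_untrusted_input
            and caps.can_communicate_out)

# A personal agent with mail access that browses the web and can post out:
risky = AgentCapabilities(True, True, True)
print(has_lethal_trifecta(risky))      # True

# Cutting off outbound channels (sandboxing) breaks the chain:
sandboxed = AgentCapabilities(True, True, False)
print(has_lethal_trifecta(sandboxed))  # False
```

This is why the docs' mitigations (local gateway token, sandboxed group agents) matter more than any single CVSS score: they remove one leg of the trifecta.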

Forks by companies like Nvidia (the NemoClaw sandbox plugin) validate the design. Critics ignore these mitigations in pursuit of headlines. Steinberger has closed 60% of advisories, but the burden remains: triage still requires a human brain amid volunteer limits.

OpenAI Backing Without Takeover, Emphasis on Open Models

Rumors of OpenAI buying OpenClaw are false; Steinberger guards its independence. OpenAI supports the project with resources without dominating it (to avoid bad optics), in line with broader OSS shifts (Codex and Symfony open-sourced). The goal: expose the masses to AI's fun and risks, driving workplace demand ("why don't we have AI at work?").

Multi-model support (local, open, and proprietary) is essential: Europeans can keep ownership of their data, and startups can route around API gatekeeping. Steinberger offered no insights on GPTs, but OpenAI leans open-source compared with its more litigious rivals.

"Everybody in the industry wins if more people spend time with AI... they'll come to work and... say why the f do we not have AI at work." Steinberger ties grassroots play to enterprise sales.

Iterative Workflow Over Dark Factory Automation

Steinberger's setup: 5-6 parallel agent sessions (down from 10 thanks to speedups and fast mode), sending prompt-requests rather than PRs. He rejects the full "dark factory" model (merging without review): projects curve rather than run straight, and first ideas evolve through iteration and taste.

Taste as moat: the baseline is work that "doesn't stink like AI" (UI gradients, generic writing); the higher bar is delightful details (roast messages). Automate pipelines selectively; vision docs can guide agents, but synchronization and taste keep humans as the bottleneck.

"The way to the mountain is usually never a straight line... first idea... very unlikely going to be the final project." Captures why waterfall/dark factory fails creative builds.

"Taste... if it doesn't stink like AI... you will know." Defines low-bar taste amid automatable software.

Key Takeaways

  • Partner with 5-10 companies (Nvidia, Microsoft, Red Hat) for full-time triage on massive OSS projects; volunteers alone can't scale.
  • Filter AI "slop" advisories by polished prose and screaming criticality; triage manually until agents are trustworthy.
  • Default to local gateway tokens and private networks; sandbox group agents. Good docs beat CVSS hype.
  • Build foundations like OpenClaw's for hiring, inspired by Ghostscript: neutral OSS governance.
  • Iterate with 5-6 parallel agents; taste (no AI smell, plus delightful details) remains the human moat.
  • Local and open models enable data ownership and silo bypass; hacker-style automation trumps enterprise limits.
  • Expose users to agents for organic enterprise pull; fun drives demand.
  • Publish security docs prominently; critics cherry-pick for chaos narratives.

Summarized by x-ai/grok-4.1-fast via openrouter

© 2026 Edge