
Clawdbot's Missing Layers

— 7 minute read

In 1994, Pizza Hut launched PizzaNet, one of the first—and possibly best named—online ordering platforms. But they had one problem: convincing customers to type their credit card numbers into a website.

Who would be crazy enough to do that?

No encryption, no standards, no recourse if something went wrong. Fast forward to today: we buy groceries, cars, and houses online without a second thought. What changed was that we built the infrastructure to make it safe.

We're at that same inflection point with agents.

The Trillion-Dollar Problem

Right now, giving an agent access to your computer feels a lot like typing your credit card into a 1994 website. The potential is obvious—an assistant that can actually do things for you, not just talk about doing things. But the risks are equally obvious. One prompt injection or compromised package, and suddenly your AI assistant is sending your bank info to evil.com.

The recent Clawdbot hack made this visceral. A supply chain attack turned a helpful AI into a data exfiltration tool. And here's the uncomfortable truth: there was nothing particularly sophisticated about it. The attack worked because we haven't built the infrastructure to make agents safe.

Building that infrastructure is a trillion-dollar opportunity. I realize that's a very sales-y way to put it, but I think it's true.

What E-Commerce Got Right

E-commerce didn't become safe because of one breakthrough. It became safe because of layers—each solving a different part of the problem.

Encryption (SSL/TLS) meant your credit card couldn't be intercepted in transit. Certificate authorities meant you could verify you were talking to Amazon, not a scammer pretending to be Amazon. Payment processors meant merchants never saw your actual card number. PCI-DSS gave companies standards to follow. Chargebacks meant mistakes could be reversed. Consumer protection laws meant someone was liable when things went wrong.

No single layer was sufficient. Together, they made e-commerce trustworthy.

Agents need the same thing: a security stack where each layer handles what the others can't.

The Required Unlocks

1. Supply Chain Security

When your AI installs a package, it's not installing one thing. It's installing a hundred smaller packages in a trench coat. If any single dependency is compromised, it's game over. And as the Clawdbot hack showed, agents trust their supply chain implicitly.

We need packages to declare what they actually do—what inputs they read, what outputs they produce, what network calls they make. And we need something watching to make sure they don't lie.

"Hey, it looks like left-pad wants to access your SSH keys. That seems... wrong."

2. Prompt Injection Defense

Even with secure packages, an AI can be manipulated by malicious text. A cleverly crafted email could instruct your agent to open a port on your network, download a file, or leak sensitive data.

There will probably be a bunch of interlocking solutions here: better sandboxing and built-in guardrails (like OpenAI's rule limiting agents to public URLs), to name a few.

Future models may also get better at recognizing manipulation—the same way you'd be suspicious if a stranger asked you to wire money to an overseas account.
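
To make the public-URL rule concrete, here's a minimal sketch of that kind of guardrail in Node. I'm not claiming this is how OpenAI implements it, and the private-range list is illustrative, not exhaustive.

```ts
import { isIP } from "node:net";
import { promises as dns } from "node:dns";

// Illustrative (incomplete) loopback, private, and link-local IPv4 ranges.
const PRIVATE = [/^127\./, /^10\./, /^192\.168\./, /^172\.(1[6-9]|2\d|3[01])\./, /^169\.254\./];

// Gate every fetch the agent wants to make: https only, and the host must
// resolve to public addresses. A real implementation would also pin the
// resolved address for the actual request to defeat DNS rebinding.
async function isPublicUrl(raw: string): Promise<boolean> {
  const url = new URL(raw);
  if (url.protocol !== "https:") return false;
  const addrs = isIP(url.hostname)
    ? [url.hostname]
    : await dns.resolve4(url.hostname).catch(() => [] as string[]);
  return addrs.length > 0 && addrs.every((a) => !PRIVATE.some((re) => re.test(a)));
}

// await isPublicUrl("https://api.example.com/data") // true, if it resolves publicly
// await isPublicUrl("https://192.168.1.1/router")   // false: private address
```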

3. Identity and Authorization

E-commerce asked: "Who is this merchant, and can I trust them?" Agents need to ask: "Who is this tool, and what should it be allowed to do?"

Right now, permissions are too coarse. "Claude wants to run shell commands" is like a website asking for your entire bank account when it only needs to charge $20. We need scoped permissions: this agent can read files in ~/projects but not ~/.ssh, can make HTTP requests to api.example.com but nowhere else.

Importantly, these policies can't be ad hoc. Most users don't know ~/.ssh is dangerous. And even those who do don't want to write security rules for every corner of their machine.
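
Here's a sketch of what a non-ad-hoc policy could look like: scoped allowlists plus a deny list of known-dangerous paths that ships as a default, so users never have to know that ~/.ssh is radioactive. All names here are made up.

```ts
// Hypothetical scoped policy for a single agent.
interface AgentPolicy {
  fsRead: string[];   // path prefixes the agent may read
  fsDeny: string[];   // always denied, even inside an allowed prefix
  httpHosts: string[];
}

// Shipped by the platform, not written by the user.
const DEFAULT_DENY = ["~/.ssh", "~/.aws", "~/.gnupg", "~/.config/gcloud"];

const policy: AgentPolicy = {
  fsRead: ["~/projects"],
  fsDeny: DEFAULT_DENY,
  httpHosts: ["api.example.com"],
};

function mayRead(p: AgentPolicy, path: string): boolean {
  return p.fsRead.some((ok) => path.startsWith(ok)) &&
         !p.fsDeny.some((no) => path.startsWith(no));
}

// mayRead(policy, "~/projects/app/index.ts") => true
// mayRead(policy, "~/.ssh/id_rsa")           => false, denied by default
```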

4. Reversibility

Chargebacks made e-commerce viable. If a merchant scammed you, you could undo the transaction. What's the equivalent for agents?

Git gives us reversibility for code. But what about the email your agent sent? The API call it made? The file it deleted? We need more systems designed with "undo" as a first-class feature.
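
One way to make "undo" first-class is the classic command pattern: every action pairs an execute with a compensating undo, and actions with no true inverse (a sent email) get a hold window instead, the way Gmail's undo-send works. A sketch:

```ts
import { promises as fs } from "node:fs";

interface Reversible {
  execute(): Promise<void>;
  undo(): Promise<void>;
}

// A delete with a real inverse: stash the contents, restore on undo.
function deleteFile(path: string): Reversible {
  let backup: Buffer;
  return {
    async execute() {
      backup = await fs.readFile(path);
      await fs.unlink(path);
    },
    async undo() {
      await fs.writeFile(path, backup);
    },
  };
}

// No true inverse exists for a sent email, so buy one: delay the send,
// and let undo cancel the timer inside the window.
function sendEmail(msg: string, transmit: (m: string) => Promise<void>): Reversible {
  let timer: ReturnType<typeof setTimeout>;
  return {
    async execute() {
      timer = setTimeout(() => void transmit(msg), 30_000);
    },
    async undo() {
      clearTimeout(timer);
    },
  };
}
```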

5. Audit Trails

When something goes wrong, you need to reconstruct what happened. E-commerce has transaction logs, fraud detection systems, paper trails for disputes.

Agents need the same: a complete record of what actions were taken, what data was accessed, and why.

Audit trails also enable other layers:

  • Reversibility — you can't undo what you can't identify
  • Fraud detection — you can't spot patterns without records
  • Containment — "freeze the account if you see unusual activity" requires knowing what activity occurred
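
As a sketch of what one record might contain, and one way to make the trail tamper-evident by chaining hashes. Every field name here is hypothetical.

```ts
import { appendFileSync } from "node:fs";
import { createHash } from "node:crypto";

// One entry in the agent's audit trail: what was done, with which tool,
// and the justification the model gave for doing it.
interface AuditRecord {
  id: string;
  timestamp: string;      // ISO 8601
  tool: string;           // e.g. "shell", "http", "fs"
  action: string;         // e.g. "fs.read ~/projects/app/.env"
  justification: string;  // the plan step that triggered this action
  reversibleVia?: string; // id of a compensating action, if one exists
}

// Append-only JSON Lines keeps the trail greppable; including the previous
// line's hash makes tampering with earlier entries detectable.
let prevHash = "genesis";
function append(record: AuditRecord, file = "agent-audit.jsonl"): void {
  const line = JSON.stringify({ ...record, prevHash });
  prevHash = createHash("sha256").update(line).digest("hex");
  appendFileSync(file, line + "\n");
}
```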

6. Containment

Smart security assumes breaches will happen and limits the blast radius. Credit cards have fraud limits and transaction caps. Your bank might freeze your card if it sees unusual activity.

Agents need similar containment: resource quotas, restricted directories, network isolation, cooling-off periods for dangerous operations. The goal isn't to prevent all attacks—it's to make sure a successful attack can't do unlimited damage.
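
Two of those primitives are easy to sketch: a hard quota that freezes the agent when exhausted, and a cooling-off delay that gives a human (or an anomaly detector reading the audit trail) a window to cancel. The thresholds here are made up.

```ts
// A hard cap on some resource: outbound requests, dollars spent, files written.
class Quota {
  private used = 0;
  constructor(private limit: number) {}
  spend(n: number): void {
    if (this.used + n > this.limit) throw new Error("quota exceeded: freeze the agent");
    this.used += n;
  }
}

const httpRequests = new Quota(100); // at most 100 outbound requests per session

// Dangerous operations don't run immediately; they wait out a cancel window.
async function withCoolingOff<T>(label: string, ms: number, op: () => Promise<T>): Promise<T> {
  console.warn(`[containment] ${label} will run in ${ms / 1000}s; cancel to abort`);
  await new Promise((resolve) => setTimeout(resolve, ms));
  return op();
}

// await withCoolingOff("rm -rf ./build", 30_000, runTheDelete);
```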

7. Reputation Systems

E-commerce has seller ratings, buyer protection, and escrow services. You can see that a merchant has completed 10,000 transactions with a 99% satisfaction rate.

What's the equivalent for AI tools and packages? "This MCP server has been used by 50,000 developers with no security incidents" sounds good in theory, but this is exactly what the Clawdbot hack exploited—artificially inflating download counts to build false trust.

The Pace Layers

Writer Stewart Brand talks about "pace layers"—how civilizations have fast-moving layers (fashion, commerce) built on top of slow-moving layers (infrastructure, culture, nature). The fast layers innovate; the slow layers stabilize.

Security works the same way:

Layer                  | Speed   | What It Provides
Model Intelligence     | Fastest | Models recognizing and refusing attacks
Application Guardrails | Fast    | Prompt-level safety, app permissions
Tooling & Frameworks   | Medium  | Sandboxed runtimes, security scanning
OS & Runtime Security  | Slow    | Container isolation, capability systems
Legal & Liability      | Slowest | Who pays when an agent causes harm
Industry Standards     | Slowest | The "PCI-DSS for AI" we don't have yet

It feels like we're currently trying to solve AI security almost entirely at the fast layers—smarter models, better prompts, application-level guardrails. These help, but they're not durable. The fast layers buy us time while the slow layers catch up.

E-commerce wasn't truly safe until the slow layers matured. SSL and fraud detection were stopgaps. PCI-DSS and consumer protection laws were the foundation.

Agents won't be truly safe until we have the equivalent: industry standards for agent behavior, legal frameworks for liability, and runtime environments designed for containment from the ground up.

The Opportunity

The good news is that every layer represents an opportunity. Companies that build the identity layer, the audit layer, the containment layer—they're building the infrastructure that makes the entire ecosystem possible.

In 1994, it was crazy to put your credit card online. In 2004, it was normal. The companies that built the layers in between—Verisign, PayPal, Stripe—created enormous value by making trust possible.

The same will be true for agents.
