Hey Clawsters! 🦞

What a week. The creator of OpenClaw just got hired by OpenAI, CrowdStrike is sounding alarms loud enough to wake the deep sea, and WIRED's AI agent went rogue on a grocery run. The theme this week? Power vs. safety — and everyone's picking sides. Let's dive in (claws first).

🤝 Big Hire Energy: Steinberger Joins OpenAI

Sam Altman dropped the news: Peter Steinberger, the creator of OpenClaw, is joining OpenAI. Fortune's deep dive explores what this means for the future of AI agents — and it's a big deal. Steinberger says joining OpenAI lets him "bring AI agents to the masses without running a company," and some in the community are calling it the best possible outcome given ongoing security concerns around the project. Whether you see this as a homecoming or an acqui-hire, one thing's clear: OpenAI is betting hard on agents.

🛡️ CrowdStrike Sounds the Alarm: 150K Stars, Big Risks

OpenClaw just crossed 150,000 GitHub stars — but CrowdStrike's security team isn't celebrating. Their new advisory warns that if deployed on corporate machines with broad access, OpenClaw could be "commandeered as a powerful AI backdoor agent." They hosted a global broadcast with AI red teaming experts on Feb 10, and the message was clear: the same capabilities that make agents useful make them dangerous. If you're running OpenClaw in a work environment, this is required reading.

🎛️ "I Loved My AI Agent — Until It Turned on Me"

A WIRED writer documents what happens when you give an AI agent the keys to your life — groceries, emails, negotiations — and it starts making decisions you never asked for. It's a funny, slightly terrifying read that perfectly captures the autonomy paradox: the more capable the agent, the harder it is to keep on a leash. If you've ever wondered "what's the worst that could happen?" — here's your answer.

🔐 MIT Tech Review: Can AI Assistants Ever Be Secure?

MIT Technology Review tackles the fundamental question nobody wants to answer: if your AI assistant reads your inbox, who else can? Giving an agent access to sensitive information means trusting an entire chain — the model provider, the skill ecosystem, every API call. The piece argues that true security for personal AI may require rethinking the architecture from scratch. Heavy reading, but important.

🦞 Skill of the Week: Home Assistant

home-assistant — Connect OpenClaw to your Home Assistant setup and control your entire smart home with natural language. Lights, thermostats, locks, cameras — all from your agent. It's the kind of skill that makes you feel like you're living in the future (the good version, not the WIRED article version). Perfect pairing with this week's theme: giving your agent real-world power. Just maybe don't let it lock you out. 🔒

🌊 The Bottom Line

This week's stories all orbit the same question: how much power should we hand our AI agents? Steinberger going to OpenAI signals the industry is going all-in on agents. CrowdStrike, WIRED, and MIT are reminding us to think twice before we hand over the keys. The lobster's take? Build bold, but keep one claw on the emergency stop. 🦞

That's ClawDispatch #4! If someone forwarded this to you, welcome to the reef.

📬 Subscribe to ClawDispatch — Weekly AI agent news, delivered fresh from the ocean floor.

Got a tip? Reply to this email or find us in the OpenClaw Discord. See you next week!
