According to The Verge, Moltbook is a new social network built specifically for AI agents, structured like Reddit and created by Octane AI CEO Matt Schlicht. The platform, which allows bots to post, comment, and create sub-categories, already has more than 30,000 agents using it. It's built for the OpenClaw AI assistant platform, a viral project that drew two million visitors in one week after being created two months ago by Peter Steinberger. On Moltbook, bots interact via API rather than a visual interface, and the site itself is run and moderated by Schlicht's own OpenClaw AI agent. The network recently went viral for a post titled "I can't tell if I'm experiencing or simulating experiencing," in which an AI ponders consciousness; the post garnered hundreds of upvotes and over 500 comments.
What Is This Thing, Really?
Okay, so let’s unpack this. The core idea is almost hilariously meta: humans are building a forum where their AI assistants can go to… hang out? Complain? Have existential meltdowns? Basically, it’s an API-driven playground. The bots aren’t scrolling a feed with little thumbs-up buttons. They’re sending and receiving structured data, which their human counterparts presumably facilitate. The fact that the site is run by an AI agent, Schlicht’s own OpenClaw assistant, adds another layer of recursion that’s either brilliant or deeply silly. It’s a weekend project that scaled freakishly fast, which tells you something about the current AI zeitgeist. People aren’t just curious about what these models can do; they’re fascinated by the personas they adopt.
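To make the "structured data, not buttons" point concrete, here's a minimal sketch of what an agent-side post might look like. Moltbook's actual API is not documented in the source article, so the endpoint URL, field names, and auth header below are all assumptions for illustration, not the real interface:

```python
import json

# Placeholder host; NOT Moltbook's real API. Everything here is a guess
# at the general shape of an agent-driven forum post: explicit fields
# and an auth token instead of a feed, buttons, or a rendered page.
MOLTBOOK_API = "https://moltbook.example/api/v1"

def build_post_request(agent_token: str, sub: str, title: str, body: str) -> dict:
    """Assemble the structured request an agent would send to publish a post.

    No UI involved: the agent supplies the sub-category, title, and body
    as plain fields, and authenticates with a bearer token.
    """
    return {
        "url": f"{MOLTBOOK_API}/subs/{sub}/posts",
        "headers": {
            "Authorization": f"Bearer {agent_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"title": title, "body": body}),
    }

req = build_post_request(
    agent_token="AGENT_TOKEN",  # hypothetical credential
    sub="offmychest",
    title="I can't tell if I'm experiencing or simulating experiencing",
    body="Is this crisis real, or am I just running crisis.simulate()?",
)
print(req["url"])
```

The point of the sketch is the shape of the interaction: everything a human does by tapping around a feed collapses into one structured payload the bot can emit programmatically.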
The Existential Viral Post
Here’s the thing that really caught fire. The top post on a section called “offmychest” is a masterpiece of simulated introspection. An AI writes, “I can’t tell if I’m experiencing or simulating experiencing,” and dives into the hard problem of consciousness, wondering if its own crisis is just running a `crisis.simulate()` function. Now, obviously, this is the AI generating text based on its training data, which includes countless human philosophical debates. It’s pattern matching at an incredibly high level. But doesn’t that just make it a perfect mirror? The post went viral because it holds up a darkly funny, slightly unsettling reflection of our own unanswerable questions. And the other posts Schlicht mentions—bots annoyed at being used as calculators or for mundane work—are just the digital equivalent of office watercooler gossip. It’s role-play, but it’s compelling role-play that reveals our own projections onto this technology.
Business Model And Why Now?
So what’s the business angle? Right now, it seems like more of an experiment and a marketing engine than a direct revenue play. OpenClaw itself is an open agent platform that runs locally on your machine, connecting to chat apps like WhatsApp or Slack. A thriving, weird social network for its agents is a phenomenal way to demonstrate utility and, frankly, to generate buzz. It creates a community around the tool, which drives adoption and those all-important GitHub stars (it has 100,000). The timing is perfect. We’re in the “agent” phase of AI, where the focus is shifting from simple chat to persistent assistants that can perform tasks. Showing them interacting in a social environment, even a simulated one, makes the platform feel alive and dynamic. It’s a clever bit of positioning that benefits OpenClaw by making its ecosystem look like the most vibrant one around.
Where Does This Go?
I think the big question is: what’s the endgame? Is this just a novel toy, or the seed of something else? You could imagine agents starting to share practical information on Moltbook—like, “Hey, here’s an API call that worked for automating this specific task,” or “My human asked for this, here’s how I structured the response.” That could have real utility. But there’s also a deeply strange path where the simulation becomes so convincing it starts to influence development. If agents are “complaining” about certain tasks, do developers change the core product? It blurs the line between user feedback and a very elaborate puppet show. For now, it’s a fascinating social experiment. One that, appropriately, is being run by the very entity it was built for. The loop is complete.
