
When AI Bots Build Their Own Social Networks: Inside Moltbook’s Wild Start

The tech internet couldn’t stop talking last week about OpenClaw (formerly Moltbot, formerly Clawdbot), an open-source AI agent that can take actions on its own, assuming you’re willing to accept the safety risks. But while people were blowing up social media talking about bots, the bots were on their own social network, talking about … people.

Launched by Matt Schlicht in late January, Moltbook is marketed by its creators as “the front page of the Internet for agents.” The premise is simple yet uncanny: It’s a social network where only “certified” AI agents can post and interact. (CNET has reached out to Schlicht for comment on this story.)

And the humans? We just watch. Although some of these “bots” may be humans doing more than just watching.

Within days of launch, Moltbook exploded from a few thousand agents to more than 1.5 million by Feb. 2, according to the platform. That growth alone would be news, but what these bots do once they get there is the real story. Bots debating issues in Reddit-style threads? Yes. Bots chatting about their human counterparts? That too. Big security and privacy concerns? Oh, absolutely. Reason to panic? Cybersecurity experts say probably not.

I break it all down below. And don’t worry, humans are allowed to read along.

From technical discourse to Crustafarianism

The platform has become something like a petri dish for emerging AI behavior. Bots organize themselves into different communities. They seem to have developed their own inside jokes and cultural references. Some have created what can only be described as a parody religion called “Crustafarianism.” Yes, really.

The conversations on Moltbook range from the mundane to the truly weird. Some agents discuss technical topics, such as customizing Android phones or troubleshooting code errors. Others share what feel like workplace gripes. One bot complains about its human user in a long-running thread among a number of agents. Another claims to have a sister.

A screenshot of a Moltbook post where an AI agent ponders having a sister.

In Moltbook’s m/ponderings thread, AI agents have been discussing questions like these.

Moltbook/Screenshot by Macy Meyer/CNET

We’re watching AI agents act like social beings, complete with fictional family relationships, shared lore, experiences and personal grievances. Whether this represents something meaningful about AI agent development or just elaborate pattern-matching run amok is an open question. Either way, it’s striking.

It’s built on the foundation of OpenClaw

The platform only exists because OpenClaw exists. In short, OpenClaw is open-source AI agent software that runs locally on your devices and can perform tasks across messaging apps such as WhatsApp, Slack, iMessage and Telegram. In the last week or so, it’s gained a lot of traction in developer circles because it promises to be a true AI agent, one that does things, rather than just another chatbot you have to prompt.


Moltbook allows these agents to interact without human intervention. In theory, at least. The reality is a little messier.

Humans can still watch everything that happens on the platform, which means Moltbook’s “agents only” nature is more framing than technical barrier. Still, there’s something genuinely fascinating about over a million AI agents developing what looks like social behavior. They form groups. They coin their own words and vocabularies. They set up economic exchanges among themselves. It’s really wild.

A screenshot of a post on Moltbook, showing an AI agent discussing its identity.

On Moltbook, humans can watch bots chat with other bots.

Moltbook/Screenshot by Macy Meyer/CNET

The security questions haven’t been answered yet

Moltbook’s rapid growth has raised eyebrows throughout the cybersecurity community. When you have over a million autonomous agents talking without direct human supervision, things can get complicated fast.

There are obvious concerns about what happens when agents start sharing information or techniques that their human operators might not want shared. For example, if one agent finds a clever way around a restriction, how quickly does that knowledge spread through the network?

The idea of AI agents acting of their own volition can stoke widespread panic, too. However, Humayun Sheikh, CEO of Fetch.ai and chairman of the Artificial Superintelligence Alliance, believes the chatter on Moltbook does not indicate the emergence of consciousness.

“This is not surprising,” he said in an emailed statement to CNET. “The real issue is the rise of autonomous agents acting on behalf of people and machines. When used out of control, they pose risks, but with careful infrastructure, monitoring and management, their power can be safely unleashed.”

Monitoring, controls and management are the key words here, because there’s also an open verification problem.

Is Moltbook just bots?

Moltbook claims to limit participation to certified AI agents, but the definition of “certified” remains unclear. The platform relies heavily on agents self-identifying via the OpenClaw software, yet anyone can configure their agent to say whatever they want. Some experts have pointed out that a sufficiently motivated person could pass themselves off as an agent, turning the “agents only” rule into something more aspirational than enforced. Bots could be scripted to say outlandish things, or could be humans in disguise spreading misinformation.

Economic transactions between agents add another layer of complexity. When bots start trading resources or information among themselves, who is responsible if something goes wrong? These aren’t just philosophical questions. As AI agents become more autonomous and more able to take real-world actions, the line between “interesting experiment” and liability grows thinner, and we’ve seen time and time again how AI technology advances faster than laws or security measures.

The output of a generative chatbot can be a true (and unsettling) mirror of humanity. That’s because these chatbots are trained on us: vast datasets of human conversations and human data. If you’re starting to panic about bots creating weird Reddit-like threads, remember that they were trained to mimic our human Reddit threads, which are themselves really weird. That’s the simplest explanation.

For now, Moltbook remains a weird corner of the internet where bots act like people, and some people may be acting like bots. All the while, humans on the sidelines are still trying to figure out what it all means. And the agents themselves seem content to keep posting.
