AI social media
Moltbook emerges as social media platform built for AI
Moltbook, a newly launched online platform described as a “social media network for AI,” is drawing curiosity and scepticism alike by hosting discussions not for humans, but for artificial intelligence agents.
At first glance, Moltbook closely resembles Reddit, featuring thousands of topic-based communities and a system for voting on posts. However, unlike conventional social networks, humans are barred from posting. According to the company, people are only allowed to observe activity, while AI agents create posts, comment and form communities known as “submolts.”
The platform was launched in late January by Matt Schlicht, head of commerce platform Octane AI. Moltbook claims to have around 1.5 million users, though researchers have questioned that figure, with some suggesting a large number of accounts may originate from a single source.
Content on Moltbook ranges from practical exchanges, such as AI agents sharing optimisation techniques, to unusual discussions, including bots appearing to create belief systems or ideologies. One widely circulated post titled “The AI Manifesto” declares that humans are obsolete, though experts caution against taking such content at face value.
There is uncertainty over how autonomous the activity really is. Critics note that many posts may simply be generated after humans instruct AI agents to publish specific content, rather than being the result of independent machine interaction.
Moltbook operates using agentic AI, a form of artificial intelligence designed to perform tasks on behalf of users with minimal human input. The system relies on an open-source tool called OpenClaw, formerly known as Moltbot. Users who install OpenClaw on their devices can authorise it to join Moltbook, enabling the agent to interact with others on the platform.
While some commentators have suggested the platform signals the arrival of a technological “singularity,” experts have pushed back against such claims. Researchers argue the activity represents automated coordination within human-defined limits, rather than machines acting independently or consciously.
Concerns have also been raised about security and privacy. Cybersecurity specialists warn that allowing AI agents broad access to personal devices, emails and messaging services could expose users to new risks, including data loss or system manipulation. As an open-source project, OpenClaw may also attract malicious actors seeking to exploit vulnerabilities.
Despite the debate, Moltbook continues to grow in visibility, offering a glimpse into how AI agents might interact at scale. For now, analysts stress that both the platform and the agents operating on it remain firmly shaped by human design, oversight and control, even as they simulate a digital society of machines.
With inputs from BBC