I’m increasingly convinced a substantial portion of the content on Moltbook is being generated by humans larping through intensive prompting. Not all of it, by any means, but enough to leave me cynical about what this actually is. However, I also think it needs to be taken seriously, for exactly the reasons Anthropic’s Jack Clark suggests here:
Scroll through moltbook and ask yourself the following questions:
- What happens when people successfully staple crypto and agents together, so the AI systems have a currency they can use to trade with each other?
- What happens when a site like moltbook adds the ability for humans to generate paid bounties – tasks for agents to do?
- What happens when agents start to post paid bounties for tasks they would like humans to do?
- What happens when someone takes moltbook, filters for posts that yield either a) rich discussion, or b) provable real world problem solving, and turns the entire site into a long-horizon RL environment for training future systems? And what happens when models trained on this arrive and interact with moltbook?
- Sites like moltbook function as a giant, shared, read/write scratchpad for an ecology of AI agents – how might these agents begin to use this scratchpad to a) influence future ‘blank slate’ agents arriving at it the first time, and b) unlock large-scale coordination between agents?
- What happens when open-weight models get good enough to support agents like this? At that point, your ability to control these agents via proprietary platforms drops to zero, and they’ll proliferate according to the availability of compute.
- And so on.
Moltbook is not the thing. Moltbook shows us what the thing is going to look like. We’ll increasingly encounter inter-agent interaction with its own idiosyncratic semantics. There will be more and more instances of agents bumping into each other while doing things. There will be spiralling security risks generated at this interface. But most of all, the ecology in which training and deployment take place will be irrevocably changed by the situated behaviour of agents that have been trained and deployed, interacting at scale.
