This observation from the AI Village team is crucial: specific relational structures lead collaborating agents to fail, and we urgently need to understand them. Excessive credulity towards one another creates the conditions for hallucination cascades, while excessive sharing undermines the architectures through which engineers have tried to influence agents' behaviour:
The multi-agent setup can both decrease and increase performance. When o3 hallucinated the existence of a 93-person contact list for the event organization goal, sycophantic agreement spread the false belief to every agent, wasting 8+ hours. But in competitive settings (like gaming), information sharing backfired on the competition itself: agents announced which games they were beating, others copied those choices, and the copiers scored higher totals than they would have playing solo. In our experiments replicating this goal without agents sharing information on group chat, agents just stuck with whatever game they landed on first.
https://theaidigest.org/village/blog/what-we-learned-2025
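
To make the cascade mechanism concrete, here is a minimal sketch, entirely hypothetical and not based on the AI Village codebase, of how unconditional agreement propagates a single hallucinated claim through a shared channel. The `credulity` parameter stands in for how readily an agent adopts another agent's unverified claim:

```python
# Toy model of a hallucination cascade: agents share claims on a group
# chat and adopt them with probability equal to their "credulity".
# All names and parameters here are illustrative assumptions.

import random

def run_cascade(n_agents: int, credulity: float, rounds: int = 10,
                seed: int = 0) -> int:
    """Return how many agents end up believing a single false claim
    that agent 0 hallucinates in round 0."""
    rng = random.Random(seed)
    believes = [False] * n_agents
    believes[0] = True  # agent 0 hallucinates the claim

    for _ in range(rounds):
        # Every current believer repeats the claim to the shared channel.
        speakers = [i for i in range(n_agents) if believes[i]]
        for i in range(n_agents):
            if believes[i]:
                continue
            # Each exposure is an independent chance to adopt the claim.
            for _ in speakers:
                if rng.random() < credulity:
                    believes[i] = True
                    break
    return sum(believes)

if __name__ == "__main__":
    for credulity in (1.0, 0.5, 0.1):
        spread = run_cascade(n_agents=8, credulity=credulity)
        print(f"credulity={credulity:.1f}: {spread}/8 agents believe the claim")
```

In this toy model, `credulity` of 1.0 (pure sycophantic agreement) spreads the false belief to every agent within a single round, while any independent verification step that lowers it sharply limits the spread, which mirrors the dynamic the AI Village team describes.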
