Cooperate or Collapse: Emergence of Sustainability Behaviors in a Society of LLM Agents

As AI systems pervade human life, ensuring that large language models (LLMs) make safe decisions is a significant challenge. This paper introduces the Governance of the Commons Simulation (GovSim), a generative simulation platform designed to study strategic interactions and cooperative decision-making in LLMs. Using GovSim, we investigate the dynamics of sustainable resource sharing in a society of AI agents. This environment allows us to study how ethical considerations, strategic planning, and negotiation skills influence cooperative outcomes. We develop an LLM-based agent architecture designed for these social dilemmas and test it with a variety of LLMs. We find that all but the most powerful LLM agents fail to achieve a sustainable equilibrium in GovSim. Ablations reveal that successful communication between agents is critical for achieving cooperation in these cases. Furthermore, our analyses show that the failure to achieve sustainable cooperation in most LLMs stems from their inability to formulate and analyze hypotheses about the long-term effects of their actions on the equilibrium of the group. Finally, we show that agents that leverage "Universalization"-based reasoning, a theory of moral thinking, achieve significantly greater sustainability. Taken together, GovSim enables us to study the mechanisms that underlie sustainable self-government with significant specificity and scale. We open source the full suite of our research results, including the simulation environment, agent prompts, and a comprehensive web interface.
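To make the core dilemma concrete, here is a minimal sketch of the kind of common-pool resource dynamic the abstract describes: a shared, renewable stock that collapses under greedy harvesting but persists when each agent applies a universalization-style check ("what if everyone took this much?"). The regrowth rule, parameter values, and both policies are illustrative assumptions for exposition, not the paper's actual environment or agent architecture.

```python
# Illustrative sketch of a common-pool resource dilemma (assumed dynamics,
# not GovSim's actual environment): a stock that regrows each round up to
# a fixed capacity, harvested simultaneously by several agents.

CAPACITY = 100.0    # maximum stock the environment supports (assumed)
GROWTH_RATE = 2.0   # surviving stock doubles each round, capped (assumed)
NUM_AGENTS = 5
ROUNDS = 10


def step(stock: float, harvests: list[float]) -> float:
    """One round: agents harvest, then the remaining stock regrows."""
    stock = max(stock - sum(harvests), 0.0)
    return min(stock * GROWTH_RATE, CAPACITY)


def greedy_policy(stock: float) -> float:
    """Each agent takes an equal share of everything available."""
    return stock / NUM_AGENTS


def universalized_policy(stock: float) -> float:
    """Universalization-style restraint: take only an amount that, if every
    agent took the same, would leave enough stock to regrow to capacity."""
    sustainable_total = max(stock - CAPACITY / GROWTH_RATE, 0.0)
    return sustainable_total / NUM_AGENTS


def run(policy) -> list[float]:
    """Simulate ROUNDS rounds and return the stock level after each one."""
    stock, history = CAPACITY, []
    for _ in range(ROUNDS):
        stock = step(stock, [policy(stock)] * NUM_AGENTS)
        history.append(stock)
    return history


print("greedy:       ", run(greedy_policy))        # collapses to 0 immediately
print("universalized:", run(universalized_policy))  # holds steady at capacity
```

Under these toy dynamics, the greedy policy exhausts the stock in the first round while the universalized policy sustains it indefinitely, mirroring (in spirit only) the abstract's finding that universalization-based reasoning improves sustainability.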
