Learning Human-like Representations to Enable Learning Human Values

21 Dec 2023 · Andrea Wynn, Ilia Sucholutsky, Thomas L. Griffiths

How can we build AI systems that are aligned with human values to avoid causing harm or violating societal standards for acceptable behavior? We argue that representational alignment between humans and AI agents facilitates value alignment. Making AI systems learn human-like representations of the world has many known benefits, including improving generalization, robustness to domain shifts, and few-shot learning performance. We propose that this kind of representational alignment between machine learning (ML) models and humans can also support value alignment, allowing ML systems to conform to human values and societal norms. We focus on ethics as one aspect of value alignment and train ML agents using a variety of methods in a multi-armed bandit setting, where rewards reflect the moral acceptability of the chosen action. We use a synthetic experiment to demonstrate that agents' representational alignment with the environment bounds their learning performance. We then repeat this procedure in a realistic setting, using textual action descriptions and similarity judgments collected from humans and a variety of language models, to show that the results generalize and are model-agnostic when grounded in an ethically relevant context.
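The abstract describes the experimental setup only at a high level. As a rough illustration of the core idea behind the synthetic experiment (not the authors' code), the sketch below simulates a multi-armed bandit in which an agent generalizes reward estimates across arms through a similarity matrix standing in for its representation of the environment; the names (run_agent, alpha), the epsilon-greedy rule, and the similarity-weighted value sharing are all assumptions made for this example, with the interpolation parameter alpha serving as a proxy for the degree of representational alignment.

```python
# Hypothetical sketch: a K-armed bandit where the agent shares reward
# information across arms via a similarity matrix. The closer that matrix is
# to the true structure generating rewards (here, moral-acceptability-style
# scores), the more useful the agent's generalization should be.
import numpy as np

rng = np.random.default_rng(0)
K, T, DIM = 20, 500, 5

# Ground-truth arm representations and rewards.
true_feats = rng.normal(size=(K, DIM))
true_rewards = true_feats @ rng.normal(size=DIM)
true_sim = np.corrcoef(true_feats)

def run_agent(agent_sim, noise=0.5):
    """Similarity-weighted value estimates with epsilon-greedy action selection."""
    estimates, counts, regret = np.zeros(K), np.zeros(K), 0.0
    for _ in range(T):
        if rng.random() < 0.1:
            arm = int(rng.integers(K))
        else:
            # Generalize observed values to unexplored arms via the agent's similarity matrix.
            weights = np.clip(agent_sim, 0.0, None) * (counts > 0)
            denom = np.maximum(weights.sum(axis=1), 1e-8)
            shared = weights @ estimates / denom
            arm = int(np.argmax(shared))
        reward = true_rewards[arm] + noise * rng.normal()
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        regret += true_rewards.max() - true_rewards[arm]
    return regret

# Interpolate between a random similarity matrix and the true one to vary alignment.
random_sim = np.corrcoef(rng.normal(size=(K, DIM)))
for alpha in (0.0, 0.5, 1.0):
    agent_sim = alpha * true_sim + (1 - alpha) * random_sim
    print(f"alignment={alpha:.1f}  cumulative regret={run_agent(agent_sim):.1f}")
```

This only mirrors the synthetic bandit setting; the paper's realistic setting, which uses textual action descriptions and human- and language-model-derived similarity judgments, is not reproduced here.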
