Learning to Advise Humans in High-Stakes Settings

23 Oct 2022  ·  Nicholas Wolczynski, Maytal Saar-Tsechansky, Tong Wang

Expert decision-makers (DMs) in high-stakes AI-assisted decision-making (AIaDM) settings receive and reconcile recommendations from AI systems before making their final decisions. We identify distinct properties of these settings that are key to developing AIaDM models that effectively benefit team performance. First, DMs incur reconciliation costs from expending decision-making resources (e.g., time and effort) when reconciling AI recommendations that contradict their own judgment. Second, DMs in AIaDM settings exhibit algorithm discretion behavior (ADB), i.e., an idiosyncratic tendency to imperfectly accept or reject algorithmic recommendations for any given decision task. The human's reconciliation costs and imperfect discretion behavior introduce the need for AI systems that (1) provide recommendations selectively, (2) leverage the human partner's ADB to maximize the team's decision accuracy while regularizing for reconciliation costs, and (3) are inherently interpretable. We refer to the task of developing AI to advise humans in AIaDM settings as learning to advise, and we address it by first introducing the AI-assisted Team (AIaT)-Learning Framework. We instantiate our framework to develop TeamRules (TR): an algorithm that produces rule-based models and recommendations for AIaDM settings. TR is optimized to advise a human selectively and to trade off reconciliation costs against team accuracy in a given environment by leveraging the human partner's ADB. Evaluations on synthetic and real-world benchmark datasets with a variety of simulated human accuracy and discretion behaviors show that TR robustly improves the team's objective across settings relative to interpretable, rule-based alternatives.
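
To make the accuracy/reconciliation-cost trade-off concrete, the sketch below scores a candidate selective-advising policy by expected team accuracy minus a penalty on contradicting recommendations, using a simple probabilistic stand-in for the human's ADB. All names, the acceptance model, and the `reconciliation_cost` parameter are illustrative assumptions, not the paper's actual TeamRules objective or implementation.

```python
# Hedged sketch of a team objective for selective advising (assumptions only).
import numpy as np

def team_objective(y_true, human_pred, ai_pred, advise_mask,
                   accept_prob, reconciliation_cost=0.1):
    """Expected team accuracy minus a reconciliation-cost penalty.

    y_true, human_pred, ai_pred : label arrays of shape (n,)
    advise_mask : boolean array, True where the AI chooses to advise
    accept_prob : array in [0, 1], the human's per-instance tendency to
                  accept a contradicting AI recommendation (a stand-in
                  for algorithm discretion behavior, ADB)
    reconciliation_cost : penalty per contradicting recommendation shown
    """
    # Reconciliation only matters where advice is shown and disagrees
    # with the human's own judgment.
    contradicts = advise_mask & (ai_pred != human_pred)

    # Final decision: the human's judgment, except where contradicting
    # advice is shown and (probabilistically) accepted.
    p_final_is_ai = np.where(contradicts, accept_prob, 0.0)
    expected_acc = np.mean(
        p_final_is_ai * (ai_pred == y_true)
        + (1.0 - p_final_is_ai) * (human_pred == y_true)
    )

    # Penalize the effort of reconciling contradicting recommendations.
    penalty = reconciliation_cost * np.mean(contradicts)
    return expected_acc - penalty
```

Under this toy objective, advising on every instance maximizes potential corrections but also maximizes the penalty, which is why a selective policy that accounts for where the human is likely to accept (and benefit from) advice can score higher than always advising.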
