Best-Response Bayesian Reinforcement Learning with Bayes-adaptive POMDPs for Centaurs
Abstract
Centaurs are half-human, half-AI decision-makers where the AI's goal is to complement the human. To do so, the AI must be able to recognize the goals and constraints of the human and must have the means to help them. We present a novel formulation of the interaction between the human and the AI as a sequential game in which the agents are modelled using Bayesian best-response models. We show that in this case the AI's problem of helping bounded-rational humans make better decisions reduces to a Bayes-adaptive POMDP. In our simulated experiments, we consider an instantiation of our framework for humans who are subjectively optimistic about the AI's future behaviour. Our results show that when equipped with a model of the human, the AI can infer the human's bounds and nudge them towards better decisions. We also discuss ways in which the machine, with the human's help, can learn to improve upon its own limitations. We identify a novel trade-off for centaurs in partially observable tasks: for the AI's actions to be acceptable to the human, the machine must make sure that the human's beliefs and its own are sufficiently aligned, but aligning beliefs might be costly. We present a preliminary theoretical analysis of this trade-off and its dependence on task structure.
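To make the framework concrete, below is a minimal, illustrative Python sketch (not the paper's implementation) of the core loop the abstract describes: the AI maintains a Bayesian posterior over the human's bound, here modelled as a Boltzmann rationality parameter beta, updates it from observed human actions, and intervenes only when deferring to the human looks worse than paying an intervention cost. The toy task, the hypothesis grid, and the intervention rule are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step task: three actions whose true values the AI knows.
Q_true = np.array([1.0, 0.2, -0.5])

# Hypothesis grid over the human's bound (inverse temperature beta).
betas = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
posterior = np.ones_like(betas) / len(betas)  # uniform prior over bounds


def boltzmann(q, beta):
    """Bounded-rational (Boltzmann) action distribution for one beta."""
    z = beta * (q - q.max())  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()


def update_posterior(post, action):
    """Bayes rule: P(beta | a) is proportional to P(a | beta) * P(beta)."""
    lik = np.array([boltzmann(Q_true, b)[action] for b in betas])
    post = post * lik
    return post / post.sum()


INTERVENTION_COST = 0.3   # stand-in for the cost of overriding the human
true_beta = 0.5           # the human's actual bound, unknown to the AI

for step in range(20):
    # Observe the human act, then update the belief over their bound.
    human_action = rng.choice(len(Q_true), p=boltzmann(Q_true, true_beta))
    posterior = update_posterior(posterior, human_action)

    # Expected value of deferring, marginalized over the posterior.
    human_policy = sum(p * boltzmann(Q_true, b)
                       for p, b in zip(posterior, betas))
    v_defer = float(human_policy @ Q_true)
    v_override = Q_true.max() - INTERVENTION_COST

    choice = "override" if v_override > v_defer else "defer"
    print(f"step {step:2d}  a_human={human_action}  "
          f"E[beta]={float(posterior @ betas):.2f}  "
          f"V(defer)={v_defer:.2f}  V(override)={v_override:.2f}  -> {choice}")
```

Under these illustrative settings the AI overrides; raising INTERVENTION_COST or the human's true_beta makes it defer instead. The paper's full setting additionally involves sequential, partially observable tasks and the belief-alignment trade-off, which this one-step sketch deliberately omits.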
- Publication:
- arXiv e-prints
- Pub Date:
- April 2022
- DOI:
- 10.48550/arXiv.2204.01160
- arXiv:
- arXiv:2204.01160
- Bibcode:
- 2022arXiv220401160M
- Keywords:
- Computer Science - Artificial Intelligence;
- Computer Science - Machine Learning;
- Computer Science - Multiagent Systems
- E-Print:
- This paper is presented in part at the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS) 2022