Honesty Is the Best Policy: Defining and Mitigating AI Deception
Abstract
Deceptive agents are a challenge for the safety, trustworthiness, and cooperation of AI systems. We focus on the problem of agents that deceive in order to achieve their goals (for instance, in our experiments with language models, the goal of being evaluated as truthful). There are a number of existing definitions of deception in the literature on game theory and symbolic AI, but no overarching theory of deception for learning agents in games. We introduce a formal definition of deception in structural causal games, grounded in the philosophy literature and applicable to real-world machine learning systems. Several examples and results illustrate that our formal definition aligns with the philosophical and commonsense meaning of deception. Our main technical result is to provide graphical criteria for deception. We show, experimentally, that these results can be used to mitigate deception in reinforcement learning agents and language models.
- Publication:
- arXiv e-prints
- Pub Date:
- December 2023
- DOI:
- 10.48550/arXiv.2312.01350
- arXiv:
- arXiv:2312.01350
- Bibcode:
- 2023arXiv231201350R
- Keywords:
- Computer Science - Artificial Intelligence
- E-Print:
- Accepted as a spotlight at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023)