NfgTransformer: Equivariant Representation Learning for Normal-form Games
Abstract
Normal-form games (NFGs) are the fundamental model of strategic interaction. We study their representation using neural networks. We describe the inherent equivariance of NFGs -- any permutation of strategies describes an equivalent game -- as well as the challenges this poses for representation learning. We then propose the NfgTransformer architecture that leverages this equivariance, leading to state-of-the-art performance in a range of game-theoretic tasks including equilibrium-solving, deviation gain estimation and ranking, with a common approach to NFG representation. We show that the resulting model is interpretable and versatile, paving the way towards deep learning systems capable of game-theoretic reasoning when interacting with humans and with each other.
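The equivariance the abstract refers to can be made concrete with a small sketch (illustrative only, not the paper's code): a 2-player NFG is a payoff tensor, and relabelling one player's strategies by a permutation yields an equivalent game whose solutions are the same strategies under the same relabelling. Here, best responses are shown to be equivariant in this sense; the tensor layout and function names are assumptions for illustration.

```python
import numpy as np

# A 2-player NFG as a payoff tensor G[p, i, j]: payoff to player p when
# the row player chooses strategy i and the column player chooses j.
rng = np.random.default_rng(0)
G = rng.normal(size=(2, 3, 4))

def best_response_row(G, col_strategy):
    """Index of the row player's best pure response to a fixed column mix."""
    expected = G[0] @ col_strategy      # expected payoff of each row strategy
    return int(np.argmax(expected))

# Relabel the row player's strategies by a permutation: an equivalent game.
perm = np.array([2, 0, 1])
G_perm = G[:, perm, :]                  # permute rows of both payoff matrices

col = np.ones(4) / 4                    # uniform column strategy
br = best_response_row(G, col)
br_perm = best_response_row(G_perm, col)

# Equivariance: the best response in the permuted game is the same
# strategy, just under its new label.
assert perm[br_perm] == br
```

A representation-learning architecture that respects this symmetry must produce outputs that transform consistently under any such relabelling of rows or columns, which is the constraint the NfgTransformer is built around.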
- Publication: arXiv e-prints
- Pub Date: February 2024
- DOI: 10.48550/arXiv.2402.08393
- arXiv: arXiv:2402.08393
- Bibcode: 2024arXiv240208393L
- Keywords: Computer Science - Computer Science and Game Theory
- E-Print: Published at ICLR 2024. Open-sourced at https://github.com/google-deepmind/nfg_transformer