Cycles of cooperation and defection in imperfect learning
Abstract
We investigate a model of learning in the iterated prisoner's dilemma game. Players choose between three strategies: always defect (ALLD), always cooperate (ALLC) and tit-for-tat (TFT). The only strict Nash equilibrium in this situation is ALLD. When players learn to play this game, convergence to the equilibrium is not guaranteed; for example, we find cooperative behaviour if players discount observations in the distant past. When agents use small samples of observed moves to estimate their opponent's strategy, the learning process is stochastic, and sustained oscillations between cooperation and defection can emerge. These cycles are similar to those found in stochastic evolutionary processes, but the noise sustaining the oscillations has a different origin: the imperfect sampling of the opponent's strategy. Based on a systematic expansion technique, we are able to predict the properties of these learning cycles, providing an analytical tool with which the outcome of more general stochastic adaptation processes can be characterised.
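The stochastic learning process the abstract describes can be sketched as a sampled best-response dynamic: an agent estimates the opponent's mixed strategy over ALLD, ALLC and TFT from a small number of observed pure-strategy plays, best-responds to that noisy estimate, and updates with a discount factor. This is a minimal illustrative sketch, not the paper's model: the prisoner's dilemma payoffs (T=5, R=3, P=1, S=0), the number of rounds `m`, the sample size `n` and the learning rate `lam` are all assumed values chosen for demonstration.

```python
import random

# Illustrative per-round PD payoffs (assumed, not from the paper).
T_, R_, P_, S_ = 5.0, 3.0, 1.0, 0.0
m = 10  # rounds per repeated game (assumed)

def payoff_matrix(m):
    # A[i][j] = average per-round payoff of strategy i against strategy j,
    # for the pure strategies in the order ALLD=0, ALLC=1, TFT=2.
    return [
        [P_,                  T_, (T_ + P_*(m-1))/m],  # ALLD vs ALLD/ALLC/TFT
        [S_,                  R_, R_               ],  # ALLC vs ALLD/ALLC/TFT
        [(S_ + P_*(m-1))/m,   R_, R_               ],  # TFT  vs ALLD/ALLC/TFT
    ]

def best_response(A, q):
    # Index of the best response to the (empirical) opponent mixture q.
    payoffs = [sum(A[i][j]*q[j] for j in range(3)) for i in range(3)]
    return max(range(3), key=lambda i: payoffs[i])

def sample_estimate(p, n, rng):
    # Imperfect small-sample estimate of the opponent's mixed strategy:
    # observe n pure-strategy plays drawn from p, return empirical frequencies.
    counts = [0, 0, 0]
    for _ in range(n):
        counts[rng.choices(range(3), weights=p)[0]] += 1
    return [c/n for c in counts]

def simulate(steps=2000, n=5, lam=0.05, seed=1):
    # Discounted best-response learning against a small-sample estimate;
    # self-play with a single mixture x for simplicity (an assumption).
    rng = random.Random(seed)
    A = payoff_matrix(m)
    x = [1/3, 1/3, 1/3]
    traj = []
    for _ in range(steps):
        q = sample_estimate(x, n, rng)
        b = best_response(A, q)
        x = [(1-lam)*xi + lam*(1.0 if i == b else 0.0)
             for i, xi in enumerate(x)]
        traj.append(tuple(x))
    return traj

traj = simulate()
```

With a large sample size `n` the estimate becomes accurate and the dynamic settles near ALLD; with small `n` the sampling noise keeps perturbing the best response, which is the mechanism the abstract identifies as sustaining the cooperation-defection cycles.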
- Publication: Journal of Statistical Mechanics: Theory and Experiment
- Pub Date: August 2011
- DOI: 10.1088/1742-5468/2011/08/P08007
- arXiv: arXiv:1101.4378
- Bibcode: 2011JSMTE..08..007G
- Keywords: Physics - Physics and Society; Computer Science - Social and Information Networks; Nonlinear Sciences - Adaptation and Self-Organizing Systems
- E-Print: 18 pages, 11 figures