Recursive Markov Process for Iterated Games with Markov Strategies
Abstract
The dynamics of games in which multiple players adaptively learn from their past experience are not yet well understood. We analyzed a class of stochastic games with Markov strategies in which players choose their actions probabilistically. This class is formulated as a $k^{\text{th}}$ order Markov process, in which the probability of choice is a function of the $k$ past states. For reasonably large $k$, or in the limit $k \to \infty$, numerical analysis of this random process is infeasible. This study developed a technique that gives the marginal probability of the stationary distribution of the infinite-order Markov process, which can be constructed recursively. We applied this technique to analyze an iterated prisoner's dilemma game with two players who learn using infinite memory.
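To make the setting concrete, the following is a minimal sketch of a finite-memory version of such a process: two players in an iterated prisoner's dilemma each cooperate with a probability that depends on the last $k$ joint outcomes. The specific update rule (a linear function of the opponent's recent cooperation frequency) is an illustrative placeholder, not the model analyzed in the paper; the paper's contribution concerns the $k \to \infty$ stationary distribution, which this naive simulation cannot reach.

```python
import random

def simulate(k=2, rounds=50000, seed=0):
    """Simulate a k-th order Markov strategy game: each player's
    cooperation probability is a function of the last k joint outcomes.
    The rule below is a hypothetical example, not the paper's model."""
    rng = random.Random(seed)
    # Each past state is a joint outcome in {"CC", "CD", "DC", "DD"},
    # where the first letter is player 0's action. Start from an
    # (assumed) all-cooperate history.
    history = ["CC"] * k

    def coop_prob(player, hist):
        # Placeholder memory-k rule: cooperate with probability
        # 0.1 + 0.8 * (fraction of the last k rounds in which the
        # opponent cooperated), keeping the chain ergodic.
        opp_idx = 1 if player == 0 else 0
        frac = sum(1 for s in hist if s[opp_idx] == "C") / len(hist)
        return 0.1 + 0.8 * frac

    counts = {"CC": 0, "CD": 0, "DC": 0, "DD": 0}
    for _ in range(rounds):
        a = "C" if rng.random() < coop_prob(0, history) else "D"
        b = "C" if rng.random() < coop_prob(1, history) else "D"
        outcome = a + b
        counts[outcome] += 1
        history = history[1:] + [outcome]  # slide the k-state window
    # Empirical marginal distribution over joint actions
    return {s: c / rounds for s, c in counts.items()}
```

For finite $k$ this chain can be analyzed directly as an ordinary Markov chain on the $4^k$ history states; the recursive construction in the paper is what makes the infinite-memory case tractable.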
Publication:
arXiv e-prints
 Pub Date:
 September 2015
 arXiv:
 arXiv:1509.00535
 Bibcode:
 2015arXiv150900535H
 Keywords:

Mathematics - Probability;
 60J20;
 91A15;
 91A60
E-Print:
 21 pages, submitted to a journal