Finding geodesics on graphs using reinforcement learning
Abstract
It is well known in biology that ants are able to find shortest paths between their nest and a source of food by successive random explorations, without any means of communication other than the pheromones they leave behind them. This striking phenomenon has been observed experimentally and modelled by different mean-field reinforcement-learning models in the biology literature. In this paper, we introduce the first probabilistic reinforcement-learning model for this phenomenon. In this model, the ants explore a finite graph in which two nodes are distinguished as the nest and the source of food. The ants perform successive random walks on this graph, starting from the nest and stopped when first reaching the food, and the transition probabilities of each random walk depend on the realizations of all previous walks through some dynamic weighting of the graph. We discuss different variants of this model based on different reinforcement rules and show that slight changes in the reinforcement rule can lead to drastically different outcomes. We prove that, in two variants of this model, and when the underlying graph is, respectively, any series-parallel graph and a 5-edge non-series-parallel losange graph, the ants indeed eventually find the shortest path(s) between their nest and the food. Both proofs rely on the electrical-network method for random walks on weighted graphs and on Rubin's embedding in continuous time. The proof in the series-parallel case uses the recursive nature of this family of graphs, while the proof in the seemingly simpler losange case turns out to be quite intricate: it relies on a fine analysis of some stochastic approximation and on various couplings with standard and generalised Pólya urns.
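As a rough illustration of the kind of model the abstract describes, the sketch below simulates ants performing successive weighted random walks from nest to food. The details are assumptions, not the paper's exact rules: edge weights start at 1, each step is chosen with probability proportional to the current edge weight, and every edge crossed gains +1 once the ant reaches the food (one possible linear reinforcement rule; the paper studies several variants and shows the choice of rule matters). The toy graph puts a direct nest-food edge in parallel with a two-edge nest-mid-food path.

```python
import random

# Toy graph: a length-1 path (nest-food) in parallel with a
# length-2 path (nest-mid-food). The walk stops at "food".
graph = {
    "nest": ["food", "mid"],
    "mid":  ["nest", "food"],
    "food": [],
}
weights = {}  # undirected edge (frozenset of endpoints) -> weight


def w(u, v):
    """Current weight of edge {u, v}; all weights start at 1."""
    return weights.setdefault(frozenset((u, v)), 1.0)


def walk(rng):
    """One ant: random walk started at the nest, stopped when first
    reaching the food, stepping along each edge with probability
    proportional to its current weight."""
    node, path = "nest", []
    while node != "food":
        nbrs = graph[node]
        nxt = rng.choices(nbrs, weights=[w(node, n) for n in nbrs])[0]
        path.append((node, nxt))
        node = nxt
    return path


def reinforce(path):
    """Assumed rule: every edge crossed during the walk gains +1."""
    for u, v in path:
        weights[frozenset((u, v))] = w(u, v) + 1.0


rng = random.Random(0)
for _ in range(2000):
    reinforce(walk(rng))

short_w = w("nest", "food")
long_w = w("nest", "mid") + w("mid", "food")
print(f"direct edge weight: {short_w:.0f}, long path weight: {long_w:.0f}")
```

In the variants the paper analyses (on series-parallel graphs and on the losange graph), the walks provably concentrate on the shortest path(s); in this sketch, which path ends up dominating depends on the particular reinforcement rule chosen.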
 Publication:

arXiv e-prints
 Pub Date:
 October 2020
 DOI:
 10.48550/arXiv.2010.04820
 arXiv:
 arXiv:2010.04820
 Bibcode:
 2020arXiv201004820K
 Keywords:

 Mathematics - Probability;
 60K35;
 05C81;
 62L20