Faster quantum mixing for slowly evolving sequences of Markov chains
Abstract
Markov chain methods are remarkably successful in computational physics, machine learning, and combinatorial optimization. The cost of such methods often reduces to the mixing time, i.e., the time required to reach the steady state of the Markov chain, which scales as δ⁻¹, the inverse of the spectral gap. It has long been conjectured that quantum computers offer nearly generic quadratic improvements for mixing problems. However, except in special cases, quantum algorithms achieve a run-time of O(√δ⁻¹ √N), which introduces a costly dependence on the Markov chain size N, not present in the classical case. Here, we re-address the problem of mixing of Markov chains when these form a slowly evolving sequence. This setting is akin to the simulated annealing setting and is commonly encountered in physics, materials science, and machine learning. We provide a quantum memory-efficient algorithm with a run-time of O(√δ⁻¹ ∜N), neglecting logarithmic terms, which is an important improvement for large state spaces. Moreover, our algorithms output quantum encodings of distributions, which has advantages over classical outputs. Finally, we discuss the run-time bounds of mixing algorithms and show that, under certain assumptions, our algorithms are optimal.
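To make the quantity δ in the abstract concrete, here is a minimal sketch (an illustration assumed for this page, not the paper's quantum algorithm): it computes the spectral gap δ of a small reversible transition matrix and the classical mixing-time scale δ⁻¹ that the abstract refers to.

```python
# Minimal sketch (assumed example, not from the paper): compute the
# spectral gap delta of a reversible Markov chain and the classical
# mixing-time scale 1/delta.
import numpy as np

# A small 3-state symmetric (hence reversible) transition matrix;
# each row sums to 1.
P = np.array([
    [0.50, 0.25, 0.25],
    [0.25, 0.50, 0.25],
    [0.25, 0.25, 0.50],
])

# Sort eigenvalue magnitudes in descending order; the largest is 1.
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]

# Spectral gap: 1 minus the second-largest eigenvalue magnitude.
delta = 1.0 - eigvals[1]

print(f"spectral gap delta   = {delta:.3f}")   # 0.750 for this P
print(f"classical scale 1/d  = {1/delta:.3f}")
```

For this matrix the eigenvalues are {1, 0.25, 0.25}, so δ = 0.75 and the classical mixing time scales like 1/δ ≈ 1.33; the quantum run-times quoted in the abstract replace δ⁻¹ with √δ⁻¹ at the cost of a dependence on the state-space size N.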
- Publication:
- Quantum
- Pub Date:
- November 2018
- DOI:
- 10.22331/q-2018-11-09-105
- arXiv:
- arXiv:1503.01334
- Bibcode:
- 2018Quant...2..105O
- Keywords:
- Quantum Physics;
- Computer Science - Artificial Intelligence;
- Computer Science - Data Structures and Algorithms
- E-Print:
- 20 pages, 2 figures