Algorithmic Theories of Everything
Abstract
The probability distribution P from which the history of our universe is sampled represents a theory of everything or TOE. We assume P is formally describable. Since most (uncountably many) distributions are not, this imposes a strong inductive bias. We show that P(x) is small for any universe x lacking a short description, and study the spectrum of TOEs spanned by two Ps, one reflecting the most compact constructive descriptions, the other the fastest way of computing everything. The former derives from generalizations of traditional computability, Solomonoff's algorithmic probability, Kolmogorov complexity, and objects more random than Chaitin's Omega, the latter from Levin's universal search and a natural resource-oriented postulate: the cumulative prior probability of all x incomputable within time t by this optimal algorithm should be 1/t. Between both Ps we find a universal cumulatively enumerable measure that dominates traditional enumerable measures; any such CEM must assign low probability to any universe lacking a short enumerating program. We derive P-specific consequences for evolving observers, inductive reasoning, quantum physics, philosophy, and the expected duration of our universe.
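As a toy illustration (not part of the paper itself) of the time-allocation scheme behind Levin's universal search, on which the "fastest" P and the 1/t postulate rest: in phase i, each binary program p of length l(p) <= i receives 2^(i - l(p)) interpreter steps, so a program's share of total runtime is proportional to its prior weight 2^(-l(p)). The function names below are illustrative, not from the source.

```python
def phase_budgets(i, max_len=None):
    """Step budgets in phase i of a Levin-search-style scheduler:
    each program of length l <= i gets 2**(i - l) steps, so shorter
    programs receive exponentially more time per phase."""
    top = i if max_len is None else min(i, max_len)
    return {l: 2 ** (i - l) for l in range(1, top + 1)}

def phase_total_steps(i):
    """Total steps spent in phase i: there are 2**l binary programs
    of length l, each budgeted 2**(i - l) steps, so the phase costs
    sum(2**l * 2**(i - l)) = i * 2**i steps in all."""
    return sum(2 ** l * 2 ** (i - l) for l in range(1, i + 1))

# In phase 4, length-1 programs get 8 steps each, length-4 programs 1 step.
print(phase_budgets(4))      # {1: 8, 2: 4, 3: 2, 4: 1}
print(phase_total_steps(10))  # 10 * 2**10 = 10240
```

Because each phase roughly doubles every program's cumulative budget, the total prior weight of programs still uncomputed after t steps shrinks inversely with t, which is the intuition behind the postulate in the abstract.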
 Publication:

arXiv e-prints
 Pub Date:
 November 2000
 arXiv:
 arXiv:quant-ph/0011122
 Bibcode:
 2000quant.ph.11122S
 Keywords:

 Quantum Physics;
 Computer Science - Artificial Intelligence;
 Computer Science - Computational Complexity;
 Computer Science - Machine Learning;
 High Energy Physics - Theory;
 Mathematical Physics;
 Mathematics - Mathematical Physics;
 Physics - Computational Physics
 E-Print:
 10 theorems, 50 pages, 100 refs, 20000 words. Minor revisions: added references