Online Natural Gradient as a Kalman Filter
Abstract
We cast Amari's natural gradient in statistical learning as a specific case of Kalman filtering. Namely, applying an extended Kalman filter to estimate a fixed unknown parameter of a probabilistic model from a series of observations is rigorously equivalent to estimating this parameter via an online stochastic natural gradient descent on the log-likelihood of the observations. In the i.i.d. case, this relation is a consequence of the "information filter" phrasing of the extended Kalman filter. In the recurrent (state space, non-i.i.d.) case, we prove that the joint Kalman filter over states and parameters is a natural gradient on top of real-time recurrent learning (RTRL), a classical algorithm to train recurrent models. This exact algebraic correspondence provides relevant interpretations for natural gradient hyperparameters such as learning rates or initialization and regularization of the Fisher information matrix.
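The i.i.d. equivalence claimed in the abstract can be checked numerically in the simplest setting. The sketch below (an illustration, not code from the paper; the scalar linear-Gaussian model and all variable names are assumptions) runs, side by side, a Kalman filter for a static parameter and an online natural-gradient update in "information" form, where the accumulated Fisher information plays the role of the inverse posterior covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, sigma2 = 2.0, 0.5
xs = rng.normal(size=50)
ys = theta_true * xs + rng.normal(scale=np.sqrt(sigma2), size=50)

# Kalman filter for the static parameter theta (no process noise).
theta_kf, P = 0.0, 10.0          # prior mean and prior variance
# Online natural gradient in "information filter" form: J is the
# accumulated Fisher information, initialized to the prior precision.
theta_ng, J = 0.0, 1.0 / 10.0

for x, y in zip(xs, ys):
    # --- Kalman update ---
    K = P * x / (x * x * P + sigma2)       # Kalman gain
    theta_kf += K * (y - theta_kf * x)
    P *= (1.0 - K * x)
    # --- natural-gradient update ---
    J += x * x / sigma2                    # Fisher accumulation
    theta_ng += (x * (y - theta_ng * x) / sigma2) / J

print(abs(theta_kf - theta_ng))  # agrees to numerical precision
```

The two trajectories coincide step by step because the posterior precision 1/P after a Kalman update equals 1/P + x²/σ², i.e. exactly the Fisher accumulation; the prior variance here corresponds to the Fisher-matrix initialization and regularization the abstract mentions.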
Publication: arXiv e-prints
Pub Date: March 2017
arXiv: arXiv:1703.00209
Bibcode: 2017arXiv170300209O
Keywords:
 Statistics - Machine Learning;
 Mathematics - Optimization and Control
E-Print: 3rd version: expanded intro