Kullback-Leibler-based characterizations of score-driven updates
Abstract
Score-driven models have been applied in some 400 published articles over the last decade. Much of this literature cites the optimality result in Blasques et al. (2015), which, roughly, states that sufficiently small score-driven updates are unique in locally reducing the Kullback-Leibler divergence relative to the true density for every observation. This is at odds with other well-known optimality results; the Kalman filter, for example, is optimal in a mean-squared-error sense, but occasionally moves away from the true state. We show that score-driven updates are, similarly, not guaranteed to improve the localized Kullback-Leibler divergence at every observation. The seemingly stronger result in Blasques et al. (2015) is due to their use of an improper (localized) scoring rule. Although a guaranteed improvement at every observation is unattainable, we prove that sufficiently small score-driven updates are unique in reducing the Kullback-Leibler divergence relative to the true density in expectation. This positive, albeit weaker, result justifies the continued use of score-driven models and places their information-theoretic properties on solid footing.
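The contrast drawn in the abstract, that a score-driven update can worsen the Kullback-Leibler divergence for an individual observation while improving it in expectation, can be illustrated with a minimal numerical sketch. The sketch below is not from the paper: it assumes a Gaussian location model with known scale, a unit (inverse Fisher information) scaling of the score, and an illustrative step size; all names (`mu_true`, `sigma`, `learning_rate`, `n_obs`) are assumptions introduced here.

```python
# Monte Carlo sketch (illustrative assumptions, not the paper's setup):
# a single score-driven update of the location parameter of a Gaussian
# model density, compared against the true density in KL divergence.
import numpy as np

rng = np.random.default_rng(0)

mu_true = 1.0        # location of the true density N(mu_true, sigma^2)
sigma = 1.0          # known scale, shared by true and model densities
f = 0.0              # current filtered location of the model density
learning_rate = 0.1  # "sufficiently small" update step
n_obs = 100_000      # number of simulated observations


def kl_gaussian_means(mu_p, mu_q, s):
    """KL( N(mu_p, s^2) || N(mu_q, s^2) ) when both densities share scale s."""
    return (mu_p - mu_q) ** 2 / (2.0 * s ** 2)


y = rng.normal(mu_true, sigma, size=n_obs)

# Score of log N(y; f, sigma^2) with respect to f is (y - f) / sigma^2;
# scaling by the inverse Fisher information sigma^2 gives y - f.
scaled_score = y - f

# One score-driven update per observation, each starting from the same f.
f_new = f + learning_rate * scaled_score

kl_before = kl_gaussian_means(mu_true, f, sigma)        # scalar
kl_after = kl_gaussian_means(mu_true, f_new, sigma)     # one value per draw

delta = kl_after - kl_before
print(f"share of updates that increase the KL divergence: {np.mean(delta > 0):.3f}")
print(f"average change in KL divergence (negative on average): {np.mean(delta):+.5f}")
```

Under these assumptions, roughly one in six simulated updates increases the divergence (observations drawn on the far side of the current estimate push the parameter away from the truth), while the average change is clearly negative, consistent with the expectation-based result stated in the abstract.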
- Publication:
- arXiv e-prints
- Pub Date:
- August 2024
- DOI:
- 10.48550/arXiv.2408.02391
- arXiv:
- arXiv:2408.02391
- Bibcode:
- 2024arXiv240802391D
- Keywords:
- Mathematics - Statistics Theory
- Economics - Econometrics