Adjusted Viterbi training
Abstract
We study modifications of the Viterbi Training (VT) algorithm for estimating emission parameters in Hidden Markov Models (HMMs) in general, and in mixture models in particular. Motivated by applications of VT to HMMs used in speech recognition, natural language modeling, image analysis, and bioinformatics, we investigate the possibility of alleviating the inconsistency of VT while controlling the amount of extra computation. Specifically, we propose to enable VT to recover the true parameter values asymptotically, as the EM algorithm does. This relies on the infinite Viterbi alignment and an associated limiting probability distribution. This paper, however, focuses on mixture models, an important special case of HMMs in which the limiting distribution can always be computed exactly; finding this limiting distribution for general HMMs is a more challenging task and the subject of our ongoing investigation. A simulation on a univariate Gaussian mixture shows that our central algorithm (VA1) can dramatically improve accuracy at little cost in computation time. We also present VA2, a more mathematically advanced correction to VT, and verify by simulation its fast convergence and high accuracy; its computational feasibility remains to be investigated in future work.
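For context, the baseline VT procedure (sometimes called classification EM) that the paper adjusts can be sketched for a two-component univariate Gaussian mixture as follows. This is a minimal illustration of plain VT only, not the paper's VA1 or VA2 corrections; the function and parameter names are illustrative.

```python
# Plain Viterbi Training (hard-assignment EM) for a two-component
# univariate Gaussian mixture: alternate a "Viterbi alignment" step
# (assign each point to its most likely component) with parameter
# re-estimation from the hard assignments. Illustrative sketch only.
import numpy as np

def viterbi_train(x, means, sds, weights, n_iter=50):
    means = np.asarray(means, dtype=float).copy()
    sds = np.asarray(sds, dtype=float).copy()
    weights = np.asarray(weights, dtype=float).copy()
    for _ in range(n_iter):
        # Alignment step: per-point log-density of each component
        logp = (np.log(weights)
                - np.log(sds)
                - 0.5 * ((x[:, None] - means) / sds) ** 2)
        z = np.argmax(logp, axis=1)  # hard assignment
        # Re-estimation step from the hard-assigned subsamples
        for k in range(means.size):
            xk = x[z == k]
            if xk.size:
                weights[k] = xk.size / x.size
                means[k] = xk.mean()
                sds[k] = max(xk.std(), 1e-3)  # guard against collapse
    return means, sds, weights

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 500),
                    rng.normal(3.0, 1.0, 500)])
m, s, w = viterbi_train(x, means=[-1.0, 1.0], sds=[1.0, 1.0],
                        weights=[0.5, 0.5])
```

Because each component is re-estimated from a truncated subsample, plain VT is biased (inconsistent) even with well-separated components; the adjusted algorithms in the paper are designed to correct exactly this effect.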
- Publication: arXiv Mathematics e-prints
- Pub Date: June 2004
- DOI: 10.48550/arXiv.math/0406237
- arXiv: arXiv:math/0406237
- Bibcode: 2004math......6237L
- Keywords: Mathematics - Statistics; Mathematics - Probability; 62F12; 68T10; 92D20; 62H12
- E-Print: 15 pages, 1 PostScript figure