Temporal-Difference Networks for Dynamical Systems with Continuous Observations and Actions
Abstract
Temporal-difference (TD) networks are a class of predictive state representations that use well-established TD methods to learn models of partially observable dynamical systems. Previous research with TD networks has dealt only with dynamical systems with finite sets of observations and actions. We present an algorithm for learning TD network representations of dynamical systems with continuous observations and actions. Our results show that the algorithm is capable of learning accurate and robust models of several noisy continuous dynamical systems. The algorithm presented here is the first fully incremental method for learning a predictive representation of a continuous dynamical system.
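To make the setting concrete, below is a minimal illustrative sketch of a TD-network-style update on a simple chain of predictions, with a scalar continuous observation and linear function approximation. It is not the algorithm presented in the paper: the chain structure, the toy signal, and all names (`n_nodes`, `alpha`, `features`) are assumptions chosen purely for illustration, and actions are omitted for brevity.

```python
# Illustrative sketch only: a TD-network-style chain of predictions over a
# continuous observation stream, NOT the paper's algorithm. Node k estimates
# the observation (k+1) steps ahead; deeper nodes bootstrap off shallower ones.
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 4               # number of prediction nodes (hypothetical)
n_features = n_nodes + 2  # previous predictions + current observation + bias
W = rng.normal(scale=0.1, size=(n_nodes, n_features))
alpha = 0.05              # step size (hypothetical value)

def features(preds, obs):
    # Feature vector: previous predictions, current continuous observation, bias.
    return np.concatenate([preds, [obs, 1.0]])

preds = np.zeros(n_nodes)
obs = 0.0

for t in range(2000):
    x = features(preds, obs)
    preds = W @ x                                       # predictions at time t
    next_obs = np.sin(0.05 * t) + 0.01 * rng.normal()   # toy continuous signal

    x_next = features(preds, next_obs)
    next_preds = W @ x_next                             # predictions at time t+1

    # TD-style targets on the chain: node 0 is trained toward the next
    # observation; node k bootstraps off node k-1's prediction at t+1.
    target = np.empty(n_nodes)
    target[0] = next_obs
    target[1:] = next_preds[:-1]

    W += alpha * np.outer(target - preds, x)            # semi-gradient TD update
    obs = next_obs
```

In the paper's setting each prediction answers a question about future observations conditioned on actions; the sketch drops actions and uses a single scalar observation to keep the update rule visible.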
- Publication: arXiv e-prints
- Pub Date: May 2012
- DOI: 10.48550/arXiv.1205.2608
- arXiv: arXiv:1205.2608
- Bibcode: 2012arXiv1205.2608V
- Keywords: Computer Science - Machine Learning; Statistics - Machine Learning
- E-Print: Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI 2009)