Time-Scale Separation in Q-Learning: Extending TD($\Delta$) for Action-Value Function Decomposition
Abstract
Q-Learning is a fundamental off-policy reinforcement learning (RL) algorithm whose objective is to approximate action-value functions in order to learn optimal policies. Nonetheless, it struggles to balance bias against variance, particularly in the presence of long-term rewards. This paper introduces Q($\Delta$)-Learning, an extension of TD($\Delta$) to the Q-Learning framework. Q($\Delta$)-Learning enables efficient learning over several time scales by decomposing the Q-function into components associated with distinct discount factors. This approach improves learning stability and scalability, especially for long-horizon tasks where discounting bias can impede convergence. Our method ensures that each component of the Q($\Delta$)-function is learned individually, allowing faster convergence on shorter time scales and improving the learning of longer ones. We demonstrate through theoretical analysis and empirical evaluations on standard benchmarks such as Atari that Q($\Delta$)-Learning outperforms conventional Q-Learning and TD learning methods in both tabular and deep RL settings.
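As a rough sketch of the decomposition described above, following the delta-estimator construction used in TD($\Delta$) (the symbols $\gamma_z$, $W_z$, and $Z$ below are illustrative notation, not taken from this paper): given an increasing sequence of discount factors $\gamma_0 < \gamma_1 < \dots < \gamma_Z$, the components

$$W_0(s,a) := Q_{\gamma_0}(s,a), \qquad W_z(s,a) := Q_{\gamma_z}(s,a) - Q_{\gamma_{z-1}}(s,a), \quad z = 1,\dots,Z,$$

sum to the full action-value function, $Q_{\gamma_Z}(s,a) = \sum_{z=0}^{Z} W_z(s,a)$. Under this construction, each $W_z$ can be updated with its own bootstrapped target, so components with small discount factors converge quickly and serve as a base for the longer-horizon components; the paper's exact update rules and guarantees are given in the full text.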
- Publication: arXiv e-prints
- Pub Date: November 2024
- DOI: 10.48550/arXiv.2411.14019
- arXiv: arXiv:2411.14019
- Bibcode: 2024arXiv241114019H
- Keywords: Computer Science - Machine Learning; Statistics - Machine Learning; F.2.2; I.2.7
- E-Print: 17 pages