Learning by Playing - Solving Sparse Reward Tasks from Scratch
Abstract
We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors - from scratch - in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment - enabling it to excel at sparse-reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach.
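The core mechanism the abstract describes, a learned scheduler that chooses which auxiliary policy (intention) to execute so that a shared replay buffer fills with experience useful for every task, can be illustrated with a short sketch. The Python below is a minimal, illustrative skeleton rather than the paper's implementation: `ToyEnv`, the random placeholder policies, and the softmax bandit scheduler over observed main-task returns are all assumptions made for the example.

```python
# Minimal sketch of SAC-X-style scheduled auxiliary control (illustrative only).
import math
import random

NUM_TASKS = 4     # task 0 = sparse main task, tasks 1..3 = auxiliary tasks (assumed)
PERIOD = 50       # steps between scheduling decisions (assumed)
TEMPERATURE = 1.0 # softmax temperature for the scheduler (assumed)

class ToyEnv:
    """Stand-in environment: each step returns a reward per task."""
    def reset(self):
        self.t = 0
        return 0.0
    def step(self, action):
        self.t += 1
        # Sparse main reward (task 0) plus denser auxiliary rewards (tasks 1..).
        rewards = [1.0 if random.random() < 0.01 else 0.0]
        rewards += [random.random() * 0.1 for _ in range(NUM_TASKS - 1)]
        return random.random(), rewards, self.t >= 200

def act(task_id, state):
    """Placeholder per-task policy; in SAC-X each intention is a learned policy."""
    return random.gauss(0.0, 1.0)

# Scheduler state: running estimate of main-task return obtained while
# executing each intention; intentions are sampled via a softmax over these.
q_sched = [0.0] * NUM_TASKS
counts = [0] * NUM_TASKS

def sample_task():
    probs = [math.exp(q / TEMPERATURE) for q in q_sched]
    z = sum(probs)
    return random.choices(range(NUM_TASKS), [p / z for p in probs])[0]

replay = []  # shared replay buffer: every transition stores all task rewards

env = ToyEnv()
for episode in range(10):
    state, done = env.reset(), False
    while not done:
        task = sample_task()
        main_return = 0.0
        for _ in range(PERIOD):
            action = act(task, state)
            next_state, rewards, done = env.step(action)
            replay.append((state, action, rewards, next_state))
            main_return += rewards[0]  # scheduler cares about the sparse main task
            state = next_state
            if done:
                break
        # Update the scheduler's estimate for the executed intention.
        counts[task] += 1
        q_sched[task] += (main_return - q_sched[task]) / counts[task]

# Every intention would then be trained off-policy from `replay`,
# each reading its own reward channel from the stored reward vectors.
```

Because every transition records the full reward vector, data gathered while pursuing any auxiliary intention can train all policies off-policy; the scheduler merely biases data collection toward intentions that have historically led to sparse main-task reward.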
- Publication: arXiv e-prints
- Pub Date: February 2018
- arXiv: arXiv:1802.10567
- Bibcode: 2018arXiv180210567R
- Keywords: Computer Science - Machine Learning; Computer Science - Robotics; Statistics - Machine Learning
- E-Print: A video of the rich set of learned behaviours can be found at https://youtu.be/mPKyvocNe_M