Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications
Abstract
Inverse reinforcement learning (IRL) infers a reward function from demonstrations, allowing for policy improvement and generalization. However, despite much recent interest in IRL, little work has been done to understand the minimum set of demonstrations needed to teach a specific sequential decision-making task. We formalize the problem of finding maximally informative demonstrations for IRL as a machine teaching problem where the goal is to find the minimum number of demonstrations needed to specify the reward equivalence class of the demonstrator. We extend previous work on algorithmic teaching for sequential decision-making tasks by showing a reduction to the set cover problem which enables an efficient approximation algorithm for determining the set of maximally-informative demonstrations. We apply our proposed machine teaching algorithm to two novel applications: providing a lower bound on the number of queries needed to learn a policy using active IRL and developing a novel IRL algorithm that can learn more efficiently from informative demonstrations than a standard IRL approach.
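The reduction to set cover mentioned above suggests the classic greedy approximation: repeatedly pick the candidate covering the most still-uncovered elements, which achieves a logarithmic approximation ratio. A minimal sketch of that greedy routine, assuming each demonstration is represented abstractly as the set of reward-equivalence constraints it covers (the names `universe` and `subsets` are illustrative, not from the paper):

```python
def greedy_set_cover(universe, subsets):
    """Greedy set-cover approximation: repeatedly choose the subset
    that covers the most still-uncovered elements. Achieves an
    O(ln n) approximation of the optimal cover size."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the subset with the largest overlap with what remains.
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            break  # remaining elements cannot be covered by any subset
        chosen.append(best)
        uncovered -= best
    return chosen
```

In the machine-teaching setting, `universe` would play the role of the constraints that pin down the demonstrator's reward equivalence class, and each subset the constraints induced by one candidate demonstration; the greedy loop then selects a small demonstration set covering all of them.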
 Publication:

arXiv e-prints
 Pub Date:
 May 2018
 arXiv:
 arXiv:1805.07687
 Bibcode:
 2018arXiv180507687B
 Keywords:

 Computer Science - Machine Learning;
 Statistics - Machine Learning
 E-Print:
 In proceedings of the AAAI Conference on Artificial Intelligence, 2019