Modifying RL Policies with Imagined Actions: How Predictable Policies Can Enable Users to Perform Novel Tasks
Abstract
It is crucial that users be empowered to use a robot's functionalities to creatively solve problems on the fly. A user who has access to a Reinforcement Learning (RL) based robot may want to use the robot's autonomy and their knowledge of its behavior to complete new tasks. One way to do this is for the user to take control of some of the robot's action space through teleoperation while the RL policy simultaneously controls the rest. However, an out-of-the-box RL policy may not readily facilitate this. For example, a user's control may bring the robot into a failure state from the policy's perspective, causing it to act in a way the user is not familiar with and hindering the success of the user's desired task. In this work, we formalize this problem and present Imaginary Out-of-Distribution Actions (IODA), an initial algorithm for addressing it, empowering users to leverage their expectations of a robot's behavior to accomplish new tasks.
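To make the shared-control setup described above concrete, the following is a minimal Python sketch of a loop in which the user teleoperates part of the action space while the policy drives the rest. This is not the paper's implementation of IODA: the names (`policy.act`, `get_teleop_action`, `user_mask`, `env`) are hypothetical placeholders, and the masked overwrite is simply the most direct blending rule consistent with the abstract.

```python
import numpy as np

def blend_actions(policy_action: np.ndarray,
                  user_action: np.ndarray,
                  user_mask: np.ndarray) -> np.ndarray:
    """Overwrite the user-controlled dimensions of the policy's action.

    user_mask is a boolean array over action dimensions: True where the
    user's teleoperation input takes over, False where the policy acts.
    """
    return np.where(user_mask, user_action, policy_action)

def shared_control_step(env, policy, get_teleop_action, user_mask, obs):
    # Policy proposes a full action from the current observation
    # (hypothetical interface; the paper's API may differ).
    policy_action = policy.act(obs)
    # User commands the dimensions they have taken over via teleoperation.
    user_action = get_teleop_action()
    # Execute the blended action in the environment.
    action = blend_actions(policy_action, user_action, user_mask)
    return env.step(action)
```

Under this sketch, the failure mode the abstract describes arises because the policy computed `policy_action` without knowing the user would overwrite part of it, so the resulting states can be out of distribution for the policy.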
- Publication:
- arXiv e-prints
- Pub Date:
- December 2023
- DOI:
- 10.48550/arXiv.2312.05991
- arXiv:
- arXiv:2312.05991
- Bibcode:
- 2023arXiv231205991S
- Keywords:
- Computer Science - Robotics; Computer Science - Artificial Intelligence; Computer Science - Human-Computer Interaction
- E-Print:
- Pre-print to be published in the AAAI Fall Symposium 2023 Proceedings (part of the AI-HRI Symposium)