ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
Abstract
We present ALFRED (Action Learning From Realistic Environments and Directives), a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks. ALFRED includes long, compositional tasks with non-reversible state changes to shrink the gap between research benchmarks and real-world applications. ALFRED consists of expert demonstrations in interactive visual environments for 25k natural language directives. These directives contain both high-level goals like "Rinse off a mug and place it in the coffee maker" and low-level language instructions like "Walk to the coffee maker on the right." ALFRED tasks are more complex in terms of sequence length, action space, and language than those in existing vision-and-language task datasets. We show that a baseline model based on recent embodied vision-and-language tasks performs poorly on ALFRED, suggesting that there is significant room for developing innovative grounded visual language understanding models with this benchmark.
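Since the abstract describes the task setup only in prose, the following minimal sketch illustrates the interface it implies: an agent maps a language directive plus egocentric observations to a sequence of actions. This is not code from the paper or the ALFRED repository; all class and function names are hypothetical, and a real evaluation would step an interactive simulator (AI2-THOR) rather than replay recorded frames.

```python
# Hypothetical sketch of the ALFRED-style task interface; names are
# illustrative only and do not reflect the actual ALFRED codebase.
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class Directive:
    goal: str                     # e.g. "Rinse off a mug and place it in the coffee maker."
    step_instructions: List[str]  # e.g. ["Walk to the coffee maker on the right.", ...]


@dataclass
class Episode:
    directive: Directive
    frames: List[Any] = field(default_factory=list)   # egocentric RGB observations
    actions: List[str] = field(default_factory=list)  # expert action sequence


class Agent:
    """Interface a model would implement: predict the next action from
    the directive and the visual observation history."""

    def act(self, directive: Directive, frames: List[Any]) -> str:
        raise NotImplementedError


def rollout(agent: Agent, episode: Episode, max_steps: int = 100) -> List[str]:
    """Greedy rollout over recorded frames, collecting predicted actions."""
    predicted = []
    for t in range(min(max_steps, len(episode.frames))):
        predicted.append(agent.act(episode.directive, episode.frames[: t + 1]))
    return predicted
```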
- Publication:
- arXiv e-prints
- Pub Date:
- December 2019
- DOI:
- 10.48550/arXiv.1912.01734
- arXiv:
- arXiv:1912.01734
- Bibcode:
- 2019arXiv191201734S
- Keywords:
- Computer Science - Computer Vision and Pattern Recognition;
- Computer Science - Artificial Intelligence;
- Computer Science - Computation and Language;
- Computer Science - Machine Learning;
- Computer Science - Robotics
- E-Print:
- Computer Vision and Pattern Recognition (CVPR) 2020