SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences
Abstract
Semantic scene understanding is important for various applications. In particular, self-driving cars need a fine-grained understanding of the surfaces and objects in their vicinity. Light detection and ranging (LiDAR) provides precise geometric information about the environment and is thus part of the sensor suite of almost all self-driving cars. Despite the relevance of semantic scene understanding for this application, there is a lack of large datasets for this task based on automotive LiDAR. In this paper, we introduce a large dataset to propel research on laser-based semantic segmentation. We annotated all sequences of the KITTI Vision Odometry Benchmark and provide dense point-wise annotations for the complete $360^\circ$ field of view of the employed automotive LiDAR. We propose three benchmark tasks based on this dataset: (i) semantic segmentation of point clouds using a single scan, (ii) semantic segmentation using multiple past scans, and (iii) semantic scene completion, which requires anticipating the semantic scene in the future. We provide baseline experiments and show that more sophisticated models are needed to tackle these tasks efficiently. Our dataset not only opens the door for the development of more advanced methods, but also provides plentiful data for investigating new research directions.
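To make the dataset layout concrete, below is a minimal Python sketch for reading one scan and its point-wise annotations, following the format documented by the released SemanticKITTI tools: each LiDAR scan is a little-endian float32 binary of shape (N, 4) with x, y, z, and remission, and each label file stores one uint32 per point whose lower 16 bits hold the semantic class and upper 16 bits the instance id. The file paths in the usage example are hypothetical placeholders that follow the dataset's `sequences/<seq>/velodyne` and `sequences/<seq>/labels` directory convention.

```python
import numpy as np

def read_scan(bin_path, label_path=None):
    """Read one SemanticKITTI LiDAR scan and, optionally, its labels."""
    # Each scan: float32 binary with 4 values per point (x, y, z, remission).
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    if label_path is None:
        return points, None, None
    # Each label: one uint32 per point.
    raw = np.fromfile(label_path, dtype=np.uint32)
    semantic = raw & 0xFFFF   # lower 16 bits: semantic class
    instance = raw >> 16      # upper 16 bits: instance id
    return points, semantic, instance

# Hypothetical example paths following the dataset layout:
points, semantic, instance = read_scan(
    "sequences/00/velodyne/000000.bin",
    "sequences/00/labels/000000.label",
)
print(points.shape, np.unique(semantic))
```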
- Publication:
- arXiv e-prints
- Pub Date:
- April 2019
- DOI:
- 10.48550/arXiv.1904.01416
- arXiv:
- arXiv:1904.01416
- Bibcode:
- 2019arXiv190401416B
- Keywords:
- Computer Science - Computer Vision and Pattern Recognition;
- Computer Science - Robotics
- E-Print:
ICCV 2019. See teaser video at http://bit.ly/SemanticKITTI-teaser