Streaming Dense Video Captioning
Abstract
An ideal model for dense video captioning -- predicting captions localized temporally in a video -- should be able to handle long input videos, predict rich, detailed textual descriptions, and produce outputs before processing the entire video. Current state-of-the-art models, however, process a fixed number of downsampled frames and make a single full prediction only after seeing the whole video. We propose a streaming dense video captioning model that consists of two novel components: First, we propose a new memory module, based on clustering incoming tokens, which can handle arbitrarily long videos because the memory is of a fixed size. Second, we develop a streaming decoding algorithm that enables our model to make predictions before the entire video has been processed. Our model achieves this streaming ability and significantly improves the state of the art on three dense video captioning benchmarks: ActivityNet, YouCook2 and ViTT. Our code is released at https://github.com/google-research/scenic.
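To make the fixed-size clustering memory concrete, here is a minimal NumPy sketch of the general idea: memory tokens act as weighted cluster centers, and each incoming frame's tokens are absorbed by a few weighted K-means-style iterations, so the memory stays at K tokens however long the video runs. The function name `update_memory`, the shapes, and the hyperparameters (memory size, iteration count) are illustrative assumptions, not the paper's released implementation; see the scenic repository linked above for the authoritative code.

```python
import numpy as np

def update_memory(memory, weights, new_tokens, iters=2):
    """One streaming update of a fixed-size clustering memory (sketch).

    memory:     (K, D) cluster centers, i.e. the memory tokens.
    weights:    (K,)   number of past tokens each center summarizes.
    new_tokens: (N, D) tokens from newly arrived frames, weight 1 each.
    Returns updated (memory, weights); memory stays K tokens regardless
    of how many frames have been processed so far.
    """
    points = np.concatenate([memory, new_tokens], axis=0)           # (K+N, D)
    point_w = np.concatenate([weights, np.ones(len(new_tokens))])   # (K+N,)
    centers = memory.astype(np.float64)                             # work on a copy
    for _ in range(iters):
        # Assign every point to its nearest current center.
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)                                  # (K+N,)
        # Recompute each center as the weighted mean of its points;
        # empty clusters keep their previous center.
        for k in range(len(centers)):
            mask = assign == k
            if mask.any():
                w = point_w[mask, None]
                centers[k] = (w * points[mask]).sum(0) / w.sum()
    # Each center's new weight is the total weight it absorbed
    # (floored at 1 so empty clusters are not weighted zero).
    new_w = np.array([point_w[assign == k].sum() for k in range(len(centers))])
    return centers, np.maximum(new_w, 1.0)

# Toy stream: memory footprint is constant while tokens keep arriving.
rng = np.random.default_rng(0)
memory, weights = rng.normal(size=(8, 16)), np.ones(8)
for _ in range(100):                      # 100 "frames" of 4 tokens each
    memory, weights = update_memory(memory, weights, rng.normal(size=(4, 16)))
assert memory.shape == (8, 16)
```

Because the memory is a fixed set of K weighted centroids, the cost of each update is independent of video length, which is what allows arbitrarily long inputs; the streaming decoder described in the paper can then read captions off this memory at intermediate points rather than waiting for the full video.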
- Publication:
- arXiv e-prints
- Pub Date:
- April 2024
- DOI:
- 10.48550/arXiv.2404.01297
- arXiv:
- arXiv:2404.01297
- Bibcode:
- 2024arXiv240401297Z
- Keywords:
- Computer Science - Computer Vision and Pattern Recognition
- E-Print:
- CVPR 2024. Code is available at https://github.com/google-research/scenic/tree/main/scenic/projects/streaming_dvc