MS-LSTM: Exploring Spatiotemporal Multiscale Representations in Video Prediction Domain
Abstract
The drastic variation of motion across spatial and temporal scales makes video prediction extremely challenging. Existing RNN models improve performance by deepening or widening the network, extracting multi-scale features of the video only by stacking layers; this is inefficient and incurs prohibitive training costs (memory, FLOPs, and training time). In contrast, this paper proposes MS-LSTM, a model designed wholly from a spatiotemporal multi-scale perspective. Beyond stacked layers, MS-LSTM incorporates two additional efficient multi-scale designs to fully capture spatiotemporal context: LSTMs arranged in mirrored pyramid structures to construct spatial multi-scale representations, and LSTMs with different convolution kernel sizes to construct temporal multi-scale representations. We theoretically analyze the training cost and performance of MS-LSTM and its components. Detailed comparison experiments against twelve baseline models on four video datasets show that MS-LSTM achieves better performance at lower training cost.
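As a rough illustration of the two designs named in the abstract, below is a minimal PyTorch sketch of one recurrent step: a mirrored pyramid of ConvLSTM cells at full and half resolution (spatial multi-scale), with two parallel cells of different kernel sizes at the bottleneck (temporal multi-scale). The `ConvLSTMCell` and `MultiScaleStep` classes, the channel width, the pooling/upsampling operators, and the kernel sizes (3 and 5) are hypothetical choices for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMCell(nn.Module):
    """A standard ConvLSTM cell; all gates share one convolution whose
    kernel size sets the spatial extent of the recurrent context mixing."""
    def __init__(self, in_ch, hid_ch, kernel_size):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = self.conv(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class MultiScaleStep(nn.Module):
    """Hypothetical MS-LSTM-style step: full -> 1/2 -> full resolution
    (mirrored pyramid), with two bottleneck cells of different kernel
    sizes fused by a 1x1 convolution."""
    def __init__(self, ch):
        super().__init__()
        self.enc = ConvLSTMCell(ch, ch, 3)    # full resolution
        self.mid3 = ConvLSTMCell(ch, ch, 3)   # 1/2 resolution, small kernel
        self.mid5 = ConvLSTMCell(ch, ch, 5)   # 1/2 resolution, large kernel
        self.fuse = nn.Conv2d(2 * ch, ch, 1)  # merge temporal branches
        self.dec = ConvLSTMCell(ch, ch, 3)    # mirrored full resolution

    def forward(self, x, states):
        s_enc, s3, s5, s_dec = states
        h, s_enc = self.enc(x, s_enc)
        low = F.avg_pool2d(h, 2)              # down the pyramid
        h3, s3 = self.mid3(low, s3)
        h5, s5 = self.mid5(low, s5)
        h = self.fuse(torch.cat([h3, h5], dim=1))
        h = F.interpolate(h, scale_factor=2)  # back up the mirror
        h, s_dec = self.dec(h, s_dec)
        return h, (s_enc, s3, s5, s_dec)

# Usage sketch: one step on a 64x64 feature map with zero-initialized states.
step = MultiScaleStep(ch=16)
x = torch.randn(1, 16, 64, 64)
z_full = (torch.zeros(1, 16, 64, 64),) * 2
z_half = (torch.zeros(1, 16, 32, 32),) * 2
h, states = step(x, (z_full, z_half, z_half, z_full))
```

Unrolling such a step over the input frames, and feeding `states` back in at each step, yields the recurrent prediction loop; the design choice is that multi-scale context comes from resolution changes and kernel-size variation rather than from stacking many identical layers.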
- Publication:
- arXiv e-prints
- Pub Date:
- April 2023
- DOI:
- 10.48550/arXiv.2304.07724
- arXiv:
- arXiv:2304.07724
- Bibcode:
- 2023arXiv230407724M
- Keywords:
- Computer Science - Computer Vision and Pattern Recognition
- E-Print:
- arXiv admin note: substantial text overlap with arXiv:2206.03010