Flowing ConvNets for Human Pose Estimation in Videos
Abstract
The objective of this work is human pose estimation in videos, where multiple frames are available. We investigate a ConvNet architecture that is able to benefit from temporal context by combining information across the multiple frames using optical flow. To this end we propose a network architecture with the following novelties: (i) a deeper network than previously investigated for regressing heatmaps; (ii) spatial fusion layers that learn an implicit spatial model; (iii) optical flow is used to align heatmap predictions from neighbouring frames; and (iv) a final parametric pooling layer which learns to combine the aligned heatmaps into a pooled confidence map. We show that this architecture outperforms a number of others, including one that uses optical flow solely at the input layers, one that regresses joint coordinates directly, and one that predicts heatmaps without spatial fusion. The new architecture outperforms the state of the art by a large margin on three video pose estimation datasets, including the very challenging Poses in the Wild dataset, and outperforms other deep methods that don't use a graphical model on the single-image FLIC benchmark (and also Chen & Yuille and Tompson et al. in the high precision region).
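As a rough illustration of the flow-alignment and parametric pooling steps described in the abstract, the sketch below warps per-frame joint heatmaps onto the current frame using dense optical flow, then combines the aligned heatmaps with a learned grouped 1x1 convolution. This is a minimal PyTorch sketch, not the authors' implementation: the tensor shapes, the pixel-unit flow convention, the channel layout, and the names `warp_heatmaps` and `ParametricPooling` are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_heatmaps(heatmaps, flow):
    """Warp neighbouring-frame joint heatmaps onto the current frame.

    heatmaps: (N, J, H, W) heatmaps predicted for a neighbouring frame.
    flow:     (N, 2, H, W) optical flow from the current frame to that
              neighbour, in pixel units (an assumed convention for this sketch).
    """
    n, _, h, w = heatmaps.shape
    device = heatmaps.device
    # Base grid of (x, y) pixel coordinates for the current frame.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=device),
        torch.arange(w, device=device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0).expand(n, -1, -1, -1)
    # Follow the flow to find where each pixel lands in the neighbouring frame.
    coords = base + flow
    # Normalise coordinates to [-1, 1], as required by grid_sample.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(heatmaps, grid, mode="bilinear", align_corners=True)


class ParametricPooling(nn.Module):
    """Learned pooling of flow-aligned heatmaps across T frames.

    Assumes the input channels are grouped by joint: joint 0's T warped
    heatmaps first, then joint 1's, and so on. A grouped 1x1 convolution then
    pools each joint's heatmaps across frames independently.
    """

    def __init__(self, num_joints, num_frames):
        super().__init__()
        self.pool = nn.Conv2d(
            num_joints * num_frames, num_joints, kernel_size=1, groups=num_joints
        )

    def forward(self, aligned_heatmaps):
        # aligned_heatmaps: (N, J * T, H, W) heatmaps warped to the current frame.
        return self.pool(aligned_heatmaps)  # pooled confidence map: (N, J, H, W)
```

The abstract describes the final layer as a parametric pooling of the aligned heatmaps; the grouped 1x1 convolution above is one plausible reading of that, combining each joint's warped heatmaps across frames with learned weights rather than simple averaging.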
- Publication: arXiv e-prints
- Pub Date: June 2015
- DOI: 10.48550/arXiv.1506.02897
- arXiv: arXiv:1506.02897
- Bibcode: 2015arXiv150602897P
- Keywords: Computer Science - Computer Vision and Pattern Recognition
- E-Print: ICCV'15