Perceptron Synthesis Network: Rethinking the Action Scale Variances in Videos
Abstract
Video action recognition has been partially addressed by CNNs that stack fixed-size 3D kernels. However, these methods may under-perform because they capture only rigid spatio-temporal patterns at a single scale, while neglecting the scale variance across different action primitives. To overcome this limitation, we propose to learn optimal-scale kernels from the data. More specifically, an \textit{action perceptron synthesizer} is proposed to generate kernels from a bag of fixed-size kernels that interact through dense routing paths. To guarantee the richness of these interactions and the information capacity of the paths, we design a novel \textit{optimized feature fusion layer}. This layer establishes, for the first time, a principled universal paradigm that covers most current feature fusion techniques (e.g., channel shuffling and channel dropout). By inserting the \textit{synthesizer}, our method easily adapts traditional 2D CNNs to video understanding tasks such as action recognition with only marginal additional computation cost. The proposed method is thoroughly evaluated on several challenging datasets (i.e., Something-Something, Kinetics, and Diving48) that demand strong temporal reasoning or appearance discrimination, achieving new state-of-the-art results. In particular, our low-resolution model outperforms recent strong baselines, i.e., TSM and GST, with less than 30\% of their computation cost.
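To make the "bag of kernels" idea concrete, below is a minimal PyTorch sketch of one plausible reading of the abstract: a 3D kernel is synthesized per sample as a routed, softmax-weighted combination of several fixed-size basis kernels. The class and parameter names (`KernelBagSynthesizer`, `num_basis`, the global-pooling router) are illustrative assumptions, not the paper's actual implementation or its optimized feature fusion layer.

```python
# Hypothetical sketch: synthesize a 3D conv kernel from a bag of fixed-size
# basis kernels, with data-dependent mixing weights (an assumption, not the
# authors' released code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class KernelBagSynthesizer(nn.Module):
    """Mix a bag of fixed-size 3D basis kernels into one kernel per sample."""

    def __init__(self, in_channels, out_channels, kernel_size=3, num_basis=4):
        super().__init__()
        # Bag of basis kernels: (num_basis, out_ch, in_ch, kT, kH, kW).
        self.basis = nn.Parameter(
            torch.randn(num_basis, out_channels, in_channels,
                        kernel_size, kernel_size, kernel_size) * 0.01
        )
        # Lightweight routing head: global context -> per-basis mixing weights.
        self.router = nn.Linear(in_channels, num_basis)
        self.padding = kernel_size // 2

    def forward(self, x):
        # x: (batch, in_channels, T, H, W)
        context = x.mean(dim=(2, 3, 4))                    # (batch, in_ch)
        weights = F.softmax(self.router(context), dim=1)   # (batch, num_basis)
        outputs = []
        for i in range(x.size(0)):
            # Collapse the bag into a single synthesized kernel for sample i.
            kernel = torch.einsum("n,nodthw->odthw", weights[i], self.basis)
            outputs.append(F.conv3d(x[i:i + 1], kernel, padding=self.padding))
        return torch.cat(outputs, dim=0)


# Example usage (shapes are illustrative):
layer = KernelBagSynthesizer(in_channels=16, out_channels=32)
clip = torch.randn(2, 16, 8, 56, 56)   # (batch, channels, frames, height, width)
features = layer(clip)                  # -> (2, 32, 8, 56, 56)
```

The per-sample loop keeps the sketch readable; a grouped convolution would make the same computation batch-parallel.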
- Publication: arXiv e-prints
- Pub Date: July 2020
- DOI: 10.48550/arXiv.2007.11460
- arXiv: arXiv:2007.11460
- Bibcode: 2020arXiv200711460T
- Keywords:
  - Computer Science - Computer Vision and Pattern Recognition
  - Computer Science - Machine Learning
  - Electrical Engineering and Systems Science - Image and Video Processing