Deep visual-semantic for crowded video understanding
Abstract
Visual-semantic features play a vital role in crowded video understanding. Convolutional Neural Networks (CNNs) have achieved significant breakthroughs in learning representations from images. However, learning visual-semantic features, and extracting them effectively for video analysis, remains a challenging task. In this study, we propose a novel visual-semantic method that captures both appearance and dynamic representations. In particular, we propose a spatial context method based on fractional Fisher vector (FV) encoding of CNN features, which can be regarded as our main contribution. In addition, to capture temporal context information, we also apply the fractional encoding method to dynamic images. Experimental results on the WWW crowd video dataset demonstrate that the proposed method outperforms the state of the art.
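The abstract does not spell out the fractional FV encoding itself, but the standard improved Fisher vector pipeline it builds on is well known: fit a diagonal-covariance GMM over local CNN descriptors, stack the normalized gradients with respect to the component means and standard deviations, then apply power and L2 normalization. The sketch below implements that baseline; the tunable exponent `alpha` is an assumption standing in for the paper's "fractional" variant (with `alpha=0.5` it reduces to the usual signed square root), and the random descriptors are purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm, alpha=0.5):
    """Encode local descriptors (N, D) as a Fisher vector of length 2*K*D.

    alpha is the power-normalization exponent; 0.5 gives the standard
    improved FV, and a tunable fractional exponent is our stand-in for
    the paper's fractional encoding (an assumption, not the exact method).
    """
    N, D = descriptors.shape
    gamma = gmm.predict_proba(descriptors)       # (N, K) soft assignments
    w = gmm.weights_                             # (K,) mixture weights
    mu = gmm.means_                              # (K, D) component means
    sigma = np.sqrt(gmm.covariances_)            # (K, D) diagonal std devs

    # Whitened offsets of each descriptor from each component: (N, K, D)
    diff = (descriptors[:, None, :] - mu[None]) / sigma[None]

    # Gradients w.r.t. means and standard deviations (Perronnin et al. form)
    g_mu = (gamma[..., None] * diff).sum(0) / (N * np.sqrt(w)[:, None])
    g_sigma = (gamma[..., None] * (diff ** 2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])

    fv = np.concatenate([g_mu.ravel(), g_sigma.ravel()])
    fv = np.sign(fv) * np.abs(fv) ** alpha       # power (fractional) normalization
    return fv / (np.linalg.norm(fv) + 1e-12)     # L2 normalization

# Toy usage: encode synthetic "CNN feature" descriptors (random stand-ins).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                    # 500 local descriptors, D = 8
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(X)
fv = fisher_vector(X, gmm)
print(fv.shape)                                  # 2 * K * D = (64,)
```

The same encoder would be applied twice in the paper's setting: once over spatial CNN feature maps (appearance) and once over features of dynamic images (temporal context), with the two encodings combined for classification.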
- Publication:
- MIPPR 2017: Pattern Recognition and Computer Vision
- Pub Date:
- March 2018
- DOI:
- 10.1117/12.2285848
- Bibcode:
- 2018SPIE10609E..1ED