Learning Audio-Visual Embedding for Person Verification in the Wild
Abstract
Audio-visual embedding has been shown to be more robust than uni-modal embedding for person verification. Here, we propose a novel audio-visual strategy that considers aggregators from a fusion perspective. First, we introduce weight-enhanced attentive statistics pooling to face verification for the first time. We then find that a strong correlation exists between the modalities during pooling, so we propose joint attentive pooling, which uses cycle consistency to learn implicit inter-frame weights. Finally, the modalities are fused with a gated attention mechanism to obtain a robust audio-visual embedding. All proposed models are trained on the VoxCeleb2 dev dataset, and the best system obtains 0.18%, 0.27%, and 0.49% EER on the three official trial lists of VoxCeleb1, respectively, which are, to our knowledge, the best published results for person verification.
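The abstract names two building blocks that can be sketched concretely: attentive statistics pooling over frame-level features and gated attention fusion of the two modality embeddings. Below is a minimal PyTorch sketch of the *standard* forms of these components. The paper's weight-enhanced and joint (cycle-consistent) variants are not specified in the abstract, so all module names, dimensions, and the gate formula here are illustrative assumptions rather than the authors' exact method.

```python
import torch
import torch.nn as nn

class AttentiveStatsPooling(nn.Module):
    """Standard attentive statistics pooling: frame-level features are
    weighted by learned attention scores, then the weighted mean and
    standard deviation are concatenated into an utterance-level vector."""
    def __init__(self, feat_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, feat_dim)
        alpha = torch.softmax(self.attention(x), dim=1)   # (batch, frames, 1)
        mean = torch.sum(alpha * x, dim=1)                # weighted mean
        var = torch.sum(alpha * x * x, dim=1) - mean ** 2
        std = torch.sqrt(var.clamp(min=1e-8))             # weighted std
        return torch.cat([mean, std], dim=1)              # (batch, 2*feat_dim)

class GatedFusion(nn.Module):
    """Gated attention fusion: a sigmoid gate computed from both embeddings
    decides, per dimension, how much each modality contributes."""
    def __init__(self, emb_dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * emb_dim, emb_dim)

    def forward(self, audio_emb: torch.Tensor, visual_emb: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([audio_emb, visual_emb], dim=1)))
        return g * audio_emb + (1.0 - g) * visual_emb

# Usage sketch with illustrative (assumed) dimensions.
asp = AttentiveStatsPooling(feat_dim=256)
fuse = GatedFusion(emb_dim=512)
audio_frames = torch.randn(4, 100, 256)   # e.g. speaker-encoder frame features
visual_frames = torch.randn(4, 25, 256)   # e.g. face-encoder frame features
audio_emb = asp(audio_frames)             # (4, 512)
visual_emb = asp(visual_frames)           # (4, 512)
joint = fuse(audio_emb, visual_emb)       # (4, 512) audio-visual embedding
```

The per-dimension sigmoid gate lets the fused embedding lean on whichever modality is more reliable for a given sample (e.g. down-weighting noisy audio), which is the general motivation for gated fusion; the paper's joint pooling additionally couples the attention weights across modalities.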
- Publication: arXiv e-prints
- Pub Date: September 2022
- DOI: arXiv:2209.04093
- Bibcode: 2022arXiv220904093S
- Keywords: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Multimedia; Computer Science - Sound; Electrical Engineering and Systems Science - Audio and Speech Processing