CentralNet: a Multilayer Approach for Multimodal Fusion
Abstract
This paper proposes a novel multimodal fusion approach, aiming to make the best possible decisions by integrating information coming from multiple media. While most past multimodal approaches either project the features of the different modalities into a common space, or coordinate the representations of each modality through the use of constraints, our approach borrows from both visions. More specifically, assuming each modality can be processed by a separate deep convolutional network, allowing decisions to be made independently from each modality, we introduce a central network linking the modality-specific networks. This central network not only provides a common feature embedding but also regularizes the modality-specific networks through multi-task learning. The proposed approach is validated on four different computer vision tasks, on which it consistently improves the accuracy of existing multimodal fusion approaches.
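The abstract describes a central network that combines the hidden representations of the modality-specific networks layer by layer and is trained jointly with per-modality classification losses. Below is a minimal sketch of this idea in PyTorch; the two-modality setup, layer sizes, scalar fusion weights, and equal loss weighting are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal CentralNet-style sketch (hypothetical sizes, not the paper's exact model).
# Two modality branches produce hidden features at each layer; a central branch
# fuses them with learnable scalar weights and all three heads are trained jointly.
import torch
import torch.nn as nn

class CentralNetSketch(nn.Module):
    def __init__(self, dim_a=128, dim_b=128, hidden=256, n_classes=10):
        super().__init__()
        # Modality-specific branches (two hidden layers each).
        self.a1, self.a2 = nn.Linear(dim_a, hidden), nn.Linear(hidden, hidden)
        self.b1, self.b2 = nn.Linear(dim_b, hidden), nn.Linear(hidden, hidden)
        # Central branch, same depth as the modality branches.
        self.c1, self.c2 = nn.Linear(hidden, hidden), nn.Linear(hidden, hidden)
        # Learnable scalar fusion weights per layer.
        self.w1 = nn.Parameter(torch.ones(2))   # (modality A, modality B)
        self.w2 = nn.Parameter(torch.ones(3))   # (central, modality A, modality B)
        # One classifier per branch: the multi-task heads.
        self.head_a = nn.Linear(hidden, n_classes)
        self.head_b = nn.Linear(hidden, n_classes)
        self.head_c = nn.Linear(hidden, n_classes)
        self.act = nn.ReLU()

    def forward(self, xa, xb):
        ha1, hb1 = self.act(self.a1(xa)), self.act(self.b1(xb))
        # First central layer: weighted sum of the first-layer modality features.
        hc1 = self.act(self.c1(self.w1[0] * ha1 + self.w1[1] * hb1))
        ha2, hb2 = self.act(self.a2(ha1)), self.act(self.b2(hb1))
        # Next central layer: weighted sum of previous central and modality features.
        hc2 = self.act(self.c2(self.w2[0] * hc1 + self.w2[1] * ha2 + self.w2[2] * hb2))
        return self.head_a(ha2), self.head_b(hb2), self.head_c(hc2)

if __name__ == "__main__":
    model = CentralNetSketch()
    xa, xb = torch.randn(4, 128), torch.randn(4, 128)
    y = torch.randint(0, 10, (4,))
    ce = nn.CrossEntropyLoss()
    logits_a, logits_b, logits_c = model(xa, xb)
    # Multi-task objective: the central loss drives the fused prediction while
    # the modality-specific losses regularize their respective branches.
    loss = ce(logits_c, y) + ce(logits_a, y) + ce(logits_b, y)
    loss.backward()
```

In this sketch, the per-modality heads play the role of auxiliary tasks, so each branch remains a usable unimodal classifier while the central branch learns the fused representation.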
- Publication: arXiv e-prints
- Pub Date: August 2018
- DOI: 10.48550/arXiv.1808.07275
- arXiv: arXiv:1808.07275
- Bibcode: 2018arXiv180807275V
- Keywords: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Multimedia
- E-Print: European Conference on Computer Vision Workshops: Multimodal Learning and Applications, Sep 2018, Munich, Germany. https://mula2018.github.io/