Contrastive Learning for Weakly Supervised Phrase Grounding
Abstract
Phrase grounding, the problem of associating image regions with caption words, is a crucial component of vision-language tasks. We show that phrase grounding can be learned by optimizing word-region attention to maximize a lower bound on the mutual information between images and caption words. Given pairs of images and captions, we maximize the compatibility of the attention-weighted regions with the words in the corresponding caption, relative to non-corresponding image-caption pairs. A key idea is to construct effective negative captions for learning through language-model-guided word substitutions. Training with our negatives yields a $\sim10\%$ absolute gain in accuracy over randomly sampled negatives drawn from the training data. Our weakly supervised phrase grounding model, trained on COCO-Captions, achieves $76.7\%$ accuracy on the Flickr30K Entities benchmark, a healthy gain of $5.7\%$.
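The contrastive objective described above can be sketched in a few lines: each caption word attends over image regions, the caption-image compatibility is the agreement between words and their attention-weighted regions, and an InfoNCE-style loss scores the true caption against negative captions. This is a minimal numpy illustration, not the authors' implementation; the feature dimensions, the dot-product attention, and the mean-pooled score are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def word_region_score(word_feats, region_feats):
    """Compatibility of one caption with one image.

    word_feats:   (W, D) word embeddings
    region_feats: (R, D) region features
    Each word attends over regions; the score is the mean dot product
    between each word and its attention-weighted region vector.
    """
    logits = word_feats @ region_feats.T          # (W, R) attention logits
    attn = softmax(logits, axis=1)                # each word attends over regions
    attended = attn @ region_feats                # (W, D) attended regions
    return float(np.mean(np.sum(word_feats * attended, axis=1)))

def info_nce_loss(region_feats, pos_words, neg_words_list):
    """InfoNCE-style lower bound on image-caption mutual information:
    -log softmax probability of the true caption's score among the
    true caption and the negative captions."""
    scores = [word_region_score(pos_words, region_feats)]
    scores += [word_region_score(w, region_feats) for w in neg_words_list]
    return float(-np.log(softmax(np.array(scores))[0]))

# Toy usage with random features (stand-ins for detector / language features).
rng = np.random.default_rng(0)
regions = rng.normal(size=(5, 8))                       # 5 regions, dim 8
true_caption = regions[:3].copy()                       # words aligned to regions
negatives = [rng.normal(size=(3, 8)) for _ in range(4)] # 4 negative captions
loss = info_nce_loss(regions, true_caption, negatives)
```

A caption whose words align with the image's regions attends sharply to its matching regions and receives a high compatibility score, so its contrastive loss is small; harder negatives (e.g. the paper's language-model word substitutions, rather than the random negatives used here) tighten the bound further.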
- Publication: arXiv e-prints
- Pub Date: June 2020
- DOI:
- arXiv: arXiv:2006.09920
- Bibcode: 2020arXiv200609920G
- Keywords: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Computation and Language; Computer Science - Machine Learning; Statistics - Machine Learning
- E-Print: ECCV 2020 (spotlight paper), Project page: http://tanmaygupta.info/info-ground