Learning Graph-Based Priors for Generalized Zero-Shot Learning
Abstract
The task of zero-shot learning (ZSL) requires correctly predicting the label of samples from classes that were unseen at training time. This is achieved by leveraging side information about class labels, such as label attributes or word embeddings. Recently, attention has shifted to the more realistic task of generalized ZSL (GZSL), where test sets contain samples from both seen and unseen classes. Recent approaches to GZSL have shown the value of generative models, which are used to generate samples from unseen classes. In this work, we incorporate an additional source of side information in the form of a relation graph over labels. We leverage this graph to learn a set of prior distributions that encourage an aligned variational autoencoder (VAE) model to learn embeddings that respect the graph structure. Using this approach we achieve improved performance over a strong baseline on the CUB and SUN benchmarks.
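A minimal sketch of the core idea, under illustrative assumptions (the function names, the neighbor-averaging scheme, and the unit-variance priors below are not taken from the paper): per-class prior means can be derived by smoothing class attribute embeddings over the label relation graph, and the VAE's KL term then pulls each sample's approximate posterior toward its class-specific prior rather than toward a single shared Gaussian.

```python
import numpy as np

def graph_prior_means(attributes, adjacency, alpha=0.5):
    """Per-class prior means from a label graph (illustrative scheme).

    Each class mean mixes its own attribute embedding with the average
    embedding of its graph neighbors, so connected classes get nearby priors.
    """
    deg = adjacency.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # isolated classes keep their own embedding
    neighbor_avg = adjacency @ attributes / deg
    return (1.0 - alpha) * attributes + alpha * neighbor_avg

def kl_to_class_prior(mu, logvar, prior_mu):
    """KL( N(mu, diag(exp(logvar))) || N(prior_mu, I) ) per sample.

    Replaces the standard-normal KL of a vanilla VAE with a KL against the
    class-specific, graph-informed prior.
    """
    return 0.5 * np.sum(
        np.exp(logvar) + (mu - prior_mu) ** 2 - 1.0 - logvar, axis=1
    )

# Toy example: 3 classes with 2-dim attribute embeddings, chain graph 0-1-2.
attrs = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
priors = graph_prior_means(attrs, adj, alpha=0.5)

# A posterior sitting exactly on its class prior (unit variance) has zero KL.
mu = priors[1:2]
logvar = np.zeros_like(mu)
kl = kl_to_class_prior(mu, logvar, priors[1:2])
```

In a full model, `kl_to_class_prior` would replace the usual KL-to-standard-normal term in the VAE objective, with `prior_mu` indexed by each sample's class label; the graph smoothing is what transfers structure from seen to unseen classes.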
- Publication:
- arXiv e-prints
- Pub Date:
- October 2020
- DOI:
- 10.48550/arXiv.2010.11369
- arXiv:
- arXiv:2010.11369
- Bibcode:
- 2020arXiv201011369S
- Keywords:
- Computer Science - Computer Vision and Pattern Recognition
- E-Print:
- Presented at AAAI 2020 Workshop on Deep Learning on Graphs: Methodologies and Applications (DLGMA'20)