FiLM: Visual Reasoning with a General Conditioning Layer
Abstract
We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning - answering image-related questions which require a multi-step, high-level process - a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.
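For concreteness, below is a minimal sketch of the feature-wise affine transformation the abstract describes, written in PyTorch-style Python. The class names (FiLM, FiLMGenerator), the layer sizes, and the use of a single linear layer to predict the modulation parameters are illustrative assumptions, not taken from the paper or its released code.

```python
# Minimal FiLM sketch (not the authors' code): each channel of a feature map
# is scaled and shifted by gamma and beta predicted from conditioning input
# (e.g., an encoded question), i.e., FiLM(x) = gamma * x + beta per channel.
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Applies a per-channel affine modulation to a conv feature map x: (B, C, H, W)."""
    def forward(self, x, gamma, beta):
        # gamma, beta: (B, C) -> broadcast over the spatial dimensions
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return gamma * x + beta

class FiLMGenerator(nn.Module):
    """Maps a conditioning vector (e.g., a question embedding) to (gamma, beta).
    A single linear layer is an assumption for brevity."""
    def __init__(self, cond_dim, num_channels):
        super().__init__()
        self.fc = nn.Linear(cond_dim, 2 * num_channels)

    def forward(self, cond):
        gamma, beta = self.fc(cond).chunk(2, dim=-1)
        return gamma, beta

# Usage: modulate a (B, C, H, W) feature map with a (B, cond_dim) condition.
x = torch.randn(4, 64, 8, 8)
cond = torch.randn(4, 128)
gen, film = FiLMGenerator(128, 64), FiLM()
gamma, beta = gen(cond)
out = film(x, gamma, beta)  # same shape as x
```

Because the modulation is feature-wise rather than element-wise, the conditioning network only needs to predict two numbers per channel, which keeps the mechanism cheap and easy to insert into an existing convolutional network.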
- Publication: arXiv e-prints
- Pub Date: September 2017
- DOI: 10.48550/arXiv.1709.07871
- arXiv: arXiv:1709.07871
- Bibcode: 2017arXiv170907871P
- Keywords: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Statistics - Machine Learning
- E-Print: AAAI 2018. Code available at http://github.com/ethanjperez/film . Extends arXiv:1707.03017