Debiased Fine-Tuning for Vision-language Models by Prompt Regularization
Abstract
We present a new paradigm for fine-tuning large-scale vision-language pre-trained models on downstream tasks, dubbed Prompt Regularization (ProReg). Unlike traditional fine-tuning, which easily overfits the downstream task data, ProReg uses the prediction obtained by prompting the pretrained model to regularize fine-tuning. The motivation is that when the large model is prompted with "a photo of a [CLASS]", the fill-in answer depends only on the encyclopedic knowledge acquired during pretraining and is independent of the task data distribution, which is usually biased. Specifically, for each training sample's prediction during fine-tuning, we first compute the Kullback-Leibler divergence from the prompt prediction and the cross-entropy loss against the ground-truth label, and then combine them with a proposed sample-wise adaptive trade-off weight, which automatically adjusts the transfer between the pretrained and downstream domains. On various out-of-distribution benchmarks, we show the consistently strong performance of ProReg compared with conventional fine-tuning, zero-shot prompting, prompt tuning, and other state-of-the-art methods.
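The objective described in the abstract can be sketched as follows. This is a minimal illustration, assuming PyTorch-style logits from the fine-tuned model and from the frozen pretrained model prompted with "a photo of a [CLASS]"; the specific form of the sample-wise trade-off weight below is an illustrative assumption, since the abstract does not define it.

```python
import torch
import torch.nn.functional as F

def proreg_loss(student_logits, prompt_logits, labels):
    """Sketch of the loss combination described in the abstract:
    cross-entropy against ground-truth labels plus a KL term toward
    the frozen pretrained model's prompt-based prediction, mixed with
    a per-sample trade-off weight. The weighting rule used here
    (confidence of the prompt prediction on the true class) is an
    assumption, not the paper's definition."""
    # Per-sample cross-entropy w.r.t. the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels, reduction="none")

    # Per-sample KL divergence from the prompt (zero-shot) prediction.
    prompt_probs = F.softmax(prompt_logits, dim=-1)
    kl = F.kl_div(F.log_softmax(student_logits, dim=-1),
                  prompt_probs, reduction="none").sum(dim=-1)

    # Placeholder sample-wise adaptive weight: rely on the prompt
    # prediction more when it is confident about the true class.
    with torch.no_grad():
        alpha = prompt_probs.gather(1, labels.unsqueeze(1)).squeeze(1)

    return ((1.0 - alpha) * ce + alpha * kl).mean()
```

In use, `student_logits` would come from the model being fine-tuned and `prompt_logits` from a single forward pass of the frozen pretrained model over the prompted class names, so the regularizer adds little training overhead.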
- Publication:
- arXiv e-prints
- Pub Date:
- January 2023
- DOI:
- 10.48550/arXiv.2301.12429
- arXiv:
- arXiv:2301.12429
- Bibcode:
- 2023arXiv230112429Z
- Keywords:
- Computer Science - Computer Vision and Pattern Recognition
- E-Print:
- AAAI2023 accepted