Objectness-Aware Few-Shot Semantic Segmentation
Abstract
Few-shot semantic segmentation models aim to segment images after learning from only a few annotated examples. A key challenge for these models is avoiding overfitting, since only limited training data is available. While prior works usually limit overall model capacity to alleviate overfitting, this hampers segmentation accuracy. We demonstrate how to increase overall model capacity and achieve improved performance by introducing objectness, which is class-agnostic and therefore not prone to overfitting, for complementary use with class-specific features. Extensive experiments demonstrate the versatility of our simple approach of introducing objectness for different base architectures that rely on different data loaders and training schedules (DENet, PFENet), as well as with different backbone models (ResNet-50, ResNet-101, and HRNetV2-W48). Given only one annotated example of an unseen category, experiments show that our method outperforms state-of-the-art methods with respect to mIoU by at least 4.7% and 1.5% on PASCAL-5^i and COCO-20^i, respectively.
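The abstract only states the high-level idea of pairing a class-agnostic objectness prior with class-specific features. The sketch below illustrates one way such a fusion could look in PyTorch; the module names (`ObjectnessBranch`, `ObjectnessAwareFusion`), channel sizes, and the multiplicative fusion are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch (assumed design, not the authors' released code): a class-agnostic
# objectness branch whose output gates class-specific query features before the
# final segmentation head.
import torch
import torch.nn as nn


class ObjectnessBranch(nn.Module):
    """Predicts a class-agnostic objectness map from feature maps."""

    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel probability of belonging to any object
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.head(feat)  # (B, 1, H, W)


class ObjectnessAwareFusion(nn.Module):
    """Combines class-specific query features with the objectness prior."""

    def __init__(self, feat_channels: int = 256):
        super().__init__()
        self.objectness = ObjectnessBranch(in_channels=feat_channels)
        self.classifier = nn.Conv2d(feat_channels, 2, kernel_size=1)  # fg/bg logits

    def forward(self, class_specific_feat: torch.Tensor) -> torch.Tensor:
        obj_map = self.objectness(class_specific_feat)  # class-agnostic prior
        fused = class_specific_feat * obj_map           # suppress non-object regions
        return self.classifier(fused)                   # (B, 2, H, W) segmentation logits


# Usage sketch: a 1-shot query feature map from some backbone stage (e.g. ResNet-50).
if __name__ == "__main__":
    query_feat = torch.randn(1, 256, 60, 60)
    logits = ObjectnessAwareFusion(feat_channels=256)(query_feat)
    print(logits.shape)  # torch.Size([1, 2, 60, 60])
```

Because the objectness branch is trained without class labels, its capacity can be increased without the overfitting risk that the abstract attributes to class-specific components.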
- Publication: arXiv e-prints
- Pub Date: April 2020
- DOI: 10.48550/arXiv.2004.02945
- arXiv: arXiv:2004.02945
- Bibcode: 2020arXiv200402945Z
- Keywords: Computer Science - Computer Vision and Pattern Recognition