Rethinking Pre-trained Feature Extractor Selection in Multiple Instance Learning for Whole Slide Image Classification
Abstract
Multiple instance learning (MIL) has become a preferred method for gigapixel whole slide image (WSI) classification without requiring patch-level annotations. Current MIL research primarily relies on embedding-based approaches, which extract patch features using a pre-trained feature extractor and aggregate them for slide-level prediction. Despite the critical role of feature extraction, there is limited guidance on selecting feature extractors that maximize WSI classification performance. This study addresses this gap by systematically evaluating MIL feature extractors across three dimensions: pre-training dataset, backbone model, and pre-training method. Extensive experiments were conducted on two public WSI datasets (TCGA-NSCLC and Camelyon16) using four state-of-the-art (SOTA) MIL models. Our findings reveal that selecting a robust self-supervised learning (SSL) method has a greater impact on performance than relying solely on an in-domain pre-training dataset. Additionally, prioritizing Transformer-based backbones with deeper architectures over CNN-based models and using larger, more diverse pre-training datasets significantly enhances classification outcomes. We believe these insights provide practical guidance for optimizing WSI classification and help explain the performance advantages of current SOTA pathology foundation models. Furthermore, this work may inform the development of more effective foundation models. Our code is publicly available at https://anonymous.4open.science/r/MIL-Feature-Extractor-Selection
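For readers unfamiliar with the embedding-based MIL pipeline the abstract describes, the sketch below shows the general pattern: patch features are extracted offline by a frozen pre-trained backbone and then aggregated by an attention-based pooling head into a slide-level prediction. It assumes a PyTorch environment; the `AttentionMIL` class, its dimensions, and the simple attention pooling are illustrative assumptions, not the specific SOTA MIL models or feature extractors benchmarked in the paper.

```python
# Minimal sketch of embedding-based MIL (illustrative, not the paper's exact models).
import torch
import torch.nn as nn


class AttentionMIL(nn.Module):
    """Aggregates pre-extracted patch embeddings into a slide-level prediction."""

    def __init__(self, feat_dim=768, hidden_dim=128, num_classes=2):
        super().__init__()
        # Scores one attention weight per patch embedding.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, patch_feats):
        # patch_feats: (num_patches, feat_dim), produced by a frozen extractor.
        attn = torch.softmax(self.attention(patch_feats), dim=0)  # (num_patches, 1)
        slide_feat = (attn * patch_feats).sum(dim=0)              # (feat_dim,)
        return self.classifier(slide_feat)                        # (num_classes,)


# Example: a hypothetical bag of 1000 patch embeddings for one WSI,
# as produced offline by a pre-trained backbone (e.g. an SSL-trained ViT).
patch_feats = torch.randn(1000, 768)
model = AttentionMIL(feat_dim=768)
slide_logits = model(patch_feats)
print(slide_logits.shape)  # torch.Size([2])
```

In this setting the feature extractor never appears in the training loop: only the aggregator and classifier are learned, which is why the choice of pre-trained extractor (pre-training dataset, backbone, and SSL method) is the variable the paper isolates.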
- Publication: arXiv e-prints
- Pub Date: August 2024
- DOI: 10.48550/arXiv.2408.01167
- arXiv: arXiv:2408.01167
- Bibcode: 2024arXiv240801167W
- Keywords: Computer Science - Computer Vision and Pattern Recognition
- E-Print: Under submission to ISBI 2025