# Leveraging Segment Anything Model for Source-Free Domain Adaptation via Dual Feature Guided Auto-Prompting
This repository contains the PyTorch implementation of our source-free domain adaptation (SFDA) method with the Dual Feature Guided (DFG) auto-prompting approach.
Create the environment from the `environment.yml` file:

```bash
conda env create -f environment.yml
conda activate dfg
```
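After activation, a quick sanity check can confirm the environment is usable; this assumes the environment provides a CUDA-enabled PyTorch build, since the training commands below select a GPU via `--gpu_id`:

```python
# Minimal environment sanity check: confirms PyTorch imports and a GPU is visible.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```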
- Download the BTCV dataset from the MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge and the CHAOS dataset from the 2019 CHAOS Challenge, then preprocess the downloaded data following `./preprocess.ipynb` (a quick way to sanity-check the result is sketched below). You can also directly download our preprocessed datasets from here.
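To verify the preprocessing output, you can inspect a single image/label pair. The file names and `.npy` layout below are illustrative assumptions, not the repository's guaranteed format; `./preprocess.ipynb` is the authoritative reference:

```python
# Hypothetical inspection of one preprocessed image/label pair; the paths and
# .npy layout are illustrative assumptions, not the repository's actual format.
import numpy as np

image = np.load("data/CHAOS/train/image_0001.npy")  # hypothetical path
label = np.load("data/CHAOS/train/label_0001.npy")  # hypothetical path

print("image shape:", image.shape)
print("intensity range:", float(image.min()), float(image.max()))
print("organ classes present:", np.unique(label))
```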
The following are the steps for the CHAOS (MRI) to BTCV (CT) adaptation.
- Download the source domain model from here, or specify the data path in `configs/train_source_seg.yaml` and then run:

```bash
python main_trainer_source.py --config_file configs/train_source_seg.yaml --gpu_id 0
```
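If you prefer to set paths programmatically rather than editing the YAML by hand, a sketch along these lines works for any of the configs in this README. The key name `data_root` is an assumption; inspect the file for the actual field names:

```python
# Hedged sketch: rewrite a config value before training. The key name
# "data_root" is an assumption; check the YAML for the real field names.
import yaml

cfg_path = "configs/train_source_seg.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["data_root"] = "/path/to/preprocessed/data"  # hypothetical key and path

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)
```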
- Download the trained model after the feature aggregation phase from here, or specify the source model path and data path in `configs/train_target_adapt_FA.yaml` and then run:

```bash
python main_trainer_fa.py --config_file configs/train_target_adapt_FA.yaml --gpu_id 0
```
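To compare segmentation quality before and after each adaptation phase, the per-class Dice score is the standard metric; the helper below is a generic sketch, not this repository's evaluator:

```python
# Generic Dice score between two binary masks (not this repository's evaluation code).
import torch

def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice coefficient between two {0, 1} masks of identical shape."""
    pred, target = pred.float(), target.float()
    inter = (pred * target).sum()
    return ((2.0 * inter + eps) / (pred.sum() + target.sum() + eps)).item()
```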
- Download the MedSAM model checkpoint from here and put it under `./medsam/work_dir/MedSAM` (a hedged loading sketch follows).
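MedSAM is a fine-tuned ViT-B SAM, so it loads through the standard `segment_anything` registry; the checkpoint filename below is an assumption, so match it to the file you actually downloaded:

```python
# Load the MedSAM weights via the segment-anything API. MedSAM uses the
# ViT-B backbone; the checkpoint filename here is an assumption.
from segment_anything import sam_model_registry

medsam = sam_model_registry["vit_b"](
    checkpoint="./medsam/work_dir/MedSAM/medsam_vit_b.pth"  # hypothetical filename
)
medsam.eval()
```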
- Specify the model (after feature aggregation) path, data path, and refined pseudo-label paths in `configs/train_target_adapt_SAM.yaml`, and then run:

```bash
python main_trainer_sam.py --config_file configs/train_target_adapt_SAM.yaml --gpu_id 0
```
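For intuition on the auto-prompting step: SAM-style models accept bounding-box prompts, and a coarse pseudo-label can be converted into such a box. The function below is a generic illustration of that idea, not the repository's DFG prompt-generation code:

```python
# Generic sketch: derive an [x_min, y_min, x_max, y_max] box prompt from a
# binary pseudo-label mask (illustration only, not the DFG implementation).
import numpy as np

def mask_to_box(mask: np.ndarray, pad: int = 5) -> np.ndarray:
    """Return a padded bounding box around the foreground of a 2D binary mask."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:  # empty mask: no prompt can be derived
        raise ValueError("mask has no foreground pixels")
    h, w = mask.shape
    return np.array([
        max(int(xs.min()) - pad, 0),
        max(int(ys.min()) - pad, 0),
        min(int(xs.max()) + pad, w - 1),
        min(int(ys.max()) + pad, h - 1),
    ])
```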