
xmed-lab/DFG


Leveraging Segment Anything Model for Source-Free Domain Adaptation via Dual Feature Guided Auto-Prompting

This repository contains the PyTorch implementation of our source-free domain adaptation (SFDA) method with the Dual Feature Guided (DFG) auto-prompting approach.

Installation

Create the environment from the environment.yml file:

conda env create -f environment.yml
conda activate dfg
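After activating the environment, a quick import check can confirm the key dependencies resolved correctly. This is a minimal sketch, not part of the repository; the package names below (beyond `torch` for a PyTorch project) are assumptions, since the full list lives in `environment.yml`:

```python
import importlib.util

def check_packages(names):
    """Return the subset of package names that cannot be imported."""
    missing = []
    for name in names:
        # find_spec returns None when a top-level package is absent.
        if importlib.util.find_spec(name) is None:
            missing.append(name)
    return missing

if __name__ == "__main__":
    # "torch" is expected for a PyTorch repo; "yaml" and "numpy" are guesses.
    print(check_packages(["torch", "yaml", "numpy"]))
```

An empty list means all listed packages are importable.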

Data preparation

Training

The following are the steps for the CHAOS (MRI) to BTCV (CT) adaptation.

  • Download the source-domain model from here, or specify the data path in configs/train_source_seg.yaml and then run
python main_trainer_source.py --config_file configs/train_source_seg.yaml --gpu_id 0
  • Download the trained model from after the feature aggregation phase from here, or specify the source model path and data path in configs/train_target_adapt_FA.yaml, and then run
python main_trainer_fa.py --config_file configs/train_target_adapt_FA.yaml --gpu_id 0
  • Download the MedSAM model checkpoint from here and put it under ./medsam/work_dir/MedSAM.
  • Specify the model (after feature aggregation) path, data path, and refined pseudo-label paths in configs/train_target_adapt_SAM.yaml, and then run
python main_trainer_sam.py --config_file configs/train_target_adapt_SAM.yaml --gpu_id 0
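All three trainer scripts above share the same two command-line flags. As a sketch of how such an entry point might parse them (the repository's actual argument handling is not reproduced here, and any defaults shown are assumptions):

```python
import argparse

def parse_args(argv=None):
    # Mirrors the CLI used by the trainer commands: --config_file and --gpu_id.
    parser = argparse.ArgumentParser(
        description="DFG trainer entry point (illustrative sketch)")
    parser.add_argument("--config_file", required=True,
                        help="Path to a YAML config, e.g. configs/train_source_seg.yaml")
    parser.add_argument("--gpu_id", type=int, default=0,
                        help="Index of the GPU to train on (assumed default: 0)")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(f"config={args.config_file} gpu={args.gpu_id}")
```

Passing a list to `parse_args` (instead of reading `sys.argv`) makes the parser easy to test and reuse.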
