
IEEE TMI 2025
HFF: Rethinking Brain Tumor Segmentation from the Frequency Domain Perspective


Official implementation of the IEEE Transactions on Medical Imaging paper Rethinking Brain Tumor Segmentation from the Frequency Domain Perspective, provided by Minye Shao.

TL;DR

HFF-Net is proposed to address the persistent performance degradation in segmenting contrast-enhancing tumor (ET) regions in brain MRI, a task often compromised by low inter-modality contrast, heterogeneous intensity profiles, and indistinct anatomical boundaries. Built upon a frequency-aware dual-branch architecture, HFF-Net disentangles and fuses complementary feature representations through Dual-Tree Complex Wavelet Transform (DTCWT) for capturing shift-invariant low-frequency structure and Nonsubsampled Contourlet Transform (NSCT) for extracting high-frequency, multi-directional textural details. Integrated with adaptive Laplacian convolution (ALC) and frequency-domain cross-attention (FDCA), the framework significantly enhances the discriminability of ET subregions, offering precise and robust delineation across diverse MRI modalities.

Figure: Comparison of our method and prior work on a complex glioma case, highlighting improved segmentation of the ET region using fused frequency-domain features.

Overview

(a) Architecture of our HFF-Net: a multimodal dual-branch network that decomposes and integrates multi-directional high-frequency (HF) and low-frequency (LF) MRI features with three components: ALC, FDCA, and FDD. It uses an unsupervised loss L_unsup to enforce output consistency between the two branches, and supervised losses L_sup^{H,L} to align each branch's main and side outputs with the ground truth. (b) Our ALC uses elastic weight consolidation to dynamically update weights, maintaining HF filtering functionality while extracting features from multimodal and multi-directional inputs. (c) FDCA enhances the extraction and processing of anisotropic volumetric features in MRI volumes through multi-dimensional cross-attention mechanisms in the frequency domain. (d) FDD processes multi-sequence MRI slices by decomposing them into HF and LF inputs using distinct frequency-domain transforms. (e) The fusion block integrates the deep HF and LF features during encoding.
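
As a rough illustration of how these two loss terms could combine for a binary target (a minimal sketch only, using soft Dice for the supervised term and MSE for the consistency term; the exact formulations, weightings, and deep-supervision wiring are defined in the paper and in train.py):

import torch
import torch.nn.functional as F

def dice_loss(prob, target, eps=1e-5):
    # Soft Dice loss on probability maps.
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def hff_loss(hf_main, hf_side, lf_main, lf_side, gt, w_unsup=0.1):
    # L_sup^{H,L}: align each branch's main and side outputs with ground truth.
    sup = sum(dice_loss(torch.sigmoid(o), gt)
              for o in (hf_main, hf_side, lf_main, lf_side))
    # L_unsup: output consistency between the HF and LF branches.
    unsup = F.mse_loss(torch.sigmoid(hf_main), torch.sigmoid(lf_main))
    return sup + w_unsup * unsup  # w_unsup is a placeholder weight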

Some experimental results for illustration. For more details, please refer to our paper.


πŸ› οΈ Install Dependencies

✅ Tested on Ubuntu 22.04/24.04 + PyTorch 2.1.2

Clone this repo and set up the environment:

git clone https://github.com/VinyehShaw/HFF.git
cd HFF
conda create -n hff python=3.8
conda activate hff
pip install -r requirements.txt
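
Optionally, run a quick sanity check to confirm that PyTorch is importable and can see your GPU:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"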

Install MATLAB, and make sure to install the Image Processing Toolbox as an additional component.

✅ The project has been tested with MATLAB R2025a, R2024b, and R2023b.

📦 Datasets Preparation

1. Download

Please download and prepare the following training datasets: BraTS 2019, BraTS 2020, BraTS 2023 Meningioma (MEN), and MSD Brain Tumour (these correspond to the --dataset_name options used in training below).

After downloading and extracting, the data should be organized under the following structure (here we use BraTS 2020 as an example; the same layout applies to the other datasets):

your_data_path/
└── MICCAI_BraTS2020_TrainingData/
    ├── BraTS20_Training_001/
    │   ├── BraTS20_Training_001_flair.nii
    │   ├── BraTS20_Training_001_t1.nii
    │   ├── BraTS20_Training_001_t1ce.nii
    │   ├── BraTS20_Training_001_t2.nii
    │   └── BraTS20_Training_001_seg.nii
    ├── BraTS20_Training_002/
    ├── ...
    └── BraTS20_Training_369/

2. Frequency Decomposition

To extract the low-frequency components of MRI volumes, run the Python script ./DTCWT_LF.py. You only need to modify the --data_root argument to point to your dataset location. For example, for BraTS 2020:

python DTCWT_LF.py --data_root yourpath/MICCAI_BraTS2020_TrainingData/
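
Conceptually, the low-frequency component is the DTCWT lowpass band with the highpass subbands discarded. A minimal standalone sketch of that idea on a single 2D slice, using the dtcwt and nibabel packages (illustrative only; DTCWT_LF.py is the authoritative implementation, and its decomposition level and slicing axis may differ):

import dtcwt
import nibabel as nib
import numpy as np

vol = nib.load('BraTS20_Training_001_flair.nii').get_fdata()
slice2d = vol[:, :, vol.shape[2] // 2]   # one axial slice

t = dtcwt.Transform2d()
pyr = t.forward(slice2d, nlevels=3)      # level count is an assumption

# Zero the complex highpass subbands, then invert so only the
# lowpass (low-frequency) content of the slice survives.
pyr.highpasses = tuple(np.zeros_like(h) for h in pyr.highpasses)
lf = t.inverse(pyr)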

Then, to perform the high-frequency transformation on MRI volumes, use the MATLAB script ./NSCT_BTS/nsct_hf.m. Make sure the original MRI data is correctly organized and that the paths at the top of the script are set. For example, in the BraTS 2020 case:

baseDir = 'yourpath/MICCAI_BraTS2020_TrainingData';
nsct_tbx_dir = './NSCT_BTS/nsct_toolbox'; % Ensure this path is relative to the current working directory
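
If you prefer to run the script headless from a terminal rather than from the MATLAB desktop, MATLAB's batch mode should also work (adjust the paths in the script first):

matlab -batch "run('./NSCT_BTS/nsct_hf.m')"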

This process may take some time, so feel free to take a break while it runs. ☕️


After running frequency decomposition, each subject folder is expected to have the following structure:

your_data_path/
└── MICCAI_BraTS2020_TrainingData/
    ├── BraTS20_Training_001/
    │   ├── BraTS20_Training_001_flair.nii
    │   ├── BraTS20_Training_001_flair_H1.nii.gz
    │   ├── BraTS20_Training_001_flair_H2.nii.gz
    │   ├── BraTS20_Training_001_flair_H3.nii.gz
    │   ├── BraTS20_Training_001_flair_H4.nii.gz
    │   ├── BraTS20_Training_001_flair_L.nii.gz
    │
    │   ├── BraTS20_Training_001_t1.nii
    │   ├── BraTS20_Training_001_t1_H1.nii.gz
    │   ├── ...
    │   ├── BraTS20_Training_001_t1_L.nii.gz
    │
    │   ├── BraTS20_Training_001_t1ce.nii
    │   ├── BraTS20_Training_001_t1ce_H1.nii.gz
    │   ├── ...
    │   ├── BraTS20_Training_001_t1ce_L.nii.gz
    │
    │   ├── BraTS20_Training_001_t2.nii
    │   ├── BraTS20_Training_001_t2_H1.nii.gz
    │   ├── ...
    │   ├── BraTS20_Training_001_t2_L.nii.gz
    │
    │   └── BraTS20_Training_001_seg.nii
    ├── BraTS20_Training_002/
    ├── BraTS20_Training_003/
    ├── ...
    └── BraTS20_Training_369/
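
A small hypothetical helper (not part of the repo) to confirm that the decomposition completed for every case, assuming the naming shown above:

import glob
import os

root = 'your_data_path/MICCAI_BraTS2020_TrainingData'
mods = ['flair', 't1', 't1ce', 't2']

for case in sorted(glob.glob(os.path.join(root, 'BraTS20_Training_*'))):
    cid = os.path.basename(case)
    expected = [f'{cid}_{m}_{s}.nii.gz' for m in mods
                for s in ('H1', 'H2', 'H3', 'H4', 'L')]
    missing = [f for f in expected
               if not os.path.exists(os.path.join(case, f))]
    if missing:
        print(f'{cid}: missing {len(missing)} files, e.g. {missing[0]}')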

Run split.py with --brain_dir (e.g., your_data_path/MICCAI_BraTS2020_TrainingData/), --save_dir (where to store the split .txt files), and --n_split (number of random splits) according to your needs.
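
For reference, a minimal sketch of what such a split could produce (the 80/20 ratio and the train_i.txt/val_i.txt naming here are assumptions; split.py defines the actual behavior):

import os
import random

brain_dir = 'your_data_path/MICCAI_BraTS2020_TrainingData'
cases = sorted(os.listdir(brain_dir))

for i in range(3):  # corresponds to --n_split
    rng = random.Random(i)
    shuffled = rng.sample(cases, len(cases))
    cut = int(0.8 * len(shuffled))  # assumed train/val ratio
    with open(f'train_{i}.txt', 'w') as f:
        f.write('\n'.join(shuffled[:cut]))
    with open(f'val_{i}.txt', 'w') as f:
        f.write('\n'.join(shuffled[cut:]))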

🚀 Training

Check Before Training

Make sure the label_filename and m_path variables in ./loader/dataload3d.py follow the correct naming convention with underscores (e.g., _seg.nii vs. -seg.nii).


To start training, run train.py with the following key arguments:

--train_list: path to the training .txt file

--val_list: path to the validation .txt file

--dataset_name: choose from ['brats19', 'brats20', 'brats23men', 'msdbts']

--class_type: choose segmentation target from ['et', 'tc', 'wt', 'all']

ET: enhancing tumor, TC: tumor core, WT: whole tumor; each defines a binary segmentation task. ALL preserves all labels for a 4-class segmentation task.

--display_iter: how many iterations between validation runs

--selected_modal: list of input MRI modalities (see below)

For BraTS 2020, --selected_modal should be like:

['flair_L', 't1_L', 't1ce_L', 't2_L', 'flair_H1', 'flair_H2', 'flair_H3', 'flair_H4', 't1_H1', 't1_H2', 't1_H3', 't1_H4', 't1ce_H1', 't1ce_H2', 't1ce_H3', 't1ce_H4', 't2_H1', 't2_H2', 't2_H3', 't2_H4']
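
Equivalently, this list can be built programmatically:

mods = ['flair', 't1', 't1ce', 't2']
selected_modal = ([f'{m}_L' for m in mods]
                  + [f'{m}_H{k}' for m in mods for k in range(1, 5)])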

Modify the paths and settings based on your dataset and experiment requirements.
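
For example, a BraTS 2020 run targeting ET might look like this (the split paths are placeholders, and other arguments such as --display_iter and --selected_modal are set analogously):

python train.py --dataset_name brats20 --class_type et \
    --train_list splits/train_0.txt --val_list splits/val_0.txt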

The model will be saved under ./result/checkpoints. Training takes about 1.5 days on a single RTX 4090.

📊 Evaluation

To evaluate a trained model, run ./eval.py with the following arguments:

--selected_modal: list of input modalities (same as used during training)

--dataset_name: one of ['brats19', 'brats20', 'brats23men', 'msdbts']

--class_type: segmentation target (et, tc, wt, or all)

--checkpoint: path to the trained model checkpoint

--test_list: path to the .txt file listing MRI samples to evaluate
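
For example (placeholder paths; the checkpoint name is whatever training produced under ./result/checkpoints, and --selected_modal must match the list used during training):

python eval.py --dataset_name brats20 --class_type et \
    --checkpoint ./result/checkpoints/your_model.pth --test_list splits/val_0.txt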


Citation

Both NSCT and DTCWT offer powerful frequency-domain signal processing tools that enhance feature extraction and show strong potential for a wide range of downstream medical imaging tasks, including segmentation, inpainting, generation, registration, and beyond. We encourage researchers to explore these techniques further and to apply our method to future BraTS challenges as well as other medical imaging benchmarks.

If you find our work helpful in your research or clinical tool development, please consider citing us:

@ARTICLE{11032150,
  author={Shao, Minye and Wang, Zeyu and Duan, Haoran and Huang, Yawen and Zhai, Bing and Wang, Shizheng and Long, Yang and Zheng, Yefeng},
  journal={IEEE Transactions on Medical Imaging}, 
  title={Rethinking Brain Tumor Segmentation from the Frequency Domain Perspective}, 
  year={2025},
  volume={},
  number={},
  pages={1-1},
  keywords={Frequency-domain analysis;Tumors;Brain tumors;Magnetic resonance imaging;Three-dimensional displays;Biomedical imaging;Imaging;Feature extraction;Electronic mail;Convolution;Brain tumor segmentation;Frequency domain;Multi-modal feature fusion},
  doi={10.1109/TMI.2025.3579213}}

Acknowledgments

💡 Thanks to Zeyu Wang for his guidance and support throughout this work.

This repo is based in part on the works of Zhou et al. (ICCV 2023) and Ganasala et al. (JDI 2014). We thank the authors for their valuable contributions, which inspired and guided our implementation.
