Official implementation of the IEEE Transactions on Medical Imaging paper Rethinking Brain Tumor Segmentation from the Frequency Domain Perspective, provided by Minye Shao.
(a) Architecture of our HFF-Net: a multimodal dual-branch network decomposing and integrating multi-directional HF and LF MRI features with three components: ALC, FDCA, and FDD. It uses L_unsup for output consistency between branches and L_sup^{H,L} to align each branch's main and side outputs with the ground truth. (b) Our ALC uses elastic weight consolidation to dynamically update weights, maintaining HF filtering functionality while extracting features from multimodal and multi-directional inputs. (c) FDCA enhances the extraction and processing of anisotropic volumetric features in MRI images through multi-dimensional cross-attention mechanisms in the frequency domain. (d) FDD processes multi-sequence MRI slices by decomposing them into HF and LF inputs using distinct frequency-domain transforms. (e) The fusion block integrates the HF and LF features from the deep layers during the encoding process.
Some experimental results for illustration. For more details, please refer to our paper.
Tested on Ubuntu 22.04/24.04 + PyTorch 2.1.2
Clone this repo and set up the environment:
```
git clone https://github.com/VinyehShaw/HFF.git
cd HFF
conda create -n hff python=3.8
conda activate hff
pip install -r requirements.txt
```
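Optionally, you can sanity-check the new environment before going further. This is a minimal check, not part of the official setup; the version shown is simply the tested configuration:

```python
# Quick environment sanity check: confirms the tested PyTorch build is active
# and reports whether a CUDA-capable GPU is visible.
import torch

print(torch.__version__)           # tested with 2.1.2
print(torch.cuda.is_available())   # True if a usable GPU is detected
```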
Install MATLAB, and make sure to install the Image Processing Toolbox as an additional component.
The project has been tested with MATLAB R2025a, R2024b, and R2023b.
Dataset preparation: please download and prepare the following training datasets:
- BraTS 2019 is now available for download on Kaggle. The training set includes all samples from both the HGG and LGG subsets.
- BraTS 2020 is likewise available for download on Kaggle.
- MSD Brain Tumor (Task01_BrainTumour) is available via AWS or Google Drive.
- To download BraTS 2023, create an account on Synapse and register for the challenge on the official website (registration now goes through the BraTS-Lighthouse 2025 Challenge).
After downloading and extracting, the data should be organized under the following structure (BraTS 2020 is used as an example; the same layout applies to the other datasets):
```
your_data_path/
└── MICCAI_BraTS2020_TrainingData/
    ├── BraTS20_Training_001/
    │   ├── BraTS20_Training_001_flair.nii
    │   ├── BraTS20_Training_001_t1.nii
    │   ├── BraTS20_Training_001_t1ce.nii
    │   ├── BraTS20_Training_001_t2.nii
    │   └── BraTS20_Training_001_seg.nii
    ├── BraTS20_Training_002/
    ├── ...
    └── BraTS20_Training_369/
```
To extract the low-frequency components of the MRI volumes, run the Python script `./DTCWT_LF.py`. You only need to modify the `--data_root` argument to point to your dataset location. For example, for BraTS 2020:
```
python DTCWT_LF.py --data_root yourpath/MICCAI_BraTS2020_TrainingData/
```
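For intuition, the sketch below shows one way to obtain a full-resolution low-frequency image with the `dtcwt` package: zero out all high-pass subbands and invert the transform slice by slice. This is only an illustration under assumed settings (3 decomposition levels, axial slicing, example file names); `DTCWT_LF.py` is the authoritative implementation.

```python
# Illustrative only: slice-wise DTCWT low-frequency extraction.
# Assumes 240x240 axial slices (BraTS), divisible by 2^nlevels.
import dtcwt
import nibabel as nib
import numpy as np

vol = nib.load("BraTS20_Training_001_flair.nii")   # example input
data = vol.get_fdata()

transform = dtcwt.Transform2d()
lf = np.empty_like(data)
for z in range(data.shape[2]):
    pyramid = transform.forward(data[:, :, z], nlevels=3)
    # Zeroing every high-pass subband keeps only the low-frequency content.
    zeroed = tuple(np.zeros_like(h) for h in pyramid.highpasses)
    lf[:, :, z] = transform.inverse(dtcwt.Pyramid(pyramid.lowpass, zeroed))

nib.save(nib.Nifti1Image(lf.astype(np.float32), vol.affine),
         "BraTS20_Training_001_flair_L.nii.gz")    # example output name
```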
Then, to perform the high-frequency transformation on the MRI volumes, use the MATLAB script `./NSCT_BTS/nsct_hf.m`. Make sure the original MRI data is correctly organized and the paths in the script are set accordingly. For example, in the BraTS 2020 case:
```matlab
baseDir = 'yourpath/MICCAI_BraTS2020_TrainingData';
nsct_tbx_dir = './NSCT_BTS/nsct_toolbox'; % ensure this path is relative to the current working directory
```
This process may take some time, so feel free to take a break while it runs.
After running frequency decomposition, each subject folder is expected to have the following structure:
```
your_data_path/
└── MICCAI_BraTS2020_TrainingData/
    ├── BraTS20_Training_001/
    │   ├── BraTS20_Training_001_flair.nii
    │   ├── BraTS20_Training_001_flair_H1.nii.gz
    │   ├── BraTS20_Training_001_flair_H2.nii.gz
    │   ├── BraTS20_Training_001_flair_H3.nii.gz
    │   ├── BraTS20_Training_001_flair_H4.nii.gz
    │   ├── BraTS20_Training_001_flair_L.nii.gz
    │   │
    │   ├── BraTS20_Training_001_t1.nii
    │   ├── BraTS20_Training_001_t1_H1.nii.gz
    │   ├── ...
    │   ├── BraTS20_Training_001_t1_L.nii.gz
    │   │
    │   ├── BraTS20_Training_001_t1ce.nii
    │   ├── BraTS20_Training_001_t1ce_H1.nii.gz
    │   ├── ...
    │   ├── BraTS20_Training_001_t1ce_L.nii.gz
    │   │
    │   ├── BraTS20_Training_001_t2.nii
    │   ├── BraTS20_Training_001_t2_H1.nii.gz
    │   ├── ...
    │   ├── BraTS20_Training_001_t2_L.nii.gz
    │   │
    │   └── BraTS20_Training_001_seg.nii
    ├── BraTS20_Training_002/
    ├── BraTS20_Training_003/
    ├── ...
    └── BraTS20_Training_369/
```
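Before splitting the data, it can be worth verifying that no subject is missing a decomposition output. A minimal sketch under the layout above (the root path is a placeholder; adjust it to your dataset location):

```python
# Hypothetical check: list subjects missing any expected HF/LF output file.
from pathlib import Path

root = Path("your_data_path/MICCAI_BraTS2020_TrainingData")  # adjust
modalities = ["flair", "t1", "t1ce", "t2"]
suffixes = ["H1", "H2", "H3", "H4", "L"]

for subject in sorted(p for p in root.iterdir() if p.is_dir()):
    expected = [f"{subject.name}_{m}_{s}.nii.gz"
                for m in modalities for s in suffixes]
    missing = [f for f in expected if not (subject / f).exists()]
    if missing:
        print(f"{subject.name}: missing {len(missing)} file(s), e.g. {missing[0]}")
```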
Run `split.py` with `--brain_dir` (e.g., `your_data_path/MICCAI_BraTS2020_TrainingData/`), `--save_dir` (where to store the split `.txt` files), and `--n_split` (number of random splits) according to your needs.
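For example (the `--save_dir` and `--n_split` values here are placeholders, not defaults from the repo):

```
python split.py --brain_dir your_data_path/MICCAI_BraTS2020_TrainingData/ --save_dir ./splits --n_split 5
```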
Make sure the `label_filename` and `m_path` variables in `./loader/dataload3d.py` follow the correct naming convention with underscores (e.g., `_seg.nii` vs. `-seg.nii`).
To start training, run `train.py` with the following key arguments (an example invocation follows the modality list below):

- `--train_list`: path to the training `.txt` file
- `--val_list`: path to the validation `.txt` file
- `--dataset_name`: choose from `['brats19', 'brats20', 'brats23men', 'msdbts']`
- `--class_type`: choose the segmentation target from `['et', 'tc', 'wt', 'all']`. ET: enhancing tumor, TC: tumor core, WT: whole tumor (each defines a binary segmentation task); `all` preserves all labels for a 4-class segmentation task.
- `--display_iter`: number of iterations between validation runs
- `--selected_modal`: list of input MRI modalities (see below)
For BraTS 2020, `--selected_modal` should be:

```
['flair_L', 't1_L', 't1ce_L', 't2_L', 'flair_H1', 'flair_H2', 'flair_H3', 'flair_H4', 't1_H1', 't1_H2', 't1_H3', 't1_H4', 't1ce_H1', 't1ce_H2', 't1ce_H3', 't1ce_H4', 't2_H1', 't2_H2', 't2_H3', 't2_H4']
```
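Putting it together, a training run might look like the command below. The flag names follow the list above, but the values (split files, iteration count) are placeholders, and `--selected_modal` is omitted for brevity; if `train.py` names or parses these flags differently, follow its argparse definitions:

```
python train.py \
  --train_list ./splits/train_0.txt \
  --val_list ./splits/val_0.txt \
  --dataset_name brats20 \
  --class_type all \
  --display_iter 200
```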
Modify the paths and settings based on your dataset and experiment requirements.
The model is saved under `./result/checkpoints`. Training takes about 1.5 days on a single RTX 4090.
To evaluate a trained model, run `./eval.py` with the following arguments (an example command follows):

- `--selected_modal`: list of input modalities (same as used during training)
- `--dataset_name`: one of `['brats19', 'brats20', 'brats23men', 'msdbts']`
- `--class_type`: segmentation target (`et`, `tc`, `wt`, or `all`)
- `--checkpoint`: path to the trained model checkpoint
- `--test_list`: path to the `.txt` file listing the MRI samples to evaluate
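For instance (the checkpoint and split-file paths are placeholders; add `--selected_modal` with the same modality list used during training):

```
python eval.py \
  --checkpoint ./result/checkpoints/your_model.pth \
  --test_list ./splits/test_0.txt \
  --dataset_name brats20 \
  --class_type all
```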
Both NSCT and DTCWT are powerful frequency-domain signal-processing tools that enhance feature extraction and show strong potential for a wide range of downstream medical imaging tasks, including segmentation, inpainting, generation, registration, and beyond. We encourage researchers to explore these techniques further and to apply our method to future editions of the BraTS challenge as well as other medical imaging benchmarks.
If you find our work helpful in your research or clinical tool development, please consider citing us:
```bibtex
@ARTICLE{11032150,
  author={Shao, Minye and Wang, Zeyu and Duan, Haoran and Huang, Yawen and Zhai, Bing and Wang, Shizheng and Long, Yang and Zheng, Yefeng},
  journal={IEEE Transactions on Medical Imaging},
  title={Rethinking Brain Tumor Segmentation from the Frequency Domain Perspective},
  year={2025},
  volume={},
  number={},
  pages={1-1},
  keywords={Frequency-domain analysis;Tumors;Brain tumors;Magnetic resonance imaging;Three-dimensional displays;Biomedical imaging;Imaging;Feature extraction;Electronic mail;Convolution;Brain tumor segmentation;Frequency domain;Multi-modal feature fusion},
  doi={10.1109/TMI.2025.3579213}}
```
Thanks to Zeyu Wang for his guidance and support throughout this work.
This repo is based in part on the works of Zhou et al. (ICCV 2023) and Ganasala et al. (JDI 2014). We thank the authors for their valuable contributions, which inspired and guided our implementation.