xwmaxwma/rschange: [ICME2024 Oral, JSTARS2025] Change detection of remote sensing images


📷 Introduction

rschange is an open-source change detection toolbox dedicated to reproducing and developing advanced methods for change detection in remote sensing images.

🔥 News

  • 2025/04/16: CDxLSTM has been accepted by GRSL2025.

  • 2025/03/13: The official environment-preparation files are now available in rscd_mamba.

  • 2025/02/11: The official implementations of CD-Lamba, CDxLSTM, and some other popular methods (RSMamba, ChangeMamba) are now available.

  • 2025/01/02: STeInFormer has been accepted by JSTARS2025.

  • 2024/07/14: Class activation maps and some other popular methods (BIT, SNUNet, ChangeFormer, LGPNet, SARAS-Net) are now supported.

  • 2024/06/24: CDMask has been submitted to arXiv (see here), and the official implementation of CDMask is available!

๐Ÿ” Preparation

  • Environment preparation

      conda create --name rscd python=3.8
      conda activate rscd
      conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia
      pip install pytorch-lightning==2.0.5
      pip install scikit-image==0.19.3 numpy==1.24.4
      pip install torchmetrics==1.0.1
      pip install -U catalyst==20.09
      pip install albumentations==1.3.1
      pip install einops==0.6.1
      pip install timm==0.6.7
      pip install addict==2.4.0
      pip install soundfile==0.12.1
      pip install ttach==0.0.3
      pip install prettytable==3.8.0
      pip install -U openmim
      pip install triton==2.0.0
      mim install mmcv
      pip install -U fvcore

    If you need to run a Mamba-based model, additionally download the wheel files from the releases and then install them as follows for the Mamba environment.

    pip install causal_conv1d-1.2.0.post1+cu118torch2.0cxx11abiFALSE-cp38-cp38-linux_x86_64.whl
    pip install mamba_ssm-1.2.0.post1+cu118torch2.0cxx11abiFALSE-cp38-cp38-linux_x86_64.whl

    [Optional] We have also prepared a compressed environment archive rscd_mamba for CD-Lamba's environment, which you can download directly and install according to the following instructions.

    # First, make sure you are in a Linux environment (Ubuntu, or WSL2 on Windows).
    # Then, place the compressed file in /home/xxx/anaconda3/envs/
    # Finally:
    mkdir -p rscd_mamba && tar -xzf rscd_mamba.tar.gz -C rscd_mamba
    conda activate rscd_mamba

    Note: this environment is the same as rsseg. If you have already installed the rsseg environment, you can use it directly.

  • Dataset preprocessing

    LEVIR-CD: The original images are sized at 1024 × 1024. Following the original division method, we crop these images into non-overlapping patches of 256 × 256 (a minimal cropping sketch is shown after this list).

    WHU-CD: It contains a pair of dual-time aerial images measuring 32507 × 15354. These images are cropped into patches of 256 × 256 size. The dataset is then randomly divided into three subsets: the training set, the validation set, and the test set, following a ratio of 8:1:1.

    DSIFN-CD & CLCD & SYSU-CD: They all follow the original image size and dataset division method.

    Note: We also provide the pre-processed data, which can be downloaded at this link.
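
Below is a minimal sketch of the preprocessing described above, i.e. non-overlapping 256 × 256 cropping (LEVIR-CD) and a random 8:1:1 split (WHU-CD). The LEVIR_CD_raw path and the helper names are illustrative assumptions, not files shipped with this repo; the pre-processed data linked above is the recommended route.

    import os
    import random
    from PIL import Image

    def crop_to_patches(src_dir, dst_dir, patch=256):
        """Crop every image in src_dir into non-overlapping patch x patch tiles."""
        os.makedirs(dst_dir, exist_ok=True)
        for name in sorted(os.listdir(src_dir)):
            img = Image.open(os.path.join(src_dir, name))
            w, h = img.size  # e.g. 1024 x 1024 for LEVIR-CD
            stem, ext = os.path.splitext(name)
            for top in range(0, h - patch + 1, patch):
                for left in range(0, w - patch + 1, patch):
                    tile = img.crop((left, top, left + patch, top + patch))
                    tile.save(os.path.join(dst_dir, f"{stem}_{top}_{left}{ext}"))

    def split_8_1_1(names, seed=0):
        """Randomly split file names into train/val/test with an 8:1:1 ratio (WHU-CD)."""
        names = sorted(names)
        random.Random(seed).shuffle(names)
        n = len(names)
        return names[:int(0.8 * n)], names[int(0.8 * n):int(0.9 * n)], names[int(0.9 * n):]

    # Crop the two temporal images and the labels with the same routine so that
    # patch file names stay aligned across the A/, B/ and label/ folders.
    for sub in ("A", "B", "label"):
        crop_to_patches(f"LEVIR_CD_raw/train/{sub}", f"data/LEVIR_CD/train/{sub}")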

📒 Folder Structure

Prepare the following folders to organize this repo:

  rschangedetection
      ├── rscd (code)
      ├── work_dirs (save the model weights and training logs)
      │   └── CLCD_BS4_epoch200 (dataset)
      │       └── stnet (model)
      │           └── version_0 (version)
      │               ├── ckpts
      │               │   ├── test (the best ckpts in test set)
      │               │   └── val (the best ckpts in validation set)
      │               ├── log (tensorboard logs)
      │               ├── train_metrics.txt (train & val results per epoch)
      │               ├── test_metrics_max.txt (the best test results)
      │               └── test_metrics_rest.txt (other test results)
      └── data
          ├── LEVIR_CD
          │   ├── train
          │   │   ├── A
          │   │   │   └── images1.png
          │   │   ├── B
          │   │   │   └── images2.png
          │   │   └── label
          │   │       └── label.png
          │   ├── val (the same as train)
          │   └── test (the same as train)
          ├── DSIFN
          │   ├── train
          │   │   ├── t1
          │   │   │   └── images1.jpg
          │   │   ├── t2
          │   │   │   └── images2.jpg
          │   │   └── mask
          │   │       └── mask.png
          │   ├── val (the same as train)
          │   └── test
          │       ├── t1
          │       │   └── images1.jpg
          │       ├── t2
          │       │   └── images2.jpg
          │       └── mask
          │           └── mask.tif
          ├── WHU_CD
          │   ├── train
          │   │   ├── image1
          │   │   │   └── images1.png
          │   │   ├── image2
          │   │   │   └── images2.png
          │   │   └── label
          │   │       └── label.png
          │   ├── val (the same as train)
          │   └── test (the same as train)
          ├── CLCD (the same as WHU_CD)
          └── SYSU_CD
              ├── train
              │   ├── time1
              │   │   └── images1.png
              │   ├── time2
              │   │   └── images2.png
              │   └── label
              │       └── label.png
              ├── val (the same as train)
              └── test (the same as train)
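
Before training, a quick sanity check that data/ matches the layout above can save a failed run. The per-dataset sub-folder names below are taken from the tree; the script itself is only an illustrative sketch, not part of this repo.

    import os

    # Expected sub-folders per dataset, following the tree above.
    EXPECTED = {
        "LEVIR_CD": ["A", "B", "label"],
        "DSIFN": ["t1", "t2", "mask"],
        "WHU_CD": ["image1", "image2", "label"],
        "CLCD": ["image1", "image2", "label"],
        "SYSU_CD": ["time1", "time2", "label"],
    }

    def check_layout(data_root, dataset):
        """Print any missing split/sub-folder combinations for one dataset."""
        for split in ("train", "val", "test"):
            for sub in EXPECTED[dataset]:
                path = os.path.join(data_root, dataset, split, sub)
                if not os.path.isdir(path):
                    print("missing:", path)

    check_layout("data", "LEVIR_CD")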

📚 Use example

  • Training

    python train.py -c configs/STNet.py
  • Testing

    python test.py \
    -c configs/STNet.py \
    --ckpt work_dirs/CLCD_BS4_epoch200/stnet/version_0/ckpts/test/epoch=45.ckpt \
    --output_dir work_dirs/CLCD_BS4_epoch200/stnet/version_0/ckpts/test
  • Count params and flops (see the fvcore sketch after this list)

    python tools/params_flops.py --size 256
  • Class activation maps

    python tools/grad_cam_CNN.py -c configs/cdxformer.py --layer=model.net.decoderhead.LHBlock2.mlp_l
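
For reference, a minimal sketch of how fvcore (installed in the environment above) counts parameters and FLOPs. The toy two-layer model and the single 256 × 256 input are placeholders (real change detection models take two temporal images); tools/params_flops.py remains the authoritative script.

    import torch
    from torch import nn
    from fvcore.nn import FlopCountAnalysis, parameter_count

    # Toy stand-in for a change detection network at a 256 x 256 input resolution.
    model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 2, 1))
    dummy = torch.randn(1, 3, 256, 256)  # corresponds to --size 256

    flops = FlopCountAnalysis(model, dummy)       # per-operator FLOP counting
    print("FLOPs:", flops.total())
    print("Params:", parameter_count(model)[""])  # "" key holds the model-wide total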

🌟 Citation

If you are interested in our work, please consider giving a 🌟 and citing the papers below. We will update rschange regularly.

@inproceedings{stnet,
  title={STNet: Spatial and Temporal feature fusion network for change detection in remote sensing images},
  author={Ma, Xiaowen and Yang, Jiawei and Hong, Tingfeng and Ma, Mengting and Zhao, Ziyan and Feng, Tian and Zhang, Wei},
  booktitle={2023 IEEE International Conference on Multimedia and Expo (ICME)},
  pages={2195--2200},
  year={2023},
  organization={IEEE}
}

@inproceedings{ddlnet,
  title={DDLNet: Boosting Remote Sensing Change Detection with Dual-Domain Learning},
  author={Ma, Xiaowen and Yang, Jiawei and Che, Rui and Zhang, Huanting and Zhang, Wei},
  booktitle={2024 IEEE International Conference on Multimedia and Expo (ICME)},
  pages={1--6},
  year={2024},
  organization={IEEE},
  doi={10.1109/ICME57554.2024.10688140}
}

@article{cdmask,
  title={Rethinking Remote Sensing Change Detection With A Mask View},
  author={Ma, Xiaowen and Wu, Zhenkai and Lian, Rongrong and Zhang, Wei and Song, Siyang},
  journal={arXiv preprint arXiv:2406.15320},
  year={2024}
}

@article{steinformer,
  title={STeInFormer: Spatial–Temporal Interaction Transformer Architecture for Remote Sensing Change Detection},
  author={Ma, Xiaowen and Wu, Zhenkai and Ma, Mengting and Zhao, Mengjiao and Yang, Fan and Du, Zhenhong and Zhang, Wei},
  journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
  volume={18},
  pages={3735--3745},
  year={2025},
  doi={10.1109/JSTARS.2024.3522329}
}

@article{cdxlstm,
  title={CDxLSTM: Boosting Remote Sensing Change Detection With Extended Long Short-Term Memory},
  author={Wu, Zhenkai and Ma, Xiaowen and Lian, Rongrong and Zheng, Kai and Zhang, Wei},
  journal={IEEE Geoscience and Remote Sensing Letters},
  volume={22},
  pages={1--5},
  year={2025},
  doi={10.1109/LGRS.2025.3562480}
}

📮 Contact

If you have questions about the content of our papers, or would like to pursue further academic exchange and cooperation, please do not hesitate to contact us at xwma@zju.edu.cn. We look forward to hearing from you!

💡 Acknowledgement

Thanks to the previous open-source repos:

Thanks to the main contributor Zhenkai Wu.
