Overview of the proposed VADMamba. (a) The training and inference process of VADMamba. (b) The framework of the proposed VQ-MaU. (c) Non-negative Vision State Space block. The dashed line indicates that addition is used in the second loop. (d) Vision State-Space (VSS) with SS2D.
FlowNet2 model: https://github.com/NVIDIA/flownet2-pytorch
Part of this code is adapted from VM-UNet.
If you use this work, please cite:
@article{lyu2025vadmamba,
  title={VADMamba: Exploring State Space Models for Fast Video Anomaly Detection},
  author={Lyu, Jiahao and Zhao, Minghua and Hu, Jing and Huang, Xuewen and Chen, Yifei and Du, Shuangli},
  journal={arXiv preprint arXiv:2503.21169},
  year={2025}
}