EventDance: Unsupervised Source-free Cross-modal Adaptation for Event-based Object Recognition
EventDance++: Language-guided Unsupervised Source-free Cross-modal Adaptation for Event-based Object Recognition
- [09/2024] Code for EventDance++ is in progress.
- [07/2024] Code for evaluation has been released.
```shell
git clone --
cd --
conda create -n evd python=3.9
conda activate evd
pip install -r requirements.txt
```
Used datasets:
- Event modality: NMNIST / NCALTECH101 / NCIFAR10
- Image modality: MNIST / CALTECH101 / CIFAR10
datasets/
├── NMNIST
│ ├── Test
│ └── Train
├── MNIST
│ ├── image
│ └── raw
├── NCALTECH101
│ ├── accordion
│ │ ...
│ └── yin_yang
├── CALTECH101
│ ├── 101_ObjectCategories
│ └── Annotations
We appreciate the previous open-source work: Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy.
If you find this work useful, please cite:
@inproceedings{zheng2024eventdance,
title={EventDance: Unsupervised Source-free Cross-modal Adaptation for Event-based Object Recognition},
author={Zheng, Xu and Wang, Lin},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={17448--17458},
year={2024}
}