- Core technique code of the proposed MoDGS.
- Training code (preprocessing and training).
- Rendering instructions.
- Self-collected data.
- Code branch dealing with moving-camera scenes (e.g., DAVIS).
Hi, this is Qingming. So sorry for the late release. I am currently very busy (preparing for my exams), but you can refer to our core code in train_PointTrackGS.py (3D-aware initialization and ordinal depth loss) and pointTrack.py (3D flow pairs extraction). I will provide detailed processing instructions later when I have more time (one or two weeks). Thank you for your understanding!
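Until those instructions land, here is a minimal sketch of the general idea behind an ordinal depth loss, using shapes and a hinge formulation chosen for illustration; it conveys the concept rather than the exact loss in train_PointTrackGS.py:

```python
import torch

def ordinal_depth_loss(rendered_depth, prior_depth, num_pairs=4096, margin=1e-4):
    """Sketch of an ordinal depth loss (hypothetical, not the exact MoDGS
    formulation): for randomly sampled pixel pairs, the depth rendered from
    the Gaussians should agree with the depth *ordering* given by the
    monocular depth prior. Both inputs are (H, W) tensors for one frame."""
    n = rendered_depth.numel()
    idx_p = torch.randint(0, n, (num_pairs,), device=rendered_depth.device)
    idx_q = torch.randint(0, n, (num_pairs,), device=rendered_depth.device)

    ren = rendered_depth.reshape(-1)
    pri = prior_depth.reshape(-1)

    # +1 where the prior says p is farther than q, -1 where it is nearer.
    order = torch.sign(pri[idx_p] - pri[idx_q])
    valid = order != 0  # skip pairs with equal prior depth

    # Hinge: the rendered depth difference should share that sign.
    diff = ren[idx_p] - ren[idx_q]
    return torch.relu(margin - order[valid] * diff[valid]).mean()
```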
- Prepare your environment:
- Create the Conda virtual environment using the commands below (Python and PyTorch versions higher than those specified should also work):
conda create -n MoDGS python=3.7
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
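To sanity-check the install, you can confirm that PyTorch sees the CUDA 11.6 build (the expected values below simply mirror the versions pinned above):

```python
import torch

print(torch.__version__)          # expected: 1.13.1+cu116
print(torch.version.cuda)         # expected: 11.6
print(torch.cuda.is_available())  # should print True on a CUDA-capable machine
```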
- Install dependencies:
- Follow GaussianFlow to install the Gaussian rasterizer.
- Follow the official 3D-GS repo to install Simple-KNN.
- Install other packages using the command below:
pip install -r requirements.txt
There are several steps required to prepare your dataset for MoDGS training:
- Depth estimation
- Exhaustive flow estimation
- Scale alignment
- 3D Flow extraction
Detailed instructions for this part will be updated later. In the meantime, you can check our preprocessing script below; please note that I haven't cleaned or fully tested it yet, so it may contain bugs (a rough sketch of the scale-alignment and flow-lifting steps is given after the script path). Thank you for your understanding!
./scripts/preprocess_selfmadeData.sh
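As referenced above, the sketch below illustrates two of the four steps under assumed inputs: a per-frame least-squares scale/shift alignment of the monocular depth against a metric reference, and lifting 2D flow into 3D flow pairs by back-projecting pixels with the aligned depth. All function names, shapes, and the choice of least squares are assumptions for illustration, not necessarily how pointTrack.py implements these steps:

```python
import numpy as np

def align_depth(mono_depth, ref_depth, mask):
    """Fit a per-frame scale s and shift t so that s * mono_depth + t best
    matches a reference depth (e.g. sparse metric depth) on valid pixels,
    via least squares. One plausible alignment scheme (assumption)."""
    x = mono_depth[mask].ravel()
    y = ref_depth[mask].ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)   # (N, 2) design matrix
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s * mono_depth + t

def lift_flow_to_3d(flow_ab, depth_a, depth_b, K):
    """Lift dense 2D flow from frame a to frame b into 3D point pairs by
    back-projecting each pixel (and its flow target) with aligned depth.
    flow_ab: (H, W, 2), depth_a/depth_b: (H, W), K: (3, 3) intrinsics."""
    h, w = depth_a.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    # Nearest-neighbour lookup of the flow target's depth in frame b.
    u2 = np.clip(np.rint(u + flow_ab[..., 0]), 0, w - 1).astype(int)
    v2 = np.clip(np.rint(v + flow_ab[..., 1]), 0, h - 1).astype(int)

    def backproject(us, vs, z):
        x = (us - K[0, 2]) / K[0, 0] * z
        y = (vs - K[1, 2]) / K[1, 1] * z
        return np.stack([x, y, z], axis=-1)

    pts_a = backproject(u, v, depth_a)            # (H, W, 3), camera-a frame
    pts_b = backproject(u2, v2, depth_b[v2, u2])  # (H, W, 3), camera-b frame
    return pts_a.reshape(-1, 3), pts_b.reshape(-1, 3)
```

In practice the two point sets would then be mapped into a shared world frame using the camera poses, and pairs with occluded or forward-backward-inconsistent flow would be filtered out before being used as 3D flow supervision.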
Preprocessed Dataset Example: A sample dataset is provided here: OneDrive
Stage 1: run the following script:
./scripts/run_stage1.sh
Stage 2: run the following script:
./scripts/run_stage2.sh
Once the training is complete, you can load the checkpoint and render scenes:
# Spiral cameras
./scripts/render_vis_cam.sh
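For reference, spiral visualization cameras are typically built NeRF-style by orbiting a reference position while looking at a fixed point. The sketch below is a generic illustration of that idea (function names, parameters, and the camera-axis convention are all assumptions), not the exact path produced by render_vis_cam.sh:

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Camera-to-world matrix with axes x-right, y-up, z-forward
    (adapt to your renderer's camera convention)."""
    z = target - eye
    z = z / np.linalg.norm(z)      # forward
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)      # right
    y = np.cross(z, x)             # up, already unit length
    c2w = np.eye(4)
    c2w[:3, :3] = np.stack([x, y, z], axis=1)
    c2w[:3, 3] = eye
    return c2w

def spiral_path(center, radius=0.3, look_depth=4.0, n_frames=120, n_rot=2):
    """Camera centers on a circle (n_rot turns) around `center`, all looking
    at a point `look_depth` units in front of it."""
    target = center + np.array([0.0, 0.0, look_depth])
    thetas = np.linspace(0.0, 2.0 * np.pi * n_rot, n_frames, endpoint=False)
    return np.stack([
        look_at(center + radius * np.array([np.cos(t), np.sin(t), 0.0]), target)
        for t in thetas
    ])
```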
To be updated.
This project is licensed under the MIT License - see the LICENSE file for details.
@inproceedings{liu2025modgs,
title={Mo{DGS}: Dynamic Gaussian Splatting from Casually-captured Monocular Videos with Depth Priors},
author={Qingming Liu and Yuan Liu and Jiepeng Wang and Xianqiang Lyu and Peng Wang and Wenping Wang and Junhui Hou},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=2prShxdLkX}
}
Thanks to the authors of GeoWizard, NSFF, 3D Gaussian Splatting, GaussianFlow, Deformable-GS, and OmniMotion for their excellent code. Please consider also citing their papers:
@Article{kerbl3Dgaussians,
author = {Kerbl, Bernhard and Kopanas, Georgios and Leimk{\"u}hler, Thomas and Drettakis, George},
title = {3D Gaussian Splatting for Real-Time Radiance Field Rendering},
journal = {ACM Transactions on Graphics},
number = {4},
volume = {42},
month = {July},
year = {2023},
url = {https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/}
}
@inproceedings{wang2023omnimotion,
title = {Tracking Everything Everywhere All at Once},
author = {Wang, Qianqian and Chang, Yen-Yu and Cai, Ruojin and Li, Zhengqi and Hariharan, Bharath and Holynski, Aleksander and Snavely, Noah},
booktitle = {International Conference on Computer Vision},
year = {2023}
}
@article{yang2023deformable3dgs,
title={Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction},
author={Yang, Ziyi and Gao, Xinyu and Zhou, Wen and Jiao, Shaohui and Zhang, Yuqing and Jin, Xiaogang},
journal={arXiv preprint arXiv:2309.13101},
year={2023}
}
@article{gao2024gaussianflow,
author = {Quankai Gao and Qiangeng Xu and Zhe Cao and Ben Mildenhall and Wenchao Ma and Le Chen and Danhang Tang and Ulrich Neumann},
title = {GaussianFlow: Splatting Gaussian Dynamics for 4D Content Creation},
journal = {arXiv preprint arXiv:2403.12365},
year = {2024},
}
@inproceedings{fu2024geowizard,
title={GeoWizard: Unleashing the Diffusion Priors for 3D Geometry Estimation from a Single Image},
author={Fu, Xiao and Yin, Wei and Hu, Mu and Wang, Kaixuan and Ma, Yuexin and Tan, Ping and Shen, Shaojie and Lin, Dahua and Long, Xiaoxiao},
booktitle={ECCV},
year={2024}
}
@InProceedings{li2020neural,
title={Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes},
author={Li, Zhengqi and Niklaus, Simon and Snavely, Noah and Wang, Oliver},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2021}
}