
Official implementation of TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models


TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models


🤗 If you find TrajectoryCrafter useful, please consider giving this repo a ⭐ — it helps support open-source projects. Thanks!

🔆 Introduction

  • [2025-03-10]: 🔥🔥 Update the arXiv preprint.
  • [2025-02-23]: Launch the project page.

TrajectoryCrafter can generate high-fidelity novel views from casually captured monocular videos, while also supporting highly precise pose control. Some examples are shown below:

Input Video | New Camera Trajectory

⚙️ Setup

0. GPU memory requirement

We recommend running it on a GPU with at least 28 GB of VRAM.

1. Clone TrajectoryCrafter

git clone --recursive https://github.com/TrajectoryCrafter/TrajectoryCrafter.git
cd TrajectoryCrafter

2. Setup environments

conda create -n trajcrafter python=3.10
conda activate trajcrafter
pip install -r requirements.txt

3. Download pretrained models

By default, the pretrained models are loaded directly from HuggingFace. If you encounter issues connecting to HuggingFace, you can download the pretrained models locally instead. To do so:

  1. Download the pretrained models using HuggingFace or git-lfs:
# HuggingFace (recommended)
sh download/download_hf.sh 

# git-lfs (much slower but more stable)
sh download/download_lfs.sh 
  2. Change the default path of the pretrained models to your local path in inference.py.
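After downloading, the path change in inference.py can be as simple as preferring a local checkpoint directory when it exists. A minimal sketch, assuming hypothetical names — the `resolve_model_path` helper and both paths below are illustrations, not the repo's actual code:

```python
import os

# Hypothetical helper (not part of the repo): prefer a local checkpoint
# directory when it exists, otherwise fall back to a HuggingFace repo id.
def resolve_model_path(local_dir: str, hf_repo_id: str) -> str:
    return local_dir if os.path.isdir(local_dir) else hf_repo_id

# Falls back to the HuggingFace repo id when no local download is present.
model_path = resolve_model_path("./checkpoints/TrajectoryCrafter",
                                "TrajectoryCrafter/TrajectoryCrafter")
```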

💫 Inference

1. Command line

Run inference.py using the following script. Please refer to the configuration document to set up inference parameters and camera trajectory.

  sh run.sh
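The exact trajectory format is defined in the configuration document. As a conceptual illustration only, a camera trajectory is a sequence of camera poses over the video's frames. A pure-Python sketch of a simple orbit (the `orbit_trajectory` helper and its pose tuple layout are hypothetical, not the repo's format):

```python
import math

def orbit_trajectory(n_frames: int, radius: float, total_deg: float):
    """Hypothetical illustration: camera positions orbiting the subject at
    a fixed radius, each yawed to keep the subject centered."""
    poses = []
    for i in range(n_frames):
        theta = math.radians(total_deg * i / max(n_frames - 1, 1))
        x, z = radius * math.sin(theta), radius * math.cos(theta)
        yaw_deg = math.degrees(theta)  # look back toward the origin
        poses.append((x, 0.0, z, yaw_deg))
    return poses

# e.g. a 49-frame, 30-degree orbit at radius 2
traj = orbit_trajectory(n_frames=49, radius=2.0, total_deg=30.0)
```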

2. Local gradio demo

  python gradio_app.py

📢 Limitations

Our model excels at handling videos with well-defined objects and clear motion, as demonstrated in the demo videos. However, since it is built upon a pretrained video diffusion model, it may struggle with complex cases that go beyond the generation capabilities of the base model.

🤗 Related Works

Including but not limited to: CogVideo-Fun, ViewCrafter, DepthCrafter, GCD, NVS-Solver, DimensionX, ReCapture, TrajAttention, GS-DiT, DaS, RecamMaster, GEN3C, CAT4D...
