
GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control

[Video: GEN3C GUI demo]

GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control
Xuanchi Ren*, Tianchang Shen*, Jiahui Huang, Huan Ling, Yifan Lu, Merlin Nimier-David, Thomas Müller, Alexander Keller, Sanja Fidler, Jun Gao
* indicates equal contribution
CVPR 2025 (Highlight)
Paper, Project Page

Abstract: We present GEN3C, a generative video model with precise Camera Control and temporal 3D Consistency. Prior video models already generate realistic videos, but they tend to leverage little 3D information, leading to inconsistencies, such as objects popping in and out of existence. Camera control, if implemented at all, is imprecise, because camera parameters are mere inputs to the neural network which must then infer how the video depends on the camera. In contrast, GEN3C is guided by a 3D cache: point clouds obtained by predicting the pixel-wise depth of seed images or previously generated frames. When generating the next frames, GEN3C is conditioned on the 2D renderings of the 3D cache with the new camera trajectory provided by the user. Crucially, this means that GEN3C neither has to remember what it previously generated nor does it have to infer the image structure from the camera pose. The model, instead, can focus all its generative power on previously unobserved regions, as well as advancing the scene state to the next frame. Our results demonstrate more precise camera control than prior work, as well as state-of-the-art results in sparse-view novel view synthesis, even in challenging settings such as driving scenes and monocular dynamic video. Results are best viewed in videos.
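To make the 3D cache concrete, here is a minimal NumPy sketch of its core geometric step: lifting a depth map to a point cloud and re-rendering it from a user-supplied camera. This is an illustrative approximation under simplified assumptions (a pinhole camera, a naive painter's splat), not GEN3C's actual implementation, and all names in it are hypothetical.

# Illustrative sketch of GEN3C's 3D cache idea (not the actual implementation):
# lift a seed image's predicted depth to a point cloud, then splat it into the
# view of a user-specified camera to form the conditioning image.
import numpy as np

def unproject(depth, K):
    """Lift an (H, W) depth map to camera-space 3D points using intrinsics K."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T        # back-project pixels through the camera
    return rays * depth.reshape(-1, 1)     # scale each ray by its depth

def render(points, colors, K, w2c, H, W):
    """Naive painter's splat of colored 3D points into a new camera view."""
    cam = points @ w2c[:3, :3].T + w2c[:3, 3]      # world -> new camera frame
    keep = cam[:, 2] > 1e-6                        # drop points behind the camera
    cam, colors = cam[keep], colors[keep]
    proj = cam @ K.T
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    img = np.zeros((H, W, 3), dtype=colors.dtype)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    order = np.argsort(-cam[ok, 2])                # draw far points first
    u_ok, v_ok = uv[ok][order].T
    img[v_ok, u_ok] = colors[ok][order]            # near points overwrite far ones
    return img

The pixels the splat leaves empty are exactly the disoccluded regions the diffusion model must fill in, which is why conditioning on such renderings yields precise camera control rather than forcing the network to infer image structure from raw camera parameters.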

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing. For any other questions related to the model, please contact Xuanchi, Tianchang or Jun.

News

  • 2025-06-06 Code and model released! In a future update, we plan to include the pipeline for jointly predicting depth and camera pose from video, as well as a driving-finetuned model. Stay tuned!

Installation

Please follow the "Inference" section in INSTALL.md to set up your environment.

Inference

Download checkpoints

  1. Generate a Hugging Face access token (if you haven't done so already). When creating it, set the token's permission to Read (the default token type is Fine-grained).

  2. Log in to Hugging Face with the access token:

    huggingface-cli login
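
    The token can also be supplied non-interactively, which is handy in scripts; "$HF_TOKEN" below is a placeholder for your own token:

    huggingface-cli login --token "$HF_TOKEN"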
  3. Download the GEN3C model weights from Hugging Face:

    CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) python scripts/download_gen3c_checkpoints.py --checkpoint_dir checkpoints

Interactive GUI usage

GEN3C interactive GUI

GEN3C can be used through an interactive GUI that lets you visualize the inputs in 3D, author arbitrary camera trajectories, and launch inference, all from a single window. Please see the dedicated instructions.

Command-line usage

GEN3C supports both images and videos as input. Below are examples of running GEN3C on single images and videos with predefined camera trajectory patterns.

Example 1: Single Image to Video Generation

Single GPU

Generate a 121-frame video from a single image:

CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) python cosmos_predict1/diffusion/inference/gen3c_single_image.py \
    --checkpoint_dir checkpoints \
    --input_image_path assets/diffusion/000000.png \
    --video_save_name test_single_image \
    --guidance 1 \
    --foreground_masking

Multi-GPU (8 GPUs)

NUM_GPUS=8
CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) torchrun --nproc_per_node=${NUM_GPUS} cosmos_predict1/diffusion/inference/gen3c_single_image.py \
    --checkpoint_dir checkpoints \
    --input_image_path assets/diffusion/000000.png \
    --video_save_name test_single_image_multigpu \
    --num_gpus ${NUM_GPUS} \
    --guidance 1 \
    --foreground_masking

Additional Options

  • To generate longer videos autoregressively, specify the number of frames using --num_video_frames. The frame count must follow the pattern 121 + 120 × (N − 1), i.e., 121, 241, 361, and so on (see the example after this list).
  • To save buffer images alongside the output video, add the --save_buffer flag.
  • You can control camera trajectories using the --trajectory, --camera_rotation, and --movement_distance arguments. See the "Camera Movement Options" section below for details.
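
For example, this variant of the single-GPU command above generates a 241-frame video and saves the buffer images alongside it (the output name is illustrative):

CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) python cosmos_predict1/diffusion/inference/gen3c_single_image.py \
    --checkpoint_dir checkpoints \
    --input_image_path assets/diffusion/000000.png \
    --video_save_name test_single_image_long \
    --num_video_frames 241 \
    --save_buffer \
    --guidance 1 \
    --foreground_masking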

Camera Movement Options

Trajectory Types

The --trajectory argument controls the path the camera takes during video generation. Available options:

| Option | Description |
| --- | --- |
| left | Camera moves to the left (default) |
| right | Camera moves to the right |
| up | Camera moves upward |
| down | Camera moves downward |
| zoom_in | Camera moves closer to the scene |
| zoom_out | Camera moves away from the scene |
| clockwise | Camera moves in a clockwise circular path |
| counterclockwise | Camera moves in a counterclockwise circular path |

Camera Rotation Modes

The --camera_rotation argument controls how the camera rotates during movement. Available options:

| Option | Description |
| --- | --- |
| center_facing | Camera always rotates to look at the (estimated) center of the scene (default) |
| no_rotation | Camera maintains its original orientation while moving |
| trajectory_aligned | Camera rotates to align with the direction of movement |

Movement Distance

The --movement_distance argument controls how far the camera moves from its initial position. The default value is 0.3. A larger value will result in more dramatic camera movement, while a smaller value will create more subtle movement.
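
Putting these together, the following variant of the single-image command orbits the camera clockwise while keeping it pointed at the scene center, with slightly larger movement than the default (the output name is illustrative):

CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) python cosmos_predict1/diffusion/inference/gen3c_single_image.py \
    --checkpoint_dir checkpoints \
    --input_image_path assets/diffusion/000000.png \
    --video_save_name test_single_image_orbit \
    --trajectory clockwise \
    --camera_rotation center_facing \
    --movement_distance 0.5 \
    --guidance 1 \
    --foreground_masking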

Example 2: Video to Video Generation

For video input, GEN3C additionally requires depth information, camera intrinsics, and camera extrinsics. These can be obtained with a SLAM package of your choice. For testing purposes, we provide example data.
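
As a rough, hypothetical illustration of the quantities involved (this is not the repository's data format or loader, just the standard pinhole-camera conventions):

# Hypothetical sketch of the per-frame quantities GEN3C's video mode consumes:
# a depth map aligned with the RGB frame, a 3x3 intrinsics matrix, and a
# 4x4 world-to-camera extrinsic. Not the repository's actual data format.
import numpy as np

H, W = 480, 640                      # example image resolution
fx = fy = 500.0                      # focal lengths in pixels (example values)
cx, cy = W / 2.0, H / 2.0            # principal point, here at the image center

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])      # pinhole intrinsics

R = np.eye(3)                        # camera rotation (world -> camera)
t = np.array([0.0, 0.0, 1.0])        # camera translation
w2c = np.eye(4)
w2c[:3, :3] = R
w2c[:3, 3] = t                       # 4x4 extrinsic, one per frame

depth = np.ones((H, W), np.float32)  # per-pixel depth for the frame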

First, you need to download the test samples:

# Download test samples from Hugging Face
huggingface-cli download nvidia/GEN3C-Testing-Example --repo-type dataset --local-dir assets/diffusion/dynamic_video_samples

Single GPU

CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) python cosmos_predict1/diffusion/inference/gen3c_dynamic.py \
    --checkpoint_dir checkpoints \
    --input_image_path assets/diffusion/dynamic_video_samples/batch_0000 \
    --video_save_name test_dynamic_video \
    --guidance 1

Multi-GPU (8 GPUs)

NUM_GPUS=8
CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) torchrun --nproc_per_node=${NUM_GPUS} cosmos_predict1/diffusion/inference/gen3c_dynamic.py \
    --checkpoint_dir checkpoints \
    --input_image_path assets/diffusion/dynamic_video_samples/batch_0000 \
    --video_save_name test_dynamic_video_multigpu \
    --num_gpus ${NUM_GPUS} \
    --guidance 1

Gallery

  • GEN3C can be easily applied to video/scene creation from a single image
  • ... or sparse-view images (we use 5 images here)
  • ... and dynamic videos

Acknowledgement

Our model is based on NVIDIA Cosmos and Stable Video Diffusion.

We are also grateful to several other open-source repositories that we drew inspiration from or built upon during the development of our pipeline.

Citation

@inproceedings{ren2025gen3c,
    title={GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control},
    author={Ren, Xuanchi and Shen, Tianchang and Huang, Jiahui and Ling, Huan and
        Lu, Yifan and Nimier-David, Merlin and Müller, Thomas and Keller, Alexander and
        Fidler, Sanja and Gao, Jun},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2025}
}

License and Contact

This project will download and install additional third-party open source software projects. Review the license terms of these open source projects before use.

GEN3C source code is released under the Apache 2 License.

GEN3C models are released under the NVIDIA Open Model License. For a custom license, please visit our website and submit the form: NVIDIA Research Licensing.
