
MiniMax-Remover: Taming Bad Noise Helps Video Object Removal

Bojia Zi*, Weixuan Peng*, Xianbiao Qi, Jianan Wang, Shihao Zhao, Rong Xiao, Kam-Fai Wong
* Equal contribution. Corresponding author.

Huggingface Model · GitHub · Huggingface Space · arXiv · YouTube · Demo Page


🚀 Overview

MiniMax-Remover is a fast and effective video object remover based on minimax optimization. It operates in two stages: the first stage trains a remover using a simplified DiT architecture, while the second stage distills a robust remover that requires no CFG and only a few inference steps.


✨ Features:

  • Fast: Requires only 6 inference steps and does not use CFG, making it highly efficient.

  • Effective: Seamlessly removes objects from videos and generates high-quality visual content.

  • Robust: Maintains robustness by preventing the regeneration of undesired objects or artifacts within the masked region, even under varying noise conditions.


🛠️ Installation

All dependencies are listed in requirements.txt.

pip install -r requirements.txt

📂 Download

huggingface-cli download zibojia/minimax-remover --include vae transformer scheduler --local-dir .
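If you prefer scripting the download, the same files can be fetched with the huggingface_hub Python API. This is a minimal sketch, not part of the repository; the allow_patterns simply mirror the three folders requested by the CLI command above.

from huggingface_hub import snapshot_download

# Fetch only the vae, transformer, and scheduler folders into the current directory
snapshot_download(
    repo_id="zibojia/minimax-remover",
    allow_patterns=["vae/*", "transformer/*", "scheduler/*"],
    local_dir=".",
)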

⚡ Quick Start

Minimal Example

import torch
from diffusers.utils import export_to_video
from decord import VideoReader
from diffusers.models import AutoencoderKLWan
from transformer_minimax_remover import Transformer3DModel
from diffusers.schedulers import UniPCMultistepScheduler
from pipeline_minimax_remover import Minimax_Remover_Pipeline

random_seed = 42
video_length = 81
device = torch.device("cuda:0")

# Load model weights separately
vae = AutoencoderKLWan.from_pretrained("./vae", torch_dtype=torch.float16)
transformer = Transformer3DModel.from_pretrained("./transformer", torch_dtype=torch.float16)
scheduler = UniPCMultistepScheduler.from_pretrained("./scheduler")

images = ...  # video frames in range [-1, 1]; see the loading sketch below
masks = ...   # masks in range [0, 1]; see the loading sketch below

# Initialize the pipeline (pass the loaded weights as objects)
pipe = Minimax_Remover_Pipeline(
    vae=vae, transformer=transformer, scheduler=scheduler, torch_dtype=torch.float16
).to(device)

result = pipe(
    images=images, masks=masks, num_frames=video_length, height=480, width=832,
    num_inference_steps=12,
    generator=torch.Generator(device=device).manual_seed(random_seed),
    iterations=6,
).frames[0]
export_to_video(result, "./output.mp4")
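The example above leaves images and masks as placeholders. Below is a minimal sketch, not part of the repository, of how they might be prepared with decord (already imported in the example); the file paths are hypothetical, and the exact tensor layout expected by Minimax_Remover_Pipeline should be checked against the pipeline code.

import torch
from decord import VideoReader

def load_frames(path, num_frames):
    # Read up to num_frames RGB frames as a float tensor of shape (T, H, W, 3)
    vr = VideoReader(path)
    idx = list(range(min(num_frames, len(vr))))
    return torch.from_numpy(vr.get_batch(idx).asnumpy()).float()

# Hypothetical input files: an RGB source video and a binary mask video
images = load_frames("./video.mp4", video_length) / 127.5 - 1.0   # scale to [-1, 1]
masks = load_frames("./mask.mp4", video_length)[..., :1] / 255.0  # scale to [0, 1]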

📧 Contact

Feel free to send an email to 19210240030@fudan.edu.cn if you have any questions or suggestions.
