Stars
Hugging Face cloth segmentation using U2NET
Rembg is a tool to remove image backgrounds (see the usage sketch after this list)
ViViD: Video Virtual Try-on using Diffusion Models
Character Animation (AnimateAnyone, Face Reenactment)
Code for "ShineOn: Illuminating Design Choices for Practical Video-based Virtual Clothing Try-on", accepted at WACV 2021 Generation of Human Behavior Workshop.
Official code for "MIRROR: Towards Generalizable On-Device Video Virtual Try-On for Mobile Shopping", IMWUT 2023 / Ubicomp 2024.
diffusion model baesd video-virtual-try-on
HiFuse: Hierarchical Multi-Scale Feature Fusion Network for Medical Image Classification
Large-scale text-video dataset. 10 million captioned short videos.
Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models (ICLR 2024)
Command & Conquer: Remastered Collection
Fine-Grained Open Domain Image Animation with Motion Guidance
[TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation.
📺 NeuralAtlases - A Pytorch implementation of the paper "Layered Neural Atlases for Consistent Video Editing" (https://arxiv.org/abs/2109.11418v1)
AnimateDiff prompt travel
This repository contains the PyTorch implementation of ControlNet-attached 'Tune-A-Video'.
From ComfyUI workflow to web app, in seconds
Utilities library for working with the ComfyUI API (a minimal raw-API call is sketched after this list)
Animate a given image with AnimateDiff and ControlNet
Generative Models by Stability AI
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
Add a ControlNet to AnimateDiff to animate a given image
[ICCV 2023] StableVideo: Text-driven Consistency-aware Diffusion Video Editing
[ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction
The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
[ICLR 2024] Code for FreeNoise based on VideoCrafter
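
Two of the entries above expose simple Python APIs, so short sketches follow. First, a minimal Rembg usage sketch, assuming `pip install rembg` and a local file named `input.png` (the file names are placeholders, not taken from the repo):

```python
# Minimal Rembg usage sketch: strip the background from a single image.
# Assumes `pip install rembg` and a local input.png (placeholder name).
from rembg import remove
from PIL import Image

img = Image.open("input.png")   # source image with a background
result = remove(img)            # returns an RGBA image with the background removed
result.save("output.png")       # PNG keeps the transparent background
```

`remove` also accepts raw bytes or NumPy arrays, which is convenient in pipelines that never touch disk.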
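
Second, a minimal sketch of calling the raw ComfyUI API that the utilities library above wraps, assuming a ComfyUI server running locally on its default port 8188 and a workflow already exported via "Save (API Format)" to `workflow_api.json` (the file name is a placeholder):

```python
# Minimal sketch of the raw ComfyUI HTTP API: queue a workflow for execution.
# Assumes a local ComfyUI server on the default port 8188 and a workflow
# exported in API format to workflow_api.json (placeholder name).
import json
import urllib.request

with open("workflow_api.json") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # response includes a prompt_id for tracking progress
```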