University of York
York, United Kingdom
@willrowan3
Stars
Cosmos-Drive-Dreams: Scalable Synthetic Driving Data Generation with World Foundation Models
DiffusionRenderer (Cosmos): Neural Inverse and Forward Rendering with Video Diffusion Models
SkyReels-V2: Infinite-Length Film Generative Model
[SIGGRAPH 2025] LAM: Large Avatar Model for One-shot Animatable Gaussian Head
[CVPR 2025] Official Implementation of GaussianIP: Identity-Preserving Realistic 3D Human Generation via Human-Centric Diffusion Prior
You like PyTorch? You like LaTeX? You love NeuraLaTeX! ❤️
Real-Time High-Resolution Background Matting
Expressive Gaussian Human Avatars from Monocular RGB Video (NeurIPS 2024)
SkyReels V1: The first and most advanced open-source human-centric video foundation model
[CVPR 2025] MatAnyone: Stable Video Matting with Consistent Memory Propagation
Official Code Release for [SIGGRAPH 2024] DiLightNet: Fine-grained Lighting Control for Diffusion-based Image Generation
InverseRenderNet: Learning single image inverse rendering, CVPR 2019.
Official Repository of **CaPa**: Carve-n-Paint Synthesis for Efficient 4K Textured Mesh Generation
[Image and Vision Computing (Vol. 147, Jul. '24)] Interactive Natural Image Matting with Segment Anything Models
Open-source implementation of **MV-DUSt3R+: Single-Stage Scene Reconstruction from Sparse Views in 2 Seconds** from Meta Reality Labs. Project page: https://mv-dust3rp.github.io/
A curated set of references to useful UK Government datasets
[CAAI AIR'24] Bilateral Reference for High-Resolution Dichotomous Image Segmentation
New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos
[CVPR 2025] Neural LightRig: Unlocking Accurate Object Normal and Material Estimation with Multi-Light Diffusion
Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2
DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding
[CVPR 2025] PyTorch-based reimplementation of CrossFlow, as proposed in 'Flowing from Words to Pixels: A Noise-Free Framework for Cross-Modality Evolution'