Stars
An open-source AI agent that brings the power of Gemini directly into your terminal.
Core validation logic for pydantic written in Rust
Build resilient language agents as graphs.
🦜🔗 Build context-aware reasoning applications
Large Language Model based Multi-Agents: A Survey of Progress and Challenges
Source code and demo for MemoryBank and SiliconFriend
[NeurIPS 2024] Human 3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion Models
Official repo for paper "Structured 3D Latents for Scalable and Versatile 3D Generation" (CVPR'25 Spotlight).
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflo…
Filament is a real-time physically based rendering engine for Android, iOS, Windows, Linux, macOS, and WebGL2
Open 3D Engine (O3DE) is an Apache 2.0-licensed multi-platform 3D engine that enables developers and content creators to build AAA games, cinema-quality 3D worlds, and high-fidelity simulations wit…
A refreshingly simple data-driven game engine built in Rust
Godot Engine – Multi-platform 2D and 3D game engine
Official implementation for Audio2Motion: Generating Diverse Gestures from Speech with Conditional Variational Autoencoders.
Matter (formerly Project CHIP) creates more connections between more objects, simplifying development for manufacturers and increasing compatibility for consumers, guided by the Connectivity Standa…
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
TransNet V2: Shot Boundary Detection Neural Network
Open-Sora: Democratizing Efficient Video Production for All
The official implementation of GTCRN, an ultra-lightweight speech enhancement (SE) model.
[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
Awesome LLM compression research papers and tools.
Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training.