Stars
MkDocs plugin for autogenerating API docs with navigation
A Sphinx extension that automatically generates API documentation for your Python packages.
A high-throughput and memory-efficient inference and serving engine for LLMs
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
Fully open reproduction of DeepSeek-R1
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and FLAX.
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (V…
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training.
vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization
Fast and accurate automatic speech recognition (ASR) for edge devices
Best practices & guides on how to write distributed PyTorch training code
Swiss-army tool for scraping and extracting data from online assets, made for hackers
Interactive Variational Autoencoder (VAE)
Your Next Store: Modern Commerce with Next.js and Stripe as the backend.
LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve speech capabilities at the GPT-4o level.
A plotting tool that outputs Line Rider maps, so you can watch a man on a sled scoot down your loss curves. 🎿
Agentic AI framework for enterprise workflow automation.
An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations.
Analyze a project: How are the maintainers doing?
A collection of utilities and notebooks for analysing your finances
An end-user training and evaluation system for standard knowledge graph embedding models, developed to optimise performance on the WikiKG90Mv2 dataset