Stars
Simple & Scalable Pretraining for Neural Architecture Research
Scalable toolkit for efficient model reinforcement
Minimalistic 4D-parallelism distributed training framework for educational purposes
Source code for Twitter's Recommendation Algorithm
High-quality single-file implementations of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
verl: Volcano Engine Reinforcement Learning for LLMs
Copilot Chat extension for VS Code
Everything about the SmolLM and SmolVLM family of models
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
An open-source library for GPU-accelerated robot learning and sim-to-real transfer.
The official Python SDK for Model Context Protocol servers and clients
Multi-Joint dynamics with Contact. A general purpose physics simulator.
You like pytorch? You like micrograd? You love tinygrad! ❤️
Release repo for our SLAM Handbook
Make huge neural nets fit in memory
Minimal implementation of scalable rectified flow transformers, based on SD3's approach
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (V…
Fully open data curation for reasoning models
The simplest, fastest repository for training/finetuning small-sized VLMs.
A framework for serving and evaluating LLM routers - save LLM costs without compromising quality
DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling
FlashMLA: Efficient MLA decoding kernels
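Several entries above (micrograd, tinygrad) center on minimal automatic differentiation. As a rough illustration of the micrograd-style idea — not code from either project — here is a sketch of a scalar `Value` class that records each operation and replays the chain rule in reverse; all names here are invented for illustration.

```python
# Minimal scalar reverse-mode autograd sketch (hypothetical, micrograd-style).
# Each Value stores its data, an accumulated gradient, and a closure that
# propagates its gradient to the inputs that produced it.

class Value:
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._children = children
        self._grad_fn = None  # set by the op that created this node

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def grad_fn():  # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._grad_fn = grad_fn
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def grad_fn():  # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._grad_fn = grad_fn
        return out

    def backward(self):
        # Topological order ensures a node's grad is complete before
        # it is distributed to its children.
        topo, seen = [], set()
        def build(v):
            if id(v) not in seen:
                seen.add(id(v))
                for c in v._children:
                    build(c)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            if v._grad_fn:
                v._grad_fn()

x = Value(3.0)
y = Value(4.0)
z = x * y + x      # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

Real frameworks generalize this from scalars to tensors and fuse the backward closures into compiled kernels, but the bookkeeping (graph capture plus reverse topological sweep) is the same idea.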