Stars
An AI-powered task-management system you can drop into Cursor, Lovable, Windsurf, Roo, and others.
An MCP server for Claude that gives it terminal control, file-system search, and diff-based file editing capabilities
Context7 MCP Server -- Up-to-date code documentation for LLMs and AI code editors
A magical tool for using local LLMs with MCP servers.
An open-source, extensible AI agent that goes beyond code suggestions: install, execute, edit, and test with any LLM
🚀 The fast, Pythonic way to build MCP servers and clients
OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs"
PyTorch native quantization and sparsity for training and inference
End-to-end recipes for optimizing diffusion models with torchao and diffusers (inference and FP8 training).
Context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/)
Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model
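The core idea behind timestep-embedding caching can be illustrated with a toy, stdlib-only sketch (this is not the library's API; the class name, threshold value, and `compute` callback are all hypothetical): cache the previous step's output and reuse it when successive timestep embeddings differ by less than a threshold, so nearly-identical steps skip recomputation.

```python
def l1_distance(a, b):
    # Mean absolute difference between two embedding vectors.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

class TimestepCache:
    """Toy sketch of embedding-similarity caching: reuse the cached
    output when the current timestep embedding is close to the one
    that produced it, otherwise recompute and refresh the cache."""

    def __init__(self, threshold=0.05):
        self.threshold = threshold
        self.cached_embedding = None
        self.cached_output = None

    def step(self, embedding, compute):
        if (self.cached_embedding is not None
                and l1_distance(embedding, self.cached_embedding) < self.threshold):
            return self.cached_output  # embeddings barely changed: reuse
        output = compute(embedding)    # changed enough: recompute
        self.cached_embedding = embedding
        self.cached_output = output
        return output
```

The real method decides the threshold from the model's timestep-embedding dynamics rather than a fixed constant; this sketch only shows the reuse-vs-recompute control flow.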
xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism
Get your documents ready for gen AI
Generalist and Lightweight Model for Named Entity Recognition (Extract any entity types from texts) @ NAACL 2024
DSPy: The framework for programming—not prompting—language models
💫 Industrial-strength Natural Language Processing (NLP) in Python
A language for constraint-guided and efficient LLM programming.
Plug-and-play document processing pipelines with zero-shot models.
SGLang is a fast serving framework for large language models and vision language models.
FlashInfer: Kernel Library for LLM Serving
LightLLM is a Python-based LLM inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
A guidance language for controlling large language models.
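The constraint-guided decoding idea behind tools like this can be sketched in a few lines of stdlib Python (a toy illustration only, not the library's API; the function and argument names are hypothetical): at each decoding step, discard candidate tokens that could no longer extend the output toward any allowed string.

```python
def constrained_pick(prefix, candidates, allowed):
    """Toy constraint-guided decoding step: keep only candidate tokens
    whose addition to `prefix` is still a prefix of some allowed string."""
    valid = []
    for tok in candidates:
        extended = prefix + tok
        if any(option.startswith(extended) for option in allowed):
            valid.append(tok)
    return valid
```

Real guidance systems apply the same masking to the model's logits (and support grammars and regexes, not just fixed choice sets), but the filtering principle is the same.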
🔥 InfiniteYou: Flexible Photo Recrafting While Preserving Your Identity