Stars
The official Java SDK for Model Context Protocol servers and clients. Maintained in collaboration with Spring AI
🚀 The fast, Pythonic way to build MCP servers and clients
A professional cross-platform SSH/SFTP/Shell/Telnet/Tmux/Serial terminal.
✨ Light and fast AI assistant. Supports: Web | iOS | macOS | Android | Linux | Windows
Enhanced ChatGPT Clone: Features Agents, DeepSeek, Anthropic, AWS, OpenAI, Assistants API, Azure, Groq, o1, GPT-4o, Mistral, OpenRouter, Vertex AI, Gemini, Artifacts, AI model switching, message se…
Build Conversational AI in minutes ⚡️
AI as Workspace - An elegant AI chat client. Full-featured, lightweight. Supports multiple workspaces, a plugin system, cross-platform use, local-first storage + real-time cloud sync, Artifacts, MCP | A better AI client
A live-streamed development of RL tuning for LLM agents
MiMo: Unlocking the Reasoning Potential of Language Model – From Pretraining to Posttraining
Apache CloudStack is an open-source Infrastructure as a Service (IaaS) cloud computing platform
🍒 Cherry Studio is a desktop client that supports multiple LLM providers.
Context7 MCP Server -- Up-to-date code documentation for LLMs and AI code editors
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs and 50+ AI guardrails with one fast, friendly API.
Cloud Native Agentic AI | Discord: https://bit.ly/kagentdiscord
A Model Context Protocol (MCP) server that converts Mermaid diagrams to PNG images
A Kubernetes MCP (Model Context Protocol) server that enables interaction with Kubernetes clusters through MCP tools.
The official Python SDK for Model Context Protocol servers and clients (see the minimal usage sketch after this list)
Model Context Protocol Servers
🚀🚀 Train a 26M-parameter GPT completely from scratch in just 2 hours! 🌏
Autonomous coding agent right in your IDE, capable of creating/editing files, executing commands, using the browser, and more with your permission every step of the way.
Cost-efficient and pluggable Infrastructure components for GenAI inference
Kyanos is a networking analysis tool using eBPF. It can visualize the time packets spend in the kernel and capture requests/responses, making troubleshooting more efficient.
eBPF-based Cloud Native Monitoring Tool
A high-performance distributed file system designed to address the challenges of AI training and inference workloads.
⏩ Create, share, and use custom AI code assistants with our open-source IDE extensions and hub of models, rules, prompts, docs, and other building blocks
FlashMLA: Efficient MLA decoding kernels
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train Qwen3, Llama 4, DeepSeek-R1, Gemma 3, TTS 2x faster with 70% less VRAM.
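As a reference for the official Python SDK for Model Context Protocol listed above, here is a minimal server sketch using the SDK's FastMCP helper. The server name "Demo" and the `add` tool are illustrative placeholders, not something defined by the SDK; treat this as a sketch of the typical usage pattern rather than a complete server.

```python
# Minimal MCP server sketch using the official Python SDK (package: mcp).
# "Demo" and the `add` tool are illustrative examples only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    # Run over stdio so an MCP client (e.g. an AI code editor) can launch
    # this server as a subprocess and invoke the `add` tool.
    mcp.run()
```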