FlashInfer: Kernel Library for LLM Serving
Updated Jun 13, 2025 · CUDA
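FlashInfer provides GPU attention kernels (prefill, decode, paged KV cache) used by LLM serving engines. As a rough illustration of the operation its decode kernels accelerate, here is a minimal PyTorch reference for single-request decode attention over a contiguous KV cache; the function name, tensor layouts, and shapes are illustrative assumptions, not FlashInfer's API.

```python
import torch

def decode_attention_reference(q, k_cache, v_cache):
    """Single-token decode attention, the op that FlashInfer-style kernels fuse.

    q:        [num_heads, head_dim]       query for the newly generated token
    k_cache:  [kv_len, num_heads, head_dim]
    v_cache:  [kv_len, num_heads, head_dim]
    returns:  [num_heads, head_dim]
    (Illustrative reference only; shapes and names are assumptions.)
    """
    head_dim = q.shape[-1]
    # scores[h, t] = q[h] . k_cache[t, h] / sqrt(head_dim)
    scores = torch.einsum("hd,thd->ht", q, k_cache) / head_dim**0.5
    probs = torch.softmax(scores, dim=-1)
    # out[h] = sum_t probs[h, t] * v_cache[t, h]
    return torch.einsum("ht,thd->hd", probs, v_cache)

# example usage: 32 heads, head_dim 128, 1024 cached tokens
q = torch.randn(32, 128)
k = torch.randn(1024, 32, 128)
v = torch.randn(1024, 32, 128)
out = decode_attention_reference(q, k, v)  # [32, 128]
```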
Quantized attention that achieves 2-5x speedup over FlashAttention and 3-11x over xformers, without degrading end-to-end metrics across language, image, and video models (see the quantized-attention sketch after this list).
SpargeAttention: A training-free sparse attention mechanism that can accelerate inference for any model.
📚 FFPA: Extends FlashAttention-2 with Split-D for large head dimensions, ~2x speedup over SDPA (see the Split-D sketch after this list).
Code for the paper "Cottention: Linear Transformers With Cosine Attention" (a simplified cosine-attention sketch follows this list).
Patch-Based Stochastic Attention (an efficient attention mechanism)
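A minimal sketch of the quantized-attention idea mentioned above: Q and K are quantized to INT8 with scales, the score matrix is computed on the low-precision values and dequantized before the softmax. Real kernels use per-block scales, smoothing, and fused INT8 tensor-core matmuls; the function below only illustrates the numerics, and all names are assumptions.

```python
import torch

def int8_quantize(x):
    """Symmetric per-tensor INT8 quantization (illustrative assumption;
    real kernels typically use per-block or per-channel scales)."""
    scale = x.abs().max() / 127.0
    x_int8 = torch.clamp((x / scale).round(), -127, 127).to(torch.int8)
    return x_int8, scale

def quantized_attention(q, k, v):
    """q, k, v: [seq_len, head_dim]. Scores use INT8 Q/K, then dequantize."""
    q_i8, sq = int8_quantize(q)
    k_i8, sk = int8_quantize(k)
    # INT8 x INT8 score matmul (emulated in float here), dequantized by the scales.
    scores = (q_i8.float() @ k_i8.float().T) * (sq * sk)
    probs = torch.softmax(scores / q.shape[-1] ** 0.5, dim=-1)
    return probs @ v  # V kept in higher precision here

q, k, v = (torch.randn(256, 64) for _ in range(3))
out = quantized_attention(q, k, v)  # close to softmax(q @ k.T / sqrt(d)) @ v
```

The Split-D trick referenced in the FFPA entry rests on the fact that the score matrix Q·K^T can be accumulated chunk-by-chunk along the head dimension, so a large head_dim never has to be resident at once. A toy PyTorch sketch of that decomposition (chunk size and names are assumptions; the real kernel does this in registers and shared memory):

```python
import torch

def scores_split_d(q, k, chunk=64):
    """Accumulate S = q @ k.T over head-dim chunks:
    S = sum_c q[:, c] @ k[:, c].T, so only `chunk` columns are live at a time."""
    seq_q, head_dim = q.shape
    s = torch.zeros(seq_q, k.shape[0], dtype=q.dtype, device=q.device)
    for start in range(0, head_dim, chunk):
        end = min(start + chunk, head_dim)
        s += q[:, start:end] @ k[:, start:end].T
    return s

q = torch.randn(128, 512)   # large head_dim = 512
k = torch.randn(128, 512)
assert torch.allclose(scores_split_d(q, k), q @ k.T, atol=1e-4, rtol=1e-4)
```

For the Cottention entry, the core idea named in the paper's title is to replace the softmax with cosine similarity, which lets attention be reassociated as Q_n (K_n^T V) and computed in time and memory linear in sequence length. The sketch below shows that reassociation for the non-causal case; the 1/N normalization and other details are simplifying assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cosine_attention_linear(q, k, v):
    """q, k, v: [seq_len, head_dim].
    Cosine-similarity attention, reassociated for linear cost:
      (Q_n @ K_n.T) @ V  ==  Q_n @ (K_n.T @ V)
    where Q_n, K_n have L2-normalized rows."""
    q_n = F.normalize(q, dim=-1)
    k_n = F.normalize(k, dim=-1)
    kv = k_n.T @ v                   # [head_dim, head_dim], independent of seq_len
    return (q_n @ kv) / k.shape[0]   # simple 1/N normalization (assumption)

q, k, v = (torch.randn(1024, 64) for _ in range(3))
out = cosine_attention_linear(q, k, v)  # [1024, 64]
quadratic = (F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).T) @ v / k.shape[0]
assert torch.allclose(out, quadratic, atol=1e-4)
```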