Quantized Attention achieves 2-5x and 3-11x speedups over FlashAttention and xformers, respectively, without degrading end-to-end metrics across language, image, and video models.
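The description above doesn't spell out the kernel internals, but the core idea of quantized attention can be sketched in a few lines: quantize Q and K to INT8 so the score matmul runs on fast low-precision hardware, then dequantize before the softmax so end-to-end accuracy is preserved. The helper names below (`quantize_int8`, `quantized_attention`) are illustrative only, not this project's API, and the integer matmul is emulated in float32 for portability; real kernels run it on INT8 tensor cores with int32 accumulation.

```python
# Minimal sketch of INT8-quantized attention (illustrative, not this repo's kernel).
import torch

def quantize_int8(x: torch.Tensor):
    # Symmetric per-tensor quantization: scale maps max |x| onto [-127, 127].
    scale = x.abs().amax().clamp(min=1e-8) / 127.0
    q = (x / scale).round().clamp(-127, 127).to(torch.int8)
    return q, scale

def quantized_attention(q, k, v, is_causal=False):
    # q, k, v: (batch, heads, seq, head_dim) float tensors.
    q_i8, q_scale = quantize_int8(q)
    k_i8, k_scale = quantize_int8(k)
    # Score matmul on the quantized values. Casting int8 -> float32 is exact,
    # so this emulates an INT8 matmul with int32 accumulation.
    scores = q_i8.float() @ k_i8.float().transpose(-2, -1)
    # Dequantize and apply the usual 1/sqrt(d) scaling before softmax.
    scores = scores * (q_scale * k_scale) / (q.shape[-1] ** 0.5)
    if is_causal:
        n = scores.shape[-1]
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool, device=scores.device), 1)
        scores = scores.masked_fill(mask, float("-inf"))
    # Softmax and the P @ V matmul stay in floating point in this sketch.
    return torch.softmax(scores, dim=-1) @ v.float()

# Usage: same shapes as torch.nn.functional.scaled_dot_product_attention.
q = torch.randn(1, 8, 128, 64)
k = torch.randn(1, 8, 128, 64)
v = torch.randn(1, 8, 128, 64)
out = quantized_attention(q, k, v, is_causal=True)
```

Keeping the softmax and the P @ V product in higher precision while quantizing only the Q @ K^T matmul is what lets approaches like this trade precision for speed without visibly moving downstream metrics.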
Topics: cuda, triton, attention, vit, quantization, video-generation, mlsys, inference-acceleration, efficient-attention, llm, llm-infra, video-generate
Updated Jun 11, 2025 · CUDA