lyogavin/Anima


AirLLM optimizes inference memory usage, allowing 70B large language models to run inference on a single 4GB GPU without quantization, distillation, or pruning. You can also now run 405B Llama 3.1 on 8GB of VRAM.
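
For reference, the sketch below shows roughly how the airllm package is used to run such a model. The AutoModel interface, the checkpoint id, and the token limits are assumptions drawn from the package's published examples rather than from this page, so consult the new repository (https://github.com/lyogavin/airllm) for the current API.

```python
# Minimal usage sketch (assumptions noted below): load a large Llama-style
# checkpoint through airllm and run a short generation. airllm keeps memory
# low by streaming model layers through the GPU instead of loading them all.
from airllm import AutoModel

MAX_LENGTH = 128  # assumed cap on prompt length for this example

# Hypothetical checkpoint id; any Hugging Face Llama-style repo id should work.
model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")

input_text = ["What is the capital of the United States?"]
input_tokens = model.tokenizer(
    input_text,
    return_tensors="pt",
    truncation=True,
    max_length=MAX_LENGTH,
)

generation_output = model.generate(
    input_tokens["input_ids"].cuda(),
    max_new_tokens=20,
    return_dict_in_generate=True,
)

print(model.tokenizer.decode(generation_output.sequences[0]))
```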


This project has moved to https://github.com/lyogavin/airllm.
