Fix file sizes of LLM's #945
Conversation
Reorganization of models.json file
Add support for SlimOrca 13B
Falcon 7B file size bugfix
Models Filesize control
Alfred-40B KO
Falcon 7B filesize updated
Falcon 7B fix
Vicuna-13B fix
SlimOrca bugfix
I'm not sure what you are changing. @Smartappli
I'm in the process of testing each model one after the other to make sure it downloads.
@Smartappli You can do that with the Hugging Face API. I just need to update the sizes; some of them are off.
Try this:

```python
from huggingface_hub import list_files_info

print(list(list_files_info(repo_id="TheBloke/CodeLlama-7B-GGUF",
                           repo_type="model",
                           revision="main")))
```

It should print the size.
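Once the actual sizes are in hand, reconciling them with the entries in models.json is a small diffing step. The sketch below is a hypothetical illustration of that step; the field names `filename` and `filesize` are assumptions, not necessarily the project's real schema.

```python
# Hypothetical sketch: correct "filesize" entries in a models.json-style
# list using actual sizes (in bytes) reported by the Hub.
# Field names "filename"/"filesize" are illustrative assumptions.

def update_sizes(models, hub_sizes):
    """Return (updated models, filenames whose recorded size was wrong)."""
    fixed, mismatched = [], []
    for m in models:
        actual = hub_sizes.get(m["filename"])
        if actual is not None and m.get("filesize") != actual:
            mismatched.append(m["filename"])
            m = {**m, "filesize": actual}  # copy with corrected size
        fixed.append(m)
    return fixed, mismatched

# Example with a deliberately wrong recorded size:
models = [{"filename": "falcon-7b.Q4_0.gguf", "filesize": 123}]
hub = {"falcon-7b.Q4_0.gguf": 4_210_000_000}
fixed, ko = update_sizes(models, hub)
print(ko)  # the files whose recorded size was off ("KO")
```

Running the checker once per repo would flag exactly the "KO" entries discussed above.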
We need to update the sizes of the ones marked KO :)
Model filesizes updated. Thanks Gaby for the snippet :)
Wow, they all work :)
@Smartappli Thanks for the help, will review later today. 💪
Will do, thanks |
Add support for Llama-2-70B-OASST
tiny correction
Add Support for LLaMA 2 7B, 13B, 70B
Reorganization of models.json file
Fixes #910