Decrease model precision · Issue #119 · evo-design/evo · GitHub

Decrease model precision #119


Open
alexandre239 opened this issue Mar 20, 2025 · 0 comments

Comments

@alexandre239

Hi!

After seeing some issues related to OOM errors caused by long prompts, I was wondering whether an option for generating sequences with Evo from longer prompts (>1 kb and so on) would be, rather than GPU sharding, to decrease the float precision.

  • I believe the precision is currently set to bfloat16 (as in model.backbone = model.backbone.to(torch.bfloat16) in the generation_to_folding.py script), but would float8 be an option (is it at all compatible with Evo)?
  • If so, do you expect a large drop in generation quality, or do you already have data comparing precision against performance?
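For rough intuition on the memory side of the question, weight memory scales linearly with element size, so each halving of precision roughly halves the footprint. A minimal back-of-envelope sketch (the 7B parameter count here is a hypothetical figure for illustration, not the actual Evo model size):

```python
# Back-of-envelope estimate of model weight memory at different precisions.
# NOTE: the parameter count below is a hypothetical example, not Evo's
# actual size; activations and KV-cache memory are not included.

def weight_memory_gib(num_params: int, bytes_per_param: int) -> float:
    """Return the weight memory in GiB for a given element size."""
    return num_params * bytes_per_param / 2**30

params = 7_000_000_000  # hypothetical parameter count
for dtype_name, nbytes in [("float32", 4), ("bfloat16", 2), ("float8", 1)]:
    print(f"{dtype_name}: {weight_memory_gib(params, nbytes):.1f} GiB")
```

This only bounds the weight storage; whether float8 is workable in practice also depends on the model's numerics and on kernel support for 8-bit matmuls in the installed PyTorch build.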

Thanks so much!
