Remove check for CUDA version and package so the bf16 check passes on non-Nvidia CUDA devices that support bf16 · Pull Request #803 · pytorch/torchtune · GitHub


Merged: 2 commits into pytorch:main on Apr 19, 2024

Conversation

supernovae (Contributor)

Remove the checks that prevent AMD cards that PyTorch supports as CUDA devices from training.

Context

I have a 7900 XTX running PyTorch 2.2 with ROCm 6.1 and wanted to train a Mistral/Llama model, but I ran into an error saying my card didn't support bf16.

Changelog

Remove the CUDA package and version check, which returns None on ROCm builds.
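
For context, a minimal sketch of the kind of check this relaxes (illustrative only; the helper name and exact conditions are assumptions, not necessarily torchtune's actual code):

```python
import torch

def verify_bf16_support() -> bool:
    # Illustrative sketch (hypothetical helper): probe bf16 support through
    # PyTorch's own capability check instead of gating on torch.version.cuda,
    # which is None on ROCm builds even when the GPU supports bf16.
    return torch.cuda.is_available() and torch.cuda.is_bf16_supported()
```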

Test plan

Training Mistral 7B on my 7900 XTX using LoRA.
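
As a quick sanity check before training, one can confirm that a ROCm build exposes the card through the CUDA device API (an illustrative snippet; the device name string will vary):

```python
import torch

# On a ROCm build of PyTorch, supported AMD GPUs appear as CUDA devices.
print(torch.cuda.is_available())        # True if the GPU is visible
print(torch.cuda.get_device_name(0))    # e.g. an AMD Radeon RX 7900 XTX
print(torch.version.cuda)               # None on ROCm builds
print(torch.version.hip)                # ROCm/HIP version string instead
print(torch.cuda.is_bf16_supported())   # the capability this PR now relies on
```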

pytorch-bot bot commented Apr 18, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/803

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit b114186 with merge base a79554e:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot

Hi @supernovae!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

@kartikayk (Contributor)

@supernovae thanks so much for the PR; this change has been on my TODO list for a while and I'm glad you got to it first. Mind signing the CLA, and then I can accept and merge?

BTW, I'd be curious about what sort of training speed you're seeing (just a simple it/s number from tqdm would be good enough).

@supernovae (Contributor, Author) commented Apr 18, 2024

I did sign the CLA; I'll double-check to make sure I didn't fat-finger the username :)

I am getting 1.33 to 1.60 it/s if I leave my computer alone.

Is there a list of it/s numbers for 3090/4090 cards?

I did the CLA again; hopefully I didn't break anything.

@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

facebook-github-bot added the CLA Signed label on Apr 19, 2024
@kartikayk (Contributor)

Hmm, I'm getting around 3 it/s on a 4090, so about 2x faster. To confirm, you're training using the default config?

@kartikayk (Contributor) left a comment

Thanks for making the change! I added a small fix for the lint error.

kartikayk merged commit b74fd3a into pytorch:main on Apr 19, 2024; 27 checks passed.
@supernovae (Contributor, Author)

> Hmm, I'm getting around 3 it/s on a 4090, so about 2x faster. To confirm, you're training using the default config?

I am using the default config.

I did notice this warning:

/home/me/pyenv/versions/3.11.8/lib/python3.11/site-packages/torchtune/modules/attention.py:207: UserWarning: 1Torch was not compiled with memory efficient attention. (Triggered internally at ../aten/src/ATen/native/transformers/hip/sdp_utils.cpp:515.) output = nn.functional.scaled_dot_product_attention(

A quick check in Python shows torch.backends.cuda.flash_sdp_enabled() returns True, but I found there is a ROCm flash attention patch pending merge into the PyTorch 2.3.0 nightly, with a reference to an earlier poorly performing PR.

pytorch/pytorch#121561

I'm not sure if that PR is the "memory efficient attention" fix, but I'm hopeful I'll see it merged into a nightly I can try 👍
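
For anyone hitting the same warning, the snippet below (a sketch using PyTorch's backend flags) prints which scaled dot product attention backends are enabled. Note these flags only say whether a backend is allowed, not whether a kernel was actually compiled for the device, which is why flash_sdp_enabled() can return True while the warning above still fires:

```python
import torch

# Report which SDPA backends PyTorch will consider. A backend being "enabled"
# does not guarantee a kernel exists for this device/build; if neither flash
# nor memory-efficient kernels are available, SDPA falls back to the math path.
print("flash:           ", torch.backends.cuda.flash_sdp_enabled())
print("memory efficient:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math:            ", torch.backends.cuda.math_sdp_enabled())
```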
