Comparing changes
base repository: openvla/openvla
base: main
head repository: moojink/openvla-oft
compare: main
- 19 commits
- 48 files changed
- 1 contributor
Commits on Feb 28, 2025
- 3758245: First commit for paper: "Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success" (Moo Jin Kim, Chelsea Finn, Percy Liang). Website: https://openvla-oft.github.io/
- 9bccb2a
Commits on Mar 1, 2025
- 40d6c89: Update finetune.py: sync model files on the master process only, preventing race conditions in multi-GPU training runs.
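The rank-0-only save pattern this commit describes can be sketched as follows. This is an illustrative sketch assuming a torchrun-style launch (which exports a per-process `RANK` environment variable); the function names are hypothetical, not the repo's actual `finetune.py` API:

```python
import json
import os


def get_rank() -> int:
    """Process rank in a distributed run; torchrun sets RANK per process.

    Defaults to 0 so a single-GPU run behaves like the master process.
    """
    return int(os.environ.get("RANK", "0"))


def save_checkpoint_master_only(state: dict, path: str) -> bool:
    """Write the checkpoint only on rank 0; other ranks skip the write.

    If every rank wrote the same files, the concurrent writes could race
    and corrupt the checkpoint; gating on rank 0 avoids that.
    """
    if get_rank() != 0:
        return False  # non-master ranks do nothing
    with open(path, "w") as f:
        json.dump(state, f)
    return True
```

In real distributed PyTorch code, such a save is typically followed by a `torch.distributed.barrier()` so the other ranks wait until the files exist before proceeding.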
- a45877f
Commits on Mar 7, 2025
- 588f9f9
- 2fde5e9
- 0b75f6f: Update openvla_utils.py: add torch.inference_mode() to get_vla_action(); also update ALOHA VRAM usage in README.md.
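For context, `torch.inference_mode()` is a stricter, faster variant of `torch.no_grad()`: tensors created inside it skip autograd tracking and version counting, which cuts overhead and memory during action prediction. A minimal sketch of the pattern (the `get_vla_action` signature and the toy policy here are illustrative, not the repo's actual code):

```python
import torch


@torch.inference_mode()
def get_vla_action(model: torch.nn.Module, obs: torch.Tensor) -> torch.Tensor:
    """Run the policy forward pass with autograd fully disabled."""
    return model(obs)


# Toy stand-in for the policy, just to show the call shape.
policy = torch.nn.Linear(4, 2)
action = get_vla_action(policy, torch.zeros(1, 4))
```

Tensors produced this way are "inference tensors": they carry no gradient history and cannot be used later in an autograd graph, which is exactly what an eval-only action query wants.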
- 7f9efa6: Update README: update ALOHA VRAM usage to 18 GB (the previous commit was supposed to do this).
Commits on Mar 8, 2025
- d8bf499
- b9d7702: Update ALOHA.md: remove comment about Python 3.8.10 (Python 3.10 actually works for the client conda environment).
Commits on Mar 9, 2025
- edc1f9e
Commits on Mar 26, 2025
- 1fe43cf
Commits on Apr 28, 2025
- 72f94a8
- d282364
- f9bc894: Remove old base repo Bridge eval code for now, to avoid confusion; it will be added back later with OpenVLA-OFT integration.
- 43f33b9
- 94d345f: Fix bug: diffusion inference with fewer steps at test time than at train time. This only affects evals with the OpenVLA-OFT diffusion variant when the number of denoising steps at test time differs from the number specified at train time (e.g., fewer denoising steps for faster inference).
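The usual fix for this class of bug is to re-derive the inference schedule from the training one instead of reusing it as-is: when the test-time step count is smaller, an evenly strided subset of the training timesteps is selected, DDIM-style. A hypothetical sketch of that subsampling, not the repo's actual implementation:

```python
def subsample_timesteps(num_train_steps: int, num_test_steps: int) -> list[int]:
    """Pick an evenly strided subset of the training timesteps, highest first,
    so a model trained with num_train_steps can denoise in num_test_steps."""
    if not 1 <= num_test_steps <= num_train_steps:
        raise ValueError("num_test_steps must be in [1, num_train_steps]")
    stride = num_train_steps / num_test_steps
    # Stride through [0, num_train_steps), then reverse: denoising runs
    # from the noisiest timestep down to 0.
    return [round(i * stride) for i in range(num_test_steps)][::-1]
```

With equal train- and test-time counts the stride is 1 and the full reversed schedule is recovered, so the fast path degenerates to the original behavior.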
- ae22da4: Add explicit lora_rank arg to deploy scripts, in case the user changed it from the default value (32) during training; the lora_rank at test time must match the value used at train time.
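A sketch of what the added flag plausibly looks like on a deploy script's argument parser; the argument name and default of 32 come from the commit message, while the parser itself is illustrative:

```python
import argparse


def build_deploy_parser() -> argparse.ArgumentParser:
    """Illustrative deploy-script argument parser."""
    parser = argparse.ArgumentParser(description="Deploy a fine-tuned policy")
    parser.add_argument(
        "--lora_rank",
        type=int,
        default=32,
        help="LoRA rank; must match the rank used during fine-tuning",
    )
    return parser
```

The match requirement exists because a LoRA adapter trained with rank r stores low-rank matrices whose shapes depend on r, so loading it into a model configured for a different rank fails with a shape mismatch.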
- 213c5ff