Releases · huggingface/accelerate · GitHub

Releases: huggingface/accelerate

v1.8.1: Patchfix

20 Jun 15:43

Full Changelog: v1.8.0...v1.8.1

v1.8.0: FSDPv2 + FP8, Regional Compilation for DeepSpeed, Faster Distributed Training on Intel CPUs, ipex.optimize deprecation

19 Jun 15:37

FSDPv2 refactor + FP8 support

We've simplified how to prepare FSDPv2 models, as there were too many ways to compose FSDP2 with other features (e.g., FP8, torch.compile, activation checkpointing, etc.). Although the setup is now more restrictive, it leads to fewer errors and a more performant user experience. We’ve also added support for FP8. You can read about the results here. Thanks to @S1ro1 for this contribution!
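
For orientation, here is a minimal sketch of how the two compose under the new setup. It only uses the public entry points shown elsewhere in these notes (FullyShardedDataParallelPlugin with fsdp_version=2 and mixed_precision="fp8"); the FP8 recipe details for your hardware may differ.

import torch
from accelerate import Accelerator, FullyShardedDataParallelPlugin

# Sketch only: compose FSDP2 with FP8 mixed precision.
fsdp_plugin = FullyShardedDataParallelPlugin(fsdp_version=2)
accelerator = Accelerator(mixed_precision="fp8", fsdp_plugin=fsdp_plugin)

model = torch.nn.Linear(4096, 4096)
optimizer = torch.optim.AdamW(model.parameters())
model, optimizer = accelerator.prepare(model, optimizer)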

Faster Distributed Training on Intel CPUs

We updated the CCL_WORKER_COUNT variable and added KMP parameters for Intel CPU users. This significantly improves distributed training performance (e.g., Tensor Parallelism), with up to a 40% speed-up on Intel 4th Gen Xeon when training transformer TP models.
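
For reference, the kind of environment tuning involved looks roughly like the following. The variable names are standard oneCCL/OpenMP knobs, but the values shown here are illustrative assumptions, not the defaults Accelerate now ships.

import os

# Illustrative values only -- accelerate launch sets its own tuned defaults.
# These must be in the environment before the distributed workers start.
os.environ.setdefault("CCL_WORKER_COUNT", "1")    # oneCCL communication workers per rank
os.environ.setdefault("KMP_BLOCKTIME", "1")       # idle OpenMP threads sleep quickly
os.environ.setdefault("KMP_AFFINITY", "granularity=fine,compact,1,0")  # pin threads to cores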

Regional Compilation for DeepSpeed

We added support for regional compilation with the DeepSpeed engine. DeepSpeed’s .compile() modifies models in-place using torch.nn.Module.compile(...), rather than the out-of-place torch.compile(...), so we had to account for that. Thanks @IlyasMoutawwakil for this feature!
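
For context, the two compilation entry points behave differently in plain PyTorch:

import torch

model = torch.nn.Linear(8, 8)

# Out-of-place: returns a new OptimizedModule wrapping the original module.
compiled = torch.compile(model)

# In-place: the module itself is modified and nothing new is returned;
# this is the variant DeepSpeed's engine relies on, so regional compilation
# has to hook in differently.
model.compile()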

ipex.optimize deprecation

ipex.optimize is being deprecated. Most optimizations have been upstreamed to PyTorch, and future improvements will land there directly. For users without PyTorch 2.8, we’ll continue to rely on IPEX for now.
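
A rough sketch of the resulting fallback logic. The 2.8 version gate comes from the note above; the explicit check here is illustrative, since Accelerate applies the equivalent gate internally.

import torch
from packaging import version

model = torch.nn.Linear(16, 16)

if version.parse(torch.__version__) < version.parse("2.8.0"):
    # Older PyTorch: fall back to IPEX's optimizations.
    import intel_extension_for_pytorch as ipex
    model = ipex.optimize(model)
# On PyTorch >= 2.8 the relevant optimizations live upstream,
# so nothing extra is needed.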

Better XPU Support

We've greatly expanded and stabilized support for Intel XPUs; see the What's Changed section for the full list of fixes.

Trackers

We've added support for SwanLab as an experiment tracking backend. Huge thanks to @ShaohonChen for this contribution! We also deferred all tracker initializations to prevent premature setup of distributed environments.

  • Integrate SwanLab for offline/online experiment tracking for Accelerate by @ShaohonChen in #3605
  • Fix: Defer Tracker Initialization to Prevent Premature Distributed Setup by @yuanjua in #3581
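
A minimal usage sketch, assuming SwanLab is selected through the usual log_with tracker interface:

from accelerate import Accelerator

# Sketch: standard tracker pattern with SwanLab as the backend.
accelerator = Accelerator(log_with="swanlab")
accelerator.init_trackers(project_name="my_project", config={"lr": 3e-4})
accelerator.log({"train_loss": 0.42}, step=1)
accelerator.end_training()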

What's Changed

New Contributors

Full Changelog: v1.7.0...v1.8.0

v1.7.0: Regional compilation, Layerwise casting hook, FSDPv2 + QLoRA

15 May 12:33

Regional compilation

Instead of compiling the entire model at once, regional compilation targets repeated blocks (such as decoder layers) first. This allows the compiler to cache and reuse optimized code for subsequent blocks, significantly reducing the cold-start compilation time typically seen during the first inference. Thanks @IlyasMoutawwakil for the feature! You can view the full benchmark here, and check out our updated compilation guide for more details!

[Figure: compilation time benchmark, regional vs. full-model compilation]

To enable this feature, set use_regional_compilation=True in the TorchDynamoPlugin configuration.

from accelerate import Accelerator
from accelerate.utils import TorchDynamoPlugin

# Configure the compilation backend
dynamo_plugin = TorchDynamoPlugin(
    use_regional_compilation=True,
    # ... other parameters
)
# Initialize the accelerator with the plugin
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)
# This will apply compile_regions to your model
model = accelerator.prepare(model)

Layerwise casting hook

We've introduced a new hook that enables per-layer upcasting and downcasting (e.g., for Linear layers) during inference. This allows users to run models with separate storage and compute dtypes, resulting in memory savings. The concept was first implemented in diffusers, where downcasting models to FP8 proved effective without major quality degradation. Contributed by @sayakpaul in #3427

import torch
from accelerate.hooks import attach_layerwise_casting_hooks

model = ...  # your torch.nn.Module
storage_dtype = torch.float8_e4m3fn
compute_dtype = torch.bfloat16
attach_layerwise_casting_hooks(
    model,
    storage_dtype=storage_dtype,
    compute_dtype=compute_dtype,
)

Better FSDP2 support

This release includes numerous new features and bug fixes. Notably, we've added support for FULL_STATE_DICT, a widely used option in FSDP, which now enables .save_pretrained() in transformers to work with FSDP2-wrapped models (see the sketch after the list below). QLoRA training is now supported as well, but more testing is needed. We have also resolved a backend issue related to parameter offloading to CPU, and fixed a significant memory spike that occurred when cpu_ram_efficient_loading=True was enabled. Several other minor improvements and fixes are included; see the What's Changed section for full details.

  • FULL_STATE_DICT has been enabled by @S1ro1 in #3527
  • QLoRA support by @winglian in #3546
  • set backend correctly for CUDA+FSDP2+cpu-offload in #3574
  • memory spike fixed when using cpu_ram_efficient_loading=True by @S1ro1 in #3482
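
A sketch of the FULL_STATE_DICT flow. It assumes the plugin's state_dict_type option applies to FSDP2 the same way it does to FSDP1, and the save_pretrained call follows the standard Accelerate checkpointing pattern.

from accelerate import Accelerator, FullyShardedDataParallelPlugin
from transformers import AutoModelForCausalLM

# Gather a full state dict so transformers' save_pretrained works on the
# FSDP2-wrapped model.
fsdp_plugin = FullyShardedDataParallelPlugin(
    fsdp_version=2,
    state_dict_type="FULL_STATE_DICT",
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
model = accelerator.prepare(AutoModelForCausalLM.from_pretrained("gpt2"))

# ... training loop ...

accelerator.unwrap_model(model).save_pretrained(
    "checkpoint",
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
    state_dict=accelerator.get_state_dict(model),
)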

Better HPU support:

We have added documentation for Intel Gaudi hardware!
Support has been available since v1.5.0 through this PR.

torch.compile breaking change for the dynamic argument

We've updated the logic for setting self.dynamic to explicitly preserve None rather than defaulting to False when the USE_DYNAMIC environment variable is unset. This change aligns the behavior with the PyTorch documentation for torch.compile. Thanks to @yafshar for contributing this improvement in #3567.
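
For reference, torch.compile treats the two values differently, which is why preserving None matters:

import torch

model = torch.nn.Linear(8, 8)

# dynamic=None (torch.compile's default): start with static shapes and
# automatically switch to dynamic shapes when recompilation from varying
# input sizes is detected.
auto = torch.compile(model, dynamic=None)

# dynamic=False: always specialize on the observed static shapes, which can
# mean one recompile per new input shape.
static = torch.compile(model, dynamic=False)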

What's Changed

Read more

v1.6.0: FSDPv2, DeepSpeed TP and XCCL backend support

01 Apr 13:48

FSDPv2 support

This release introduces support for FSDPv2, thanks to @S1ro1.

If you are using Python code, you need to set fsdp_version=2 in FullyShardedDataParallelPlugin:

from accelerate import FullyShardedDataParallelPlugin, Accelerator

fsdp_plugin = FullyShardedDataParallelPlugin(
    fsdp_version=2,
    # other options...
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)

If you want to convert a YAML file containing an FSDPv1 config to FSDPv2, use our conversion tool:

accelerate to-fsdp2 --config_file config.yaml --output_file new_config.yaml

To learn more about the difference between FSDPv1 and FSDPv2, read the following documentation.

DeepSpeed TP support

We have added initial support for DeepSpeed + TP. Not many changes were required, as the DeepSpeed API was already compatible; we only needed to make sure that the dataloader was compatible with TP and that we were able to save the TP weights. Thanks @inkcherry for the work! #3390.

To use TP with DeepSpeed, update the DeepSpeed config file to include a tensor_parallel key:

    ...
    "tensor_parallel": {
        "autotp_size": ${autotp_size}
    },
    ...

More details in this DeepSpeed PR.

Support for XCCL distributed backend

We've added support for XCCL, an Intel distributed backend that can be used with XPU devices. More details in this torch PR. Thanks @dvrogozh for the integration!
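
For context, selecting this backend in plain PyTorch looks roughly as follows; Accelerate normally picks the backend for you, so the explicit call is illustrative.

import torch
import torch.distributed as dist

# Assumes a PyTorch build with XCCL support and an available XPU device.
if torch.xpu.is_available():
    dist.init_process_group(backend="xccl")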

What's Changed

New Contributors

Full Changelog: v1.5.2...v1.6.0

Patch: v1.5.2

14 Mar 14:16

Bug Fixes:

  • Fixed an issue with torch.get_default_device() requiring a higher version than what we support
  • Fixed a broken pytest import in prod

Full Changelog: v1.5.0...v1.5.2

v1.5.0: HPU support

12 Mar 14:18

HPU Support

  • Adds HPU accelerator support to 🤗 Accelerate

What's Changed

New Contributors

Full Changelog: v1.4.0...v1.5.0

v1.4.0: `torchao` FP8, TP & DataLoader support, memory leak fix

17 Feb 17:18

torchao FP8, initial Tensor Parallel support, and memory leak fixes

torchao FP8

This release introduces a new FP8 API and brings in a new backend: torchao. To use it, pass AORecipeKwargs to the Accelerator while setting mixed_precision="fp8". This is initial support; as it matures, we will incorporate more into it (such as accelerate config/YAML support) in future releases. See our benchmark examples here.
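
A minimal sketch of that pattern, with the recipe left at its defaults (the import path for AORecipeKwargs is assumed to follow the other kwargs handlers in accelerate.utils):

import torch
from accelerate import Accelerator
from accelerate.utils import AORecipeKwargs

# FP8 training through the torchao backend: request fp8 mixed precision and
# pass the (default) AO recipe via kwargs_handlers.
accelerator = Accelerator(
    mixed_precision="fp8",
    kwargs_handlers=[AORecipeKwargs()],
)

model = torch.nn.Linear(4096, 4096)
optimizer = torch.optim.AdamW(model.parameters())
model, optimizer = accelerator.prepare(model, optimizer)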

TensorParallel

We have initial support for an in-house solution for TP when working with accelerate dataloaders. Check out the PR here.

Bug fixes

What's Changed

New Contributors

Full Changelog: v1.3.0...v1.4.0

v1.3.0: Bug fixes + Require torch 2.0

17 Jan 15:56

Torch 2.0

As it's been ~2 years since torch 2.0 was first released, we are now requiring it as the minimum version for Accelerate, mirroring what was done in transformers as of its latest release.

Core

  • [docs] no hard-coding cuda by @faaany in #3270
  • fix load_state_dict for npu by @ji-huazhong in #3211
  • Add keep_torch_compile param to unwrap_model and extract_model_from_parallel for distributed compiled model. by @ggoggam in #3282
  • [tests] make cuda-only test case device-agnostic by @faaany in #3340
  • latest bnb no longer has optim_args attribute on optimizer by @winglian in #3311
  • add torchdata version check to avoid "in_order" error by @faaany in #3344
  • [docs] fix typo, change "backoff_filter" to "backoff_factor" by @suchot in #3296
  • dataloader: check that in_order is in kwargs before trying to drop it by @dvrogozh in #3346
  • feat(tpu): remove nprocs from xla.spawn by @tengomucho in #3324

Big Modeling

Examples

  • Give example on how to handle gradient accumulation with cross-entropy by @ylacombe in #3193

Full Changelog

What's Changed

New Contributors

Full Changelog: v1.2.1...v1.3.0

v1.2.1: Patchfix

13 Dec 18:56
  • fix: add max_memory to _init_infer_auto_device_map's return statement in #3279 by @Nech-C
  • fix load_state_dict for npu in #3211 by @statelesshz

Full Changelog: v1.2.0...v1.2.1

v1.2.0: Bug Squashing & Fixes across the board

13 Dec 18:47

Core

  • enable find_executable_batch_size on XPU by @faaany in #3236
  • Use numpy._core instead of numpy.core by @qgallouedec in #3247
  • Add warnings and fallback for unassigned devices in infer_auto_device_map by @Nech-C in #3066
  • Allow for full dynamo config passed to Accelerator by @muellerzr in #3251
  • [WIP] FEAT Decorator to purge accelerate env vars by @BenjaminBossan in #3252
  • [data_loader] Optionally also propagate set_epoch to batch sampler by @tomaarsen in #3246
  • use XPU instead of GPU in the accelerate config prompt text by @faaany in #3268

Big Modeling

  • Fix align_module_device, ensure only cpu tensors for get_state_dict_offloaded_model by @kylesayrs in #3217
  • Remove hook for bnb 4-bit by @SunMarc in #3223
  • [docs] add instruction to install bnb on non-cuda devices by @faaany in #3227
  • Take care of case when "_tied_weights_keys" is not an attribute by @fabianlim in #3226
  • Update deferring_execution.md by @max-yue in #3262
  • Revert default behavior of get_state_dict_from_offload by @kylesayrs in #3253
  • Fix: Resolve #3060, preload_module_classes is lost for nested modules by @wejoncy in #3248

DeepSpeed

  • Select the DeepSpeedCPUOptimizer based on the original optimizer class. by @eljandoubi in #3255
  • support for wrapped schedulefree optimizer when using deepspeed by @winglian in #3266

Documentation

New Contributors

Full Changelog

  • Fix align_module_device, ensure only cpu tensors for get_state_dict_offloaded_model by @kylesayrs in #3217
  • remove hook for bnb 4-bit by @SunMarc in #3223
  • enable find_executable_batch_size on XPU by @faaany in #3236
  • take care of case when "_tied_weights_keys" is not an attribute by @fabianlim in #3226
  • [docs] update code in tracking documentation by @faaany in #3235
  • Add warnings and fallback for unassigned devices in infer_auto_device_map by @Nech-C in #3066
  • [data_loader] Optionally also propagate set_epoch to batch sampler by @tomaarsen in #3246
  • [docs] add instruction to install bnb on non-cuda devices by @faaany in #3227
  • Use numpy._core instead of numpy.core by @qgallouedec in #3247
  • Allow for full dynamo config passed to Accelerator by @muellerzr in #3251
  • [WIP] FEAT Decorator to purge accelerate env vars by @BenjaminBossan in #3252
  • use XPU instead of GPU in the accelerate config prompt text by @faaany in #3268
  • support for wrapped schedulefree optimizer when using deepspeed by @winglian in #3266
  • Update deferring_execution.md by @max-yue in #3262
  • Fix: Resolve #3257 by @as12138 in #3261
  • Replaced set/check breakpoint with set/check trigger in the troubleshooting documentation by @relh in #3259
  • Select the DeepSpeedCPUOptimizer based on the original optimizer class. by @eljandoubi in #3255
  • Revert default behavior of get_state_dict_from_offload by @kylesayrs in #3253
  • Fix: Resolve #3060, preload_module_classes is lost for nested modules by @wejoncy in #3248
  • [docs] update set-seed by @faaany in #3228
  • [docs] fix typo by @faaany in #3221
  • [docs] use real path for checkpoint by @faaany in #3220
  • Fixed multiple typos for Tutorials and Guides docs by @henryhmko in #3274

Code Diff

Release diff: v1.1.1...v1.2.0
