Releases: huggingface/accelerate
v1.8.1: Patchfix
- Add support for e5m2 and default to hybrid when launcher is used by @IlyasMoutawwakil in #3640
- shards by @SunMarc in #3645
Full Changelog: v1.8.0...v1.8.1
v1.8.0: FSDPv2 + FP8, Regional Compilation for DeepSpeed, Faster Distributed Training on Intel CPUs, ipex.optimize deprecation
FSDPv2 refactor + FP8 support
We've simplified how to prepare FSDPv2 models, as there were too many ways to compose FSDP2 with other features (e.g., FP8, torch.compile, activation checkpointing, etc.). Although the setup is now more restrictive, it leads to fewer errors and a more performant user experience. We’ve also added support for FP8. You can read about the results here. Thanks to @S1ro1 for this contribution!
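A minimal sketch of the simplified flow (assuming the torchao-backed AORecipeKwargs recipe introduced in v1.4.0; the benchmark linked above shows the exact configuration used):
import torch
from accelerate import Accelerator, FullyShardedDataParallelPlugin
from accelerate.utils import AORecipeKwargs

model = torch.nn.Linear(1024, 1024)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

fsdp_plugin = FullyShardedDataParallelPlugin(fsdp_version=2)
accelerator = Accelerator(
    mixed_precision="fp8",               # enable FP8
    fsdp_plugin=fsdp_plugin,             # FSDPv2 sharding
    kwargs_handlers=[AORecipeKwargs()],  # FP8 recipe (torchao backend)
)
model, optimizer = accelerator.prepare(model, optimizer)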
Faster Distributed Training on Intel CPUs
We updated the CCL_WORKER_COUNT variable and added KMP parameters for Intel CPU users. This significantly improves distributed training performance (e.g., Tensor Parallelism), with up to a 40% speed-up on Intel 4th Gen Xeon when training transformer TP models.
- Set ccl and KMP param in simple launch by @jiqing-feng in #3575
Regional Compilation for DeepSpeed
We added support for regional compilation with the DeepSpeed engine. DeepSpeed’s .compile() modifies models in-place using torch.nn.Module.compile(...), rather than the out-of-place torch.compile(...), so we had to account for that. Thanks @IlyasMoutawwakil for this feature!
- Fix deepspeed regional compilation by @IlyasMoutawwakil in #3609
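For context, here is a small plain-PyTorch illustration of the in-place vs. out-of-place distinction the fix accounts for (not DeepSpeed-specific):
import torch
import torch.nn as nn

model = nn.Linear(8, 8)

# Out-of-place: returns a new OptimizedModule that wraps the original module
compiled_model = torch.compile(model)

# In-place: the module compiles itself and keeps its original class,
# which is the behavior DeepSpeed's engine-level compile relies on
model.compile()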
ipex.optimize deprecation
ipex.optimize is being deprecated. Most optimizations have been upstreamed to PyTorch, and future improvements will land there directly. For users without PyTorch 2.8, we'll continue to rely on IPEX for now.
- remove ipex.optimize in accelerate by @yao-matrix in #3608
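As a rough migration sketch (hypothetical call sites; the exact code depends on your script), the explicit IPEX step simply goes away on recent PyTorch:
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(64, 64)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Before (IPEX): an explicit optimization step was applied for Intel CPUs, e.g.
#   import intel_extension_for_pytorch as ipex
#   model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
# After (PyTorch >= 2.8): those optimizations are upstreamed, so preparing
# through Accelerate is enough.
model, optimizer = accelerator.prepare(model, optimizer)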
Better XPU Support
We've greatly expanded and stabilized support for Intel XPUs:
- enable fsdp2 benchmark on XPU by @yao-matrix in #3590
- enable big_model_inference on xpu by @yao-matrix in #3595
- enable test_load_checkpoint_and_dispatch_with_broadcast cases on XPU by @yao-matrix in
- enable test_cli & test_example cases on XPU by @yao-matrix in #3578
- enable torchao and pippy test cases on XPU by @yao-matrix in #3599
- enable regional_compilation benchmark on xpu by @yao-matrix in #3592
- fix xpu 8bit value loading by @jiqing-feng in #3623
- add device-agnostic GradScaler by @yao-matrix in #3588
- add xpu support in TorchTensorParallelPlugin by @yao-matrix in #3627
Trackers
We've added support for SwanLab as an experiment tracking backend. Huge thanks to @ShaohonChen for this contribution! We also deferred all tracker initializations to prevent premature setup of distributed environments.
- Integrate SwanLab for offline/online experiment tracking for Accelerate by @ShaohonChen in #3605
- Fix: Defer Tracker Initialization to Prevent Premature Distributed Setup by @yuanjua in #3581
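A minimal usage sketch, assuming the tracker is selected with log_with="swanlab" like the other logging backends:
from accelerate import Accelerator

accelerator = Accelerator(log_with="swanlab")
accelerator.init_trackers(project_name="my_project", config={"lr": 3e-4})
accelerator.log({"train_loss": 0.42}, step=1)
accelerator.end_training()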
What's Changed
- Fix bf16 training with TP by @SunMarc in #3610
- better handle FP8 with and without deepspeed by @IlyasMoutawwakil in #3611
- Update Gaudi Runners by @IlyasMoutawwakil in #3593
- goodbye torch_ccl by @yao-matrix in #3580
- Add support for standalone mode when default port is occupied on single node by @laitifranz in #3576
- Resolve logger warnings by @emmanuel-ferdman in #3582
- Add kwargs to optimizer, scheduler and dataloader using function accelerator().load_state() by @luiz0992 in #3540
- [docs] no hard-coded cuda in the ddp documentation by @faaany in #3589
- change to use torch.device by @yao-matrix in #3594
- Fix: list object has no attribute keys by @S1ro1 in #3603
- Remove device_count for TPU launcher to avoid initializing runtime by @sorgfresser in #3587
- Fix missing te.LayerNorm in intel_transformer_engine by @IlyasMoutawwakil in #3619
- Add fp8_e5m2 support in dtype_byte_size by @SunMarc in #3625
- [Deepspeed] deepspeed auto grad accum by @kashif in #3630
- Remove hardcoded cuda from fsdpv2 by @IlyasMoutawwakil in #3631
- Integrate SwanLab for offline/online experiment tracking for Accelerate by @ShaohonChen in #3605
- Fix Typos in Documentation and Comments by @leopardracer in #3621
- feat: use datasets.IterableDataset shard if possible by @SunMarc in #3635
- [DeepSpeed] sync gradient accum steps from deepspeed plugin by @kashif in #3632
- Feat: add cpu offload by @S1ro1 in #3636
- Fix: correct labels for fsdp2 examples by @S1ro1 in #3637
- fix grad acc deepspeed by @SunMarc in #3638
New Contributors
- @laitifranz made their first contribution in #3576
- @emmanuel-ferdman made their first contribution in #3582
- @yuanjua made their first contribution in #3581
- @sorgfresser made their first contribution in #3587
- @ShaohonChen made their first contribution in #3605
- @leopardracer made their first contribution in #3621
Full Changelog: v1.7.0...v1.8.0
v1.7.0: Regional compilation, Layerwise casting hook, FSDPv2 + QLoRA
Regional compilation
Instead of compiling the entire model at once, regional compilation targets repeated blocks (such as decoder layers) first. This allows the compiler to cache and reuse optimized code for subsequent blocks, significantly reducing the cold start compilation time typically seen during the first inference. Thanks @IlyasMoutawwakil for the feature! You can view the full benchmark here, and check out our updated compilation guide for more details!
To enable this feature, set use_regional_compilation=True in the TorchDynamoPlugin configuration.
from accelerate import Accelerator
from accelerate.utils import TorchDynamoPlugin

# Configure the compilation backend
dynamo_plugin = TorchDynamoPlugin(
    use_regional_compilation=True,
    ...  # other parameters
)
# Initialize accelerator with the plugin
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)
# This will apply compile_regions to your model
model = accelerator.prepare(model)
Layerwise casting hook
We've introduced a new hook that enables per-layer upcasting and downcasting (e.g., for Linear layers) during inference. This allows users to run models with separate storage and compute dtypes, resulting in memory savings. The concept was first implemented in diffusers, where downcasting models to FP8 proved effective without major quality degradation. Contributed by @sayakpaul in #3427
import torch
from accelerate.hooks import attach_layerwise_casting_hooks

model = ...
storage_dtype = torch.float8_e4m3fn
compute_dtype = torch.bfloat16
attach_layerwise_casting_hooks(
    model,
    storage_dtype=storage_dtype,
    compute_dtype=compute_dtype,
)
Better FSDP2 support
This release includes numerous new features and bug fixes. Notably, we've added support for FULL_STATE_DICT, a widely used option in FSDP, now enabling .save_pretrained() in transformers to work with FSDP2-wrapped models. QLoRA training is now supported as well, but more testing is needed. We have also resolved a backend issue related to parameter offloading to CPU. Additionally, a significant memory spike that occurred when cpu_ram_efficient_loading=True was enabled has been fixed. Several other minor improvements and fixes are also included; see the What's Changed section for full details.
- FULL_STATE_DICT has been enabled by @S1ro1 in #3527
- QLoRA support by @winglian in #3546
- set backend correctly for CUDA+FSDP2+cpu-offload in #3574
- memory spike fixed when using cpu_ram_efficient_loading=True by @S1ro1 in #3482
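For example, a sketch of saving a transformers model trained with FSDP2 and FULL_STATE_DICT (hypothetical output path; the plugin's state_dict_type field is assumed to carry over from FSDP1):
from accelerate import Accelerator, FullyShardedDataParallelPlugin
from transformers import AutoModelForCausalLM

fsdp_plugin = FullyShardedDataParallelPlugin(
    fsdp_version=2,
    state_dict_type="FULL_STATE_DICT",  # gather full weights when saving
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)

model = AutoModelForCausalLM.from_pretrained("gpt2")
model = accelerator.prepare(model)
# ... training loop ...
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
    "outputs",
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
    state_dict=accelerator.get_state_dict(model),
)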
Better HPU support:
We have added documentation for Intel Gaudi hardware! Support has been available since v1.5.0 through this PR.
- Add the HPU into accelerate config by @yuanwu2017 in #3495
- Add Gaudi doc by @regisss in #3537
Torch.compile breaking change for dynamic argument
We've updated the logic for setting self.dynamic to explicitly preserve None rather than defaulting to False when the USE_DYNAMIC environment variable is unset. This change aligns the behavior with the PyTorch documentation for torch.compile. Thanks to @yafshar for contributing this improvement in #3567.
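In practice the change boils down to treating an unset variable as "let torch.compile decide" instead of forcing dynamic=False; a hedged sketch of the pattern (the real parsing lives inside TorchDynamoPlugin):
import os
from typing import Optional

def resolve_dynamic() -> Optional[bool]:
    value = os.environ.get("USE_DYNAMIC")
    if value is None:
        # Unset: preserve None so torch.compile keeps its automatic behavior
        return None
    return value.lower() in ("1", "true", "yes")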
What's Changed
- use device agnostic torch.OutOfMemoryError from pytorch 2.5.0 by @yao-matrix in #3475
- Adds style bot by @zach-huggingface in #3478
- Fix a tiny typo in low_precision_training guide by @sadra-barikbin in #3488
- Fix check_tied_parameters_in_config for multimodal models by @SunMarc in #3479
- Don't create new param for TorchAO sequential offloading due to weak BC guarantees by @a-r-r-o-w in #3444
- add support for custom function for reducing the batch size by @winglian in #3071
- Fix fp8 deepspeed config by @SunMarc in #3492
- fix warning error by @faaany in #3491
- [bug] unsafe_serialization option in "merge-weights" doesn't work by @cyr0930 in #3496
- Add the HPU into accelerate config by @yuanwu2017 in #3495
- Use torch.distributed.checkpoint.state_dict.set_model_state_dict in load_checkpoint_in_model by @ringohoffman in #3432
- nit: needed sanity checks for fsdp2 by @kmehant in #3499
- (Part 1) fix: make TP training compatible with new transformers by @kmehant in #3457
- Fix deepspeed tests by @S1ro1 in #3503
- Add FP8 runners + tweak building FP8 image by @zach-huggingface in #3493
- fix: apply torchfix to set weights_only=True by @bzhong-solink in #3497
- Fix: require transformers version for tp tests by @S1ro1 in #3504
- Remove deprecated PyTorch/XLA APIs by @zpcore in #3484
- Fix cache issue by upgrading github actions version by @SunMarc in #3513
- [Feat] Layerwise casting hook by @sayakpaul in #3427
- Add torchao to FP8 error message by @jphme in #3514
- Fix unwanted cuda init due to torchao by @SunMarc in #3530
- Solve link error in internal_mechanism documentation (#3506) by @alvaro-mazcu in #3507
- [FSDP2] Enable FULL_STATE_DICT by @S1ro1 in #3527
- [FSDP2] Fix memory spike with cpu_ram_efficient_loading=True by @S1ro1 in #3482
- [FSDP2] Issues in Wrap Policy and Mixed Precision by @jhliu17 in #3528
- Fix logic in accelerator.prepare + IPEX for 2+ nn.Models and/or optim.Optimizers by @mariusarvinte in #3517
- Update Docker builds to align with CI requirements by @matthewdouglas in #3532
- Fix CI due to missing package by @SunMarc in #3535
- Update big_modeling.md for layerwise casting by @sayakpaul in #3548
- [FSDP2] Fix: "..." is not a buffer or a paremeter by @S1ro1 in
- fix notebook_launcher for Colab TPU compatibility. by @BogdanDidenko in #3541
- Fix typos by @omahs in #3549
- Dynamo regional compilation by @IlyasMoutawwakil in #3529
- add support for port 0 auto-selection in multi-GPU environments by @hellobiondi in #3501
- Fix the issue where set_epoch does not take effect by @hongjx175 in #3556
- [FSDP2] Fix casting in _cast_and_contiguous by @dlvp in #3559
- [FSDP] Make env var and dataclass flag consistent for cpu_ram_efficient_loading by @SumanthRH in #3307
- canonicalize optimized names before fixing optimizer in fdsp2 by @pstjohn in #3560
- [docs] update deepspeed config path by @faaany in #3561
- preserve parameter keys when removing prefix by @mjkvaak-amd in #3564
- Add Gaudi doc by @regisss in #3537
- Update dynamic env handling to preserve None when USE_DYNAMIC is unset by @yafshar in #3567
- add a synchronize call for xpu in _gpu_gather by @faaany in #3563
- simplify model.to logic by @yao-matrix in #3562
- tune env command output by @yao-matrix in #3570
- Add regional compilation to cli tools and env vars by @IlyasMoutawwakil in #3572
- reenable FSDP2+qlora support by @winglian in #3546
- Fix prevent duplicate GPU usage in distributed processing by @ved1beta in #3526
- set backend correctly for CUDA+FSDP2+cpu-offload by @SunMarc in #3574
- enable test_dispatch_model_tied_weights_memory_with_nested_offload_cpu on xpu by @yao-matrix i...
v1.6.0: FSDPv2, DeepSpeed TP and XCCL backend support
FSDPv2 support
This release introduces support for FSDPv2, thanks to @S1ro1.
If you are using Python code, you need to set fsdp_version=2 in FullyShardedDataParallelPlugin:
from accelerate import FullyShardedDataParallelPlugin, Accelerator

fsdp_plugin = FullyShardedDataParallelPlugin(
    fsdp_version=2,
    # other options...
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
If you want to convert a YAML config that contains the FSDPv1 config to FSDPv2, use our conversion tool:
accelerate to-fsdp2 --config_file config.yaml --output_file new_config.yaml
To learn more about the difference between FSDPv1 and FSDPv2, read the following documentation.
DeepSpeed TP support
We have added initial support for DeepSpeed + TP. Not many changes were required as the DeepSpeed APIs were already compatible. We only needed to make sure that the dataloader was compatible with TP and that we were able to save the TP weights. Thanks @inkcherry for the work! #3390.
To use TP with DeepSpeed, you need to update the settings in the DeepSpeed config file by including the tensor_parallel key:
...
"tensor_parallel": {
    "autotp_size": ${autotp_size}
},
...
More details in this deepspeed PR.
Support for XCCL distributed backend
We've added support for XCCL, an Intel distributed backend that can be used with XPU devices. More details in this torch PR. Thanks @dvrogozh for the integration!
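A minimal sketch of initializing the backend directly in PyTorch (Accelerate normally selects it for you on XPU; the "xccl" backend string is assumed from the linked torch PR):
import torch
import torch.distributed as dist

# Requires a PyTorch build where the XCCL backend is registered for XPU devices
dist.init_process_group(backend="xccl")
torch.xpu.set_device(dist.get_rank() % torch.xpu.device_count())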
What's Changed
- Add log_artifact, log_artifacts and log_figure capabilities to the MLflowTracker by @luiz0992 in #3419
- tensor parallel dataloder for deepspeed accelerator by @inkcherry in #3390
- Fix prod issues by @muellerzr in #3441
- Fix attribute issue with deepspeed tp by @SunMarc in #3443
- Fixed typo in the multi node FSDP slurm example script by @JacobB33 in #3447
- feat: Add no_ssh and slurm multinode launcher options for deepspeed by @hsmallbone in #3329
- Fixup ao module filter func by @muellerzr in #3450
- remove device index workaround on xpu since xpu supports integer device index as cuda now by @yao-matrix in #3448
- enable 2 UT cases on XPU by @yao-matrix in #3445
- Fix AMD GPU support with should_reduce_batch_size() by @cameronshinn in #3405
- Fix device KeyError in tied_params_map by @dvrogozh in #3403
- Initial FSDP2 support by @S1ro1 in #3394
- Fix: clip grad norm in fsdp2 by @S1ro1 in #3465
- Update @ by @muellerzr in #3466
- Fix seeding of new generator for multi GPU by @albertcthomas in #3459
- Fix get_balanced_memory for MPS by @booxter in #3464
- Update CometMLTracker to allow re-using experiment by @Lothiraldan in #3328
- Apply ruff py39 fixes by @cyyever in #3461
- xpu: enable xccl distributed backend by @dvrogozh in #3401
- Update ruff target-version to py39 and apply more fixes by @cyyever in #3470
- [MLU] fix deepspeed dependency by @huismiling in #3472
- remove use_xpu to fix ut issues, we don't need this since XPU is OOB … by @yao-matrix in #3460
- Bump ruff to 0.11.2 by @cyyever in #3471
New Contributors
- @luiz0992 made their first contribution in #3419
- @inkcherry made their first contribution in #3390
- @JacobB33 made their first contribution in #3447
- @hsmallbone made their first contribution in #3329
- @yao-matrix made their first contribution in #3448
- @cameronshinn made their first contribution in #3405
- @S1ro1 made their first contribution in #3394
- @albertcthomas made their first contribution in #3459
- @booxter made their first contribution in #3464
- @Lothiraldan made their first contribution in #3328
- @cyyever made their first contribution in #3461
Full Changelog: v1.5.2...v1.6.0
Patch: v1.5.2
Bug Fixes:
- Fixed an issue with torch.get_default_device() requiring a higher version than what we support
- Fixed a broken pytest import in prod
Full Changelog: v1.5.0...v1.5.2
v1.5.0: HPU support
HPU Support
- Adds in HPU accelerator support for 🤗 Accelerate
What's Changed
- [bug] fix device index bug for model training loaded with bitsandbytes by @faaany in #3408
- [docs] add the missing import torch by @faaany in #3396
- minor doc fixes by @nbroad1881 in #3365
- fix: ensure CLI args take precedence over config file. by @cyr0930 in #3409
- fix: Add device=torch.get_default_device() in torch.Generators by @saforem2 in #3420
- Add Tecorigin SDAA accelerator support by @siqi654321 in #3330
- fix typo : thier -> their by @hackty in #3423
- Fix quality by @muellerzr in #3424
- Distributed inference example for llava_next by @VladOS95-cyber in #3417
- HPU support by @IlyasMoutawwakil in #3378
New Contributors
- @cyr0930 made their first contribution in #3409
- @saforem2 made their first contribution in #3420
- @siqi654321 made their first contribution in #3330
- @hackty made their first contribution in #3423
- @VladOS95-cyber made their first contribution in #3417
- @IlyasMoutawwakil made their first contribution in #3378
Full Changelog: v1.4.0...v1.5.0
v1.4.0: `torchao` FP8, TP & DataLoader support, fix memory leak
torchao FP8, initial Tensor Parallel support, and memory leak fixes
torchao FP8
This release introduces a new FP8 API and brings in a new backend: torchao. To use it, pass AORecipeKwargs to the Accelerator while setting mixed_precision="fp8". This is initial support; as it matures, we will incorporate more into it (such as accelerate config/yaml) in future releases. See our benchmark examples here.
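A minimal sketch of the new API (everything is passed in code for now, since the accelerate config/yaml wiring comes later):
import torch
from accelerate import Accelerator
from accelerate.utils import AORecipeKwargs

accelerator = Accelerator(
    mixed_precision="fp8",
    kwargs_handlers=[AORecipeKwargs()],  # selects the torchao FP8 backend
)
model = torch.nn.Linear(1024, 1024)
optimizer = torch.optim.AdamW(model.parameters())
model, optimizer = accelerator.prepare(model, optimizer)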
TensorParallel
We have initial support for an in-house solution to TP when working with accelerate dataloaders. Check out the PR here.
Bug fixes
- fix triton version check by @faaany in #3345
- fix torch_dtype in estimate memory by @SunMarc in #3383
- works for fp8 with deepspeed by @XiaobingSuper in #3361
- [memory leak] Replace GradientState -> DataLoader reference with weakrefs by @tomaarsen in #3391
What's Changed
- fix triton version check by @faaany in #3345
- [tests] enable BNB test cases in tests/test_quantization.py on XPU by @faaany in #3349
- [Dev] Update release directions by @muellerzr in #3352
- [tests] make cuda-only test work on other hardware accelerators by @faaany in #3302
- [tests] remove require_non_xpu test markers by @faaany in #3301
- Support more functionalities for MUSA backend by @fmo-mt in #3359
- [tests] enable more bnb tests on XPU by @faaany in #3350
- feat: support tensor parallel & Data loader by @kmehant in #3173
- DeepSpeed github repo move sync by @stas00 in #3376
- [tests] Fix bnb cpu error by @faaany in #3351
- fix torch_dtype in estimate memory by @SunMarc in #3383
- works for fp8 with deepspeed by @XiaobingSuper in #3361
- fix: typos in documentation files by @maximevtush in #3388
- [examples] upgrade code for seed setting by @faaany in #3387
- [memory leak] Replace GradientState -> DataLoader reference with weakrefs by @tomaarsen in #3391
- add xpu check in get_quantized_model_device_map by @faaany in #3397
- Torchao float8 training by @muellerzr in #3348
New Contributors
- @kmehant made their first contribution in #3173
- @XiaobingSuper made their first contribution in #3361
- @maximevtush made their first contribution in #3388
Full Changelog: v1.3.0...v1.4.0
v1.3.0: Bug fixes + Require torch 2.0
Torch 2.0
As it's been ~2 years since torch 2.0 was first released, we are now requiring it as the minimum version for Accelerate, as was similarly done in transformers as of its last release.
Core
- [docs] no hard-coding cuda by @faaany in #3270
- fix load_state_dict for npu by @ji-huazhong in #3211
- Add keep_torch_compile param to unwrap_model and extract_model_from_parallel for distributed compiled model by @ggoggam in #3282
- [tests] make cuda-only test case device-agnostic by @faaany in #3340
- latest bnb no longer has optim_args attribute on optimizer by @winglian in #3311
- add torchdata version check to avoid "in_order" error by @faaany in #3344
- [docs] fix typo, change "backoff_filter" to "backoff_factor" by @suchot in #3296
- dataloader: check that in_order is in kwargs before trying to drop it by @dvrogozh in #3346
- feat(tpu): remove nprocs from xla.spawn by @tengomucho in #3324
Big Modeling
- Fix test_nested_hook by @SunMarc in #3289
- correct the return statement of _init_infer_auto_device_map by @Nech-C in #3279
- Use torch.xpu.mem_get_info for XPU by @dvrogozh in #3275
- Ensure that tied parameter is children of module by @pablomlago in #3327
- Fix for offloading when using TorchAO >= 0.7.0 by @a-r-r-o-w in #3332
- Fix offload generate tests by @SunMarc in #3334
Examples
Full Changelog
What's Changed
- [docs] no hard-coding cuda by @faaany in #3270
- fix load_state_dict for npu by @ji-huazhong in #3211
- Fix test_nested_hook by @SunMarc in #3289
- correct the return statement of _init_infer_auto_device_map by @Nech-C in #3279
- Give example on how to handle gradient accumulation with cross-entropy by @ylacombe in #3193
- Use torch.xpu.mem_get_info for XPU by @dvrogozh in #3275
- Add keep_torch_compile param to unwrap_model and extract_model_from_parallel for distributed compiled model by @ggoggam in #3282
- Ensure that tied parameter is children of module by @pablomlago in #3327
- Bye bye torch <2 by @muellerzr in #3331
- Fixup docker build err by @muellerzr in #3333
- feat(tpu): remove nprocs from xla.spawn by @tengomucho in #3324
- Fix offload generate tests by @SunMarc in #3334
- [tests] make cuda-only test case device-agnostic by @faaany in #3340
- latest bnb no longer has optim_args attribute on optimizer by @winglian in #3311
- Fix for offloading when using TorchAO >= 0.7.0 by @a-r-r-o-w in #3332
- add torchdata version check to avoid "in_order" error by @faaany in #3344
- [docs] fix typo, change "backoff_filter" to "backoff_factor" by @suchot in #3296
- dataloader: check that in_order is in kwargs before trying to drop it by @dvrogozh in #3346
New Contributors
- @ylacombe made their first contribution in #3193
- @ggoggam made their first contribution in #3282
- @pablomlago made their first contribution in #3327
- @tengomucho made their first contribution in #3324
- @suchot made their first contribution in #3296
Full Changelog: v1.2.1...v1.3.0
v1.2.1: Patchfix
- fix: add max_memory to _init_infer_auto_device_map's return statement in #3279 by @Nech-C
- fix load_state_dict for npu in #3211 by @statelesshz
Full Changelog: v1.2.0...v1.2.1
v1.2.0: Bug Squashing & Fixes across the board
Core
- enable find_executable_batch_size on XPU by @faaany in #3236
- Use numpy._core instead of numpy.core by @qgallouedec in #3247
- Add warnings and fallback for unassigned devices in infer_auto_device_map by @Nech-C in #3066
- Allow for full dynamo config passed to Accelerator by @muellerzr in #3251
- [WIP] FEAT Decorator to purge accelerate env vars by @BenjaminBossan in #3252
- [data_loader] Optionally also propagate set_epoch to batch sampler by @tomaarsen in #3246
- use XPU instead of GPU in the accelerate config prompt text by @faaany in #3268
Big Modeling
- Fix align_module_device, ensure only cpu tensors for get_state_dict_offloaded_model by @kylesayrs in #3217
- Remove hook for bnb 4-bit by @SunMarc in #3223
- [docs] add instruction to install bnb on non-cuda devices by @faaany in #3227
- Take care of case when "_tied_weights_keys" is not an attribute by @fabianlim in #3226
- Update deferring_execution.md by @max-yue in #3262
- Revert default behavior of get_state_dict_from_offload by @kylesayrs in #3253
- Fix: Resolve #3060, preload_module_classes is lost for nested modules by @wejoncy in #3248
DeepSpeed
- Select the DeepSpeedCPUOptimizer based on the original optimizer class. by @eljandoubi in #3255
- support for wrapped schedulefree optimizer when using deepspeed by @winglian in #3266
Documentation
- Replaced set/check breakpoint with set/check trigger in the troubleshooting documentation by @relh in #3259
- Fixed multiple typos for Tutorials and Guides docs by @henryhmko in #3274
New Contributors
- @winglian made their first contribution in #3266
- @max-yue made their first contribution in #3262
- @as12138 made their first contribution in #3261
- @relh made their first contribution in #3259
- @wejoncy made their first contribution in #3248
- @henryhmko made their first contribution in #3274
Full Changelog
- Fix align_module_device, ensure only cpu tensors for get_state_dict_offloaded_model by @kylesayrs in #3217
- remove hook for bnb 4-bit by @SunMarc in #3223
- enable find_executable_batch_size on XPU by @faaany in #3236
- take care of case when "_tied_weights_keys" is not an attribute by @fabianlim in #3226
- [docs] update code in tracking documentation by @faaany in #3235
- Add warnings and fallback for unassigned devices in infer_auto_device_map by @Nech-C in #3066
- [data_loader] Optionally also propagate set_epoch to batch sampler by @tomaarsen in #3246
- [docs] add instruction to install bnb on non-cuda devices by @faaany in #3227
- Use numpy._core instead of numpy.core by @qgallouedec in #3247
- Allow for full dynamo config passed to Accelerator by @muellerzr in #3251
- [WIP] FEAT Decorator to purge accelerate env vars by @BenjaminBossan in #3252
- use XPU instead of GPU in the accelerate config prompt text by @faaany in #3268
- support for wrapped schedulefree optimizer when using deepspeed by @winglian in #3266
- Update deferring_execution.md by @max-yue in #3262
- Fix: Resolve #3257 by @as12138 in #3261
- Replaced set/check breakpoint with set/check trigger in the troubleshooting documentation by @relh in #3259
- Select the DeepSpeedCPUOptimizer based on the original optimizer class. by @eljandoubi in #3255
- Revert default behavior of get_state_dict_from_offload by @kylesayrs in #3253
- Fix: Resolve #3060, preload_module_classes is lost for nested modules by @wejoncy in #3248
- [docs] update set-seed by @faaany in #3228
- [docs] fix typo by @faaany in #3221
- [docs] use real path for checkpoint by @faaany in #3220
- Fixed multiple typos for Tutorials and Guides docs by @henryhmko in #3274
Code Diff
Release diff: v1.1.1...v1.2.0