Releases: pytorch/tensordict
v0.8.3: Better CudaGraphModule
This minor release provides some fixes to `CudaGraphModule`, allowing the module to run on devices other than the default one.
It also adds `__copy__` to the TensorDict ops, such that `copy(td)` triggers `td.copy()`, resulting in a copy of the TD structure without new memory allocation.
Full Changelog: v0.8.2...v0.8.3
v0.8.2: Fix memory leakage due to validate
This release fixes an apparent memory leak due to the value validation in tensordict. The leak is only apparent, in that it disappears when `gc.collect()` is invoked.
See #1309 for context.
Full Changelog: v0.8.1...v0.8.2
Minor fix: Statically link _C extension against the Python library
This minor release fixes the `_C` build pipeline, which was failing on some machines because the extension was built with dynamic linkage against `libpython`.
v0.8.0: Non-tensor data handling
What's Changed
We're excited to announce a new `tensordict` release, packed with new features and packaging perks as well as bug fixes.
New features
The interaction with non-tensor data is now much easier:
```python
from tensordict import TensorDict, set_list_to_stack

set_list_to_stack(True).set()  # ask for the new behaviour
td = TensorDict(batch_size=(3, 2))
td["numbers"] = [["0", "1"], ["2", "3"], ["4", "5"]]
print(td)
# TensorDict(
#     fields={
#         numbers: NonTensorStack(
#             [['0', '1'], ['2', '3'], ['4', '5']],
#             batch_size=torch.Size([3, 2]),
#             device=None)},
#     batch_size=torch.Size([3, 2]),
#     device=None,
#     is_shared=False)
```
Stacks of non-tensor data can also be reshaped (58ccbf5). Using the previous example:

```python
td = td.view(-1)
td["numbers"]
# ['0', '1', '2', '3', '4', '5']
```
We also made it easier to get values of lazy stacks (f7bc839):

```python
import torch
from tensordict import TensorDict, lazy_stack

tds = [TensorDict(a=torch.zeros(3)), TensorDict(a=torch.ones(2))]
td = lazy_stack(tds)
print(td.get("a", as_list=True))
# [tensor([0., 0., 0.]), tensor([1., 1.])]
print(td.get("a", as_nested_tensor=True))
# NestedTensor(size=(2, j1), offsets=tensor([0, 3, 5]), contiguous=True)
print(td.get("a", as_padded_tensor=True, padding_value=-1))
# tensor([[ 0.,  0.,  0.],
#         [ 1.,  1., -1.]])
```
Packaging
You can now install tensordict with any PyTorch version. We only provide test coverage for the latest PyTorch (currently 2.7.0), so for any other version you will be on your own in terms of compatibility, but there should be no limitation on installing the library with older versions of PyTorch.
New features
- [Feature] Add init value option to AddStateIndependentNormalScale (#1259) 0dc90b2 @lin-erica
- [Feature] Allow neg dims during LazyStack 539651f @vmoens
- [Feature] Allow update to increment the number of tds in a lazy stack 3ec3be7 @vmoens
- [Feature] Deepcopy acd8f57 @vmoens
- [Feature] NotImplemented for abstractmethods in base.py to avoid empty calls 37e3454 @vmoens
- [Feature] TensorDict.tolist() (#1229) a3493de @vmoens
- [Feature] TensorDictModule method and kwargs specification bbd5ba8 @vmoens
- [Feature] Update batch-size 635d036 @vmoens
- [Feature] `strict` kwarg in TDModule 06215b6 @vmoens
- [Feature] as_list, as_padded_tensor and as_nested_tensor for Lazy stacks f7bc839 @vmoens
- [Feature] as_tensordict_module 9738142 @vmoens
- [Feature] capture_non_tensor_stack 7b0fd93 @vmoens
- [Feature] from_dataclass with dest_cls arg fced90d @vmoens
- [Feature] inplace arg for TensorDictSequential 55fab2a @vmoens
- [Feature] inplace kwarg for Probabilistic TD Sequential ab1b532 @vmoens
- [Feature] view, flatten and unflatten for lazy stacks 58ccbf5 @vmoens
Bug Fixes
- [BugFix] *_like functions with dtypes 947cb19 @vmoens
- [BugFix] Better check for TDModule bf9007c @vmoens
- [BugFix] Better list assignment in tensorclasses 6d8119c @vmoens
- [BugFix] Better prop of args in update c61d045 @vmoens
- [BugFix] Consolidate lazy stacks of non-tensors 0b901a7 @vmoens
- [BugFix] Consolidate lazy stacks of non-tensors f67a15c @vmoens
- [BugFix] Enforce `zip(..., strict=True)` in TDModules 0a8638d @vmoens
- [BugFix] Ensure that maybe_dense_stack preserves the TC type eb2fd8e @vmoens
- [BugFix] Faster and safer non-tensor stack e23ce5c @vmoens
- [BugFix] Fix .item() warning on tensors that require grad 910c953 @vmoens
- [BugFix] Fix MISSING check in tensorclass (#1275) cc0f464 @vmoens*
- [BugFix] Fix TDParams compatibility with export ecdde0b @vmoens
- [BugFix] Fix compile during _check_keys 2ad9f95 @vmoens
- [BugFix] Fix get for nestedkeys with default in tensorclass 4e1c646 @vmoens
- [BugFix] Fix non-deterministic key order in stack (#1230) c35d7aa @vmoens
- [BugFix] Fix rsub 4b77779 @vmoens
- [BugFix] Fix serialization of stacks of Tensorclasses 635c9c0 @vmoens
- [BugFix] Fix tensorclass indexing 0fccd26 @vmoens
- [BugFix] Fix tensorclass update cb81b5e @vmoens
- [BugFix] Fix tensordict.get in TDModule tensor retrieval for NonTensorData feb736f @vmoens
- [BugFix] Local cuda assessment 5b8bce8 @vmoens
- [BugFix] Method _is_list_tensor_compatible missing return value (#1277) a9cc632 @albertbou92
- [BugFix] Pass type directly during reduction ad0a8dd @vmoens
- [BugFix] Proper auto-batch size for unbatched tensors ba53d07 @vmoens
- [BugFix] _PASSTHROUGH_MEMO for passthrough tensorclass d25bd54 @vmoens
- [BugFix] `none` value for _CAPTURE_NONTENSOR_STACK 2f60f8d @vmoens
- [BugFix] fix get for tensorclass with lazy stacks 36a49ad @vmoens
- [BugFix] fix replacing nested keys in rename_key_ 06fd756 @vmoens
- [BugFix] num_samples=None fix be88a17 @vmoens
Performance
- [Performance] Dedicated validate funcs 604b471 @vmoens
- [Performance] Faster `_get_item` 1e33a18 @vmoens
- [Performance] tensor_only for tensorclass d4bc34c @vmoens
Miscellaneous
- [BE] Better errors for TensorDictSequential 28fbea1 @vmoens
- [CI,Setup] Fix CI and bump python3.13 (#1260) 57ca3d4 @vmoens*
- [CI] Fix 3.13t wheels 8774e73 @vmoens
- [CI] python 3.13 nightly build 0eb2ad3 @vmoens
- [Deprecation] Deprecate KJT support 63730af @vmoens
- [Deprecation] Enact deprecations 3d2e6d2 @vmoens
- [Deprecation] Softly deprecate extra-tensors wrt out_keys 7dd385b @vmoens
- [Doc,BugFix] Fix default argument in doc for `AddStateIndependentNormalScale` (#1216) 309c0a3 Faury Louis*
- [Doc] Better doc for TensorDictModuleBase b8d3ff9 @vmoens
- [Doc] Fix typo in pad_sequence docstring example (#1262) a121330 @codingWhale13
- [Minor] Expose is_leaf_nontensor and default_is_leaf 8bdbf01 @vmoens
- [Minor] Fix tensorclass prints 9eba0c1 @vmoens
- [Refactor] Setting lists in tensordict instances 6ad496b @vmoens
- [Setup] Fix setuptools error 4018b8b @vmoens
- [Setup] Parametrize PyTorch version with env variable for local builds (#1245) b95c382 @dvmazur
- [Test] Deactivate failing test on 2.6 2df09c9 @vmoens
- [Test] Test dataclasses.field as tensorclass default e8294da @vmoens
- [Versioning] Bump v0.8.0 37afe37 @vmoens
New Contributors
- @dvmazur made their first contribution in #1245
- @lin-erica made their first contribution in #1259
- @codingWhale13 made their first contribution in #1262
Full Changelog: v0.7.0...v0.8.0
v0.7.2
We are pleased to announce the release of tensordict v0.7.2, which includes several bug fixes and backend improvements.
Bug Fixes:
- Consolidated lazy stacks of non-tensors (#1222, #1224)
- Passed type directly during reduction (#1225)
- Fixed non-deterministic key order in stack (#1230)
- Added _PASSTHROUGH_MEMO for passthrough tensorclass (#1231)
- Improved performance and safety of non-tensor stack (#1232)
- Fixed serialization of stacks of Tensorclasses (#1236)
- Fixed compile during _check_keys (#1239)
Backend Improvements:
Improved errors for TensorDictSequential (#1227)
Documentation Updates:
Improved documentation for TensorDictModuleBase (#1226)
Full Changelog: v0.7.1...v0.7.2
v0.7.1: Fixes and doc improvements
We are pleased to announce the release of tensordict v0.7.1, which includes several bug fixes and a deprecation notice.
Bug Fixes
- Fixed get method for nested keys with default values in TensorClass (#1211)
- Enforced zip(..., strict=True) in TDModules to prevent potential issues (#1212)
- Properly handled auto-batch size for unbatched tensors (#1213)
- Fixed indexing issues in TensorClass (#1217)
Deprecation Notice
Softly deprecated extra-tensors with respect to out_keys (#1215). A warning is now raised when the number of output tensors and output keys do not match.
Full Changelog: v0.7.0...v0.7.1
v0.7.0: More robust composite distributions, TensorClass superclass
v0.7.0 brings a lot of new features and bug fixes. Thanks to the vibrant community for helping us keep this project alive!
New Contributors
A special thanks to our new contributors (+ people interacting with us on the PyTorch forum, discord or via issues and
social platforms)!
- @egaznep made their first contribution in #1056
- @priba made their first contribution in #1065
- @SamAdamDay made their first contribution in #1075
- @emmanuel-ferdman made their first contribution in #1105
- @bachdj-px made their first contribution in #1101
- @jgrhab made their first contribution in #1130
- @amdfaa made their first contribution in #1191
BC-breaking changes
- In #1180, we use the same object for `min` and `max` operations as we do with `torch.Tensor.min`. Previously, tensorclasses were used, but that led to some undefined behaviors when indexing (in PyTorch, `min` returns a namedtuple that can be indexed to get the values or argmax, whereas indexing a tensorclass indexes it along the batch dimension).
- In #1166, we introduce broadcasting for pointwise operations between tensors and tensordicts. Now, the following two operations on either side of the `==` sign are exactly equivalent:

  ```python
  td = TensorDict(..., batch_size=[3, 5])
  t = torch.randn(5)
  td + t == td + t.expand(td.shape)
  ```
Announced API changes
CompositeDistribution
TL;DR: We're changing the way log-probs and entropies are collected and written in `ProbabilisticTensorDictModule` and in `CompositeDistribution`. The `"sample_log_prob"` default key will soon be `"<value>_log_prob"` (or `("path", "to", "<value>_log_prob")` for nested keys). For `CompositeDistribution`, a different log-prob will be written for each leaf tensor in the distribution. This new behavior is controlled by the `tensordict.nn.set_composite_lp_aggregate(mode: bool)` function or by the `COMPOSITE_LP_AGGREGATE` environment variable.
We strongly encourage users to adopt the new behavior by setting `tensordict.nn.set_composite_lp_aggregate(False).set()` at the beginning of their training script.
We've had multiple rounds of refactoring for `CompositeDistribution`, which relied on some very specific assumptions and resulted in a brittle and painful API. We have now settled on the following API, which will be enforced in v0.9 unless the `tensordict.nn.set_composite_lp_aggregate(mode)` value is explicitly set to `True` (the current default).
The bulk of the problem was that log-probs were aggregated in a single tensor and registered in `td["sample_log_prob"]`.
This had the following problems:
- Summing the log-probs isn't a good idea; users should be entitled to use the log-probs as they please.
- `"sample_log_prob"` is a generic but inappropriate name (the data may not be a random `sample` but anything else).
- Summing requires reduction (because log-probs may have different shapes), but sometimes we don't want to reduce to the shape of the root tensordict (see pytorch/rl#2756 for instance).
What's new
tensorclass
- Introduction of the `TensorClass` class to do simple inheritance-style coding, which is accompanied by a stub file that encodes all the TensorDict op signatures (we ensure this in the CI). See #1067.
- @tensorclass shadow attributes: you can now do `@tensorclass(shadow=True)` or `class T(TensorClass["shadow"]): ...` and you will be able to use dedicated names like `get` or `values` as attribute names. This is slightly unsafe when you nest the tensorclass, as we can't guarantee that the container won't be calling these methods directly on the tensorclass.
- Similarly, `@tensorclass(nocast=True)` and `TensorClass["nocast"]` will deactivate the auto-casting in tensorclasses. The behavior is now:
  - No value: tensorclass will cast things like TensorDict (i.e., `int` or `np.array` values will be cast to `torch.Tensor` instances, for example).
  - `autocast=True` will cause `@tensorclass` to go one step further and attempt to cast values to the type indicated in the dataclass definition.
  - `nocast=True` keeps values as they are. All non-tensor (or non-tensordict/tensorclass) values will be wrapped in a `NonTensorData`.
NonTensorData
- It is now easier to build non-tensor stacks through a simple `NonTensorStack(*values)`.
See the full list of features here:
[Feature] Add `__abs__` docstrings, `__neg__`, `__rxor__`, `__ror__`, `__invert__`, `__and__`, `__rand__`, `__radd__`, `__rtruediv__`, `__rmul__`, `__rsub__`, `__rpow__`, `bitwise_and`, `logical_and` (#1154) (d1363eb) by @vmoens ghstack-source-id: 97ce710b5a4b552d9477182e1836cf3777c2d756
[Feature] Add expln map to NormalParamExtractor (#1204) (e900b24) by @vmoens ghstack-source-id: 9003ceafbe8ecb73c701ea1ce96c0a342d0679b0
[Feature] Add missing `__torch_function__` (#1169) (bc6390c) by @vmoens ghstack-source-id: 3dbefb4f5322a944664bbc2d29af7f862cb92342
[Feature] Better list casting in TensorDict.from_any (#1108) (1ffc463) by @vmoens ghstack-source-id: 427d19d5ef7c0d2779e064e64522fc0094a885af
[Feature] Better logs of key errors in assert_close (#1082) (747c593) by @vmoens ghstack-source-id: 46cb41d0da34b17ccc248119c43ddba586d29d80
[Feature] COMPOSITE_LP_AGGREGATE env variable (#1190) (9733d6e) by @vmoens ghstack-source-id: 16b07d0eac582cfd419612f87e38e1a7acffcfc0
[Feature] CompositeDistribution.from_distributions (#1113) (a45c7e3) by @vmoens ghstack-source-id: 04a62439b0fe60422fbc901172df46306e161cc5
[Feature] Ensure all dists work with DETERMINSTIC type without warning (#1182) (8e63112) by @vmoens ghstack-source-id: 63117f9b3ac4125a2be4e3e55719cc718051fc10
[Feature] Expose WrapModule (#1118) (d849756) by @vmoens ghstack-source-id: 55caa5d7c39e0f98c1e0558af2a076fee15f7984
[Feature] Fix type assertion in Seq build (#1143) (eaafc18) by @vmoens ghstack-source-id: 83d3dcafe45568c366207395a22b22fb35f61de1
[Feature] Force log_prob to return a tensordict when kwargs are passed to ProbabilisticTensorDictSequential.log_prob (#1146) (98c57ee) by @vmoens ghstack-source-id: 326d0763c9bbb13b51daac91edca4f0e821adf62
[Feature] Make ProbabilisticTensorDictSequential account for more than one distribution (#1114) (c7bd20c) by @vmoens ghstack-source-id: b62b81b5cfd49168b5875f7ba9b4f35b51cd2423
[Feature] NonTensorData(*sequence_of_any) (#1160) (70d4ed1) by @vmoens ghstack-source-id: 537f3d87b0677a1ae4992ca581a585420a10a284
[Feature] NonTensorStack.data (#1132) (4404abe) by @vmoens ghstack-source-id: 86065377cc1cd7c7283ed0a468f5d5602d60526d
[Feature] NonTensorStack.from_list (#1107) (f924afc) by @vmoens ghstack-source-id: e8f349cb06a72dcb69a639420b14406c9c08aa99
[Feature] Optional in_keys for WrapModule (#1145) (2d37d92) by @vmoens ghstack-source-id: a18dd5dff39937b027243fcebc6ef449b547e0b0
[Feature] OrderedDict for TensorDictSequential (#1142) (7df2062) by @vmoens ghstack-source-id: a8aed1eaefe066dafaa974f5b96190860de2f8f1
[Feature] ProbabilisticTensorDictModule.num_samples (#1117) (978d96c) by @vmoens ghstack-source-id: dc6b1c98cee5fefc891f0d65b66f0d17d10174ba
[Feature] ProbabilisticTensorDictSequential.default_interaction_type (#1123) (68ce9c3) by @vmoens ghstack-source-id: 37d38df36263e8accd84d6cb895269d50354e537
[Feature] Subclass conservation in td ops (#1186) (070ca61) by @vmoens ghstack-source-id: 83e79abda6a4bb6839d99240052323380981855c
[Feature] TensorClass (#1067) (a6a0dd6) by @vmoens ghstack-source-id: c3d4e17599a3204d4ad06bceb45e4fdcd0fd1be5
[Feature] TensorClass shadow attributes (#1159) (c744bcf) by @vmoens ghstack-source-id: b5cc7c7fea2d48394e63d289ee2d6f215c2333bc
[Feature] TensorDict.(dim='feature') (#1121) (ba43159) by @vmoens ghstack-source-id: 68f21aca722895e8a240dbca66e97310c20a6b5d
[Feature] TensorDict.clamp (#1165) (646683c) by @vmoens ghstack-source-id: 44f0937c195d969055de10709402af7c4473df32
[Feature] TensorDict.logsumexp (#1162) (e564b3a) by @vmoens ghstack-source-id: 84148ad9c701029db6d02dfb84ddb0a9b26c9ab7
[Feature] TensorDict.separates (#1120) (674f356) by @vmoens ghstack-source-id: be142a150bf4378a0806347257c3cf64c78e4eda
[Feature] TensorDict.softmax (#1163) (c0c6c14) by @vmoens ghstack-source-id: a88bebc23e6aaa02ec297db72dbda68ec9628ce7
[Feature] TensorDictModule in_keys allowed as Dict[str, tuple | list] to enable multi use of a sample feature (#1101) (e871b7d) by @bachdj-px
[Feature] UnbatchedTensor (#1170) (74cae09) by @vmoens ghstack-source-id: fa25726d61e913a725a71f1579eb06b09455e7c8
[Feature] `intersection` for assert_close (#1078) (84d31db) by @vmoens ghstack-source-id: 3ae83c4ef90a9377405aebbf1761ace1a39417b1
[Feature] allow tensorclass to be customized (#1080) (31c7330) by @vmoens ghstack-source-id: 0b65b0a2dfb0cd7b5113e245c9444d3a0b55d085
[Feature] broadcast pointwise ops for tensor/tensordict mixed inputs (#1166) (aeff837) by @vmoens ghstack-source-id: bbefbb1a2e9841847c618bb9cf49160ff1a5c36a
[Feature] compatibility of consolidate with compile (quick version) (#1061) (3cf52a0) by @vmoens ghstack-source-id: 1bf3ca550dfe5499b58f878f72c4f1687b0f247e
[Feature] dist_params_keys and dist_sample_keys (#1179) (a728a4f) by @vmoens ghstack-source-id: d1e53e780132d04ddf37d613358b24467520230f
[Feature] flexible return type when indexing prob sequences (#1189) (790bef6) by @vmoens ghstack-source-id: 74d28ee84d965c11c527c60b20d9123ef30007f6
[Feature] from_any with UserDict (#1106) (3485c2c) by @vmoens ghstack-source-id: 420464209cff29c3a1c58ec521fbf4ed69d1355f
[Feature] inplace to method (#1066) (fbb71...
v0.6.2: Minor bug fixes
[BugFix] Fix none ref in during reduction (#1090) 88c86f8
[Versioning] v0.6.2 fix cdd5cb3
[BugFix] smarter check in set_interaction_type 0b3c778
[Versioning] 0.6.2 b26bbe3
[Quality] Better use of StrEnum in set_interaction_type 477d85b
[Minor] print_directory_tree returns a string 05c0fe7
[Doc] Add doc on export with nested keys d64c33d
[Feature] Better logs of key errors in assert_close 1ef1188
[Refactor] Make _set_dispatch_td_nn_modules compatible with compile f24e3d8
[Doc] Better docstring for to_module 178dfd9
[Feature] `intersection` for assert_close 8583392
[Quality] Better error message for incongruent lists of keys 866943c
[BugFix] Better repr of lazy stacks e00965c
[BugFix] calling pad with immutable sequence (#1075) d3bcb6e
[Performance] Faster to f031bf2
Full Changelog: v0.6.1...v0.6.2
v0.6.1: Minor fixes and perf improvements
v0.6.0: Export, streaming and `CudaGraphModule`
What's Changed
TensorDict 0.6.0 makes the `@dispatch` decorator compatible with `torch.export` and related APIs, allowing you to get rid of tensordict altogether when exporting your models:
```python
import torch
import torch.distributions as dists
import torch.nn as nn
from torch.export import export

from tensordict.nn import (
    NormalParamExtractor,
    ProbabilisticTensorDictModule as Prob,
    TensorDictModule as Mod,
    TensorDictSequential as Seq,
)

model = Seq(
    # 1. A small network for embedding
    Mod(nn.Linear(3, 4), in_keys=["x"], out_keys=["hidden"]),
    Mod(nn.ReLU(), in_keys=["hidden"], out_keys=["hidden"]),
    Mod(nn.Linear(4, 4), in_keys=["hidden"], out_keys=["latent"]),
    # 2. Extracting params
    Mod(NormalParamExtractor(), in_keys=["latent"], out_keys=["loc", "scale"]),
    # 3. Probabilistic module
    Prob(
        in_keys=["loc", "scale"],
        out_keys=["sample"],
        distribution_class=dists.Normal,
    ),
)
x = torch.randn(3)
model_export = export(model, args=(), kwargs={"x": x})
```
See our new tutorial to learn more about this feature.
The library's integration with the PT2 stack is also further improved by the introduction of `CudaGraphModule`, which can be used to speed up model execution under a certain set of assumptions: mainly, that the inputs and outputs are non-differentiable, that they are all tensors or constants, and that the whole graph can be executed on CUDA with buffers of constant shape (i.e., dynamic shapes are not allowed).
We also introduce a new tutorial on streaming tensordicts.
Note: The `aarch64` binaries are attached to these release notes and are not available on PyPI at the moment.
Deprecations
- [Deprecate] Make calls to make_functional error #1034 by @vmoens
- [Deprecation] Act warned deprecations for v0.6 #1001 by @vmoens
- [Refactor] make TD.get default to None, like dict (#948) by @vmoens
Features
- [Feature] Allow to specify log_prob_key in CompositeDistribution (#961) by @albertbou92
- [Feature] Better typing for tensorclass #983 by @vmoens
- [Feature] Cudagraphs (#986) by @vmoens
- [Feature] Densify lazy tensordicts #955 by @vmoens
- [Feature] Frozen tensorclass #984 by @vmoens
- [Feature] Make NonTensorData a callable (#939) by @vmoens
- [Feature] NJT with lengths #1021 by @vmoens
- [Feature] Non-blocking for consolidated TD #1020 by @vmoens
- [Feature] Propagate `existsok` in memmap* methods #990 by @vmoens
- [Feature] TD+NJT to(device) support #1022 by @vmoens
- [Feature] TensorDict.record_stream #1016 by @vmoens
- [Feature] Unify composite dist method signatures with other dists (#981) Co-authored-by: Vincent Moens vincentmoens@gmail.com
- [Feature] foreach_copy for update_ #1032 by @vmoens
- [Feature] `data_ptr()` method #1024 by @vmoens
- [Feature] `inplace` arg in TDM constructor #992 by @vmoens
- [Feature] `selected_out_keys` arg in TDS constructor #993 by @vmoens
- [Feature] better sync and instantiation of cudagraphs (#1013) by @vmoens
- [Feature] callables for merge_tensordicts #1033 by @vmoens
- [Feature] cat and stack_from_tensordict #1018 by @vmoens
- [Feature] cat_tensors and stack_tensors #1017 by @vmoens
- [Feature] from_struct_array and to_struct_array (#938) by @vmoens
- [Feature] give a `__name__` to TDModules #1045 by @vmoens
- [Feature] param_count #1046 by @vmoens
- [Feature] sorted keys, values and items #965 by @vmoens
- [Feature] str2td #953 by @vmoens
- [Feature] torch.export and onnx compatibility #991 by @vmoens
Code improvements
- [Quality] Better error for mismatching TDs (#964) by @vmoens
- [Quality] Better type hints for `__init__` (#1014) by @vmoens
- [Quality] Expose private classmethods (#982) by @vmoens
- [Quality] Fewer recompiles with tensordict (#1015) by @vmoens
- [Quality] Type checks #976 by @vmoens
- [Refactor, Tests] Move TestCudagraphs #1007 by @vmoens
- [Refactor] Make @Tensorclass work properly with pyright (#1042) by @maxim
- [Refactor] Update nn inline_inbuilt check #1029 by @vmoens
- [Refactor] Use IntEnum for interaction types (#989) by @vmoens
- [Refactor] better AddStateIndependentNormalScale #1028 by @vmoens
Fixes
- [BugFix] Add nullbyte in memmap files to make fbcode happy (#943) by @vmoens
- [BugFix] Add sync to cudagraph module (#1026) by @vmoens
- [BugFix] Another compiler fix for older pytorch #980 by @vmoens
- [BugFix] Compatibility with non-tensor inputs in CudaGraphModule #1039 by @vmoens
- [BugFix] Deserializing a consolidated TD reproduces a consolidated TD #1019 by @vmoens
- [BugFix] Fix foreach_copy for older versions of PT #1035 by @vmoens
- [BugFix] Fix buffer identity in Params._apply (#1027) by @vmoens
- [BugFix] Fix key errors catch in del_ and related (#949) by @vmoens
- [BugFix] Fix number check in array parsing (np>=2 compatibility) #999 by @vmoens
- [BugFix] Fix pre 2.1 _apply compatibility #1050 by @vmoens
- [BugFix] Fix select in tensorclass (#936) by @vmoens
- [BugFix] Fix td device sync when error is raised #988 by @vmoens
- [BugFix] Fix tree_leaves import for older versions of PT #995 by @vmoens
- [BugFix] Fix vmap monkey patching #1009 by @vmoens
- [BugFix] Make probabilistic sequential modules compatible with compile #1030 by @vmoens
- [BugFix] Other dynamo fixes #977 by @vmoens
- [BugFix] Propagate maybe_dense_stack in _stack #1036 by @vmoens
- [BugFix] Regular swap_tensor for to_module in dynamo (#963) by @vmoens
- [BugFix] Remove ForkingPickler to account for change of API in torch.mp #998 by @vmoens
- [BugFix] Remove forkingpickler (#1049) by @bhack
- [BugFix] Resilient deterministic_sample for CompositeDist #1000 by @vmoens
- [BugFix] Simple syncs (#942) by @vmoens
- [BugFix] Softly revert get changes (#950) by @vmoens
- [BugFix] TDParams.to(device) works as nn.Module, not TDParams contained TD #1025 by @vmoens
- [BugFix] Use separate streams for cudagraph warmup #1010 by @vmoens
- [BugFix] dynamo compat refactors #975 by @vmoens
- [BugFix] resilient _exclude_td_from_pytree #1038 by @vmoens
- [BugFix] restrict usage of Buffers to non-batched, non-tracked tensors #979 by @vmoens
Doc
- [Doc] Broken links in GETTING_STARTED.md (#945) by @vmoens
- [Doc] Fail-on-warning in sphinx #1005 by @vmoens
- [Doc] Fix tutorials #1002 by @vmoens
- [Doc] Refactor README and add GETTING_STARTED.md (#944) by @vmoens
- [Doc] Streaming tensordicts #956 by @vmoens
- [Doc] export tutorial, TDM tuto refactoring #994 by @vmoens
Performance
Not user facing
- [Benchmark] Benchmark H2D transfer #1044 by @vmoens
- [CI, BugFix] Fix nightly build (#941) by @vmoens
- [CI] Add aarch64-linux wheels (#987) by @vmoens
- [CI] Fix versioning of h2d tests #1053 by @vmoens
- [CI] Fix windows wheels #1006 by @vmoens
- [CI] Upgrade 3.8 workflows (#967) by @vmoens
- [Minor, Format] Fix fbcode lint (#940) by @vmoens
- [Minor] Refactor is_dynamo_compiling for older torch versions (#978) by @vmoens
- [Setup] Correct read_file encoding in setup (#962) by @vmoens
- [Test] Keep a tight control over warnings (#951) by @vmoens
- [Test] Make h5py tests optional if no h5py installed (#947) by @vmoens
- [Test] Mark MP tests as slow (#946) by @vmoens
- [Test] Rename duplicated test #997 by @vmoens
- [Test] Skip compile tests that require 2.5 for stable #996 by @vmoens
- [Versioning] Versions for 0.6 (#1052) by @vmoens
New Contributors
Full Changelog: v0.5.0...v0.6.0