Releases: usnistgov/h5wasm
v0.7.0 2023-12-05
Changed
- package.json modified so that exports are defined in a way that works better with bundlers (should now work with e.g. NextJS)
- BREAKING: the nodejs import is now 'h5wasm/node' rather than 'h5wasm' (see the example below); the nodejs examples in the README have been updated, as well as the tests
- tests modified to use a smaller compressed array (to reduce package size)
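For example, nodejs code migrating to v0.7.0 changes only its import path:
// v0.6.x and earlier (nodejs):
import h5wasm from "h5wasm";
// v0.7.0 and later (nodejs):
import h5wasm from "h5wasm/node";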
v0.6.10 2023-11-09
Added
- export more symbols to support the blosc2 filter
v0.6.9 2023-11-06
Fixed
- added missing FileSystem API function mkdirTree to the Emscripten typescript interface
Added
- Functions for working with dimension scales in the typescript interface (see the combined usage sketch below):
// convert dataset to dimension scale:
Dataset.make_scale(scale_name: string)
// attach a dimension scale to the "index" dimension of this dataset:
Dataset.attach_scale(index: number, scale_dset_path: string)
// detach a dimension scale from "index" dimension
Dataset.detach_scale(index: number, scale_dset_path: string)
// get full paths to all datasets that are attached as dimension scales
// to the specified dimension (at "index") of this dataset:
Dataset.get_attached_scales(index: number)
// if this dataset is a dimension scale, returns name as string
// (returns empty string if no name defined, but it is a dimension scale)
// else returns null if it is not set as a dimension scale:
Dataset.get_scale_name()
- Functions for working with dimension labels (not related to dimension scales)
// label dimension at "index" of this dataset with string "label":
Dataset.set_dimension_label(index: number, label: string)
// fetch labels for all dimensions of this dataset (null if label not defined):
Dataset.get_dimension_labels()
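A minimal sketch combining these methods (the file and dataset names are illustrative, and the commented return values are expectations based on the descriptions above):
import h5wasm from "h5wasm";
await h5wasm.ready;

const f = new h5wasm.File('scales.h5', 'w');
f.create_dataset({name: "data", data: [0,1,2,3,4,5], shape: [2,3], dtype: '<f4'});
f.create_dataset({name: "x", data: [10,20,30], dtype: '<f4'});

// convert "x" to a dimension scale, then attach it to dimension 1 of "data":
f.get("x").make_scale("x_axis");
f.get("data").attach_scale(1, "/x");
f.get("data").get_attached_scales(1); // -> ["/x"]
f.get("x").get_scale_name(); // -> "x_axis"

// dimension labels (independent of dimension scales):
f.get("data").set_dimension_label(0, "rows");
f.get("data").get_dimension_labels(); // -> ["rows", null]
f.close();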
v0.6.8 2023-11-02
Added
- Functions for creating, attaching and detaching dimension scales (no support for reading yet)
- h5wasm.Module.set_scale(file_id: bigint, dataset_name: string, dim_name: string) -> number // error code, zero on success
- h5wasm.Module.attach_scale(file_id: bigint, target_dataset_name: string, dimscale_dataset_name: string, index: number) -> number
- h5wasm.Module.detach_scale(file_id: bigint, target_dataset_name: string, dimscale_dataset_name: string, index: number) -> number
Usage:
import h5wasm from "h5wasm";
await h5wasm.ready;
const f = new h5wasm.File('dimscales.h5', 'w');
f.create_dataset({name: "data", data: [0,1,2,3,4,5], shape: [2,3], dtype: '<f4'});
f.create_dataset({name: "dimscale", data: [2,4,6], dtype: '<f4'});
h5wasm.Module.set_scale(f.file_id, 'dimscale', 'y1'); // -> 0 on success
h5wasm.Module.attach_scale(f.file_id, 'data', 'dimscale', 1); // -> 0 on success
f.close();
v0.6.3 2023-09-15
Added
- extra symbols used by the LZ4 plugin added to EXPORTED_FUNCTIONS:
  - from <arpa/inet.h>: htonl, htons, ntohl, ntohs
  - from HDF5: H5allocate_memory, H5free_memory
- node.js library compiled as MAIN_MODULE, allowing plugin use (if plugins are installed in /usr/local/hdf5/lib/plugin)
v0.6.2 2023-08-28
Added
- From PR #59 (thanks to @TheLartians!): allows slicing datasets with a specified step size. This is extremely useful for querying a high-resolution dataset at a lower sample rate than originally recorded, e.g. for performance or bandwidth reasons. Also works with Dataset.write_slice.
Example:
// given an open, writable file handle `write_file`:
const dset = write_file.create_dataset({
name: "data",
data: [0,1,2,3,4,5,6,7,8,9],
shape: [10],
dtype: "<f4"
});
const slice = dset.slice([[null, null, 2]]); // -> [0,2,4,6,8]
dset.write_slice([[null, 7, 2]], [-2,-3,-4,-5]);
dset.value; // -> Float32Array(10) [-2, 1, -3, 3, -4, 5, -5, 7, 8, 9];
v0.6.1 2023-07-13
Fixed
- memory error from double-free when creating vlen str datasets (no need to call H5Treclaim when the memory for the vector will be automatically garbage-collected)
v0.6.0 2023-07-03
Added
- compression: number | 'gzip' and compression_opts: number[] arguments added to create_dataset
  - Must specify chunks when using compression
  - Cannot specify compression with VLEN datasets
  - if compression is supplied as a number without compression_opts, the 'gzip' (DEFLATE, filter_id=1) filter is applied, and the compression level is taken from the value of compression. An integer value of 0-9 is required for the compression level of the 'gzip' filter.
  - if compression and compression_opts are both supplied, the compression value is passed as the filter_id (or 1 if 'gzip' is supplied), and the value of compression_opts is promoted to a list if it is a number, or passed as-is if it is a list. Use both compression (a numeric filter_id) and compression_opts when specifying any filter other than 'gzip', e.g. with a filter plugin.
  - if compression === 'gzip' and compression_opts === undefined, the default gzip compression level of 4 is applied.
Example usage:
// short form: uses default compression (DEFLATE, filter_id=1)
// with compression level specified (here, 9)
Group.create_dataset({..., compression: 9});
// specify gzip and compression level:
Group.create_dataset({..., compression: 'gzip', compression_opts: [4]});
// specify another filter, e.g. BLOSC plugin:
Group.create_dataset({..., compression: 32001, compression_opts: []});
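A complete sketch (the file name and data are invented; the chunks requirement and default gzip level follow the notes above):
import h5wasm from "h5wasm";
await h5wasm.ready;

const f = new h5wasm.File('compressed.h5', 'w');
f.create_dataset({
  name: "data",
  data: new Float32Array(1000),
  shape: [100, 10],
  dtype: '<f4',
  chunks: [10, 10], // chunks must be specified when compression is used
  compression: 'gzip' // no compression_opts: default gzip level 4 is applied
});
f.close();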
Changed
- BREAKING: for create_dataset, all arguments are supplied in a single object, as
Group.create_dataset(args: {
name: string,
data: GuessableDataTypes,
shape?: number[] | null,
dtype?: string | null,
maxshape?: (number | null)[] | null,
chunks?: number[] | null,
compression?: (number | 'gzip'),
compression_opts?: number | number[]
}): Dataset {}
v0.5.2
v0.5.0 2023-05-15
Fixed
- with emscripten >= 3.1.28, the shim for ES6 builds in nodejs is no longer needed (see emscripten-core/emscripten#17915)
- added malloc and free to the list of explicit exports for the Module (newer emscripten was optimizing them away)
Changed
- POSSIBLY BREAKING: building as MAIN_MODULE=2 with POSITION_INDEPENDENT_CODE ON (to allow dynamic linking)
- Using newer HDF5 version 1.12.2 libraries
- compiled using Emscripten 3.1.28
- simplified imports for tests to use "h5wasm" directly
Added
- Plugins can now be used if they are compiled as SIDE_MODULE, by loading them into the expected plugin folder /usr/local/hdf5/lib/plugin in the emscripten virtual file system (the plugin file name might need to end in .so, even if it is WASM); see the sketch below
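A browser-side sketch of installing such a plugin (the plugin URL and file name are assumptions; FS is the Emscripten virtual filesystem exposed by the resolved module):
import h5wasm from "h5wasm";
const { FS } = await h5wasm.ready;

// fetch a filter plugin compiled as SIDE_MODULE (illustrative URL):
const response = await fetch("./plugins/libH5Zlz4.so");
const plugin_bytes = new Uint8Array(await response.arrayBuffer());

// install it in the folder where HDF5 looks for plugins,
// using a file name that ends in .so:
FS.mkdirTree("/usr/local/hdf5/lib/plugin");
FS.writeFile("/usr/local/hdf5/lib/plugin/libH5Zlz4.so", plugin_bytes);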