I rather often find myself converting various sorts of HDF5 files to VTK with h5tovtk. Despite having access to high-performance computing resources, when converting large files (from about 100 GB up) I keep hitting the same hopeless response: `arrayh5 error: out of memory`.
A distributed (MPI?) version of the tool (for multi-node systems), or at least a buffered one capable of copying the data in chunks, would be very welcome!
Any hope of seeing something like this in the near future?
Or is this already possible and I'm doing something wrong?
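In case it helps illustrate what I mean by "copying data in chunks": with h5py one can read an HDF5 dataset slab by slab, so only one slab is ever resident in memory. This is only a sketch of the idea (the file and dataset name `data` here are made up for the example), not anything h5tovtk currently does:

```python
import os
import tempfile

import h5py
import numpy as np

# Create a small stand-in for a large HDF5 file.
path = os.path.join(tempfile.mkdtemp(), "example.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("data", data=np.arange(1000, dtype=np.float64).reshape(100, 10))

# Process the dataset in bounded-size slabs instead of loading it whole;
# peak memory is proportional to slab_rows, not to the dataset size.
slab_rows = 16
total = 0.0
with h5py.File(path, "r") as f:
    dset = f["data"]
    for start in range(0, dset.shape[0], slab_rows):
        slab = dset[start:start + slab_rows]  # only this slab is read into RAM
        total += float(slab.sum())

print(total)
```

The same pattern (hyperslab reads via `H5Sselect_hyperslab` in the C API) could in principle drive a streaming HDF5-to-VTK writer.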
Thank you!