HEFTIE stands for "Handling Enormous Files from Tomography Imaging Experiments". We are an EU-funded project which aims to:
develop a comprehensive digital textbook and new software tools for working with chunked 3D imaging datasets
For a high-level, public-facing view of our project, see our project page on the funder website. This organisation README gives a more technical overview of what we are doing, targeted at existing chunked-data communities so you can understand what we're up to.
- @dstansby is the project lead, and will be writing the digital textbook.
- Software engineers from UCL's Centre for Advanced Research Computing will be working on Python tools and benchmarking.
- @scalableminds will be working on visualisation improvements, led by @normanrz.
Our work is split into three work packages:
The goal of this work package is to write a digital textbook on working with huge 3D imaging datasets using modern data formats. It will focus on explaining how chunked data formats work and how to use them, with practical examples. Our examples will use the scientific Python stack, with the zarr and OME-zarr data formats and dask for chunked processing. The draft chapter headings are:
- Chunked dataset theory
- Choosing parameters for chunked dataset creation
- Converting between chunked datasets and 'traditional' file formats (e.g., TIFF, JPEG2000, NIfTI)
- Downsampling chunked datasets
- Working with remotely stored chunked datasets
- Simple visualisation of chunked datasets
- Simple analysis of chunked datasets
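To illustrate the core idea behind chunked formats, here is a minimal, library-free sketch (plain Python; `chunk_slices` is a hypothetical helper, not part of zarr's API) of how a large volume decomposes into a grid of fixed-size chunks:

```python
import itertools
import math

def chunk_slices(shape, chunks):
    """Yield a tuple of slices covering each chunk of an array.

    `shape` and `chunks` are matching tuples; chunks at the edges
    are clipped to the array bounds.
    """
    counts = [math.ceil(s / c) for s, c in zip(shape, chunks)]
    for idx in itertools.product(*(range(n) for n in counts)):
        yield tuple(
            slice(i * c, min((i + 1) * c, s))
            for i, c, s in zip(idx, chunks, shape)
        )

# A 100^3 volume with 64^3 chunks is covered by a 2x2x2 grid of chunks.
grid = list(chunk_slices((100, 100, 100), (64, 64, 64)))
print(len(grid))  # 8
```

Libraries like zarr and dask manage this chunk grid for you (plus compression and storage); the sketch only shows the indexing arithmetic that the textbook will build on.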
This is being developed at HEFTIEProject/heftie-book, and a live preview is available at https://heftie-textbook.readthedocs.io.
The goals of this work package are to:
- Develop a set of benchmarks to understand the best parameters for creating OME-zarr datasets.
- Work on Python tools for working with OME-zarr datasets.
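To give a flavour of why chunk shape is worth benchmarking, this sketch counts how many chunks a single z-slice read must fetch under different chunk shapes (`chunks_touched` is a hypothetical helper doing pure arithmetic, no I/O; a real benchmark would measure wall-clock read/write times):

```python
def chunks_touched(chunks, region):
    """Count how many chunks a read of `region` must fetch.

    `region` is a tuple of (start, stop) ranges, one per axis;
    `chunks` is the chunk shape.
    """
    n = 1
    for (start, stop), c in zip(region, chunks):
        # Number of chunk indices the half-open range [start, stop) spans.
        n *= (stop - 1) // c - start // c + 1
    return n

# Reading a single z-slice of a 1024^3 volume:
zslice = ((0, 1), (0, 1024), (0, 1024))
print(chunks_touched((64, 64, 64), zslice))     # 256 small cubic chunks
print(chunks_touched((1, 1024, 1024), zslice))  # 1 slab-shaped chunk
```

Fewer chunks touched generally means fewer requests against a remote store, but larger chunks waste more bytes when only part of a chunk is needed; quantifying that trade-off is what the benchmarks are for.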
Instead of developing new tools, we want to help improve and fix existing tools. Exactly what tools we work on will be guided by gaps we find when completing the digital textbook in WP1.
> [!TIP]
> We would love feedback from the 3D imaging community on what tools they want to see developed!
Current items on our TODO list include:
- Converting sub-volumes back and forth between 'traditional' file formats (e.g., NIfTI, TIFF, JPEG)
- Caching remote data stores locally on disk
- Isotropically downsampling 3D images
This is a tentative list, though - we would love to hear from you! Please email @dstansby with your thoughts or suggestions.
In this work package we will improve the existing open-source webknossos tool, adding new features for 3D datasets. These improvements include:
- Region growing annotation tool in 3D
- Visualisation/interpolation/segmentation along tilted planes
- Expanding skeletons to segmentations
- Volume rendering in the 3D viewport
This project is funded by the OSCARS project, which has received funding from the European Commission’s Horizon Europe Research and Innovation programme under grant agreement No. 101129751.