7.1.2. algotom.io.loadersaver
Module for I/O tasks:
- Load data from an image file (tif, png, jpeg) or a hdf/nxs file.
- Get information from a hdf/nxs file.
- Search for datasets in a hdf/nxs file.
- Save a 2D array as a tif image, or save a 2D/3D array to a hdf/nxs file.
- Get file names; make file/folder names.
- Load distortion coefficients from a txt file.
- Get the tree view of a hdf/nxs file.
- Load stacks of images from multiple datasets, e.g. for speckle-based phase-contrast tomography.
Functions:
- load_image: Load data from an image.
- get_hdf_information: Get information of datasets in a hdf/nxs file.
- find_hdf_key: Find datasets matching the name-pattern in a hdf/nxs file.
- load_hdf: Load a hdf/nexus dataset as an object.
- make_folder: Create a folder for saving a file if the folder does not exist.
- make_file_name: Create a new file name to avoid overwriting.
- make_folder_name: Create a new folder name to avoid overwriting.
- find_file: Search for files matching a pattern.
- save_image: Save a 2D array to an image.
- open_hdf_stream: Write an array to a hdf/nxs file with options to add metadata.
- load_distortion_coefficient: Load distortion coefficients from a text file.
- save_distortion_coefficient: Write distortion coefficients to a text file.
- get_hdf_tree: Get the tree view of a hdf/nxs file.
- get_reference_sample_stacks_dls: Get two stacks of reference images (speckle images) and sample images (at the same rotation angle from each tomographic dataset) for multi-position speckle-based phase-contrast tomography at DLS.
- get_reference_sample_stacks: Get two stacks of reference images (speckle images) and sample images (at the same rotation angle from each tomographic dataset).
- get_tif_stack: Load tif images to a stack.
- get_image_stack: Get multiple images with the same index from multiple datasets (tif or hdf format).
- load_image_multiple: Load a list of images in parallel.
- save_image_multiple: Save a 3D array to a list of tif images in parallel.
- algotom.io.loadersaver.load_image(file_path)
Load data from an image.
- Parameters
file_path (str) – Path to the file.
- Returns
array_like – 2D array.
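A minimal usage sketch; the file path below is a placeholder:
```python
import algotom.io.loadersaver as losa

# Load a single tif image as a 2D array (path is a placeholder).
proj = losa.load_image("data/projection_00000.tif")
print(proj.shape)
```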
- algotom.io.loadersaver.get_hdf_information(file_path, display=False)
Get information of datasets in a hdf/nxs file.
- Parameters
file_path (str) – Path to the file.
display (bool) – Print the results onto the screen if True.
- Returns
list_key (list of str) – Keys to the datasets.
list_shape (list of tuple) – Shapes of the datasets.
list_type (list of str) – Types of the datasets.
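A minimal usage sketch; the file path is a placeholder:
```python
import algotom.io.loadersaver as losa

# List every dataset in the file with its shape and data type.
keys, shapes, types = losa.get_hdf_information("data/scan_00001.nxs", display=True)
```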
- algotom.io.loadersaver.find_hdf_key(file_path, pattern, display=False)
Find datasets matching the name-pattern in a hdf/nxs file.
- Parameters
file_path (str) – Path to the file.
pattern (str) – Pattern to find the full names of the datasets.
display (bool) – Print the results onto the screen if True.
- Returns
list_key (list of str) – Keys to the datasets.
list_shape (list of tuple) – Shapes of the datasets.
list_type (list of str) – Types of the datasets.
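A minimal usage sketch; the file path and pattern are placeholders:
```python
import algotom.io.loadersaver as losa

# Locate datasets whose key contains the given pattern.
keys, shapes, types = losa.find_hdf_key("data/scan_00001.nxs", "image_key", display=True)
```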
- algotom.io.loadersaver.load_hdf(file_path, key_path, return_file_obj=False)
Load a hdf/nexus dataset as an object.
- Parameters
file_path (str) – Path to the file.
key_path (str) – Key path to the dataset.
return_file_obj (bool, optional) – Also return the file object if True.
- Returns
objects – hdf-dataset object, and file-object if return_file_obj is True.
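A minimal usage sketch; the file path and key path are placeholders, and the dataset is assumed to be a 3D projection array:
```python
import algotom.io.loadersaver as losa

# The dataset is opened lazily; slicing reads only the requested data.
data = losa.load_hdf("data/scan_00001.nxs", "entry/data/data")
sinogram = data[:, 500, :]  # extract a single sinogram without loading the whole array
```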
- algotom.io.loadersaver.make_folder(file_path)
Create a folder for saving a file if the folder does not exist. This is a supplementary function for savers.
- Parameters
file_path (str) – Path to a file.
- algotom.io.loadersaver.make_file_name(file_path)
Create a new file name to avoid overwriting.
- Parameters
file_path (str) – Path to a file.
- Returns
str – Updated file path.
- algotom.io.loadersaver.make_folder_name(folder_path, name_prefix='Output', zero_prefix=5)
Create a new folder name to avoid overwriting. E.g.: Output_00001, Output_00002, …
- Parameters
folder_path (str) – Path to the parent folder.
name_prefix (str) – Name prefix.
zero_prefix (int) – Number of zeros to be added to the folder name.
- Returns
str – Name of the folder.
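A minimal sketch of the three naming helpers; the paths are placeholders:
```python
import algotom.io.loadersaver as losa

losa.make_folder("output/rec/slice_mid.tif")  # create "output/rec" if it does not exist
file_path = losa.make_file_name("output/rec/slice_mid.tif")  # new name if the file already exists
folder_name = losa.make_folder_name("output", name_prefix="Output")  # e.g. "Output_00001"
```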
- algotom.io.loadersaver.find_file(path)
Search for files matching a pattern.
- Parameters
path (str) – Path and pattern to find files.
- Returns
str or list of str – List of files.
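A minimal usage sketch; the glob-style pattern is a placeholder:
```python
import algotom.io.loadersaver as losa

tif_files = losa.find_file("data/projections/*.tif")  # list of matching file paths
```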
- algotom.io.loadersaver.save_image(file_path, mat, overwrite=True)
Save a 2D array to an image.
- Parameters
file_path (str) – Path to the file.
mat (array_like) – 2D array (int or float).
overwrite (bool) – Overwrite an existing file if True.
- Returns
str – Updated file path.
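A minimal usage sketch; the output path is a placeholder:
```python
import numpy as np
import algotom.io.loadersaver as losa

mat = np.float32(np.random.rand(64, 64))  # dummy 2D array
losa.save_image("output/test_image.tif", mat)
```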
- algotom.io.loadersaver.open_hdf_stream(file_path, data_shape, key_path='entry/data', data_type='float32', overwrite=True, **options)
Write an array to a hdf/nxs file with options to add metadata.
- Parameters
file_path (str) – Path to the file.
data_shape (tuple of int) – Shape of the data.
key_path (str) – Key path to the dataset.
data_type (str) – Type of data.
overwrite (bool) – Overwrite the existing file if True.
options (dict, optional) – Add metadata. E.g. options={"entry/angles": angles, "entry/energy": 53}.
- Returns
object – hdf object.
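A minimal sketch of streaming results slice-by-slice into a hdf file; the path, key, and metadata values are placeholders:
```python
import numpy as np
import algotom.io.loadersaver as losa

# Pre-allocate the output dataset, then write into it one slice at a time.
out = losa.open_hdf_stream("output/recon.hdf", (16, 64, 64),
                           key_path="entry/data", options={"entry/energy": 53})
for i in range(16):
    out[i] = np.zeros((64, 64), dtype=np.float32)  # replace with real reconstructed slices
```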
- algotom.io.loadersaver.load_distortion_coefficient(file_path)
Load distortion coefficients from a text file. The file must use the following format, one entry per line: x_center : float, y_center : float, factor0 : float, factor1 : float, …
- Parameters
file_path (str) – Path to the file.
- Returns
tuple of float and list – Tuple of (xcenter, ycenter, list_fact).
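A minimal usage sketch; the file path is a placeholder (such a file is typically produced by a distortion-calibration tool, e.g. Discorpy):
```python
import algotom.io.loadersaver as losa

xcenter, ycenter, list_fact = losa.load_distortion_coefficient("data/coefficients.txt")
```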
- algotom.io.loadersaver.save_distortion_coefficient(file_path, xcenter, ycenter, list_fact, overwrite=True)
Write distortion coefficients to a text file.
- Parameters
file_path (str) – Path to the file.
xcenter (float) – Center of distortion in x-direction.
ycenter (float) – Center of distortion in y-direction.
list_fact (list of float) – 1D array. Coefficients of the polynomial fit.
overwrite (bool) – Overwrite an existing file if True.
- Returns
str – Updated file path.
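A minimal usage sketch; the path and coefficient values are placeholders:
```python
import algotom.io.loadersaver as losa

losa.save_distortion_coefficient("output/coefficients.txt", 1024.0, 1080.0,
                                 [1.0, -2.5e-05, 3.0e-09])
```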
- algotom.io.loadersaver.get_hdf_tree(file_path, output=None, add_shape=True, display=True)
Get the tree view of a hdf/nxs file.
- Parameters
file_path (str) – Path to the file.
output (str or None) – Path to an output file in text format (.txt, .md, …).
add_shape (bool) – Include the shape of each dataset in the tree if True.
display (bool) – Print the tree onto the screen if True.
- Returns
list of str – Lines of the tree view.
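A minimal usage sketch; the paths are placeholders:
```python
import algotom.io.loadersaver as losa

# Print the file hierarchy and also save it to a text file.
losa.get_hdf_tree("data/scan_00001.nxs", output="output/hdf_tree.txt")
```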
- algotom.io.loadersaver.get_reference_sample_stacks_dls(proj_idx, list_path, data_key=None, image_key=None, crop=(0, 0, 0, 0), flat_field=None, dark_field=None, num_use=None, fix_zero_div=True)
A method for multi-position speckle-based phase-contrast tomography to get two stacks of reference images (speckle images) and sample images (at the same rotation angle from each tomographic dataset).
The method is specific to tomographic datasets acquired at Diamond Light Source (DLS) where projection-images, flat-field images, and dark-field images are in the same 3D array. A dataset named "image_key" inside the hdf/nxs file is used to distinguish image types.
- Parameters
proj_idx (int) – Index of a projection-image in a tomographic dataset.
list_path (list of str) – List of file paths (hdf/nxs format) to tomographic datasets.
data_key (str, optional) – Key to images. Automatically find the key if None.
image_key (str, list, tuple, ndarray, optional) – Key to 1d-array dataset for specifying image types. Automatically find the key if None. Can be used to pass the 1d-array manually.
crop (tuple of int, optional) – Crop the images from the edges, i.e. crop = (crop_top, crop_bottom, crop_left, crop_right).
flat_field (ndarray, optional) – 2D array or None. Used for flat-field correction if not None.
dark_field (ndarray, optional) – 2D array or None. Used for dark-field correction if not None.
num_use (int, optional) – Number of datasets used for stacking.
fix_zero_div (bool, optional) – Correct zeros to avoid zero-division problem down the processing line.
- Returns
ref_stack (ndarray) – 3D array. A stack of reference-images (returned only if reference-images are found).
sam_stack (ndarray) – 3D array. A stack of sample-images.
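A minimal usage sketch; the file pattern is a placeholder, and the data key and image key are assumed to be found automatically:
```python
import algotom.io.loadersaver as losa

# Datasets acquired at different speckle positions.
list_path = losa.find_file("data/speckle_tomo_*.nxs")
ref_stack, sam_stack = losa.get_reference_sample_stacks_dls(0, list_path)
```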
- algotom.io.loadersaver.get_reference_sample_stacks(proj_idx, ref_path, sam_path, ref_key, sam_key, crop=(0, 0, 0, 0), flat_field=None, dark_field=None, num_use=None, fix_zero_div=True)
Get two stacks of reference images (speckle images) and sample images (at the same rotation angle from each tomographic dataset). A method for multi-position speckle-based phase-contrast tomography.
- Parameters
proj_idx (int) – Index of a projection-image in a tomographic dataset.
ref_path (list of str) – List of file paths (hdf/nxs format) to reference-image datasets.
sam_path (list of str) – List of file paths (hdf/nxs format) to tomographic datasets.
ref_key (str) – Key to a reference-image dataset.
sam_key (str) – Key to a projection-image dataset.
crop (tuple of int, optional) – Crop the images from the edges, i.e. crop = (crop_top, crop_bottom, crop_left, crop_right).
flat_field (ndarray, optional) – 2D array or None. Used for flat-field correction if not None.
dark_field (ndarray, optional) – 2D array or None. Used for dark-field correction if not None.
num_use (int, optional) – Number of datasets used for stacking.
fix_zero_div (bool, optional) – Correct zeros to avoid zero-division problem down the processing line.
- Returns
ref_stack (ndarray) – 3D array. A stack of reference-images.
sam_stack (ndarray) – 3D array. A stack of sample-images.
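A minimal usage sketch; the file patterns and dataset keys are placeholders:
```python
import algotom.io.loadersaver as losa

ref_path = losa.find_file("data/ref_*.hdf")
sam_path = losa.find_file("data/tomo_*.hdf")
ref_stack, sam_stack = losa.get_reference_sample_stacks(
    0, ref_path, sam_path, "entry/data/data", "entry/data/data")
```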
- algotom.io.loadersaver.get_tif_stack(file_base, idx=None, crop=(0, 0, 0, 0), flat_field=None, dark_field=None, num_use=None, fix_zero_div=True)
Load tif images to a stack. Supplementary method for 'get_image_stack'.
- Parameters
file_base (str) – Folder path to tif images.
idx (int or None) – Load single or multiple images.
crop (tuple of int, optional) – Crop the images from the edges, i.e. crop = (crop_top, crop_bottom, crop_left, crop_right).
flat_field (ndarray, optional) – 2D array or None. Used for flat-field correction if not None.
dark_field (ndarray, optional) – 2D array or None. Used for dark-field correction if not None.
num_use (int, optional) – Number of images used for stacking.
fix_zero_div (bool, optional) – Correct zeros to avoid zero-division problem down the processing line.
- Returns
img_stack (ndarray) – 3D array. A stack of images.
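A minimal usage sketch; the folder path is a placeholder:
```python
import algotom.io.loadersaver as losa

# Load all tif images in the folder into a 3D stack.
img_stack = losa.get_tif_stack("data/ref_images/", idx=None)
```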
- algotom.io.loadersaver.get_image_stack(idx, paths, data_key=None, average=False, crop=(0, 0, 0, 0), flat_field=None, dark_field=None, num_use=None, fix_zero_div=True)
Get multiple images with the same index from multiple datasets (tif or hdf format). For tif images, if "paths" is a string (not a list), use idx=None to load all images. For getting a stack of images from a single hdf file, use the "load_hdf" method instead.
- Parameters
idx (int or None) – Index of an image in a dataset. Use None to load all images if only one dataset provided.
paths (list of str or str) – List of hdf/nxs file-paths, list of folders of tif-images, or a folder of tif-images.
data_key (str) – Required if the inputs are hdf/nxs files.
average (bool, optional) – Average images in a dataset if True.
crop (tuple of int, optional) – Crop the images from the edges, i.e. crop = (crop_top, crop_bottom, crop_left, crop_right).
flat_field (ndarray, optional) – 2D array or None. Used for flat-field correction if not None.
dark_field (ndarray, optional) – 2D array or None. Used for dark-field correction if not None.
num_use (int, optional) – Number of datasets used for stacking.
fix_zero_div (bool, optional) – Correct zeros to avoid zero-division problem down the processing line.
- Returns
img_stack (ndarray) – 3D array. A stack of images.
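A minimal usage sketch; the file pattern, folder path, and data key are placeholders:
```python
import algotom.io.loadersaver as losa

# Image at index 10 from each hdf dataset in the list.
paths = losa.find_file("data/speckle_tomo_*.nxs")
img_stack = losa.get_image_stack(10, paths, data_key="entry/data/data")

# All tif images from a single folder.
img_stack = losa.get_image_stack(None, "data/ref_images/")
```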
- algotom.io.loadersaver.load_image_multiple(list_path, ncore=None, prefer='threads')
Load list of images in parallel.
- Parameters
list_path (list of str) – List of file paths.
ncore (int or None) – Number of cpu-cores. Automatically selected if None.
prefer ({"threads", "processes"}) – Preferred backend for parallel processing.
- Returns
array_like – 3D array.
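A minimal usage sketch; the file pattern is a placeholder:
```python
import algotom.io.loadersaver as losa

list_path = losa.find_file("data/projections/*.tif")
img_stack = losa.load_image_multiple(list_path, ncore=4)
```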
- algotom.io.loadersaver.save_image_multiple(list_path, image_stack, axis=0, overwrite=True, ncore=None, prefer='threads', start_idx=0)
Save a 3D array to a list of tif images in parallel.
- Parameters
list_path (list of str or str) – List of output paths or a folder path.
image_stack (array_like) – 3D array.
axis (int) – Axis to slice data.
overwrite (bool) – Overwrite an existing file if True.
ncore (int or None) – Number of cpu-cores. Automatically selected if None.
prefer ({"threads", "processes"}) – Preferred backend for parallel processing.
start_idx (int) – Starting index of the output files if input is a folder.
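A minimal usage sketch; the output folder is a placeholder and the exact file-naming scheme may differ:
```python
import numpy as np
import algotom.io.loadersaver as losa

img_stack = np.float32(np.random.rand(16, 64, 64))  # dummy 3D array
losa.save_image_multiple("output/slices/", img_stack, axis=0, ncore=4)
```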