Command-line tools
nabu
: perform tomographic reconstruction
nabu --help
usage: nabu [-h] [--logfile LOGFILE] [--log_file LOG_FILE] [--slice SLICE]
[--gpu_mem_fraction GPU_MEM_FRACTION]
[--cpu_mem_fraction CPU_MEM_FRACTION]
[--max_chunk_size MAX_CHUNK_SIZE] [--phase_margin PHASE_MARGIN]
[--force_use_grouped_pipeline FORCE_USE_GROUPED_PIPELINE]
[--version]
input_file
Perform a tomographic reconstruction.
positional arguments:
input_file Nabu input file
options:
-h, --help show this help message and exit
--logfile LOGFILE Log file. Default is dataset_prefix_nabu.log
--log_file LOG_FILE Same as logfile. Deprecated, use --logfile instead.
--slice SLICE Slice index (or indices) to reconstruct, in the format
z1-z2. Default (empty) is the whole volume. This
overwrites the configuration file start_z and end_z.
You can also use --slice first, --slice last, --slice
middle, and --slice all
--gpu_mem_fraction GPU_MEM_FRACTION
Which fraction of GPU memory to use. Default is 0.9.
--cpu_mem_fraction CPU_MEM_FRACTION
Which fraction of CPU memory to use. Default is 0.9.
--max_chunk_size MAX_CHUNK_SIZE
Maximum chunk size to use.
--phase_margin PHASE_MARGIN
Specify an explicit phase margin to use when
performing phase retrieval.
--force_use_grouped_pipeline FORCE_USE_GROUPED_PIPELINE
Force nabu to use the 'grouped' reconstruction
pipeline - slower but should work for all big
datasets.
--version, -V show program's version number and exit
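For example, to reconstruct only the middle slice and write the log to a custom file (hypothetical file names):
nabu my_dataset.conf --slice middle --logfile my_reconstruction.log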
nabu-config
: create a configuration file for a tomographic reconstruction (to be provided to nabu
)
nabu-config --help
usage: nabu-config [-h] [--bootstrap] [--convert CONVERT] [--output OUTPUT]
[--nocomments] [--level LEVEL] [--dataset DATASET]
[--template TEMPLATE] [--helical HELICAL] [--overwrite]
Initialize a nabu configuration file
options:
-h, --help show this help message and exit
--bootstrap DEPRECATED, this is the default behavior. Bootstrap a
configuration file from scratch.
--convert CONVERT UNSUPPORTED. This option has no effect and will
disappear. Convert a PyHST configuration file to a nabu
configuration file.
--output OUTPUT Output filename
--nocomments Remove the comments in the configuration file (default:
False)
--level LEVEL Level of options to embed in the configuration file.
Can be 'required', 'optional', 'advanced'.
--dataset DATASET Pre-fill the configuration file with the dataset path.
--template TEMPLATE Use a template configuration file. Available are:
id19_pag, id16_holo, id16_ctf, id16a_fluo, bm05_pag.
You can also define your own templates via the
NABU_TEMPLATES_PATH environment variable.
--helical HELICAL Prepare a configuration file for helical reconstruction
--overwrite Whether to overwrite the output file if it exists
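For example, to pre-fill a configuration file with a dataset path and embed all advanced options (hypothetical paths):
nabu-config --dataset /path/to/scan.nx --level advanced --output my_dataset.conf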
nabu-helical
: Perform a helical tomographic reconstruction
nabu-helical --help
usage: nabu-helical [-h] [--logfile LOGFILE] [--log_file LOG_FILE]
[--slice SLICE] [--gpu_mem_fraction GPU_MEM_FRACTION]
[--cpu_mem_fraction CPU_MEM_FRACTION]
[--max_chunk_size MAX_CHUNK_SIZE]
[--phase_margin PHASE_MARGIN]
[--force_use_grouped_pipeline FORCE_USE_GROUPED_PIPELINE]
[--dry_run DRY_RUN] [--diag_zpro_run DIAG_ZPRO_RUN]
[--version]
input_file
Perform a helical tomographic reconstruction
positional arguments:
input_file Nabu input file
options:
-h, --help show this help message and exit
--logfile LOGFILE Log file. Default is dataset_prefix_nabu.log
--log_file LOG_FILE Same as logfile. Deprecated, use --logfile instead.
--slice SLICE Slice index (or indices) to reconstruct, in the format
z1-z2. Default (empty) is the whole volume. This
overwrites the configuration file start_z and end_z.
You can also use --slice first, --slice last, --slice
middle, and --slice all
--gpu_mem_fraction GPU_MEM_FRACTION
Which fraction of GPU memory to use. Default is 0.9.
--cpu_mem_fraction CPU_MEM_FRACTION
Which fraction of CPU memory to use. Default is 0.9.
--max_chunk_size MAX_CHUNK_SIZE
Maximum chunk size to use.
--phase_margin PHASE_MARGIN
Specify an explicit phase margin to use when
performing phase retrieval.
--force_use_grouped_pipeline FORCE_USE_GROUPED_PIPELINE
Force nabu to use the 'grouped' reconstruction
pipeline - slower but should work for all big
datasets.
--dry_run DRY_RUN Stops after printing some information on the
reconstruction layout.
--diag_zpro_run DIAG_ZPRO_RUN
Run the pipeline without reconstructing, collecting instead the contributing radio slices for angles theta + n*360. The given argument is the number of theta angles in the interval [0, 180[. The same number is taken, if available, in [180, 360[, and the whole is repeated, if available, in [0, 360[, for a total of 4*diag_zpro_run possible extracted contributions.
--version, -V show program's version number and exit
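For example, reconstructing only the first slice of a helical scan (hypothetical file name):
nabu-helical my_helical_dataset.conf --slice first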
nabu-multicor
: Perform a tomographic reconstruction of a single slice using multiple centers of rotation
nabu-multicor --help
usage: nabu-multicor [-h] [--logfile LOGFILE] [--log_file LOG_FILE]
[--gpu_mem_fraction GPU_MEM_FRACTION]
[--cpu_mem_fraction CPU_MEM_FRACTION]
[--max_chunk_size MAX_CHUNK_SIZE]
[--phase_margin PHASE_MARGIN]
[--force_use_grouped_pipeline FORCE_USE_GROUPED_PIPELINE]
[--version]
input_file slice cor
Perform a tomographic reconstruction of a single slice using multiple centers
of rotation
positional arguments:
input_file Nabu input file
slice Slice index (or indices) to reconstruct, in the format z1-z2. This overwrites the configuration file start_z and end_z. You can also use first, last, middle, and all.
cor Absolute positions of the center of rotation. It must
be a list of comma-separated scalars, or in the form
start:stop:step, where start, stop and step can all be
floating-point values.
options:
-h, --help show this help message and exit
--logfile LOGFILE Log file. Default is dataset_prefix_nabu.log
--log_file LOG_FILE Same as logfile. Deprecated, use --logfile instead.
--gpu_mem_fraction GPU_MEM_FRACTION
Which fraction of GPU memory to use. Default is 0.9.
--cpu_mem_fraction CPU_MEM_FRACTION
Which fraction of CPU memory to use. Default is 0.9.
--max_chunk_size MAX_CHUNK_SIZE
Maximum chunk size to use.
--phase_margin PHASE_MARGIN
Specify an explicit phase margin to use when
performing phase retrieval.
--force_use_grouped_pipeline FORCE_USE_GROUPED_PIPELINE
Force nabu to use the 'grouped' reconstruction
pipeline - slower but should work for all big
datasets.
--version, -V show program's version number and exit
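For example, to reconstruct slice 500 with centers of rotation scanned from 1000 to 1040 in steps of 0.5 pixels (hypothetical file name):
nabu-multicor my_dataset.conf 500 1000:1040:0.5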
nabu-reduce-dark-flat
: Compute reduced dark(s) and flat(s) of a dataset
nabu-reduce-dark-flat --help
usage: nabu-reduce-dark-flat [-h] [--entry ENTRY] [--dark-method DARK_METHOD]
[--flat-method FLAT_METHOD] [--overwrite]
[--debug]
[--output-reduced-flats-file OUTPUT_REDUCED_FLATS_FILE]
[--output-reduced-flats-data-path OUTPUT_REDUCED_FLATS_DATA_PATH]
[--output-reduced-darks-file OUTPUT_REDUCED_DARKS_FILE]
[--output-reduced-darks-data-path OUTPUT_REDUCED_DARKS_DATA_PATH]
[--version]
dataset
Compute reduced dark(s) and flat(s) of a dataset
positional arguments:
dataset Dataset (NXtomo or EDF folder) to be treated
options:
-h, --help show this help message and exit
--entry ENTRY an entry can be specified in the case of an NXtomo
--dark-method DARK_METHOD
Define the method to be used for computing darks.
Valid methods are ('mean', 'median', 'first', 'last',
'none')
--flat-method FLAT_METHOD
Define the method to be used for computing flats.
Valid methods are ('mean', 'median', 'first', 'last',
'none')
--overwrite Overwrite darks/flats if they exist
--debug Set logging system in debug mode
--output-reduced-flats-file OUTPUT_REDUCED_FLATS_FILE, --orfl OUTPUT_REDUCED_FLATS_FILE
Where to save reduced flats. If not provided, they will be dumped next to the .nx file as {scan_prefix}_flats.hdf5
--output-reduced-flats-data-path OUTPUT_REDUCED_FLATS_DATA_PATH, --output-reduced-flats-dp OUTPUT_REDUCED_FLATS_DATA_PATH, --orfdp OUTPUT_REDUCED_FLATS_DATA_PATH
Path in the output reduced flats file where the dataset is saved. If not provided, it will be saved at {entry}/flats/
--output-reduced-darks-file OUTPUT_REDUCED_DARKS_FILE, --ordf OUTPUT_REDUCED_DARKS_FILE
Where to save reduced darks. If not provided, they will be dumped next to the .nx file as {scan_prefix}_darks.hdf5
--output-reduced-darks-data-path OUTPUT_REDUCED_DARKS_DATA_PATH, --output-reduced-darks-dp OUTPUT_REDUCED_DARKS_DATA_PATH, --orddp OUTPUT_REDUCED_DARKS_DATA_PATH
Path in the output reduced darks file where the dataset is saved. If not provided, it will be saved at {entry}/darks/
--version, -V show program's version number and exit
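For example, to compute the reduced darks with 'mean' and the reduced flats with 'median' (hypothetical file name):
nabu-reduce-dark-flat my_scan.nx --dark-method mean --flat-method median --overwrite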
nabu-histogram
: extract or compute a histogram of a reconstructed volume
This command only works for HDF5 output.
Ideally, the histogram is computed during the reconstruction, so that the volume does not have to be re-loaded from disk after reconstruction.
- If the volume histogram is available in the final HDF5 file, then this command extracts this histogram and creates a dedicated file.
- If not, then the full histogram is computed from the volume, which takes time. You can tune how to compute the histogram (number of bins and amount of memory to use).
nabu-histogram --help
usage: nabu-histogram [-h] [--bins BINS]
[--chunk_size_slices CHUNK_SIZE_SLICES]
[--chunk_size_GB CHUNK_SIZE_GB] [--loglevel LOGLEVEL]
h5_file [h5_file ...] output_file
Extract/compute histogram of volume(s).
positional arguments:
h5_file HDF5 file(s). It can be one or several paths to HDF5
files. You can specify entry for each file with
/path/to/file.h5?entry0000
output_file Output file (HDF5)
options:
-h, --help show this help message and exit
--bins BINS Number of bins for the histogram, if it has to be computed. Default is one million.
--chunk_size_slices CHUNK_SIZE_SLICES
If the histogram has to be computed, specify the maximum subvolume size (in number of slices) for computing the histogram.
--chunk_size_GB CHUNK_SIZE_GB
If the histogram has to be computed, specify the maximum subvolume size (in gigabytes) for computing the histogram.
--loglevel LOGLEVEL Logging level. Can be 'debug', 'info', 'warning',
'error'. Default is 'info'.
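For example, to extract (or compute, with one million bins) the histograms of two reconstructions into a single file (hypothetical file names):
nabu-histogram /path/to/rec1.h5?entry0000 /path/to/rec2.h5?entry0000 histograms.h5 --bins 1000000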
nabu-double-flatfield
: compute the "double flat-field" of a dataset
"Double flat-field" is a way to remove rings artefacts during pre-processing of the projections. The principle is to compute the average of all projections, which creates a (artificial) flat image. Then, each projection is divided by this artificial flat.
This CLI tool is used to generate the "artificial flat", meaning that it simply does the mean of all projections (or more involved processing if necessary, please refer to the options). The resulting image can then be fed to the nabu configuration file.
nabu-double-flatfield --help
usage: nabu-double-flatfield [-h] [--entry ENTRY] [--flatfield FLATFIELD]
[--sigma SIGMA] [--loglevel LOGLEVEL]
[--chunk_size CHUNK_SIZE]
dataset output
A command-line utility for computing the double flatfield of a dataset.
positional arguments:
dataset Path to the dataset.
output Path to the output file (HDF5).
options:
-h, --help show this help message and exit
--entry ENTRY HDF5 entry (for HDF5 datasets). By default, the first
entry is taken.
--flatfield FLATFIELD
Whether to perform flat-field normalization. Default
is True.
--sigma SIGMA Enable high-pass filtering on double flatfield with
this value of 'sigma'
--loglevel LOGLEVEL Logging level. Can be 'debug', 'info', 'warning',
'error'. Default is 'info'.
--chunk_size CHUNK_SIZE
Maximum number of lines to read in each projection in
a single pass. Default is 100
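For example, to compute the double flat-field of a dataset with high-pass filtering enabled (hypothetical file names):
nabu-double-flatfield my_scan.nx my_scan_dff.h5 --sigma 1.0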
nabu-compare-volumes
: generic tool to compare two HDF5 volumes
This command enables you to "quickly" compare 3D volumes. The data is loaded by chunks to avoid overloading memory.
By default the full volumes are compared, but you can choose to stop the comparison as soon as the difference exceeds a given threshold.
The datasets have to be three-dimensional; this command won't work on 1D or 2D data.
nabu-compare-volumes --help
usage: nabu-compare-volumes [-h] [--entry ENTRY] [--hdf5_path HDF5_PATH]
[--chunk_size CHUNK_SIZE] [--stop_at STOP_AT]
[--statistics STATISTICS]
volume1 volume2
A command-line utility for comparing two volumes.
positional arguments:
volume1 Path to the first volume.
volume2 Path to the second volume.
options:
-h, --help show this help message and exit
--entry ENTRY HDF5 entry. By default, the first entry is taken.
--hdf5_path HDF5_PATH
Full HDF5 path to the data. Default is
<entry>/reconstruction/results/data
--chunk_size CHUNK_SIZE
Maximum number of images to read in each step. Default
is 100.
--stop_at STOP_AT Stop the comparison immediately when the difference
exceeds this threshold. Default is to compare the full
volumes.
--statistics STATISTICS
Compute statistics on the compared (sub-)volumes. Mind
that in this case the command output will not be
empty!
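For example, to compare two reconstructions and stop as soon as the difference exceeds a threshold (hypothetical file names):
nabu-compare-volumes rec1.h5 rec2.h5 --stop_at 1e-4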
nabu-composite-cor
: Find the center of rotation from a scan or a series of scans.
This is a shortcut for estimating the center of rotation from one or several scans. It will use the "composite CoR estimator" which does the estimation on several sinograms.
nabu-composite-cor --help
usage: nabu-composite-cor [-h] --filename_template FILENAME_TEMPLATE
[--entry_name ENTRY_NAME]
[--num_of_stages NUM_OF_STAGES]
[--oversampling OVERSAMPLING]
[--n_subsampling_y N_SUBSAMPLING_Y]
[--theta_interval THETA_INTERVAL]
[--first_stage FIRST_STAGE]
[--output_file OUTPUT_FILE] --cor_options
COR_OPTIONS [--version]
Application to extract, with the composite cor finder, the center of rotation for a scan or a series of scans
options:
-h, --help show this help message and exit
--filename_template FILENAME_TEMPLATE
The filename template. It can optionally contain a
segment equal to "X"*ndigits which will be replaced by
the stage number if several stages are requested by
the user
--entry_name ENTRY_NAME
Optional. The entry_name. It defaults to entry0000
--num_of_stages NUM_OF_STAGES
Optional. How many stages. Example: from 0 to 43 -> --num_of_stages 44.
--oversampling OVERSAMPLING
Oversampling in the search for the axis position. Defaults to 4.
--n_subsampling_y N_SUBSAMPLING_Y
Number of lines to take from each radio. Defaults to 10.
--theta_interval THETA_INTERVAL
Angular step for composing the image. Defaults to 5.
--first_stage FIRST_STAGE
Optional. The first stage.
--output_file OUTPUT_FILE
Optional. Where the list of cors will be written. Default is the filename suffixed with cors.txt. If the output filename ends with .json, the output will be in JSON format.
--cor_options COR_OPTIONS
The cor_options string used by Nabu. Example: --cor_options "side='near'; near_pos = 300.0; near_width = 20.0"
--version, -V show program's version number and exit
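The mandatory parameters are the filename template and the cor options. Here is an example:
nabu-composite-cor --filename_template HA1400_00p01_supersample-225MeV_0002_0001.nx --cor_options "side='near'; near_pos = 1400.0; near_width = 30.0"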
nabu-rotate
: apply a rotation on all the images of a dataset
This command only works for HDF5 datasets. It performs a rotation by a given angle on each projection image in the input dataset.
nabu-rotate --help
usage: nabu-rotate [-h] [--entry ENTRY] [--center CENTER]
[--loglevel LOGLEVEL] [--batchsize BATCHSIZE]
[--use_cuda USE_CUDA]
[--use_multiprocessing USE_MULTIPROCESSING]
dataset angle output
A command-line utility for performing a rotation on all the radios of a
dataset.
positional arguments:
dataset Path to the dataset. Only HDF5 format is supported for
now.
angle Rotation angle in degrees
output Path to the output file. Only HDF5 output is
supported. In the case of HDF5 input, the output file
will have the same structure.
options:
-h, --help show this help message and exit
--entry ENTRY HDF5 entry. By default, the first entry is taken.
--center CENTER Rotation center, in the form (x, y) where x (resp. y)
is the horizontal (resp. vertical) dimension, i.e
along the columns (resp. lines). Default is (Nx/2 -
0.5, Ny/2 - 0.5).
--loglevel LOGLEVEL Logging level. Can be 'debug', 'info', 'warning',
'error'. Default is 'info'.
--batchsize BATCHSIZE
Size of the batch of images to process. Default is 100
--use_cuda USE_CUDA Whether to use Cuda if available
--use_multiprocessing USE_MULTIPROCESSING
Whether to use multiprocessing if available
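For example, to rotate all the radios of a dataset by 90 degrees (hypothetical file names):
nabu-rotate my_scan.nx 90 my_scan_rotated.nx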
nabu-zsplit
: split a H5-NX file according to z-positions
This command is only relevant for HDF5 datasets.
Some datasets are acquired with the sample stage moving vertically between each scan ("Z-series"). This is different from helical scans where the vertical movement occurs during the scan. In the case of Z-series, the sample stage moves vertically once a scan is completed, resulting in a series of datasets with different "z" values.
This command is used to split such datasets into several files, where each file has a distinct "z" value.
By default, this command creates no additional data (no duplication) and uses HDF5 virtual datasets. Note that this command-line utility is intended as a temporary solution; please do not rely too much on it.
nabu-zsplit --help
usage: nabu-zsplit [-h] [--loglevel LOGLEVEL] [--entry ENTRY]
[--n_stages N_STAGES]
[--use_virtual_dataset USE_VIRTUAL_DATASET]
input_file output_directory
Split a HDF5-Nexus file according to z translation (z-series)
positional arguments:
input_file Input HDF5-Nexus file
output_directory Output directory to write split files.
options:
-h, --help show this help message and exit
--loglevel LOGLEVEL Logging level. Can be 'debug', 'info', 'warning',
'error'. Default is 'info'.
--entry ENTRY HDF5 entry to take in the input file. By default, the
first entry is taken.
--n_stages N_STAGES Number of expected stages (i.e. different 'z' values). By default it is inferred from the dataset.
--use_virtual_dataset USE_VIRTUAL_DATASET
Whether to use virtual datasets for the output file. Not using a virtual dataset duplicates data and thus results in big files! However, virtual datasets currently have performance issues. Default is False.
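For example, to split a Z-series with three expected stages (hypothetical paths):
nabu-zsplit my_zseries.nx ./split_scans --n_stages 3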
nabu-generate-info
: generate the volume ".info" file for legacy tools
Some post-processing tools need the .info file that was generated by PyHST. Although all the information in this file can be obtained from the HDF5 reconstruction file, producing this .info file keeps the post-processing tools working without modification.
nabu-generate-info --help
usage: nabu-generate-info [-h] [--hist_file HIST_FILE]
[--hist_entry HIST_ENTRY] [--bliss_file BLISS_FILE]
[--bliss_entry BLISS_ENTRY] [--info_file INFO_FILE]
[--edf_proj EDF_PROJ]
output
Generate a .info file
positional arguments:
output Output file name
options:
-h, --help show this help message and exit
--hist_file HIST_FILE
HDF5 file containing the histogram, either the
reconstruction file or a dedicated histogram file.
--hist_entry HIST_ENTRY
Histogram HDF5 entry. Defaults to the first available
entry.
--bliss_file BLISS_FILE
HDF5 master file produced by BLISS
--bliss_entry BLISS_ENTRY
Entry in the HDF5 master file produced by BLISS. By
default, take the first entry.
--info_file INFO_FILE
Path to the .info file, in the case of an EDF dataset
--edf_proj EDF_PROJ Path to a projection, in the case of an EDF dataset
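For example, to generate a .info file from a reconstruction file containing the histogram, together with the BLISS master file (hypothetical file names):
nabu-generate-info volume.info --hist_file my_reconstruction.hdf5 --bliss_file my_master_file.h5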
nabu-validator
: check that a dataset can be reconstructed
Goal
This application checks a dataset to ensure it can be reconstructed.
By default it will check that phase retrieval and reconstruction can be done. This entails checking for default values and valid links (in the case of virtual HDF5 datasets).
Usage
nabu-validator --help
usage: nabu-validator [-h] [--ignore-dark] [--ignore-flat]
[--no-phase-retrieval] [--check-nan]
[--skip-links-check] [--all-entries] [--extend]
path [entries ...]
Check if provided scan(s) seems valid to be reconstructed.
positional arguments:
path Data to validate (h5 file, edf folder)
entries Entries to be validated (in the case of a h5 file)
options:
-h, --help show this help message and exit
--ignore-dark Do not check for dark
--ignore-flat Do not check for flat
--no-phase-retrieval Do not check scan energy, distance and pixel size (required for phase retrieval)
--check-nan Check whether frames contain any NaN values.
--skip-links-check, --no-link-check
Do not check the frames dataset for broken links.
--all-entries Check all entries of the files (for HDF5 only for now)
--extend By default, only items with issues are displayed. Extend will display them all.
Example
On the bamboo_hercules dataset, some data has not been copied, which results in the following output:
$: nabu-validator bambou_hercules_0001_1_1.nx --extend
💥💣💥
3 issues found from hdf5 scan(master_file: bamboo_hercules/bambou_hercules_0001/bambou_hercules_0001_1_1.nx, entry: entry0000)
- projection(s) : INVALID - At least one dataset seems to have broken link
- dark(s) : INVALID - At least one dataset seems to have broken link
- flat(s) : INVALID - At least one dataset seems to have broken link
+ distance : VALID
+ energy : VALID
+ pixel size : VALID
Same example but with all related data copied (link):
👌👍👌
No issue found from hdf5 scan(master_file: bamboo_hercules/bambou_hercules_0001/bambou_hercules_0001_1_1.nx, entry: entry0000).
+ projection(s) : VALID
+ dark(s) : VALID
+ flat(s) : VALID
+ distance : VALID
+ energy : VALID
+ pixel size : VALID
nabu-cast
: apply volume casting
Goal
This application casts a user-provided volume to a given output volume and data type.
Usage
nabu-cast --help
usage: nabu-cast [-h] [--output-data-type OUTPUT_DATA_TYPE]
[--output_volume OUTPUT_VOLUME] [--output_type OUTPUT_TYPE]
[--data_min DATA_MIN] [--data_max DATA_MAX]
[--rescale_min_percentile RESCALE_MIN_PERCENTILE]
[--rescale_max_percentile RESCALE_MAX_PERCENTILE]
[--overwrite] [--compression-ratios COMPRESSION_RATIOS]
[--histogram-url HISTOGRAM_URL] [--remove-input-volume]
input_volume
positional arguments:
input_volume input volume. To define a volume you can either provide:
* a URL (recommended way) - see details below
* a path. For hdf5 and multitiff we expect a file path. For edf, tif and jp2k we expect a folder path. In this case we will try to deduce the volume from it.
url must be defined like:
- EDFVolume : edf:volume:/path/to/my/my_folder ; edf:volume:/path/to/my/my_folder?file_prefix=mybasename (if mybasename != folder name)
- HDF5Volume : hdf5:volume:/path/to/file_path?path=entry0000
- JP2KVolume : jp2k:volume:/path/to/my/my_folder ; jp2k:volume:/path/to/my/my_folder?file_prefix=mybasename (if mybasename != folder name)
- MultiTIFFVolume: tiff3d:volume:/path/to/tiff_file.tif
- TIFFVolume : tiff:volume:/path/to/my/my_folder ; tiff:volume:/path/to/my/my_folder?file_prefix=mybasename (if mybasename != folder name)
options:
-h, --help show this help message and exit
--output-data-type OUTPUT_DATA_TYPE
output data type. Valid values are numpy default type names, like uint8, uint16, int8, int16, int32, float32, float64.
--output_volume OUTPUT_VOLUME
output volume. Must be provided if 'output_type' isn't. Must look like:
To define a volume you can either provide:
* a URL (recommended way) - see details below
* a path. For hdf5 and multitiff we expect a file path. For edf, tif and jp2k we expect a folder path. In this case we will try to deduce the volume from it.
url must be defined like:
- EDFVolume : edf:volume:/path/to/my/my_folder ; edf:volume:/path/to/my/my_folder?file_prefix=mybasename (if mybasename != folder name)
- HDF5Volume : hdf5:volume:/path/to/file_path?path=entry0000
- JP2KVolume : jp2k:volume:/path/to/my/my_folder ; jp2k:volume:/path/to/my/my_folder?file_prefix=mybasename (if mybasename != folder name)
- MultiTIFFVolume: tiff3d:volume:/path/to/tiff_file.tif
- TIFFVolume : tiff:volume:/path/to/my/my_folder ; tiff:volume:/path/to/my/my_folder?file_prefix=mybasename (if mybasename != folder name)
--output_type OUTPUT_TYPE
output type. Must be provided if 'output_volume' isn't. Valid values are ('h5', 'hdf5', 'nexus', 'nx', 'npy', 'npz', 'tif', 'tiff', 'jp2', 'jp2k', 'j2k', 'jpeg2000', 'edf', 'vol')
--data_min DATA_MIN value the volume is clamped to as the new minimum when casting. Any lower value will also be clamped to this value.
--data_max DATA_MAX value the volume is clamped to as the new maximum when casting. Any higher value will also be clamped to this value.
--rescale_min_percentile RESCALE_MIN_PERCENTILE
used to determine data_min if not provided. Expected as percentage. Default is 10%
--rescale_max_percentile RESCALE_MAX_PERCENTILE
used to determine data_max if not provided. Expected as percentage. Default is 90%
--overwrite Overwrite file or dataset if exists
--compression-ratios COMPRESSION_RATIOS
Define compression ratios for jp2k. Expected as a list like [20, 10, 1] for [quality layer 1, quality layer 2, quality layer 3]... Pass parameter to glymur. See https://glymur.readthedocs.io/en/latest/how_do_i.html#write-images-with-different-compression-ratios-for-different-layers for more details
--histogram-url HISTOGRAM_URL
Provide the URL of the histogram, like '/{path}/my_file.hdf5?path/to/my/data', where my_file.hdf5 is the file containing the histogram, located under 'path', and 'path/to/my/data' is the location of the HDF5 dataset.
--remove-input-volume, --remove
Whether to remove the input volume after cast. Default is False.
examples
A.
The following line:
nabu-cast hdf5:volume:./bambou_hercules_0001slice_1080.hdf5?path=entry0000/reconstruction --output-data-type uint8 --output_volume tiff:volume:./cast_volume?file_prefix=bambou_hercules_0001slice_1080
does:
- convert the volume contained in 'bambou_hercules_0001slice_1080.hdf5' at the 'entry0000/reconstruction' path (the volume dataset is in "entry0000/reconstruction/results/data")
- ensure the output volume will be:
  - a uint8 dataset
  - a folder containing tif files under the ./cast_volume folder. Files will be named "bambou_hercules_0001slice_1080_0000.tif", "bambou_hercules_0001slice_1080_0001.tif"...
B.
The following line:
nabu-cast edf:volume:./volume_folder?file_prefix=5.06 --output-data-type uint16 --output_volume hdf5:volume:./cast_volume.hdf5?path=volume
does:
- convert the volume contained in 'volume_folder', with edf files named like 5.06_XXXX.edf
- ensure the output volume will be:
  - a uint16 dataset
  - saved in an hdf5 file named cast_volume.hdf5 at the /volume group. As a result, the volume dataset will be stored in /volume/results/data.
nabu-helical-prepare-weights-double
: create the weights map file, which is mandatory for the helical pipeline.
This command prepares an hdf5 file containing a default weight map and a double flat-field to be used by the helical pipeline. The weights are based on averaged flat-fields of the dataset, while the double flat-field is set to one. The usage is the following:
nabu-helical-prepare-weights-double nexus_file_name entry_name [target_file name [transition_width]]
where the arguments within square brackets are optional. The transition_width is given in pixels and determines how the weights go to zero close to the borders.
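For example, with a transition width of 50 pixels (hypothetical file names):
nabu-helical-prepare-weights-double my_scan.nx entry0000 my_weights.h5 50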
nabu-poly2map
: build a distortion map.
This application builds two 2D arrays, map_x and map_z, both with shape (nz, nx). These maps are meant to be used to generate a corrected detector image: pixel (i, j) of the corrected image is obtained by interpolating the raw data at position (map_z(i, j), map_x(i, j)). The map is determined by a user-given polynomial P(rs) in the radial variable rs = sqrt( (z - center_z)**2 + (x - center_x)**2 ) / (nx/2), where center_z and center_x give the center around which the deformation is centered. The "perfect" position (zp, xp) that would be observed on a perfect detector for a photon observed at pixel (z, x) of the distorted detector is:
(zp, xp) = (center_z, center_x) + P(rs) * (z - center_z, x - center_x)
with the polynomial
P(rs) = rs * (1 + c2 * rs**2 + c4 * rs**4)
The map is rescaled and shifted so that a perfect match is realised at the borders of a horizontal line passing through the center. This ensures coherence with the pixel size calibration procedure, which is performed by moving a needle horizontally and reading the motor positions at the extreme positions. The maps are written to the target file, created as an hdf5 file, in the datasets "/coords_source_x" and "/coords_source_z". The URLs of these two maps can be used for the detector correction of type map_xz in the nabu configuration file, as in this example:
[dataset]
...
detector_distortion_correction = map_xz
detector_distortion_correction_options = map_x="silx:./map_coordinates.h5?path=/coords_source_x" ; map_z="silx:./map_coordinates.h5?path=/coords_source_z"
usage:
nabu-poly2map --help
usage: nabu-poly2map [-h] --nz NZ --nx NX --center_z CENTER_Z --center_x
CENTER_X --c4 C4 --c2 C2 --target_file TARGET_FILE
[--axis_pos AXIS_POS] [--loglevel LOGLEVEL] [--version]
This method is meant for those applications which want to use the functionalities of the poly2map entry point through a standard Python API. The argument args_dict must contain the keys that you can find in cli_configs.py: CreateDistortionMapHorizontallyMatchedFromPolyConfig. Look at this file for the variables, their meaning and defaults. Parameters: args_dict : dict, a dictionary containing the keys center_x, center_z, nz, nx, c2, c4, axis_pos. Returns: map_x, map_z, new_rot_pos.
options:
-h, --help show this help message and exit
--nz NZ vertical dimension of the detector
--nx NX horizontal dimension of the detector
--center_z CENTER_Z vertical position of the optical center
--center_x CENTER_X horizontal position of the optical center
--c4 C4 order 4 coefficient
--c2 C2 order 2 coefficient
--target_file TARGET_FILE
The map output filename
--axis_pos AXIS_POS Optional argument. If given it will be corrected for
use with the produced map. The value is printed, or
given as return argument if the utility is used from a
script
--loglevel LOGLEVEL Logging level. Can be 'debug', 'info', 'warning',
'error'. Default is 'info'.
--version, -V show program's version number and exit
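For example, to build distortion maps for a 2048x2048 detector with the optical center in the middle (the c2 and c4 coefficient values below are hypothetical):
nabu-poly2map --nz 2048 --nx 2048 --center_z 1024 --center_x 1024 --c2 0.01 --c4 0.001 --target_file map_coordinates.h5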
nabu-stitching
: perform stitching on a set of volumes or projections
See the stitching section of the documentation.
nabu-stitching --help
usage: nabu-stitching [-h] [--loglevel LOGLEVEL] [--only-create-master-file]
input-file
Run stitching from a configuration file. The configuration can be obtained from
`nabu-stitching-config`
positional arguments:
input-file Nabu configuration file for stitching (can be obtained from the nabu-stitching-config command)
options:
-h, --help show this help message and exit
--loglevel LOGLEVEL Logging level. Can be 'debug', 'info', 'warning',
'error'. Default is 'info'.
--only-create-master-file
Will create the master file with all sub-files (volumes or scans). It expects the processing to be finished. This can be useful if all slurm jobs have been submitted but you have been kicked out of the cluster, or if you need to relaunch some failing slurm jobs manually for any reason.
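For example, running a stitching with verbose logging (hypothetical file name):
nabu-stitching my_stitching.conf --loglevel debug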
nabu-stitching-config
: create a configuration file for a stitching (to provide to nabu-stitching
)
nabu-stitching-config --help
usage: nabu-stitching-config [-h] [--stitching-type STITCHING_TYPE]
[--level LEVEL] [--output OUTPUT]
[--datasets [DATASETS ...]]
Initialize a 'nabu-stitching' configuration file
options:
-h, --help show this help message and exit
--stitching-type STITCHING_TYPE
User can provide stitching type to filter some
parameters. Must be in [<StitchingType.Y_PREPROC:
'y-preproc'>, <StitchingType.Z_PREPROC: 'z-preproc'>,
<StitchingType.Z_POSTPROC: 'z-postproc'>].
--level LEVEL Level of options to embed in the configuration file.
Can be 'required', 'optional', 'advanced'.
--output OUTPUT output file to store the configuration
--datasets [DATASETS ...]
datasets to be stitched together
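For example, to create a z-preproc stitching configuration for three scans (hypothetical file names):
nabu-stitching-config --stitching-type z-preproc --output my_stitching.conf --datasets scan_000.nx scan_001.nx scan_002.nx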
nabu-shrink-dataset
: Shrink an NX dataset
nabu-shrink-dataset --help
usage: nabu-shrink-dataset [-h] [--entry ENTRY] [--binning BINNING]
[--subsampling SUBSAMPLING] [--threads THREADS]
input_file output_file
Shrink an NX dataset
positional arguments:
input_file Path to the NX file
output_file Path to the output NX file
options:
-h, --help show this help message and exit
--entry ENTRY HDF5 entry in the file. Default is to take the first
entry.
--binning BINNING Binning factor, in the form (bin_z, bin_x). Each image
(projection, dark, flat) will be binned by this factor
--subsampling SUBSAMPLING
Subsampling factor for projections (and metadata)
--threads THREADS Number of threads to use for binning. Default is 1.
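For example, to bin each image by a factor of 2 in both dimensions and keep every second projection (hypothetical file names; the binning follows the (bin_z, bin_x) form documented above):
nabu-shrink-dataset my_scan.nx my_scan_small.nx --binning "(2, 2)" --subsampling 2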
nabu-diag2rot
: Find the cor as a function of z translation and write an hdf5 file which contains interpolable tables.
nabu-diag2rot --help
usage: nabu-diag2rot [-h] --diag_file DIAG_FILE [--near NEAR]
[--original_scan ORIGINAL_SCAN] [--entry_name ENTRY_NAME]
[--near_width NEAR_WIDTH] [--low_pass LOW_PASS]
[--high_pass HIGH_PASS]
[--linear_interpolation LINEAR_INTERPOLATION]
[--use_l1_norm USE_L1_NORM] --cor_file COR_FILE
[--version]
Find the cor as a function of z translation and write an hdf5 file which contains interpolable tables. This file can be used subsequently with the correct-rot utility.
options:
-h, --help show this help message and exit
--diag_file DIAG_FILE
The reconstruction file obtained by nabu-helical using
the diag_zpro_run option
--near NEAR This is a relative offset with respect to the center of the radios. The cor will be searched around the provided value. If not given, the optional parameter original_scan must be the original nexus file, and the estimated cor will be taken from there. The entry_name parameter must also be provided in this case.
--original_scan ORIGINAL_SCAN
The original nexus file. Required only if near
parameter is not given
--entry_name ENTRY_NAME
The original nexus file entry name. Required only if
near parameter is not given
--near_width NEAR_WIDTH
Radius around the near value for the horizontal correlation searching the cor.
--low_pass LOW_PASS Data are filtered horizontally; details smaller than the provided value are filtered out. Default is 1 (Gaussian sigma).
--high_pass HIGH_PASS
Data are filtered horizontally; bumps larger than the provided value are filtered out. Default is 10 (Gaussian sigma).
--linear_interpolation LINEAR_INTERPOLATION
If True (default), the cor will vary linearly with z_transl.
--use_l1_norm USE_L1_NORM
If False, an L2 norm will be used for the error metric, considering the overlaps; if True, an L1 norm will be used.
--cor_file COR_FILE The file where the information to correct the cor is written.
--version, -V show program's version number and exit
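For example, using a diagnostics file produced by nabu-helical with the diag_zpro_run option (hypothetical file names):
nabu-diag2rot --diag_file my_diags.h5 --near 0.0 --cor_file my_cor.h5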
nabu-display-timings
: Display reconstruction performances from a log file
nabu-display-timings --help
usage: nabu-display-timings [-h] [--cutoff CUTOFF] [--type TYPE] logfile
Display reconstruction performances from a log file
positional arguments:
logfile Path to the log file.
options:
-h, --help show this help message and exit
--cutoff CUTOFF Cut-off parameter. Timings below this value will be discarded. For an upper-bound cutoff, provide a value in the form 'low, high'.
--type TYPE How to display the result. Default is a pie chart. Possible
values are: pie, bars, violin
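For example, to display the timings above a cutoff as a bar chart (hypothetical file name):
nabu-display-timings my_reconstruction.log --cutoff 0.5 --type bars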