Command-line tools

nabu-histogram: extract or compute a histogram of a reconstructed volume

This command only works for HDF5 output.

Ideally, the histogram is computed during the reconstruction, so that the volume does not have to be re-loaded from disk after reconstruction.

  • If the volume histogram is available in the final HDF5 file, then this command extracts this histogram and creates a dedicated file.

  • If not, then the full histogram is computed from the volume, which takes time. You can tune how to compute the histogram (number of bins and amount of memory to use).
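The chunk-by-chunk computation can be sketched as follows. This is a minimal numpy illustration, not the tool's actual implementation: the real command reads the reconstructed HDF5 dataset and exposes the number of bins and the memory budget as options, while here `chunk_size` simply bounds how many slices are loaded at once.

```python
import numpy as np

def chunked_histogram(volume, n_bins=1000, chunk_size=100):
    """Histogram a 3D volume slab by slab to bound memory use.

    `volume` stands in for the reconstructed dataset the real tool
    reads; anything sliceable like an array (e.g. a h5py dataset)
    works here.
    """
    # First pass: global value range, also computed slab by slab
    vmin, vmax = np.inf, -np.inf
    for start in range(0, volume.shape[0], chunk_size):
        slab = np.asarray(volume[start:start + chunk_size])
        vmin, vmax = min(vmin, slab.min()), max(vmax, slab.max())
    # Second pass: accumulate fixed-bin partial histograms
    hist = np.zeros(n_bins, dtype=np.int64)
    for start in range(0, volume.shape[0], chunk_size):
        slab = np.asarray(volume[start:start + chunk_size])
        hist += np.histogram(slab, bins=n_bins, range=(vmin, vmax))[0]
    return hist

vol = np.random.default_rng(0).normal(size=(8, 16, 16)).astype("f4")
hist = chunked_histogram(vol, n_bins=64, chunk_size=3)
```

Because every slab uses the same bin edges, the partial histograms sum to exactly the histogram of the full volume.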

nabu-double-flatfield: compute the “double flat-field” of a dataset

“Double flat-field” is a way to remove ring artefacts during pre-processing of the projections. The principle is to compute the average of all projections, which yields an (artificial) flat image. Each projection is then divided by this artificial flat.

This CLI tool generates the “artificial flat”: it simply takes the mean of all projections (or applies more involved processing if necessary; please refer to the options). The resulting image can then be fed to the nabu configuration file.
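The principle can be sketched in a few lines of numpy. This only illustrates the plain mean; the real tool offers more involved processing through its options, and `double_flatfield` is an illustrative name, not a nabu function.

```python
import numpy as np

def double_flatfield(projections):
    """Return the 'artificial flat' (mean of all projections) and the
    corrected stack obtained by dividing each projection by it."""
    flat = projections.mean(axis=0)
    corrected = projections / flat
    return flat, corrected

# Toy stack: projection i is uniformly filled with the value i+1
projs = np.ones((10, 4, 4), dtype="f4") * np.arange(1, 11, dtype="f4")[:, None, None]
flat, corrected = double_flatfield(projs)
```

Any structure common to all projections (such as rings caused by fixed detector defects) ends up in `flat` and is divided out.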

nabu-compare-volumes: compare two HDF5 volumes

This command enables you to “quickly” compare 3D volumes. The data is loaded by chunks to avoid overloading memory.

By default the full volumes are compared, but you can choose to stop the comparison as soon as the difference exceeds a given threshold.

The datasets have to be three-dimensional; this command won’t work on 1D or 2D data.
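A minimal sketch of the chunked comparison with early stop. `volumes_equal`, `chunk_size` and `threshold` are illustrative names, not the tool's actual options; the real command reads the HDF5 datasets chunk by chunk in the same spirit.

```python
import numpy as np

def volumes_equal(vol_a, vol_b, threshold=1e-6, chunk_size=50):
    """Compare two 3D volumes slab by slab; stop as soon as the
    absolute difference exceeds `threshold`."""
    if vol_a.shape != vol_b.shape:
        return False
    for start in range(0, vol_a.shape[0], chunk_size):
        a = np.asarray(vol_a[start:start + chunk_size])
        b = np.asarray(vol_b[start:start + chunk_size])
        if np.max(np.abs(a - b)) > threshold:
            return False  # early exit: no need to read further chunks
    return True
```

The early exit is what makes the comparison “quick” when the volumes differ near the beginning.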

nabu-shrink-volume: reduce a dataset

This command will perform binning and/or subsampling on the projections of a NX dataset.
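Binning and subsampling can be sketched as follows, with illustrative numpy on an in-memory stack; the real command operates on the projections of a NX dataset and `shrink_projections` is a hypothetical helper.

```python
import numpy as np

def shrink_projections(projs, binning=2, subsampling=1):
    """Keep every `subsampling`-th projection, then bin each image by
    `binning` x `binning` (mean of each block)."""
    projs = projs[::subsampling]
    n, h, w = projs.shape
    h, w = h - h % binning, w - w % binning  # crop to a multiple of the bin size
    projs = projs[:, :h, :w]
    return projs.reshape(n, h // binning, binning, w // binning, binning).mean(axis=(2, 4))
```

The reshape trick groups each 2x2 (or NxN) block along its own axes so a single `.mean()` does the binning.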

nabu-composite-cor: find the center of rotation from a scan or a series of scans

This is a shortcut for estimating the center of rotation from one or several scans. It uses the “composite CoR estimator”, which performs the estimation on several sinograms.

nabu-rotate: apply a rotation on all the images of a dataset

This command only works for HDF5 datasets. It performs a rotation by a given angle on each projection image in the input dataset.
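The per-image rotation can be sketched with scipy's generic image rotation. This is an assumption for illustration (the actual command has its own implementation and options, and streams images from the HDF5 dataset rather than holding the whole stack in memory):

```python
import numpy as np
from scipy import ndimage

def rotate_stack(projections, angle_deg):
    """Rotate every projection image by the same angle (degrees),
    keeping the original image shape."""
    return np.stack(
        [ndimage.rotate(img, angle_deg, reshape=False, order=1) for img in projections]
    )
```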


nabu-zsplit: split a H5-NX file according to z-positions

This command is only relevant for HDF5 datasets.

Some datasets are acquired with the sample stage moving vertically between each scan (“Z-series”). This is different from helical scans where the vertical movement occurs during the scan. In the case of Z-series, the sample stage moves vertically once a scan is completed, resulting in a series of datasets with different “z” values.

This command is used to split such datasets into several files, where each file has a distinct “z” value.

By default, this command creates no additional data (no duplication) and uses HDF5 virtual datasets.
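The grouping step can be sketched as follows. `split_by_z` is a hypothetical helper: the real command reads the z translations from the NX file, then writes one file per group, by default through HDF5 virtual datasets so no frame data is copied.

```python
import numpy as np

def split_by_z(z_positions):
    """Group frame indices by their 'z' translation value.

    Returns {z: [frame indices]}; each group corresponds to one
    output file of the split.
    """
    groups = {}
    for idx, z in enumerate(z_positions):
        groups.setdefault(float(z), []).append(idx)
    return groups
```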

nabu-generate-info: generate the volume “.info file” for legacy tools

Some post-processing tools need the .info file that was generated by PyHST. Although all the information in this file can be obtained from the HDF5 reconstruction file, producing this .info file helps keep the post-processing tools working without modification.

nabu-validator: check that a dataset can be reconstructed

Goal

This application checks a dataset to ensure it can be reconstructed.

By default it will check that phase retrieval and reconstruction can be done. This entails checking for default values and valid links (in the case of virtual HDF5 datasets).

Usage

$: nabu-validator [-h] [--ignore-dark] [--ignore-flat] [--no-phase-retrieval] [--check-nan] [--skip-links-check] [--all-entries] [--extend] path [entries [entries ...]]

Check if the provided scan(s) seem valid for reconstruction.

positional arguments:
  path                  Data to validate (h5 file, edf folder)
  entries               Entries to be validated (in the case of a h5 file)

optional arguments:
  -h, --help            show this help message and exit
  --ignore-dark         Do not check for dark
  --ignore-flat         Do not check for flat
  --no-phase-retrieval  Do not check scan energy, distance and pixel size (needed for phase retrieval)
  --check-nan           Check whether frames contain any NaN values.
  --skip-links-check, --no-link-check
                        Do not check the frames dataset for broken links.
  --all-entries         Check all entries of the files (for HDF5 only for now)
  --extend              By default only items with issues are displayed. --extend displays them all

Example

On the bamboo_hercules dataset, some data has not been copied, which results in the following output:

$: nabu-validator bambou_hercules_0001_1_1.nx --extend
💥💣💥
 3 issues found from hdf5 scan(master_file: bamboo_hercules/bambou_hercules_0001/bambou_hercules_0001_1_1.nx, entry: entry0000)
   - projection(s)  : INVALID - At least one dataset seems to have broken link
   - dark(s)        : INVALID - At least one dataset seems to have broken link
   - flat(s)        : INVALID - At least one dataset seems to have broken link
   + distance       :  VALID
   + energy         :  VALID
   + pixel size     :  VALID

Same example but with all related data copied (link):

👌👍👌
 No issue found from hdf5 scan(master_file: bamboo_hercules/bambou_hercules_0001/bambou_hercules_0001_1_1.nx, entry: entry0000).
   + projection(s)  :  VALID
   + dark(s)        :  VALID
   + flat(s)        :  VALID
   + distance       :  VALID
   + energy         :  VALID
   + pixel size     :  VALID

nabu-cast: apply volume casting

Goal

This application casts a user-provided input volume to an output volume with a given data type.

Usage

usage: nabu-cast [-h] [--output-data-type OUTPUT_DATA_TYPE]
                 [--output_volume OUTPUT_VOLUME] [--output_type OUTPUT_TYPE]
                 [--data_min DATA_MIN] [--data_max DATA_MAX]
                 [--rescale_min_percentile RESCALE_MIN_PERCENTILE]
                 [--rescale_max_percentile RESCALE_MAX_PERCENTILE]
                 [--overwrite]
                 input_volume

positional arguments:
  input_volume          input volume. To define a volume you can either provide: 
                        
                            * a URL (recommended way) - see details below 
                        
                            * a path. For hdf5 and multitiff we expect a file path. For edf, tif and jp2k we expect a folder path. In this case we will try to deduce the Volume from it. 
                         
                            url must be defined like: 
                        - EDFVolume      : edf:volume:/path/to/my/my_folder ; edf:volume:/path/to/my/my_folder?file_prefix=mybasename (if mybasename != folder name)
                        - HDF5Volume     : hdf5:volume:/path/to/file_path?path=entry0000
                        - JP2KVolume     : jp2k:volume:/path/to/my/my_folder ; jp2k:volume:/path/to/my/my_folder?file_prefix=mybasename (if mybasename != folder name)
                        - MultiTIFFVolume: tiff3d:volume:/path/to/tiff_file.tif
                        - TIFFVolume     : tiff:volume:/path/to/my/my_folder ; tiff:volume:/path/to/my/my_folder?file_prefix=mybasename (if mybasename != folder name)
                            

optional arguments:
  -h, --help            show this help message and exit
  --output-data-type OUTPUT_DATA_TYPE
                        output data type. Valid values are numpy default type names (uint8, uint16, int8, int16, int32, float32, float64)
  --output_volume OUTPUT_VOLUME
                        output volume. Must be provided if 'output_type' isn't. Must look like: 
                        To define a volume you can either provide: 
                        
                            * a URL (recommended way) - see details below 
                        
                            * a path. For hdf5 and multitiff we expect a file path. For edf, tif and jp2k we expect a folder path. In this case we will try to deduce the Volume from it. 
                         
                            url must be defined like: 
                        - EDFVolume      : edf:volume:/path/to/my/my_folder ; edf:volume:/path/to/my/my_folder?file_prefix=mybasename (if mybasename != folder name)
                        - HDF5Volume     : hdf5:volume:/path/to/file_path?path=entry0000
                        - JP2KVolume     : jp2k:volume:/path/to/my/my_folder ; jp2k:volume:/path/to/my/my_folder?file_prefix=mybasename (if mybasename != folder name)
                        - MultiTIFFVolume: tiff3d:volume:/path/to/tiff_file.tif
                        - TIFFVolume     : tiff:volume:/path/to/my/my_folder ; tiff:volume:/path/to/my/my_folder?file_prefix=mybasename (if mybasename != folder name)
                            
  --output_type OUTPUT_TYPE
                        output type. Must be provided if 'output_volume' isn't. Valid values are ('h5', 'hdf5', 'nexus', 'nx', 'npy', 'npz', 'tif', 'tiff', 'jp2', 'jp2k', 'j2k', 'jpeg2000', 'edf', 'vol')
  --data_min DATA_MIN   value used as the new minimum of the cast volume. Any lower value will be clamped to this value.
  --data_max DATA_MAX   value used as the new maximum of the cast volume. Any higher value will be clamped to this value.
  --rescale_min_percentile RESCALE_MIN_PERCENTILE
                        used to determine data_min if not provided. Expected as percentage. Default is 10%
  --rescale_max_percentile RESCALE_MAX_PERCENTILE
                        used to determine data_max if not provided. Expected as percentage. Default is 90%
  --overwrite           Overwrite file or dataset if exists
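The clamping and rescaling logic described by the options above can be sketched as follows. The parameter names mirror the CLI flags, but `cast_volume` itself is a hypothetical helper, and the linear rescale to the full output integer range is an assumption for illustration.

```python
import numpy as np

def cast_volume(data, output_dtype=np.uint8,
                data_min=None, data_max=None,
                rescale_min_percentile=10, rescale_max_percentile=90):
    """Clamp `data` to [data_min, data_max] (percentile-based when not
    given) and rescale linearly to the output integer type's range."""
    if data_min is None:
        data_min = np.percentile(data, rescale_min_percentile)
    if data_max is None:
        data_max = np.percentile(data, rescale_max_percentile)
    clamped = np.clip(data, data_min, data_max)
    info = np.iinfo(output_dtype)
    scaled = (clamped - data_min) / (data_max - data_min)  # now in [0, 1]
    return (scaled * (info.max - info.min) + info.min).astype(output_dtype)
```

With this logic, everything below `data_min` maps to the output type's minimum and everything above `data_max` to its maximum.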

examples

A.

The following line:

nabu-cast hdf5:volume:./bambou_hercules_0001slice_1080.hdf5?path=entry0000/reconstruction --output-data-type uint8 --output_volume tiff:volume:./cast_volume?file_prefix=bambou_hercules_0001slice_1080

does:

  • convert the volume contained in ‘bambou_hercules_0001slice_1080’ at the path ‘entry0000/reconstruction’ (the volume dataset is in “entry0000/reconstruction/results/data”)

  • ensure the output volume will be

    • a uint8 dataset

    • a folder containing tif files under the ./cast_volume folder. Files will be named “bambou_hercules_0001slice_1080_0000.tif”, “bambou_hercules_0001slice_1080_0001.tif”…

B.

The following line:

nabu-cast edf:volume:./volume_folder?file_prefix=5.06 --output-data-type uint16 --output_volume hdf5:volume:./cast_volume.hdf5?path=volume

does:

  • convert the volume contained in ‘volume_folder’, made of edf files named like 5.06_XXXX.edf

  • ensure the output volume will be

    • a uint16 dataset

    • saved in an HDF5 file named cast_volume.hdf5 under the /volume group. As a result, the volume dataset will be stored in /volume/results/data.

nabu-helical-prepare-weights-double: create the weights map file which is mandatory for the helical pipeline.

This command prepares an HDF5 file containing a default weight map and a double flat-field, to be used by the helical pipeline. The weights are based on the averaged flat-fields of the dataset, while the double flat-field is set to one. The usage is the following:

nabu-helical-prepare-weights-double   nexus_file_name   entry_name    [target_file name [transition_width]]

Where the arguments within square brackets are optional. The transition_width is given in pixels and determines how the weights fall to zero close to the borders.
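Assuming a linear transition (an assumption: the text above only says the weights go to zero over transition_width pixels near the borders), a 1D weight profile could look like the following sketch. `border_weights` is a hypothetical helper, not part of nabu.

```python
import numpy as np

def border_weights(width, transition_width):
    """1D weight profile: 1.0 in the interior, falling linearly to
    zero over `transition_width` pixels at each border."""
    w = np.ones(width)
    ramp = np.linspace(0.0, 1.0, transition_width, endpoint=False)
    w[:transition_width] = ramp
    w[-transition_width:] = ramp[::-1]
    return w
```

The real weight map also folds in the averaged flat-fields of the dataset, as described above.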

nabu-composite-cor: Application to extract the center of rotation for a scan or a series of scans.

The usage is the following:

 nabu-composite-cor [-h] --filename_template FILENAME_TEMPLATE [--entry_name ENTRY_NAME] [--num_of_stages NUM_OF_STAGES] [--oversampling OVERSAMPLING]
                          [--n_subsampling_y N_SUBSAMPLING_Y] [--theta_interval THETA_INTERVAL] [--first_stage FIRST_STAGE] [--output_file OUTPUT_FILE] --cor_options COR_OPTIONS
                          [--version]

The mandatory parameters are the filename and the cor options. Here is an example:

nabu-composite-cor --filename_template  HA1400_00p01_supersample-225MeV_0002_0001.nx --cor_options  "side='near'; near_pos = 1400.0; near_width = 30.0"

The whole set of parameters is the following:

  -h, --help            show this help message and exit
  --filename_template FILENAME_TEMPLATE
                        The filename template. It can optionally contain a segment equal to "X"*ndigits which will be replaced by the stage number if several stages are requested by the
                        user
  --entry_name ENTRY_NAME
                        Optional. The entry_name. It defaults to entry0000
  --num_of_stages NUM_OF_STAGES
                        Optional. How many stages. Example: from 0 to 43 -> --num_of_stages 44.
  --oversampling OVERSAMPLING
                        Oversampling in the search for the axis position. Defaults to 4
  --n_subsampling_y N_SUBSAMPLING_Y
                        How many lines we are going to take from each radio. Defaults to 10.
  --theta_interval THETA_INTERVAL
                        Angular step for composing the image. Defaults to 5
  --first_stage FIRST_STAGE
                        Optional. The first stage.
  --output_file OUTPUT_FILE
                        Optional. Where the list of cors will be written. Defaults to the filename suffixed with cors.txt
  --cor_options COR_OPTIONS
                        the cor_options string used by Nabu. Example --cor_options "side='near'; near_pos = 300.0; near_width = 20.0"
  --version, -V         show program's version number and exit

nabu-poly2map: builds a distortion map.

This application builds two 2D arrays, map_x and map_z, both of shape (nz, nx). These maps are meant to be used to generate a corrected detector image: pixel (i, j) of the corrected image is obtained by interpolating the raw data at position (map_z(i, j), map_x(i, j)).

The map is determined by a user-given polynomial P(rs) in the radial variable rs = sqrt((z - center_z)**2 + (x - center_x)**2) / (nx/2), where center_z and center_x give the center around which the deformation is centered. The perfect position (zp, xp), that would be observed on a perfect detector, of a photon observed at pixel (z, x) of the distorted detector is:

    (zp, xp) = (center_z, center_x) + P(rs) * (z - center_z, x - center_x)

with the polynomial given by P(rs) = rs * (1 + c2 * rs**2 + c4 * rs**4).

The map is rescaled and shifted so that a perfect match is realised at the borders of a horizontal line passing through the center. This ensures coherence with the pixel-size calibration procedure, which is performed by moving a needle horizontally and reading the motor positions at the extreme positions.

The maps are written to the target file, created as an HDF5 file, in the datasets “/coords_source_x” and “/coords_source_z”. The URLs of these two maps can be used for the detector correction of type map_xz in the nabu configuration file, as in this example:

[dataset]
...
detector_distortion_correction = map_xz
detector_distortion_correction_options = map_x="silx:./map_coordinates.h5?path=/coords_source_x" ; map_z="silx:./map_coordinates.h5?path=/coords_source_z"
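One consistent reading of the formulas above, applying the polynomial as the radial factor P(rs)/rs to the offset from the center (so that the corrected normalized radius is P(rs)), can be sketched as follows. This is an illustration only: the tool's final rescaling and shifting at the borders of the central row, and the inversion needed to obtain the actual sampling maps, are omitted.

```python
import numpy as np

def distortion_model(nz, nx, center_z, center_x, c2, c4):
    """Evaluate the radial distortion model on the detector grid.

    Returns (zp, xp): the 'perfect' position of each raw pixel, with
    P(rs) = rs * (1 + c2*rs**2 + c4*rs**4) applied as the radial
    factor P(rs)/rs to the offset from the center.
    """
    z, x = np.meshgrid(np.arange(nz), np.arange(nx), indexing="ij")
    dz, dx = z - center_z, x - center_x
    rs = np.sqrt(dz**2 + dx**2) / (nx / 2)
    factor = 1 + c2 * rs**2 + c4 * rs**4  # equals P(rs)/rs
    return center_z + factor * dz, center_x + factor * dx
```

With c2 = c4 = 0 the model reduces to the identity, i.e. an undistorted detector.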

usage:

nabu-poly2map [-h] --nz NZ --nx NX --center_z CENTER_Z --center_x CENTER_X --c4 C4 --c2 C2 --target_file TARGET_FILE [--axis_pos AXIS_POS] [--loglevel LOGLEVEL] [--version]

detailed parameters:

  -h, --help            show this help message and exit
  --nz NZ               vertical dimension of the detector
  --nx NX               horizontal dimension of the detector
  --center_z CENTER_Z   vertical position of the optical center
  --center_x CENTER_X   horizontal position of the optical center
  --c4 C4               order 4 coefficient
  --c2 C2               order 2 coefficient
  --target_file TARGET_FILE
                        The map output filename
  --axis_pos AXIS_POS   Optional argument. If given, it will be corrected for use with the produced map. The value is printed, or returned if the utility is used from a script
  --loglevel LOGLEVEL   Logging level. Can be 'debug', 'info', 'warning', 'error'. Default is 'info'.
  --version, -V         show program's version number and exit