Parameter estimation

When some parameters cannot be inferred directly from the dataset metadata (e.g. the center of rotation for reconstruction), one often has to determine them by trial and error. Nabu provides estimation utilities in the nabu.estimation module.

Center of Rotation

Nabu offers a variety of Center of Rotation (CoR) estimation methods, classified in two types:

  • projections-based methods

  • sinogram-based methods

Note

Currently, no reconstruction-based methods are available. However, higher-level tools like tomwer can be used.

Projection-based methods find the half-shift between two images. The position of the (vertical) rotation axis is obtained when the first image is the radiograph at rotation angle 0 and the second image is the radiograph at rotation angle 180 degrees, flipped horizontally. The rotation axis position is then the center of the image plus the found shift.
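
As an illustration, here is a minimal sketch of this approach with the CenterOfRotation class (the import path and the exact find_shift() signature are assumptions to be checked against the API reference; radio_0 and radio_180 stand for the two projections, loaded as 2D numpy arrays):

import numpy as np
from nabu.estimation.cor import CenterOfRotation

# radio_0 and radio_180: projections at angles 0 and 180 degrees (2D numpy arrays)
cor_finder = CenterOfRotation()
# Correlate the first projection with the horizontally flipped opposite projection
shift = cor_finder.find_shift(radio_0, np.fliplr(radio_180))
# The rotation axis position is the middle of the detector plus the found shift
cor_position = (radio_0.shape[1] - 1) / 2.0 + shift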

Configuration file: section [reconstruction], key rotation_axis_position. Values can be:

  • Empty (default): the CoR is set to the middle of the detector: (detector_width - 1)/2.0

  • A number (known CoR)

  • centered: a fast and simple auto-CoR method. It only works when the CoR is not far from the middle of the detector. It does not work for half-tomography.

  • global: a slow but robust auto-CoR.

  • sliding-window: automatically find the CoR with a sliding window applied on two opposite projections. You have to specify on which side the CoR is (left, center, right).

  • sino-sliding-window: same as sliding-window, but two sinogram halves are used instead of two opposite projections. You have to specify on which side the CoR is (left, center, right).

  • growing-window: automatically find the CoR with a sliding-and-growing window applied to two opposite projections. You can tune the options with the parameter cor_options.

  • sino-growing-window: same as growing-window, but two sinogram halves are used instead of two opposite projections. You can tune the options with the parameter cor_options.

  • sino-coarse-to-fine: Estimate CoR from sinogram. Only works for 360 degrees scans.

  • composite-coarse-to-fine: Estimate CoR from composite multi-angle images. Only works for 360 degrees scans.

  • fourier-angles: Estimate the CoR from the sinogram, based on an angular correlation analysis. You can tune the options with the parameter cor_options.

  • octave-accurate: Port of the legacy Octave "accurate" COR estimation algorithm. It first estimates the COR with a global Fourier-based correlation, then refines this estimate with local correlations based on the variance of the difference patches. You can tune the options with the parameter cor_options.

API: CenterOfRotation
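
For example, to request an automatic estimation with the sliding-window method in the configuration file (assuming the CoR lies on the right half of the detector, as in half-acquisition):

[reconstruction]
rotation_axis_position = sliding-window
cor_options = side=right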

Advanced parameters in configuration file

Advanced parameters can be provided in the configuration file with the key cor_options. The parameters are separated by semicolons and passed as name=value. For example:

cor_options = low_pass=1; high_pass=20

Mind the semicolon separator (;).

One important option is side. Since version 2024.1.0, this option merges the former side and near_pos options (see the example after the list below). side can be set to one of:

  • from_file (default): the chosen algorithm will use the estimated_cor_from_motor field of the NX file as a coarse estimate.

  • A numeric value: this will be used as a coarse estimation to guide the chosen algorithm. If a numeric value is set, it will override the value in the NX file.

  • left, right, center or all.
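
For example, to pass a user-defined coarse estimate of the CoR position (the numeric value is only illustrative):

cor_options = side=1024.5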

Other options can be set. They correspond to a mix of the arguments of the function find_shift and of specific options which are not arguments of this function. In any case, the options that can be set in the configuration file are detailed in the algorithm descriptions below.

The options common to all algorithms (except composite-coarse-to-fine and sino-coarse-to-fine) are the following; a usage sketch is given after the list:

  • roi_yxhw (default: None -> deactivated): either a 4-element vector containing the vertical and horizontal coordinates of the first pixel, followed by the height and width of the Region of Interest (RoI), or a 2-element vector containing the height and width of a centered RoI.

  • median_filt_shape (default: None -> deactivated): A 2-element list giving the shape of the median filter window.

  • padding_mode (default: None): Padding mode, which determines the type of convolution. If None or wrap are passed, this resorts to the traditional circular convolution. If edge or constant are passed, it results in a linear convolution. All options are: None | constant | edge | linear_ramp | maximum | mean | median | minimum | reflect | symmetric | wrap

  • peak_fit_radius (default: 1): Radius size around the max correlation pixel, for sub-pixel fitting. Minimum value is 1.

  • low_pass: Low-pass filter properties, as described in nabu.misc.fourier_filters

  • high_pass: High-pass filter properties, as described in nabu.misc.fourier_filters
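
As a sketch, these common options can be passed as keyword arguments of find_shift (this assumes they map directly to find_shift arguments, as stated above; all values are illustrative):

import numpy as np
from nabu.estimation.cor import CenterOfRotation

# radio_0 and radio_180: projections at angles 0 and 180 degrees (2D numpy arrays)
cor_finder = CenterOfRotation()
shift = cor_finder.find_shift(
    radio_0,
    np.fliplr(radio_180),
    roi_yxhw=(0, 0, 512, 2048),   # hypothetical RoI: first pixel (0, 0), height 512, width 2048
    median_filt_shape=(3, 3),     # 3x3 median filter on both images
    padding_mode="edge",          # linear (non-circular) convolution
    peak_fit_radius=1,            # sub-pixel fit around the correlation peak
    low_pass=1,
    high_pass=20,
)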

COR estimation algorithms in detail

centered

  • Works on an image pair.

  • This algorithm estimates the shift that maximizes the correlation between two 180°-apart projections. The correlation is computed in Fourier space, over a Region of Interest (RoI) inside both images (note that this RoI can be the whole image). Once the maximum of the correlation is located (at the image resolution level), a sub-pixel estimation of the CoR is performed by fitting the correlation values to a second-order 2D polynomial.

  • This is equivalent to the Octave global algorithm, which computes “global” correlations. The fastomo/accurate algorithm builds on top of these global correlations to compute extra local correlations.

  • This algorithm will not benefit from the Bliss estimated_cor_from_motor field, since it acts globally. However, this prior could be used to associate a degree of confidence with the outcome.

  • Specific arguments: None.

  • Other options to be set in cor_options are : None.

global

  • Works on an image pair.

  • This adaptive method works by applying a Gaussian apodization which highlights a region possibly containing the correct center of rotation. The whole image is spanned over several applications of the apodization: at each application, the Gaussian apodization function is moved to a new guess position. The step length by which the Gaussian is moved, and its sigma, are obtained by multiplying the shortest distance from the left or right border by step_fraction and sigma_fraction factors which ensure global overlapping. For each step, a region around the candidate CoR is selected in each image, and the regions of the two images are compared to compute a cost function (a normalized squared L2-norm of the difference). The step at which the cost function is minimal gives the final CoR. The option filtered_cost=True (default) triggers the filtering (according to low_pass and high_pass) of the two images used for the cost function. (Note: if given, the low_pass and high_pass options are also used without the filtered_cost option, by being passed to the base class CenterOfRotation.) A usage sketch is given after this list.

  • Raises an error if no approximate value is found. It could instead return a default value (e.g. the center of the detector).

  • Up to now, this algorithm does not use the Bliss estimated_cor_from_motor field.

  • Specific arguments:

    • margins (default: None): a pair of floats or ints (margin1, margin2). The search is done between margin1 and dim_x - 1 - margin2. If left to None, (margin1, margin2) defaults to (10, 10).

    • filtered_cost: boolean. True by default. It triggers the use of filtered images in the calculation of the cost function.

  • API: CenterOfRotationAdaptiveSearch

  • Other options to be set in cor_options are : None.
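
A minimal usage sketch, assuming the margins and filtered_cost arguments are accepted as keyword arguments of find_shift (import path and values are illustrative):

import numpy as np
from nabu.estimation.cor import CenterOfRotationAdaptiveSearch

# radio_0 and radio_180: projections at angles 0 and 180 degrees (2D numpy arrays)
cor_finder = CenterOfRotationAdaptiveSearch()
shift = cor_finder.find_shift(radio_0, np.fliplr(radio_180), margins=(10, 10), filtered_cost=True, low_pass=1, high_pass=20)
cor_position = (radio_0.shape[1] - 1) / 2.0 + shift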

sliding-window or sino-sliding-window

  • Works on an image pair (sliding-window) or two halves of a sinogram (sino-sliding-window).

  • This algorithm slides a RoI of one image over the second image, along the horizontal axis, until the sum of absolute differences (SAD) between the two corresponding RoIs is minimized. If the acquisition is a standard one, the width of the RoI is 3/4 of the image width; if it is a half-acquisition, the width is 1/10 of the image width. A sub-pixel estimation is done by fitting the SAD values to a 1D quadratic polynomial. A usage sketch is given after this list.

  • This algorithm can take the Bliss estimated_cor_from_motor field as a first guess.

  • Specific arguments:

    • side: specify left or right if half-acquisition (see above for the possible values of side).

  • API: CenterOfRotationSlidingWindow

  • Other options to be set in cor_options are : None.
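
A minimal usage sketch for the projection-based variant, assuming side is accepted as an argument of find_shift (import path and value are illustrative):

import numpy as np
from nabu.estimation.cor import CenterOfRotationSlidingWindow

# radio_0 and radio_180: projections at angles 0 and 180 degrees (2D numpy arrays)
cor_finder = CenterOfRotationSlidingWindow()
shift = cor_finder.find_shift(radio_0, np.fliplr(radio_180), side="right")
cor_position = (radio_0.shape[1] - 1) / 2.0 + shift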

growing-window

  • Works on an image pair.

  • Similar to sliding-window, but uses an increasing size of the sliding window (a usage sketch is given after this list).

  • Specific arguments:

    • side (default: all)

    • min_window_width (default: 11)

  • API: CenterOfRotationGrowingWindow

  • Other options to be set in cor_options are : None.
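
A minimal usage sketch, assuming side and min_window_width are accepted as keyword arguments of find_shift (import path and values are illustrative):

import numpy as np
from nabu.estimation.cor import CenterOfRotationGrowingWindow

# radio_0 and radio_180: projections at angles 0 and 180 degrees (2D numpy arrays)
cor_finder = CenterOfRotationGrowingWindow()
shift = cor_finder.find_shift(radio_0, np.fliplr(radio_180), side="all", min_window_width=11)
cor_position = (radio_0.shape[1] - 1) / 2.0 + shift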

sino-coarse-to-fine

  • Works on sinogram halves.

  • This algorithm is two-step and works on the principle of the sliding-window algorithm: first, estimate a “coarse” COR by sliding the whole image along a given window. Then, refine the COR estimation by sliding a restricted RoI over a restricted sliding window centered on the coarse estimate of the COR. The RoI is shifted by 1/10th of a pixel.

  • Specific arguments:

    • side (default: right)

    • window_width (default: None)

    • neighborhood (default: 7)

    • shift_value (default: 0.1)

  • API: SinoCOR

  • Other options to be set in cor_options are : None.

composite-coarse-to-fine

This algorithm is a hybrid between the classical COR estimator (which correlates two opposite views) and the SinoCOR estimator (which correlates two sinogram halves, separated by 180 degrees). It can be seen as a more robust COR estimator (it incorporates more than one pair of radios), which drops some rows of the radios to limit the computational load. It proceeds as follows:

  • Build a composite sinogram made of P rows of N pairs of 180-degrees apart projections. The resulting sinogram has 2*N*P rows.

  • Oversample the selected projections to the desired COR estimation accuracy (typically upsampling by a factor of 4).

  • Find the shift that minimizes the mean difference between one composite image and its 180-degree counterpart.

  • Specific arguments:

    • dataset_info:

    • oversampling (default:4): Corresponds to the subpixel accuracy that is required. oversampling=4 produces a quarter-pixel accuracy.

    • theta_interval (default:5):

    • n_subsampling_y (default:10):

    • take_log (default: True): whether the logarithm should be applied.

    • spike_threshold (default:0.04):

    • norm_order (default:1): type of norm to be applied to differences (l1 or l2).

  • API: CompositeCORFinder

  • Possible cor_options are listed below, in the form (keyword, default): description. See above for the formatting of the cor_options field. An example is given after the list.

    • low_pass (default: 0.4): pre-processing low-pass filter cut-off.

    • high_pass (default: 10): pre-processing high-pass filter cut-off.

    • side (default: center): can be center, right, left or near. If set to near, the option near_pos is expected to be set.

    • near_pos (default: 0): coarse estimate of the COR provided by the user.

    • near_width (default: 20): Width of the overlapping window.
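
For example, to select this estimator in the configuration file with a user-provided coarse position (the numeric value is only illustrative):

rotation_axis_position = composite-coarse-to-fine
cor_options = side=near; near_pos=1200; near_width=20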

fourier-angles

  • Works on one sinogram.

  • This algorithm computes the Fourier transform of the sinogram over the rotation angles and estimates the CoR from an angular correlation analysis.

  • API: FourierAngles

  • Possible cor_options are listed below, in the form (keyword, default): description. See above for the formatting of the cor_options field. An example is given after the list.

    • crop_around_cor, False: Whether to limit the size of the detector’s subregion used in the computation of the score function. This might be of interest if a good approximation of the COR can be provided as a first guess. If set to True, tune the option near_std (the search region will be set to near_pos +/- 2 * near_std).

    • side, center: If no estimate of the COR is available, you can provide, in the case of half-acquisition, either left, center (default) or right.

    • near_pos, None: Approximate absolute position of the COR (populated from the NX file if present).

    • near_std, 100: Width of the detector’s subregion used for the computation of the score function. Used only if crop_around_cor is set.

    • near_width, 20: Margin that should be taken with respect to detector’s edges.

    • near_shape, tukey: Type of window function to apply. The window function is a near_std-wide neighborhood of near_pos.

    • near_weight, 0.1: drives the decrease rate of the window-function towards zero.

    • near_alpha, 0.5: specific to the Tukey window-function.

    • shift_sino, True:

    • near_step, 0.5: determines the sampling of the score function, hence the accuracy of the estimate. 0.5 (default) means a precision of 0.5 pixel.

    • refine, False: Whether to search for a better accuracy of the estimation (i.e. below near_step) by fitting a polynomial.
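
For example, in the configuration file (the numeric values are only illustrative):

rotation_axis_position = fourier-angles
cor_options = side=center; crop_around_cor=True; near_pos=1024; near_std=100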

octave-accurate

  • Works on an image pair.

  • This algorithm is the nabu port of the legacy Octave/Accurate COR estimator. It first computes a global COR, which maximizes the global correlation (computed in Fourier space). From this first estimate, it computes local correlations by minimizing the local variance between RoIs of both images, within a user-provided window length, and finally achieves sub-pixel accuracy by fitting the local correlation values with a 2D quadratic polynomial.

  • API: OctaveAccurate

  • Possible cor_options are listed below, in the form (keyword, default): description. See above for the formatting of the cor_options field.

    • maxsize, [5, 5]: Sliding range for the computation of the variance used in the local correlations.

    • refine, None: whether to search for a better accuracy of the estimation (i.e. sub near_step) by fitting a polynomial.

    • pmcc, False: If True, use Pearson Correlation Coefficient instead of Variance (default) for the computation of the local correlations.

    • normalize, True: If True (default), the data are scaled to 1-mean.

    • low_pass, 0.01:

    • limz, 0.5: Maximum vertical drift before warning.

Tilt angle

Nabu provides methods for detecting the detector tilt angle, or, equivalently, the rotation axis tilt in the plane parallel to the detector plane. When such a tilt occurs, the columns of the detector are not parallel to the rotation axis.

Configuration file: section [preproc], key tilt_correction. Values can be:

  • Empty (default): no tilt detection/correction.

  • A scalar value: user-provided tilt angle in degrees.

  • 1d-correlation: auto-detect tilt with the 1D correlation method (fastest, but works best for small tilts)

  • fft-polar: auto-detect tilt with polar FFT method (slower, but works well on all ranges of tilts)

When a tilt is detected or provided by the user, each projection image is rotated to correct for this angle.

API: CameraTilt
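
For example, to enable automatic tilt detection in the configuration file:

[preproc]
tilt_correction = 1d-correlation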

Advanced parameters in configuration file

Advanced parameters can be provided in the configuration file by means of the key autotilt_options. The parameters are separated by semicolons and passed as name=value. For example:

autotilt_options = median_filt_shape=(3,3); threshold=0.5

Mind the semicolon separator (;). These advanced parameters correspond to the arguments of the function compute_angle.

The parameter threshold indicates the value, in pixels, below which the effect of tilt is ignored. Given a tilt angle a, the maximum effect on a detector of width N is N/2 * sin(a). If N/2 * sin(a) < threshold, then the tilt is considered too small for being corrected and is subsequently ignored. For example, on a detector of width 2560 pixels, a tilt of 0.05 degree induces a shift of 1.12 pixel at most. The default threshold is 0.25 pixel.

Detector Translation Along the Beam

When moving the detector along the longitudinal translation axis, the beam shape image recorded on the detector can be seen moving if the translation is not parallel to the beam. The resulting shifts can be found, and the beam tilt with respect to the translation axis can be inferred. The vertical and horizontal shifts are returned in pixels per unit translation. To compute the vertical and horizontal tilt angles from the obtained shift_pix_per_unit_translation:

tilt_deg = np.rad2deg(np.arctan(shift_pix_per_unit_translation * pixel_size))

where pixel_size and the input parameter img_pos have to be expressed in the same units.
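
A fuller sketch, assuming the class exposes a find_shift(imgs, img_pos) method returning the vertical and horizontal shift coefficients in pixels per unit translation (the import path and the exact return values are assumptions to be checked against the API reference):

import numpy as np
from nabu.estimation.translation import DetectorTranslationAlongBeam

# imgs: stack of images recorded at the translation positions img_pos
# img_pos and pixel_size must be expressed in the same units
tr_calc = DetectorTranslationAlongBeam()
shift_v, shift_h = tr_calc.find_shift(imgs, img_pos)
tilt_v_deg = np.rad2deg(np.arctan(shift_v * pixel_size))
tilt_h_deg = np.rad2deg(np.arctan(shift_h * pixel_size))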

API: DetectorTranslationAlongBeam