nabu.stitching.z_stitching module

nabu.stitching.z_stitching.z_stitching(configuration: ZStitchingConfiguration, progress=None) BaseIdentifier[source]

Apply stitching from the provided configuration and return the identifier of the created NXtomo or Volume.

class nabu.stitching.z_stitching.ZStitcher(configuration, progress: Progress = None)[source]

Bases: object

static param_is_auto(param)[source]
property frame_composition
get_final_axis_positions_in_px() dict[source]
Returns:

dict with tomo object identifier (str) as key and a tuple of position in pixel (axis_0_pos, axis_1_pos, axis_2_pos)

Return type:

dict
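
For illustration, the returned mapping can be pictured like this (the identifiers and pixel positions below are invented, not real nabu output):

```python
# Illustrative shape of the returned dict: one entry per tomo object,
# mapping its identifier (str) to (axis_0_pos, axis_1_pos, axis_2_pos)
# in pixels. All names and values here are invented.
final_positions = {
    "scan_0000": (0, 1024, 1024),
    "scan_0001": (2048, 1024, 1024),
}

axis_0_pos, axis_1_pos, axis_2_pos = final_positions["scan_0001"]
```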

from_abs_pos_to_rel_pos(abs_position: tuple)[source]

Return the relative positions from one object to the next.

Parameters:

abs_position (tuple) – tuple containing the absolute positions

Returns:

the len(abs_position) - 1 relative positions

Return type:

tuple

from_rel_pos_to_abs_pos(rel_positions: tuple, init_pos: int)[source]

Return absolute positions from a tuple of relative positions and an initial position.

Parameters:

rel_positions (tuple) – tuple containing the relative positions

Returns:

the len(rel_positions) + 1 absolute positions

Return type:

tuple
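
The two conversions are easy to picture with numpy (a standalone sketch; the helper names here are hypothetical, the real methods live on ZStitcher):

```python
import numpy as np

def from_abs_to_rel(abs_positions):
    # relative position i = abs_positions[i + 1] - abs_positions[i],
    # which yields len(abs_positions) - 1 values
    return tuple(int(v) for v in np.diff(abs_positions))

def from_rel_to_abs(rel_positions, init_pos):
    # cumulative sum starting from the initial position,
    # which yields len(rel_positions) + 1 values
    return tuple(int(v) for v in np.cumsum((init_pos,) + tuple(rel_positions)))

abs_pos = (0, 120, 250)
rel_pos = from_abs_to_rel(abs_pos)                 # (120, 130)
round_trip = from_rel_to_abs(rel_pos, abs_pos[0])  # back to (0, 120, 250)
```

One conversion is the inverse of the other, which is what this pair of methods provides.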

stitch(store_composition: bool = True) BaseIdentifier[source]

Apply the expected stitching from the configuration and return the DataUrl of the created object.

Parameters:

store_composition (bool) – if True, store the composition used for stitching in frame_composition, so that it can be reused by third parties (like tomwer) to display the composition made

settle_flips()[source]

The user can provide information on existing flips at the frame level. The goal of this step is to get one flip_lr and one flip_ud value per scan or volume.
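
A minimal sketch of that reduction, assuming the per-frame flags are only accepted when they agree for every frame (the actual validation logic in nabu may differ):

```python
def settle_flip(per_frame_flips):
    # collapse a list of per-frame flip flags (e.g. flip_lr values)
    # into the single value expected per scan / volume
    values = set(per_frame_flips)
    if len(values) != 1:
        raise ValueError("frame flips must be identical within one scan")
    return values.pop()
```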

property z_serie: Serie
property configuration: ZStitchingConfiguration
property progress: Progress | None
static get_overlap_areas(upper_frame: ndarray, lower_frame: ndarray, upper_frame_key_line: int, lower_frame_key_line: int, overlap_size: int, stitching_axis: int)[source]

Return the requested areas from lower_frame and upper_frame.

lower_frame contains the ‘real overlap’ with upper_frame at its end, and upper_frame contains it at its beginning (lines being counted from the bottom of the image, as elsewhere in this module).

The user can request a stitching height smaller than the real overlap.

The following figures give a better view of those regions:

(see figures: apidoc/images/stitching/z_stitch_real_overlap.png and apidoc/z_stitch_stitch_height.png)
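
A hypothetical numpy sketch of the extraction (the real method may center the window on the key lines differently):

```python
import numpy as np

def overlap_areas(upper_frame, lower_frame,
                  upper_key_line, lower_key_line, overlap_size):
    # take `overlap_size` rows centered on each key line
    half = overlap_size // 2
    upper_area = upper_frame[upper_key_line - half:
                             upper_key_line - half + overlap_size]
    lower_area = lower_frame[lower_key_line - half:
                             lower_key_line - half + overlap_size]
    return upper_area, lower_area

upper = np.arange(40, dtype=np.float32).reshape(10, 4)
lower = np.arange(40, dtype=np.float32).reshape(10, 4)
upper_area, lower_area = overlap_areas(upper, lower,
                                       upper_key_line=8, lower_key_line=2,
                                       overlap_size=4)
```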
rescale_frames(frames: tuple)[source]

Rescale the frames if requested by the configuration.

normalize_frame_by_sample(frames: tuple)[source]

Normalize each frame from a sample picked on the left or on the right of the frame.
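
One plausible reading of this step, sketched with numpy (the sample width and the subtraction mode are assumptions, not nabu's actual defaults):

```python
import numpy as np

def normalize_by_side_sample(frame, side="left", sample_width=5):
    # estimate the background from a thin band on one side of the frame
    if side == "left":
        sample = frame[:, :sample_width]
    else:
        sample = frame[:, -sample_width:]
    return frame - sample.mean()

frame = np.full((4, 10), 7.0)
normalized = normalize_by_side_sample(frame)
```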

static stitch_frames(frames: tuple | ndarray, x_relative_shifts: tuple, y_relative_shifts: tuple, output_dtype: ndarray, stitching_axis: int, overlap_kernels: tuple, output_dataset: Dataset | ndarray | None = None, dump_frame_fct=None, check_inputs=True, shift_mode='nearest', i_frame=None, return_composition_cls=False, alignment='center', pad_mode='constant', new_width: int | None = None) ndarray[source]

Shift frames according to the provided shifts (given as (y, x) tuples), then stitch all the shifted frames together and save the result to output_dataset.

Parameters:

frames (tuple) – element must be a DataUrl or a 2D numpy array

class nabu.stitching.z_stitching.PreProcessZStitcher(configuration, progress=None)[source]

Bases: ZStitcher

property reading_orders

As scans can be acquired in one direction or the other (the rotation going from X to Y, or from Y to X), we might need to read the data in one direction or the other.
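
In practice a reading order boils down to a step of +1 or -1 over the frame indices, for example:

```python
# order == 1 reads frames as stored, order == -1 reads them backwards
frames = ["frame_0", "frame_1", "frame_2"]
order = -1
read_frames = frames[::order]
```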

property x_flips: list
property y_flips: list
stitch(store_composition=True) BaseIdentifier[source]
Parameters:

return_composition (bool) – if True, return the frame composition (used for example by the GUI to display a background with the same class)

class nabu.stitching.z_stitching.PostProcessZStitcher(configuration, progress: Progress = None)[source]

Bases: ZStitcher

stitch(store_composition=True) BaseIdentifier[source]

Apply the expected stitching from the configuration and return the DataUrl of the created object.

get_output_data_type()[source]
nabu.stitching.z_stitching.stitch_vertically_raw_frames(frames: tuple, key_lines: tuple, overlap_kernels: ~nabu.stitching.overlap.ZStichOverlapKernel | tuple, output_dtype: ~numpy.dtype = <class 'numpy.float32'>, check_inputs=True, raw_frames_compositions: ~nabu.stitching.frame_composition.ZFrameComposition | None = None, overlap_frames_compositions: ~nabu.stitching.frame_composition.ZFrameComposition | None = None, return_composition_cls=False, alignment='center', pad_mode='constant', new_width: int | None = None) ndarray[source]

Stitch raw frames (already shifted and flat-field corrected!) together using raw stitching (no pixel interpolation; y_overlap_in_px is expected to be an int). Stitching is done vertically (along the y axis of the frame reference).

 --------------
|              |
|   Frame 1    |                --------------
|              |               |   Frame 1    |
 --------------                |              |
                --> stitching  |~ stitching ~ |
 --------------                |              |
|              |               |   Frame 2    |
|   Frame 2    |                --------------
|              |
 --------------

(Y is the vertical axis of the frames)

Returns stitched_projection, raw_img_1, raw_img_2, computed_overlap.

proj_0 and proj_1 are expected to be already aligned, having stitching_height_in_px in common: at the top of proj_0 and at the bottom of proj_1.

Parameters:
  • frames (tuple) – tuple of 2D numpy arrays. Expected to be Z-up oriented at this stage

  • key_lines (tuple) – for each junction, the two lines to be overlaid (one from the upper frame and one from the lower frame), in the reference where 0 is the bottom line of the image.

  • overlap_kernels – a ZStichOverlapKernel overlap kernel to be used, or a list of kernels (one per overlap). Defines the strategy and overlap heights

  • output_dtype (numpy.dtype) – dataset dtype. For now it must be provided because flat-field correction changes the data type (numpy.float32 for now)

  • check_inputs (bool) – if True, run extra tests on input parameters, such as checking frame shapes and the coherence of the request. As this can be time consuming, it is optional

  • raw_frames_compositions – pre-computed raw frame composition. If not provided, it will be computed. Providing it speeds up the calculation

  • overlap_frames_compositions – pre-computed stitched frame composition. If not provided, it will be computed. Providing it speeds up the calculation

  • return_frame_compositions (bool) – if False, simply return the stitched frames. Else return a tuple with the stitched frame and the dictionary of composition frames…
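
To make the mechanism concrete, here is a self-contained sketch of raw vertical stitching of two frames with a linear-weight blend over the overlap. It uses the usual image convention (row 0 at the top) and a hand-rolled kernel, whereas nabu delegates the blending strategy to ZStichOverlapKernel:

```python
import numpy as np

def stitch_two_frames(upper, lower, overlap_size):
    # linear weights going from 1 (pure upper frame) to 0 (pure lower frame)
    weights = np.linspace(1.0, 0.0, overlap_size)[:, None]
    blended = (weights * upper[-overlap_size:]
               + (1.0 - weights) * lower[:overlap_size])
    # keep the non-overlapping parts untouched
    return np.vstack([upper[:-overlap_size], blended, lower[overlap_size:]])

upper = np.ones((6, 4), dtype=np.float32)
lower = np.zeros((6, 4), dtype=np.float32)
stitched = stitch_two_frames(upper, lower, overlap_size=2)
# final height = 6 + 6 - 2 = 10 rows
```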

class nabu.stitching.z_stitching.StitchingPostProcAggregation(stitching_config: ZStitchingConfiguration, futures: tuple | None = None, existing_objs_ids: tuple | None = None)[source]

Bases: object

For remote stitching, each process stitches one part of the volume or of the projections. Once all of them are finished, we want to aggregate them into a final volume or NXtomo.

This is the goal of this class. Please be careful with the API: a tomwer class already inherits from this one.

Parameters:
  • stitching_config (ZStitchingConfiguration) – configuration of the stitching

  • futures (Optional[tuple]) – futures that have just been run

  • existing_objs_ids (Optional[tuple]) – identifiers of the already existing objects to aggregate


property futures
retrieve_tomo_objects()[source]

Return the tomo objects to be stitched together, either from futures or from existing_objs.

dump_stiching_config_as_nx_process(file_path: str, data_path: str, overwrite: bool, process_name: str)[source]
property stitching_config: ZStitchingConfiguration
process() None[source]

main function

nabu.stitching.z_stitching.get_obj_width(obj: NXtomoScan | VolumeBase) int[source]

Return the tomo object width.