integrator.multiproc_integration module#

class integrator.multiproc_integration.MultiprocIntegrator(ai_config, resources, dataset=None, output_file=None, n_images_per_integrator='auto', logger=None, extra_options=None, **integrator_kwargs)#

Bases: integrator.base.DistributedIntegrator

Initialize a MultiprocIntegrator.

Parameters
  • ai_config (AIConfiguration) – Azimuthal Integration configuration

  • resources (any) – Data structure describing the computational resources to use

  • dataset (DatasetParser, optional) – XRD dataset information object. If not provided, set_new_dataset() must be called before integrate_dataset().

  • output_file (str, optional) – Path where the integrated data will be stored (HDF5 file)

  • n_images_per_integrator (int or 'auto', optional) – Number of images to process in each stack. By default, it is inferred automatically by inspecting the dataset file

  • logger (Logger, optional) – Logger object

  • extra_options (dict, optional) –

    Dictionary of advanced options. Supported options and their default values are:
    • "create_h5_subfolders": True

      Whether to create a sub-directory to store the files that contain the result of each integrated stack.

    • "scan_num_as_h5_entry": False

      Whether to use the current scan number as the HDF5 entry.

  • **integrator_kwargs – The other named arguments are passed to the StackIntegrator class.
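
A minimal construction sketch follows. The ai_config, resources, and dataset objects are assumed to be built elsewhere (their construction is outside this module), and the output path is illustrative:

    from integrator.multiproc_integration import MultiprocIntegrator

    ai_config = ...   # assumed: a pre-built AIConfiguration
    resources = ...   # assumed: a description of the computational resources
    dataset = ...     # assumed: a DatasetParser object for the XRD dataset

    integrator = MultiprocIntegrator(
        ai_config,
        resources,
        dataset=dataset,
        output_file="/path/to/integrated.h5",  # HDF5 file receiving the integrated data
        n_images_per_integrator="auto",        # infer the stack size from the dataset file
        extra_options={
            "create_h5_subfolders": True,      # one sub-directory per integrated stack
            "scan_num_as_h5_entry": False,     # keep the default HDF5 entry naming
        },
    )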

device_type = 'gpu'#
set_new_dataset(datasets, output_files)#
set_workers_name_prefix(name_prefix=None)#
distribute_integration_tasks()#
close_processes(timeout=None)#
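
The methods above suggest the following life-cycle. This is a hedged sketch: the documentation only states that set_new_dataset() must precede integration when no dataset was given at construction; the exact order of the other calls and the timeout semantics are assumptions:

    # datasets / output_files: lists pairing each parsed dataset
    # with the HDF5 file that will receive its integrated result.
    integrator.set_new_dataset(datasets, output_files)
    integrator.set_workers_name_prefix("xrd_worker")  # optional worker naming (assumed cosmetic)
    integrator.distribute_integration_tasks()         # dispatch the stacks to the worker processes
    integrator.close_processes(timeout=10)            # shut the workers down (timeout in seconds, assumed)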
integrator.multiproc_integration.worker_scan(worker_name, ai_config, n_threads, datasets, output_files, output_dirs, extra_options=None)#
integrator.multiproc_integration.mp_integrate_datasets(ai_config, resources, datasets, output_files, worker_id, **kwargs)#
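
mp_integrate_datasets() looks like a one-shot, module-level entry point. A plausible call is sketched below; the semantics of worker_id and the accepted **kwargs are assumptions, not documented above:

    from integrator.multiproc_integration import mp_integrate_datasets

    mp_integrate_datasets(
        ai_config,
        resources,
        datasets,      # list of parsed datasets to integrate
        output_files,  # one output HDF5 file per dataset
        worker_id=0,   # identifier of the calling worker (semantics assumed)
    )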