# Configuration parameters

This file lists all the configuration parameters available in the [configuration file](config_file.md).

### dataset

Input dataset to integrate: path to a HDF5-NeXus file.

```ini
location =
```

Entry in the HDF5 file, if applicable. Default is the first available entry.

```ini
hdf5_entry =
```

Which folder layout to use for parsing datasets and creating output files. Available layouts are: ID15A, ID11.

```ini
layout = ID15A
```

### azimuthal integration

Number of azimuthal bins.

```ini
n_pts = 2500
```

Radial unit. Can be r_mm, q_A^-1, or 2th_deg.

```ini
unit = r_mm
```

Detector name (e.g. eiger2_cdte_4m), or path to a detector specification file (e.g. id15_pilatus.h5).

```ini
detector =
```

Force the detector name if it is not present in the 'detector' file description.

```ini
detector_name =
```

Path to the mask file.

```ini
mask_file =
```

Path to the pyFAI calibration file.

```ini
poni_file =
```

Error model for azimuthal integration. Can be poisson or None.

```ini
error_model = poisson
```

Azimuthal ranges, in the form (min, max, n_slices).

```ini
azimuthal_range = (-180., 180., 1)
```

Lower and upper range of the radial unit. If not provided, the range is simply (data.min(), data.max()). Values outside this range are ignored.

```ini
radial_range =
```

Which azimuthal integration method to use.

```ini
ai_method = opencl
```

Polarization factor for pyFAI. Default is 1.0.

```ini
polarization_factor = 1.0
```

Whether to correct for solid angle. Set this parameter if you wish to apply the solid-angle correction.

```ini
correct_solid_angle = 0
```

Whether to additionally perform azimuthal integration on the average of each image stack.

```ini
average_xy = 0
```

Whether to perform pixel splitting.

```ini
pixel_splitting = 0
```

Which outlier removal method to use. Can be none (default), median, or sigma clip. NB: when using one of the latter two, neither azimuthal caking nor uncertainty estimation is possible; only one azimuthal slice is used.
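Putting the dataset and integration parameters together, a minimal configuration might look as follows. This is a hypothetical sketch: the section names are assumed to match the headings above, and the file paths and values are placeholders.

```ini
; Hypothetical example -- section names assumed from the headings above,
; paths and values are placeholders.
[dataset]
location = /data/id15/sample_0001/sample_0001.h5
layout = ID15A

[azimuthal integration]
n_pts = 2500
unit = q_A^-1
detector = eiger2_cdte_4m
poni_file = /data/id15/calib/calibration.poni
error_model = poisson
azimuthal_range = (-180., 180., 8)
```

Here `azimuthal_range = (-180., 180., 8)` would split the full azimuthal range into 8 slices (caking), whereas the default `(-180., 180., 1)` performs a single full-circle integration.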
```ini
trim_method =
```

Number of azimuthal bins for the trimmed mean.

```ini
trim_n_pts =
```

Bounds for the trim methods:
- median: percentiles in the form (cut_low, cut_high). Integration uses medfilt1d with only one azimuthal slice.
- sigma clip: thresholds in the form (thresh_low, thresh_high).

```ini
trim_bounds =
```

### computations distribution

Number of workers to use.

```ini
n_workers = 10
```

Number of CPU cores per worker.

```ini
cores_per_worker = 10
```

Time limit for the SLURM job duration.

```ini
time = 00:30:00
```

Amount of memory per worker. Default is 10 GB.

```ini
memory_per_worker = 10GB
```

Name of the project displayed on SLURM.

```ini
project_name = distributed-azimuthal-integration
```

Whether to automatically respawn SLURM resources once the walltime is reached.

```ini
auto_respawn = 1
```

If non-empty, connect to an existing dask-distributed cluster. It should be in the form 'tcp://ip:port'.

```ini
cluster_address =
```

Name of the SLURM partition (queue).

```ini
partition = nice
```

Path to the Python executable. Useful if the compute nodes have a different environment from the front-end. Defaults to the current Python executable (so it accounts for virtual environments).

```ini
python_exe =
```

TCP port for the dask-distributed scheduler.

```ini
port = 36087
```

TCP port for the dashboard.

```ini
dashboard_port = 36088
```

Number of images processed in a single pass by each integrator. By default (empty), each integrator determines the optimal number of images to load.

```ini
n_images_per_integrator =
```

If auto_respawn is enabled, this defines the number of times the resources can be respawned.

```ini
max_respawns = 100
```

Period (in seconds) of the health check.

```ini
healthcheck_period = 10
```

Defines a Python executable to use for each SLURM partition. Mind the semicolon (;) used as a separator.
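As an illustration of the SLURM-related parameters above, a hypothetical distribution section might look like this (section name assumed to match the heading; resource values are placeholders to be tuned to your cluster):

```ini
; Hypothetical example -- section name assumed from the heading above,
; resource values are placeholders.
[computations distribution]
n_workers = 4
cores_per_worker = 8
time = 01:00:00
memory_per_worker = 20GB
partition = nice
auto_respawn = 1
max_respawns = 50
```

With `auto_respawn = 1`, jobs hitting the 01:00:00 walltime would be resubmitted, up to 50 times.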
```ini
python_executables = nice='/scisoft/tomotools_env/integrator/ubuntu20.04/x86_64/latest/bin/python' ; p9gpu='/scisoft/tomotools_env/integrator/ubuntu20.04/ppc64le/latest/bin/python' ; gpu='/scisoft/tomotools_env/integrator/ubuntu20.04/x86_64/latest/bin/python'
```

### output

Path to the output file. If not provided, it will be placed in the same directory as the input dataset. Note: a directory with the same name (minus the extension) will be created at the same level, to store the actual integrated data.

```ini
location =
```

What to do if the output already exists. Possible values are:
- skip: go to the next image if the output already exists (and has the same processing configuration)
- overwrite: re-do the processing and overwrite the file
- raise: raise an error and exit

```ini
existing_output = skip
```

### live integration

### pipeline

Level of verbosity of the processing. 0 = terse, 3 = much information.

```ini
verbosity = 2
```
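Finally, a hypothetical output and pipeline configuration could look as follows (section names assumed from the headings above; the path is a placeholder):

```ini
; Hypothetical example -- section names assumed from the headings above.
[output]
location = /data/id15/processed/sample_0001_integrated.h5
existing_output = overwrite

[pipeline]
verbosity = 2
```

With this `location`, the integrated data would be stored in a sibling directory named `sample_0001_integrated`, and `existing_output = overwrite` forces reprocessing even if results already exist.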