pyFAI.ext package
pyFAI.ext.bilinear module
Module which makes a discrete 2D array appear like a continuous function thanks to bilinear interpolation.
- class pyFAI.ext.bilinear.Bilinear
Bases:
object
Bilinear interpolator, used for finding a maximum with sub-pixel precision (a usage sketch follows this class listing).
Instance attributes are defined in the pxd file.
- cp_local_maxi(self, Py_ssize_t x) Py_ssize_t
- data
- f_cy(self, x)
Function -f((y, x)) where f is a continuous function and (y, x) are pixel coordinates. Pixels outside the image are given an arbitrarily high value to help the minimizer.
- Parameters:
x – 2-tuple of float
- Returns:
Interpolated negative signal from the image (negated so that a minimizer can be used to search for peaks)
- height
- local_maxi(self, x)
Return the local maximum with sub-pixel refinement.
Sub-pixel refinement: second-order Taylor expansion of the function where the first derivative is null
\[\delta = x - i = -H^{-1} \cdot \nabla f\]If the Hessian \(H\) is singular or \(|\delta|>1\): use a center of mass.
- Parameters:
x – 2-tuple of integers
- Returns:
2-tuple of float with the nearest local maximum
- many(self, x)
Call the bilinear interpolator on many points…
- Parameters:
x – array of points of shape (2, n), like (array_of_y, array_of_x)
- Returns:
array of shape n
- maxi
- mini
- width
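A minimal usage sketch (not part of the API reference above). It assumes Bilinear is constructed directly from a 2D numpy array, which is how the class is used inside pyFAI; check the constructor of your pyFAI version before relying on it:

import numpy as np
from pyFAI.ext.bilinear import Bilinear

img = np.zeros((64, 64), dtype=np.float32)
img[40, 25] = 10.0                  # a single bright pixel acting as a peak

interp = Bilinear(img)              # assumption: constructor takes the 2D image
print(interp.maxi, interp.mini)     # extrema of the underlying data
y, x = interp.local_maxi((39, 24))  # start from an integer guess near the peak
print(y, x)                         # refined (y, x), expected close to (40, 25)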
- pyFAI.ext.bilinear.calc_cartesian_positions(signatures, args, kwargs, defaults)
Calculate the Cartesian position for an array of positions (d1, d2) with the pixel coordinates stored in the array pos. This is a bilinear interpolation.
- Parameters:
d1 – position in dim1
d2 – position in dim2
pos – array with position of pixels corners
- Returns:
3-tuple of position.
- pyFAI.ext.bilinear.convert_corner_2D_to_4D(signatures, args, kwargs, defaults)
Convert 2 (or 3) arrays of corner position into a 4D array of pixel corner coordinates
- Parameters:
ndim – 2D or 3D output
d1 – 2D position in dim1 (shape +1)
d2 – 2D position in dim2 (shape +1)
d3 – 2D position in dim3 (z) (shape +1)
- Returns:
pos 4D array with position of pixels corners
pyFAI.ext.fastcrc module
Simple Cython module for doing CRC32 for checksums, possibly with SSE4 acceleration
- class pyFAI.ext.fastcrc.SlowCRC
Bases:
object
This class implements a fail-safe version of CRC using a look-up table; it is not very performant.
- crc(self, buffer)
Calculate the CRC checksum of the numpy array
- initialized
- table
- pyFAI.ext.fastcrc.check_sse4()
Checks if the SSE4 implementation is available
- pyFAI.ext.fastcrc.crc32_sse4(data)
Calculate the CRC32 on the data using the SSE4 implementation
- pyFAI.ext.fastcrc.crc32_table(data)
Calculate the CRC32 on the data using the look-up table
- pyFAI.ext.fastcrc.get_crc_table()
Return the internal table used for calculating the CRC
- pyFAI.ext.fastcrc.get_crc_table_key()
Return the key with which the table was initialized
- pyFAI.ext.fastcrc.init_crc32_table(uint32_t key=0x1EDC6F41)
Initialize the CRC table
- pyFAI.ext.fastcrc.is_crc32_sse4_available()
Tells if the SSE4 implementation is available
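A short usage sketch combining the helpers above; only function names documented in this listing are used, and the numpy input is a placeholder:

import numpy as np
from pyFAI.ext import fastcrc

data = np.arange(1000, dtype=np.float32)

if fastcrc.is_crc32_sse4_available():
    checksum = fastcrc.crc32_sse4(data)    # hardware-accelerated path
else:
    checksum = fastcrc.crc32_table(data)   # portable look-up-table fallback
print(hex(checksum & 0xFFFFFFFF))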
pyFAI.ext.histogram module
A set of histogram functions, some of them with OpenMP enabled.
Re-implementation of numpy.histogram, optimized for azimuthal integration.
Can be replaced by silx.math.histogramnd.
- pyFAI.ext.histogram.calc_area(I1, I2, slope, intercept)
Calculate the area between I1 and I2 of a line with a given slope & intercept
- pyFAI.ext.histogram.clip(value, min_val, int max_val)
Limits the value to bounds
- Parameters:
value – the value to clip
min_val – the lower bound
max_val – the upper bound
- Returns:
clipped value in the requested range
- pyFAI.ext.histogram.histogram(pos, weights, bins=100, bin_range=None, pixelSize_in_Pos=None, nthread=0, empty=0.0, normalization_factor=1.0)
_histogram_omp(pos, weights, int bins=100, bin_range=None, pixelSize_in_Pos=None, int nthread=0, double empty=0.0, double normalization_factor=1.0)
Calculate the histogram of pos, weighted by weights. Multi-threaded implementation.
- Parameters:
pos – 2Theta array
weights – array with intensities
bins – number of output bins
pixelSize_in_Pos – size of a pixel in 2theta: DEACTIVATED
nthread – unused (OpenMP is disabled)
empty – value given to empty bins
normalization_factor – divide the result by this value
- Returns:
2theta, I, weighted histogram, raw histogram
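A minimal sketch of histogram() on synthetic data (the 2theta and intensity arrays are placeholders); the 4-tuple unpacking follows the return description above:

import numpy as np
from pyFAI.ext.histogram import histogram

rng = np.random.default_rng(0)
tth = rng.uniform(0.0, 30.0, size=100_000)                      # radial positions
intensity = rng.poisson(100, size=tth.size).astype(np.float64)  # pixel intensities

bin_centers, I, weighted, unweighted = histogram(tth, intensity, bins=500)
# I is the mean intensity per bin, i.e. weighted / unweighted where unweighted > 0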
- pyFAI.ext.histogram.histogram1d_engine(radial, int npt, raw, dark=None, flat=None, solidangle=None, polarization=None, absorption=None, mask=None, dummy=None, delta_dummy=None, normalization_factor=1.0, data_t empty=0.0, split_result=False, variance=None, dark_variance=None, error_model=ErrorModel.NO, radial_range=None)
Implementation of rebinning engine (without splitting) using pure cython histograms
- Parameters:
radial – radial position 2D array (same shape as raw)
npt – number of points to integrate over
raw – 2D array with the raw signal
dark – array containing the value of the dark noise, to be subtracted
flat – Array containing the flatfield image. It is also checked for dummies if relevant.
solidangle – the value of the solid angle. This processing may be performed during the rebinning instead; left for compatibility
polarization – Correction for polarization of the incident beam
absorption – Correction for absorption in the sensor volume
mask – 2d array of int/bool: non-null where data should be ignored
dummy – value of invalid data
delta_dummy – precision for invalid data
normalization_factor – final value is divided by this
empty – value to be given for empty bins
variance – provide an estimation of the variance
dark_variance – provide an estimation of the variance of the dark_current,
error_model – One of the several ErrorModel, only variance and Poisson are implemented.
NaN values are always considered invalid.
If neither empty nor dummy is provided, empty pixels are left at 0.
- Nota: “azimuthal_range” has to be integrated into the mask prior to the call of this function.
- Returns:
Integrate1dtpl named tuple containing: position, average intensity, std on intensity, plus the various histograms on signal, variance, normalization and count.
- pyFAI.ext.histogram.histogram2d(pos0, pos1, bins, weights, split=False, nthread=None, data_t empty=0.0, data_t normalization_factor=1.0)
Calculate 2D histogram of pos0,pos1 weighted by weights
- Parameters:
pos0 – 2Theta array
pos1 – Chi array
weights – array with intensities
bins – number of output bins int or 2-tuple of int
split – pixel splitting is disabled in histogram
nthread – unused! See the nota below.
empty – value given to empty bins
normalization_factor – divide the result by this value
- Returns:
I, bin_centers0, bin_centers1, weighted histogram(2D), unweighted histogram (2D)
Nota: the histogram itself is not parallelized, as the parallel version is slower than the serial one (cache contention)
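A minimal sketch of histogram2d() on synthetic positions (placeholders); the return order follows the description above:

import numpy as np
from pyFAI.ext.histogram import histogram2d

rng = np.random.default_rng(0)
tth = rng.uniform(0.0, 30.0, size=100_000)            # radial positions
chi = rng.uniform(-np.pi, np.pi, size=tth.size)       # azimuthal positions
intensity = rng.poisson(100, size=tth.size).astype(np.float64)

I, bins_tth, bins_chi, weighted, unweighted = histogram2d(tth, chi, (100, 36), intensity)
print(I.shape)   # 2D map, one axis per (radial, azimuthal) binning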
- pyFAI.ext.histogram.histogram2d_engine(radial, azimuthal, bins, raw, dark=None, flat=None, solidangle=None, polarization=None, absorption=None, mask=None, dummy=None, delta_dummy=None, double normalization_factor=1.0, data_t empty=0.0, variance=None, dark_variance=None, int error_model=ErrorModel.NO, radial_range=None, azimuth_range=None, bool allow_radial_neg=False, bool chiDiscAtPi=1, bool clip_pos1=True)
Implementation of 2D rebinning engine using pure numpy histograms
- Parameters:
radial – radial position 2D array (same shape as raw)
azimuthal – azimuthal position 2D array (same shape as raw)
bins – number of points to integrate over in (radial, azimuthal) dimensions
raw – 2D array with the raw signal
dark – array containing the value of the dark noise, to be subtracted
flat – Array containing the flatfield image. It is also checked for dummies if relevant.
solidangle – the value of the solid angle. This processing may be performed during the rebinning instead; left for compatibility
polarization – Correction for polarization of the incident beam
absorption – Correction for absorption in the sensor volume
mask – 2d array of int/bool: non-null where data should be ignored
dummy – value of invalid data
delta_dummy – precision for invalid data
normalization_factor – final value is divided by this
empty – value to be given for empty bins
variance – provide an estimation of the variance
dark_variance – provide an estimation of the variance of the dark_current,
error_model – set to “poisson” to assume the detector is Poissonian and variance = raw + dark
radial_range – enforce boundaries in the radial dimension, 2-tuple with lower and upper bound
azimuth_range – enforce boundaries in the azimuthal dimension, 2-tuple with lower and upper bound
allow_radial_neg – clip negative radial positions (can a dimension be negative?)
chiDiscAtPi – set the azimuthal discontinuity at π (True) or at 0/2π (False)
clip_pos1 – clip the azimuthal range to [-π π] (or [0 2π]), set to False to deactivate behavior.
NaN values are always considered invalid.
If neither empty nor dummy is provided, empty pixels are left at 0.
- Nota: “azimuthal_range” has to be integrated into the mask prior to the call of this function.
- Returns:
Integrate1dtpl named tuple containing: position, average intensity, std on intensity, plus the various histograms on signal, variance, normalization and count.
- Returns:
Integrate2dtpl namedtuple: “radial azimuthal intensity error signal variance normalization count”
- pyFAI.ext.histogram.histogram_preproc(pos, weights, int bins=100, bin_range=None, int error_model=0)
Calculate the histogram of pos weighted by weights in the case where the data have been preprocessed, i.e. each data point contains (signal, normalization), (signal, variance, normalization) or (signal, variance, normalization, count)
- Parameters:
pos – radial array
weights – array with intensities, variance, normalization and count
bins – number of output bins
bin_range – 2-tuple with lower and upper bound for the valid position range.
error_model – 0: no error propagation, 1: variance, 2: Poisson, 3: azimuthal
- Returns:
5 histograms concatenated, radial position (bin center)
- pyFAI.ext.histogram.recenter(position_t[:, ::1] pixel, bool chiDiscAtPi=1)
This function checks whether the pixel lies on the azimuthal discontinuity via the sign of its algebraic area, and recenters the corner coordinates in a consistent manner so that all azimuthal coordinates end up in the same range.
Nota: the returned area is negative, since a positive area indicates that the pixel is on the discontinuity.
- Parameters:
pixel – 4x2 array with radius, azimuth for the 4 corners. MODIFIED IN PLACE !!!
chiDiscAtPi – set to 0 to indicate the range goes from 0-2π instead of the default -π:π
- Returns:
signed area (approximate & negative)
pyFAI.ext.inpainting module
Cython module for inpainting images.
- class pyFAI.ext.inpainting.Bilinear
Bases:
object
Bilinear interpolator, used for finding a maximum with sub-pixel precision.
Instance attributes are defined in the pxd file.
- cp_local_maxi(self, Py_ssize_t x) Py_ssize_t
- data
- f_cy(self, x)
Function -f((y, x)) where f is a continuous function and (y, x) are pixel coordinates. Pixels outside the image are given an arbitrarily high value to help the minimizer.
- Parameters:
x – 2-tuple of float
- Returns:
Interpolated negative signal from the image (negated so that a minimizer can be used to search for peaks)
- height
- local_maxi(self, x)
Return the local maximum with sub-pixel refinement.
Sub-pixel refinement: second-order Taylor expansion of the function where the first derivative is null
\[\delta = x - i = -H^{-1} \cdot \nabla f\]If the Hessian \(H\) is singular or \(|\delta|>1\): use a center of mass.
- Parameters:
x – 2-tuple of integers
- Returns:
2-tuple of float with the nearest local maximum
- many(self, x)
Call the bilinear interpolator on many points…
- Parameters:
x – array of points of shape (2, n), like (array_of_y, array_of_x)
- Returns:
array of shape n
- maxi
- mini
- width
- pyFAI.ext.inpainting.largest_width(int8_t[:, :] image)
Calculate the width of the largest part in the binary image. Nota: this is along the horizontal direction.
- pyFAI.ext.inpainting.polar_inpaint(signatures, args, kwargs, defaults)
Replace the values flagged in topaint with plausible values. If mask is provided, those values are known to be invalid and are not re-calculated.
- Parameters:
img – image in polar coordinates
topaint – pixels which deserve inpainting
mask – pixels which are masked and do not need to be inpainted
dummy – value for masked pixels
- Returns:
image with missing values interpolated from neighbors.
- pyFAI.ext.inpainting.polar_interpolate(signatures, args, kwargs, defaults)
Perform the bilinear interpolation from polar data into the initial array data
- Parameters:
data – image with holes, of a given shape
mask – array with the holes marked
radial – 2D array with the radial position
polar – 2D radial/azimuthal averaged image (continuous). shape is pshape
radial_pos – position of the radial bins (evenly spaced, size = pshape[-1])
azim_pos – position of the azimuthal bins (evenly spaced, size = pshape[0])
- Returns:
inpainted image
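A minimal sketch of polar_inpaint() on a synthetic polar image; the int8 dtype for the flag arrays mirrors the conventions used elsewhere in this package and is an assumption:

import numpy as np
from pyFAI.ext.inpainting import polar_inpaint

polar = np.random.random((360, 500)).astype(np.float32)  # azimuthal x radial image
topaint = np.zeros(polar.shape, dtype=np.int8)
topaint[100:110, 200:220] = 1                            # region to reconstruct
mask = np.zeros(polar.shape, dtype=np.int8)              # nothing permanently masked

filled = polar_inpaint(polar, topaint, mask)
# the flagged region is interpolated from its valid neighbours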
pyFAI.ext.invert_geometry module
Module providing inversion transformation from pixel coordinate to radial/azimuthal coordinate.
- class pyFAI.ext.invert_geometry.InvertGeometry
Bases:
object
Class to invert the geometry: take the nearest pixel, then use linear interpolation (a usage sketch follows this class listing).
- Parameters:
radius – 2D array with radial position
angle – 2D array with azimuthal position
Call it with (r, chi) to retrieve the pixel it comes from.
- many(self, radial, azimuthal, bool refined=True)
Interpolate many points for the …
- Parameters:
radial – array of radial positions
azimuthal – array of azimuthal positions, same shape as radial
refined – if True: use linear interpolation, else provide nearest pixel
- Returns:
array of shape (n, 2) with the (y, x) coordinates
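A minimal sketch of InvertGeometry: the radius/angle maps would normally come from an AzimuthalIntegrator; here they are synthetic placeholders:

import numpy as np
from pyFAI.ext.invert_geometry import InvertGeometry

y, x = np.mgrid[0:256, 0:256]
radius = np.sqrt((y - 128.0) ** 2 + (x - 128.0) ** 2)  # 2D radial map
angle = np.arctan2(y - 128.0, x - 128.0)               # 2D azimuthal map

inv = InvertGeometry(radius, angle)
print(inv(50.0, 0.5))    # (y, x) of the pixel whose (r, chi) best matches the request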
- pyFAI.ext.invert_geometry.calc_area(I1, I2, slope, intercept)
Calculate the area between I1 and I2 of a line with a given slope & intercept
- pyFAI.ext.invert_geometry.clip(value, min_val, int max_val)
Limits the value to bounds
- Parameters:
value – the value to clip
min_val – the lower bound
max_val – the upper bound
- Returns:
clipped value in the requested range
- pyFAI.ext.invert_geometry.recenter(position_t[:, ::1] pixel, bool chiDiscAtPi=1)
This function checks whether the pixel lies on the azimuthal discontinuity via the sign of its algebraic area, and recenters the corner coordinates in a consistent manner so that all azimuthal coordinates end up in the same range.
Nota: the returned area is negative, since a positive area indicates that the pixel is on the discontinuity.
- Parameters:
pixel – 4x2 array with radius, azimuth for the 4 corners. MODIFIED IN PLACE !!!
chiDiscAtPi – set to 0 to indicate the range goes from 0-2π instead of the default -π:π
- Returns:
signed area (approximate & negative)
pyFAI.ext.morphology module
This module provides a couple of binary morphology operations on images.
They are also implemented in scipy.ndimage in the general case, but not as fast.
- pyFAI.ext.morphology.binary_dilation(int8_t[:, ::1] image, float radius=1.0)
Return fast binary morphological dilation of an image.
Morphological dilation sets a pixel at (i,j) to the maximum over all pixels in the neighborhood centered at (i,j). Dilation enlarges bright regions and shrinks dark regions.
- Parameters:
image – ndarray
radius – float
- Returns:
ndarray
- pyFAI.ext.morphology.binary_erosion(int8_t[:, ::1] image, float radius=1.0)
Return fast binary morphological erosion of an image.
Morphological erosion sets a pixel at (i,j) to the minimum over all pixels in the neighborhood centered at (i,j). Erosion shrinks bright regions and enlarges dark regions.
- Parameters:
image – ndarray
radius – float
- Returns:
ndarray
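A short sketch of the two helpers; the int8, C-contiguous image layout is taken from the signatures above:

import numpy as np
from pyFAI.ext.morphology import binary_dilation, binary_erosion

img = np.zeros((32, 32), dtype=np.int8)
img[10:20, 10:20] = 1                       # a small square of "on" pixels

grown = binary_dilation(img, radius=1.0)    # bright region grows
shrunk = binary_erosion(img, radius=1.0)    # bright region shrinks
print(int(img.sum()), int(grown.sum()), int(shrunk.sum()))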
pyFAI.ext.preproc module
Contains a preprocessing function in charge of the dark-current subtraction, flat-field normalization… taking care of masked values and normalization.
- pyFAI.ext.preproc.calc_area(I1, I2, slope, intercept)
Calculate the area between I1 and I2 of a line with a given slope & intercept
- pyFAI.ext.preproc.clip(value, min_val, int max_val)
Limits the value to bounds
- Parameters:
value – the value to clip
min_val – the lower bound
max_val – the upper bound
- Returns:
clipped value in the requested range
- pyFAI.ext.preproc.preproc(raw, dark=None, flat=None, solidangle=None, polarization=None, absorption=None, mask=None, dummy=None, delta_dummy=None, normalization_factor=None, empty=None, split_result=False, variance=None, dark_variance=None, error_model=ErrorModel.NO, dtype=numpy.float32, out=None)
Common preprocessing step for all integrators
- Parameters:
raw – raw value, as a numpy array, 1D or 2D
mask – array non null where data should be ignored
dummy – value of invalid data
delta_dummy – precision for invalid data
dark – array containing the value of the dark noise, to be subtracted
flat – Array containing the flatfield image. It is also checked for dummies if relevant.
solidangle – the value of the solid angle. This processing may be performed during the rebinning instead; left for compatibility
polarization – Correction for polarization of the incident beam
absorption – Correction for absorption in the sensor volume
normalization_factor – final value is divided by this
empty – value to be given for empty bins
variance – variance of the data
dark_variance – variance of the dark
error_model – set to “poisson” to consider the variance equal to the raw signal (minimum 1)
dtype – type for working: float32 or float64
All calculations are performed with the dtype precision.
NaN values are always considered invalid.
If neither empty nor dummy is provided, empty pixels are 0.
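A minimal sketch of preproc() with synthetic correction arrays (all placeholders); the division by flat and the handling of masked pixels follow the parameter descriptions above:

import numpy as np
from pyFAI.ext.preproc import preproc

raw = np.random.poisson(1000, size=(256, 256)).astype(np.float32)
dark = np.full(raw.shape, 10.0, dtype=np.float32)
flat = np.ones_like(raw)
mask = np.zeros(raw.shape, dtype=np.int8)
mask[:5, :] = 1                                   # ignore the first rows

corrected = preproc(raw, dark=dark, flat=flat, mask=mask, empty=-1.0)
# roughly (raw - dark) / flat, with ignored pixels set to the `empty` value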
- pyFAI.ext.preproc.recenter(position_t[:, ::1] pixel, bool chiDiscAtPi=1)
This function checks whether the pixel lies on the azimuthal discontinuity via the sign of its algebraic area, and recenters the corner coordinates in a consistent manner so that all azimuthal coordinates end up in the same range.
Nota: the returned area is negative, since a positive area indicates that the pixel is on the discontinuity.
- Parameters:
pixel – 4x2 array with radius, azimuth for the 4 corners. MODIFIED IN PLACE !!!
chiDiscAtPi – set to 0 to indicate the range goes from 0-2π instead of the default -π:π
- Returns:
signed area (approximate & negative)
pyFAI.ext.reconstruct module
Cython module to reconstruct the masked values of an image.
It’s a simple inpainting module for reconstructing the missing part of an image (masked) to be able to use more common algorithms.
- pyFAI.ext.reconstruct.reconstruct(data, mask=None, dummy=None, delta_dummy=None)
Reconstruct the missing part of an image (tries to be continuous)
- Parameters:
data – the input image
mask – where data should be reconstructed.
dummy – value of the dummy (masked out) data
delta_dummy – precision for dummy values
- Returns:
reconstructed image.
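A minimal sketch of reconstruct() on a synthetic image with a rectangular hole:

import numpy as np
from pyFAI.ext.reconstruct import reconstruct

img = np.random.random((128, 128)).astype(np.float32)
mask = np.zeros(img.shape, dtype=np.int8)
mask[40:60, 40:60] = 1                     # region considered missing

filled = reconstruct(img, mask=mask)
print(filled[50, 50])                      # value interpolated from the hole border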
pyFAI.ext.relabel module
Module providing features to relabel regions.
It is used to flag regions from the largest to the smallest.
- pyFAI.ext.relabel.countThem(label, data, blured)
Count the pixels of each labeled zone and extract per-zone statistics.
- Parameters:
label – 2D array containing labeled zones
data – 2D array containing the raw data
blured – 2D array containing the blurred data
- Returns:
2D arrays containing:
count of pixels in the labeled zone: (label == index).sum()
max of data in that zone: data[label == index].max()
max of blurred in that zone: blured[label == index].max()
data - blurred where data is max.
pyFAI.ext.sparse_builder module
- class pyFAI.ext.sparse_builder.SparseBuilder(nbin, mode='block', block_size=512, heap_size=0)
Bases:
object
This class provides an API to build a sparse matrix from binned data (a usage sketch follows this class entry).
It provides different internal structures so that it can be used in different contexts: it can favour fast insertion, or speed up the conversion to the CSR format.
- Parameters:
nbin (int) – number of bins to store
mode (str) –
Internal structure used to store the data:
- “pack”: Allocate a heap of heap_size and feed it with tuples (bin, index, value).
The insert is very fast; the conversion to CSR is done using a sequential read and a random write.
- “heaplist”: Allocate a heap of heap_size and feed it with a linked list per bin
containing (index, value, next). The insert is very fast; the conversion to CSR is done using a random read and a sequential write.
- “block”: Allocate block_size per bin and feed it with values and indices.
The conversion to CSR is done sequentially using block copies. The heap_size should be a multiple of the block_size. If the heap_size is zero, blocks are allocated one by one without management.
- “stdlist”: Use a standard C++ list. It is kept as a reference for testing.
block_size (Union[None, int]) – Number of elements in a block, if used. If more space is needed, additional blocks are allocated on the fly.
heap_size (Union[None, int]) – Number of elements in the global memory management. This system allocates memory a single time for many needs, which reduces the overhead of memory allocation. If set to None or 0, this management is disabled.
- __init__(*args, **kwargs)
- get_bin_coefs(self, bin_id)
Returns the values stored in a specific bin.
- Parameters:
bin_id (int) – Index of the bin
- Return type:
numpy.array
- get_bin_indexes(self, bin_id)
Returns the indices stored in a specific bin.
- Parameters:
bin_id (int) – Index of the bin
- Return type:
numpy.array
- get_bin_size(self, bin_id)
Returns the size of a specific bin.
- Parameters:
bin_id (int) – Number of the bin requested
- Return type:
int
- get_bin_sizes(self)
Returns the size of all the bins.
- Return type:
numpy.ndarray(dtype=int)
- insert(self, bin_id, index, coef)
Insert an index and a value into a specific bin.
- Parameters:
bin_id (int) – Index of the bin
index (int) – Index of the data to store
coef (int) – Value of the data to store
- mode(self)
Returns the storage mode used by the builder.
- Return type:
str
- size(self)
Returns the number of elements contained in the structure.
- Return type:
int
- to_csr(self)
Returns a CSR representation from the stored data.
The first array contains all floating-point values, sorted by bin number.
The second array contains all indices, sorted by bin number.
- The third array is a lookup table from the bin index to the first index in the two previous
arrays: array[10 + 0] contains the index of the first element of bin 10, and array[10 + 1] - 1 that of the last one. This array always starts with 0 and contains one more element than the number of bins.
- Return type:
Tuple(numpy.ndarray, numpy.ndarray, numpy.ndarray)
- Returns:
A tuple containing values, indices and bin indexes
- to_lut(self)
Returns a LUT representation from the stored data.
The first array contains all floating-point values, sorted by bin number.
The second array contains all indices, sorted by bin number.
- The third array is a lookup table from the bin index to the first index in the two previous
arrays: array[10 + 0] contains the index of the first element of bin 10, and array[10 + 1] - 1 that of the last one. This array always starts with 0 and contains one more element than the number of bins.
- Return type:
numpy.ndarray
- Returns:
A 2D array tuple containing values, indices and bin indexes
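A minimal usage sketch of SparseBuilder: the bin assignments below are arbitrary demo values, not a real pixel-splitting scheme:

import numpy as np
from pyFAI.ext.sparse_builder import SparseBuilder

builder = SparseBuilder(4, mode="block", block_size=512)  # 4 output bins

builder.insert(0, 10, 0.5)      # pixel 10 contributes to bin 0 with weight 0.5
builder.insert(0, 11, 0.5)      # pixel 11 contributes to bin 0 with weight 0.5
builder.insert(2, 42, 1.0)      # pixel 42 fully belongs to bin 2

print(builder.get_bin_sizes())  # expected [2 0 1 0]
data, indices, indptr = builder.to_csr()
print(data, indices, indptr)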
- pyFAI.ext.sparse_builder.calc_area(I1, I2, slope, intercept)
Calculate the area between I1 and I2 of a line with a given slope & intercept
- pyFAI.ext.sparse_builder.clip(value, min_val, int max_val)
Limits the value to bounds
- Parameters:
value – the value to clip
min_val – the lower bound
max_val – the upper bound
- Returns:
clipped value in the requested range
- pyFAI.ext.sparse_builder.feed_histogram(SparseBuilder builder, pos, weights, int bins=100, double empty=0.0, double normalization_factor=1.0)
Missing docstring for feed_histogram. Is this just a demo?
- warning:
Unused argument ‘empty’
Unused argument ‘normalization_factor’
- pyFAI.ext.sparse_builder.recenter(position_t[:, ::1] pixel, bool chiDiscAtPi=1)
This function checks whether the pixel lies on the azimuthal discontinuity via the sign of its algebraic area, and recenters the corner coordinates in a consistent manner so that all azimuthal coordinates end up in the same range.
Nota: the returned area is negative, since a positive area indicates that the pixel is on the discontinuity.
- Parameters:
pixel – 4x2 array with radius, azimuth for the 4 corners. MODIFIED IN PLACE !!!
chiDiscAtPi – set to 0 to indicate the range goes from 0-2π instead of the default -π:π
- Returns:
signed area (approximate & negative)
pyFAI.ext.sparse_utils module
Common Look-Up table/CSR object creation tools and conversion
- class pyFAI.ext.sparse_utils.ArrayBuilder
Bases:
object
Sparse matrix builder: deprecated, please use sparse_builder
- append(self, line, col, value)
Python wrapper for _append in cython
- as_CSR(self)
- as_LUT(self)
- nbytes
Calculate the actual size of the object (in bytes)
- size
- pyFAI.ext.sparse_utils.CSR_to_LUT(data, indices, indptr)
Conversion between sparse matrix representations
- Parameters:
data – coef of the sparse matrix as 1D array
indices – index of the col position in input array as 1D array
indptr – index of the start of the row in the indices array
- Returns:
the same matrix as LUT representation
- Return type:
record array of (int idx, float coef)
- class pyFAI.ext.sparse_utils.CsrIntegrator(tuple lut, int image_size, data_t empty=0.0)
Bases:
object
Abstract class which implements only the integrator…
Uses the CSR (Compressed Sparse Row) format, with main attributes:
- nnz: number of non-zero elements
- data: coefficients of the matrix in a 1D vector of float32
- indices: column index position for the data (same size as data)
- indptr: row pointer indicating the start of a given row; length nrow + 1
Nota: nnz = indptr[-1] = len(indices) = len(data)
A usage sketch follows this class listing.
- __init__()
Constructor for a CSR generic integrator
- Parameters:
lut – Sparse matrix in CSR format, tuple of 3 arrays with (data, indices, indptr)
size – input image size
empty – value for empty pixels
- data
- empty
empty: ‘data_t’
- indices
- indptr
- input_size
- integrate(weights, dummy, delta_dummy, dark, flat, solidAngle, polarization, normalization_factor, coef_power)
CsrIntegrator.integrate_legacy(self, weights, dummy=None, delta_dummy=None, dark=None, flat=None, solidAngle=None, polarization=None, double normalization_factor=1.0, int coef_power=1)
Actually perform the integration which in this case looks more like a matrix-vector product
Deprecated version !
- Parameters:
weights (ndarray) – input image
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
normalization_factor – divide the valid result by this value
coef_power – set to 2 for variance propagation, leave to 1 for mean calculation
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
4-tuple of ndarrays
- integrate_legacy(self, weights, dummy=None, delta_dummy=None, dark=None, flat=None, solidAngle=None, polarization=None, double normalization_factor=1.0, int coef_power=1)
Actually perform the integration which in this case looks more like a matrix-vector product
Deprecated version !
- Parameters:
weights (ndarray) – input image
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
normalization_factor – divide the valid result by this value
coef_power – set to 2 for variance propagation, leave to 1 for mean calculation
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
4-tuple of ndarrays
- integrate_ng(self, weights, variance=None, error_model=ErrorModel.NO, dummy=None, delta_dummy=None, dark=None, flat=None, solidangle=None, polarization=None, absorption=None, data_t normalization_factor=1.0)
- Actually perform the integration, which in this case consists of:
- calculating the signal, variance and normalization parts
- performing the integration, which is here a matrix-vector product
- Parameters:
weights (ndarray) – input image
variance (ndarray) – the variance associated with the image
error_model – enum ErrorModel
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
absorption (ndarray) – Apparent efficiency of a pixel due to parallax effect
normalization_factor – divide the valid result by this value
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
Integrate1dtpl 4-named-tuple of ndarrays
- nnz
- output_size
- preprocessed
- sigma_clip(self, weights, dark=None, dummy=None, delta_dummy=None, variance=None, dark_variance=None, flat=None, solidangle=None, polarization=None, absorption=None, bool safe=True, error_model=ErrorModel.NO, data_t normalization_factor=1.0, double cutoff=0.0, int cycle=5)
Perform a sigma-clipping iterative filter along each row. See the documentation of scipy.stats.sigmaclip for more details.
If the error model is “azimuthal”, the variance is the variance within a bin, which is refined at each iteration; this can be costly!
Else, the error is propagated according to:
\[signal = raw - dark\]\[variance = variance + dark\_variance\]\[normalization = normalization\_factor \cdot (flat \cdot solidangle \cdot polarization \cdot absorption)\]\[count = \text{number of contributing pixels}\]Integration is performed using the CSR representation of the look-up table on all arrays: signal, variance, normalization and count.
The threshold can automatically be calculated from Chauvenet’s criterion: sqrt(2*log(nbpix/sqrt(2.0f*pi)))
- Parameters:
weights (ndarray) – input image
dark (ndarray) – array with the dark-current value to be subtracted (if any)
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
variance (ndarray) – the variance associated with the image
dark_variance (ndarray) – the variance associated with the dark
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
absorption (ndarray) – Apparent efficiency of a pixel due to parallax effect
safe – set to True to save some tests
error_model – set to “poissonian” to use signal as variance (minimum 1), “azimuthal” to use the variance in a ring.
normalization_factor – divide the valid result by this value
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
Integrate1dtpl 4-named-tuple of ndarrays
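A heavily simplified sketch of CsrIntegrator with a hand-written CSR matrix averaging two groups of three pixels each. In practice the CSR matrix comes from the pixel-splitting classes (e.g. pyFAI.ext.splitBBoxCSR.HistoBBox1d); this direct construction is only an assumption for illustration:

import numpy as np
from pyFAI.ext.sparse_utils import CsrIntegrator

# 2 bins: bin 0 gathers pixels 0-2, bin 1 gathers pixels 3-5, all with coefficient 1
data = np.ones(6, dtype=np.float32)
indices = np.arange(6, dtype=np.int32)
indptr = np.array([0, 3, 6], dtype=np.int32)

integrator = CsrIntegrator((data, indices, indptr), 6, empty=0.0)
img = np.array([1, 2, 3, 10, 20, 30], dtype=np.float32)
result = integrator.integrate_ng(img)
print(result.intensity)          # expected approximately [2., 20.]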
- pyFAI.ext.sparse_utils.LUT_to_CSR(lut)
Conversion between sparse matrix representations
- Parameters:
lut – Look-up table as 2D array of (int idx, float coef)
- Returns:
the same matrix as CSR representation
- Return type:
3-tuple of numpy array (data, indices, indptr)
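A small sketch of the two conversion helpers on a hand-written CSR matrix (placeholder values):

import numpy as np
from pyFAI.ext.sparse_utils import CSR_to_LUT, LUT_to_CSR

# CSR matrix with 2 bins: 3 pixels in bin 0, 2 pixels in bin 1
data = np.array([0.5, 0.5, 1.0, 0.25, 0.75], dtype=np.float32)
indices = np.array([0, 1, 2, 3, 4], dtype=np.int32)
indptr = np.array([0, 3, 5], dtype=np.int32)

lut = CSR_to_LUT(data, indices, indptr)     # record array of (idx, coef), one row per bin
print(lut.shape)                            # (2, 3): 2 bins, widest bin holds 3 entries
data2, indices2, indptr2 = LUT_to_CSR(lut)  # back to the CSR representation
print(indptr2)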
- class pyFAI.ext.sparse_utils.LutIntegrator(lut_t[:, ::1] lut, int image_size, data_t empty=0.0)
Bases:
object
Abstract class which implements only the integrator…
Uses the LUT format, with main attributes:
- width: width of the LUT
- data: coefficients of the matrix in a 1D vector of float32
- indices: column index position for the data (same size as data)
- indptr: row pointer indicating the start of a given row; length nrow + 1
Nota: nnz = indptr[-1] = len(indices) = len(data)
- __init__()
Constructor for a LUT generic integrator
- Parameters:
lut – sparse matrix in LUT format: 2D record array of (idx, coef)
size – input image size
empty – value for empty pixels
- empty
empty: ‘data_t’
- input_size
- integrate(weights, dummy, delta_dummy, dark, flat, solidAngle, polarization, normalization_factor, coef_power)
LutIntegrator.integrate_legacy(self, weights, dummy=None, delta_dummy=None, dark=None, flat=None, solidAngle=None, polarization=None, double normalization_factor=1.0, int coef_power=1)
Actually perform the integration which in this case looks more like a matrix-vector product
- Parameters:
weights (ndarray) – input image
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
normalization_factor – divide the valid result by this value
coef_power – raise the coefficients to the given power: 2 for variance, 1 for mean
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
4-tuple of ndarrays
- integrate_legacy(self, weights, dummy=None, delta_dummy=None, dark=None, flat=None, solidAngle=None, polarization=None, double normalization_factor=1.0, int coef_power=1)
Actually perform the integration which in this case looks more like a matrix-vector product
- Parameters:
weights (ndarray) – input image
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
normalization_factor – divide the valid result by this value
coef_power – raise the coefficients to the given power: 2 for variance, 1 for mean
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
4-tuple of ndarrays
- integrate_ng(self, weights, variance=None, error_model=ErrorModel.NO, dummy=None, delta_dummy=None, dark=None, flat=None, solidangle=None, polarization=None, absorption=None, data_t normalization_factor=1.0)
- Actually perform the integration, which in this case consists of:
- calculating the signal, variance and normalization parts
- performing the integration, which is here a matrix-vector product
- Parameters:
weights (ndarray) – input image
variance (ndarray) – the variance associated with the image
error_model – enum ErrorModel
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
absorption (ndarray) – Apparent efficiency of a pixel due to parallax effect
normalization_factor – divide the valid result by this value
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
Integrate1dtpl 4-named-tuple of ndarrays
- lut
Return a copy of the LUT as an actual numpy array
- lut_size
- output_size
- preprocessed
- class pyFAI.ext.sparse_utils.Vector
Bases:
object
Variable size vector: deprecated, please use sparse_builder
- allocated
- append(self, idx, coef)
Python implementation of _append in cython
- get_data(self)
- nbytes
Calculate the actual size of the object (in bytes)
- size
- pyFAI.ext.sparse_utils.calc_area(I1, I2, slope, intercept)
Calculate the area between I1 and I2 of a line with a given slope & intercept
- pyFAI.ext.sparse_utils.clip(value, min_val, int max_val)
Limits the value to bounds
- Parameters:
value – the value to clip
min_val – the lower bound
max_val – the upper bound
- Returns:
clipped value in the requested range
- pyFAI.ext.sparse_utils.recenter(position_t[:, ::1] pixel, bool chiDiscAtPi=1)
This function checks whether the pixel lies on the azimuthal discontinuity via the sign of its algebraic area, and recenters the corner coordinates in a consistent manner so that all azimuthal coordinates end up in the same range.
Nota: the returned area is negative, since a positive area indicates that the pixel is on the discontinuity.
- Parameters:
pixel – 4x2 array with radius, azimuth for the 4 corners. MODIFIED IN PLACE !!!
chiDiscAtPi – set to 0 to indicate the range goes from 0-2π instead of the default -π:π
- Returns:
signed area (approximate & negative)
pyFAI.ext.splitBBox module
Calculate histograms of pos0 (tth) weighted by intensity.
Splitting is done on the pixel’s bounding box, similar to fit2D.
- pyFAI.ext.splitBBox.calc_area(I1, I2, slope, intercept)
Calculate the area between I1 and I2 of a line with a given slope & intercept
- pyFAI.ext.splitBBox.clip(value, min_val, int max_val)
Limits the value to bounds
- Parameters:
value – the value to clip
min_val – the lower bound
max_val – the upper bound
- Returns:
clipped value in the requested range
- pyFAI.ext.splitBBox.histoBBox1d(weights, pos0, delta_pos0, pos1=None, delta_pos1=None, Py_ssize_t bins=100, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, dark=None, flat=None, solidangle=None, polarization=None, empty=None, double normalization_factor=1.0, int coef_power=1, **back_compat_kwargs)
Calculates histogram of pos0 (tth) weighted by weights
Splitting is done on the pixel’s bounding box like fit2D
- Parameters:
weights – array with intensities
pos0 – 1D array with pos0: tth or q_vect
delta_pos0 – 1D array with delta pos0: max center-corner distance
pos1 – 1D array with pos1: chi
delta_pos1 – 1D array with max pos1: max center-corner distance, unused !
bins – number of output bins
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels & value of “no good” pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
dark – array (of float32) with dark noise to be subtracted (or None)
flat – array (of float32) with flat-field image
solidangle – array (of float32) with solid angle corrections
polarization – array (of float32) with polarization corrections
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the result by this value
coef_power – set to 2 for variance propagation, leave to 1 for mean calculation
- Returns:
2theta, I, weighted histogram, unweighted histogram
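A minimal sketch of histoBBox1d() on synthetic positions; in pyFAI the positions and their half-widths come from the detector geometry:

import numpy as np
from pyFAI.ext.splitBBox import histoBBox1d

rng = np.random.default_rng(0)
npix = 100_000
tth = rng.uniform(0.0, 30.0, size=npix).astype(np.float32)   # pixel-center positions
dtth = np.full(npix, 0.01, dtype=np.float32)                 # center-to-corner distance
img = rng.poisson(100, size=npix).astype(np.float32)         # intensities

bin_centers, I, weighted, unweighted = histoBBox1d(img, tth, dtth, bins=500)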
- pyFAI.ext.splitBBox.histoBBox1d_engine(weights, pos0, delta_pos0, pos1=None, delta_pos1=None, Py_ssize_t bins=100, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, int error_model=ErrorModel.NO, dark=None, flat=None, solidangle=None, polarization=None, bool allow_pos0_neg=False, data_t empty=0.0, double normalization_factor=1.0)
Calculates histogram of pos0 (tth) weighted by weights
Splitting is done on the pixel’s bounding box like fit2D. New implementation with variance propagation.
- Parameters:
weights – array with intensities
pos0 – 1D array with pos0: tth or q_vect
delta_pos0 – 1D array with delta pos0: max center-corner distance
pos1 – 1D array with pos1: chi
delta_pos1 – 1D array with max pos1: max center-corner distance, unused !
bins – number of output bins
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels & value of “no good” pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
dark – array (of float32) with dark noise to be subtracted (or None)
flat – array (of float32) with flat-field image
solidangle – array (of float32) with solid angle corrections
polarization – array (of float32) with polarization corrections
allow_pos0_neg – allow the radial dimension to be negative (useful in log-scale!)
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the result by this value
- Returns:
namedtuple with “position intensity error signal variance normalization count”
- pyFAI.ext.splitBBox.histoBBox1d_ng(weights, pos0, delta_pos0, pos1=None, delta_pos1=None, bins=100, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, error_model=0, dark=None, flat=None, solidangle=None, polarization=None, allow_pos0_neg=False, empty=0.0, normalization_factor=1.0)
histoBBox1d_engine(weights, pos0, delta_pos0, pos1=None, delta_pos1=None, Py_ssize_t bins=100, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, int error_model=ErrorModel.NO, dark=None, flat=None, solidangle=None, polarization=None, bool allow_pos0_neg=False, data_t empty=0.0, double normalization_factor=1.0)
Calculates histogram of pos0 (tth) weighted by weights
Splitting is done on the pixel’s bounding box like fit2D. New implementation with variance propagation.
- Parameters:
weights – array with intensities
pos0 – 1D array with pos0: tth or q_vect
delta_pos0 – 1D array with delta pos0: max center-corner distance
pos1 – 1D array with pos1: chi
delta_pos1 – 1D array with max pos1: max center-corner distance, unused !
bins – number of output bins
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels & value of “no good” pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
dark – array (of float32) with dark noise to be subtracted (or None)
flat – array (of float32) with flat-field image
solidangle – array (of float32) with solid angle corrections
polarization – array (of float32) with polarization corrections
allow_pos0_neg – allow the radial dimension to be negative (useful in log-scale!)
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the result by this value
- Returns:
namedtuple with “position intensity error signal variance normalization count”
- pyFAI.ext.splitBBox.histoBBox2d(weights, pos0, delta_pos0, pos1, delta_pos1, bins=(100, 36), pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, dark=None, flat=None, solidangle=None, polarization=None, bool allow_pos0_neg=0, bool chiDiscAtPi=1, empty=0.0, double normalization_factor=1.0, int coef_power=1, bool clip_pos1=1, **back_compat_kwargs)
Calculate 2D histogram of pos0(tth),pos1(chi) weighted by weights
Splitting is done on the pixel’s bounding box like fit2D
- Parameters:
weights – array with intensities
pos0 – 1D array with pos0: tth or q_vect
delta_pos0 – 1D array with delta pos0: max center-corner distance
pos1 – 1D array with pos1: chi
delta_pos1 – 1D array with max pos1: max center-corner distance, unused !
bins – number of output bins (tth=100, chi=36 by default)
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels & value of “no good” pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
dark – array (of float32) with dark noise to be subtracted (or None)
flat – array (of float32) with flat-field image
solidangle – array (of float32) with solid angle corrections
polarization – array (of float32) with polarization corrections
chiDiscAtPi – boolean; by default the chi range is ]-pi, pi[; set to 0 to have the range ]0, 2pi[
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the result by this value
coef_power – set to 2 for variance propagation, leave to 1 for mean calculation
clip_pos1 – clip the azimuthal range to -pi/pi (or 0-2pi), set to False to deactivate behavior
- Returns:
I, bin_centers0, bin_centers1, weighted histogram(2D), unweighted histogram (2D)
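A minimal sketch of histoBBox2d() on synthetic radial/azimuthal positions (placeholders):

import numpy as np
from pyFAI.ext.splitBBox import histoBBox2d

rng = np.random.default_rng(0)
npix = 100_000
tth = rng.uniform(0.0, 30.0, size=npix).astype(np.float32)
dtth = np.full(npix, 0.01, dtype=np.float32)
chi = rng.uniform(-np.pi, np.pi, size=npix).astype(np.float32)
dchi = np.full(npix, 0.005, dtype=np.float32)
img = rng.poisson(100, size=npix).astype(np.float32)

I, bins_tth, bins_chi, weighted, unweighted = histoBBox2d(img, tth, dtth, chi, dchi,
                                                          bins=(100, 36))
print(I.shape)    # 2D intensity map over the (radial, azimuthal) bins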
- pyFAI.ext.splitBBox.histoBBox2d_engine(weights, pos0, delta_pos0, pos1, delta_pos1, bins=(100, 36), pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, int error_model=ErrorModel.NO, dark=None, flat=None, solidangle=None, polarization=None, bool allow_pos0_neg=False, bool chiDiscAtPi=1, data_t empty=0.0, double normalization_factor=1.0, bool clip_pos1=True)
Calculate 2D histogram of pos0(tth),pos1(chi) weighted by weights
Splitting is done on the pixel’s bounding box, similar to fit2D. New implementation with variance propagation.
- Parameters:
weights – array with intensities
pos0 – 1D array with pos0: tth or q_vect
delta_pos0 – 1D array with delta pos0: max center-corner distance
pos1 – 1D array with pos1: chi
delta_pos1 – 1D array with max pos1: max center-corner distance, unused !
bins – number of output bins (tth=100, chi=36 by default)
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels & value of “no good” pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
variance – variance associated with the weights
dark – array (of float32) with dark noise to be subtracted (or None)
flat – array (of float32) with flat-field image
solidangle – array (of float32) with solid angle corrections
polarization – array (of float32) with polarization corrections
error_model – 0 for no error propagation, 1 for variance, 2 for Poisson, 3,4 not implemented
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the result by this value
chiDiscAtPi – boolean; by default the chi range is ]-pi, pi[; set to 0 to have the range ]0, 2pi[
clip_pos1 – clip the azimuthal range to [-pi pi] (or [0 2pi]), set to False to deactivate behavior
- Returns:
Integrate2dtpl namedtuple: “radial azimuthal intensity error signal variance normalization count”
- pyFAI.ext.splitBBox.histoBBox2d_ng(weights, pos0, delta_pos0, pos1, delta_pos1, bins=(100, 36), pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, error_model=0, dark=None, flat=None, solidangle=None, polarization=None, allow_pos0_neg=False, chiDiscAtPi=True, empty=0.0, normalization_factor=1.0, clip_pos1=True)
histoBBox2d_engine(weights, pos0, delta_pos0, pos1, delta_pos1, bins=(100, 36), pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, int error_model=ErrorModel.NO, dark=None, flat=None, solidangle=None, polarization=None, bool allow_pos0_neg=False, bool chiDiscAtPi=1, data_t empty=0.0, double normalization_factor=1.0, bool clip_pos1=True)
Calculate 2D histogram of pos0(tth),pos1(chi) weighted by weights
Splitting is done on the pixel’s bounding box, similar to fit2D. New implementation with variance propagation.
- Parameters:
weights – array with intensities
pos0 – 1D array with pos0: tth or q_vect
delta_pos0 – 1D array with delta pos0: max center-corner distance
pos1 – 1D array with pos1: chi
delta_pos1 – 1D array with max pos1: max center-corner distance, unused !
bins – number of output bins (tth=100, chi=36 by default)
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels & value of “no good” pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
variance – variance associated with the weights
dark – array (of float32) with dark noise to be subtracted (or None)
flat – array (of float32) with flat-field image
solidangle – array (of float32) with solid angle corrections
polarization – array (of float32) with polarization corrections
error_model – 0 for no error propagation, 1 for variance, 2 for Poisson, 3,4 not implemented
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the result by this value
chiDiscAtPi – boolean; by default the chi range is ]-pi, pi[; set to 0 to have the range ]0, 2pi[
clip_pos1 – clip the azimuthal range to [-pi pi] (or [0 2pi]), set to False to deactivate behavior
- Returns:
Integrate2dtpl namedtuple: “radial azimuthal intensity error signal variance normalization count”
- pyFAI.ext.splitBBox.recenter(position_t[:, ::1] pixel, bool chiDiscAtPi=1)
This function checks whether the pixel lies on the azimuthal discontinuity via the sign of its algebraic area, and recenters the corner coordinates in a consistent manner so that all azimuthal coordinates end up in the same range.
Nota: the returned area is negative, since a positive area indicates that the pixel is on the discontinuity.
- Parameters:
pixel – 4x2 array with radius, azimuth for the 4 corners. MODIFIED IN PLACE !!!
chiDiscAtPi – set to 0 to indicate the range goes from 0-2π instead of the default -π:π
- Returns:
signed area (approximate & negative)
pyFAI.ext.splitBBoxCSR module
Calculate histograms of pos0 (tth) weighted by intensity.
Splitting is done on the pixel’s bounding box like fit2D; reverse implementation based on a sparse-matrix multiplication.
- class pyFAI.ext.splitBBoxCSR.CsrIntegrator(tuple lut, int image_size, data_t empty=0.0)
Bases:
object
Abstract class which implements only the integrator…
Uses the CSR (Compressed Sparse Row) format, with main attributes:
- nnz: number of non-zero elements
- data: coefficients of the matrix in a 1D vector of float32
- indices: column index position for the data (same size as data)
- indptr: row pointer indicating the start of a given row; length nrow + 1
Nota: nnz = indptr[-1] = len(indices) = len(data)
- __init__()
Constructor for a CSR generic integrator
- Parameters:
lut – Sparse matrix in CSR format, tuple of 3 arrays with (data, indices, indptr)
size – input image size
empty – value for empty pixels
- data
- empty
empty: ‘data_t’
- indices
- indptr
- input_size
- integrate(weights, dummy, delta_dummy, dark, flat, solidAngle, polarization, normalization_factor, coef_power)
CsrIntegrator.integrate_legacy(self, weights, dummy=None, delta_dummy=None, dark=None, flat=None, solidAngle=None, polarization=None, double normalization_factor=1.0, int coef_power=1)
Actually perform the integration which in this case looks more like a matrix-vector product
Deprecated version !
- Parameters:
weights (ndarray) – input image
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
normalization_factor – divide the valid result by this value
coef_power – set to 2 for variance propagation, leave to 1 for mean calculation
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
4-tuple of ndarrays
- integrate_legacy(self, weights, dummy=None, delta_dummy=None, dark=None, flat=None, solidAngle=None, polarization=None, double normalization_factor=1.0, int coef_power=1)
Actually perform the integration which in this case looks more like a matrix-vector product
Deprecated version !
- Parameters:
weights (ndarray) – input image
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
normalization_factor – divide the valid result by this value
coef_power – set to 2 for variance propagation, leave to 1 for mean calculation
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
4-tuple of ndarrays
- integrate_ng(self, weights, variance=None, error_model=ErrorModel.NO, dummy=None, delta_dummy=None, dark=None, flat=None, solidangle=None, polarization=None, absorption=None, data_t normalization_factor=1.0)
- Actually perform the integration, which in this case consists of:
- calculating the signal, variance and normalization parts
- performing the integration, which is here a matrix-vector product
- Parameters:
weights (ndarray) – input image
variance (ndarray) – the variance associated with the image
error_model – enum ErrorModel
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
absorption (ndarray) – Apparent efficiency of a pixel due to parallax effect
normalization_factor – divide the valid result by this value
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
Integrate1dtpl 4-named-tuple of ndarrays
- nnz
- output_size
- preprocessed
- sigma_clip(self, weights, dark=None, dummy=None, delta_dummy=None, variance=None, dark_variance=None, flat=None, solidangle=None, polarization=None, absorption=None, bool safe=True, error_model=ErrorModel.NO, data_t normalization_factor=1.0, double cutoff=0.0, int cycle=5)
Perform a sigma-clipping iterative filter along each row. See the documentation of scipy.stats.sigmaclip for more details.
If the error model is “azimuthal”, the variance is the variance within a bin, which is refined at each iteration; this can be costly!
Else, the error is propagated according to:
\[signal = raw - dark\]\[variance = variance + dark\_variance\]\[normalization = normalization\_factor \cdot (flat \cdot solidangle \cdot polarization \cdot absorption)\]\[count = \text{number of contributing pixels}\]Integration is performed using the CSR representation of the look-up table on all arrays: signal, variance, normalization and count.
The threshold can automatically be calculated from Chauvenet’s criterion: sqrt(2*log(nbpix/sqrt(2.0f*pi)))
- Parameters:
weights (ndarray) – input image
dark (ndarray) – array with the dark-current value to be subtracted (if any)
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
variance (ndarray) – the variance associated with the image
dark_variance (ndarray) – the variance associated with the dark
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
absorption (ndarray) – Apparent efficiency of a pixel due to parallax effect
safe – set to True to save some tests
error_model – set to “poissonian” to use signal as variance (minimum 1), “azimuthal” to use the variance in a ring.
normalization_factor – divide the valid result by this value
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
Integrate1dtpl 4-named-tuple of ndarrays
- class pyFAI.ext.splitBBoxCSR.HistoBBox1d(pos0, delta_pos0, pos1=None, delta_pos1=None, bins=100, pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit='undefined', empty=None, chiDiscAtPi=True, clip_pos1=False)
Bases:
CsrIntegrator, SplitBBoxIntegrator
Uses the CSR (Compressed Sparse Row) format, with main attributes:
- nnz: number of non-zero elements
- data: coefficients of the matrix in a 1D vector of float32
- indices: column index position for the data (same size as data)
- indptr: row pointer indicating the start of a given row; length nrow + 1
Nota: nnz = indptr[-1]
A usage sketch follows this class listing.
- __init__(self, pos0, delta_pos0, pos1=None, delta_pos1=None, int bins=100, pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit=u'undefined', empty=None, bool chiDiscAtPi=True, bool clip_pos1=False)
- Parameters:
pos0 – 1D array with pos0: tth or q_vect or r …
delta_pos0 – 1D array with delta pos0: max center-corner distance
pos1 – 1D array with pos1: chi
delta_pos1 – 1D array with max pos1: max center-corner distance, unused !
bins – number of output bins, 100 by default
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
mask – array (of int8) with masked pixels with 1 (0=not masked)
allow_pos0_neg – set to True to allow negative radial positions (q&lt;0), which is usually not possible
unit – can be 2th_deg or r_nm^-1 …
empty – value for bins without contributing pixels
chiDiscAtPi – tell if azimuthal discontinuity is at 0° or 180°
clip_pos1 – clip the azimuthal range to [-π π] (or [0 2π] depending on chiDiscAtPi), set to False to deactivate behavior
- property check_mask
- property outPos
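A minimal usage sketch for this class (illustrative only): the radial positions and bounding-box half-widths are synthetic, integrate_ng is assumed to be inherited from the underlying CSR integrator, and the field names of the returned tuple follow the Integrate1dtpl description and may differ between pyFAI versions.

    import numpy
    from pyFAI.ext.splitBBoxCSR import HistoBBox1d

    shape = (256, 256)
    y, x = numpy.ogrid[:shape[0], :shape[1]]
    r = 1.0 + numpy.sqrt((y - 128.0) ** 2 + (x - 128.0) ** 2).ravel()   # radial position of each pixel centre
    dr = numpy.full_like(r, 0.71)             # max centre-corner distance (~half a pixel diagonal)
    image = numpy.random.poisson(100, size=shape).astype(numpy.float32)

    integrator = HistoBBox1d(r, dr, bins=200)
    result = integrator.integrate_ng(image.ravel())   # weights = the image, flattened to 1D
    # result is expected to be an Integrate1dtpl named tuple (position, intensity, ...)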
- class pyFAI.ext.splitBBoxCSR.HistoBBox2d(pos0, delta_pos0, pos1, delta_pos1, bins=(100, 36), pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit='undefined', empty=None, chiDiscAtPi=True, clip_pos1=True)
Bases:
CsrIntegrator
,SplitBBoxIntegrator
2D histogramming with pixel splitting based on a look-up table stored in CSR format
The initialization of the class can take quite a while (the operations are not parallelized), but each integration is parallelized and efficient.
- __init__(self, pos0, delta_pos0, pos1, delta_pos1, bins=(100, 36), pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit=u'undefined', empty=None, bool chiDiscAtPi=True, bool clip_pos1=True)
- Parameters:
pos0 – 1D array with pos0: tth or q_vect
delta_pos0 – 1D array with delta pos0: max center-corner distance
pos1 – 1D array with pos1: chi
delta_pos1 – 1D array with delta pos1: max center-corner distance (unused!)
bins – number of output bins (tth=100, chi=36 by default)
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
mask – array (of int8) with masked pixels with 1 (0=not masked)
allow_pos0_neg – set to True to allow negative radial positions (q&lt;0), which is usually not possible
unit – can be 2th_deg or r_nm^-1 …
empty – value for bins where no pixels are contributing
chiDiscAtPi – tell if azimuthal discontinuity is at 0 (0° when False) or π (180° when True)
clip_pos1 – clip the azimuthal range to [-π π] (or [0 2π] depending on chiDiscAtPi), set to False to deactivate behavior
- property check_mask
- property outPos0
- property outPos1
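A construction-only sketch for the 2D case (illustrative, with synthetic coordinates); outPos0 and outPos1 are assumed to hold the radial and azimuthal bin centres:

    import numpy
    from pyFAI.ext.splitBBoxCSR import HistoBBox2d

    npix = 256 * 256
    rng = numpy.random.default_rng(1)
    r = rng.uniform(1.0, 100.0, npix)      # radial coordinate of each pixel centre
    dr = numpy.full(npix, 0.5)             # radial half-extent of a pixel
    chi = rng.uniform(-3.0, 3.0, npix)     # azimuthal coordinate (radians)
    dchi = numpy.full(npix, 0.01)          # azimuthal half-extent

    integrator = HistoBBox2d(r, dr, chi, dchi, bins=(100, 36))
    print(len(integrator.outPos0), len(integrator.outPos1))   # 100 radial and 36 azimuthal bins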
- pyFAI.ext.splitBBoxCSR.calc_area(I1, I2, slope, intercept)
Calculate the area between I1 and I2 of a line with a given slope & intercept
- pyFAI.ext.splitBBoxCSR.clip(value, min_val, int max_val)
Limits the value to bounds
- Parameters:
value – the value to clip
min_value – the lower bound
max_value – the upper bound
- Returns:
clipped value in the requested range
- pyFAI.ext.splitBBoxCSR.recenter(position_t[:, ::1] pixel, bool chiDiscAtPi=1)
This function checks whether the pixel lies on the azimuthal discontinuity via the sign of its algebraic area, and recenters the corner coordinates in a consistent manner so that all azimuthal coordinates fall in the same range.
Nota: the returned area is negative, since a positive area indicates the pixel is on the discontinuity.
- Parameters:
pixel – 4x2 array with radius, azimuth for the 4 corners. MODIFIED IN PLACE !!!
chiDiscAtPi – set to 0 to indicate the range goes from 0-2π instead of the default -π:π
- Returns:
signed area (approximate & negative)
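A small sketch of recenter on a single synthetic pixel straddling the discontinuity; it assumes position_t maps to float64 (adjust the dtype otherwise), and the corner array is modified in place:

    import numpy
    from pyFAI.ext.splitBBoxCSR import recenter

    # Corners given as (radius, azimuth); azimuths sit on both sides of +/- pi
    pixel = numpy.ascontiguousarray(
        [[10.0,  3.10],
         [10.5,  3.12],
         [10.5, -3.12],
         [10.0, -3.10]], dtype=numpy.float64)
    area = recenter(pixel, chiDiscAtPi=True)
    print(area)                 # signed area of the pixel (see the note above on its sign)
    print(pixel[:, 1])          # azimuths have been recentred onto the same side, in place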
pyFAI.ext.splitBBoxLUT module
Calculates histograms of pos0 (tth) weighted by Intensity
Splitting is done on the pixel’s bounding box like fit2D, reverse implementation based on a sparse matrix multiplication
- class pyFAI.ext.splitBBoxLUT.HistoBBox1d(pos0, delta_pos0, pos1=None, delta_pos1=None, bins=100, pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit='undefined', empty=None, chiDiscAtPi=True, clip_pos1=False)
Bases:
LutIntegrator
,SplitBBoxIntegrator
1D histogramming with pixel splitting based on a Look-up table
The initialization of the class can take quite a while (the operations are not parallelized), but each integration is parallelized and quite efficient.
- __init__(self, pos0, delta_pos0, pos1=None, delta_pos1=None, int bins=100, pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit=u'undefined', empty=None, bool chiDiscAtPi=True, bool clip_pos1=False)
- Parameters:
pos0 – 1D array with pos0: tth or q_vect or r …
delta_pos0 – 1D array with delta pos0: max center-corner distance
pos1 – 1D array with pos1: chi
delta_pos1 – 1D array with delta pos1: max center-corner distance (unused!)
bins – number of output bins, 100 by default
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
mask – array (of int8) with masked pixels with 1 (0=not masked)
allow_pos0_neg – set to True to allow negative radial positions (q&lt;0), which is usually not possible
unit – can be 2th_deg or r_nm^-1 …
empty – value for bins without contributing pixels
chiDiscAtPi – tell if azimuthal discontinuity is at 0° or 180°
clip_pos1 – clip the azimuthal range to [-π π] (or [0 2π] depending on chiDiscAtPi), set to False to deactivate behavior
- property check_mask
- property outPos
- class pyFAI.ext.splitBBoxLUT.HistoBBox2d(pos0, delta_pos0, pos1, delta_pos1, bins=(100, 36), pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit='undefined', empty=None, chiDiscAtPi=True, clip_pos1=True)
Bases:
LutIntegrator
,SplitBBoxIntegrator
2D histogramming with pixel splitting based on a look-up table
The initialization of the class can take quite a while (the operations are not parallelized), but each integration is parallelized and efficient.
- __init__(self, pos0, delta_pos0, pos1, delta_pos1, bins=(100, 36), pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit=u'undefined', empty=None, bool chiDiscAtPi=True, bool clip_pos1=True)
- Parameters:
pos0 – 1D array with pos0: tth or q_vect
delta_pos0 – 1D array with delta pos0: max center-corner distance
pos1 – 1D array with pos1: chi
delta_pos1 – 1D array with delta pos1: max center-corner distance (unused!)
bins – number of output bins (tth=100, chi=36 by default)
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
mask – array (of int8) with masked pixels with 1 (0=not masked)
allow_pos0_neg – set to True to allow negative radial positions (q&lt;0), which is usually not possible
unit – can be 2th_deg or r_nm^-1 …
empty – value for bins where no pixels are contributing
chiDiscAtPi – tell if azimuthal discontinuity is at 0 (0° when False) or π (180° when True)
clip_pos1 – clip the azimuthal range to [-π π] (or [0 2π] depending on chiDiscAtPi), set to False to deactivate behavior
- property check_mask
- property outPos0
- property outPos1
- class pyFAI.ext.splitBBoxLUT.LutIntegrator(lut_t[:, ::1] lut, int image_size, data_t empty=0.0)
Bases:
object
Abstract class which implements only the integrator…
Now uses the LUT format with the main attributes:
* width: width of the LUT
* data: coefficient of the matrix in a 1D vector of float32
* indices: column index position for the data (same size as data)
* indptr: row pointer indicating the start of a given row, of length nrow+1
Nota: nnz = indptr[-1] = len(indices) = len(data)
- __init__()
Constructor for a generic LUT integrator
- Parameters:
lut – look-up table as a 2D array of (index, coefficient) records, one row per output bin
size – input image size
empty – value for empty pixels
- empty
empty: ‘data_t’
- input_size
- integrate(weights, dummy, delta_dummy, dark, flat, solidAngle, polarization, normalization_factor, coef_power)
LutIntegrator.integrate_legacy(self, weights, dummy=None, delta_dummy=None, dark=None, flat=None, solidAngle=None, polarization=None, double normalization_factor=1.0, int coef_power=1)
Actually perform the integration which in this case looks more like a matrix-vector product
- Parameters:
weights (ndarray) – input image
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
normalization_factor – divide the valid result by this value
coef_power – put coef to a given power, 2 for variance, 1 for mean
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
4-tuple of ndarrays
- integrate_legacy(self, weights, dummy=None, delta_dummy=None, dark=None, flat=None, solidAngle=None, polarization=None, double normalization_factor=1.0, int coef_power=1)
Actually perform the integration which in this case looks more like a matrix-vector product
- Parameters:
weights (ndarray) – input image
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
normalization_factor – divide the valid result by this value
coef_power – put coef to a given power, 2 for variance, 1 for mean
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
4-tuple of ndarrays
- integrate_ng(self, weights, variance=None, error_model=ErrorModel.NO, dummy=None, delta_dummy=None, dark=None, flat=None, solidangle=None, polarization=None, absorption=None, data_t normalization_factor=1.0)
- Actually perform the integration which in this case consists of:
Calculate the signal, variance and the normalization parts
Perform the integration which is here a matrix-vector product
- Parameters:
weights (ndarray) – input image
variance (ndarray) – the variance associated with the image
error_model – enum ErrorModel
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
absorption (ndarray) – Apparent efficiency of a pixel due to parallax effect
normalization_factor – divide the valid result by this value
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
Integrate1dtpl 4-named-tuple of ndarrays
- lut
Return a copy of the LUT as an actual numpy array
- lut_size
- output_size
- preprocessed
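A sketch relating the attributes of the LUT integrator, using the HistoBBox1d class above with synthetic positions; the dense copy of the LUT is expected, but not guaranteed here, to have shape (output_size, lut_size):

    import numpy
    from pyFAI.ext.splitBBoxLUT import HistoBBox1d

    npix = 100000
    rng = numpy.random.default_rng(2)
    r = rng.uniform(1.0, 50.0, npix)       # radial position of each pixel
    dr = numpy.full(npix, 0.3)             # max centre-corner distance

    integrator = HistoBBox1d(r, dr, bins=128)
    lut = integrator.lut                   # dense copy of the look-up table
    print(lut.shape, integrator.output_size, integrator.lut_size)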
- pyFAI.ext.splitBBoxLUT.calc_area(I1, I2, slope, intercept)
Calculate the area between I1 and I2 of a line with a given slope & intercept
- pyFAI.ext.splitBBoxLUT.clip(value, min_val, int max_val)
Limits the value to bounds
- Parameters:
value – the value to clip
min_value – the lower bound
max_value – the upper bound
- Returns:
clipped value in the requested range
- pyFAI.ext.splitBBoxLUT.recenter(position_t[:, ::1] pixel, bool chiDiscAtPi=1)
This function checks whether the pixel lies on the azimuthal discontinuity via the sign of its algebraic area, and recenters the corner coordinates in a consistent manner so that all azimuthal coordinates fall in the same range.
Nota: the returned area is negative, since a positive area indicates the pixel is on the discontinuity.
- Parameters:
pixel – 4x2 array with radius, azimuth for the 4 corners. MODIFIED IN PLACE !!!
chiDiscAtPi – set to 0 to indicate the range goes from 0-2π instead of the default -π:π
- Returns:
signed area (approximate & negative)
pyFAI.ext.splitPixel module
Calculates histograms of pos0 (tth) weighted by Intensity
Splitting is done by full pixel splitting; direct histogram implementation
- pyFAI.ext.splitPixel.calc_area(I1, I2, slope, intercept)
Calculate the area between I1 and I2 of a line with a given slope & intercept
- pyFAI.ext.splitPixel.clip(value, min_val, int max_val)
Limits the value to bounds
- Parameters:
value – the value to clip
min_value – the lower bound
max_value – the upper bound
- Returns:
clipped value in the requested range
- pyFAI.ext.splitPixel.fullSplit1D(pos, weights, Py_ssize_t bins=100, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, dark=None, flat=None, solidangle=None, polarization=None, float empty=0.0, double normalization_factor=1.0, Py_ssize_t coef_power=1, bool allow_pos0_neg=False)
Calculates histogram of pos weighted by weights
Splitting is done on the pixel’s bounding box like fit2D. No compromise for speed has been made here.
- Parameters:
pos – 3D array with pos0; Corner A,B,C,D; tth or chi
weights – array with intensities
bins – number of output bins
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
dark – array (of float64) with dark noise to be subtracted (or None)
flat – array (of float64) with the flat-field image
polarization – array (of float64) with polarization correction
solidangle – array (of float64) with solid angle corrections
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the valid result by this value
coef_power – set to 2 for variance propagation, leave to 1 for mean calculation
allow_pos0_neg – allow the radial dimension to be negative (useful in log-scale!)
- Returns:
2theta, I, weighted histogram, unweighted histogram
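A usage sketch for fullSplit1D with synthetic data; the corner layout, one (radial, azimuthal) pair per corner A, B, C, D, i.e. an array of shape (npixels, 4, 2), is an assumption made for this illustration:

    import numpy
    from pyFAI.ext.splitPixel import fullSplit1D

    npix = 10000
    rng = numpy.random.default_rng(3)
    r0 = rng.uniform(1.0, 50.0, npix)                 # radial centre of each pixel
    chi0 = rng.uniform(-3.0, 3.0, npix)               # azimuthal centre (radians)
    pos = numpy.empty((npix, 4, 2))                   # corners A, B, C, D as (radial, azimuthal)
    offsets = [(-0.2, -0.01), (0.2, -0.01), (0.2, 0.01), (-0.2, 0.01)]
    for i, (dr, dchi) in enumerate(offsets):
        pos[:, i, 0] = r0 + dr
        pos[:, i, 1] = chi0 + dchi
    weights = rng.poisson(100, npix).astype(numpy.float64)

    tth, I, sum_weighted, count = fullSplit1D(pos, weights, bins=100)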
- pyFAI.ext.splitPixel.fullSplit1D_engine(pos, weights, Py_ssize_t bins=100, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, int error_model=ErrorModel.NO, dark=None, flat=None, solidangle=None, polarization=None, data_t empty=0.0, double normalization_factor=1.0, bool allow_pos0_neg=True, bool chiDiscAtPi=True)
Calculates histogram of pos weighted by weights
Splitting is done on the pixel’s bounding box like fit2D. New implementation with variance propagation
- Parameters:
pos – 3D array with pos0; Corner A,B,C,D; tth or chi
weights – array with intensities
bins – number of output bins
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
dark – array (of float64) with dark noise to be subtracted (or None)
flat – array (of float64) with the flat-field image
polarization – array (of float64) with polarization correction
solidangle – array (of float64) with solid angle corrections
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the valid result by this value
allow_pos0_neg – allow the radial dimension to be negative (useful in log-scale!)
chiDiscAtPi – tell if azimuthal discontinuity is at 0° or 180°
- Returns:
namedtuple with “position intensity error signal variance normalization count”
- pyFAI.ext.splitPixel.fullSplit1D_ng(pos, weights, bins=100, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, error_model=0, dark=None, flat=None, solidangle=None, polarization=None, empty=0.0, normalization_factor=1.0, allow_pos0_neg=True, chiDiscAtPi=True)
fullSplit1D_engine(pos, weights, Py_ssize_t bins=100, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, int error_model=ErrorModel.NO, dark=None, flat=None, solidangle=None, polarization=None, data_t empty=0.0, double normalization_factor=1.0, bool allow_pos0_neg=True, bool chiDiscAtPi=True)
Calculates histogram of pos weighted by weights
Splitting is done on the pixel’s bounding box like fit2D. New implementation with variance propagation
- Parameters:
pos – 3D array with pos0; Corner A,B,C,D; tth or chi
weights – array with intensities
bins – number of output bins
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
dark – array (of float64) with dark noise to be subtracted (or None)
flat – array (of float64) with the flat-field image
polarization – array (of float64) with polarization correction
solidangle – array (of float64) with solid angle corrections
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the valid result by this value
allow_pos0_neg – allow the radial dimension to be negative (useful in log-scale!)
chiDiscAtPi – tell if azimuthal discontinuity is at 0° or 180°
- Returns:
namedtuple with “position intensity error signal variance normalization count”
- pyFAI.ext.splitPixel.fullSplit2D(pos, weights, bins, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, dark=None, flat=None, solidangle=None, polarization=None, bool allow_pos0_neg=True, bool chiDiscAtPi=1, float empty=0.0, double normalization_factor=1.0, Py_ssize_t coef_power=1)
Calculate 2D histogram of pos weighted by weights
Splitting is done on the pixel’s bounding box like fit2D
- Parameters:
pos – 3D array with pos0; Corner A,B,C,D; tth or chi
weights – array with intensities
bins – number of output bins int or 2-tuple of int
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
dark – array (of float64) with dark noise to be subtracted (or None)
flat – array (of float64) with flat-field image
polarization – array (of float64) with polarization correction
solidangle – array (of float64) with solid angle corrections
allow_pos0_neg – allow the radial dimension to be negative (useful in log-scale!)
chiDiscAtPi – boolean; by default the chi_range is in the range ]-pi,pi[ set to 0 to have the range ]0,2pi[
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the valid result by this value
coef_power – set to 2 for variance propagation, leave to 1 for mean calculation
- Returns:
I, edges0, edges1, weighted histogram(2D), unweighted histogram (2D)
- pyFAI.ext.splitPixel.fullSplit2D_engine(pos, weights, bins, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, int error_model=ErrorModel.NO, dark=None, flat=None, solidangle=None, polarization=None, bool allow_pos0_neg=0, bool chiDiscAtPi=1, float empty=0.0, double normalization_factor=1.0)
Calculate 2D histogram of pos weighted by weights
Splitting is done on the pixel’s boundary (straight segments). New implementation with variance propagation.
- Parameters:
pos – 3D array with pos0; Corner A,B,C,D; tth or chi
weights – array with intensities
bins – number of output bins int or 2-tuple of int
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
variance – variance associated with the weights
dark – array (of float64) with dark noise to be subtracted (or None)
flat – array (of float64) with flat-field image
polarization – array (of float64) with polarization correction
solidangle – array (of float64) with solid angle corrections
allow_pos0_neg – set to true to allow negative radial values.
chiDiscAtPi – boolean; by default the chi_range is in the range ]-pi,pi[ set to 0 to have the range ]0,2pi[
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the valid result by this value
- Returns:
Integrate2dtpl namedtuple: “radial azimuthal intensity error signal variance normalization count”
- pyFAI.ext.splitPixel.pseudoSplit2D_engine(pos, weights, bins, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, int error_model=ErrorModel.NO, dark=None, flat=None, solidangle=None, polarization=None, bool allow_pos0_neg=0, bool chiDiscAtPi=1, float empty=0.0, double normalization_factor=1.0)
Calculate 2D histogram of pos weighted by weights
Splitting is done on the pixel’s bounding box, similar to fit2D. New implementation with variance propagation.
- Parameters:
pos – 3D array with pos0; Corner A,B,C,D; tth or chi
weights – array with intensities
bins – number of output bins int or 2-tuple of int
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
variance – variance associated with the weights
dark – array (of float64) with dark noise to be subtracted (or None)
flat – array (of float64) with flat-field image
polarization – array (of float64) with polarization correction
solidangle – array (of float64) with solid angle corrections
allow_pos0_neg – set to true to allow negative radial values.
chiDiscAtPi – boolean; by default the chi_range is in the range ]-pi,pi[ set to 0 to have the range ]0,2pi[
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the valid result by this value
- Returns:
Integrate2dtpl namedtuple: “radial azimuthal intensity error signal variance normalization count”
- pyFAI.ext.splitPixel.pseudoSplit2D_ng(pos, weights, bins, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, error_model=0, dark=None, flat=None, solidangle=None, polarization=None, allow_pos0_neg=False, chiDiscAtPi=True, empty=0.0, normalization_factor=1.0)
pseudoSplit2D_engine(pos, weights, bins, pos0_range=None, pos1_range=None, dummy=None, delta_dummy=None, mask=None, variance=None, dark_variance=None, int error_model=ErrorModel.NO, dark=None, flat=None, solidangle=None, polarization=None, bool allow_pos0_neg=0, bool chiDiscAtPi=1, float empty=0.0, double normalization_factor=1.0)
Calculate 2D histogram of pos weighted by weights
Splitting is done on the pixel’s bounding box, similar to fit2D. New implementation with variance propagation.
- Parameters:
pos – 3D array with pos0; Corner A,B,C,D; tth or chi
weights – array with intensities
bins – number of output bins int or 2-tuple of int
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
dummy – value for bins without pixels
delta_dummy – precision of dummy value
mask – array (of int8) with masked pixels with 1 (0=not masked)
variance – variance associated with the weights
dark – array (of float64) with dark noise to be subtracted (or None)
flat – array (of float64) with flat-field image
polarization – array (of float64) with polarization correction
solidangle – array (of float64) with solid angle corrections
allow_pos0_neg – set to true to allow negative radial values.
chiDiscAtPi – boolean; by default the chi_range is in the range ]-pi,pi[ set to 0 to have the range ]0,2pi[
empty – value of output bins without any contribution when dummy is None
normalization_factor – divide the valid result by this value
- Returns:
Integrate2dtpl namedtuple: “radial azimuthal intensity error signal variance normalization count”
- pyFAI.ext.splitPixel.recenter(position_t[:, ::1] pixel, bool chiDiscAtPi=1)
This function checks whether the pixel lies on the azimuthal discontinuity via the sign of its algebraic area, and recenters the corner coordinates in a consistent manner so that all azimuthal coordinates fall in the same range.
Nota: the returned area is negative, since a positive area indicates the pixel is on the discontinuity.
- Parameters:
pixel – 4x2 array with radius, azimuth for the 4 corners. MODIFIED IN PLACE !!!
chiDiscAtPi – set to 0 to indicate the range goes from 0-2π instead of the default -π:π
- Returns:
signed area (approximate & negative)
pyFAI.ext.splitPixelFullCSR module
Full pixel splitting implemented using sparse-matrix dense-vector multiplication, with the sparse matrix stored in the Compressed Sparse Row (CSR) representation.
- class pyFAI.ext.splitPixelFullCSR.CsrIntegrator(tuple lut, int image_size, data_t empty=0.0)
Bases:
object
Abstract class which implements only the integrator…
Now uses the CSR (Compressed Sparse Row) format with the main attributes:
* nnz: number of non-zero elements
* data: coefficients of the matrix in a 1D vector of float32
* indices: column index position for the data (same size as data)
* indptr: row pointer indicating the start of a given row, of length nrow+1
Nota: nnz = indptr[-1] = len(indices) = len(data)
- __init__()
Constructor for a CSR generic integrator
- Parameters:
lut – Sparse matrix in CSR format, tuple of 3 arrays with (data, indices, indptr)
size – input image size
empty – value for empty pixels
- data
- empty
empty: ‘data_t’
- indices
- indptr
- input_size
- integrate(weights, dummy, delta_dummy, dark, flat, solidAngle, polarization, normalization_factor, coef_power)
CsrIntegrator.integrate_legacy(self, weights, dummy=None, delta_dummy=None, dark=None, flat=None, solidAngle=None, polarization=None, double normalization_factor=1.0, int coef_power=1)
Actually perform the integration which in this case looks more like a matrix-vector product
Deprecated version !
- Parameters:
weights (ndarray) – input image
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
normalization_factor – divide the valid result by this value
coef_power – set to 2 for variance propagation, leave to 1 for mean calculation
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
4-tuple of ndarrays
- integrate_legacy(self, weights, dummy=None, delta_dummy=None, dark=None, flat=None, solidAngle=None, polarization=None, double normalization_factor=1.0, int coef_power=1)
Actually perform the integration which in this case looks more like a matrix-vector product
Deprecated version !
- Parameters:
weights (ndarray) – input image
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
normalization_factor – divide the valid result by this value
coef_power – set to 2 for variance propagation, leave to 1 for mean calculation
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
4-tuple of ndarrays
- integrate_ng(self, weights, variance=None, error_model=ErrorModel.NO, dummy=None, delta_dummy=None, dark=None, flat=None, solidangle=None, polarization=None, absorption=None, data_t normalization_factor=1.0)
- Actually perform the integration which in this case consists of:
Calculate the signal, variance and the normalization parts
Perform the integration which is here a matrix-vector product
- Parameters:
weights (ndarray) – input image
variance (ndarray) – the variance associated with the image
error_model – enum ErrorModel
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
absorption (ndarray) – Apparent efficiency of a pixel due to parallax effect
normalization_factor – divide the valid result by this value
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
Integrate1dtpl 4-named-tuple of ndarrays
- nnz
- output_size
- preprocessed
- sigma_clip(self, weights, dark=None, dummy=None, delta_dummy=None, variance=None, dark_variance=None, flat=None, solidangle=None, polarization=None, absorption=None, bool safe=True, error_model=ErrorModel.NO, data_t normalization_factor=1.0, double cutoff=0.0, int cycle=5)
Perform an iterative sigma-clipping filter within each row. See the documentation of scipy.stats.sigmaclip for a more detailed description.
If the error model is “azimuthal”, the variance is the variance within a bin, which is refined at each iteration and can be costly!
Otherwise, the error is propagated according to:
\[\begin{split}signal &= raw - dark\\ variance &= variance + dark\_variance\\ normalization &= normalization\_factor \cdot (flat \cdot solidangle \cdot polarization \cdot absorption)\\ count &= \text{number of contributing pixels}\end{split}\]
Integration is performed using the CSR representation of the look-up table on all arrays: signal, variance, normalization and count.
The threshold can automatically be calculated from Chauvenet’s criterion: sqrt(2*log(nbpix/sqrt(2*pi)))
- Parameters:
weights (ndarray) – input image
dark (ndarray) – array with the dark-current value to be subtracted (if any)
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
variance (ndarray) – the variance associated with the image
dark_variance (ndarray) – the variance associated with the dark
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
absorption (ndarray) – Apparent efficiency of a pixel due to parallax effect
safe – set to True to save some tests
error_model – set to “poissonian” to use signal as variance (minimum 1), “azimuthal” to use the variance in a ring.
normalization_factor – divide the valid result by this value
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
Integrate1dtpl 4-named-tuple of ndarrays
- class pyFAI.ext.splitPixelFullCSR.FullSplitCSR_1d(pos, bins=100, pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit='undefined', empty=None, chiDiscAtPi=True)
Bases:
CsrIntegrator
,FullSplitIntegrator
Now uses the CSR (Compressed Sparse Row) format with the main attributes:
* nnz: number of non-zero elements
* data: coefficients of the matrix in a 1D vector of float32
* indices: column index position for the data (same size as data)
* indptr: row pointer indicating the start of a given row, of length nrow+1
Nota: nnz = indptr[-1]
- __init__(self, pos, int bins=100, pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit=u'undefined', empty=None, bool chiDiscAtPi=True)
- Parameters:
pos – 3D or 4D array with the coordinates of each pixel point
bins – number of output bins, 100 by default
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
mask – array (of int8) with masked pixels with 1 (0=not masked)
allow_pos0_neg – set to True to allow negative radial positions (q&lt;0), which is usually not possible
unit – can be 2th_deg or r_nm^-1 …
empty – value of output bins without any contribution when dummy is None
chiDiscAtPi – tell if azimuthal discontinuity is at 0° or 180°
- property check_mask
- property outPos
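A sketch of the intended workflow, building the corner array as in the splitPixel example above: the sparse matrix is built once in the constructor and then reused for every frame, and integrate_ng is inherited from the CsrIntegrator documented earlier in this module.

    import numpy
    from pyFAI.ext.splitPixelFullCSR import FullSplitCSR_1d

    npix = 10000
    rng = numpy.random.default_rng(5)
    r0 = rng.uniform(1.0, 50.0, npix)
    chi0 = rng.uniform(-3.0, 3.0, npix)
    pos = numpy.empty((npix, 4, 2))                   # corner coordinates (radial, azimuthal) for A, B, C, D
    for i, (dr, dchi) in enumerate([(-0.2, -0.01), (0.2, -0.01), (0.2, 0.01), (-0.2, 0.01)]):
        pos[:, i, 0] = r0 + dr
        pos[:, i, 1] = chi0 + dchi

    integrator = FullSplitCSR_1d(pos, bins=200)       # sparse matrix built once, here
    for _ in range(5):                                # ... and reused for every frame
        frame = rng.poisson(100, npix).astype(numpy.float32)
        result = integrator.integrate_ng(frame)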
- class pyFAI.ext.splitPixelFullCSR.FullSplitCSR_2d(pos, bins=(100, 36), pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit='undefined', empty=None, chiDiscAtPi=True, clip_pos1=True)
Bases:
CsrIntegrator
,FullSplitIntegrator
Now uses the CSR (Compressed Sparse Row) format with the main attributes:
* nnz: number of non-zero elements
* data: coefficients of the matrix in a 1D vector of float32
* indices: column index position for the data (same size as data)
* indptr: row pointer indicating the start of a given row, of length nrow+1
Nota: nnz = indptr[-1]
- __init__(self, pos, bins=(100, 36), pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit=u'undefined', empty=None, bool chiDiscAtPi=True, bool clip_pos1=True)
- Parameters:
pos – 3D or 4D array with the coordinates of each pixel point
bins – number of output bins (tth=100, chi=36 by default)
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
mask – array (of int8) with masked pixels with 1 (0=not masked)
allow_pos0_neg – set to True to allow negative radial positions (q&lt;0), which is usually not possible
unit – can be 2th_deg or r_nm^-1 …
empty – value for bins where no pixels are contributing
chiDiscAtPi – tell if azimuthal discontinuity is at 0° or 180°
clip_pos1 – True if azimuthal direction is periodic (chi angle), False for non periodic units
- property check_mask
- property outPos0
- property outPos1
- pyFAI.ext.splitPixelFullCSR.calc_area(I1, I2, slope, intercept)
Calculate the area between I1 and I2 of a line with a given slope & intercept
- pyFAI.ext.splitPixelFullCSR.clip(value, min_val, int max_val)
Limits the value to bounds
- Parameters:
value – the value to clip
min_value – the lower bound
max_value – the upper bound
- Returns:
clipped value in the requested range
- pyFAI.ext.splitPixelFullCSR.recenter(position_t[:, ::1] pixel, bool chiDiscAtPi=1)
This function checks whether the pixel lies on the azimuthal discontinuity via the sign of its algebraic area, and recenters the corner coordinates in a consistent manner so that all azimuthal coordinates fall in the same range.
Nota: the returned area is negative, since a positive area indicates the pixel is on the discontinuity.
- Parameters:
pixel – 4x2 array with radius, azimuth for the 4 corners. MODIFIED IN PLACE !!!
chiDiscAtPi – set to 0 to indicate the range goes from 0-2π instead of the default -π:π
- Returns:
signed area (approximate & negative)
pyFAI.ext.splitPixelFullLUT module
Full pixel splitting implemented using sparse-matrix dense-vector multiplication, with the sparse matrix stored in the LUT representation.
- class pyFAI.ext.splitPixelFullLUT.HistoLUT1dFullSplit(pos, bins=100, pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit='undefined', empty=None, chiDiscAtPi=True)
Bases:
LutIntegrator
,FullSplitIntegrator
Now uses LUT representation for the integration
- __init__(self, pos, int bins=100, pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit=u'undefined', empty=None, bool chiDiscAtPi=True)
- Parameters:
pos – 3D or 4D array with the coordinates of each pixel point
bins – number of output bins, 100 by default
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
mask – array (of int8) with masked pixels with 1 (0=not masked)
allow_pos0_neg – set to True to allow negative radial positions (q&lt;0), which is usually not possible
unit – can be 2th_deg or r_nm^-1 …
empty – value of output bins without any contribution when dummy is None
chiDiscAtPi – tell if azimuthal discontinuity is at 0° or 180°
- property check_mask
- property outPos
- class pyFAI.ext.splitPixelFullLUT.HistoLUT2dFullSplit(pos, bins=(100, 36), pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit='undefined', empty=None, chiDiscAtPi=True, clip_pos1=True)
Bases:
LutIntegrator
,FullSplitIntegrator
Now uses the CSR (Compressed Sparse Row) format with the main attributes:
* nnz: number of non-zero elements
* data: coefficients of the matrix in a 1D vector of float32
* indices: column index position for the data (same size as data)
* indptr: row pointer indicating the start of a given row, of length nrow+1
Nota: nnz = indptr[-1]
- __init__(self, pos, bins=(100, 36), pos0_range=None, pos1_range=None, mask=None, mask_checksum=None, allow_pos0_neg=False, unit=u'undefined', empty=None, bool chiDiscAtPi=True, bool clip_pos1=True)
- Parameters:
pos – 3D or 4D array with the coordinates of each pixel point
bins – number of output bins (tth=100, chi=36 by default)
pos0_range – minimum and maximum of the 2th range
pos1_range – minimum and maximum of the chi range
mask – array (of int8) with masked pixels with 1 (0=not masked)
allow_pos0_neg – set to True to allow negative radial positions (q&lt;0), which is usually not possible
unit – can be 2th_deg or r_nm^-1 …
empty – value for bins where no pixels are contributing
chiDiscAtPi – tell if azimuthal discontinuity is at 0° or 180°
clip_pos1 – True if azimuthal direction is periodic (chi angle), False for non periodic units
- property check_mask
- class pyFAI.ext.splitPixelFullLUT.LutIntegrator(lut_t[:, ::1] lut, int image_size, data_t empty=0.0)
Bases:
object
Abstract class which implements only the integrator…
Now uses the LUT format with the main attributes:
* width: width of the LUT
* data: coefficient of the matrix in a 1D vector of float32
* indices: column index position for the data (same size as data)
* indptr: row pointer indicating the start of a given row, of length nrow+1
Nota: nnz = indptr[-1] = len(indices) = len(data)
- __init__()
Constructor for a generic LUT integrator
- Parameters:
lut – look-up table as a 2D array of (index, coefficient) records, one row per output bin
size – input image size
empty – value for empty pixels
- empty
empty: ‘data_t’
- input_size
- integrate(weights, dummy, delta_dummy, dark, flat, solidAngle, polarization, normalization_factor, coef_power)
LutIntegrator.integrate_legacy(self, weights, dummy=None, delta_dummy=None, dark=None, flat=None, solidAngle=None, polarization=None, double normalization_factor=1.0, int coef_power=1)
Actually perform the integration which in this case looks more like a matrix-vector product
- Parameters:
weights (ndarray) – input image
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
normalization_factor – divide the valid result by this value
coef_power – put coef to a given power, 2 for variance, 1 for mean
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
4-tuple of ndarrays
- integrate_legacy(self, weights, dummy=None, delta_dummy=None, dark=None, flat=None, solidAngle=None, polarization=None, double normalization_factor=1.0, int coef_power=1)
Actually perform the integration which in this case looks more like a matrix-vector product
- Parameters:
weights (ndarray) – input image
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
normalization_factor – divide the valid result by this value
coef_power – put coef to a given power, 2 for variance, 1 for mean
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
4-tuple of ndarrays
- integrate_ng(self, weights, variance=None, error_model=ErrorModel.NO, dummy=None, delta_dummy=None, dark=None, flat=None, solidangle=None, polarization=None, absorption=None, data_t normalization_factor=1.0)
- Actually perform the integration which in this case consists of:
Calculate the signal, variance and the normalization parts
Perform the integration which is here a matrix-vector product
- Parameters:
weights (ndarray) – input image
variance (ndarray) – the variance associated with the image
error_model – enum ErrorModel
dummy (float) – value for dead pixels (optional)
delta_dummy (float) – precision for dead-pixel value in dynamic masking
dark (ndarray) – array with the dark-current value to be subtracted (if any)
flat (ndarray) – array with the flat-field value to be divided by (if any)
solidAngle (ndarray) – array with the solid angle of each pixel to be divided by (if any)
polarization (ndarray) – array with the polarization correction values to be divided by (if any)
absorption (ndarray) – Apparent efficiency of a pixel due to parallax effect
normalization_factor – divide the valid result by this value
- Returns:
positions, pattern, weighted_histogram and unweighted_histogram
- Return type:
Integrate1dtpl 4-named-tuple of ndarrays
- lut
Return a copy of the LUT as an actual numpy array
- lut_size
- output_size
- preprocessed
- pyFAI.ext.splitPixelFullLUT.calc_area(I1, I2, slope, intercept)
Calculate the area between I1 and I2 of a line with a given slope & intercept
- pyFAI.ext.splitPixelFullLUT.clip(value, min_val, int max_val)
Limits the value to bounds
- Parameters:
value – the value to clip
min_value – the lower bound
max_value – the upper bound
- Returns:
clipped value in the requested range
- pyFAI.ext.splitPixelFullLUT.recenter(position_t[:, ::1] pixel, bool chiDiscAtPi=1)
This function checks whether the pixel lies on the azimuthal discontinuity via the sign of its algebraic area, and recenters the corner coordinates in a consistent manner so that all azimuthal coordinates fall in the same range.
Nota: the returned area is negative, since a positive area indicates the pixel is on the discontinuity.
- Parameters:
pixel – 4x2 array with radius, azimuth for the 4 corners. MODIFIED IN PLACE !!!
chiDiscAtPi – set to 0 to indicate the range goes from 0-2π instead of the default -π:π
- Returns:
signed area (approximate & negative)
pyFAI.ext.watershed module
Peak picking via inverse watershed for connecting regions of high intensity
- class pyFAI.ext.watershed.Bilinear
Bases:
object
Bilinear interpolator for finding max.
Instance attribute defined in pxd file
- cp_local_maxi(self, Py_ssize_t x) Py_ssize_t
- data
- f_cy(self, x)
Function -f((y,x)) where f is a continuous function and (y,x) are pixel coordinates; pixels outside the image are given an arbitrarily high value to help the minimizer
- Parameters:
x – 2-tuple of float
- Returns:
Interpolated negative signal from the image (negative for using minimizer to search for peaks)
- height
- local_maxi(self, x)
Return the local maximum with sub-pixel refinement.
Sub-pixel refinement: Second order Taylor expansion of the function; first derivative is null
\[\delta = x - i = -\mathrm{Hessian}^{-1} \cdot \mathrm{gradient}\]
If the Hessian is singular or \(|\delta| > 1\): use a center of mass.
- Parameters:
x – 2-tuple of integers
- Returns:
2-tuple of float with the nearest local maximum
- many(self, x)
Call the bilinear interpolator on many points…
- Parameters:
x – array of points of shape (2, n), like (array_of_y, array_of_x)
- Returns:
array of shape n
- maxi
- mini
- width
- class pyFAI.ext.watershed.InverseWatershed(data, thres=1.0)
Bases:
object
Idea:
label all peaks
define regions around those peaks, in which every pixel always rises towards that peak
define the border of such regions
search for the pass between two peaks
merge regions with a high pass between them
- NAME = 'Inverse watershed'
- VERSION = '1.0'
- __init__(self, data, thres=1.0)
- Parameters:
data – 2d image as numpy array
- init(self)
- init_borders(self)
- init_labels(self)
- init_pass(self)
- init_regions(self)
- classmethod load(cls, fname)
Load data from a HDF5 file
- merge_intense(self, thres=1.0)
Merge groups when (pass-mini)/(maxi-mini) >= thres
- merge_singleton(self)
merge single pixel region
- merge_twins(self)
Twins are two peak regions which are best linked together: A -> B and B -> A
- peaks_from_area(self, mask, Imin=None, keep=None, bool refine=True, float dmin=0.0, **kwarg)
- Parameters:
mask – mask of valid data points
Imin – Minimum intensity for a peak
keep – Number of points to keep
refine – refine sub-pixel position
dmin – minimum distance from another peak
- save(self, fname)
Save all regions into a HDF5 file
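A usage sketch on a synthetic image (illustrative only): init() is assumed to run the labelling, region, border and pass initialisation steps listed above, and the result of peaks_from_area is expected to be a list of sub-pixel (y, x) positions.

    import numpy
    from pyFAI.ext.watershed import InverseWatershed

    rng = numpy.random.default_rng(6)
    img = rng.normal(100.0, 5.0, (256, 256))
    y, x = numpy.ogrid[:256, :256]
    for cy, cx in [(64, 64), (128, 200), (200, 80)]:          # three synthetic peaks
        img = img + 500.0 * numpy.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 50.0)
    img = img.astype(numpy.float32)

    iw = InverseWatershed(img, thres=1.0)
    iw.init()
    mask = numpy.ones(img.shape, dtype=bool)                  # every pixel is valid
    peaks = iw.peaks_from_area(mask, Imin=300)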
- class pyFAI.ext.watershed.Region
Bases:
object
- border
- get_borders(self)
- get_highest_pass(self)
- get_index(self)
- get_maxi(self)
- get_mini(self)
- get_neighbors(self)
- get_pass_to(self)
- get_size(self)
- highest_pass
- index
- init_values(self, float[::1] flat)
Initialize the values: maxi, mini, the pass height and so on.
- Parameters:
flat – flat view on the data (intensity)
- Returns:
True if there is a problem and the region should be removed
- maxi
- merge(self, Region other)
merge 2 regions
- mini
- neighbors
- pass_to
- peaks
- size
Module contents
Package containing all Cython binary extensions
pyFAI.ext private package
ext._bispev
Module
Module containing a re-implementation of bi-cubic spline evaluation from scipy.
- pyFAI.ext._bispev.bisplev(x, y, tck, dx=0, dy=0)
Evaluate a bivariate B-spline and its derivatives.
Return a rank-2 array of spline function values (or spline derivative values) at points given by the cross-product of the rank-1 arrays x and y. In special cases, return an array or just a float if either x or y or both are floats. Based on BISPEV from FITPACK.
See bisplrep() to generate the tck representation.
See also: splprep(), splrep(), splint(), sproot(), splev(), UnivariateSpline(), BivariateSpline()
- Parameters:
x (ndarray) – Rank-1 arrays specifying the domain over which to evaluate the spline or its derivative.
y (ndarray) – Rank-1 arrays specifying the domain over which to evaluate the spline or its derivative.
tck (tuple) – A sequence of length 5 returned by bisplrep containing the knot locations, the coefficients, and the degree of the spline: [tx, ty, c, kx, ky].
dx (int) – The orders of the partial derivatives in x. This version does not implement derivatives.
dy (int) – The orders of the partial derivatives in y. This version does not implement derivatives.
- Return type:
ndarray
- Returns:
The B-spline or its derivative evaluated over the set formed by the cross-product of x and y.
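A sketch comparing against scipy: the tck representation is produced with scipy.interpolate.bisplrep and then evaluated with this Cython re-implementation (derivatives are not available here, so dx and dy are left at 0):

    import numpy
    from scipy.interpolate import bisplrep
    from pyFAI.ext._bispev import bisplev

    xg, yg = numpy.meshgrid(numpy.linspace(0.0, 10.0, 30), numpy.linspace(0.0, 10.0, 30))
    zg = numpy.sin(xg) * numpy.cos(yg)
    tck = bisplrep(xg.ravel(), yg.ravel(), zg.ravel())        # [tx, ty, c, kx, ky]

    xnew = numpy.linspace(0.0, 10.0, 100)
    ynew = numpy.linspace(0.0, 10.0, 100)
    values = bisplev(xnew, ynew, tck)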
ext._blob
Module
Some Cythonized functions for the blob-detection algorithm.
It is used to find peaks in images by performing successive blurs.
- pyFAI.ext._blob.local_max(float[:, :, ::1] dogs, mask=None, bool n_5=False)
Calculate if a point is a maximum in a 3D space: (scale, y, x)
- Parameters:
dogs – 3D array of difference of gaussian
mask – mask with invalid pixels
n_5 – take a neighborhood of 5x5 pixels in the plane
- Returns:
3D array with 1 where a local maximum is found
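A sketch building a small difference-of-Gaussian stack with scipy.ndimage and searching it for maxima; the float32, C-contiguous layout follows the signature above, and the DoG construction itself is only illustrative:

    import numpy
    from scipy.ndimage import gaussian_filter
    from pyFAI.ext._blob import local_max

    rng = numpy.random.default_rng(7)
    img = rng.normal(0.0, 1.0, (128, 128))
    img[40, 60] += 50.0                                   # an isolated bright spot

    sigmas = [1.0, 2.0, 4.0, 8.0]
    blurred = [gaussian_filter(img, s) for s in sigmas]
    dogs = numpy.ascontiguousarray(
        [blurred[i] - blurred[i + 1] for i in range(len(sigmas) - 1)],
        dtype=numpy.float32)                              # shape (3, 128, 128)

    maxima = local_max(dogs)                              # 1 where a (scale, y, x) point is a local maximum
    print(numpy.argwhere(maxima))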
ext._convolution
Module
Implementation of a separable 2D convolution.
It is used in real space to blur images, as in the blob-detection algorithm.
- pyFAI.ext._convolution.gaussian(sigma, width=None)
Return a Gaussian window of length “width” with standard-deviation “sigma”.
- Parameters:
sigma – standard deviation sigma
width – length of the window (int); by default 8*sigma+1
The width should be odd.
The FWHM of the window is 2*sqrt(2*ln(2))*sigma ≈ 2.355*sigma
- pyFAI.ext._convolution.gaussian_filter(img, sigma)
Performs a Gaussian blurring using a Gaussian kernel.
- Parameters:
img – input image
sigma – width parameter of the gaussian
- pyFAI.ext._convolution.horizontal_convolution(float[:, ::1] img, float[::1] filter)
Implements a 1D horizontal convolution with a filter. The only implemented mode is “reflect” (the default in scipy.ndimage).
Uses a mixed-precision accumulator.
- Parameters:
img – input image
filter – 1D array with the coefficients of the filter
- Returns:
array of the same shape as the input image
- pyFAI.ext._convolution.vertical_convolution(float[:, ::1] img, float[::1] filter)
Implements a 1D vertical convolution with a filter. The only implemented mode is “reflect” (the default in scipy.ndimage).
Uses a mixed-precision accumulator.
- Parameters:
img – input image
filter – 1D array with the coefficients of the filter
- Returns:
array of the same shape as the input image
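A sketch of the separable blur built from the two 1D passes, compared with the convenience gaussian_filter; the explicit float32 casts are a precaution, and the chained call assumes horizontal_convolution returns a C-contiguous float32 array:

    import numpy
    from pyFAI.ext._convolution import (gaussian, gaussian_filter,
                                        horizontal_convolution, vertical_convolution)

    rng = numpy.random.default_rng(8)
    img = numpy.ascontiguousarray(rng.normal(100.0, 10.0, (256, 256)), dtype=numpy.float32)

    kernel = numpy.ascontiguousarray(gaussian(2.0), dtype=numpy.float32)   # 1D Gaussian window
    blurred_sep = vertical_convolution(horizontal_convolution(img, kernel), kernel)
    blurred_ref = gaussian_filter(img, 2.0)        # expected to agree closely with blurred_sep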
ext._distortion
Module
Distortion corrections are applied using a look-up table (or its CSR representation)
- class pyFAI.ext._distortion.Distortion(detector='detector', shape=None)
Bases:
object
This class applies a distortion correction on an image.
It is also able to apply an inversion of the correction.
- __init__(self, detector='detector', shape=None)
- Parameters:
detector – detector instance or detector name
- calc_LUT(self)
- calc_LUT_size(self)
Considering the “half-CCD” spline from ID11 which describes a (1025, 2048) detector, the physical location of pixels should go from [-17.48634 : 1027.0543, -22.768829 : 2028.3689]. We chose to discard pixels falling outside the [0:1025, 0:2048] range, with a loss of intensity.
We keep self.pos: pos_corners will not be compatible with systems showing non-adjacent pixels (like some XPAD detectors)
- calc_pos(self)
- correct(self, image)
Correct an image based on the look-up table calculated …
- Parameters:
image – 2D-array with the image
- Returns:
corrected 2D image
- uncorrect(self, image)
Take an image which has been corrected and transform it back into its raw form (with loss of information)
- Parameters:
image – 2D-array with the image
- Returns:
uncorrected 2D image and a mask (pixels in the raw image not existing)
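A usage sketch of the Distortion class; “Pilatus1M” is only an example of a detector name registered in pyFAI, and the look-up table is assumed to be built on first use (call calc_LUT() explicitly otherwise):

    import numpy
    from pyFAI.ext._distortion import Distortion

    dis = Distortion(detector="Pilatus1M")
    raw = numpy.random.poisson(100, size=(1043, 981)).astype(numpy.float32)

    corrected = dis.correct(raw)               # apply the distortion correction via the look-up table
    back, mask = dis.uncorrect(corrected)      # lossy inverse transform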
- pyFAI.ext._distortion.calc_CSR(float32_t[:, :, :, :] pos, shape, bin_size, max_pixel_size, int8_t[:, ::1] mask=None, offset=(0, 0))
Calculate the Look-up table as CSR format
- Parameters:
pos – 4D position array
shape – output shape
bin_size – number of input element per output element (as numpy array)
max_pixel_size – (2-tuple of int) size of a buffer covering the largest pixel
mask – array with invalid pixels marked
offset – global offset for pixel coordinates
- Returns:
look-up table in CSR format: 3-tuple of array
- pyFAI.ext._distortion.calc_LUT(float32_t[:, :, :, ::1] pos, shape, bin_size, max_pixel_size, int8_t[:, :] mask=None, offset=(0, 0))
- Parameters:
pos – 4D position array
shape – output shape
bin_size – number of input element per output element (numpy array)
max_pixel_size – (2-tuple of int) size of a buffer covering the largest pixel
mask – array with bad pixels marked as True
offset – global offset for pixel position
- Returns:
look-up table
- pyFAI.ext._distortion.calc_area(I1, I2, slope, intercept)
Calculate the area between I1 and I2 of a line with a given slope & intercept
- pyFAI.ext._distortion.calc_pos(signatures, args, kwargs, defaults)
Calculate the pixel boundary position on the regular grid
- Parameters:
pixel_corners – pixel corner coordinate as detector.get_pixel_corner()
shape – requested output shape. If None, it is calculated
pixel2 (pixel1,) – pixel size along row and column coordinates
- Returns:
pos, delta1, delta2, shape_out, offset
- pyFAI.ext._distortion.calc_size(signatures, args, kwargs, defaults)
Calculate the number of items per output pixel
- Parameters:
pos – 4D array with position in space
shape – shape of the output array
mask – input data mask
offset – 2-tuple of float with the minimal index of
- Returns:
number of input element per output elements
- pyFAI.ext._distortion.calc_sparse(float32_t[:, :, :, ::1] pos, shape, max_pixel_size=(8, 8), int8_t[:, ::1] mask=None, format=u'csr', int bins_per_pixel=8, offset=(0, 0))
Calculate the look-up table (or CSR) using OpenMP
- Parameters:
pos – 4D position array
shape – output shape
max_pixel_size – (2-tuple of int) size of a buffer covering the largest pixel
mask – array with invalid pixels marked (True)
format – can be “CSR” or “LUT”
bins_per_pixel – average splitting factor (number of output bins per input pixel)
offset – global pixel offset
- Returns:
look-up table in CSR/LUT format
- pyFAI.ext._distortion.calc_sparse_v2(float32_t[:, :, :, ::1] pos, shape, max_pixel_size=(8, 8), int8_t[:, ::1] mask=None, format=u'csr', int bins_per_pixel=8, builder_config=None)
Calculate the look-up table (or CSR) using OpenMP
- Parameters:
pos – 4D position array
shape – output shape
max_pixel_size – (2-tuple of int) size of a buffer covering the largest pixel
format – can be “CSR” or “LUT”
bins_per_pixel – average splitting factor (number of output bins per input pixel), deprecated
- Returns:
look-up table in CSR/LUT format
- pyFAI.ext._distortion.clip(value, min_val, int max_val)
Limits the value to bounds
- Parameters:
value – the value to clip
min_value – the lower bound
max_value – the upper bound
- Returns:
clipped value in the requested range
- pyFAI.ext._distortion.correct(image, shape_in, shape_out, LUT, dummy=None, delta_dummy=None, method='double')
Correct an image based on the look-up table calculated … dispatch according to LUT type
- Parameters:
image – 2D-array with the image
shape_in – shape of input image
shape_out – shape of output image
LUT – Look up table, here a 2D-array of struct
dummy – value for invalid pixels
delta_dummy – precision for invalid pixels
method – integration method: can be “kahan” using single precision compensated for error or “double” in double precision (64 bits)
- Returns:
corrected 2D image
- pyFAI.ext._distortion.correct_CSR(image, shape_in, shape_out, LUT, dummy=None, delta_dummy=None, variance=None, method='double')
Correct an image based on the look-up table calculated …
- Parameters:
image – 2D-array with the image
shape_in – shape of input image
shape_out – shape of output image
LUT – Look up table, here a 3-tuple array of ndarray
dummy – value for invalid pixels
delta_dummy – precision for invalid pixels
variance – unused for now … TODO: propagate variance.
method – integration method: can be “kahan” using single precision compensated for error or “double” in double precision (64 bits)
- Returns:
corrected 2D image
Nota: patch image on proper buffer size if needed.
- pyFAI.ext._distortion.correct_CSR_double(image, shape_out, LUT, dummy=None, delta_dummy=None)
Correct an image based on the look-up table calculated … using double precision accumulator
- Parameters:
image – 2D-array with the image
shape_in – shape of input image
shape_out – shape of output image
LUT – Look up table, here a 3-tuple array of ndarray
dummy – value for invalid pixels
delta_dummy – precision for invalid pixels
- Returns:
corrected 2D image
- pyFAI.ext._distortion.correct_CSR_kahan(image, shape_out, LUT, dummy=None, delta_dummy=None)
Correct an image based on the look-up table calculated … using kahan’s error compensated algorithm
- Parameters:
image – 2D-array with the image
shape_in – shape of input image
shape_out – shape of output image
LUT – Look up table, here a 3-tuple array of ndarray
dummy – value for invalid pixels
delta_dummy – precision for invalid pixels
- Returns:
corrected 2D image
- pyFAI.ext._distortion.correct_CSR_preproc_double(image, shape_out, LUT, dummy=None, delta_dummy=None, empty=numpy.NaN)
Correct an image based on the look-up table calculated … implementation using double precision accumulator
- Parameters:
image – 2D-array with the image (signal, variance, normalization)
shape_in – shape of input image
shape_out – shape of output image
LUT – Look up table, here a 3-tuple array of ndarray
dummy – value for invalid pixels
delta_dummy – precision for invalid pixels
empty – numerical value for empty pixels (if dummy is not provided)
method – integration method: can be “kahan” using single precision compensated for error or “double” in double precision (64 bits)
- Returns:
corrected 2D image + array with (signal, variance, norm)
- pyFAI.ext._distortion.correct_LUT(image, shape_in, shape_out, lut_t[:, ::1] LUT, dummy=None, delta_dummy=None, method=u'double')
Correct an image based on the look-up table calculated … dispatch between kahan and double
- Parameters:
image – 2D-array with the image
shape_in – shape of input image
shape_out – shape of output image
LUT – Look up table, here a 2D-array of struct
dummy – value for invalid pixels
delta_dummy – precision for invalid pixels
method – integration method: can be “kahan” using single precision compensated for error or “double” in double precision (64 bits)
- Returns:
corrected 2D image
- pyFAI.ext._distortion.correct_LUT_double(image, shape_out, lut_t[:, ::1] LUT, dummy=None, delta_dummy=None)
Correct an image based on the look-up table calculated … double precision accumulated
- Parameters:
image – 2D-array with the image
shape_in – shape of input image
shape_out – shape of output image
LUT – Look up table, here a 2D-array of struct
dummy – value for invalid pixels
delta_dummy – precision for invalid pixels
- Returns:
corrected 2D image
- pyFAI.ext._distortion.correct_LUT_kahan(image, shape_out, lut_t[:, ::1] LUT, dummy=None, delta_dummy=None)
Correct an image based on the look-up table calculated …
- Parameters:
image – 2D-array with the image
shape_in – shape of input image
shape_out – shape of output image
LUT – Look up table, here a 2D-array of struct
dummy – value for invalid pixels
delta_dummy – precision for invalid pixels
- Returns:
corrected 2D image
- pyFAI.ext._distortion.correct_LUT_preproc_double(image, shape_out, lut_t[:, ::1] LUT, dummy=None, delta_dummy=None, empty=numpy.NaN)
Correct an image based on the look-up table calculated …; implementation using a double-precision accumulator
- Parameters:
image – 2D-array with the image (signal, variance, normalization)
shape_in – shape of input image
shape_out – shape of output image
LUT – Look up table, here a 2D-array of struct
dummy – value for invalid pixels
delta_dummy – precision for invalid pixels
empty – numerical value for empty pixels (if dummy is not provided)
- Returns:
corrected 2D image + array with (signal, variance, norm)
- pyFAI.ext._distortion.recenter(position_t[:, ::1] pixel, bool chiDiscAtPi=1)
This function checks whether the pixel sits on the azimuthal discontinuity, using the sign of its algebraic area, and recenters the corner coordinates consistently so that all four azimuthal coordinates fall in the same range.
Note: the returned area is negative, since a positive value is used to flag a pixel lying on the discontinuity.
- Parameters:
pixel – 4x2 array with radius, azimuth for the 4 corners. MODIFIED IN PLACE !!!
chiDiscAtPi – set to 0 to indicate an azimuthal range of [0, 2π[ instead of the default [-π, π[
- Returns:
signed area (approximate & negative)
- pyFAI.ext._distortion.resize_image_2D(image, shape=None)
Reshape the image so that it has the required shape
- Parameters:
image – 2D-array with the image
shape – expected shape of input image
- Returns:
2D image with the proper shape
- pyFAI.ext._distortion.resize_image_3D(image, shape=None)
Reshape the image so that it has the required shape. This version is designed for n-channel images obtained after preprocessing, i.e. nlines × ncolumns × (value, variance, normalization); see the sketch below.
- Parameters:
image – 3D-array with the preprocessed image
shape – expected shape of input image (2D only)
- Returns:
3D image with the proper shape
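A short sketch of both resize helpers, here padding an acquired frame that is slightly smaller than what the correction routines expect (array sizes are illustrative only).

import numpy
from pyFAI.ext import _distortion

raw = numpy.ones((510, 512), dtype=numpy.float32)       # frame slightly too small
fixed = _distortion.resize_image_2D(raw, (512, 512))
print(fixed.shape)                                       # -> (512, 512)

# 3-channel variant for preprocessed data: (signal, variance, normalization) per pixel
prep = numpy.ones((510, 512, 3), dtype=numpy.float32)
fixed3 = _distortion.resize_image_3D(prep, (512, 512))
print(fixed3.shape)                                      # -> (512, 512, 3)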
- pyFAI.ext._distortion.uncorrect_CSR(image, shape, LUT)
Take an image which has been corrected and transform it back into its raw form (with loss of information)
- Parameters:
image – 2D-array with the image
shape – shape of output image
LUT – Look up table, here a 3-tuple of ndarray
- Returns:
uncorrected 2D image and a mask (flagging pixels of the raw image that could not be restored)
- pyFAI.ext._distortion.uncorrect_LUT(image, shape, lut_t[:, :] LUT)
Take an image which has been corrected and transform it back into its raw form (with loss of information)
- Parameters:
image – 2D-array with the image
shape – shape of output image
LUT – Look up table, here a 2D-array of struct
- Returns:
uncorrected 2D image and a mask (flagging pixels of the raw image that could not be restored)
pyFAI.ext._geometry module
This extension is a fast implementation for calculating the geometry, i.e. where each pixel of a detector array lies in space (x, y, z), or equivalently its (r, \(\chi\)) coordinates.
- pyFAI.ext._geometry.calc_chi(double L, double rot1, double rot2, double rot3, pos1, pos2, pos3=None, int orientation=0, bool chi_discontinuity_at_pi=True)
Calculate the chi array (azimuthal angles) using OpenMP
\[\begin{aligned}
X_1 &= p_1 \cos(rot_2)\cos(rot_3) + p_2 \left(\cos(rot_3)\sin(rot_1)\sin(rot_2) - \cos(rot_1)\sin(rot_3)\right) - L \left(\cos(rot_1)\cos(rot_3)\sin(rot_2) + \sin(rot_1)\sin(rot_3)\right) \\
X_2 &= p_1 \cos(rot_2)\sin(rot_3) - L \left(-\cos(rot_3)\sin(rot_1) + \cos(rot_1)\sin(rot_2)\sin(rot_3)\right) + p_2 \left(\cos(rot_1)\cos(rot_3) + \sin(rot_1)\sin(rot_2)\sin(rot_3)\right) \\
X_3 &= -L \cos(rot_1)\cos(rot_2) + p_2 \cos(rot_2)\sin(rot_1) - p_1 \sin(rot_2) \\
\tan\chi &= X_2 / X_1
\end{aligned}\]
- Parameters:
L – distance sample - PONI
rot1 – angle1
rot2 – angle2
rot3 – angle3
pos1 – numpy array with distances in meter along dim1 from PONI (Y)
pos2 – numpy array with distances in meter along dim2 from PONI (X)
pos3 – numpy array with distances in meter along Sample->PONI (Z), positive behind the detector
orientation – orientation of the detector, values 1-4
chi_discontinuity_at_pi – set to False to obtain chi in the range [0, 2pi[ instead of [-pi, pi[
- Returns:
ndarray of double with same shape and size as pos1
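A minimal sketch of calc_chi for a rotation-free geometry; the pixel positions are illustrative values in metres, measured from the PONI.

import numpy
from pyFAI.ext import _geometry

# 5x5 grid of pixel centres, 1 mm pitch, centred on the PONI (metres)
y, x = numpy.mgrid[-2:3, -2:3] * 1e-3
pos1 = y.ravel().astype(numpy.float64)
pos2 = x.ravel().astype(numpy.float64)

chi = _geometry.calc_chi(0.1, 0.0, 0.0, 0.0, pos1, pos2)   # L = 0.1 m, rot1 = rot2 = rot3 = 0
print(chi.reshape(5, 5))                                   # azimuth in radians, in [-pi, pi[ by default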
- pyFAI.ext._geometry.calc_cosa(double L, pos1, pos2, pos3=None)
Calculate the cosine of the incidence angle using OpenMP. Used for sensor-thickness effect corrections
- Parameters:
L – distance sample - PONI
pos1 – numpy array with distances in meter along dim1 from PONI (Y)
pos2 – numpy array with distances in meter along dim2 from PONI (X)
pos3 – numpy array with distances in meter along Sample->PONI (Z), positive behind the detector
- Returns:
ndarray of double with same shape and size as pos1
- pyFAI.ext._geometry.calc_delta_chi(signatures, args, kwargs, defaults)
Calculate the delta chi array (azimuthal angles) using OpenMP
- Parameters:
centers – numpy array with chi angles of the center of the pixels
corners – numpy array with chi angles of the corners of the pixels
- Returns:
ndarray of double with the same shape and size as centers, holding the delta chi per pixel
- pyFAI.ext._geometry.calc_pos_zyx(double L, double poni1, double poni2, double rot1, double rot2, double rot3, pos1, pos2, pos3=None, int orientation=0)
Calculate the 3D coordinates in the sample's reference frame
- Parameters:
L – distance sample - PONI
poni1 – PONI coordinate along y axis
poni2 – PONI coordinate along x axis
rot1 – angle1
rot2 – angle2
rot3 – angle3
pos1 – numpy array with distances in meter along dim1 from PONI (Y)
pos2 – numpy array with distances in meter along dim2 from PONI (X)
pos3 – numpy array with distances in meter along Sample->PONI (Z), positive behind the detector
orientation – value 1-4
- Returns:
3-tuple of ndarray of double with same shape and size as pos1
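A sketch of calc_pos_zyx under the same rotation-free assumptions; the distance and PONI offsets are illustrative values.

import numpy
from pyFAI.ext import _geometry

y, x = numpy.mgrid[-2:3, -2:3] * 1e-3                    # pixel positions in metres
pos1 = y.ravel().astype(numpy.float64)
pos2 = x.ravel().astype(numpy.float64)

# L = 0.1 m, PONI at the origin of the pixel grid, no rotations
z, yy, xx = _geometry.calc_pos_zyx(0.1, 0.0, 0.0, 0.0, 0.0, 0.0, pos1, pos2)
print(z[0], yy[0], xx[0])   # coordinates of the first pixel, in (z, y, x) order per the function name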
- pyFAI.ext._geometry.calc_q(double L, double rot1, double rot2, double rot3, pos1, pos2, double wavelength, pos3=None, int orientation=0)
Calculate the q (scattering vector) array using OpenMP
\[\begin{aligned}
X_1 &= p_1 \cos(rot_2)\cos(rot_3) + p_2 \left(\cos(rot_3)\sin(rot_1)\sin(rot_2) - \cos(rot_1)\sin(rot_3)\right) - L \left(\cos(rot_1)\cos(rot_3)\sin(rot_2) + \sin(rot_1)\sin(rot_3)\right) \\
X_2 &= p_1 \cos(rot_2)\sin(rot_3) - L \left(-\cos(rot_3)\sin(rot_1) + \cos(rot_1)\sin(rot_2)\sin(rot_3)\right) + p_2 \left(\cos(rot_1)\cos(rot_3) + \sin(rot_1)\sin(rot_2)\sin(rot_3)\right) \\
X_3 &= -L \cos(rot_1)\cos(rot_2) + p_2 \cos(rot_2)\sin(rot_1) - p_1 \sin(rot_2) \\
\tan\chi &= X_2 / X_1
\end{aligned}\]
- Parameters:
L – distance sample - PONI
rot1 – angle1
rot2 – angle2
rot3 – angle3
pos1 – numpy array with distances in meter along dim1 from PONI (Y)
pos2 – numpy array with distances in meter along dim2 from PONI (X)
pos3 – numpy array with distances in meter along Sample->PONI (Z), positive behind the detector
wavelength – in meter to get q in nm-1
orientation – unused
- Returns:
ndarray of double with same shape and size as pos1
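A sketch of calc_q with an illustrative 1 Å wavelength; both the wavelength and the pixel positions are expressed in metres, so q comes out in nm⁻¹.

import numpy
from pyFAI.ext import _geometry

y, x = numpy.mgrid[-2:3, -2:3] * 1e-3
pos1 = y.ravel().astype(numpy.float64)
pos2 = x.ravel().astype(numpy.float64)

wavelength = 1e-10                                        # 1 Angstrom, in metres
q = _geometry.calc_q(0.1, 0.0, 0.0, 0.0, pos1, pos2, wavelength)
print(q.min(), q.max())                                   # scattering vector moduli in nm^-1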
- pyFAI.ext._geometry.calc_r(double L, double rot1, double rot2, double rot3, pos1, pos2, pos3=None, int orientation=0)
Calculate the radius array (radial direction) in parallel
- Parameters:
L – distance sample - PONI
rot1 – angle1
rot2 – angle2
rot3 – angle3
pos1 – numpy array with distances in meter along dim1 from PONI (Y)
pos2 – numpy array with distances in meter along dim2 from PONI (X)
pos3 – numpy array with distances in meter along Sample->PONI (Z), positive behind the detector
orientation – unused
- Returns:
ndarray of double with same shape and size as pos1
- pyFAI.ext._geometry.calc_rad_azim(double L, double poni1, double poni2, double rot1, double rot2, double rot3, pos1, pos2, pos3=None, space=u'2th', wavelength=None, int orientation=0, bool chi_discontinuity_at_pi=True)
Calculate the radial & azimuthal position for each pixel from pos1, pos2, pos3.
- Parameters:
L – distance sample - PONI
poni1 – PONI coordinate along y axis
poni2 – PONI coordinate along x axis
rot1 – angle1
rot2 – angle2
rot3 – angle3
pos1 – numpy array with distances in meter along dim1 from PONI (Y)
pos2 – numpy array with distances in meter along dim2 from PONI (X)
pos3 – numpy array with distances in meter along Sample->PONI (Z), positive behind the detector
space – can be “2th”, “q” or “r” for radial units. Azimuthal units are radians
orientation – values from 1 to 4
chi_discontinuity_at_pi – set to False to obtain chi in the range [0, 2pi[ instead of [-pi, pi[
- Returns:
ndarray of double with shape pos1.shape + (2,), holding the radial and the azimuthal coordinate for each pixel
- Raises:
KeyError when space is invalid; ValueError when the wavelength is missing
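A sketch of calc_rad_azim requesting q as the radial unit, which is the case that needs a wavelength; the geometry values are illustrative.

import numpy
from pyFAI.ext import _geometry

y, x = numpy.mgrid[-2:3, -2:3] * 1e-3
pos1 = y.ravel().astype(numpy.float64)
pos2 = x.ravel().astype(numpy.float64)

out = _geometry.calc_rad_azim(0.1, 0.0, 0.0, 0.0, 0.0, 0.0, pos1, pos2,
                              space="q", wavelength=1e-10)
radial, azimuth = out[..., 0], out[..., 1]                # output shape is pos1.shape + (2,)
print(radial.max(), azimuth.min())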
- pyFAI.ext._geometry.calc_sina(double L, pos1, pos2, pos3=None)
Calculate the sine of the incidence angle using OpenMP. Used for sensor-thickness effect corrections
- Parameters:
L – distance sample - PONI
pos1 – numpy array with distances in meter along dim1 from PONI (Y)
pos2 – numpy array with distances in meter along dim2 from PONI (X)
pos3 – numpy array with distances in meter along Sample->PONI (Z), positive behind the detector
- Returns:
ndarray of double with same shape and size as pos1
- pyFAI.ext._geometry.calc_tth(double L, double rot1, double rot2, double rot3, pos1, pos2, pos3=None, int orientation=0)
Calculate the 2theta array (radial angle) in parallel
- Parameters:
L – distance sample - PONI
rot1 – angle1
rot2 – angle2
rot3 – angle3
pos1 – numpy array with distances in meter along dim1 from PONI (Y)
pos2 – numpy array with distances in meter along dim2 from PONI (X)
pos3 – numpy array with distances in meter along Sample->PONI (Z), positive behind the detector
orientation – unused
- Returns:
ndarray of double with same shape and size as pos1
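A sketch of calc_tth with a rotation-free sanity check against the elementary flat-detector relation tan(2θ) = ρ/L, where ρ is the in-plane distance of the pixel from the PONI; the check only holds because all rotations are zero here.

import numpy
from pyFAI.ext import _geometry

y, x = numpy.mgrid[-2:3, -2:3] * 1e-3
pos1 = y.ravel().astype(numpy.float64)
pos2 = x.ravel().astype(numpy.float64)

L = 0.1
tth = _geometry.calc_tth(L, 0.0, 0.0, 0.0, pos1, pos2)
# With rot1 = rot2 = rot3 = 0, 2theta reduces to arctan(rho / L)
expected = numpy.arctan2(numpy.sqrt(pos1 ** 2 + pos2 ** 2), L)
print(numpy.allclose(tth, expected))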
pyFAI.ext._tree module
Module implementing the file-hierarchy tree used by the diff_map graphical user interface.
- class pyFAI.ext._tree.TreeItem(unicode label=None, TreeItem parent=None)
Bases:
object
Node of a tree …
Each node contains:
children: list
parent: TreeItem parent
label: str
order: int
type: str can be “dir”, “file”, “group” or “dataset”
extra: any object
- __init__(*args, **kwargs)
- add_child(self, TreeItem child)
- children
children: list
- extra
extra: object
- has_child(self, unicode label) bool
- label
label: unicode
- name
- order
order: ‘int’
- parent
parent: pyFAI.ext._tree.TreeItem
- size
- sort(self)
- type
type: unicode
- update(self, TreeItem new_root)
Add new children to the tree
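A minimal sketch of building a two-level tree by hand; the labels are purely illustrative, and in practice the tree is populated by the diff_map GUI rather than manually.

from pyFAI.ext._tree import TreeItem

root = TreeItem("scan_0001")                 # e.g. a directory node
leaf = TreeItem("data_0001.h5")              # e.g. a file node
root.add_child(leaf)

print(root.has_child("data_0001.h5"))        # True
print([child.label for child in root.children])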