Filtering signal in azimuthal space#
Diffraction signal usually presents a polar symmetry: all pixels with the same azimuthal angle (χ) have similar intensities. The simplest way to exploit this is to take the mean, which is called the azimuthal average. But the average is very sensitive to outliers such as gaps, missing pixels, shadows, cosmic rays or reflections coming from larger crystallites. In this tutorial we will see two alternative ways to reject this unwanted signal and focus on the majority of pixels: sigma-clipping and median filtering.
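To illustrate why the plain mean is so fragile, here is a tiny NumPy sketch (not part of the original notebook): a synthetic ring of Poisson background with a single hot pixel standing in for a Bragg peak shifts the mean noticeably while leaving the median essentially untouched.

import numpy

rng = numpy.random.default_rng(0)
ring = rng.poisson(100, size=1000).astype(float)  # pixels of one azimuthal ring
ring[42] = 50000.0                                # one "Bragg peak" outlier

print("mean:  ", ring.mean())         # pulled far above the ~100 background
print("median:", numpy.median(ring))  # barely affected by the single outlier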
[1]:
import os
os.environ["PYOPENCL_COMPILER_OUTPUT"]="0"
[2]:
%matplotlib inline
from matplotlib.pyplot import subplots
from pyFAI.gui import jupyter
import numpy, fabio, pyFAI
from pyFAI import benchmark
from pyFAI.test.utilstest import UtilsTest
ai = pyFAI.load(UtilsTest.getimage("Pilatus6M.poni"))
img = fabio.open(UtilsTest.getimage("Pilatus6M.cbf")).data
fig, ax = subplots(1, 2)
jupyter.display(img, ax=ax[1])
jupyter.plot1d(ai.integrate1d(img, 1000), ax=ax[0])
ax[1].set_title("With a few Bragg peaks")
pass
WARNING:pyFAI.gui.matplotlib:matplotlib already loaded, setting its backend may not work

Azimuthal sigma-clipping#
The idea is to discard pixels which look like outliers in the distribution of all pixels contributing to a single azimuthal bin. It requires an error model, such as Poisson, but it has been shown to be better to use the variance measured within the given azimuthal ring. All details are available in this publication: https://doi.org/10.1107/S1600576724011038 also available at https://doi.org/10.48550/arXiv.2411.09515
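The core of the algorithm can be sketched in a few lines of NumPy for a single azimuthal bin; this is a simplified stand-in for pyFAI's implementation, with the cut-off and iteration count chosen arbitrarily and the standard deviation of the ring playing the role of the "azimuthal" error model.

import numpy

def sigma_clip_bin(values, cutoff=5.0, max_iter=5):
    """Iteratively discard pixels further than `cutoff` sigma from the bin mean."""
    values = numpy.asarray(values, dtype=float)
    mask = numpy.ones(values.shape, dtype=bool)
    for _ in range(max_iter):
        mean = values[mask].mean()
        std = values[mask].std()            # ring variance -> "azimuthal" model
        new_mask = abs(values - mean) <= cutoff * std
        if new_mask.sum() == mask.sum():    # nothing more to discard: converged
            break
        mask = new_mask
    return values[mask].mean(), mask

rng = numpy.random.default_rng(0)
ring = rng.normal(100, 10, size=1000)
ring[:3] = 50000.0                          # a few Bragg peaks / zingers
clipped_mean, kept = sigma_clip_bin(ring)
print(clipped_mean, kept.sum())             # close to 100, outliers rejected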
[3]:
fig, ax = subplots(1, 2)
jupyter.display(img, ax=ax[1])
jupyter.plot1d(ai.sigma_clip(img, 1000, error_model="hybrid", method=("no", "csr", "cython")), ax=ax[0])
ax[1].set_title("With a few Bragg peaks")
ax[0].set_title("Sigma_clipping")
pass

Of course, sigma-clip takes several extra parameters, such as the number of iterations to perform, the cut-off, the error model, … (a hedged usage sketch follows this list). There are also a few limitations:
The algorithm requires the CSR sparse-matrix multiplication: since several integrations are needed, it makes no sense to use a histogram-based algorithm.
The algorithm is available in all implementations: Python (using scipy.sparse), Cython and OpenCL, and it runs just fine on GPUs.
Sigma-clipping is incompatible with any kind of pixel splitting: with pixel splitting, a single pixel can contribute to several azimuthal bins, and discarding a pixel in one ring could disable it in the neighboring ring (or not, since bins are processed in parallel).
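As announced above, here is a hedged usage sketch showing how those extra parameters can be passed. The keyword names thres (cut-off, in units of sigma) and max_iter are assumptions taken from pyFAI's sigma_clip_ng API and may differ in your version; check help(ai.sigma_clip) before relying on them.

# Hedged sketch: tuning the sigma-clip. `thres` and `max_iter` are assumed
# keyword names (check help(ai.sigma_clip) for your pyFAI version).
res = ai.sigma_clip(img, 1000,
                    error_model="azimuthal",         # use the ring variance
                    thres=4,                         # cut-off in sigma (assumed name)
                    max_iter=3,                      # iteration count (assumed name)
                    method=("no", "csr", "opencl"))  # no splitting, CSR, on the GPU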
Sigma-clipping performance#
[4]:
method = ["no", "csr", "cython"]
[5]:
%%time
perfs_integrate_python = {}
perfs_integrate_cython = {}
perfs_integrate_opencl = {}
perfs_sigma_clip_python = {}
perfs_sigma_clip_cython = {}
perfs_sigma_clip_opencl = {}
for ds in pyFAI.benchmark.PONIS:
ai = pyFAI.load(UtilsTest.getimage(ds))
if ai.wavelength is None: ai.wavelength=1.54e-10
img = fabio.open(UtilsTest.getimage(pyFAI.benchmark.datasets[ds])).data
size = numpy.prod(ai.detector.shape)
print(ds)
print(" Cython")
meth = tuple(method)
nbin = max(ai.detector.shape)
print(" * integrate ", end="")
perfs_integrate_cython[size] = %timeit -o ai.integrate1d(img, nbin, method=meth)
print(" * sigma-clip", end="")
perfs_sigma_clip_cython[size] = %timeit -o ai.sigma_clip(img, nbin, method=meth, error_model="azimuthal")
print(" Python")
meth = tuple(method[:2]+["python"])
print(" * integrate ", end="")
perfs_integrate_python[size] = %timeit -o ai.integrate1d(img, nbin, method=meth)
print(" * sigma-clip", end="")
perfs_sigma_clip_python[size] = %timeit -o ai.sigma_clip(img, nbin, method=meth, error_model="azimuthal")
print(" OpenCL")
meth = tuple(method[:2]+["opencl"])
print(" * integrate ", end="")
perfs_integrate_opencl[size] = %timeit -o ai.integrate1d(img, nbin, method=meth)
print(" * sigma-clip", end="")
perfs_sigma_clip_opencl[size] = %timeit -o ai.sigma_clip(img, nbin, method=meth, error_model="azimuthal")
Pilatus1M.poni
Cython
10 ms ± 928 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
10.7 ms ± 296 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Python
11.8 ms ± 356 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
173 ms ± 1.68 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
OpenCL
1.98 ms ± 51.1 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
5.07 ms ± 36 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Pilatus2M.poni
Cython
18.8 ms ± 627 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
26.8 ms ± 4.74 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Python
29.9 ms ± 692 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
554 ms ± 4.75 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
4.88 ms ± 333 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
18.8 ms ± 175 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Eiger4M.poni
Cython
32.3 ms ± 4.06 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
45.9 ms ± 2.71 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Python
57.1 ms ± 380 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
1.11 s ± 6.26 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
8.68 ms ± 295 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
34.9 ms ± 424 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Pilatus6M.poni
Cython
41.3 ms ± 1.44 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
53 ms ± 1.37 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Python
83.1 ms ± 285 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
1.56 s ± 15 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
12 ms ± 416 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
40.1 ms ± 162 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Eiger9M.poni
Cython
77.6 ms ± 3.95 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
117 ms ± 2.34 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Python
170 ms ± 805 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
3.32 s ± 48.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
19.6 ms ± 660 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
97.9 ms ± 601 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Mar3450.poni
Cython
81 ms ± 14.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
112 ms ± 3.84 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Python
178 ms ± 614 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
3.56 s ± 26.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
21.9 ms ± 98.9 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
96.3 ms ± 545 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Fairchild.poni
Cython
110 ms ± 6.51 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
145 ms ± 2.66 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Python
410 ms ± 1.11 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
8.3 s ± 50.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
27.4 ms ± 117 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
83.4 ms ± 170 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
CPU times: user 16min 42s, sys: 35.6 s, total: 17min 17s
Wall time: 5min 54s
[6]:
fig, ax = subplots()
ax.set_xlabel("Image size (Mpix)")
ax.set_ylabel("Frames per seconds")
sizes = numpy.array(list(perfs_integrate_python.keys()))/1e6
ax.plot(sizes, [1/i.best for i in perfs_integrate_python.values()], label="Integrate/Python", color='green', linestyle='dashed', marker='1')
ax.plot(sizes, [1/i.best for i in perfs_integrate_cython.values()], label="Integrate/Cython", color='orange', linestyle='dashed', marker='1')
ax.plot(sizes, [1/i.best for i in perfs_integrate_opencl.values()], label="Integrate/OpenCL", color='blue', linestyle='dashed', marker='1')
ax.plot(sizes, [1/i.best for i in perfs_sigma_clip_python.values()], label="Sigma-clip/Python", color='green', linestyle='dotted', marker='2')
ax.plot(sizes, [1/i.best for i in perfs_sigma_clip_cython.values()], label="Sigma-clip/Cython", color='orange', linestyle='dotted', marker='2')
ax.plot(sizes, [1/i.best for i in perfs_sigma_clip_opencl.values()], label="Sigma-clip/OpenCL", color='blue', linestyle='dotted', marker='2')
ax.set_yscale("log")
ax.legend()
ax.set_title("Performance of Sigma-clipping vs integrate")
[6]:
Text(0.5, 1.0, 'Performance of Sigma-clipping vs integrate')

The penalty is very limited in Cython, but much larger in Python.
The biggest limitation of sigma-clipping is its incompatibility with pixel splitting, a feature needed when oversampling, i.e. requesting many more points than the length of the diagonal of the image. While oversampling is not recommended in the general case (due to the cross-correlation between bins it creates), it can be a necessary evil, especially when performing Rietveld refinement, where about 5 points per peak are needed, a resolution that cannot always be obtained with the pixel-size/distance couple accessible to the experimental setup.
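To make the notion of oversampling concrete, here is a rough rule-of-thumb sketch (an illustration added to this edit, not an official pyFAI recipe): the image diagonal, in pixels, approximates the maximum useful number of radial bins, and asking for several times more bins only stays well behaved with pixel splitting enabled.

diag = int(numpy.hypot(*ai.detector.shape))  # image diagonal in pixels
nbin_oversampled = 4 * diag                  # deliberately oversampled binning
# "full" pixel splitting keeps every bin populated despite the oversampling
res = ai.integrate1d(img, nbin_oversampled, method=("full", "csr", "cython"))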
Median filter in azimuthal space#
The idea is to sort all pixels contributing to an azimuthal bin and to average out all pixels between the lower and upper quantiles. When both thresholds are set to one half, this filter actually provides the median. In order to be compatible with pixel splitting, each pixel is duplicated as many times as it contributes to different bins. After sorting the pixel fragments according to their normalization-corrected signal, the cumulative sum of the normalization is computed in order to determine which fragments to average out.
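The quantile-averaging step can be sketched for a single azimuthal bin with plain NumPy (ignoring pixel splitting; signal and norm are made-up arrays standing for the accumulated signal and normalization of each pixel fragment falling in the bin):

import numpy

def trimmed_azimuthal_average(signal, norm, quantiles=(0.4, 0.6)):
    ratio = signal / norm                      # normalization-corrected signal
    order = numpy.argsort(ratio)               # sort fragments by corrected signal
    cum = numpy.cumsum(norm[order])            # cumulative sum of normalization
    lo, hi = (q * cum[-1] for q in quantiles)  # thresholds in cumulated weight
    keep = order[(cum >= lo) & (cum <= hi)]    # fragments between the quantiles
    return signal[keep].sum() / norm[keep].sum()

rng = numpy.random.default_rng(0)
signal = rng.poisson(100, size=500).astype(float)
signal[:5] += 10000.0                          # a few outliers in the bin
norm = numpy.ones(500)                         # e.g. solid angle x polarization
print(trimmed_azimuthal_average(signal, norm)) # ~100, outliers trimmed away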
[7]:
ai = pyFAI.load(UtilsTest.getimage("Pilatus6M.poni"))
img = fabio.open(UtilsTest.getimage("Pilatus6M.cbf")).data
method = ["full", "csr", "cython"]
percentile=(40,60)
pol=0.99
[8]:
fig, ax = subplots(1, 2)
jupyter.display(img, ax=ax[1])
jupyter.plot1d(ai.medfilt1d_ng(img, 1000, method=method, percentile=percentile, polarization_factor=pol), ax=ax[0])
ax[1].set_title("With a few Bragg peaks")
ax[0].set_title("Median filtering")
pass

Unlike sigma-clipping, this median filter does not require any error model; but the computational cost induced by the sort is huge. In addition, the median is very sensitive and requires a good geometry and a good model of the polarization.
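Since the quantile window is essentially the only tuning knob, it is worth comparing a narrow window (close to a pure median) with a wide one (closer to a lightly trimmed mean, hence less noisy but less robust). This hedged sketch simply reuses the objects already defined above:

# Narrow vs wide quantile window, reusing ai, img, method and pol from above.
narrow = ai.medfilt1d_ng(img, 1000, method=method, percentile=(45, 55),
                         polarization_factor=pol)
wide = ai.medfilt1d_ng(img, 1000, method=method, percentile=(10, 90),
                       polarization_factor=pol)
fig, ax = subplots()
jupyter.plot1d(narrow, ax=ax)
jupyter.plot1d(wide, ax=ax)
ax.set_title("Quantile window: (45, 55) vs (10, 90)")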
[9]:
%%time
perf2_integrate_python = {}
perf2_integrate_cython = {}
perf2_integrate_opencl = {}
perf2_medfilt_python = {}
perf2_medfilt_cython = {}
perf2_medfilt_opencl = {}
for ds in pyFAI.benchmark.PONIS:
ai = pyFAI.load(UtilsTest.getimage(ds))
if ai.wavelength is None: ai.wavelength=1.54e-10
img = fabio.open(UtilsTest.getimage(pyFAI.benchmark.datasets[ds])).data
size = numpy.prod(ai.detector.shape)
print(ds)
print(" Cython")
meth = tuple(method)
nbin = max(ai.detector.shape)
print(" * integrate ", end="")
perf2_integrate_cython[size] = %timeit -o ai.integrate1d(img, nbin, method=meth)
print(" * medianfilter", end="")
perf2_medfilt_cython[size] = %timeit -o ai.medfilt1d_ng(img, nbin, method=meth)
print(" Python")
meth = tuple(method[:2]+["python"])
print(" * integrate ", end="")
perf2_integrate_python[size] = %timeit -o ai.integrate1d(img, nbin, method=meth)
print(" * medianfilter", end="")
perf2_medfilt_python[size] = %timeit -o ai.medfilt1d_ng(img, nbin, method=meth)
print(" OpenCL")
meth = tuple(method[:2]+["opencl"])
print(" * integrate ", end="")
perf2_integrate_opencl[size] = %timeit -o ai.integrate1d(img, nbin, method=meth)
print(" * medianfilter", end="")
perf2_medfilt_opencl[size] = %timeit -o ai.medfilt1d_ng(img, nbin, method=meth)
Pilatus1M.poni
Cython
8.77 ms ± 277 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
29.4 ms ± 2.96 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Python
14.2 ms ± 206 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.05 s ± 3.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
2.3 ms ± 9.26 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
98 ms ± 771 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Pilatus2M.poni
Cython
19.5 ms ± 267 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
94.2 ms ± 1.46 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Python
42.4 ms ± 1.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
3.64 s ± 38.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
5.7 ms ± 24.6 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
430 ms ± 3.25 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Eiger4M.poni
Cython
39.7 ms ± 5.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
156 ms ± 3.93 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Python
78.5 ms ± 89.8 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
6.27 s ± 45.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
9.93 ms ± 282 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
718 ms ± 16.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Pilatus6M.poni
Cython
44.6 ms ± 599 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
213 ms ± 1.68 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Python
125 ms ± 2.23 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
9.22 s ± 38.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
13.2 ms ± 58.5 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.1 s ± 20.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Eiger9M.poni
Cython
74.2 ms ± 950 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
363 ms ± 3.98 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Python
234 ms ± 2.56 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
15.1 s ± 97.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
22 ms ± 987 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.99 s ± 20.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Mar3450.poni
Cython
73.7 ms ± 1.05 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
393 ms ± 12.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Python
257 ms ± 2.68 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
17.3 s ± 115 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
24.9 ms ± 341 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
2.32 s ± 18.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Fairchild.poni
Cython
97.3 ms ± 1.02 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
426 ms ± 4.59 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Python
381 ms ± 1.37 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
20.8 s ± 89.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
OpenCL
30.5 ms ± 641 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
2.49 s ± 41.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
CPU times: user 16min 26s, sys: 15.4 s, total: 16min 41s
Wall time: 13min 3s
[10]:
fig, ax = subplots()
ax.set_xlabel("Image size (Mpix)")
ax.set_ylabel("Frames per seconds")
sizes = numpy.array(list(perf2_integrate_python.keys()))/1e6
ax.plot(sizes, [1/i.best for i in perf2_integrate_python.values()], label="Integrate/Python", color='green', linestyle='dashed', marker='1')
ax.plot(sizes, [1/i.best for i in perf2_integrate_cython.values()], label="Integrate/Cython", color='orange', linestyle='dashed', marker='1')
ax.plot(sizes, [1/i.best for i in perf2_integrate_opencl.values()], label="Integrate/OpenCL", color='blue', linestyle='dashed', marker='1')
ax.plot(sizes, [1/i.best for i in perf2_medfilt_python.values()], label="Medfilt/Python", color='green', linestyle='dotted', marker='2')
ax.plot(sizes, [1/i.best for i in perf2_medfilt_cython.values()], label="Medfilt/Cython", color='orange', linestyle='dotted', marker='2')
ax.plot(sizes, [1/i.best for i in perf2_medfilt_opencl.values()], label="Medfilt/OpenCL", color='blue', linestyle='dotted', marker='2')
ax.set_yscale("log")
ax.legend()
ax.set_title("Performance of Median filtering vs integrate")
pass

As one can see, the penalties are much larger for OpenCL and Python than for Cython.
Conclusion#
Sigma-clipping and median filtering are alternatives to azimuthal integration that offer the ability to reject outliers. They are not more difficult to use, but they are slightly slower owing to their greater algorithmic complexity.