
Shot-to-shot flat-field correction at X-ray free-electron lasers

Open Access

Abstract

X-ray free-electron lasers (XFELs) provide high-brilliance pulses, which offer unique opportunities for coherent X-ray imaging techniques, such as in-line holography. One of the fundamental steps to process in-line holographic data is flat-field correction, which mitigates imaging artifacts and, in turn, enables phase reconstructions. However, conventional flat-field correction approaches cannot correct single XFEL pulses due to the stochastic nature of the self-amplified spontaneous emission (SASE), the mechanism responsible for the high brilliance of XFELs. Here, we demonstrate on simulated and megahertz imaging data, measured at the European XFEL, the possibility of overcoming such a limitation by using two different methods based on principal component analysis and deep learning. These methods retrieve flat-field corrected images from individual frames by separating the sample and flat-field signal contributions, thus enabling advanced phase-retrieval reconstructions. We anticipate that the proposed methods can be implemented in a real-time processing pipeline, which will enable online data analysis and phase reconstructions of coherent full-field imaging techniques such as in-line holography at XFELs.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Corrections

21 March 2022: A typographical correction was made to the author listing.

1. Introduction

The advent of X-ray free-electron lasers (XFELs) [1,2] providing ultrashort, highly coherent, high-flux X-ray pulses has opened the opportunity to explore spatial and temporal resolutions not possible before with X-rays [3]. The high brilliance of XFELs is a consequence of self-amplified spontaneous emission (SASE) [4,5], and large-scale facilities deploying this radiation for applications have been constructed worldwide [1,6–9]. Although SASE produces high-intensity, short, and coherent X-ray pulses, it is a stochastic process that introduces shot noise and poor temporal coherence [10]. This shot noise, which manifests itself in physical properties of XFEL pulses that differ from shot to shot, may degrade the performance of X-ray techniques, in particular X-ray imaging.

Coherent X-ray imaging techniques benefit directly from the high brilliance of XFELs by exploiting phase contrast rather than the attenuation contrast as used by conventional X-ray imaging techniques. Thus, such techniques, when implemented at high-brilliance sources, can explore spatial and temporal resolutions not reachable before with traditional techniques. One example of coherent X-ray imaging techniques is in-line holography [11]. In-line holography is a full-field imaging technique that records interferometric images (holograms) from which the phase information can be extracted by means of phase retrieval algorithms [12,13]. However, the holograms have to be flat-field corrected [14] in order to apply many of those algorithms.

Flat-field correction (FFC) is a technique to mitigate systematic image artifacts that may arise from detector behavior and/or from an optical system. Thus, the FFC of a homogeneous sample should produce a uniform signal in the optical system. For this purpose, images without any sample (flat-field images) and without any illumination (dark-current images) are collected. Originally, FFC was a technique conceived to reduce fixed-pattern noise in a dataset. In such a scenario, the FFC is obtained by normalizing a sample image with the average flat-field image and subtracting the average dark-current image. However, using the average value as a representative flat-field image for flat-field correction is not appropriate when large variations in the illumination per frame are observed. This is a typical issue for XFEL single-pulse experiments due to the stochastic nature of the SASE source. Thus, conventional flat-field correction methods are not applicable, and approaches capable of estimating each pulse's flat-field contribution are needed. One such approach is retrospective flat-field correction [15–17], which estimates the flat-field contribution only from sample measurements. Nonetheless, these methods assume that the acquisitions are performed under similar illumination conditions and therefore cannot trivially account for the stochastic fluctuations of a SASE source. Another simple solution to estimate the FFC for each pulse is to use a beamsplitter prior to the sample to estimate the illumination on the sample. However, this approach may be challenging for magnification geometries. It also reduces the photon flux on the sample, which is undesirable when aiming for optimum illumination (and hence contrast). An alternative that avoids the aforementioned limitations is to use dynamic flat-field correction approaches capable of estimating the flat-field per pulse by exploiting a series of flat-field images.

Dynamic FFC using eigen flat fields has been successfully applied to the data collected from storage rings [18] and recently applied to X-ray free-electron lasers [19]. The performance of this dynamic flat-field correction method [18] in XFELs depends on the dimensionality of the eigen flat-fields to describe the SASE fluctuations up to a certain confidence level. Thus, the performance and execution time varies between XFELs and experimental stations. Another relevant aspect that affects the performance of such methods is how well the flat-field images reproduce the illumination on the sample. For example, systematic and stochastic effects, which vary from shot to shot up to time scales of hours and days, can alter the sample illumination during the experiment. These variations should be well-described by the flat-field images; otherwise, they cannot be corrected even if they are merely translated due to the locality of the method. One common way to minimize or circumvent this problem is to acquire flat-field images before and after each sample acquisition. Thus, reducing the time interval between flat-field and sample acquisitions minimizes those potential systematic effects. Detrimentally, this solution can significantly reduce the available experimental time, and it does not address potential variations like translations that may happen during long acquisitions. As experimental time at XFEL facilities is limited and competitively acquired, shift-invariant approaches that can relax the requirements imposed by local approaches and minimize the acquisition of flat-field images are desirable.

Among state-of-the-art shift-invariant approaches, convolutional neural networks (CNNs) [20] are deep-learning (DL) approaches [21,22] that have revolutionized signal processing and pattern recognition. Since their first implementations, CNNs have demonstrated their capability to identify and classify distorted and shifted patterns [20,23]. Such capabilities relax the aforementioned constraints associated with local approaches, as illumination artifacts do not have to be in the same position or have the exact same shape to be corrected. However, conventional CNN approaches to address FFC require large amounts of paired data between sample images and their flat-field corrected counterparts. Nonetheless, these data can be acquired before and after the experiment, not before and after each acquisition, greatly reducing the experimental overhead time to collect flat and dark-field frames.

This work studies the performance and implementation of flat-field correction approaches in scenarios where conventional FFC based on an average illumination does not work. Here, we study state-of-the-art dynamic flat-field correction approaches capable of estimating the flat-field contribution for each sample frame. Additionally, we apply and implement DL approaches based on CNNs to address this problem. Both approaches are validated and studied with simulated data and experimental data coming from single-pulse experiments at the European XFEL. From these studies, we demonstrate that DL approaches perform at the level of state-of-the-art dynamic flat-field correction approaches. However, DL approaches are not sensitive to local artifacts, and their execution is faster and compatible with online (i.e., real- or quasi-real-time) reconstructions.

The remainder of the paper is structured as follows: First, we introduce FFC and the details of conventional, dynamic, and DL flat-field correction approaches. Second, we perform studies of the performance of such algorithms on simulated data based on single-pulse imaging experiments performed at the Single Particles, Clusters, and Biomolecules and Serial Femtosecond Crystallography (SPB/SFX) [24] instrument of the European XFEL [9,25]. Third, we apply them to experimental data coming from MHz microscopy [26], an imaging technique that exploits single pulses and the unique repetition rate of the European XFEL to record megahertz movies with online holography. Finally, we discuss the capabilities of these algorithms to address FFC at XFELs as well as their compatibility with online processing and real-time analysis.

2. Flat-field correction methods

Flat-field correction (FFC) is a technique conceived for decreasing the fixed-pattern noise, which occurs with the same pattern under the same conditions. In the case of X-ray holography and X-ray imaging techniques, the fixed-pattern noise can arise from various causes: non-uniform scintillator-screen sensitivity, non-uniformity of a 2D X-ray detector, and an inhomogeneous X-ray beam [27].

To correct for the fixed-pattern noise, flat-field correction approaches assume that the recorded image of an object has two contributions, as shown in Fig. 1. One of these contributions comes from the illumination (flat-field), and the other one comes from the detector’s electronic noise (dark current). In order to disentangle the object contribution from these two components, we acquire two sets of images. The first set includes flat-field images ($\mathbf {f}$), i.e., images with the X-ray illumination but without any sample. The second set, known as the dark-current set ($\mathbf {d}$), contains images without X-ray beam illumination. Then, the object contribution is estimated by calculating the normalized image $\mathbf {n}_j$ of a sample image $\mathbf {s}_j$ following:

$$\mathbf{n}_j = \frac{\mathbf{s}_j-\mathbf{d}}{\mathbf{f}-\mathbf{d}}~,$$
where the index $j$ refers to the image number.


Fig. 1. Flat-field and dark-current contribution to a recorded image of a reference object as assumed by conventional flat-field correction approaches.


2.1 Conventional FFC

Generally, the FFC is computed by using the average value of large sets of flat-field ($\bar{\mathbf{f}}$) and dark-current ($\bar{\mathbf{d}}$) images. In such a scenario, Eq. (1) is rewritten as follows:

$$\mathbf{n}_j = \frac{\mathbf{s}_j-{\bar{\mathbf d}}}{{\bar{\mathbf f}}-{\bar{\mathbf d}}}.$$
This approach is suitable for imaging systems with stationary illumination and detector response. However, this method is not applicable when the frame-to-frame variations are not well-described by their average value.
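As a minimal NumPy sketch of Eq. (2) with hypothetical arrays (the small `eps` guard against division by zero is our addition, not part of the paper's formulation):

```python
import numpy as np

def conventional_ffc(sample, flats, darks, eps=1e-12):
    """Conventional FFC (Eq. 2): normalize with the average flat and dark frames."""
    f_bar = flats.mean(axis=0)   # average flat-field image
    d_bar = darks.mean(axis=0)   # average dark-current image
    return (sample - d_bar) / (f_bar - d_bar + eps)

# Hypothetical example: stationary illumination, object transmission of 0.5
rng = np.random.default_rng(0)
flat = 100.0 + 10.0 * rng.random((8, 8))
flats = np.stack([flat] * 5)                 # flat-field set
darks = np.zeros((5, 8, 8))                  # dark-current set
n = conventional_ffc(0.5 * flat, flats, darks)
# n is ~0.5 everywhere, since the illumination does not fluctuate
```

With a fluctuating illumination, as at a SASE source, `f_bar` no longer matches any individual frame and this normalization leaves residual artifacts.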

2.2 Dynamic FFC

As noted, for SASE XFELs, the X-ray illumination is not necessarily uniform from pulse to pulse: XFEL pulses produced via SASE vary in wavelength, position, and intensity profile from shot to shot. To deal with this problem, the flat-field contribution of each image or pulse must be estimated. Among state-of-the-art approaches to address this issue, dynamic flat-field correction methods estimate the flat-field image ($\mathbf {f}_j$) for each measured image ($\mathbf {s}_j$) [18]. Thus, Eq. (1) becomes:

$$\mathbf{n}_j = \frac{\mathbf{s}_j-{\bar{\mathbf d}}}{\mathbf{f}_j-{\bar{\mathbf d}}}~.$$
The flat field $\mathbf {f}_j$ is then approximated by an estimate $\hat {\mathbf {f}}_j$ expressed in a flat-field basis with $K$ elements:
$$\hat{\mathbf{f}}_j \approx {\bar{\mathbf f}} + \sum_{k=1}^{K} \hat{w}_{jk}\mathbf{u}_k~,$$
where $\mathbf {u}_k$ is an element of the basis, and $\hat {w}_{jk}$ is the projection coefficient for an element of the basis $k$ to the measured frame $j$. Thus, this method can be decomposed into two main processes: calculating the flat-field basis $\mathbf {u}_k$ and optimizing the coefficients $\hat {w}_{jk}$. Figure 2 summarizes the main steps of these two processes.


Fig. 2. The processes to estimate a flat field of a sample image used in the dynamic flat-field correction approaches.


The first process, the estimation of the flat-field basis, can be decomposed into three steps: computation, selection, and filtering of the flat-field basis, as shown in the left column of Fig. 2. The flat-field basis was extracted using the principal-component-analysis (PCA) function in MATLAB. This process generates the normalized basis analogous to the direct calculation done in Ref. [18]. One advantage of using this PCA function is that it returns the percentage of the total variance explained by each element of the basis. In this work, we have selected the elements of the basis based on the percentage of the total variance explained by each element, instead of selecting based on parallel analysis as done in Ref. [18]. The basis obtained from these methods still contained noise; it was therefore filtered using a block-matching filter [28] to improve the signal-to-noise ratio.

After calculating the flat-field basis, the second process or computation of the coefficients $\hat {w}_{jk}$ is performed. The $\hat {w}_{jk}$ coefficients are computed to estimate the flat field $\hat {\mathbf {f}}_j$, as described in Eq. (4). This is done by optimizing $\hat {w}_{jk}$ to minimize the variation in the normalized image $\mathbf {n}_j$, as evaluated in Ref. [18].
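The two processes can be sketched as follows. Note that this is a simplified illustration with hypothetical names and shapes: Ref. [18] fits the coefficients by minimizing the variation of the normalized image and filters the basis, whereas this sketch uses a plain least-squares projection, which is exact only for frames without a sample.

```python
import numpy as np

def flat_field_basis(flats, darks, k):
    """PCA of the dark-subtracted flat-field stack (frames, H, W).

    Returns the mean flat field f_bar and the first k eigen flat fields,
    both flattened to 1D arrays of length H*W.
    """
    d_bar = darks.mean(axis=0)
    F = (flats - d_bar).reshape(len(flats), -1)
    f_bar = F.mean(axis=0)
    # SVD of the centered stack yields the principal components (eigen flat fields)
    _, _, vt = np.linalg.svd(F - f_bar, full_matrices=False)
    return f_bar, vt[:k]

def dynamic_ffc(sample, f_bar, basis, d_bar, eps=1e-12):
    """Estimate a per-frame flat field (Eq. 4) and normalize (Eq. 3).

    Simplification: coefficients come from a least-squares projection onto the
    basis rather than the variation-minimization of Ref. [18].
    """
    s = (sample - d_bar).ravel()
    w = basis @ (s - f_bar)          # projection coefficients w_jk
    f_j = f_bar + w @ basis          # estimated (dark-subtracted) flat field
    return (s / (f_j + eps)).reshape(sample.shape)

# Hypothetical demo: illumination = base profile + one fluctuating mode
rng = np.random.default_rng(0)
mode = np.linspace(-1.0, 1.0, 64).reshape(8, 8)       # zero-mean eigen flat field
flats = np.stack([100.0 + c * mode for c in 5 * rng.random(20)])
darks = np.zeros((3, 8, 8))
f_bar, basis = flat_field_basis(flats, darks, k=1)
sample = 100.0 + 2.0 * mode                            # a frame without a sample
n = dynamic_ffc(sample, f_bar, basis, darks.mean(axis=0))
# n is ~1 everywhere: the per-frame flat field is recovered exactly here
```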

2.3 Deep-learning FFC (DL FFC)

The aforementioned solutions to the flat-field correction problem are local, e.g., they cannot correct for systematic drifts in the illumination. Solutions to circumvent this limitation are translation or shift-invariant approaches such as CNNs. CNNs are deep-learning approaches that have shown great potential in image segmentation, pattern recognition, denoising, and super-resolution. Given their potential, we propose a convolutional encoder-decoder neural network to address the FFC at XFELs. First, the encoder reduces the resolution of the images by repeated application of convolution and max pooling operations, extracting and separating key features of the images. Then, the decoder recovers the dimension of the images through transposed convolution and upsampling operations, removing the unwanted illumination artifacts and producing a normalized image.

For this work, we used a state-of-the-art encoder-decoder architecture based on U-Net [29]. Our U-Net or flat-field correction generator ($G$) is trained using a generative adversarial network (GAN) [30], i.e., we constrain it with an adversarial counterpart known as the discriminator ($D$) as in pix2pix [31]. GANs were first introduced in 2014 by Goodfellow et al. [32] and have since evolved into many variants. The basic idea of a GAN is to train the generator and discriminator networks simultaneously instead of training a single network. The two networks compete with each other: the generator receives feedback from the discriminator and adjusts its behavior accordingly, while the discriminator learns to distinguish between real data and the data produced by the generator. Unlike a conventional CNN trained with an $L_2$ norm, which learns to converge to the average of all possible solutions [33], a GAN can better model the data distribution [32,34], which in our case provides sharper and clearer flat-field corrected images. Specifically, we used the PatchGAN discriminator [34] for this work. Both networks (U-Net generator $G$ and PatchGAN discriminator $D$) were trained simultaneously.

The total loss function used for our training has three components or loss functions ($\mathcal {L}_{GAN}$, $\mathcal {L}_{L_2}$, and $\mathcal {L}_{FRC}$). $\mathcal {L}_{GAN}$ loss includes the adversarial component. $\mathcal {L}_{L_2}$ loss calculates the $L_2$ distance between the ground truth and the generated flat-field corrected images. Finally, the $\mathcal {L}_{FRC}$ or Fourier Ring Correlation (FRC) loss calculates the cross-correlation of two images over rings in frequency space [35,36], which helps to avoid defects in the frequency space and provide enhanced results [37]. With these three components, the total loss was expressed as:

$$\begin{aligned} \mathcal{L} =\mathrm{arg}~ \underset G {\operatorname{min}}\ \underset D {\operatorname{max}}~ \underbrace{\log(D(\mathbf{n})) + \log(1-D(G(\mathbf{s})))}_{\mathcal{L}_{GAN}} + ~ \lambda \underbrace{\left \| \mathbf{n} - G(\mathbf{s}) \right \|_2}_{\mathcal{L}_{L_2}} + \\ \mu \underbrace{\left \|1 - \frac{\sum_{r \in R} \mathcal{F}[\mathbf{n}] \left(r\right) \cdot \mathcal{F}[{G(\mathbf{s})}]\left(r\right)^{*}}{\sqrt{\sum_{r \in R}\left|\mathcal{F}[\mathbf{n}]\left(r\right)\right|^{2} \cdot \sum_{r \in R}\left|\mathcal{F}[{G(\mathbf{s})}]\left(r\right)\right|^{2}}}\right \|_2 }_{\mathcal{L}_{FRC}}, \end{aligned}$$
where $\mathbf {n}$ denotes the normalized image, $\mathbf {s}$ denotes the measured sample image, $R$ denotes a ring of constant spatial frequency in Fourier space, and $r$ runs over the frequencies within that ring. The weight parameters $\lambda$ and $\mu$ specify the relative weights of $\mathcal {L}_{L_2}$ and $\mathcal {L}_{FRC}$ with respect to $\mathcal {L}_{GAN}$, respectively.
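A NumPy sketch of the $\mathcal{L}_{FRC}$ term may clarify the ring construction. In training this would be written with differentiable tensor operations; the integer ring binning and the small stabilizing constant below are implementation choices of ours, not taken from the paper:

```python
import numpy as np

def frc_loss(n, g):
    """Fourier ring correlation loss between target n and generated image g.

    Computes the FRC per ring of constant spatial frequency and returns the
    L2 norm of (1 - FRC) over all rings, as in the last term of Eq. (5).
    """
    Fn = np.fft.fftshift(np.fft.fft2(n))
    Fg = np.fft.fftshift(np.fft.fft2(g))
    h, w = n.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)   # ring index per pixel
    # Ring-wise sums of the cross- and auto-spectra
    num = np.bincount(r.ravel(), weights=(Fn * Fg.conj()).real.ravel())
    d1 = np.bincount(r.ravel(), weights=(np.abs(Fn) ** 2).ravel())
    d2 = np.bincount(r.ravel(), weights=(np.abs(Fg) ** 2).ravel())
    frc = num / np.sqrt(d1 * d2 + 1e-12)               # FRC value per ring
    return np.linalg.norm(1.0 - frc)

# Identical images correlate perfectly on every ring, so the loss vanishes
rng = np.random.default_rng(0)
a = rng.random((16, 16))
b = rng.random((16, 16))
```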

As training proceeds, the generator learns to generate flat-field-noise-free images, while the discriminator, supervised by the adversarial loss, learns to distinguish whether the images are flat-field corrected or not. In this way, the generator will, in the end, learn to generate flat-field corrected images, which the discriminator cannot identify.

3. Simulation study based on the experimental data

This section compares and validates the three aforementioned flat-field correction methods using simulated data based on experiments performed at the European XFEL.

The data was created following the approach depicted in Fig. 1. First, we created flat-field images from a linear combination of a flat-field basis extracted from data collected in 2019 at the SPB/SFX instrument of the European XFEL [26]. The experimental frames were recorded using a Shimadzu HPV-X2 camera at approximately a 1 MHz rate with frames of $250\times 400$ pixels. To create the flat fields, the basis extracted from the experimental data was sorted from the maximum to the minimum percentage of the total variance explained by each element. We used the first 15 elements of the basis and generated the coefficients for each element. The generation of the coefficients was done by dividing the basis into three groups: i) the first group contained the first and the second elements of the basis, ii) the second group contained the third to the eighth elements, and iii) the rest of the elements were in group 3. The coefficients for each element of the basis were randomly generated, but the total weight of the coefficients in each group was fixed. Specifically, the total weight of the coefficients in the first, second, and third groups was fixed to 0.75, 0.20, and 0.05, respectively. The criterion for assigning the total weight of each group was based on the total-variance percentage of the basis elements from the experimental data. Second, we created reference objects with random geometrical shapes and positions that were multiplied by one of the randomly generated flat-field images.
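The grouped coefficient generation described above can be sketched as follows. The text does not specify how each fixed group total is split among its elements, so the uniform random split below is an assumption, as are the function and variable names:

```python
import numpy as np

def random_flat(f_bar, basis, rng):
    """Draw one simulated flat field from a 15-element basis.

    Per-element coefficients are random, but their totals per group are fixed
    to 0.75 / 0.20 / 0.05 (groups: elements 1-2, 3-8, 9-15), as in Section 3.
    """
    groups = [(0, 2, 0.75), (2, 8, 0.20), (8, 15, 0.05)]
    w = np.zeros(len(basis))
    for lo, hi, total in groups:
        c = rng.random(hi - lo)
        w[lo:hi] = total * c / c.sum()     # random split with fixed group total
    return f_bar + np.tensordot(w, basis, axes=1)

# Toy check with a one-hot basis: element k lights up pixel k, so the
# generated flat field directly exposes the drawn coefficients
rng = np.random.default_rng(1)
basis = np.eye(15).reshape(15, 15, 1)
flat = random_flat(np.zeros((15, 1)), basis, rng)
coeffs = flat.ravel()
```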

The simulated sample images, together with their reference object or oracle, were used to evaluate the performance of each of the flat-field correction methods, as shown in Fig. 3. A total of one thousand flat-field images were generated for conventional and dynamic flat-field correction methods. To train the DL FFC, a total of five thousand pairs of reference objects together with their images were generated. We used four Nvidia Tesla V100 GPUs for the training. We used the weight parameters $\lambda = 5000$ and $\mu = 1$. In the beginning, the training was mainly supervised by the $L_2$ loss $\mathcal {L}_{L_2}$, which provides fast convergence. As training went on, $\mathcal {L}_{L_2}$ and the adversarial loss $\mathcal {L}_{GAN}$ both became dominant. The FRC loss $\mathcal {L}_{FRC}$ was orders of magnitude smaller than $\mathcal {L}_{L_2}$ and worked as an auxiliary part. The batch size, i.e., the number of images passed to both networks per training step, was 180. The initial learning rates for the generator $G$ and the discriminator $D$ were set to 0.0002 and 0.0001, respectively. After every 100 epochs, i.e., 100 complete training cycles over the whole dataset, the learning rates were reduced by a factor of 0.1. We stopped the training after 700 epochs, which took around six hours to finish. One hundred extra images were generated to test the performance of all the methods. One of these generated reference objects together with its simulated sample image are depicted in Fig. 3(a) and 3(b), respectively. The flat-field corrected versions of this simulated sample image are shown in Fig. 3(c), 3(d), and 3(e) for conventional FFC, dynamic FFC, and DL FFC, respectively.


Fig. 3. Flat-field corrected results from a single reference image (a) and its simulated image (b). The flat-field corrected images for conventional, dynamic, and deep-learning approaches are shown in (c-e), respectively. The line profile over the red line in patches (a-e) is shown in (f). All the flat-field corrected line profiles are linearly transformed to match the reference-image range for visualization and comparison purposes, while the simulated-image and reference line profiles were not modified.


To evaluate the performance of each flat-field correction method, we calculated the variance between the flat-field corrected image or estimated normalized image $\hat {\mathbf {n}}_j$ and the simulated normalized image $\mathbf {n}_j$ for all the test images ($N=100$), as follows:

$$\sigma^{2} = \frac{1}{N}\sum_{j=1}^{N}\sum_{i=1}^{M}\left(\hat{\mathbf{n}}_{ij}-\mathbf{n}_{ij}\right)^{2}~,$$
where the indices $i$ and $j$ refer to each of the $M$ pixels of each image and the test image, respectively. The average variances and their standard deviation over the one hundred images for the conventional, the dynamic, and the DL flat-field correction methods are $1.1\times 10^{-3}\pm 1.3\times 10^{-3}$, $4\times 10^{-5}\pm 4\times 10^{-5}$ and $6\times 10^{-6}\pm 4\times 10^{-6}$, respectively. The results show that the DL FFC provides the best results. In contrast, conventional FFC cannot correct for the simulated illumination fluctuations, and the variance is approximately two orders of magnitude worse than for the two other methods.
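Eq. (6) translates directly into code; a short sketch with hypothetical test stacks of shape $(N, H, W)$:

```python
import numpy as np

def ffc_variance(n_hat, n):
    """Eq. (6): per-image sum of squared pixel errors, averaged over N images."""
    return ((n_hat - n) ** 2).sum(axis=(1, 2)).mean()

# Hypothetical example: a constant error of 0.01 on 4x4 images
reference = np.zeros((5, 4, 4))
estimate = reference + 0.01
sigma2 = ffc_variance(estimate, reference)   # 16 pixels x 0.01^2 = 1.6e-3
```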

This simulation demonstrates the potential to perform flat-field correction with DL approaches. Furthermore, the simulations do not contain any non-local transformations that may enhance the performance of DL over dynamic flat-field correction approaches. Another aspect to consider for online and real-time flat-field correction applications is the computation time. For instance, flat-field correcting the simulated Shimadzu HPV-X2 frames took on the order of 0.1 ms with our DL approach and 1 s with the dynamic method.

4. FFC of XFEL experimental data

In this section, we describe the application of the presented flat-field correction methods to experimental data collected at the European XFEL in 2021 to study the Venturi effect at megahertz rate with micrometer resolution using in-line holography. The experiments were performed at the SPB/SFX instrument at 9.3 keV, recording continuously at 1.125 MHz, which was the repetition rate used for these experiments within one train of the European XFEL [26]. The recorded movies were obtained using a Shimadzu HPV-X2 camera with an effective pixel size of 3.2 $\mu$m. For such an effective pixel size, the field of view was $1.28\times 0.8$ mm. The distance between the sample and the detector, or defocusing distance, was 0.29 m. Figure 4(a) depicts one of the experimental frames used to study the Venturi effect. In the top-right part of this frame, one can observe the turbulent dynamics induced inside the tube.


Fig. 4. Flat-field corrected results from experimental data. The flat-field corrected images from an acquired sample frame (a) for the conventional, dynamic, and DL methods are shown in patches, (b-d), respectively.


We performed FFC for all the methods described in Section 2. In order to perform conventional and dynamic FFC, we collected two sets of flat-field images: one before and the other after the sample acquisitions. We observed that some of the features present in the flat-field images were shifted between the two sets. In fact, we determined by performing dynamic FFC that the flat-field images acquired after the sample acquisition reproduced the illumination artifacts present in the sample set better than the first set of flat-field images or a combination of both. Thus, we only used the latter dataset, given the locality of the dynamic and conventional flat-field correction methods. The flat-field images acquired after the sample sets contained a total of 11904 frames. Following the process described in Section 2, we obtained a basis with 27 elements, which explain 87% of the total variance.

The DL flat-field correction method relies on large training datasets. We performed our training using $20315$ pairs of sample ($\mathbf {s}_j$) and dynamic flat-field corrected ($\mathbf {n}_j$) images. The input images were padded to $256\times 512$ pixels to fit into our U-Net implementation. We used the same initial learning rates and loss weight parameters as in the training for the simulation study (Section 3). The learning rates were reduced by a factor of 0.68 every 10 epochs, and we stopped the training after 100 epochs. The training process took about 14 hours to finish.

The results for a specific sample image are depicted in Fig. 4. The input image ($\mathbf {s}$) is shown in Fig. 4(a). This image contains artifacts, such as the stripe artifacts in the marked red-square area. Thus, flat-field correction methods are required to minimize these stripes and artifacts, which may lead to misinterpretations of the studied dynamics and hinder the applicability of state-of-the-art phase-reconstruction methods. The flat-field corrected results obtained for this image with the conventional, dynamic, and DL methods are displayed in Fig. 4(b)-(d), respectively. One can observe that the conventional flat-field correction method cannot mitigate the stripe artifacts present in the red area. However, the dynamic and DL flat-field correction methods are successful in suppressing those artifacts.

In order to quantify the performance of the flat-field correction methods, we estimated the average modulus of the gradient over an area without any sample feature. Specifically, we selected the red-square area shown in Fig. 4, which did not contain any feature over 1024 sample frames. The gradient modulus in the aforementioned scenario is expected to be zero up to the noise variations. Thus, we used a 2D Gaussian filter with $\sigma =3$ pixels to mitigate the noise while preserving the slowly varying features coming from the illumination before calculating the gradient modulus. The gradient modulus over the selected region for the 1024 images after applying FFC using the conventional, the dynamic, and the DL method is $0.055\pm 0.004$, $0.034\pm 0.003$, and $0.035\pm 0.003$, respectively. The dynamic and DL flat-field correction methods obtained the best results, as expected from the results shown in Fig. 4. Given that we trained the DL flat-field correction method with dynamic flat-field corrected data, it is not expected that DL FFC can outperform the dynamic flat-field correction method. Nonetheless, DL FFC can obtain flat-field corrected images four orders of magnitude faster than the dynamic methods.
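A sketch of this quality metric, assuming SciPy's Gaussian filter for the $\sigma=3$ smoothing and `np.gradient` as the gradient estimator (the paper does not specify the estimator, so that choice is ours, as are the names):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mean_gradient_modulus(patch, sigma=3.0):
    """Average gradient modulus of a (supposedly feature-free) region.

    The patch is first smoothed with a 2D Gaussian to suppress noise while
    keeping slowly varying illumination residuals, as described in Section 4.
    """
    smoothed = gaussian_filter(patch, sigma)
    gy, gx = np.gradient(smoothed)
    return np.hypot(gy, gx).mean()

# A perfectly flat (well-corrected) patch scores ~0; a residual illumination
# ramp scores higher
flat_patch = np.ones((32, 32))
ramp_patch = np.tile(np.arange(32, dtype=float), (32, 1)) * 0.1
```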

5. Conclusion

We have studied three different flat-field correction approaches and their application to single SASE pulses produced by the European XFEL. In such a scenario, we demonstrate that conventional flat-field correction methods cannot provide a satisfactory solution as the stochastic fluctuations between SASE pulses are not well-described by the average values of the flat-field images. This limitation is overcome by dynamic and DL flat-field correction methods. Both methods can disentangle the flat-field or illumination effects from the sample effects enabling image processing and phase-reconstruction approaches for coherent or phase-contrast techniques such as in-line holography.

To validate the capabilities of the flat-field correction methods, we have performed simulations of single-pulse experiments based on megahertz imaging datasets obtained from the European XFEL. Our results show that dynamic and DL flat-field correction approaches can obtain good flat-field corrected images with small variations with respect to the simulated objects. DL FFC performs slightly better than the dynamic approach as it is a supervised approach trained on the perfect data. Nonetheless, the DL approach can obtain flat-field corrected images four orders of magnitude faster than the dynamic approach.

Furthermore, we have applied these flat-field correction methods to experimental data acquired from the SPB/SFX instrument of the European XFEL. From the results presented here, it can be observed and confirmed that dynamic and DL methods provide good flat-field corrected images, while conventional methods cannot correct most of the illumination features present in the collected images. We have also quantified the performance of the algorithms using the average value of the modulus of the gradient over a flat area, i.e., an area without any sample feature. The results confirm that dynamic and DL approaches perform at the same level. This is a consequence of using dynamic flat-field corrected images to train our DL approach. Thus, the performance of the DL approaches will be at the level of the dynamic approaches.

We conclude that dynamic and DL approaches can address the challenge of obtaining flat-field corrected images from single SASE pulses of an XFEL or any other source with stochastic illumination. Thus, these methods enable the application of coherent X-ray imaging methods, such as in-line holography, for these sources. Given the non-local or shift-invariant properties of DL approaches such as CNNs, we propose the use of such approaches to avoid the continuous acquisition of flat-field images as required by dynamic flat-field correction approaches whenever systematic drifts or changes in the illumination are observed. Moreover, the fast execution of DL approaches, four orders of magnitude faster than dynamic approaches, will enable the development of a real-time processing pipeline for phase reconstruction and image processing at XFELs for techniques such as in-line holography. To achieve this goal, we have envisioned the deployment of dynamic and DL flat-field correction approaches at the European XFEL. The former will provide the training dataset for the DL approach. The latter will be used to provide real-time flat-field corrected images and maximize the available usage of the beamtime by minimizing the requirement of recording flat-field images.

Funding

R&D EuXFEL project (MHz microscopy at EuXFEL); Vetenskapsrådet (2017-06719); Bundesministerium für Bildung und Forschung (05K18XXA).

Acknowledgements

We are grateful to V. V. Nieuwenhove and J. Sijbers for the insightful discussion and for providing their code to perform dynamic FFC. We are grateful to Z. Matej for his support and access to the GPU-computing cluster at MAX IV. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of a Quadro P4000 GPU used for this research. We acknowledge European XFEL in Schenefeld, Germany, for provision of X-ray free-electron laser beamtime at Scientific Instrument SPB/SFX and thank the staff for their assistance, especially R. Bean, T. Dietze, L. Morillo Lopez, B. Manning, N. Reimers, and C. M. Signe Takem. We would like to acknowledge H. Soyama and C. Dieter for their support in the sample preparation and insightful discussions.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Code availability

The code underlying the results presented in this paper is not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. P. Emma, R. Akre, J. Arthur, R. Bionta, C. Bostedt, J. Bozek, A. Brachmann, P. Bucksbaum, R. Coffee, F. J. Decker, Y. Ding, D. Dowell, S. Edstrom, A. Fisher, J. Frisch, S. Gilevich, J. Hastings, G. Hays, P. Hering, Z. Huang, R. Iverson, H. Loos, M. Messerschmidt, A. Miahnahri, S. Moeller, H. D. Nuhn, G. Pile, D. Ratner, J. Rzepiela, D. Schultz, T. Smith, P. Stefan, H. Tompkins, J. Turner, J. Welch, W. White, J. Wu, G. Yocky, and J. Galayda, “First lasing and operation of an ångstrom-wavelength free-electron laser,” Nat. Photonics 4(9), 641–647 (2010). [CrossRef]  

2. B. W. J. McNeil and N. R. Thompson, “X-ray free-electron lasers,” Nat. Photonics 4(12), 814–821 (2010). [CrossRef]  

3. C. Pellegrini, A. Marinelli, and S. Reiche, “The physics of x-ray free-electron lasers,” Rev. Mod. Phys. 88(1), 015006 (2016). [CrossRef]  

4. A. M. Kondratenko and E. L. Saldin, “Generation of coherent radiation by a relativistic-electron beam in an undulator,” Sov. Phys. Doklady 24, 986 (1979).

5. S. V. Milton, E. Gluskin, N. D. Arnold, C. Benson, W. Berg, S. G. Biedron, M. Borland, Y.-C. Chae, R. J. Dejus, P. K. Den Hartog, B. Deriy, M. Erdmann, Y. I. Eidelman, M. W. Hahne, Z. Huang, K.-J. Kim, J. W. Lewellen, Y. Li, A. H. Lumpkin, O. Makarov, E. R. Moog, A. Nassiri, V. Sajaev, R. Soliday, B. J. Tieman, E. M. Trakhtenberg, G. Travish, I. B. Vasserman, N. A. Vinokurov, X. J. Wang, G. Wiemerslage, and B. X. Yang, “Exponential gain and saturation of a self-amplified spontaneous emission free-electron laser,” Science 292(5524), 2037–2041 (2001). [CrossRef]  

6. Z. Huang and I. Lindau, “SACLA hard-X-ray compact FEL,” Nat. Photonics 6(8), 505–506 (2012). [CrossRef]  

7. H.-S. Kang, C.-K. Min, H. Heo, C. Kim, H. Yang, G. Kim, I. Nam, S. Y. Baek, H.-J. Choi, G. Mun, B. R. Park, Y. J. Suh, D. C. Shin, J. Hu, J. Hong, S. Jung, S.-H. Kim, K. Kim, D. Na, S. S. Park, Y. J. Park, J.-H. Han, Y. G. Jung, S. H. Jeong, H. G. Lee, S. Lee, S. Lee, W.-W. Lee, B. Oh, H. S. Suh, Y. W. Parc, S.-J. Park, M. H. Kim, N.-S. Jung, Y.-C. Kim, M.-S. Lee, B.-H. Lee, C.-W. Sung, I.-S. Mok, J.-M. Yang, C.-S. Lee, H. Shin, J. H. Kim, Y. Kim, J. H. Lee, S.-Y. Park, J. Kim, J. Park, I. Eom, S. Rah, S. Kim, K. H. Nam, J. Park, J. Park, S. Kim, S. Kwon, S. H. Park, K. S. Kim, H. Hyun, S. N. Kim, S. Kim, S.-m. Hwang, M. J. Kim, C.-y. Lim, C.-J. Yu, B.-S. Kim, T.-H. Kang, K.-W. Kim, S.-H. Kim, H.-S. Lee, H.-S. Lee, K.-H. Park, T.-Y. Koo, D.-E. Kim, and I. S. Ko, “Hard X-ray free-electron laser with femtosecond-scale timing jitter,” Nat. Photonics 11(11), 708–713 (2017). [CrossRef]  

8. E. Prat, R. Abela, M. Aiba, A. Alarcon, J. Alex, Y. Arbelo, C. Arrell, V. Arsov, C. Bacellar, C. Beard, P. Beaud, S. Bettoni, R. Biffiger, M. Bopp, H.-H. Braun, M. Calvi, A. Cassar, T. Celcer, M. Chergui, P. Chevtsov, C. Cirelli, A. Citterio, P. Craievich, M. C. Divall, A. Dax, M. Dehler, Y. Deng, A. Dietrich, P. Dijkstal, R. Dinapoli, S. Dordevic, S. Ebner, D. Engeler, C. Erny, V. Esposito, E. Ferrari, U. Flechsig, R. Follath, F. Frei, R. Ganter, T. Garvey, Z. Geng, A. Gobbo, C. Gough, A. Hauff, C. P. Hauri, N. Hiller, S. Hunziker, M. Huppert, G. Ingold, R. Ischebeck, M. Janousch, P. J. M. Johnson, S. L. Johnson, P. Juranić, M. Jurcevic, M. Kaiser, R. Kalt, B. Keil, D. Kiselev, C. Kittel, G. Knopp, W. Koprek, M. Laznovsky, H. T. Lemke, D. L. Sancho, F. Löhl, A. Malyzhenkov, G. F. Mancini, R. Mankowsky, F. Marcellini, G. Marinkovic, I. Martiel, F. Märki, C. J. Milne, A. Mozzanica, K. Nass, G. L. Orlandi, C. O. Loch, M. Paraliev, B. Patterson, L. Patthey, B. Pedrini, M. Pedrozzi, C. Pradervand, P. Radi, J.-Y. Raguin, S. Redford, J. Rehanek, S. Reiche, L. Rivkin, A. Romann, L. Sala, M. Sander, T. Schietinger, T. Schilcher, V. Schlott, T. Schmidt, M. Seidel, M. Stadler, L. Stingelin, C. Svetina, D. M. Treyer, A. Trisorio, C. Vicario, D. Voulot, A. Wrulich, S. Zerdane, and E. Zimoch, “A compact and cost-effective hard X-ray free-electron laser driven by a high-brightness and low-energy electron beam,” Nat. Photonics 14(12), 748–754 (2020). [CrossRef]  

9. W. Decking, S. Abeghyan, P. Abramian, A. Abramsky, A. Aguirre, C. Albrecht, P. Alou, M. Altarelli, P. Altmann, K. Amyan, V. Anashin, E. Apostolov, K. Appel, D. Auguste, V. Ayvazyan, S. Baark, F. Babies, N. Baboi, P. Bak, V. Balandin, R. Baldinger, B. Baranasic, S. Barbanotti, O. Belikov, V. Belokurov, L. Belova, V. Belyakov, S. Berry, M. Bertucci, B. Beutner, A. Block, M. Blöcher, T. Böckmann, C. Bohm, M. Böhnert, V. Bondar, E. Bondarchuk, M. Bonezzi, P. Borowiec, C. Bösch, U. Bösenberg, A. Bosotti, R. Böspflug, M. Bousonville, E. Boyd, Y. Bozhko, A. Brand, J. Branlard, S. Briechle, F. Brinker, S. Brinker, R. Brinkmann, S. Brockhauser, O. Brovko, H. Brück, A. Brüdgam, L. Butkowski, T. Büttner, J. Calero, E. Castro-Carballo, G. Cattalanotto, J. Charrier, J. Chen, A. Cherepenko, V. Cheskidov, M. Chiodini, A. Chong, S. Choroba, M. Chorowski, D. Churanov, W. Cichalewski, M. Clausen, W. Clement, C. Cloué, J. A. Cobos, N. Coppola, S. Cunis, K. Czuba, M. Czwalinna, B. D’Almagne, J. Dammann, H. Danared, A. de Zubiaurre Wagner, A. Delfs, T. Delfs, F. Dietrich, T. Dietrich, M. Dohlus, M. Dommach, A. Donat, X. Dong, N. Doynikov, M. Dressel, M. Duda, P. Duda, H. Eckoldt, W. Ehsan, J. Eidam, F. Eints, C. Engling, U. Englisch, A. Ermakov, K. Escherich, J. Eschke, E. Saldin, M. Faesing, A. Fallou, M. Felber, M. Fenner, B. Fernandes, J. M. Fernández, S. Feuker, K. Filippakopoulos, K. Floettmann, V. Fogel, M. Fontaine, A. Francés, I. F. Martin, W. Freund, T. Freyermuth, M. Friedland, L. Fröhlich, M. Fusetti, J. Fydrych, A. Gallas, O. García, L. Garcia-Tabares, G. Geloni, N. Gerasimova, C. Gerth, P. Geßler, V. Gharibyan, M. Gloor, J. Głowinkowski, A. Goessel, Z. Gołeębiewski, N. Golubeva, W. Grabowski, W. Graeff, A. Grebentsov, M. Grecki, T. Grevsmuehl, M. Gross, U. Grosse-Wortmann, J. Grünert, S. Grunewald, P. Grzegory, G. Feng, H. Guler, G. Gusev, J. L. Gutierrez, L. Hagge, M. Hamberg, R. Hanneken, E. Harms, I. Hartl, A. Hauberg, S. Hauf, J. Hauschildt, J. Hauser, J. 
Havlicek, A. Hedqvist, N. Heidbrook, F. Hellberg, D. Henning, O. Hensler, T. Hermann, A. Hidvégi, M. Hierholzer, H. Hintz, F. Hoffmann, M. Hoffmann, M. Hoffmann, Y. Holler, M. Hüning, A. Ignatenko, M. Ilchen, A. Iluk, J. Iversen, J. Iversen, M. Izquierdo, L. Jachmann, N. Jardon, U. Jastrow, K. Jensch, J. Jensen, M. Jeżabek, M. Jidda, H. Jin, N. Johansson, R. Jonas, W. Kaabi, D. Kaefer, R. Kammering, H. Kapitza, S. Karabekyan, S. Karstensen, K. Kasprzak, V. Katalev, D. Keese, B. Keil, M. Kholopov, M. Killenberger, B. Kitaev, Y. Klimchenko, R. Klos, L. Knebel, A. Koch, M. Koepke, S. Köhler, W. Köhler, N. Kohlstrunk, Z. Konopkova, A. Konstantinov, W. Kook, W. Koprek, M. Körfer, O. Korth, A. Kosarev, K. Kosiński, D. Kostin, Y. Kot, A. Kotarba, T. Kozak, V. Kozak, R. Kramert, M. Krasilnikov, A. Krasnov, B. Krause, L. Kravchuk, O. Krebs, R. Kretschmer, J. Kreutzkamp, O. Kröplin, K. Krzysik, G. Kube, H. Kuehn, N. Kujala, V. Kulikov, V. Kuzminych, D. La Civita, M. Lacroix, T. Lamb, A. Lancetov, M. Larsson, D. Le Pinvidic, S. Lederer, T. Lensch, D. Lenz, A. Leuschner, F. Levenhagen, Y. Li, J. Liebing, L. Lilje, T. Limberg, D. Lipka, B. List, J. Liu, S. Liu, B. Lorbeer, J. Lorkiewicz, H. H. Lu, F. Ludwig, K. Machau, W. Maciocha, C. Madec, C. Magueur, C. Maiano, I. Maksimova, K. Malcher, T. Maltezopoulos, E. Mamoshkina, B. Manschwetus, F. Marcellini, G. Marinkovic, T. Martinez, H. Martirosyan, W. Maschmann, M. Maslov, A. Matheisen, U. Mavric, J. Meißner, K. Meissner, M. Messerschmidt, N. Meyners, G. Michalski, P. Michelato, N. Mildner, M. Moe, F. Moglia, C. Mohr, S. Mohr, W. Möller, M. Mommerz, L. Monaco, C. Montiel, M. Moretti, I. Morozov, P. Morozov, and D. Mross, “A MHz-repetition-rate hard X-ray free-electron laser driven by a superconducting linear accelerator,” Nat. Photonics 14(6), 391–397 (2020). [CrossRef]  

10. R. Bonifacio, L. De Salvo, P. Pierini, N. Piovella, and C. Pellegrini, “Spectrum, temporal structure, and fluctuations in a high-gain free-electron laser starting from noise,” Phys. Rev. Lett. 73(1), 70–73 (1994). [CrossRef]  

11. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]  

12. J.-P. Guigay, “Fourier-transform analysis of Fresnel diffraction patterns and in-line holograms,” Optik 49, 121–125 (1977).

13. M. Reed Teague, “Deterministic phase retrieval: a Green’s function solution,” J. Opt. Soc. Am. 73(11), 1434 (1983). [CrossRef]  

14. J. A. Seibert, J. M. Boone, and K. K. Lindfors, “Flat-field correction technique for digital detectors,” in Medical Imaging 1998: Physics of Medical Imaging, vol. 3336, J. T. D. III and J. M. Boone, eds., International Society for Optics and Photonics (SPIE, 1998), pp. 348–354.

15. B. Likar, J. B. Maintz, M. A. Viergever, and F. Pernuš, “Retrospective shading correction based on entropy minimization,” J. Microsc. 197(3), 285–295 (2000). [CrossRef]  

16. K. Smith, Y. Li, F. Piccinini, G. Csucs, C. Balazs, A. Bevilacqua, and P. Horvath, “CIDRE: an illumination-correction method for optical microscopy,” Nat. Methods 12(5), 404–406 (2015). [CrossRef]  

17. P. Kask, K. Palo, C. Hinnah, and T. Pommerencke, “Flat field correction for high-throughput imaging of fluorescent samples,” J. Microsc. 263(3), 328–340 (2016). [CrossRef]  

18. V. V. Nieuwenhove, J. D. Beenhouwer, F. D. Carlo, L. Mancini, F. Marone, and J. Sijbers, “Dynamic intensity normalization using eigen flat fields in x-ray imaging,” Opt. Express 23(21), 27975–27989 (2015). [CrossRef]  

19. J. Hagemann, M. Vassholz, H. Hoeppe, M. Osterhoff, J. M. Rosselló, R. Mettin, F. Seiboth, A. Schropp, J. Möller, J. Hallmann, C. Kim, M. Scholz, U. Boesenberg, R. Schaffer, A. Zozulya, W. Lu, R. Shayduk, A. Madsen, C. G. Schroer, and T. Salditt, “Single-pulse phase-contrast imaging at free-electron lasers in the hard X-ray regime,” J. Synchrotron Radiat. 28(1), 52–63 (2021). [CrossRef]  

20. Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural Comput. 1(4), 541–551 (1989). [CrossRef]  

21. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

22. L. Deng and D. Yu, “Deep learning: Methods and applications,” Found. Trends Signal Process. 7(3-4), 197–387 (2013). [CrossRef]  

23. W. Zhang, K. Itoh, J. Tanida, and Y. Ichioka, “Parallel distributed processing model with local space-invariant interconnections and its optical architecture,” Appl. Opt. 29(32), 4790–4797 (1990). [CrossRef]  

24. A. P. Mancuso, A. Aquila, L. Batchelor, R. J. Bean, J. Bielecki, G. Borchers, K. Doerner, K. Giewekemeyer, R. Graceffa, O. D. Kelsey, Y. Kim, H. J. Kirkwood, A. Legrand, R. Letrun, B. Manning, L. Lopez Morillo, M. Messerschmidt, G. Mills, S. Raabe, N. Reimers, A. Round, T. Sato, J. Schulz, C. Signe Takem, M. Sikorski, S. Stern, P. Thute, P. Vagovič, B. Weinhausen, and T. Tschentscher, “The Single Particles, Clusters and Biomolecules and Serial Femtosecond Crystallography instrument of the European XFEL: initial installation,” J. Synchrotron Radiat. 26(3), 660–676 (2019). [CrossRef]  

25. M. Altarelli, R. Brinkmann, M. Chergui, W. Decking, B. Dobson, S. Düsterer, G. Grübel, W. Graeff, H. Graafsma, J. Hajdu, J. Marangos, J. Pflüger, H. Redlin, D. Riley, I. Robinson, J. Rossbach, A. Schwarz, K. Tiedtke, T. Tschentscher, I. Vartaniants, H. Wabnitz, H. Weise, R. Wichmann, K. Witte, A. Wolf, M. Wulff, and M. Yurkov, “The European X-Ray Free-Electron Laser,” (2007).

26. P. Vagovič, T. Sato, L. Mikeš, G. Mills, R. Graceffa, F. Mattsson, P. Villanueva-Perez, A. Ershov, T. Faragó, J. Uličný, H. Kirkwood, R. Letrun, R. Mokso, M.-C. Zdora, M. P. Olbinado, A. Rack, T. Baumbach, J. Schulz, A. Meents, H. N. Chapman, and A. P. Mancuso, “Megahertz x-ray microscopy at x-ray free-electron laser and synchrotron sources,” Optica 6(9), 1106–1109 (2019). [CrossRef]  

27. L. Tlustos, M. Campbell, E. Heijne, and X. Llopart, “Signal variations in high granularity si pixel detectors,” in 2003 IEEE Nuclear Science Symposium. Conference Record (IEEE Cat. No.03CH37515), vol. 3 (2003), pp. 1588–1593.

28. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising with block-matching and 3D filtering,” in Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning, vol. 6064, N. M. Nasrabadi, S. A. Rizvi, E. R. Dougherty, J. T. Astola, and K. O. Egiazarian, eds., International Society for Optics and Photonics (SPIE, 2006), pp. 354–365.

29. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds. (Springer International Publishing, Cham, 2015), pp. 234–241.

30. M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784 (2014).

31. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2017), pp. 1125–1134.

32. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” Adv. Neural Inf. Process. Syst. 27 (2014).

33. A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther, “Autoencoding beyond pixels using a learned similarity metric,” in International conference on machine learning, (PMLR, 2016), pp. 1558–1566.

34. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2017), pp. 4681–4690.

35. W. Saxton and W. Baumeister, “The correlation averaging of a regularly arranged bacterial cell envelope protein,” J. Microsc. 127(2), 127–138 (1982). [CrossRef]  

36. M. Van Heel and M. Schatz, “Fourier shell correlation threshold criteria,” J. Struct. Biol. 151(3), 250–262 (2005). [CrossRef]  

37. Y. Zhang, M. A. Noack, P. Vagovic, K. Fezzaa, F. Garcia-Moreno, T. Ritschel, and P. Villanueva-Perez, “Phasegan: a deep-learning phase-retrieval approach for unpaired datasets,” Opt. Express 29(13), 19593–19604 (2021). [CrossRef]  

Figures (4)

Fig. 1. Flat-field and dark-current contributions to a recorded image of a reference object, as assumed by conventional flat-field correction approaches.

Fig. 2. The process used to estimate the flat field of a sample image in the dynamic flat-field correction approach.

Fig. 3. Flat-field corrected results from a single reference image (a) and its simulated image (b). The flat-field corrected images for the conventional, dynamic, and deep-learning approaches are shown in (c-e), respectively. The line profile along the red line in patches (a-e) is shown in (f). All the flat-field corrected line profiles are linearly transformed to match the reference-image range for visualization and comparison purposes, while the simulated-image and reference line profiles were not modified.

Fig. 4. Flat-field corrected results from experimental data. The flat-field corrected images from an acquired sample frame (a) for the conventional, dynamic, and DL methods are shown in patches (b-d), respectively.

Equations (8)

$$n_j = \frac{s_j - d}{f - d}\,,$$

Given the mean flat field ($\bar{f}$) and the mean dark field ($\bar{d}$):

$$n_j = \frac{s_j - \bar{d}}{\bar{f} - \bar{d}}\,.$$

$$n_j = \frac{s_j - \bar{d}}{f_j - \bar{d}}\,.$$

$$\hat{f}_j \approx \bar{f} + \sum_{k=1}^{K} \hat{w}_{jk} u_k\,,$$

$$\mathcal{L} = \underbrace{\arg\min_G \max_D\, \log\!\left(D(n)\right) + \log\!\left(1 - D(G(s))\right)}_{\mathcal{L}_{\mathrm{GAN}}} + \underbrace{\lambda \left\| n - G(s) \right\|_2}_{\mathcal{L}_{L2}} + \underbrace{\mu \left| 1 - \frac{\sum_{r \in R} F[n](r)\, F[G(s)]^{*}(r)}{\sqrt{\sum_{r \in R} \left| F[n](r) \right|^2 \sum_{r \in R} \left| F[G(s)](r) \right|^2}} \right|^2}_{\mathcal{L}_{\mathrm{FRC}}}\,,$$

$$\sigma^2 = \frac{1}{N} \sum_{j=1}^{N} \sum_{i=1}^{M} \left( \hat{n}_{ij} - n_{ij} \right)^2\,,$$
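The dynamic flat-field equations above can be sketched in NumPy as follows (a minimal illustration of the eigen-flat-field idea of Van Nieuwenhove et al. [18]; all arrays are assumed dark-subtracted, and the per-frame weights are estimated here by ordinary least squares against the raw frame, a simplification of the total-variation minimization used in the published method):

```python
import numpy as np

def eigen_flat_fields(flats: np.ndarray, K: int):
    """PCA of a stack of flat fields with shape (n_frames, h, w).

    Returns the mean flat field f_bar and the first K eigen flat
    fields u_k (principal components of the mean-subtracted stack).
    """
    n, h, w = flats.shape
    X = flats.reshape(n, -1) - flats.reshape(n, -1).mean(axis=0)
    # Rows of Vt are the principal components (eigen flat fields).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return flats.mean(axis=0), Vt[:K].reshape(K, h, w)

def dynamic_correct(sample: np.ndarray, f_bar: np.ndarray, u: np.ndarray):
    """Estimate f_j ≈ f_bar + Σ_k w_jk u_k for one frame and correct it."""
    A = u.reshape(u.shape[0], -1).T          # (pixels, K)
    b = (sample - f_bar).ravel()
    # Least-squares weights (simplified stand-in for TV minimization).
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    f_j = f_bar + np.tensordot(w, u, axes=1)
    return sample / f_j                      # n_j, dark already subtracted
```

With flats that vary along a single illumination mode, correcting a sample-free frame recovers a uniform image, which is the behavior the dynamic approach exploits shot to shot.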