
Real-time phase-retrieval and wavefront sensing enabled by an artificial neural network


Abstract

In this manuscript we demonstrate a method to reconstruct the wavefront of focused beams in real time from a diffraction pattern measured behind a diffracting mask. The phase problem is solved by means of a neural network, which is trained with simulated data and verified with experimental data. The neural network allows live reconstructions within a few milliseconds, where iterative phase retrieval previously required several seconds, thus allowing the adjustment of complex systems and correction by adaptive optics in real time. In addition, the neural network outperforms iterative phase retrieval on diffraction patterns with high noise.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Artificial neural networks have become an effective solution for many problems. For example, neural networks are used for pattern recognition [1] (e.g. face identification) or medical diagnosis [2,3]. Furthermore, artificial neural networks have become an effective tool in optics, particularly for phase retrieval [4-7]. This is mainly due to increasingly affordable graphics cards providing the computational power required to train neural networks, and to recent advances in neural network architecture design and training. Coherent diffractive imaging (CDI) is a lensless imaging method used to image an object by measuring the diffraction pattern it produces when illuminated with a coherent radiation source. Up to now, time-consuming iterative phase retrieval algorithms have been employed for the necessary phase retrieval on the measured data. The ability of neural networks to significantly outperform iterative CDI in terms of retrieval speed has already been demonstrated [8]. In this contribution we apply a neural network to the field of CDI.

Recently, neural networks have been applied to CDI [6] and ptychography [7] as an alternative to the iterative method for CDI [9], allowing significantly faster reconstruction of the phase and intensity of objects from measured diffraction patterns.

In this work, we present the first application of a neural network to the phase retrieval of measured experimental data from a CDI-based wavefront sensor consisting of 10 x 10 circular holes cut in a metal membrane [10]. Our demonstration paves the way for real-time lensless wavefront sensing, particularly at short XUV and X-ray wavelengths, where lensless imaging techniques provide nanoscale resolution beyond the capabilities of imaging optics. Access to real-time phase retrieval may help to align complex optical systems with real-time feedback. Imaging an object with CDI requires only a single image capture, in contrast to ptychography, where many images are captured. The application of a neural network to CDI therefore enables real-time wavefront analysis, combining the fast retrieval time of the neural network with the single image capture required for CDI.

2. Basics of lensless imaging and phase retrieval

Lensless coherent diffractive imaging is used to characterize objects and coherent beams based on their diffraction pattern (Fig. 1(c)). The advantage of this method is that no lenses are required, and therefore the quality of imaging does not depend on the quality of optics. This is particularly important in the extreme ultraviolet and X-ray spectral region, where high-quality optics are difficult to fabricate and lensless microscopes have readily demonstrated few-nm resolution beyond the capabilities of today's optics [11,12]. Since only an intensity image is recorded, a phase retrieval method needs to be employed in the image reconstruction. In conventional coherent diffractive imaging (CDI) the constraint is an area around the object that is known to have zero transmission (support constraint) [9]. A known transmission function of the object enables the characterization of the input beam in amplitude and phase, which we term wavefront sensing. Recently, wavefront sensing with high accuracy has been demonstrated at XUV wavelengths [10]. To retrieve the object's exit wave with the iterative method, a constraint must be applied to the object for the algorithm to converge. For our wavefront sensing application the object is a transmission mask consisting of a thin, absorbing metal film with holes on a silicon nitride membrane. This mask is placed in the path of the laser beam; the beam passes through the holes and produces a diffraction pattern, which is imaged in the far field by a CCD camera.

Fig. 1. Iterative CDI Phase Retrieval. (a) Object domain constraint (wavefront sensor amplitude mask) (b) Retrieved object (c) Frequency domain constraint (diffraction pattern)

The transmission function is 1 in the holes and 0 outside; thus the object exit wave is isolated and the structure can be used directly as a support constraint for iterative phase retrieval. By using phase retrieval to find the object that produced the measured diffraction pattern, we characterize the input beam. The iterative phase retrieval algorithm (Fig. 1) for CDI starts from an initial guess for the object and applies a support constraint in the spatial (object) domain as well as a Fourier constraint in the frequency domain. The support constraint in the object domain forces the retrieved object to fit within the support area, and the Fourier constraint in the frequency domain forces the diffraction pattern of the retrieved object to match the measured diffraction pattern. These constraints are applied iteratively, and each iteration involves two Fourier transforms between the object and frequency domains. To obtain the wavefront incident on the sensor with the iterative CDI method, an interpolation must additionally be performed, because only the masked object (Fig. 3(d)) is retrieved. In contrast to an iterative algorithm, the neural network retrieves the object from a diffraction pattern using a learned spatial support constraint, without any iteration. It performs a retrieval in a single pass through the network, which is highly optimized on a GPU and is therefore able to retrieve the phase within milliseconds.
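For illustration, a minimal error-reduction style iteration of the scheme described above can be sketched as follows. This is a generic textbook variant of iterative phase retrieval, not the exact algorithm or parameters used in this work.

import numpy as np

def iterative_phase_retrieval(measured_intensity, support, n_iter=200):
    """Minimal error-reduction sketch: alternate between the support
    constraint in the object domain and the measured Fourier modulus."""
    amplitude = np.sqrt(measured_intensity)
    rng = np.random.default_rng()
    # Initial guess: measured modulus with a random phase
    field = amplitude * np.exp(1j * 2 * np.pi * rng.random(amplitude.shape))
    obj = np.fft.ifft2(field)
    for _ in range(n_iter):
        obj = obj * support                                # object-domain (support) constraint
        field = np.fft.fft2(obj)                           # to the frequency domain
        field = amplitude * np.exp(1j * np.angle(field))   # Fourier (modulus) constraint
        obj = np.fft.ifft2(field)                          # back to the object domain
    return obj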

3. Convolutional neural network

Our neural-network-retrieved object differs from an object retrieved from a diffraction pattern with the iterative CDI algorithm [9]: rather than retrieving the object which is the Fourier transform of the diffraction pattern, we retrieve the wavefront before it propagates through the mask (Fig. 3(c)), our wavefront sensor. The neural network performs an inference to retrieve the wavefront which is incident on the wavefront sensor (constraint). The neural network structure (Fig. 2) consists of convolutional and deconvolutional layers. The input to the neural network is the diffraction pattern. This input is down-sampled by several zero-padded convolutional layers with stride. The stride is the distance between points where the convolutional kernel is applied to the previous layer. A stride value greater than 1 is used to decrease the matrix size along the image (x,y) dimensions after passing through each convolutional layer. At the encoded layer (smallest x,y dimensions), two separate branches of deconvolutional layers up-sample the encoded layer to a real and an imaginary output. Therefore, the diffraction pattern is input to the network, and the real and imaginary parts of the object which created the diffraction pattern are output. We use the real and imaginary parts of the object rather than the amplitude and phase to avoid problems with phase wrapping and sharp discontinuities in the retrieved phase. We train the neural network with a dataset of simulated data. The data set for training a neural network with supervised learning consists of inputs and their corresponding outputs. In this case the input is the diffraction pattern. The output is what we want the neural network to retrieve from the input: the corresponding complex object (Fig. 3(c)) which propagates through the wavefront sensor (Fig. 3(d)) to create the diffraction pattern that is input to the network. During the training process the network learns a mapping from the input to the correct output. The training is done by minimizing a cost function (Eq. (1)), which is the error between the output of the network and the actual real and imaginary parts of the object, plus the error of the reconstructed diffraction pattern compared to the input diffraction pattern, all flattened to one-dimensional vectors. The neural network is constructed and trained using Python with the TensorFlow library.

$$C=\frac{1}{n}\sum_{i=1}^{n}(\textrm{U}_{\textrm{real}_i}^{\textrm{retrieved}} - \textrm{U}_{\textrm{real}_i}^{\textrm{actual}} )+\frac{1}{n}\sum_{i=1}^{n}(\textrm{U}_{\textrm{imag}_i}^{\textrm{retrieved}} - \textrm{U}_{\textrm{imag}_i}^{\textrm{actual}} )+\frac{1}{n}\sum_{i=1}^{n}(\textrm{I}_{i}^{\textrm{retrieved}} - \textrm{I}_{i}^{\textrm{actual}} )$$
$$\begin{aligned} I :& \quad\textrm{Diffraction Pattern} \\ U :& \quad\quad\quad\quad\textrm{Object} \end{aligned}$$

The use of a reconstructed diffraction pattern in the cost function has been shown to effectively improve the accuracy of neural network training for coherent diffractive imaging [7]. The reconstructed diffraction pattern is calculated by simulating the propagation of the retrieved object through the wavefront sensor using the multi-slice approach [13]. The training process is the computationally expensive part of using neural networks, because it involves the calculation of many partial derivatives of the cost function with respect to all the weights of the artificial neurons in the network over many iterations. This process is highly parallelizable, which is why graphics cards are highly effective for neural networks. Training the neural network on a Tesla V100 graphics card takes approximately 17 hours for 10 epochs on a dataset of 36000 unique diffraction patterns at various noise levels (further explained in Section 5). Once the network is trained, using it to produce an output from a diffraction pattern takes only milliseconds: running on the same Tesla V100 graphics card, the complex object is retrieved from a diffraction pattern in 23 milliseconds. In comparison, iterative phase retrieval requires hundreds of iterations, which usually takes several seconds on a standard machine. Mathematically, the neural network is a series of convolutional layers, activation functions and batch normalizations, which can be run highly efficiently in parallel on a GPU. Additionally, after the iterative retrieval has converged, the retrieved object must be interpolated to reveal the wavefront incident on the sensor, which takes additional computational time [10]. This interpolation is performed by the neural network in the single pass through the network.
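A sketch of the three-term cost of Eq. (1) in TensorFlow could look as follows. The choice of a mean-squared error as the per-term metric and the tensor names are assumptions of this example, not the exact implementation used for training.

import tensorflow as tf

def cost_function(u_real_pred, u_imag_pred, u_real_true, u_imag_true,
                  i_reconstructed, i_measured):
    """Error of the retrieved real/imaginary parts plus the error of the
    reconstructed diffraction pattern, reduced over flattened tensors."""
    object_error = (tf.reduce_mean(tf.square(u_real_pred - u_real_true))
                    + tf.reduce_mean(tf.square(u_imag_pred - u_imag_true)))
    diffraction_error = tf.reduce_mean(tf.square(i_reconstructed - i_measured))
    return object_error + diffraction_error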

Fig. 2. Neural Network Structure. The diffraction pattern is input to the encoder, which consists of 4 convolutional layers with a stride of two and an increasing number of output channels used to down-sample the input. The signal is up-sampled by two series of convolutional transpose operations which output the real and imaginary parts of the retrieved object.
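As a minimal illustration of the encoder-decoder structure of Fig. 2, a Keras sketch could look like the following. The kernel sizes, channel numbers, input size and activation choices are illustrative assumptions rather than the exact network used in this work.

import tensorflow as tf
from tensorflow.keras import layers

def build_network(input_size=128):
    """Encoder-decoder sketch following Fig. 2: a stride-2 convolutional
    encoder and two transposed-convolution branches for the real and
    imaginary parts of the retrieved object."""
    inp = layers.Input(shape=(input_size, input_size, 1))  # diffraction pattern
    x = inp
    # Encoder: 4 zero-padded convolutions with stride 2 and increasing channels
    for channels in (16, 32, 64, 128):
        x = layers.Conv2D(channels, 3, strides=2, padding="same",
                          activation="relu")(x)
        x = layers.BatchNormalization()(x)

    def decoder_branch(encoded, name):
        y = encoded
        # Decoder branch: stride-2 transposed convolutions back to full size
        for channels in (64, 32, 16):
            y = layers.Conv2DTranspose(channels, 3, strides=2, padding="same",
                                       activation="relu")(y)
        return layers.Conv2DTranspose(1, 3, strides=2, padding="same",
                                      name=name)(y)

    real_out = decoder_branch(x, "real_part")
    imag_out = decoder_branch(x, "imag_part")
    return tf.keras.Model(inp, [real_out, imag_out])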

We generate our training data set by simulating aberrations described by Zernike polynomials. We apply randomized aberrations to a beam and calculate the shape of the beam in the focus by applying a Fourier transform. The randomized aberrations are calculated by multiplying the Zernike polynomials by random scalar coefficients to create a phase (Eq. (2)), which is applied to a Gaussian amplitude. The Zernike polynomials are limited to a total of 14, including the linear phase polynomials ($z^{-1}_1 , z^{1}_1$), which correspond to a position shift of the beam along the wavefront sensor. The $z^0_0$ term is ignored because we are not interested in a constant phase shift. The polynomials range from $z_1^{\pm 1}$ (linear phase) to the $z_4^{-4 \cdots 4}$ polynomials. We limit the Zernike polynomials to the 4th order because we estimate that the aberrations of our optical system are describable within 4th-order Zernike polynomials; in the case of a more complex optical system, it is possible to generate a training dataset with higher-order Zernike polynomials if needed.

$$\phi_{\textrm{zernike}}(x,y)=C_1 \cdot z^{{-}1}_{1}(x,y) + C_2 \cdot z^{1}_{1}(x,y) + C_3 \cdot z^{{-}2}_{2}(x,y) + \cdots+ C_{13} \cdot z^{2}_{4}(x,y) + C_{14} \cdot z^{4}_{4}(x,y)$$
$$\begin{aligned} z^m_n :& \quad\quad\textrm{Zernike polynomial of azimuthal order } m \textrm{ and radial order } n \\ C_i :& \quad\textrm{randomized scalar for each Zernike polynomial} \end{aligned}$$

The Gaussian with the applied phase is propagated to the far field with a Fourier transform (Eq. (3)).

$$\hat{E}_{\textrm{wavefront}}(x,y) = FFT {\bigg[} \exp {\bigg(} \frac{-x^2 - y^2}{w_0^2} {\bigg)} \cdot \exp {\bigg(} i \cdot \phi_{\textrm{zernike}} {\bigg)} {\bigg]}$$
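A compact sketch of how one training sample could be generated from Eqs. (2) and (3) is shown below. The grid size, beam waist, coefficient range and the small unnormalized subset of Cartesian Zernike terms are illustrative assumptions of this example.

import numpy as np

def random_focus_sample(n=128, w0=0.4, max_coeff=2.0, rng=None):
    """Apply a random low-order Zernike phase to a Gaussian (Eq. (2)) and
    propagate it to the focus with a Fourier transform (Eq. (3))."""
    rng = rng or np.random.default_rng()
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    R2 = X**2 + Y**2
    # Small illustrative subset of the 14 Zernike terms (unnormalized, Cartesian)
    zernike_terms = [
        Y,               # z_1^-1, vertical tilt
        X,               # z_1^+1, horizontal tilt
        2 * X * Y,       # z_2^-2, oblique astigmatism
        2 * R2 - 1,      # z_2^0,  defocus
        X**2 - Y**2,     # z_2^+2, vertical astigmatism
    ]
    coeffs = rng.uniform(-max_coeff, max_coeff, len(zernike_terms))
    phase = sum(c * z for c, z in zip(coeffs, zernike_terms))
    near_field = np.exp(-R2 / w0**2) * np.exp(1j * phase)
    # Far-field (focal) distribution via a centered FFT
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(near_field)))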

This object is then propagated through the wavefront sensor $\hat{M}$ using the multi-slice BPM technique [13] (Eq. (4)).

$$ \gamma(f_x,f_y) : \sqrt{1 - (\lambda \cdot f_x)^2 - (\lambda \cdot f_y)^2} $$
$$\hat{E}^{\textrm{i+1}}(x,y) = FFT^{{-}1} {\bigg[} FFT {\bigg[} \hat{E}^i(x,y) \cdot \hat{M}_{\textrm{wavefront sensor}}(x,y) {\bigg]} \cdot \exp {\bigg(} i \frac{2\pi \cdot dz}{\lambda} \cdot \gamma(f_x,f_y) {\bigg)} {\bigg]}$$
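One step of the angular-spectrum propagation of Eq. (4) can be written, for example, as below. The grid spacing dx and the handling of evanescent frequencies (setting the propagator argument to zero where it would become negative) are assumptions of this sketch.

import numpy as np

def multislice_step(field, mask, dz, wavelength, dx):
    """Eq. (4): multiply the field by the mask transmission of this slice and
    propagate by dz with the angular-spectrum (gamma) propagator."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    gamma = np.sqrt(np.maximum(arg, 0.0))   # suppress evanescent components
    propagator = np.exp(1j * 2 * np.pi * dz / wavelength * gamma)
    return np.fft.ifft2(np.fft.fft2(field * mask) * propagator)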

The propagation takes into account the wavelength of the laser beam, which also shapes the produced diffraction pattern. The diffraction pattern is produced by Fourier transforming the propagated beam. The desired output of the neural network is the complex object (Fig. 3(c)), and the input is the produced diffraction pattern (Fig. 3(e)). By constructing the training data set in this way, we build the constraint into the neural network. Because we propagate all the objects through the same wavefront sensor, the neural network learns to retrieve the objects behind this specific wavefront sensor. This is analogous to using a fixed support in the iterative CDI retrieval method.

Fig. 3. Object simulation and neural network. (a) Gaussian profile (b) Zernike polynomials which are applied to the Gaussian (c) Fourier transform of the Gaussian with applied phase (d) The output after propagation through the wavefront sensor (e) The Fourier transform of the object after the wavefront sensor, and the diffraction pattern which is imaged by the detector (f) The object retrieved by the neural network

It is possible for different objects to produce an identical diffraction pattern (ambiguity) when the object contains a constant phase shift. An ambiguity is contained in a data set when multiple diffraction patterns are (nearly) identical but correspond to different wavefronts. Ambiguities must be removed from the data set to train the neural network; otherwise the training will not converge to a solution. We remove ambiguities in the wavefront phase by setting the phase angle of the complex object to 0 at the center for each wavefront in the training data set. In this way, the neural network learns the convention of zero phase at the center of the retrieved wavefront, and the data set does not contain ambiguities due to a constant phase shift. The twin image problem [14], which also imposes an ambiguity, is solved by choosing an asymmetrical geometry of the binary mask (wavefront sensor) [10].
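The constant-phase convention described above could be applied to each training wavefront with a few lines of the following kind; this is a sketch, and taking the central pixel as "the center" is our reading of the convention.

import numpy as np

def remove_constant_phase(obj):
    """Rotate the complex object so its phase angle is zero at the center,
    removing the constant-phase ambiguity from the training data."""
    center = tuple(s // 2 for s in obj.shape)
    return obj * np.exp(-1j * np.angle(obj[center]))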

Fig. 4. Neural Network Retrieval on Validation data. (a) Input Diffraction Pattern (b) Retrieved Intensity (c) Retrieved Phase (d) Actual Intensity (e) Actual Phase.

Two data sets are constructed using the model shown in Fig. 3: a training data set and a validation data set. The training data set consists of 36000 samples, and the validation data set consists of 200 samples. We use the training data set to calculate the gradients of the cost function with respect to all the weights in the neural network, and we adjust the weights to minimize the cost function using ADAM optimization [15]. The validation set is used to determine when to stop training the network: we stop when the error on the validation data starts to increase while the error on the training data continues to decrease, which indicates that the network is starting to over-fit the training data. The validation data set is representative of actual measured diffraction patterns because the network is not trained with this data. Retrievals of the neural network on samples from the validation data set are shown in Fig. 4.
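A schematic Keras-style training setup with ADAM and validation-based early stopping is shown below. The variable names (train_* and val_* arrays), batch size, learning rate and the placeholder loss are assumptions of this sketch; the actual training minimizes the custom cost of Eq. (1), which also includes the reconstructed diffraction pattern.

import tensorflow as tf

model = build_network()  # encoder-decoder sketch from Section 3

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="mse")  # placeholder; the paper uses the cost of Eq. (1)

# Stop training when the validation error starts to increase (over-fitting)
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=2,
                                              restore_best_weights=True)

# train_* and val_* are hypothetical arrays holding diffraction patterns and
# the corresponding real/imaginary parts of the objects
model.fit(train_diffraction, [train_real, train_imag],
          validation_data=(val_diffraction, [val_real, val_imag]),
          epochs=10, batch_size=16, callbacks=[early_stop])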

4. Experimental results

We test our trained neural network on experimentally measured data. An XUV source is used to illuminate a pinhole of diameter 2.7 $\mu m$. We use the neural network to retrieve the beam which has propagated through the pinhole. The wavefront sensor is placed behind the pinhole (Fig. 5) and we measure the diffraction pattern produced by the wavefront sensor at various distances from the pinhole. The detector is placed behind the wavefront sensor at a fixed distance such that the diffraction pattern is measured sufficiently far away that Fraunhofer diffraction is observed.

Fig. 5. Experimental setup. The detector is placed behind a pinhole of diameter 2.7 $\mu m$ with an incoming XUV beam of wavelength 13.5 $nm$. The distance between the pinhole and the wavefront sensor is adjusted.

The diameter of the retrieved beam increases as the distance between the wavefront sensor and the pinhole is increased. We verify that our retrieved object is correct by using the near field propagator (Eq. (5)).

$$E_{\textrm{prop}}(x,y) = FFT^{{-}1} {\bigg[} FFT {\bigg[} E(x,y) {\bigg]} e^{i \frac{2 \pi z}{\lambda}\gamma(f_x,f_y)} {\bigg]}$$

We know the diameter of the pinhole is 2.7 $\mu m$ from an electron microscope image, so we are able to verify the accuracy of our retrieval by propagating the retrieved wavefront back to the pinhole. We take the complex object retrieved at a distance of 500 $\mu m$ from the pinhole (Fig. 6(h) and (i)). This retrieved wavefront is propagated by a distance of 500 $\mu m$ back toward the pinhole (Fig. 6(j) and (k)), and we find that it matches the 2.7 $\mu m$ diameter of our pinhole (Fig. 6).
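For example, the retrieved field could be back-propagated with the propagator of Eq. (5) as follows. Here `retrieved_field` and the grid spacing `dx` are placeholders for the reconstructed complex object and the sampling of the simulation grid, and the negative distance denotes propagation back toward the pinhole.

import numpy as np

wavelength = 13.5e-9            # m, XUV wavelength used in the experiment
z = -500e-6                     # m, propagate 500 um back toward the pinhole
n = retrieved_field.shape[0]    # retrieved_field: complex object from the network (placeholder)
fx = np.fft.fftfreq(n, d=dx)    # dx: grid spacing of the simulation (placeholder)
FX, FY = np.meshgrid(fx, fx)
gamma = np.sqrt(np.maximum(1 - (wavelength * FX)**2 - (wavelength * FY)**2, 0))
field_at_pinhole = np.fft.ifft2(np.fft.fft2(retrieved_field)
                                * np.exp(1j * 2 * np.pi * z / wavelength * gamma))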

Fig. 6. Retrieval with Neural Network at various distances behind the pinhole. The trained neural network is used to retrieve the object behind the pinhole. The retrieved object at 0 $\mu m$ is propagated a distance of 500 $\mu m$ to the pinhole.

The neural network retrieves the object without any iterative retrieval, and it directly retrieves the wavefront incident on the wavefront sensor, so there is no need to perform an interpolation of the object to obtain the wavefront, as is the case with iterative CDI retrieval.

5. Noisy diffraction patterns

We test the neural network and the iterative phase retrieval method on simulated samples with noise applied, simulating a camera image captured from a weak laser source with a limited number of photons. The finite number of counts measured by the detector is described mathematically by the Poisson distribution. The neural network is trained on a training data set constructed from 36000 unique objects. For each object the neural network is trained on the corresponding diffraction pattern with no noise (infinite counts) and with 50, 40, 30, 20, 10, and 5 peak signal counts. 200 samples which were not used during training are retrieved using both the neural network and the iterative phase retrieval method at various signal-to-noise ratio (SNR) levels. The samples are generated using the same method as the training data: randomized Zernike coefficients are applied to a Gaussian, which is then Fourier transformed. The RMSE of these retrievals is calculated from the difference between the actual intensity/phase and the retrieved intensity/phase. The neural network outperforms the iterative phase retrieval, as shown in Fig. 7.
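The photon-limited detector images could be simulated, for instance, by scaling the noise-free diffraction pattern to the desired peak count and drawing Poisson samples; this is a sketch, and the normalization back to relative units is an assumption.

import numpy as np

def add_poisson_noise(diffraction, peak_counts, rng=None):
    """Simulate a detector image with a limited number of photons."""
    rng = rng or np.random.default_rng()
    scaled = diffraction / diffraction.max() * peak_counts   # set the peak count
    noisy = rng.poisson(scaled).astype(float)                # photon shot noise
    return noisy / peak_counts                               # back to relative units

# Training uses noise-free patterns plus 50, 40, 30, 20, 10 and 5 peak counts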

Fig. 7. Retrieval on noisy diffraction patterns. Both the iterative and neural network methods are used to retrieve intensity (a) and phase (b) from diffraction patterns with an increasing amount of noise, characterized by the SNR of the measured diffraction pattern. The average retrieval error is plotted along with the standard deviation.

Both the neural network and the iterative phase retrieval method are used to retrieve the object from the diffraction pattern generated by a simulated wavefront with spherical $(z_2^0)$ phase, with and without noise artificially added to the diffraction pattern. The result is shown in Fig. 8. In the high-noise case, with the diffraction pattern imaged with 10 peak counts (3.16 SNR), the neural network is still able to retrieve the object with high accuracy (Fig. 8(i) and (j)), whereas the object retrieved by the iterative method (Fig. 8(k) and (l)) does not resemble the actual object.

Fig. 8. Diffraction pattern with and without noise retrieved. An identical object (f,g,m,n) is used to construct a diffraction pattern with (h) and without (a) noise. The object is retrieved from the noise-free diffraction pattern (a); the results of the neural network (b,c) and the iterative retrieval (d,e) are shown and compared to the actual object (f,g). The object is retrieved from the noisy diffraction pattern (h); the results of the neural network (i,j) and the iterative retrieval (k,l) are compared to the actual object (m,n).

The accuracy of the neural network retrieval on high-noise diffraction patterns, combined with its speed, makes the neural network method highly advantageous when working with low-intensity beams: the integration time for capturing the image can be shorter, even enabling single-shot measurements with a small fraction of the beam.

6. Conclusion

In conclusion, we demonstrate a new approach to coherent diffractive imaging which works in conjunction with wavefront sensing using a mask. The retrieval time has been reduced from seconds to milliseconds, which will enable live in-focus wavefront measurements, even for very weak beams which produce a noisy diffraction pattern at short integration times. The wavefront sensor can be applied to a broad range of wavelengths, from visible light to X-rays. Potentially, this method can be applied to beam characterization in high harmonic generation, synchrotrons and free-electron lasers. Additionally, the mask wavefront sensor is a cost-effective alternative to commercial wavefront sensors, which are more difficult to align. Using the mask wavefront sensor, only the camera-sample distance must be known.

Funding

Proexcellence initiative (APC2020); European Social Fund (ESF); Federal State of Thuringia (2017 FGR 0076).

Acknowledgment

Financial support by the Thuringian State Government within its ProExcellence initiative (APC2020) is acknowledged.

Disclosures

The authors declare no conflicts of interest.

Data availability

All the code for this project is available upon request to the corresponding author.

References

1. S. Lawrence, C. Giles, A. Tsoi, and A. Back, “Face recognition: A convolutional neural network approach,” IEEE Trans. Neural Netw. 8(1), 98–113 (1997). [CrossRef]  

2. B. Djavan, M. Remzi, A. Zlotta, C. Seitz, P. Snow, and M. Marberger, “Novel artificial neural network for early detection of prostate cancer,” J. Clin. Oncol. 20(4), 921–929 (2002). [CrossRef]  

3. L. Bottaci, P. J. Drew, J. E. Hartley, M. B. Hadfield, R. Farouk, P. W. Lee, I. M. Macintyre, G. S. Duthie, and J. R. Monson, “Artificial neural networks applied to outcome prediction for colorectal cancer patients in separate institutions,” The Lancet 350(9076), 469–472 (1997). [CrossRef]  

4. Z. Zhu, J. White, Z. Chang, and S. Pang, “Attosecond pulse retrieval from noisy streaking traces with conditional variational generative network,” Sci. Rep. 10(1), 5782 (2020). [CrossRef]  

5. T. Zahavy, A. Dikopoltsev, D. Moss, G. I. Haham, O. Cohen, S. Mannor, and M. Segev, “Deep learning reconstruction of ultrashort pulses,” Optica 5, 666–673 (2018). [CrossRef]  

6. M. Cherukara, Y. Nashed, and R. Harder, “Real-time coherent diffraction inversion using deep generative networks,” Sci. Rep. 8(1), 16520 (2018). [CrossRef]  

7. Z. Guan, E. H. Tsai, X. Huang, K. Yager, and H. Qin, “Ptychonet: Fast and high quality phase retrieval for ptychography,” in British Machine Vision Conference, 2019.

8. Y. Nishizaki, R. Horisaki, K. Kitaguchi, M. Saito, and J. Tanida, “Analysis of non-iterative phase retrieval based on machine learning,” Opt. Rev. 27(1), 136–141 (2020). [CrossRef]  

9. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

10. W. Eschen, G. Tadesse, Y. Peng, M. Steinert, T. Pertsch, J. Limpert, and J. Rothhardt, “Single-shot characterization of strongly focused coherent xuv and soft x-ray beams,” Opt. Lett. 45(17), 4798–4801 (2020). [CrossRef]  

11. M. P. Benk, K. A. Goldberg, A. Wojdyla, C. N. Anderson, F. Salmassi, P. P. Naulleau, and M. Kocsis, “Demonstration of 22-nm half pitch resolution on the sharp euv microscope,” J. Vac. Sci. Technol., B: Nanotechnol. Microelectron.: Mater., Process., Meas., Phenom. 33(6), 06FE01 (2015). [CrossRef]  

12. W. Chao, B. D. Harteneck, J. A. Liddle, E. H. Anderson, and D. T. Attwood, “Soft x-ray microscopy at a spatial resolution better than 15 nm,” Nature 435(7046), 1210–1213 (2005). [CrossRef]  

13. G. Tadesse, W. Eschen, R. Klas, M. Tschernajew, F. Tuitje, M. Steinert, M. Zilk, V. Schuster, M. Zuerch, T. Pertsch, C. Spielmann, J. Limpert, and J. Rothhardt, “Wavelength-scale ptychographic coherent diffractive imaging using a high-order harmonic source,” Sci. Rep. 9(1), 1735 (2019). [CrossRef]  

14. M. Guizar-Sicairos and J. R. Fienup, “Understanding the twin-image problem in phase retrieval,” J. Opt. Soc. Am. A 29(11), 2367–2375 (2012). [CrossRef]  

15. D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in 3rd International Conference for Learning Representation, 2014.
