Optica Publishing Group

Simultaneous reconstruction of phase and amplitude for wavefront measurements based on nonlinear optimization algorithms

Open Access

Abstract

An imperfectly determined amplitude distribution in the pupil degrades the convergence speed and accuracy of phase retrieval methods, which depend on the field amplitude to reconstruct the phase. In this paper, we propose two kinds of phase retrieval methods, based on hybrid point-polynomial and point-by-point nonlinear optimization algorithms, to reconstruct the amplitude and phase of the wavefront simultaneously. Intensity quantization errors are avoided by using modified first derivatives. For simple and general wavefront testing, the accuracy and robustness of the proposed algorithms are verified both numerically and experimentally.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Coherent diffraction imaging (CDI) is a non-interferometric phase retrieval technique that recovers the missing phase information from intensity-only measurements taken in the near-focal region. Phase retrieval algorithms are widely applied to image reconstruction [1], image encryption [2], super-resolution [3], wavefront sensing [4], etc. They mainly comprise the transport of intensity equation (TIE) [5,6] and iterative phase retrieval algorithms [7]. TIE can realize high-precision reconstruction of microscopic images using partially coherent illumination. Iterative phase retrieval, the main method for wavefront measurement, has evolved into alternating-projection algorithms and gradient-search algorithms. An alternating-projection algorithm estimates the pupil phase by iterating between the pupil and the known-amplitude plane based on scalar diffraction theory. A gradient-search algorithm minimizes an error metric (also called the objective function) in the desired plane using a nonlinear optimization algorithm. Nonlinear optimization methods mainly consist of point-by-point optimization algorithms and modal-based optimization algorithms [8]: the former parameterizes each pixel value for gradient optimization, while the latter optimizes the coefficients of the polynomials used to represent the measured phase.

It is usually assumed that the pupil amplitude is known when phase retrieval is applied to wavefront measurement. However, the pupil amplitude is difficult to determine accurately. One reason is that the illumination, affected by installation errors of the optical system, is not an ideal spherical or plane wave [9]. Another is that the pupil amplitude is difficult to calibrate because of the randomness of coherent noise. The accuracy of interferometric measurement is hardly affected by non-uniform illumination. For the CDI method, however, since both amplitude and phase information are encoded in the diffraction patterns, reconstructing the phase and amplitude simultaneously is essential for improving the measurement accuracy [10]. Two methods that directly reconstruct phase and amplitude simultaneously are coherent modulation imaging and sub-aperture stitching. Coherent modulation imaging [11,12] retrieves more information from each recording and thus eliminates the stagnation problem, so the convergence and robustness of the iterative method for complex wavefront reconstruction are significantly improved. The sub-aperture stitching method [13,14] offers greater stability and accuracy for complex wavefront reconstruction with overlapping sub-apertures. However, both methods require additional components, which increases the risk of errors introduced by those components.

Another wavefront reconstruction approach treats the intensities collected at different positions along the optical axis as the known amplitudes from which the phase is recovered. Within this framework, phase diversity is the generic method for estimating the phase in the measurement planes [10,15]. In particular, the phase diversity method based on a first-order Taylor expansion of the optical transfer function achieves higher accuracy in less time [16]; small aberrations are accurately measured with this method. Furthermore, a nonlinear optimization algorithm has been proposed to minimize a weighted sum-of-squared-errors metric [17]: one of the measured amplitudes serves as the known amplitude distribution, and the phase is estimated from the remaining measured amplitudes. In numerical experiments, large-scale general phase retrieval has been achieved via nonlinear optimization. In addition, a method that simultaneously reconstructs phase and amplitude based on expectation-maximization has also been proposed [18].

Compared with alternating-projection phase retrieval algorithms, gradient-search algorithms are more flexible for wavefront measurement because they can optimize either pixel values or polynomial coefficients. In this paper, the phase and amplitude are reconstructed simultaneously from single or multiple diffraction intensities collected along the optical axis using a nonlinear optimization algorithm. Simultaneous reconstruction of phase and amplitude prevents an inaccurate amplitude from degrading the wavefront measurement accuracy. In addition, the intensity quantization errors introduced by the sensor are rectified using modified first derivatives.

This paper is organized as follows. Section 2 introduces the modified nonlinear optimization theory and the proposed phase retrieval methods. Section 3 verifies the accuracy and stability of the methods through numerical simulation. Section 4 presents verification experiments. Section 5 discusses the results and concludes the paper.

2. Algorithm description

2.1 Objective function for wavefront measurement

The optical testing system is shown in Fig. 1. The incident light field at the aperture can be expressed as

$${U_0}(x,y) = {A_0}(x,y)\exp [{i{\varphi_0}(x,y)} ],$$
where $i^2 = -1$, and $A_0(x, y)$ and $\varphi_0(x, y)$ are the amplitude and phase distributions of the incident light, respectively.


Fig. 1. Experimental configuration for wavefront measurement. The collimated beam emitted from a ZYGO interferometer illuminates the plate and aperture stop. $U_0$: the wavefront of the illumination. $U_{total}$: the wavefront behind the plate. AS: aperture stop. PL: plate with wavefront error. CL: condensing lens. CCD: charge-coupled device.


The wavefront of the tested plate $U_{plate}(x, y)$ is

$${U_{plate}}(x,y) = \exp [{i{\varphi_{plate}}(x,y)} ],$$
where $\varphi_{plate}(x, y)$ is the transmission wavefront phase corresponding to the error of the tested plate.

Therefore, the field behind the plate is

$${U_{total}}(x,y) = {A_0}(x,y)\exp [{i{\varphi_{total}}(x,y)} ],$$
with
$${\varphi _{total}}(x,y) = {\varphi _0}(x,y) + {\varphi _{plate}}(x,y).$$

After $U_{total}(x, y)$ transmits through the ideal condensing lens, the pupil field is

$$g(x,y) = {U_{total}}(x,y) \cdot \exp \left( { - ik\frac{{{x^2} + {y^2}}}{{2f}}} \right),$$
where $k$ is the wave number, $k = 2\pi /\lambda$, $\lambda$ is the wavelength, and $f$ is the focal length.

According to Fourier optics theory [19], the ${j^{th}}$ defocused field at defocus length $\Delta {z_j}$ is

$${G_j}(\mu ,\textrm{ }\nu ) = \wp [{g(x,y),\Delta {z_j}} ],$$
where $\wp [{\cdot} ]$ represents the two-step Fresnel diffraction propagation model [20] and $(\mu, \nu)$ is the coordinate of the intensity measurement plane. Therefore, at $\Delta {z_j}$, the intensity distribution can be calculated pixel by pixel as
$$I_{cal}^j = {G_j}(\mu ,\textrm{ }\nu ) \cdot G_j^\ast (\mu ,\textrm{ }\nu ),$$
where the superscript * denotes the complex conjugate operator.
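The forward model of Eqs. (5)–(7) can be made concrete with a short numerical sketch. Here an angular-spectrum propagator is used as a stand-in for the two-step Fresnel model of Ref. [20] (an assumption, not the paper's exact propagator), and the aperture, toy phase, and grid values are illustrative only:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled field by distance z with the angular-spectrum
    method (stand-in assumption for the two-step Fresnel model [20])."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))  # clip evanescent part
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Build the pupil field g(x, y) of Eq. (5) and the defocused intensity of Eq. (7)
n, dx = 256, 4.4e-6            # grid size and pixel pitch (pitch from Sec. 3)
wl, f = 632.8e-9, 1079.41e-3   # wavelength and focal length (Sec. 3 values)
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
k = 2 * np.pi / wl

A0 = (X**2 + Y**2 <= (n * dx / 4)**2).astype(float)  # circular aperture amplitude
phi_total = 0.5 * np.sin(2 * np.pi * X / (n * dx))   # toy phase, illustration only
g = A0 * np.exp(1j * phi_total) * np.exp(-1j * k * (X**2 + Y**2) / (2 * f))

dz = 10e-3                                # defocus length Delta z_j
G = angular_spectrum(g, wl, dx, f + dz)   # Eq. (6): field at the defocused plane
I_cal = (G * np.conj(G)).real             # Eq. (7): pixel-by-pixel intensity
```

Because the transfer function has unit modulus, the propagated intensity conserves the pupil energy, which is a useful sanity check on any propagator substituted here.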

The objective function of this wavefront measurement model is defined as

$${E_j} = \sum\limits_{\mu ,\nu } {{W_j}(\mu ,\nu ){{\left[ {\sqrt {\widehat I_{cal}^j(\mu ,\textrm{ }\nu )} - \sqrt {\widehat I_{mea}^j(\mu ,\textrm{ }\nu )} } \right]}^2}} ,$$
where $\hat{I}_{mea}^j$ and $\hat{I}_{cal}^j$ are the normalized intensities of $I_{mea}^j$ and $I_{cal}^j$, respectively, $I_{mea}^j$ is the measured intensity, and $W_j(\mu, \nu)$ is a weighting function used to discard bad or saturated detector pixels and pixels with a poor signal-to-noise ratio.
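A minimal implementation of the metric in Eq. (8) might look as follows; max-normalization of the intensities is an assumption, since the paper does not specify the normalization:

```python
import numpy as np

def objective(I_cal, I_mea, W):
    """Eq. (8): weighted sum of squared differences between the square roots
    of the normalized calculated and measured intensities."""
    Ic = I_cal / I_cal.max()   # normalization removes light-source power drift
    Im = I_mea / I_mea.max()
    return np.sum(W * (np.sqrt(Ic) - np.sqrt(Im)) ** 2)
```

With unit weights, identical (or merely rescaled) intensities give exactly zero error; zeroing entries of `W` discards saturated or low-SNR pixels as described above.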

2.2 Modified gradient calculated algorithm

If each pixel value is differentiated to optimize the objective function, calculating the gradient is impractical within the limited computational and memory resources of ordinary desktop computers. Fortunately, Fienup et al. [7] proposed a simple analytic gradient expression based on the Fourier transform. However, if the diffraction pattern is collected near the focal region, a quantization error is introduced because the bit depth of the image measured by the sensor is finite. Simulation indicates that this error greatly degrades the wavefront measurement accuracy, especially in marginal areas. Here the gradient algorithm is modified to eliminate the accuracy degradation caused by quantization error.

In our situation, the pixel-by-pixel gradient values can be calculated with:

$$\frac{{\partial {E_j}}}{{\partial {\theta _j}}} = 2{\mathop{\rm Im}\nolimits} [{{g_j}({x,y} )g{^{\prime}_j}^\ast ({x,y} )} ],$$
$$\frac{{\partial {E_j}}}{{\partial {a_j}}} ={-} 2{\mathop{\rm Re}\nolimits} [{{g_j}({x,y} )g{^{\prime}_j}^\ast ({x,y} )} ],$$
where $\partial E_j/\partial \theta_j$ and $\partial E_j/\partial a_j$ represent the gradients for phase and amplitude, respectively. $g{^{\prime}_j}^\ast ({x,y})$ is given by
$$g{^{\prime}_j}^\ast ({x,y} )= {\wp ^{ - 1}}[{G_j^w({\mu ,\nu } )} ],$$
where ${\wp ^{ - 1}}[{\cdot} ]$ is the inverse two-step Fresnel diffraction propagation and
$$G_j^w({\mu ,\nu } )= {W_j}({\mu ,\nu } )G_j^M({\mu ,\nu } )\frac{{{G_j}({\mu ,\nu } )}}{{|{{G_j}({\mu ,\nu } )} |}},$$
where
$$G_j^M({\mu ,\nu } )= |{{F_j}({\mu ,\nu } )} |- \sqrt {\lceil{{{{{|{{G_j}({\mu ,\nu } )} |}^2} \cdot M} \mathord{\left/ {\vphantom {{{{|{{G_j}({\mu ,\nu } )} |}^2} \cdot M} {\max ({{{|{{G_j}({\mu ,\nu } )} |}^2}} )}}} \right.} {\max ({{{|{{G_j}({\mu ,\nu } )} |}^2}} )}}} \rceil } ,$$
where $M$ is the number of gray levels determined by the sensor output bit depth, $\lceil \cdot \rceil$ denotes rounding up, and $|F_j(\mu, \nu)|$ is the normalized amplitude
$$|{{F_j}({\mu ,\nu } )} |\textrm{ = }\sqrt {M \cdot \widehat I_{mea}^j} .$$
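Eqs. (13)–(14) can be sketched as below. The function name `quantized_modulus` is a hypothetical helper; it re-quantizes the model intensity to the sensor's $M$ gray levels with the ceiling operator of Eq. (13) and returns the amplitude mismatch $G_j^M$:

```python
import numpy as np

def quantized_modulus(G, I_mea_hat, M=4096):
    """Eqs. (13)-(14): amplitude mismatch after re-quantizing the model
    intensity |G|^2 to the sensor's M gray levels (ceil = 'rounding up').
    `quantized_modulus` is a hypothetical name, not from the paper."""
    I = np.abs(G) ** 2
    Gq = np.ceil(I * M / I.max())   # model intensity on the sensor's gray scale
    F = np.sqrt(M * I_mea_hat)      # Eq. (14): normalized measured amplitude
    return F - np.sqrt(Gq)          # Eq. (13): G_j^M
```

When the measured intensity equals the re-quantized model intensity, the mismatch vanishes, which is the condition the corrected gradient drives toward.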

The phase of the wavefront can be characterized by polynomials. Because Zernike polynomials correspond readily to the Seidel aberration terms, providing an effective tool for wavefront analysis and optimization of system performance, the Zernike basis functions are chosen to describe the wavefront

$$\varphi (x,y) = \sum\limits_{s = 1}^S {{\alpha _s}{Z_s}(x,y)} ,$$
where $Z_s(x, y)$ is the $s^{th}$ Zernike polynomial [21], $S$ is the number of polynomials, and $\alpha_s$ is the coefficient of the $s^{th}$ Zernike polynomial. The gradients with respect to the Zernike coefficients are calculated with
$$\frac{{\partial {E_j}}}{{\partial \alpha _{j,s}^\theta }} = 2{\mathop{\rm Im}\nolimits} \left[ {\sum\limits_{x,y} {{g_j}({x,y} ){Z_s}({x,y} )g{^{\prime}_j}^\ast ({x,y} )} } \right].$$

The amplitude can also be decomposed into a linear combination of basis polynomials [22], and the gradient expression with respect to the Zernike coefficients used to optimize the amplitude is

$$\frac{{\partial {E_j}}}{{\partial \alpha _{j,s}^a}} ={-} 2{\mathop{\rm Re}\nolimits} \frac{{\left[ {\sum\limits_{x,y} {{g_j}({x,y} ){Z_s}({x,y} )g{^{\prime}_j}^\ast ({x,y} )} } \right]}}{{\sum\limits_{x,y} {|{{Z_s}({x,y} )} |} }}.$$
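The modal gradients of Eqs. (16)–(17) reduce to projections of the pixel-wise factor $g_j\, g{^{\prime}_j}^\ast$ onto each basis function. A sketch, assuming the Zernike basis is precomputed as an array of shape $(S, N, N)$:

```python
import numpy as np

def zernike_coeff_gradients(g, g_prime_conj, Z):
    """Eqs. (16)-(17): project the pixel-wise gradient factor g_j * g'_j*
    onto each basis function Z_s to get phase- and amplitude-coefficient
    gradients. Z has shape (S, N, N); g_prime_conj is g'_j*(x, y), Eq. (11)."""
    prod = g * g_prime_conj                                      # common factor
    grad_phase = 2.0 * np.imag(np.sum(Z * prod, axis=(1, 2)))    # Eq. (16)
    grad_amp = -2.0 * np.real(np.sum(Z * prod, axis=(1, 2))
                              / np.sum(np.abs(Z), axis=(1, 2)))  # Eq. (17)
    return grad_phase, grad_amp
```

As a sanity check, when $g{^{\prime}_j}^\ast = g_j^\ast$ the factor $g_j\, g{^{\prime}_j}^\ast = |g_j|^2$ is real, so the phase-coefficient gradient of Eq. (16) vanishes identically.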

By quantizing the estimated value $G_j(\mu, \nu)$ to match the bit depth of the intensity collected by the sensor, which we call the quantization error correction algorithm, the calculation error caused by quantization can be avoided. At the same time, normalizing the collected intensity $I_{mea}^j$ averts the error caused by the unstable power of the light source.

If both phase and amplitude were described by polynomials, the reconstruction of both would be affected by noise, because noise usually has a high-frequency, randomly distributed character, whereas the tested wavefront errors are generally mid- and low-frequency. Therefore, in general, only the phase is represented by polynomials and reconstructed by optimizing the polynomial coefficients, while the amplitude is reconstructed by point-by-point optimization. During the iterations, the polynomials can then be expected to drive the high-frequency noise from the phase into the amplitude, improving the accuracy of phase reconstruction.

2.3 Algorithm framework details

The measurement system and the gradient calculation procedure are described above. Next, we describe the algorithm framework for complex wavefront reconstruction based on nonlinear optimization. The framework consists of the following steps:

  • (1) Set the maximum iteration number $K$, the number of patterns $J$, and the step lengths $h_{amp}$ for amplitude and $h_{phase}$ for phase. The reconstruction is started from the ideal-lens field $g_0(x, y)$;
  • (2) Calculate the corresponding gradients, including $\partial E_j/\partial a_{j,i}$ for amplitude and $\partial E_j/\partial \theta_{j,i}$ or $\partial E_j/\partial \alpha_{j,s,i}^{\theta}$ for phase, following Section 2.2;
  • (3) Update estimated amplitude with
    $$|{{{g}_i}} |= |{{g_{i - 1}}} |+ {h_{amp}}\left( {\frac{{\partial {E_j}}}{{\partial {a_{_{j,i}}}}} + \omega \frac{{\partial {E_j}}}{{\partial {a_{_{j,i - 1}}}}}} \right),$$
    with weighting factor $\omega \in (0, 0.1)$ and $\partial E_j/\partial a_{j,i} = \partial E_j/\partial a_{j,i\backslash J}$;
  • (4) Calculate the estimated phase $\theta_i(x, y)$ by
    $${\theta _i}(x,y) = {\theta _{i - 1}}(x,y) + {h_{phase}}\sum\limits_S {\left( {\frac{{\partial {E_j}}}{{\partial \alpha_{j,s,i}^\theta }} + \sigma \frac{{\partial {E_j}}}{{\partial \alpha_{j,s,i - 1}^\theta }}} \right)} {Z_s}(x,y),$$
    or
    $${\theta _i}(x,y) = {\theta _{i - 1}}(x,y) + {h_{phase}}\left( {\frac{{\partial {E_j}}}{{\partial {\theta_{j,i}}}} + \sigma \frac{{\partial {E_j}}}{{\partial {\theta_{j,i - 1}}}}} \right),$$
    with weighting factor $\sigma \in (0, 0.1)$ and $\partial E_j/\partial \alpha_{j,s,i}^{\theta} = \partial E_j/\partial \alpha_{j,s,i\backslash J}^{\theta}$ or $\partial E_j/\partial \theta_{j,i} = \partial E_j/\partial \theta_{j,i\backslash J}$;
  • (5) Assemble the estimated wavefront using the following equation:
    $${g_i} = |{{g_i}} |\exp [{i{\theta_i}(x,y)} ];$$
  • (6) Set $i = i + 1$ and repeat steps (2)-(5) until $i > K$;
  • (7) Calculate plate wavefront phase error
    $${\varphi _{plate}}(x,y) = {\theta _{i - 1}}(x,y) - {\varphi _0}(x,y),$$
    where ${\varphi _0}(x, y)$ is the systematic phase error.
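The steps above can be sketched as a single loop. Here `g_prime_conj_fn` is an assumed callable implementing Eq. (11) (the propagation-and-reweighting step is system specific), the Zernike basis is a precomputed $(S, N, N)$ array, and the blending of current and previous gradients uses the weights $\omega$ and $\sigma$; the step signs follow Eqs. (18)–(19), with the step lengths absorbing the descent direction:

```python
import numpy as np

def reconstruct(I_mea_stack, zernike, g_prime_conj_fn,
                h_amp=0.1, h_phase=0.1, omega=0.05, sigma=0.05, K=200):
    """Sketch of steps (1)-(7): point-by-point amplitude updates and
    Zernike-coefficient phase updates, each blending the current gradient
    with the previous iteration's. `g_prime_conj_fn(g, I_mea, j)` is an
    assumed interface returning g'_j*(x, y) of Eq. (11)."""
    J = len(I_mea_stack)
    N = I_mea_stack[0].shape[0]
    amp = np.ones((N, N))                 # step (1): start from the ideal lens g0
    theta = np.zeros((N, N))
    prev_ga = np.zeros((N, N))            # previous-iteration gradients
    prev_gc = np.zeros(zernike.shape[0])
    for i in range(K):
        j = i % J                         # serially cycle the J patterns (i\J)
        g = amp * np.exp(1j * theta)
        gpc = g_prime_conj_fn(g, I_mea_stack[j], j)                   # step (2)
        ga = -2.0 * np.real(g * gpc)                                  # Eq. (10)
        gc = 2.0 * np.imag(np.sum(zernike * (g * gpc), axis=(1, 2)))  # Eq. (16)
        amp = amp + h_amp * (ga + omega * prev_ga)                    # step (3)
        theta = theta + h_phase * np.tensordot(
            gc + sigma * prev_gc, zernike, axes=1)                    # step (4)
        prev_ga, prev_gc = ga, gc         # keep gradients for the next blending
    return amp * np.exp(1j * theta)       # step (5): reassemble g_i
```

The serial cycling over the $J$ patterns gives the SBMIR-like update order, while the blended two-iteration gradient supplies the APR-like stabilization discussed below.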

In the proposed algorithm framework, the derivatives calculated in two successive iterations are combined with a weight coefficient and applied along the gradient direction. Using serial iterative updating together with parallel gradient composition, the proposed framework retains the fast convergence of the single-beam, multiple-intensity reconstruction (SBMIR) technique [23] as well as the stability and robustness of the amplitude-phase retrieval (APR) algorithm [24]. Using the two gradients simultaneously also helps eliminate local-minimum stagnation. We derive four algorithms from the proposed framework: the single-image amplitude-point, phase-point nonlinear optimization algorithm (Single-NOPP); the multiple-image amplitude-point, phase-point algorithm (Multi-NOPP); the single-image amplitude-point, phase-Zernike algorithm (Single-NOPZ); and the multiple-image amplitude-point, phase-Zernike algorithm (Multi-NOPZ). To obtain the actual wavefront, the measured wavefront is modeled as a superposition of standard Zernike polynomials to remove the influence of noise for point-by-point iterative algorithms such as the Gerchberg-Saxton (GS) algorithm and the phase point-by-point nonlinear optimization algorithm. In theory, NOPZ has higher robustness and accuracy for low-frequency wavefront measurement, while NOPP offers more flexibility not only for complex wavefront measurement but also for non-interferometric phase measurement techniques such as Fourier ptychography [25] or Fourier ptychographic diffraction tomography [26]. This paper focuses on optical measurement, and the feasibility of the proposed algorithms is verified via wavefront measurement experiments. If the tested wavefront is simple, NOPZ should be used; if the wavefront contains high-spatial-frequency errors that cannot be fitted with a moderate number of Zernike polynomial terms, NOPP can be applied to reconstruct the wavefront.

3. Numerical simulations

The four proposed methods are demonstrated through simulations. All simulations share the same configuration: focal length 1079.41 mm, aperture diameter 22.9 mm, and wavelength 632.8 nm. The diffraction pattern has ${512 \times 512}$ pixels with a pixel size of ${4}{.4 \times 4}{.4\;}{\mu}{\rm{m}}$. In all simulations, three diffraction patterns, measured at defocus distances $\Delta z = [10, 15, 20]\;\rm{mm}$ and quantized to 4096 levels, are used.

3.1 Quantization error correction

The validity of the quantization error correction algorithm is verified first. The phase fitted by the first 36 Zernike polynomials, together with a uniform amplitude, is used as the ground truth. The reconstruction comparison results are shown in Fig. 2. The Multi-GS algorithm using three intensity patterns is chosen as the baseline; compared with the classical GS algorithm, it has higher convergence accuracy and speed. When the wavefront is reconstructed using the Multi-GS algorithm, no quantization error correction is applied, and the accuracies of the Multi-NOPP and Multi-NOPZ algorithms with and without quantization error correction are compared.


Fig. 2. The influence of quantization error for different algorithms. Suffix '-0' indicates no quantization error correction; suffix '-1' indicates that the quantization error is corrected.


In the absence of quantization error correction, when the wavefront is recovered via the Multi-NOPZ algorithm, the quantization error is driven into the amplitude and has little effect on the phase. For the two point-by-point algorithms, however, the quantization error greatly degrades the accuracy of phase and amplitude reconstruction at the edge of the wavefront. When the quantization error is corrected, the reconstruction accuracy is greatly improved. It is worth noting that when noise is added to the intensities, the noise is driven into the amplitude, resulting in edge warping in the reconstructed amplitude, as shown in the last set. To verify the stability of the quantization error correction algorithm, 60 groups of random numerical simulation experiments were performed, and the reconstruction errors are compared in Fig. 3. When the quantization error is corrected, the residual error is very small. For the last group of numerical experiments shown in Fig. 3, noise is added to the diffraction intensities. This further proves that the Multi-NOPZ algorithm has higher robustness.


Fig. 3. Summary of 60 numerical experiments to verify the quantization error correction algorithm.


3.2 Simple wavefront reconstruction

The phase retrieval algorithm is usually used to test aberrations fitted by dozens of Zernike polynomials. All algorithms are verified on simple wavefront retrieval. The phase of the simple wavefront is obtained by fitting the first 36 Zernike polynomials, and a surface fitted by the first 528 Zernike polynomials is superimposed on a uniform amplitude. The amplitude and phase of the wavefront are both normalized. White Gaussian noise with a signal-to-noise ratio of 35 dB is added to every intensity pattern. The four proposed algorithms and the APR and GS algorithms are applied in the numerical experiment. For the point-by-point algorithms, the retrieved phase and amplitude are both fitted by Zernike polynomials so that the results are compared over the same spatial bandwidth. The accuracy and speed of reconstruction are compared in Fig. 4.


Fig. 4. Numerical reconstructions for a simple wavefront. (a1) and (a2) are the ground truth; (b1) and (b2) are the reconstruction residual errors for the six algorithms; (c1) and (c2) are the RMSE convergence curves.


From Figs. 4(b1) and (b2), it can be concluded that the Single-GS and Single-NOPP algorithms cannot successfully reconstruct the wavefront, whereas the Single-NOPZ algorithm reconstructs it accurately, with higher accuracy than the Multi-GS algorithm. Meanwhile, although the APR algorithm converges, its convergence accuracy and speed are inferior to the other multiple-image phase retrieval algorithms. Figures 4(c1) and (c2) show the error convergence curves of the wavefront reconstruction. Owing to the noise, the accuracy of wavefront reconstruction is reduced. In addition, as shown in Figs. 4(b1) and (b2), the amplitude reconstruction error is larger than the phase error; we attribute this to noise affecting the amplitude more than the phase. By contrast, wavefront reconstruction based on nonlinear optimization is clearly faster than the APR and Multi-GS algorithms, especially for the amplitude; moreover, the amplitude converges faster than the phase, which confirms the advantage of iteratively optimizing amplitude and phase separately. The accuracy of the five successful algorithms ranks as Multi-NOPZ > Single-NOPZ > Multi-NOPP > Multi-GS > APR. To verify stability, 60 groups of random numerical simulation experiments were carried out, and the reconstruction errors of the different algorithms are shown in Fig. 5. When the polynomial coefficients are optimized directly, the stability is better than that of the point-by-point reconstruction algorithms, whether or not multiple diffraction patterns are used.


Fig. 5. Summary of 60 numerical experiments for simple wavefront. (a) The residual error phase retrieved with different algorithms; (b) The residual error amplitude retrieved with different algorithms.


3.3 General wavefront reconstruction

To verify the effectiveness of the proposed algorithms for more general wavefront errors, a surface fitted by the first 300 Q-type polynomials [27] is used as the phase to be measured. Again, white Gaussian noise with a signal-to-noise ratio of 35 dB is added to every intensity pattern. The three proposed algorithms, together with the APR and Multi-GS algorithms that succeeded in the simple wavefront reconstruction, are used for general wavefront verification, as shown in Fig. 6. Comparing the reconstruction errors, we find that the errors of the Single-NOPZ algorithm are larger, and its stability worse, than those of the other two proposed algorithms. Therefore, for measurement of a general wavefront, multiple diffraction patterns should be used to improve the stability and accuracy of the algorithm.


Fig. 6. Summary of 60 numerical experiments for a general wavefront. (a1) and (a2) are examples of the tested phase and amplitude, respectively; (b1) is the residual phase retrieved with the different algorithms; (b2) is the residual amplitude retrieved with the different algorithms.


4. Experiments and results

Experiments were carried out to verify the feasibility and accuracy of the proposed methods. The collimated beam with $\lambda = 632.8$ nm emitted from a ZYGO interferometer illuminates the plate, and a circular aperture stop of diameter 22.9 mm is placed in front of the plate. A lens with a focal length of 1079.41 mm focuses the beam. The camera, a beam profiler (BGP-USB-SP620U) with 12-bit depth and ${4}{.4 \times 4}{.4\;}{\mu}{\rm{m}}$ pixel size, is mounted on a translation stage and collects diffraction patterns at different defocus positions. The selected defocus distances are 5 mm, 10 mm, and 15 mm. The sample matrix is set to ${512 \times 512}$ for all algorithms. The parameter settings are listed in Table 1.


Table 1. The parameter settings for all algorithms

4.1 Systematic errors calibration

First, the systematic wavefront error is measured, as shown in Fig. 7. From the previous simulations, we can conclude that Single-GS and Single-NOPP cannot converge when both the phase and amplitude of the wavefront are recovered at the same time, and that APR is less effective than Multi-GS. Here only the three effective proposed algorithms and Multi-GS are used for wavefront reconstruction verification. Figure 7 contains eight subfigures: the top row shows the phase retrieved by the four algorithms, and the bottom row shows the corresponding retrieved amplitude.


Fig. 7. Systematic errors calibration. The phase of reconstruction (top) and the amplitude of reconstruction (bottom).


From the phase maps in Fig. 7, it is concluded that the main systematic error is a defocusing error. After removing piston, tip, and tilt, the systematic error is 0.1174 λ (0.7376 rad) RMS, and the difference between the four algorithms is less than 0.0060 λ (0.0377 rad) RMS. The systematic errors mainly have two sources. On the one hand, a filter is added in front of the camera to attenuate the light and prevent overexposure; it acts as a parallel plate, causing a defocusing error as the focus moves forward. On the other hand, because the focusing lens has a focal length of 1079.41 mm, the F-number of the system is large, so it is difficult to determine the actual focal position. Compared with the amplitude retrieved by the Multi-NOPP algorithm, the result of the Multi-GS algorithm shows obvious edge warp. This experimentally confirms that near-focal diffraction patterns quantized to 12 bits cause edge warping, and it validates the quantization error correction algorithm. The Multi-NOPZ results also confirm experimentally that recovering the polynomial coefficients compresses the noise into the recovered amplitude and improves the phase reconstruction accuracy.

4.2 Simple wavefront error case

A simple wavefront measurement was performed by inserting one plate sample. The RMS of this sample's wavefront measured by the ZYGO interferometer is 0.1059 λ (0.6654 rad) after removing piston, tip, and tilt. A superposition of the first 36 Zernike terms is used to model the phase measured by the Multi-GS and Multi-NOPP algorithms, and the same 36 Zernike coefficients are optimized for the two NOPZ algorithms. The phase retrieval results are shown in Fig. 8, and the differences between the phase retrieval results and the interferometer result are shown in Fig. 8(b). The result of the Multi-NOPZ algorithm agrees with the interferometric data to 0.0119 λ (0.0747 rad) RMS. The measurement accuracy of the three proposed algorithms is better than that of the classical Multi-GS algorithm. It is worth noting that, for the Single-NOPZ algorithm, only the diffraction pattern collected at the 5 mm defocus position recovers a good wavefront, while the results recovered from the patterns at the other two defocus positions do not match the interferometer result. This suggests that the diffraction effect is enhanced when the intensity is collected at a properly chosen defocus position [28], so the diffraction pattern at a certain defocus position yields better recovery.


Fig. 8. The simple sample. (a) The ZYGO result; (b) summary of computed differences between the different algorithms and the ZYGO result; (c1)-(f1) phase reconstructed by the four algorithms; (c2)-(f2) comparison of the four reconstructions with the interferometric data; (c3)-(f3) amplitude reconstructed by the four algorithms.


4.3 General wavefront error case

To verify the effectiveness of the proposed algorithms for general wavefront measurement, another sample with 0.1008 λ (0.6333 rad) RMS is tested. The measurement results are shown in Fig. 9. The first 190 Zernike polynomials are used for all algorithms. Clearly, a single diffraction intensity cannot effectively recover the complex wavefront, consistent with the earlier simulation showing that a single image is not stable for general wavefront reconstruction. Multi-NOPZ still has the highest recovery accuracy of all the algorithms, agreeing with the interferometric measurement to 0.0300 λ (0.1885 rad) RMS.


Fig. 9. The general sample. (a) The ZYGO result; (b) summary of computed differences between the different algorithms and the ZYGO result; (c1)-(f1) phase reconstructed by the four algorithms; (c2)-(f2) comparison of the four reconstructions with the interferometric data; (c3)-(f3) amplitude reconstructed by the four algorithms.


5. Discussion and conclusion

Although the proposed methods have been discussed extensively, both numerically and experimentally, some interesting issues remain. As shown in Fig. 9, the retrieved amplitude morphology is similar to the phase; we believe this phenomenon is driven by incomplete wavefront reconstruction, and its cause will be explored in the future. In this paper, guided by the prior constraint of the interferometer data, appropriate polynomial terms can be chosen to represent the wavefront errors. Since there is a linear relationship between the peak power spectral density (PSD) and the Zernike order [29], the possibility of adaptive modal selection based on PSD constraints will also be explored in the future.

In this paper, we develop two kinds of nonlinear optimization algorithms that reconstruct amplitude and phase simultaneously and verify their accuracy and stability through comparative experiments. Compared with the interferometer data, the reconstruction accuracy for a simple wavefront is close to ${\lambda}/{100}$ RMS, while that for a general wavefront is better than ${\lambda}/{30}$ RMS. In addition, a quantization error correction algorithm is proposed to avoid the errors caused by sensor quantization. We believe that the simultaneous phase-and-amplitude reconstruction methods and the quantization error correction algorithm have broad application prospects in optical testing and imaging.

Funding

Science Challenge Project (TZ2016006-0502-02); China Academy of Engineering Physics (ZD18005).

Disclosures

The authors declare no conflicts of interest.

References

1. Y. Geng, J. Tan, C. Guo, C. Shen, W. Ding, S. Liu, and Z. Liu, “Computational coherent imaging by rotating a cylindrical lens,” Opt. Express 26(17), 22110–22122 (2018). [CrossRef]  

2. X. He, H. Tao, Z. Jiang, Y. Kong, and C. Liu, “Single-shot optical multiple-image encryption by jointly using wavelength multiplexing and position multiplexing,” Appl. Opt. 59(1), 9–15 (2020). [CrossRef]  

3. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

4. A. M. Michalko and J. R. Fienup, “Verification of transverse translation diverse phase retrieval for concave optical metrology,” Opt. Lett. 43(19), 4827–4830 (2018). [CrossRef]  

5. C. Zuo, Q. Chen, and A. Asundi, “Boundary-artifact-free phase retrieval with the transport of intensity equation: fast solution with use of discrete cosine transform,” Opt. Express 22(8), 9220–9244 (2014). [CrossRef]  

6. C. Zuo, Q. Chen, H. Li, W. Qu, and A. Asundi, “Boundary-artifact-free phase retrieval with the transport of intensity equation II: applications to microlens characterization,” Opt. Express 22(15), 18310–18324 (2014). [CrossRef]  

7. J. R. Fienup, “Phase Retrieval Algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

8. J. R. Fienup, “Phase-retrieval algorithms for a complicated optical system,” Appl. Opt. 32(10), 1737–1746 (1993). [CrossRef]  

9. D. Zhang, S. Xu, N. Liu, and X. Wang, “Detecting wavefront amplitude and phase using linear phase diversity,” Appl. Opt. 56(22), 6293–6299 (2017). [CrossRef]  

10. S. M. Jefferies, M. Lloyd-Hart, E. K. Hege, and J. Georges, “Sensing wave-front amplitude and phase with phase diversity,” Appl. Opt. 41(11), 2095–2102 (2002). [CrossRef]  

11. F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation,” Phys. Rev. A 75(4), 043805 (2007). [CrossRef]  

12. F. Zhang, B. Chen, G. R. Morrison, J. Vila-Comamala, M. Guizar-Sicairos, and I. K. Robinson, “Phase retrieval by coherent modulation imaging,” Nat. Commun. 7(1), 13367 (2016). [CrossRef]  

13. H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscope: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004). [CrossRef]  

14. G. R. Brady, M. Guizar-Sicairos, and J. R. Fienup, “Optical wavefront measurement using phase retrieval with transverse translation diversity,” Opt. Express 17(2), 624–639 (2009). [CrossRef]  

15. S. Echeverri-Chacón, R. Restrepo, C. Cuartas-Vélez, and N. Uribe-Patarroyo, “Vortex-enhanced coherent-illumination phase diversity for phase retrieval in coherent imaging systems,” Opt. Lett. 41(8), 1817–1820 (2016). [CrossRef]  

16. D. Zhang, X. Zhang, S. Xu, N. Liu, and L. Zhao, “Simplified Phase Diversity algorithm based on first-order Taylor expansion,” Appl. Opt. 55(28), 7872–7877 (2016). [CrossRef]  

17. G. R. Brady and J. R. Fienup, “Nonlinear optimization algorithm for retrieving the full complex pupil function,” Opt. Express 14(2), 474–486 (2006). [CrossRef]  

18. J. Fang and D. Savransky, “Amplitude and phase retrieval with simultaneous diversity estimation using expectation maximization,” J. Opt. Soc. Am. A 35(2), 293–300 (2018). [CrossRef]  

19. J. W. Goodman, Introduction to Fourier optics (Roberts and Company Publishers, 2005).

20. C. Rydberg and J. Bengtsson, “Efficient numerical representation of the optical field for the propagation of partially coherent radiation with a specified spatial and temporal coherence function,” J. Opt. Soc. Am. A 23(7), 1616–1625 (2006). [CrossRef]  

21. R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. 66(3), 207–211 (1976). [CrossRef]  

22. D. B. Moore and J. R. Fienup, “Wavefront Sensing by Phase and Modal Amplitude Retrieval,” in Imaging and Applied Optics, OSA Technical Digest (online) (Optical Society of America, 2013), paper OTu1A.4.

23. G. Pedrini, W. Osten, and Y. Zhang, “Wave-front reconstruction from a sequence of interferograms recorded at different planes,” Opt. Lett. 30(8), 833–835 (2005). [CrossRef]  

24. Z. Liu, C. Guo, J. Tan, Q. Wu, L. Pan, and S. Liu, “Iterative phase-amplitude retrieval with multiple intensity images at output plane of gyrator transforms,” J. Opt. 17(2), 025701 (2015). [CrossRef]  

25. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]  

26. C. Zuo, J. Sun, J. Li, A. Asundi, and Q. Chen, “Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography,” Opt. Lasers Eng. 128, 106003 (2020). [CrossRef]  

27. G. W. Forbes, “Fitting freeform shapes with orthogonal bases,” Opt. Express 21(16), 19061–19081 (2013). [CrossRef]  

28. J. R. Fienup, J. C. Marron, T. J. Schulz, and J. H. Seldin, “Hubble space telescope characterized by using phase-retrieval algorithms,” Appl. Opt. 32(10), 1747–1767 (1993). [CrossRef]  

29. B. H. Dean and C. W. Bowers, “Diversity selection for phase-diverse phase retrieval,” J. Opt. Soc. Am. A 20(8), 1490–1504 (2003). [CrossRef]  



Figures (9)

Fig. 1. Experimental configuration for wavefront measurement. The collimated beam emitted from a ZYGO interferometer illuminates the plate and aperture stop. $U_0$: the wavefront of illumination. $U_{total}$: the wavefront behind the plate. AS: aperture stop. PL: plate with wavefront error. CL: condensing lens. CCD: charge-coupled device.

Fig. 2. The influence of quantization error for different algorithms. Suffix ‘-0’ indicates no quantization error correction; suffix ‘-1’ indicates that quantization error is corrected.

Fig. 3. Summary of 60 numerical experiments to verify the quantization error correction algorithm.

Fig. 4. Numerical reconstructions for the simple wavefront. (a1) and (a2) are the ground truth; (b1) and (b2) are the reconstruction residual errors for the six algorithms, respectively; (c1) and (c2) are the RMSE convergence curves.

Fig. 5. Summary of 60 numerical experiments for the simple wavefront. (a) The residual error phase retrieved with different algorithms; (b) the residual error amplitude retrieved with different algorithms.

Fig. 6. Summary of 60 numerical experiments for the general wavefront. (a1) and (a2) are examples of the tested phase and amplitude, respectively; (b1) is the residual phase retrieved with different algorithms; (b2) is the residual amplitude retrieved with different algorithms.

Fig. 7. Systematic error calibration. The phase of the reconstruction (top) and the amplitude of the reconstruction (bottom).

Fig. 8. The simple sample. (a) The ZYGO result; (b) summary of computed differences between different algorithms and the ZYGO result; (c1) ∼ (f1) phase reconstructed by the four algorithms; (c2) ∼ (f2) comparison of the four algorithms' reconstructions with interferometric data; (c3) ∼ (f3) amplitude reconstructed by the four algorithms.

Fig. 9. The general sample. (a) The ZYGO result; (b) summary of computed differences between different algorithms and the ZYGO result; (c1) ∼ (f1) phase reconstructed by the four algorithms; (c2) ∼ (f2) comparison of the four algorithms' reconstructions with interferometric data; (c3) ∼ (f3) amplitude reconstructed by the four algorithms.

Tables (1)

Table 1. The parameter settings for all algorithms

Equations (22)


$$U_0(x,y) = A_0(x,y)\exp[i\varphi_0(x,y)],$$
$$U_{plate}(x,y) = \exp[i\varphi_{plate}(x,y)],$$
$$U_{total}(x,y) = A_0(x,y)\exp[i\varphi_{total}(x,y)],$$
$$\varphi_{total}(x,y) = \varphi_0(x,y) + \varphi_{plate}(x,y).$$
$$g(x,y) = U_{total}(x,y)\exp\!\left(-ik\,\frac{x^2+y^2}{2f}\right),$$
$$G_j(\mu,\nu) = \mathcal{P}[g(x,y),\,\Delta z_j],$$
where $\mathcal{P}[\,\cdot\,,\Delta z_j]$ denotes scalar diffraction propagation to the plane at defocus distance $\Delta z_j$.
$$I_{cal}^{j} = G_j(\mu,\nu)\,G_j^{*}(\mu,\nu),$$
$$E_j = \sum_{\mu,\nu} W_j(\mu,\nu)\left[\hat{I}_{cal}^{j}(\mu,\nu) - \hat{I}_{mea}^{j}(\mu,\nu)\right]^2,$$
$$\frac{\partial E_j}{\partial \theta_j} = 2\,\mathrm{Im}\!\left[g_j^{*}(x,y)\,g_j^{w}(x,y)\right],$$
$$\frac{\partial E_j}{\partial a_j} = 2\,\mathrm{Re}\!\left[g_j^{*}(x,y)\,g_j^{w}(x,y)\right],$$
$$g_j^{w}(x,y) = \mathcal{P}^{-1}\!\left[G_j^{w}(\mu,\nu)\right],$$
$$G_j^{w}(\mu,\nu) = W_j(\mu,\nu)\left[G_j^{M}(\mu,\nu) - |G_j(\mu,\nu)|\right]\frac{G_j(\mu,\nu)}{|G_j(\mu,\nu)|},$$
$$G_j^{M}(\mu,\nu) = |F_j(\mu,\nu)|\,\frac{|G_j(\mu,\nu)|^{2M}\big/\max\!\left(|G_j(\mu,\nu)|^{2M}\right)}{|G_j(\mu,\nu)|^{2}\big/\max\!\left(|G_j(\mu,\nu)|^{2}\right)},$$
$$|F_j(\mu,\nu)| = \sqrt{M\,\hat{I}_{mea}^{j}}.$$
$$\varphi(x,y) = \sum_{s=1}^{S}\alpha_s Z_s(x,y),$$
$$\frac{\partial E_j}{\partial \alpha_{j,s}^{\theta}} = 2\,\mathrm{Im}\!\left[\sum_{x,y} g_j^{*}(x,y)\,Z_s(x,y)\,g_j^{w}(x,y)\right].$$
$$\frac{\partial E_j}{\partial \alpha_{j,s}^{a}} = \frac{2\,\mathrm{Re}\!\left[\sum_{x,y} g_j^{*}(x,y)\,Z_s(x,y)\,g_j^{w}(x,y)\right]}{\sum_{x,y}|Z_s(x,y)|}.$$
$$|g_i| = |g_{i-1}| + h_{amp}\left(\frac{\partial E_j}{\partial a_{j,i}} + \omega\,\frac{\partial E_j}{\partial a_{j,i-1}}\right),$$
$$\theta_i(x,y) = \theta_{i-1}(x,y) + h_{phase}\sum_{s=1}^{S}\left(\frac{\partial E_j}{\partial \alpha_{j,s,i}^{\theta}} + \sigma\,\frac{\partial E_j}{\partial \alpha_{j,s,i-1}^{\theta}}\right)Z_s(x,y),$$
$$\theta_i(x,y) = \theta_{i-1}(x,y) + h_{phase}\left(\frac{\partial E_j}{\partial \theta_{j,i}} + \sigma\,\frac{\partial E_j}{\partial \theta_{j,i-1}}\right),$$
$$g_i = |g_i|\exp[i\,\theta_i(x,y)],$$
$$\varphi_{plate}(x,y) = \theta_{i-1}(x,y) - \varphi_0(x,y).$$
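As a rough numerical illustration of the point-by-point scheme, the sketch below runs plain gradient descent on an intensity error metric of the same form as above, with analytic gradients obtained from a back-propagated weighted field. It is a simplified stand-in, not the paper's algorithm: a unitary FFT replaces the defocus propagator, the weighting $W_j$ and the momentum terms ($\omega$, $\sigma$) are omitted, and the field, step size, and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32

# Synthetic ground-truth complex field and its "measured" intensity
# (a unitary FFT stands in for the defocus propagator).
a_true = 1.0 + 0.1 * rng.standard_normal((n, n))
theta_true = 0.2 * rng.standard_normal((n, n))
g_true = a_true * np.exp(1j * theta_true)
I_mea = np.abs(np.fft.fft2(g_true, norm="ortho")) ** 2

def error_and_grads(a, theta):
    """Error metric E = sum(|G|^2 - I_mea)^2 and its analytic gradients
    w.r.t. point-by-point amplitude and phase, computed through a
    back-propagated weighted field (cf. the g^w construction above,
    up to constant factors and the weighting)."""
    g = a * np.exp(1j * theta)
    G = np.fft.fft2(g, norm="ortho")
    D = np.abs(G) ** 2 - I_mea
    gw = np.fft.ifft2(D * G, norm="ortho")   # back-propagated weighted field
    dE_da = 4.0 * np.real(np.exp(-1j * theta) * gw)
    dE_dtheta = 4.0 * np.imag(np.conj(g) * gw)
    return np.sum(D ** 2), dE_da, dE_dtheta

a, theta = np.ones((n, n)), np.zeros((n, n))
E0 = error_and_grads(a, theta)[0]
h = 1e-5                                     # small fixed step, no momentum
for _ in range(200):
    _, dE_da, dE_dtheta = error_and_grads(a, theta)
    a -= h * dE_da
    theta -= h * dE_dtheta
print(error_and_grads(a, theta)[0] < E0)     # → True (error metric decreases)
```

With a sufficiently small step, each update reduces the intensity error metric; the paper's full method adds defocus diversity over multiple planes $\Delta z_j$, the momentum terms, and the quantization-corrected magnitudes.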