A moveable lens is used to determine the amplitude and phase on the object plane. The extended fractional Fourier transform is introduced to describe single-lens imaging, and we put forward a fast convolution-based algorithm for this transform. Combined with a parallel iterative phase retrieval algorithm, it is applied to reconstruct the complex amplitude of the object. Compared with inline holography, our method is simple and easy to implement. Because no oversampling operation is needed, the computational load is lower. The proposed method is also more accurate than the direct focusing measurement for imaging small objects.
© 2016 Optical Society of America
Wavefront reconstruction plays a vital role in many technical and scientific applications, such as optical metrology and imaging. The various ways of wavefront reconstruction can be categorized into two groups: (1) methods with a reference beam (interferometry) and (2) methods without a reference beam (phase retrieval) [1, 2]. Phase retrieval addresses the inverse problem of determining the phase distribution of a complex-valued function from some modulus data and additional a priori information about the optical system, such as wavelength and path length. It has been applied successfully in X-ray crystallography [3], astronomy [4] and coherent light microscopy [2, 5].
The iterative phase retrieval algorithm was created by Gerchberg and Saxton [6]. It then evolved into the Yang-Gu and Fienup algorithms [7–9], which optimize the convergence process. A feedback control was introduced into the Gerchberg-Saxton phase retrieval algorithm. This type of method is constrained by the condition that the test object is either phase-only or amplitude-only. As a synthesis, multiple-image phase retrieval [10–15] was invented to obtain more accurate convergence in the iterative computing procedure. It is composed of several units, for each of which the computation of light propagation is the same; the outcomes of these units are then integrated to accelerate the convergence of the iteration. A multi-stage algorithm has been invented by Rodrigo et al. [12], with serial computing applied to phase retrieval in gyrator transform domains and Fresnel diffraction. As a class of effective strategies, such retrieval procedures [11, 12] can overcome the convergence stagnation of the Gerchberg-Saxton method [6]. They alter the rotation angles of cylindrical lenses to generate multiple intensity patterns corresponding to different transform parameters. Alternatively, the illumination wavelength can be tuned to realize the serial calculation. Another serial computing structure, named the single-beam multiple-intensity reconstruction (SBMIR) algorithm, has also been reported, and an implementation based on a focus-tunable lens has been proposed.
As the other integration type of multiple-image phase retrieval, a parallel phase retrieval algorithm has been presented as the amplitude-phase retrieval (APR) technique, which has been discussed in gyrator transform domains [16] and fractional Fourier transform domains [17]. For the phase retrieval schemes in gyrator transform domains [11, 16], recording the output images is inconvenient in experiment because of the manipulation of thin cylindrical lenses.
In this paper, we put forward an easy measurement scheme for amplitude and phase obtained by moving a single lens along the optical axis of the system. The extended fractional Fourier transform (eFrFT) [18] is employed to describe the optical process. The recorded images are imported into an iterative phase retrieval algorithm to determine the intensity and phase distributions at the object plane. Here a convolution equation is derived for the computation of the discrete eFrFT. As another possible choice, a two-step Fresnel diffraction with the phase modulation of the lens is also calculated and contrasted with the eFrFT in phase retrieval to verify the convolution scheme. Finally, the advantages of our scheme over the focusing measurement are demonstrated and analyzed. Compared with the coherent diffractive imaging (CDI) described by Shechtman et al. [19], our work extends to the visible wavelength regime and avoids the problem of non-existing lenses for hard X-rays. The oversampling process (support constraint) is not required, which also differs from APR in Fresnel domains. Thus the computational load can be lessened.
2. Measurement scheme

Between an object plane at distance $d_1$ in front of a thin lens of focal length $f$ and an observation plane at distance $d_2$ behind it, the field transformation is the eFrFT, which can be written with the ray-transfer-matrix elements $A = 1 - d_2/f$, $B = d_1 + d_2 - d_1 d_2/f$ and $D = 1 - d_1/f$ as

$$u_{\mathrm{out}}(x)=\frac{1}{\sqrt{i\lambda B}}\int u_{\mathrm{in}}(x_0)\exp\!\left[\frac{i\pi}{\lambda B}\left(Ax_0^2-2xx_0+Dx^2\right)\right]\mathrm{d}x_0. \quad (1)$$

Using the identity $-2xx_0=(x-x_0)^2-x^2-x_0^2$, Eq. (1) can be converted into a convolution formula

$$u_{\mathrm{out}}(x)=q_D(x)\left\{\left[u_{\mathrm{in}}\,q_A\right]*h\right\}(x), \quad (2)$$

with $q_A(x)=\exp[i\pi(A-1)x^2/(\lambda B)]$, $q_D(x)=\exp[i\pi(D-1)x^2/(\lambda B)]$ and $h(x)=\exp[i\pi x^2/(\lambda B)]/\sqrt{i\lambda B}$.

The functions $q_A$, $q_D$ and $h$ are phase-only (up to the constant factor $1/\sqrt{i\lambda B}$). Mathematically, the inverse transform of the eFrFT follows from Eq. (2) by conjugating the chirp factors and the kernel. The fast algorithm of the eFrFT can be designed by applying the FFT to Eq. (2). Here the Fourier transform of the kernel $h$ is calculated analytically as

$$H(u)=\mathcal{F}\{h\}(u)=\exp\!\left(-i\pi\lambda B u^2\right), \quad (3)$$

so that

$$u_{\mathrm{out}}=q_D\,\mathcal{F}^{-1}\!\left\{\mathcal{F}\{u_{\mathrm{in}}\,q_A\}\,H\right\}, \quad (4)$$

which is evaluated with two FFTs according to Eqs. (1)-(4).
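The fast convolution evaluation described above (chirp multiplication, FFT, multiplication by the analytically known transfer function, inverse FFT, second chirp) can be sketched numerically as follows. This is a minimal sketch under the ABCD parametrization of the single-lens system, not the paper's code; all function and variable names are our assumptions.

```python
import numpy as np

def lens_system_convolution(u_in, wl, d1, d2, f, dx):
    """Single-lens propagation as chirp * (Fresnel-type convolution) * chirp.

    wl: wavelength; d1, d2: object/image-side distances; f: focal length;
    dx: sampling pitch. Assumes B != 0 (non-imaging configuration).
    """
    # ABCD matrix of free space d1, thin lens f, free space d2
    A = 1 - d2 / f
    B = d1 + d2 - d1 * d2 / f
    D = 1 - d1 / f
    n = u_in.shape[0]
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    r2 = X**2 + Y**2
    chirp_in = np.exp(1j * np.pi * (A - 1) * r2 / (wl * B))   # q_A
    chirp_out = np.exp(1j * np.pi * (D - 1) * r2 / (wl * B))  # q_D
    # Convolve with the kernel exp(i*pi*r^2/(wl*B)) via FFT; its Fourier
    # transform is known analytically as a pure phase (transfer function).
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wl * B * (FX**2 + FY**2))
    u = np.fft.ifft2(np.fft.fft2(u_in * chirp_in) * H)
    return chirp_out * u
```

Because every factor is phase-only, the transform is unitary, so the total energy of the field is conserved; this gives a quick sanity check of an implementation.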
In Fig. 1, when coherent light illuminates the SLM, it carries the object-plane information through the lens and forms a diffraction field behind it. We keep the object plane and the image plane fixed and move the lens along the optical axis. The corresponding intensities of the diffraction patterns are recorded by the CCD and stored in the computer for further data processing. During initialization, the phase of the object plane can be taken as a constant, considering the uniform illumination of the laser. In the APR algorithm, the amplitude and phase at the object plane are updated for the next loop by averaging the computed data. Inline holography needs a beam splitter and a recording medium, and a precise reconstruction system is also required to determine the object information. Compared with it, the proposed measurement system is simple and easy to operate.
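The parallel averaging update described above (enforce each measured modulus in its own transform domain, propagate back, and average) might be sketched as follows. The interface and names are hypothetical; each lens position is represented by a forward/inverse transform pair, such as the eFrFT for that geometry and its inverse.

```python
import numpy as np

def apr_reconstruct(intensities, propagators, inverses, n_iter=200):
    """Sketch of a parallel amplitude-phase retrieval (APR) loop.

    intensities: measured |u_k|^2 patterns, one per lens position.
    propagators/inverses: paired forward/backward transforms for each
    lens position (assumed interface, e.g. the eFrFT and its inverse).
    """
    shape = intensities[0].shape
    # Uniform-illumination start: unit amplitude, constant phase
    est = np.ones(shape, dtype=complex)
    for _ in range(n_iter):
        updates = []
        for I, fwd, inv in zip(intensities, propagators, inverses):
            u = fwd(est)
            # Modulus constraint: keep the phase, impose the measured amplitude
            u = np.sqrt(I) * np.exp(1j * np.angle(u))
            updates.append(inv(u))
        # Parallel update: average the back-propagated estimates
        est = np.mean(updates, axis=0)
    return est
```

With a single identity "propagator" the modulus constraint recovers the amplitude in one pass, which makes the averaging rule easy to unit-test.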
3. Results and discussion
In our study, the working wavelength of the laser is chosen as 632.8 nm and the focal length is 200 mm. The physical side length of the square test images is fixed at 5 mm; it is changed only in subsection 3.5.
3.1 Effects of support constraint size
The oversampling method, an effective way of implementing various phasing algorithms, pads the magnitude data with zeros; the zero-valued padding region mathematically acts as a support constraint on the original image [20, 22].
The 256 × 256 binary resolution chart and the 256 × 256 Elaine image (see Fig. 2) serve as the object function. Both images are surrounded by a dark (zero-valued) border to simulate images with a support constraint. The width of the dark margin is w, measured in pixels and proportional to the size of the padding region (Fig. 2(a)). The resulting images are employed as input to the above algorithm. Here the object distance is sequentially taken as 50 mm, 70 mm, 90 mm, 110 mm and 130 mm. Accordingly, the image distance is calculated by the Gaussian lens formula $1/d_1 + 1/d_2 = 1/f$.
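As an illustration of this setup, the image distance for a given object distance and the dark-margin padding can be generated as below; the helper names are ours, not the paper's.

```python
import numpy as np

def image_distance(d_o, f=0.2):
    """Conjugate distance from the Gaussian lens formula 1/d_o + 1/d_i = 1/f.

    Note: an object distance inside the focal length yields a negative
    (virtual-image) value.
    """
    return 1.0 / (1.0 / f - 1.0 / d_o)

def pad_support(img, w):
    """Surround the image with a dark margin of w pixels (support constraint)."""
    return np.pad(img, w, mode="constant", constant_values=0)
```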
At the top of Fig. 3, the five intensity images used for the Elaine reconstruction are displayed, followed by the retrieved results after 1500 iterations with different support constraint sizes, where w is equal to 0 or 20. It can be directly perceived that the reconstructions in Fig. 3(b) are quite close to the original resolution chart. Considering that some pixel values in the resolution chart are zero and thus themselves act as a support constraint, the Elaine picture, whose pixel values range from 28 to 241, is also used as input; a similar result can be seen in Fig. 3(c). Quantitatively, the normalized correlation coefficient (NCC) is calculated to evaluate the similarity between the test image and the reconstructed complex amplitude. First, their covariance matrix can be defined as
In terms of convergence speed, the mean square error (MSE) between the model and the reconstruction of the Elaine image is plotted as a function of the iteration number in Fig. 4. Here the MSE is defined as
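Assuming standard definitions (the paper's exact normalizations may differ), the two figures of merit can be computed as:

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two real-valued arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

def mse(a, b):
    """Mean square error between model and reconstruction."""
    return float(np.mean((a - b) ** 2))
```

The NCC is invariant under affine rescaling of the data, whereas the MSE is sensitive to it, which is why the two metrics complement each other when judging reconstructions.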
Therefore, it can be concluded that the reconstruction accuracy is unaffected by the support constraint size, but an increasing constraint size slows down the convergence. Thus, when our scheme is applied, adding the support constraint is not recommended. This is an outstanding advantage over the existing retrieval method. Images with a loose support are typically more challenging to reconstruct. Our scheme does not require pre-processing the measured data with an artificial support constraint, which simplifies the operation and lessens the computational load. In light of this, the oversampling setting is dropped in what follows and only the binary resolution chart is tested.
3.2 Effects of sequential measurement distance
The distance by which the lens is displaced each time also has an impact on the reconstruction quality, because the improvement during iterations depends on the variation of the intensity, which in turn depends on this distance. If the distance is very small, the intensity distributions do not change substantially, which means the frequency-domain constraints will not work well. The discussion in this part determines the minimum sequential measurement distance that still yields satisfactory reconstructions. Here, the median value is kept at 100 mm, and the object distances of the 5 measurements are distributed symmetrically on both sides of 150 mm according to the corresponding interval. The image distance is calculated by the Gaussian lens formula.
Figure 5 displays the plot of the NCC as a function of the measurement distance after 500 iterations. It can be observed that the quality of the retrieved image increases as the distance grows. If the correlation threshold is set at 0.99, a measurement distance of 21.7 mm can be taken as the minimum value needed to obtain a satisfactory reconstruction for this particular object wavefront.
With regard to convergence speed, Fig. 6 suggests that the larger the measurement distance is, the more rapidly the APR algorithm converges. The curves with a distance less than 21.7 mm stagnate at a high MSE value, indicating that when the measurement distance is too small, additional iterations cannot improve the reconstruction quality but only entail longer processing time. The other two curves keep decreasing. Significantly, the measurement distance should not be too large either, considering the limited detector sensing area in practice: unacceptable reconstructions result if the main part of the measured pattern outgrows the CCD.
3.3 Effects of number of intensity measurements
Due to various experimental factors and phase randomization, the appropriate number of intensity measurements for a satisfactory wavefront reconstruction is not easy to pin down. If the number is small, the reconstruction will be poor; if it is large, the computing time will surge. Hence the intention of this discussion is to approximate the appropriate number of intensity measurements. Here, the object distance is taken according to Table 1, and the corresponding image distance is calculated by the Gaussian lens formula.
From Fig. 7(a), it can be observed that utilizing more measured intensity images does indeed speed up the convergence. The curves representing 6 and 8 measurements keep decreasing while the other two curves stagnate at an early stage. Evidently, the slope of the 8-measurement curve is much steeper than that of the 6-measurement one. The final MSE values that the curves reach also indicate that retrieval from 8 measurements is the best.
Figure 7(b) shows a matrix of representative reconstructions revealing the effect of the number of intensity measurements on the reconstruction quality as the iteration proceeds. Moving from top to bottom, the number of intensity measurements rises from 2 to 8 in steps of 2; moving from left to right, the number of iterations increases from 100 to 500 in steps of 100. It is notable that the top two rows, which use fewer than 5 measurements, do not yield acceptable retrieved results even after 500 iterations: the reconstructed images are blurry and noisy. For the quantitative analysis, the reconstructions are correlated with the original object, and the results are labeled in red above each reconstruction. In conclusion, taking the computation time into consideration, 8 measurements is a proper choice for satisfactory retrieval in this discussion.
3.4 Comparison of eFrFT and two-step Fresnel transform
As demonstrated in Section 2, single-lens imaging can be seen as an eFrFT from the viewpoint of information optics. Discussed in the framework of wave optics, the same process consists of two Fresnel transforms separated by the phase modulation of the single lens. Naturally, we want to compare these two methods. Here the object distance is sequentially taken as 90 mm, 120 mm, 150 mm, 180 mm and 210 mm, and the image distance is determined by the Gaussian lens formula.
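For comparison, the two-step Fresnel analysis (propagate to the lens, apply the thin-lens phase factor, propagate to the detector) can be sketched with the transfer-function method; the sampling parameters and names below are illustrative, not from the paper.

```python
import numpy as np

def fresnel_step(u, wl, z, dx):
    """Fresnel propagation over distance z via the transfer-function method."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function (paraxial approximation), a pure phase
    H = np.exp(1j * 2 * np.pi * z / wl) * np.exp(-1j * np.pi * wl * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def two_step_lens(u, wl, d1, d2, f, dx):
    """Propagate d1 to the lens, apply the thin-lens phase, propagate d2."""
    n = u.shape[0]
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    lens = np.exp(-1j * np.pi * (X**2 + Y**2) / (wl * f))
    return fresnel_step(fresnel_step(u, wl, d1, dx) * lens, wl, d2, dx)
```

Each step applies only phase factors, so the two-step analysis, like the eFrFT convolution, conserves the total field energy; the difference lies in the number of FFT passes and the accumulated discretization error.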
Figure 8 gives a comparison of the two methods' reconstructions after 500 iterations. It is hard to compare their quality with the naked eye, so the MSE is calculated and labeled under the two reconstructions, which shows that the image retrieved by the eFrFT is better than the two-step Fresnel transform one. Besides, it can be observed that the two convergence curves both resemble an inclined, distorted letter 'w': they descend slowly during the initial iterations, plunge after an inflection point, and end up close to a slanted line. For the eFrFT curve, however, the inflection point comes earlier, so its final converged MSE value is smaller. This shows the eFrFT's superiority over the two-step Fresnel transform for the same number of iterations. The finding may be attributed to their systematic difference: the latter involves two propagation steps whereas the former is a single transform, introducing less accumulated error. Moreover, the computational load of the eFrFT convolution scheme is lower than that of the two-step Fresnel transform.
3.5 Comparison between focusing and defocusing
Traditionally, a scaled intensity distribution can be detected on the plane conjugate to the object plane when the Gaussian lens formula $1/d_1 + 1/d_2 = 1/f$ is satisfied; we refer to this as the direct focusing measurement (DFM), and to the proposed scheme using multiple defocused images as the multiple defocusing measurement (MDM).
Figure 9(a) illustrates the measured amplitude distribution error curves as the physical side length of the observed object increases from 1 mm to 6 mm. The curve corresponding to the MDM always lies below the DFM curve. The smaller MSE of the MDM curve indicates that the image reconstructed by our scheme is closer to the ground truth than the direct intensity detection result. For example, when the side length is 3 mm, the distribution of the error between the amplitude data reconstructed by the MDM and the ground truth is displayed in Fig. 9(c); its average is $10^{-15}$, much smaller than the DFM counterpart in Fig. 9(b). These results suggest that our scheme outperforms the DFM in intensity reconstruction when the observed object is small. In addition to the intrinsic properties of phase retrieval, our scheme thus shows great potential for microscopic imaging.
4. Conclusion

We present a reconstruction method using the APR algorithm in eFrFT domains. A fast eFrFT algorithm is given by convolution. The effects of different parameters on the reconstruction quality are investigated, and the results can be summarized as follows: (1) the reconstruction accuracy of our scheme does not rely on the support constraint size, in contrast with the APR performance in Fresnel domains; (2) the quality of the reconstructed images improves significantly as the measurement interval increases within a reasonable range, considering the practical CCD size; (3) the reconstruction quality is enhanced with an increasing number of intensity measurements. The convolution scheme of the eFrFT outperforms the two-step Fresnel transform analysis in reconstruction accuracy and computational load. The APR measurement with multiple defocused images is superior to the traditional direct focusing measurement in terms of phase retrieval and intensity reconstruction for small objects.
The oversampling process can be dropped, which simplifies the calculation. Besides, the optical structure has several other advantages, such as simple experimental implementation, easy operation and controllable diffraction patterns. These features make our method quite promising. We anticipate that it will find application in coherent diffraction imaging, especially for micro-size objects.
This work was supported by the National Natural Science Foundation of China (NSFC) (Nos. 61377016, 61575055 and 61575053), the Fundamental Research Funds for the Central Universities (No. HIT.BRETIII.201406), the Program for New Century Excellent Talents in University (No. NCET-12-0148), the China Postdoctoral Science Foundation (Nos. 2013M540278 and 2015T80340), and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, China. The authors are indebted to the reviewers for their helpful suggestions and comments.
References and links
3. R. P. Millane, “Phase retrieval in crystallography and optics,” J. Opt. Soc. Am. A 7(3), 394–411 (1990). [CrossRef]
4. J. C. Dainty and J. R. Fienup, “Phase retrieval and image reconstruction for astronomy,” in Image Recovery: Theory and Application, H. Stark, ed. (Academic, 1987), 231–275.
5. J. Miao, P. Charalambous, J. Kirz, and D. Sayre, “Extending the methodology of X-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens,” Nature 400(6742), 342–344 (1999). [CrossRef]
6. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik (Jena) 35, 237–246 (1972).
7. V. Soifer, V. Kotlyar, and L. Doskolovich, Iterative Methods for Diffractive Optical Elements Computation (Taylor & Francis, 1997).
8. G. Z. Yang, B. Z. Dong, B. Y. Gu, J. Y. Zhuang, and O. K. Ersoy, “Gerchberg-Saxton and Yang-Gu algorithms for phase retrieval in a nonunitary transform system: a comparison,” Appl. Opt. 33(2), 209–218 (1994). [CrossRef] [PubMed]
10. G. Cheng, C. Wei, J. Tan, K. Chen, S. Liu, Q. Wu, and Z. Liu, “A review of iterative phase retrieval for measurement and encryption,” Opt. Lasers Eng., doi:. [CrossRef]
12. J. A. Rodrigo, T. Alieva, G. Cristóbal, and M. L. Calvo, “Wavefield imaging via iterative retrieval based on phase modulation diversity,” Opt. Express 19(19), 18621–18635 (2011). [CrossRef] [PubMed]
16. Z. Liu, C. Guo, J. Tan, Q. Wu, L. Pan, and S. Liu, “Iterative phase amplitude retrieval from multiple images in gyrator domains,” J. Opt. 17, 025701 (2015). [CrossRef]
17. C. Guo, J. Tan, and Z. Liu, “Precision influence of a phase retrieval algorithm in fractional Fourier domains from position measurement error,” Appl. Opt. 54(22), 6940–6947 (2015). [CrossRef] [PubMed]
18. J. Hua, L. Liu, and G. Li, “Extended fractional Fourier transforms,” J. Opt. Soc. Am. A 14(12), 3316–3322 (1997). [CrossRef]
19. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging,” IEEE Signal Process. Mag. 32(3), 87–109 (2015). [CrossRef]
20. Z. Liu, C. Shen, J. Tan, and S. Liu, “A recovery method of double random phase encoding system with a parallel phase retrieval,” IEEE Photonics J. 8(1), 7801807 (2016). [CrossRef]
21. A. W. Lohmann, “Image rotation, Wigner rotation, and the fractional Fourier transform,” J. Opt. Soc. Am. A 10(10), 2181–2186 (1993). [CrossRef]