Optica Publishing Group

Single-shot phase reconstruction based on beam splitting encoding and averaging

Open Access

Abstract

Coherent modulation imaging (CMI) can effectively improve the convergence performance of coherent diffraction imaging by introducing a pre-characterized wave modulator. However, traditional CMI algorithms suffer from a low signal-to-noise ratio (SNR) owing to the insufficient information redundancy inherent in a single diffraction pattern. Additionally, a modulator with a small basic pitch is preferred for its strong modulation capability; however, such a modulator is difficult to fabricate and to characterize given the limited aperture size of the detector. To overcome these obstacles, this study proposes a revised CMI algorithm based on beam splitting encoding and averaging. A diffraction pattern array was recorded after the incident wave was split by a grating and simultaneously modulated by a weak scattering modulator. This approach differs from previous grating-based single-shot phase retrieval algorithms because the diffraction array is not segmented but used integrally during the iteration process, which in theory preserves diffraction-limited resolution. Additionally, an averaging process was employed in the image plane of the object to improve the SNR significantly. The performance of the revised algorithm was demonstrated by simulations and experiments; it can be applied as a universal single-shot phase retrieval algorithm in various fields, offering fast convergence and a high SNR.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Coherent diffraction imaging (CDI) is a lens-free phase imaging technique that can obtain the complex amplitude information of a beam [1,2]. Compared with interference methods, the optical path of CDI is simple, stable, and suitable for single-shot phase reconstruction. It is widely used in biological cell detection [3,4], chemistry [5,6], materials [7,8], plasma [9,10], ultra-fast phase retrieval [11,12], and other fields. However, achieving excellent convergence performance and a high signal-to-noise ratio (SNR) remains the primary challenge in improving the resolution of CDI. Therefore, in 2004, Rodenburg et al. proposed ptychography [13,14], which uses an overlap-scan method to retrieve the phase of the wavefront from multiple diffraction patterns. This method has high redundancy; therefore, it can achieve high resolution, low noise, and fast convergence. However, it needs to scan multiple diffraction patterns, which is not suitable for single-shot experiments. Pan et al. proposed a type of single-shot ptychography with grating splitting [15], and Oren Cohen et al. proposed one with a pinhole array and a 4f system [16]. These methods require only a single exposure but obtain multiple diffraction patterns by dividing the recorded pattern on the detector plane, which reduces the resolution because the spatial frequency is cut off.

To achieve single-shot phase retrieval, another strategy is to introduce wavefront modulation. The representative algorithm is the coherent modulation imaging (CMI) technique proposed by Zhang [17,18], which adds a random modulation plate to the traditional CDI optical path. The complex amplitude distribution of the modulation plate is measured in advance, and the method achieves fast convergence of phase retrieval under single-shot conditions. The algorithm initially used a 0-π binary step random phase as the modulator. Owing to the high structural density of this type of modulation plate, there are infinitely many high-order diffraction terms. The goal of single-shot wavefront reconstruction can be achieved; however, high-precision calibration under the condition of a limited detection aperture is difficult. In addition, the loss of strong high-frequency information also leads to significant reconstruction noise [19], which is a key issue currently faced by CMI. Xiaoliang He et al. proposed a dual-detector scheme to reduce the CMI noise level [20]; the noise reduction is obvious, but two cameras are required and pattern-matching errors are introduced. As mentioned above, in CMI, the spatial-spectral characteristics of the binary modulation structure limit the final SNR. Therefore, Pan et al. proposed a CMI algorithm with a continuous weak modulation structure [19], which uses a continuous phase plate with a limited spatial-spectral width as the modulator. To guarantee a strong modulation process, a zero-frequency saturated diffraction pattern is recorded and combined with the convolution characteristics of beam diffraction, so that the weak modulator is equivalent to a strong modulation structure with a limited frequency spectrum.
This ensures fast convergence while improving the SNR under single-shot conditions; however, the method has low photon-utilization efficiency and may damage the detector because of the strong zero-order diffraction pattern. In addition, multiple weak modulations are also an important idea for improving the SNR of CMI. Zhenfei He et al. used a cascaded configuration in the algorithm [21]. Weaker modulation plates can be used in this method, but more modulation plates need to be introduced, which introduces more calibration errors. Xi He et al. proposed the use of grating beam splitting, in which sub-beams pass through different positions of the modulation plate [22,23]. This method needs to divide the diffraction pattern on the detector plane, and it can realize the phase retrieval of multiple patterns in a single shot. However, similar to single-shot ptychography, this technique sacrifices spatial resolution to obtain high SNR and single-shot wavefront reconstruction.

Aiming at the contradictory demands of convergence, SNR, and resolution faced by current CMI technology, this study proposes a phase retrieval technique based on beam splitting encoding and averaging (BSEA). The wavefront to be measured is divided into multiple beams by a grating and then irradiated onto different positions of a weak modulator (element size greater than 100µm), and the emitted beam array is simultaneously recorded by a single detector. Unlike diffraction-pattern segmentation algorithms, BSEA processes the pattern array as a whole without a spatial segmentation step. Therefore, the limit resolution is close to the diffraction limit. In addition, a wavefront amplitude averaging process on the specimen or image plane is equivalent to a single wavefront undergoing multiple modulation processes. This not only ensures convergence under a single shot but also effectively improves the SNR without sacrificing the effective numerical aperture of the detector. The method guarantees, in theory, diffraction-limited resolution and has important application prospects in fields that require single-shot measurement.

2. Method

The optical path of the revised CMI algorithm based on BSEA is shown in Fig. 1. The setup consists of the specimen, grating, encoding plate, and detector. The spatial coordinates corresponding to each plane are expressed as $({x_0},{y_0})$, $(x,y)$, $(x^{\prime},y^{\prime})$, and $(X,Y)$, and the distances between adjacent planes are LSG, LGE, and LED, respectively. The complex amplitude of the wavefront before the specimen is $P({x_0},{y_0})$, and the transmittance function of the specimen is $O({x_0},{y_0})$. The wavefront distribution on the front of the grating is calculated as:

$$G(x,y) = \Im [{P({x_0},{y_0}) \cdot O({x_0},{y_0}),{L_{SG}}} ]$$

In formula (1), $\Im [{\cdot ,L} ]$ represents the forward propagation operation of the beam over a distance L. The propagation is realized by the angular spectrum kernel. The wavefront $G(x,y)$ is split into M sub-beams with different angles after passing through a two-dimensional grating, and the m-th sub-beam becomes $G(x,y) \cdot {e^{j\overrightarrow k \cdot \overrightarrow {{r_m}} }}$, where j is the imaginary unit, $\overrightarrow k $ is the wave vector, and ${\overrightarrow r _m}$ is the direction vector. The wavefront in front of the encoding plate can be calculated by the following formula:
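As an illustration, the angular spectrum propagation used for $\Im [{\cdot ,L} ]$ can be sketched in Python; the function name and sampling parameters below are our own and not from the paper:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a square complex field by `distance` (metres) with the
    angular-spectrum kernel; `pitch` is the sampling interval (metres).
    Evanescent components are suppressed. Illustrative sketch."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)            # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * distance) * (arg > 0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

Because the transfer function has unit modulus for propagating components, the operation is energy conserving and invertible by propagating over `-distance`.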

$${E_m}(x^{\prime},y^{\prime}) = \Im [{G(x,y) \cdot {e^{j\overrightarrow k \cdot \overrightarrow {{r_m}} }},{L_{GE}}} ]$$

Fig. 1. Basic scheme of revised CMI based on BSEA.

The transmittance function of the encoding plate is expressed as: $T(x^{\prime},y^{\prime})$, and the modulated wavefront after the encoding plate is ${\varphi _m}(x^{\prime},y^{\prime}) = {E_m}(x^{\prime},y^{\prime}) \cdot T(x^{\prime},y^{\prime})$. Since each sub-beam has a different angle to the optical axis, the modulation position of the corresponding encoding plate is also different. The amplitude of diffraction pattern on the detector is expressed as:

$${D_m}(X,Y) = \Im [{{\varphi_m}(x^{\prime},y^{\prime}),{L_{ED}}} ]$$

The light intensity of the diffracted pattern recorded by the detector is expressed by the following formula:

$$I(X,Y) = \sum\limits_{m = 1}^M {{{|{{D_m}(X,Y)} |}^2}} $$
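Formulas (1)-(4) together define the forward model of the recorded intensity. A minimal sketch in Python, with an inline angular-spectrum helper and illustrative names (the grating tilt of each order m is represented by a linear-phase slope pair, an assumption of this sketch):

```python
import numpy as np

def _propagate(field, wavelength, pitch, distance):
    # Angular-spectrum kernel; evanescent components are dropped.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance) * (arg > 0))

def forward_model(G, T, tilts, wavelength, pitch, L_GE, L_ED):
    """Simulate the intensity of formula (4): each sub-beam acquires a tilt
    phase from the grating, propagates to the encoding plate T, is modulated,
    propagates to the detector, and the intensities are summed."""
    n = G.shape[0]
    x = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(x, x)
    I = np.zeros((n, n))
    for kx, ky in tilts:                                     # grating orders m
        tilted = G * np.exp(1j * (kx * X + ky * Y))
        E = _propagate(tilted, wavelength, pitch, L_GE)      # formula (2)
        D = _propagate(E * T, wavelength, pitch, L_ED)       # formula (3)
        I += np.abs(D) ** 2                                  # formula (4)
    return I
```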

The iteration process of BSEA involves four planes, as shown in Fig. 2: the focal plane, the average plane (specimen plane), the encoding plate plane, and the detector plane. The corresponding spatial coordinates are represented by $({u_0},{v_0})$, $(u,v)$, $(x^{\prime},y^{\prime})$, and $(X,Y)$, and the distances between the focal plane, the average plane, the encoding plate plane, and the detector plane are LFA, LAE, and LED, respectively. The grating in the optical path divides the wavefront of the specimen into multiple sub-beams, and no operation is performed on the grating plane during the iterative process. To improve the convergence and reduce the noise, corresponding constraints are added on the four planes. Constraint 1: a hole constraint is employed in the focal plane, and the hole function is expressed as the following formula:

$${H_m}({u_0},{v_0},{a_m},{b_m}) = \left\{ \begin{array}{ll} 1 &{({{u_0} - {a_m}} )^2} + {({{v_0} - {b_m}} )^2} < {R^2}\\ 0 &{({{u_0} - {a_m}} )^2} + {({{v_0} - {b_m}} )^2} \ge {R^2} \end{array} \right.$$
where the radius R becomes larger as the number of iterations K increases, $R = {R_0} + K/2$, ${R_0}$ is the initial radius, and ${a_m},{b_m}$ are the spatial focal positions corresponding to the sub-beams, as shown in the focal plane in Fig. 2. In the illustration, f1, f2, f3, and f4 are the focal points corresponding to the sub-beams. Constraint 2: The average plane is generally selected on the specimen plane or the imaging plane of the specimen. The specimens corresponding to different sub-beams are translated to the same position. In Fig. 2, ${\overrightarrow d _m}$ represents the translation vector, and the amplitudes of the sub-beams are then averaged. This step makes the amplitude distribution of each sub-beam the same, so the noise can be effectively suppressed. Constraint 3: The distribution of the encoding plate is binary phase modulation with a random phase distribution, and there is a certain angle between the sub-beams; therefore, the sub-beams correspond to different positions of the encoding plate. As shown on the encoding plate in Fig. 2, this is equivalent to multiple modulations, which effectively improves the modulation ability of the weak scattering modulator. Constraint 4: The detector records the light intensity distribution of the diffraction pattern; the estimated wavefront amplitude is replaced with its square root, leaving the phase distribution unchanged.
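Constraints 1 and 2 are simple to state in code. A sketch with illustrative names (radii and centers in pixels):

```python
import numpy as np

def hole_constraint(F, center, R):
    """Constraint 1, formula (5): keep the field inside a circle of radius R
    (pixels) around `center`, zero it outside. Elsewhere in the loop the
    radius grows as R = R0 + K/2 with iteration count K."""
    n = F.shape[0]
    U, V = np.meshgrid(np.arange(n), np.arange(n))
    mask = (U - center[0]) ** 2 + (V - center[1]) ** 2 < R ** 2
    return F * mask

def average_amplitude(beams):
    """Constraint 2: replace each co-registered sub-beam's amplitude by the
    mean amplitude over all sub-beams, keeping each beam's own phase."""
    mean_amp = np.mean([np.abs(b) for b in beams], axis=0)
    return [mean_amp * np.exp(1j * np.angle(b)) for b in beams]
```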

Fig. 2. Beam splitting encoding and averaging schematic diagram.

The iterative flow chart of the revised CMI based on BSEA is shown in Fig. 3. The initial random non-zero guess $Egues{s_m}(x^{\prime},y^{\prime})$ is the wavefront on the front of the encoding plate, which is used as the input of the iteration. S1 and S2 are calculated to obtain the complex amplitude distribution ${D_m}(X,Y)$ of the wavefront on the detector. Combined with the light intensity distribution $I(X,Y)$ recorded by the detector, S3 updates the amplitude of the guessed wavefront on the detector. S4 is the error function, and the iteration ends when the error falls below a preset threshold. S5 and S6 return the updated wavefront of the detector plane to the encoding plate plane and use the update formula to obtain the wavefront distribution in front of the encoding plate, $E{^{\prime}_m}(x^{\prime},y^{\prime})$, where $\chi $ is a constant in the range (0,1). S7 and S8 propagate the beam to the focal plane and apply the hole constraint according to formula (5), thereby obtaining the revised wavefront distribution $F{^{\prime}_m}({u_0},{v_0})$. S9 propagates the wavefront to the average plane, while S10 and S11 translate the sub-beams to the same position and then average the amplitudes. In S12, the averaged sub-beams are back-propagated to their original positions with the original phase values unchanged, giving the averaged wavefront distribution $Q(u,v)$. S13 obtains the updated wavefront distribution ${E_m}(x^{\prime},y^{\prime})$ and completes one iteration. The wavefront distribution of the specimen plane or the detector plane is then obtained through beam propagation, according to the measurement requirements.
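Steps S1-S6 and S13 can be sketched for a single sub-beam (M = 1). The paper does not spell out the S6 update formula, so the ePIE-style update below is our assumption; the focal-plane hole constraint and averaging steps S7-S12 are omitted for brevity:

```python
import numpy as np

def propagate(field, wavelength, pitch, distance):
    # Angular-spectrum kernel; evanescent components are dropped.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance) * (arg > 0))

def bsea_iteration(E_guess, T, I, wavelength, pitch, L_ED, chi=0.9):
    """One simplified iteration for a single sub-beam (sketch)."""
    phi = E_guess * T                                          # S1: modulate
    D = propagate(phi, wavelength, pitch, L_ED)                # S2: to detector
    err = np.sum((np.abs(D) ** 2 - I) ** 2) / np.sum(I ** 2)   # S4: error metric
    D = np.sqrt(I) * np.exp(1j * np.angle(D))                  # S3: amplitude swap
    phi_new = propagate(D, wavelength, pitch, -L_ED)           # S5: back to plate
    E_new = E_guess + chi * np.conj(T) / (np.abs(T).max() ** 2) * (phi_new - phi)  # S6 (assumed)
    return E_new, err
```

With the true wavefront as input, the measured intensity is reproduced exactly and the update leaves the estimate unchanged, which is a useful sanity check for the loop.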

Fig. 3. Flow chart of BSEA.

3. Simulation

The validity of the revised algorithm was first demonstrated by simulations. The average plane can be selected on the imaging plane, and the optical path is shown in Fig. 4. An imaging lens is added between the specimen plane and the grating plane to image the specimen to a position near the front of the encoding plate. The distances between the specimen, lens, and grating are LSL and LLG, respectively. The ground truth of the specimen is shown in Figs. 5(a) and (b); the diameter of the specimen is 900$\mathrm{\mu m}$, and the phase value range is [−1,1]. The wavelength of the simulated laser is 532nm, and the specimen is irradiated with parallel light. The focal length and aperture of the lens are 75mm and 50.8mm, respectively. The grating is a 3×3 two-dimensional Dammann grating with a diameter of 12.7mm and a beam splitting separation angle of 0.5°. Owing to this separation angle, the optical path can completely separate the images of the sub-beams. The element size of the encoding plate is 100$\mathrm{\mu m}$, as shown in Fig. 6(a), where the inset is a partially enlarged view; the phase value is a binary random distribution of 0-π, the amplitude value is 1, and the encoding plate fills the entire matrix. The pixel size of the detector is 6.5×6.5µm. The distances LSL, LLG, LGE, and LED are set to 100mm, 10mm, 330mm, and 50mm, respectively.

Fig. 4. BSEA imaging optical path.

Fig. 5. Ground truth of the specimen. (a) amplitude. (b) phase. (The unit of color bar is in radians. The scale bar in (a) is applicable to (b).)

Fig. 6. Retrieved results of the complex amplitude in the three situations. (a) is the phase distribution of the encoding plate. (b) is the diffraction pattern of situation 1. (c) is the diffraction pattern of situations 2 and 3. (d)-(f) are the amplitude distributions retrieved in situations 1-3. (g)-(i) are the phase distributions retrieved in situations 1-3. (The unit of color bar is in radians. The scale bar in (d) is applicable to (e)-(i).)

The simulation compared the results in three situations. Situation 1: phase retrieval without the grating; Situation 2: phase retrieval with the grating but without the averaging algorithm; Situation 3: phase retrieval with the grating and the averaging algorithm. The diffraction pattern obtained by the detector in situation 1 is shown in Fig. 6(b), where the inset is the intensity distribution at the focal point during the algorithm iteration. The intensity distributions of the diffraction pattern and focal point corresponding to situations 2 and 3 are the same, as shown in Fig. 6(c) and its inset. Owing to the grating, both the diffraction pattern and the focal spot are divided into nine areas. Before the iteration, the diffraction pattern was quantized to a 16-bit range (a maximum of 65535 counts). To be consistent with actual experimental noise, quantization noise was added to the diffraction pattern: Noise = round(rand(n)·200), where n is the size of the calculated square matrix. The different iterative algorithms were used to retrieve the amplitude [Figs. 6(d-f)] and phase distribution [Figs. 6(g-i)] of the specimen in the three situations. Situations 1, 2, and 3 correspond to Figs. 6(d)(g), (e)(h), and (f)(i), respectively. The retrieved image in situation 2, Figs. 6(e)(h), is clearer than that in situation 1, Figs. 6(d)(g); therefore, the convergence performance is improved after adding the grating. The resolution of the retrieved image in situation 3, Figs. 6(f)(i), is higher than that in situation 2, Figs. 6(e)(h), so the averaging algorithm effectively suppresses noise. In the iterative process, the error value of each iteration is calculated, and the variation curves of the error with the number of iterations in the three situations are shown in Fig. 7. It can be seen from the curves that the final iterative error values of situations 1, 2, and 3 decrease successively, and the convergence speed increases successively.
The inset in Fig. 7 is an enlarged view of the first 200 iterations, from which it can be seen that the error of situation 3 drops below 0.6 earliest. The calculations were performed on a GPU (NVIDIA Tesla K40c). After the iterations, the errors of situations 1, 2, and 3 are 0.79, 0.61, and 0.46, respectively; the error of situation 3 is thus 1.7 times lower than that of situation 1. This proves that adding the grating and averaging algorithms can effectively improve the convergence performance and SNR of phase reconstruction for single-shot measurement.
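The quantization-plus-noise model used in the simulation above can be sketched as follows; the function and parameter names are illustrative:

```python
import numpy as np

def quantize_with_noise(I, rng=None, full_scale=65535, noise_level=200):
    """Quantize a simulated diffraction pattern to a 16-bit count scale and
    add the uniform noise of the text, Noise = round(rand(n)*200)."""
    rng = np.random.default_rng(rng)
    counts = np.round(I / I.max() * full_scale)
    return counts + np.round(rng.random(I.shape) * noise_level)
```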

Fig. 7. Error curve with the number of iterations in the three situations.

4. Experiment

The optical path used in the experiment is the same as the simulated optical path, as shown in Fig. 4. The laser used in the experiment is monochromatic light with a wavelength of 532nm, the diameter of the specimen is 700µm, and the distance between the specimen and the lens is LSL = 100mm. The aperture of the lens is 50.8mm, and its focal length is 75mm. This geometry magnifies the specimen three times, forming the image 300mm behind the lens; the distance between the lens and the grating is 10mm. The grating is a 3×3 two-dimensional Dammann grating with an aperture of 12.7mm and a beam splitting separation angle of 0.5°. The distance between the Dammann grating and the encoding plate is 330mm, and the encoding plate is a phase modulation plate with a 0-π binary random distribution. The encoding plate is fabricated by etching, and its optical aperture is large enough for its influence to be ignored. The element size of the encoding plate is 100µm, and the distance between the encoding plate and the detector is 48mm. The pixel size of the detector is 6.5×6.5µm, the number of pixels is 2048×2048, and the dynamic range is 16bit.

The amplitude and phase of the encoding plate used in the experiment are shown in Figs. 8(a) and (b), respectively, and were obtained by the ptychographic iterative engine method. First, experimental verification was carried out with a biological specimen; the diffraction pattern and focus intensity distribution are shown in Figs. 9(a) and (b), respectively. Owing to the beam splitting of the grating, the diffraction pattern and focus intensity each contain nine areas. The intensity distribution on the average plane is shown in Fig. 9(c); the specimens at the nine positions must be translated to the same position for the averaging operation. The image of the biological specimen recorded by the detector at the image plane is shown in Fig. 9(d), and the amplitude and phase of the biological specimen obtained by BSEA are shown in Figs. 9(e) and (f), respectively. Comparing the amplitudes of Fig. 9(d) and Fig. 9(e), the resolutions of the BSEA method and the imaging method are close, and Fig. 9(f) shows that the phase of the biological specimen can be clearly retrieved. Second, to study the resolution of amplitude and phase, further experiments compare the reconstruction results of an amplitude-type resolution plate (USAF1951) and a phase-type resolution plate in the three situations defined in the simulation.

Fig. 8. Amplitude (a) and phase (b) of the encoding plate in experiment. (The unit of color bar is in radians. The scale bar in (a) is applicable to (b).)

Fig. 9. Imaging and reconstruction results of a biological specimen with BSEA. (a) is the diffraction pattern recorded by the detector. (b) is focus intensity distribution obtained by BSEA algorithm. (c) is amplitude distribution of the average plane. (d) is the imaging amplitude distribution of specimen. (e) and (f) are the amplitude and phase distributions of the specimen obtained by BSEA. (The unit of color bar is in radians. The scale bar in (d) is applicable to (e)-(f).)

For the amplitude-type resolution plate, the diffraction pattern and focus intensity distribution of situation 1 are shown in Fig. 10(a) and its inset, respectively. The diffraction pattern and focus intensity of situations 2 and 3 are the same, as shown in Fig. 10(b) and its inset. Because of the Dammann grating, the diffraction pattern has nine regions, and the diffraction patterns in each region partially overlap. The focus distribution is also divided into nine regions, corresponding to the sub-beams. During iteration, the amplitude on the average plane in situation 3 is shown in Fig. 10(c). There are nine specimens with the same amplitude, which are translated to the same position for averaging.

Fig. 10. USAF 1951 reconstruction results in three situations. (a) is the diffraction pattern of situation 1. (b) is the diffraction pattern of situations 2 and 3. (c) is amplitude distribution of the average plane. (d)-(f) are the amplitude distributions retrieved in situations 1-3. (g)-(i) are the enlarged images of blue boxes in situations 1-3, and the one-dimensional curves corresponding to the blue lines.

The reconstruction results of the amplitude-type USAF 1951 test target in situations 1, 2, and 3 are shown in Figs. 10(d), (e), and (f), respectively, while the enlarged images of the blue box area are shown in Figs. 10(g), (h), and (i), with the one-dimensional intensity curves corresponding to the blue lines given below them. In the enlarged images and curves, the resolution and SNR of situations 1, 2, and 3 improve in turn. The advantages of situation 2 over situation 1 are, first, that the interference among the diffraction patterns of the sub-beams acts as an additional modulation, improving the convergence performance; and second, that the sub-beams arrive at different positions on the detector plane, improving detector utilization so that more high-frequency information is recorded. In situation 3, the averaging algorithm is added on top of situation 2, which can effectively suppress noise. From Figs. 10(d), (e), and (f), it can be seen that the noise decreases in turn. To compare the resolution limits in the three situations, the modulation transfer function (MTF) curves corresponding to the first element of the fifth group (5,1) to the fifth element of the sixth group (6,5) are plotted, as shown in Fig. 11. For the standard USAF1951 target, the MTF in this article is expressed as: $MTF = {{({{I_{\max }} - {I_{\min }}} )} / {({{I_{\max }} + {I_{\min }}} )}}$. For situation 3 and situation 1, the finest elements with MTF above 0.4 are the third element of the sixth group (6,3) and the second element of the fifth group (5,2), corresponding to spatial resolutions of 6.20µm and 13.92µm, respectively. Therefore, the spatial resolution is increased by 2.2 times.
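The MTF definition above and the standard USAF-1951 element spacing can be checked with a short sketch (function names are ours; the lp/mm formula is the standard target convention):

```python
import numpy as np

def mtf(profile):
    """Contrast of a line profile across a bar group, as defined in the text:
    MTF = (I_max - I_min) / (I_max + I_min)."""
    i_max, i_min = float(np.max(profile)), float(np.min(profile))
    return (i_max - i_min) / (i_max + i_min)

def usaf_line_width_um(group, element):
    """Single-line width (half period) of a USAF-1951 element in micrometres;
    the resolution in line pairs per mm is 2**(group + (element - 1)/6)."""
    lp_per_mm = 2.0 ** (group + (element - 1) / 6.0)
    return 1000.0 / lp_per_mm / 2.0
```

Evaluating `usaf_line_width_um(6, 3)` and `usaf_line_width_um(5, 2)` reproduces the 6.20µm and 13.92µm figures quoted in the text.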

Fig. 11. MTF curve of USAF 1951 corresponding to situations 1, 2 and 3.

For the phase-type resolution plate, the diffraction pattern and focus intensity distribution of situation 1 are shown in Fig. 12(a) and its inset, where there is only one focus without the grating. The diffraction pattern and focus of situations 2 and 3 are shown in Fig. 12(b) and its inset, respectively; the focus changes from one spot to nine, corresponding to the sub-beams of the Dammann grating. The phase distribution on the average plane, after subtracting the background phase, is shown in Fig. 12(c); the nine regions correspond to the nine sub-beams of the grating. After iterative calculation, the phase distributions corresponding to situations 1, 2, and 3 are shown in Figs. 12(d), (e), and (f). The difference between the average values of the p1 and p2 areas is 0.94 rad, which is close to the designed value (0.9 rad), with a relative error of 4.4%, or a phase resolution of 3.4 nm. The one-dimensional curves corresponding to the blue lines in Figs. 12(d)-(f) are shown in Figs. 12(g), (h), and (i). It can be seen from the images and curves that the noise of situations 1, 2, and 3 is reduced in turn; therefore, adding the grating and averaging algorithms can effectively suppress noise.

Fig. 12. Phase-type resolution plate reconstruction results in three situations. (a) is the diffraction pattern of situation 1. (b) is the diffraction pattern of situations 2 and 3. (c) is phase distribution of the average plane. (d)-(f) are the phase distributions retrieved in situations 1-3. (g)-(i) are the one-dimensional curves corresponding to the blue lines in situations 1-3. (The unit of color bar is in radians.)

5. Discussion

The resolution of the system is restricted by the apertures of the devices. The schematic of the resolution analysis is shown in Fig. 13, where the distances between the specimen, grating, encoding plate, and detector are LSG, LGE, and LED, respectively. The diameter of the specimen is $\phi $, the beam splitting angle of the grating is $\theta $, and the diameter of the grating is dG. The aperture of the encoding plate is considered large enough. Although the encoding plate does not cut off the information of the incident light, it broadens the spatial spectrum. The amounts of spatial spectrum broadening in the x and y directions are expressed as:

$$\left\{ \begin{array}{l} \Delta {\xi _{Ex}} = \frac{1}{{\Delta {p_x}}} = \frac{{\sin {\beta _x}}}{\lambda }\\ \Delta {\xi _{Ey}} = \frac{1}{{\Delta {p_y}}} = \frac{{\sin {\beta _y}}}{\lambda } \end{array} \right. ,$$
where $\Delta {\textrm{p}_x}$ and $\Delta {\textrm{p}_\textrm{y}}$ represent the element size of the encoding plate in the x and y directions, while ${\beta _x}$ and ${\beta _\textrm{y}}$ represent the increased divergence angles of the encoding plate in the x and y directions, respectively. The photosensitive area of the detector is $Dx \times Dy$. The cut-off frequencies of grating and detector in the optical path are expressed by the following formula:
$$\textrm{Grating cut-off frequency 1:} \quad {\xi _G} = \frac{{{d_G}}}{{2\lambda {L_{SG}}}} \approx \frac{{\sin \alpha }}{\lambda }, $$
$$\textrm{Detector cut-off frequency 1:} \quad \left\{ \begin{array}{l} {\xi_{Dx}} = \frac{{\frac{3}{2}Dx - 2{L_{ED}}\cdot \sin {\beta_x}}}{{2\lambda ({L_{SG}} + {L_{GE}} + {L_{ED}})}}\\ {\xi_{Dy}} = \frac{{\frac{3}{2}Dy - 2{L_{ED}}\cdot \sin {\beta_y}}}{{2\lambda ({L_{SG}} + {L_{GE}} + {L_{ED}})}} \end{array} \right. ,$$
where $\alpha $ is the diffraction angle of the light emitted from the specimen. To select the specimen plane as the average plane, the requirement is ${L_{SG}} > {\phi / {\sin \theta }}$. The grating produces multiple diffraction patterns at different positions on the detector plane. As shown in the inset of Fig. 13, the center of pattern a1 is at the upper-left corner of the photosensitive area, and the longest extents that can be recorded in the x and y directions are ${3 / 4}Dx$ and ${3 / 4}Dy$, as shown by the blue arrows in the inset of Fig. 13. In the iterative process, the diffraction pattern is not divided; since the diffraction patterns of the grating sub-beams lie at the edge of the detector, the edge patterns sacrifice high-frequency recording space on one side to obtain more high-frequency recording space on the other side. By shifting the diffraction patterns of the sub-beams of the grating, the equivalent photosensitive area of the detector is obtained, as shown in Fig. 14. The red box is the real photosensitive size of the detector, and the blue box is the equivalent photosensitive size. By comparison, the photosensitive area of the detector changes from the original $Dx \times Dy$ to ${3 / 2}Dx \times {3 / 2}Dy$, so the cut-off frequency of the detector becomes correspondingly larger. The encoding plate introduces spatial spectrum broadening, so formula (8) subtracts the broadening amount $2{L_{ED}}\cdot \sin {\beta _x}$.
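Formulas (7) and (8) translate directly into two small helpers (names and argument order are illustrative):

```python
def grating_cutoff_1(d_G, wavelength, L_SG):
    """Formula (7): xi_G = d_G / (2 * lambda * L_SG)."""
    return d_G / (2 * wavelength * L_SG)

def detector_cutoff_1(D, L_ED, sin_beta, wavelength, L_SG, L_GE):
    """Formula (8): the 3/2 equivalent detector width minus the spectral
    broadening of the encoding plate, over the total propagation distance."""
    return (1.5 * D - 2 * L_ED * sin_beta) / (2 * wavelength * (L_SG + L_GE + L_ED))
```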

Fig. 13. Schematic of resolution analysis without imaging lens.

Fig. 14. Schematic of expanding photosensitive range on detector.

The average plane can also be selected on the imaging plane, as shown in Fig. 15. An imaging lens is added between the specimen and the grating, where the diameter of the lens is a, the focal length is $f$, the diffraction angle at the imaging plane is $\gamma $, and the distances between the specimen, lens, grating, imaging plane, and encoding plate are LSL, LLG, LGA, and LAE, respectively. The magnification of the imaging system is $\eta = {f / {({{L_{SL}} - f} )}}$. To completely separate the sub-images on the imaging plane, the product of the distance from the grating to the imaging plane and $\tan \theta$ must be greater than the magnified specimen diameter, so the lengths LSL and LLG need to satisfy the following formula:

$$(\eta \cdot {L_{SL}} - {L_{LG}})\cdot \tan \theta > \eta \cdot \phi$$
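The separation condition of formula (9) is easy to check numerically. A sketch with the experimental values from Section 4 (function name is ours):

```python
import math

def images_fully_separated(L_SL, L_LG, f, theta_deg, phi):
    """Formula (9): (eta * L_SL - L_LG) * tan(theta) > eta * phi,
    with magnification eta = f / (L_SL - f). All lengths in metres."""
    eta = f / (L_SL - f)
    return (eta * L_SL - L_LG) * math.tan(math.radians(theta_deg)) > eta * phi
```

With L_SL = 100 mm, L_LG = 10 mm, f = 75 mm, θ = 0.5°, and a 700 µm specimen, the condition holds, consistent with the experiment in Section 4.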

The cut-off frequencies of the entrance pupil and exit pupil of the lens are:

$$\textrm{Entrance pupil cut-off frequency:} \quad {\xi _{Li}} = \frac{a}{{2\lambda {L_{SL}}}} \approx \frac{{\sin \alpha }}{\lambda },$$
$$\textrm{Exit pupil cut-off frequency:} \quad {\xi _{Lo}} = \frac{a}{{1.22\lambda f}},$$

Fig. 15. Schematic of resolution analysis with imaging lens.

Considering the lens and beam splitting, the formula (7) and formula (8) become:

$$\textrm{Grating cut-off frequency 2:} \quad {\xi _G} = \frac{{{d_G}}}{{2\lambda {L_{GA}}}}\cdot \eta $$
$$\textrm{Detector cut-off frequency 2:} \quad \left\{ \begin{array}{l} {\xi_{Dx}} = \frac{{\frac{3}{2}Dx - 2{L_{ED}}\cdot \sin {\beta_x}}}{{2\lambda ({L_{AE}} + {L_{ED}})}}\cdot \eta \\ {\xi_{Dy}} = \frac{{\frac{3}{2}Dy - 2{L_{ED}}\cdot \sin {\beta_y}}}{{2\lambda ({L_{AE}} + {L_{ED}})}}\cdot \eta \end{array} \right.$$

Owing to the magnification system, the corresponding spatial frequencies must be multiplied by the magnification factor $\eta $. In this situation, the system is subject to the combined constraints of formulas (6), (9) – (11). In the experiment of this paper, the imaging lens is added, and the limit resolutions determined by formulas (10) – (13) are calculated as 2.1µm, 0.958µm, 8.1µm, and 1.6µm, respectively. Therefore, the experiment is mainly limited by the aperture of the grating. The resolution of the experimental reconstruction is 6.20µm, which has reached the theoretical resolution limit. The aperture of the Dammann grating can be selected according to requirements to further improve the resolution.
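The four limit-resolution figures quoted above can be reproduced from formulas (10)-(13), taking the limit resolution as $1/\xi$; the distances L_GA = 290 mm and L_AE = 40 mm are inferred from the stated geometry (image 300 mm behind the lens, grating 10 mm behind the lens, encoding plate 330 mm behind the grating), and the variable names are ours:

```python
# Experimental parameters from the text (metres); sketch, not authoritative.
wavelength = 532e-9
eta = 3.0                        # magnification f / (L_SL - f)
a, f, L_SL = 50.8e-3, 75e-3, 100e-3
d_G, L_GA = 12.7e-3, 290e-3      # grating aperture; grating-to-image distance
Dx = 2048 * 6.5e-6               # detector width
L_AE, L_ED = 40e-3, 48e-3
sin_beta = wavelength / 100e-6   # broadening of the 100 um encoding plate, formula (6)

xi_Li = a / (2 * wavelength * L_SL)                                   # formula (10)
xi_Lo = a / (1.22 * wavelength * f)                                   # formula (11)
xi_G = d_G / (2 * wavelength * L_GA) * eta                            # formula (12)
xi_Dx = (1.5 * Dx - 2 * L_ED * sin_beta) / (2 * wavelength * (L_AE + L_ED)) * eta  # formula (13)

for name, xi in [("entrance pupil", xi_Li), ("exit pupil", xi_Lo),
                 ("grating", xi_G), ("detector", xi_Dx)]:
    print(f"{name}: {1e6 / xi:.2f} um")
```

Rounding the printed values recovers the 2.1µm, 0.958µm, 8.1µm, and 1.6µm limits quoted in the text, confirming that the grating aperture is the binding constraint.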

Based on the above analysis, the resolution of the system is determined primarily by two factors. First, the optical apertures of the devices can be increased; second, the distance between the specimen and the grating can be reduced, or a large-angle grating can be selected, to improve the resolution. The advantage of BSEA over strong modulation follows from formula (6): the divergence angle of the beam after the weak-modulation encoding plate is smaller, so the detector can record more high-frequency information. This improves the resolution of the system, reduces the calibration error, and improves the SNR. The field of view of BSEA must satisfy ${L_{SG}} > {\phi / {\sin \theta }}$ or formula (9), so the field of view is limited by the devices. In the future, it can be enlarged by increasing the aperture of the detector or the angle of the grating.
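The field-of-view bound ${L_{SG}} > {\phi / {\sin \theta }}$ rearranges into a maximum specimen diameter ${\phi _{\max }} = {L_{SG}}\sin \theta$. A minimal sketch with assumed, non-experimental values:

```python
import math

def max_specimen_diameter(L_SG, theta):
    """Largest phi satisfying L_SG > phi/sin(theta): phi_max = L_SG*sin(theta)."""
    return L_SG * math.sin(theta)

# Illustrative: grating 100 mm behind the specimen, 2 deg splitting angle.
phi_max = max_specimen_diameter(0.100, math.radians(2.0))
print(f"{phi_max * 1e3:.2f} mm")  # 3.49 mm
```

Consistent with the text, the field of view grows with either the specimen-to-grating distance or the grating angle.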

6. Conclusion

For a single-shot phase retrieval algorithm, convergence performance and SNR are the primary challenges in practical applications. Introducing a modulation plate is an effective strategy to address these problems. In a traditional CMI algorithm, a binary phase modulator with a small basic pitch is preferred to provide strong modulation and guarantee a fast convergence speed. However, strong scattering makes accurate pre-characterization difficult with an aperture-limited detector. In addition, strong modulation alone cannot guarantee a high SNR, because the information redundancy of a single diffraction pattern is insufficient. Consequently, to guarantee fast convergence and high SNR simultaneously, a revised CMI algorithm based on BSEA was proposed. A diffraction pattern array is recorded after introducing a grating, and fast convergence is maintained despite the use of a weak scattering modulator with a large basic pitch. Besides, the averaging process in the object plane during the iterations improves the SNR by introducing noise diversity among the retrieved sub-beams; in theory, the larger the number of sub-beams, the better the achievable SNR. Furthermore, since the diffraction pattern array is not divided, as it is in previous grating-based phase retrieval algorithms, diffraction-limited resolution is available in theory. In addition, a higher resolution that breaks through the limitation of the detector aperture is achievable, because higher frequency components can be recorded by the sub-diffraction patterns at the corners of the detector. As a universal single-shot phase retrieval algorithm, the revised CMI algorithm based on BSEA can be applied to a variety of fields with fast convergence speed and high SNR.

Funding

Research Instrument and Equipment Development Project of Chinese Academy of Sciences (YJKYYQ20180024); Strategic Priority Research Program of Chinese Academy of Sciences (XDA25020105, XDA25020306); National Natural Science Foundation of China (6190031304, 11875308).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. R. W. Gerchberg and W. O. Saxton, “A Practical Algorithm for the Determination of Phase from Image and Diffraction Plane Pictures,” Optik 35(2), 1–6 (1972).

2. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

3. D. Shapiro, P. Thibault, T. Beetz, V. Elser, M. Howells, C. Jacobsen, J. Kirz, E. Lima, H. Miao, A. M. Neiman, and D. Sayre, “Biological imaging by soft x-ray diffraction microscopy,” Proc. Natl. Acad. Sci. 102(43), 15343–15346 (2005). [CrossRef]  

4. L. Shemilt, E. Verbanis, J. Schwenke, A. K. Estandarte, G. Xiong, R. Harder, N. Parmar, M. Yusuf, F. Zhang, and I. K. Robinson, “Karyotyping human chromosomes by optical and x-ray ptychography methods,” Biophys. J. 108(3), 706–713 (2015). [CrossRef]  

5. M. Beckers, T. Senkbeil, T. Gorniak, M. Reese, K. Giewekemeyer, S. Gleber, T. Salditt, and A. Rosenhahn, “Chemical contrast in soft X-ray ptychography,” Phys. Rev. Lett. 107(20), 208101 (2011). [CrossRef]  

6. M. Kahnt, L. Grote, D. Brückner, M. Seyrich, F. Wittwer, D. Koziej, and C. G. Schroer, “Multi-slice ptychography enables high-resolution measurements in extended chemical reactors,” Sci. Rep. 11(1), 1500 (2021). [CrossRef]  

7. G. Weng, J. Yan, S. Chen, C. Zhao, H. Zhang, J. Tian, Y. Liu, X. Hu, J. Tao, S. Chen, Z. Zhu, H. Akiyama, and J. Chu, “Superior single-mode lasing in a self-assembly CsPbX3 microcavity over an ultrawide pumping wavelength range,” Photonics Res. 9(1), 54–65 (2021). [CrossRef]

8. Y. Takahashi, A. Suzuki, N. Zettsu, Y. Kohmura, K. Yamauchi, and T. Ishikawa, “Multiscale element mapping of buried structures by ptychographic x-ray diffraction microscopy using anomalous scattering,” Appl. Phys. Lett. 99(13), 131905 (2011). [CrossRef]  

9. A. Leblanc, S. Monchocé, C. Bourassin-Bouchet, S. Kahaly, and F. Quéré, “Ptychographic measurements of ultrahigh-intensity laser-plasma interactions,” Nat. Phys. 12(4), 301–305 (2016). [CrossRef]  

10. J. Feng, Y. Li, J. Wang, D. Li, C. Zhu, J. Tan, X. Geng, F. Liu, and L. Chen, “Optical control of transverse motion of ionization injected electrons in a laser plasma accelerator,” High Power Laser Sci. Eng. 9, e5–8 (2021). [CrossRef]  

11. P. Ding, Y. Yao, D. Qi, C. Yang, F. Cao, Y. He, J. Yao, C. Jin, Z. Huang, L. Deng, L. Deng, T. Jia, J. Liang, Z. Sun, and S. Zhang, “Single-shot spectral-volumetric compressed ultrafast photography,” Adv. Photonics 3(4), 045001 (2021). [CrossRef]  

12. C. Hu, Z. Du, M. Chen, S. Yang, and H. Chen, “Single-shot ultrafast phase retrieval photography,” Opt. Lett. 44(17), 4419–4422 (2019). [CrossRef]  

13. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004). [CrossRef]  

14. H. M. L. Faulkner and J. M. Rodenburg, “Movable Aperture Lensless Transmission Microscopy: A Novel Phase Retrieval Algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004). [CrossRef]  

15. X. Pan, C. Liu, and J. Zhu, “Single shot ptychographical iterative engine based on multi-beam illumination,” Appl. Phys. Lett. 103(17), 171105 (2013). [CrossRef]  

16. P. Sidorenko and O. Cohen, “Single-shot ptychography,” Optica 3(1), 9–14 (2016). [CrossRef]  

17. F. Zhang and J. M. Rodenburg, “Phase retrieval based on wave-front relay and modulation,” Phys. Rev. B 82(12), 121104 (2010). [CrossRef]  

18. F. Zhang, B. Chen, G. R. Morrison, J. Vila-Comamala, M. Guizar-Sicairos, and I. K. Robinson, “Phase retrieval by coherent modulation imaging,” Nat. Commun. 7(1), 13367 (2016). [CrossRef]  

19. X. Pan, C. Liu, and J. Zhu, “Phase retrieval with extended field of view based on continuous phase modulation,” Ultramicroscopy 204, 10–17 (2019). [CrossRef]

20. X. He, H. Tao, X. Pan, C. Liu, and J. Zhu, “High-quality laser beam diagnostics using modified coherent phase modulation imaging,” Opt. Express 26(5), 6239–6248 (2018). [CrossRef]  

21. Z. He, B. Wang, J. Bai, G. Barbastathis, and F. Zhang, “High-quality reconstruction of coherent modulation imaging using weak cascade modulators,” Ultramicroscopy 214, 112990 (2020). [CrossRef]

22. X. He, X. Pan, C. Liu, and J. Zhu, “Single-shot phase retrieval based on beam splitting,” Appl. Opt. 57(17), 4832–4838 (2018). [CrossRef]  

23. X. He, C. Liu, and J. Zhu, “On-line beam diagnostics based on single-shot beam splitting phase retrieval,” Chin. Opt. Lett. 16(9), 091001 (2018). [CrossRef]  


Equations (13)

Here $\mathrm{Prop}[\,\cdot\,,L]$ denotes free-space propagation over a distance $L$.

$$G(x,y) = \mathrm{Prop}[P(x_0,y_0)\cdot O(x_0,y_0),\,L_{SG}] \tag{1}$$
$$E_m(x,y) = \mathrm{Prop}[G(x,y)\cdot e^{jk r_m},\,L_{GE}] \tag{2}$$
$$D_m(X,Y) = \mathrm{Prop}[\varphi_m(x,y),\,L_{ED}] \tag{3}$$
$$I(X,Y) = \sum_{m=1}^{M} |D_m(X,Y)|^2 \tag{4}$$
$$H_m(u_0,v_0,a_m,b_m) = \begin{cases} 1, & (u_0 - a_m)^2 + (v_0 - b_m)^2 < R^2 \\ 0, & (u_0 - a_m)^2 + (v_0 - b_m)^2 > R^2 \end{cases} \tag{5}$$
$$\begin{cases} \Delta\xi_{Ex} = \dfrac{1}{\Delta p_x} = \dfrac{\sin\beta_x}{\lambda} \\ \Delta\xi_{Ey} = \dfrac{1}{\Delta p_y} = \dfrac{\sin\beta_y}{\lambda} \end{cases} \tag{6}$$
$$\textrm{Grating cut-off frequency 1:}\quad \xi_G = \frac{d_G}{2\lambda L_{SG}} \approx \frac{\sin\alpha}{\lambda} \tag{7}$$
$$\textrm{Detector cut-off frequency 1:}\quad \begin{cases} \xi_{Dx} = \dfrac{\frac{3}{2}D_x - 2L_{ED}\sin\beta_x}{2\lambda(L_{SG} + L_{GE} + L_{ED})} \\ \xi_{Dy} = \dfrac{\frac{3}{2}D_y - 2L_{ED}\sin\beta_y}{2\lambda(L_{SG} + L_{GE} + L_{ED})} \end{cases} \tag{8}$$
$$(\eta L_{SL} - L_{LG})\tan\theta > \eta\phi \tag{9}$$
$$\textrm{Entrance pupil cut-off frequency:}\quad \xi_{Li} = \frac{a}{2\lambda L_{SL}} \approx \frac{\sin\alpha}{\lambda} \tag{10}$$
$$\textrm{Exit pupil cut-off frequency:}\quad \xi_{Lo} = \frac{a}{1.22\lambda f} \tag{11}$$
$$\textrm{Grating cut-off frequency 2:}\quad \xi_G = \frac{d_G}{2\lambda L_{GA}}\cdot\eta \tag{12}$$
$$\textrm{Detector cut-off frequency 2:}\quad \begin{cases} \xi_{Dx} = \dfrac{\frac{3}{2}D_x - 2L_{ED}\sin\beta_x}{2\lambda(L_{AE} + L_{ED})}\cdot\eta \\ \xi_{Dy} = \dfrac{\frac{3}{2}D_y - 2L_{ED}\sin\beta_y}{2\lambda(L_{AE} + L_{ED})}\cdot\eta \end{cases} \tag{13}$$