
Single-shot ptychography with highly tilted illuminations

Open Access

Abstract

A single-shot ptychographic iterative engine (PIE) using highly tilted illumination is proposed to realize accurate phase retrieval from a single frame of multiple, non-overlapping sub-diffraction patterns generated by a set of laser beams propagating at large angles with respect to the optical axis. A non-paraxial reconstruction algorithm is developed to numerically propagate these highly tilted laser beams back and forth in the iterative computation. Faster data acquisition and higher reconstruction quality are achieved by recording non-overlapping sub-diffraction patterns in a single frame and by eliminating the reconstruction errors that normally arise from the paraxial approximation.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

As a recently developed coherent diffraction imaging (CDI) technique, the ptychographic iterative engine (PIE) has the significant advantages of a large field of view, high imaging quality and fast convergence [1–6]. Though it was initially developed for imaging with electron beams [3] and X-rays [4,5] to improve spatial resolution without high-quality optics, PIE research has expanded into the ultraviolet [6], visible [7–9] and terahertz [10] ranges and has been applied successfully to biological imaging [8], laser beam diagnostics [11], 3D imaging [12], optical element measurement [13] and stress detection [14]. Typical PIE scans a sample to many positions with respect to a localized illumination beam and records the corresponding diffraction patterns. The complex amplitudes of both the illuminating beam and the sample are then iteratively computed from the recorded diffraction pattern array using a pair of updating formulas [15] with fast convergence. Since PIE usually acquires tens or hundreds of diffraction frames, its data redundancy is high, and various improved algorithms have been developed to handle the positioning error of the translation stage [16], the incoherence of the illumination beam [17], the vibration of the imaging system [18], etc. PIE usually needs several seconds or minutes to acquire all of the diffraction patterns, depending on the speed of the translation stage, and is therefore not suitable for imaging dynamic events such as plasma generation [19] or live-cell observation [20]. As an alternative, single-shot PIE methods have been developed [21–27] to observe such dynamic objects by collecting multiple far-field diffraction patterns in a single camera snapshot using a cross grating, pinhole array, micro-lens array, spatial light modulator (SLM) or digital micro-mirror device (DMD). In these methods the sample is illuminated by a cluster of laser beams propagating in different directions, and the resulting multiple diffraction patterns are recorded in a single frame. Under the assumption of parallel illumination, the resolution of single-shot PIE can be approximated as $\Delta = \lambda d/D$ [24], where d is the distance between sample and detector and D is the interval between neighboring diffraction patterns on the detector. The size of each sub-diffraction pattern is inversely proportional to the size of the finest structure of the sample being illuminated. Thus, the spatial resolution of single-shot ptychography increases with the separation between the simultaneously generated diffraction patterns, which also means that the diffraction patterns should be distinctly isolated from each other on the detector. Currently available CCD cameras with large sensor chips are capable of recording many diffraction patterns. By adjusting the angle between adjacent illumination beams, it is possible to obtain large enough separations between diffraction patterns and consequently avoid any overlap between them while imaging samples with fine details. However, this makes the outermost illumination beams highly tilted with respect to the optical axis, so that they no longer satisfy the paraxial approximation, and it also leads to phase under-sampling in the numerical computation. This has limited the use of highly tilted illumination in single-shot PIE, even though it could improve image quality remarkably.
In an example calculation for an array of 11×11 illuminations, a 2° angular separation between adjacent illuminations gives the outermost illuminating beams a propagation angle of 10° with respect to the optical axis and consequently a phase ramp of about 1.73 rad/µm on both the sample and recording planes. Since this phase ramp cannot be directly resolved by most common detectors, each diffraction pattern has to be cropped out and shifted to the center of the computing matrix to avoid under-sampling of the phase. This shifting operation on each sub-diffraction pattern is essentially based on the paraxial approximation: a small change in the direction of a paraxial illumination results only in a transverse shift of the diffracted intensity pattern in the recording plane, without altering its structure. However, in single-shot PIE with highly tilted illumination, the outermost laser beams do not fulfill the paraxial assumption, and moving their diffraction patterns to the center of the computing matrix and running an iterative computation with a paraxial diffraction formula leads to an obvious degradation of the retrieved image quality. Developing a proper algorithm for the iterative computation of non-paraxial illuminating beams is therefore crucial to improve the reconstruction quality and convergence speed of single-shot PIE.
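As a quick numerical check of the figures quoted above, the short calculation below evaluates the transverse phase ramp of a beam tilted by 10°; the He-Ne wavelength of 0.6328 µm used later in this paper is assumed here, since the example above does not state one.

```python
import numpy as np

# Phase ramp of a plane wave tilted 10 deg from the optical axis.
# Assumption: lambda = 0.6328 um (the He-Ne wavelength used later in the paper).
wavelength = 0.6328                     # um
tilt = np.radians(10.0)                 # outermost beam of the 11x11 example

f_t = np.sin(tilt) / wavelength         # transverse spatial frequency, cycles/um
phase_ramp = 2 * np.pi * f_t            # rad/um on the sample and recording planes
pixel_limit = 1.0 / (2.0 * f_t)         # Nyquist pixel size needed to resolve it

print(f"phase ramp : {phase_ramp:.2f} rad/um")   # ~1.72 rad/um
print(f"pixel size : < {pixel_limit:.2f} um")    # ~1.8 um, far below common CCD pixels
```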

High-quality single-shot PIE imaging using highly tilted, non-paraxial laser beam illumination is proposed in this paper. The tilted illuminations avoid any overlap between neighboring diffraction patterns. A non-paraxial iterative computation algorithm is proposed that greatly reduces the reconstruction error caused by the paraxial approximation: a modified transfer function is used while each diffraction pattern is moved to the center of the computing matrix, which eliminates phase under-sampling. The proposed method is verified by both numerical simulations and experiments, in which the outermost illuminating beam makes an angle of 35° and 12°, respectively, with the optical axis. The multiple diffraction patterns recorded simultaneously in a single frame are distinctly isolated from each other, and the quality of the image reconstructed with the proposed non-paraxial computational method is remarkably improved compared with that obtained under the common paraxial approximation.

2. Theory

The principle of single-shot PIE using highly tilted multiple illuminations is schematically shown in Fig. 1(a). Laser beams propagating at different angles pass through the sample and generate diffraction patterns that are recorded simultaneously in a single frame. Adjacent illuminations overlap appropriately on the sample surface to meet the requirements of the PIE algorithm. Under the paraxial approximation, the Fraunhofer diffraction pattern corresponding to the nth illumination is the convolution ${\tilde{P}_n}({f_x},{f_y}) \otimes \tilde{O}({f_x},{f_y})$, where ${\tilde{P}_n}({f_x},{f_y})$ and $\tilde{O}({f_x},{f_y})$ are the Fourier transforms of the nth illumination beam ${P_n}(x,y)$ and the object $O(x,y)$, respectively. For a sample with fine details, the angular separation between two adjacent illuminating beams must be large, which makes the outermost illuminating beams non-paraxial, and a paraxial approximation in the computation then leads to obvious reconstruction errors.

Fig. 1. (a) The principle of single-shot PIE with multiple illuminations. (b) The numerical propagation of highly tilted illuminating beam.

Figure 1(b) shows such a highly tilted beam $P(x,y)$ illuminating the sample $O(x,y)$ at angles α and β with respect to the x and y axes:

$$P(x,y) = P^{\prime}(x,y)\exp\left[i\frac{2\pi}{\lambda}(x\cos\alpha + y\cos\beta)\right], \tag{1}$$
where $P^{\prime}(x,y)$ represents the complex amplitude of the beam without the tilt. The light leaving the sample is
$$U(x,y) = O(x,y)P^{\prime}(x,y)\exp\left[i\frac{2\pi}{\lambda}(x\cos\alpha + y\cos\beta)\right]. \tag{2}$$
The Fourier transform of the transmitted light can be expressed as
$$\tilde{U}(f_x,f_y) = \tilde{O}(f_x,f_y) \otimes \tilde{P}^{\prime}(f_x,f_y) \otimes \delta\!\left(f_x - \frac{\cos\alpha}{\lambda},\, f_y - \frac{\cos\beta}{\lambda}\right) \tag{3}$$
where ${\otimes}$ denotes convolution. Writing $\tilde{U}^{\prime}(f_x,f_y) = \tilde{O}(f_x,f_y) \otimes \tilde{P}^{\prime}(f_x,f_y)$, ${k_1} = \cos\alpha/\lambda$ and ${k_2} = \cos\beta/\lambda$, Eq. (3) can be rewritten as $\tilde{U}(f_x,f_y) = \tilde{U}^{\prime}(f_x - k_1, f_y - k_2)$. The light field distribution on the detector plane, at a distance d from the object, is obtained by angular spectrum propagation as
$$D(x,y) = FT^{-1}\{\tilde{U}(f_x,f_y)H(f_x,f_y)\} \tag{4}$$
where $H(f_x,f_y) = \exp\left(i2\pi\frac{d}{\lambda}\sqrt{1 - \lambda^2 f_x^2 - \lambda^2 f_y^2}\right)$ is the angular spectrum transfer function.
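For reference, a minimal numerical sketch of the angular spectrum propagation of Eq. (4) is given below; the FFT-based grid handling, the suppression of evanescent components and the function name are illustrative choices, not details taken from this paper.

```python
import numpy as np

def angular_spectrum_propagate(u0, dx, wavelength, d):
    """Propagate the sampled field u0 over a distance d using Eq. (4).

    u0         -- complex 2-D field in the starting plane
    dx         -- sampling interval (pixel size), same unit as wavelength and d
    wavelength -- illumination wavelength
    """
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)                 # spatial frequencies, cycles/unit
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)

    # Transfer function H(fx, fy); evanescent components are simply discarded.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi * d / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0

    return np.fft.ifft2(np.fft.fft2(u0) * H)
```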

Assuming ${k_x} = {f_x} - {k_1}$ and ${k_y} = {f_y} - {k_2}$, $\tilde{U}({f_x},{f_y})H({f_x},{f_y})$ can be rewritten as

$$\tilde{U}(k_x + k_1, k_y + k_2)H(k_x + k_1, k_y + k_2) = \tilde{U}^{\prime}(k_x,k_y)H(k_x + k_1, k_y + k_2). \tag{5}$$
The diffraction pattern $D(x,y)$ in Eq. (4) is then equal to
$$D(x,y) = FT^{-1}\{\tilde{U}^{\prime}(k_x,k_y)H(k_x + k_1, k_y + k_2)\}\exp[i2\pi(k_1 x + k_2 y)]. \tag{6}$$
The phase term of $H(k_x + k_1, k_y + k_2)$ can be expanded as a Taylor series:
$$\sqrt{1 - \lambda^2(f_x + k_1)^2 - \lambda^2(f_y + k_2)^2} = 1 - \frac{\lambda^2}{2}\left[(f_x + k_1)^2 + (f_y + k_2)^2\right] - \frac{\lambda^4}{8}\left[(f_x + k_1)^2 + (f_y + k_2)^2\right]^2 + \cdots \tag{7}$$
For a paraxial laser beam, the third- and higher-order terms are negligible compared with the second term and can be dropped. This is not a valid approximation for non-paraxial laser beams with large ${k_1}$ and ${k_2}$. Expanding ${(f_x + k_1)^2} = f_x^2 + 2{f_x}{k_1} + k_1^2$ and ${(f_y + k_2)^2} = f_y^2 + 2{f_y}{k_2} + k_2^2$ in the phase of $H({k_x} + {k_1},{k_y} + {k_2})$ in Eq. (7) gives rise to a linear phase term $\exp[i2\pi d\lambda({k_1}{f_x} + {k_2}{f_y})]$, so Eq. (6) contains two rapidly varying phase terms, $\exp[i2\pi d\lambda({k_1}{f_x} + {k_2}{f_y})]$ and $\exp[i2\pi({k_1}x + {k_2}y)]$. Experimental parameters illustrate the sampling requirements set by these two terms and give an insight into the computational challenge of single-shot PIE. For a tilting angle of 20°, $\lambda = 0.6328$ µm and $d = 5$ cm, the first term becomes $2\pi d\lambda {k_1}{f_x} = 2\pi \times 17101{f_x}$; to avoid phase under-sampling of $\exp[i2\pi d\lambda({k_1}{f_x} + {k_2}{f_y})]$, the frequency sampling interval $\Delta {f_x}$ must be smaller than ${1 / {({2 \times 17101})}}$ µm$^{-1}$, and hence the computing window must be larger than 34.202 mm. For the term $\exp[i2\pi({k_1}x + {k_2}y)]$, the pixel size must be smaller than 0.93 µm, so the matrix used for the computation must be larger than 36971×36971. Such a requirement on computer memory is unreasonably large, and a CCD camera with that many very small pixels is not currently available. This limits the largest illumination angle allowed in most single-shot PIE experiments to only a few degrees (less than 2° in many experiments), and the achievable resolution is therefore quite limited [24].
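The numbers above can be reproduced with the short calculation below; the only assumption made here is that the 20° tilt is measured from the optical axis, so that the transverse direction cosine equals sin 20°.

```python
import numpy as np

wavelength = 0.6328                       # um
d = 5.0e4                                 # um (5 cm sample-detector distance)
tilt = np.radians(20.0)

k1 = np.sin(tilt) / wavelength            # transverse frequency of the tilt, cycles/um

# Term exp[i*2*pi*d*lambda*k1*fx]: Nyquist limit on the frequency step,
# i.e. a minimum lateral extent of the computing window.
coeff = d * wavelength * k1               # ~17101 um
window = 2.0 * coeff                      # um, > 34.2 mm

# Term exp[i*2*pi*k1*x]: Nyquist limit on the real-space pixel size.
pixel = 1.0 / (2.0 * k1)                  # um, < ~0.93 um

matrix = window / pixel                   # ~36971 samples per axis
print(f"window > {window / 1e3:.3f} mm, pixel < {pixel:.2f} um, matrix > {matrix:.0f}^2")
```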

The sampling criteria set by each of the phase terms in Eq. (6) are unrealistic and hence demand an alternative approach that makes the reconstruction possible without compromising image quality. The phase term $\exp[i2\pi({k_1}x + {k_2}y)]$ describes the phase ramp of a tilted illumination; it vanishes when ${P^{\prime}}(x,y)$ is used in place of $P(x,y)$ to compute the intensity of $D(x,y)$. To avoid the phase under-sampling caused by $\exp[i2\pi d\lambda ({k_1}{f_x} + {k_2}{f_y})]$ in $H({k_x} + {k_1},{k_y} + {k_2})$, this term can be cancelled by using the modified transfer function $H({k_x} + {k_1},{k_y} + {k_2})\exp[-i2\pi d\lambda ({k_1}{f_x} + {k_2}{f_y})]$ in place of $H({k_x} + {k_1},{k_y} + {k_2})$ to compute the diffraction intensity $|D(x,y)|$. This is justified because multiplication by $\exp[-i2\pi d\lambda ({k_1}{f_x} + {k_2}{f_y})]$ is equivalent to a transverse shift of the computed diffraction pattern from $|D(x,y)|$ to $|D(x + d\cos\alpha, y + d\cos\beta)|$. Experimentally, each recorded sub-diffraction intensity $|D(x,y)|$ is therefore shifted by $d\cos\alpha$ and $d\cos\beta$ in the x and y directions, respectively, before it is used in the iterative computation. By eliminating these phase-sampling requirements in the transfer function and in the diffraction pattern, single-shot PIE with a large number of non-overlapping diffraction patterns, generated simultaneously by a cluster of tilted illuminations with large divergence angles, becomes feasible, combining high-speed data acquisition with high reconstruction quality.
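A minimal sketch of this modified transfer function, assuming FFT sampling on a uniform grid, is given below; the function name and the handling of evanescent components are our own illustrative choices.

```python
import numpy as np

def modified_transfer_function(shape, dx, wavelength, d, k1, k2):
    """H(kx + k1, ky + k2) * exp[-i*2*pi*d*lambda*(k1*fx + k2*fy)].

    Propagates the untilted exit wave O(x,y)*P'(x,y) to the detector while the
    steep linear phase of the tilt is cancelled, so that a coarse camera grid
    is sufficient.  All spatial frequencies are in cycles per unit length.
    """
    ny, nx = shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)

    # Angular spectrum transfer function evaluated at the shifted frequencies.
    arg = 1.0 - wavelength ** 2 * ((FX + k1) ** 2 + (FY + k2) ** 2)
    H = np.exp(1j * 2 * np.pi * d / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0

    # Cancel the under-sampled linear phase; experimentally this corresponds to
    # shifting the recorded sub-pattern back to the centre of the window.
    H *= np.exp(-1j * 2 * np.pi * d * wavelength * (k1 * FX + k2 * FY))
    return H

# Forward model for the nth tilted beam (sketch):
#   D_n = ifft2( fft2(O * P_prime_n) * modified_transfer_function(...) )
```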

The first step in the experiment is to measure the complex amplitude ${P^{\prime}}(x,y)$ of each illuminating beam shown in Fig. 1(a). These can be measured in advance by transversely scanning an object through the laser beam cluster to many positions and recording the corresponding diffraction patterns. The diffraction pattern $|{D_{mn}}(x,y)|$ of the nth illuminating beam is selected from the mth recorded diffraction pattern array and shifted to obtain the required ${D_{mn}}(x + d\cos{\alpha_n}, y + d\cos{\beta_n})$. The complex amplitude ${P_n}^{\prime}(x,y)$ of the nth illuminating beam is then reconstructed from ${D_{mn}}(x + d\cos{\alpha_n}, y + d\cos{\beta_n}){|_{m = 1 \cdots M}}$ using the ePIE algorithm with the above non-paraxial propagation formula. The sample to be imaged is then placed in the optical path and illuminated simultaneously by the same cluster of highly tilted beams, and the resulting multiple, non-overlapping diffraction patterns are captured in a single frame on the detector. The iterative computation of single-shot PIE with highly tilted illuminations is then carried out with the following steps (a code sketch of this loop is given after the list); Fig. 2 shows the flow chart of the iterative computation.

  • 1. Start with a random initial guess for the complex amplitude $O(x,y)$ of the sample.
  • 2. Compute the exit wave $U_n^{\prime}(x,y) = {P_n}^{\prime}(x,y){O^K}(x,y)$ of the nth illumination and its Fourier transform ${\tilde{U}_n}^{\prime}({k_x},{k_y})$, where K is the iteration number.
  • 3. Compute the diffraction field ${D_n}(x,y)$ of the nth beam using the modified transfer function $H({k_x} + {k_1},{k_y} + {k_2})\exp[-i2\pi d\lambda({k_1}{f_x} + {k_2}{f_y})]$.
  • 4. Shift the recorded nth diffraction pattern by the corresponding distances to get ${|{D_n}(x + d\cos{\alpha_n}, y + d\cos{\beta_n})|^2}$, and use its square root to replace the modulus of the computed ${D_n}(x,y)$ while keeping the phase unchanged, giving the updated diffraction field $D_n^{\prime}(x,y)$.
  • 5. Propagate $D_n^{\prime}(x,y)$ back to the sample plane with the transfer function ${H^\ast}({k_x} + {k_1},{k_y} + {k_2})\exp[i2\pi d\lambda({k_1}{f_x} + {k_2}{f_y})]$ to get the updated exit wave $U_n^{\prime\prime}(x,y)$.
  • 6. Update the complex amplitude of the sample as
    $${O^{K + 1}}(x,y) = {O^K}(x,y) + \left[U_n^{\prime\prime}(x,y) - U_n^{\prime}(x,y)\right]\frac{{P_n^{\prime}}^{\ast}(x,y)}{|P_n^{\prime}(x,y)|^2 + \eta}\,\frac{|P_n^{\prime}(x,y)|}{|P_n^{\prime}(x,y)|_{\max}}. \tag{8}$$
  • 7. Jump to step 2 until all diffraction patterns are addressed.
  • 8. Calculate the reconstruction error ${E_{rro}}$ with Eq. (9). If it is smaller than the desired value, the iterative computation stops; otherwise jump to step 2 to start another round of iteration.
    $${E_{rro}} = \frac{\sum\limits_n \left|\,|{D_n}(x,y)| - |{D_n}(x + d\cos{\alpha_n}, y + d\cos{\beta_n})|\,\right|}{\sum\limits_n |{D_n}(x,y)|} \tag{9}$$
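The steps above can be condensed into the following sketch. It assumes that the recorded sub-patterns have already been cropped, shifted by $(d\cos\alpha_n, d\cos\beta_n)$ and converted to amplitudes, and that the calibrated untilted probes $P_n^{\prime}$ and the per-beam modified transfer functions are supplied; the function name, the random initial guess and the 5% stopping threshold are illustrative assumptions.

```python
import numpy as np

def single_shot_pie(D_meas, probes, Hs, n_iter=100, tol=0.05, eta=1e-3):
    """Non-paraxial single-shot PIE loop (steps 1-8).

    D_meas -- list of measured sub-pattern amplitudes, already shifted to the window centre
    probes -- list of calibrated untilted probes P'_n(x, y)
    Hs     -- list of modified transfer functions, one per illumination
    """
    obj = np.exp(1j * 2 * np.pi * np.random.rand(*probes[0].shape))    # step 1

    for _ in range(n_iter):
        num = den = 0.0
        for D_n, P_n, H_n in zip(D_meas, probes, Hs):
            exit_wave = P_n * obj                                      # step 2
            D_calc = np.fft.ifft2(np.fft.fft2(exit_wave) * H_n)        # step 3
            num += np.sum(np.abs(np.abs(D_calc) - D_n))                # Eq. (9) numerator
            den += np.sum(D_n)                                         # Eq. (9) denominator

            D_upd = D_n * np.exp(1j * np.angle(D_calc))                # step 4
            exit_upd = np.fft.ifft2(np.fft.fft2(D_upd) * np.conj(H_n)) # step 5

            # step 6: object update of Eq. (8)
            w = np.conj(P_n) / (np.abs(P_n) ** 2 + eta)
            w *= np.abs(P_n) / np.abs(P_n).max()
            obj = obj + (exit_upd - exit_wave) * w                     # step 7: next beam

        if num / den < tol:                                            # step 8
            break
    return obj
```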

Fig. 2. Data flow chart of the proposed computation method.

3. Numerical simulation

The feasibility of the proposed computing method is verified by numerical simulations. A 7×7 array of laser beams with a wavelength of 0.6328 µm illuminates a sample whose amplitude and phase transmission functions are shown in Fig. 3. The outermost beam makes an angle of 35° with respect to the optical axis. The diameter of each laser beam is 2.2 mm on the source plane of Fig. 1(a), and the distance from the sample to the CCD camera is d = 53 mm. Under these conditions, the overlap ratio between the regions illuminated by two neighboring beams is about 65%.

Fig. 3. Complex amplitude of simulated sample. (a) amplitude and (b) phase.

A sampling interval of 0.5 µm is chosen to avoid any phase under-sampling at the source, sample and recording planes. The illuminating probes incident on the sample surface and the resulting diffraction patterns on the recording plane are computed with the exact Huygens-Fresnel integral [28]. The amplitudes of the 7×7 illuminating probes on the sample surface are shown in Fig. 4(a), where the colored background shows the overall illumination and the 9 grey images show the central and some of the outermost illuminating probes. Figure 4(b) shows all 7×7 computed diffraction patterns, which are clearly isolated from each other, and Fig. 4(c) shows an enlarged view of the central and some of the outermost diffraction patterns, corresponding to the 9 illuminating probes shown in Fig. 4(a). The outermost diffraction patterns are oval and cannot be treated with a paraxial formula by simply shifting them to the matrix center in the iterative computation.
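As a reference for the simulation, a direct (non-paraxial) evaluation of the Huygens-Fresnel integral can be sketched as below; the first Rayleigh-Sommerfeld form with a z/r obliquity factor is assumed here, and the exact formulation of Ref. [28] may differ in prefactor.

```python
import numpy as np

def huygens_fresnel(u0, dx, wavelength, z, x_out, y_out):
    """Direct summation of the Huygens-Fresnel integral (first Rayleigh-Sommerfeld form).

    u0           -- complex field sampled on a grid with pixel size dx
    x_out, y_out -- 1-D arrays of output-plane coordinates
    Slow (no FFT), but free of any paraxial assumption.
    """
    k = 2 * np.pi / wavelength
    ny, nx = u0.shape
    xs = (np.arange(nx) - nx // 2) * dx
    ys = (np.arange(ny) - ny // 2) * dx
    XS, YS = np.meshgrid(xs, ys)

    out = np.zeros((len(y_out), len(x_out)), dtype=complex)
    for j, yo in enumerate(y_out):
        for i, xo in enumerate(x_out):
            r = np.sqrt(z ** 2 + (xo - XS) ** 2 + (yo - YS) ** 2)
            # spherical wavelets weighted by the obliquity factor z/r
            out[j, i] = np.sum(u0 * np.exp(1j * k * r) * z / r ** 2) * dx * dx / (1j * wavelength)
    return out
```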

Fig. 4. (a) Illuminations on the sample surface, (b) diffraction patterns formed, and (c) enlarged view of the diffraction patterns.

The computing algorithm illustrated above for highly tilted illumination is used to iteratively reconstruct the complex amplitude of the sample; the reconstructed amplitude and phase are shown in Fig. 5(a) and Fig. 5(b), respectively, and have the same details as the original images in Fig. 3. Figure 5(c) shows how the reconstruction error changes with the number of iterations; the error drops to 8.47% after 100 iterations.

Fig. 5. (a) Reconstructed amplitude, (b) phase, and (c) reconstruction convergence.

We also repeated the reconstruction with the common paraxial approximation algorithm by simply shifting each diffraction pattern to the center of the computing matrix. The reconstructed amplitude and phase images are shown in Fig. 6(a) and Fig. 6(b), respectively. The reconstruction quality is obviously degraded and the image distortion is remarkable, especially for the phase image. Figure 6(c) shows the reconstruction error versus iteration number; the error remains as large as 17.57% even after 100 iterations, and almost no structure is correctly reconstructed.

Fig. 6. (a) Reconstructed amplitude, (b) phase, and (c) reconstruction convergence.

4. Experiments

The optical setup of the proposed technique is shown in Fig. 7. A 20 mW He-Ne laser beam is split into a 5×6 array of sub-beams by a fiber beam splitter. All fiber pigtails are fixed on a spherical surface with a radius of 200 mm; the laser beams pass through the center of the spherical surface and each one is slightly divergent. Each beam has a diameter of about 2.5 mm at the sample plane with a separation of about 0.25 mm, and adjacent laser beams are tilted by 3.2° relative to each other (in both the x and y directions). The object-CCD distance is L = 65.2 mm. The CCD (AVT Pike F1100B) has a pixel size of 9 µm × 9 µm and a resolution of 4008 × 2672. Most of the laser beams in Fig. 7 are non-paraxial and cannot be obtained simply by placing a light source array at the focal plane of a lens; hence the optical setup in Fig. 7 is somewhat more complex than in existing single-shot PIE techniques [24].

Fig. 7. Setup of single-shot ptychography with simultaneous multi-angle illumination.

To measure the illumination beams on the sample plane, diffraction patterns are first recorded with a stem of Pachira macrocarpa placed as the object in the sample plane. The structure of this scanning object is shown in the inset of Fig. 8. It is scanned by the highly tilted laser beams over 10×10 positions with a Thorlabs MT3Z8 translation stage, so that one hundred frames of diffraction patterns are recorded, each frame containing 5×6 sub-diffraction patterns. The corresponding sub-diffraction patterns are then separated out, and the ePIE method with the non-paraxial formula illustrated above is used to calculate the phase and amplitude of each laser beam, as shown in Fig. 8, where the brightness shows the intensity of the illuminating beam on the sample surface and the color scale shows its phase.

Fig. 8. Complex amplitude distribution of illumination beam array. The brightness shows the intensity of illuminating beams on the sample surface and the color scale shows the phase. Figure in the inset shows the object at the sample position.

A pumpkin stem is then placed in the object plane, and Fig. 9 shows the corresponding sub-diffraction patterns, which are distinctly isolated from each other.

Fig. 9. Recorded array of diffraction patterns.

Figure 10(a) and Fig. 10(b) show the amplitude and phase of the sample computed with the proposed algorithm. The field of view of the obtained image is about 9.9 mm², and fine details of the sample are clearly visible. Figure 10(c) and Fig. 10(d) show the reconstruction obtained with the common scanning PIE method; there is no visible difference between the images obtained by the two methods.

Fig. 10. Reconstructed (a) amplitude and (b) phase of the specimen obtained by single-shot PIE; reconstructed (c) amplitude and (d) phase of the specimen obtained by common scanning PIE.

The resolution of the experimental system is studied using a USAF 1951 resolution target. An array of 30 sub-diffraction patterns is recorded with the same experimental setup, and the iteratively retrieved image is shown in Fig. 11(a). For easier observation, the intensities of the element 4 bars of group 5 are plotted along the horizontal and vertical directions in Fig. 11(b) and Fig. 11(c), respectively, from which the resolution of the imaging system is found to be 11 µm. Under the assumption of parallel illumination, the theoretical resolution of single-shot PIE is mainly determined by the receiving angle of each sub-diffraction pattern as $\Delta = \lambda d/D$, which gives a theoretical resolution of 9.87 µm for our experiment. The achieved resolution is lower than this value mainly because some higher-order diffraction components outside the bright disks in Fig. 9 are lost during recording owing to the limited sensitivity of the CCD.
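With the parameters above (λ = 0.6328 µm, d = 65.2 mm), the quoted theoretical resolution can be inverted to show the effective aperture per sub-pattern that it implies; the value of D obtained this way is our back-calculated assumption, since it is not stated explicitly in the text.

```python
wavelength = 0.6328e-3    # mm, He-Ne wavelength
d = 65.2                  # mm, sample-to-CCD distance
delta = 9.87e-3           # mm, quoted theoretical resolution

D = wavelength * d / delta        # implied aperture per sub-diffraction pattern
print(f"implied D ~ {D:.2f} mm")  # ~4.2 mm
```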

Fig. 11. Experimental results on the USAF 1951 test target. (a) Retrieved intensity image; intensity along the (b) horizontal and (c) vertical directions.

5. Conclusion

High-quality reconstruction together with fast data acquisition is realized in a single-shot PIE technique that uses an improved optical alignment and a modified reconstruction algorithm. Highly tilted illuminating beams allow a larger separation between adjacent sub-diffraction patterns, so more high-order diffraction components, which are significant for improving the spatial resolution of single-shot PIE, can be recorded. Overlap between diffraction patterns is avoided, and the proposed non-paraxial iterative computation remarkably reduces the reconstruction error caused by the commonly adopted paraxial approximation while avoiding phase under-sampling on both the sample and detector planes. All these measures enhance the reconstruction quality of single-shot PIE. The feasibility of the proposed method is verified both numerically and experimentally, and a spatial resolution of 11 µm is reached in the experiment. This makes single-shot PIE a more valuable tool for various applications.

Funding

National Natural Science Foundation of China (11875308, 61675215, 61827816, 61905261); Shanghai Science and Technology Innovation Action Plan Project (19142202600); Shanghai Sailing Program (18YF1426600); Scientific Instrument Developing Project (YJKYYQ20180024); Strategic Priority Research Program of Chinese Academy of Sciences (XDA25020306).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004). [CrossRef]  

2. H. M. L. Faulkner and J. M. Rodenburg, “Movable Aperture Lensless Transmission Microscopy: A Novel Phase Retrieval Algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004). [CrossRef]  

3. M. J. Humphry, B. Kraus, A. C. Hurst, A. M. Maiden, and J. M. Rodenburg, “Ptychographic electron microscopy using high-angle dark-field scattering for sub-nanometre resolution imaging,” Nat. Commun. 3(1), 730 (2012). [CrossRef]  

4. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-Resolution Scanning X-ray Diffraction Microscopy,” Science 321(5887), 379–382 (2008). [CrossRef]  

5. C. C. Polo, L. Pereira, P. Mazzafera, D. N. A. Flores-Borges, J. L. S. Mayer, M. Guizar-Sicairos, M. Holler, M. Barsi-Andreeta, H. Westfah, and F. Meneau, “Correlations between lignin content and structural robustness in plants revealed by X-ray ptychography,” Sci. Rep. 10(1), 6023 (2020). [CrossRef]  

6. M. D. Seaberg, B. S. Zhang, D. F. Gardner, E. R. Shanblatt, M. M. Murnane, H. C. Kapteyn, and D. E. Adams, “Tabletop nanometer extreme ultraviolet imaging in an extended reflection mode using coherent Fresnel ptychography,” Optica 1(1), 39–44 (2014). [CrossRef]  

7. W. H. Xu, H. X. Lin, H. Y. Wang, and F. C. Zhang, “Super-resolution near-field ptychography,” Opt. Express 28(4), 5164–5178 (2020). [CrossRef]  

8. S. W. Jiang, J. K. Zhu, P. M. Song, C. F. Guo, Z. C. Bian, R. H. Wang, Y. K. Huang, S. Y. Zhang, H. Zhang, and G. A. Zheng, “Wide-field, high-resolution lensless on-chip microscopy via near-field blind ptychographic modulation,” Lab Chip 20(6), 1058–1065 (2020). [CrossRef]  

9. P. M. Song, S. W. Jiang, H. Zhang, Z. C. Bian, C. F. Guo, K. Hoshino, and G. A. Zheng, “Super-resolution microscopy via ptychographic structured modulation of a diffuser,” Opt. Lett. 44(15), 3645–3648 (2019). [CrossRef]  

10. D. Y. Wang, B. Li, L. Rong, F. R. Tan, J. J. Healy, J. Zhao, and Y. X. Wan, “Multi-layered full-field phase imaging using continuous-wave terahertz ptychography,” Opt. Lett. 45(6), 1391–1394 (2020). [CrossRef]  

11. M. Q. Du, L. Loetgering, K. S. E. Eikema, and S. Witte, “Measuring laser beam quality, wavefronts, and lens aberrations using ptychography,” Opt. Express 28(4), 5022–5034 (2020). [CrossRef]  

12. T. M. Godden, R. Suman, M. J. Humphry, J. M. Rodenburg, and A. M. Maiden, “Ptychographic microscope for three-dimensional imaging,” Opt. Express 22(10), 12513–12523 (2014). [CrossRef]  

13. F. Seiboth, A. Schropp, M. Scholz, F. Wittwer, C. Rodel, M. Wunsche, T. Ullsperger, S. Nolte, J. Rahomaki, K. Parfeniukas, S. Giakoumidis, U. Vogt, U. Wagner, C. Rau, U. Boesenberg, J. Garrevoet, G. Falkenberg, E. C. Galtier, H. J. Lee, B. Nagler, and C. G. Schroer, “Perfect X-ray focusing via fitting corrective glasses to aberrated optics,” Nat. Commun. 8(1), 14623 (2017). [CrossRef]  

14. N. Anthony, G. Cadenazzi, H. Kirkwood, E. Huwald, K. Nugent, and B. Abbey, “A Direct Approach to In-Plane Stress Separation using Photoelastic Ptychography,” Sci. Rep. 6(1), 30541 (2016). [CrossRef]  

15. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4(7), 736–745 (2017). [CrossRef]  

16. F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express 21(11), 13592–13606 (2013). [CrossRef]  

17. P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature 494(7435), 68–71 (2013). [CrossRef]  

18. Y. D. Yao, C. Liu, Q. Lin, and J. Q. Zhu, “Compensation for the setup instability in ptychographic imaging,” Opt. Express 25(10), 11969–11983 (2017). [CrossRef]  

19. S. Ohdachi, K. Y. Watanabe, K. Tanaka, Y. Suzuki, Y. Takemura, S. Sakakibara, X. D. Du, T. Bando, Y. Narushima, R. Sakamoto, J. Miyazawa, G. Motojima, T. Morisaki, and L. H. D. Experiment Group, “Observation of the ballooning mode that limits the operation space of the high-density super-dense-core plasma in the LHD,” Nucl. Fusion 57(6), 066042 (2017). [CrossRef]  

20. J. Marrison, L. Räty, P. Marriott, and P. O’Toole, “Ptychography–a label free, high-contrast imaging technique for live cells using quantitative phase information,” Sci. Rep. 3(1), 2369 (2013). [CrossRef]  

21. X. C. Pan, C. Liu, and J. Q. Zhu, “Single shot ptychographical iterative engine based on multi-beam illumination,” Appl. Phys. Lett. 103(17), 171105 (2013). [CrossRef]  

22. W. H. Xu, Y. Luo, T. Li, H. Y. Wang, and Y. S. Shi, “Multiple-Image Hiding by Using Single-Shot Ptychography in Transform Domain,” IEEE Photonics J. 9(3), 1–10 (2017). [CrossRef]  

23. P. Sidorenko, O. Lahav, and O. Cohen, “Ptychographic ultrahigh-speed imaging,” Opt. Express 25(10), 10997–11008 (2017). [CrossRef]  

24. P. Sidorenko and O. Cohen, “Single-shot ptychography,” Optica 3(1), 9–14 (2016). [CrossRef]

25. O. Wengrowicz, O. Peleg, B. Loevsky, B. K. Chen, G. I. Haham, U. S. Sainadh, and O. Cohen, “Experimental time-resolved imaging by multiplexed ptychography,” Opt. Express 27(17), 24568–24577 (2019). [CrossRef]  

26. R. Horisaki, T. Kojima, K. Matsushima, and J. Tanida, “Subpixel reconstruction for single-shot phase imaging with coded diffraction,” Appl. Opt. 56(27), 7642–7647 (2017). [CrossRef]  

27. C. Y. Hu, Z. M. Du, M. H. Chen, S. G. Yang, and H. W. Chen, “Single-shot ultrafast phase retrieval photography,” Opt. Lett. 44(17), 4419–4422 (2019). [CrossRef]  

28. V. Borovytsky, “Huygens–Fresnel principle and Abbe formula,” Opt. Eng. 57(09), 1 (2018). [CrossRef]  
