Abstract
Ptychography is a scanning variation of the coherent diffractive imaging method that provides high-resolution quantitative images from specimens with extended dimensions. Its capability of achieving diffraction-limited spatial resolution can be compromised by the sample thickness, which is generally required to be smaller than the depth of field of the imaging system. In this Letter, we present a method to extend the depth of field for ptychography by numerically generating the focus stack from reconstructions with propagated illumination wavefronts and combining the in-focus features into a single sharp image using an algorithm based on the complex-valued discrete wavelet transform. This approach does not require repeated measurements with the sample translated along the optical axis, as in the conventional focus stacking method, and offers a computation-efficient alternative for obtaining high-resolution images with an extended depth of field, complementary to multi-slice ptychography.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Corrections
Xiaojing Huang, Hanfei Yan, Ian K. Robinson, and Yong S. Chu, "Extending the depth of field for ptychography using complex-valued wavelets: publisher’s note," Opt. Lett. 44, 662-662 (2019). https://opg.optica.org/ol/abstract.cfm?uri=ol-44-3-662
18 January 2019: A typographical correction was made to Eq. (3).
The depth of field of an imaging system refers to the distance range in the vicinity of the focal plane where the image maintains an acceptable sharpness. The depth of field enforces a general constraint on the allowable sample thickness for obtaining the highest achievable spatial resolution. This limitation becomes particularly severe for high-resolution three-dimensional microscopy, because the typical depth of field can be substantially smaller than the lateral field of view. The focus stacking method tackles this problem by collecting a series of images at multiple object planes, where features at various depths sequentially appear in focus at their corresponding focal planes. These sharp features can be extracted and merged to reconstruct an image with an extended apparent depth of field. Considering that the in-focus features contain sharp details and, thus, have more high-frequency information, they can be distinguished by analyzing the frequency components using wavelet or windowed Fourier transforms [1,2]. The focus stacking method has been successfully implemented in optical [3], electron [4], and full-field x-ray microscopy [5].
The multi-slice method [6,7] provides an alternative way to extend the depth of field by decomposing a thick sample into thin slices and modeling the multiple scattering effect slice by slice. This method extracts depth information through numerical propagation, instead of the image stack. The integration of the multi-slice method with the ptychography technique was demonstrated as an effective approach to obtain diffraction-limited resolution from specimens thicker than the depth of field [8], and is able to provide 3D images without sample rotation [9]. These benefits encourage emerging efforts on adopting the multi-slice ptychography method in the x-ray regime for improving both the lateral and depth resolutions [10–15], and exploiting the potential advantages as a novel approach for high-resolution 3D imaging with reduced number of projections beyond the Crowther criterion [16–19].
In this Letter, we present a new approach to effectively extend the depth of field by numerically generating the focus stack from single-slice ptychography reconstructions with propagated illumination wavefronts and obtaining a single sharp image using an image fusion algorithm based on the complex-valued discrete wavelet transform. This approach does not require repeated measurements with the sample translated along the axial direction, as in the conventional focus stacking method, and is computationally more efficient than the multi-slice ptychography method.
Ptychography is a diffraction-based imaging technique, which reconstructs real-space images from datasets collected by scanning a sample across a confined beam with adequately fine steps [20]. The redundant measurements provided by overlapped scan spots make it possible to recover complex-valued images with quantitative absorption and phase contrasts [21–23]. The reconstructed wavefront at the sample exit plane is related to the far-field diffraction pattern by a Fourier transform [24]. Assuming that this wavefront has propagated a relatively short distance, the propagated wavefront will give the same diffraction pattern as the initial wavefront. As a result, the reconstruction algorithm cannot distinguish the initial exit wavefront from its propagated versions. This is known as the propagation ambiguity in the phase retrieval process. For the coherent diffractive imaging method, it has been pointed out that the real-space constraints used in the iterative reconstruction process eliminate the propagation non-uniqueness and effectively determine the plane where the wavefront is reconstructed [25,26]. Similarly, the reconstruction plane in ptychography is determined by the illumination function used in the reconstruction [27]. The features on the reconstruction plane appear sharp, as in focus, while features more than a depth of field away from the reconstruction plane lose sharpness, becoming out of focus. In this manner, optical sectioning can be numerically realized by propagating the incident illumination function to target image planes and reconstructing the focus stack plane by plane, using the same ptychographic dataset.
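The plane-by-plane propagation underlying this numerical sectioning can be sketched with the angular spectrum method cited later in the Letter; the following numpy-only snippet is an illustrative implementation (the function name and parameter choices are ours, not from the Letter):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel, dz):
    """Propagate a complex wavefront by a distance dz (angular spectrum method)."""
    ny, nx = field.shape
    fy = np.fft.fftfreq(ny, d=pixel)
    fx = np.fft.fftfreq(nx, d=pixel)
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    # Propagating components satisfy (lambda*fx)^2 + (lambda*fy)^2 < 1;
    # evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(2j * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0)) * dz)
    H = H * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: a Gaussian probe on a 5 nm grid at a 12 keV-like wavelength.
x = (np.arange(64) - 32) * 5e-9
X, Y = np.meshgrid(x, x, indexing="ij")
probe = np.exp(-(X**2 + Y**2) / (2 * (50e-9) ** 2)).astype(complex)
fwd = angular_spectrum_propagate(probe, 1.03e-10, 5e-9, 1e-6)
back = angular_spectrum_propagate(fwd, 1.03e-10, 5e-9, -1e-6)
```

Because the transfer function is unitary for propagating components, propagating forward and back recovers the original wavefront, which is the property that lets the same dataset be reconstructed at any chosen plane.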
The sectioning capability, i.e., the depth resolution, is limited by the depth of field of a ptychographic imaging system. Figure 1 illustrates a typical experimental setup for a ptychography measurement using a focused beam. The confined illumination delivered by focusing optics has an intrinsic depth of focus. Within this range, the propagation effect of the illumination wavefront is negligible, as illustrated by the red box in Fig. 1. The depth of focus is determined by the numerical aperture of the focusing optics and scales as λ/NA² [28]. Diffraction-based imaging techniques directly measure the scattering signal from the sample. They are therefore capable of realizing a larger detection numerical aperture, using spatial frequency signal beyond the maximum scattering angle defined by the optics to achieve better spatial resolution. The enlarged detection numerical aperture shrinks the depth of field of the imaging system accordingly, as indicated by the green box in Fig. 1. The sample is placed at the focal plane in the left panel of Fig. 1 for a direct comparison between the depth of focus and the depth of field, while the ptychography measurements can be performed at defocused planes without sacrificing the achievable resolution. The right panel of Fig. 1 shows a typical far-field diffraction pattern collected with an x-ray beam focused by multilayer Laue lenses (MLLs). The bright region selected by the red frame represents the natural divergence of the focused beam and corresponds to the numerical aperture of the focusing optics. The scattering signal actually extends to the edge of the cropped detection array, as indicated by the green frame, which defines the detection numerical aperture and, thus, the depth of field. The narrower depth of field, defined by the detection, enforces a more restrictive limitation on the sample thickness and is an undesirable side effect of pursuing high spatial resolution.
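As a rough worked example under the common convention DOF ≈ λ/NA² (prefactors vary by author), the experimental numbers quoted later in this Letter (12 keV beam, 5 nm reconstruction pixels) give a depth of field near 1 μm:

```python
# Rough numbers for the setup described in the text (assumed conventions):
wavelength = 1.2398e-9 / 12.0            # keV -> m, about 0.103 nm at 12 keV
pixel = 5e-9                             # reconstruction pixel size (m)
na_detection = wavelength / (2 * pixel)  # detection NA from the resolution limit
dof = wavelength / na_detection**2       # depth of field, approximately 1 um
```

This back-of-the-envelope estimate is consistent with the ~1 μm depth of field stated for the cropped data array in the experiment section.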
On the other hand, for the purpose of optical sectioning, a short depth of field is actually beneficial, because it offers better depth sensitivity. As the conventional focus stacking method requires an axial step matching the depth of field to properly capture sample features through focus, a narrowed depth of field demands more sectioning steps, i.e., more measurements. For the ptychography reconstruction, this requirement introduces no extra data collection burden, because the sectioning is conducted numerically by propagating the illumination function with the desired steps and running reconstructions accordingly.
For detecting the in-focus features, the wavelet transform works better than the Fourier transform, since it employs locally oscillating wavelets instead of the globally oscillating sinusoids of the Fourier transform, allowing the capture of both location and frequency information [29,30]. The complex-valued wavelet transform works better than its real-valued counterpart, because this redundant representation relaxes the inherent constraints in the real-valued case by using individual bases for the real and imaginary components [3,31], and provides more information, as the phase of each coefficient carries the detailed frequency components, while the magnitude provides the corresponding weights [32].
We use the dual-tree complex discrete wavelet transform [33,34] for the image fusion, which uses a set of orthonormal and smooth bases and is analytically invertible [30]. The maximum-absolute-value selection rule is used to pick the most pronounced wavelet coefficients for generating the final sharp image. The image fusion algorithm is adapted from [29] and consists of the following four steps:
- 1. Align 2D slices from single-slice ptychography reconstructions with propagated wavefronts with sub-pixel accuracy [35].
- 2. Apply the complex-valued discrete wavelet transform WT [36] on each 2D slice in the focus stack FS: C_l^(k) = WT(FS_k), where C_l^(k) is the wavelet coefficient for level l, and FS_k denotes the kth 2D slice in the focus stack.
- 3. Fuse the coefficients with the maximum-absolute-value selection rule, keeping for each coefficient the slice with the largest magnitude: C_l = C_l^(k*), with k* = argmax_k |C_l^(k)|.
- 4. Apply the inverse wavelet transform to the fused coefficients to obtain the final image with in-focus features from all slices.
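The fusion logic can be sketched in a few lines. For brevity, this sketch substitutes a single-level real-valued Haar transform for the dual-tree complex wavelet transform, so it illustrates only the maximum-absolute-value selection rule, not the full method:

```python
import numpy as np

def haar2(x):
    """Single-level orthonormal 2D Haar transform (stand-in for the DT-CWT)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

def fuse(stack):
    """Per-location maximum-absolute-value selection over the focus stack."""
    coeffs = [haar2(s) for s in stack]  # one transform per slice
    energy = np.array([np.abs(lh) + np.abs(hl) + np.abs(hh)
                       for _, lh, hl, hh in coeffs])
    winner = energy.argmax(axis=0)      # sharpest slice per location
    fused = [np.choose(winner, [c[band] for c in coeffs]) for band in range(4)]
    return ihaar2(*fused)
```

Fusing two slices that are each sharp in a different region yields an image containing the sharp features of both, which is the behavior exploited to merge the focus stack.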
The proposed method is validated using an experimental dataset collected at the hard x-ray nanoprobe beamline, National Synchrotron Light Source II (NSLS-II). The sample is a 10 μm thick silicon wafer, with gold and nickel oxide nanoparticles prepared on its front and rear surfaces, respectively. The wafer was scanned across a 12 keV nano-focused x-ray beam produced by MLLs. The details of the experimental setup can be found in Refs. [37,38]. The front surface of the sample was placed 10 μm downstream from the focal plane, where the beam had diverged to ∼100 nm. A Fermat spiral trajectory [39] was used to scan an area with a 15 nm radial step size to provide a generous overlapping condition [40–42] for reconstructing two sample planes. A Merlin pixel-array detector [43] was placed 0.5 m downstream and recorded 2035 data frames for the entire 2D ptychography scan. The dataset used in this Letter was taken with a 1 s exposure time per scan point. A typical diffraction amplitude is shown in the right panel of Fig. 1. The cropped data array gives a 5 nm reconstruction pixel size and ~1 μm depth of field. The wavefront of the focused beam was determined from a ptychography reconstruction using a dataset collected with only the gold particles inside the field of view.
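The scan trajectory can be generated from the standard golden-angle Fermat spiral parametrization r_n = c√n, θ_n = n · 137.508° (a sketch; the exact constant used in Ref. [39] and its relation to the 15 nm radial step are assumptions here):

```python
import numpy as np

def fermat_spiral(n_points, step):
    """Scan positions on a Fermat spiral: r = step*sqrt(n), theta = n*137.508 deg."""
    n = np.arange(n_points)
    r = step * np.sqrt(n)                 # radius grows as sqrt(n)
    theta = np.deg2rad(137.508) * n       # golden angle between successive points
    return r * np.cos(theta), r * np.sin(theta)

# 2035 points with a 15 nm radial step, as in the measurement described above.
x, y = fermat_spiral(2035, 15e-9)
```

The golden-angle increment spreads the points with near-uniform density, which is why this trajectory provides the homogeneous overlap condition cited in the text.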
The dataset was first reconstructed using multi-slice ptychography with 1000 iterations of a difference map algorithm [21] using the single-slice reconstruction results as the starting guess for the object functions. The obtained phase images on the front and rear surface of the sample are shown in Figs. 2(a) and 2(b), respectively. The nanoparticles are sharply reconstructed on their corresponding layers, and they agree well with the Au and Ni fluorescence images, as shown in Figs. 2(c) and 2(d), which were measured by a mesh scan over the same area at the same axial position with a 20 nm step size and 1 s dwell time. Multiplying these two reconstructed images (i.e., summing their phases) gives a single projection image with in-focus features over a 10 μm range, as shown in Fig. 2(e), which significantly exceeds the estimated 1 μm depth of field.
To generate the focus stack, the x-ray wavefront at the front sample surface was propagated from z = −2 to 12 μm in 1 μm steps to match the depth of field, with the front and rear sample surfaces at z = 0 and z = 10 μm, respectively. Single-slice ptychography reconstructions with 1000 difference map iterations were performed 15 times, each time using one fixed probe from the 15 propagated x-ray wavefronts. Figures 3(a)–3(h) display eight of the obtained phase images with 2 μm axial increments. At the z = 0 and z = 10 μm planes, the particles on the front and rear surfaces are sharply reconstructed, each overlaid with a blurry image of the features on the other plane. At other planes, the image quality on both surfaces is compromised to a certain extent. Using this focus stack, the final image is obtained with the image fusion algorithm described above, using three-level wavelet transforms. Figure 3(i) shows the merged image with features on both slices sharply presented. The obtained image is consistent with the result given by multi-slice ptychography. Since the sample consists of only two planes, and the thickness of each plane is smaller than the depth of field, two single-slice reconstructions at the z = 0 and z = 10 μm planes are sufficient to give a merged image of very similar quality to the one obtained from 15 reconstructions.
In the ptychography reconstruction process, the majority of the computation time is spent on Fourier transforms. The propagation from the sample to the detector can be described by a Fresnel propagation with one Fourier transform, and the propagation between adjacent planes in the multi-slice case can be modeled by the angular spectrum method, which contains two Fourier transforms [44]. Thus, the total computation time in a multi-slice ptychography reconstruction with N slices can be estimated as 2[2(N − 1) + 1]FT, where the first term is for propagations between adjacent slices, the second term is for the propagation between the last slice and the detector plane, FT denotes the computation time for one Fourier transform over all data frames, and the overall factor of 2 represents the forward and backward propagations in each cycle. In contrast, repeating the single-slice ptychography reconstruction N times only takes 2N·FT. In our case, the entire image fusion process took less than 1 s. For this two-layer sample, the proposed method increases the computation efficiency by one-third compared with the multi-slice ptychography approach. Considering that the number of slices N can become remarkably large for a continuous sample with extended dimensions, and that the multi-slice ptychography approach demands at least a factor of N more memory, numerically generating the focus stack with single-slice ptychography reconstructions is expected to bring a more significant improvement in computational efficiency.
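The operation counts above can be encoded directly; with N = 2, the estimated ratio reproduces the one-third saving stated in the text (a sketch of the cost model only, ignoring non-FFT overheads):

```python
def multi_slice_cost(n_slices):
    """Fourier transforms per cycle for multi-slice ptychography:
    2 FTs per inter-slice angular-spectrum step, 1 FT to the detector,
    doubled for the forward and backward passes."""
    return 2 * (2 * (n_slices - 1) + 1)

def focus_stack_cost(n_slices):
    """One single-slice reconstruction (1 FT each way) per propagated probe."""
    return 2 * n_slices

saving = 1 - focus_stack_cost(2) / multi_slice_cost(2)  # one-third for N = 2
```

For large N the multi-slice cost approaches twice the focus-stack cost, consistent with the scaling argument in the text.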
The freedom and simplicity of numerically tuning the axial sectioning spacing of the focus stack make this method very flexible for practical applications. Wavelet transform-based image processing techniques, such as denoising by thresholding coefficients [45], can be easily incorporated into this method. It should be noted that the numerical sectioning capability of this method relies on the assumption that the probe profile is not altered through the sample thickness, i.e., that multiple scattering effects are negligible. Otherwise, the multi-slice approach works better than the focus stacking method [46]. Despite the computational burden, the axially separated slices obtained from multi-slice ptychography enable unique capabilities, such as effectively reducing the number of projections for tomography [18] or 3D imaging without sample rotation [9].
We present a new method to extend the depth of field for ptychography by numerically generating the focus stack from single-slice ptychography reconstructions with propagated illumination functions and merging the in-focus features into a single sharp image using the complex-valued discrete wavelet transform. This method simplifies the data acquisition process compared with the conventional focus stacking method, and it shows the potential to significantly improve the computation efficiency compared with the multi-slice ptychography method. It offers a new opportunity to remove the limitation on sample thickness and obtain high-resolution images from materials with extended dimensions.
Funding
U.S. Department of Energy (DOE), Office of Science (SC), Brookhaven National Laboratory (DE-SC0012704).
REFERENCES
1. S. Mallat, IEEE Trans. Pattern Anal. Mach. Intell. 11, 674 (1989). [CrossRef]
2. M. Unser and A. Aldroubi, Proc. IEEE 84, 626 (1996). [CrossRef]
3. A. Valdecasas, D. Marshall, J. Becerra, and J. Terrero, Micron 32, 559 (2001). [CrossRef]
4. R. Hovden, H. Xin, and D. Muller, Microsc. Microanal. 17, 75 (2011). [CrossRef]
5. Y. Liu, J. Wang, Y. Hong, Z. Wang, K. Zhang, P. Williams, P. Zhu, J. Andrews, P. Pianetta, and Z. Wu, Opt. Lett. 37, 3708 (2012). [CrossRef]
6. J. Cowley and A. Moodie, Acta Crystallogr. 10, 609 (1957). [CrossRef]
7. P. Goodman and A. Moodie, Acta Crystallogr. A30, 280 (1974). [CrossRef]
8. A. Maiden, M. Humphry, and J. Rodenburg, J. Opt. Soc. Am. A 29, 1606 (2012). [CrossRef]
9. T. Godden, R. Suman, M. Humphry, J. Rodenburg, and A. Maiden, Opt. Express 22, 12513 (2014). [CrossRef]
10. A. Suzuki, S. Furutaku, K. Shimomura, K. Yamauchi, Y. Kohmura, T. Ishikawa, and Y. Takahashi, Phys. Rev. Lett. 112, 053903 (2014). [CrossRef]
11. K. Shimomura, A. Suzuki, M. Hirose, and Y. Takahashi, Phys. Rev. B 91, 214114 (2015). [CrossRef]
12. E. Tsai, I. Usov, A. Diaz, A. Menzel, and M. Guizar-Sicairos, Opt. Express 24, 29089 (2016). [CrossRef]
13. K. Shimomura, M. Hirose, and Y. Takahashi, Acta Crystallogr. A74, 66 (2018). [CrossRef]
14. H. Ozturk, H. Yan, Y. He, M. Ge, Z. Dong, M. Lin, E. Nazaretski, I. Robinson, Y. Chu, and X. Huang, Optica 5, 601 (2018). [CrossRef]
15. X. Huang, H. Yan, Y. He, M. Ge, H. Ozturk, Y. Fang, S. Ha, M. Lin, M. Lu, E. Nazaretski, I. Robinson, and Y. Chu, Acta Crystallogr. (to be published).
16. E. Tsai, M. Odstrcil, I. Usov, M. Holler, A. Diaz, J. Bosgra, A. Menzel, and M. Guizar-Sicairos, Imaging Appl. Opt. CW3B.2 (2017).
17. P. Li and A. Maiden, Sci. Rep. 8, 2049 (2018). [CrossRef]
18. C. Jacobsen, Opt. Lett. 43, 4811 (2018). [CrossRef]
19. K. Shimomura, M. Hirose, T. Higashino, and Y. Takahashi, Opt. Express 26, 31199 (2018). [CrossRef]
20. J. Rodenburg, A. Hurst, A. Cullis, B. Dobson, F. Pfeiffer, O. Bunk, C. David, K. Jefimovs, and I. Johnson, Phys. Rev. Lett. 98, 034801 (2007). [CrossRef]
21. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, Science 321, 379 (2008). [CrossRef]
22. M. Guizar-Sicairos and J. Fienup, Opt. Express 16, 7264 (2008). [CrossRef]
23. A. Maiden and J. Rodenburg, Ultramicroscopy 109, 1256 (2009). [CrossRef]
24. P. Thibault, V. Elser, C. Jacobsen, D. Shapiro, and D. Sayre, Acta Crystallogr. A62, 248 (2006). [CrossRef]
25. J. Spence, U. Weierstall, and M. Howells, Phil. Trans. R. Soc. Lond. A 360, 875 (2002). [CrossRef]
26. X. Huang, R. Harder, G. Xiong, X. Shi, and I. Robinson, Phys. Rev. B 83, 224109 (2011). [CrossRef]
27. I. Robinson and X. Huang, Nat. Mater. 16, 160 (2017). [CrossRef]
28. M. Born and E. Wolf, Principles of Optics (Cambridge University, 1999).
29. B. Forster, D. V. D. Ville, J. Berent, D. Sage, and M. Unser, Microsc. Res. Tech. 65, 33 (2004). [CrossRef]
30. I. Selesnick, R. Baraniuk, and N. Kingsbury, IEEE Signal Process. Mag. 22, 123 (2005). [CrossRef]
31. I. Daubechies, Commun. Pure Appl. Math. 41, 909 (1988). [CrossRef]
32. J. Lina, J. Math. Imaging Vis. 7, 211 (1997). [CrossRef]
33. N. Kingsbury, Philos. Trans. R. Soc. London, A 357, 2543 (1999). [CrossRef]
34. N. Kingsbury, Appl. Comput. Harmon. Anal. 10, 234 (2001). [CrossRef]
35. M. Guizar-Sicairos, S. Thurman, and J. Fienup, Opt. Lett. 33, 156 (2008). [CrossRef]
36. “Python port of the dual-tree complex wavelet transform toolbox,” https://github.com/rjw57/dtcwt.
37. E. Nazaretski, K. Lauer, H. Yan, N. Bouet, J. Zhou, R. Conley, X. Huang, W. Xu, M. Lu, K. Gofron, S. Kalbfleisch, U. Wagner, C. Rau, and Y. Chu, J. Synchrotron Radiat. 22, 336 (2015). [CrossRef]
38. E. Nazaretski, H. Yan, K. Lauer, N. Bouet, X. Huang, W. Xu, J. Zhou, D. Shu, Y. Hwu, and Y. Chu, J. Synchrotron Radiat. 24, 1113 (2017). [CrossRef]
39. X. Huang, H. Yan, R. Harder, Y. Hwu, I. Robinson, and Y. Chu, Opt. Express 22, 12634 (2014). [CrossRef]
40. O. Bunk, M. Dierolf, S. Kynde, I. Johnson, O. Marti, and F. Pfeiffer, Ultramicroscopy 108, 481 (2008). [CrossRef]
41. T. Edo, D. Batey, A. Maiden, C. Rau, U. Wagner, Z. Pesic, T. Waigh, and J. Rodenburg, Phys. Rev. A 87, 053850 (2013). [CrossRef]
42. X. Huang, H. Yan, M. Ge, H. Ozturk, E. Nazaretski, I. Robinson, and Y. Chu, Appl. Phys. Lett. 111, 023103 (2017). [CrossRef]
43. R. Plackett, I. Horswell, E. Gimenez, J. Marchal, D. Omar, and N. Tartoni, J. Instrum. 8, C01038 (2013).
44. J. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company, 2005).
45. M. Roy, V. Kumar, B. Kulkarni, J. Sanderson, M. Rhodes, and M. van der Stappen, AIChE J. 45, 2461 (1999).
46. H. Xin, V. Intaraprasonk, and D. Muller, Appl. Phys. Lett. 92, 013125 (2008). [CrossRef]