
Computational adaptive optics for high-resolution non-line-of-sight imaging

Open Access

Abstract

Non-line-of-sight (NLOS) imaging has attracted great interest in the past few years by providing a unique solution for observing hidden objects behind obstructions or scattering media. As such, NLOS imaging may facilitate broad applications in autonomous driving, remote sensing, and medical diagnosis. However, existing NLOS frameworks suffer from severe degradation of resolution and signal-to-noise ratio (SNR) due to aberrations induced by scattering media and system misalignment, restricting their practical applications. This paper proposes a computational adaptive optics (CAO) method for NLOS imaging that corrects optical aberrations in post-processing without requiring any hardware modifications. We demonstrate the effectiveness of CAO with a confocal NLOS imaging system in the terahertz (THz) band by imaging different samples behind occlusions under both low- and high-order aberrations. With appropriate metrics used for iterative CAO in post-processing, both the resolution and SNR can be increased several-fold without reducing the data acquisition speed.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Non-line-of-sight (NLOS) imaging has received wide attention in the past few years, with different methods proposed to observe hidden objects around corners or behind scattering media [1–6]. As a fundamental problem, NLOS imaging shows great potential in various applications including autonomous driving, surveillance, remote sensing, and medical diagnosis [7–9]. With the intensive development of different kinds of detectors such as gated cameras [10], single-photon avalanche diodes (SPADs) [11], streak cameras [12], and vector network analyzers (VNAs) [13], high-frequency time-resolved information can now be obtained within the broad bandwidth of light, facilitating 3D reconstruction of hidden objects from the extremely weak multiply scattered light. Although various algorithms have been proposed to improve the reconstruction performance and speed [14–17], the optical aberrations induced by scattering media or system misalignment have seldom been considered in NLOS imaging, causing severe degradation in complicated practical imaging environments.

Such aberrations can be mitigated to some extent by a confocal configuration, which efficiently avoids cross-talk between differently scattered photons. Improvements in both resolution and SNR have been demonstrated in the visible and THz bands [4,13]. However, the scattering of a rough surface still introduces additional phase disturbances that require further correction. Moreover, it is hard to focus all the illumination spots on the wall or scattering media within a limited depth of field, leading to blurry results. Adaptive optics (AO) is a typical means of wavefront correction in both astronomy [18] and microscopy [19]. In a typical AO system, a wavefront sensor and a deformable mirror array are combined to detect and correct the aberration with a guide star. This is quite expensive and usually localized to a small region, since the aberrations vary greatly across different regions. In situations without a guide star, sensorless AO requires more measurements or iterative feedback, which is not suitable for dynamic samples [20]. Moreover, there is still a lack of practical wavefront sensors and controllers in some specific bands such as the THz frequency range [21]. Therefore, high-resolution, robust NLOS imaging through strong scattering media remains a great challenge at the current stage, restricting practical applications such as security inspection and medical diagnosis.

To address this problem, we propose a computational adaptive optics (CAO) framework for NLOS imaging that modulates the scattered fields in post-processing, achieving high-resolution, high-SNR results after iterative feedback to estimate the aberrated wavefront. By capturing the full spatial information of the light field, such as the complex field under coherent conditions [22,23] and the phase-space distributions under partially coherent conditions [24,25], we can synthesize the light-field distributions after arbitrary analog physical modulations in digital processing, which has recently been explored in optical coherence tomography (OCT) [22,26,27] and fluorescence imaging [28]. Since the full time-resolved light fields are captured in NLOS imaging, we find that similar computational modulations can also be applied to NLOS data, which allows CAO to achieve near-diffraction-limited resolution behind a strong scattering medium. To establish the unique advantages of CAO in NLOS imaging, we imaged various samples behind strong scattering media with a confocal THz NLOS system. THz imaging is one of the most promising technologies in security inspection and medical imaging due to its nonionizing radiation, deep penetration, and high resolution [29], but few studies consider the image degradation caused by occlusions [30–32]. However, it is hard to maintain the phase stability of THz waves through scattering occlusions, leading to strong blurring, artifacts, or speckles. By computationally modifying the pupil phase according to different prior metrics, we can achieve significant improvement in resolution and SNR after iterative feedback in post-processing without reducing the detection speed. The remainder of this paper is organized as follows. The imaging geometry and the proposed CAO framework are described in Section 2. In Section 3, we present the experimental results with different types of aberration. In addition to the low-order aberrations that are common in OCT, we further studied the high-order aberrations encountered in practical THz applications. The conclusion is drawn in Section 4.

2. Methods

Figure 1 illustrates the confocal THz NLOS imaging system. A vector network analyzer (VNA) connected to two THz extenders generates the frequency-modulated continuous-wave (FMCW) signal. Horn antennas installed on the extenders transmit and receive the THz waves in free space. Between the transceivers, a beam splitter aligns the transmitter and receiver in conjugate planes. Compared with a conventional THz synthetic aperture radar (SAR) imaging system, a convex lens is introduced to focus the beam, forming a confocal configuration that increases the SNR. The wave penetrates the occlusion with scattering and illuminates the hidden object. The reflected signal is scattered again and then collected by the receiver. Finally, the data are captured by scanning the target in 2D, akin to a galvo-based confocal scanning system. In traditional THz SAR imaging without scattering, the ideal echo signal can be expressed as:

$$S(x^{\prime},y^{\prime};k) = \int\!\!\!\int\!\!\!\int {\rho (x,y,z) \cdot \exp ( - 2jk\sqrt {{{(x - x^{\prime})}^2} + {{(y - y^{\prime})}^2} + {{(z - {\textrm{Z}_0})}^2}} )dxdydz}$$
where $k = 2\pi f/c$ is the wavenumber, $S(x^{\prime},y^{\prime};k)$ is the echo signal captured by the transceiver at point $(x^{\prime},y^{\prime},{\textrm{Z}_0})$, and $\rho (x,y,z)$ is the scattering potential. The exponential term on the right-hand side of Eq. (1) encodes the system point spread function (PSF) $h(x,y,z;k)$, which stands for an ideal spherical wavefront. In the spatial-frequency domain, the signal can be represented as:
$$S({Q_x},{Q_y};k) = \rho ({Q_x},{Q_y},{Q_z})H({Q_x},{Q_y};k)$$
where ${Q_z} = \sqrt {4{k^2} - Q_x^2 - Q_y^2} $ is the axial spatial frequency, ${Q_x},{Q_y}$ denote the transverse spatial frequencies, $S({Q_x},{Q_y};k)$ denotes the 2D transverse Fourier transform of the echo signal, and $H({Q_x},{Q_y};k)$ encodes the ideal spatial frequency response of the system. Since ${Q_z}$ is not uniformly spaced, an interpolation called Stolt mapping is applied. This procedure is the range migration algorithm (RMA), and the reconstruction formula can be presented as [33]:
$$\rho (x,y,z) = \textrm{IFF}{\textrm{T}_{\textrm{3D}}}\{{\textrm{Inter}{\textrm{p}_{k \to {Q_z}}}[{S({Q_x},{Q_y};k)} ]} \}$$
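To make the reconstruction pipeline concrete, the following is a minimal NumPy sketch of Eq. (3): a 2D transverse FFT, a Stolt interpolation from the $k$ grid onto a uniform ${Q_z}$ grid for each $({Q_x},{Q_y})$, and a 3D inverse FFT. The function name rma_reconstruct, the grid handling, and the loop-based interpolation are illustrative choices rather than the exact implementation used for the results reported below.

import numpy as np

def rma_reconstruct(S, dx, dy, k):
    """Range-migration (f-k) reconstruction of confocal SAR data, Eq. (3).

    S      : (Nx, Ny, Nk) complex echo S(x', y'; k) on a regular scan grid
    dx, dy : scan steps [m];  k : wavenumbers 2*pi*f/c [1/m], increasing
    Returns the 3D complex reflectivity estimate rho(x, y, z).
    """
    Nx, Ny, Nk = S.shape
    # 2D transverse FFT: S(Qx, Qy; k), Eq. (2)
    Sq = np.fft.fft2(S, axes=(0, 1))
    Qx = 2 * np.pi * np.fft.fftfreq(Nx, dx)
    Qy = 2 * np.pi * np.fft.fftfreq(Ny, dy)
    QX, QY = np.meshgrid(Qx, Qy, indexing="ij")

    # Uniform Qz grid covering the propagating band
    Qz = np.linspace(0.0, 2 * k.max(), Nk)
    rho_q = np.zeros((Nx, Ny, Nk), dtype=complex)

    for i in range(Nx):
        for j in range(Ny):
            # Dispersion relation Qz(k) = sqrt(4k^2 - Qx^2 - Qy^2); drop evanescent parts
            arg = 4 * k**2 - QX[i, j]**2 - QY[i, j]**2
            valid = arg > 0
            if valid.sum() < 2:
                continue
            Qz_k = np.sqrt(arg[valid])
            # Stolt mapping: resample the measured spectrum onto the uniform Qz grid
            rho_q[i, j] = (np.interp(Qz, Qz_k, Sq[i, j, valid].real, left=0, right=0)
                           + 1j * np.interp(Qz, Qz_k, Sq[i, j, valid].imag, left=0, right=0))

    # 3D inverse FFT back to the spatial domain
    return np.fft.ifftn(rho_q, axes=(0, 1, 2))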
The above formulas assume an ideal condition without aberration. For CAO, we consider an aberrated phase at the pupil plane introduced by the occlusion or scattering media. Note that in the confocal configuration, the focal plane of the beam can be viewed as a new virtual aperture. Fourier optics connects the lens pupil function to the complex field at the virtual aperture plane [34]; thus, the received signal can be represented as:
$${S_{ab}}({Q_x},{Q_y};k) = \rho ({Q_x},{Q_y},{Q_z})H({Q_x},{Q_y};k){H_{ab}}({Q_x},{Q_y};k)$$
where ${H_{ab}}({Q_x},{Q_y};k)$ is the scattering term introduced by the occlusion. Here, we take the thin-phase approximation, assuming that the aberration is introduced by a pure phase structure located at a single ideal plane of zero thickness, with no scattering or aberration elsewhere. In this case, the aberrated complex field at the virtual aperture can be represented as:
$${H_{\textrm{ab}}}({Q_x},{Q_y};k) = \exp (ik/{k_c}{\Phi _h}({Q_x},{Q_y}))$$
where ${k_c}$ is the center wavenumber and ${\Phi _h}({Q_x},{Q_y})$ is the phase aberration. To eliminate it, a phase filter is designed as the conjugate term ${H_{AC}}\textrm{ = }H_{\textrm{ab}}^\ast $. By searching for the phase aberration ${\Phi _h}({Q_x},{Q_y})$ based on different image evaluation metrics that provide feedback on the aberration correction, we can optimize the imaging performance behind the occlusion. Finally, the conjugate phase filter is applied within the RMA to achieve better reconstruction performance, which can be presented as:
$$\rho (x,y,z) = \textrm{IFF}{\textrm{T}_{\textrm{3D}}}\{{\textrm{Inter}{\textrm{p}_{k \to {Q_z}}}[{{S_{ab}}({Q_x},{Q_y};k)\exp ( - ik/{k_c}{\Phi _h}({Q_x},{Q_y}))} ]} \}$$
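Under the same assumptions, the conjugate phase filter of Eq. (6) can be applied in the $({Q_x},{Q_y};k)$ domain before re-running the RMA sketch above. The helper reconstruct_corrected is hypothetical, and ${\Phi _h}$ is assumed to be sampled on the unshifted $({Q_x},{Q_y})$ FFT grid.

def reconstruct_corrected(S, dx, dy, k, k_c, phi_h):
    """Apply the conjugate pupil-phase filter exp(-i k/k_c * Phi_h) of Eq. (6),
    then run the ordinary RMA reconstruction of Eq. (3)."""
    Sq = np.fft.fft2(S, axes=(0, 1))                                  # to (Qx, Qy; k)
    Sq *= np.exp(-1j * (k[None, None, :] / k_c) * phi_h[:, :, None])  # conjugate filter
    S_corr = np.fft.ifft2(Sq, axes=(0, 1))                            # back to scan domain
    return rma_reconstruct(S_corr, dx, dy, k)                         # reuse the sketch above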
We adopt different algorithms to estimate ${\Phi _h}({Q_x},{Q_y})$ for different types of aberration. A low-order aberration can be expressed as a series of Zernike polynomials [35] ${Z_m}$ with coefficients ${a_m}$:
$${\Phi _h}({Q_x},{Q_y}) = \left\{ \begin{array}{rr} \sum\limits_{m = 1}^N {{a_m}{Z_m}(\frac{{{Q_x}}}{{{f_n}}}, \frac{{{Q_y}}}{{{f_n}}})} , &\sqrt {Q_x^2 + Q_y^2} \le {f_n}\\ 0, &\sqrt {Q_x^2 + Q_y^2} > {f_n} \end{array} \right.$$
where ${f_n}$ denotes the cut-off frequency. Because Zernike polynomials of different orders are independent, we search ${a_m}$ with a fixed step size sequentially from low order to high order. The best value of each ${a_m}$ is stored after all given Zernike polynomials have been searched. This value is then used as the initial solution for the next iteration, and the search process is repeated a few times. Finally, we obtain the optimized coefficients of the Zernike polynomials and the corresponding phase pattern (see Supplement 1 for detailed pseudocode). For high-order aberrations, for which a Zernike representation is no longer appropriate, we use a genetic algorithm (GA) to search for the solution ${\Phi _h}({Q_x},{Q_y})$ directly with the boundary constraint:
$$|{{\Phi _h}({Q_x},{Q_y})} |\le \gamma \pi ,\quad 0 < \gamma \le 1$$
where $\gamma $ is a prior constant determined by the specific occlusion. Through iterative procedures including selection, crossover, and mutation, the GA yields the optimized evaluation metric and the corresponding phase aberration.
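As a rough sketch of the low-order search described above (not the exact pseudocode of Supplement 1), each coefficient can be swept over a fixed step grid while the others are held fixed, keeping the value that maximizes the chosen metric; the step range and the number of sweeps below are assumed values.

def search_zernike_coeffs(S, dx, dy, k, k_c, basis, metric,
                          steps=np.linspace(-3.0, 3.0, 21), n_sweeps=10):
    """Coordinate-wise search of Zernike coefficients from low to high order, Eq. (7).

    basis  : list of Zernike patterns Z_m(Qx/f_n, Qy/f_n), each (Nx, Ny), sampled
             on the same unshifted (Qx, Qy) grid used by reconstruct_corrected
    metric : callable scoring a reconstructed intensity volume (higher is better)
    """
    a = np.zeros(len(basis))
    for _ in range(n_sweeps):                       # repeat the sweep a few times
        for m in range(len(basis)):                 # go through the terms sequentially
            best_val = -np.inf
            for s in steps:                         # fixed-step 1D search for a_m
                trial = a.copy()
                trial[m] = s
                phi = sum(c * Z for c, Z in zip(trial, basis))
                val = metric(np.abs(reconstruct_corrected(S, dx, dy, k, k_c, phi)))
                if val > best_val:
                    best_val, a[m] = val, s         # store the best value of a_m
    phi_opt = sum(c * Z for c, Z in zip(a, basis))
    return a, phi_opt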


Fig. 1. Overview of CAO for NLOS imaging. (a) Schematic of NLOS imaging based on confocal THz SAR system. The ideal wavefront is distorted by the occlusion in traditional methods but can be corrected digitally by adaptive reconstruction. (b) The raw temporal measurements indicating the path lengths from the transmitter to the receiver (A), to the occlusion and back (B), to the hidden object and back (C). (c) Results obtained with and without CAO algorithm by comparing with the ground truth. The point target can hardly be observed before CAO. Scale bar: 1 cm.


3. Experimental results

The effectiveness of the CAO framework was tested on a confocal THz SAR imaging system. Two typical types of aberration, low-order and high-order, are discussed respectively. The experimental setup is shown in Fig. 2. The low-order aberration is introduced by a convex lens, while the high-order aberration is caused by an occlusion with a rough surface. In addition, imaging results without aberration are shown for comparison with the following experiments.


Fig. 2. Experimental imaging system and the results without aberration. (a) The imaging system. For low-order aberration, the occlusion surface is smooth, but an extra convex lens is introduced (blue box). For high-order aberration, the occlusion has a random rough surface (purple box). (b) NLOS imaging of a point target without aberration. (c) NLOS imaging of a metal plate without aberration. Scale bar: 1 cm.


3.1 Low-order aberration

We first conducted several comparisons in the presence of low-order aberrations, which result from optical misalignment or defects of the radar system.

During the experiment, we artificially introduced the aberration by placing a convex lens at a random position between the beam splitter and the confocal lens, as shown in Fig. 2. The diameters of both lenses are about 50 mm and the focal lengths are 60 mm. A linear frequency sweep from 280 GHz to 320 GHz with 200 sampling points is applied in the system. Horn antennas with a 20° full beam width are used to transmit and receive the signal. The target is placed about 240 mm away from the transmitting antenna and installed on a 2D scanner. As shown in Fig. 2, we chose a metal disk with a 2.5 mm diameter as the target to show the PSF of the system and analyze the resolution. The intermediate frequency bandwidth is set to 1 kHz, and it takes about 300 ms to collect the data of one point. A measurement with 80 × 80 points takes about 40 minutes in total. Transceiver arrays and chirped waveforms could be used in the future for high-speed scanning. The sampling interval and the reconstruction step size are both set to 1 mm along each axis.

During reconstruction, we used the peak intensity, defined as the sum of the 10 largest values of the image, as the evaluation metric: $\rho = \sum\limits_{i = 1}^{10} {{u_i}} $, where ${u_i}$ denotes the pixel intensities sorted in descending order. Reconstructed orthogonal slices with and without CAO are shown in Fig. 3(a) and 3(b) on the same scale. Both the resolution and SNR are significantly improved by CAO. While there is almost no contrast for the traditional NLOS imaging system, the CAO results still show good optical sectioning with high resolution along the axial direction. In terms of full width at half maximum (FWHM), CAO achieves an obvious improvement in resolution, which is limited by the size of the target (Fig. 2(b), Fig. 3(c) and 3(d)). For aberration estimation, we searched the optimized phase based on Zernike polynomials below the 21st order with about 10 iterations. Within each iteration, we went through all the Zernike terms sequentially. It takes about 0.4 s to complete one iteration and 4 s in total. The final phase is shown in Fig. 3(e), which is dominated by the defocus term, consistent with the additional lens inserted as the aberration.
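A minimal version of this peak-intensity metric, assuming the reconstruction is available as a NumPy intensity array, could read:

import numpy as np

def peak_intensity(rho_abs, n=10):
    """Peak-intensity metric of Sec. 3.1: sum of the n largest voxel intensities."""
    return np.sort(rho_abs.ravel())[-n:].sum()

Passing peak_intensity (or a variant with n = 400, as used for the larger targets in Fig. 4) as the metric argument of the Zernike search sketched in Section 2 corresponds to the feedback loop used here.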


Fig. 3. NLOS imaging of a point target with a regular aberration. (a) Lateral and axial planes of the point target before CAO correction with the highest contrast. (b) Results after CAO correction. (c) Intensity profiles along the line marked in the lateral plane. (d) Intensity profiles along the line marked in the axial plane. (e) The estimated aberration wavefront after CAO. (f) The maximum intensity during optimization of different iteration numbers. Scale bar: 1 cm.


To further evaluate the performance of the method in practical applications such as security inspection, we conducted experiments with a metal plate and a gun model, as shown in Fig. 4. The evaluation metric is again the peak intensity, but defined as the sum of the 400 largest values due to the larger target size. Even with very complicated structures, our method still works well with accurate aberration estimation. Visualization 1 demonstrates the tuning process when searching the coefficient of the Zernike defocus term (Z3). When the searched phase matches the aberration introduced by the additional lens, we achieve the highest SNR with sharp images. In addition, the Zernike-based iterative search method is tested under different aberration orders and SNR levels through simulation (see Supplement 1).


Fig. 4. Experimental results of a metal plate and a gun model under a regular aberration (see Visualization 1). (a)(e) Optical images without the occlusion. (b)(f) In-focus results before CAO correction. (c)(g) In-focus results after CAO correction. (d)(h) Corresponding aberrated wavefronts. Scale bar: 1 cm.


3.2 High-order aberration

For more challenging cases, we explored the high-order aberrations caused by rough surfaces of the occlusion. With traditional algorithms, the imaging result of a target hidden behind rough surfaces is usually overwhelmed by speckles, and tiny structures are hard to observe in the presence of speckle. Smoothing and filtering can be applied to remove the speckles, but at the cost of spatial resolution.

During the experiment, the high-order aberration was introduced by a sheet with a rough surface, as shown in Fig. 2. The diameter of the sheet is 100 mm and its thickness is about 1 mm. The sheet is made of polyethylene to ensure sufficient transmission and fabricated by 3D printing to obtain the designed roughness. The root mean square (RMS) of the surface height, used to estimate the roughness, is about 210 µm. Figure 5(a) shows the conventional NLOS reconstruction of the point target, which is dominated by speckle noise with very low contrast of the sample. In contrast, CAO greatly improves the contrast, with an increase of about 350% in peak intensity after optimization with the GA (Fig. 5(c)). We used the same evaluation metric as before (the 10 largest values of the image) but with a different search domain, because the aberration phase introduced by the scattering of the rough surface has so many high-order components that Zernike polynomials are of little use. As a result, the optimized phase shown in Fig. 5(d) is much more irregular than that in Fig. 3(e). We implemented the GA with the Matlab Genetic Algorithm toolbox, using a population size of 100 and default values for the other parameters. Due to the larger search domain and more iterations compared with the low-order case, the whole CAO process in post-processing takes about 5 minutes on a normal laptop (2.6 GHz Intel Core i7).
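The optimization above runs in the Matlab Genetic Algorithm toolbox. As a rough Python stand-in (not the configuration used for the experiments reported here), a bounded global optimizer such as SciPy's differential evolution can search the phase directly under the constraint of Eq. (8); the coarse-grid parameterization of ${\Phi _h}$, the bilinear upsampling, and all settings below are illustrative assumptions.

import numpy as np
from scipy.optimize import differential_evolution
from scipy.ndimage import zoom

def search_phase_high_order(S, dx, dy, k, k_c, metric,
                            gamma=1.0, coarse=8, maxiter=200):
    """Direct search of a high-order phase bounded by |Phi_h| <= gamma*pi, Eq. (8).
    Assumes the scan grid size is a multiple of 'coarse' so the upsampled phase
    matches the (Qx, Qy) grid. Each evaluation runs a full reconstruction, so the
    search is slow, as noted for the experiments above."""
    Nx, Ny, _ = S.shape

    def neg_metric(p):
        # Upsample the coarse candidate phase to the full (Qx, Qy) grid
        phi = zoom(p.reshape(coarse, coarse), (Nx / coarse, Ny / coarse), order=1)
        rho = reconstruct_corrected(S, dx, dy, k, k_c, phi)
        return -metric(np.abs(rho))               # the optimizer minimizes

    bounds = [(-gamma * np.pi, gamma * np.pi)] * (coarse * coarse)
    result = differential_evolution(neg_metric, bounds, maxiter=maxiter,
                                    polish=False, seed=0)
    return zoom(result.x.reshape(coarse, coarse), (Nx / coarse, Ny / coarse), order=1)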


Fig. 5. CAO results of a point target with high-order aberrations. (a) In-focus result before CAO correction. (b) In-focus result after CAO correction. (c) The intensity profiles along the marked line. (d) The optimized correction phase. Scale bar: 1 cm.


In addition to the different search domains, we find that it is important to choose suitable evaluation metrics for optimized reconstructions of different samples. A metal plate with a large size (30 mm × 40 mm) produces a broad speckle distribution behind the scattering occlusion, which makes the peak intensity an unsuitable evaluation metric: maximizing the intensity of a few local regions leads to more severe speckles (Fig. 6(b)). Inspired by previous studies of hardware AO, we chose the speckle contrast as the new evaluation metric: $C = {\sigma / \mu }$, where $\sigma $ is the standard deviation and $\mu $ is the mean value. As shown in Fig. 6, CAO greatly reduces the speckle contrast with even better sharpness at the edges. We then evaluated the CAO method by imaging the large metal plate under occlusions of different roughness. Three sheets with RMS roughness of 56 µm, 158 µm, and 210 µm were used as the occlusion (see Ref. [13] for details). Since the center wavelength of the illumination is 1 mm, these sheets cause significant aberration and scattering (see Fig. S3 for more detailed information). As shown in Fig. 6(a), the speckle effect increases with the roughness of the occlusion for traditional NLOS methods. Before CAO correction, the speckle contrasts are 0.106, 0.241, and 0.386 for the slightly, medium, and very rough cases, respectively. After the optimization, the speckle contrast decreases to 0.038, 0.059, and 0.088, respectively (Fig. 6(c)). More importantly, CAO maintains the sharpness of the image, especially at the edges. It takes about 2.25 s per iteration and about 30 minutes to finish the optimization, which is much longer than for the point target because more iterations are needed. As shown in Fig. 7, the improvement is not significant if the number of iterations is too small (left). As the convergence curve drops exponentially, an appropriate number of iterations provides a good balance between computation time and performance.
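A sketch of this speckle-contrast metric (the averaging region here is an assumption and is not specified beyond the reconstructed image):

import numpy as np

def speckle_contrast(img_abs):
    """Speckle contrast C = sigma / mu of an intensity image; lower is better."""
    return img_abs.std() / img_abs.mean()

Since the searches sketched in Section 2 maximize their metric, the speckle contrast can be plugged in with a sign flip, e.g. metric=lambda vol: -speckle_contrast(vol), optionally restricted to the in-focus slice.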


Fig. 6. CAO correction decreases the speckle contrast. (a) Traditional NLOS imaging results. (b) Results with the CAO correction using the peak intensity as the evaluation metric. (c) Results with the CAO correction using the speckle contrast as the evaluation metric. (d) The estimated aberration phase in (c). Scale bar: 1 cm.



Fig. 7. The convergence curve of optimizing speckle contrast.


4. Conclusion

In conclusion, we have shown that CAO with iterative searching and updating can be applied to NLOS imaging to correct optical aberrations for better SNR and resolution, without requiring additional spatial light modulators or wavefront sensors. We validated the effectiveness of the CAO framework with a confocal THz NLOS imaging system and analyzed the influence of different evaluation metrics and search domains. We believe such a method can be generally applied to NLOS imaging systems of different bandwidths without any hardware modifications. More sophisticated evaluation metrics can be explored in the future for better performance, and better search algorithms or learning-based methods can be applied to increase the speed of the phase search. This work opens up a new horizon for NLOS imaging with digital manipulation of the transient data during synthesis, paving the way to practical applications of NLOS imaging in complicated environments.

Funding

National Natural Science Foundation of China (61927804).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012). [CrossRef]  

2. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012). [CrossRef]  

3. J. Bertolotti, E. Van Putten, C. Blum, A. Lagendijk, W. Vos, and A. Mosk, “Non-invasive imaging through opaque scattering layers,” in Adaptive Optics and Wavefront Control for Biological Systems, (International Society for Optics and Photonics, 2015), 93350W.

4. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555(7696), 338–341 (2018). [CrossRef]  

5. X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572(7771), 620–623 (2019). [CrossRef]  

6. C. Wu, J. Liu, X. Huang, Z.-P. Li, C. Yu, J.-T. Ye, J. Zhang, Q. Zhang, X. Dou, and V. K. Goyal, “Non–line-of-sight imaging over 1.43 km,” Proc. Natl. Acad. Sci. 118(10), e2024468118 (2021). [CrossRef]  

7. I. Freund, “Looking through walls and around corners,” Phys. A 168(1), 49–65 (1990). [CrossRef]  

8. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]  

9. F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 3222–3229.

10. M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53(2), 023102 (2014). [CrossRef]  

11. A. Lyons, F. Tonolini, A. Boccolini, A. Repetti, R. Henderson, Y. Wiaux, and D. Faccio, “Computational time-of-flight diffuse optical tomography,” Nat. Photonics 13(8), 575–579 (2019). [CrossRef]  

12. A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graph. 32(4), 1–8 (2013). [CrossRef]  

13. Z. Ou, J. Wu, H. Geng, X. Deng, and X. Zheng, “Confocal terahertz SAR imaging of hidden objects through rough-surface scattering,” Opt. Express 28(8), 12405–12415 (2020). [CrossRef]  

14. D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast fk migration,” ACM Trans. Graph. 38(4), 1–13 (2019). [CrossRef]  

15. X. Liu, S. Bauer, and A. Velten, “Phasor field diffraction based reconstruction for fast non-line-of-sight imaging systems,” Nat. Commun. 11(1), 1–13 (2020). [CrossRef]  

16. S. I. Young, D. B. Lindell, B. Girod, D. Taubman, and G. Wetzstein, “Non-line-of-sight surface reconstruction using the directional light-cone transform,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2020), pp. 1407–1416.

17. A. Zhang, J. Wu, J. Suo, L. Fang, H. Qiao, D. D.-U. Li, S. Zhang, J. Fan, D. Qi, and Q. Dai, “Single-shot compressed ultrafast photography based on U-net network,” Opt. Express 28(26), 39299–39310 (2020). [CrossRef]  

18. R. Davies and M. Kasper, “Adaptive optics for astronomy,” Annu. Rev. Astron. Astrophys. 50(1), 305–351 (2012). [CrossRef]  

19. M. J. Booth, “Adaptive optics in microscopy,” Philos. Trans. R. Soc., A 365(1861), 2829–2843 (2007). [CrossRef]  

20. M. J. Booth, “Wavefront sensorless adaptive optics for large aberrations,” Opt. Lett. 32(1), 5–7 (2007). [CrossRef]  

21. B. Mathilde, S. Jean-Francois, P. Mathias, and A. Emmanuel, “Terahertz adaptive optics with a deformable mirror,” Opt. Lett. 43(7), 1594 (2018). [CrossRef]  

22. S. G. Adie, B. W. Graf, A. Ahmad, P. S. Carney, and S. A. Boppart, “Computational adaptive optics for broadband optical interferometric tomography of biological tissue,” Proc. Natl. Acad. Sci. 109(19), 7175–7180 (2012). [CrossRef]  

23. J. Wu, X. Lin, Y. Liu, J. Suo, and Q. Dai, “Coded aperture pair for quantitative phase imaging,” Opt. Lett. 39(19), 5776–5779 (2014). [CrossRef]  

24. A. E. Tippie, A. Kumar, and J. R. Fienup, “High-resolution synthetic-aperture digital holography with digital phase and pupil correction,” Opt. Express 19(13), 12027–12038 (2011). [CrossRef]  

25. L. Waller, G. Situ, and J. W. Fleischer, “Phase-space measurement and coherence synthesis of optical beams,” Nat. Photonics 6(7), 474–479 (2012). [CrossRef]  

26. S. G. Adie, N. D. Shemonski, B. W. Graf, A. Ahmad, P. Scott Carney, and S. A. Boppart, “Guide-star-based computational adaptive optics for broadband interferometric tomography,” Appl. Phys. Lett. 101(22), 221117 (2012). [CrossRef]  

27. S. G. Adie and J. A. Mulligan, “Computational adaptive optics for broadband interferometric tomography of tissues and cells,” in Adaptive Optics and Wavefront Control for Biological Systems II, (International Society for Optics and Photonics, 2016), 971716.

28. J. Wu, Z. Lu, D. Jiang, Y. Guo, H. Qiao, Y. Zhang, T. Zhu, Y. Cai, X. Zhang, and K. Zhanghao, “Iterative tomography with digital adaptive optics permits hour-long intravital observation of 3D subcellular dynamics at millisecond scale,” Cell 184(12), 3318–3332.e17 (2021). [CrossRef]  

29. S. Dhillon, M. Vitiello, E. Linfield, A. Davies, M. C. Hoffmann, J. Booske, C. Paoloni, M. Gensch, P. Weightman, and G. Williams, “The 2017 terahertz science and technology roadmap,” J. Phys. D: Appl. Phys. 50(4), 043001 (2017). [CrossRef]  

30. V. Lorenzo, Z. Peter, and H. Erwin, “Topography of hidden objects using THz digital holography with multi-beam interferences,” Opt. Express 25(10), 11038–11047 (2017). [CrossRef]  

31. L. Valzania, P. Zolliker, and E. Hack, “Coherent reconstruction of a textile and a hidden object with terahertz radiation,” Optica 6(4), 518–523 (2019). [CrossRef]  

32. A. Redo-Sanchez, B. Heshmat, A. Aghasi, S. Naqvi, M. Zhang, J. Romberg, and R. Raskar, “Terahertz time-gated spectral imaging for content extraction through layered structures,” Nat. Commun. 7(1), 12665 (2016). [CrossRef]  

33. D. M. Sheen, D. L. McMakin, and T. E. Hall, “Three-dimensional millimeter-wave imaging for concealed weapon detection,” IEEE Trans. Microwave Theory Tech. 49(9), 1581–1592 (2001). [CrossRef]  

34. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).

35. M. J. Booth, D. Débarre, and T. Wilson, “Image-based wavefront sensorless adaptive optics,” Proc. SPIE 6711, 671102 (2007). [CrossRef]  

Supplementary Material (2)

Supplement 1: The simulation, Zernike-based search algorithm, and scattering analysis.
Visualization 1: The tuning process when searching the coefficient of the Zernike defocus term.
