Long range 3D imaging through atmospheric obscurants using array-based single-photon LiDAR


Abstract

Single-photon light detection and ranging (LiDAR) has emerged as a strong candidate technology for active imaging applications. In particular, its single-photon sensitivity and picosecond timing resolution permit high-precision three-dimensional (3D) imaging through atmospheric obscurants such as fog, haze, and smoke. Here we demonstrate an array-based single-photon LiDAR system capable of 3D imaging through atmospheric obscurants over long ranges. By optically optimizing the system and adopting photon-efficient imaging algorithms, we acquire depth and intensity images through dense fog equivalent to 2.74 attenuation lengths at distances of 13.4 km and 20.0 km. Furthermore, we demonstrate real-time 3D imaging of moving targets at 20 frames per second in misty weather over a range of 10.5 km. The results indicate great potential for practical applications such as vehicle navigation and target recognition in challenging weather.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The ability to see through obscurants (e.g., mist, fog, haze, and smoke) is of great importance in real-life applications. Examples include self-driving vehicles that could see farther in foggy weather, reducing the rate of traffic accidents [1,2], and optical devices for surveillance and reconnaissance with longer working distances in challenging weather conditions. However, the high levels of particulate absorption and scattering in atmospheric obscurants can greatly diminish image contrast, spatial resolution, and return signal strength [3,4], which poses an impediment to conventional optical imaging approaches. There has been growing interest in high-resolution imaging in degraded visual environments, and both passive and active approaches have been proposed to tackle this issue. Passive methods for imaging through atmospheric obscurants include haze-removal algorithms (e.g., dark channel prior [5] and data-driven approaches [6]), polarization-based dehazing [7], and thermal imaging [8]; most of this work is dedicated to reflectance recovery and depends on target characteristics. Active methods include millimeter-wave radar [9] and emerging terahertz imaging [10], which leverage the favorable propagation characteristics of mmWave and THz signals in low-visibility conditions. However, the resolution of these methods is relatively poor compared with optical approaches (mainly LiDAR), while the latter are plagued by the strong attenuation of light propagating in atmospheric obscurants. Researchers have made significant efforts to improve the fog-penetrating performance of LiDAR in applications including autonomous driving [11,12].

In recent years, single-photon LiDAR has emerged as a strong candidate technology for active imaging applications, particularly for 3D imaging in atmospheric obscurants [13]. Benefiting from single-photon sensitivity and picosecond temporal resolution, it can provide high-resolution [14] three-dimensional (3D) imaging in both the transverse and longitudinal dimensions in challenging scenarios, as successfully demonstrated in applications including long-range depth imaging [15–21], global topography [22–24], non-line-of-sight imaging [25–27], and underwater imaging [28,29]. Compared with the linear avalanche photodiode (APD) detectors adopted in conventional LiDAR systems, the single-photon detectors used in single-photon LiDAR (e.g., superconducting nanowire single-photon detectors (SNSPDs) [30–32] and single-photon avalanche diode (SPAD) detectors [33–35]) provide single-photon sensitivity, which tolerates significant laser attenuation in the transmission path and allows the use of low average optical power for target illumination. Moreover, the time-correlated single-photon counting (TCSPC) technique [36] employed in single-photon LiDAR offers picosecond time resolution in time-of-flight measurements, resulting in excellent surface-to-surface resolution. Furthermore, computational imaging algorithms for efficiently processing single-photon data have witnessed remarkable progress [37–45]. Several algorithms have shown good performance with low return signals and high background noise, offering a potential solution for data processing in atmospheric obscurants.

In consideration of these advantages, researchers have foreseen the potential of single-photon LiDAR for imaging through atmospheric obscurants and have carried out laboratory experiments. For instance, Satat et al. recovered objects 57 cm away in a fog chamber with a visibility of 37 cm using a silicon-based single-photon camera [46]. Tobin et al. implemented single-photon imaging of static and moving targets through fog at distances of 24 m and 150 m, respectively [47,48]. Shi et al. demonstrated a Bessel-beam single-photon LiDAR that achieved 3D imaging in a fog chamber at 31.5 m [49]. Zhang et al. realized outdoor 3D imaging at 1.4 km in a visibility of 1.7 km [50]. However, 3D imaging of static and moving targets in natural fog environments at long ranges has not been reported yet. In particular, long-range imaging faces the challenge of strong optical attenuation from both dynamic atmospheric obscurants and geometrical propagation losses.

In this paper, we demonstrate a single-photon LiDAR system that can obtain depth and intensity profiles of static and moving targets through high levels of atmospheric obscurants at long distances of up to 20.0 km. The main contributions of our work can be summarized as follows. First, to surmount the high levels of absorption and scattering in atmospheric obscurants, we develop a highly efficient bistatic optical transceiver based on an InGaAs/InP SPAD detector array and a comprehensive noise-suppression approach using filtering techniques in the spatial, spectral, and temporal domains. Second, we realize 3D imaging through natural fog at a range of 13.4 km when the visibility is less than 4 km. We obtain depth and intensity profiles of both static and moving targets at distances of 20.0 km and 10.5 km, respectively, in real outdoor mist conditions. Finally, we quantitatively calculate the atmospheric attenuation in our experiment and model the behavior of the system in denser fog to ascertain its practicality under more challenging conditions. The results indicate the great potential of single-photon LiDAR for high-resolution imaging through fog, providing a new perspective for autonomous driving, airborne navigation, and target recognition in adverse weather conditions.

2. Single-photon LiDAR setup

As shown in Fig. 1, the single-photon LiDAR transceiver is arranged in a bistatic optical configuration, which effectively mitigates backscattering noise when imaging in atmospheric obscurants. As the light source, we employ an all-fiber pulsed erbium-doped fiber laser operating at a wavelength of 1550 nm (1 ns pulse width, 25 kHz repetition rate). The maximum optical power is 1.5 W, equivalent to 60 $\mu$J per pulse. Triggered by an arbitrary function generator (AFG), the laser output is delivered through fiber to a collimator (f = 6.37 mm) and then flood-illuminates the scene with a divergence angle of 4 mrad, slightly larger than the field of view (FoV).


Fig. 1. Schematic diagram of the experimental setup. (a) The system comprises a pulsed fiber laser source, an InGaAs/InP SPAD detector array, a custom-designed bistatic transceiver unit, and electronic modules for data processing. The optical components include an aspheric lens, a long-pass filter, a band-pass filter, a fiber collimator, and a Cassegrain telescope. A standard astronomical camera is mounted on the telescope for target alignment and weather monitoring. AFG, arbitrary function generator; TDC, time-to-digital converter. (b) A photograph of the whole single-photon LiDAR, showing the laser, AFG, and the optical transceiver that constitute the system.


The receiver consists of a commercial Cassegrain telescope with a modest 280 mm aperture and a compact optical platform on which the lens, optical filters, and the SPAD camera are assembled (see Fig. 1(a)). The aspheric lens in the optical path after the telescope is used for FoV adjustment by changing the effective focal length. In the experiment, the FoV of a single pixel is set to about 60 $\mu$rad; that is, the total FoV of the system is 3.8 mrad (effective focal length = 840 mm). The collected return photons pass through the telescope, lens, and optical filters in sequence and are coupled onto the sensitive area of the single-photon detector array. The SPAD array is operated at a frame rate of 25 kHz and is also triggered by the AFG to guarantee time synchronization with the laser source. The built-in time-to-digital converter (TDC) module measures the time difference between the start signal and the photon events at every pixel, and the results are transferred to the computer via a CameraLink cable. As shown in Fig. 1(b), the transceiver components are mounted on a homemade high-precision two-axis rotating stage for accurate target pointing. A standard astronomical camera (ASI294MC, f = 280 mm) is mounted in parallel on the telescope to provide observation of the imaging FoV and to monitor weather conditions during experiments. A summary of the key system parameters is given in Table 1.

3. System optimization for imaging through atmospheric obscurants

When single-photon LiDAR systems operate in atmospheric obscurants, the main factors that deteriorate image quality and limit the working distance are the high levels of particulate absorption and scattering in fog or smoke, which result in far fewer returned photons and strong backscattering noise. To facilitate 3D imaging over long ranges in atmospheric obscurants, we develop high-efficiency optical devices and a comprehensive noise-suppression method to improve the collection efficiency and decrease the background noise.

3.1 Wavelength of 1550 nm

The four main wavelengths studied for land-based, sub-10 km sensing LiDAR have been near 1 $\mu$m, 1.5 $\mu$m, 2 $\mu$m, and 10.6 $\mu$m [51]. Among these, there are mature and low-cost manufacturing capabilities for Si-based and InGaAs single-photon detectors operating in the 1 $\mu$m and 1.5 $\mu$m bands, which offer higher sensitivity for signal collection. Compared with shorter wavelengths in the visible and near-infrared (e.g., 905 nm and 1064 nm) bands, the use of 1550 nm in LiDAR systems has several advantages, including reduced in-band solar background and higher atmospheric transmission. Moreover, being outside the retinal hazard region (400–1400 nm) [52], the 1550 nm operating wavelength enables the use of higher optical power for illumination while remaining eye-safe. In particular, light propagation at 1550 nm may suffer less attenuation from certain atmospheric obscurants (notably haze, smoke, and hazy fog) than at shorter wavelengths, which is beneficial for imaging in adverse weather conditions. In our simulation of atmospheric transmission using MODTRAN [53], the transmission at 1550 nm is more than an order of magnitude higher than at 905 nm and 1064 nm in hazy fog with a visibility of 4 km over a 10 km distance.

3.2 InGaAs/InP SPAD detector array

The high photon sensitivity of SPAD detectors can tackle the strong attenuation of light propagating in adverse weather conditions. Here, we deploy an InGaAs/InP 64 $\times$ 64 SPAD detector array (manufactured by Chongqing Optoelectronics Research Institute) to enable real-time 3D imaging in fog at long distances. The pixel pitch of the focal plane is 50 $\mu$m, so the active area of the sensor has dimensions of 3.2 mm $\times$ 3.2 mm. The sensor is suited to single-photon detection in the short-wavelength infrared (SWIR) band, with a detection efficiency of 20$\%$ at 1550 nm and a dark count rate of 2 kHz. The SPAD array is operated at a frame rate of 25 kHz, triggered by the AFG for synchronization with the pulsed laser source. Each pixel is integrated with a separate TDC unit with a time bin resolution of 1 ns, corresponding to a 15 cm depth resolution. Each frame of the detector output records, for every pixel, the time of flight of the first photon detected within the 4 $\mu$s exposure of that frame. A microlens array in front of the detector plane improves the fill factor, which approaches 60$\%$ in tests, thus diminishing the loss of laser power under flood illumination.
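
To make this first-photon acquisition model concrete, a toy simulation of the readout is sketched below: within each 4 $\mu$s frame, every pixel reports only the time bin of its first detected photon. The per-bin Poisson rates, array shapes, and the $-1$ no-detection flag are our assumptions for illustration, not the camera firmware.

```python
# Toy model of the first-photon-per-frame readout described above. Each 4 us
# frame yields, per pixel, the index of the first time bin containing a photon
# (or -1 if none). Rates and shapes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def first_photon_frame(rates):
    """rates: (64, 64, T) expected photon counts per 1 ns bin per frame."""
    counts = rng.poisson(rates)        # photon events in each time bin
    hit = counts > 0
    first = np.argmax(hit, axis=2)     # index of the first occupied bin
    first[~hit.any(axis=2)] = -1       # mark pixels with no detection
    return first
```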

3.3 Comprehensive noise-suppression methods

When imaging in atmospheric obscurants, the noise comes from internal backscattering, atmospheric backscattering, and solar background, of which the backscattering from obscurants is dominant. To eliminate these unwanted noise photons, we employ comprehensive noise-suppression approaches in the spatial, spectral, and temporal domains, which contribute to an improved signal-to-background ratio (SBR). The system is implemented in a bistatic configuration, which effectively suppresses backscattered noise from the local optical components and the near-field atmosphere. As a form of spatial filtering, the FoV of each individual pixel is designed to be about 60 $\mu$rad, reducing the number of background photons incident at large angles. In the temporal domain, the backscattering noise from fog can be approximated by a Gamma distribution [46]. In long-range applications, because the target is far from the LiDAR system, the signal lies on the decaying tail of the background distribution, even though the peak of the backscattered noise is higher than that of the signal. In the experiment, we obtain the distribution of echo photons within the whole pulse period and then extract the signal by exploiting the temporal separation between signal and backscattered noise. In the spectral domain, we adopt a series of optical filters comprising a longpass filter with a cut-on wavelength of 1500 nm and a 1.8 nm full-width at half-maximum (FWHM) bandpass filter centered on 1550.6 nm. This filter combination provides sufficient suppression of solar background noise and enables daytime operation.
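
A minimal sketch of this temporal extraction step is given below: it locates the signal peak on the decaying tail of the backscatter distribution and keeps only a narrow gate of bins around it. The bin width, dead-zone length, gate size, and median-filter width are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of temporal gating: skip the strong near-field backscatter at the
# head of the histogram, subtract a slowly varying background estimate, and
# gate around the remaining peak (the target return). Parameters are assumed.
import numpy as np
from scipy.ndimage import median_filter

def temporal_gate(counts, bin_ns=1.0, skip_ns=20_000, gate_bins=400):
    """counts: histogram summed over all pixels; returns a [lo, hi) bin gate."""
    skip = int(skip_ns / bin_ns)         # dead zone over the backscatter bump
    tail = counts[skip:].astype(float)
    bg = median_filter(tail, size=201)   # estimate of the decaying tail
    peak = skip + int(np.argmax(tail - bg))
    return peak - gate_bins // 2, peak + gate_bins // 2
```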

3.4 Photon-efficient imaging algorithms

For data processing, we adopted two different state-of-the-art algorithms for photon-efficient 3D imaging, both of which have demonstrated good results in long-range scenarios where signals are limited and noise levels are high. These algorithms are expected to cope well with the strong noise caused by scattering, and we validated their performance experimentally in scenarios with high levels of atmospheric obscurants.

The first is the 3D deconvolution algorithm based on a convex-optimization solver [19,38]. The image-formation process is abstracted as a convolutional model, and a 3D spatiotemporal matrix is introduced to solve for reflectivity and depth simultaneously. This incorporates the correlations between reflectivity and depth into the optimization, which significantly reduces the required number of signal photons. The procedure can be divided into two steps: (i) a global gating approach to unmix signal from noise, and (ii) an inverse 3D deconvolution based on a modified SPIRAL-TAP solver. This algorithm is tailored for long-range imaging where the SBR is extremely low and has shown good performance in a variety of scenes.
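
For intuition, a minimal sketch of the convolutional observation model underlying this approach follows (the forward model only, not the SPIRAL-TAP solver itself); the array shapes and the constant background level are our assumptions.

```python
# Minimal sketch of the convolutional observation model behind the 3D
# deconvolution approach [19,38]: the expected histogram cube is built from
# depth-shifted, reflectivity-scaled copies of the pulse shape plus background.
import numpy as np

def forward_model(reflectivity, depth_bin, pulse, num_bins, background):
    """Return per-pixel Poisson rates: measured counts ~ Poisson(rates)."""
    H, W = reflectivity.shape
    rates = np.full((H, W, num_bins), background, dtype=float)
    P = len(pulse)
    for i in range(H):
        for j in range(W):
            t0 = int(depth_bin[i, j])          # depth encoded as a bin shift
            hi = min(t0 + P, num_bins)
            rates[i, j, t0:hi] += reflectivity[i, j] * pulse[:hi - t0]
    return rates
```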

The second algorithm is based on a non-local neural network [45], which exploits the fact that photon-efficient measurements contain long-range correlations in both the spatial and temporal dimensions. To handle measurements with extremely low signal counts and low SBR, the network modifies an advanced denoising model, the dense dilated fusion network, by integrating a non-local operator that exploits these long-range correlations. As shown in [45], the non-local network trained on simulated data generalizes well to different real-world imaging systems, and its performance is competitive with other state-of-the-art methods, especially in scenarios with extremely low SBR. Moreover, in contrast to the optimization method, the processing time is reduced owing to the pretrained network, indicating potential for real-time photon-efficient imaging.
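
For readers unfamiliar with non-local operators, the sketch below follows the generic self-attention-style formulation of non-local neural networks (after Wang et al.); the 3D convolution layout and channel sizes are our assumptions for illustration and do not reproduce the published dense dilated fusion architecture [45].

```python
# Generic non-local (self-attention) block over a 3D feature volume: every
# position aggregates a weighted sum of all other positions, capturing the
# long-range spatio-temporal correlations discussed above. For illustration
# only; practical networks restrict N = D*H*W, which is large for full cubes.
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.theta = nn.Conv3d(channels, channels // 2, kernel_size=1)
        self.phi = nn.Conv3d(channels, channels // 2, kernel_size=1)
        self.g = nn.Conv3d(channels, channels // 2, kernel_size=1)
        self.out = nn.Conv3d(channels // 2, channels, kernel_size=1)

    def forward(self, x):                      # x: (B, C, D, H, W)
        B, C, D, H, W = x.shape
        q = self.theta(x).flatten(2)           # (B, C//2, N), N = D*H*W
        k = self.phi(x).flatten(2)
        v = self.g(x).flatten(2)
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)        # (B, N, N)
        y = (v @ attn.transpose(1, 2)).reshape(B, C // 2, D, H, W)
        return x + self.out(y)                 # residual connection
```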

4. Experimental results

Using our array-based single-photon LiDAR system, we present in-depth experiments imaging a variety of scenes in atmospheric obscurants at different distances. Most of our experiments were conducted in the daytime in an urban environment in Shanghai. The weather conditions during data acquisition included fog and mist (with a minimum visibility of 4 km), and we characterize the atmospheric attenuation in the next section to quantitatively analyze the capability of our system for imaging through fog. In the experiments, we performed blind LiDAR measurements without any prior information about the temporal location of the returned signals. Targets with various spatial distributions and distances in different atmospheric obscurants were selected to show the all-around imaging capability of the system. Benefiting from the fast data acquisition of the SPAD detector array, we also achieved dynamic imaging at 20 frames per second for a target over 10.5 km away in mist. For data processing, we chose the two state-of-the-art photon-efficient algorithms described in Section 3.4, which are tailored for reconstructing single-photon imaging data where the return signal and the SBR are very low. Three typical results are shown and analyzed below.

First, to demonstrate the capability of imaging through high levels of obscurants at long ranges, we took images of the ICBC building 13.4 km away in Pudong New Area, Shanghai. The experiment was carried out on 10 January, 2022 at around 15:00 (UTC+8). The weather condition was fog ($\sim$ 60 $\%$ humidity) and the visibility was less than 4 km. The experimental setup, a visible-band photograph taken during the experiment, and a close-up picture of the target are shown in Fig. 2(a). In the photograph taken by the standard astronomical camera on the left, the visible buildings are located 2.5 km away, while the target in the red square is completely invisible. We used our single-photon LiDAR to perform 3D imaging with a total acquisition time of 10 s, and the reconstructed depth map using the maximum-likelihood (ML) method is shown in Fig. 2(b). From the results, we can clearly see the letters outside the building in depth, demonstrating high-resolution 3D imaging capability in fog. To validate the photon-efficient algorithms, data with 200 ms acquisition time were extracted, and the results are shown in Figs. 2(c) and 2(d). Quantitative performance is given in terms of peak signal-to-noise ratio (PSNR), with the long-acquisition result in Fig. 2(b) chosen as the ground truth. Fig. 2(c) is recovered by the convex-optimization approach, while Fig. 2(d) is reconstructed by the end-to-end non-local neural network. Notably, for the neural network method, the average data-processing time is only 90 ms on an NVIDIA 2060 Super GPU, which indicates future applications for real-time single-photon imaging in challenging scenarios. The average signal level is $\sim$ 2.87 photons per pixel (PPP) and the SBR is 0.46. In the reconstructed depth maps, we can still make out the 3D profile of the surface of the building, although the letters and the logo are partly blurred due to the low signal level.
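
As a minimal sketch of how such a depth map can be formed from the histogram cube (our illustrative reading of the ML reconstruction, not the authors' exact implementation), each pixel's histogram can be cross-correlated with the measured pulse shape and the argmax taken:

```python
# Matched-filter depth estimate per pixel, which approximates the maximum-
# likelihood solution for a known pulse shape and negligible background.
# A 1 ns time bin corresponds to 0.15 m of depth (Table 1).
import numpy as np

def matched_filter_depth(hist, pulse, bin_m=0.15):
    """hist: (H, W, T) photon counts; pulse: (P,) measured pulse shape."""
    scores = np.apply_along_axis(
        lambda h: np.correlate(h.astype(float), pulse, mode="valid"), 2, hist)
    return scores.argmax(axis=2) * bin_m       # depth offset in meters
```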


Fig. 2. Illustration of 3D imaging through fog over 13.4 km. (a) Satellite image of the experimental setup in Shanghai. On the left is the photograph taken by the astronomical camera during the experiment, in which the red rectangle indicates the LiDAR's FoV. On the right is a photo of the target building taken nearby. (b) The reconstructed depth results with a total data acquisition time of 10 s. The visibility during the experiment is 4 km, while the stand-off distance between the target and our system is 13.4 km. (c) The reconstructed depth map using the convex-optimization based algorithm with 200 ms data acquisition time. The average signal level is 2.87 photons per pixel. (d) The reconstructed depth map using the non-local neural network with 200 ms data acquisition time. Note that the reconstructed depth map shown in (b) is selected as the ground truth for the calculation of PSNR.


To validate the capability of imaging in fog at longer distances, we selected a pier under construction on the sea 20.0 km away as the target. The experiment was carried out at 6 pm in mist with an atmospheric visibility of 8 km. Two visible-band photographs, shown in Figs. 3(a) and 3(b), were taken during the experiment and in clear weather, respectively. In the experiment, we used the rotating stage to scan the target and obtain a 128 $\times$ 64 image with a total acquisition time of 20 s (10 s acquisition time for each FoV of 64 $\times$ 64 pixels). The recovered depth map using the ML method and the 3D profile of the pier are shown in Figs. 3(c) and 3(e), respectively; the three-dimensional distribution of different parts of the pier can be distinguished precisely, especially the crane on the top. We then extracted 5000 frames (0.4 s total acquisition time) from the captured dataset (250000 frames) and deployed the convex-optimization algorithm for image reconstruction. The timing histograms of the extracted frames are shown in Fig. 4. For data preprocessing, we adopted a peak-finding algorithm on the histogram of all pixels and extracted 400 timing bins around the signal peak for subsequent analysis. The reconstructed depth profile is shown in Fig. 3(d), with the PSNR calculated against the ground truth in Fig. 3(c). The average signal level over the entire scene is 2.4 photons per pixel and the SBR is 0.12. The outline of the pier can still be resolved clearly owing to the photon-efficient algorithm.


Fig. 3. Depth and 3D profiles of a pier in mist over 20.0 km. (a) The visible-band photograph of the target during the experiment; the atmospheric visibility is 8 km. (b) The photograph of the same FoV in clear weather. (c) The reconstructed depth profile with a total data acquisition time of 20 s. (d) The reconstructed depth profile of the pier with 400 ms data acquisition time. (e) 3D demonstration of the reconstructed result in (c). The reconstructed depth map shown in (c) is selected as the ground truth for the calculation of PSNR.



Fig. 4. The timing histogram of the pier with 0.4 s data acquisition time. On the left is the timing histogram of all pixels (128 $\times$ 64 pixels). The horizontal axis represents the timing bin index, which can be converted to time with 1 ns precision. The background level is non-uniform due to the back scattering in fog. On the right is the timing histogram of a single pixel on the body of the pier.


Benefiting from the SPAD detector array, our LiDAR system can also image moving targets in fog. Here, a rotating wind turbine at a distance of 10.5 km was captured with our array-based single-photon LiDAR system in mist. As shown in Fig. 5(a), we placed our imaging device at a hydrographic station on the riverbank while the target was located on the other side of the river. The photograph taken by the astronomical camera (f = 280 mm) shows that the wind turbine is indistinct in the mist. In the experiment, we collected data of the scene with a total acquisition time of 2 s (50k frames) and combined every 1250 frames (50 ms) in sequence for depth map reconstruction. After reconstruction, we obtained a video of the rotating wind turbine at 20 frames per second (fps), from which four frames were extracted at intervals of 10 frames (shown in Fig. 5(b)). The average signal level per frame is 0.55 PPP and the SBR is 0.56. From the results, it can be seen that the blades of the wind turbine rotate counterclockwise and that there is an angle between the rotation plane and the plane perpendicular to the laser incidence direction (Fig. 5(c)).
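
A sketch of this frame-binning scheme follows; the array layout and the $-1$ no-detection flag are our assumptions, while the block size (1250 frames per 50 ms at 25 kHz) and the 40 $\mu$s pulse period (40,000 bins of 1 ns) follow from the system parameters stated above.

```python
# Sketch of the binning behind the 20 fps sequence: 25 kHz camera frames are
# grouped into blocks of 1250 (50 ms) and each block is accumulated into a
# per-pixel timing histogram before depth reconstruction.
import numpy as np

def histogram_block(tof_frames, num_bins=40_000):
    """tof_frames: (1250, 64, 64) first-photon bin indices; -1 = no photon."""
    hist = np.zeros((64, 64, num_bins), dtype=np.uint16)
    for frame in tof_frames:
        ii, jj = np.nonzero(frame >= 0)
        hist[ii, jj, frame[ii, jj]] += 1   # at most one event per pixel/frame
    return hist

# Reconstructing each successive block yields one depth frame every 50 ms,
# i.e. the 20 frames-per-second video described above.
```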


Fig. 5. Illustration of 3D imaging of a rotating wind turbine in mist over 10.5 km. (a) Satellite image of the experimental setup in Shanghai. Our system was placed at a hydrographic station on the riverbank. On the right is the photograph of the target, located on the other side of the river, taken during the experiment. (b) Reconstructed depth profiles of the rotating wind turbine. Each image is captured with 50 ms data acquisition time, and the time interval between the four frames is 450 ms. The average signal level per frame is 0.55 PPP and the SBR is 0.56. It can be seen that the blades rotate counterclockwise. (c) 3D demonstration of the reconstructed result of the last image in (b). There is an angle between the rotation plane and the plane perpendicular to the laser incidence direction.


5. Estimation of atmospheric attenuation and the penetration capability

In this section, we first make theoretical estimates by analyzing the acquired data and the LiDAR equation to quantitatively assess the atmospheric attenuation in our experiment. As the level of obscurants in the experiment is within the capability of our system, we then deduce the system's penetration limit through atmospheric obscurants.

According to the Beer-Lambert law [54], the transmittance $T$ of light propagating in an attenuating medium can be expressed as follows:

$$T = \frac{P_R}{P_T} = e^{-\alpha L}$$
where $P_R$ is the received optical power, $P_T$ is the emitted optical power, $\alpha$ is the attenuation coefficient of the medium, and $L$ denotes the propagation distance in the medium. The term $\alpha L$ is also called the attenuation length (AL) and is denoted by $N_{AL}$. In the scenario of single-photon imaging in atmospheric obscurants, the number of attenuation lengths for the one-way distance ($R$) between the transceiver and the target can be calculated as follows (the factor of 1/2 accounts for the round-trip propagation of the laser pulse):
$$N_{AL} = \alpha R = \frac{1}{2} \ln{(\frac{n_0}{n_1})}$$
where $\alpha$ denotes the attenuation coefficient of the atmospheric obscurants, $n_0$ is the number of photons that would be received without atmospheric attenuation, and $n_1$ is the number of photons collected in atmospheric obscurants. Owing to the limitations of atmospheric conditions in the field experiment, the number of photons collected without atmospheric attenuation cannot be measured directly, so we instead calculate the number of photons that would be collected in the absence of atmospheric attenuation using the LiDAR equation. The number of photons collected in atmospheric obscurants ($n_1$) can be calculated from the data acquired in the experiment.
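
Evaluating Eq. (2) is straightforward; the sketch below shows the computation with an illustrative photon ratio.

```python
# Direct evaluation of Eq. (2). In the paper, n0 comes from the LiDAR equation
# (Eq. (3)) and n1 from the measured data; the ratio here is illustrative.
import math

def attenuation_lengths(n0, n1):
    return 0.5 * math.log(n0 / n1)

# A ratio n0/n1 of about 240 corresponds to the 2.74 attenuation lengths
# reported later in this section.
print(round(attenuation_lengths(240.0, 1.0), 2))   # -> 2.74
```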

The LiDAR equation, in the form of the received number of photons ($N_r$), can be expressed as [55]:

$$N_{r} = \frac{P_t\lambda}{hc}t\frac{\rho A \theta_r^2}{\pi R^2 \theta_t^2}\eta_{det}\eta_{f}\eta_o\eta_{atm}$$
where $P_t$ is the transmitted optical power, $\lambda$ is the laser wavelength, $h$ is Planck's constant, $c$ is the speed of light, and $t$ is the acquisition time. $\rho$ and $R$ denote the reflectivity and the distance of the target, respectively, $A$ denotes the area of the collection lens, and $\theta _r$ and $\theta _t$ represent the FoV and the divergence angle of the laser. The terms $\eta _{det}$, $\eta _{f}$, $\eta _{o}$, and $\eta _{atm}$ represent the detector efficiency, the fill factor of the SPAD array, the optical efficiency, and the atmospheric transmittance, respectively. By substituting the parameters of our system into Eq. (3), the number of photons that would be collected, excluding $\eta _{atm}$, can be calculated precisely.
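
As a worked sketch, the snippet below evaluates Eq. (3) with parameter values stated in the text (1.5 W average power, 10 s acquisition, 280 mm aperture, 13.4 km range, 60 $\mu$rad pixel FoV, 4 mrad divergence, $\eta_{det}$ = 0.2, $\eta_f$ = 0.6, $\eta_o$ = 0.8, $\rho$ = 0.2). Treating both angles as full angles is our assumption; the convention cancels in the ratio $\theta_r^2/\theta_t^2$.

```python
# Sketch of the photon-budget calculation in Eq. (3) with the text's values.
import math

H_PLANCK, C = 6.626e-34, 2.998e8

def expected_photons(P_t, lam, t, rho, D, R, theta_r, theta_t,
                     eta_det, eta_f, eta_o, eta_atm=1.0):
    A = math.pi * (D / 2) ** 2                   # collection aperture area
    emitted = P_t * lam / (H_PLANCK * C) * t     # photons emitted in time t
    geometry = rho * A * theta_r**2 / (math.pi * R**2 * theta_t**2)
    return emitted * geometry * eta_det * eta_f * eta_o * eta_atm

n0 = expected_photons(P_t=1.5, lam=1550e-9, t=10, rho=0.2, D=0.28, R=13.4e3,
                      theta_r=60e-6, theta_t=4e-3,
                      eta_det=0.20, eta_f=0.60, eta_o=0.80)
# n0 is on the order of 5e4 photons per pixel; with the measured n1 ~ 235,
# Eq. (2) gives roughly 2.7 attenuation lengths, matching the estimate below.
```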

Here we consider the experimental scenario in thick fog over 13.4 km, described in Section 4, to estimate the number of attenuation lengths between the system and the target. The key system parameters are listed in Table 1. The optical efficiency ($\eta _o$) of our system is measured to be 80$\%$, while the reflectivity of the target is taken to be 20$\%$; the reflectivity is estimated from measurement data for different materials reported in the literature [56,57]. After substituting the parameters into Eq. (3), the number of photons that would be collected without $\eta _{atm}$ ($n_0$) is calculated, while $n_1$ is obtained by averaging the photon counts over non-empty pixels for a 10 s data acquisition time. Substituting $n_0$ and $n_1$ into Eq. (2), the one-way atmospheric attenuation is estimated to be 2.74 attenuation lengths ($N_{AL}$ = 2.74).


Table 1. Summary of the main system parameters

We further verified this result by simulating the atmospheric transmittance using MODTRAN, with the atmosphere model set to match our experimental conditions. In the simulation, when the visibility is 4.2 km (close to the 4 km visibility in the experiment), the one-way number of attenuation lengths is 2.73, in good agreement with the value derived above from the experimental data and the LiDAR equation.

After quantifying the atmospheric attenuation in the experiment, we now deduce the penetration limit of our system through atmospheric obscurants, that is, the minimum visibility at which 3D imaging over the same distance remains possible. As shown by the recovered results in Section 4, an average signal level of 3 PPP is adequate for image reconstruction. In the experiment, the average number of photons per pixel acquired at $N_{AL(1550 nm)}$ = 2.74 is 235 for a total acquisition time of 10 s, well beyond the number required by our photon-efficient algorithm. By replacing $n_1$ in Eq. (2) with the required number of photons, we obtain a maximum attenuation length of $N_{AL(max)}$ = 4.92. Using the same atmospheric parameters in MODTRAN as described above, the visibility corresponding to $N_{AL(max)}$ is simulated to be 2.3 km, showing that the distance between the system and the target (13.4 km) can be more than 5.5 times the minimum visibility.
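
For concreteness, the arithmetic behind this limit follows directly from Eq. (2): holding $n_0$ fixed and reducing the collected signal from the measured 235 PPP to the required 3 PPP adds attenuation lengths as
$$N_{AL(max)} = N_{AL} + \frac{1}{2}\ln{\left(\frac{235}{3}\right)} \approx 2.74 + 2.18 = 4.92.$$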

6. Discussion and conclusion

In Table 2, we summarize recent experiments on single-photon imaging through atmospheric obscurants. The atmospheric attenuation in our experiment is 2.74 attenuation lengths, with an additional $\sim$2.5 attenuation lengths of geometrical attenuation at such long distances compared with a 1 km scenario. Our experiment demonstrates fast 3D imaging through atmospheric obscurants over long ranges.


Table 2. Summary of recent experiments on single-photon imaging through atmospheric obscurants

To sum up, we have experimentally demonstrated a single-photon imaging system capable of 3D imaging through atmospheric obscurants over 20.0 km. The system is composed of a 64 $\times$ 64 InGaAs/InP SPAD detector array and a pulsed laser operating at 1550 nm. Owing to the optimizations in our system setup and the adopted photon-efficient algorithms, we performed three-dimensional imaging at up to 2.74 attenuation lengths in fog over a range of 13.4 km and obtained depth and intensity profiles of static and moving targets at distances of 20.0 km and 10.5 km, respectively, in misty weather conditions. An estimate of the atmospheric attenuation and the penetration capability is presented by employing the LiDAR equation and our experimental data, which confirms the advantage of the 1550 nm wavelength and yields a maximum penetration capability of 4.92 attenuation lengths in fog over 13.4 km. Overall, our results may provide a practical approach for long-range imaging in adverse weather conditions, which would be useful in autonomous driving, remote sensing, and target detection and recognition.

Funding

Innovation Program for Quantum Science and Technology (2021ZD0300300); National Natural Science Foundation of China (62031024); National Natural Science Foundation of China (12104443); Shanghai Science and Technology Development Foundation (22JC1402900); Shanghai Municipal Science and Technology Major Project (2019SHZDZX01); Program of Shanghai Academic Research Leader (21XD1403800); Special Project for Research and Development in Key areas of Guangdong Province (2020B0303020001).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. B. Schwarz, “Mapping the world in 3D,” Nat. Photonics 4(7), 429–430 (2010). [CrossRef]  

2. R. H. Rasshofer, M. Spies, and H. Spies, “Influences of weather phenomena on automotive laser radar systems,” Adv. Radio Sci. 9, 49–60 (2011). [CrossRef]  

3. C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (John Wiley & Sons, 2008).

4. S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” Int. J. Comput. Vis. 48(3), 233–254 (2002). [CrossRef]  

5. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011). [CrossRef]  

6. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “DehazeNet: an end-to-end system for single image haze removal,” IEEE Trans. on Image Process. 25(11), 5187–5198 (2016). [CrossRef]  

7. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42(3), 511–525 (2003). [CrossRef]  

8. K. Beier and H. Gemperlein, “Simulation of infrared detection range at fog conditions for enhanced vision systems in civil aviation,” Aerosp. Sci. Technol. 8(1), 63–71 (2004). [CrossRef]  

9. J. Guan, S. Madani, S. Jog, S. Gupta, and H. Hassanieh, “Through fog high-resolution imaging using millimeter wave radar,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2020), pp. 11464–11473.

10. L. Daniel, D. Phippen, E. Hoare, A. Stove, M. Cherniakov, and M. Gashinova, “Low-THz radar, lidar and optical imaging through artificially generated fog,” IET Conference Proceedings (2017).

11. M. Kutila, P. Pyykönen, H. Holzhüter, M. Colomb, and P. Duthon, “Automotive lidar performance verification in fog and rain,” in 21st International Conference on Intelligent Transportation Systems (IEEE, 2018), pp. 1695–1701.

12. J. Rapp, J. Tachella, Y. Altmann, S. McLaughlin, and V. K. Goyal, “Advances in single-photon lidar for autonomous vehicles: Working principles, challenges, and recent advances,” IEEE Signal Process. Mag. 37(4), 62–71 (2020). [CrossRef]  

13. A. M. Wallace, A. Halimi, and G. S. Buller, “Full waveform lidar for adverse weather conditions,” IEEE Trans. Veh. Technol. 69(7), 7064–7077 (2020). [CrossRef]  

14. Z.-P. Li, X. Huang, P.-Y. Jiang, Y. Hong, C. Yu, Y. Cao, J. Zhang, F. Xu, and J.-W. Pan, “Super-resolution single-photon imaging at 8.2 kilometers,” Opt. Express 28(3), 4076–4087 (2020). [CrossRef]  

15. M. Laurenzis, F. Christnacher, and D. Monnin, “Long-range three-dimensional active imaging with superresolution depth mapping,” Opt. Lett. 32(21), 3146–3148 (2007). [CrossRef]  

16. A. McCarthy, N. J. Krichel, N. R. Gemmell, X. Ren, M. G. Tanner, S. N. Dorenbos, V. Zwiller, R. H. Hadfield, and G. S. Buller, “Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection,” Opt. Express 21(7), 8904–8915 (2013). [CrossRef]  

17. A. M. Pawlikowska, A. Halimi, R. A. Lamb, and G. S. Buller, “Single-photon three-dimensional imaging at up to 10 kilometers range,” Opt. Express 25(10), 11919–11931 (2017). [CrossRef]  

18. Z. Li, E. Wu, C. Pang, B. Du, Y. Tao, H. Peng, H. Zeng, and G. Wu, “Multi-beam single-photon-counting three-dimensional imaging lidar,” Opt. Express 25(9), 10189–10195 (2017). [CrossRef]  

19. Z.-P. Li, X. Huang, Y. Cao, B. Wang, Y.-H. Li, W. Jin, C. Yu, J. Zhang, Q. Zhang, C.-Z. Peng, F. Xu, and J.-W. Pan, “Single-photon computational 3D imaging at 45 km,” Photonics Res. 8(9), 1532–1540 (2020). [CrossRef]  

20. P.-Y. Jiang, Z.-P. Li, and F. Xu, “Compact long-range single-photon imager with dynamic imaging capability,” Opt. Lett. 46(5), 1181–1184 (2021). [CrossRef]  

21. Z.-P. Li, J.-T. Ye, X. Huang, P.-Y. Jiang, Y. Cao, Y. Hong, C. Yu, J. Zhang, Q. Zhang, C.-Z. Peng, F. Xu, and J.-W. Pan, “Single-photon imaging over 200 km,” Optica 8(3), 344–349 (2021). [CrossRef]  

22. J. J. Degnan, “Photon-counting multikilohertz microlaser altimeters for airborne and spaceborne topographic measurements,” J. Geodyn. 34(3-4), 503–549 (2002). [CrossRef]  

23. R. M. Marino and W. R. Davis, “Jigsaw: a foliage-penetrating 3D imaging laser radar system,” Lincoln Laboratory Journal 15, 23–36 (2005).

24. T. Markus, T. Neumann, A. Martino, et al., “The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2): science requirements, concept, and implementation,” Remote Sens. Environ. 190, 260–273 (2017). [CrossRef]  

25. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555(7696), 338–341 (2018). [CrossRef]  

26. X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. Huu Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572(7771), 620–623 (2019). [CrossRef]  

27. C. Wu, J. Liu, X. Huang, Z.-P. Li, C. Yu, J.-T. Ye, J. Zhang, Q. Zhang, X. Dou, V. K. Goyal, F. Xu, and J.-W. Pan, “Non-line-of-sight imaging over 1.43 km,” Proc. Natl. Acad. Sci. 118(10), e2024468118 (2021). [CrossRef]  

28. A. Maccarone, A. McCarthy, X. Ren, R. E. Warburton, A. M. Wallace, J. Moffat, Y. Petillot, and G. S. Buller, “Underwater depth imaging using time-correlated single-photon counting,” Opt. Express 23(26), 33911–33926 (2015). [CrossRef]  

29. A. Maccarone, F. M. Della Rocca, A. McCarthy, R. Henderson, and G. S. Buller, “Three-dimensional imaging of stationary and moving targets in turbid underwater environments using a single-photon detector array,” Opt. Express 27(20), 28437–28456 (2019). [CrossRef]  

30. H. Zhou, Y. He, L. You, S. Chen, W. Zhang, J. Wu, Z. Wang, and X. Xie, “Few-photon imaging at 1550 nm using a low-timing-jitter superconducting nanowire single-photon detector,” Opt. Express 23(11), 14603–14611 (2015). [CrossRef]  

31. G. G. Taylor, D. Morozov, N. R. Gemmell, K. Erotokritou, S. Miki, H. Terai, and R. H. Hadfield, “Photon counting lidar at 2.3 μm wavelength with superconducting nanowires,” Opt. Express 27(26), 38147–38158 (2019). [CrossRef]  

32. Y. Guan, H. Li, L. Xue, R. Yin, L. Zhang, H. Wang, G. Zhu, L. Kang, J. Chen, and P. Wu, “Lidar with superconducting nanowire single-photon detectors: Recent advances and developments,” Opt. Lasers Eng. 156, 107102 (2022). [CrossRef]  

33. D. Bronzi, F. Villa, S. Tisa, A. Tosi, and F. Zappa, “SPAD figures of merit for photon-counting, photon-timing, and imaging applications: a review,” IEEE Sens. J. 16(1), 3–12 (2016). [CrossRef]  

34. I. Gyongy, S. W. Hutchings, A. Halimi, M. Tyler, S. Chan, F. Zhu, S. McLaughlin, R. K. Henderson, and J. Leach, “High-speed 3D sensing via hybrid-mode imaging and guided upsampling,” Optica 7(10), 1253–1260 (2020). [CrossRef]  

35. X. Ren, P. W. Connolly, A. Halimi, Y. Altmann, S. McLaughlin, I. Gyongy, R. K. Henderson, and G. S. Buller, “High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor,” Opt. Express 26(5), 5541–5557 (2018). [CrossRef]  

36. G. Buller and A. Wallace, “Ranging and three-dimensional imaging using time-correlated single-photon counting and point-by-point acquisition,” IEEE J. Sel. Top. Quantum Electron. 13(4), 1006–1015 (2007). [CrossRef]  

37. A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science 343(6166), 58–61 (2014). [CrossRef]  

38. D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7(1), 12046 (2016). [CrossRef]  

39. Y. Altmann, X. Ren, A. McCarthy, G. S. Buller, and S. McLaughlin, “Lidar waveform-based analysis of depth images constructed using sparse single-photon data,” IEEE Trans. on Image Process. 25(5), 1935–1946 (2016). [CrossRef]  

40. J. Rapp and V. K. Goyal, “A few photons among many: Unmixing signal and noise for photon-efficient active imaging,” IEEE Trans. Comput. Imaging 3(3), 445–459 (2017). [CrossRef]  

41. D. B. Lindell, M. O’Toole, and G. Wetzstein, “Single-photon 3D imaging with deep sensor fusion,” ACM Trans. Graph. 37(4), 1–12 (2018). [CrossRef]  

42. A. Halimi, R. Tobin, A. McCarthy, J. Bioucas-Dias, S. McLaughlin, and G. S. Buller, “Robust restoration of sparse multidimensional single-photon lidar images,” IEEE Trans. Comput. Imaging 6, 138–152 (2020). [CrossRef]  

43. J. Tachella, Y. Altmann, N. Mellado, A. McCarthy, R. Tobin, G. S. Buller, J.-Y. Tourneret, and S. McLaughlin, “Real-time 3D reconstruction from single-photon lidar data using plug-and-play point cloud denoisers,” Nat. Commun. 10(1), 4984 (2019). [CrossRef]  

44. X. Peng, X.-Y. Zhao, L.-J. Li, and M.-J. Sun, “First-photon imaging via a hybrid penalty,” Photonics Res. 8(3), 325–330 (2020). [CrossRef]  

45. J. Peng, Z. Xiong, H. Tan, X. Huang, Z.-P. Li, and F. Xu, “Boosting photon-efficient image reconstruction with a unified deep neural network,” IEEE Trans. Pattern Anal. Mach. Intell. 45, 4180 (2023). [CrossRef]  

46. G. Satat, M. Tancik, and R. Raskar, “Towards photography through realistic fog,” in International Conference on Computational Photography (IEEE, 2018), pp. 1–10.

47. R. Tobin, A. Halimi, A. McCarthy, M. Laurenzis, F. Christnacher, and G. S. Buller, “Three-dimensional single-photon imaging through obscurants,” Opt. Express 27(4), 4590–4611 (2019). [CrossRef]  

48. R. Tobin, A. Halimi, A. McCarthy, P. J. Soan, and G. S. Buller, “Robust real-time 3D imaging of moving scenes through atmospheric obscurant using single-photon lidar,” Sci. Rep. 11(1), 11236 (2021). [CrossRef]  

49. H. Shi, G. Shen, H. Qi, Q. Zhan, H. Pan, Z. Li, and G. Wu, “Noise-tolerant Bessel-beam single-photon imaging in fog,” Opt. Express 30(7), 12061–12068 (2022). [CrossRef]  

50. Y. Zhang, S. Li, J. Sun, X. Zhang, D. Liu, X. Zhou, H. Li, and Y. Hou, “Three-dimensional single-photon imaging through realistic fog in an outdoor environment during the day,” Opt. Express 30(19), 34497–34509 (2022). [CrossRef]  

51. G. R. Osche and D. S. Young, “Imaging laser radar in the near and far infrared,” Proc. IEEE 84(2), 103–125 (1996). [CrossRef]  

52. P. Youssef, N. Sheibani, and D. Albert, “Retinal light toxicity,” Eye 25(1), 1–14 (2011). [CrossRef]  

53. A. Berk, L. S. Bernstein, and D. C. Robertson, “MODTRAN: a moderate resolution model for LOWTRAN,” Tech. Rep. (Spectral Sciences Inc., Burlington, MA, 1987).

54. D. F. Swinehart, “The Beer-Lambert law,” J. Chem. Educ. 39(7), 333 (1962). [CrossRef]  

55. P. McManamon, “Review of ladar: a historic, yet emerging, sensor technology with rich phenomenology,” Opt. Eng. 51(6), 060901 (2012). [CrossRef]  

56. R. T. A. Prado and F. L. Ferreira, “Measurement of albedo and analysis of its influence the surface temperature of building roof materials,” Energy Build. 37(4), 295–300 (2005). [CrossRef]  

57. D. Rüdisser, R. McLeod, W. Wagner, and C. Hopfe, “Numerical derivation and validation of the angular, hemispherical and normal emissivity and reflectivity of common window glass,” Build. Sci. 207, 108536 (2022). [CrossRef]  

