Optica Publishing Group

Analysis of the performance of a polarized LiDAR imager in fog

Open Access

Abstract

This paper focuses on exploring ways to improve the performance of LiDAR imagers through fog. One of the known weaknesses of LiDAR technology is its lack of tolerance to adverse environmental conditions, such as the presence of fog, which hampers the future development of LiDAR in several markets. In this paper, a LiDAR unit is designed and constructed to apply temporal and polarimetric discrimination, detecting the received signal photons with detailed control of their temporal and spatial distribution under co-polarized and cross-polarized configurations. The system is evaluated in different experiments in a macro-scale fog chamber under controlled fog conditions. Using the complete digitization of the acquired signals, we analyze the response of the medium and show that, owing to its characteristics, it can be filtered out directly. Moreover, we confirm that a polarization memory effect exists, which, when a cross-polarized detection configuration is used, improves object detection in point clouds. These results are useful for applications related to computer vision, in fields like autonomous vehicles or outdoor surveillance where many variable types of environmental conditions may be present.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Due to the unprecedented characteristics of LiDAR imaging systems, their development for novel applications related to computer vision, in fields as diverse as autonomous vehicles [1], outdoor recognition [2], surveillance [3], and even planetary observation [4,5], seems very promising. Nevertheless, the current technology still has some uncertainties and poses many challenges. One of them is its use in adverse weather conditions. Under such scattering conditions, for example in fog, LiDAR performance is heavily altered and the quality of the detection becomes severely degraded [1].

High-resolution, in-depth imaging and detection of objects hidden in a turbid medium have long been challenging and important problems in many industrial, military, and biomedical applications. Recently, there has been remarkable progress in methods that explicitly seek to improve the vision of scenes under turbid conditions [6–8]. Light measured through a turbid medium – in reflection (backwards) light-detection modes – can be classified into two main categories, each carrying different information about the medium and thus allowing distinct imaging modes. This separation comes from the different ways in which photons are scattered while propagating [7].

  • i) Signal photons correspond to photons that interacted with an object and returned to the detector. Once collected, they are useful for getting precise images of the scene of interest because they hold information about the reflectivity and depth of the targets. To some extent, they also retain the spatial and temporal information of the source/emitter because they are usually related to ballistic or snake photons. They are still coherent or partially coherent and provide good spatial resolution. However, in a medium with a strong degree of scattering (which, unfortunately, is the main characteristic of turbid media), their amount is typically very small.
  • ii) Background photons are those photons that did not interact with an object; when collected, they only hold information about the host medium. Due to the scattering dynamics, background photons may arrive at the detector at different times: some are directly reflected from the medium while others migrate for a longer time along a multistep random trajectory. The latter are usually referred to as diffuse photons. Under high-scattering conditions, this is by far the largest group and, lacking rectilinear propagation, they are unsuitable for imaging a scene, although they can be used to estimate the optical properties of the medium.
To obtain precise information about a scene viewed in real time and for direct imaging through turbid media, signal photons have the potential to provide coherent images with resolution limited only by diffraction. Thus, as many of these photons as possible have to be captured. With this goal, light-detection techniques through turbid media focus on discriminating the light that has crossed the medium almost directly and interacted only with an object from the light that has only been scattered by the medium and carries no relevant information about the scene. The following two methods [9] can be used to differentiate signal photons from background ones:
  • i) Temporal discrimination is the most commonly used method. Signal photons arrive at the detector at a specific time, which corresponds to the time needed by light to reach the object and come back. Background photons arrive scattered over different random times. For instance, time-gated techniques exploit this difference in arrival time to section the image in the depth dimension, capturing only the photons coming from a slice of the medium orthogonal to the camera [10–13].
  • ii) Discrimination based on light properties is another relevant method. Detecting light with a polarization state related to that of the input beam, to the polarimetric properties of the objects, or to the polarimetric properties of the medium is a way to differentiate the various types of photons. For example, polarized light is known to preserve its polarization properties deeper into the medium; some authors therefore assume that light scattered by the medium arrives at the detector still polarized, while light coming back from objects is mostly depolarized. This difference in the degree of polarization of the light arriving at the detector is used to improve image contrast [14–17].
These techniques have proved to be efficient in the laboratory and in some real scenarios; nevertheless, to the best of our knowledge, their performance has never been evaluated simultaneously in a LiDAR system for a macro-scale application.

However, imaging through turbid media is not a new problem. Many authors have worked, studied, and proposed ways to overcome it. Among the main techniques, as previously stated, temporal and polarization filtering are the most important [18]. Strikingly, the characteristics of pulsed LiDAR systems allow the combination of both techniques. We propose to combine temporal and polarization discrimination in a unique LiDAR unit and analyze its performance in the presence of fog. Usually, commercial LiDAR imaging systems for computer vision only return the point cloud or 3D map and, in some of them, the information about the intensity of reflection. Polarization is not considered in these kinds of applications, usually due to the larger cost of the unit, already quite high when compared to a camera. Our contribution is related to the digitization and analysis of the signal acquired for each light pulse within a scanning pulsed system, and also to the use and control of the polarimetric properties of the system for its investigation.

In this paper, we describe the LiDAR imager system that we have designed and constructed, with outstanding characteristics related to optoelectronic modifications, such as digitization capabilities and the inclusion of polarizing elements. The goal is to explore the capability and efficiency of these variations to be used as filtering techniques for LiDAR imaging through turbid media. To evaluate the performance, experiments for the characterization of the system are performed in a fog chamber. On the one hand, we are interested in characterizing the backscattering signal of light in foggy environments and how it influences the detection of objects. On the other hand, we will evaluate how the digitization of the whole signal is useful in this kind of media, and finally, we also will study the benefits of using polarization as a filter in these conditions.

This paper is organized as follows. First, the introduction has presented the main ideas and topics of this study: the problem we want to explore and the state-of-the-art techniques for imaging through scattering media. The next section introduces the methods and materials used in this study, including the working principle of LiDAR and the problems related to its use in adverse weather conditions, the description of the LiDAR imager system used, the details of the digitization of the signal and the use of polarimetric elements, and the experimental setup in the fog chamber. We then present the results obtained during the experiments. Finally, the last two sections discuss the results and the main conclusions we reached.

2. Methods and materials

2.1. LiDAR principle and the effect of fog on its signal

Over recent years, LiDAR (light detection and ranging) technology has become a global solution in the fields of optomechanical engineering and optoelectronics. An established measurement technique since the past century, it holds a solid corpus of well-known publications. Among the various setups that have been proposed, in the following we describe pulsed, or time-of-flight (TOF), LiDAR. A comprehensive review of the working principle, components, structures, and challenges facing LiDAR technology is given in [1].

LiDAR is based on a simple working principle known as TOF. It consists of measuring the time it takes for a pulse of light to travel from the source to an object and back to the detection system. This round-trip time equals twice the distance to the target divided by the speed of light, so from this measurement it is possible to establish the distance of that object from the source. In addition, when using a proper scanning strategy on the source, it is also possible to scan a complete scene around the system and generate a detailed 3D map of its surroundings.
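As a minimal illustration of the TOF relation above (the function name and the example round-trip time are ours, not part of the system's software):

```python
# Minimal sketch of the time-of-flight (TOF) distance calculation.
# The function name and the example round-trip time are illustrative.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_to_distance(round_trip_time_s: float) -> float:
    """Convert a measured round-trip time into the target distance.

    The pulse travels to the target and back, so the one-way
    distance is half the path covered during the elapsed time.
    """
    return C * round_trip_time_s / 2.0

# A pulse returning after 200 ns corresponds to a target ~30 m away.
print(tof_to_distance(200e-9))  # → 29.9792458
```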

Due to its working principle, the electronics of the system have to wait for and detect the return of the light pulse. As a consequence, a LiDAR system has the potential to provide a continuous temporal discretization of the signal response. Considering previous results on how a pulsed signal varies in the presence of turbid media [7,19], it has been shown that observing the whole returning signal could permit better identification of objects and characterization of the elements in the scene. However, standard unpolarized commercial systems do not give access to this information.

One of the weaknesses of LiDAR systems (as with most optical systems) is their lack of tolerance to adverse environmental conditions, such as the presence of fog, rain or spray water on the roads. When encountering turbid media, the light signal is rapidly attenuated due to the absorption and scattering events induced by water droplets, significantly lowering the maximum range capability of the system. Moreover, scatterers introduce a large number of false detection alarms from the backscattered intensity, reducing the reliability of the sensor [7,19,20].

This loss of perception would be unacceptable in most novel outdoor applications because the system is expected to sense its surroundings perfectly under all circumstances. Currently, there is no clear solution for this scenario, which combines dispersive media properties with signals at very low SNR. Thus, we want to explore merging temporal discrimination with polarization discrimination, to study whether this technique can be improved for use in adverse weather conditions.

In scattering environments, active imagers suffer from an undesired intense backscattering signal generated by the active source of the imager in the dispersive medium. In the case of a LiDAR system, such backscattering is caused by the active emission of the laser pulses. This kind of signal has significant power for a detector adjusted to detect faint backscattered signals, and thus may easily saturate the detector and/or hide the detection of fainter signals coming back from the actual objects of interest in the scene.

According to the study in [7], the signal generated by fog exhibits very specific distribution and timing properties. The fog backscattering signal is expected to follow the shape of a Gamma distribution and to be fixed at a given depth, which in practice can be considered fixed in time if the time dependence of the signal is digitized, as in our case. Different fog responses were obtained experimentally using a single-photon counter to detect the backscattered photons coming from fog under different optical densities, and were fitted with a Gamma distribution. It was also observed that the higher the optical density, the stronger the amplitude of the signal. Moreover, the signal also presents specific polarization properties, which depend on the scattering regime [21–24].

Thus, it can be concluded that the denser the fog, i.e. the lower the visibility, the stronger the detected backscattering peak in the media, which in addition presents a fixed position in time. Moreover, the polarization characteristics are also constant as a function of the type of media. Such an issue opens the door to the digitization of the signal and the application of some type of gate filtering and/or the use of some specific send/receive polarimetric configuration which could avoid the undesired intense backscattering signal.
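The behavior described above can be sketched numerically. The following is a minimal model of a Gamma-shaped backscatter peak that is fixed in time while its amplitude tracks the fog density; the shape, scale, and onset values are assumptions for illustration, not fitted parameters from [7] or from our chamber data.

```python
import math
import numpy as np

# Illustrative model of the fog backscatter: a gamma-shaped pulse
# fixed in time, whose amplitude grows as the fog gets denser.
# SHAPE, SCALE, and T0 are assumed values, not fitted chamber data.
SHAPE, SCALE, T0 = 3.0, 4.0, 5.0  # gamma shape, scale (ns), onset (ns)

def gamma_pdf(x, a, scale):
    """Gamma probability density, zero for x <= 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = (x[pos] ** (a - 1.0) * np.exp(-x[pos] / scale)
                / (math.gamma(a) * scale ** a))
    return out

def fog_response(t_ns, amplitude):
    """Backscatter signal; amplitude scales with optical density."""
    return amplitude * gamma_pdf(t_ns - T0, SHAPE, SCALE)

t = np.linspace(0.0, 60.0, 601)           # time axis, ns
dense = fog_response(t, amplitude=1.0)    # low visibility
light = fog_response(t, amplitude=0.3)    # high visibility

# The peak position does not depend on the fog density, only the
# amplitude does -- which is what makes the peak easy to filter out.
print(t[np.argmax(dense)] == t[np.argmax(light)])  # → True
```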

In the end, we are interested in detecting signal photons returning from objects in the best possible way. To do so, the backscattering signal of the medium needs to be kept low enough that the detector does not saturate, as saturation could destabilize the electronics, which work at high frequency and amplification. If it did, the target response from signal photons would become undetectable even when the response is completely digitized. The polarization fingerprint of the type of scene also has to be considered: for useful polarization filtering, the polarimetric properties of the medium and the objects have to be distinct. Thus, the goals of our modified LiDAR unit are to filter out the backscattering response of the medium in the signal and/or, at the same time, to reduce its intensity to avoid saturation of the detector, a role which could be played by co-polarized or cross-polarized detection.

2.2. Optomechanics of our LiDAR imager

In this section, we present the optomechanics of the LiDAR unit used in the experiments. The system is based on a pulsed fiber laser at 1064 nm with a MEMS scanner suitable for mapping a scene [25]. The design was conceived to digitize the signal, control the polarization, allow the user to easily add or remove elements whenever the polarization state needed to be changed, and, finally, be waterproof so the unit could be placed inside a fog chamber. The system can be divided into two main blocks: the scanning module and the receiving module. The scanning module is the subsystem used to send light pulses across the scene with an angular field of view of ±20 deg. A fiber laser is connected via an APC/FC fiber connector to a fiber collimator, which produces a collimated beam 2 mm in diameter. The laser emits pulses centered at a wavelength of 1064 nm with a 10 nm spectral bandwidth. The pulse repetition rate can reach up to 600 kHz (with average power up to 2 W). The subsystem is designed to guarantee the correct distance and tilt of all elements, to maintain the best possible polarization once it starts to scan, and to allow the insertion of extra optical elements (e.g. polarizers, waveplates) whenever needed. The receiving module is intended to collect the maximum amount of signal scattered by the targets in the scene and to compute the TOF value. The optical system converges the light onto the active area of a photodetector, in our case a large-area (>2 × 2 mm) APD from Hamamatsu. The APD has been optimized for the application and requirements to feature a bandwidth of 250 MHz. Sensitivity depends on the amplification board and is estimated to be around 60 A/W at 1064 nm under standard conditions.
The mechanical design of the receiving unit mimics the strategy established in the scanning subsystem: an easily removable block is designed to contain the polarizers for changing polarization conditions on detection, while a second, larger module contains the rest of the system. Figure 1 shows the optomechanical set-up of the unit. The collimated laser light that comes from the fiber is shot through the MEMS scanner (not visible) and leaves the unit through the scanning optics aperture. The light from the target is received by the receiving optics. The block below the electronic board contains the rest of the receiving optics and the detector. To be able to use the LiDAR in a fog chamber, all the components had to be embedded within an IP68 waterproof box. A large box was chosen to handle the different components comfortably and practically, enabling easy access for modifications of the polarization optics. More compact prototypes are definitely possible, but in this case, we wanted to prioritize keeping the unit waterproof and enabling easy removal and reconfiguration of components.


Fig. 1. Optomechanical set-up of the LiDAR unit.


2.3. Digitization of the signal

Custom firmware, communication, and visualization software were designed, coded, and implemented on demand to fulfill the requirements of this application. The electronic control of the components of the LiDAR unit is performed using an FPGA (Intel Cyclone 10 LP). The FPGA receives instructions from a PC to configure the LiDAR in real-time. In the case of the scanning module, the components affected by the control are the pulsed laser and the scanner. The FPGA sends the configuration parameters to the laser (pulse duration and mean power) at the start of a recording session and sends a trigger signal every time a pulse should be emitted. This start signal generated by the FPGA is synchronized with the laser trigger signal. The trigger signal is used to coordinate the receiving module with the scanning module. With each laser pulse, the FPGA tilts the scanner in one particular direction. Thanks to the coordination among signals, we can achieve a homogeneous angular distribution between pulses. The electronic control of the receiving module records signals using a high-speed digitizer card PCI-5152 from National Instruments (NI), as a replacement for the conventional analog-to-digital converter (ADC) system. The acquisition card digitizes the input signal with up to a 2 GS/s sampling rate and 8-bit resolution. Every time a pulse is emitted, the FPGA sends a signal to the digitizer to start the recording process. Thus, once a frame has been captured, it is sent to the PC where it can be visualized and processed. A custom acquisition program was developed in LabVIEW to acquire LiDAR signals during experimental sessions. This LabVIEW program also controlled the NI digitizer card. This method of capturing the signal allows us to process the temporal signal of the reflected pulse more easily and efficiently, and to digitize and characterize the behavior of the returned signal in more detail.
In summary, the signal for each of the emitted pulses, associated with its direction of emission (and, thus, with the objects it finds in its path) is registered and stored in the PC for further processing.

Figure 2 shows a scheme of the acquisition of the LiDAR system. The START LINE signal initiates the acquisition of a line in the image frame. Then, the NI digitizer card waits for the arrival of a trigger from the FPGA, which indicates that a pulse has been sent. Once the trigger goes beyond the indicated threshold, record number 1 starts. The card digitizes the signal, taking as many samples as indicated by the record length parameter. Once the record is finished, it is saved in the onboard memory of the card, and the card again awaits the arrival of a trigger to start the acquisition of record number 2. The process is repeated until the START LINE signal indicates that a new line is being acquired. At this point, the acquisition of a second line starts and, in a parallel thread, the data corresponding to the first line is transferred to the host PC.
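The record-based flow of Fig. 2 can be summarized as a simple loop. The sketch below simulates it with placeholder functions and constants (these names are ours for illustration, not the NI or FPGA API): one fixed-length record is captured per trigger, and a full line of records is then handed off for transfer.

```python
# Conceptual sketch of the record-based acquisition loop: one
# fixed-length record is digitized per emitted pulse, and records
# are grouped into lines. Hardware interactions (trigger wait,
# sample capture) are simulated with placeholders; RECORD_LENGTH
# and PULSES_PER_LINE are illustrative values.

RECORD_LENGTH = 512      # samples per record (illustrative)
PULSES_PER_LINE = 4      # records forming one image line (illustrative)

def wait_for_trigger(pulse_idx):
    """Placeholder: a real digitizer blocks here until the FPGA trigger."""
    return pulse_idx

def digitize(record_length):
    """Placeholder: a real card returns record_length ADC samples."""
    return [0.0] * record_length

def acquire_line():
    """Capture one line: one fixed-length record per emitted pulse."""
    line = []
    for pulse_idx in range(PULSES_PER_LINE):
        wait_for_trigger(pulse_idx)           # FPGA signals a pulse
        line.append(digitize(RECORD_LENGTH))  # record saved on board
    return line                               # then transferred to host

frame_line = acquire_line()
print(len(frame_line), len(frame_line[0]))  # → 4 512
```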


Fig. 2. Time diagram of the different signals which appear in a frame acquisition.


2.4. Point cloud generation

We have presented the electronic control system and have also described how the laser signals that return from the imaging scene are acquired and digitized. As a result of having digitized each record, there is a complete curve for each of the scanned directions for each frame. This data is sent to the PC where it is processed, and then it is used for generating the point cloud.

As noted in previous sections, when the emitted pulse encounters an object, it is reflected or scattered by the object and a part of the reflected/scattered signal is received by the detection system. In order to generate the point cloud, we need criteria to determine when a pulse has returned, and which one is correct in case more than one pulse is present in the signal. This is done by detecting pulses in the digitized signal with a threshold: we consider that a reflected pulse has been detected each time the signal curve surpasses the associated threshold. We then locate the maximum value of the peak in time, and this time position is taken as the arrival time of the emitted pulse. The TOF is then converted into the distance of the object from the source. This procedure is performed for all the scanned directions (all recorded curves) and repeated for each frame.

In ideal scenarios, for a scanning direction in which there is an object, only one returning peak should be detected. However, in real scenarios (and in particular when fog is present), more than one peak appears in every detection curve. This may happen because of the divergence of the laser, which makes the light interact with more than one object, because of semi-transparent objects, because of multiple reflections, including specular ones, and, in particular, because of the response of the dispersive medium, among other reasons. Thus, several peaks may be found in a single curve, i.e. the recorded signal surpasses the threshold several times. As a result, the point cloud is usually generated using multi-hits, which means that several points may be located for each scanning direction. Our processing takes this effect into account.
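The multi-hit threshold processing described above can be sketched as follows. The sampling rate matches the digitizer described earlier; the synthetic record, the threshold value, and the function name are illustrative assumptions, not the actual processing code.

```python
import numpy as np

# Sketch of threshold-based multi-hit extraction: every contiguous
# run of samples above the threshold counts as one returned pulse,
# located at the run's maximum. The synthetic record and threshold
# below are illustrative.

C = 299_792_458.0     # speed of light, m/s
SAMPLE_RATE = 2e9     # 2 GS/s, as for the digitizer card

def extract_hits(signal, threshold):
    """Return (distance_m, amplitude) for each above-threshold peak."""
    signal = np.asarray(signal, dtype=float)
    hits, in_peak, start = [], False, 0
    for i, v in enumerate(signal):
        if v > threshold and not in_peak:
            in_peak, start = True, i
        elif v <= threshold and in_peak:
            in_peak = False
            peak_idx = start + int(np.argmax(signal[start:i]))
            arrival_s = peak_idx / SAMPLE_RATE      # arrival time
            hits.append((C * arrival_s / 2.0, signal[peak_idx]))
    if in_peak:  # record ended while still above threshold
        peak_idx = start + int(np.argmax(signal[start:]))
        hits.append((C * peak_idx / SAMPLE_RATE / 2.0, signal[peak_idx]))
    return hits

record = np.zeros(200)
record[20:26] = [0.2, 0.6, 1.0, 0.7, 0.4, 0.2]   # fog backscatter peak
record[120:125] = [0.1, 0.4, 0.8, 0.5, 0.2]      # object return
hits = extract_hits(record, threshold=0.3)
print(len(hits))  # → 2
```

Each hit keeps both its distance and its amplitude, so later stages can discard the fog peak (fixed in time) while retaining the object returns.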

In Fig. 3, we present an RGB image of the scene configuration that will be used for the discussions in the following sections. Every object is identified by a yellow panel bearing either a number (for plates or sets of plates) or a letter (for mannequins). With the data acquired using our system, we are able to reconstruct a point cloud. Figure 4 is an example of the typical point cloud obtained, in this case without the presence of fog. The figure shows a top view of a point cloud of the scene shown in Fig. 3, with the different objects marked with their labels.


Fig. 3. RGB image of the scene prepared in the fog chamber. Every object is marked either by a number (for plates or set of plates) or a letter (mannequins) for its identification. (top).



Fig. 4. A top view of the typical point cloud of Fig. 3 without fog in the chamber.


2.5. Use of polarization in the system

One of the novelties introduced in this study is the use of controlled polarization in the illuminating/receiving system. Thanks to the characteristics of the LiDAR unit built, the polarization states in emission and detection could be changed and controlled. The mechanical design guarantees the correct distance and tilt of all elements, and allows the user to easily add or remove the polarimetric elements whenever the polarization state needs to be changed. Tests to ensure the polarization states under different scanning conditions were carried out successfully.

The system was set to use circular polarization. For turbid media, the polarization properties of backscattering can be quite different depending on the regime. According to the models, when the particle diameter is similar to the wavelength (which is the case for our system because we use a 1064 nm laser), the incident light experiences a sequence of near-forward-scattering events before it contributes to the backscattered light. Under these circumstances, linearly polarized light is depolarized rapidly, whereas the helicity of circularly polarized light is maintained longer [21–23]. This benefit of circular polarization has been broadly studied. It is the so-called polarization memory effect and could play an important role in improving image contrast in fog [20,24].

We have chosen a pulsed fiber laser that maintains the polarization of the beam regardless of the curvature of the fiber. Linear polarization is obtained from the fiber; therefore, a linear polarizer is not needed in the scanning subsystem, and only a removable quarter waveplate (QWP) is required (THORLABS WPH10M-1064) to provide circularly polarized light. The light is polarized before arriving at the scanner because the latter shows a low degree of depolarization. The receiving submodule contains a specific mechanical arrangement before the focusing optics and detector for insertion of the polarimetric optical elements to change polarization conditions. We use a circular polarizer composed of a QWP THORLABS WPH10M-1064 followed by a linear polarizer THORLABS LPIREA100-C properly aligned. The proper alignment of the waveplate is checked with a portable polarimeter.

We used left-handed circularly polarized light in emission. This configuration was achieved by placing a QWP with its axis at 45° relative to the X-axis polarization of the fiber output. The emission polarization state was experimentally characterized: the Stokes parameters were S = (1; -0.031; 0.024; -0.997), so the degree of circular polarization (DoCP) was 99.7%. The detection configuration was interchangeable between a co-polarization (left circular polarization) channel and a cross-polarization (right circular polarization) channel. The co-polarization and cross-polarization detection channels were achieved by placing a QWP with its axis at 45° and −45°, respectively, in the receiving module, followed by the linear polarizer with its axis along the X-axis.
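The DoCP quoted above follows directly from the measured Stokes vector, since DoCP = |S3|/S0. A minimal check (the helper function is ours, not part of the system software):

```python
# Degree of circular polarization (DoCP) from a Stokes vector.
# The vector below is the emission state reported in the text,
# S = (1, -0.031, 0.024, -0.997); the helper name is illustrative.

def docp(S):
    """DoCP = |S3| / S0 for a Stokes vector (S0, S1, S2, S3)."""
    return abs(S[3]) / S[0]

S_emission = (1.0, -0.031, 0.024, -0.997)
print(f"{docp(S_emission):.1%}")  # → 99.7%
```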

2.6. Experimental set-up and tests

Experimental work was conducted in a fog chamber at CEREMA’s facilities in Clermont-Ferrand (France) [26]. Inside the chamber, homogeneous fog is produced by nozzles spraying water under high pressure, so fog of different densities with a controlled and constant meteorological visibility can be produced by modifying the quantity of water injected. The visibility, which is defined using the 5% contrast threshold, is monitored using a separate transmissometer and measured in real time inside the test room, as explained in [27]. Visibility may be varied at will in different types of cycles. In this case study, cycles of fog with visibilities ranging from 5 m to 350 m were measured, in controlled and stabilized steps. The size of the water particles creating the fog can also be selected, ranging from small (∼1 µm) to large (>10 µm) diameters. For our experiment, we selected the droplet size distribution of radiation fog, with small droplets with a mean diameter of around 0.5–1 µm [28]. This distribution has been shown to be very similar to the statistics of the Deirmendjian haze-like model of natural fog [29,30]. The test room is 5.5 m wide, 2.3 m high, and 31 m long, and includes a 15-m fixed section (tunnel) and a 16-m greenhouse with an opaque cover to simulate night-time conditions. All tests were performed in night-time conditions to avoid undesired light interactions.
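The 5% contrast definition of visibility links each chamber step to an extinction coefficient through the standard Koschmieder relation, V = −ln(0.05)/β ≈ 3/β. A small sketch (the helper name is ours; this relation is general background, not a computation from the paper):

```python
import math

# Koschmieder relation for meteorological visibility defined with
# the 5% contrast threshold: V = -ln(0.05) / beta, i.e. beta ≈ 3 / V.
# The helper name is illustrative.

def extinction_from_visibility(visibility_m: float) -> float:
    """Extinction coefficient beta (1/m) from visibility V (m)."""
    return -math.log(0.05) / visibility_m

# The densest and lightest fog steps used in the chamber:
for v in (5.0, 350.0):
    beta = extinction_from_visibility(v)
    print(f"V = {v:5.0f} m  ->  beta = {beta:.4f} 1/m")
```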

The tests consisted of validating the performance of the described LiDAR system in real fog situations. We wanted to reproduce a complex scene, in order to receive signals similar to those found in real scenarios. We were interested in reproducing divergence of the laser at longer distances, occlusions, and back reflections that introduce artifacts into the signal, for example from walls. For that purpose, we selected a scene with known objects of different types, including reflective plates, calibrated plates, and mannequins as pedestrians. We had to manage multiple reflections and interactions.

Two main configurations for detection were considered: circular co-polarized (emission and detection in the same orientation of polarization) and circular cross-polarized (emission and detection in orthogonal orientations of polarization). Constant steps of fog visibility were produced in the chamber, and digitized point clouds of the scene were obtained for each step. Since the detection can be adjusted to the co- and cross-configuration with the same characteristics, these experiments were aimed at comparing point clouds and signals obtained in the presence of fog, for different visibilities and different polarization-detection configurations. We analyzed the polarized response of the medium as well as of different “real objects”, and the differences between the co- and cross-configurations of the LiDAR system. In particular, we wanted to verify the characteristics of backscattering in fog using the complete digitization of the pulse response and propose a way to overcome it using the obtained data. The response of materials to polarization changes depends strongly on their properties: different materials present different polarization fingerprints.

3. Results

3.1. Characterization of the non-polarized digitized signal

In this section, we present the characteristics of the digitized signal. In the first instance, we are interested in the backscattering signal of the LiDAR under fog conditions, still without considering polarization. Thus, no polarizing elements are introduced in the receiving subsystem yet.

Figure 5 shows the received signal at different visibilities, for a particular scanning direction where no objects are present (except for the final wall of the tunnel, located at 30 m). In this way, we are able to study the response of the fog alone. The power of the pulsed laser and the gain of the electronics were chosen so as not to saturate the detector. The results agree with the previous discussion and with the assumptions found in the literature. However, compared to those reports, the tail of the signal is steeper. This particularity is thought to be due to the detection configuration, which includes a reduced field of view (FOV), so the angular acceptance of scattered photons possibly excludes those photons experiencing more scattering events. Such photons are responsible for elongating the tail of the gamma function. In this case, we can conclude that the LiDAR system also functions as a sort of spatial filter, discarding those backscattered photons from the fog with longer backscattering paths.


Fig. 5. Normalized digitized signal for fog response for different visibilities. Fog signal follows, for all of them, the expected Gamma distribution shape and is fixed in time.


Nonetheless, the main conclusions of previous literature are confirmed: the position of the peak is fixed in time and its shape is maintained (with the amplitude varying according to the visibility), which allows this response to be filtered out once the signal is digitized, regardless of fog density.

For our system, the peak is found (after the corresponding time-to-distance conversion) at a distance of around 1.8 m. The tail (5% of the maximum) extends up to 5.6 m for the minimum visibility, which means that the signal of an object closer than this value would overlap with the backscattering signal. This may cause an object to go undetected if its signal is faint, which should not be the usual case at such a short distance. This corresponds to the most extreme case; increasing visibility relaxes these conditions: the peak position moves closer to the source and the tail elongation is reduced, as can be seen in Fig. 5. These conclusions could not have been reached without the whole raw signal digitized as a function of time, because a point cloud does not carry all this information. The next step using this tool is to characterize ways to decrease the backscattering peak, and in particular its amplitude, since it defines the saturation conditions and thus the range over which the detector becomes blind.
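The time-to-distance conversion and the 5%-of-peak tail cutoff mentioned above can be sketched as follows. This is a minimal illustration, not the authors' processing code; the sample values in the usage note are ours.

```python
# Round-trip time-of-flight (TOF) to one-way distance, and the distance
# at which a digitized signal last exceeds a fraction of its peak.
C = 299_792_458.0  # speed of light in vacuum, m/s

def time_to_distance(t_seconds):
    """One-way distance for a round-trip TOF: d = c * t / 2."""
    return C * t_seconds / 2.0

def tail_cutoff(signal, distances, fraction=0.05):
    """Last distance at which the signal still exceeds `fraction` of its peak."""
    threshold = fraction * max(signal)
    cutoff = distances[0]
    for d, s in zip(distances, signal):
        if s >= threshold:
            cutoff = d
    return cutoff
```

For example, a round-trip TOF of 12 ns maps to roughly the 1.8 m peak distance quoted above, and `tail_cutoff` gives the overlap limit below which an object return would merge with the fog peak.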

When illuminating a target under foggy conditions in the chamber, at least two peaks should appear in the signal: one corresponding to the backscattered fog response (previously discussed), and one corresponding to the object (whenever it is detected). For real applications, we are interested in the pulse reflected from the object, which is related to its TOF and thus to its distance from the source. If the object is close enough and the fog response too strong, both peaks may become superimposed in the signal, which should be avoided as it complicates the processing beyond a simple threshold algorithm. However, considering the ranges at which a LiDAR system is designed to work in applications like transport and surveillance, there would be few significant cases in which the fog backscattering and the object signal are superimposed, as objects will typically be farther than 4 m from the system. Thus, for most cases of interest, both signals will be sufficiently separated in time to be distinguished.

Next, we wanted to observe how the detected signal changes with visibility. The LiDAR system is pointed towards a target of 50% reflectivity placed 16 meters away from the source (the central calibrated panel tagged as “5” in Fig. 3), so both peaks can be separated in time and distinguished. Figure 6 shows how the detected signal changes under different visibilities. Both peaks can be detected simultaneously at some visibilities, whereas only one of them appears at others. The first case of Fig. 6 shows the digitized signal when the fog chamber produces a visibility of 20 m. In this case, only the fog response is detected. The distance characteristics of the peak are consistent with those discussed in Fig. 5. The failure to detect the object at 16 m under a 20 m visibility may be due to several sources of uncertainty in the experiment (temporal instability of the fog, spatial and temporal changes in the transmittance of the fog between the measurement area of the transmissometer and that of the LiDAR, differences in power and wavelength between the transmissometer -532 nm- and the LiDAR -1064 nm-, the closeness of the target position to the nominal limit of visibility, etc.). Afterward, the fog starts to dissipate, and the second case (visibility of 50 m) allows us to detect the peak of the signal returning from the target at 16 m, which becomes much clearer in the third case (visibility of 100 m). At the same time, the fog response decreases significantly. Finally, at a visibility of 250 m, the fog is not dense enough to produce a detectable backscattering signal, and the peak of the object recovers its calibrated amplitude.


Fig. 6. Evolution of the LiDAR signal under variable fog conditions when pointing towards a target placed at 16 m.


Thus, the first part of the study concludes that the response of the medium, that is, the fog backscattering, is fixed in time, so it may be filtered out using a hardware or software time-gate, provided we prevent the saturation caused by the backscattering signal, which would block detection of the target peak. The presence of backscattering of the illumination field forces a trade-off: higher detection gains also increase the detectability, and thus the amplitude, of the backscattering peak, which may saturate the detector. This saturation may frustrate the detection of the desired signal coming back from the target.
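A software time-gate of the kind just described can be sketched as below. The gate distance and detection threshold are illustrative values of ours, not the system's calibration, and real processing would operate on the full digitized waveform rather than a short list.

```python
def gate_and_detect(signal, distances, gate_m=6.0, threshold=0.1):
    """Discard samples inside the fixed fog-backscatter gate, then
    return the distance of the strongest remaining return above the
    detection threshold, or None if no target is found."""
    best_d, best_s = None, threshold
    for d, s in zip(distances, signal):
        if d < gate_m:          # fixed fog response: gated out
            continue
        if s > best_s:          # strongest return beyond the gate
            best_d, best_s = d, s
    return best_d
```

Because the fog peak is fixed in time, the same gate works at every visibility; only saturation of the detector (which destabilizes the whole waveform) defeats this filtering, as discussed in Section 4.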

3.2. Study of the polarimetric LiDAR

Next, we analyze the influence and benefits of polarization for detecting objects with a LiDAR system. In particular, we concentrated our analysis on three specific objects placed at known distances in Fig. 3: targets 2 (central calibrated panel), 4, and B. They are ordered by increasing distance: object 2 is a diffusive sample with a characterized 50% reflectance in the visible range placed at 8 m, object 4 a metallic, partially reflective sample plate placed at 11 m, and object B a child mannequin placed at 13 m. We studied the response of each object in the co- and cross-polarization configurations, under different visibilities. The response studied for each object corresponds to the reflected signal of a laser scanning trajectory that crosses the center of the objects. We wanted to evaluate whether polarization filtering was useful in this kind of scenario.

To that purpose, we again took advantage of having the full reflected signal digitized. Figure 7 shows the response to a laser pulse directed towards the center of each of the three objects described previously. Each plot corresponds to the signal of one object at a given visibility. In each plot, the co-polarized (orange) and cross-polarized (blue) configurations are shown. In all cases, it can be seen that the last peak corresponds to the pulse returning from the object of interest, as it appears at the exact distance at which the object is located. The first peak, as discussed in previous sections, corresponds to the backscattering response of fog.


Fig. 7. The digitized signal for co- and cross-configurations of the objects of interest (obj. 2, obj. 4, obj. B and backscattering peak), for three different visibilities (20 m, 70 m and 150 m).


It can be observed that the cross-polarized signal for the objects of interest is always larger than the equivalent co-polarized one. This behavior is especially pronounced for object 4 due to its metallic surface. Nevertheless, the other objects (which, due to their heterogeneity, are expected to be more depolarizing) also present slight differences between the two channels that favor the cross-configuration. Thus, we can conclude that the cross-configuration presents an advantage over the co-configuration, as the object signal detected in this configuration tends to be larger. As a result, the cross-configuration allows objects to be detected at lower visibilities.

Regarding the backscattering peak of fog, we observe a completely different behavior. In the cross-configuration, the amplitude of the medium response decreases faster as visibility increases. The co-configuration, instead, presents a larger peak whose amplitude decreases more slowly. At a visibility of 100 m, there is no backscattering fog peak in the cross-configuration, whereas there is still some response in the co-configuration. This result suggests that the turbid medium maintains polarization, demonstrating in this application the polarization memory effect described in the literature.
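As a rough illustration, this polarization-memory observation can be quantified by comparing the fog-peak amplitude between the two channels. The helper below and the numbers in its usage are ours, not the paper's processing chain:

```python
def polarization_memory_ratio(co_signal, cross_signal):
    """Peak amplitude of the fog return in each channel and their
    co/cross ratio; a ratio above 1 indicates that the medium
    preferentially preserves the launched polarization state
    (polarization memory)."""
    p_co, p_cross = max(co_signal), max(cross_signal)
    ratio = p_co / p_cross if p_cross > 0 else float("inf")
    return p_co, p_cross, ratio
```

Applied to the fog peak, the ratio grows with visibility as the cross-channel backscatter vanishes first; applied to an object peak, a ratio below 1 reflects the dominant cross-polarized object return described above.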

Taking into account these results, a LiDAR system using a cross-configuration of circularly polarized light for detection could help to decrease the influence of backscattering from the medium without compromising the signal returning from the objects of interest, because part of the object response comes back in the cross-polarized component.

Using the data acquired from the scene and applying the method described in Section 2.4, we extracted the polarized LiDAR point clouds of the scene at different visibilities. The point clouds illustrate graphically the features just described. Next, we comment quantitatively on the differences found in the processed point clouds for co- (Fig. 8) and cross-configuration (Fig. 9) detection at 20 m of visibility (very dense fog), 75 m, and 120 m. Objects in the scene depicted in Fig. 3 are indicated with a black triangle and the corresponding tag. Note that, for visualization purposes, all data in the point clouds are shifted +3 m from the real distances.


Fig. 8. Reconstructed polarized point clouds of the scene depicted in Fig. 3 for co-polarization configuration at 20 m, 75 m, and 120 m of visibility.



Fig. 9. Reconstructed polarized point clouds of the scene depicted in Fig. 3 for cross-polarization configuration at 20 m, 75 m, and 120 m of visibility.


First of all, the square “wall” that appears between 1 and 5 meters is remarkable. This wall, indicated with a pink triangle in Figs. 8 and 9, is the result of the response of the medium: the backscattering peak is detected as an “object” placed in front of the source, so in the point cloud it appears as a “wall”. In the co-configuration detection, this “fog wall” at the beginning of the point cloud is much thicker and larger than in the cross-configuration. Thus, the point clouds are in accordance with the conclusions drawn from the digitized first peak in Fig. 7. In fact, when using the co-configuration at very low visibilities, the light is barely able to reach and return from any object, so only the wall and the closer, more reflective objects appear in the point cloud. At even lower visibilities (approx. < 15 m of visibility), only the fog response is detected: the LiDAR system is completely blind under these circumstances. Nevertheless, in the cross-configuration the range is larger. We are able to distinguish some distant objects even in the thickest fog conditions, and the “fog wall” effect is significantly reduced.

Moreover, since the cross-polarized component is dominant for the objects, the arriving energy is larger in this configuration, which allows a better differentiation of objects with different reflective properties. This can be clearly appreciated at large visibility (120 m), where, in the cross-configuration, all the objects in the scene can be identified, whereas in the co-configuration the most distant objects are still not detected. The effect is also remarkable at 75 m of visibility, which, in terrestrial applications, is still thick fog: the cross-configuration allows the detection of roughly twice as many objects as the co-configuration.

4. Discussion

Considering the results presented in the previous section, the benefits brought by full digitization of the signal are evident. Full digitization of the signal acquired by a LiDAR imager can help to study ways to improve such systems. This becomes of great importance when imaging through turbid media, which is one of their main weaknesses.

Using full digitization, we have verified that when imaging through fog, an unwanted backscattering signal appears as the response of the medium to the active illumination of the system. From its analysis, we concluded that this fog backscattering is fixed in time, which, when generating a 3D map, corresponds to a fixed occlusion near the source that appears as a sort of “fog wall” in the scene. Without considering the attenuation factor, the backscattering peak of fog would cover (in the case of our system) objects placed closer than 5 m (in the worst-case scenario). Nevertheless, due to the specific characteristics of the response, this peak may be filtered out by time-gating or post-processing of the signal, which would not affect mid-range and long-range applications. Then, only the photons coming from the objects would be used for generating the point cloud.

Another remarkable outcome arises from the fact that, due to the characteristics of the fog response, one must be careful not to saturate the detector. The fog response reaches large amplitudes in the acquired signal, which depend on the visibility, but also on the initial power of the pulse and the gain of the detector. As a result, in some cases the detector may saturate, which leads to a destabilization of the electronic signal that temporarily blinds the detector. In these cases, even when reflected light arrives from an object, it may not be detected in the destabilized signal. Temporal filtering alone would not solve this problem, which additionally requires attenuation of the fog response.

Regarding polarization effects, we have shown that the cross-configuration detection presents a similar or larger signal (amount of energy) than the co-configuration for the detected objects. Thus, in general, any object returns a cross-polarized component of light. This effect is especially visible in metallic objects, which, due to the characteristics of their surfaces, mainly return cross-polarized light. For the backscattered fog signal, instead, we have experimentally shown that the co-configuration presents a larger signal than the cross-configuration. Thus, considering that object responses always have a cross-polarized component in the returning signal and fog responses have a larger co-polarized component, a LiDAR system based on circularly polarized incident light and cross-configuration detection helps to improve the SNR. The cross-configuration detection permits the detection of objects and allows filtering out most of the fog response. Nevertheless, it must be considered that when pointing towards a target whose polarimetric behavior greatly differs from that of conventional objects (such as a retroreflector), the cross-polarized signal would be significantly reduced and, consequently, so would the range. In this case, the performance of a system based on the cross-polarized configuration would be severely limited: backscattered light would be filtered out, but so would the polarized part of the light coming from the object.

However, it should be considered that when a specific polarimetric configuration is used at the receiving submodule, the energy arriving in the other component is discarded. While this is useful for reducing the effect of the fog response, we are also losing part of the energy that returns from the objects. For very reflective objects, or objects that reflect mainly in the detected component, this may not be noticeable; in other circumstances, however, the range of the system may be reduced due to the rejection of photons in the polarization state orthogonal to detection. We are thus dealing with a trade-off between the reduction of the fog signal and the weakening of the detectability of objects with depolarizing surfaces.

Nevertheless, there is a clear case in which the cross-polarization strategy has only advantages. Whenever the backscattering signal saturates the detector, i.e., produces an electronic signal so unstable that it does not show object responses, the attenuation of the fog response using cross-configuration detection improves the system range by partially filtering out the fog response, without modifying any of the other parameters that influence it. As a result, fog no longer saturates the detector, the signal is stabilized, and we recover the capability to detect the response of objects. Obviously, improved electronics to manage saturated signals is also a possible path, but the fast frequencies and large gains involved in TOF LiDAR make it challenging.

5. Conclusions

From the results and discussion presented in the previous sections, we can conclude that full digitization of the signal acquired by a LiDAR imager provides very useful information to study ways to improve the system performance under fog. Using this technique, we have characterized the full signal of a complete image and analyzed the effect of temporal and polarization filtering in the obtained point cloud.

We have studied in detail the fog response and its behavior at different fog densities: it presents a stable signal, fixed in position across the whole image, with specific polarimetric characteristics showing a strongly dominant co-polarized component. We have also studied the polarization characteristics of point clouds using a circularly polarized laser source. It can be concluded that point clouds obtained under cross-configuration detection provide a larger amount of detail than those obtained under co-configuration: there are fewer noisy points due to the filtering of co-polarized components. The range of the system is also larger in the cross-configuration, as objects give a larger cross-polarized response. In these cases, even a simple qualitative inspection of the point clouds demonstrates the superiority of cross-detection over co-detection under the experimental conditions of this study.

Finally, it has to be taken into account that when using polarization, the range of the system may be reduced compared to non-polarized detection, since the part of the energy that returns from objects in a polarization state orthogonal to that of the detector is lost. However, in cases where the sensor is saturated by the response of the medium, a cross-polarized detection strategy may help to filter out a significant part of the backscattered light, which helps to stabilize the signal and may improve detection.

Funding

Agència de Gestió d'Ajuts Universitaris i de Recerca (2021FI_B2 00068, 2021FI_B2 00077); Defence Science and Technology Laboratory (DSTLX1000145661); Ministerio de Ciencia e Innovación (PDC2021-121038-I00, PID2020-119484RB-I00).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time because they were partly obtained within a privately funded project, but may be obtained from the authors upon reasonable request.

References

1. S. Royo and M. Ballesta-Garcia, “An overview of lidar imaging systems for autonomous vehicles,” Appl. Sci. 9(19), 4093 (2019). [CrossRef]  

2. L. Schaupp, M. Bürki, R. Dubé, R. Siegwart, and C. Cadena, “OREOS: Oriented Recognition of 3D Point Clouds in Outdoor Scenarios,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS, 2019), pp. 3255–3261.

3. M. J. Gómez, F. García, D. Martín, A. de la Escalera, and J. M. Armingol, “Intelligent surveillance of indoor environments based on computer vision and 3D point cloud fusion,” Expert Syst. Appl. 42(21), 8156–8171 (2015). [CrossRef]  

4. A. J. Brown, “Equivalence Relations and Symmetries for Laboratory, LIDAR, and Planetary Müeller Matrix Scattering Geometries,” J. Opt. Soc. Am. A 31(12), 2789 (2014). [CrossRef]  

5. A. J. Brown, T. I. Michaels, S. Byrne, W. Sun, T. N. Titus, A. Colaprete, M. J. Wolff, G. Videen, and C. J. Grund, “The case for a modern multiwavelength, polarization-sensitive LIDAR in orbit around Mars,” J. Quant. Spectrosc. Radiat. Transfer 153, 131–143 (2015). [CrossRef]  

6. Y. Li, S. You, M. S. Brown, and R. T. Tan, “Haze visibility enhancement: A Survey and quantitative benchmarking,” Computer Vision and Image Understanding 165, 1–16 (2017). [CrossRef]  

7. G. Satat, M. Tancik, and R. Raskar, “Towards photography through realistic fog,” in Proceedings of IEEE 2018 International Conference on Computational Photography (ICCP, 2018), pp. 1–10.

8. S. Sudarsanam, J. Mathew, S. Panigrahi, J. Fade, M. Alouini, and H. Ramachandran, “Real-time imaging through strongly scattering media: seeing through turbid media, instantly,” Sci. Rep. 6(1), 25033 (2016). [CrossRef]  

9. S. Panigrahi, “Real-time imaging through fog over long distance,” Ph.D. dissertation, Université Rennes (2016).

10. L.V. Wang and H. Wu, Biomedical Optics: Principles and Imaging (Wiley, 2007).

11. M. Paciaroni and M. Linne, “Single-shot, two-dimensional ballistic imaging through scattering media,” Appl. Opt. 43(26), 5100–5109 (2004). [CrossRef]  

12. J. C. Hebden, R. A. Kruger, and K. S. Wong, “Time resolved imaging through a highly scattering medium,” Appl. Opt. 30(7), 788–794 (1991). [CrossRef]  

13. E. M. Sevick, J. R. Lakowicz, H. Szmacinski, K. Nowaczyk, and M. L. Johnson, “Frequency domain imaging of absorbers obscured by scattering,” J. Photochem. Photobiol., B 16(2), 169–185 (1992). [CrossRef]  

14. Y. Y. Schechner and N. Karpel, “Clear underwater vision,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR, 2004), pp. I.

15. Y. Y. Schechner and N. Karpel, “Recovery of Underwater Visibility and Structure by Polarization Analysis,” IEEE J. Oceanic Eng. 30(3), 570–587 (2005). [CrossRef]  

16. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42(3), 511–525 (2003). [CrossRef]  

17. T. Treibitz and Y. Y. Schechner, “Instant 3Descatter,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR, 2006), pp. 1861–1868.

18. V. V. Tuchin, “Tissue Optics and Photonics: Light-Tissue Interaction,” JBPE 1(2), 98–134 (2015). [CrossRef]  

19. M. Kutila, P. Pyykönen, H. Holzhüter, M. Colomb, and P. Duthon, “Automotive LiDAR performance verification in fog and rain,” in Proceedings of IEEE 21st International Conference on Intelligent Transportation Systems (ITSC, 2018), pp. 1695–1701.

20. M. Bijelic, T. Gruber, and W. Ritter, “A Benchmark for Lidar Sensors in Fog: Is Detection Breaking Down?” in Proceedings of IEEE Intelligent Vehicles Symposium IV (IEEE, 2018), pp. 760–767.

21. S. Peña-Gutiérrez, M. Ballesta-Garcia, P. García-Gómez, and S. Royo, “Quantitative demonstration of the superiority of circularly polarized light in fog environments,” Opt. Lett. 47(2), 242–245 (2022). [CrossRef]  

22. F. C. MacKintosh, J. X. Zhu, D. J. Pine, and D. A. Weitz, “Polarization memory of multiply scattered light,” Phys. Rev. B 40(13), 9342–9345 (1989). [CrossRef]  

23. M. Xu and R. R. Alfano, “Circular polarization memory of light,” Phys. Rev. E 72(6), 065601 (2005). [CrossRef]  

24. J. D. van der Laan, J. B. Wright, S. A. Kemme, and D. A. Scrymgeour, “Superior signal persistence of circularly polarized light in polydisperse, real-world fog environments,” Appl. Opt. 57(19), 5464–5473 (2018). [CrossRef]  

25. S. Royo and J. Riu, “System and method for scanning a surface and computer program implementing the method,” patent US10018724B2 (2013).

26. M. Colomb, H. Khaled, P. André, J. Boreux, P. Lacôte, and J. Dufour, “An innovative artificial fog production device improved in the European project FOG,” Atmos. Res. 87(3-4), 242–251 (2008). [CrossRef]  

27. P. Duthon, M. Colomb, and F. Bernardin, “Light Transmission in Fog: The influence of wavelength on the extinction ratio,” Appl. Sci. 9(14), 2843 (2019). [CrossRef]  

28. P. Duthon, M. Colomb, and F. Bernardin, “Fog Classification by Their Droplet Size Distributions: Application to the Characterization of Cerema’s Platform,” Atmosphere 11(6), 596 (2020). [CrossRef]  

29. D. Deirmendjian, “Scattering and Polarization Properties of Water Clouds and Hazes in the Visible and Infrared,” Appl. Opt. 3(2), 187–196 (1964). [CrossRef]  

30. D. Deirmendjian, “Electromagnetic Scattering on Spherical Polydispersions,” (RAND Corporation, 1969).




Figures (9)

Fig. 1.
Fig. 1. Optomechanical set-up of the LiDAR unit.
Fig. 2.
Fig. 2. Time diagram of the different signals which appear in a frame acquisition.
Fig. 3.
Fig. 3. RGB image of the scene prepared in the fog chamber. Every object is marked either by a number (for plates or set of plates) or a letter (mannequins) for its identification. (top).
Fig. 4.
Fig. 4. A top view of the typical point cloud of Fig. 3 without fog in the chamber.