Optica Publishing Group

Multi-spectral SWIR lidar for imaging and spectral discrimination through partial obscurations

Open Access

Abstract

We have developed a multi-spectral SWIR lidar system capable of simultaneously measuring spatial and spectral information for imaging and spectral discrimination through partial obscurations. We image objects in the presence and absence of a series of obscurants to evaluate the capability of the system in classifying the objects of interest based on spectral and range information. We employ a principal component analysis-based algorithm to classify the objects and quantify the accuracy of detection under various obscured scenarios. The merits of multi-spectral lidar over hyperspectral imaging are highlighted for target identification in the presence of obscurants.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Multi-spectral lidar is being explored by a number of research groups to identify man-made and natural objects that are partially obscured by foliage, sparse clouds or other obscurants. The range-specific spectral information allows one to separate the spectral signatures of different layers of return to aid identification. A common research application is the study of forest undergrowth for health, fire control, invasive species and seasonal variations in tree populations [1,2]. Of interest to the security community is the distinction of man-made objects from natural objects under partial obscuration [3–6]. There is even a commercially available three-wavelength lidar, Optech’s Titan [7], that is being utilized by several groups [8–10] to exploit the difference in the reflectance properties of vegetation and water. Traditionally, data fusion techniques were used to combine the range and spectral information from co-located instruments, with several shortcomings due to registration and co-location problems [4,11]. Hence, there has been strong interest in single instruments that provide both range and spectral information. Most multi-spectral instruments either utilize multiple individual lasers [12,13] with common transmit and receive optics, which can still pose target overlap issues, or use a continuum laser as a single light source [2,5,14,15] to mitigate some of the source variation effects, as done in this study.

A challenge of spectral lidar is radiometric calibration of each channel so that the measured spectra depend on the geometry, material and surface properties and are not biased by the instrument. For remote sensing, obscurants may be part of the local environment of the target, e.g. a tree canopy, or the intervening atmosphere, e.g. a diffuse cloud layer. Radiometric accuracy has been accomplished previously via deployment of calibration panels [2,5,12], or applications have focused on classification rather than accurate measurement [13] to alleviate the need for radiometric calibration. However, the previous studies’ methods require fixed geometry for approximate radiometric accuracy, which is incompatible with airborne or space-based lidar instruments where control of observation angles is not always possible. This is especially true of foliage penetration, which requires multiple look angles. A few studies [14,15] have attempted a calibration not requiring an in-situ reference panel that would be suitable for foliage penetration; however, Lambertian surfaces are assumed, and we follow that precedent here.

Here, we extend our work [15] and explore the use of a supercontinuum laser source coupled with a multispectral detector system. Our demonstration system utilizes seven channels in the SWIR region in the relatively transparent windows of the atmosphere; however, we have the flexibility to modify the number and spectral location of channels in the future to aid identification of spectrally diverse targets in obscured environments. For this study, SWIR wavelengths were selected due to the potential for superior haze penetration in littoral and maritime environments. Wavelengths in the SWIR are also important to the defense community for detection and classification and for eye-safety of remote sensing instruments. In this paper, we present multi-spectral lidar measurements from objects of varying reflectance under highly obscured scenarios in laboratory settings at a distance of 5 m. We discuss our principal component-based analysis method to identify the various objects in the presence and absence of obscurants, with quantifying metrics based on their spectral signature. We demonstrate the merits of utilizing range to isolate the spectra belonging to the objects from that of the obscurants, highlighting the merits of range-resolved multi-spectral lidar over hyperspectral imaging. Similar to previous studies [3], we also show marked improvements in classification accuracy by using range to isolate objects from a spectrally comparable background. One novel aspect of our analysis is in generalizing the data analysis to include obscurants that spectrally alter the partially transmitted beam. This may be important in the case of non-negligible atmospheric scattering, or more generally when the obscurant does not block the entire lidar beam and may alter the spectrum by diffraction or other means.
We show the obscurants may morph the recovered spectral properties of the objects of interest and that object classification needs to be performed based on the PCA vectors corresponding to the spectral signature of the obscured objects. This leads us to believe that object classification is not broadly possible based completely on calibration data and unobscured prior spectra. In the future we aim to demonstrate the system’s capabilities in more relevant environments and ranges, and to explicitly consider the geometry and bidirectional reflectance distribution function (BRDF).

2. Lidar layout

We have designed and developed a seven-channel multispectral lidar system using a supercontinuum laser from Fianium Inc [16] as the light source. The laser produces 0.25 µJ pulses at a 2 MHz repetition rate for an average power of 0.5 W in the 450 nm to 2200 nm spectral region, corresponding to an average spectral power density of 0.2 mW/nm. The pulse duration of the laser is < 1 ns. The lidar system was built by Sigma Space Research Corporation (now Hexagon US Federal) [17]. The output of the laser is collimated and then directed onto the scene by a high-speed steering mirror. The steering mirror enables us to scan the region of interest (ROI) and collect spectral and spatial data as a raster scan in 0.3 mrad angular step sizes. The return signal is collected by the same set of optics, collimated and separated into individual bands by means of dichroic beam splitters and bandpass filters, as shown in Fig. 1 and discussed in detail in Sivaprakasam et al. [15]. This simultaneous spectral measurement technique, similar to a prior study [6], enables us to acquire data faster from moving or changing targets in comparison to spectral and spatial scanning systems [3]. Our current system supports seven channels with center wavelengths of 1019, 1094, 1188, 1257, 1311, 1495 and 1550 nm, with each channel having a 10 nm bandwidth; this is one less channel than described in our previous paper due to a mismatch in optical calibration [15]. The channels are assembled in two decks, with optical trains for four and three channels in each deck with individual detectors. Due to the optical configuration of the spectral channels, there is a minor spatial mismatch between the channels that we correct during data processing. The system is designed to detect weak signal returns in the few-photon regime using Discrete Amplification Photon Detectors (DAPD) [18] in linear mode.
The signals from the seven detectors are processed by custom electronics consisting of two time of flight (TOF) processing units, where adjustable thresholds are set to signify a detection event. The signal in each detector corresponds to the number of detection events resolved in range for a specific beam trajectory, which can be mapped to spatial coordinates for each range. Our current demonstration lidar is configured for laboratory-scale short-range returns of less than 17 m. The maximum field of view scanned by the lidar is 200 mrad by 140 mrad in the horizontal and vertical directions, respectively. The range resolution of the measurements is 29 mm.
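The 29 mm range resolution is tied to the effective TOF bin width through the two-way relation Δz = cΔt/2. A minimal sketch of this conversion (the function names are ours, not part of the instrument software):

```python
# Two-way TOF relation: delta_z = c * delta_t / 2
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bin_width_s: float) -> float:
    """Range resolution (m) implied by a TOF bin width (s)."""
    return C * bin_width_s / 2.0

def bin_width(range_res_m: float) -> float:
    """TOF bin width (s) implied by a range resolution (m)."""
    return 2.0 * range_res_m / C

# The 29 mm range resolution implies a timing bin of roughly 193 ps
print(f"{bin_width(0.029) * 1e12:.0f} ps")
```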


Fig. 1. Optical Schematic of the multi-spectral lidar system showing the beam and return path. A series of dichroic filters and bandpass filters is used to isolate the light for seven detection channels with individual DAPD detectors.


3. Experimental setup and calibration

The lidar data acquisition was tested and optimized for operational parameters such as the number of laser shots per point and the detection thresholds for the two time of flight (TOF) units. Without averaging, radiometric accuracy is not generally achievable in lidar systems where each detection event is the result of single or few photons. The lidar signal was observed to be linearly proportional to the number of laser shots, beginning with an averaging sample of 5,000 shots (dwell time of 2.5 ms), and the signal to noise ratio was observed to be proportional to the number of samples n to the 0.6 power (n^0.6), moderately different from a shot noise limited system (our system has strong contributions from electronic/readout noise sources). For most measurements described in this paper we acquire 10,000 or 100,000 shots per point scan, with nominal standard deviations in signal of 30% and 8%, respectively. In the current configuration, one threshold value is settable for all the channels processed on each of the two TOF units. The signal and noise characteristics of the individual channels, 1-4 and 5-7, were monitored, and the threshold for each unit was chosen as a compromise between optimum signal to noise ratio and operation in the narrow regime where the processor of the TOF unit does not overload or saturate. The dynamic range of the system was measured to be about two orders of magnitude; however, the detector response is non-linear at the higher signal levels. Therefore, we used an external calibration method, whereby we calibrate the lidar response against a spectrometer to span the whole dynamic range of the system. This calibration enabled us to correct for the non-linear response of the lidar signal. The spectral response of the system is calibrated using a highly reflective and spectrally flat 25 × 25 cm Spectralon target. Normalizing the measured data to the Spectralon lidar response allows us to calibrate the lidar measurement, correcting for variations in the source spectrum, transmission, and detector efficiency, and enables us to report the reflectivity of objects.
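As an illustrative sketch of this normalization step (the function and the assumed panel reflectance value are ours; the instrument software is not published), the per-channel reflectance follows from ratioing target returns to the Spectralon return, assuming the non-linearity correction has already been applied:

```python
import numpy as np

def reflectance(target_counts, spectralon_counts, panel_reflectance=0.99):
    """Estimate per-channel reflectance by ratioing to the Spectralon return.

    target_counts, spectralon_counts: integrated counts per spectral channel,
    assumed already corrected for the detector non-linearity.
    panel_reflectance: nominal Spectralon reflectance (assumed ~0.99 in SWIR).
    """
    target = np.asarray(target_counts, dtype=float)
    panel = np.asarray(spectralon_counts, dtype=float)
    return panel_reflectance * target / panel
```

A target returning half the Spectralon signal on a given channel would then be reported with a reflectance near 0.5 on that channel.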

To validate the spectral response of the lidar system we measure the lidar response of lab-made calibration materials, which are 30 × 30 cm canvases painted with specific materials with distinct spectral responses in the NIR bands. A series of five measurements was made for each calibration material with the lidar system, and the signals are plotted as open circles in Fig. 2 for two materials, labelled A and B. The reflectance measurements of the calibration materials were made using commercially available equipment from Spectra Vista Corporation consisting of a surface reflectometry probe and an HR1024i spectrometer [19]. Spectra from this instrument serve as the reference spectra and are shown as black lines in Fig. 2. The five measurements for each material fall close to each other, with average deviations from the reference spectra of 7% and 8% for materials A and B, respectively.


Fig. 2. Reflectance measurements from five trials of validation samples are plotted as open circles along with their reference spectra plotted as solid lines for (a) material A and (b) material B. The plots show good correlation between the measurements.


A typical data acquisition setup is shown in Fig. 3(a). The lidar system operates in a raster scan mode. The maximum field of view scanned by the lidar is 200 mrad by 140 mrad, which corresponds to spatial dimensions of 1.0 m by 0.7 m with a 1.5 mm spatial resolution at the nominal target range of 5 m. Optionally, three obscurants were employed, singly or in combinations of two as shown in Fig. 3(a), to provide partial obscuration along the lidar path.
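These scan parameters are related through the small-angle approximation (transverse extent ≈ scan angle × range); a quick check of the quoted numbers (names are ours):

```python
RANGE_M = 5.0  # nominal target range, m

def extent(angle_rad: float, r: float = RANGE_M) -> float:
    """Transverse extent subtended by a scan angle at range r (small-angle)."""
    return angle_rad * r

# 200 mrad -> 1.0 m, 140 mrad -> 0.7 m, 0.3 mrad step -> 1.5 mm
print(extent(200e-3), extent(140e-3), extent(0.3e-3))
```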


Fig. 3. (a) shows the lidar measurement setup for a target with two obscurants placed along the path, (b) shows the point cloud collected from one spectral channel, (c) shows the transient profile of one range trajectory and (d) shows the signal from individual range slices highlighting the fine features and lidar resolution. The tallest objects are the tin box, black plastic box and the yellow children’s toy construction block, which show up in the first layer of the scan, followed by the cardboard box and the red toy construction block.


Obscurant #1 is a tennis racquet (TR) with a nylon string grid pattern with 20 mm pitch placed at 3.3 m, obscurant #2 is one layer of window screen (WS) with a 3 mm pitch grid pattern placed at 4.2 m, and obscurant #3 is 3 layers of the same window screen (3WS). The lidar return was measured for each target/obscuration combination along with a reference Spectralon scan each day to capture any day-to-day variations in the instrument. Two sets of target objects are used to characterize the lidar response in this paper. One is a set of wooden letters and the second is a series of 3D objects composed of different materials and of varying heights, mounted on a cardboard box, depicted as the target in Fig. 3(a). An example of the 3D point cloud measured at one spectral channel from the set of 3D objects in the presence of two obscurants is shown in Fig. 3(b). As a reminder, seven such point clouds are recorded from each scan. The signal for each object is observed in multiple range planes, due to the object's orientation with respect to the lidar line of sight and other artifacts. The signal from a single trajectory is plotted as intensity vs. range in Fig. 3(c). To capture the signal corresponding to each obscurant or object, the peak signal is identified and a number of range slices around the peak are used for signal integration, as highlighted by the red dotted lines. More details on the identification of the peak and integration range will be discussed in the data analysis section, Sec. 5. To highlight the range-dependent spatial properties of the lidar return, single range slices of the two obscurants and the objects are shown in Fig. 3(d). The grid pattern of the tennis racquet is clearly observed; however, the grid pattern from the finer material of the window screen is not discernable.
The range slices of the target objects show the return being strongest at different range slices due to the differing heights of the objects, as depicted in Fig. 3(a). A strong return is observed from the mounting box, as depicted in the last layer.

Figures 4(a) and 4(b) show photographs of the wooden letter and 3D targets. The letters are 12 cm by 6 cm with a thickness of 1 cm. They are painted with three different paints having distinct spectral reflectance in the SWIR, even though the letters N and R are indistinguishable in the visible region, as seen in Fig. 4(a). 2D spatial plots of the intensity integrated over the appropriate range slices of the lidar return from the two targets are shown in Figs. 4(c) and 4(d) for one of the spectral channels. In Fig. 4(c) we draw a region of interest within each letter. The average intensity measured inside each box for each channel is normalized to the intensity measured for the Spectralon target return to provide the reflectance measurement of each object and is plotted in Figs. 4(e) and 4(f) with reference spectra measured with a surface reflectance probe [19]. The lidar returns for the letters match well with the reference spectra. The average deviations from the reference measurements are 23%, 18% and 13% for letters N, R, and L respectively, and 10% for the background, in line with the validation measurements discussed in Fig. 2. Similarly, the reflectivity of the 3D objects is plotted in Fig. 4(f) along with their reference spectra. Good agreement is observed between the lidar return and reference spectra for four of these objects, with average deviations of 25%, 15%, 16% and 11% for the black plastic, red toy block, yellow toy block and cardboard. The reference and the lidar measurement vary drastically for the tin box, with reflectivity estimated by the lidar as high as 2. The tin box is a highly specular object and thus its reflectance measurement relative to the diffuse reflectance standard Spectralon is subject to errors arising from ignoring the BRDF. Due to these challenges, classification of these 3D objects is not pursued in this paper.


Fig. 4. Photos of the (a) letter targets and (b) 3D objects of varying material are shown. The spectral measurement from a single channel is shown for both targets in (c) and (d), where the regions of interest are depicted as rectangular areas. The lidar reflectance for each object is plotted along with the reference spectra for the (e) letters and the (f) 3D objects.


4. Target classification

In this section, we develop an object classification methodology using principal component analysis (PCA). PCA is a common methodology that is used extensively for object classification [20,21]; however, the data processing and quantifying metrics are uniquely developed here to handle our spectral and spatial 4D data from objects that are obscured by obscurants that modulate both the spectrum and the intensity. Within each trajectory, the measured signal may be regarded as discrete samples of the return intensity as a function of range, as seen in Fig. 3(c). We regard each data point along the trajectory as a 7-component vector (where each component corresponds to a spectral channel). The peak return along a given trajectory is registered at range ${z_0}$ where we observe the maximum vector magnitude over the spectral channels. To mitigate effects of object roughness and timing noise, we integrate the signal over the relevant transient of the return pulse around ${z_0}$ on each channel,

$${\bar{\varphi }_k} = \mathop \smallint \nolimits_{{z_0} - {z^ - }}^{{z_0} + {z^ + }} {\varphi _k}(z )dz,$$
where ${\varphi _k}(z )$ is the return signal on spectral channel k at range z, and parameters ${z^ - }$ and ${z^ + }$ are chosen based on the shape of the return pulse seen in Fig. 3(c). Here, ${\bar{\varphi }_k}$ represents the integrated lidar return on spectral channel k. The manually derived integration bounds are marked on Fig. 3(c) using dashed red lines, and the area under the solid blue curve corresponds to ${\bar{\varphi }_k}$ for the object or obscurant. The peak spectral magnitude integrated around ${z_0}$ is therefore given by
$${\bar{\varphi }_0} = {\left( {\mathop \sum \nolimits_{k = 1}^7 {{\bar{\varphi }}_k}^2} \right)^{1/2}}$$
for each beam trajectory in the lidar scan. Normalizing the quantity in Eq. (1) by Eq. (2) yields the normalized spectral lidar return,
$$\bar{\varphi }_k^{norm} = \frac{{{{\bar{\varphi }}_k}}}{{{{\bar{\varphi }}_0}}},$$
so that integrated peak returns for all beam trajectories exhibit unit vector magnitude. This normalization is chosen to ensure that our classification is based strictly on the measured spectral information content and remains independent of the object’s albedo. In our experience, this approach is particularly effective at handling obscurations that heavily attenuate the return signal from the target of interest. Figure 5 shows the normalized spectral lidar return on all the spectral channels for the letter targets, where each pixel in the depicted spectral images corresponds to a beam trajectory in the lidar scan. From a visual inspection, the letter R appears to be easily distinguished from the letters N and L on all spectral channels. In contrast, the distinction between letters N and L appears to be much weaker, as expected from the similar spectral variation shown in Fig. 4(e). While the letter L appears spectrally similar to the black cloth background in the SWIR lidar channels, our classification algorithm is able to distinguish them. Moreover, it is possible to utilize range information from our measurements to separate the signature of objects from the background, which we will discuss in more detail at the end of Section 5.
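The peak-finding, integration and normalization steps of Eqs. (1)-(3) can be sketched as follows (a simplified rectangular-sum version with our own variable names, assuming uniform range bins):

```python
import numpy as np

def normalized_spectral_return(phi, z, z_minus, z_plus):
    """Apply Eqs. (1)-(3) to one beam trajectory.

    phi: (7, n_range) array -- return signal per spectral channel vs. range.
    z:   (n_range,) array of uniformly spaced range-bin centers.
    z_minus, z_plus: integration half-widths around the peak range z0.
    Returns (z0, phi_bar_norm), where phi_bar_norm has unit magnitude.
    """
    phi = np.asarray(phi, dtype=float)
    # Peak range z0: maximum vector magnitude over the spectral channels
    mag = np.linalg.norm(phi, axis=0)
    z0 = z[np.argmax(mag)]
    # Eq. (1): integrate each channel over [z0 - z_minus, z0 + z_plus]
    mask = (z >= z0 - z_minus) & (z <= z0 + z_plus)
    dz = z[1] - z[0]
    phi_bar = phi[:, mask].sum(axis=1) * dz
    # Eqs. (2)-(3): normalize by the peak spectral magnitude
    phi_0 = np.linalg.norm(phi_bar)
    return z0, phi_bar / phi_0
```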


Fig. 5. Normalized spectral lidar images of the letter targets on a black cloth background for all spectral channels; the spectral distinction of letters N and L from letter R is high for most channels, while the distinction between letter L and the background is weak for the spectral channels at 1257 nm and 1311 nm, consistent with measurements discussed in Fig. 4(e).


With the normalized spectral data in Fig. 5, we apply data standardization by subtracting the mean of each spectral channel and dividing by the standard deviation for that channel. This procedure removes channel sensitivity biases from the PCA, allowing us to forego data calibration using the Spectralon standard (as was performed to obtain reflectance measurements). From the standardized data, we compute the principal components. We then define the members of our target classification: we again specify regions of interest in Fig. 5, similar to those outlined in Fig. 4(c), apply spatial averaging and the same normalization, and project the result onto the principal component vector space (PCVS) to obtain the standardized spectral signatures (SSS), which define the center points for the members of our target classification. The peak data points from individual trajectories are then classified based on the minimum Mahalanobis distance [22] to any SSS. The Mahalanobis distance effectively whitens the PCVS by weighting the projection along each principal component by its associated eigenvalue. Relative to any SSS in the PCVS, we can express this metric mathematically as

$${r_{i,j}} = {\left( {\mathop \sum \nolimits_{l = 1}^L \frac{{{{({{v_l} - {\mu _l}} )}^2}}}{{{\sigma _l}^2}}} \right)^{1/2}},$$
where ${{\boldsymbol v}_{\boldsymbol i}} = {({{v_1},{v_2}, \ldots ,{v_L}} )^T}$ represents the spectral data point (or vector) in the PCVS, ${{\boldsymbol \mu }_{\boldsymbol j}} = {({{\mu_1},{\mu_2}, \ldots ,{\mu_L}} )^T}$ represents the positional vector of the jth SSS, and ${\sigma _l}$ is the eigenvalue associated with the lth principal component. Each spectral data point, indexed by subscript i, is classified to the class j corresponding to $r_i^{prox} \equiv \mathop {\min }\nolimits_j {r_{i,j}}$ obtained among all the SSSs (i.e., assigned to the class corresponding to the nearest SSS). Figure 6 depicts the classification metric by reducing the PCVS to the first $L = 3$ principal components. The SSS corresponding to the three letters and the background are shown in red. Prior knowledge of which member each lidar return belongs to is color coded using green, cyan, blue and pink for letters N, R, L and the background, respectively. The significant overlap between the letter L and the background (as we previously noted) is apparent here.
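The full pipeline — standardization, projection onto the leading principal components, and nearest-SSS assignment by the eigenvalue-weighted (Mahalanobis) distance of Eq. (4) — can be sketched as follows (a simplified version with illustrative names; the in-scene SSS are passed in as spatially averaged signatures):

```python
import numpy as np

def pca_classify(X, class_means_raw, L=3):
    """Standardize, project onto the first L principal components, and assign
    each point to the nearest standardized spectral signature (SSS) by the
    eigenvalue-weighted Mahalanobis distance of Eq. (4).

    X: (n_points, 7) normalized spectral returns.
    class_means_raw: (n_classes, 7) spatially averaged in-scene signatures.
    Returns (class index per point, r_prox per point).
    """
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sd                       # per-channel standardization
    cov = np.cov(Xs, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)       # ascending eigenvalue order
    order = np.argsort(evals)[::-1][:L]      # keep the top L components
    W, lam = evecs[:, order], evals[order]   # principal axes & eigenvalues
    V = Xs @ W                               # data points in the PCVS
    M = ((class_means_raw - mu) / sd) @ W    # SSS centers in the PCVS
    # Mahalanobis distance: weight each component by its eigenvalue (Eq. 4)
    d = np.sqrt((((V[:, None, :] - M[None, :, :]) ** 2) / lam).sum(axis=-1))
    return d.argmin(axis=1), d.min(axis=1)   # assigned class, r_prox
```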


Fig. 6. Individual pixels of the letters, N, R and L and background from Fig. 5 mapped to the vector space spanned by the first 3 principal components. The red pluses show the position of the SSS for each class.


Figure 7(a) shows the distribution of $r_i^{prox}$ for all pixels (i.e., lidar trajectories) in Fig. 5. Here, most of the pixels exhibit low values of $r_i^{prox}$. Those that exhibit the highest values arise from the boundaries of the letters, which will be discussed in Sec. 5. Figure 7(b) shows the difference from nearest SSS to the next nearest SSS, $r_i^{next}$, which one may regard as a confidence indicator for our classifier, i.e., $\left|{r_i^{prox} - \mathop {\min }\nolimits_j {r_{i,j \ne k}}} \right|$ where k is the assigned class. A larger difference here corresponds to a higher confidence that the spectral data vector has been classified correctly. The letter R demonstrates the highest confidence in classification, followed by the letter N. As expected, the diagnostic metric is similar for the letter L and the background.


Fig. 7. Visual for all pixels from Fig. 5 of (a) the proximity metric ${r^{prox}}$ to the nearest SSS and (b) the difference between the distances to the second-closest and the closest SSS; higher numbers indicate greater confidence of being assigned to the right class.


The accuracy of our classifier is measured as the ratio of correctly classified beam trajectories to the total number of beam trajectories. This accuracy can vary greatly depending on the difficulty of the classification problem. To characterize classification difficulty, we define a unitless spectral ambiguity for each classification member in the scene as the ratio of its mean spectral spread to its spectral distinction. The mean spectral spread for the jth member is defined as the mean ${r_{i,j}}$ for all data points belonging to the jth member based on prior knowledge. The spectral distinction for any class is the smallest Mahalanobis distance to any other class. The mean spectral ambiguity (averaged over all classes) is one way to assess the overall difficulty of the classification problem. An alternative diagnostic is to examine the mean spectral distinction of the classification problem, which we define as the average Mahalanobis distance among all classification members. Related to the mean spectral distinction is the minimum spectral distinction, i.e., the minimum Mahalanobis distance between any two classes, which provides a related but distinct measure of how prone a system is to misclassification.
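Under the stated definitions, these diagnostics follow directly from the pairwise Mahalanobis distances between the SSSs (a sketch with our own variable names; M holds the SSS coordinates in the PCVS and lam the principal-component eigenvalues):

```python
import numpy as np

def spectral_metrics(r, labels, M, lam):
    """Diagnostic metrics of Sec. 4 (illustrative sketch).

    r: (n_points, n_classes) Mahalanobis distances r_ij to each SSS.
    labels: (n_points,) prior-knowledge class index for each point.
    M: (n_classes, L) SSS positions in the PCVS; lam: (L,) eigenvalues.
    Returns (per-class spectral ambiguity, mean distinction, min distinction).
    """
    r, M, lam = np.asarray(r, float), np.asarray(M, float), np.asarray(lam, float)
    # Pairwise Mahalanobis distances between the SSSs
    pair = np.sqrt((((M[:, None, :] - M[None, :, :]) ** 2) / lam).sum(-1))
    np.fill_diagonal(pair, np.inf)
    distinction = pair.min(axis=1)  # per-class spectral distinction
    # Mean spectral spread: mean r_ij over the points truly in class j
    spread = np.array([r[labels == j, j].mean() for j in range(len(M))])
    ambiguity = spread / distinction  # unitless spectral ambiguity
    off_diag = pair[np.isfinite(pair)]
    return ambiguity, off_diag.mean(), off_diag.min()
```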

5. Obscuration measurements and analysis

One particular advantage of using range data in conjunction with spectral data is the ability to ignore returns that lie outside the range of interest in order to facilitate target classification, as demonstrated by Powers and Davis [3] and Puttonen et al. [6]. In particular, discriminating against returns from partial obscurants along the beam path to the target can be extremely difficult without ranging capabilities. In this section, we demonstrate the merits of employing a multispectral lidar system by introducing the partial obscurants described in Sec. 3 to our experimental setup and applying our prescribed classifier to the measured data.

We first characterized the spectral response of the partial obscurants using Spectralon as a reference target. We measured the double-pass transmission for each obscurant and combination thereof utilizing the supercontinuum source and an integrating sphere coupled to a reference spectrometer. The measured transmission values are plotted as solid lines in Fig. 8. We repeated a similar measurement using our lidar system for the same set of obscurant configurations (and normalized the data to the Spectralon return in the absence of obscurants) to obtain the target’s perceived reflectance through the obscurants. These additional measurements are plotted as circles in Fig. 8. A comparison between the double-pass transmission and the perceived reflectance shows good correlation for the various obscurant configurations, with mean deviations of 13%, 8%, 23%, 30% for TR, WS, 3WS, TR + WS and TR + 3WS configurations respectively, with higher variations noted for the highly obscuring configurations. It is important to note that the obscurants not only attenuate the signal but also alter the spectral profile of the light source.

Having characterized the obscurants, we measured the multispectral lidar returns for the target letters N, R and L through the varying obscurant combinations seen in the photographs in Figs. 9(a)-(c), and proceeded to classify the measured data. Figures 9(d)-(f) depict the relative spectral signatures derived from the normalized spectral lidar returns (i.e., $\bar{\varphi }_k^{norm}$) for each of the obscuration configurations, and the corresponding classification results are plotted in Figs. 9(g)-(i). It is interesting to note that the two types of partial obscurants behave rather differently with regards to their impact on our classifier. The comparison between Figs. 9(d) and (f) suggests that the 3-layer window screen not only attenuates the signal (transmitting on average only 2% of the incident light) but alters the spectral characteristics of the transmitted light as well. The tennis racquet, on the other hand, transmits on average 50% of the incident light while fully obscuring the return signal behind its larger grid pattern. Moreover, the spectrum of light transmitted through the tennis racquet appears to be impacted to a lesser degree than the window screen; this is apparent from the spectral signatures shown in Fig. 9(e). We ascribe this distinction to the difference in the obscuring feature size relative to the transverse size of the lidar’s scanning beam.


Fig. 8. The double-pass transmission measurement for each of the obscuration configurations is plotted as solid lines, and the reflectance of Spectralon measured through each of the obscurants is plotted as open circles on the left-hand axis. The apparent gap in the measured spectrum corresponds to the 1064-nm notch filter we used to block the emission from the supercontinuum laser’s pump source.



Fig. 9. (a-c) Photographs of the letter targets, N, R and L with a black cloth background in the absence and presence of two of the obscurants, TR and 3WS, (d-f) the spectral response of the three letters under each of the conditions described in (a-c), and (g-i) the corresponding classifications.


From the classification results in Figs. 9(g)-(i), it becomes clear that the measured change in the spectral signatures transmitted through partial obscurants is not the key determining factor in our classifier’s performance. This is due to the use of normalized, in-scene signatures for classification. Table 1 lists the diagnostic metrics proposed in Sec. 4 for the various obscurant configurations, along with the measured double-pass transmission and the final classification accuracy. The diagnostic metrics in Table 1 show that the spectral ambiguity generally increases with the complexity of obscurants in the measurement setup, and classification accuracy decreases as a result. The mean spectral spread and the mean and minimum spectral distinction generally increase with the complexity of the scene, albeit not necessarily following the trend of reduction in transmission. It is important to note that our defined diagnostic metrics assume (for simplicity) a radially symmetric data distribution around each SSS within the whitened PCVS and, hence, do not capture all the characteristics of the spectral data pertaining to their classification.


Table 1. Quantitative metrics for the spectral classification of the NRL letters under various obscurant configurations

Taking a closer look, it appears that introducing the tennis racquet gives rise to a much larger spectral spread of the normalized lidar returns as well as a higher minimum spectral distinction, even though it exhibits the highest transmission among all of the obscurants tested. We surmise that this phenomenon is attributable to our instrumentation rather than the obscurant itself: because fully opaque obscurations produce edge features in the spectral lidar image, our spectral measurements become highly susceptible to any mismatch in the optical alignment among the spectral channels of the instrument, similar to Powers and Davis [3]. Such a mismatch produces spectral mixing when the measured returns from slightly different beam angles are consolidated into spectral data points. Indeed, the perimeters of the letters in Fig. 7(a) constitute edge features and appear to be significantly farther removed from their nearest SSS due to spectral mixing. We hasten to add that the side facets of the target letters are hidden (along the beam propagation axis) from the lidar in our experimental setup and, hence, cannot be the reason for the outstanding spectral ambiguity exhibited along the perimeter of the target letters.

To ascertain our classifier’s performance on the obscured objects based on their unobscured spectral signature (akin to using spectral signatures from a database to perform object matching), we reclassified the same set of data from Table 1 based on the spectral signatures of the unobscured letters and background. The results are tabulated in Table 2, which demonstrates high spectral ambiguity and correspondingly poor classification accuracies that do not follow the expected trend established in Table 1. This analysis further demonstrates that the spectral signatures of the objects are highly distorted by the presence of the obscurants and that it is difficult to recover the spectra of the original objects under these conditions.

Table 2. Quantitative metrics for the spectral classification of the NRL letters under various obscurant configurations based on PCA of unobscured objects
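The library-matching experiment above can be illustrated with a short sketch. This is an assumption-laden illustration, not the paper's code: it supposes the unobscured scene supplies a fixed PCA basis, channel mean, and SSS coordinates, onto which the obscured-scene spectra are projected and assigned by the whitened distance of Eq. (4); all names are hypothetical.

```python
import numpy as np

def classify_with_library(spectra, pc_basis, mean, sss, sigma):
    """Assign each normalized spectrum to the nearest library SSS.

    spectra  : (N, 7) normalized channel returns from the (obscured) scene
    pc_basis : (7, 3) PC basis learned from the unobscured scene
    mean     : (7,)   channel mean of the unobscured training data
    sss      : (C, 3) library SSS coordinates in the PC space
    sigma    : (3,)   per-axis whitening scale
    """
    scores = (spectra - mean) @ pc_basis                  # project into PC space
    d = np.linalg.norm((scores[:, None, :] - sss[None, :, :]) / sigma,
                       axis=-1)                           # whitened distance to each SSS
    return np.argmin(d, axis=1)                           # nearest-SSS class label
```

The poor accuracies in Table 2 correspond to feeding obscured-scene `spectra` through a basis and `sss` set trained without the obscurants.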

It is important to emphasize that the results in Table 1 make use of ranging data to reject the partial obscurants during data analysis. Without ranging capability, the peak return for beam trajectories that encounter the obscurant tends to register at the obscurant’s depth rather than at the target of interest. Such data points are of no use to the end-user in most practical scenarios and render target classification virtually impossible in the presence of obscurants such as the window screen. Ranging data can also help separate the background from the letters in our experimental setup.

Figure 10 shows the resulting classification after eliminating the background based on ranging data with a marked improvement in comparison to the classification performed with the background in Fig. 9(g). The removal of the background improves the overall classification accuracy to 94.1% (compared to 81% with the background included in Table 1). These capabilities illustrate the merits of employing a multispectral lidar system for target classification (over, for example, a hyperspectral system without ranging capabilities).
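The range discrimination described above amounts to gating the point cloud before classification. A minimal sketch, assuming each return is reduced to its peak-range coordinate and a 7-channel spectrum (array shapes and the function name are our own, not the paper's):

```python
import numpy as np

def range_gate(points, spectra, z_min, z_max):
    """Keep only returns whose peak range falls within the target gate,
    rejecting obscurant and background layers before classification.

    points  : (N, 3) x, y, z coordinates of the peak returns
    spectra : (N, 7) per-point spectral channel returns
    """
    keep = (points[:, 2] >= z_min) & (points[:, 2] <= z_max)
    return points[keep], spectra[keep]
```

Gating to the letters' depth plane is what removes both the obscurant layer and the background, enabling the accuracy improvement from 81% to 94.1%.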

6. Conclusion

A seven-channel multi-spectral lidar system is characterized and evaluated for its ability to classify partially obscured objects. Instrument shortcomings, such as signal nonlinearity and fine spectral-channel misalignment, that are mitigated in post-processing are discussed. Reflectance measurements of diffuse validation targets agree with reference measurements to better than 10%. Lidar measurements are presented for two sets of objects: painted letters and 3D objects. The 3D objects exhibit specular reflection and larger deviations from the reference spectra, so their classification was not pursued further. The letters serve as a representative set of diffuse targets and are characterized under a varying set of obscuration conditions that includes spectral modulation and intensity attenuation ranging from 50% to 1%. We employ PCA and the Mahalanobis distance for classification. The algorithm achieves 81% classification accuracy for the unobscured targets in the presence of a challenging background and 94% when range is used to isolate the targets from the background. Our analysis also establishes that it is difficult to build a library and identify targets from prior signatures under the obscuration conditions we tested, in which both the intensity and the spectrum of the incident light are modified.

Fig. 10. Classification after removing background using ranging data (no obscurants), with an improved accuracy of 94.1%.

Funding

Office of Naval Research (56-6B02-0-9).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request and sponsor clearance.

References

1. I. H. Woodhouse, C. Nichol, P. Sinclair, J. Jack, F. Morsdorf, T. J. Malthus, and G. Patenaude, “A multispectral canopy LiDAR demonstrator project,” IEEE Geosci. Remote Sensing Lett. 8(5), 839–843 (2011). [CrossRef]  

2. A. M. Wallace, A. McCarthy, C. J. Nichol, X. Ren, S. Morak, D. Martinez-Ramirez, I. H. Woodhouse, and G. S. Buller, “Design and evaluation of multispectral lidar for the recovery of arboreal parameters,” IEEE Trans. Geosci. Remote Sensing 52(8), 4942–4954 (2014). [CrossRef]  

3. M. A. Powers and C. C. Davis, “Spectral LADAR: active range-resolved three-dimensional imaging spectroscopy,” Appl. Opt. 51(10), 1468 (2012). [CrossRef]  

4. M. Nischan, R. Joseph, and J. Libby, “Active spectral imaging,” 14, 15 (2003).

5. B. Johnson, R. Joseph, M. L. Nischan, A. B. Newbury, J. P. Kerekes, H. T. Barclay, B. C. Willard, and J. J. Zayhowski, “Compact active hyperspectral imaging system for the detection of concealed targets,” in Detection and Remediation Technologies for Mines and Minelike Targets IV (SPIE, 1999), p. 144.

6. E. Puttonen, T. Hakala, O. Nevalainen, S. Kaasalainen, A. Krooks, M. Karjalainen, and K. Anttila, “Artificial target detection with a hyperspectral LiDAR over 26-h measurement,” Opt. Eng 54(1), 013105 (2015). [CrossRef]  

7. “Optech Titan | GEO3D,” https://www.geo3d.hr/3d-laser-scanners/teledyne-optech/optech-titan.

8. L.-Z. Huo, C. A. Silva, C. Klauberg, M. Mohan, L.-J. Zhao, P. Tang, and A. T. Hudak, “Supervised spatial classification of multispectral LiDAR data in urban areas,” PLoS One 13(10), e0206185 (2018). [CrossRef]  

9. J. C. Fernandez-Diaz, W. E. Carter, C. Glennie, R. L. Shrestha, Z. Pan, N. Ekhtari, A. Singhania, D. Hauser, and M. Sartori, “Capability assessment and performance metrics for the titan multispectral mapping lidar,” Remote Sens. 8(11), 936 (2016). [CrossRef]  

10. R. Tobin, Y. Altmann, X. Ren, A. McCarthy, R. A. Lamb, S. McLaughlin, and G. S. Buller, “Comparative study of sampling strategies for sparse photon multispectral lidar imaging: towards mosaic filter arrays,” J. Opt. 19(9), 094006 (2017). [CrossRef]  

11. A. V. Kanaev, B. J. Daniel, J. G. Neumann, A. M. Kim, and K. R. Lee, “Object level HSI-LIDAR data fusion for automated detection of difficult targets,” Opt. Express 19(21), 20916 (2011). [CrossRef]  

12. C. Briese, M. Pfennigbauer, A. Ullrich, and M. Doneus, “Multi-wavelength airborne laser scanning for archaeological prospection,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XL-5/W2, 119–124 (2013).

13. B. Chen, S. Shi, W. Gong, Q. Zhang, J. Yang, L. Du, J. Sun, Z. Zhang, and S. Song, “Multispectral LiDAR point cloud classification: a two-step approach,” Remote Sens. 9(4), 373 (2017). [CrossRef]  

14. T. Hakala, J. Suomalainen, S. Kaasalainen, and Y. Chen, “Full waveform hyperspectral LiDAR for terrestrial laser scanning,” Opt. Express 20(7), 7119 (2012). [CrossRef]  

15. V. Sivaprakasam, M. K. Yetzbacher, H. E. Gemar, and A. T. Watnik, “Multi-spectral SWIR lidar for imaging and spectral discrimination through partial obscurations,” in Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI (SPIE, 2020), p. 11.

16. “SuperK FIANIUM supercontinuum lasers - NKT Photonics,” https://www.nktphotonics.com/lasers-fibers/product/superk-fianium-supercontinuum-lasers/.

17. “About Us | Hexagon US Federal,” https://hexagonusfederal.com/about-us.

18. “Home - Amplification Technologies,” https://amplificationtechnologies.com/.

19. “HR-1024i - SVC: Spectra Vista Corporation,” https://spectravista.com/instruments/hr-1024i/.

20. L. I. Smith, “A tutorial on Principal Components Analysis,” (n.d.).

21. “Principal Component Analysis (PCA) Explained | Built In,” https://builtin.com/data-science/step-step-explanation-principal-component-analysis.

22. Introduction to Multivariate Statistical Analysis in Chemometrics (CRC Press, 2016).


Figures (10)

Fig. 1. Optical schematic of the multi-spectral lidar system showing the beam and return paths. A series of dichroic filters and bandpass filters is used to isolate the light for seven detection channels with individual DAPD detectors.
Fig. 2. Reflectance measurements from five trials of validation samples are plotted as open circles along with their reference spectra plotted as solid lines for (a) material A and (b) material B. The plots show good correlation between the measurements.
Fig. 3. (a) The lidar measurement setup with a target and two obscurants placed along the path, (b) the point cloud collected from one spectral channel, (c) the transient profile of one range trajectory, and (d) the signal from individual range slices highlighting the fine features and lidar resolution. The tallest objects are the tin box, black plastic box, and yellow children’s toy construction block, which show up in the first layer of the scan, followed by the cardboard box and the red toy construction block.
Fig. 4. Photos of the (a) letter targets and (b) 3D objects of varying material. The spectral measurement from a single channel is shown for both targets in (c) and (d), where the regions of interest are depicted as rectangular areas. The lidar reflectance for each object is plotted along with the reference spectra for the (e) letters and (f) 3D objects.
Fig. 5. Normalized spectral lidar images of the letter targets on a black cloth background for all spectral channels; the spectral distinction between letters N and L in comparison to letter R is high for most channels, while weak between letter L and the background for the spectral channels at 1257 nm and 1311 nm, consistent with the measurements discussed in Fig. 4(e).
Fig. 6. Individual pixels of the letters N, R, and L and the background from Fig. 5 mapped to the vector space spanned by the first three principal components. The red pluses show the position of the SSS for each class.
Fig. 7. Visual for all pixels from Fig. 5 of (a) the proximity metric ${r^{prox}}$ to the nearest SSS and (b) the difference in ${r^{prox}}$ between the second-closest and closest SSS; higher numbers indicate greater confidence of assignment to the right class.
Fig. 8. The double-pass reflectance measurement for each obscuration configuration is plotted as solid lines, and the reflectance of Spectralon measured through each of the obscurants is plotted as open circles on the left-hand axis. The apparent gap in the measured spectrum corresponds to the 1064-nm notch filter used to block the emission from the supercontinuum laser’s pump source.
Fig. 9. (a-c) Photographs of the letter targets N, R, and L with a black cloth background in the absence and presence of two of the obscurants, TR and 3WS; (d-f) the spectral response of the three letters under each of the conditions in (a-c); and (g-i) the corresponding classifications.
Fig. 10. Classification after removing background using ranging data (no obscurants), with an improved accuracy of 94.1%.


Equations (4)

$$\bar{\varphi}_k = \int_{z_0 - z_-}^{z_0 + z_+} \varphi_k(z)\, dz,$$
$$\bar{\varphi}_0 = \left( \sum_{k=1}^{7} \bar{\varphi}_k^2 \right)^{1/2},$$
$$\bar{\varphi}_k^{norm} = \frac{\bar{\varphi}_k}{\bar{\varphi}_0},$$
$$r_{i,j} = \left( \sum_{l=1}^{7} \frac{(v_{i,l} - \mu_{j,l})^2}{\sigma_l^2} \right)^{1/2}.$$
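The spectral normalization and whitened-distance equations above translate directly to NumPy. A minimal sketch (function names are ours; the per-channel range integration producing the seven values of $\bar{\varphi}_k$ is assumed to have been done upstream):

```python
import numpy as np

def normalize_spectrum(phi_bar):
    """Scale the 7-channel range-integrated returns to a unit-norm
    spectral vector, removing overall intensity (Eqs. (2)-(3))."""
    phi0 = np.sqrt(np.sum(phi_bar ** 2))
    return phi_bar / phi0

def whitened_distance(v_i, mu_j, sigma):
    """Distance of data point v_i from class mean mu_j, with each axis
    scaled by its standard deviation sigma_l (Eq. (4))."""
    return np.sqrt(np.sum((v_i - mu_j) ** 2 / sigma ** 2))
```

Because the normalization discards absolute intensity, the classifier is insensitive to uniform attenuation by an obscurant, consistent with the in-scene classification results discussed above.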