Abstract

True color 3D imaging plays an essential role in expressing target characteristics and reconstructing 3D scenes. It expresses both the colors and the spatial positions of targets, which benefits classification and identification as well as the investigation of target materials. The fusion of 3D point clouds with RGB images can achieve object reconstruction, yet varying illumination conditions and registration problems remain. As a new active imaging technique, the hyperspectral LiDAR (HSL) system avoids these problems through its hardware configuration and provides technical support for reconstructing 3D scenes. The spectral range of the HSL system used here is 431–751 nm. However, the spectral information obtained with HSL measurements may be influenced by various factors that further degrade true color 3D imaging. This study proposes a new color reconstruction method to improve color reconstruction with missing spectral bands. Two indoor experiments and five color reconstruction schemes were used to evaluate the feasibility and repeatability of the method. Compared with the traditional color reconstruction method, the color reconstruction effect and color similarity were considerably improved; the similarity of the B color component, for instance, improved from 0.324 to 0.762. The imaging results demonstrate that the new method reliably improves the color reconstruction effect with missing spectral bands, thereby expanding the application scope of HSL measurements.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

For decades now, target imaging has attracted extensive attention from many scholars and has been employed widely in medicine, biology, geology, agriculture, and forestry [1–4]. The fundamental aim of target imaging is to fully understand, display, and classify detected objects using active or passive electromagnetic radiation information acquired by various types of sensors. With the continuous development of remote sensing techniques and methods, imaging techniques have become more diverse, precise, and systematic across different spatial and temporal scales. Mainstream imaging methods include, but are not limited to, passive multispectral/hyperspectral imaging, 3D laser scanning imaging, fluorescence imaging, and polarization imaging [5–9], each with its own merits and demerits in different applications.

3D laser scanning imaging and passive multispectral/hyperspectral imaging can intuitively express the spatial and spectral information of targets, respectively [10–13]. However, passive multispectral/hyperspectral imaging lacks vertical distribution information, which is essential for detecting forest canopy and vegetation biomass [14,15]; moreover, varying illumination conditions and atmospheric effects degrade imaging quality. 3D laser scanning can obtain spatial information precisely, yet it lacks the abundant spectral information that is indispensable for inverting vegetation biochemical parameters and reconstructing color information [16]. Prior to the development of hyperspectral LiDAR (HSL) systems, researchers attempted to fuse point clouds with passive images into the same framework to acquire more target characteristics synchronously [17,18]. Nonetheless, these methods still face significant spatial and temporal registration problems caused by the two different detection mechanisms [19]. With the rapid development of remote sensing detectors, HSL systems, which capture rich spectral information at each point, can radically avoid spatial-temporal registration and illumination condition problems through their hardware configuration.

The data acquisition of HSL systems, in which spatial and hyperspectral information is gathered synchronously in a single measurement, extends spectral imaging from 2D to 3D [20]. To date, several laboratory prototypes of HSL systems have been developed [19–22], yet they are still not commercialized because hardware performance is limited and construction costs remain prohibitive. Current studies mainly focus on intensity correction, biochemical parameter inversion, and target classification [23–25]. Thus, research on true color 3D imaging with HSL systems is still in its initial phase [26,27]. As remote sensing detection environments grow in complexity, achieving true color 3D imaging in a complex environment with HSL systems has become a primary goal for the community.

True color information, an external manifestation of the target spectrum, is closely related to surface reflectance characteristics. When the spectral signals reflected from the external environment enter the human eye, the observer perceives the true color information through the visual perception system. For instance, an observer sees red when red light illuminates a white target, because only the reflected spectral signals in the red range are received. Compared with single-wavelength LiDAR, HSL systems obtain spatial and spectral information simultaneously in one measurement and thus provide technical support for true color 3D imaging.

The complete acquisition of the spectral information reflected from targets in the visible range is the prerequisite for true color reconstruction. However, for HSL systems, achieving accurate true color 3D imaging is a challenge because spectral information in the visible range may be missing from HSL measurements. The output performance of the supercontinuum laser source and the quantum efficiency of the detectors are two vital factors for effectively acquiring backscattered spectral signals. As the surrounding environment grows more complex, the backscattered spectral signals are affected to the extent that the signals of several channels may be missing. In addition, the surface reflectance of different targets is a notable factor that may impede spectral signal acquisition. In a complex scanning scene, different targets have different surface reflectance characteristics, which may render the spectral signals in low-reflectivity bands unavailable and subsequently impede color reconstruction. Therefore, we propose a new method to improve color reconstruction with missing spectral bands. The method consists of three parts: determination of three new bands, determination of color weights, and reconstruction of true color 3D imaging.

Two indoor experimental scenes, one of 17 colored card papers and one of 14 different color targets, were set up in the laboratory. The former was used to evaluate the feasibility and accuracy of the new method, whereas the latter was used to determine its applicability in different scenes. The method consists of three steps. First, the hardware performance of the supercontinuum laser source and detectors, together with the sensitivity of human eyes to light of different wavelengths, was considered to divide the spectral range into three intervals; the spectral intensity within each interval was then averaged to construct three new bands. Second, non-linear overdetermined equations between true colors and reconstructed colors were formulated to calculate the color weights of the three new bands. Third, the 3D information of targets was combined with the reconstructed colors to achieve true color 3D imaging. The experimental results of five color reconstruction schemes were compared to validate the reconstruction effect of the proposed method. The results were assessed through qualitative and quantitative analyses.

2. System description and experimental design

2.1 HSL system

A laboratory terrestrial HSL system was assembled as an optical setup for measuring backscattered signals (Fig. 1). The HSL system consists of five components: the supercontinuum laser, optical receiver, light-splitting, signal reception, and ranging components. The wide-band “white” laser (YSL Photonics, SC-OEM-HP) is generated and emitted from high-nonlinearity photonic crystal fibers. The output power of the supercontinuum laser source is higher than 6 W, and the repetition rate is 0.1–1 MHz with a 4 ns pulse duration. The spectral range of the laser covers 430 to 2400 nm. To ensure the data quality of the backscattered signals, a high-precision two-dimensional scanning platform equipped with a high-reflectivity off-axis parabolic mirror and a Schmidt–Cassegrain telescope (focal length 400 mm, diameter 200 mm) is utilized. The acquired signals are guided to a grating spectrometer by a fiber. To project the spectral signals onto the detectors accurately, we employ a 150 g/mm blazed grating with high spectroscopic efficiency [28]. A 32-element photomultiplier tube (PMT) array, sensitive over the wavelength range from 300 to 920 nm, serves as the receiving sensor. The output signals emitted from the laser source and the backscattered signals from targets are recorded separately by an APD detector. By comparing the time interval between the output and backscattered signals, the distance is calculated by time-of-flight. In addition, the spectral range of the HSL system can be changed by rotating the blazed grating. More details about the HSL system are described in [19].
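As a minimal illustration of the time-of-flight principle described above (a sketch only; `tof_range` is a hypothetical helper name, not part of the system software), the distance follows directly from the two-way travel time:

```python
C = 299_792_458.0  # speed of light in vacuum (m/s)

def tof_range(delta_t_s: float) -> float:
    """Distance from the two-way travel time between the recorded
    output pulse and the backscattered return."""
    return C * delta_t_s / 2.0

# A round trip of ~46.7 ns corresponds to the ~7 m target distance
# used in the experiments below.
print(f"{tof_range(46.7e-9):.2f} m")  # -> 7.00 m
```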

Fig. 1. Optical setup of the HSL system

2.2 Experimental design

For terrestrial LiDAR systems, the backscattered intensity needs to be calibrated before the spectral information can be applied. The availability of calibrated laser-based backscattered intensity is significant for spectral applications [29]. The distance (i.e., from the laser source to the target) and the angle-of-incidence (i.e., the orientation of the surface relative to the laser beam) are two vital factors that influence the high-accuracy recording of spectral intensity [30]. In this study, the spectral intensity was calibrated using distance and angle-of-incidence calibration models [31].
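The calibration models of Ref. [31] are not reproduced here; the sketch below only illustrates the general form of such a correction, under the simplifying assumptions of an inverse-square range dependence and a Lambertian cosine response:

```python
import numpy as np

def calibrate_intensity(raw, distance_m, incidence_rad, ref_distance_m=7.0):
    """Illustrative intensity correction (not the authors' exact models):
    normalize the range effect with an inverse-square law and the
    angle-of-incidence effect with a Lambertian cosine factor."""
    range_factor = (distance_m / ref_distance_m) ** 2  # undo 1/R^2 falloff
    angle_factor = 1.0 / np.cos(incidence_rad)         # undo cos(theta) loss
    return raw * range_factor * angle_factor
```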

Two indoor experiments were conducted in a dark and clean laboratory to avoid interference from stray light and other atmospheric factors. The experimental scenes are illustrated in Fig. 2. To adequately evaluate the feasibility of true color 3D imaging in a complex scene with an HSL system, 17 different colored card papers were scanned. As the number of colors increases, the distinguishability among colors decreases, making high-precision color reconstruction from HSL spectral information more difficult: a given color can be mistaken for another because of slight color distortion. The second experiment, which consists of 14 different color targets, was used to validate the applicability of the new method.

Fig. 2. Two scanning scenes of 17 colored card papers and 14 different color targets.

In the first scene, 17 different colored card papers (white, grey, puce, brown, light cyan, blue, cyan, green-yellow, green, yellow, purple, pink, orange, gold, magenta, red, and black) were set up in 4 rows and 4 columns and posted on a piece of black paper, which the laser illuminated nearly perpendicularly. These card papers are made of environmental pulp with a grammage of approximately 180 g/m² and can easily be obtained in a department store. They were placed approximately 7 m away from the laser source. A total of 1440 scanned points were measured.

In the second scene, 14 different color targets were placed on a horizontal platform at approximately the same distance as in the first experiment. The classes in the experimental scene were: black card paper, Sansevieria trifasciata plant, ceramic flowerpot, brown cardboard, blue lamp, black ceramic vacuum cup, white ceramic drinking cup, yellow cellulose tape, wooden, Pachira macrocarpa leaves, Pachira macrocarpa trunk, blue plastic bin, and the green and orange parts of a carrot-like ceramic object. Materials of different colors were treated as separate classes even when spatially connected; for instance, the orange and green parts of the carrot-like ceramic object in the bin were mutually independent classes. The extent of the point cloud was 1.01 m × 0.51 m × 0.35 m, and a total of 6684 points were obtained.

3. Methods

3.1 Workflow of true color 3D imaging optimization

For a complex scanning scene, the workflow of the true color 3D imaging optimization method based on the HSL system consists of three stages, as illustrated in Fig. 3.

  • 1. The spectral range of the system was divided into three spectral intervals according to the hardware performance of the HSL system and sensitivity of human eyes to lights of different wavelengths. Then, three new spectral bands were constructed by averaging the backscattered intensity in the three spectral intervals.
  • 2. Non-linear overdetermined equations between true colors and reconstructed colors were constructed to calculate the color weights that correspond to the three new bands.
  • 3. 3D information of point cloud was combined with reconstructed colors to achieve true color 3D imaging.

Fig. 3. The flowchart of true color 3D imaging improvement method based on the HSL system

For a terrestrial HSL system, the backscattered intensity is affected not only by the scanning geometry (e.g., distance and angle-of-incidence) but also by the reflectance characteristics of the target surfaces [30]. As the surrounding environment grows more complex, correctly obtaining backscattered signals from different kinds of targets becomes more difficult, and reconstructing true color information from only three spectral bands is unfeasible when the signal quality of those bands is unsatisfactory. Therefore, this study provides a new method to improve the true color 3D imaging effect in a complex environment based on HSL measurements. In this way, the true color reconstruction effect with missing spectral bands can be improved effectively.

3.2 Determination of three new bands

3.2.1 Spectral range determination based on HSL system configuration

Ignoring the influences of the light path design and the reflectance characteristics of target surfaces, the backscattered intensity of targets mainly depends on the hardware performance of the HSL system, especially the output energy of the supercontinuum laser source and the quantum efficiency of the detectors. A high-power single-wavelength laser injected into photonic crystal fibers yields a continuous broadband spectral output spreading over several spectral regions, including the VIS, NIR, SWIR, and even mid-IR, because of various nonlinear effects such as four-wave mixing, modulation instability, soliton fission, and Raman scattering [32]. However, the output energy in the visible range is lower than in the other regions and differs greatly from the energy at the peak position. Quantum efficiency is defined as the fraction of incident photons absorbed by the photoconductor and converted into electrons collected at the detector terminal; the conversion of optical signals into electrical signals is a key performance parameter of a photoelectric detector. Figure 4(b) shows that the quantum efficiency of the photomultiplier tube (PMT) detectors is low, which affects the spectral signal acquisition to a degree.

Fig. 4. Single pulse energy of the supercontinuum laser source (a) and quantum efficiency of PMT detectors (b).

To obtain enhanced backscattered intensity in the visible range, we considered the performance parameters of the supercontinuum laser source and the photoelectric detectors. The single-pulse energy of the supercontinuum laser source used in this HSL system over the 400–800 nm range is shown in Fig. 4(a) (the visible spectrum covers 380–780 nm). The single-pulse energy was less than 1 µJ, and the output energy was almost zero below 430 nm, which impedes backscattered signal acquisition in the visible range. The PMT detectors were Hamamatsu H7260-20 modules; their quantum efficiency curve is shown in Fig. 4(b). The quantum efficiency values were taken from the official documentation (www.hamamatsu.com), as we could not measure them directly for lack of suitable instruments. The quantum efficiency was low over the whole range, with a peak below 20%, which is disadvantageous for converting optical signals into electrical signals. To obtain better backscattered intensity, the hardware performance, namely the output energy of the supercontinuum laser source and the quantum efficiency of the detectors, was considered first.

3.2.2 Determination of three spectral intervals based on the sensitivity of human eyes

To reduce the impact of missing bands on true color reconstruction, we divided the spectral range of the HSL system into three spectral intervals and averaged the existing backscattered intensity within each interval. The three spectral intervals can be determined from the sensitivity of human eyes to light of different wavelengths. Visual perception begins with the absorption of light by the retinal cones, which transduce electromagnetic energy into electrical signals. The eyes of a normal observer have three kinds of cones: long (L)-, middle (M)-, and short (S)-wavelength cones [33]. The three cones have different sensitivities to various lights; the sensitivities of the M- and L-cones overlap to a large extent and together cover almost the entire visible spectrum [34]. The spectral absorption functions of the different types of cones are the basis of visual perception.

The foundation of colorimetry, built on the sensitivity of the cones to different lights, was established by William David Wright and John Guild in the late 1920s [35,36]. They established the experimental fact that, with some exceptions, a mixture of three colors can match any color. Soon after, the International Commission on Illumination (CIE) built the CIE 1931-RGB and CIE 1931-XYZ color systems on their studies [37]. The spectral tristimulus values of CIE 1931-XYZ can be regarded as an approximate representation of a normal observer’s sensitivity to light (Fig. 5). We divided the spectral range of the HSL system into three spectral intervals based on this sensitivity. By averaging the backscattered intensity within each interval, we can construct three new bands with which to reconstruct the true color. In this way, cases in which the reconstructed colors are seriously distorted by low-quality spectral signals can be corrected.

Fig. 5. Spectral tristimulus values of CIE 1931-XYZ color system

3.3 Determination of color weights

The spectral positions of the three new bands were not fixed, because the bands were constructed by averaging the backscattered intensity over three spectral intervals. The existing spectral tristimulus values of the CIE 1931-XYZ color system therefore had no one-to-one correspondence with the three new bands, and true color information could not be reconstructed from the three new bands using those tristimulus values directly. We therefore built multiple non-linear overdetermined equations between true colors and reconstructed colors and calculated the color weights by the least-squares method. The color composition formulas are expressed as follows:

$$k = \frac{1}{\int \overline{y}(\lambda)\, d\lambda} \tag{1}$$
$$X = k\int r(\lambda)\,\overline{x}(\lambda)\, d\lambda, \qquad Y = k\int r(\lambda)\,\overline{y}(\lambda)\, d\lambda, \qquad Z = k\int r(\lambda)\,\overline{z}(\lambda)\, d\lambda \tag{2}$$
where k is an adjustable parameter for color lightness; X, Y, and Z are the chromaticity values; $r(\lambda)$ is the spectral reflectivity; and $\overline{x}(\lambda)$, $\overline{y}(\lambda)$, and $\overline{z}(\lambda)$ are the tristimulus values in the CIE 1931-XYZ color space. The integration range is 380 to 780 nm, covering the entire visible spectrum. The standard reference target Spectralon, with 99% reflectivity, is used to calculate the reflectance of targets. Spectralon is a fluoropolymer with high and stable diffuse reflectance over the visible spectrum [38].
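For discrete HSL channels, the integrals in Eqs. (1)–(2) reduce to sums over the band centers. A minimal sketch, assuming the reflectance and the published CIE 1931 tristimulus values are sampled on the same wavelength grid (the common wavelength step then cancels in k):

```python
import numpy as np

def xyz_from_reflectance(reflectance, xbar, ybar, zbar):
    """Discrete form of Eqs. (1)-(2): with reflectance and the CIE 1931
    tristimulus values sampled at the same band centers, the integrals
    become sums and the common wavelength step cancels in k."""
    k = 1.0 / np.sum(ybar)
    X = k * np.sum(reflectance * xbar)
    Y = k * np.sum(reflectance * ybar)
    Z = k * np.sum(reflectance * zbar)
    return X, Y, Z
```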

We simplified the color composition formulas into Eq. (3) for the case where only three spectral bands are used. ${x_1}$, ${x_2}$, and ${x_3}$ refer to the reflectivity of the three new bands. The nine unknown parameters a, b, c, d, e, f, g, h, and i are the color weights of the three new spectral bands, which need to be calculated: a, b, and c correspond to $\overline{x}(\lambda)$; d, e, and f correspond to $\overline{y}(\lambda)$; and g, h, and i correspond to $\overline{z}(\lambda)$. $X^{\prime}$, $Y^{\prime}$, and $Z^{\prime}$ are the true chromaticity values obtained by locating the pixel position in the passive image (Fig. 2) of each point in the point cloud. When the number of observations exceeds the number of unknown parameters, multiple non-linear overdetermined equations can be built and the unknowns determined by the least-squares method. Here, we used only one-third of the point cloud to build the overdetermined equations for calculating the nine unknown parameters.

$$ \begin{aligned} \frac{a x_{1} + b x_{2} + c x_{3}}{d + e + f} &= X^{\prime} \\ \frac{d x_{1} + e x_{2} + f x_{3}}{d + e + f} &= Y^{\prime} \\ \frac{g x_{1} + h x_{2} + i x_{3}}{d + e + f} &= Z^{\prime} \end{aligned} \tag{3}$$
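The ratio form of Eq. (3) makes the system non-linear in the nine weights. One plausible way to solve it is sketched below; the choice of solver and starting point are our assumptions, not the authors' stated implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_color_weights(bands, true_xyz):
    """Fit the nine weights a..i of Eq. (3) on a training subset.
    bands: (N, 3) reflectance of the three new bands (x1, x2, x3);
    true_xyz: (N, 3) true chromaticity values (X', Y', Z')."""
    def residuals(w):
        A = w.reshape(3, 3)              # rows: (a,b,c), (d,e,f), (g,h,i)
        denom = A[1].sum()               # d + e + f, shared by all three rows
        pred = bands @ A.T / denom       # reconstructed (X, Y, Z) per point
        return (pred - true_xyz).ravel()
    w0 = np.eye(3).ravel()               # neutral start: identity mapping
    return least_squares(residuals, w0).x.reshape(3, 3)
```

In the paper, only one-third of the point-cloud points (with mixed points excluded, see Section 4.2) were used to build the overdetermined system, and the fitted weights were reused unchanged in the second experiment.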

3.4 Performance evaluation of true color reconstruction

To validate the feasibility of the true color reconstruction improvement method, we set up five different schemes for reconstructing true color information. Considering the system hardware performance, the surface reflectance characteristics of different targets, and other factors, we removed several spectral channels to simulate missing spectral bands. For instance, because the performance of the supercontinuum laser source and photoelectric detectors is weak in the blue-violet region, the reflectivity of vegetation is relatively low in the green-red region, and the color composition contribution of the spectral tristimulus values is poor in the red region, spectral bands in these regions were preferentially deleted. The first scheme (a) uses all 32 spectral bands and the corresponding spectral tristimulus values to reconstruct true color information. The second (b) and fourth (d) schemes use the remaining 8 or 14 spectral bands, respectively, with the corresponding spectral tristimulus values of the CIE 1931-XYZ color space. For the third (c) and fifth (e) schemes, three new bands were constructed by averaging the backscattered intensity of the remaining 8 or 14 spectral bands, and non-linear overdetermined equations were then formulated to calculate the color weights of the three new bands by the least-squares method. More details about the five schemes are given in Table 1.

Table 1. Five schemes of true color reconstruction
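The five schemes can be expressed compactly as pairs of a channel subset and a reconstruction method. The index sets below are illustrative placeholders, since the text does not enumerate exactly which channels were removed:

```python
import numpy as np

N_BANDS = 32
# Scheme definitions: (channel subset, reconstruction method).
SCHEMES = {
    "a": (np.arange(N_BANDS),            "tristimulus"),      # all 32 bands
    "b": (np.arange(0, N_BANDS, 4),      "tristimulus"),      # 8 bands left
    "c": (np.arange(0, N_BANDS, 4),      "three_new_bands"),  # same 8, new method
    "d": (np.arange(0, N_BANDS, 2)[:14], "tristimulus"),      # 14 bands left
    "e": (np.arange(0, N_BANDS, 2)[:14], "three_new_bands"),  # same 14, new method
}
```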

The true color reconstruction results were evaluated in two ways: qualitative and quantitative analyses. A normal observer can receive and analyze visual images and identify changes in the space, shape, color, and dynamics of objects, so human vision can be considered a powerful tool for evaluating color similarity. For quantitative analysis, we determined the pixel position in the passive image (Fig. 2) of each point in the point cloud by direct linear transformation and used a linear discriminant analysis to assess the color similarity between the true color information of the passive image and the reconstructed colors of the point clouds. The reconstructed color values of the three RGB channels were defined as the X-axis, and the true color values of the passive pixels as the Y-axis. The R-squared (R²) and root mean squared error (RMSE) were used as assessment criteria to evaluate the accuracy of color reconstruction.
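A sketch of the quantitative criteria, assuming R² is computed against the 1:1 line rather than a refitted regression line (a definition consistent with the negative values reported in Section 4.3):

```python
import numpy as np

def channel_scores(reconstructed, truth):
    """Per-channel R^2 and RMSE between reconstructed point-cloud colors
    and the true colors sampled from the registered photo. R^2 is taken
    against the 1:1 line, so it can be negative for poor reconstructions."""
    scores = {}
    for i, name in enumerate("RGB"):
        x, y = reconstructed[:, i], truth[:, i]
        ss_res = np.sum((y - x) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        scores[name] = {"R2": 1.0 - ss_res / ss_tot,
                        "RMSE": float(np.sqrt(np.mean((y - x) ** 2)))}
    return scores
```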

4. Results and discussion

4.1 Determination of three new bands

To acquire better backscattered intensity for true color 3D imaging, the spectral range of the HSL system was adjusted to 431–751 nm, with a spectral resolution of 10 nm. The spectral range was divided into three spectral intervals based on the sensitivity of human eyes to light of different wavelengths. The sensitivity threshold was set to 0.4 through multiple experiments; as shown in Fig. 5, this threshold divides the spectral range into exactly three parts. For the R channel, the spectral range covered 551–641 nm; likewise, for the G and B channels, the ranges 501–621 nm and 431–491 nm were selected. The range 551–621 nm overlaps between the R and G channels, although the sensitivities differ, indicating that the spectral bands in this range contribute to both the R and G components, only with different weights.
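A sketch of the band construction under stated assumptions: 32 channels spanning 431–751 nm (the exact channel centers are our assumption), the interval limits quoted above, and Spectralon-normalized reflectance as in Section 3.3; the NaN encoding of missing channels is also an assumption for illustration:

```python
import numpy as np

WAVELENGTHS = np.linspace(431, 751, 32)  # assumed channel centers, ~10 nm step
# Intervals from the 0.4 sensitivity threshold (Section 4.1).
INTERVALS = {"R": (551, 641), "G": (501, 621), "B": (431, 491)}

def build_three_new_bands(spectra, spectralon):
    """Average Spectralon-normalized reflectance over each interval.
    spectra: (N, 32) calibrated intensities; spectralon: (32,) reference
    panel intensities. Missing channels may be encoded as NaN and are
    then ignored by nanmean."""
    reflectance = 0.99 * spectra / spectralon   # 99%-reflectivity panel
    bands = []
    for lo, hi in INTERVALS.values():
        mask = (WAVELENGTHS >= lo) & (WAVELENGTHS <= hi)
        bands.append(np.nanmean(reflectance[:, mask], axis=1))
    return np.column_stack(bands)               # (N, 3) new R, G, B bands
```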

Among the five color reconstruction schemes, the backscattered intensity in the third (c) and fifth (e) schemes was averaged to construct three new spectral bands, and the spectral intensity of the three new bands was normalized by the standard reference target. In this way, the impact of missing spectral bands on true color reconstruction can be reduced effectively, providing a basis for the subsequent true color reconstruction.

4.2 Color weights of three new spectral bands

Considering that the footprint of the laser beam might illuminate different objects simultaneously, the backscattered intensity could be composed of the echo energy of different targets. Such mixed spectral intensity is complex and difficult to handle, so we ignored these mixed points when calculating the color weights of the three new spectral bands in the third (c) and fifth (e) schemes. From Eq. (3), each band corresponds to three unknown parameters, and each point provides three non-linear equations. Therefore, nearly a thousand non-linear overdetermined equations were constructed to calculate the nine unknown parameters by the least-squares method. Here, we list the resulting nine parameters for the third (c) and fifth (e) schemes.

The color weights of the three new bands in the third (c) and fifth (e) schemes are shown in Table 2. Overall, the color weight values were very large, some exceeding four orders of magnitude, and both positive and negative values occurred. The ratio of the RGB color components is decisive for obtaining a specific color: positive weights promote the corresponding color component in the RGB reconstruction, whereas negative weights counteract it. Although the weight values of the third (c) and fifth (e) schemes differed, their trends were basically similar. Furthermore, the calculated color weights do not correspond to fixed spectral bands, and they change with the division of the spectral intervals and the chosen sensitivity threshold.

Table 2. Color weights of three new bands in the third (c) and fifth schemes (e)

4.3 True color 3D imaging

This study adopted five different true color reconstruction schemes to validate the feasibility of the new method. The color reconstruction results of the five schemes are shown in Figs. 6 and 7. The space coordinates of the point clouds (X, Y, and Z) were unchanged, whereas the reconstructed colors of the point clouds differed somewhat. The quality of color reconstruction was evaluated through qualitative and quantitative analyses.

Fig. 6. True color reconstruction results of five schemes (a1), (b1), (c1), (d1) and (e1) in the first experiment. (A1): the photo of 17 colored card papers; (a1): uses 32 spectral bands and corresponding spectral tristimulus values; (b1): uses 8 spectral bands and corresponding spectral tristimulus values; (c1): uses three new bands constructed from the 8 spectral bands and the calculated color weights; (d1): uses 14 spectral bands and corresponding spectral tristimulus values; (e1): uses three new bands constructed from the 14 spectral bands and the calculated color weights.

Fig. 7. True color reconstruction results of five schemes (a2), (b2), (c2), (d2) and (e2) in the second experiment. The reconstruction process and color weights were the same as in the first experiment.

The human eye discriminates colors with high acuity, in certain color ranges even better than instruments. The photo of the 17 colored card papers (Fig. 6(A1)) thus provides an intuitive reference for judging the similarity between the true colors of the photo and the reconstructed colors of the point clouds. Except for the second scheme (Fig. 6(b1)), the color reconstruction results of the four other schemes were acceptable and could express the true color information of the targets at least approximately. Despite different levels of color distortion, the reconstruction results of the 17 colored card papers still gave a satisfying visual perception. According to the color composition formulas, the true color of the targets can be rendered well by combining the reflectance of the spectral bands with the corresponding tristimulus values of the CIE 1931-XYZ color space. However, relatively severe color distortions occurred in the second (Fig. 6(b1)) and fourth (Fig. 6(d1)) schemes: the former was biased toward green and did not show the natural color of the card papers, whereas the latter was biased toward yellow and could only roughly show the true color. The reason was that the ratio of the RGB color components calculated from the spectral bands deviated from that of the true color information in the photo. For instance, the color reconstruction results of the first scheme (Fig. 6(a1)) trended yellow because the spectral information in the blue-violet range (380–430 nm), which is vital for constructing the B color component, was missing.

According to trichromatic theory, most colors can be produced by combining red, green, and blue in different proportions. In principle, our proposed method can therefore improve true color reconstruction accuracy despite some missing spectral bands. Compared with the color reconstruction results of the second (Fig. 6(b1)) and fourth (Fig. 6(d1)) schemes, the results of the third (Fig. 6(c1)) and fifth (Fig. 6(e1)) schemes were better, with relatively small overall color distortion. These two schemes expressed the true color information of the objects well and satisfied the visual perception of the human eye. The results show that the new method can effectively improve the color reconstruction effect; the degree of improvement depends on the precision of the color truth values and the availability of backscattered intensity.

Figure 7 depicts the color reconstruction results of the five schemes (a2), (b2), (c2), (d2), and (e2) in the second experiment, using the same reconstruction process and color weights as the first experiment. Comparing Figs. 6 and 7 shows that the color reconstruction results of the five schemes in the two experiments were very similar. The first scheme (Fig. 7(a2)) exhibited slight color distortion, with the overall color trending yellow, yet it could still roughly express the natural colors of the 14 targets and satisfied the visual perception of the human eye. From the second (Fig. 7(b2)) to the third scheme (Fig. 7(c2)), the color distortion caused by the missing spectral bands was significantly improved, and the reconstructed color changed from green back to the true color. Similarly, the color distortion also improved from the fourth (Fig. 7(d2)) to the fifth scheme (Fig. 7(e2)). Compared with the photo of the 14 targets (Fig. 7(A2)), the third (Fig. 7(c2)) and fifth (Fig. 7(e2)) schemes presented the targets' own colors, although different degrees of color distortion remained, especially in the areas of the Sansevieria trifasciata plant, Pachira macrocarpa leaves, blue lamp, and blue plastic bin. Besides the missing spectral bands, the reasons for this distortion were mirror effects and excessive angle changes in these areas, which limited the effectiveness of the angle calibration model. Mirror effects were present on the surfaces of the leaves, lamp, and bin; the mirror effect, a non-negligible factor in intensity calibration, is difficult to remove or account for, and it degraded the high-precision acquisition of target spectral information, further causing color distortion. Overall, the color reconstruction results obtained with the new method were acceptable and displayed the natural colors of the targets, and reconstruction with the same color weights proved repeatable in another experiment.

The color similarity between the true color values and the reconstructed colors was estimated quantitatively using linear discriminant analysis in the first experiment. The quantitative analysis was not performed for the second experiment because of the complexity of the spatial information of the 14 targets. R² and RMSE were used as assessment criteria: a larger R² and a smaller RMSE indicate a better color reconstruction effect and higher color similarity.

The quantitative analysis results between the true color values of the photo and the reconstructed colors are shown in Fig. 8. For the five schemes, the conclusions of the quantitative analysis essentially matched those of the qualitative analysis. Except for the second (Fig. 8(b3)) and fourth (Fig. 8(d3)) schemes, the color reconstructions of the three other schemes showed a superior color correlation with the photo (Fig. 6(A1)): R² of the RGB channels was higher than 0.65 and RMSE lower than 0.15. According to trichromatic theory, the RGB values of a given color are fixed; if one of the three RGB components changes, the color characteristics change as well. The comparison of the fourth (Fig. 8(d3)) and fifth (Fig. 8(e3)) schemes confirmed this: while their R and G components were almost the same, the difference in the B component caused relatively large color distortion.

Fig. 8. The quantitative analysis results of the RGB channels between the true color values of the photo and the reconstructed colors of the point clouds in five schemes. X-axis: RGB values of the reconstructed colors. Y-axis: RGB values of the photo.

The yellow trend of the reconstructed color in the first scheme (Fig. 8(a3)) can be explained through the quantitative analysis of the RGB components. Constructing the B color component from the existing spectral information alone was inaccurate because of the lack of spectral information in the blue-violet range. R² of the R, G, and B color channels was 0.8459, 0.8846, and 0.6822, respectively; the B-channel value of 0.6822 was low, and the RMSEs were all below 0.1504. The second (Fig. 8(b3)) and fourth (Fig. 8(d3)) schemes, using 8 and 14 spectral bands, respectively, showed serious color distortion. The root cause in both cases was the lack of corresponding spectral information to balance the RGB color components. For the second scheme (Fig. 8(b3)), R² of the R and B components was poor at −3.0238 and 0.3098, respectively (the former even negative), and the RMSEs were relatively large. Compared with the second scheme, R² of the R and G components in the fourth scheme (Fig. 8(d3)) was better at 0.8558 and 0.8786, yet R² of the B component remained poor at 0.324. If appropriate spectral bands are not selected, color distortion readily occurs.

In the cases of serious color distortion in the second (Fig. 8(b3)) and fourth (Fig. 8(d3)) schemes, our proposed method could indeed improve the color reconstruction with missing spectral bands, as shown by the third (Fig. 8(c3)) and fifth (Fig. 8(e3)) schemes, respectively. In the third scheme (Fig. 8(c3)), R² of the R and B color components improved from −3.0238 to 0.7579 and from 0.3098 to 0.6531, respectively, whereas R² of the G component remained essentially constant. Likewise, R² of the B component improved from 0.324 to 0.762 in the fifth scheme (Fig. 8(e3)). Interestingly, R² of the B component in the fifth scheme (Fig. 8(e3)) was higher than in the first scheme (Fig. 8(a3)), and the RMSE was lower; even with certain spectral bands missing, the proposed method can thus achieve a better color reconstruction than that produced by all 32 spectral bands. According to the qualitative and quantitative analyses above, the proposed method of averaging the backscattered intensity to construct new spectral bands and calculating the corresponding color weights is feasible and reliable for improving the color reconstruction effect.

4.4 Limitations and prospects

In the two experiments, the backscattered intensity calibration considered only the distance and angle-of-incidence effects. The atmospheric effect is another significant factor that negatively affects the high-precision acquisition of backscattered intensity: the backscattered laser energy suffers a certain degree of attenuation because of atmospheric components and conditions [39]. For airborne or satellite LiDAR systems, atmospheric effects are inevitable and difficult to account for, and they should be fully considered in future airborne or satellite applications.

The spectral locations of the newly constructed bands vary with the supercontinuum laser source, the photoelectric detectors, and the HSL system configuration. True color 3D imaging places high demands on the hardware configuration of LiDAR systems: the spectral output of the supercontinuum laser source must cover the visible range with output energy high enough to ensure signal acquisition, and the photoelectric detectors must exhibit high quantum efficiency in the visible range to improve the photoelectric conversion efficiency. In summary, the hardware performance of the system configuration is a prerequisite for true color 3D imaging.

This study provides a new color reconstruction method to improve the effect of true color 3D imaging in complex scanning scenes. In such scenes, high-quality true color 3D imaging can be achieved despite several missing spectral bands by using the new method. In addition, the method can also be applied to passive multispectral/hyperspectral imaging to optimize the true color reconstruction effect and has great application potential in various fields.

5. Conclusions

We have presented a new imaging technique that captures rich spectral information at each point and provides technical support for reconstructing real scenes. However, with increasing complexity of the surrounding environment, the spectral information obtained by HSL measurements may be influenced by various factors that degrade the reconstruction of true color 3D imaging. Hence, a new method was proposed to improve the color reconstruction effect with missing spectral bands.

Two scanning experiments and five color reconstruction schemes were used to validate the feasibility and reliability of the method. In both qualitative and quantitative analyses, although slight color distortions were observed in the third (c) and fifth (e) schemes, the color reconstruction effects were still acceptable for visual perception. Compared with the color reconstruction results of the traditional method, the proposed method could indeed improve the color reconstruction effect with missing spectral bands effectively: R² of the R and B color components improved from −3.0238 to 0.7579 and from 0.3098 to 0.6531, respectively, in the third scheme (Fig. 8(c3)), and R² of the B color component improved from 0.324 to 0.762 in the fifth scheme (Fig. 8(e3)). The color reconstruction results of the five schemes demonstrate the feasibility and repeatability of the method for improving the color reconstruction effect and provide a new idea for the exact calculation of the color weights of spectral bands.

Funding

National Key Research and Development Program of China (2018YFB0504500); National Natural Science Foundation of China (41971307); China Postdoctoral Science Foundation (2017T100582); Natural Science Foundation of Hubei Province (2019CFB532); LIESMARS Special Funding.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D.S.W. Ting, P. Burlina, X. Xu, N. M. Bressler, and T. Y. Wong, “AI for medical imaging goes deep,” Nat. Med., 2018. [CrossRef]  

2. D. J. Stephens and V. J. Allan, “Light microscopy techniques for live cell imaging,” Science 300(5616), 82–86 (2003). [CrossRef]  

3. S. Mahesh, A. Manickavasagana, D. S. Jayasa, J. Paliwala, and N. D. G. Whiteb, “Feasibility of near-infrared hyperspectral imaging to differentiate Canadian wheat classes,” Bioprocess Eng. 101(1), 50–57 (2008). [CrossRef]  

4. M. Simard, N. Pinto, J. B. Fisher, and A. Baccini, “Mapping forest canopy height globally with spaceborne lidar,” J. Geophys. Res.: Biogeosci. 116(G4), G04021 (2011). [CrossRef]  

5. R. Lu, “Detection of bruises on apples using near–infrared hyperspectral imaging,” Trans. ASAE 46(2), 523 (2003). [CrossRef]  

6. A. G. Andreou and Z. K. Kalayjian, “Polarization imaging: principles and integrated polarimeters,” IEEE Sens. J. 2(6), 566–576 (2002). [CrossRef]  

7. X. Zhao, S. Shi, J. Yang, W. Gong, J. Sun, B. Chen, K. Guo, and B. Chen, “Active 3D imaging of vegetation based on multi-wavelength fluorescence LiDAR,” Sensors 20(3), 935 (2020). [CrossRef]  

8. A. Wehr and U. Lohr, “Airborne laser scanning—an introduction and overview,” ISPRS J. Photogramm. 54(2-3), 68–82 (1999). [CrossRef]  

9. A. J. Brown, “Spectral curve fitting for automatic hyperspectral data analysis,” IEEE Trans. Geosci. Electron. 44(6), 1601–1608 (2006). [CrossRef]  

10. E.P. Baltsavias, “A comparison between photogrammetry and laser scanning,” ISPRS J. Photogramm. 54(2-3), 83–94 (1999). [CrossRef]  

11. B. Koetz, F. Morsdorf, S. van der Linden, T. Curt, and B. Allgower, “Multi-source land cover classification for forest fire management based on imaging spectrometry and LiDAR data,” For. Ecol. Manage. 256(3), 263–271 (2008). [CrossRef]  

12. R.G. Vaughan, W.M. Calvin, and J.V. Taranik, “SEBASS hyperspectral thermal infrared data: surface emissivity measurement and mineral mapping,” Remote Sens. Environ. 85(1), 48–63 (2003). [CrossRef]  

13. A. J. Brown, T. I. Michaels, S. Byrne, W. Sun, T. N. Titus, A. Colaprete, M. J. Wolff, G. Videen, and C. J. Grund, “The case for a modern multiwavelength, polarization-sensitive LIDAR in orbit around Mars,” J. Quant. Spectrosc. Radiat. Transfer 153, 131–143 (2015). [CrossRef]  

14. K. Lim, P. Treitz, M. Wulder, B. St-Onge, and M. Flood, “LiDAR remote sensing of forest structure,” Prog. Phys. Geog. 27(1), 88–106 (2003). [CrossRef]  

15. O. Nevalainen, et al., “Nitrogen concentration estimation with hyperspectral LiDAR,” ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. II-5/W2, 205–210 (2013).

16. J. Sun, S. Shi, J. Yang, B. Chen, W. Gong, L. Du, F. Mao, and S. Song, “Estimating leaf chlorophyll status using hyperspectral lidar measurements by PROSPECT model inversion,” Remote Sens. Environ. 212, 1–7 (2018). [CrossRef]  

17. M. Dalponte, L. Bruzzone, and D. Gianelle, “Fusion of Hyperspectral and LIDAR Remote Sensing Data for Classification of Complex Forest Areas,” IEEE Trans. Geosci. Electron. 46(5), 1416–1427 (2008). [CrossRef]  

18. G. P. Asner, D. E. Knapp, T. Kennedy-Bowdoin, M. O. Jones, R. E. Martin, J. Boardman, and R. F. Hughes, “Invasive species detection in Hawaiian rainforests using airborne imaging spectroscopy and LiDAR,” Remote Sens. Environ. 112(5), 1942–1955 (2008). [CrossRef]  

19. L. Du, W. Gong, S. Shi, J. Yang, J. Sun, B. Zhu, and S. Song, “Estimation of rice leaf nitrogen contents based on hyperspectral LIDAR,” ITC J. 44, 136–143 (2016). [CrossRef]  

20. T. Hakala, J. Suomalainen, S. Kaasalainen, and Y. Chen, “Full waveform hyperspectral LiDAR for terrestrial laser scanning,” Opt. Express 20(7), 7119 (2012). [CrossRef]  

21. W. Li, G. Sun, Z. Niu, S. Gao, and H. Qiao, “Estimation of leaf biochemical content using a novel hyperspectral full-waveform LiDAR system,” Remote Sens. Lett. 5(8), 693–702 (2014). [CrossRef]  

22. D. Martinez-Ramirez, et al., “Developing hyperspectral lidar for structural and biochemical analysis of forest data,” in Proc. EARSeL Conf. Adv. Geosci. (2012).

23. S. Kaasalainen, M. Åkerblom, O. Nevalainen, T. Hakala, and M. Kaasalainen, “Uncertainty in multispectral lidar signals caused by incidence angle effects,” Interface Focus. 8(2), 20170033 (2018). [CrossRef]  

24. L. Du, S. Shi, J. Yang, W. Wang, J. Sun, B. Cheng, Z. Zhang, and W. Gong, “Potential of spectral ratio indices derived from hyperspectral LiDAR and laser-induced chlorophyll fluorescence spectra on estimating rice leaf nitrogen contents,” Opt. Express 25(6), 6539–6549 (2017). [CrossRef]  

25. J. Vauhkonen, T. Hakala, J. Suomalainen, S. Kaasalainen, O. Nevalainen, M. Vastaranta, M. Holopainen, and J. Hyyppa, “Classification of spruce and pine trees using active hyperspectral LiDAR,” IEEE Geosci. Remote Sensing Lett. 10(5), 1138–1141 (2013). [CrossRef]  

26. B. Chen, S. Shi, W. Gong, J. Sun, B. Chen, L. Du, J. Yang, K. Guo, and X. Zhao, “True-Color Three-Dimensional Imaging and Target Classification Based on Hyperspectral LiDAR,” Remote Sens. 11(13), 1541 (2019). [CrossRef]  

27. B. Wang, S. Song, W. Gong, X. Cao, D. He, Z. Chen, X. Lin, F. Li, and J. Sun, “Color Restoration for Full-Waveform Multispectral LiDAR Data,” Remote Sens. 12(4), 593 (2020). [CrossRef]  

28. F. Kneubühl, “Diffraction grating spectroscopy,” Appl. Opt. 8(3), 505–519 (1969). [CrossRef]  

29. S. Kaasalainen, H. Hyppa, A. Kukko, P. Litkey, E. Ahokas, J. Hyppa, H. Lehner, A. Jaakkola, S. Suomalainen, A. Akujarvi, M. Kaasalainen, and U. Pyysalo, “Radiometric Calibration of LIDAR Intensity With Commercially Available Reference Targets,” IEEE Trans. Geosci. Electron. 47(2), 588–598 (2009). [CrossRef]  

30. W. Fang, X. Huang, F. Zhang, and D. Li, “Intensity correction of terrestrial laser scanning data by estimating laser transmission function,” IEEE Trans. Geosci. Electron. 53(2), 942–951 (2015). [CrossRef]  

31. B. Chen, et al., “Using HSI Color Space to Improve the Multispectral Lidar Classification Error Caused by Measurement Geometry,” IEEE Trans. Geosci. Electron. (2020).

32. J. M. Dudley, G. Genty, and S. Coen, “Supercontinuum generation in photonic crystal fiber,” Rev. Mod. Phys. 78(4), 1135–1184 (2006). [CrossRef]  

33. S.K. Shevell, The science of color. Elsevier: 2003.

34. K. R. Gegenfurtner and D. C. Kiper, “Color vision,” Annu. Rev. Neurosci. 26(1), 181–206 (2003). [CrossRef]  

35. W. D. Wright, “A re-determination of the trichromatic coefficients of the spectral colours,” Trans. Opt. Soc. 30(4), 141–164 (1929). [CrossRef]  

36. J. Guild, “The colorimetric properties of the spectrum,” Philos. Trans. R. Soc. London, Ser. A 230(681-693), 149–187 (1931).

37. T. Smith and J. Guild, “The CIE colorimetric standards and their use,” Trans. Opt. Soc. 33(3), 73–134 (1931). [CrossRef]  

38. G. T. Georgiev and J. J. Butler, “Long-term calibration monitoring of Spectralon diffusers BRDF in the air-ultraviolet,” Appl. Opt. 46(32), 7892–7899 (2007). [CrossRef]  

39. W. Y. Yan, A. Shaker, A. Habib, and A. P. Kersting, “Improving classification accuracy of airborne LiDAR intensity data by geometric calibration and radiometric correction,” ISPRS J. Photogramm. 67, 35–44 (2012). [CrossRef]  

References

  • View by:
  • |
  • |
  • |

  1. D.S.W. Ting, P. Burlina, X. Xu, N. M. Bressler, and T. Y. Wong, “AI for medical imaging goes deep,” Nat. Med., 2018.
    [Crossref]
  2. D. J. Stephens and V. J. Allan, “Light microscopy techniques for live cell imaging,” Science 300(5616), 82–86 (2003).
    [Crossref]
  3. S. Mahesh, A. Manickavasagana, D. S. Jayasa, J. Paliwala, and N. D. G. Whiteb, “Feasibility of near-infrared hyperspectral imaging to differentiate Canadian wheat classes,” Bioprocess Eng. 101(1), 50–57 (2008).
    [Crossref]
  4. M. Simard, N. Pinto, J. B. Fisher, and A. Baccini, “Mapping forest canopy height globally with spaceborne lidar,” J. Geophys. Res.: Biogeosci. 116(G4), G04021 (2011).
    [Crossref]
  5. R. Lu, “Detection of bruises on apples using near–infrared hyperspectral imaging,” Trans. ASAE 46(2), 523 (2003).
    [Crossref]
  6. A. G. Andreou and Z. K. Kalayjian, “Polarization imaging: principles and integrated polarimeters,” IEEE Sens. J. 2(6), 566–576 (2002).
    [Crossref]
  7. X. Zhao, S. Shi, J. Yang, W. Gong, J. Sun, B. Chen, K. Guo, and B. Chen, “Active 3D imaging of vegetation based on multi-wavelength fluorescence LiDAR,” Sensors 20(3), 935 (2020).
    [Crossref]
  8. A. Wehr and U. Lohr, “Airborne laser scanning—an introduction and overview,” ISPRS J. Photogramm. 54(2-3), 68–82 (1999).
    [Crossref]
  9. A. J. Brown, “Spectral curve fitting for automatic hyperspectral data analysis,” IEEE Trans. Geosci. Electron. 44(6), 1601–1608 (2006).
    [Crossref]
  10. E.P. Baltsavias, “A comparison between photogrammetry and laser scanning,” ISPRS J. Photogramm. 54(2-3), 83–94 (1999).
    [Crossref]
  11. B. Koetz, F. Morsdorf, S. van der Linden, T. Curt, and B. Allgower, “Multi-source land cover classification for forest fire management based on imaging spectrometry and LiDAR data,” For. Ecol. Manage. 256(3), 263–271 (2008).
    [Crossref]
  12. R.G. Vaughan, W.M. Calvin, and J.V. Taranik, “SEBASS hyperspectral thermal infrared data: surface emissivity measurement and mineral mapping,” Remote Sens. Environ. 85(1), 48–63 (2003).
    [Crossref]
  13. A. J. Brown, T. I. Michaels, S. Byrne, W. Sun, T. N. Titus, A. Colaprete, M. J. Wolff, G. Videen, and C. J. Grund, “The case for a modern multiwavelength, polarization-sensitive LIDAR in orbit around Mars,” J. Quant. Spectrosc. Radiat. Transfer 153, 131–143 (2015).
    [Crossref]
  14. K. Lim, P. Treitz, M. Wulder, B. St-Onge, and M. Flood, “LiDAR remote sensing of forest structure,” Prog. Phys. Geog. 27(1), 88–106 (2003).
    [Crossref]
  15. O. Nevalainen et al., “Nitrogen concentration estimation with hyperspectral LiDAR,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2013. II-5/W2: p. 205–210.
  16. J. Sun, S. Shi, J. Yang, B. Chen, W. Gong, L. Du, F. Mao, and S. Song, “Estimating leaf chlorophyll status using hyperspectral lidar measurements by PROSPECT model inversion,” Remote Sens. Environ. 212, 1–7 (2018).
    [Crossref]
  17. M. Dalponte, L. Bruzzone, and D. Gianelle, “Fusion of Hyperspectral and LIDAR Remote Sensing Data for Classification of Complex Forest Areas,” IEEE Trans. Geosci. Electron. 46(5), 1416–1427 (2008).
    [Crossref]
  18. G. P. Asner, D. E. Knapp, T. Kennedy-Bowdoin, M. O. Jones, R. E. Martin, J. Boardman, and R. F. Hughes, “Invasive species detection in Hawaiian rainforests using airborne imaging spectroscopy and LiDAR,” Remote Sens. Environ. 112(5), 1942–1955 (2008).
    [Crossref]
  19. L. Du, W. Gong, S. Shi, J. Yang, J. Sun, B. Zhu, and S. Song, “Estimation of rice leaf nitrogen contents based on hyperspectral LIDAR,” ITC J. 44, 136–143 (2016).
    [Crossref]
  20. T. Halali, J. Suomalainen, S. Kaasalainen, and Y. Chen, “Full waveform hyperspectral LiDAR for terrestrial laser scanning,” Opt. Express 20(7), 7119 (2012).
    [Crossref]
  21. W. Li, G. Sun, Z. Niu, S. Gao, and H. Qiao, “Estimation of leaf biochemical content using a novel hyperspectral full-waveform LiDAR system,” Remote Sens. Lett. 5(8), 693–702 (2014).
    [Crossref]
  22. D. Martinez-Ramirez, et al. Developing hyperspectral lidar for structural and biochemical analysis of forest data,” in Proc. EARSEL Conf. Adv. Geosci.2012.
  23. S. Kaasalainen, M. Åkerblom, O. Nevalainen, T. Hakala, and M. Kaasalainen, “Uncertainty in multispectral lidar signals caused by incidence angle effects,” Interface Focus. 8(2), 20170033 (2018).
    [Crossref]
  24. L. Du, S. Shi, J. Yang, W. Wang, J. Sun, B. Cheng, Z. Zhang, and W. Gong, “Potential of spectral ratio indices derived from hyperspectral LiDAR and laser-induced chlorophyll fluorescence spectra on estimating rice leaf nitrogen contents,” Opt. Express 25(6), 6539–6549 (2017).
    [Crossref]
  25. J. Vauhkonen, T. Hakala, J. Suomalainen, S. Kaasalainen, O. Nevalainen, M. Vastaranta, M. Holopainen, and J. Hyyppa, “Classification of spruce and pine trees using active hyperspectral LiDAR,” IEEE Geosci. Remote Sensing Lett. 10(5), 1138–1141 (2013).
    [Crossref]
  26. B. Chen, S. Shi, W. Gong, J. Sun, B. Chen, L. Du, J. Yang, K. Guo, and X. Zhao, “True-Color Three-Dimensional Imaging and Target Classification Based on Hyperspectral LiDAR,” Remote Sens. 11(13), 1541 (2019).
    [Crossref]
  27. B. Wang, S. Song, W. Gong, X. Cao, D. He, Z. Chen, X. Lin, F. Li, and J. Sun, “Color Restoration for Full-Waveform Multispectral LiDAR Data,” Remote Sens. 12(4), 593 (2020).
    [Crossref]
  28. F. Kneubühl, “Diffraction grating spectroscopy,” Appl. Opt. 8(3), 505–519 (1969).
    [Crossref]
  29. S. Kaasalainen, H. Hyppa, A. Kukko, P. Litkey, E. Ahokas, J. Hyppa, H. Lehner, A. Jaakkola, S. Suomalainen, A. Akujarvi, M. Kaasalainen, and U. Pyysalo, “Radiometric Calibration of LIDAR Intensity With Commercially Available Reference Targets,” IEEE Trans. Geosci. Electron. 47(2), 588–598 (2009).
    [Crossref]
  30. W. Fang, X. Huang, F. Zhang, and D. Li, “Intensity correction of terrestrial laser scanning data by estimating laser transmission function,” IEEE Trans. Geosci. Electron. 53(2), 942–951 (2015).
    [Crossref]
  31. C. Biwu et al., “Using HSI Color Space to Improve the Multispectral Lidar Classification Error Caused by Measurement Geometry,” IEEE Trans. Geosci. Electron., 2020.
  32. J. M. Dudley, G. Genty, and S. Coen, “Supercontinuum generation in photonic crystal fiber,” Rev. Mod. Phys. 78(4), 1135–1184 (2006).
    [Crossref]
  33. S.K. Shevell, The science of color. Elsevier: 2003.
  34. K. R. Gegenfurtner and D. C. Kiper, “Color vision,” Annu. Rev. Neurosci. 26(1), 181–206 (2003).
    [Crossref]
  35. W. D. Wright, “A re-determination of the trichromatic coefficients of the spectral colours,” Trans. Opt. Soc. 30(4), 141–164 (1929).
    [Crossref]
  36. J. Guild, “The colorimetric properties of the spectrum,” Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 1931. 230(681-693): p. 149–187.
  37. T. Smith and J. Guild, “The CIE colorimetric standards and their use,” Trans. Opt. Soc. 33(3), 73–134 (1931).
    [Crossref]
  38. G. T. Georgiev and J. J. Butler, “Long-term calibration monitoring of Spectralon diffusers BRDF in the air-ultraviolet,” Appl. Opt. 46(32), 7892–7899 (2007).
    [Crossref]
  39. W. Y. Yan, A. Shaker, A. Habib, and A. P. Kersting, “Improving classification accuracy of airborne LiDAR intensity data by geometric calibration and radiometric correction,” ISPRS J. Photogramm. 67, 35–44 (2012).
    [Crossref]


Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (8)

Fig. 1. Optical setup of the HSL system.
Fig. 2. Two scanning scenes: 17 colored card papers and 14 different color targets.
Fig. 3. Flowchart of the true color 3D imaging improvement method based on the HSL system.
Fig. 4. Single-pulse energy of the supercontinuum laser source (a) and quantum efficiency of the PMT detectors (b).
Fig. 5. Spectral tristimulus values of the CIE 1931-XYZ color system.
Fig. 6. True color reconstruction results of five schemes (a1), (b1), (c1), (d1), and (e1) in the first experiment. (A): photo of the 17 colored card papers; (a1): 32 spectral bands and the corresponding spectral tristimulus values; (b1): 8 spectral bands and the corresponding spectral tristimulus values; (c1): three new bands constructed from the 8 spectral bands and the calculated color weights; (d1): 14 spectral bands and the corresponding spectral tristimulus values; (e1): three new bands constructed from the 14 spectral bands and the calculated color weights.
Fig. 7. True color reconstruction results of five schemes (a2), (b2), (c2), (d2), and (e2) in the second experiment. The reconstruction process and color weights were the same as in the first experiment.
Fig. 8. Quantitative analysis of the RGB channels, comparing the true color values of the photo with the reconstructed colors of the point clouds in the five schemes. X-axis: RGB values of the reconstructed colors; Y-axis: RGB values of the photo.
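
The channel-wise comparison summarized in Fig. 8 reduces to correlating matched color values. The following is a minimal sketch, not the authors' code: `rgb_photo` and `rgb_cloud` are hypothetical (N, 3) arrays of matched R, G, B values, and Pearson correlation stands in for the paper's similarity measure, which may be defined differently.

    import numpy as np

    # Hypothetical matched color samples in [0, 255]; real data would come from
    # the reference photo and the colored point cloud.
    rng = np.random.default_rng(0)
    rgb_photo = rng.uniform(0.0, 255.0, size=(17, 3))            # e.g., 17 card papers
    rgb_cloud = rgb_photo + rng.normal(0.0, 20.0, size=(17, 3))  # noisy reconstruction

    # Pearson correlation per color channel, one value each for R, G, and B.
    for channel, name in enumerate("RGB"):
        r = np.corrcoef(rgb_photo[:, channel], rgb_cloud[:, channel])[0, 1]
        print(f"{name}-channel similarity (Pearson r): {r:.3f}")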

Tables (2)

Table 1. Five schemes of true color reconstruction.

Table 2. Color weights of the three new bands in the third (c) and fifth (e) schemes.

Equations (3)


$$k = \frac{1}{\int \bar{y}(\lambda)\, d\lambda}$$

$$X = k \int r(\lambda)\, \bar{x}(\lambda)\, d\lambda, \qquad Y = k \int r(\lambda)\, \bar{y}(\lambda)\, d\lambda, \qquad Z = k \int r(\lambda)\, \bar{z}(\lambda)\, d\lambda$$

$$\frac{a x_1 + b x_2 + c x_3}{d + e + f} = X, \qquad \frac{d x_1 + e x_2 + f x_3}{d + e + f} = Y, \qquad \frac{g x_1 + h x_2 + i x_3}{d + e + f} = Z$$
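
Discretized over the HSL sampling grid, the first two equations are plain numerical integrals. Below is a minimal sketch under stated assumptions: the single-Gaussian stand-ins for the CIE 1931 color-matching functions and the example reflectance spectrum are illustrative placeholders, not the authors' data, and tabulated CMF values should be used in practice.

    import numpy as np

    def lobe(lam, mu, sigma, amp):
        # Single-Gaussian lobe used as a rough stand-in for a color-matching function.
        return amp * np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

    # Wavelength grid covering the HSL spectral range, 431-751 nm, at 1 nm steps.
    lam = np.linspace(431.0, 751.0, 321)

    # Rough approximations of the CIE 1931 color-matching functions (illustrative only).
    xbar = lobe(lam, 595.0, 34.0, 1.06) + lobe(lam, 446.0, 19.0, 0.36)
    ybar = lobe(lam, 557.0, 42.0, 1.00)
    zbar = lobe(lam, 449.0, 22.0, 1.78)

    # Hypothetical calibrated reflectance spectrum r(lambda) of one HSL point.
    r = 0.2 + 0.6 / (1.0 + np.exp(-(lam - 600.0) / 15.0))

    # First equation: normalization constant k = 1 / integral of ybar.
    k = 1.0 / np.trapz(ybar, lam)

    # Second equation: tristimulus values as integrals of r against the CMFs.
    X = k * np.trapz(r * xbar, lam)
    Y = k * np.trapz(r * ybar, lam)
    Z = k * np.trapz(r * zbar, lam)
    print(f"XYZ = ({X:.3f}, {Y:.3f}, {Z:.3f})")

The third equation is then a linear map from the three new-band intensities x1, x2, x3 to XYZ: with the nine color weights a-i known (Table 2), it evaluates as a 3x3 matrix product scaled by 1/(d + e + f).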
