
Development of a multispectral fluorescence LiDAR for point cloud segmentation of plants

Open Access

Abstract

The accelerating development of high-throughput plant phenotyping demands a LiDAR system that can acquire spectral point clouds, whose intrinsic fusion of spectral and spatial data would significantly improve the accuracy and efficiency of segmentation. Meanwhile, a relatively long detection range is required for platforms such as unmanned aerial vehicles (UAVs) and poles. Towards these aims, what we believe to be a novel multispectral fluorescence LiDAR, featuring compact volume, light weight, and low cost, has been proposed and designed. A 405 nm laser diode was employed to excite the fluorescence of plants, and a point cloud carrying both elastic and inelastic signal intensities was obtained through the R-, G-, and B-channels of a color image sensor. A new position retrieval method has been developed to evaluate far-field echo signals, from which the spectral point cloud can be obtained. Experiments were designed to validate the spectral/spatial accuracy and the segmentation performance. It has been found that the values obtained through the R-, G-, and B-channels are consistent with the emission spectrum measured by a spectrometer, achieving a maximum R2 of 0.97. The theoretical spatial resolution reaches 47 mm and 0.7 mm in the x- and y-directions, respectively, at a distance of around 30 m. The recall, precision, and F-score for the segmentation of the fluorescence point cloud were all beyond 0.97. Besides, a field test has been carried out on plants at a distance of about 26 m, which further demonstrated that the multispectral fluorescence data can significantly facilitate the segmentation process in a complex scene. These promising results indicate that the proposed multispectral fluorescence LiDAR has great potential in applications of digital forestry inventory and intelligent agriculture.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

High-throughput phenotyping is in great demand as a non-destructive and rapid tool to monitor and measure phenotypic information related to the growth, yield, and adaptation to biotic or abiotic stress of plants in real-world field situations [1]. To extend the detection range as well as improve the detection efficiency, various optical technologies have been developed for traits including leaf area index [2], crown base height [3], water content [4], etc. As a powerful active remote sensing tool, the light detection and ranging (LiDAR) technique has been widely employed in phenotyping [5-7] by installing LiDAR systems at fixed points [8], on backpacks [9], unmanned aerial vehicles (UAVs) [10], or even satellites [11]. While the latter have detection ranges of tens of kilometers [12], systems on UAVs or poles with shorter detection ranges (around 20 m [13]) but higher spatial resolution are more frequently used. Traditional LiDAR techniques are mainly based on the time-of-flight (TOF) principle [14], where a pulsed laser is utilized as the light source and the reflected/scattered laser pulse is detected by a high-sensitivity photodetector. The range is obtained from the time interval between the emission and the arrival of the laser pulse. Combined with rotational scanning, a two- or three-dimensional (3D) structure, i.e., a point cloud, can be obtained, upon which geometric-relation-based methods are developed to evaluate the profile diagram, diameter at breast height (DBH), canopy base height, etc. [15]. Plant point cloud segmentation is an indispensable part of high-throughput phenotyping [16], providing fundamental data for the calculation of biomass, crown diameter, etc. Additional information beyond the locations and intensities of the point cloud, such as spectral information, would greatly simplify the segmentation process and improve its accuracy. However, this requirement is beyond the ability of traditional LiDAR, and methods based on post-measurement data matching are sophisticated [17-20].

Apart from the 3D point cloud, fluorescence or reflectance spectra related to the intrinsic composition are highly demanded for a better understanding of chlorophyll contents and more precise classification [21-23]. Thus, hyperspectral LiDAR, multiple-wavelength LiDAR, and laser-induced fluorescence (LIF) LiDAR have been developed for remote sensing of vegetation. Among these techniques, the LIF-LiDAR is capable of directly measuring chlorophyll fluorescence and can thus be utilized for studies on growth status, biochemical content, and stress factors [24-26]. Traditional LIF-LiDAR uses a pulsed laser for excitation, and the fluorescence is detected by spectrometers or monochromators with sensitive photomultiplier tubes (PMTs) [27,28]. When employing a spectrometer as the detector, the distance information is missing. Besides, near-field detection is difficult to achieve due to the intrinsic near-field blind zone of large-aperture receiving telescopes. Therefore, this type of LiDAR is more suitable for airborne or spaceborne platforms, whose aim is to obtain large-scale fluorescence or spectral information. One approach for simultaneous measurement of fluorescence and distance is to employ narrowband filters placed in front of photodetectors, so that range-resolved fluorescence intensities at several wavelengths, e.g., 685 nm and 740 nm, can be obtained [29]. However, the size and weight limit its usage on lightweight platforms. Recently, a triangulation LiDAR technique based on the Scheimpflug imaging principle has been proposed for 3D measurements at short range [30,31]. A proof-of-principle demonstration revealed that it can be modified for fluorescence detection at meter-scale ranges [32]. However, such a short distance is not sufficient for outdoor phenotyping measurements.

In this work, a multispectral fluorescence LiDAR system has been developed for 3D fluorescence measurements in the range of tens of meters, a detection range required for field applications. A new position retrieval method has been proposed to achieve high accuracy based on the far-field echo signals. To quantitatively validate the feasibility of using the obtained multi-channel fluorescence point cloud for fluorescence-technology-based detection, an indoor fluorescence experiment was designed to evaluate the relationship between the detected RGB intensities and the fluorescence spectrum of the target according to a theoretical model. The spatial detection performance has been evaluated by measuring standard objects of known size placed at far distances. A validation experiment has been carried out on a plant to investigate the performance of point cloud segmentation. Outdoor experiments have been carried out for simultaneous measurements of the fluorescence spectral and distance information of various plants, as well as for the validation of point cloud segmentation in a complex scene.

2. Instrumentation and methods

2.1 Principle of the multispectral fluorescence LiDAR

The multispectral fluorescence LiDAR is based on the Scheimpflug imaging principle with a color CMOS sensor, as shown in Fig. 1. In a one-dimensional measurement system, the reflected/scattered light at different ranges is focused onto different pixels along the image sensor by a receiving lens. If a light sheet is transmitted, a sector area on the object plane is illuminated. The distant targets within the illuminated sector area reflect or scatter the incident light. Through the receiving lens, the reflected/backscattered light from different locations is focused onto different positions of the image plane. As a result, two-dimensional localization of the target can be achieved.

Fig. 1. The principle of the two-dimensional multispectral fluorescence LiDAR technique.

According to geometrical optics [33,34], the (x, y) spatial position of the measured target in the x-y-z coordinate of the LiDAR system is described by

$$x = \frac{L\,[r_{\mathrm{I}}(\sin\Theta - \cos\Theta\tan\Phi) + L_{\mathrm{IL}}]}{r_{\mathrm{I}}(\cos\Theta + \sin\Theta\tan\Phi) + L_{\mathrm{IL}}\tan\Phi},$$
$$y = \frac{c_{\mathrm{I}}(x - f)}{f}.$$

Here L is the distance from the lens to the object plane, rI is the row position given by the row number (n) and the pixel size (sp), i.e., rI = (N/2 − n)sp, where N is the total row number and the center of the sensor is placed coinciding with the optical axis of the lens. Φ is the swing angle of the lens, and Θ is the tilt angle of the image plane to the lens plane (or the angle of the object plane with the lens axis). LIL is the distance between the origin of the image plane and the center of the lens. cI is the column position defined by the column number (m), the pixel size (sp), and the total column number (M), i.e., cI = (M/2 − m)sp. f is the focal length of the imaging lens.
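To make the pixel-to-position mapping concrete, the following minimal Python sketch evaluates Eqs. (1) and (2) for a single pixel. The focal length, pixel size, and pixel counts follow the text, while the values of L, LIL, Θ, and Φ are illustrative assumptions rather than the calibrated parameters of the instrument.

```python
import numpy as np

# Sketch of Eqs. (1)-(2): pixel (row n, column m) -> (x, y) in the LiDAR frame.
f = 0.150          # focal length of the receiving lens [m]
s_p = 3.45e-6      # pixel size [m]
N, M = 2160, 4096  # total row and column numbers of the CMOS sensor
L = 0.40           # lens-to-object-plane distance [m] (assumed value)
L_IL = 0.16        # image-plane origin to lens center distance [m] (assumed value)
Theta = np.deg2rad(21.0)   # image-plane tilt angle (assumed equal to the sensor tilt)
Phi = np.deg2rad(2.0)      # lens swing angle (assumed value)

def pixel_to_xy(n, m):
    """Map a pixel (row n, column m) to (x, y) via Eqs. (1) and (2)."""
    r_I = (N / 2 - n) * s_p                      # row position on the image plane
    c_I = (M / 2 - m) * s_p                      # column position on the image plane
    num = L * (r_I * (np.sin(Theta) - np.cos(Theta) * np.tan(Phi)) + L_IL)
    den = r_I * (np.cos(Theta) + np.sin(Theta) * np.tan(Phi)) + L_IL * np.tan(Phi)
    x = num / den                                # Eq. (1): range along the beam
    y = c_I * (x - f) / f                        # Eq. (2): transverse position
    return x, y
```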

Figure 2 shows the schematic diagram and a photograph of the multispectral fluorescence LiDAR system. The optical layout mainly consists of four parts, namely the laser source, the beam shaper, the receiving lens, and the image sensor. A continuous-wave (CW) laser diode with a central wavelength of 405 nm and an output power of 800 mW is employed to illuminate the sampling area. The receiving lens, with a diameter of 75 mm (effective diameter of 60 mm), has a focal length of 150 mm. The receiving lens focuses the reflected/scattered light of the distant target onto the CMOS sensor (4096 × 2160 pixels, 3.45 µm × 3.45 µm). The distance between the laser beam and the center of the receiving lens is 400 mm. Thus, the CMOS sensor has been tilted by 21 degrees to satisfy the Scheimpflug principle. The field-of-view (FOV) of the receiving optics is about 90 mrad with the present image sensor (width: 14.1 mm). To match the FOV of the receiver, the laser beam is shaped into a light sheet with a divergence of 0.3 mrad × 100.0 mrad by a beam shaper composed of a cylindrical lens and a Powell prism. The laser divergence is deliberately designed to be slightly larger than the FOV, as the beam profile at the edges of the light sheet may be inhomogeneous. Meanwhile, a long-pass filter with a cut-on wavelength of 420 nm (3.84% transmission at 405 nm) is employed to suppress the intensity of the received excitation light. The acquired color images are transferred to a minicomputer (≈ 1.0 kg) for data recording and calculation. The whole multispectral fluorescence LiDAR system has dimensions of 480 × 265 × 120 mm and a weight of only about 2.9 kg including the minicomputer.

Fig. 2. (a) Schematic and (b) real picture of the multispectral fluorescence LiDAR system.

As shown in Eqs. (1) and (2), the spatial resolution of the LiDAR system is related to the system parameters and the measurement principle. The spatial resolution along the longitudinal direction (referred to as the x-axis) is given by

$$\mathrm{d}x = -\frac{x^{2}\sin\Theta\,(1 + \tan^{2}\Phi)}{[r_{\mathrm{I}}(\sin\Theta - \cos\Theta\tan\Phi) + L_{\mathrm{IL}}]^{2}}\,\mathrm{d}r_{\mathrm{I}}.$$

Owing to the intrinsic measurement principle of the imaging-based LiDAR technique, the longitudinal (x-axis) resolution is not uniform along the detection distance: it degrades as the detection distance increases. For the present LiDAR system, the longitudinal resolution is around 3 mm at 7.5 m but coarsens to around 47 mm at 30 m. Also, limited by the width and placement of the CMOS sensor, the near-range blind zone of the present LiDAR system is about 7.5 m.

The horizontal (referred to as y-axis) spatial resolution is mainly determined by the pixel number, the measurement distance and the FOV of the receiving optics, which is given by

$$\mathrm{d}y = \frac{W(x - f)}{f \cdot M}.$$

Here W is the sensor width. From Eq. (4), one can conclude that the horizontal spatial resolution is proportional to the distance; in other words, the spatial resolution is high in the near field but lower in the far field. Nevertheless, a resolution of 0.7 mm can still be achieved at a distance of 30 m when all the pixels of the CMOS sensor are employed (4096 pixels) without pixel binning. The pixel-distance relationship as well as the spatial resolutions along the different axes are depicted in Fig. 3.
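As a rough numerical illustration of Eqs. (3) and (4), the sketch below evaluates the per-pixel resolutions. The sensor width, focal length, pixel size, and column count follow the text; the angles and LIL are assumed placeholder values, so the longitudinal result should be read as structural only.

```python
import numpy as np

# Sketch of Eqs. (3)-(4): per-pixel resolution along the x- and y-axes.
f, W, M = 0.150, 14.1e-3, 4096            # focal length [m], sensor width [m], columns
s_p = 3.45e-6                             # pixel size [m]
Theta, Phi = np.deg2rad(21.0), np.deg2rad(2.0)   # assumed angles
L_IL = 0.16                               # assumed image-origin-to-lens distance [m]

def dx_per_pixel(x, r_I):
    """Eq. (3): longitudinal resolution for a one-pixel step dr_I = s_p,
    at range x and image-plane row position r_I."""
    denom = (r_I * (np.sin(Theta) - np.cos(Theta) * np.tan(Phi)) + L_IL) ** 2
    return abs(-x**2 * np.sin(Theta) * (1 + np.tan(Phi) ** 2) / denom * s_p)

def dy_per_pixel(x):
    """Eq. (4): transverse resolution, proportional to the distance x."""
    return W * (x - f) / (f * M)

print(dy_per_pixel(30.0) * 1e3)  # ~0.7 mm at 30 m, consistent with the text
```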

Fig. 3. (a) Pixel-distance relationship and (b) spatial resolution of the present multispectral fluorescence LiDAR system. Pixel binning reduces the spatial resolution along the y-axis.

Figure 4 shows the spatial relationship between the LiDAR system and the target for ground-based measurements. The X-Y-Z coordinate of the measurement space is defined by taking the location of the LiDAR system as the coordinate origin (O). The coordinate transformation between the x-y-z coordinate and the X-Y-Z coordinate is given by

$$X = x\cos\varphi,$$
$$Y = y,$$
$$Z = h + x\sin\varphi.$$

Fig. 4. The relationship between the x-y-z coordinate of the LiDAR system and the X-Y-Z coordinate of the measurement space.

Here h is the height of the laser exit window above the ground, φ is the pitch angle of the LiDAR system, i.e., the angle between the laser beam and the horizontal plane. If the LiDAR system looks down, φ < 0.
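A minimal sketch of the coordinate transformation of Eqs. (5)-(7) is given below; the height h and pitch angle φ are placeholder values to be replaced by the actual measurement settings.

```python
import numpy as np

# Sketch of Eqs. (5)-(7): transform a point (x, y) in the LiDAR frame into
# the ground coordinate system (X, Y, Z).
def lidar_to_ground(x, y, h=1.5, phi_deg=-5.0):
    """h: laser exit window height above ground [m] (assumed);
    phi_deg: pitch angle [deg], negative when the system looks down (assumed)."""
    phi = np.deg2rad(phi_deg)
    X = x * np.cos(phi)          # Eq. (5): horizontal range
    Y = y                        # Eq. (6): transverse position unchanged
    Z = h + x * np.sin(phi)      # Eq. (7): height above ground
    return X, Y, Z
```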

The resolution along the Z-axis depends mainly on the angular resolution of the equatorial mount and the thickness of the light sheet at the target distance. The angular resolution of the present equatorial mount can reach 0.01 degree, corresponding to a spatial resolution of 0.5 cm at 30 m along the Z-axis. On the other hand, the thickness of the light sheet is about 1.00 cm at 30 m, and half of this thickness is defined as the minimum detectable distance difference. Thus, the Z-axis spatial resolution is 0.5 cm at 30 m.

2.2 Fluorescence spectral information acquisition

Plants are abundant in pigments, especially chlorophyll a, which plays a key role in the photosynthesis process. The fluorescence of chlorophyll a can be induced by light with wavelengths from the UV to the red region. Its typical fluorescence spectrum has two peaks centered at 685 nm and 740 nm, respectively [35]. These red and far-red fluorescence signals can contribute to point cloud segmentation, as the content of chlorophyll a differs greatly between leaves and branches. The present multispectral fluorescence LiDAR system uses a diode laser with a wavelength of 405 nm as the excitation light to excite the fluorescence of substances such as chlorophyll a contained in the target plant. In a color CCD/CMOS image sensor, optical filters with different transmission spectra are placed in front of each group of four adjacent pixels, namely one R-channel, two G-channels, and one B-channel. When an RGB camera is used, the R- and G-channels detect the fluorescence signal of pigments excited by the blue-violet light. Either or both of the peaks in the red and far-red regions can be valuable for segmentation. Since not all targets emit fluorescence, the B-channel is indispensable for detecting the elastic signal. Thus, the combination of the RGB channels ensures simultaneous detection of the elastic reflected and inelastic fluorescence signals. An alternative to the RGB camera is a hyperspectral camera with more spectral channels; e.g., 16- or 25-channel single-chip cameras are commercially available. Although these hyperspectral cameras can achieve higher spectral resolution, their use comes at the cost of lower spatial resolution and possibly lower scanning speed. Besides, hyperspectral cameras are still costly, while an RGB camera is a cost-effective solution.

As mentioned above, the color CMOS sensor has three channels, i.e., the R-, G-, and B-channels. The relationship between the obtained LiDAR signal (Lch) for a specific detection channel and the original fluorescence spectrum is given by

$$L_{\mathrm{ch}} = K \int I_{\mathrm{Fluo.}}(\lambda)\, T_{\mathrm{Filter}}(\lambda)\, \eta_{\mathrm{ch}}(\lambda)\, \mathrm{d}\lambda.$$

Here K represents the system constant, IFluo. represents the fluorescence spectrum of the target, TFilter represents the transmittance of the optical filter in the receiver, η represents the sensitivity of the CMOS sensor, ch denotes the spectral channel (R, G, B), and λ represents the wavelength. The sensitivity of each channel of the CMOS sensor used in the present work is shown in Fig. 5. A long-pass filter with a transmission of about 3.84% at 405 nm was also employed to reduce the intensity of the detected excitation light, which is mainly detected by the B-channel. The R- and G-channels have much lower responsivities at 405 nm and are dedicated to detecting fluorescence at longer wavelengths. Thus, the fluorescence of chlorophyll a, with peaks around 685 nm and 740 nm, is mainly detected by the R-channel. Generally speaking, the higher the chlorophyll a content in the plant, the greater the R-channel intensity. However, the signal intensity measured by the present multispectral fluorescence LiDAR may be affected by the uniformity of the energy distribution of the light sheet, the transmission distance of the laser, the incident angle of the excitation light, the reception angle, system parameters, etc. Thus, the fluorescence intensity (detected by the R- and G-channels) should be normalized by the excitation light intensity detected by the B-channel, which has a much higher responsivity at 405 nm but a lower responsivity in the infrared region. The normalized R- and G-channels greatly reduce measurement uncertainties and can then be utilized for qualitatively analyzing the fluorescence spectral information of plants. The R-channel intensity normalized by the B-channel is defined as

$$R_{\mathrm{Norm.}}^{\mathrm{L}} = \frac{L_{\mathrm{R}}}{L_{\mathrm{B}}} = \frac{\int I_{\mathrm{Fluo.}}(\lambda)\, T_{\mathrm{Filter}}(\lambda)\, \eta_{\mathrm{R}}(\lambda)\, \mathrm{d}\lambda}{\int I_{\mathrm{Fluo.}}(\lambda)\, T_{\mathrm{Filter}}(\lambda)\, \eta_{\mathrm{B}}(\lambda)\, \mathrm{d}\lambda}.$$

Here the B-channel intensity acts as a normalization factor, and the system constant is assumed to be wavelength independent since both the excitation light and the fluorescence light are collected by the same receiving optics and share the same light path. The normalized R-channel intensity can thus be used to separate the points belonging to plants from the ambient environment, as ambient objects may have no or much weaker fluorescence, especially in the near-infrared region. The normalized G-channel is defined similarly as below

$$G_{\mathrm{Norm.}}^{\mathrm{L}} = \frac{L_{\mathrm{G}}}{L_{\mathrm{B}}} = \frac{\int I_{\mathrm{Fluo.}}(\lambda)\, T_{\mathrm{Filter}}(\lambda)\, \eta_{\mathrm{G}}(\lambda)\, \mathrm{d}\lambda}{\int I_{\mathrm{Fluo.}}(\lambda)\, T_{\mathrm{Filter}}(\lambda)\, \eta_{\mathrm{B}}(\lambda)\, \mathrm{d}\lambda}.$$
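The following Python sketch illustrates how Eqs. (8)-(10) could be evaluated numerically. The fluorescence spectrum, filter transmission, and channel sensitivity curves are synthetic placeholders (not the real data of Fig. 5), so only the computational structure, not the resulting numbers, is meaningful.

```python
import numpy as np

# Sketch of Eqs. (8)-(10): integrate a fluorescence spectrum against the
# long-pass filter transmission and the channel sensitivities to obtain the
# channel intensities and the B-normalized R and G ratios.
lam = np.arange(400, 801, 1.0)                       # wavelength grid [nm]

# Synthetic chlorophyll-a-like spectrum: residual excitation line at 405 nm
# plus fluorescence peaks around 685 nm and 740 nm (placeholder amplitudes).
I_fluo = (5.0 * np.exp(-0.5 * ((lam - 405) / 2) ** 2)
          + 1.0 * np.exp(-0.5 * ((lam - 685) / 12) ** 2)
          + 0.6 * np.exp(-0.5 * ((lam - 740) / 18) ** 2))

T_filter = np.where(lam < 420, 0.0384, 0.95)         # long-pass filter (3.84% at 405 nm)

def gaussian(center, width):                         # stand-in channel sensitivities
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

eta = {"R": gaussian(620, 60), "G": gaussian(540, 50), "B": gaussian(460, 40)}

L_ch = {ch: np.trapz(I_fluo * T_filter * eta[ch], lam) for ch in "RGB"}  # Eq. (8), K = 1
R_norm = L_ch["R"] / L_ch["B"]                       # Eq. (9)
G_norm = L_ch["G"] / L_ch["B"]                       # Eq. (10)
print(R_norm, G_norm)
```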

Fig. 5. The relative responses of the R-, G-, and B-channels of the CMOS image sensor employed.

2.3 Spatial information acquisition

In the multispectral fluorescence LiDAR technique, a light sheet is transmitted and the two-dimensional spatial information in the x-y plane is obtained simultaneously by the color image sensor, with each pixel corresponding to a position of the target in the real scene. Although the principle is straightforward, several procedures should be performed to further improve the spatial accuracy for relatively far-field targets with complex shapes.

According to the measurement principle, only a specific pixel in each column of the image indicates the real position of the target. As the images of distant targets have certain widths owing to the thickness of the light sheet and the imperfect imaging, it is thus crucial to accurately locate the actual pixel position in order to obtain the distance information. Figure 6 shows the flowchart for obtaining the target position from a raw color image.

Fig. 6. (a) A raw color image. (b) The color image after digital binning along the y-axis. (c) The pixel intensity plot of a specific column of each channel. (d) Zoom-in plots of the facula. (e) The retrieved contour line.

2.3.1 Pre-processing method

In order to improve the signal-to-noise ratio (SNR), every 16 adjacent columns are digitally binned into one column. Although the binning process decreases the spatial resolution along the y-axis, the resolution still reaches 11 mm at 30 m, which is sufficient for most cases. The compressed image (Fig. 6(b)), with a dimension of 2160 × 256, is then processed column by column.

The pixel intensity plot of a specific column is shown in Fig. 6(c), where a facula (light spot) spanning about 100 pixels is observed (Fig. 6(d)). Thus, 200 pixels around the maximum intensity of the whole column are selected, and the rest are treated as the background signal. An SNR evaluation is carried out to judge whether the maximum signal is a true echo from a distant target. The SNR is defined as the ratio between the maximum value of the whole signal and the standard deviation of the background signal. The SNR threshold for an effective echo is empirically determined as 4.0. As long as the SNR of any of the three channels exceeds the threshold, the corresponding RGB echo signals are retained.

The retained 200 pixels surrounding the maximum signal are denoised by a smoothing algorithm with a span of 3 pixels based on robust locally weighted regression, while the remaining pixels are set to zero. Selecting effective signals improves the processing speed, reduces the data volume, and eliminates the influence of stray light.
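A hedged sketch of this pre-processing chain for a single channel is given below. The moving-average smoothing is a simple stand-in for the robust locally weighted regression described above, and the array layout (rows as the range axis, columns as the transverse axis) is an assumption.

```python
import numpy as np

# Sketch of the pre-processing step for one channel image: 16x column binning,
# per-column SNR check, and selection of ~200 pixels around the peak.
def preprocess(img, bin_cols=16, window=200, snr_threshold=4.0):
    rows, cols = img.shape
    # Digitally bin every 16 adjacent columns into one (summation).
    binned = img[:, : cols - cols % bin_cols].reshape(rows, -1, bin_cols).sum(axis=2)

    cleaned = np.zeros_like(binned, dtype=float)
    for j in range(binned.shape[1]):
        col = binned[:, j].astype(float)
        peak = int(np.argmax(col))
        lo, hi = max(0, peak - window // 2), min(rows, peak + window // 2)
        background = np.concatenate([col[:lo], col[hi:]])
        snr = col[peak] / (background.std() + 1e-12)   # max over background deviation
        if snr < snr_threshold:
            continue                                   # no valid echo in this column
        echo = np.convolve(col[lo:hi], np.ones(3) / 3, mode="same")  # 3-pixel smoothing
        cleaned[lo:hi, j] = echo                       # pixels outside the window stay zero
    return cleaned
```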

2.3.2 Spatial position retrieval

Due to reflection and scattering around the target surface, the received signal is often broadened, a phenomenon also observed with full-waveform LiDAR. This broadening may lead to mispositioning of the target. Here, the actual position of the echo is determined using the "center of gravity" method [36], i.e., a weighted average of the signal intensities. All columns are processed to obtain the actual pixel positions of the echo, which are then converted to distances according to the pixel-distance relationship. A contour line of the target can thus be obtained (Fig. 6(e)).
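A minimal sketch of the center-of-gravity retrieval, operating on the pre-processed image from the previous step, might look as follows; the resulting sub-pixel row positions would then be converted to distances via the pixel-distance relationship (e.g., the hypothetical pixel_to_xy helper sketched in Sec. 2.1).

```python
import numpy as np

# Sketch of the "center of gravity" position retrieval: for each column,
# the echo position is the intensity-weighted average row index.
def echo_positions(cleaned):
    """cleaned: pre-processed image, zeros outside the selected echo windows."""
    rows = np.arange(cleaned.shape[0], dtype=float)
    positions = []
    for j in range(cleaned.shape[1]):
        col = cleaned[:, j]
        total = col.sum()
        if total <= 0:
            positions.append(np.nan)                    # no echo retained in this column
            continue
        positions.append((rows * col).sum() / total)    # weighted mean row index
    return np.asarray(positions)                        # sub-pixel row positions per column
```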

A whole scan of the target is performed through a rotational "push broom" method, by which contour lines at different angles are recorded. Combined with the system position and pitch angle, a three-dimensional point cloud of the target can be generated according to the theory described in Sec. 2.1.

3. Results and discussion

The purpose of the multispectral fluorescence LiDAR is to collect the three-dimensional shape of the target together with its spectral information. Therefore, the validity of the spectral information and the accuracy of the three-dimensional information are both key to evaluating the capability of the multispectral fluorescence LiDAR.

3.1 Relationship between the fluorescence spectra and RGB signals

To verify the effectiveness of the spectral information collected by the multispectral fluorescence LiDAR system, an indoor experiment was carried out; the schematic diagram is shown in Fig. 7. The multispectral fluorescence LiDAR system is installed on the equatorial mount 10 m away from the sample platform, where leaves are mounted on a black board. A fluorescence measurement setup consisting of an optical filter, a collimating lens, an optical fiber, and a portable spectrometer (Ocean Optics, HR4000CG-UV-NIR) is implemented to measure the fluorescence excited by the laser beam from the LiDAR system. The viewing angle of the fluorescence measurement setup and the FOV of the multispectral fluorescence LiDAR system are aligned so that fluorescence originating from the same positions and angles is measured by the two setups. The sample leaves were collected on the campus of the Dalian University of Technology. These sample leaves differ in species and growth state, so that the universality of the experimental results can be validated.

Fig. 7. Indoor fluorescence experimental setup to evaluate the relationship between the fluorescence spectra and RGB signals.

During the signal acquisition process, once the laser of the multispectral fluorescence LiDAR system was turned on, both devices started to collect signals at the same time with the same exposure time. The multispectral fluorescence LiDAR acquired the RGB image of the sampling area, which gives the R-, G-, and B-channel intensities. The spectrometer collected the fluorescence spectrum (SFluo.). The expected intensities of the RGB channels evaluated from the fluorescence spectrum can be calculated by

$$S_{\mathrm{ch}} = K_{2} \int S_{\mathrm{Fluo.}}(\lambda)\, T_{\mathrm{Filter}}(\lambda)\, \eta_{\mathrm{ch}}(\lambda)\, \mathrm{d}\lambda.$$

Here K2 is the system constant of the fluorescence measurement setup. The normalized intensities of the R- and G-channels are defined by

$$R_{\mathrm{Norm.}}^{\mathrm{S}} = \frac{S_{\mathrm{R}}}{S_{\mathrm{B}}},$$
$$G_{\mathrm{Norm.}}^{\mathrm{S}} = \frac{S_{\mathrm{G}}}{S_{\mathrm{B}}}.$$

As the two setups have different system constants, the expected fluorescence intensities evaluated from the spectrometer may differ from the RGB intensities measured by the multispectral fluorescence LiDAR system. Nevertheless, the relationships between the different channels should be similar. In other words, $R_{\rm{Norm.}}^{\rm S}$ and $R_{\rm{Norm.}}^{\rm L}$ should exhibit similar trends with the variation of the chlorophyll content of the measured plants. In this work, thirteen leaves of different kinds were examined. As can be seen from Fig. 8, the results obtained by the two instruments are in good agreement with each other. The coefficients of determination (R2) of the R- and G-channels reach up to 0.97. After normalization by the B-channel, the two types of results still show good consistency (R2 > 0.83).
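For completeness, a small sketch of how this channel-to-channel consistency could be quantified is given below; the input arrays are placeholders for the thirteen measured leaf values, and a simple linear fit with the coefficient of determination stands in for the analysis behind Fig. 8.

```python
import numpy as np

# Sketch of the consistency check: regress the LiDAR channel intensities
# against the spectrometer-derived values (Eq. (11)) and report R^2.
def r_squared(lidar_vals, spectro_vals):
    lidar_vals, spectro_vals = map(np.asarray, (lidar_vals, spectro_vals))
    slope, intercept = np.polyfit(spectro_vals, lidar_vals, 1)   # linear fit
    pred = slope * spectro_vals + intercept
    ss_res = np.sum((lidar_vals - pred) ** 2)
    ss_tot = np.sum((lidar_vals - lidar_vals.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Example with placeholder data for a single channel (not the measured values):
# r2 = r_squared(lidar_R_values, spectrometer_R_values)
```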

Fig. 8. (a)-(f) RGB intensities obtained from the multispectral fluorescence LiDAR and the spectrometer. (g)-(j) Normalized results for the R- and G-channels.

The R2 of the B-channel is lower than that of the R- or G-channel; a possible reason is that, under the current experimental conditions, there is still a certain difference in the receiving angles of the two devices. The B-channel, dominated by the elastic reflection signal, is more sensitive to the receiving angle than the R- or G-channel, which are dominated by the inelastic fluorescence signal. Overall, the comparison test proves that the spectral information collected by the multispectral fluorescence LiDAR is effective.

3.2 Verification on spatial resolution

To verify the spatial resolution, an experiment was designed and implemented using 3D-printed standard cubes. The multispectral fluorescence LiDAR system was placed in a corridor, along which three groups of cubes were placed at distances of 7.5 m, 15 m, and 30 m, as shown in Fig. 9(a). In each group, three cubes (8 × 8 × 10 cm, 4 × 4 × 6 cm, 2 × 2 × 3 cm) were placed in the same layout, as shown in Fig. 9(b). When the multispectral fluorescence LiDAR system was turned on, a two-dimensional profile could be obtained by a static measurement. The three-dimensional point cloud covering multiple targets at different ranges was achieved through scanning measurements by rotating the system with the equatorial mount.

Fig. 9. (a) Physical arrangement of the LiDAR system and targets. (b) Layout of the three cubes in each group.

The 3D view, side view, and top view of the scanning result are shown in Fig. 10. The practical performance of the multispectral fluorescence LiDAR system is examined by evaluating the dimensions and separations of the cubes at different distances. Figure 11 shows the measurement results of the cubes at 7.5 m, which were all in good agreement with the designed values. For instance, the width of the biggest cube was estimated to be 7.87 cm; the X-axis separation between the biggest cube and the middle cube was found to be 3.27 cm. Table 1 summarizes the measurement uncertainties at different distances.

Fig. 10. (a) 3D view, (b) side view, and (c) top view of the scanning result. The exposure time was 175 ms, a total of 825 photos were collected, and the total time was 145 s.

Fig. 11. Measurements of the targets at 7.5 m: (a) actual width and (b) actual distance between the front surfaces of the cubes; measurement results of (c) width and (d) distance between the front surfaces of the cubes.

Table 1. Spatial resolutions at different distances.

3.3 Point cloud segmentation

Point cloud segmentation refines the point cloud data of a target by classifying points based on spatial information, geometry, etc. Prevailing methods, e.g., density-based, shape-based, and deep-learning-based methods, rely mainly on geometric and intensity relations. Besides these two kinds of information, the present LiDAR technique offers additional multichannel fluorescence information, which is helpful for building the segmentation model and will contribute to forest inventory [37], biomass calculation [38], etc.

A validation experiment has been carried out on a Ficus pandurata Hance to investigate the performance of point cloud segmentation based on multispectral fluorescence information. A picture of the plant as well as the point cloud are shown in Fig. 12. Clearly, the normalized intensities of the R-channel for leaves and for trunks/branches are substantially different, so the two classes can be readily segmented by setting a threshold (here, 2.8), as shown in Fig. 12(c). In order to quantitatively evaluate the segmentation performance, the ground truth for leaves and trunks/branches was obtained by manual segmentation. Table 2 summarizes the quantitative results of the segmentation, where the true positive (TP), false positive (FP), and false negative (FN) counts were obtained from the confusion matrix after point cloud segmentation. Besides, the recall, precision, and F-score of the point cloud segmentation have also been evaluated [39]. As shown in Table 2, the point cloud segmentation based on the multispectral fluorescence information has excellent performance, with all evaluation parameters larger than 0.97. On the other hand, incorrect segmentation (FP and FN) occurs mainly around the leaf-branch connection areas and the scars on the leaves. Further analysis shows that the spectral signals of these scars are indeed different from those of the leaves, which can be further improved by considering the position information in the segmentation algorithm.
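A minimal sketch of this threshold-based segmentation and its evaluation is given below; the normalized R-channel values and the manual ground-truth labels are placeholders for the measured data.

```python
import numpy as np

# Sketch of leaf/branch segmentation by thresholding the normalized R-channel
# and evaluating recall, precision, and F-score against manual labels.
def segment_and_evaluate(r_norm, ground_truth, threshold=2.8):
    predicted_leaf = np.asarray(r_norm) > threshold        # leaves fluoresce strongly in R
    truth = np.asarray(ground_truth, dtype=bool)           # True = leaf in the manual labels

    tp = np.sum(predicted_leaf & truth)                    # confusion-matrix entries
    fp = np.sum(predicted_leaf & ~truth)
    fn = np.sum(~predicted_leaf & truth)

    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return recall, precision, f_score
```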

Fig. 12. (a) Real picture, (b) normalized intensity of the R-channel, and (c) segmented point cloud of the Ficus pandurata Hance located 10 m away from the LiDAR system. The exposure time was 150 ms; a total of 220 photos were collected in 33 s.

Table 2. Performance of the point cloud segmentation.

An outdoor experiment was also carried out to test the on-site detection capability and to evaluate the improvement that the multiple fluorescence spectral channels bring to the classification. Figure 13(a) shows the measurement scenario, including leaves and branches of a Prunus cerasifera, a pine, and an Ilex, grass, soil, as well as inorganic objects such as a curbstone and ceramic tiles. The distances of these targets to the multispectral fluorescence LiDAR system are between 20 and 26 m. The scanning results are in good agreement with the real scene in terms of three-dimensional morphology. The B-channel 3D profile shows the reflection characteristics of the targets, while the R- and G-channel 3D profiles show their fluorescence spectral characteristics. It is not easy to distinguish the targets through their intensity differences in the B-channel 3D profile. For instance, the leaves of the Prunus cerasifera and the pine appear similar, the curbstone shows different intensities at different positions, and the branches and leaves of the Prunus cerasifera look alike. Nevertheless, when blue-violet light is used as the excitation light source, most plant leaves emit fluorescence in the red and green bands [40], related to pigments such as chlorophyll a. It can be seen from the normalized R-channel ($R_{\rm{Norm.}}^{\rm L}$) and G-channel ($G_{\rm{Norm.}}^{\rm L}$) 3D profiles (Fig. 13(c)-(d)) that the signal intensity of grass and the various leaves is significantly stronger than that of the other targets. In the real scene, the Ilex leaves are greener than those of the Prunus cerasifera, indicating that the latter contains more of other pigments. In the fluorescence point cloud, the behavior of both $R_{\rm{Norm.}}^{\rm L}$ and $G_{\rm{Norm.}}^{\rm L}$ is consistent with the real scene, i.e., a higher chlorophyll a content leads to a larger $R_{\rm{Norm.}}^{\rm L}$, while higher levels of other pigments such as riboflavin enhance $G_{\rm{Norm.}}^{\rm L}$ [41,42].

Fig. 13. (a) Real picture, (b) scanning result of the B-channel, (c) $R_{\rm{Norm.}}^{\rm L}$, and (d) $G_{\rm{Norm.}}^{\rm L}$ of the targets at distances of 20 m to 26 m. The exposure time was 100 ms, a total of 984 photos were collected, and the total time was 100 s.

Therefore, based on the behavior of the multispectral channels, the targets mentioned above can be classified by simply setting thresholds on the $R_{\rm{Norm.}}^{\rm L}$ or $G_{\rm{Norm.}}^{\rm L}$ values. Figure 14(c) shows the histogram of $R_{\rm{Norm.}}^{\rm L}$. Three valleys can be seen at 0.45, 1.31, and 2.07. By taking these three values as criteria, the point cloud can be roughly classified into four categories, as shown in Fig. 14(a). It can be seen that not only are substances without chlorophyll, i.e., the wall and soil, well distinguished from the leaves, but the leaves of the Prunus cerasifera, the pine, and the Ilex can also be distinguished from each other. The leaves can then easily be removed from the point cloud, leaving only walls, soil, trunks, and branches for further analysis (Fig. 14(b)).
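A small sketch of this valley-based multi-class split is given below; the valley positions are taken from the histogram described above, while the input values are placeholders.

```python
import numpy as np

# Sketch of the multi-class split of Fig. 14: the histogram valleys of the
# normalized R-channel (0.45, 1.31, 2.07) serve as class boundaries,
# yielding four categories per point.
def classify_by_valleys(r_norm, valleys=(0.45, 1.31, 2.07)):
    # np.digitize returns 0..3: bin 0 -> non-fluorescent surfaces (wall, soil),
    # higher bins -> leaves with increasingly strong chlorophyll-related fluorescence.
    return np.digitize(np.asarray(r_norm), bins=valleys)

labels = classify_by_valleys([0.2, 0.9, 1.8, 2.6])   # example input -> [0, 1, 2, 3]
```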

Fig. 14. (a) Threshold segmentation of the $R_{\rm{Norm.}}^{\rm L}$ point cloud of the targets. (b) Wall surfaces, soil, tree trunks and branches, etc. extracted from (a). (c) Histogram of the $R_{\rm{Norm.}}^{\rm L}$ point cloud.

4. Conclusion

In this paper, a multispectral fluorescence LiDAR has been developed, which can acquire a multispectral point cloud within a range of about 30 m while greatly reducing the system volume and cost. The LiDAR system can be installed on poles and drones for a detection range of around 30 m, and an improved position retrieval method has been proposed. A theoretical model has been proposed to describe the RGB intensities measured by the multispectral fluorescence LiDAR system. An indoor fluorescence experiment on thirteen plant leaves has been carried out to evaluate the relationship between the detected RGB intensities and the fluorescence spectrum of the target according to the theoretical model. High linearity is achieved between all three spectral channel signals and the spectra measured by the spectrometer. The R2 between the R-channel intensity measured by the LiDAR system and the expected R-channel intensity derived from the spectrometer reaches 0.97. These promising results validate the feasibility of using the obtained multi-channel fluorescence point cloud for fluorescence-technology-based detection. The spatial detection performance has been evaluated by measuring standard objects of known size placed at various distances (7.5 m, 15 m, and 30 m). Typically, the resolution in the X-direction at 7.5 m is around 3 mm and the resolution in the Y-direction is better than 2 mm, both better than those of commercial TOF LiDAR systems (∼1 cm), which can only acquire spatial information. A validation experiment has been carried out to investigate the performance of point cloud segmentation based on multispectral fluorescence information, which has demonstrated excellent performance with all evaluation parameters larger than 0.97.

Outdoor experiments on various plants have been carried out at a distance of about 26 m to demonstrate the capability of the 3D multispectral fluorescence LiDAR for accurate measurements of fluorescence spectral and distance information. The point cloud of the B-channel, which carries position and intensity information as most LiDARs do, is compared with the ratios of the R- or G-channel to the B-channel. The obtained three-dimensional multispectral fluorescence point cloud provides a high-accuracy digital twin of the targets. Besides, the spectral information collected by the system reflects the intrinsic biochemical status of the target, which significantly improves the segmentation process. This improvement shows the system's potential in applications of digital forestry inventory and intelligent agriculture.

The present work has successfully demonstrated the capability of the multispectral fluorescence LiDAR for point cloud segmentation. On the other hand, much work remains for the near future. Regarding hardware, the current multispectral fluorescence LiDAR is mounted on an equatorial mount and operates as a terrestrial LiDAR system; efforts will be made to install it on a UAV to acquire data more efficiently. Moreover, replacing the RGB CMOS sensor with one having more spectral bands is also under consideration. Algorithms and models based on the fusion of spatial and spectral information will be pursued for accurately and efficiently retrieving quantitative vegetation indexes, e.g., DBH, canopy base height, and biomass.

Funding

National Natural Science Foundation of China (62075025, 62105085); Dalian High-Level Talent Innovation Program (2020RQ018); Fundamental Research Funds for the Central Universities (DUT22JC17, DUT22QN246); Natural Science Foundation of Zhejiang Province (LQ20F050006).

Acknowledgement

The authors gratefully acknowledge the valuable help of Zheng Kong during the experiments.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. X. Jin, P. Zarco-Tejada, U. Schmidhalter, M. P. Reynolds, M. J. Hawkesford, R. K. Varshney, T. Yang, C. Nie, Z. Li, B. Ming, Y. Xiao, Y. Xie, and S. Li, “High-throughput estimation of crop traits: A review of ground and aerial phenotyping platforms,” IEEE Geosci. Remote Sens. Mag. 9(1), 200–231 (2021). [CrossRef]  

2. E. Tunca, E. S. Koksal, S. Cetin, N. M. Ekiz, and H. Balde, “Yield and leaf area index estimations for sunflower plants using unmanned aerial vehicle images,” Environ. Monit. Assess. 190(11), 682 (2018). [CrossRef]  

3. L. Luo, Q. Zhai, Y. Su, Q. Ma, M. Kelly, and Q. Guo, “Simple method for direct crown base height estimation of individual conifer trees using airborne LiDAR data,” Opt. Express 26(10), A562–A578 (2018). [CrossRef]  

4. T. Hakala, J. Suomalainen, S. Kaasalainen, and Y. Chen, “Full waveform hyperspectral LiDAR for terrestrial laser scanning,” Opt. Express 20(7), 7119–7127 (2012). [CrossRef]  

5. T. Yao, X. Yang, F. Zhao, Z. Wang, Q. Zhang, D. Jupp, J. Lovell, D. Culvenor, G. Newnham, W. Ni-Meister, C. Schaaf, C. Woodcock, J. Wang, X. Li, and A. Strahler, “Measuring forest structure and biomass in New England forest stands using Echidna ground-based lidar,” Remote Sensing of Environment 115(11), 2965–2974 (2011). [CrossRef]  

6. F. Fiorani and U. Schurr, “Future scenarios for plant phenotyping,” Annu. Rev. Plant Biol. 64(1), 267–291 (2013). [CrossRef]  

7. G. Zheng and L. M. Moskal, “Retrieving Leaf Area Index (LAI) Using Remote Sensing: Theories, Methods and Sensors,” Sensors 9(4), 2719–2745 (2009). [CrossRef]  

8. M. Dassot, T. Constant, and M. Fournier, “The use of terrestrial LiDAR technology in forest science: application fields, benefits and challenges,” Ann. For. Sci. 68(5), 959–974 (2011). [CrossRef]  

9. Y. Su, Q. Guo, S. Jin, H. Guan, X. Sun, Q. Ma, T. Hu, R. Wang, and Y. Li, “The Development and Evaluation of a Backpack LiDAR System for Accurate and Efficient Forest Inventory,” IEEE Geosci. Remote Sensing Lett. 18(9), 1660–1664 (2021). [CrossRef]  

10. T. Hu, X. Sun, Y. Su, H. Guan, Q. Sun, M. Kelly, and Q. Guo, “Development and Performance Evaluation of a Very Low-Cost UAV-Lidar System for Forestry Applications,” Remote Sens. 13(1), 77 (2020). [CrossRef]  

11. J. B. Abshire, X. Sun, and R. S. Afzal, “Mars Orbiter Laser Altimeter: receiver model and performance analysis,” Appl. Opt. 39(15), 2449–2460 (2000). [CrossRef]  

12. R. Nelson, K. Ranson, G. Sun, D. Kimes, V. Kharuk, and P. Montesano, “Estimating Siberian timber volume using MODIS and ICESat/GLAS,” Remote Sens. Environ. 113, 691–701 (2009).

13. B. Wang, S. Song, S. Shi, Z. Chen, F. Li, D. Wu, D. Liu, and W. Gong, “Multichannel Interconnection Decomposition for Hyperspectral LiDAR Waveforms Detected From Over 500 m,” IEEE Trans. Geosci. Remote Sensing 60, 1–14 (2022). [CrossRef]  

14. K. Omasa, F. Hosoi, and A. Konishi, “3D lidar imaging for detecting and understanding plant responses and canopy structure,” J. Exp. Bot. 58(4), 881–898 (2006). [CrossRef]  

15. J. Hyyppa, O. Kelle, M. Lehikoinen, and M. Inkinen, “A segmentation-based method to retrieve stem volume estimates from 3-D tree height models produced by laser scanners,” IEEE Trans. Geosci. Remote Sensing 39(5), 969–975 (2001). [CrossRef]  

16. A. Paturkar, G. Sen Gupta, and D. Bailey, “Making Use of 3D Models for Plant Physiognomic Analysis: A Review,” Remote Sens. 13(11), 2232 (2021). [CrossRef]  

17. G. Kereszturi, L. N. Schaefer, W. K. Schleiffarth, J. Procter, R. R. Pullanagari, S. Mead, and B. Kennedy, “Integrating airborne hyperspectral imagery and LiDAR for volcano mapping and monitoring through image classification,” International Journal of Applied Earth Observation and Geoinformation 73, 323–339 (2018). [CrossRef]  

18. S. Jin, X. Sun, F. Wu, Y. Su, Y. Li, S. Song, K. Xu, Q. Ma, F. Baret, D. Jiang, Y. Ding, and Q. Guo, “Lidar sheds new light on plant phenomics for plant breeding and management: Recent advances and future prospects,” ISPRS Journal of Photogrammetry and Remote Sensing 171, 202–223 (2021). [CrossRef]  

19. J. P. Underwood, C. Hung, B. Whelan, and S. Sukkarieh, “Mapping almond orchard canopy volume, flowers, fruit and yield using lidar and vision sensors,” Computers and Electronics in Agriculture 130, 83–96 (2016). [CrossRef]  

20. Z. Wang, C. Li, M. Zhou, H. Zhang, W. He, W. Li, and Y. Qiu, “Recent development of hyperspectral LiDAR using supercontinuum laser,” in Hyperspectral Remote Sensing Applications and Environmental Monitoring and Safety Testing Technology, (2016). [CrossRef]  

21. S. Luo, C. Wang, X. Xi, S. Nie, X. Fan, H. Chen, X. Yang, D. Peng, Y. Lin, and G. Zhou, “Combining hyperspectral imagery and LiDAR pseudo-waveform for predicting crop LAI, canopy height and above-ground biomass,” Ecol. Indic. 102, 801–812 (2019). [CrossRef]  

22. J. Yang, W. Gong, S. Shi, L. Du, B. Zhu, J. Sun, and S. Song, “Excitation Wavelength Analysis of Laser-Induced Fluorescence LiDAR for Identifying Plant Species,” IEEE Geosci. Remote Sensing Lett. 13(7), 977–981 (2016). [CrossRef]  

23. P. Chen, C. Jamet, and D. Liu, “LiDAR Remote Sensing for Vertical Distribution of Seawater Optical Properties and Chlorophyll-a From the East China Sea to the South China Sea,” IEEE Trans. Geosci. Remote Sensing 60, 1–21 (2022). [CrossRef]  

24. J. Jiang, Z. Zhang, Q. Cao, Y. Liang, B. Krienke, Y. Tian, Y. Zhu, W. Cao, and X. Liu, “Use of an Active Canopy Sensor Mounted on an Unmanned Aerial Vehicle to Monitor the Growth and Nitrogen Status of Winter Wheat,” Remote Sens. 12(22), 3684 (2020). [CrossRef]  

25. O. Nevalainen, T. Hakala, J. Suomalainen, R. Mäkipää, M. Peltoniemi, A. Krooks, and S. Kaasalainen, “Fast and nondestructive method for leaf level chlorophyll estimation using hyperspectral LiDAR,” Agricultural and Forest Meteorology 198-199, 250–258 (2014). [CrossRef]  

26. U. Ahmad, A. Alvino, and S. Marino, “A Review of Crop Water Stress Assessment Using Remote Sensing,” Remote Sens. 13(20), 4155 (2021). [CrossRef]  

27. Z. Duan, T. Peng, S. Zhu, M. Lian, Y. Li, F. Wei, J. Xiong, S. Svanberg, Q. Zhao, J. Hu, and G. Zhao, “Optical characterization of Chinese hybrid rice using laser-induced fluorescence techniques-laboratory and remote-sensing measurements,” Appl. Opt. 57(13), 3481–3487 (2018). [CrossRef]  

28. L. Du, S. Shi, J. Yang, W. Wang, J. Sun, B. Cheng, Z. Zhang, and W. Gong, “Potential of spectral ratio indices derived from hyperspectral LiDAR and laser-induced chlorophyll fluorescence spectra on estimating rice leaf nitrogen contents,” Opt. Express 25(6), 6539–6549 (2017). [CrossRef]  

29. X. Zhao, S. Shi, J. Yang, W. Gong, J. Sun, B. Chen, K. Guo, and B. Chen, “Active 3D Imaging of Vegetation based on Multi-Wavelength Fluorescence LiDAR,” Sensors (Basel) 20(3), 935 (2020). [CrossRef]  

30. F. Gao, H. Lin, K. Chen, X. Chen, and S. He, “Light-sheet based two-dimensional Scheimpflug lidar system for profile measurements,” Opt. Express 26(21), 27179–27188 (2018). [CrossRef]  

31. G. Fu, A. Menciassi, and P. Dario, “Development of a low-cost active 3D triangulation laser scanner for indoor navigation of miniature mobile robots,” Robotics and Autonomous Systems 60(10), 1317–1326 (2012). [CrossRef]  

32. H. Lin, Y. Zhang, and L. Mei, “Fluorescence Scheimpflug LiDAR developed for the three-dimension profiling of plants,” Opt. Express 28(7), 9269–9279 (2020). [CrossRef]  

33. L. Mei and M. Brydegaard, “Atmospheric aerosol monitoring by an elastic Scheimpflug lidar system,” Opt. Express 23(24), A1613–A1628 (2015). [CrossRef]  

34. L. Mei and M. Brydegaard, “Continuous-wave differential absorption lidar,” Laser Photonics Rev. 9(6), 629–636 (2015). [CrossRef]  

35. H. Lin, Z. Li, H. Lu, S. Sun, F. Chen, K. Wei, and D. Ming, “Robust Classification of Tea Based on Multi-Channel LED-Induced Fluorescence and a Convolutional Neural Network,” Sensors 19(21), 4687 (2019). [CrossRef]  

36. H. C. van Assen, M. Egmont-Petersen, and J. H. Reiber, “Accurate object localization in gray level images using the center of gravity measure: accuracy versus precision,” IEEE Trans. on Image Process. 11(12), 1379–1384 (2002). [CrossRef]  

37. X. Chen, K. Jiang, Y. Zhu, X. Wang, and T. Yun, “Individual Tree Crown Segmentation Directly from UAV-Borne LiDAR Data Using the PointNet of Deep Learning,” Forests 12(2), 131 (2021). [CrossRef]  

38. X. Xu, F. Iuricich, K. Calders, J. Armston, and L. De Floriani, “Topology-based individual tree segmentation for automated processing of terrestrial laser scanning point clouds,” International Journal of Applied Earth Observation and Geoinformation 116, 103145 (2023). [CrossRef]  

39. A.-V. Vo, L. Truong-Hong, D. F. Laefer, and M. Bertolotto, “Octree-based region growing for point cloud segmentation,” ISPRS Journal of Photogrammetry and Remote Sensing 104, 88–100 (2015). [CrossRef]  

40. J. Yang, J. Sun, L. Du, B. Chen, Z. Zhang, S. Shi, and W. Gong, “Effect of fluorescence characteristics and different algorithms on the estimation of leaf nitrogen content based on laser-induced fluorescence lidar in paddy rice,” Opt. Express 25(4), 3743–3755 (2017). [CrossRef]  

41. N. Tremblay, Z. Wang, and Z. G. Cerovic, “Sensing crop nitrogen status with fluorescence indicators. A review,” Agron. Sustain. Dev. 32(2), 451–464 (2012). [CrossRef]  

42. J. Yang, S. Song, L. Du, S. Shi, W. Gong, J. Sun, and B. Chen, “Analyzing the Effect of Fluorescence Characteristics on Leaf Nitrogen Concentration Estimation,” Remote Sens. 10(9), 1402 (2018). [CrossRef]  


