
4D dual-mode staring hyperspectral-depth imager for simultaneous spectral sensing and surface shape measurement


Abstract

A 4D dual-mode staring hyperspectral-depth imager (DSHI), which acquires reflectance spectra, fluorescence spectra, and 3D structural information by combining a staring hyperspectral scanner and a binocular line laser stereo vision system, is introduced. A 405 nm laser line generated by a focal laser line generation module is used both for fluorescence excitation and for binocular stereo matching of the irradiated line region. Under this configuration, the two kinds of hyperspectral data collected by the hyperspectral scanner can be merged into the corresponding points of the 3D model, forming a dual-mode 4D model. The DSHI shows excellent performance, with a spectral resolution of 3 nm and a depth accuracy of 26.2 µm. Sample experiments on a fluorescent figurine, real and plastic sunflowers and a clam are presented to demonstrate the system’s potential within a broad range of applications, e.g., digital documentation, plant phenotyping, and biological analysis.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Hyperspectral imaging (HSI) can simultaneously obtain two-dimensional images at dozens to thousands of narrow wavelength bands, making it suitable for efficient non-contact and non-destructive sensing and detection applications [1,2]. HSI is, for example, widely applied in the characterization of physicochemical sample composition, as it can provide detailed information related to vibrational modes of molecular bonds [3]. A range of spectrum acquisition techniques for HSI, such as dispersive scanning [4], holographic interference [5], filter-based spectrometers [6], and compressive sensing [7], have been developed. Among these, dispersive scanning HSI is one of the most commonly used due to its simple setup, low cost and high spectral resolution [8]. In dispersive HSI, incident light is dispersed and expanded in the spectral dimension using a prism or a grating, and the corresponding spectral data is recorded by a sensor. Different imaging modes have similarly been used in HSI systems, including reflectance hyperspectral imaging [9] and fluorescence hyperspectral imaging [10]. Reflectance spectra provide light absorption characteristics of materials [11], while fluorescence spectra have advantages of high sensitivity and specificity in the measurement of fluorescent samples, as fluorescent signals have higher quantum efficiency [12]. Combining the two kinds of spectra can improve detection accuracy by recording both light absorption and fluorescent material information. Such dual-mode hyperspectral imaging has been utilized for detection and classification of biomedical samples, such as microalgae [13], intestinal fungi [14] and Tetrastigma hemsleyanum [15], as well as for component analysis of, e.g., historical documents [16] and citrus plants [17].

Combining HSI with three-dimensional (3D) imaging, which provides complementary depth information, enables the construction of 3D surface models with spectral information. Commonly used 3D imaging techniques include time-of-flight (TOF) [18], binocular vision [19,20], structured light stereo vision [21,22], laser triangulation [23], and LIDAR [24], each of which offers specific advantages in terms of spatial resolution, measurement distance, imaging time and field-of-view (FOV).

Four-dimensional (4D) systems that combine 3D imaging and HSI techniques have been developed and studied extensively in laboratory settings [25–34]. Reported 4D systems can be divided into two categories based on how the two imaging modules are combined. In the first category the two imaging modes share the same sensor, which has the advantage of not requiring extra computational overhead for data fusion. Heist et al. proposed a fast 5D hyperspectral imaging system using structured light and Fabry-Pérot interferometer filters [29]; however, its spectral resolution, which is limited by the filters’ performance, is low at about 15 nm. Recently, Li et al. reported a 4D line-scan hyperspectral imager based on structured light and line-scan dispersive hyperspectral imaging [31]. In this set-up, only one grayscale camera operating in a line-scanning manner was utilized for acquisition of both fringe pattern images and spectral images, which greatly limited the 3D reconstruction performance, resulting in low spatial resolution and a time-consuming measurement process. Rueda et al. have presented 4D cameras based on compressive spectral imaging (CSI) and ToF sensors, which can realize 4D snapshot imaging at high framerates [32,33] and require no moving parts. They overcome the bulkiness of a previous imaging system [34], presenting a more compact imaging device with snapshot capabilities. Generally, for 4D systems using one sensor, there is always a tradeoff between spectral resolution and depth accuracy.

The second category of 4D systems acquires spatial and spectral data using separate sensors, so the spectral and spatial resolutions depend only on the inherent performance of the chosen hyperspectral imaging and 3D measurement techniques, respectively, which makes such systems more flexible and easier to expand. The price to pay is that additional algorithms are required to realize fusion of the two acquired data sets; however, it is possible to greatly reduce the computational complexity by choosing appropriate imaging techniques for the 3D measurement and spectral detection, as well as by designing optical path modules such that a matching relationship between the two imaging channels is established. Careful design choices allow fusion precision to be controlled at the pixel level. For example, Zhao et al. demonstrated a 4D system based on an acousto-optical tunable filter (AOTF) and laser triangulation ranging [27]. Here, the hyperspectral and 3D imaging channels share a common optical path configuration, and thus the geometrical relation between the spectral and laser spot images is fixed, which decreases the difficulty of data registration. However, the spatial resolution is only in the millimeter range, as it is limited by the particularities of laser spot triangulation. Our research group recently integrated a staring hyperspectral imaging system with a structured light stereovision system [30]. In this setup the two imaging systems are coupled using a double light path module, and the two image planes are conjugated to realize point-to-point correspondence.

It should be noted that a majority of previously reported 4D systems are based on combining reflectance spectra and 3D spatial data. However, for substances that can be excited to fluoresce, the fluorescent properties provide additional information about the sample under study, and thus an HSI system that integrates reflectance and fluorescence imaging can be a powerful characterization tool. Moreover, the fluorescence spectrum offers high sensitivity and specificity. To realize fluorescence HSI, an extra laser source is required for the fluorescence excitation, and it is therefore appropriate to use a 3D reconstruction method based on laser irradiation, as the laser can be used for both fluorescence excitation and stereo matching. Kim et al. reported an end-to-end measurement system for capturing reflectance and UV fluorescence spectral data from a 3D object using compressive sensing and laser scanning [7]. However, the system suffered from a low depth accuracy of 0.1 mm, restricted by its 3D imaging technique. We recently proposed a fluorescent line-scan 4D imager with spatial reconstruction based on line structured light stereo vision [35]. As such, it represents our initial exploration of fluorescent 4D imaging, in which we demonstrated that the combination of line-based 3D reconstruction and a fluorescent hyperspectral line-scanner can be an appropriate approach for fluorescent 4D imaging. However, this laboratory prototype has intrinsic limitations and difficulties. For example, an extra motion platform is required to move the measured objects and enable complete laser scanning; collected reflectance spectra cannot be mapped to the 3D data; and it is challenging to use for long-distance measurements.

In this paper, to go beyond proof-of-concept and to develop a 4D system with improved imaging performance that is suitable for practical use, we propose and demonstrate a new 4D dual-mode staring hyperspectral-depth imager (DSHI) based on line laser binocular stereo vision and staring dispersive hyperspectral imaging. Under this configuration, reflectance spectra, fluorescence spectra and 3D spatial data of the object under investigation are fused together to form a dual-mode 4D model. It overcomes the limitation of a single spectral type and other difficulties of our previous system, and realizes staring in-situ imaging, which provides a higher degree of usability and a broader range of potential applications. The working process of the DSHI includes three steps. First, through continuous scanning by a 405 nm laser line, 3D spatial data and fluorescence spectra are obtained using a binocular stereo vision system and a hyperspectral scanner, respectively. The guidance of the laser line in the binocular stereo vision greatly improves the accuracy of the 3D reconstruction. Second, with the laser source turned off, reflectance spectral data is collected by the hyperspectral scanner. Finally, based on a homographic transformation between the image planes of the cameras used in the 3D reconstruction and hyperspectral imaging, both fluorescence and reflectance spectral data are merged into the corresponding 3D point cloud, forming a complete dual-mode 4D model. The DSHI displays an excellent performance with a spectral resolution of 3 nm, a depth accuracy of 26.2 µm, and a fusion precision of 1.1 pixels. Several experiments based on a fluorescent figurine, real/plastic flowers and a clam have been carried out to demonstrate the validity and versatility of the proposed DSHI.

2. Theoretical background

2.1 Binocular line laser stereo vision system

The 3D coordinates of a point on the object surface are calculated using binocular line laser stereo vision. Figure 1(a) illustrates the 3D reconstruction: a 3D object point P is projected on left and right images at pixels p1(u1, v1) and p2 (u2, v2) through the optical centers, respectively.

Fig. 1. (a) Binocular line laser stereo vision 3D reconstruction. (b) Images captured by the left and right cameras. Epipolar and laser lines indicated in white dashed and purple lines, respectively.

The transformational correspondence between the 2D pixel p1(u1, v1) at the left image and the 3D coordinate (Xw, Yw, Zw) can be expressed as:

$$Z_c^1\left[ {\begin{array}{c} {u^1}\\ {v^1}\\ 1 \end{array}} \right] = {M^1}\left[ {\begin{array}{c} {X_w}\\ {Y_w}\\ {Z_w}\\ 1 \end{array}} \right] = \left[ {\begin{array}{cccc} {m_{11}^1}&{m_{12}^1}&{m_{13}^1}&{m_{14}^1}\\ {m_{21}^1}&{m_{22}^1}&{m_{23}^1}&{m_{24}^1}\\ {m_{31}^1}&{m_{32}^1}&{m_{33}^1}&{m_{34}^1} \end{array}} \right]\left[ {\begin{array}{c} {X_w}\\ {Y_w}\\ {Z_w}\\ 1 \end{array}} \right]$$
where M1 is the projection matrix of the left camera and m1 denotes its elements, which can be calculated through checkerboard calibration in advance [36]. Zc1 is a scale factor. Similarly, the correspondence between p2(u2, v2) and the 3D coordinate (Xw, Yw, Zw) can be described as:
$$Z_c^2\left[ {\begin{array}{c} {u^2}\\ {v^2}\\ 1 \end{array}} \right] = {M^2}\left[ {\begin{array}{c} {X_w}\\ {Y_w}\\ {Z_w}\\ 1 \end{array}} \right] = \left[ {\begin{array}{cccc} {m_{11}^2}&{m_{12}^2}&{m_{13}^2}&{m_{14}^2}\\ {m_{21}^2}&{m_{22}^2}&{m_{23}^2}&{m_{24}^2}\\ {m_{31}^2}&{m_{32}^2}&{m_{33}^2}&{m_{34}^2} \end{array}} \right]\left[ {\begin{array}{c} {X_w}\\ {Y_w}\\ {Z_w}\\ 1 \end{array}} \right]$$
where M2 is the projection matrix of the right camera and Zc2 is a scale factor. Combining Eqs. (1) and (2) and eliminating the scale factors, we obtain a linear equation for Xw, Yw, Zw:
$$\left[ {\begin{array}{ccc} {u^1 m_{31}^1 - m_{11}^1}&{u^1 m_{32}^1 - m_{12}^1}&{u^1 m_{33}^1 - m_{13}^1}\\ {v^1 m_{31}^1 - m_{21}^1}&{v^1 m_{32}^1 - m_{22}^1}&{v^1 m_{33}^1 - m_{23}^1}\\ {u^2 m_{31}^2 - m_{11}^2}&{u^2 m_{32}^2 - m_{12}^2}&{u^2 m_{33}^2 - m_{13}^2}\\ {v^2 m_{31}^2 - m_{21}^2}&{v^2 m_{32}^2 - m_{22}^2}&{v^2 m_{33}^2 - m_{23}^2} \end{array}} \right]\left[ {\begin{array}{c} {X_w}\\ {Y_w}\\ {Z_w} \end{array}} \right] = \left[ {\begin{array}{c} {m_{14}^1 - u^1 m_{34}^1}\\ {m_{24}^1 - v^1 m_{34}^1}\\ {m_{14}^2 - u^2 m_{34}^2}\\ {m_{24}^2 - v^2 m_{34}^2} \end{array}} \right]$$

Thus, the coordinate P(Xw, Yw, Zw) can be calculated from the corresponding matching pixel coordinates in the two images, which is the basis for calculating 3D spatial coordinates in binocular stereo vision. In other words, for a 2D pixel p1(u1, v1) in the left image, if we can find its corresponding pixel in the right image, its 3D coordinate can be calculated. The registration accuracy of the pixel correspondence between the left and right images is therefore crucial for the precision of the 3D reconstruction.
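To make the triangulation step concrete, the following minimal sketch solves the over-determined linear system of Eq. (3) in a least-squares sense. It assumes the 3×4 projection matrices M1 and M2 have already been obtained from checkerboard calibration; the function name and array layout are illustrative rather than the authors' implementation.

```python
import numpy as np

def triangulate(p1, p2, M1, M2):
    """Solve Eq. (3) for one matched pixel pair.

    p1, p2 : (u, v) pixel coordinates in the left and right images.
    M1, M2 : 3x4 projection matrices from checkerboard calibration [36].
    Returns the 3D point (Xw, Yw, Zw).
    """
    A = np.zeros((4, 3))
    b = np.zeros(4)
    for i, ((u, v), M) in enumerate([(p1, M1), (p2, M2)]):
        # u-equation: (u*m31 - m11)*Xw + (u*m32 - m12)*Yw + (u*m33 - m13)*Zw = m14 - u*m34
        A[2 * i] = u * M[2, :3] - M[0, :3]
        b[2 * i] = M[0, 3] - u * M[2, 3]
        # v-equation: (v*m31 - m21)*Xw + (v*m32 - m22)*Yw + (v*m33 - m23)*Zw = m24 - v*m34
        A[2 * i + 1] = v * M[2, :3] - M[1, :3]
        b[2 * i + 1] = M[1, 3] - v * M[2, 3]
    # Four equations, three unknowns: least squares gives the optimal point
    Xw, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    return Xw
```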

Stereo matching establishes pixel-wise correspondence between the left and right images [20]. Referring to Fig. 1(b), which shows example images captured by the left and right cameras: according to the epipolar constraint [20], for pixel p1(u1, v1) in the left image, the corresponding pixel in the right image lies along a line, known as the epipolar line, indicated in Fig. 1(b) (white dashed line). Thus, the search space for the corresponding pixel p2 of p1(u1, v1) is reduced from the entire image to a line. Furthermore, both p1 and p2 are located on the laser line, indicated in Fig. 1(b) (purple line), so the intersection of the laser line and the epipolar line in the right image is the pixel corresponding to p1(u1, v1), whose coordinate is p2(u2, v2). Thus, through continuous scanning of the laser line, a pixel-to-pixel correspondence between the two images can be established. Compared with other matching algorithms such as Semi-Global Matching (SGM) [37], stereo matching based on the laser line has a higher matching precision, and consequently realizes a higher 3D depth accuracy.
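As an illustration of this matching step, the sketch below finds the corresponding right-image pixel as the homogeneous-coordinate intersection of the epipolar line with the laser line. It assumes the stereo pair's fundamental matrix F is known from calibration and that the extracted laser-line centers in the right image have been fitted locally by a straight line (a, b, c); these assumptions simplify what is, in practice, an intersection with a curved laser stripe.

```python
import numpy as np

def match_on_laser_line(p1, F, laser_line_right):
    """Locate the right-image pixel corresponding to p1 on the laser line.

    p1               : (u1, v1) laser-line pixel in the left image.
    F                : 3x3 fundamental matrix of the calibrated stereo pair.
    laser_line_right : (a, b, c) with a*u + b*v + c = 0, a local straight-line
                       fit to the extracted laser-line centers in the right image.
    Returns (u2, v2), the intersection of the epipolar line and the laser line.
    """
    p1_h = np.array([p1[0], p1[1], 1.0])          # homogeneous left pixel
    epipolar_line = F @ p1_h                      # epipolar line in the right image
    intersection = np.cross(epipolar_line, laser_line_right)  # line x line = point
    return intersection[:2] / intersection[2]     # back to pixel coordinates
```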

2.2 Dispersive staring hyperspectral line-scanner

The optical path of the dispersive hyperspectral line-scanner is shown in Fig. 2; it consists of an imaging lens, a slit, a collimator lens, a prism-grating-prism (PGP) pair, a focusing lens, and a camera. After passing through the imaging lens, the incident light is focused at the image plane where the slit is placed. One line region of the incident light passes through the slit, is collimated by the collimator lens, and is then dispersed and expanded in the spectral range 450 nm to 800 nm by the PGP pair. After passing through the focusing lens, the light is focused on the image plane of the camera. The spectrum of the line region of the incident light is thus obtained. To perform staring hyperspectral imaging of the complete sample, a galvanometer mirror is usually placed between the imaging lens and the slit. As the angle of the incident light can be controlled by the galvanometer, successive line regions of the sample can be collected by the hyperspectral imager to form a complete hyperspectral image.

Fig. 2. Light path diagram of the hyperspectral line-scanner.

3. Experimental setup

3.1 Optical design and prototype

The DSHI system consists of three main parts: a focal laser line generation module, a binocular line laser stereo vision module, and a staring hyperspectral scanner. Its prototype is depicted in Fig. 3 from two observation angles. Figure 4(a) shows the optical paths. 405 nm light from the laser passes through a cylindrical lens, forming a laser line, then travels through a lens, a dichroic mirror, a galvanometer mirror, a collimating lens and an imaging lens, and is finally focused on the surface of the object under measurement. Referring to Figs. 4(b) and 4(c), which show detailed ray-traces of the 405 nm laser line generation light path in the xoz plane (side view) and yoz plane (top view): after passing through the cylindrical lens, the incident 405 nm light converges to a point in the yoz plane while diverging in the xoz plane, generating the focal laser line. After passing through the following three lenses, the laser line diverges in the yoz plane and converges to a point in the xoz plane, forming a focal line on the object surface.

Fig. 3. Prototype of the DSHI system displayed in two observation angles, including a focal laser line generation module, a binocular line laser stereo vision module, and a hyperspectral scanner.

Fig. 4. (a) Light path diagram of the DSHI. Experimental setup of the focal laser line generation module. L1, L2, L3, L5: imaging lens; L4, L7, L8: collimating lens; L6: focusing lens. F1, F2: band-pass filter; F3: long-pass filter. (b) side view; (c) top view of the light path for the focal laser line generation.

The object irradiated by the laser line is captured by the two cameras. Band-pass filters are placed in front of the imaging lenses to eliminate interference from environmental light. After image processing, the central pixels of the laser line in the image planes of the two cameras are extracted and used for binocular matching, and the 3D spatial data of the irradiated line region is calculated based on the triangulation principle [20]. Simultaneously, excited fluorescence from the irradiated line region returns along the original light path, passes through the dichroic mirror, and is captured by the hyperspectral scanner. Fluorescence spectral data of the irradiated line region is thus recorded. A long-pass filter is placed in front of the hyperspectral scanner to filter out the 405 nm excitation light. In this way, both fluorescent spectral data and the 3D spatial coordinates of the irradiated line region are obtained simultaneously. The galvanometer mirror is used to control which part of the sample is irradiated by the laser line, and by scanning the entire object, complete 3D spatial data and fluorescent spectra are obtained.

Following this, the line laser is turned off and the measurement of the reflectance spectra is done in an indoor LED lighting environment. Reflected light from the measurement object passes through the imaging lens, collimating lens and galvanometer mirror, and is collected by the staring hyperspectral scanner. In this manner, the reflectance spectrum of one line region of the object is obtained, and by changing the angle of the incident light using the galvanometer mirror, subsequent line regions of the measurement object are collected by the hyperspectral scanner, forming a complete reflectance spectral image. As the galvanometer mirror is operated in the same manner as in the fluorescence spectra acquisition, point-to-point correspondence between the reflectance and fluorescence spectral data in the image plane of the hyperspectral scanner is ensured.

Following completion of the above steps, the reflectance and fluorescence spectral data, as well as the 3D point cloud model of the object, are obtained. As the collection of reflectance and fluorescence spectra shares the same image plane, and as the galvanometer mirror is operated in the same manner, the two kinds of spectral data sets can subsequently be directly fused together. The fused spectral data is collectively referred to as “dual-mode spectral data”. The dual-mode spectral data is then assigned to the corresponding 3D point cloud as the fourth-dimensional data, thereby producing a complete dual-mode 4D model. The fusion process of the two types of data is described in the following section.

3.2 Data fusion process

The spectral and spatial data fusion process, illustrated in Fig. 5, is as follows. First, in order to establish a matching relation between the 3D point cloud and the spectral data, a checkerboard target is used in a unified geometrical calibration of the image planes of both the left camera and the hyperspectral scanner. The target is imaged by both the left camera and the hyperspectral scanner and every corner point, indicated by blue and green circles in Fig. 5, is retrieved. For each corner point cco (u, v) located in the left camera plane, we can locate its corresponding corner point hco (u, v) in the image plane of the hyperspectral scanner. A homographic transformation between the two image planes can then be calculated as:

$${c_{co}} = H \cdot {h_{co}},H = {c_{co}} \cdot {h_{co}}^{ - 1} = \left[ {\begin{array}{ccc} {0.9812}&{0.0011}&{0.0000}\\ {0.0034}&{0.9912}&{0.0000}\\ {0.0031}&{0.0014}&{0.9789} \end{array}} \right]$$
where H is the homographic matrix. In this way, the relationship between the two imaging channels is established. Therefore, for an arbitrary pixel h (u, v) located in the image plane of the hyperspectral scanner, we can locate its corresponding pixel c (u, v) in the camera image plane using the homographic transformation. Furthermore, the reflectance spectral data Ir and fluorescence spectral data If of pixel h (u, v) can be assigned to the corresponding pixel c (u, v). As discussed above regarding the binocular line laser stereo vision, each pixel c (u, v) in the image plane of the left camera corresponds to a unique 3D point P(Xw, Yw, Zw) (depicted in Fig. 5), and therefore the spectral data (Ir, If) can be assigned to the corresponding 3D points P(Xw, Yw, Zw) one by one, each represented as a dual-mode 4D data point P(Xw, Yw, Zw, Ir, If).
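A minimal sketch of this fusion step is given below, assuming the checkerboard corners have already been detected in both image planes as (N, 2) pixel arrays. The least-squares solve mirrors the matrix form of Eq. (4); in practice a robust estimator (e.g., OpenCV's findHomography) could be substituted, and the function names are illustrative only.

```python
import numpy as np

def fit_homography(h_corners, c_corners):
    """Estimate H such that c ~ H h (homogeneous coordinates), cf. Eq. (4).

    h_corners, c_corners : (N, 2) corner pixels in the hyperspectral-scanner
    and left-camera image planes, respectively.
    """
    n = len(h_corners)
    Hh = np.column_stack([h_corners, np.ones(n)])     # N x 3 homogeneous points
    Hc = np.column_stack([c_corners, np.ones(n)])
    X, _, _, _ = np.linalg.lstsq(Hh, Hc, rcond=None)  # Hh @ X ~ Hc
    return X.T                                        # so that H @ h ~ c

def map_spectral_pixel(h_uv, H):
    """Map a scanner pixel h(u, v) to its camera pixel c(u, v) via c = H h."""
    c = H @ np.array([h_uv[0], h_uv[1], 1.0])
    return c[:2] / c[2]
```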

Fig. 5. Schematic of data fusion process.

4. Performance test

4.1 Spectral resolution

The hyperspectral scanner records spatial data from a line region of the image along one dimension of its sensor, which serves as the pixel axis, and spectral data along the other dimension, which serves as the wavelength axis. To establish the relationship between pixel index and the corresponding wavelength value, and thus calculate the spectral resolution, spectral calibration is required prior to hyperspectral imaging. A Mercury-Argon calibration light source was used to irradiate the slit of the hyperspectral scanner, and the obtained spectral image is shown in Fig. 6(a). The calibration light source has several characteristic spectral peaks located at wavelengths 404.6 nm, 435.8 nm, 546.1 nm, 578.0 nm and 772.4 nm, seen as five distinct spectral lines in Fig. 6(a). The pixel indices corresponding to these wavelength values are 125, 217, 541, 635 and 1212, respectively. The relationship between the pixel index y and wavelength value λ can be described by a third-order polynomial equation:

$$\lambda = {a_0} + {a_1}y + {a_2}{y^2} + {a_3}{y^3}$$
where a0, a1, a2 and a3 are the calibration coefficients, which are calculated to be [a0, a1, a2, a3] = [362.51, 0.339, -1.37e-04, -5.71e-08] using the polynomial least-squares method [38]. The fitted curve of the polynomial equation is shown in Fig. 6(b), and the spectral curve of the Mercury-Argon calibration light source measured by our hyperspectral scanner following spectral calibration is shown in Fig. 6(c). The full width at half maximum of the spectral peak at 546.1 nm is about 3 nm, indicating that the spectral resolution of the system is 3 nm. To evaluate the spectral characterization performance of our system, we measured the spectrum of a standard diffuse plane using a commercial spectrometer (OceanOptics QE65PRO) and our DSHI system, respectively. Figure 6(d) shows the measured results. The spectral results measured by our DSHI system are consistent with the commercial spectrometer characterization results, and both have a shape similar to an LED-2700 K spectrum, demonstrating the accuracy of the spectral measurement of our self-developed HSI module.
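For reference, the wavelength calibration of Eq. (5) amounts to a standard polynomial least-squares fit; a short sketch using the peak positions quoted above is shown below. The number of detector rows (1280) is an assumed value for illustration only.

```python
import numpy as np

# Pixel indices of the Hg-Ar peaks and their reference wavelengths (Sec. 4.1)
pixel_index = np.array([125, 217, 541, 635, 1212])
wavelength_nm = np.array([404.6, 435.8, 546.1, 578.0, 772.4])

# Third-order least-squares fit of Eq. (5): lambda = a0 + a1*y + a2*y^2 + a3*y^3
a3, a2, a1, a0 = np.polyfit(pixel_index, wavelength_nm, 3)  # highest order first
wavelength_of = np.poly1d([a3, a2, a1, a0])

# Map every spectral pixel of the detector (assumed 1280 rows) to a wavelength
wavelength_axis = wavelength_of(np.arange(1280))
print([round(c, 6) for c in (a0, a1, a2, a3)])
```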

Fig. 6. (a) Spectral image of the Mercury Argon Calibration Source. (b) Relationship between the wavelengths and pixel indices as fitted by a third-order polynomial. (c) Spectral curve of the Mercury Argon Calibration Source measured by our hyperspectral scanner according to the wavelength calibration. (d) Comparison of spectra measured by a commercial spectrometer and our DSHI.

4.2 Depth accuracy

The 3D measurement precision can be tested using a standard planar board [27]. In an ideal reconstruction of the board, all points in the 3D point cloud lie in the same plane, known as the best-fitting plane. However, due to reconstruction errors, some of the calculated 3D points deviate from the best-fitting plane, and the standard deviation between the reconstructed 3D point cloud and its fitting plane reflects the accuracy of the 3D reconstruction. Figure 7 exhibits the residuals of the plane fitting as measured by our system, shown at different observation angles. The standard deviation is about 26.2 µm, indicating that the depth accuracy of the DSHI is 26.2 µm. Furthermore, we tested the plane measurement error of the binocular stereo vision without the guidance of the laser line in the same manner. In this case, the standard deviation is as large as 4.6 mm. As the reconstruction performance is poor, the corresponding residual plot is omitted. The results of these comparative experiments further demonstrate the effectiveness of the binocular line laser stereo vision in improving the 3D reconstruction precision.
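The depth-accuracy metric described above can be reproduced with a simple plane fit; the sketch below assumes the reconstructed board is roughly front-facing, so the plane can be parameterized as z = a·x + b·y + c, and reports the standard deviation of the orthogonal residuals.

```python
import numpy as np

def plane_fit_residual_std(points):
    """Fit a plane to an (N, 3) point cloud of a flat board and return the
    standard deviation of the point-to-plane distances (depth accuracy)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), _, _, _ = np.linalg.lstsq(A, z, rcond=None)   # z ~ a*x + b*y + c
    # Signed orthogonal distance of each point to the best-fitting plane
    residuals = (a * x + b * y + c - z) / np.sqrt(a**2 + b**2 + 1.0)
    return residuals.std()
```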

Fig. 7. Residuals of plane fitting, shown at different observation angles.

4.3 Fusion precision

To test the fusion precision of the dual-mode spectral and 3D spatial data, we present 4D imaging experiments on the standard checkerboard target used in Section 3.2. Following data fusion, both reflectance and fluorescence spectra are assigned to the corresponding points in the 3D point cloud model, forming complete dual-mode 4D points. For demonstration purposes, the reflectance and fluorescence spectral data of the 4D model are visualized separately, as shown in Figs. 8(a) and 8(b). The colors of the 3D point cloud were set according to the chromatic value of the specified wavelength, with brightness determined by the reflectance and fluorescence intensities, respectively. The 3D point cloud and the reflectance or fluorescence spectrum have one-to-one consistency, and all the squares are clearly visible at different wavelengths. The reflectance and fluorescence spectra of point A (indicated in Figs. 8(a) and (b), respectively) are shown in Fig. 8(c). Point A is located in the white region of the checkerboard, which is also coated with fluorescent paint. The reflectance spectrum of point A has a shape similar to the spectrum of an LED-2700 K lamp (typically used as an indoor lighting source), as the material at point A has a uniform reflectivity in the 400 - 800 nm range. The fluorescence spectrum of point A has a peak located at 450 nm, consistent with the characteristics of the fluorescent paint on the paper.
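One plausible way to render such single-wavelength views, not necessarily the authors' exact color mapping, is to convert the chosen wavelength to an approximate RGB hue and scale its brightness by the per-point intensity, as in the sketch below.

```python
import numpy as np

def wavelength_to_rgb(wl):
    """Rough piecewise-linear visible-spectrum hue for a wavelength wl (nm)."""
    if   380 <= wl < 440: r, g, b = (440 - wl) / 60, 0.0, 1.0
    elif 440 <= wl < 490: r, g, b = 0.0, (wl - 440) / 50, 1.0
    elif 490 <= wl < 510: r, g, b = 0.0, 1.0, (510 - wl) / 20
    elif 510 <= wl < 580: r, g, b = (wl - 510) / 70, 1.0, 0.0
    elif 580 <= wl < 645: r, g, b = 1.0, (645 - wl) / 65, 0.0
    elif 645 <= wl <= 780: r, g, b = 1.0, 0.0, 0.0
    else:                  r, g, b = 0.0, 0.0, 0.0
    return np.array([r, g, b])

def colorize_point_cloud(points_xyz, intensity, wl):
    """Attach an RGB color to each 3D point: hue from the chosen wavelength,
    brightness from the normalized spectral intensity at that wavelength."""
    brightness = intensity / intensity.max()
    rgb = wavelength_to_rgb(wl) * brightness[:, None]
    return np.hstack([points_xyz, rgb])        # (N, 6): x, y, z, r, g, b
```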

Fig. 8. (a) Reflectance 4D model and (b) fluorescence 4D model of the checkerboard. (c) Reflectance spectrum and fluorescence spectrum of point A (indicated in (a) and (b)).

Data fusion of the spectral and spatial data is based on the homographic matrix H, calculated in advance in the system calibration process. Suppose that c (u, v) and h (u, v) are the actual pixel coordinates in the image planes of the left camera and the hyperspectral scanner, respectively, and that c’ (u’, v’) is obtained by the homographic transformation c’ = Hh. The fusion precision can then be defined as the average Euclidean distance between c’ (u’, v’) and c (u, v) [30]:

$$\alpha = \frac{{\sum\limits_{n = 1}^N {\sqrt {{{({u^{\prime} - u} )}^2} + {{({v^{\prime} - v} )}^2}} } }}{N}$$
where N is the number of corner points of the checkerboard, and c (u, v) and c’ (u’, v’) are the coordinates of the corner points in the camera image plane and the corresponding points calculated by the homographic transformation, respectively. We find the fusion precision to be 1.1 pixels. We summarize the performance and characteristics of our DSHI system in Table 1.
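Eq. (6) can be evaluated directly from the detected corner pairs and the calibrated homography; a brief sketch (with illustrative function and variable names) is:

```python
import numpy as np

def fusion_precision(c_corners, h_corners, H):
    """Average Euclidean reprojection error of Eq. (6), in pixels.

    c_corners : (N, 2) corners detected in the left-camera image plane.
    h_corners : (N, 2) corresponding corners in the hyperspectral-scanner plane.
    H         : 3x3 homography estimated during calibration.
    """
    h_hom = np.column_stack([h_corners, np.ones(len(h_corners))])
    c_proj = (H @ h_hom.T).T                 # c' = H h for every corner
    c_proj = c_proj[:, :2] / c_proj[:, 2:3]  # normalize homogeneous coordinates
    return np.linalg.norm(c_proj - c_corners, axis=1).mean()
```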

Table 1. Performance and characteristics of the DSHI system

5. Results

5.1 4D imaging of a figurine

As a non-invasive technique, 4D hyperspectral imaging is promising for applications related to digital documentation of art and cultural objects. Various forms of spectroscopy such as reflectance, fluorescence or Raman spectroscopy have been utilized in museum labs to identify, analyze and record the material components of objects. 3D measurement can reconstruct the geometry of objects and locate repaired or damaged sections. To exemplify the potential of the DSHI in this field, we use the system on a fluorescent figurine.

Figure 9(a) shows the figurine, which has a yellow coat, a green ‘horn’ and bag, and light brown hair. The paint on the surface of the figurine can be excited to fluoresce. Figure 9(b) shows the obtained 3D point cloud model of the fluorescent figurine using artificial gray shading. Enlarged pictures of the mouth are shown in Figs. 9(c) and 9(d), where we see that the 3D point cloud is dense and smooth, and details in the reconstructed model are clearly visible. To obtain a complete 4D model, the reflectance and fluorescence spectra are assigned to the corresponding 3D points of the reconstructed model one by one to provide texture information. Figure 10(a) shows the reflectance 4D model at different wavelengths and different observation angles. The colors of the 3D point cloud were determined by the chromatic value of the specific wavelength, with the brightness determined by the reflectance intensity. To display the spectral data in detail, we chose four points A, B, C and D located at the head, hair, cheek, and bag regions as indicated in Fig. 9(b), and acquired their normalized reflectance spectral curves, shown in Fig. 10(b). The reflectance peaks are located at 572 nm, 598 nm, 600 nm, and 540 nm, respectively, consistent with the yellow, orange, red and green color characteristics of the indicated regions. Figure 10(c) shows the equivalent fluorescence 4D model of the figurine, with colors set according to the fluorescence wavelengths, and Fig. 10(d) shows the fluorescence spectral curves of points A, B, C and D, with fluorescence peaks located at 532 nm, 572 nm, 590 nm and 496 nm, respectively, consistent with the characteristics of yellow, orange, red, and green fluorescent paint.

Fig. 9. (a) Photograph and (b) 3D point cloud model with artificial gray shading of the fluorescent figurine. (c) Enlargement of the mouth region. (d) Distribution of the point cloud of the rectangular region indicated in (c).

Fig. 10. Reflectance (a) and fluorescence (c) 4D models of the fluorescent figurine observed at different angles and wavelengths. Normalized reflectance (b) and fluorescence (d) spectra of points A-D indicated in Fig. 9(b).

As both reflectance and fluorescence spectral data can be recorded in the 4D model, our proposed DSHI can generate detailed visualization of spectral signatures of the reconstructed 3D surface model. This makes it suitable for, e.g., digital investigation, documentation and exhibition of geometry, color characteristics and material properties of objects of artistic or cultural value.

5.2 4D imaging of real and plastic sunflowers

Another interesting field of application for the DSHI system is non-invasive plant component analysis. Color reflectance is a crucial aspect of a plant’s phenotype, as it can function as camouflage or serve to attract pollinating insects. The reflectance spectrum is a good indicator of a plant’s color characteristics, as well as of the optical absorption properties of materials such as chlorophyll and pigments. The fluorescence spectrum can be used to study the chemical composition of plants, especially for materials that can only be detected and identified by fluorescent excitation. Mapping reflectance and fluorescence spectra to the spatial dimensions can be helpful, and even essential, for analyzing the content and distribution of chemical components of a plant, as well as for evaluating its growth or health condition.

We study a real sunflower, shown in Fig. 11(a), which has rich morphological details and color characteristics. Figure 11(b) shows the obtained 3D surface model with artificial blue shading, with enlarged pictures shown in Figs. 11(c) and 11(d). The details, textures and outlines of the enlarged region are clear, with a dense 3D point cloud. The fusion of the reflectance and fluorescence spectra with the 3D model enables visualization of morphological features and spectral characteristics. Figures 11(e-h) show the reflectance 4D model at different angles and wavelengths, with colors set according to the reflectance wavelengths 630 nm, 540 nm, 570 nm and 590 nm. Figure 11(i) shows the fluorescence 4D model at 670 nm. Red fluorescence indicates chlorophyll, which is distributed in the leaf and at the center of the flower disk. It should be mentioned that the chlorophyll at the center of the disk cannot be detected by the reflectance spectra alone, as the brown disk has strong optical absorption, which interferes with the detection of chlorophyll. In the fluorescence 4D model, on the other hand, chlorophyll can be easily detected and its distribution can be visualized. Therefore, the combination of reflectance and fluorescence spectra with the 3D surface model enables precise and comprehensive analysis of material properties and distribution.

Fig. 11. Photograph (a) and 3D point cloud model (b) with artificial blue shading of a real sunflower. (c) Enlargement of the rectangular region indicated in (b). (d) Distribution of point cloud of the rectangular region indicated in (c). (e-h) Reflectance 4D model of the real sunflower at different angles and wavelengths. (i) Fluorescence 4D model.

For comparison, 4D imaging of a plastic sunflower was also carried out. The photograph, the reconstructed 3D model with blue shading, and enlargements are shown in Figs. 12(a-d), respectively. Figures 12(e-g) show the reflectance 4D model of the plastic sunflower at different observation angles and different wavelengths. Figure 12(h) shows the fluorescence 4D model, in which the color of each 3D point is determined by the chromatic value of the wavelength at the fluorescence spectral peak. The flower petals and the center of the disk exhibit yellow and green fluorescence after being excited by the 405 nm laser, which is caused by the use of fluorescent chemical substances in the production of the plastic sunflower.

Fig. 12. Photograph (a) and 3D point cloud model (b) with artificial blue shading of the artificial sunflower. (c) Enlargement of the rectangular region indicated in (b). (d) Point cloud distribution of the rectangular region indicated in (c). (e-h) Reflectance 4D model of the artificial sunflower at different angles and wavelengths. (i) Fluorescence 4D model.

Figures 13(a) and 13(d) show the full dual-mode 4D models of the real and plastic sunflowers, respectively, i.e., where both reflectance and fluorescence spectral information is presented in the point cloud model. For points without fluorescent characteristics, the colors were determined by the chromatic value of the wavelength of the reflectance spectral peak, which reflects their original color properties. For points with fluorescent characteristics, the colors were determined by the chromatic value of the wavelength of the fluorescence spectral peak, to characterize the corresponding fluorescence properties. Points A, B, C and D are located at the leaf, a petal, the edge of the flower disk, and the center of the flower disk of the real and plastic sunflowers, respectively. Normalized reflectance and fluorescence spectra of the indicated regions are displayed in Figs. 13(b), (c), (e) and (f). As shown in Figs. 13(b) and 13(e), the spectral shapes of the real and plastic sunflowers at the same indicated positions are markedly different, which is due to different color characteristics and physicochemical properties. The reflectance spectra of the real flower have a wider half-width, indicating a softer color of the real sample, which is consistent with our observation. Furthermore, for point A located in the green leaf of the real flower, the reflectance spectrum has an absorption peak at 670 nm, and the fluorescence spectrum has a main fluorescence peak at 680 nm with an additional peak at 725 nm. These spectral characteristics are consistent with the properties of chlorophyll. However, similar optical characteristics are not seen in the reflectance and fluorescence spectra of point A indicated in Fig. 13(d), whose fluorescence spectrum only has a peak located at 580 nm. In Fig. 13(f), the fluorescence spectrum of point D (indicated in Fig. 13(d)) has two fluorescence peaks at 530 nm and 670 nm, which we attribute to characteristics of industrial chemicals. Point D located at the same position on the real flower (indicated in Fig. 13(a)) shows the fluorescence characteristics of chlorophyll.

Fig. 13. (a) Dual-mode 4D model of the real sunflower. (b) The normalized reflectance spectra and (c) normalized fluorescence spectrum of points indicated in (a). (d) Dual-mode 4D model of the artificial sunflower. (e) The normalized reflectance spectra and (f) normalized fluorescence spectrum of points indicated in (d).

5.3 4D imaging of a clam

For most 4D imaging systems, it is challenging to image samples with high optical absorption. For instance, for 4D systems in which the 3D reconstruction is based on structured light stereo vision [29,30], the projected structured light stripes will be only faintly reflected by such samples, with low-accuracy 3D reconstruction as a result. In contrast, as our proposed DSHI is based on binocular line laser stereo vision, the influence of light absorption or transmission by the measured samples can be overcome, since the high-power laser line used in the binocular system is still clearly reflected by surfaces with widely different material properties. Thus, the DSHI has a wider range of applications.

As an example, we performed 4D imaging of a Tegillarca granosa clam. Its blood-red soft interior has strong light absorption characteristics, while the white shell reflects light strongly, which makes 4D imaging with high spatial resolution challenging. Figure 14(a) shows the reflectance 4D model of the granosa clam, with a clear nodal shape texture. The texture information can be used for identification of the place of origin. The corresponding normalized reflectance spectral curves of points A and B, indicated in bright and dark regions of Fig. 14(a), are displayed in Fig. 14(b). These are quite flat in the visible range, with higher reflectivity in the bright region. Figure 14(c) shows the 4D model of the interior structure of the granosa clam. The spectrum of point C indicated in Fig. 14(c) is shown in Fig. 14(d), with two light absorption peaks located at 540 nm and 576 nm, corresponding to the characteristics of hemoglobin. For quantitative analysis of the hemoglobin distribution on the clam, we define an optical absorption intensity of hemoglobin (OAIH) factor as:

$$OAIH = \frac{{{\alpha _{576}}}}{{{\alpha _{560}}}}$$
where αλ is the normalized reflectance at wavelength λ, and 560 nm and 576 nm are the reflectance and absorption peaks of the normalized reflectance curve, respectively. The color of the 4D model in Fig. 14(c) is set by the value of the OAIH factor, and the brightness is negatively associated with the intensity of light absorption. The spectral information of the hemoglobin can be used to evaluate the health state, and thus the quality, of the granosa clam. 4D imaging using the DSHI system can thus provide scientific data related to quality, growth and origin for the study of granosa clam breeding, and can similarly be applied to the analysis of other special biological samples, with broad application prospects.
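Eq. (7) is a simple per-point band ratio; the sketch below evaluates it over a set of normalized reflectance spectra using the calibrated wavelength axis (array shapes and names are illustrative).

```python
import numpy as np

def oaih(reflectance, wavelengths):
    """Optical absorption intensity of hemoglobin (OAIH), Eq. (7).

    reflectance : (N, B) normalized reflectance spectra of N surface points.
    wavelengths : (B,) calibrated wavelength axis in nm.
    Returns an (N,) array of alpha_576 / alpha_560 ratios used to color the model.
    """
    i560 = np.argmin(np.abs(wavelengths - 560.0))   # local reflectance peak
    i576 = np.argmin(np.abs(wavelengths - 576.0))   # hemoglobin absorption peak
    return reflectance[:, i576] / reflectance[:, i560]
```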

Fig. 14. Reflectance 4D model of the shell (a) and the interior (b) structure of a Tegillarca granosa clam in false-color representation. (d) Normalized reflectance spectra of points A and B indicated in (a) and point C indicated in (c).

6. Summary

In this paper, we propose a 4D dual-mode staring hyperspectral-depth imager (DSHI) that can acquire unified reflectance spectra, fluorescence spectra and 3D spatial data. The DSHI system consists of three main parts: a 405 nm focal laser line generation module for fluorescence excitation and binocular stereo matching of the irradiated line region; a binocular line laser stereo vision module for 3D point cloud model generation; and a staring hyperspectral scanner used to collect both reflectance and fluorescence spectral data. Based on a homographic transformation between the image planes of the two latter imaging systems, the two kinds of spectral data can be assigned to the corresponding points of the 3D model, producing a complete dual-mode 4D model. The spectral resolution and depth accuracy of the proposed DSHI are 3 nm and 26.2 µm, respectively, rivaling that of commercial spectrometers. The fusion precision of the spectral and spatial data can be controlled at the pixel level. We have presented a set of experiments on a fluorescent figurine, real and plastic sunflowers and a clam to demonstrate the versatility of the DSHI system, which has substantial potential in application areas such as digital documentation of art and cultural objects, precision agriculture and biological analysis.

Funding

National Natural Science Foundation of China (11621101, 61774131); Ningbo Science and Technology Project (2020Z077, 2021Z076); Key Research and Development Program of Zhejiang Province (2021C03178); National Key Research and Development Program of China (2018YFC1407503).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. X. Yao, S. Li, and S. He, “Dual–Mode Hyperspectral Bio–Imager with a Conjugated Camera for Quick Object–Selection and Focusing,” Prog. Electromagn. Res. 168, 133–143 (2020). [CrossRef]  

2. X. Liu, Z. Jiang, T. Wang, F. Cai, and D. Wang, “Fast hyperspectral imager driven by a low–cost and compact galvo–mirror,” Optik 224, 165716 (2020). [CrossRef]  

3. Q. Li, Y. Wang, H. Liu, X. He, D. Xu, J. Wang, and F. Guo, “Leukocyte cells identification and quantitative morphometry based on molecular hyperspectral imaging technology,” Computerized Medical Imaging and Graphics 38(3), 171–178 (2014). [CrossRef]  

4. X. Yao, F. Cai, P. Zhu, H. Fang, J. W. Li, and S. He, “Non–invasive and rapid pH monitoring for meat quality assessment using a low–cost portable hyperspectral scanner,” Meat Sci. 152, 73–80 (2019). [CrossRef]  

5. Y. Xu, J. Li, C. Bai, Y. Liu, and J. Wang, “5D–fusion sensing via interference illumination and polarization imaging,” Opt. Lett. 46(19), 4976–4979 (2021). [CrossRef]  

6. J. Gomez-Sanchis, J. Blasco, E. Soria-Olivas, D. Lorente, P. Escandell-Montero, J. M. Martinez-Martinez, M. Martinez-Sober, and N. Aleixos, “Hyperspectral LCTF–based system for classification of decay in mandarins caused by Penicillium digitatum and Penicillium italicum using the most relevant bands and non–linear classifiers,” Postharvest Biol. Technol. 82, 76–86 (2013). [CrossRef]  

7. M. H. Kim, H. Rushmeier, J. Dorsey, T. A. Harvey, R. O. Prum, D. S. Kittle, and D. J. Brady, “3D Imaging Spectroscopy for Measuring Hyperspectral Patterns on Solid Objects,” ACM Trans. Graphics 31(4), 1–3 (2012). [CrossRef]  

8. C. Wu and C. Yan, “Imaging spectrometer optical design based on prism–grating–prism dispersing device,” Journal of Applied Optics 33(1), 37–43 (2012).

9. M. E. Klein, B. J. Aalderink, R. Padoan, G. de Bruin, and T. A. G. Steemers, “Quantitative hyperspectral reflectance imaging,” Sensors 8(9), 5576–5618 (2008). [CrossRef]  

10. S. Kong, M. E. Martin, and T. Vo-Dinh, “Hyperspectral fluorescence imaging for mouse skin tumor detection,” Etri Journal 28(6), 770–776 (2006). [CrossRef]  

11. R. Pettersen, G. Johnsen, P. Bruheim, and T. Andreassen, “Development of hyperspectral imaging as a bio–optical taxonomic tool for pigmented marine organisms,” Org. Divers. Evol. 14(2), 237–246 (2014). [CrossRef]  

12. J. Li, W. Jiang, X. Yao, F. Cai, and S. He, “Fast quantitative fluorescence authentication of milk powder and vanillin by a line–scan hyperspectral system,” Appl. Opt. 57(22), 6276–6282 (2018). [CrossRef]  

13. J. Luo, H. D. Zhang, E. Forsberg, S. Hou, S. Li, Z. Xu, X. Chen, X. H. Sun, and S. L. He, “Confocal hyperspectral microscopic imager for the detection and classification of individual microalgae,” Opt. Express 29(23), 37281–37301 (2021). [CrossRef]  

14. S. Lin, X. Bi, S. Zhu, H. Yin, Z. Li, and C. Chen, “Dual–type hyperspectral microscopic imaging for the identification and analysis of intestinal fungi,” Biomed. Opt. Express 9(9), 4496–4508 (2018). [CrossRef]  

15. C. Jiao, Z. Xu, Q. Bian, E. Forsberg, Q. Tan, X. Peng, and S. He, “Machine learning classification of origins and varieties of Tetrastigma hemsleyanum using a dual–mode microscopic hyperspectral imager,” Spectrochim. Acta, Part A 261, 1 (2021). [CrossRef]  

16. T. Kuhns and D. W. Messinger, “Spectral Phenomenology of Historical Parchments and Inks to Aid Cultural Heritage Imaging System Development,” 24th SPIE Conference on Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery 10644 (2018).

17. M. Sighicelli, F. Colao, A. Lai, and S. Patsaeva, “Monitoring Post–Harvest Orange Fruit Disease by Fluorescence and Reflectance Hyperspectral Imaging,” 1st International Symposium on Horticulture in Europe 817, 277–284 (2009).

18. S. Foix, G. Alenya, and C. Torras, “Lock-in Time-of-Flight (ToF) Cameras: A Survey,” IEEE Sens. J. 11(9), 1917–1926 (2011). [CrossRef]  

19. L. Nalpantidis, G. C. Sirakoulis, and A. Gasteratos, “Review of Stereo Vision Algorithms: from Software to Hardware,” Int. J. Optomechatronics 2(4), 435–462 (2008). [CrossRef]  

20. U. R. Dhond and J. K. Aggarwal, “Structure from Stereo – A Review,” IEEE Trans. Syst., Man, Cybern. 19(6), 1489–1510 (1989). [CrossRef]  

21. S. Zhang, “High–speed 3D shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]  

22. J. S. Hyun, G. T. C. Chiu, and S. Zhang, “High–speed and high–accuracy 3D surface measurement using a mechanical projector,” Opt. Express 26(2), 1474–1487 (2018). [CrossRef]  

23. K. Zhang, M. Yan, T. Huang, J. Zheng, and Z. Li, “3D reconstruction of complex spatial weld seam for autonomous welding by laser structured light scanning,” J. Manuf. Processes 39, 200–207 (2019). [CrossRef]  

24. L. Luo, X. Chen, Z. Xu, S. Li, Y. Sun, and S. He, “A Parameter–Free Calibration Process for a Scheimpflug LIDAR for Volumetric Profiling,” Prog. Electromagn. Res. 169, 117–127 (2020). [CrossRef]  

25. H. Aasen, A. Burkart, A. Bolten, and G. Bareth, “Generating 3D hyperspectral information with lightweight UAV snapshot cameras for vegetation monitoring: From camera calibration to quality assurance,” ISPRS J. Photogramm. Remote Sens. 108, 245–259 (2015). [CrossRef]  

26. E. Ivorra, S. Verdu, A. J. Sanchez, R. Grau, and J. M. Barat, “Predicting Gilthead Sea Bream (Sparus aurata) Freshness by a Novel Combined Technique of 3D Imaging and SW–NIR Spectral Analysis,” Sensors 16(10), 1735 (2016). [CrossRef]  

27. H. Zhao, Z. Wang, G. Jia, X. Li, and Y. Zhang, “Field imaging system for hyperspectral data, 3D structural data and panchromatic image data measurement based on acousto–optic tunable filter,” Opt. Express 26(13), 17717–17730 (2018). [CrossRef]  

28. H. Zhao, L. Xu, S. Shi, H. Jiang, and D. Chen, “A High Throughput Integrated Hyperspectral Imaging and 3D Measurement System,” Sensors 18(4), 1068 (2018). [CrossRef]  

29. S. Heist, C. Zhang, K. Reichwald, P. Kuhmstedt, G. Notni, and A. Tuennermann, “5D hyperspectral imaging: fast and accurate measurement of surface shape and spectral characteristics using structured light,” Opt. Express 26(18), 23366–23379 (2018). [CrossRef]  

30. J. Luo, S. Li, E. Forsberg, and S. He, “4D surface shape measurement system with high spectral resolution and great depth accuracy,” Opt. Express 29(9), 13048–13070 (2021). [CrossRef]  

31. J. Li, Y. Zheng, L. Liu, and B. Li, “4D line–scan hyperspectral imaging,” Opt. Express 29(21), 34835–34849 (2021). [CrossRef]  

32. H. Rueda-Chacon, J. F. Florez-Ospina, D. L. Lau, and G. R. Arce, “Snapshot Compressive ToF plus Spectral Imaging via Optimized Color-Coded Apertures,” IEEE Trans. Pattern Anal. Mach. Intell. 42(10), 2346–2360 (2020). [CrossRef]  

33. W. Feng, H. Rueda, C. Fu, G. R. Arce, W. He, and Q. Chen, “3D compressive spectral integral imaging,” Opt. Express 24(22), 24859–24871 (2016). [CrossRef]  

34. H. Rueda, C. Fu, D. L. Lau, and G. R. Arce, “Single Aperture Spectral plus ToF Compressive Camera: Toward Hyperspectral plus Depth Imagery,” IEEE J. Sel. Top. Signal Process. 11(7), 992–1003 (2017). [CrossRef]  

35. J. Luo, E. Forsberg, S. Fu, and S. L. He, “High-precision four-dimensional hyperspectral imager integrating fluorescence spectral detection and 3D surface shape measurement,” Appl. Opt. 61(10), 2542–2551 (2022). [CrossRef]  

36. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

37. P. Bu, H. Zhao, J. Yan, and Y. Jin, “Collaborative semi–global stereo matching,” Appl. Opt. 60(31), 9757–9768 (2021). [CrossRef]  

38. J. H. Cho, P. J. Gemperline, and D. Walker, “Wavelength calibration method for a CCD detector and multichannel fiber–optic probes,” Appl. Spectrosc. 49(12), 1841–1845 (1995). [CrossRef]  
