
4D surface shape measurement system with high spectral resolution and great depth accuracy

Open Access

Abstract

A 4D surface shape measurement system that combines spectral detection and 3D surface morphology measurements is proposed, which can realize high spectral resolution and great depth accuracy (HSDA system). A staring hyperspectral imager based on a grating generates precise spectral data, while a structured light stereovision system reconstructs target morphology as a 3D point cloud. The systems are coupled using a double light path module, which realizes point-to-point correspondence of the systems' image planes. The spectral and 3D coordinate data are fused and transformed into a 4D data set. The HSDA system has excellent performance with a spectral resolution of 3 nm and depth accuracy of 27.5 μm. A range of 4D imaging experiments are presented to demonstrate the capabilities and versatility of the HSDA system, which show that it can be used in a broad range of application areas, such as fluorescence detection, face anti-spoofing, physical health state assessment and green plant growth condition monitoring.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Spectral characterization plays a critical role in molecular identification, which has a broad range of application areas, e.g., on-site environmental remote sensing, food safety control, soil classification, gas detection, biomedicine, etc. [1–8]. It can provide a rich set of information related to the vibrational modes of molecular bonds and to light absorption. Hyperspectral imaging adds spectral information to traditional two-dimensional images, and produces a hyperspectral cube that makes it possible to simultaneously obtain geometric and spectral characteristics of an object. This is useful for improving accuracy in the discrimination of biological properties, such as the identification of cancerous tissue and diseases [9–11]. Hyperspectral imaging can be categorized into three types depending on the scanning mode: point scanning, push-broom scanning, and staring scanning. A point scanning imager obtains spectral information point by point, while a push-broom scanning imager does so line by line along one axis. Point scanning and push-broom scanning imagers are relatively mature techniques. However, in order to generate a complete two-dimensional hyperspectral image, an extra motion platform is required to move the camera such that all spatial points or lines of the measured object can be scanned to enable image stitching. This limits their application in multi-sensor collaboration and fusion, as such motion of the camera would break the spatial relations between the various sensors. On the other hand, a staring scanning imager can capture spectral images without platform scanning or self-motion [12,13]. Compared with the point scanning and push-broom scanning imaging techniques, staring scanning has better frame-imaging characteristics and performs well in multi-sensor systems.

A drawback of hyperspectral imaging is that depth information of the object is lost in hyperspectral cubes, which impedes spectral distribution analysis of three-dimensional surfaces. Three-dimensional (3D) reconstruction technologies, such as binocular vision [14,15], structured light stereovision [16,17], time of flight (TOF) [18] and LIDAR [19], are common morphological measurement methods that provide depth information. Binocular vision systems capture images from two different perspectives and detect corresponding points in those images to perform a 3D coordinate calculation based on triangulation. However, the 3D reconstruction quality can be poor if the measured surface lacks rich texture. It is furthermore difficult for such a technique to achieve great depth accuracy, as it relies solely on image processing to solve the stereo correspondence. In TOF systems, an active emitter is used to modulate light in the time domain; back-scattered light from the target is collected by an optical sensor and depth information is obtained by calculating the time delay of the signal [18]. However, this method suffers from low accuracy (limited to the centimeter range) and high cost. Structured light stereovision systems are similar to binocular stereovision systems, with the difference that one camera is replaced by a projector, which is used to project structured patterns with encoded information onto the target. The camera captures images that carry the structured patterns, which are then used to solve the stereo correspondence between the camera and the projector after decoding. The 3D coordinates of the target are calculated by triangulation between the camera, the projector and the target [20]. Compared with binocular vision and TOF, the reconstruction precision and resolution of structured light stereovision technology are markedly higher.

Higher-dimensional optical detection systems have been studied extensively in laboratory settings, and 3D surface shape detection combined with spectral analysis has been discussed at length in the literature [21–26]. Behmann et al. developed a method to generate hyperspectral 3D plant models, which has potential applications in automated plant phenotyping [23]. Huijie Zhao et al. developed integrated systems for hyperspectral imaging and 3D measurement [24,25]; however, the spatial resolution and point cloud density of the 3D reconstruction are relatively low in these systems. Heist et al. developed a fast and accurate measurement system for surface and spectral characteristics [26]; however, its spectral resolution of 15 nm is low compared with its depth accuracy, being determined by the characteristics of its tunable filter.

In this paper we propose, analyze and demonstrate a four-dimensional (4D) hyperspectral imaging system, based on a structured light stereovision system and a staring hyperspectral imager, which has high spectral resolution and great depth accuracy (termed the HSDA system). The staring hyperspectral imager is based on a grating and can realize high spectral precision, enabling it to effectively monitor optical material properties. The structured light stereovision system provides high spatial 3D reconstruction resolution, enabling precise measurements of microscale morphological variations. The HSDA system utilizes a double light path module to couple the two systems, which facilitates point-to-point correspondence between the image planes of the structured light stereovision system and the hyperspectral imager. The spectral data is merged into the corresponding 3D point cloud of the reconstructed model, forming the 4D data. The depth accuracy and spectral resolution of the HSDA system are 27.5 μm and 3 nm, respectively. In order to prove the system's performance, it was used to characterize a circle array plane board, a fluorescent car model, a human face, a face model, a green plant, and a plastic plant. The results demonstrate that our system performs well in spectral detection on the 3D surface morphology of targets. The HSDA system greatly improves the accuracy of physical and chemical state analysis and substance identification.

The paper is structured as follows: Section 2 introduces the HSDA system and describes its principle of operation and a verification experiment. Section 3 presents several experiments that demonstrate the HSDA system's performance: 4D imaging of a car model with fluorescent defects, of a human face and a face model, as well as of a green plant and a plastic plant. The results of these experiments verify the HSDA system's stability and validity, and show that it is applicable to a wide range of imaging applications.

2. Methods

2.1 System setup and calibration

The optical path system and experimental installation were set up in-house and are shown in Fig. 1(a). The two key parts of the HSDA system are a staring hyperspectral imager and a structured light stereovision system. The two share the same front optical module, which consists of two aspheric achromatic lenses (AAL) that make up an infinity-corrected optical system, and a beam splitter (BS). The schematic diagram of the HSDA system is shown in Fig. 1(b). Light from the measured object enters the infinity-corrected optical system and is collimated. The collimated incident beam is then divided into two optical paths by the beam splitter. 90% of the light is reflected toward a galvanometer mirror (Thorlabs, GVS012) and then into the staring hyperspectral imager, while the remaining 10% is directed to a monochrome imaging CMOS camera (HIKVISION, MV-CE050-30GM) to form a grayscale image that is used for 3D reconstruction by the structured light stereovision system. The splitting ratio is set to balance camera exposure between the staring hyperspectral imager and the structured light stereovision system. The two conjugate light paths of the HSDA system realize a point-to-point correspondence between the image planes of the structured light stereovision system and the hyperspectral imager, which is key for precise 4D data fusion.


Fig. 1. (a) The HSDA system, including a front optical module, hyperspectral imager and structured light stereovision. (b) The schematic diagram of HSDA system. (c) The physical map of HSDA system.


The staring hyperspectral imager consists of a galvanometer mirror, an aspheric achromatic lens, a slit, a prism-grating-prism (PGP) pair, and a monochrome imaging CMOS camera (ZWO, ASI174). The collimated light from the front optical module is reflected by the galvanometer mirror and passes through an aspheric achromatic lens (AAL-3), forming an image on the image plane where a 50 μm slit is placed to select a line region of the image. The light of the line region is collimated by an aspheric achromatic lens (AAL-4) and reaches the PGP pair. It is dispersed and expanded over the 450-800 nm spectral range by the PGP pair, and then focused onto CMOS-2 by an aspheric achromatic lens (AAL-5). In this way, the spectrum of one line region of the image is obtained. The angle of the incident light can then be changed by the galvanometer mirror, so that different line regions of the image pass through the slit in succession and form a complete hyperspectral image. Thus, unlike traditional push-broom hyperspectral imagers, our staring hyperspectral imager can acquire the hyperspectral cube in a static state.

The structured light stereovision system consists of an aspheric achromatic lens, a monochrome imaging CMOS camera and a projector. The DMD of the projector projects structured light stripes onto the object under investigation; the reflected light enters the front optical module, is collimated, and is then divided by the beam splitter. One path of the collimated light passes through the aspheric achromatic lens (AAL-2) and is imaged onto CMOS-1. As shown in Fig. 1(c), the HSDA system can be assembled in a 40 cm × 40 cm × 40 cm box, which is portable and convenient for general measurements.

The hyperspectral imager records spatial data from a line region of the image in one dimension (serving as the spatial axis), and spectral data in the other dimension (serving as the spectral axis). Prior to hyperspectral imaging, wavelength calibration of the spectral images is required to enable conversion of the pixel index along the spectral axis into wavelengths. For this purpose, a Mercury Argon Calibration Source is used to irradiate the slit of the hyperspectral imager. The calibration source emits multiple extremely narrow spectral lines. The spectral image in Fig. 2(a) shows seven distinct spectral lines located at wavelengths λ of 404.6 nm, 435.8 nm, 546.1 nm, 578.0 nm, 750.4 nm, 763.5 nm, and 772.4 nm, corresponding to the characteristic spectral peaks of the light source. The corresponding pixel indices, y, are 54, 147, 553, 646, 1023, 1058, and 1082, respectively. The relationship between the wavelength and the pixel coordinates can be described by the following polynomial equation [27]:

$$\lambda = {a_0} + {a_1}y + {a_2}{y^2} + {a_3}{y^3}$$
where λ is the calibrated wavelength vector, y is the pixel index vector, and a0, a1, a2, and a3 are the calibration coefficients. Using a polynomial least-squares fit, we find the calibration coefficients to be [a0, a1, a2, a3] = [-398.0190, 0.6024, 1.2443e-04, -7.7731e-08]. The fitted curve of the polynomial equation is shown in Fig. 2(b) for the spectral range 400-800 nm. As seen in Fig. 2(c), the full width at half maximum of the 546.1 nm spectral line is about 3 nm, indicating that the spectral resolution of our system is 3 nm.
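As an illustration, a minimal numerical sketch of this calibration fit is given below, using the reference wavelengths and pixel indices listed above; the length of the spectral axis used in the example is an assumption.

```python
# Minimal sketch of the third-order wavelength calibration of Eq. (1),
# using the Mercury Argon reference lines listed in the text.
import numpy as np

# Reference lines (nm) and their pixel indices on the spectral axis.
wavelengths = np.array([404.6, 435.8, 546.1, 578.0, 750.4, 763.5, 772.4])
pixel_index = np.array([54, 147, 553, 646, 1023, 1058, 1082])

# Least-squares fit of lambda = a0 + a1*y + a2*y^2 + a3*y^3.
# np.polyfit returns coefficients from highest to lowest order.
a3, a2, a1, a0 = np.polyfit(pixel_index, wavelengths, deg=3)

# Convert a full spectral axis (one pixel per row, length assumed) to wavelengths.
y = np.arange(1216)
lam = a0 + a1 * y + a2 * y**2 + a3 * y**3
print(a0, a1, a2, a3)
```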


Fig. 2. (a) The spectral image of the Mercury Argon Calibration Source. (b) The relationship between the wavelength and pixel index fitted by third-order polynomial. (c) The spectral curve of the Mercury Argon Calibration Source measured by our hyperspectral imager after wavelength calibration.


2.2 Principle of HSDA system

The principle of operation of the HSDA system is illustrated in Fig. 3. We first analyze the two light paths (red and blue) on the left. The incident light is divided into two beams by the BS, one of which enters the CMOS camera (CMOS-1 in Fig. 1), while the other enters the hyperspectral imager. After the CMOS camera captures all images of the measured object with the structured patterns, the hyperspectral imager of the HSDA system captures spectral images to form a hyperspectral cube through galvanometer scanning. The light reflected from the 4D point $P({x^w},{y^w},{z^w},R(\lambda ))$ is captured by the camera at a certain pixel position in the CMOS plane. In practice, there are radial and lateral distortions in the camera, leading to a deviation between the ideal pixel position $c(u_i^c,v_i^c)$ and the actual pixel position. The ideal pixel $c(u_i^c,v_i^c)$ is the position where the reflected light of the 4D point $P({x^w},{y^w},{z^w},R(\lambda ))$ reaches the CMOS plane through the center of the front optical module of the camera. After eliminating the distortion, the accurate pixel position $c(u_i^c,v_i^c)$ can be obtained. In the same fashion, the reflected light of the 4D point P is captured by the hyperspectral imager at pixel position $h(u_i^h,v_i^h)$, providing the spectral data ${I_w}({\lambda _1},{\lambda _2},\ldots ,{\lambda _n})$ to the hyperspectral cube. $R({\lambda _1},{\lambda _2},\ldots ,{\lambda _n})$ is the intrinsic reflectance of the target at different wavelengths, while ${I_w}({\lambda _1},{\lambda _2},\ldots ,{\lambda _n})$ is the reflected spectrum obtained after irradiation by the light source. After eliminating the influence of the light source, $R({\lambda _1},{\lambda _2},\ldots ,{\lambda _n})$ can be obtained from ${I_w}({\lambda _1},{\lambda _2},\ldots ,{\lambda _n})$. For the pixels $c(u_i^c,v_i^c)$ and $h(u_i^h,v_i^h)$, there exists a precise homographic transformation under the condition of conjugate image planes, as shown below:

$$c = Hh,\quad H = \left[ {\begin{array}{ccc} {1.678}&{0.099}&{50.914}\\ {0.0123}&{1.660}&{189.697}\\ { - 1.464 \times {{10}^{ - 5}}}&{9.506 \times {{10}^{ - 6}}}&{1.000} \end{array}} \right]$$
where the matrix H was derived through a calibration experiment. Therefore, the spectral data ${I_w}({\lambda _1},{\lambda _2},\ldots ,{\lambda _n})$ of $(u_i^h,v_i^h,{I_w}({\lambda _1},{\lambda _2},\ldots ,{\lambda _n}))$ can be projected to the camera image coordinate system and assigned to individual pixels $c(u_i^c,v_i^c)$. Similarly, based on the trigonometric mapping relation between the 4D point P and the CMOS camera pixel $c(u_i^c,v_i^c)$, the spectral data ${I_w}({\lambda _1},{\lambda _2},\ldots ,{\lambda _n})$ of $(u_i^c,v_i^c)$ can be assigned to the 4D points P one by one. A 4D point containing precise 3D positional information and spectral data is thus obtained, represented as ${({x^w},{y^w},{z^w},{I_w}({\lambda _1},{\lambda _2},\ldots ,{\lambda _n}))^T}$. After eliminating the influence of the light source, the 4D point $P{({x^w},{y^w},{z^w},R({\lambda _1},{\lambda _2},\ldots ,{\lambda _n}))^T}$ is obtained. The fusion precision is the indicator used to measure the fusion performance between the 3D information and the hyperspectral data. It is defined as the average Euclidean distance between the pixel coordinates $c^{\prime}(u_i^{c \prime},v_i^{c \prime})$ obtained by the homographic transformation in Eq. (2) and the actual pixel coordinates $c(u_i^c,v_i^c)$ on the CMOS camera plane. The smaller this distance, the smaller the fusion error and the better the fusion performance. In the above, it is assumed that the coordinates of the 3D point $P({x^w},{y^w},{z^w})$ are known; in practice they are calculated by first establishing the mapping relation between the pixels of the DMD of the projector and the CMOS images. We use a four-step phase-shift method to solve for the relative phase [28]. The intensity of the structured light stripes is:
$${I_m} = \frac{1}{2} + \frac{1}{2} \times \textrm{cos}({\varphi ^n}(u_i^c,v_i^c) + 2m\pi /4)$$
where m is the phase-shifting step number. $u_i^c$ and $v_i^c$ are the abscissa and ordinate of the CMOS image, and n is the periodic sequence of the relative phase. The relative phase ${\varphi ^n}(u_i^c,v_i^c)$ can be calculated from Eq. (3) as shown in Eq. (4).
$${\varphi ^n}(u_i^c,v_i^c) = - {\tan ^{ - 1}}\left[ {\frac{{\sum\nolimits_{m = 0}^3 {{I_m}\sin (2m\pi /4)} }}{{\sum\nolimits_{m = 0}^3 {{I_m}\cos (2m\pi /4)} }}} \right]$$

However, as the relative phase ${\varphi ^n}(u_i^c,v_i^c)$ is periodic, there is no unique solution for every pixel. In order to unwrap the wrapped phase, a multi-wavelength heterodyne phase unwrapping method is introduced [29]. The frequencies of the projected structured light patterns used in the HSDA system are 128, 121 and 115 fringe periods across the pattern, so the equivalent frequency after multi-frequency heterodyning is one period. In this way, across the complete range of the DMD image, the phase is continuous within one complete period, and thus each pixel in the DMD image has a unique absolute phase value ${\Phi _{ch}}(u_i^c,v_i^c)$ in the horizontal direction and ${\Phi _{cv}}(u_i^c,v_i^c)$ in the vertical direction. The point-to-point correspondence between the CMOS pixel $c(u_i^c,v_i^c)$ and the DMD pixel $p(u_i^p,v_i^p)$ is described in Eq. (5).

$$u_i^p = \frac{{{\Phi _{ch}}(u_i^c,v_i^c)}}{{2\pi }} \times W$$
$$v_i^p = \frac{{{\Phi _{cv}}(u_i^c,v_i^c)}}{{2\pi }} \times H$$
where W is the width of the DMD image and H is the height of the DMD image. The optical axis of the CMOS path passes through the optical centers of the lens and the beam splitter. Therefore, the lens and beam splitter in front of the CMOS can be approximated by a pinhole camera model, as can the lens in front of the DMD. In the pinhole model, the correspondence between 2D image points (CMOS or DMD) and 3D points is described in Eq. (6) [29].
$${z_c}\left[ {\begin{array}{c} {u_i^c}\\ {v_i^c}\\ 1 \end{array}} \right] = {K_c}{M_c}\left[ {\begin{array}{c} {{x^w}}\\ {{y^w}}\\ {{z^w}}\\ 1 \end{array}} \right]$$
$${z_p}\left[ {\begin{array}{c} {u_i^p}\\ {v_i^p}\\ 1 \end{array}} \right] = {K_p}{M_p}\left[ {\begin{array}{c} {{x^w}}\\ {{y^w}}\\ {{z^w}}\\ 1 \end{array}} \right]$$
where Kc and Mc are the intrinsic and extrinsic camera parameters, while Kp and Mp are the intrinsic and extrinsic projector parameters. zc and zp are scale factors. To calculate the intrinsic parameters Kc, Kp and extrinsic parameters Mc, Mp, calibration of the camera and projector is required in advance. A circle-array plane board is used as the calibration board. It is worth mentioning that the world coordinate system is defined through the circle-array plane board. We define the center of the plane as the origin, the horizontal line through the origin on the plane as the X-axis, the vertical line through the origin on the plane as the Y-axis, and the line perpendicular to the plane and through the origin as the Z-axis. In the following, the 3D coordinates of targets are calculated in this world coordinate system. The coordinates of the 3D point P can then be calculated using Eqs. (5) and (6). After calibration, the unit of all coordinates is millimeters.
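To make the reconstruction pipeline concrete, a minimal sketch of the phase-shift decoding of Eqs. (3)-(5) and of a linear triangulation based on Eq. (6) is given below. The function names, image sizes and projection matrices are placeholders; the real system additionally performs multi-frequency heterodyne unwrapping and lens-distortion correction, which are not shown.

```python
# Minimal sketch of phase decoding (Eqs. (3)-(5)) and linear triangulation (Eq. (6)).
import numpy as np

def wrapped_phase(images):
    """Four-step phase shift: `images` is a list of the 4 fringe images I_0..I_3."""
    I = [img.astype(float) for img in images]
    num = sum(I[m] * np.sin(2 * m * np.pi / 4) for m in range(4))
    den = sum(I[m] * np.cos(2 * m * np.pi / 4) for m in range(4))
    return -np.arctan2(num, den)          # Eq. (4), wrapped to (-pi, pi]

def dmd_coordinates(phi_h_abs, phi_v_abs, W, H):
    """Map absolute (unwrapped) phases to DMD pixel coordinates, Eq. (5)."""
    u_p = phi_h_abs / (2 * np.pi) * W
    v_p = phi_v_abs / (2 * np.pi) * H
    return u_p, v_p

def triangulate(uv_c, uv_p, Pc, Pp):
    """Linear triangulation of one 3D point from camera pixel uv_c and projector
    (DMD) pixel uv_p, given 3x4 projection matrices Pc = Kc@Mc and Pp = Kp@Mp."""
    u_c, v_c = uv_c
    u_p, v_p = uv_p
    A = np.vstack([
        u_c * Pc[2] - Pc[0],
        v_c * Pc[2] - Pc[1],
        u_p * Pp[2] - Pp[0],
        v_p * Pp[2] - Pp[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                   # world coordinates (x^w, y^w, z^w) in mm
```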


Fig. 3. The principle of operation of the HSDA system.


2.3 Flow chart of HSDA system

Figure 4 shows the operational flow chart of the HSDA system, which consists of two parts. The first part verifies proper operation of the hardware (i.e., projector, camera, and hyperspectral imager), performs the 3D reconstruction and generates a 3D point cloud model from the images captured by the CMOS camera, and forms a hyperspectral cube from the spectral images captured by the hyperspectral imager. Structured light stripes of a certain frequency are projected onto the target by the projector. The projector then triggers the camera with a high-level signal to capture gray images of the target, which contain the structured light stripes. The frequency and phase of the structured light stripes change in sequence, and the camera captures the images with the different stripe frequencies one by one. This is repeated until all the images with structured light stripes have been captured by the CMOS camera. The images are used to perform the 3D reconstruction and generate the 3D point cloud model. Following this, the hyperspectral imager captures spectral images of the measured object by galvanometer scanning. The spectral images are used to form a hyperspectral cube of the measured object.


Fig. 4. The flow chart of the HSDA system, consisting of two parts. Part 1 performs the 3D reconstruction and forms a hyperspectral cube. Part 2 forms a 4D data set; sample results are displayed. (a) The original 4D model. (b) The spectrum curves of points A, B, C and D in the original model. (c) The 4D model with point cloud segmentation based on spectral information. (d) Monochrome images at different wavelengths (450 nm–750 nm) for different 3D perspectives.


The second part fuses the 3D point cloud and the hyperspectral cube to form a 4D data set. The fusion precision is at the subpixel level (0.9174 pixel). Sample results are shown in Fig. 4. After the fusion of the 3D point cloud and the hyperspectral information, the original 4D model is shown in Fig. 4(a). The corresponding spectrum curves of points A, B, C and D in the model are shown in Fig. 4(b). In Fig. 4(c), point cloud segmentation is performed on the 4D model according to its spectral characteristics. The spectral information makes it possible to distinguish between similar colors that cannot be distinguished by chromatic RGB values. In Fig. 4(d), the 4D data is displayed as monochrome images at different wavelengths (450 nm–750 nm) for different 3D perspectives. The spectral information can also be utilized for substance detection, such as hemoglobin and chlorophyll. The content of the measured substances as well as their spatial distribution can be qualitatively estimated. This will be demonstrated by the experiments in Section 3.

In order to test the performance of our system, we performed a 4D imaging experiment on a circle plane array board. The board was placed about 300 mm in front of the HSDA system. A blue LED was used as the structured light stereovision projector light source, as it enables the camera to clearly capture the boundaries of the measured object. In addition, a white LED, which matches our indoor lighting, was used as the light source for the hyperspectral imager. The hyperspectral cube of the circle plane array board is shown in Fig. 5(a). Each pixel in the hyperspectral image corresponds to different light intensities at different wavelengths. The reflectance spectra of points A and B are shown in Fig. 5(b). Both have a shape similar to the LED-5000K spectrum, which is due to the fact that the materials at points A and B on the board have similar reflectivity over the 400-800 nm range. The reflectivity of a white circle is obviously stronger than that of the dark background, which is manifested by the stronger intensity of the reflectance spectrum of point A. The 3D point cloud of the circle plane array board with color masks for different 3D observation angles is shown in Fig. 5(c). The color masks are used to encode the CMOS camera pixel coordinate $(u_i^c,v_i^c)$ as RGB components. The encoding rules of the color mask are as follows:

$$r = u_i^c - \left\lfloor {\frac{{u_i^c}}{{{2^8}}}} \right\rfloor \times {2^8}$$
$$g = v_i^c - \left\lfloor {\frac{{v_i^c}}{{{2^8}}}} \right\rfloor \times {2^8}$$
$$b = 60 \times \left\lfloor {\frac{{u_i^c}}{{{2^8}}}} \right\rfloor + 2 \times \left\lfloor {\frac{{v_i^c}}{{{2^8}}}} \right\rfloor$$


Fig. 5. (a) Hyperspectral cube of circle plane array board. (b) Spectrum intensities of pixels A (white circle) and B (dark background). (c) 3D point cloud of the circle plane array board plane with color mask for different viewing angles of the three-dimensional space. (d) Monochrome 3D point clouds of the circle plane array board plane at varying wavelength.


In this way, the pixel coordinate $c(u_i^c,v_i^c)$ information is also assigned to the point cloud $({x^w},{y^w},{z^w})$. Encoding results are shown in Fig. 5(c), where the 3D point cloud has different chromatic values and RGB components. After decoding the chromatic RGB value of a 3D point $({x^w},{y^w},{z^w})$, the pixel $c(u_i^c,v_i^c)$ can be determined. It is worth noting that there exists a precise homographic transformation between pixels $c(u_i^c,v_i^c)$ in the CMOS image and $h(u_i^h,v_i^h)$ in the hyperspectral cube, and thus the corresponding pixel position in the hyperspectral cube can be obtained. The corresponding spectrum of the 3D point can furthermore be obtained, and the spectral data is thus assigned to the corresponding 3D point as the fourth-dimensional data. In other words, the color mask is used to merge the 3D point cloud and the hyperspectral cube, thereby producing the 4D data set.
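A minimal sketch of this color-mask encoding/decoding and of the homography-based spectrum lookup is given below, using the calibrated matrix H of Eq. (2); the cube layout, helper names and the 8-bit range assumption for the blue channel are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the colour-mask rules above and the spectrum lookup via Eq. (2).
import numpy as np

H = np.array([[ 1.678,     0.099,     50.914],
              [ 0.0123,    1.660,    189.697],
              [-1.464e-05, 9.506e-06,   1.000]])

def encode_mask(u_c, v_c):
    """Encode a CMOS pixel coordinate into an (r, g, b) colour per the rules above."""
    r = u_c % 256
    g = v_c % 256
    b = 60 * (u_c // 256) + 2 * (v_c // 256)   # assumes b stays within 8 bits
    return r, g, b

def decode_mask(r, g, b):
    """Recover (u_c, v_c) from the colour mask (inverse of encode_mask)."""
    qu = b // 60
    qv = (b - 60 * qu) // 2
    return 256 * qu + r, 256 * qv + g

def spectrum_for_camera_pixel(u_c, v_c, cube):
    """Look up the spectrum of a CMOS pixel by mapping it into the hyperspectral
    image plane with the inverse homography; cube has shape (rows, cols, bands)."""
    h = np.linalg.inv(H) @ np.array([u_c, v_c, 1.0])
    u_h, v_h = h[:2] / h[2]
    return cube[int(round(v_h)), int(round(u_h)), :]
```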

Monochrome images of the circle plane array board at varying wavelengths in the 450–700 nm range are obtained and shown in Fig. 5(d). The 3D points and their spectra have a one-to-one correspondence and all the circles are clearly visible at the different wavelengths. The fusion precision is in the subpixel range. The repeatability error of the 3D point cloud coordinates is 0.0275 mm. In an ideal reconstruction of the circle plane array board, all the 3D points are distributed on the same plane, known as the best-fitting plane. In the actual measurement, the reconstructed points deviate from the best-fitting plane. The standard deviation between the circle plane array board point cloud and its calculated best-fitting plane is 0.0269 mm, and the maximum deviation is 0.14 mm.
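The planarity evaluation above can be reproduced with a short best-fitting-plane computation; the sketch below is an assumption of one possible implementation and takes the reconstructed board points as an (N, 3) array in millimeters.

```python
# Minimal sketch: fit a plane to the reconstructed board points and report the
# standard and maximum deviation from that best-fitting plane.
import numpy as np

def plane_deviation(points):
    """points: (N, 3) array of reconstructed 3D coordinates in mm."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = Vt[-1]
    d = (points - centroid) @ normal       # signed distance of each point to the plane
    return d.std(), np.abs(d).max()

# Example usage with a synthetic, slightly noisy plane:
# pts = np.random.rand(10000, 3); pts[:, 2] = 0.01 * np.random.randn(10000)
# std_dev, max_dev = plane_deviation(pts)
```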

We summarize the performance and characteristics of our HSDA system in Table 1. The spectral resolution is smaller than 3 nm and the depth resolution is about 27.5 μm. The HSDA system can collect a maximum of 640,000 4D points $({x^w},{y^w},{z^w},R(\lambda ))$ at one time and produce a dense point cloud model, with a run time of less than 80 s. An appropriate number of 4D points ensures a 3D reconstruction of high spatial resolution while keeping computational complexity low and processing speed high.


Table 1. Performance and characteristics of the HSDA system

3. Experimental demonstrations

In this section, we present experimental demonstrations of the HSDA system used on a number of sample objects: a car model, a human face, a face model, as well as a real and a plastic plant. The sample objects were chosen to demonstrate the versatility of the HSDA system.

3.1 4D imaging of a car model

Ensuring high and stable performance of integrated mechanical systems requires strict quality standards for components, and effective defect detection is thus critical in industrial production [30–32]. Current detection methods are typically based on image detection, which suffers from low accuracy in defect identification and localization. As our proposed HSDA system can simultaneously collect 3D spatial and spectral information, it can be utilized to detect, locate and identify component defects effectively, and it thus promises significant improvements over currently used detection methods. As an example, we used the HSDA system on a car model.

The hyperspectral imager and structured light stereovision system of the HSDA system were calibrated in advance. The car model was covered with two different kinds of fluorescent paint, A and B, which act as defects/stains on the surface of the model, and the windows of the car model were covered with white paper. The car model was placed about 400 mm in front of the HSDA system. A blue LED was used as the structured light source and purple light (365 nm) was used as the excitation light source for the hyperspectral imaging. The fluorescent paint is excited by the purple light and the corresponding fluorescence spectrum can thus be obtained. The 3D point cloud model was obtained through structured light 3D reconstruction, as discussed in the previous section. As can be seen in Fig. 6(a), the two kinds of applied defects are close to indistinguishable in the 3D point cloud. After being excited by the purple light, the fluorescence spectrum of the car model was captured by the hyperspectral imager. Using our algorithm, we then identify different sets of 3D point cloud areas based on the differences in the fluorescence spectra and render them in color. As shown in Fig. 6(b), the 3D point cloud areas of the paper fluorescence, fluorochrome A, and fluorochrome B are rendered in blue, cyan, and red, respectively. The defects are clearly distinguishable, even when the fluorescent 3D point cloud reconstructed model is observed from various angles. Moreover, based on the spatial information of the 3D point cloud, the 3D size of different components of the car model can be measured precisely. The measured length, width and height of the car are 109.31 mm, 43.81 mm and 32.06 mm, respectively. The actual length, width and height of the car are 109.35 mm, 43.86 mm, and 32.1 mm, respectively. The relative measurement errors of length, width, and height are 0.00037, 0.00114 and 0.00125, respectively. After triangulation of the point cloud, a surface mesh is obtained, which can be used to calculate the surface area of the target. The surface areas of the hood, car door and car window are 1322.97 mm², 722.41 mm² and 1101.44 mm², respectively. In addition, the surface areas of fluorescence defects A (cyan) and B (red) can be calculated; their values are 537.71 mm² and 1157.37 mm², respectively. Figures 7(a) and (b) show the 3D point clouds of the rearview mirror and the door handle, respectively. The insets in Figs. 7(a) and 7(b) show enlargements of the 3D point distributions, which are dense and smooth.
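A minimal sketch of the surface-area estimate mentioned above is given below; it assumes the point cloud has already been triangulated into a vertex array and a face-index array by a meshing step that is not shown here.

```python
# Minimal sketch: total surface area of a triangulated point cloud.
import numpy as np

def mesh_area(vertices, faces):
    """vertices: (N, 3) array in mm; faces: (M, 3) integer index triples."""
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    # Triangle area = half the norm of the cross product of two edge vectors.
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()
```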


Fig. 6. (a) Gray scale 3D point cloud of the car model for different viewing angles. (b) 3D point cloud with fluorescence overlay for different viewing angles.



Fig. 7. (a) 3D point cloud model of the rearview mirror. (b) 3D point cloud model of the door handle.


The differences between the fluorescence spectra were analyzed in detail as follows. Three 3D points, A (-1.70, -4.76, -1.45), B (-27.45, -6.93, 4.45), and C (25.43, -4.10, -12.41), located in the three fluorescent regions in Fig. 8(a), were chosen and their fluorescence spectra are shown in Fig. 8(b). When excited by the 365 nm light, the paper covering the car window (point A) emitted blue light and its fluorescence spectrum peak is located at 450 nm. The intensity of its fluorescence spectrum declines slowly at longer wavelengths and reaches zero at 700 nm. The fluorescence spectrum peak of point B is located at 480 nm, and its half-width is larger than that of the white paper's fluorescence peak. The fluorescence spectrum peak of point C is located at 616 nm, with side lobes at 582 nm, 595 nm, 655 nm and 703 nm. Its half-width is only 10 nm, much sharper than that of points A and B. The differences in peak position and half-width of the three fluorescence spectra can be utilized to identify the type of surface defect; a sketch of such peak-based classification is shown below. Additionally, as is evident from Fig. 8(a), the specific 3D positions of the defects can also be obtained.
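The sketch below illustrates how the dominant peak position and its full width at half maximum could be extracted from a measured fluorescence spectrum; the prominence threshold and the use of SciPy's peak utilities are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: characterise a fluorescence spectrum by its dominant peak.
import numpy as np
from scipy.signal import find_peaks, peak_widths

def dominant_peak(wavelengths, spectrum):
    """Return (peak wavelength in nm, full width at half maximum in nm)."""
    idx, _ = find_peaks(spectrum, prominence=0.1 * spectrum.max())
    main = idx[np.argmax(spectrum[idx])]
    width_px = peak_widths(spectrum, [main], rel_height=0.5)[0][0]
    step = np.mean(np.diff(wavelengths))   # nm per spectral sample
    return wavelengths[main], width_px * step

# A narrow peak near 616 nm would indicate fluorochrome B, a broader peak
# near 480 nm fluorochrome A, and a broad peak near 450 nm the white paper.
```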


Fig. 8. (a) 3D model of car model with fluorescence. (b) The fluorescence spectrum of 3D points A, B and C.


This experiment demonstrates that our system is able to precisely measure the size of components and to detect surface flaws of, and stains on, components, which is of obvious and significant value in precision manufacturing.

3.2 4D imaging of human face and face model

The combination of facial recognition and health monitoring is rapidly gaining traction due to the rapid development of facial recognition technology [33]. Current facial recognition systems still face significant challenges, such as detection failures due to camouflage, illumination variations, and facial expression, as well as face anti-spoofing [34,35]. 4D surface facial shape measurement can provide new opportunities for dependable and secure facial recognition. 3D facial imaging highlights the differences between human faces in depth [36–38], while spectral information corresponds to intrinsic physiological information of the human face [39], which can be used to distinguish between a real human face and a fake one. Spectral information can furthermore be used for health monitoring and emotional analysis, as changes of health or emotional state cause variations of the hemoglobin distribution in the face, which lead to changes in the optical absorption and thus of the optical spectrum. There clearly exists a plethora of potential applications for 3D facial recognition combined with spectral information. Here we demonstrate this by using the HSDA system on a real human face and a face model.

In the first experiment, a face model was placed about 450 mm away from the HSDA system. Blue light stripes were projected onto the face model and the camera captured images to reconstruct a 3D point cloud model, after which the staring hyperspectral imager captured spectral images to produce the hyperspectral cube. The hyperspectral cube and 3D model were merged into a 4D data set, which is shown in Fig. 9(a), where the 4D data of the face model is displayed at different angles and at different wavelengths. Figure 9(b) shows enlarged pictures of the nose, where we see that the 3D point cloud is dense and smooth, and can show details of the 3D model. In the second experiment, the 4D data of a human face was captured in the same way. As shown in Fig. 10(a), the human face with four-dimensional data is displayed at different angles in 3D space and at different wavelengths. Figure 10(b) shows enlarged detail pictures of the lips, where we see that the lip wrinkles are clear. Moreover, the 3D geometrical size of the human face can be measured based on the spatial information of the 3D point cloud. The actual and calculated sizes are shown in Table 2. The precise geometrical information can be used to customize individual facial products.


Fig. 9. (a) 4D data of the face model shown at different wavelengths and different viewing angles. (b) Enlargement of the nose in the 3D point cloud model.



Fig. 10. (a) 4D data of the human face shown at different wavelengths and different viewing angles. (b) Enlargement of the lip in the 3D point cloud model.



Table 2. 3D geometric measurements of the human face

Figures 11(a) and (b) show the 3D point clouds of the face model and the human face with RGB components. The color of the 3D point cloud was determined by the RGB component values, where the light intensities at 406, 540, and 600 nm were assigned to the chromatic values of blue, green, and red, respectively. The intensity spectrum curves of 3D points A (15.10, 31.96, -2.57) (face model) and B (-9.12, -36.93, -8.91) (human face) are shown in Fig. 11(c). We find that the spectral intensity of the human face (point B) is lower than that of the face model (point A) in the 520-600 nm range, which is due to the optical absorption of hemoglobin. In order to calculate reflectance relative to the LED light source and better distinguish the two spectrum curves, we derive normalized reflectance spectra by dividing the intensity spectra of the two points by the intensity spectrum of a standard whiteboard. As seen in Fig. 11(d), the reflectance curve of point B on the human face has hemoglobin absorption peaks at 420 nm, 540 nm and 576 nm. The reflectance curve of point A on the face model has an absorption peak at 435 nm due to its intrinsic material. It is therefore possible to distinguish a human face from a 3D face model based on the difference in absorption peaks.
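A minimal sketch of this whiteboard normalization and of the false-color RGB rendering is given below; the array names and the nearest-band lookup are illustrative assumptions.

```python
# Minimal sketch: light-source removal and false-colour RGB rendering.
import numpy as np

def reflectance(cube, white):
    """Divide the hyperspectral cube (rows, cols, bands) by the whiteboard
    spectrum (bands,) to remove the light-source spectrum."""
    return cube / np.maximum(white, 1e-6)

def false_rgb(cube, lam):
    """Assign the intensities at 600, 540 and 406 nm to the R, G and B channels,
    with lam the calibrated wavelength axis (bands,)."""
    bands = [int(np.argmin(np.abs(lam - w))) for w in (600.0, 540.0, 406.0)]
    rgb = cube[:, :, bands].astype(float)
    return rgb / rgb.max()                 # normalise for display
```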


Fig. 11. (a) 3D point cloud of the face model displayed using RGB colors. Light intensities at 406 nm, 540 nm and 600 nm are assigned to the chromatic values of the blue, green, and red components, respectively. (b) 3D point cloud of the human face with RGB colors. (c) The reflected spectrum curves of 3D points A and B. (d) The reflectance spectra of 3D points A and B.


In order to make quantitative analysis of the optical absorption of hemoglobin, we define the optical absorption intensity of hemoglobin (OAIH) factor as:

$$\textrm{OAIH} = ({\alpha _{540}} \times {\alpha _{576}})/({\alpha _{510}} \times {\alpha _{564}})$$
where αλ is the normalized reflectance at wavelength $\lambda $. The wavelengths 540 nm and 576 nm are the hemoglobin absorption peaks of the reflectance curve of the human face, while 510 nm and 564 nm are the hemoglobin reflectance peaks, as shown in Fig. 11(d). Compared with the spectrum curve of the face model in Fig. 11(d), 540 nm and 576 nm are the strongest optical absorption bands of hemoglobin, while 510 nm and 564 nm are the weakest. Therefore, the ratio of the intensities of the strongest and weakest bands (manifested as the OAIH) better reflects changes in optical absorption. Figure 12 shows the 3D point cloud with the color set by the value of the OAIH factor. The brightness of the color then relates to the intensity of the optical absorption. The darker the red, the smaller the value of the OAIH factor and the stronger the optical absorption of hemoglobin. Similarly, the brighter the yellow, the larger the OAIH factor and the weaker the optical absorption of hemoglobin. As can be seen in Fig. 12(a), which shows the human face, the color of the 3D points is darkest in the lip region, which indicates that the lips have the highest content of hemoglobin, followed by the nose and the cheeks. This matches the distribution of blood vessels in the human face. In Fig. 12(b), we see that the color of the 3D points is balanced and light across the face model, indicating that the face model lacks hemoglobin. Overall, the colors of the 3D points of the human face are darker and more differentiated compared to the face model. The reflectance spectra of 3D points A (5.40, -10.16, -6.98) and B (-45.32, 5.83, 5.17), located on the cheek and lip of the human face, respectively, are shown in Fig. 12(c). We note that the lip has larger absorption peaks at 540 nm and 576 nm than the cheek, which is consistent with the definition of the OAIH factor. The color of the lip region is darker than that of the cheek region. This experiment demonstrates both that our HSDA system can be utilized in anti-spoofing facial identification, and that it has considerable potential for assessment of health conditions and emotional states, as it can accurately monitor the distribution of hemoglobin in a 3D human face.
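A minimal sketch of evaluating the OAIH factor defined above for every 3D point is given below; the same pattern applies to the OAIC factor of Section 3.3. The array layout (one normalized reflectance spectrum per point) is an assumption.

```python
# Minimal sketch: per-point OAIH factor from normalized reflectance spectra.
import numpy as np

def band(refl, lam, w):
    """Normalized reflectance closest to wavelength w (nm) for every point;
    refl has shape (points, bands) and lam has shape (bands,)."""
    return refl[:, int(np.argmin(np.abs(lam - w)))]

def oaih(refl, lam):
    """OAIH = (a540 * a576) / (a510 * a564); smaller values correspond to
    stronger hemoglobin absorption."""
    return (band(refl, lam, 540) * band(refl, lam, 576)) / \
           (band(refl, lam, 510) * band(refl, lam, 564))
```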


Fig. 12. 3D point clouds of the (a) human face and (b) face model, where the color of the 3D points is determined by the OAIH factor. (c) Reflectance curve of 3D points A and B.


3.3 4D imaging of green and plastic plants

Green plants play a critical role in energy conversion, photosynthesis, and environmental quality evaluation [40]. Photosynthesis in green plants, enabled by chlorophyll, produces the oxygen that we humans depend on. Chlorophyll distributions can act as indicators of the growth and health state of green plants [41–48], and it is therefore desirable to be able to monitor such chlorophyll distributions. Traditional monitoring methods, such as chemical detection, are inherently destructive to the plants. As an alternative, our HSDA system can, in a non-invasive way, simultaneously obtain three-dimensional spatial and spectral information of a green plant. The spectral information can be utilized to measure the optical absorption of chlorophyll and thus monitor the physical health of the green plant, while evaluation of changes in 3D spatial geometry can be used to accurately observe the growth state. Precision agriculture is thus an area of potential application for our HSDA system.

The experimental details are as follows. A green plant was placed about 450 mm away from the HSDA system, as shown in Fig. 13(a). Blue structured light stripes were projected onto the green plant for 3D reconstruction, after which the hyperspectral imager captured spectral images to produce the hyperspectral cube, which is shown in Fig. 13(b). Subsequently, the 3D model and hyperspectral cube were merged into the 4D data set, as shown in Fig. 13(c). Enlarged pictures of the leaf blade margin are shown in Figs. 13(e) and 13(f). The texture and outlines of the margin are clear. Figure 13(d) shows the 4D model of the green plant from different observation angles and at different wavelengths. We observe that the plant is brighter at 550 nm than at other wavelengths, indicating lower absorption in the wavelength band around 550 nm. Figure 13(g) shows the surface mesh of the point cloud, which is used to calculate the surface area. The geometric sizes of the green plant, measured from the spatial information of the 3D point cloud, are shown in Table 3. The 4D data can therefore be used to make a quantitative evaluation of the growth state of the green plant.


Fig. 13. (a) The green plant. (b) The hyperspectral cube of the measured green plant. (c) 3D point cloud model of the green plant. (d) The 4D data of the green plant observed at different angles and at different wavelengths (450-675 nm). (e) Enlargement of the leaf blade margin. (f) The distribution of the point cloud of blue rectangular region in the leaf blade. (g) The surface patch of the point cloud.



Table 3. 3D measurements of parts of the green plant (Fig. 13(a))

4D imaging of a plastic plant was carried out in the same way as for the green plant. The hyperspectral cube and reconstructed 4D model are shown in Fig. 14.


Fig. 14. The hyperspectral cube and 3D point cloud of the plastic plant.


Figures 15(a) and 15(b) show the 3D point clouds of the green plant and plastic plant, respectively. The colors of the 3D point cloud were determined by the RGB component values, which were obtained by assigning the light intensities at 406 nm, 540 nm, and 600 nm to the chromatic values of blue, green, and red. It is worth mentioning that the black areas on the surface of the 4D plastic plant in Fig. 15(b) are caused by overexposure. Because the surface material of the plastic plant is glossy, it produces specular reflection in certain areas, leading to high collected brightness values there, while the brightness values are relatively low in most other areas. To ensure normal exposure of most image areas, the specular reflection areas have to be overexposed. The overexposed pixels cannot be used in 3D reconstruction, leading to some black areas on the surface of the reconstructed 4D point cloud model. Figure 15(c) shows that the normalized intensity spectra of 3D point A (30.21, 21.18, -12.31) on the green plant and 3D point B (55.23, 58.31, -16.07) on the plastic plant (indicated in Figs. 15(a) and 15(b), respectively) have obvious differences.


Fig. 15. 3D point cloud of the (a) green plant and (b) plastic plant displayed using RGB colors. (c) Intensity spectrum of 3D points A and point B. (d) Normalized reflectance spectrum of 3D points A and B.


In order to better distinguish between the two intensity spectra, we remove the impact of the light source by dividing the spectral intensities of the two points by the spectral intensity of a standard whiteboard and then normalizing the obtained values, which yields the normalized reflectance spectra shown in Fig. 15(d). The reflectance spectrum of the green plant has absorption peaks at 450 nm and 670 nm, which are caused by the optical absorption of chlorophyll, while the reflectance spectrum of the plastic plant has absorption peaks at 440 nm and 470 nm. In this way, the difference between the spectra of 3D points A and B is maximized. Thus, although the green plant and the plastic plant have quite similar appearances in the 3D point cloud models, they can be easily distinguished by their reflectance spectra. In other words, every 3D point corresponds to unique spectral data, which can be used as a main feature to distinguish the real and plastic plants.

In order to do a quantitative analysis of the optical absorption of chlorophyll we define the optical absorption intensity of chlorophyll (OAIC) factor as:

$$\textrm{OAIC} = ({\alpha _{450}} \times {\alpha _{670}})/({\alpha _{540}} \times {\alpha _{540}})$$
where αλ is the normalized reflectance at wavelength λ. The wavelengths 450 nm and 670 nm are the absorption peaks of the normalized reflectance curve, while 540 nm is the reflectance peak. Figure 16 shows the 3D point clouds of the green and plastic plants with the color set by the value of the OAIC factor, at different angles of observation. The color gradation relates to the OAIC factor: the darker the color, the smaller the OAIC factor and the stronger the optical absorption. The optical absorption is caused by chlorophyll, and hence a higher chlorophyll concentration yields a darker color in the reconstructed 4D model. The chromatic value of the green plant is much darker than that of the plastic plant, and thus the HSDA system can easily distinguish between the green and plastic plants, whose difference is barely distinguishable by the naked eye. By the same principle, we can monitor the chlorophyll content in different leaves or at different positions on the same leaf. As seen in Figs. 16(a) and 16(c), the chlorophyll content of the new, tender leaves at the center of the plant is higher than that of the older leaves further out. On a single leaf, the dark color is concentrated at the center of the leaf, with lighter color at the edges, indicating a higher chlorophyll concentration at the center of the leaf.
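As an illustration, the sketch below shows one way the OAIC factor defined above could be mapped to per-point colors for rendering point clouds such as those in Fig. 16; the colormap choice and scaling are assumptions, not the authors' rendering method.

```python
# Minimal sketch: colour a point cloud by the OAIC factor for display.
import numpy as np
import matplotlib.pyplot as plt

def oaic(refl, lam):
    """refl: (points, bands) normalized reflectance; lam: (bands,) wavelengths."""
    b = lambda w: refl[:, int(np.argmin(np.abs(lam - w)))]
    return (b(450) * b(670)) / (b(540) * b(540))

def colour_by_oaic(refl, lam):
    """Return per-point RGB colours: darker shades mark stronger chlorophyll
    absorption (smaller OAIC)."""
    v = oaic(refl, lam)
    v = (v - v.min()) / (np.ptp(v) + 1e-12)      # scale to [0, 1]
    return plt.get_cmap("YlOrBr_r")(v)[:, :3]    # drop the alpha channel
```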


Fig. 16. 3D point clouds of the green plant (a) and (c) and plastic plant (b) and (d), where the color of the 3D points is determined by the OAIC factor, from different perspectives.


This experiment demonstrates the HSDA system's potential for plant identification applications. It also demonstrates the ability to monitor chlorophyll distributions, through which the growth and health states of plants can be assessed and monitored. Such non-invasive and non-contact 4D measurements can play a significant role in precision agriculture.

4. Summary and outlook

In this paper, we have presented the HSDA system, a 4D surface shape measurement system that can generate detailed visualizations of three-dimensional surface properties and provide spectral information, with high spectral resolution and great depth accuracy. The HSDA system consists of a hyperspectral imager used to collect spectral data, and a 3D structured light stereovision system applied to generate a 3D point cloud model with spatial data. The two systems are coupled together by a double light path module and their image planes are conjugated to realize a point-to-point correspondence. Therefore, the 3D spatial and spectral data can be fused together into a 4D data set such that every point in the 3D point cloud model has corresponding hyperspectral data. The HSDA system shows excellent performance, with depth accuracy and spectral resolution of 27.5 µm and 3 nm, respectively.

We have presented a set of experiments on a fluorescent car model, a human face, a face model, a green plant, and a plastic plant to demonstrate the versatility of the HSDA system. Different kinds of transparent fluorescent paint on the car model can be precisely distinguished and located. In addition, the distribution of hemoglobin in the human face and the distribution of chlorophyll in the green plant can be analyzed. The results show that the HSDA system can acquire a precise 3D point cloud model of the measured object with point-to-point spectral data, which can be used to qualitatively detect and analyze substance distributions and properties on a 3D surface. It has substantial potential for applications in a broad range of areas, such as digital documentation of art and cultural objects, biological imaging, precision agriculture, advanced manufacturing, security and health.

Areas for future system improvements include a wider wavelength detection range, all-round 3D reconstruction, as well as hyperspectral analysis combined with polarization, which are all feasible and under consideration. The HSDA system can be expanded to cover a broad spectral range including the ultraviolet (UV) and infrared (IR), providing ample fingerprint characteristics [49–51]. For example, the infrared spectrum can be used for determination of plant water content [52]. All-around 3D reconstruction can provide better visual representation and more comprehensive surface features. Furthermore, by combining 4D hyperspectral detection technology with polarization analysis, object identification measurements can be made more precise [53,54]. Based on our prototype system, the spectral and spatial resolution can be further improved by optimizing system components.

Funding

National Key Research and Development Program of China (#2018YFC1407504); National Natural Science Foundation of China (11621101).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. Nasrabadi, and J. Chanussot, “Hyperspectral Remote Sensing Data Analysis and Future Challenges,” IEEE Geosci. Remote Sens. Mag. 1(2), 6–36 (2013). [CrossRef]  

2. N. Caporaso, M. B. Whitworth, S. Grebby, and I. D. Fisk, “Non-Destructive Analysis of Sucrose, Caffeine and Trigonelline on Single Green Coffee Beans by Hyperspectral Imaging,” Food Res. Int. 106, 193–203 (2018). [CrossRef]  

3. N. Caporaso, M. B. Whitworth, M. S. Fowler, and I. D. Fisk, “Hyperspectral Imaging for Non-Destructive Prediction of Fermentation Index, Polyphenol Content and Antioxidant Activity in Single Cocoa Beans,” Food Chem. 258, 343–351 (2018). [CrossRef]  

4. M. Zhu, D. Huang, X. Hu, W. Tong, B. Han, J. Tian, and H. Luo, “Application of Hyperspectral Technology in Detection of Agricultural Products and Food: A Review,” Food Sci. Nutr. 8(10), 5206–5214 (2020). [CrossRef]  

5. Y. Liu, H. Pu, and D. W. Sun, “Hyperspectral Imaging Technique for Evaluating Food Quality and Safety During Various Processes: A Review of Recent Applications,” Trends Food Sci. Technol. 69(Part A), 25–35 (2017). [CrossRef]  

6. N. Weksler and E. Ben-Dor, “Mineral Classification of Soils Using Hyperspectral Longwave Infrared (LWIR) Ground-Based Data,” Remote Sens. 11(12), 1429 (2019). [CrossRef]  

7. L. Coic, P. Y. Sacre, A. Dispas, A. K. Sakira, M. Fillet, R. D. Marini, P. Hubert, and E. Ziemons, “Comparison of Hyperspectral Imaging Techniques for the Elucidation of Falsified Medicines Composition,” Talanta 198, 457–463 (2019). [CrossRef]  

8. G. Lu and B. Fei, “Medical Hyperspectral Imaging: A Review,” J. Biomed. Opt. 19(1), 010901 (2014). [CrossRef]  

9. S. Zhu, K. Su, Y. Liu, H. Yin, Z. Li, F. Huang, Z. Chen, W. Chen, G. Zhang, and Y. Chen, “Identification of Cancerous Gastric Cells Based on Common Features Extracted from Hyperspectral Microscopic Images,” Biomed. Opt. Express 6(4), 1135–1145 (2015). [CrossRef]  

10. M. A. S. de Oliveira, L. Galganski, S. Stokes, C. Chang, C. D. Pivetti, B. Zhang, K. E. Matsukuma, P. Saadai, and J. Chan, “Diagnosing Hirschsprung Disease by Detecting Intestinal Ganglion Cells Using Label-Free Hyperspectral Microscopy,” Sci. Rep. 11(1), 1398–1409 (2021). [CrossRef]  

11. X. Hadoux, F. Hui, J. K. H. Lim, C. L. Masters, A. Pebay, S. Chevalier, J. Ha, S. Loi, C. J. Fowler, C. Rowe, V. L. Villemagne, E. N. Taylor, C. Fluke, J. P. Soucy, F. Lesage, J. P. Sylvestre, P. Rosa-Neto, S. Mathotaarachchi, S. Gauthier, Z. S. Nasreddine, J. D. Arbour, M. A. Rheaume, S. Beaulieu, M. Dirani, C. T. O. Nguyen, B. V. Bui, R. Williamson, J. G. Crowston, and P. van Wijngaarden, “Non-Invasive in Vivo Hyperspectral Imaging of the Retina for Potential Biomarker Use in Alzheimer's Disease,” Nat. Commun. 10(1), 4227–4239 (2019). [CrossRef]  

12. X. Yao, S. Li, and S. He, “Dual-Mode Hyperspectral Bio-Imager with a Conjugated Camera for Quick Object-Selection and Focusing,” Prog. Electromagn. Res. 168, 133–143 (2020). [CrossRef]  

13. X. Liu, Z. Jiang, T. Wang, F. Cai, and D. Wang, “Fast hyperspectral imager driven by a low-cost and compact galvo-mirror,” Optik 224, 165716 (2020). [CrossRef]  

14. L. Nalpantidis, G. C. Sirakoulis, and A. Gasteratos, “Review of Stereo Vision Algorithms: From Software to Hardware,” Int. J. Optomechatroni. 2(4), 435–462 (2008). [CrossRef]  

15. U. R. Dhond and J. K. Aggarwal, "Structure from stereo—a review," IEEE Trans. Syst., Man, Cybern. 19(6), 1489–1510 (1989). [CrossRef]  

16. S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]  

17. J. S. Hyun, G. T. C. Chiu, and S. Zhang, “High-speed and high-accuracy 3D surface measurement using a mechanical projector,” Opt. Express 26(2), 1474–1487 (2018). [CrossRef]  

18. F. S. Sergi, A. R. Guillem, and T. Carme, “Lock-in Time-of-Flight (ToF) Cameras: A Survey,” IEEE Sens. J. 11(9), 1917–1926 (2011). [CrossRef]  

19. L. Luo, X. Chen, Z. Xu, S. Li, and S. He, “A Parameter-Free Calibration Process for a Scheimpflug LIDAR for Volumetric Profiling,” Prog. Electromagn. Res. 169, 117–127 (2020). [CrossRef]  

20. C. Reich, R. Ritter, and J. Thesing, “3-D shape measurement of complex objects by combining photogrammetry and fringe projection,” Opt. Eng. 39(1), 224–231 (2000). [CrossRef]  

21. F. Cai, T. Wang, J. Wu, and X. Zhang, “Handheld four-dimensional optical sensor,” Optik 203, 164001 (2020). [CrossRef]  

22. H. Aasen, A. Burkart, A. Bolten, and G. Bareth, “Generating 3d Hyperspectral Information with Lightweight Uav Snapshot Cameras for Vegetation Monitoring: From Camera Calibration to Quality Assurance,” ISPRS J. Photogramm. 108, 245–259 (2015). [CrossRef]  

23. J. Behmann, A. K. Mahlein, S. Paulus, J. Dupuis, H. Kuhlmann, E. C. Oerke, and L. Pluemer, “Generation and Application of Hyperspectral 3d Plant Models: Methods and Challenges,” Mach. Visioin Appl. 27(5), 611–624 (2016). [CrossRef]  

24. H. Zhao, Z. Wang, G. Jia, X. Li, and Y. Zhang, “Field imaging system for hyperspectral data, 3D structural data and panchromatic image data measurement based on acousto-optic tunable filter,” Opt. Express 26(13), 17717–17730 (2018). [CrossRef]  

25. H. Zhao, L. Xu, S. Shi, H. Jiang, and D. Chen, “A High Throughput Integrated Hyperspectral Imaging and 3D Measurement System,” Sensors 18(4), 1068–1085 (2018). [CrossRef]  

26. S. Heist, C. Zhang, K. Reichwald, P. Kühmstedt, G. Notni, and A. Tünnermann, “5D hyperspectral imaging: fast and accurate measurement of surface shape and spectral characteristics using structured light,” Opt. Express 26(18), 23366–23379 (2018). [CrossRef]  

27. J. Cho, P. J. Gemperline, and D. Walker, “Wavelength calibration method for a CCD detector and multichannel fiber-optic probes,” Appl. Spectrosc. 49(12), 1841–1845 (1995). [CrossRef]  

28. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase Shifting Algorithms for Fringe Projection Profilometry: A Review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

29. C. Reich, R. Ritter, and J. Thesing, “White light heterodyne principle for 3D-measurement,” Proc. SPIE 3100(1), 236–244 (1997). [CrossRef]  

30. D. Weimer, B. Scholz-Reiter, and M. Shpitalni, “Design of Deep Convolutional Neural Network Architectures for Automated Feature Extraction in Industrial Inspection,” CIRP Ann. Manuf. Technol. 65(1), 417–420 (2016). [CrossRef]  

31. M. W. Ashour, F. Khalid, A. A. Halin, L. N. Abdullah, and S. H. Darwish, “Surface Defects Classification of Hot-Rolled Steel Strips Using Multi-Directional Shearlet Features,” Arab. J. Sci. Eng. 44(4), 2925–2932 (2019). [CrossRef]  

32. Z. Xu, E. Forsberg, Y. Guo, F. Cai, and S. He, “Light-sheet microscopy for surface topography measurements and quantitative analysis,” Sensors 20(10), C1 (2020). [CrossRef]  

33. B. Jin, L. Cruz, and N. Goncalves, “Deep Facial Diagnosis: Deep Transfer Learning from Face Recognition to Facial Diagnosis,” IEEE Access 8, 123649–123661 (2020). [CrossRef]  

34. F. Wu, X. Jing, Y. Feng, Y. Ji, and R. Wang, “Spectrum-Aware Discriminative Deep Feature Learning for Multi-Spectral Face Recognition,” Pattern Recogn. 111, 107632 (2021). [CrossRef]  

35. S. Arya, N. Pratap, and K. Bhatia, “Future of Face Recognition: A Review,” Procedia Comput. Sci. 58, 578–585 (2015). [CrossRef]  

36. F. Tsalakanidou, D. Tzovaras, and M. G. Strintzis, “Use of Depth and Colour Eigenfaces for Face Recognition,” Pattern Recogn. Lett. 24(9-10), 1427–1435 (2003). [CrossRef]  

37. K. Chang, K. W. Bowyer, and P. J. Flynn, “An Evaluation of Multimodal 2d+3d Face Biometrics,” IEEE T. Pattern Anal. 27(4), 619–624 (2005). [CrossRef]  

38. V. Blanz and T. Vetter, “Face Recognition Based on Fitting a 3d Morphable Model,” IEEE T. Pattern Anal. 25(9), 1063–1074 (2003). [CrossRef]  

39. Z. Pan, G. Healey, M. Prasad, and B. Tromberg, “Face Recognition in Hyperspectral Images,” IEEE T. Pattern Anal. 25(12), 1552–1560 (2003). [CrossRef]  

40. J. M. Jez, S. G. Lee, and A. M. Sherp, “The next green movement: Plant biology for the environment and sustainability,” Science 353(6305), 1241–1244 (2016). [CrossRef]  

41. K. Golhani, S. K. Balasundram, G. Vadamalai, and B. Pradhan, “Estimating Chlorophyll Content at Leaf Scale in Viroid-Inoculated Oil Palm Seedlings (Elaeis Guineensis Jacq.) Using Reflectance Spectra (400 nm–1050 nm),” Int. J. Remote Sens. 40(19), 7647–7662 (2019). [CrossRef]  

42. R. Sonobe, H. Yamashita, H. Mihara, A. Morita, and T. Ikka, “Hyperspectral Reflectance Sensing for Quantifying Leaf Chlorophyll Content in Wasabi Leaves Using Spectral Pre-Processing Techniques and Machine Learning Algorithms,” Int. J. Remote Sens. 42(4), 1311–1329 (2021). [CrossRef]  

43. L. Xiao, Z. Sun, S. Lu, and K. Omasa, “A MultiAngular Invariant Spectral Index for the Estimation of Leaf Water Content across a Wide Range of Plant Species in Different Growth Stages,” Remote Sens. Environ. 253, 112230 (2021). [CrossRef]  

44. C. Nguyen, V. Sagan, M. Maimaitiyiming, M. Maimaitijiang, S. Bhadra, and M. T. Kwasniewski, “Early Detection of Plant Viral Disease Using Hyperspectral Imaging and Deep Learning,” Sensors 21(3), 742–765 (2021). [CrossRef]  

45. A. Siedliska, P. Baranowski, J. Pastuszka-Woźniak, M. Zubik, and J. Krzyszczak, “Identification of Plant Leaf Phosphorus Content at Different Growth Stages Based on Hyperspectral Reflectance,” BMC Plant Biol. 21(1), 28–45 (2021). [CrossRef]  

46. J. Lu, R. Ehsani, Y. Shi, A. I. Castro, and S. Wang, “Detection of multi-tomato leaf diseases (late blight, target and bacterial spots) in different stages by using a spectral-based sensor,” Sci. Rep. 8(1), 2793–2804 (2018). [CrossRef]  

47. H. Lin, Y. Zhang, and L. Mei, “Fluorescence Scheimpflug LiDAR developed for the three-dimension profiling of plants,” Opt. Express 28(7), 9269–9279 (2020). [CrossRef]  

48. Z. Xu, Y. Jiang, J. Ji, E. Forsberg, Y. Li, and S. He, “Classification, identification, and growth stage estimation of microalgae based on transmission hyperspectral microscopic imaging and machine learning,” Opt. Express 28(21), 30686–30700 (2020). [CrossRef]  

49. Z. Yong, S. Zhang, Y. Dong, and S. He, “Broadband nanoantennas for plasmon enhanced fluorescence and Raman spectroscopies,” Prog. Electromagn. Res. 153, 123–131 (2015). [CrossRef]  

50. G. Wang, P. Kulinski, P. Hubert, A. Deguine, D. Petitprez, S. Crumeyrolle, E. Fertein, K. Deboudt, P. Flament, M. W. Sigrist, H. Yi, and W. Chen, “Filter-free light absorption measurement of volcanic ashes and ambient particulate matter using multi-wavelength photoacoustic spectroscopy,” Prog. Electromagn. Res. 166, 59–74 (2019). [CrossRef]  

51. D. DePaoli, É Lemoine, K. Ember, M. Parent, M. Prud’homme, L. Cantin, K. Petrecca, F. Leblond, and D. C. Côté, “Rise of Raman spectroscopy in neurosurgery: a review,” J. Biomed. Opt. 25(05), 1 (2020). [CrossRef]  

52. M. Corti, P. Marino Gallina, D. Cavalli, and G. Cabassi, “Hyperspectral Imaging of Spinach Canopy under Combined Water and Nitrogen Stress to Estimate Biomass, Water, and Nitrogen Content,” Biosyst. Eng. 158, 38–50 (2017). [CrossRef]  

53. C. Fu, H. Arguello, B. M. Sadler, and G. R. Arce, “Compressive Spectral Polarization Imaging by a Pixelized Polarizer and Colored Patterned Detector,” J. Opt. Soc. Am. A 32(11), 2178–2188 (2015). [CrossRef]  

54. Y. Ye, Y. Tan, and G. Jin, “Accurate Measurement for Damage Evolution of Ceramics Caused by Nanosecond Laser Pulses with Polarization Spectrum Imaging,” Opt. Express 27(11), 16360–16376 (2019). [CrossRef]  

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (16)

Fig. 1. (a) The HSDA system, comprising the front optical module, the hyperspectral imager and the structured light stereovision system. (b) Schematic diagram of the HSDA system. (c) Photograph of the HSDA system.
Fig. 2. (a) Spectral image of the mercury-argon calibration source. (b) Relationship between wavelength and pixel index, fitted by a third-order polynomial. (c) Spectral curve of the mercury-argon calibration source measured by our hyperspectral imager after wavelength calibration.
Fig. 3. The principle of operation of the HSDA system.
Fig. 4. Flow chart of the HSDA system, comprising two parts: Part 1 performs the 3D reconstruction and forms the hyperspectral cube; Part 2 fuses them into a 4D data set, with sample results displayed. (a) The original 4D model. (b) Spectrum curves of points A, B, C and D in the original model. (c) The 4D model with point cloud segmentation based on spectral information. (d) Monochrome images at different wavelengths (450 nm–750 nm) from different 3D perspectives.
Fig. 5. (a) Hyperspectral cube of the circle plane array board. (b) Spectral intensities of pixels A (white circle) and B (dark background). (c) 3D point cloud of the circle plane array board with a color mask, shown from different viewing angles in three-dimensional space. (d) Monochrome 3D point clouds of the circle plane array board at varying wavelengths.
Fig. 6. (a) Gray-scale 3D point cloud of the car model from different viewing angles. (b) 3D point cloud with fluorescence overlay from different viewing angles.
Fig. 7. (a) 3D point cloud model of the rearview mirror. (b) 3D point cloud model of the door handle.
Fig. 8. (a) 3D model of the car model with fluorescence. (b) Fluorescence spectra of 3D points A, B and C.
Fig. 9. (a) 4D data of the face model shown at different wavelengths and from different viewing angles. (b) Enlargement of the nose in the 3D point cloud model.
Fig. 10. (a) 4D data of the human face shown at different wavelengths and from different viewing angles. (b) Enlargement of the lips in the 3D point cloud model.
Fig. 11. (a) 3D point cloud of the face model displayed in RGB colors, with light intensities at 406 nm, 540 nm and 600 nm assigned to the blue, green and red components, respectively. (b) 3D point cloud of the human face in RGB colors. (c) Reflected spectrum curves of 3D points A and B. (d) Reflectance spectra of 3D points A and B.
Fig. 12. 3D point clouds of the (a) human face and (b) face model, where the color of the 3D points is determined by the OAIH factor. (c) Reflectance curves of 3D points A and B.
Fig. 13. (a) The green plant. (b) Hyperspectral cube of the measured green plant. (c) 3D point cloud model of the green plant. (d) 4D data of the green plant observed from different angles and at different wavelengths (450–675 nm). (e) Enlargement of the leaf blade margin. (f) Distribution of the point cloud in the blue rectangular region of the leaf blade. (g) Surface patch of the point cloud.
Fig. 14. Hyperspectral cube and 3D point cloud of the plastic plant.
Fig. 15. 3D point clouds of the (a) green plant and (b) plastic plant displayed in RGB colors. (c) Intensity spectra of 3D points A and B. (d) Normalized reflectance spectra of 3D points A and B.
Fig. 16. 3D point clouds of the green plant (a, c) and the plastic plant (b, d), where the color of the 3D points is determined by the OAIC factor, shown from different perspectives.
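The false-color display used in Figs. 11 and 15 (intensities at 406 nm, 540 nm and 600 nm assigned to the blue, green and red channels of the point colors) reduces to band selection and per-channel normalization. The sketch below is a minimal illustration only; the array layout, variable names and normalization choice are assumptions for this example, not code from the HSDA system.

```python
import numpy as np

def false_color(cube: np.ndarray, wavelengths: np.ndarray,
                rgb_nm=(600.0, 540.0, 406.0)) -> np.ndarray:
    """Map three spectral bands of a hyperspectral cube to an RGB image.

    cube        : (rows, cols, bands) intensity data (assumed layout)
    wavelengths : (bands,) calibrated wavelength of each band in nm
    rgb_nm      : wavelengths assigned to the red, green and blue channels
    """
    # Pick the band closest to each requested wavelength.
    idx = [int(np.argmin(np.abs(wavelengths - w))) for w in rgb_nm]
    rgb = cube[:, :, idx].astype(np.float64)
    # Stretch each channel to [0, 1] for display.
    rgb -= rgb.min(axis=(0, 1), keepdims=True)
    rgb /= np.maximum(rgb.max(axis=(0, 1), keepdims=True), 1e-12)
    return rgb

# Example with synthetic data:
# cube = np.random.rand(480, 640, 120)
# wavelengths = np.linspace(400.0, 760.0, 120)
# img = false_color(cube, wavelengths)
```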

Tables (3)

Table 1. Performance and characteristics of the HSDA system

Table 2. 3D geometric measurements of the human face

Table 3. 3D measurements of parts of the green plant (Fig. 13(a))

Equations (13)


$\lambda = a_0 + a_1 y + a_2 y^2 + a_3 y^3$

$c = Hh, \quad H = \begin{bmatrix} 1.678 & 0.099 & 50.914 \\ 0.0123 & 1.660 & 189.697 \\ 1.464\times10^{-5} & 9.506\times10^{-6} & 1.000 \end{bmatrix}$

$I_m = \dfrac{1}{2} + \dfrac{1}{2}\cos\!\left(\varphi_n(u_i^c, v_i^c) + \dfrac{2m\pi}{4}\right)$

$\varphi_n(u_i^c, v_i^c) = \tan^{-1}\!\left[\dfrac{\sum_{m=0}^{3} I_m \sin(2m\pi/4)}{\sum_{m=0}^{3} I_m \cos(2m\pi/4)}\right]$

$u_i^p = \dfrac{\Phi_c^h(u_i^c, v_i^c)}{2\pi} \times W$

$v_i^p = \dfrac{\Phi_c^v(u_i^c, v_i^c)}{2\pi} \times H$

$z_c \begin{bmatrix} u_i^c \\ v_i^c \\ 1 \end{bmatrix} = K_c M_c \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$

$z_p \begin{bmatrix} u_i^p \\ v_i^p \\ 1 \end{bmatrix} = K_p M_p \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$

$r = u_i^c - \left\lfloor \dfrac{u_i^c}{2^8} \right\rfloor \times 2^8$

$g = v_i^c - \left\lfloor \dfrac{v_i^c}{2^8} \right\rfloor \times 2^8$

$b = 60 \times \left\lfloor \dfrac{u_i^c}{2^8} \right\rfloor + 2 \times \left\lfloor \dfrac{v_i^c}{2^8} \right\rfloor$

$\mathrm{OAIH} = \dfrac{\alpha_{540} \times \alpha_{576}}{\alpha_{510} \times \alpha_{564}}$

$\mathrm{OAIC} = \dfrac{\alpha_{450} \times \alpha_{670}}{\alpha_{540} \times \alpha_{540}}$
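As a reading aid for Eqs. (1), (3)–(6), (12) and (13) above, the snippet below sketches the third-order wavelength calibration fit, the four-step phase retrieval with the projector-coordinate mapping, and the OAIH/OAIC indices. It is a minimal illustration under assumed inputs: the calibration line positions, array shapes and function names are hypothetical, and the absolute phase unwrapping needed between Eqs. (4) and (5) is omitted.

```python
import numpy as np

# --- Eq. (1): third-order wavelength calibration, lambda = a0 + a1*y + a2*y^2 + a3*y^3 ---
# Detector rows of known Hg-Ar emission lines and their wavelengths (illustrative values only).
pixel_rows = np.array([112.0, 341.0, 407.0, 655.0, 720.0])
known_nm = np.array([404.7, 435.8, 546.1, 577.0, 579.1])
a3, a2, a1, a0 = np.polyfit(pixel_rows, known_nm, 3)  # coefficients, highest order first
wavelength = lambda y: a0 + a1 * y + a2 * y**2 + a3 * y**3

# --- Eqs. (3)-(4): wrapped phase from four pi/2-shifted fringe images ---
def wrapped_phase(I):
    """I: array of shape (4, rows, cols); returns the phase map in (-pi, pi]."""
    m = np.arange(4).reshape(4, 1, 1)
    num = np.sum(I * np.sin(2.0 * m * np.pi / 4.0), axis=0)
    den = np.sum(I * np.cos(2.0 * m * np.pi / 4.0), axis=0)
    return np.arctan2(num, den)

# --- Eqs. (5)-(6): absolute phase maps -> projector pixel coordinates ---
def projector_coords(phi_h, phi_v, W, H):
    """phi_h, phi_v: unwrapped horizontal/vertical phase maps; W, H: projector resolution."""
    u_p = phi_h / (2.0 * np.pi) * W
    v_p = phi_v / (2.0 * np.pi) * H
    return u_p, v_p

# --- Eqs. (12)-(13): spectral indices from a single reflectance spectrum ---
def band(reflectance, wavelengths, nm):
    """Reflectance value of the band closest to the requested wavelength (nm)."""
    return reflectance[np.argmin(np.abs(wavelengths - nm))]

def oaih(r, wl):
    return (band(r, wl, 540) * band(r, wl, 576)) / (band(r, wl, 510) * band(r, wl, 564))

def oaic(r, wl):
    return (band(r, wl, 450) * band(r, wl, 670)) / (band(r, wl, 540) ** 2)

# Example: wavelength(np.arange(1024)) gives a calibrated axis for a 1024-row detector.
```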