
High dynamic range 3D measurement based on polarization and multispectrum co-modulation


Abstract

Three-dimensional (3D) shape measurement plays an important role in many areas, and fringe projection profilometry (FPP) is a widely used 3D measurement technique due to its non-contact nature and high speed. Real measurement scenarios are often mixtures of specular and diffuse reflections, causing overexposed and underexposed areas to co-exist. Simultaneously measuring overexposed and underexposed areas with FPP remains a challenge. To solve this problem, we propose a mixed reflection model and what we believe to be a novel high dynamic range (HDR) 3D measurement method based on polarization and multispectrum co-modulation. In mixed reflection, the fringe images captured by a polarized color camera are modulated to different intensities in different channels due to the co-modulation effect. By synthesizing all sub-images, high-modulation fringe images are formed, and simultaneous reconstruction of overexposed and underexposed surfaces is finally achieved. Compared with conventional methods, the proposed method is more effective for measuring complex reflection situations, especially when objects with specular and diffuse surfaces exist simultaneously in the scene. We have also proposed what we believe to be a novel no-registration-error calibration framework for multi-channel cameras, which both recovers a significant amount of information in regions with HDR problems and avoids the registration error caused by the physical distances between different channels. Experiments were conducted to verify the effectiveness of the proposed method.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) shape measurement plays an important role in a wide range of areas, including industrial manufacturing, biomedical imaging and on-line inspection [1,2]. A traditional device for measuring the 3D shape of objects is the coordinate measuring machine (CMM) [3], but this system requires point-by-point physical contact with the measured object and is consequently limited in terms of measurement efficiency and applicable scenarios [4]. In contrast, fringe projection profilometry (FPP) has the advantage of being non-contact. Meanwhile, owing to the increasing performance of cameras and projectors, high speed and high measurement accuracy have also become advantages of FPP. A conventional FPP system generally consists of a projector, a camera and a computer. Under computer control, a set of sequentially designed fringe patterns is projected onto the measured object. Meanwhile, the camera captures the deformed fringe patterns, which contain phase information. The phase information is extracted from the captured fringe images by decoding algorithms and then mapped to real 3D coordinates by triangulation; finally, a 3D point cloud is reconstructed [5]. As FPP is a 3D surface measurement technique based on the digital images received by the camera, it is highly robust when the measured object has a diffuse surface.

However, due to the uneven surface reflectivity of the object and the restricted dynamic range of the camera, the technique generally produces phase errors, as shown in the framed areas in Fig. 1; these errors are caused by the low contrast of the fringes in low-reflectivity areas and by saturation in highly bright areas. The camera parameters therefore need to be adjusted to suit surfaces with complex reflectivity. In practice, however, the object may have both low-reflectivity and highly bright areas, so the camera's dynamic range is not sufficient to acquire high-contrast fringes in both areas at the same time. This problem cannot be solved by simply adjusting the camera parameters [6].

Fig. 1. (a-b) The low reflective area, and its absolute phase with severe error; (c-d) The high bright area, and its absolute phase with cavities.

Researchers have explored many solutions to the problem of high dynamic range (HDR) imaging. The mainstream approaches can be classified into four categories: 1) camera-based multiple-exposure methods; 2) projector-based adaptive fringe intensity methods; 3) deep-learning-based methods; 4) methods based on other unconventional equipment.

Methods in the first category perform 3D reconstruction by capturing sequences of images at several different exposures, then selecting the best channel for each pixel and finally using the fused images [7-11]. Although these methods can successfully reconstruct HDR objects, multiple shots are required to capture numerous images, and sometimes pre-analysis is also required, which reduces measurement efficiency.

Methods in the second category build a mathematical model based on fringe images captured in a pre-analysis, then modify the grey level of each pixel in the projection patterns to fit the measured object and avoid saturation in the subsequently captured images [12-15]. However, the disadvantage of this category is that when the measurement scenario changes, the previous projection patterns are no longer applicable and a new model needs to be built to modify the projection patterns.

Methods in the third category train neural networks to enhance the signal-to-noise ratio (SNR) of fringe images or to obtain phase information from a single fringe pattern, etc. [16-18]. Although decent results have been achieved by these methods, their performance relies heavily on the collection of training samples, which is very time-consuming.

Finally, the fourth category addresses HDR problems with non-conventional devices in FPP systems. One approach uses a digital micro-mirror device (DMD) camera to extend the dynamic range [19], but it is difficult to match the imaging unit of a charge-coupled device (CCD) to the micro-mirrors. Other approaches replace the imaging unit with a polarization camera [20] or a hyperspectral camera [21]. Both take advantage of the fact that different channels of the camera yield different measurements, filtering and fusing images of different intensities from each channel pixel by pixel, thus obtaining high-modulation fringe images that can be used to reconstruct 3D point clouds. However, the polarization method can filter out the glare effect but is not effective when the surface of the object is diffuse, while the hyperspectral method can capture various spectral bands, yielding images of different intensities under narrow-band illumination, but cannot filter out the glare effect.

Since the object may have both low-reflectivity and highly bright areas, to overcome the limited dynamic range of conventional methods, we propose a mixed reflection model and an HDR 3D measurement method based on polarization and multispectrum co-modulation. The basic idea is to project monochrome fringe patterns through a polarizer and capture the fringe images on the object with a polarized color camera. As the image intensity is the joint result of all modulation processes, the polarized color camera is utilized to combine the advantages of filtering out the glare effect in polarization imaging and acquiring images of multiple intensities in multispectral imaging, achieving the co-modulation of polarization and multispectrum. Furthermore, the increased data sources solve the problem of polarization loss while measuring diffuse objects and avoid the inability of multispectral imaging to filter out the glare effect. By synthesizing all sub-images of varying intensities to obtain high-modulation fringes, HDR 3D measurement can finally be achieved. At the same time, considering the physical distances between channels, which are generally ignored in conventional methods when synthesizing sub-images, we propose a novel no-registration-error calibration framework that avoids the resulting registration error while mining a significant amount of information in regions with HDR problems.

In our experiments, the simultaneous reconstruction of overexposed and underexposed surfaces verifies the effectiveness of the method. The method is also compared with the multispectral camera method and the monochrome polarization camera method, highlighting the performance improvement.

Section 2 explains the principle of the proposed method and introduces the experimental system and the experimental process. Section 3 presents the experimental results to verify the performance of the proposed method, and Section 4 summarizes the paper.

2. Principle

2.1 Phase-shifting algorithm

The phase-shifting algorithm is the most common phase-retrieval method in 3D measurement. If a four-step phase-shifting algorithm is adopted to acquire the wrapped phase of an object, the intensities of the four fringe patterns In (n = 1, 2, 3, 4) are [22]

$${I_n}(x,y) = A(x,y) + B(x,y)\cos \left[ {\phi (x,y) + (n - 1) \times \frac{\pi }{2}} \right],$$
where x and y are the vertical and horizontal coordinates of the image pixels, A is the background intensity, B is the modulation intensity and ϕ is the phase. Then the wrapped phase ϕw is
$${\phi _w}(x,y) = \arctan \left[ {\frac{{{I_4}(x,y) - {I_2}(x,y)}}{{{I_1}(x,y) - {I_3}(x,y)}}} \right].$$
The wrapped phase ϕw in Eq. (2) only takes values in (-π, π]; therefore, a phase-unwrapping algorithm is required to acquire the absolute phase. In this paper, we adopt the three-frequency heterodyne algorithm to obtain the absolute phase [23]. The three-frequency heterodyne algorithm is a temporal unwrapping method achieved by projecting fringe patterns with three different frequencies. Furthermore, we judge the contrast of the fringes by the modulation intensity B, which from Eq. (1) can be determined by
$$B(x,y) = \frac{{\sqrt {{{[{{I_1}(x,y) - {I_3}(x,y)} ]}^2} + {{[{{I_2}(x,y) - {I_4}(x,y)} ]}^2}} }}{2}.$$
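For concreteness, Eqs. (2) and (3) translate directly into per-pixel array arithmetic. The following is a minimal sketch in Python/NumPy (our illustration, not the authors' code); `I1`-`I4` are assumed to be the four captured phase-shifted images as floating-point arrays.

```python
import numpy as np

def wrapped_phase_and_modulation(I1, I2, I3, I4):
    """Four-step phase shifting: wrapped phase (Eq. 2) and modulation (Eq. 3)."""
    # arctan2 resolves the quadrant, so phi_w falls in (-pi, pi]
    phi_w = np.arctan2(I4 - I2, I1 - I3)
    # modulation intensity B, used later as a per-pixel fringe-contrast measure
    B = 0.5 * np.sqrt((I1 - I3) ** 2 + (I2 - I4) ** 2)
    return phi_w, B
```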

2.2 High dynamic range 3D imaging principle based on polarization and multispectrum co-modulation

The dynamic range of conventional methods is limited, especially when objects with specular and diffuse surfaces exist simultaneously in the scene [24-26]. These surfaces reflect light differently, leading to HDR problems. Specifically, when light reflects from the surface of an object, the vibration direction of the reflected light is deflected relative to the incident light. According to the Fresnel formulas, the degree of deflection is related to the polarization angle of the incident light and the angle of incidence. Therefore, the reflection of light from surfaces can be categorized into three cases.

  • For a specular surface, when linearly polarized light is reflected from it, the reflection is regarded as specular reflection; the angle of incidence is the same for each region, thus the polarization angle of the reflected light is also the same for all regions, and the reflected light remains linearly polarized.
  • For a diffuse surface, when the light is reflected from it, the reflection is regarded as diffuse reflection; the angle of incidence differs between regions, thus the polarization angle of the reflected light varies, and the reflected light is approximately natural light, which we call polarization loss.
  • In practice, the reflection of light from the surface of the measured object is a mixed reflection between specular and diffuse reflection, and the reflected light is regarded as partially polarized light.
From our observation, currently no method can measure the 3D shapes of specular and diffuse surfaces simultaneously. To solve this problem, we propose to utilize a polarized color camera to achieve polarization and multispectrum co-modulation. The proposed experimental system is shown in Figs. 2(a-c). The polarized color camera has polarizer arrays and color filter arrays within it, so we modulate the polarization state of the light by placing a linear polarizer in front of the digital projector. The polarizer arrays have four polarization directions of 45°, 90°, 135°, and 0°, and the color filter arrays have RGGB channels. Each super pixel is a 4 × 4 pixel group and yields 16 different intensities through 16 types of polarization and multispectrum co-modulation. The pixels under the same co-modulation type form the same sub-image; as a result, each sub-image has one quarter of the side length of the raw image. The co-modulation of the proposed method is as follows.
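As an illustrative sketch (not the authors' code; the exact mosaic layout depends on the sensor and is an assumption here), the 16 sub-images can be extracted from the raw mosaic by strided slicing:

```python
import numpy as np

def extract_subimages(raw):
    """Split a raw mosaic frame into its 16 co-modulated sub-images.

    Assumes each 4x4 super pixel interleaves the four polarizer directions
    with an RGGB color filter array; sub-image (r, c) collects the pixel at
    offset (r, c) of every super pixel, so each sub-image has one quarter
    of the side length of the raw image.
    """
    return [raw[r::4, c::4] for r in range(4) for c in range(4)]
```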

Fig. 2. (a) The experimental system of the proposed method; (b) The front view of the real experimental system; (c) The side view of the real experimental system; (d) The spectral modulation curves for surface reflectance and RGB channels; (e) Malus's law.

First, the illumination light becomes linearly polarized when it passes through the polarizer, and its intensity drops to half of the original. When the light reaches the object surface, it is reflected according to the surface spectral reflectance, shown in Fig. 2(d) as the curve R(λ). Finally, when the light reaches the imaging unit, it is co-modulated by the polarizer arrays and color filter arrays of the polarized color camera; the spectral response curves of the different channels are shown in Fig. 2(d) as C(λ), and the effect of the polarizers on the intensity follows Malus's law, as shown in Fig. 2(e). When the measured object has a diffuse surface, such as paper or plaster, polarization loss occurs, whereas the polarization is preserved when the measured object has a specular surface, such as metal. In practice, however, the reflection of light is a mixture of specular and diffuse reflection. Therefore, the intensities of the captured image Iraw are associated with the co-modulation process and can be divided into three cases, as described by

$${I_{raw}} = \left\{ \begin{array}{ll} \int_\lambda R(\lambda )C(\lambda )\,d\lambda \cdot \frac{1}{2}{\cos ^2}{\theta _i}, & \textrm{specular reflection}\\ \int_\lambda R(\lambda )C(\lambda )\,d\lambda \cdot \frac{1}{4}, & \textrm{diffuse reflection}\\ \int_\lambda R(\lambda )C(\lambda )\,d\lambda \cdot \frac{1}{2}\left( \omega {\cos ^2}{\theta _i} + \frac{1 - \omega }{2} \right), & \textrm{mixed reflection} \end{array} \right.,$$
where λ represents the wavelength, θi (i = 1, 2, 3, 4) represents the angle between the polarizers, and ω is a coefficient characterizing the proportion of specular and diffuse reflections in the case of mixed reflection, with ω in (0, 0.5) when diffuse reflection dominates and in (0.5, 1) when specular reflection dominates. In addition, the color mode of the projector is green rather than white, and dithering-modulated binary fringes with the defocusing technique are used to eliminate the gamma effect [27]. Since different polarization channels provide fringe images of different intensities for metal objects, and different color channels provide fringe images of different intensities for diffuse objects, polarization and multispectrum co-modulation is achieved, and HDR 3D imaging can be realized by composing sub-images of different channels.
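A numerical sketch of the mixed-reflection branch of Eq. (4) follows (our illustration; the spectral curves R(λ) and C(λ) are assumed inputs sampled on a common wavelength grid). Note that ω = 1 recovers the specular branch and ω = 0 the diffuse (1/4) branch.

```python
import numpy as np

def mixed_reflection_intensity(R, C, wavelengths, theta, omega):
    """Evaluate the mixed-reflection branch of Eq. (4).

    R, C  : reflectance and channel response sampled on `wavelengths`
    theta : angle (rad) between the projector polarizer and the pixel polarizer
    omega : specular proportion; omega -> 1 gives the specular case,
            omega -> 0 gives the diffuse case (overall factor 1/4)
    """
    spectral = np.trapz(R * C, wavelengths)  # integral of R(lambda)C(lambda) dlambda
    polar = 0.5 * (omega * np.cos(theta) ** 2 + (1.0 - omega) / 2.0)
    return spectral * polar
```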

As shown in Fig. 3, the processing of HDR 3D imaging based on polarized color camera can be further decomposed into four steps:

  • 1) Channel extraction

    Separate the raw image Iraw,k,n into sub-images Ii,k,n, where i = 1, 2, 3, …, 16 indexes the sub-images of the raw image. For all raw images and their sub-images, k = 1, 2, 3 indexes the frequency of the phase-shifting patterns and n = 1, 2, 3, 4 indexes the step of the phase-shifting patterns.

  • 2) Generate decision maps

    Generate intensity modulations Bi,k from Ii,k,n as shown in Algorithm 1, then generate decision maps Mi,k by comparing Bi,k across all 16 sub-images, as shown in Algorithm 2. Each Mi,k is a binary mask: it is 1 where the corresponding Bi,k is the maximum and 0 elsewhere.

  • 3) Image fusion

    Synthesize Ii,k,n into new images Nk,n with Mi,k as shown in Algorithm 3.

  • 4) Phase retrieval and 3D reconstruction

    Retrieve fringe phase with Nk,n and reconstruct the 3D point cloud.

Fig. 3. The processing flow chart of HDR 3D imaging based on polarized color camera.

Algorithm 1: Generate modulation intensities
Input: Sub-images Ii,k,n
Output: Modulation intensities Bi,k
Pixel by pixel calculate the modulation intensity Bi,k;
for frequency k = 1 to 3 and sub-image number i = 1 to 16 do $\;\;\;{B_{i,k}}(x,y) = \left\{ \begin{array}{ll} \frac{\sqrt {{{[{I_{i,k,1}}(x,y) - {I_{i,k,3}}(x,y)]}^2} + {{[{I_{i,k,2}}(x,y) - {I_{i,k,4}}(x,y)]}^2}} }{2}, & {I_{i,k,n}}(x,y) < 255\textrm{ for all }n\\ 0, & \textrm{otherwise} \end{array} \right.$ end
Algorithm 2: Generate decision maps
Input: Modulation intensities Bi,k
Output: Decision maps Mi,k
Pixel-by-pixel compare modulation intensities Bi,k to generate decision maps Mi,k;
for frequency k = 1 to 3 and sub-image number i = 1 to 16 do $\;\;\;{M_{i,k}}(x,y) = \left\{ {\begin{array}{{cc}} {1,}&{{B_{i,k}}(x,y) = \max \{{{B_{1,k}}(x,y),\ldots ,{B_{16,k}}(x,y)} \}}\\ {0,}&{otherwise} \end{array}} \right..$ end
Algorithm 3: Image fusion
Input: Sub-images Ii,k,n and decision maps Mi,k
Output: Synthesized new images Nk,n
Synthesize all sub-images Ii,k,n and the corresponding decision maps Mi,k into new images Nk,n;
for frequency k = 1 to 3 and step n = 1 to 4 do $\;\;\;{N_{k,n}}(x,y) = \sum\limits_{i = 1}^{i = 16} {{I_{i,k,n}}({x,y} )\times {M_{i,k}}({x,y} ).}$ end
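Algorithms 1-3 amount to a per-pixel winner-take-all fusion and vectorize naturally. The following compact sketch is our illustration, not the authors' code; the stacking of the sub-images into a single array of shape (16, 3, 4, H, W) is an assumption of this sketch.

```python
import numpy as np

def fuse_subimages(I):
    """Vectorized sketch of Algorithms 1-3.

    I : array of shape (16, 3, 4, H, W), indexed as sub-image i,
        frequency k, phase step n. Returns fused images N of shape (3, 4, H, W).
    """
    I = I.astype(np.float64)
    # Algorithm 1: per-pixel modulation, set to 0 where any of the 4 steps saturates
    B = 0.5 * np.sqrt((I[:, :, 0] - I[:, :, 2]) ** 2
                      + (I[:, :, 1] - I[:, :, 3]) ** 2)      # shape (16, 3, H, W)
    B[(I >= 255).any(axis=2)] = 0.0
    # Algorithm 2: decision maps, 1 where B is maximal over the 16 sub-images
    # (ties, e.g. fully saturated pixels, would need a tie-break in practice)
    M = (B == B.max(axis=0, keepdims=True)).astype(np.float64)
    # Algorithm 3: fuse the sub-images using the decision maps
    return (I * M[:, :, None]).sum(axis=0)                   # shape (3, 4, H, W)
```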

2.3 No-registration-error calibration framework for multi-channel camera

In the reconstruction method described above, the fringe image synthesized from different sub-images has high modulation because pixels at the same location in different sub-images are under different types of polarization and multispectrum co-modulation. Generally, these synthesized fringes can be used to reconstruct HDR depth maps or 3D point clouds; this is in fact the approach adopted by conventional multi-channel methods [20,21]. However, inside the raw image there are physical distances between different channels, as shown in the registration-error part of Fig. 4. For example, the physical distance between pixels at the same location in different sub-images (such as pixel A of 90°-R and pixel A' of 0°-G2) is non-negligible. This physical distance introduces registration error and thus deteriorates the 3D reconstruction accuracy.

Fig. 4. The registration-error between channels and the no-registration-error calibration framework for multi-channel cameras.

Therefore, in order to solve the problem of HDR 3D reconstruction without ignoring the physical distances between different channels, we propose a novel no-registration-error calibration framework which does not reduce the image resolution. In the framework, we consider each channel as a camera and calibrate it individually. The proposed framework has the following steps as shown in the data preparation and reconstruction part of Fig. 4:

  • 1) In order to extract reliable locations in each super pixel, we identify the several locations with the highest modulation in each super pixel to generate a modulation mask (see the sketch after this list). We then utilize the mask to separate the raw image into masked sub-images by keeping the data at the highest-modulation locations and discarding the data at the other positions.
  • 2) Calibrate each channel as a separate camera and obtain the calibration parameters.
  • 3) Obtain the absolute phase and the height map of each channel. For our case, the phase-height mapping algorithm is utilized to establish the phase-height mapping table by reference planes and transfer the absolute phase into the height maps.
  • 4) Reconstruct the 3D point cloud of each channel by utilizing the corresponding height map and calibration parameter. Fuse the 3D point clouds into the complete full-resolution 3D point cloud.
In different applications, the number M of highest-modulation locations varies according to specific needs (for our case, M = 2), and the sparsity of the 3D point cloud changes accordingly. Although the 3D point cloud tends to become sparse with this method, we have achieved no-registration-error HDR 3D reconstruction. Furthermore, this no-registration-error calibration framework can be applied to any other multi-channel camera based on a Bayer pattern; in other words, it is a generalized calibration framework for multi-channel cameras when full-resolution reconstruction is needed.
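As forecast in step 1), the modulation-mask generation can be sketched as follows (our illustration; the array shapes and the block-reshape approach are assumptions of this sketch, not the authors' implementation).

```python
import numpy as np

def modulation_mask(B, M=2):
    """Keep the M highest-modulation locations in each 4x4 super pixel.

    B : modulation image of shape (H, W), with H and W multiples of 4.
    Returns a boolean mask of shape (H, W), True at the kept locations
    (ties may keep a few extra pixels).
    """
    H, W = B.shape
    # view B as one row of 16 values per super pixel: shape (H/4, W/4, 16)
    blocks = (B.reshape(H // 4, 4, W // 4, 4)
               .transpose(0, 2, 1, 3)
               .reshape(H // 4, W // 4, 16))
    # per super pixel threshold = M-th largest modulation
    thresh = np.sort(blocks, axis=-1)[..., -M][..., None]
    keep = blocks >= thresh
    # undo the block view back to an (H, W) mask
    return (keep.reshape(H // 4, W // 4, 4, 4)
                .transpose(0, 2, 1, 3)
                .reshape(H, W))
```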

3. Experiments

As shown in Figs. 2(a-c), the experimental system mainly consists of a digital projector (model: DLP4500), a polarized color camera with a resolution of 2448 × 2048 pixels (model: BFS-U3-51S5PC-C from FLIR) and a linear polarizer (model: LBTEK FLP-VIS-50, 400-700 nm). We capture fringe images in a controlled environment to avoid disturbance from ambient illumination. The fringe frequency needs to be sufficiently high to acquire the absolute phase of the whole field, while the fringe pitch in the captured image should be sufficiently wide to preserve the sinusoidal property of the fringes. Accordingly, the fringe pitches used for the three-frequency heterodyne algorithm are 59, 64, and 70, respectively. This selection of fringe pitches helps prevent unwrapping failures caused by phase errors [28]. Besides, we use the phase-height mapping algorithm [29,30] to obtain a phase-height mapping table which transfers the absolute phase into height.
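As a worked check of this pitch selection (our arithmetic, not from the paper): the heterodyne beat of two fringes with pitches pa and pb has the equivalent pitch pa·pb/(pb − pa), so beating the pairs and then beating the results spans far more than the projector width, allowing whole-field unwrapping.

```python
# Pairwise beat pitches for the three projected pitches, then the final beat.
p1, p2, p3 = 59.0, 64.0, 70.0
p12 = p1 * p2 / (p2 - p1)        # ~755.2 pixels
p23 = p2 * p3 / (p3 - p2)        # ~746.7 pixels
p123 = p12 * p23 / (p12 - p23)   # ~66000 pixels, far wider than the projected field
print(p12, p23, p123)
```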

To verify the validity of the proposed method, we first measured a set of objects consisting of a metal sheet and a plastic board with a black coating, as shown in Fig. 5(a). The surface of the metal sheet is so smooth that it causes an extensive area of specular reflection during measurement, while the black plastic board is a regular cube whose surface is not smooth enough to cause specular reflection. The dynamic range of a normal monochrome camera is not sufficient to measure them simultaneously. Figure 5(b) shows the sub-images of the captured raw fringe image and Fig. 5(c) shows the synthesized image. It is worth mentioning that, to make the fringes on the black object in Fig. 5(c) visible, the black object is labelled with the blue box and shown with increased contrast. The reconstructed 3D point clouds in two views are shown in Figs. 5(d) and (e). The intact smooth surfaces of the metal sheet and the plastic board are both reconstructed, showing that the synthesized images yield high-quality 3D reconstruction results.

Fig. 5. (a) The measured object; (b) The sub-images of the captured raw fringe image, which are used to generate the decision maps and form the synthesized image; (c) The synthesized image, in which the black object is labelled with the blue box and shown with increased contrast; (d-e) The reconstructed 3D point cloud.

To compare the performance of the proposed method with the multispectral camera method and the monochrome polarization camera method, we then select two sets of sub-images from all channels to simulate the multispectral camera and the monochrome polarization camera, respectively called the multispectrum set and the polarization set. The multispectrum set consists of 4 sub-images with the same polarization direction but different colors, while the polarization set consists of 4 sub-images with the same color but different polarization directions. We separately synthesized the images and then calculated the absolute phases of these two sets to compare with the set utilizing all sub-images, called the co-modulation set. Figs. 6(a)-(c) show the synthesized images of the three sets. The absolute phase maps and the corresponding cross-sections are shown in Figs. 6(d)-(i). Compared with the co-modulation set, the synthesized images of the multispectrum set and the polarization set exhibit different phase errors, such as cavities caused by overexposure and low fringe contrast caused by underexposure, and these errors can be clearly observed in the corresponding absolute phases and cross-sections. Specifically, the intact smooth phase is shown in Fig. 6(i); the phase in Fig. 6(g) has a lost section corresponding to the overexposed area in the raw image, while the phase in Fig. 6(h) has a chaotic section corresponding to the underexposed area in the raw image. The results validate the discussion in the previous section: the multispectral camera cannot fully filter out the glare effect and consequently causes phase cavities, while the monochrome polarization camera suffers polarization loss when the exposure is too low, leading to phase errors. The comparison shows that the proposed method meets both the demands of filtering out the glare effect and avoiding polarization loss in mixed reflection, while the spectrum-modulation-based method and the polarization-modulation-based method can each address only one of these HDR problems. However, the super pixel of the polarized color camera in the proposed method is a 4 × 4 pixel group, whereas the super pixel of a multispectral camera or a monochrome polarization camera is usually a 2 × 2 or 3 × 3 pixel group; this brings a more intense mosaic effect and consequently reduces the resolution.

Fig. 6. (a-c) The synthesized images of the multispectrum set, the polarization set and the co-modulation set; (d-f) The absolute phase maps of the sets; (g-i) The absolute phase cross-sections corresponding to the red lines of the sets.

To further clarify the robustness of the proposed method, further experiments were conducted to test the system's measurement performance on objects with HDR problems. As shown in Figs. 7(a-d), we measured different metal objects and black objects. The metal objects have complex surfaces with holes and grooves of different scales, leading to specular reflections of different levels and overexposed areas of different sizes. Unlike the cubic plastic board with a regular shape, the charger has a height difference in its own structure. The surfaces of the black objects are not smooth enough to cause specular reflections. The black objects, shown with increased contrast, are again labelled with blue boxes. Figs. 7(e-h) show the brightest sub-images of the objects, in which fringe disappearance is clearly visible on the metal objects while the fringes on the black objects are barely visible. The synthesized images and the 3D point clouds are shown in Figs. 7(i-l) and (m-p); they show that the synthesized images have high-modulation fringes and that high-quality 3D point clouds of both overexposed and underexposed objects are reconstructed.

Fig. 7. (a-d) The measured objects; (e-h) The brightest sub-images of the objects; (i-l) The synthesized images; (m-p) The reconstructed 3D point clouds.

To demonstrate the reconstruction performance of the no-registration-error calibration framework, two groups of objects which cause HDR problems were measured and reconstructed into point clouds, as shown in Figs. 8(a-f). The black object is still the charger, because the height difference between its parts can be visualized very well in the point cloud. The metal objects are a perforated sheet causing multiple spots of overexposure and a disk causing regional overexposure over a large area. Figs. 8(a-b) show the measured objects and Figs. 8(c-d) illustrate the specifics of overexposure and underexposure on the surface of the measured objects when a fringe pattern is projected. The point clouds are shown in Figs. 8(e-f); the representative overexposed areas of the two metal objects are also zoomed in and shown in the boxes on the right of each point cloud. In the boxes, the point cloud in Fig. 8(e) is zoomed in further to show the sparsity, while that in Fig. 8(f) shows the information mined in the overexposed region. It is worth mentioning that the uneven color in the zoomed-in point cloud in Fig. 8(f) is caused by visual color variations due to point cloud sparsity rather than height variations. The experimental results show that the areas with HDR problems are successfully reconstructed rather than left without information. Although the 3D point cloud tends to become sparse with this method, we have achieved full-resolution 3D reconstruction while solving the HDR problem, without the registration error caused by the physical distances between the different channels. In areas with HDR problems such as overexposure or underexposure, we obtained a significant amount of data. In addition, this no-registration-error HDR 3D reconstruction model actually sparsifies the 3D point cloud in regions with no HDR problem, so we are exploring a new model which guarantees the integrity of the point cloud in normal regions while also acquiring a significant amount of data in HDR regions.

Fig. 8. (a-b) The measured objects; (c-d) The specifics of overexposure and underexposure on the surface of the measured objects when projecting a fringe pattern; (e-f) The reconstructed 3D point clouds, and the representative overexposed areas which are zoomed in to be shown in the boxes; (g-h) The reconstructed 3D point clouds which utilized the fusion method, and the representative overexposed areas which are zoomed in to be shown in the boxes.

We have provided two data processing methods for the proposed HDR 3D measurement method based on polarization and multispectrum co-modulation. The first, the fusion method, obtains a highly modulated image by fusing different sub-images, which is equivalent to considering 1/16 of the total raw data as available and suffers from the problems associated with downsampling. The point clouds obtained with the fusion method are shown in Figs. 8(g-h). The second is the no-registration-error calibration framework, which does not require downsampling but yields a sparse 3D point cloud; the sparsity of the point cloud varies with the number M of highest-modulation locations. In this experiment, we kept the two channels with the highest modulation (M = 2), which means we considered 1/8 of the total raw data as available. For the two groups of measured objects, the numbers of available pixels obtained are 56245 and 94828, both more than twice those of the fusion method (23876 and 39761). Based on this result, we conclude that the no-registration-error calibration framework is effective in acquiring more data. Owing to the downsampling, although the point cloud of the fusion method is complete, when it is normalized to the original size it looks even sparser than the point cloud of the no-registration-error calibration framework. The comparison of the available point data supports this conclusion; moreover, the zoomed-in views of the point clouds in Figs. 8(e-h) visualize the higher performance of the no-registration-error calibration framework compared with the fusion method.
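A quick sanity check of this data-volume comparison (our arithmetic): keeping the M = 2 best channels per super pixel retains 2/16 = 1/8 of the raw pixels, twice the 1/16 retained by the fusion method, consistent with the reported counts.

```python
# Available-point counts reported above: fusion vs. no-registration-error framework.
fusion = [23876, 39761]
framework = [56245, 94828]
for f, n in zip(fusion, framework):
    print(f"ratio = {n / f:.2f}  (>= 2 expected)")  # prints 2.36 and 2.38
```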

Furthermore, we have also set up an experiment to quantitatively evaluate the measurement accuracy of the system. We measured a standard step-shaped workpiece which has a height difference of 30 mm between successive steps. The workpiece and its depth map are shown in Fig. 9(a) and Fig. 9(b), and the measured data of the middle row of the planes are shown in Fig. 9(c). The mean height and root-mean-square error (RMSE) of each plane are listed in Table 1. The RMSE of plane 2 is 0.0447 mm, which is smaller than that of plane 1 (0.1342 mm) and plane 3 (0.0623 mm), indicating that plane 2 has better flatness than the other two planes. This is because plane 2 is closest to the moderately defocused position of the projector. Furthermore, the step height differences calculated from the mean heights are 30.1772 mm and 30.0871 mm, respectively. The results show that the system has high 3D measurement performance, and a better calibration method, if adopted, could achieve higher accuracy, which will be explored in our future research.

Fig. 9. (a) The standard step-shaped workpiece; (b) The depth map of the standard step-shaped workpiece; (c) The measured data of the middle row of the planes.

Table 1. Measurement accuracy results of the system
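The flatness figures in Table 1 correspond to a standard least-squares plane-fit evaluation. A sketch of such an evaluation follows (our illustration, not necessarily the authors' exact procedure).

```python
import numpy as np

def plane_mean_and_rmse(points):
    """Fit a least-squares plane z = a*x + b*y + c to one step surface and
    report the mean height and the RMSE of the residuals.

    points : (N, 3) array of measured (x, y, z) coordinates on one plane.
    """
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return points[:, 2].mean(), np.sqrt((residuals ** 2).mean())
```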

4. Conclusion

In this paper, we have studied the problem of 3D measurement on surfaces with complex reflection situations. We conclude that real measurement scenarios are often mixtures of specular and diffuse reflections. Based on this finding, a novel HDR 3D measurement method based on polarization and multispectrum co-modulation was developed to solve the mixed-reflection HDR problems. The experiments demonstrate the high measurement performance of the proposed method.

The major novelties of the proposed method are as follows:

  • 1) A mixed reflection theoretical model based on polarization and multispectrum co-modulation has been established for the complex reflections that exist in real measurement scenarios. Furthermore, we have implemented the multi-channel fusion algorithm and the co-modulation strategy of polarization and multispectrum. Experiments have demonstrated that the proposed method can simultaneously meet both the demands of avoiding polarization loss and filtering out the glare effect, which makes it more effective than conventional methods in complex reflections, especially when specular and diffuse areas exist simultaneously. It is therefore suitable for many mixed reflection scenarios such as on-line chip inspection and metal workpiece manufacturing.
  • 2) We propose a novel no-registration-error calibration framework for multi-channel cameras; this framework solves the HDR problem without the registration error caused by the physical distances between the different channels in the raw image.

Funding

Sichuan Science and Technology Program (2023NSFSC0496); Open Fund of Key Laboratory of Icing and Anti/De-icing (IADL20200308); National Natural Science Foundation of China (62075143).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018). [CrossRef]  

2. Z. Zhang, Y. Wang, S. Huang, et al., “Three-dimensional shape measurements of specular objects using phase-measuring deflectometry,” Sensors 17(12), 2835 (2017). [CrossRef]  

3. A. J. Spyridi and A. A. G. Requicha, “Accessibility analysis for the automatic inspection of mechanical parts by coordinate measuring machines,” Proc. IEEE 2, 1284–1289 (1990). [CrossRef]  

4. J. Wang, Y. Li, Y. Ji, et al., “Deep Learning-Based 3D Measurements with Near-Infrared Fringe Projection,” Sensors 22(17), 6469 (2022). [CrossRef]  

5. Y. Li, J. Qian, S. Feng, et al., “Composite fringe projection deep learning profilometry for single-shot absolute 3D shape measurement,” Opt. Express 30(3), 3424–3442 (2022). [CrossRef]  

6. J. Zhang, B. Luo, F. Li, et al., “Single-exposure optical measurement of highly reflective surfaces via deep sinusoidal prior for complex equipment production,” IEEE Trans. Ind. Inform. 19(2), 2039–2048 (2023). [CrossRef]  

7. S. Zhang and S. Yau, “High dynamic range scanning technique,” Opt. Eng. 48(3), 030505 (2009). [CrossRef]  

8. L. Rao and F. Da, “High dynamic range 3D shape determination based on automatic exposure selection,” J. Vis. Commun. Image Represent 50, 217–226 (2018). [CrossRef]  

9. K. Wu, J. Tan, and C. Liu, “A novel approach to obtain optimal exposure for 3D shape reconstruction of high dynamic range objects,” Meas. Sci. Technol. 32(9), 095206 (2021). [CrossRef]  

10. H. Jiang, H. Zhao, and X. Li, “High dynamic range fringe acquisition: A novel 3-D scanning technique for high-reflective surfaces,” Opt. Lasers Eng. 50(10), 1484–1493 (2012). [CrossRef]  

11. H. Zhao, X. Liang, X. Diao, et al., “Rapid in-situ 3D measurement of shiny object based on fast and high dynamic range digital fringe projector,” Opt. Lasers Eng. 54, 170–174 (2014). [CrossRef]  

12. D. Li and J. Kofman, “Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement,” Opt. Express 22(8), 9887–9901 (2014). [CrossRef]  

13. C. Chen, N. Gao, X. Wang, et al., “Adaptive pixel-to-pixel projection intensity adjustment for measuring a shiny surface using orthogonal color fringe pattern projection,” Meas. Sci. Technol. 29(5), 055203 (2018). [CrossRef]  

14. H. Lin, J. Gao, Q. Mei, et al., “Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement,” Opt. Express 24(7), 7703–7718 (2016). [CrossRef]  

15. H. Lin, J. Gao, Q. Mei, et al., “Three-dimensional shape measurement technique for shiny surfaces by adaptive pixel-wise projection intensity adjustment,” Opt. Lasers Eng. 91, 206–215 (2017). [CrossRef]  

16. L. Zhang, Q. Chen, C. Zuo, et al., “High-speed high dynamic range 3D shape measurement based on deep learning,” Opt. Lasers Eng. 134, 106245 (2020). [CrossRef]  

17. X. Liu, W. Chen, H. Madhusudanan, et al., “Optical measurement of highly reflective surfaces from a single exposure,” IEEE Trans. Ind. Inform. 17(3), 1882–1891 (2021). [CrossRef]  

18. S. Feng, C. Zuo, Y. Hu, et al., “Deep-learning-based fringe-pattern analysis with uncertainty estimation,” Optica 8(12), 1507–1510 (2021). [CrossRef]  

19. S. Ri, M. Fujigaki, and Y. Morimoto, “Intensity range extension method for three-dimensional shape measurement in phase-measuring profilometry using a digital micromirror device camera,” Appl. Opt. 47(29), 5400–5407 (2008). [CrossRef]  

20. B. Salahieh, Z. Chen, J. Rodriguez, et al., “Multi-polarization fringe projection imaging for high dynamic range objects,” Opt. Express 22(8), 10064–10071 (2014). [CrossRef]  

21. Y. Wang, J. Zhang, and B. Luo, “High dynamic range 3D measurement based on spectral modulation and hyperspectral imaging,” Opt. Express 26(26), 34442–34450 (2018). [CrossRef]  

22. Y. Surrel, “Design of algorithms for phase measurements by the use of phase stepping,” Appl. Opt. 35(1), 51–60 (1996). [CrossRef]  

23. C. Reich, R. Ritter, and J. Thesing, “3-D shape measurement of complex objects by combining photogrammetry and fringe projection,” Opt. Eng. 39(1), 224–231 (2000). [CrossRef]  

24. T. Li, S. Zhang, Y. Hu, et al., “High dynamic range 3D measurements based on space–time speckle correlation and color camera,” Opt. Express 29(22), 36302–36320 (2021). [CrossRef]  

25. S. Feng, L. Zhang, C. Zuo, et al., “High dynamic range 3D measurements with fringe projection profilometry: A review,” Meas. Sci. Technol. 29(12), 122001 (2018). [CrossRef]  

26. S. K. Nayar, K. Ikeuchi, and T. Kanade, “Surface reflection: physical and geometrical perspectives,” IEEE Trans. Pattern Anal. Machine Intell. 13(7), 611–634 (1991). [CrossRef]  

27. Y. Wang and S. Zhang, “Three-dimensional shape measurement with binary dithered patterns,” Appl. Opt. 51(27), 6631–6636 (2012). [CrossRef]  

28. C. Reich, R. Ritter, and J. Thesing, “White light heterodyne principle for 3D-measurement,” Proc. SPIE 3100(1), 236–244 (1997). [CrossRef]  

29. W. Guo, Z. Wu, R. Xu, et al., “A fast reconstruction method for three-dimensional shape measurement using dual-frequency grating projection and phase-to-height lookup table,” Opt. Laser Technol. 112, 269–277 (2019). [CrossRef]  

30. W. Zhao, X. Su, and W. Chen, “Discussion on accurate phase–height mapping in fringe projection profilometry,” Opt. Eng. 56(10), 1 (2017). [CrossRef]  
