3D shape measurement in the presence of strong interreflections by using single-pixel imaging in a camera-projector system

Open Access

Abstract

Optical 3D shape measurement methods, such as fringe projection profilometry (FPP), are popular for recovering the surface of an object. However, traditional FPP cannot measure regions that contain strong interreflections, which leads to failure of the 3D shape measurement. In this study, a method based on single-pixel imaging (SI) is proposed to measure 3D shapes in the presence of interreflections. SI is utilized to separate direct illumination from indirect illumination. Then, the corresponding points between the pixels of a camera and a projector can be obtained through the direct illumination. The 3D shapes of regions with strong interreflections can be reconstructed with the obtained corresponding points based on triangulation. Experimental results demonstrate that the proposed method can separate direct and indirect illumination and measure 3D objects with interreflections.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical 3D shape measurement methods are widely used in industrial applications because of their advantages of noncontact operation, speed, and high precision [1]. However, interreflections are likely to occur when objects with shiny concave surfaces are measured. The light received by camera pixels then contains not only direct illumination but also indirect (also termed global) illumination caused by interreflections between surfaces [2]. With interreflections, the measurement of 3D objects with shiny surfaces becomes an extremely challenging problem for optical metrology methods such as fringe projection profilometry (FPP) [3].

Several methods have been proposed to measure 3D shapes when interreflections occur between surfaces. For example, spraying matte powder before measurement is usually adopted to change the reflection of the measured object from shiny to diffuse [4]. Although this method is efficient and convenient, the thickness of the powder affects measurement precision. In addition, external spraying is not allowed for many high-precision parts, so spraying matte powder is often inapplicable.

Linearly polarized light can be utilized to suppress indirect illumination, because light that is reflected once remains polarized, whereas multiply reflected light becomes depolarized [5,6]. When the angles of the polarizers placed in front of the camera and projector lenses are adjusted, interreflections can be blocked to some degree while directly reflected light passes through. Although interreflections can be reduced by adding polarizers, they cannot be completely eliminated. Indirect illumination remains nearly constant when high-frequency patterns are projected [7]. As such, high-frequency patterns can be applied to modulate indirect illumination and suppress interreflections [7,8], but this approach does not work well when strong interreflections occur. Adaptive projection methods have been introduced to detect regions with errors caused by interreflections, and an iterative algorithm is applied to reduce interreflections within these regions [9,10]. However, if interreflections are too strong, the iterations may not converge. Furthermore, structured light transport, which utilizes an epipolar constraint to separate direct and indirect illumination, has been introduced [11,12]. However, the separation fails when the non-epipolar dominance assumption is not met. Frequency shift triangulation has also been proposed to separate direct and indirect illumination and robustly capture 3D shapes in the presence of strong interreflections [13].

We have previously introduced regional fringe projection [14]. In this method, concave surfaces are divided into several regions such that interreflections cannot occur within any single region. Consequently, interreflections are suppressed by projecting patterns onto each region separately. However, this method requires an initial 3D shape of the object to generate the regional projection masks. Thus, epipolar imaging with speckle patterns was proposed to obtain coarse measurement results when an initial 3D model is unavailable [15]. However, regional fringe projection removes only single-bounce indirect illumination, whereas second-bounce or multibounce indirect illumination cannot be totally eliminated.

In this study, a high-precision 3D shape measurement method for scenes with strong interreflections is presented by using single-pixel imaging (SI) [16–18], which reconstructs the light transport coefficients of all camera pixels simultaneously. In this way, indirect illumination can be separated entirely from direct illumination, regardless of the number of bounces. Each pixel of the camera's pixelated array is viewed as an SI unit, and the SI result of each pixel corresponds to its light transport coefficients. These coefficients represent the ratio of the light received by the camera pixel from each projector pixel and contain two components, i.e., direct and indirect illumination. Direct illumination can be further identified on the basis of the epipolar constraint [19]. The corresponding points between the pixels of the camera and the projector can then be determined from the location of the direct illumination. Hence, 3D reconstruction can be achieved by using the obtained corresponding points. Experimental results show that the SI-based 3D measurement method can be applied to objects with strong interreflections.

This paper is organized as follows. The principles of the proposed method are explained in Section 2, the experimental setup and results are described in Section 3, and conclusions are presented in Section 4.

2. Principle

In FPP, surfaces without indirect illumination, i.e., the region enclosed by the yellow circle in Fig. 1(a), can be measured directly. However, regions with strong interreflections, i.e., the region enclosed by the yellow circle in Fig. 1(b), may suffer from fringe aliasing. Consequently, a phase error may be introduced that causes the 3D shape measurement to fail.


Fig. 1. Light paths of an object in FPP. (a) The light path of the region without interreflections only has a direct light path (blue solid line). (b) The light paths of the region with strong interreflections contain a direct light path (blue solid line) and an indirect light path (red dashed line).


2.1 Phase error introduced by strong interreflections in FPP

In FPP, the projector projects N-step phase-shifting fringe patterns to establish the correspondences between camera and projector pixels [20–22]. The projected fringe patterns ${P_n}({n = 0,1, \cdots ,N - 1;N \ge 2} )$ can be expressed as follows:

$${P_n}({x^{\prime},y^{\prime}} )= a + b\cos \left[ {\phi ({x^{\prime},y^{\prime}} )+ \frac{{2n\pi }}{N}} \right] = a + b\cos \left[ {2\pi {f_{x^{\prime}}}x^{\prime} + 2\pi {f_{y^{\prime}}}y^{\prime} + \frac{{2n\pi }}{N}} \right], $$
where $({x^{\prime},y^{\prime}} )$ are the coordinates on the projector image plane; a is the DC term, also termed the average intensity; b is the amplitude; $\phi ({x^{\prime},y^{\prime}} )$ is the initial phase; and ${f_{x^{\prime}}}$ and ${f_{y^{\prime}}}$ are the spatial frequencies along $x^{\prime}$ and $y^{\prime}$.
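As a minimal illustration of Eq. (1), the sketch below generates an N-step phase-shifting pattern sequence with NumPy. The resolution, fringe frequency, and intensity constants in the usage example are illustrative assumptions rather than the values used in this study.

```python
import numpy as np

def phase_shift_patterns(W, H, fx, fy, N=4, a=0.5, b=0.5):
    """Generate N-step phase-shifting fringe patterns following Eq. (1).

    W, H   : projector resolution in pixels
    fx, fy : spatial frequencies along x' and y' (cycles per pixel)
    a, b   : average intensity (DC term) and modulation amplitude
    Returns an array of shape (N, H, W) with values in [a - b, a + b].
    """
    yy, xx = np.mgrid[0:H, 0:W]                      # projector pixel grid (y', x')
    phi = 2 * np.pi * (fx * xx + fy * yy)            # initial phase phi(x', y')
    shifts = 2 * np.pi * np.arange(N) / N            # phase shifts 2*n*pi/N
    return a + b * np.cos(phi[None, :, :] + shifts[:, None, None])

# Example: four vertical fringe patterns with a 16-pixel period for a 1920 x 1080 projector.
patterns = phase_shift_patterns(1920, 1080, fx=1 / 16, fy=0, N=4)
```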

Phase-shifting fringe patterns are projected onto the scene with strong interreflections, and the camera is used to collect the light reflected by the scene. The intensity of the captured images in the presence of strong interreflections can be expressed as follows:

$${I_n}({x,y} )= O({x,y} )\textrm{ + }\sum\limits_{x^{\prime} = 0}^{W - 1} {\sum\limits_{y^{\prime} = 0}^{H - 1} {[{h({x^{\prime},y^{\prime};x,y} )\times {P_n}({x^{\prime},y^{\prime}} )} ]} }, $$
where $({x,y} )$ are the coordinates on the camera image plane; ${I_n}({x,y} )$ is the intensity at camera pixel $({x,y} )$; $O({x,y} )$ is the intensity of ambient light; W and H are the horizontal and vertical resolutions of the projector, respectively; and $h(x^{\prime},y^{\prime};x,y)$ denotes the light transport coefficient between projector pixel $({x^{\prime},y^{\prime}} )$ and camera pixel $({x,y} )$, which indicates the ratio of the light energy received by camera pixel $({x,y} )$ from projector pixel $({x^{\prime},y^{\prime}} )$. By using a phase-shifting algorithm, the wrapped phase $\phi ({x,y} )$ of camera pixel $({x,y} )$ in the presence of strong interreflections can be calculated as follows:
$$\phi ({x,y} )= \arctan \left[ { - \frac{{\sum\limits_{n = 0}^{N - 1} {{I_n}({x,y} )\sin \left( {\frac{{2n\pi }}{N}} \right)} }}{{\sum\limits_{n = 0}^{N - 1} {{I_n}({x,y} )\cos \left( {\frac{{2n\pi }}{N}} \right)} }}} \right] = \arctan \left[ {\frac{{\sum\limits_{x^{\prime} = 0}^{W - 1} {\sum\limits_{y^{\prime} = 0}^{H - 1} {h({x^{\prime},y^{\prime};x,y} )\sin \phi ({x^{\prime},y^{\prime}} )} } }}{{\sum\limits_{x^{\prime} = 0}^{W - 1} {\sum\limits_{y^{\prime} = 0}^{H - 1} {h({x^{\prime},y^{\prime};x,y} )\cos \phi ({x^{\prime},y^{\prime}} )} } }}} \right]. $$
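As a hedged illustration of the phase-shifting calculation in Eq. (3), the following sketch computes the wrapped phase of every camera pixel from a stack of N captured images; the use of np.arctan2 to obtain a phase in (−π, π] is an implementation choice, not something specified above.

```python
import numpy as np

def wrapped_phase(images):
    """Compute the wrapped phase from an (N, H, W) stack of captured images, Eq. (3)."""
    N = images.shape[0]
    n = np.arange(N)
    num = -np.sum(images * np.sin(2 * np.pi * n / N)[:, None, None], axis=0)
    den = np.sum(images * np.cos(2 * np.pi * n / N)[:, None, None], axis=0)
    return np.arctan2(num, den)  # wrapped phase of each camera pixel, in (-pi, pi]
```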

For scene points with strong interreflections, the light received by camera pixels can be divided into direct and indirect illumination and described as follows:

$${I_n}({x,y} )= I_n^d({x,y} )+ I_n^i({x,y} ), $$
where $I_n^d({x,y} )$ and $I_n^i({x,y} )$ are the intensities of direct and indirect illumination, respectively. Assuming that the projector pixel $({{{x^{\prime}}_d},{{y^{\prime}}_d}} )$ directly illuminates the scene point imaged at camera pixel $({x,y} )$, the direct illumination can be further formulated as:
$$I_n^d({x,y} )= h({x^{\prime}_d},{y^{\prime}_d};x,y){P_n}({{{x^{\prime}}_d},{{y^{\prime}}_d}} ), $$
$${\phi ^d}({x,y} )= \arctan \left[ { - \frac{{\sum\limits_{n = 0}^{N - 1} {I_n^d({x,y} )\sin \left( {\frac{{2n\pi }}{N}} \right)} }}{{\sum\limits_{n = 0}^{N - 1} {I_n^d({x,y} )\cos \left( {\frac{{2n\pi }}{N}} \right)} }}} \right] = \arctan \left[ {\frac{{\sin \phi ({{{x^{\prime}}_d},{{y^{\prime}}_d}} )}}{{\cos \phi ({{{x^{\prime}}_d},{{y^{\prime}}_d}} )}}} \right], $$
$$\Delta \phi ({x,y} )= \phi ({x,y} )- {\phi ^d}({x,y} ), $$
where ${\phi ^d}({x,y} )$ denotes the wrapped phase of the direct illumination, and $\Delta \phi ({x,y} )$ is the phase error caused by indirect illumination. The phase error can be calculated by combining Eqs. (3)–(7):
$$\begin{aligned} \Delta \phi ({x,y} )&= \arctan \{{\tan [{\phi ({x,y} )- {\phi^d}({x,y} )} ]} \}= \arctan \left[ {\frac{{\tan \phi ({x,y} )- \tan {\phi^d}({x,y} )}}{{1 + \tan \phi ({x,y} )\tan {\phi^d}({x,y} )}}} \right]\\ &= \arctan \left\{ {\frac{{\sum\limits_{x^{\prime} = 0}^{W - 1} {\sum\limits_{y^{\prime} = 0}^{H - 1} {h({x^{\prime},y^{\prime};x,y} )\sin [{\phi ({x^{\prime},y^{\prime}} )- \phi ({{{x^{\prime}}_d},{{y^{\prime}}_d}} )} ]} } }}{{\sum\limits_{x^{\prime} = 0}^{W - 1} {\sum\limits_{y^{\prime} = 0}^{H - 1} {h({x^{\prime},y^{\prime};x,y} )\cos [{\phi ({x^{\prime},y^{\prime}} )- \phi ({{{x^{\prime}}_d},{{y^{\prime}}_d}} )} ]} } }}} \right\}\\ &= \arctan \left\{ {\frac{{\sum\limits_{x^{\prime} = 0}^{W - 1} {\sum\limits_{y^{\prime} = 0}^{H - 1} {\{{h({x^{\prime},y^{\prime};x,y} )\sin [{2\pi {f_{x^{\prime}}}({x^{\prime} - {{x^{\prime}}_d}} )+ 2\pi {f_{y^{\prime}}}({y^{\prime} - {{y^{\prime}}_d}} )} ]} \}} } }}{{\sum\limits_{x^{\prime} = 0}^{W - 1} {\sum\limits_{y^{\prime} = 0}^{H - 1} {\{{h({x^{\prime},y^{\prime};x,y} )\cos [{2\pi {f_{x^{\prime}}}({x^{\prime} - {{x^{\prime}}_d}} )+ 2\pi {f_{y^{\prime}}}({y^{\prime} - {{y^{\prime}}_d}} )} ]} \}} } }}} \right\} \end{aligned}. $$

For scene points where no interreflection effect exists, the camera pixel receives only direct illumination from the projector pixel. Therefore, $h({x^{\prime},y^{\prime};x,y} )= 0$ for any $({x^{\prime},y^{\prime}} )\ne ({{{x^{\prime}}_d},{{y^{\prime}}_d}} )$, and Eq. (8) indicates that the phase error $\Delta \phi ({x,y} )$ is zero. Therefore, the wrapped phase can be calculated with a phase-shifting algorithm. The absolute phase can be retrieved with a phase unwrapping method, which is used to obtain the correspondences between the pixels of the camera and the projector [23,24]. Lastly, 3D points are reconstructed on the basis of the triangulation principle [25].

For regions with strong interreflections, the light received by the camera pixel contains direct and indirect illumination, which may cause a nonzero phase error $\Delta \phi ({x,y} )$. Therefore, the indirect illumination introduced by strong interreflections may lead to failure in FPP measurement. In this study, SI is applied to separate direct and indirect illumination.

2.2 Separation of direct and indirect illumination by SI

The light transport coefficients can be used to separate direct and indirect illumination because they describe the mapping between camera and projector pixels. The intensity of a camera pixel can be expressed as Eq. (2) in a camera–projector system. If the light transport coefficient $h({x^{\prime},y^{\prime};x,y} )$ is not zero, then the light projected by the projector pixel $({x^{\prime},y^{\prime}} )$ can be received by camera pixel $({x,y} )$ through direct reflections or interreflections.

As shown in Fig. 2(a), the received illumination for scene point A imaged at camera pixel $({{u_a},{v_a}} )$ contains only the direct illumination from projector pixel $({{w_a},{h_a}} )$. The light transport coefficients $h({x^{\prime},y^{\prime};{u_a},{v_a}} )$ have only one light spot, as illustrated in Fig. 2(b). By contrast, the received illumination for scene point B imaged at camera pixel $({{u_b},{v_b}} )$ in Fig. 2(c) comprises direct illumination from projector pixel $({{w_b},{h_b}} )$ and indirect illumination from projector pixel $({{w_c},{h_c}} )$ caused by interreflections. Therefore, two light spots are observed in the light transport coefficients $h({x^{\prime},y^{\prime};{u_b},{v_b}} )$, as depicted in Fig. 2(d). These light spots correspond to direct and indirect illumination from different projector pixels and are separated from one another in the light transport coefficients.


Fig. 2. Light transport coefficients of camera pixels. (a) The light reaches camera pixel $({{u_a},{v_a}} )$ from projector pixel $({{w_a},{h_a}} )$ through direct reflection. (b) Normalized light transport coefficients $h({x^{\prime},y^{\prime};{u_a},{v_a}} )$. (c) The light reaches camera pixel $({{u_b},{v_b}} )$ from projector pixel $({{w_b},{h_b}} )$ through direct reflection and from projector pixel $({{w_c},{h_c}} )$ through interreflections (after being reflected by scene point C, light is reflected by scene point B). (d) Normalized light transport coefficients $h({x^{\prime},y^{\prime};{u_b},{v_b}} )$.


Therefore, the direct and indirect illumination received by camera pixels can be separated by obtaining the light transport coefficients. Previous research has shown that the light transport coefficients can be computed with SI [26]. Fourier single-pixel imaging (FSI) [18], which is applied to retrieve the light transport coefficients in this study, is a typical SI technique that can achieve high-quality imaging.

In FSI, four-step phase-shifting sinusoidal fringe patterns are projected to obtain Fourier spectra, which can be expressed as follows:

$${P_n}(x^{\prime},y^{\prime};{f_{x^{\prime}}},{f_{y^{\prime}}}) = a + b\cos (2\pi {f_{x^{\prime}}}x^{\prime} + 2\pi {f_{y^{\prime}}}y^{\prime} + \frac{\pi }{2}n), $$
where n takes values of 0, 1, 2, and 3; ${f_{x^{\prime}}} = \frac{w}{W}$ with w = 0, 1, 2, …, W − 1; and ${f_{y^{\prime}}} = \frac{h}{H}$ with h = 0, 1, 2, …, H − 1. Thus, Eq. (2) can be written as follows:
$${I_n}({x,y;{f_{x^{\prime}}},{f_{y^{\prime}}}} )= O({x,y} )+ \sum\limits_{x^{\prime}\textrm{ = 0}}^{W - 1} {\sum\limits_{y^{\prime}\textrm{ = 0}}^{H - 1} {h(x^{\prime},y^{\prime};x,y){P_n}(x^{\prime},y^{\prime};{f_{x^{\prime}}},{f_{y^{\prime}}})} }. $$
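As a sketch of the pattern set defined by Eq. (9), the generator below enumerates the sampled frequencies $({f_{x^{\prime}}},{f_{y^{\prime}}} )= ({w/W,h/H} )$ and, for each, the four patterns shifted by π/2. For clarity it sweeps the full frequency grid; the halving by Fourier conjugate symmetry used in the experiments (Section 3) is omitted here.

```python
import numpy as np

def fsi_patterns(W, H, a=0.5, b=0.5):
    """Yield (fx, fy, pattern_stack) for every sampled frequency, following Eq. (9).

    pattern_stack has shape (4, H, W): the four sinusoids with phase shifts n*pi/2, n = 0..3.
    """
    yy, xx = np.mgrid[0:H, 0:W]
    shifts = np.pi / 2 * np.arange(4)
    for w in range(W):
        for h in range(H):
            fx, fy = w / W, h / H
            phase = 2 * np.pi * (fx * xx + fy * yy)
            yield fx, fy, a + b * np.cos(phase[None, :, :] + shifts[:, None, None])
```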

The DC term a and the ambient light $O({x,y} )$ are eliminated by four-step phase shifting. $h({x^{\prime},y^{\prime};x,y} )$ can be retrieved by rearranging the captured intensities and applying inverse discrete Fourier transform (IDFT):

$$\begin{array}{c} h({x^{\prime},y^{\prime};x,y} )= \frac{1}{{2b}}\cdot {F^{ - 1}}\{ [{I_0}(x,y;{f_{x^{\prime}}},{f_{y^{\prime}}}) - {I_2}(x,y;{f_{x^{\prime}}},{f_{y^{\prime}}})]\\ + j\cdot [{I_1}(x,y;{f_{x^{\prime}}},{f_{y^{\prime}}}) - {I_3}(x,y;{f_{x^{\prime}}},{f_{y^{\prime}}})]\} \end{array}, $$
where j is the imaginary unit, and ${F^{ - 1}}$ denotes the IDFT operator. Hence, the light transport coefficients can be retrieved by FSI.
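A minimal sketch of Eq. (11), assuming the four-step responses of one camera pixel have been rearranged into spectra I0–I3 of shape (H, W) indexed by the frequency indices (h, w); the 1/(2b) scaling, the use of np.fft.ifft2 (whose normalization convention is assumed to match the IDFT above), and taking the real part are implementation assumptions.

```python
import numpy as np

def light_transport(I0, I1, I2, I3, b=0.5):
    """Recover h(x', y') for one camera pixel from its four-step FSI responses, Eq. (11).

    I0..I3 : (H, W) arrays; entry [h, w] is the intensity measured for frequency
             (fx, fy) = (w/W, h/H) with phase shift 0, pi/2, pi, 3*pi/2, respectively.
    """
    spectrum = (I0 - I2) + 1j * (I1 - I3)   # complex Fourier spectrum of h
    h = np.fft.ifft2(spectrum) / (2 * b)    # inverse DFT, Eq. (11)
    return np.real(h)                       # light transport coefficients, shape (H, W)
```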

2.3 Determination of direct illumination by epipolar constraint

The illumination received by a camera pixel from different projector pixels can be separated in the light transport coefficients by using FSI. However, for regions with interreflections, more than one light spot appears in the light transport coefficients, and the direct illumination spot must be identified.

O’Toole et al. [11,12] introduced structured light transport, which shows that direct and indirect illumination can be distinguished through the crucial link between light transport and epipolar geometry. A typical light ray transport from the projector to the camera is shown in Fig. 3. The illumination received by camera pixel $({u,v} )$ contains the direct illumination from projector pixel $({{w_1},{h_1}} )$ and the indirect illumination from projector pixel $({{w_2},{h_2}} )$. The projector pixels $({{w_1},{h_1}} )$ and $({{w_2},{h_2}} )$ that emit the direct and indirect illumination lie on and off the epipolar line, respectively.


Fig. 3. Light ray transport from the projector to the camera. The illumination received by camera pixel $({u,v} )$ is split into two beams, that is, direct illumination and indirect illumination caused by interreflections. The direct illumination lies on the epipolar line, whereas the indirect illumination usually lies off the epipolar line.


The epipolar constraint is used to determine the direct illumination (Fig. 4). The epipolar line is computed, and a threshold ε is set in the light transport coefficients of camera pixel $({u,v} )$. The value of the threshold ε is related to the distortion of the projector. To balance the accuracy and efficiency of searching for the direct light spot, ε is set to 3 pixels in this study according to the projector distortion. The area within the threshold distance of the epipolar line is scanned for the projector coordinate with the maximum intensity, which localizes the direct illumination spot.


Fig. 4. Determining the direct illumination by epipolar constraint in the light transport coefficients $h({x^{\prime},y^{\prime};u,v} )$. The epipolar line is computed, and a threshold ε is set in the normalized light transport coefficients. Direct illumination can be identified by searching the area within the threshold range of the epipolar line.


If the epipolar line is $l:Aw + Bh + C = 0$, then the procedure for determining the direct illumination can be described as follows:

$$({{w^\ast },{h^\ast }} )= \mathop {\arg \max }\limits_{({w,h} )\in {\Omega _D}} I({w,h} ), $$
where ${\Omega _D} = \{ ({w,h} )|D(w,h) \le \varepsilon \}$ is the projector pixel set, $D({w,h} )= \frac{{|{Aw + Bh + C} |}}{{\sqrt {{A^2} + {B^2}} }}$ denotes the distance between projector pixel $({w,h} )$ and the epipolar line, $I({w,h} )$ is the intensity at projector pixel $({w,h} )$, and $({{w^ \ast },{h^\ast }} )$ is the coordinate of the direct illumination spot, i.e., the corresponding point of camera pixel $({u,v} )$. The subpixel coordinate $({w_s^\ast ,h_s^\ast } )$ of the direct illumination spot can be obtained by the grayscale centroid method as follows:
$$\left\{ \begin{array}{l} w_s^\ast{=} \frac{{\sum\limits_{({w,h} )\in \Omega } {wI({w,h} )} }}{{\sum\limits_{({w,h} )\in \Omega } {I({w,h} )} }}\\ h_s^\ast{=} \frac{{\sum\limits_{({w,h} )\in \Omega } {hI({w,h} )} }}{{\sum\limits_{({w,h} )\in \Omega } {I({w,h} )} }} \end{array} \right., $$
where $\Omega = \{ ({w,h} )||{w - {w^\ast }} |\le m,|{h - {h^\ast }} |\le m\}$, and m is the half side length of the square window used to compute the centroid. In this study, m is set to 2 (a 5 × 5 pixel window), which is determined by the size of the direct illumination spot.
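The sketch below illustrates Eqs. (12)–(13) under the assumption that the epipolar-line coefficients (A, B, C) are already available from the calibrated camera–projector geometry: the direct spot is the brightest coefficient within the band of width ε around the line, refined to subpixel accuracy by a grayscale centroid.

```python
import numpy as np

def find_direct_spot(h_map, A, B, C, eps=3.0, m=2):
    """Locate the direct-illumination spot in h(x', y'; u, v), Eqs. (12)-(13).

    h_map : (H, W) light transport coefficients of one camera pixel
    A,B,C : epipolar line coefficients, A*w + B*h + C = 0, in projector coordinates
    eps   : maximum distance to the epipolar line (pixels)
    m     : half side length of the centroid window
    Returns the subpixel coordinate (w_s, h_s) of the direct spot.
    """
    H, W = h_map.shape
    hh, ww = np.mgrid[0:H, 0:W]
    dist = np.abs(A * ww + B * hh + C) / np.hypot(A, B)   # D(w, h), Eq. (12)
    band = np.where(dist <= eps, h_map, -np.inf)          # restrict the search to the band
    h_star, w_star = np.unravel_index(np.argmax(band), band.shape)

    # Grayscale centroid over a (2m+1) x (2m+1) window around the peak, Eq. (13).
    rows = slice(max(h_star - m, 0), h_star + m + 1)
    cols = slice(max(w_star - m, 0), w_star + m + 1)
    win = h_map[rows, cols]
    wy, wx = np.mgrid[rows, cols]
    total = win.sum()
    return (wx * win).sum() / total, (wy * win).sum() / total
```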

Therefore, the corresponding points between the projector and camera pixels are obtained, and the 3D shape can be recovered by triangulation.
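Once each camera pixel (u, v) has been matched to its projector point, the 3D point can be recovered, for example, by a standard two-view linear (DLT) triangulation. The sketch below assumes 3 × 4 projection matrices Pc and Pp for the camera and projector obtained from calibration; it is a generic solver, not necessarily the exact triangulation implementation used in this work.

```python
import numpy as np

def triangulate(Pc, Pp, cam_pt, proj_pt):
    """Linear (DLT) triangulation of one camera-projector correspondence.

    Pc, Pp  : 3x4 projection matrices of the camera and the projector (from calibration)
    cam_pt  : (u, v) camera pixel
    proj_pt : (w, h) corresponding projector point (direct-illumination spot)
    Returns the 3D point (X, Y, Z) in the frame of the projection matrices.
    """
    u, v = cam_pt
    w, h = proj_pt
    A = np.vstack([u * Pc[2] - Pc[0],
                   v * Pc[2] - Pc[1],
                   w * Pp[2] - Pp[0],
                   h * Pp[2] - Pp[1]])
    _, _, Vt = np.linalg.svd(A)      # least-squares solution is the last right singular vector
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenize
```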

3. Experiments

The experimental system in Fig. 5 comprises a monochrome CMOS camera with a resolution of 1920 × 1200 and a digital projector with a resolution of 1920 × 1080. The projector projects sinusoidal patterns onto an object, and the light reflected by the object is collected by the camera. Every camera pixel can be regarded as a single-pixel detector.


Fig. 5. The experimental setup includes a calibrated camera–projector pair. The projector projects the fringe patterns onto the scene, and the camera collects the light reflected by the scene. Every camera pixel can be regarded as a single-pixel detector.


The measured objects with interreflections are an aluminum alloy workpiece, a metal part, a V-shaped piece composed of a ceramic plate and a metal block, two ceramic bowls, and an etalon composed of two metal gauge blocks (Fig. 6). The measurement of the objects using the proposed method mainly involves three steps.


Fig. 6. Measured objects. (a) Aluminum alloy workpiece. (b) Metal part. (c) V-shaped piece composed of a metal block and a ceramic plate. (d) Two ceramic bowls. (e) Etalon composed of two metal gauge blocks.


First, the four-step phase-shifting fringe patterns projected onto the measured objects by the projector are captured by the camera to obtain the Fourier spectra. The number of projected patterns can be halved because of the conjugate symmetry of the discrete Fourier transform. The total number of fringe patterns can be calculated as follows:

$${N_{total}} = \frac{{4 \times W \times H}}{2} = \frac{{4 \times 1920 \times 1080}}{2} = 4,147,200, $$
and with a camera frame rate of 155 fps, data acquisition takes 446 min. The light transport coefficients can then be computed by applying the IDFT algorithm to the captured Fourier spectra, as described in Section 2.2. The reconstructed light transport coefficients of the aluminum alloy workpiece and the metal part are shown in Figs. 7 and 8, respectively. For scene point A, where interreflections are not evident, as illustrated in Figs. 7(a) and 8(a), only one light spot can be observed in the light transport coefficients, as presented in Figs. 7(b), 7(c), 8(b), and 8(c). For scene points B and C, where strong interreflections occur, several light spots can be observed in the light transport coefficients. These light spots, representing direct and indirect illumination, are separated from one another in the light transport coefficients, as depicted in Figs. 7(d)–(g) and 8(d)–(g).
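As a quick check of Eq. (14) and the quoted acquisition time, the short calculation below assumes one captured frame per projected pattern at the stated 155 fps:

```python
W, H = 1920, 1080           # projector resolution
fps = 155                   # camera frame rate

n_total = 4 * W * H // 2    # four-step FSI, halved by Fourier conjugate symmetry
minutes = n_total / fps / 60
print(n_total, round(minutes))   # 4147200 patterns, ~446 min
```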


Fig. 7. Light transport coefficients of the aluminum alloy workpiece captured by FSI. (a) Fringe images of the aluminum alloy workpiece. (b), (d), and (f) are the light transport coefficients of the camera pixels corresponding to scene points A, B, and C indicated in (a), respectively. Direct illumination is separated from indirect illumination and determined by the epipolar line. (c), (e), and (g) are the zoomed results of the white boxes indicated in (b), (d), and (f), respectively.



Fig. 8. Light transport coefficients of the metal part captured by FSI. (a) Fringe images of the metal part. (b), (d), and (f) are the light transport coefficients of the camera pixels corresponding to scene points A, B, and C indicated in (a), respectively. Direct illumination is separated from indirect illumination and determined by the epipolar line. (c), (e), and (g) are the zoomed results of the white boxes indicated in (b), (d), and (f), respectively.


Second, the epipolar line is utilized to determine the direct illumination spot, as described in Section 2.3. The corresponding points between the projector and camera pixels are obtained from the location of the direct illumination spot.

Third, the 3D shape of objects can be recovered through triangulation with the obtained corresponding points. The measurement results of the objects by FPP and FSI are shown in Fig. 9. The FPP measurement results show that the point clouds of some regions are missing because of interreflections. In comparison with the FPP measurement results, the point clouds of the FSI measurement results are more complete and denser. Thus, the proposed method can be applied to measure the 3D shape of an object with interreflections and is more effective than FPP.


Fig. 9. Measurement results of objects. (a) Measured objects. (b) FPP measurement results of the objects. (c) FSI measurement results of the objects.


An etalon composed of two metal gauge blocks [Fig. 6(e)] is measured to evaluate FPP and FSI as measurement methods in the presence of strong interreflections. The FPP and FSI measurement results are shown in Figs. 10(a) and 10(b), respectively. The time cost of FPP is less than 1 s, whereas the time cost of FSI is 612 min (data collection and processing take 446 and 166 min, respectively). Measurement precision can be evaluated with the fitting error of the plane, and the fitting plane results are shown in Figs. 10(c) and 10(d). The fitting plane errors show that the measurement precision of FSI is higher than that of FPP in the presence of strong interreflections (Table 1).


Fig. 10. Precision evaluation of the measurement methods. (a) The FPP measurement result. (b) The FSI measurement result. (c) Fitting plane result of FPP. (d) Fitting plane result of FSI.



Table 1. Fitting plane errors

4. Conclusion

A 3D shape measurement method based on SI for scenes with strong interreflections is proposed. The cause of measurement failure of FPP in the presence of interreflections is analyzed, and SI is introduced to separate the illumination received by the camera. The light transport coefficients of each camera pixel can be obtained by SI, and the direct and indirect illumination received by the camera pixels are thus separated. The epipolar constraint is utilized to accurately determine the direct illumination. Experimental results demonstrate that the proposed method can be applied to measure 3D objects in the presence of strong interreflections. The measurement results of the proposed method are more complete and denser than those of FPP. An etalon composed of two metal gauge blocks is measured to evaluate the precision of the proposed method, and the result demonstrates its high measurement precision.

In the proposed method, SI is utilized to obtain the light transport coefficients for separating direct illumination from indirect illumination. However, the method requires a large number of patterns to determine these coefficients, which makes the process time consuming. Thus, improving the imaging efficiency of SI for separating direct and indirect illumination should be considered in future work.

Funding

National Natural Science Foundation of China (61735003, 61875007); National Key Research and Development Program of China (2020YFB2010701); Foundation (6141B061106); Leading Talents Program for Enterpriser and Innovator of Qingdao (18-1-2-22-zhc).

Disclosures

The authors declare no conflicts of interest.

References

1. F. Chen, G. M. Brown, and M. M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 8–22 (2000). [CrossRef]  

2. S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25(3), 935–944 (2006). [CrossRef]  

3. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Laser Eng. 48(2), 133–140 (2010). [CrossRef]  

4. A. Miks, J. Novak, and P. Novak, “Method for reconstruction of shape of specular surfaces using scanning beam deflectometry,” Opt. Laser Eng. 51(7), 867–872 (2013). [CrossRef]  

5. J. Clark, E. Trucco, and L. B. Wolff, “Using light polarization in laser scanning,” Image Vis. Comput. 15(2), 107–117 (1997). [CrossRef]  

6. T. Chen, H. P. A. Lensch, C. Fuchs, and H. P. Seidel, “Polarization and phase-shifting for 3D scanning of translucent objects,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1829–1836.

7. M. Gupta and S. K. Nayar, “Micro Phase Shifting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 813–820.

8. T. Chen, H. P. Seidel, and H. P. A. Lensch, “Modulated phase-shifting for 3D scanning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 3839–3846.

9. Y. Xu and D. G. Aliaga, “An Adaptive Correspondence Algorithm for Modeling Scenes with Strong Interreflections,” IEEE Trans. Vis. Comput. Graph. 15(3), 465–480 (2009). [CrossRef]  

10. X. Chen and Y. Yang, “Scene adaptive structured light using error detection and correction,” Pattern Recogn. 48(1), 220–230 (2015). [CrossRef]  

11. M. O’Toole, J. Mather, and K. N. Kutulakos, “3D shape and indirect appearance by structured light transport,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3246–3253.

12. M. O’Toole, J. Mather, and K. N. Kutulakos, “3D shape and indirect appearance by structured light transport,” IEEE Trans. Pattern Anal. Machine Intell. 38(7), 1298–1312 (2016). [CrossRef]  

13. F. B. D. Dizeu, J. Boisvert, M. -A. Drouin, G. Godin, M. Rivard, and G. Lamouche, “Frequency shift triangulation: a robust fringe projection technique for 3D shape acquisition in the presence of strong interreflections,” in Proceedings of International Conference on 3D Vision (IEEE, 2019), pp. 194–203.

14. H. Jiang, Y. Zhou, and H. Zhao, “Using adaptive regional projection to measure parts with strong reflection,” Proc. SPIE 10458, 104581A (2017). [CrossRef]  

15. H. Zhao, Y. Xu, H. Jiang, and X. Li, “3D shape measurement in the presence of strong interreflections by epipolar imaging and regional fringe projection,” Opt. Express 26(6), 7117–7131 (2018). [CrossRef]  

16. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012). [CrossRef]  

17. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

18. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

19. H. Jiang, H. Zhao, X. Li, and C. Quan, “Hyper thin 3D edge measurement of honeycomb core structures based on the triangular camera-projector layout & phase-based stereo matching,” Opt. Express 24(5), 5502–5513 (2016). [CrossRef]  

20. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Laser Eng. 109, 23–59 (2018). [CrossRef]  

21. C. Chen, N. Gao, X. Wang, Z. Zhang, F. Gao, and X. Jiang, “Generic exponential fringe model for alleviating phase error in phase measuring profilometry,” Opt. Lasers Eng. 110, 179–185 (2018). [CrossRef]  

22. Z. Wang, Z. Zhang, N. Gao, Y. Xia, F. Gao, and X. Jiang, “Single-shot 3D shape measurement of discontinuous objects based on a coaxial fringe projection system,” Appl. Opt. 58(5), A169–A178 (2019). [CrossRef]  

23. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Laser Eng. 85, 84–103 (2016). [CrossRef]  

24. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018). [CrossRef]  

25. S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Laser Eng. 106, 119–131 (2018). [CrossRef]  

26. H. Jiang, H. Zhai, Y. Xu, X. Li, and H. Zhao, “3D shape measurement of translucent objects based on Fourier single-pixel imaging in projector-camera system,” Opt. Express 27(23), 33564–33574 (2019). [CrossRef]  
