Abstract
Optical 3D shape measurement methods, such as fringe projection profilometry (FPP), are popular for recovering the surface of an object. However, traditional FPP fails in regions that contain strong interreflections. In this study, a method based on single-pixel imaging (SI) is proposed to measure 3D shapes in the presence of interreflections. SI is utilized to separate direct illumination from indirect illumination. The corresponding points between camera and projector pixels are then obtained from the direct illumination, and the 3D shapes of regions with strong interreflections are reconstructed from these corresponding points by triangulation. Experimental results demonstrate that the proposed method can separate direct and indirect illumination and measure 3D objects with interreflections.
© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Optical 3D shape measurement methods are widely used in industrial applications because of their unique advantages, such as noncontact operation, speed, and high precision [1]. However, interreflections are likely to occur when objects with shiny concave surfaces are measured. The light received by camera pixels then contains not only direct illumination but also indirect (also termed global) illumination caused by interreflections between surfaces [2]. With interreflections, measuring 3D objects with shiny surfaces becomes an extremely challenging problem for optical metrology techniques such as fringe projection profilometry (FPP) [3].
Several methods have been proposed to measure 3D shapes when interreflections occur between surfaces. For example, matt powder is usually sprayed before measurement to change the reflection of the measured object from shiny to diffuse [4]. Although this method is efficient and convenient, the thickness of the powder affects measurement precision. In addition, external spraying is not allowed for many high-precision parts, in which case spraying matt powder is inapplicable.
Linearly polarized light can be utilized to suppress indirect illumination because light that is reflected once remains polarized, whereas multiply reflected light becomes depolarized [5,6]. By adjusting the angle of the polarizers placed in front of the camera and projector lenses, interreflections can be blocked to some degree while directly reflected light passes through. Although interreflections can be reduced by adding a polarizer, they cannot be completely eliminated. Indirect illumination remains nearly constant when high-frequency patterns are projected [7]. As such, high-frequency patterns have been applied to modulate indirect illumination and suppress interreflections [7,8], but this approach does not work well when strong interreflections occur. Adaptive projection methods have been introduced to detect regions with errors caused by interreflections, and an iterative algorithm is applied to reduce interreflections within those regions [9,10]. However, if interreflections are too strong, the iterations may not converge. Furthermore, structured light transport, which utilizes an epipolar constraint to separate direct and indirect illumination, has been introduced [11,12]. However, separation fails when the non-epipolar dominance assumption is not met. Frequency shift triangulation has also been proposed to separate direct and indirect illumination and robustly capture 3D shapes in the presence of strong interreflections [13].
We have previously introduced regional fringe projection [14]. In this method, concave surfaces are divided into several regions such that interreflections cannot occur within a single region. Interreflections are then suppressed by projecting patterns onto each region separately. However, this method requires the initial 3D shape of the object to generate the regional projection masks. Epipolar imaging with speckle patterns was therefore proposed to obtain coarse measurement results when an initial 3D model is unavailable [15]. Nevertheless, regional fringe projection removes only single-bounce indirect illumination; second-bounce or multibounce indirect illumination cannot be totally eliminated.
In this study, a high-precision 3D shape measurement method for scenes with strong interreflections is presented by using single-pixel imaging (SI) [16–18] to reconstruct the light transport coefficients. In this way, indirect illumination can be separated entirely from direct illumination regardless of the number of bounces. SI is applied to separate direct illumination from indirect illumination: each pixel of the camera's pixelated array is viewed as an SI unit, and the SI result of each pixel corresponds to its light transport coefficients. These coefficients represent the ratio of light received by the camera pixel from each projector pixel and contain two components, i.e., direct and indirect illumination. The direct illumination can be further identified on the basis of the epipolar constraint [19]. The corresponding points between camera and projector pixels can then be determined from the location of the direct illumination, and 3D reconstruction can be achieved with the obtained corresponding points. Experimental results show that the SI-based 3D measurement method can be applied to objects with strong interreflections.
This paper is organized as follows. The principles of the proposed method are explained in Section 2, the experimental setup and results are described in Section 3, and conclusions are presented in Section 4.
2. Principle
In FPP, surfaces without indirect illumination, i.e., the region enclosed by the yellow circle in Fig. 1(a), can be measured directly. However, regions with strong interreflections, i.e., the region enclosed by the yellow circle in Fig. 1(b), may suffer from fringe aliasing. Consequently, a phase error that causes 3D shape measurement to fail may be introduced.
2.1 Phase error introduced by strong interreflections in FPP
In FPP, the projector projects N-step phase-shifting fringe patterns to establish the correspondences between camera and projector pixels [20–22]. The projected fringe patterns ${P_n}({n = 0,1, \cdots ,N - 1;N \ge 2} )$ can be expressed as follows:
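The projected patterns take the standard N-step phase-shifting form; a sketch consistent with the stated notation (the mean intensity $a$, modulation $b$, and fringe frequency $f$ are assumed symbols) is:

$$P_n(x^{\prime},y^{\prime}) = a + b\cos\!\left(2\pi f x^{\prime} - \frac{2\pi n}{N}\right),\qquad n = 0,1,\cdots,N-1.$$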
Phase-shifting fringe patterns are projected onto the scene with strong interreflections, and the camera is used to collect the light reflected by the scene. The intensity of the captured images in the presence of strong interreflections can be expressed as follows:
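This intensity can be modeled as a weighted sum over all projector pixels; a sketch consistent with the light transport coefficients $h$ and the ambient term $O$ used later in the paper is:

$$I_n(x,y) = \sum_{x^{\prime}}\sum_{y^{\prime}} h(x^{\prime},y^{\prime};x,y)\,P_n(x^{\prime},y^{\prime}) + O(x,y),$$

where $h(x^{\prime},y^{\prime};x,y)$ is the light transport coefficient from projector pixel $(x^{\prime},y^{\prime})$ to camera pixel $(x,y)$ and $O(x,y)$ is the ambient illumination.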
For scene points with strong interreflections, the light received by camera pixels can be divided into direct and indirect illumination and described as follows:
where $I_n^d({x,y} )$ and $I_n^i({x,y} )$ are the intensities of direct and indirect illumination, respectively. Assuming that the projector pixel $({{{x^{\prime}}_d},{{y^{\prime}}_d}} )$ directly illuminates the scene point imaged at camera pixel $({x,y} )$, the direct illumination can be further formulated as:

For scene points where no interreflection effect exists, the camera pixel only receives direct illumination from the projector pixel. Therefore, for any $({x^{\prime},y^{\prime}} )$, $h({x^{\prime},y^{\prime};x,y} )= 0$ when $({x^{\prime},y^{\prime}} )\ne ({{{x^{\prime}}_d},{{y^{\prime}}_d}} )$. Equation (8) indicates that the phase error $\Delta \phi ({x,y} )$ is zero. Therefore, the wrapped phase can be calculated with a phase-shifting algorithm. The absolute phase can be retrieved with a phase unwrapping method, which is used to obtain the correspondences between the pixels of the camera and the projector [23,24]. Lastly, 3D points are reconstructed on the basis of the triangulation principle [25].
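The decomposition above can be written out as follows (a sketch; the single direct-path term from the assumed projector pixel $(x^{\prime}_d, y^{\prime}_d)$ is shown):

$$I_n(x,y) = I_n^d(x,y) + I_n^i(x,y),\qquad I_n^d(x,y) = h(x^{\prime}_d,y^{\prime}_d;x,y)\,P_n(x^{\prime}_d,y^{\prime}_d).$$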
For regions with strong interreflections, the light received by the camera pixel contains direct and indirect illumination, which may cause a nonzero phase error $\Delta \phi ({x,y} )$. Therefore, the indirect illumination introduced by strong interreflections may lead to failure in FPP measurement. In this study, SI is applied to separate direct and indirect illumination.
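The phase error caused by indirect illumination can be illustrated numerically. The sketch below (toy amplitudes and phases chosen for illustration, not the paper's data) mixes a direct fringe signal with an interreflected one and shows that the four-step wrapped phase becomes biased:

```python
import numpy as np

def wrapped_phase(intensities, N):
    """Standard N-step phase-shifting estimator for I_n = A + B*cos(phi - 2*pi*n/N)."""
    delta = 2 * np.pi * np.arange(N) / N
    return np.arctan2(np.sum(intensities * np.sin(delta)),
                      np.sum(intensities * np.cos(delta)))

N = 4
delta = 2 * np.pi * np.arange(N) / N
phi_d = 1.0   # phase carried by the direct illumination (ground truth)
phi_i = 2.3   # phase carried by an interreflected (indirect) path

I_direct = 0.5 + 0.4 * np.cos(phi_d - delta)            # direct-only signal
I_mixed = I_direct + 0.3 * np.cos(phi_i - delta)        # direct + indirect mixture

err_clean = abs(wrapped_phase(I_direct, N) - phi_d)     # essentially zero
err_mixed = abs(wrapped_phase(I_mixed, N) - phi_d)      # nonzero phase error
```

Without indirect light the estimator recovers the phase exactly; the mixture shifts the recovered phase toward the indirect component, which is exactly the failure mode analyzed above.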
2.2 Separation of direct and indirect illumination by SI
The light transport coefficients can be used to separate direct and indirect illumination because they describe the mapping between camera and projector pixels. The intensity of a camera pixel can be expressed as Eq. (2) in a camera–projector system. If the light transport coefficient $h({x^{\prime},y^{\prime};x,y} )$ is not zero, then the light projected by the projector pixel $({x^{\prime},y^{\prime}} )$ can be received by camera pixel $({x,y} )$ through direct reflections or interreflections.
As shown in Fig. 2(a), the illumination received at scene point A, imaged at camera pixel $({{u_a},{v_a}} )$, contains only the direct illumination from projector pixel $({{w_a},{h_a}} )$. The light transport coefficients $h({x^{\prime},y^{\prime};{u_a},{v_a}} )$ thus contain only one light spot, as illustrated in Fig. 2(b). By contrast, the illumination received at scene point B, imaged at camera pixel $({{u_b},{v_b}} )$ in Fig. 2(c), comprises direct illumination from projector pixel $({{w_b},{h_b}} )$ and indirect illumination from projector pixel $({{w_c},{h_c}} )$ caused by interreflections. Therefore, two light spots are observed in the light transport coefficients $h({x^{\prime},y^{\prime};{u_b},{v_b}} )$, as depicted in Fig. 2(d). These light spots correspond to direct and indirect illumination from different projector pixels and are separated from one another in the light transport coefficients.
Therefore, the direct and indirect illumination received by camera pixels can be separated by obtaining the light transport coefficients. Previous research has shown that the light transport coefficients can be computed with SI [26]. Fourier single-pixel imaging (FSI) [18], a typical SI technique that achieves high-quality imaging, is applied to retrieve the light transport coefficients in our study.
In FSI, four-step phase-shifting sinusoidal fringe patterns are projected to obtain Fourier spectra, which can be expressed as follows:
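A common form of these patterns, written for a spatial frequency pair $(f_x, f_y)$ over an $M \times N$ projector plane (symbols assumed), is:

$$P_\varphi(x^{\prime},y^{\prime}) = a + b\cos\!\left[2\pi\left(\frac{f_x x^{\prime}}{M} + \frac{f_y y^{\prime}}{N}\right) + \varphi\right],\qquad \varphi \in \left\{0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}\right\}.$$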
The DC term a and the ambient light $O({x,y} )$ are eliminated by four-step phase shifting. $h({x^{\prime},y^{\prime};x,y} )$ can be retrieved by rearranging the captured intensities and applying inverse discrete Fourier transform (IDFT):
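As a minimal numerical sketch of this pipeline (toy projector resolution and a synthetic two-spot $h$ for a single camera pixel; not the paper's implementation), the four-step measurements can be differenced into a Fourier spectrum and inverted with an IDFT:

```python
import numpy as np

M, N = 16, 16      # toy projector resolution
a, b = 0.5, 0.5    # pattern mean intensity and modulation

# Ground-truth light transport for one camera pixel: a direct and an indirect spot
h_true = np.zeros((M, N))
h_true[4, 7] = 1.0    # direct illumination spot
h_true[11, 2] = 0.4   # indirect (interreflected) spot

x = np.arange(M).reshape(-1, 1)
y = np.arange(N).reshape(1, -1)

spectrum = np.zeros((M, N), dtype=complex)
for fx in range(M):
    for fy in range(N):
        theta = 2 * np.pi * (fx * x / M + fy * y / N)
        # single-pixel measurements under the four phase-shifted patterns
        D = [np.sum(h_true * (a + b * np.cos(theta + phi)))
             for phi in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
        # four-step differencing cancels the DC term a (and any ambient light)
        spectrum[fx, fy] = (D[0] - D[2]) + 1j * (D[1] - D[3])

# The assembled spectrum equals 2*b times the DFT of h; the IDFT recovers h
h_rec = np.real(np.fft.ifft2(spectrum)) / (2 * b)
```

The recovered coefficients match the ground truth, with both the direct and indirect spots preserved as separate peaks.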
2.3 Determination of direct illumination by epipolar constraint
The illumination received by a camera pixel from different projector pixels can be separated in the light transport coefficients by using FSI. However, for regions with interreflections, more than one light spot appears in the light transport coefficients, and the direct illumination spot must be identified.
O’Toole et al. [11,12] introduced structured light transport, which shows that direct and indirect illumination can be distinguished through the crucial link between light transport and epipolar geometry. A typical light transport path from the projector to the camera is shown in Fig. 3. The illumination received by camera pixel $({u,v} )$ contains the direct illumination from projector pixel $({{w_1},{h_1}} )$ and the indirect illumination from projector pixel $({{w_2},{h_2}} )$. The projector pixels $({{w_1},{h_1}} )$ and $({{w_2},{h_2}} )$ contributing the direct and indirect illumination lie on and off the epipolar line, respectively.
The epipolar constraint is used to determine the direct illumination (Fig. 4). The epipolar line is computed, and a threshold ε is set in the light transport coefficients of the camera pixel $({u,v} )$. The value of the threshold ε is related to the distortion coefficients of the projector. To balance the accuracy and efficiency of searching for the direct light spot, ε is set to 3 pixels in this study according to the projector's distortion coefficients. The area within the threshold range of the epipolar line is scanned to find the projector coordinate with the maximal intensity, which localizes the direct illumination spot.
If the epipolar line equation is $l = Aw + Bh + C$, then the procedure for determining direct illumination can be described as follows:
where ${\Omega _D} = \{ ({w,h} )|D(w,h) \le \varepsilon \}$ is the projector pixel set, $D({w,h} )= \frac{{|{Aw + Bh + C} |}}{{\sqrt {{A^2} + {B^2}} }}$ denotes the distance between the projector pixel $({w,h} )$ and the epipolar line, $I({w,h} )$ is the intensity of projector pixel $({w,h} )$, and $({{w^ \ast },{h^\ast }} )$ is the coordinate of the direct illumination spot, which is the corresponding point of the camera pixel $({u,v} )$. The subpixel coordinate $({w_s^\ast ,h_s^\ast } )$ of the direct illumination spot can be obtained by the grayscale centroid method as follows:

Therefore, the corresponding points between the projector and camera pixels are obtained, and the 3D shape can be recovered by triangulation.
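The search described above can be sketched in a few lines (a minimal NumPy implementation under the paper's notation; the centroid window size `win` is an assumed parameter):

```python
import numpy as np

def find_direct_spot(h, A, B, C, eps=3.0, win=1):
    """Locate the direct-illumination spot in the light transport coefficients h
    by searching within distance eps of the epipolar line A*w + B*h_coord + C = 0,
    then refining with a grayscale centroid around the integer peak."""
    W, H = h.shape
    w, hc = np.meshgrid(np.arange(W), np.arange(H), indexing='ij')
    dist = np.abs(A * w + B * hc + C) / np.hypot(A, B)   # distance to epipolar line
    masked = np.where(dist <= eps, h, -np.inf)           # restrict to epipolar band
    w_star, h_star = np.unravel_index(np.argmax(masked), h.shape)
    # grayscale centroid over a small window around the integer peak
    w0, w1 = max(w_star - win, 0), min(w_star + win + 1, W)
    h0, h1 = max(h_star - win, 0), min(h_star + win + 1, H)
    patch = h[w0:w1, h0:h1]
    ws, hs = np.meshgrid(np.arange(w0, w1), np.arange(h0, h1), indexing='ij')
    s = patch.sum()
    return (w_star, h_star), (np.sum(ws * patch) / s, np.sum(hs * patch) / s)
```

Restricting the search to the epipolar band means a brighter indirect spot lying off the line is ignored, and only the on-line direct spot is returned.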
3. Experiments
The experimental system in Fig. 5 comprises a monochrome CMOS camera with a resolution of 1920 × 1200 and a digital projector with a resolution of 1920 × 1080. The projector projects sinusoidal patterns onto an object, and the light reflected by the object is collected by the camera. Every pixel can be regarded as a single-pixel detector.
The measured objects with interreflections are an aluminum alloy workpiece, a metal part, a V-shaped piece composed of a ceramic plate and a metal block, two ceramic bowls, and an etalon composed of two metal gauge blocks (Fig. 6). The measurement of the objects using the proposed method mainly involves three steps.
First, four-step phase-shifting fringe patterns projected onto the measured objects by the projector are captured by the camera to obtain the Fourier spectra. The number of projected patterns can be halved because the discrete Fourier transform of a real-valued image has conjugate symmetry. The total number of fringe patterns can be calculated as follows:
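Under full Fourier sampling of an $M \times N$ projector plane with four-step phase shifting, the count takes the following form (a sketch; the exact expression may differ if only a sub-region of the spectrum is sampled):

$$K = \frac{4 \times M \times N}{2} = 2MN.$$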
Second, the epipolar line is utilized to determine the direct illumination spot, as described in Section 2.3. The corresponding points between the projector and the camera pixels are obtained by the location of the direct illumination spot.
Third, the 3D shape of objects can be recovered through triangulation with the obtained corresponding points. The measurement results of the objects by FPP and FSI are shown in Fig. 9. The FPP measurement results show that the point clouds of some regions are missing because of interreflections. In comparison with the FPP measurement results, the point clouds of the FSI measurement results are more complete and denser. Thus, the proposed method can be applied to measure the 3D shape of an object with interreflections and is more effective than FPP.
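The triangulation step can be sketched with standard linear (DLT) triangulation from one camera–projector correspondence (a generic implementation, not the paper's calibration code; `Pc` and `Pp` are assumed 3×4 projection matrices in normalized coordinates):

```python
import numpy as np

def triangulate(Pc, Pp, cam_pt, proj_pt):
    """Linear (DLT) triangulation of one 3D point from a camera pixel and its
    corresponding projector pixel, given 3x4 projection matrices Pc and Pp."""
    u, v = cam_pt
    w, h = proj_pt
    # Each correspondence contributes two homogeneous linear constraints
    A = np.vstack([u * Pc[2] - Pc[0],
                   v * Pc[2] - Pc[1],
                   w * Pp[2] - Pp[0],
                   h * Pp[2] - Pp[1]])
    _, _, Vt = np.linalg.svd(A)   # null-space of A gives the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]
```

For instance, with a camera at the origin and a projector translated along x, projecting a known 3D point and triangulating it back recovers the point up to numerical precision.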
An etalon composed of two metal gauge blocks [Fig. 6(e)] is measured to evaluate FPP and FSI as measurement methods in the presence of strong interreflections. The FPP and FSI measurement results are shown in Figs. 10(a) and 10(b), respectively. The time cost of FPP is less than 1 s, whereas the time cost of FSI is 612 min (data collection and processing take 446 and 166 min, respectively). Measurement precision can be evaluated with the fitting error of the plane, and the fitting plane results are shown in Figs. 10(c) and 10(d). The fitting plane errors show that the measurement precision of FSI is higher than that of FPP in the presence of strong interreflections (Table 1).
4. Conclusion
A 3D shape measurement method based on SI is proposed for scenes with strong interreflections. The cause of measurement failure in FPP in the presence of interreflections is analyzed, and SI is introduced to separate the scene illumination received by the camera. The light transport coefficients of each camera pixel are obtained by SI, and the direct and indirect illumination received by camera pixels are thereby separated. The epipolar constraint is utilized to determine the direct illumination accurately. Experimental results demonstrate that the proposed method can measure 3D objects in the presence of strong interreflections, and its measurement results are more complete and denser than those of FPP. An etalon composed of two metal gauge blocks is measured to evaluate precision, and the result demonstrates the high measurement precision of the proposed method.
In the proposed method, SI is utilized to obtain the light transport coefficients for separating direct illumination from indirect illumination. However, the method requires a large number of patterns to determine these coefficients, resulting in a time-consuming process. Improving the imaging efficiency of SI for separating direct and indirect illumination should therefore be considered in future work.
Funding
National Natural Science Foundation of China (61735003, 61875007); National Key Research and Development Program of China (2020YFB2010701); Foundation (6141B061106); Leading Talents Program for Enterpriser and Innovator of Qingdao (18-1-2-22-zhc).
Disclosures
The authors declare no conflicts of interest.
References
1. F. Chen, G. M. Brown, and M. M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 8–22 (2000). [CrossRef]
2. S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25(3), 935–944 (2006). [CrossRef]
3. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]
4. A. Miks, J. Novak, and P. Novak, “Method for reconstruction of shape of specular surfaces using scanning beam deflectometry,” Opt. Lasers Eng. 51(7), 867–872 (2013). [CrossRef]
5. J. Clark, E. Trucco, and L. B. Wolff, “Using light polarization in laser scanning,” Image Vis. Comput. 15(2), 107–117 (1997). [CrossRef]
6. T. Chen, H. P. A. Lensch, C. Fuchs, and H. P. Seidel, “Polarization and phase-shifting for 3D scanning of translucent objects,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1829–1836.
7. M. Gupta and S. K. Nayar, “Micro Phase Shifting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 813–820.
8. T. Chen, H. P. Seidel, and H. P. A. Lensch, “Modulated phase-shifting for 3D scanning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 3839–3846.
9. Y. Xu and D. G. Aliaga, “An Adaptive Correspondence Algorithm for Modeling Scenes with Strong Interreflections,” IEEE Trans. Vis. Comput. Graph. 15(3), 465–480 (2009). [CrossRef]
10. X. Chen and Y. Yang, “Scene adaptive structured light using error detection and correction,” Pattern Recogn. 48(1), 220–230 (2015). [CrossRef]
11. M. O’Toole, J. Mather, and K. N. Kutulakos, “3D shape and indirect appearance by structured light transport,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3246–3253.
12. M. O’Toole, J. Mather, and K. N. Kutulakos, “3D shape and indirect appearance by structured light transport,” IEEE Trans. Pattern Anal. Machine Intell. 38(7), 1298–1312 (2016). [CrossRef]
13. F. B. D. Dizeu, J. Boisvert, M. -A. Drouin, G. Godin, M. Rivard, and G. Lamouche, “Frequency shift triangulation: a robust fringe projection technique for 3D shape acquisition in the presence of strong interreflections,” in Proceedings of International Conference on 3D Vision (IEEE, 2019), pp. 194–203.
14. H. Jiang, Y. Zhou, and H. Zhao, “Using adaptive regional projection to measure parts with strong reflection,” Proc. SPIE 10458, 104581A (2017). [CrossRef]
15. H. Zhao, Y. Xu, H. Jiang, and X. Li, “3D shape measurement in the presence of strong interreflections by epipolar imaging and regional fringe projection,” Opt. Express 26(6), 7117–7131 (2018). [CrossRef]
16. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012). [CrossRef]
17. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013). [CrossRef]
18. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]
19. H. Jiang, H. Zhao, X. Li, and C. Quan, “Hyper thin 3D edge measurement of honeycomb core structures based on the triangular camera-projector layout & phase-based stereo matching,” Opt. Express 24(5), 5502–5513 (2016). [CrossRef]
20. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]
21. C. Chen, N. Gao, X. Wang, Z. Zhang, F. Gao, and X. Jiang, “Generic exponential fringe model for alleviating phase error in phase measuring profilometry,” Opt. Lasers Eng. 110, 179–185 (2018). [CrossRef]
22. Z. Wang, Z. Zhang, N. Gao, Y. Xia, F. Gao, and X. Jiang, “Single-shot 3D shape measurement of discontinuous objects based on a coaxial fringe projection system,” Appl. Opt. 58(5), A169–A178 (2019). [CrossRef]
23. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]
24. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018). [CrossRef]
25. S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]
26. H. Jiang, H. Zhai, Y. Xu, X. Li, and H. Zhao, “3D shape measurement of translucent objects based on Fourier single-pixel imaging in projector-camera system,” Opt. Express 27(23), 33564–33574 (2019). [CrossRef]