
3D reconstruction method based on the multi-polarization superposition coding phase pattern of LRR objects


Abstract

Conventional research in structured light measurement has used light intensity as the information channel; the polarization of light can serve as an additional channel. In this paper, a method based on the superposition of multiple polarization states is proposed to encode structured light. By building a polarization model between the color of the projected light and its polarization state, polarized structured light containing phase information is obtained without rotating the polarizer. It is demonstrated that the method improves the waveform quality of the fringes and the accuracy of the 3D reconstruction results when measuring highly reflective objects.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Structured light technology is widely used for measuring the physical shape of objects [1]. An effective method for obtaining the 3D shape of an object is digital fringe projection [2–7]. A simple fringe projection setup projects interference fringes onto the object and views the reflected pattern from another direction with a camera. The reflected pattern is a deformed fringe pattern whose phase distribution encodes the height of the object. Fringe analysis methods such as phase-shift algorithms are required to calculate the phase distribution of the reflected image [8–10]. However, owing to the complex reflectance distribution on the surface of high dynamic range (HDR) objects [11], images can be simultaneously overexposed and underexposed, and surface information is lost in both cases. The measurement of HDR objects is therefore a key issue in structured light projection methods. Moreover, the limited dynamic range of conventional cameras makes it difficult to obtain the correct phase distribution of patterns from large range of reflectivity (LRR) objects [12,13]. Only the areas that reflect strongly toward the camera provide a measurable optical signal, while the light reaching the camera from other areas of the object is extremely weak. As a result, insufficient information is obtained from the illuminated object to recover its surface shape, and the corresponding phase and intensity information for these regions (saturated pixels) is not measurable.

Over the years, several methods have been developed to overcome the limited dynamic range of cameras. One proposed method is the use of a polarization filter [14,15]; however, the filter reduces the total intensity of the reflected patterns across the whole image, making it difficult to obtain useful information from the darker areas of the scene. Another method, called “high dynamic range scanning,” was proposed by Yoshinori et al. [16], in which the final HDR image is used for phase recovery and the phase is then unwrapped by integrating along paths between the calculated discontinuities. Liang et al. proposed a method to enhance visual observation by using color and polarization data with a custom color full-Stokes polarization camera [17]; however, the additional custom camera significantly increases cost and measurement instability. Salahieh et al. introduced a multi-polarization fringe projection method [18] that maintains good fringe quality: the projected structured light passes through a linear polarizer before impinging on the object, rendering it linearly polarized. Although this method makes the structured light linearly polarized, it does not fully exploit the polarization characteristics for encoding. There have been many other studies on polarization methods [19–21], but these polarization encoding methods usually require complex and sophisticated optical instruments and equipment, which makes them difficult to apply in industrial inspection with complex environments.

Most conventional structured light encoding schemes are based on the intensity of the light received by the camera. Even if the grayscale values of the encoded pattern projected onto the surface under test are approximately sinusoidal, the phase-shift encoding is sometimes not well distinguished in the fringe pattern captured by the camera. In this paper, a polarization superposition-based structured light enhancement coding method is proposed. It improves the signal-to-noise ratio (SNR) when measuring LRR objects and reduces the number of exposures required by the structured light measurement system. The method builds on conventional structured light coding with sinusoidally varying gray values, uses the polarization characteristics of LCD projectors to design the projected polarized fringe patterns, and exploits the fact that the polarization state of the light is preserved upon reflection from the object surface. The polarized light is described by the Stokes vector model, and the light transport is described by the Mueller matrix. The degree of linear polarization (DOLP) is introduced to describe the polarization state of the encoded structured light; it is used to decode the polarization-encoded grayscale structured light, recovering the encoded information from the magnitude of the linear polarization degree in the horizontal direction. The experimental data demonstrate that constructing structured light by superimposing horizontal and vertical polarization states enhances fringe quality when measuring LRR objects and also improves measurement speed. The rest of the paper is organized as follows: Section 2 presents the principles; experiments and discussion are given in Section 3; conclusions are drawn in Section 4.

2. Principles

2.1 Phase-shifting method

In an N-step phase-shifting sequence, the nth fringe pattern can be mathematically described as

$$\begin{array}{c} {{I_n}({x,y} )= A({x,y} )+ B({x,y} )\cos \left[ {\varphi ({x,y} )- \frac{{2\pi n}}{N}} \right]} \end{array}$$
where A(x, y) is the average intensity, B(x, y) is the intensity modulation, $\varphi ({x,y} )$ is the phase to be solved for, and $n \in [{0,N - 1} ]$ is the phase-shift index. Fringe analysis methods differ in how they recover the phase $\varphi ({x,y} )$ from the fringe pattern and subsequently reconstruct the 3D shape from it. The phase $\varphi ({x,y} )$ can be retrieved from a single pattern using the Fourier transform method, from multiple patterns using a phase-shifting algorithm, or by other approaches.
$$\varphi (x,y) = {\tan ^{ - 1}}\frac{{\sum\nolimits_{n = 0}^{N - 1} {{I_n}(x,y)\sin \left( {\frac{{2\pi n}}{N}} \right)} }}{{\sum\nolimits_{n = 0}^{N - 1} {{I_n}(x,y)\cos \left( {\frac{{2\pi n}}{N}} \right)} }}$$

For the four-step phase shift (N = 4), the phase value φ can be solved by the following equation:

$$\begin{array}{c} {{\varphi _4}({x,y} )= {{\tan }^{ - 1}}\frac{{{I_3} - {I_1}}}{{{I_0} - {I_2}}}} \end{array}$$

It can be seen from Eq. (3) that a feature point on the surface of an object can be uniquely determined from at least three fringe patterns. The phase-shifting method was a major breakthrough in structured light measurement technology and improved measurement speed.
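For reference, a minimal NumPy sketch of the phase retrieval in Eqs. (2) and (3); the function names are illustrative, and `arctan2` is used in place of $\tan^{-1}$ to keep the correct quadrant.

```python
import numpy as np

def wrapped_phase(patterns):
    """Recover the wrapped phase from N phase-shifted fringe images, Eq. (2)."""
    I = np.asarray(patterns, dtype=float)          # shape (N, H, W)
    N = I.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(I * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(I * np.cos(2 * np.pi * n / N), axis=0)
    return np.arctan2(num, den)                    # wrapped to (-pi, pi]

def wrapped_phase_4step(I0, I1, I2, I3):
    """Four-step special case, Eq. (3)."""
    return np.arctan2(I3 - I1, I0 - I2)
```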

The phase obtained from Eqs. (2) and (3) is the wrapped phase, whose range is [-π, π]; each fringe period yields a wrapped phase in this range. Because the wrapped phase repeats from period to period, it does not provide a unique value, so a unique feature point cannot be determined from it directly. Therefore, the following equation is used to unwrap the phase:

$$\begin{array}{c} {\phi ({x,y} )= \varphi ({x,y} )+ 2\pi K({x,y} )} \end{array}$$

In Eq. (4), $\phi ({x,y} )$ is the unwrapped phase value and K(x, y) is the fringe order, i.e., the number of fringe periods. By calculating K with a phase unwrapping algorithm, the wrapped phase can be unwrapped to obtain the absolute phase. Based on the obtained phase information, the coordinates of the feature point in the pixel coordinate system of the projector are obtained:

$$\begin{array}{c} {\left\{ {\begin{array}{c} {{u_p} = \frac{{{\mathrm{\Phi }_h}({x,y} ){N_p}}}{{2\pi {\textrm{n}_v}}}}\\ {{v_p} = \frac{{{\mathrm{\Phi }_v}({x,y} ){M_p}}}{{2\pi {\textrm{n}_h}}}} \end{array}} \right.} \end{array}$$

In Eq. (5), ${\mathrm{\Phi }_v}$ and ${\mathrm{\Phi }_h}$ are the absolute phase values of the feature point in the vertical and horizontal directions, respectively; $2\pi {\textrm{n}_v}$ and $2\pi {\textrm{n}_h}$ are the maximum absolute phase values of the projector pixel coordinates in the vertical and horizontal directions, respectively; ${\textrm{n}_v}$ and ${\textrm{n}_h}$ are the numbers of fringe periods projected in the vertical and horizontal directions; and $N_p$ and $M_p$ are interpreted here as the projector pixel counts along the two axes.
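A minimal sketch of Eqs. (4) and (5), using `numpy.unwrap` as the simplest spatial stand-in for the unspecified phase unwrapping algorithm; the reading of $N_p$ and $M_p$ as projector pixel counts is our assumption.

```python
import numpy as np

def unwrap_rows(phi_wrapped):
    """Eq. (4): numpy.unwrap supplies the 2*pi*K(x, y) term row by row."""
    return np.unwrap(phi_wrapped, axis=1)

def projector_coords(Phi_h, Phi_v, N_p, M_p, n_v, n_h):
    """Eq. (5): map absolute phases to projector pixel coordinates.
    N_p, M_p: assumed projector pixel counts; n_v, n_h: fringe-cycle counts."""
    u_p = Phi_h * N_p / (2 * np.pi * n_v)
    v_p = Phi_v * M_p / (2 * np.pi * n_h)
    return u_p, v_p
```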

2.2 Multi-polarization superposition coded structured light

The Stokes parameters can be used to describe the superposition of polarized light, and polarized light has good stability, so the key of the proposed method is to accurately obtain the phase information after superposing multiple polarization states. Owing to the inherent modulation characteristics of liquid crystal display (LCD) projectors, stable polarized light can be projected pixel by pixel. Specifically, the purple light projected by the projector is vertically polarized and the green light is horizontally polarized, so the polarization state of the projected light can be visually discerned from its color. Let the horizontally polarized green light be ${\vec{{\boldsymbol S}}_{proj({\parallel} )}}$ and the vertically polarized purple light be ${\vec{{\boldsymbol S}}_{proj(\bot )}}$; their intensities can be adjusted. Let the period T of the polarization intensity be 128 pixels and x be the horizontal pixel coordinate of the projector; the normalized Stokes vectors of the fringes can then be expressed as:

$$\begin{array}{c} {{{\vec{{\boldsymbol S}}}_{proj({\parallel} )}}(x )= \frac{1}{2}\left[ {\begin{array}{c} {|\cos (\frac{x}{{128}}\cdot 2\pi ) + 1|}\\ {\cos (\frac{x}{{128}}\cdot 2\pi ) + 1}\\ {\begin{array}{c} 0\\ 0 \end{array}} \end{array}} \right]\; \; } \end{array}$$
$$\begin{array}{c} {{{\vec{{\boldsymbol S}}}_{proj(\bot )}}(x )= \frac{1}{2}\left[ {\begin{array}{c} {|\sin (\frac{x}{{128}}\cdot 2\pi - \frac{\pi }{2}) + 1|}\\ { - \sin (\frac{x}{{128}}\cdot 2\pi - \frac{\pi }{2}) + 1}\\ {\begin{array}{c} 0\\ 0 \end{array}} \end{array}} \right],\; ({x \in [{0,128} ]} )} \end{array}$$

According to Eq. (6), a set of sinusoidal horizontally polarized fringes and a set of sinusoidal vertically polarized fringes, each with a period of 128 pixels, are obtained, as shown in Fig. 1.
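For illustration, a sketch that rasterizes the two channel patterns of Eq. (6); the 1920-column width is taken from the projector resolution quoted in Section 3, and all variable names are illustrative.

```python
import numpy as np

T = 128                          # fringe period in pixels
x = np.arange(1920)              # projector column index (assumed width)
theta = 2 * np.pi * x / T
zeros = np.zeros_like(theta)

# Eq. (6): rows are the Stokes components S0, S1, S2, S3.
S_par = 0.5 * np.stack([np.abs(np.cos(theta) + 1),               # horizontal (green)
                        np.cos(theta) + 1, zeros, zeros])
S_perp = 0.5 * np.stack([np.abs(np.sin(theta - np.pi / 2) + 1),  # vertical (purple)
                         -np.sin(theta - np.pi / 2) + 1, zeros, zeros])
```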

Fig. 1. One set of horizontally polarized light and one set of vertically polarized light with a period of 128 pixels.

Since ${\vec{{\boldsymbol S}}_{proj({\parallel} )}}$ and ${\vec{{\boldsymbol S}}_{proj(\bot )}}$ are both Stokes vectors, their superposition, denoted ${\vec{{\boldsymbol S}}_{proj}}$, is:

$$\begin{array}{c} {{{\vec{{\boldsymbol S}}}_{proj}}(x )= {{\vec{{\boldsymbol S}}}_{proj({\parallel} )}}(x )+ {{\vec{{\boldsymbol S}}}_{proj(\bot )}}(x )= \left[ {\begin{array}{c} {\left|{\cos \left( {\frac{x}{{128}}\cdot 2\pi } \right)} \right|}\\ {\cos \left( {\frac{x}{{128}}\cdot 2\pi } \right)}\\ {\begin{array}{c} 0\\ 0 \end{array}} \end{array}} \right] = \left[ {\begin{array}{c} {\left|{\sin \left( {\frac{x}{{128}}\cdot 2\pi + \frac{\pi }{2}} \right)} \right|}\\ {\sin \left( {\frac{x}{{128}}\cdot 2\pi + \frac{\pi }{2}} \right)}\\ {\begin{array}{c} 0\\ 0 \end{array}} \end{array}} \right]} \end{array}$$

Combining Eq. (7) with the N-step phase-shifting structured light equation, Eq. (1), the polarized phase-shifted fringe patterns can be obtained:

$$\begin{array}{c} {{{\vec{{\boldsymbol S}}}_n}({x,y} )= \left[ {\begin{array}{c} {\left|{A({x,y} )+ B({x,y} )\cos \left( {\varphi ({x,y} )- \frac{{2\pi n}}{N}} \right)} \right|}\\ {A({x,y} )+ B({x,y} )\cos \left( {\varphi ({x,y} )- \frac{{2\pi n}}{N}} \right)}\\ {\begin{array}{c} 0\\ 0 \end{array}} \end{array}} \right]} \end{array}$$

In Eq. (8), $n \in [{0,N - 1} ]$ indexes the projected polarization fringe pattern, A is the initial polarization intensity of the fringe, and B is the (constant) modulating polarization intensity. From these two quantities, the polarization intensity of the fringe varies within the range $[{A - B,A + B} ]$. ${\vec{{\boldsymbol S}}_n}({x,y} )$ represents the Stokes vector at a point, and φ is the phase of the polarization fringe. This yields the polarization superposition coding scheme for structured light; the resulting four-step phase-shifted patterns are shown in Fig. 2(b).
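A minimal sketch of Eq. (8), generating the Stokes-vector fringe stack for one phase-shift step; the array layout is our own choice.

```python
import numpy as np

def polarized_fringe(A, B, phi, n, N=4):
    """Eq. (8): Stokes fringe for phase-shift index n; A, B, phi are 2-D maps."""
    s1 = A + B * np.cos(phi - 2 * np.pi * n / N)
    return np.stack([np.abs(s1), s1,
                     np.zeros_like(s1), np.zeros_like(s1)])  # (4, H, W)
```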

Fig. 2. Coded stripe patterns. (a) Traditional gray stripe patterns. (b) Polarization superposition coding stripe patterns.

The following subsection presents the advantages of polarization superposition encoding for measuring LRR objects and the method for decoding the polarization superposition encoded structured light.

2.3 Polarization enhanced structured light

The camera receives the polarization information reflected from the object. From Eq. (6), the polarization superposition coded structured light also retains the stability of polarization, because the polarized light reflected from the surface of an LRR object preserves its polarization characteristics: the component ${S_1} = \sin \left( {\frac{x}{{128}}\cdot 2\pi + \frac{\pi }{2}} \right)$ remains, up to a scale factor, in the information captured by the camera, so the proposed coding method has better stability. Applying the Mueller matrix $M_0$ of specular reflection (with $\theta_\pm$ presumably denoting the sum and difference of the incidence and refraction angles) to the projected Stokes vector gives:

$$\scalebox{0.95}{$\begin{aligned} \overrightarrow {{{\boldsymbol S}^{\prime}}} & = {M_0}{\overrightarrow {\boldsymbol S} _{proj}}\\ & = \frac{1}{2}{\left( {\frac{{\tan \;{\theta_ - }}}{{\tan \;{\theta_ + }}}} \right)^2}\left[ {\begin{array}{@{}cccc@{}} {{{\cos }^2}\;{\theta_ - } + {{\cos }^2}\;{\theta_ + }}&{{{\cos }^2}\;{\theta_ - } - {{\cos }^2}\;{\theta_ + }}&0&0\\ {{{\cos }^2}\;{\theta_ - } - {{\cos }^2}\;{\theta_ + }}&{{{\cos }^2}\;{\theta_ - } + {{\cos }^2}\;{\theta_ + }}&0&0\\ 0&0&{ - 2\cos \;{\theta_ + }\;\cos \;{\theta_ - }}&0\\ 0&0&0&{ - 2\cos \;{\theta_ + }\;\cos \;{\theta_ - }} \end{array}} \right]\\ & \cdot \left[ {\begin{array}{c} {\left|{\sin \left( {\frac{x}{{128}} \cdot 2\pi + \frac{\pi }{2}} \right)} \right|}\\ {\sin \left( {\frac{x}{{128}} \cdot 2\pi + \frac{\pi }{2}} \right)}\\ 0\\ 0 \end{array}} \right] = \frac{1}{2}{\left( {\frac{{\tan \;{\theta_ - }}}{{\tan \;{\theta_ + }}}} \right)^2}\left[ {\begin{array}{c} {2{{\cos }^2}\;{\theta_ - } \cdot \left|{\sin \left( {\frac{x}{{128}} \cdot 2\pi + \frac{\pi }{2}} \right)} \right|}\\ {2{{\cos }^2}\;{\theta_ - } \cdot \sin \left( {\frac{x}{{128}} \cdot 2\pi + \frac{\pi }{2}} \right)}\\ 0\\ 0 \end{array}} \right]\\ & = {\left( {\frac{{\tan \;{\theta_ - }}}{{\tan \;{\theta_ + }}}} \right)^2}{\cos ^2}{\theta _ - }\left[ {\begin{array}{c} {\left|{\cos \left( {\frac{x}{{128}} \cdot 2\pi } \right)} \right|}\\ {\cos \left( {\frac{x}{{128}} \cdot 2\pi } \right)}\\ 0\\ 0 \end{array}} \right] \end{aligned}$}$$

From the polarized light received by the camera in Eq. (9), it can be seen that when phase-shifted fringes are projected, the information captured by the camera also contains the phase information, i.e.:

$${S_1} = A({x,y} )+ B({x,y} )\cos \left( {\varphi ({x,y} )- \frac{{2\pi n}}{N}} \right)$$
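For reference, a sketch applying the reflection Mueller matrix of Eq. (9) to a Stokes vector, under our assumption that $\theta_\pm$ are the sum and difference of the incidence and refraction angles.

```python
import numpy as np

def reflect_stokes(S, theta_i, theta_t):
    """Apply the specular-reflection Mueller matrix M0 of Eq. (9).
    theta_i, theta_t: assumed incidence and refraction angles (radians)."""
    tp, tm = theta_i + theta_t, theta_i - theta_t
    a = np.cos(tm) ** 2 + np.cos(tp) ** 2
    b = np.cos(tm) ** 2 - np.cos(tp) ** 2
    c = -2 * np.cos(tp) * np.cos(tm)
    M = 0.5 * (np.tan(tm) / np.tan(tp)) ** 2 * np.array(
        [[a, b, 0, 0], [b, a, 0, 0], [0, 0, c, 0], [0, 0, 0, c]])
    return M @ np.asarray(S, dtype=float)
```

Because the overall scale factor in Eq. (9) is positive, the sinusoidal $S_1$ component, and hence the fringe phase, survives the reflection.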

Since the DOLP reflects the polarization information, substituting Eq. (8) gives the DOLP image of each polarized fringe pattern:

$$\begin{array}{c} {DOL{P_n}({x,y} )= \frac{{{{\vec{{\boldsymbol S}}}_n}{{(1 )}_{max}}({x,y} )- {{\vec{{\boldsymbol S}}}_n}{{(1 )}_{min}}({x,y} )}}{{{{\vec{{\boldsymbol S}}}_n}{{(1 )}_{max}}({x,y} )+ {{\vec{{\boldsymbol S}}}_n}{{(1 )}_{min}}({x,y} )}}} \end{array}$$
where n indexes the fringe pattern, $({x,y} )$ denotes pixel coordinates, and $DOLP_n$ is the polarization enhanced structured light image. ${\vec{{\boldsymbol S}}_n}(1 )$ denotes the first element ${S_0}$ of the polarized structured light Stokes vector, and max and min denote the maximum and minimum values of ${S_0}$ observed through the horizontal analyzer, respectively.
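A minimal sketch of Eq. (11); the small `eps` guard against division by zero in dark pixels is our addition.

```python
import numpy as np

def dolp(S0_max, S0_min, eps=1e-9):
    """Eq. (11): DOLP image from the max/min S0 images seen through the analyzer."""
    return (S0_max - S0_min) / (S0_max + S0_min + eps)
```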

Substituting Eq. (11) into Eq. (3) yields:

$$\begin{array}{c} {{\varphi _4}({x,y} )= {{\tan }^{ - 1}}\frac{{DOL{P_3} - DOL{P_1}}}{{DOL{P_0} - DOL{P_2}}}} \end{array}$$

Since the DOLP image is independent of camera sensitivity, surface reflectance, and exposure time, the phase computed by the above equation depends almost entirely on the intensity of the polarized light and its polarization state. Therefore, polarization superposition coded structured light can provide stable and accurate fringes. Moreover, the DOLP computation needs only a single exposure time, which avoids the frequent exposure adjustments otherwise needed to cover the large reflectivity range of LRR objects. Polarization superposition coding thus enhances fringe quality and improves measurement speed and accuracy.
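Putting the decoding steps together, a sketch of Eq. (12) operating on four DOLP images (the array names are hypothetical):

```python
import numpy as np

def phase_from_dolp(d0, d1, d2, d3):
    """Eq. (12): four-step wrapped phase computed from DOLP images."""
    return np.arctan2(d3 - d1, d0 - d2)

# e.g. phi_w = phase_from_dolp(*(dolp(Imax[n], Imin[n]) for n in range(4)))
```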

3. Experiments

The experimental setup is shown in Fig. 3. The projector is an Epson LCD projector (CB-FH52), and the camera is a FLIR CMOS grayscale camera (BFS-U3-23S3M-C) with an HC1605 lens. The resolution of the camera is 1600 × 1200 and that of the projector is 1920 × 1080. A Zolix linear polarizer (OPSP25.4) is placed in front of the camera lens.

Fig. 3. Experimental equipment.

For the objects under test, a flat aluminum scrap-metal plate, ferrous metal bookends, and the dark side of a stamped computer case panel were chosen, as shown in Fig. 4. The flat aluminum plate can be considered a dielectric because its surface is oxidized in air to Al2O3, and the same reasoning applies to the ferrous metal bookends. The side panel of the computer case is a stamped aluminum product coated with black baked enamel, which can also be considered dielectric according to solid-state physics. Their reflectances vary greatly, so together they can be regarded as LRR objects.

Fig. 4. The measured LRR objects.

In the experimental setup, the minimum angle between the camera, the projector, and the object is about 10–20 degrees, with the camera and projector placed around the object. The linear polarizer is placed horizontally in front of the camera lens. All equipment and objects were placed on a standard Zolix dual-frequency damped optical stage. Before the experiment, the focus of the camera was adjusted until the object could be photographed clearly. The object was a metallic white aluminum plate with a relatively smooth surface, and the distance from the projector-camera pair to the object was set to approximately 90 cm to ensure the accuracy of the results. The LCD projector then projected the all-purple polarization stripe pattern onto the object.

The experiments verify the effectiveness of the proposed method. As can be seen in Fig. 5(a), the phase-shifted image of an LRR object captured in a single exposure exhibits spike waves in the fringes in the high-reflectivity region, owing to the sensitivity of conventional grayscale structured light to highly reflective surfaces, together with a low signal-to-noise ratio in the low-reflectivity region. In contrast, Fig. 5(b) shows the polarization-enhanced structured light image: the spikes that appear in the conventional grayscale phase-shift coded image are suppressed, and the fringes in the underexposed region are more prominent. Pixels in columns 200 to 1800 of row 770 of both images are selected, and the data variation across the columns is plotted in Fig. 5(c) and 5(d); the conventional grayscale fringes on the left do not match the quality and signal-to-noise ratio of the polarization-enhanced fringes on the right.

Fig. 5. Comparison of fringe cross section data between traditional gray structured light and multi-polarization superposition structured light.

Further, phase analysis is performed using the four-step phase-shift method, as shown in Fig. 6. Figures 6(a) and 6(c) show the unwrapped phase of the multi-polarization superposition coded structured light and the cross-section of columns 200 to 1800 in row 770. Figures 6(b) and 6(d) show the unwrapped phase of the conventional grayscale structured light and the cross-section at the same positions. It can be clearly seen that the phase in regions A to I in Fig. 6(d) fails to unwrap into a continuous phase.

Fig. 6. Unwrapped phase data comparison between traditional gray structured light and multi-polarization superposition structured light.

To show the unwrapping results in detail, the experiment examines columns 200 to 480 in region A, as shown in Fig. 7. Both methods have a phase value of zero at one location in region A, because that location lies in a region not irradiated by the structured light and therefore carries no phase information. The non-zero phase values that appear in Fig. 7(c) are due to indirect reflection on the object surface: the region is not irradiated directly by the projector but by inter-reflections between objects, and therefore contains erroneous polarization information. The unwrapped phase of the conventional grayscale structured light between columns 410 and 480 shows a sharp jump with a value Δ of 0.053, corresponding to a large error of 5.3% in the measured value. Meanwhile, the unwrapped phase for columns 800 to 1200, outside region A, is compared in Fig. 8. The unwrapped phase of the conventional grayscale structured light shows three obvious phase jumps, whereas that of the multi-polarization superposition structured light is coherent. Such a coherent unwrapped phase is obtained in a single exposure, enabling fast 3D measurement of LRR object surfaces and the acquisition of 3D point cloud data across regions of different reflectivity in one exposure.

Fig. 7. Comparison of partial unwrapped phase data between traditional gray structured light and multi-polarization superposition structured light.

Fig. 8. Comparison of unwrapped phase data from columns 800 to 1200 for traditional gray structured light and multi-polarization superposition structured light.

The data show that the proposed method reduces the phase-jump error by a factor of 26 compared with the conventional grayscale sinusoidal coding method.

Based on the phase information of the measured LRR object surface, the 3D point cloud of the surface is computed by combining the camera-projector calibration data, as shown in Fig. 9. The color in the figure represents the variation of the 3D point cloud along the Z axis; Y and X represent the dimensional variation along the Y and X axes. Abnormal point cloud data with large amplitude variations are clearly visible in Fig. 9(a), and Fig. 9(b) is a zoomed-in view of Fig. 9(a). From Fig. 9(b) and Fig. 9(c), it can be seen that the traditional method suffers from large areas of missing data and point cloud jumps compared with the proposed method, and its data have low credibility.

Fig. 9. Comparison between traditional gray structured light (a), (b), and the 3D point cloud data of the multi-polarization superposition method (c) in this paper.

To further analyze the 3D point cloud data, the point cloud along the red dashed line in Fig. 9, at X = 28 mm, is extracted for detailed analysis. Figure 10(a) shows the conventional grayscale sinusoidal method, and Fig. 10(b) shows the 3D point cloud cross-section of the multi-polarization superposition method. The conventional method in Fig. 10(a) not only lacks part of the 3D point cloud but also shows severe jumps above and on the surface of the object.

Fig. 10. Comparison of 3D point cloud data at X = 28 mm.

Next, the local 3D point cloud data of the white aluminum plate, black bookend, and dark chassis side panel are analyzed, as shown in Figs. 11, 12, and 13. In each figure, the blue point cloud in panel (a) is the measurement from traditional grayscale structured light, and the blue point cloud in panel (b) is the measurement from the multi-polarization superposition structured light technique. It can be seen that the 3D point cloud error of the traditional method is larger, while that of the multi-polarization superposition method is smaller.

Fig. 11. 3D point cloud data comparison of white aluminum plate at X = 28 mm.

Fig. 12. 3D point cloud data comparison of black bookend plate at X = 28 mm.

Fig. 13. 3D point cloud data comparison of dark box side panel at X = 28 mm.

Since the object surfaces are almost flat, the 3D point clouds shown in Figs. 11, 12, and 13 are fitted with straight lines; the fitting results are shown as the red lines. Using the root-mean-square error (RMSE) of the residuals to express the straightness of the 3D point cloud gives Table 1. The RMSE of the traditional grayscale method is higher than that of the proposed multi-polarization coding method; on average it is 17 times higher.
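A sketch of the straightness metric behind Table 1, assuming a least-squares line fit to each point cloud cross-section; function and variable names are illustrative.

```python
import numpy as np

def profile_rmse(y, z):
    """Fit z = a*y + b to a cross-section and return the RMSE of the residuals."""
    a, b = np.polyfit(y, z, 1)
    return np.sqrt(np.mean((z - (a * y + b)) ** 2))
```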

Table 1. RMSE value of the 3D point cloud of the object under test

The analysis of the experimental results demonstrates that the multi-polarization superposition coding method reduces the measurement error of the conventional grayscale coding method by a factor of 17, with an error of about 0.765 mm in the 3D point cloud data.

4. Summary

Experiments have shown that the multi-polarization superposition coding method improves fringe quality in a single exposure and obtains a more stable unwrapped phase. High-quality phase-shift fringes can be obtained with only one exposure using this method, so the measurement efficiency is improved. Compared with conventional grayscale encoding methods, the multi-polarization superposition coding method reduces the measurement error to 0.765 mm. The proposed structured light method is expected to be applied to the automated robotic production of industrial equipment and to workpiece defect detection, improving production efficiency.

Funding

Jiangxi Province 03 Special Projects (20212ABC03A20); Key R & D Plan of Jiangxi Province (20223BBE51010); National Natural Science Foundation of China (52065024).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. F. Blais, “Review of 20 years of range sensor development,” J. Electron. Imaging 13(1), 231–243 (2004). [CrossRef]

2. Z. Zhu, Y. Xie, and Y. Cen, “Polarized-state-based coding strategy and phase image estimation method for robust 3D measurement,” Opt. Express 28(3), 4307 (2020). [CrossRef]  

3. Z. Zhu, D. You, F. Zhou, S. Wang, and Y. Xie, “Rapid 3D reconstruction method based on the polarization-enhanced fringe pattern of an HDR object,” Opt. Express 29(2), 2162–2171 (2021). [CrossRef]  

4. Z. Zhu, Y. Dong, and D. You, “Accurate three-dimensional measurement based on polarization-defocused encoded structured light,” Measurement 205, 1112128 (2022). [CrossRef]  

5. S. Feng, C. Zuo, and T. Tao, “Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry,” Opt. Lasers Eng. 103, 127–138 (2018). [CrossRef]  

6. Y. Fu, “Three-dimensional shape measurement based on a combination of gray-code and phase-shift light projection,” Int. Soc. Opt. Photonics 50(4), 574–579 (2013).

7. J. Tan, Z. He, and W. Su, “Robust fringe projection measurement based on reference phase reconstruction,” Opt. Lasers Eng. 147(3), 106746 (2021). [CrossRef]  

8. X.-Y. Su, G. von Bally, and D. Vukicevic, “Phase-stepping grating profilometry: utilization of intensity modulation analysis in complex objects evaluation,” Opt. Commun. 98(1-3), 141–150 (1993). [CrossRef]  

9. P. S. Huang and S. Zhang, “Fast three-step phase-shifting algorithm,” Proc. SPIE 6292, 62920M (2006). [CrossRef]

10. S. Zhang and S.-T. Yau, “High-resolution, real-time 3d absolute coordinate measurement based on a phase-shifting method,” Opt. Express 14(7), 2644–2649 (2006). [CrossRef]  

11. X. Tian, R. Liu, Z. Wang, and J. Ma, “High quality 3D reconstruction based on fusion of polarization imaging and binocular stereo vision,” Inf. Fusion 77, 19–28 (2022). [CrossRef]  

12. D.-B. Goldman, B. Curless, A. Hertzmann, et al., “Shape and spatially-varying BRDFs from photometric stereo,” IEEE Trans. Pattern Anal. Mach. Intell. 32(6), 1060–1071 (2010). [CrossRef]  

13. S. Feng, Z. Chao, and Z. Liang, “Calibration of fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 143, 106622 (2021). [CrossRef]  

14. L. Shaoxu, F. Da, and L. Rao, “Adaptive fringe projection method for high-dynamic range three-dimensional shape measurement using binary search,” Opt. Eng. 56(09), 1–7 (2017). [CrossRef]  

15. Y. Liu, F. Yanjun, and C. Xiaoqi, “A novel high dynamic range 3D measurement method based on adaptive fringe projection method,” Opt. Lasers Eng. 128, 106004 (2020). [CrossRef]  

16. Y. Yoshinori, M. Hiroyuki, N. Osamu, and I. Tetsuo, “Shape measurement of glossy objects by range finder with polarization optical system,” Gazo Denshi Gakkai Kenkyukai Koen Yoko 200, 43–50 (2003).

17. J. Liang, X. Tian, X. Tu, O. Spires, N. Brock, D. Wang, H. Wu, L. Ren, B. Yao, S. Pau, and R. Liang, “Color full stokes polarization fringe projection 3D imaging,” Opt. Lasers Eng. 130, 106088 (2020). [CrossRef]  

18. B. Salahieh, Z. Chen, J. J. Rodriguez, and R. Liang, “Multi-polarization fringe projection imaging for high dynamic range objects,” Opt. Express 22(8), 10064 (2014). [CrossRef]  

19. H. Xiao, B. Jian, and K. Wang, “Target enhanced 3D reconstruction based on polarization-coded structured light,” Opt. Express 25(2), 1173 (2017). [CrossRef]  

20. T. Wang, S. Fu, and F. He, “Generation of perfect polarization vortices using combined gratings in a single spatial light modulator,” Appl. Opt. 56(27), 7567 (2017). [CrossRef]  

21. Y. Ke, S. Chen, W. Shu, and H. Luo, “Generation of perfect vector beams based on the combined modulation of dynamic and geometric phases,” Opt. Commun. 446, 191–195 (2019). [CrossRef]  
