
Rapid 3D reconstruction method based on the polarization-enhanced fringe pattern of an HDR object


Abstract

Measurement of high dynamic range (HDR) objects is an obstacle in structured light 3D measurement: such objects contain both over-exposed and under-exposed pixels in a single exposure. This paper proposes a polarization-enhanced fringe pattern (PEFP) method by which a high dynamic range image can be obtained within a single exposure time. In this method, the degree of linear polarization (DOLP) is calculated using the polarization properties of reflected light and a linear polarizer at a fixed azimuth. The DOLP is efficiently estimated from the projected polarization-state-encoded (PSE) patterns without changing the state of the polarizer. Experimental results indicate that the DOLP depends on light intensity rather than on the reflectivity of the object surfaces. The proposed method enhances the contrast and improves the quality of the fringe patterns, so denser 3D point clouds and higher-quality shapes can be recovered.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A three-dimensional (3D) shape conveys depth information and structural features of objects [1] far better than a two-dimensional (2D) image captured by conventional cameras and imaging sensors. Therefore, 3D shape measurement is applied to industrial inspection [2], cultural heritage protection [3], medical cosmetology [4], digital entertainment [5] (virtual reality, games, and movies), and so forth. Besides passive methods such as stereoscopy, the most widely used active methods are based on structured light. Structured light 3D measurement has advanced tremendously over the last three decades. However, shiny objects and complex objects with large reflectivity variations across their surfaces usually violate the assumption that surfaces are Lambertian. Most optical metrology techniques suppose that the surfaces are diffuse and span a small range of reflectivity. If not, the structured pattern modulated onto such objects, e.g., metallic workpieces, cannot be acquired accurately [6], even when the projector-camera system is carefully calibrated [7]: the gray intensity captured by the camera is saturated or has a low SNR. The missing information leads to inaccurate decoding of the encoded fringe patterns reflected from the object, so the 3D point coordinates cannot be deduced from the triangulation principle [8].

To solve these measurement problems, many methods [6] have emerged, including (1) multi-exposure, (2) adjustment of the projected pattern intensity, (3) perpendicularly polarized filters, (4) the Dichromatic Reflection Model, and (5) photometric stereo. Some studies report that, for a fixed exposure time, the gray value is approximately proportional to the light intensity [6]; the pixel gray value is also approximately proportional to the surface reflectivity when the exposure time or the projected pattern intensity is adjusted [9–11]. However, these methods need to take a set of snapshots, and some must fuse those images into a high-SNR image, which requires considerable time. When light is cast on metal objects, the reflected light contains diffuse and specular components simultaneously [12]. An orthogonal pair of polarizers can be placed in front of the camera and projector, respectively, so that the polarizer in front of the camera eliminates the polarized specular light; however, rotating a polarizer to change the projected or received polarization state consumes much time. The Dichromatic Reflection Model [13] assumes that the reflection consists of body components, due to the object material, and interface components, which have the same color as the light source. It is not useful for metallic objects; moreover, under over-exposure, the highlight regions do not contain the body component of the reflection. Furthermore, the method in [14] is sensitive to surface color when detecting green or red stripes projected onto the objects. The photometric stereo technique uses the bidirectional reflectance distribution function (BRDF), detailed in [15]; the BRDF can recover the 3D profile, but the process is cumbersome. The method of [16] utilizes a plenoptic camera to capture a series of small images from different viewpoints, which saves acquisition time, but flaws in assembling the microlenses and the primary lens constrain the range of measurable surface slopes.

Light has several inherent properties, e.g., irradiance, spectrum, dispersion, reflectance, refraction, and polarization. Besides intensity imaging, polarimetric imaging has been exploited for target enhancement [17], target detection [18], remote sensing of atmospheric aerosols and pollution [19], and so forth. Polarization information supplements the grayscale dimension for particular applications. This paper proposes a method named polarization-enhanced fringe pattern (PEFP), which enhances the fringe pattern based on polarization properties. The method takes advantage of the polarization-state-encoded pattern strategy [20] based on the conventional Gray code and calculates the degree of linear polarization (DOLP) of each pattern reflected from the object surfaces.

The DOLP image is deduced from only a pair of images and does not require changing the exposure time; moreover, it improves the image contrast, which benefits the subsequent decoding process. Both metals and some dielectrics have polarization properties [12], so a polarization-based approach can be widely used. In this paper, the structured light is therefore linearly polarized (in horizontal and vertical polarization states). Calculating the degree of linear polarization improves the quality and contrast of the fringe pattern, so almost all pixels of an object in the grabbed image can be encoded by the PEFP strategy without multi-exposure. The remainder of this paper is organized as follows: the principles are introduced in Section 2; experiments and discussion are presented in Section 3; the conclusion is given in Section 4.

2. Principles

2.1 Polarization-state-encode strategy

The correspondences between camera pixels and projector rows and columns must be established before a 3D point cloud can be reconstructed; structured light is therefore designed to create feature points on the object surfaces. Huang et al. [20] proposed a polarization-state-encoded (PSE) strategy based on conventional Gray code structured light, in which dark stripes encode 0 and bright stripes encode 1. Unlike Gray code structured light, however, each stripe of a PSE pattern has a polarization orientation, specifically horizontal or vertical, while the pattern's gray value has uniform light intensity (255 grayscale value). Figure 1 shows examples of the Gray code and PSE patterns. A liquid crystal display (LCD) projector can project polarized light because its liquid crystal display units use thin-film transistors: in the RGB channels of the LCD, purple light (red and blue mixed) is vertically polarized, while green light is horizontally polarized. Since the projector can project an arbitrary 24-bit color image, a pattern can spatially distribute fringes of different polarization states, so a conventional grayscale pattern can be converted into a polarization-state pattern.

When a linear polarizer filters the light, only the component parallel to the polarizer's transmission axis passes through. The camera therefore records the intensity transmitted by the linear polarizer: little of the vertically polarized light is retained, while the horizontally polarized light is maintained. Consequently, the vertical polarization state is encoded as 0 and the horizontal polarization state as 1, as shown in Fig. 1.
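For concreteness, a minimal sketch of how such a pattern could be synthesized is given below (Python with NumPy; this is an illustration written for this text, not the authors' code, and the column-stripe orientation and exact color values are assumptions):

```python
import numpy as np

def gray_code_bit_plane(width, n_bits, bit):
    """One binary-reflected Gray code bit plane over projector columns;
    bit = 0 selects the most significant bit (pattern 1 in Fig. 1)."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                  # binary-reflected Gray code
    return (gray >> (n_bits - 1 - bit)) & 1

def pse_pattern(width, height, n_bits, bit):
    """Map a bit plane to a 24-bit PSE image for the LCD projector:
    code 0 -> purple (red + blue, vertically polarized on this LCD),
    code 1 -> green (horizontally polarized), all at full intensity."""
    plane = gray_code_bit_plane(width, n_bits, bit)
    img = np.zeros((height, width, 3), np.uint8)
    img[:, plane == 0] = (255, 0, 255)         # purple stripes: code 0
    img[:, plane == 1] = (0, 255, 0)           # green stripes: code 1
    return img
```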


Fig. 1. Comparison of the first four bit-plane patterns and their codes for the Gray code (top) and polarization-state-encoded (bottom) strategies. The first bit of a pixel's code is determined by pattern 1, the second bit by pattern 2, and so on, ordered from most to least significant bit.


2.2 Reflection characteristic of polarization

Projected polarized light acting on an object's surface can change its polarization state, especially when the incident angle of the light source is adjusted. To describe how the light's polarization state changes as it transmits through the polarizer and reflects from the air-dielectric interface, the process can be expressed in the Stokes-Mueller polarimetry formalism [21] as:

$${{\boldsymbol S}_{grab}} = {{\boldsymbol P}_\theta } \cdot {{\boldsymbol M}_O} \cdot {{\boldsymbol S}_{proj}},$$
$${{\boldsymbol P}_\theta } = \frac{1}{2}\left[ {\begin{array}{cccc} 1&{\cos 2\theta }&{\sin 2\theta }&0\\ {\cos 2\theta }&{{{\cos }^2}2\theta }&{\sin 2\theta \cos 2\theta }&0\\ {\sin 2\theta }&{\sin 2\theta \cos 2\theta }&{{{\sin }^2}2\theta }&0\\ 0&0&0&0 \end{array}} \right],$$
$${{\boldsymbol M}_O} = \frac{1}{2}{\left( {\frac{{\tan {\theta _ - }}}{{\tan {\theta _ + }}}} \right)^2}\left[ {\begin{array}{cccc} {{{\cos }^2}{\theta _ - } + {{\cos }^2}{\theta _ + }}&{{{\cos }^2}{\theta _ - } - {{\cos }^2}{\theta _ + }}&0&0\\ {{{\cos }^2}{\theta _ - } - {{\cos }^2}{\theta _ + }}&{{{\cos }^2}{\theta _ - } + {{\cos }^2}{\theta _ + }}&0&0\\ 0&0&{ - 2\cos {\theta _ - }\cos {\theta _ + }}&0\\ 0&0&0&{ - 2\cos {\theta _ - }\cos {\theta _ + }} \end{array}} \right],$$
where ${{\boldsymbol S}_{proj}}$ and ${{\boldsymbol S}_{grab}}$ are the Stokes vectors of the light projected by the projector and the light grabbed by the camera, ${{\boldsymbol P}_\theta }$ is the Mueller matrix of the linear polarizer at an azimuth of $\theta$, ${{\boldsymbol M}_O}$ is the Mueller matrix of reflection at the air-dielectric interface (i.e., polarized light transmitted through the air and then reflected from the metallic or dielectric object), and ${\theta _ \pm } = {\theta _{incident}} \pm {\theta _{refractive}}$ are the sum and difference of the angles of incidence and refraction. When the incidence angle of the light source is adjusted to a small angle, ${\theta _ \pm } \approx 0$; as a result, ${\cos ^2}{\theta _ - } - {\cos ^2}{\theta _ + } \approx 0$ and ${\cos ^2}{\theta _ - } + {\cos ^2}{\theta _ + } \approx 2$. Substituting these approximations into Eq. (3) and combining with Eqs. (1) and (2) yields:
$$\begin{aligned} {{\boldsymbol S}_{grab({\parallel , \bot } )}} &= {{\boldsymbol P}_{\theta = 0}} \cdot {{\boldsymbol M}_O} \cdot {{\boldsymbol S}_{proj({\parallel , \bot } )}} = \frac{1}{2}{\left( {\frac{{\tan {\theta _ - }}}{{\tan {\theta _ + }}}} \right)^2}\left[ {\begin{array}{cccc} 1&1&0&0\\ 1&1&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{array}} \right]\left[ {\begin{array}{cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{array}} \right]\left[ {\begin{array}{cc} 1&1\\ 1&{ - 1}\\ 0&0\\ 0&0 \end{array}} \right]\\ &= {\left( {\frac{{\tan {\theta _ - }}}{{\tan {\theta _ + }}}} \right)^2}\left[ {\begin{array}{cc} 1&0\\ 1&0\\ 0&0\\ 0&0 \end{array}} \right], \end{aligned}$$
where the vertically polarized light is represented as ${{\boldsymbol S}_{proj(\bot )}} = {\left[ {\begin{array}{cccc} 1&{ - 1}&0&0 \end{array}} \right]^T}$, the horizontally polarized light as ${{\boldsymbol S}_{proj({\parallel} )}} = {\left[ {\begin{array}{cccc} 1&1&0&0 \end{array}} \right]^T}$, and the polarizer fixed horizontally is denoted ${{\boldsymbol P}_{\theta = 0}}$. The first element of a Stokes vector is the total light intensity; the second element is the intensity of horizontally polarized light (positive sign) or vertically polarized light (negative sign).
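As a numerical sanity check of Eq. (4), the following sketch (Python with NumPy; an illustration written for this text, not part of the paper) multiplies the small-angle matrices and confirms that a horizontal analyzer passes the horizontally polarized Stokes vector and extinguishes the vertical one; the common $(\tan\theta_-/\tan\theta_+)^2$ scale factor is dropped:

```python
import numpy as np

def polarizer(theta):
    """Mueller matrix of an ideal linear polarizer at azimuth theta, Eq. (2)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([[1, c,     s,     0],
                           [c, c * c, s * c, 0],
                           [s, s * c, s * s, 0],
                           [0, 0,     0,     0]])

# Small-angle limit of the reflection matrix in Eq. (3), theta_+- -> 0,
# with the common (tan/tan)^2 scale factor omitted.
M_O = np.diag([1.0, 1.0, -1.0, -1.0])

S_h = np.array([1.0,  1.0, 0.0, 0.0])   # horizontally polarized input
S_v = np.array([1.0, -1.0, 0.0, 0.0])   # vertically polarized input

P0 = polarizer(0.0)                     # analyzer fixed horizontal
print(P0 @ M_O @ S_h)                   # [1. 1. 0. 0.] -> transmitted
print(P0 @ M_O @ S_v)                   # [0. 0. 0. 0.] -> extinguished
```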

Equation (4) implies that, when a horizontally fixed polarizer is placed in front of the camera, ${{\boldsymbol S}_{grab({\parallel , \bot } )}}$ maintains the horizontally polarized component reflected from the surfaces but eliminates the vertically polarized component. It also suggests that polarized light is reflected from metallic or dielectric surfaces with its intrinsic polarization state preserved. However, this result holds only under certain conditions; specifically, the incident angle must be limited to a small range, as mentioned above.

2.3 Polarization-enhanced fringe pattern (PEFP) strategy

The degree of linear polarization (DOLP) is a physical quantity describing the state of polarized light; specifically, it quantifies the degree to which light is linearly polarized at a specific azimuth. For example, horizontally polarized light has maximum DOLP at the horizontal azimuth. Goldstein [21] defines the degree of linear polarization from the electric-field intensities at different azimuths as follows:

$$DOLP = \frac{{{I_{max}} - {I_{min}}}}{{{I_{max}} + {I_{min}}}},$$
where ${I_{max}}$ and ${I_{min}}$ are the maximum and minimum light intensities transmitted through the polarizer at orthogonal azimuths. The DOLP value varies between 0 and 1 inclusive. From Eq. (4), ${I_{max}}$ is the horizontally polarized light intensity and ${I_{min}}$ the vertically polarized one, because horizontally polarized light reflected from the surfaces transmits through the horizontal polarizer, whereas vertically polarized light does not. Therefore, the DOLP can be calculated efficiently without rotating the polarizer.
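In practice, Eq. (5) reduces to a per-pixel division of two grabbed images; a minimal sketch (Python with NumPy, written for this text rather than taken from the paper) is:

```python
import numpy as np

def dolp_image(i_max, i_min):
    """Per-pixel DOLP of Eq. (5).
    i_max: image grabbed while a PSE pattern is projected,
    i_min: image grabbed under the uniform vertically polarized pattern
    (see Section 3). Pixels where both intensities are zero get DOLP = 0,
    matching the convention adopted in Section 3."""
    num = i_max.astype(np.float64) - i_min.astype(np.float64)
    den = i_max.astype(np.float64) + i_min.astype(np.float64)
    out = np.zeros_like(den)
    np.divide(num, den, out=out, where=den > 0)  # guard the zero denominator
    return out
```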

For an ideal camera imaging system [21] in which imaging noise is ignored, the gray value of a pixel can be expressed as follows:

$${I_{gray}}({x,y;t} ) = \alpha \rho t \cdot {L_{proj}},$$
where $\alpha$ is the camera sensitivity, $\rho$ is the reflectivity of the surface, $t$ is the exposure time, ${L_{proj}}$ is the projected light intensity, and ${I_{gray}}({x,y;t} )$ is the gray value of the corresponding pixel in the image coordinate system. Note that the projected light ${L_{proj}}$ is polarized, so let ${L_{max}}$ and ${L_{min}}$ denote the maximum and minimum projected light intensities at the horizontal azimuth. From Eqs. (5) and (6), the following can be derived:
$$\begin{aligned}{\textrm{DOLP}}({x,y} )&= \frac{{{I_{max}}({x,y} )- {I_{min}}({x,y} )}}{{{I_{max}}({x,y} )+ {I_{min}}({x,y} )}}\\ & = {\frac{{\alpha \rho t \cdot ({{L_{max}}({x,y} )- {L_{min}}({x,y} )} )}}{{\alpha \rho t \cdot ({{L_{max}}({x,y} )+ {L_{min}}({x,y} )} )}} = \frac{{{L_{max}}({x,y} )- {L_{min}}({x,y} )}}{{{L_{max}}({x,y} )+ {L_{min}}({x,y} )}}.} \end{aligned}$$

Equation (7) implies that the DOLP image is unrelated to the camera sensitivity and the surface reflectivity, and is independent of the exposure time; its value depends almost entirely on the light intensity and the polarization state. This is why the DOLP is exploited in polarimetric imaging: it removes the need to repeatedly adjust the exposure time to accommodate the varying reflectivity of HDR objects, since a single exposure time suffices.

3. Experiments and discussion

The experimental system consists of an LCD projector (NP-CA4160X) with a resolution of 1024×768, a monochrome CMOS camera (FLIR BFS-U3-23S3M-C with an HC1605 lens) at an adjusted resolution of 1600×1200, and a horizontal linear polarizer (OPSP25.4) placed in front of the camera, as shown in Fig. 2(a). The system is calibrated with Bouguet's toolbox [22]. The projector projects the polarization-state-encoded (PSE) patterns described in Section 2. The camera is set close to the projector, and the incident angle is kept within the small range discussed in Section 2. The camera gamma and sensor gain are set to 1 so that the gray value is nearly linear in light intensity, and the exposure time is fixed at 2 ms to avoid excessive saturation in an image. Because the PSE pattern is based on the Gray code, 20 PSE patterns (i.e., $[{{{\log }_2}1024} ]+ [{{{\log }_2}768} ]= 20$) and one uniform vertically polarized pattern were projected sequentially onto the objects; their inverse patterns were also projected to eliminate decoding artifacts. The value of ${I_{min}}$ is obtained from the image captured while projecting the uniform vertically polarized pattern, and ${I_{max}}$ from the image captured with a PSE pattern. To keep Eq. (5) well defined, the DOLP is set to zero wherever both ${I_{max}}$ and ${I_{min}}$ are zero.


Fig. 2. (a) The experimental system setup, consisting of a compact projector and camera with a polarizer placed in front of the camera. The objects are about 1000 mm from the system, and the incident angle is about 3°. (b) A thick white aluminum alloy plate leaning against a thin black-coated metal plate; together they span a large range of reflectivity. (c) A Gray code pattern projected onto the objects. (d) A PSE pattern projected onto the objects, grabbed through the polarizer. Note that (c) and (d) differ in detail.


Images (a) and (c) in Fig. 2 indicate that the camera cannot capture a high dynamic range image in a single exposure: the camera response function [23] maps high light intensities to the saturated gray value 255 and low light intensities to the under-exposed gray value 0. Many studies show that short exposures leave objects under-exposed, missing 3D points in those areas, while long exposures saturate objects and likewise prevent correct 3D reconstruction. In this setup, the 2 ms exposure yields images that are neither severely saturated nor dominated by very low gray values, so they can be used to compute the DOLP image.

The degree of image contrast was analyzed first: the larger it is, the more information the image retains. It is used here to demonstrate that the DOLP image enhances image contrast, and it is expressed as follows:

$$C = \mathop \sum \nolimits_\delta \delta {({i,j} )^2}{P_\delta }({i,j} ).$$
where $\delta ({i,j} )= |{i - j} |$ is the absolute gray-value difference between adjacent pixels and ${P_\delta }({i,j} )$ is the distribution probability of adjacent pixel pairs having that value of $\delta$. The ${P_\delta }({i,j} )$ are obtained implicitly: the number N of adjacent-pixel differences is determined in advance by the image resolution, and the summation of $\delta {({i,j} )^2}$ is divided by N. Note that the pixels around the image boundary are replicated outward, with each expanded pixel taking the gray value of its neighbor, so boundary pixels contribute differences without increasing the summation. Table 1 indicates that the degree of image contrast of the DOLP image is five times that of the raw Gray code image; the image contrast of the fringe pattern is thus enhanced in DOLP images.
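Under this reading of Eq. (8) (replicate-padded border, probabilities realized implicitly as 1/N), a possible implementation is sketched below in Python with NumPy; the four-neighbor adjacency is an assumption, since the text does not name the neighborhood:

```python
import numpy as np

def contrast(img):
    """Contrast metric of Eq. (8): the mean squared gray-value difference
    over all adjacent-pixel pairs. Replicate padding makes boundary pixels
    contribute zero-valued differences, as described above."""
    g = np.pad(img.astype(np.float64), 1, mode='edge')
    center = g[1:-1, 1:-1]
    diffs = np.stack([center - g[:-2, 1:-1],    # up
                      center - g[2:,  1:-1],    # down
                      center - g[1:-1, :-2],    # left
                      center - g[1:-1, 2:]])    # right
    return np.mean(diffs ** 2)                  # sum of delta^2 divided by N
```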


Table 1. The degree of image contrast of conventional raw gray images and DOLP images

Owing to the polarization property, vertically polarized light, such as polarized highlights, cannot transmit through the horizontal polarizer, so the light reflected from the object surfaces maintains the stripe pattern shape. As mentioned in Section 2, the DOLP image is calculated from two images (one captured while casting the uniform vertically polarized pattern and one while casting a PSE pattern), and the polarizer never needs to rotate; only the projected polarization is changed by the designed patterns. The PEFP strategy yields a high dynamic range image, as shown in Figs. 3(b) and 3(d): the dark regions in Figs. 3(a) and 3(c) become bright in Figs. 3(b) and 3(d), and the pattern shape is recovered except in areas outside the illumination. This confirms that the DOLP images are unrelated to the camera sensitivity, the surface reflectivity, and the exposure time, consistent with the stated advantage of the PEFP strategy.


Fig. 3. Comparison between raw Gray code images (a, c) and DOLP images (b, d). The DOLP images have high contrast and high dynamic range: the dark areas in (a, c) convert to high gray values, except for parts that are not illuminated and the vertically polarized stripes of the pattern (i.e., vertical stripes coding 0). (e) Cross-section of the 260th row marked in (c); (f) cross-section of the 260th row marked in (d).


To demonstrate that the fringe pattern is enhanced in the DOLP images, the cross-section of the 260th row dotted in Figs. 3(c) and 3(d) was selected; the results are shown in Figs. 3(e) and 3(f), with the image grayscale normalized to [0,1]. The results imply that the normalized gray value of the DOLP pattern varies over a high dynamic range. Because of this dramatic variation in the pattern's gray value, it is effective to segment the pattern and extract the stripe edges with Trujillo-Pino's method [24], which benefits the decoding of the projector indices. The stripe edges were extracted to analyze the encoded projector rows and columns. Figures 4(b), 4(c), 4(e), and 4(f) zoom in on the small squares in Figs. 4(a) and 4(d): part of the black-coated metal plate at the bottom left and part of the thick white aluminum alloy plate at the top right, respectively. The results show that the stripe edges of the raw gray image deviate and are severely lost, so raw gray images cannot yield fine stripe edges in both light and dark regions. The PEFP strategy, in contrast, maintains high-quality stripe edges in both regions, as shown in Fig. 4. Therefore, the DOLP images of the PEFP strategy preserve high-quality stripe edges and enable effective decoding of the projector row and column indices.
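As an illustration of this decoding step (a hypothetical sketch in Python with NumPy, not the authors' implementation), each pixel's bit can be classified by comparing a pattern against its inverse, assembled into a Gray-code word most significant bit first as in Fig. 1, and converted to a binary projector index:

```python
import numpy as np

def decode_indices(planes, inv_planes):
    """planes / inv_planes: lists of DOLP images for each PSE pattern and
    its inverse, ordered from most to least significant bit (cf. Fig. 1).
    A pixel's bit is 1 where the pattern is brighter than its inverse."""
    n = len(planes)
    gray = np.zeros(planes[0].shape, np.uint32)
    for p, q in zip(planes, inv_planes):
        gray = (gray << 1) | (p > q).astype(np.uint32)
    binary = gray.copy()
    shift = 1
    while shift < n:                 # Gray code -> binary (prefix XOR)
        binary ^= binary >> shift
        shift <<= 1
    return binary
```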


Fig. 4. Extracted stripe edges of the raw gray image (a) and DOLP image (d). The windows with solid and dotted borders in (b, c) zoom in on the areas marked with the solid and dotted windows in (a), respectively; those in (e, f) zoom in on the corresponding windows in (d).


As shown in Figs. 5(d) and 5(e), the decoded projector row and column indices are mapped to color indices; white areas mark pixels that missed the projector's indices. Unlike the conventional raw gray image method, the proposed PEFP strategy loses few of the recovered projector indices, so decoding the DOLP image yields a more sufficient 3D point cloud than the raw gray image, and hence higher-quality 3D shape measurement. As shown in Figs. 5(c) and 5(f), the PEFP strategy recovers the 3D points missed in the dark areas of the black-coated metal plate. Note that in Fig. 5(f), because the light reflected from the steep area is feeble and the grabbed gray value is zero, 3D points are still missing in the narrow, steep areas on the left of the thin black-coated metal plate.


Fig. 5. Projector row and column indices recovered from the raw gray image (a, b) and the PEFP strategy (d, e); the jet color bar on the right gives the magnitude of the corresponding indices. (c, f) Reconstructed 3D point clouds of the objects; the parula color bar on the right gives the depth of the corresponding point in the camera coordinate system.


Finally, the accuracy of the conventional raw gray image method and the proposed PEFP strategy is compared in Fig. 6. Since the plates are known to be flat, the RMSE (root-mean-square error) of a linear fit [25] to the cross-sections in Figs. 6(a) and 6(b) is used to assess accuracy. In Fig. 6(a), the RMSE is 0.6965 for the conventional raw gray image method and 0.6036 for the PEFP strategy. More of the surface of the black-coated metal plate at the bottom left can be reconstructed using the PEFP strategy. Although the raw gray image method's RMSE of 0.7356 there is smaller than the PEFP strategy's 0.8035, it reconstructs only a few 3D points, which cannot recover delicate 3D surfaces as well as a sufficient point cloud can. Overall, the PEFP strategy performs better than the conventional raw gray image method.
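The flatness score can be reproduced in a few lines (a sketch assuming an ordinary least-squares line fit; the paper does not list its fitting code):

```python
import numpy as np

def linear_fit_rmse(x, z):
    """RMSE of the residuals of a least-squares line z ~ a*x + b,
    used to score the flatness of a reconstructed planar cross-section."""
    a, b = np.polyfit(x, z, 1)          # first-degree polynomial fit
    residuals = z - (a * x + b)
    return np.sqrt(np.mean(residuals ** 2))
```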


Fig. 6. Cross-sections and linear fits of the reconstructed 3D point clouds on the top (a) and bottom (b) areas in the red windows in Fig. 4.


4. Conclusion

This paper proposed the PEFP strategy, which enhances and recovers patterns over a large dynamic range without multi-exposure or rotation of the polarizer. The measurement of high dynamic range or highly variable reflectivity objects is efficient with the proposed PEFP strategy because the DOLP is unrelated to reflectivity and camera sensitivity; its result primarily depends on the polarization of the reflected light, which is filtered by a fixed linear polarizer. The DOLP image is obtained from two images grabbed while projecting a uniform vertically polarized pattern and a PSE pattern. Since the uniform vertically polarized pattern is projected only once, the number of projected patterns is only one more than with conventional Gray code patterns. The polarization-state-encoded pattern was first proposed by Huang et al. [20]; however, whereas Huang used the objects' different DOLP values as an image segmentation threshold to isolate a target from its surroundings, the proposed PEFP strategy exploits the DOLP information to decode the projector rows and columns and then reconstruct the 3D point cloud. In the experiments, two objects with a large reflectivity range were used to evaluate the method's feasibility. The experiments illustrate that the proposed PEFP strategy performs better than the conventional raw gray image method: it enhances the patterns and increases their contrast fivefold, and the resulting 3D point clouds preserve the dark regions' points that are missed when decoding the raw gray image with its low contrast. Nevertheless, steep regions that cannot reflect light to the camera remain a challenge in structured light 3D measurement.

Funding

Key Research and Development Program of Jiangxi Province (20202BBE53022); National Natural Science Foundation of China (52065024, 61763012).

Disclosures

The authors declare no conflicts of interest.

References

1. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128 (2011). [CrossRef]  

2. X. He, W. Sun, X. Zheng, and N. Meng, “Static and dynamic deformation measurements of microbeams by the technique of digital image correlation,” Key Eng. Mater., 326–328 (2006).

3. G. Sansoni and F. Docchio, “3-D optical measurements in the field of cultural heritage: the case of the Vittoria Alata of Brescia,” IEEE Trans. Instrum. Meas. 54(1), 359–368 (2005). [CrossRef]

4. J. M. Lagarde, C. Rouvrais, D. Black, S. Diridollou, and Y. Gall, “Skin topography measurement by interference fringe projection: a technical validation,” Skin. Res. Technol. 7(2), 112–121 (2001). [CrossRef]  

5. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Laser. Eng. 48(2), 133–140 (2010). [CrossRef]  

6. H. Lin et al., “Review and Comparison of High-Dynamic Range Three-Dimensional Shape Measurement Techniques,” J. Sensors (2017).

7. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

8. J. Salvi, J. Pagès, and J. Batlle, “Pattern codification strategies in structured light systems,” Pattern Recognition 37(4), 827–849 (2004). [CrossRef]

9. H. Lin, J. Gao, Q. Mei, Y. He, J. Liu, and X. Wang, “Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement,” Opt. Express 24(7), 7703 (2016). [CrossRef]  

10. L. Ekstrand and S. Zhang, “Autoexposure for three-dimensional shape measurement using a digital-light-processing projector,” Opt. Eng. 50(12), 123603 (2011). [CrossRef]

11. S. Zhang, “Rapid and automatic optimal exposure control for digital fringe projection technique,” Opt. Laser. Eng. 128, 106029 (2020). [CrossRef]  

12. L. B. Wolff, “Using polarization to separate reflection components,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 1989).

13. S. A. Shafer, “Using color to separate reflection components,” Color Res. Appl. 10(4), 210–218 (1985). [CrossRef]  

14. R. Benveniste and C. Unsalan, “A Color Invariant for Line Stripe-Based Range Scanners,” Comput J. 54(5), 738–753 (2011). [CrossRef]  

15. F. Nicodemus, J. Richmond, J. Hsia, I. Ginsberg, and T. Limperis, Geometrical Considerations and Nomenclature for Reflectance (US Department of Commerce, National Bureau of Standards, Washington, DC, 1977).

16. L. Meng, L. Lu, N. Bedard, and K. Berkner, “Single-Shot Specular Surface Reconstruction with Gonio-Plenoptic Imaging,” in IEEE International Conference on Computer Vision (IEEE, 2015), pp. 3433–3441.

17. X. Huang, Y. Luo, J. Bai, R. Cheng, K. He, K. Wang, Q. Liu, Y. Luo, and J. Du, “Polarimetric target depth sensing in ambient illumination based on polarization-coded structured light,” Appl. Opt. 56(27), 7741–7748 (2017). [CrossRef]  

18. S. S. Lin, K. M. Yemelyanov, E. J. Pugh, and N. Engheta, “Polarization-based and specular-reflection-based noncontact latent fingerprint imaging and lifting,” J. Opt. Soc. Am. A 23(9), 2137–2153 (2006). [CrossRef]  

19. T. Treibitz and Y. Y. Schechner, “Polarization: Beneficial for visibility enhancement?” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 525–532.

20. X. Huang, J. Bai, K. Wang, Q. Liu, Y. Luo, K. Yang, and X. Zhang, “Target enhanced 3D reconstruction based on polarization-coded structured light,” Opt. Express 25(2), 1173 (2017). [CrossRef]  

21. D. H. Goldstein, Polarized Light, Revised and Expanded (Marcel Dekker, 2003).

22. J. Y. Bouguet, “Camera calibration toolbox for Matlab,” http://pku.summon.serialssolutions.com.

23. P. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in Proceedings of SIGGRAPH ’97 (ACM Press/Addison-Wesley, 1997), pp. 369–378.

24. A. Trujillo-Pino, K. Krissian, M. Alemán-Flores, and D. Santana-Cedrés, “Accurate subpixel edge location based on partial area effect,” Image Vision Comput. 31(1), 72–90 (2013). [CrossRef]  

25. D. A. Freedman, Statistical Models: Theory and Practice (Cambridge University Press, 2009).
