
3D shape measurement of diffused/specular surface by combining fringe projection and direct phase measuring deflectometry

Open Access

Abstract

The three-dimensional (3D) data of object surfaces, such as those of precision machine parts, play an important role in aerospace, the automotive industry, augmented reality, heritage preservation, smart cities, etc. The existing fringe projection profilometry and deflectometry techniques can only measure the 3D shape of diffused and specular surfaces, respectively. However, many components have both diffused and specular surfaces. This paper presents a novel method for measuring the 3D shape of diffused/specular surfaces by combining fringe projection profilometry and direct phase measuring deflectometry. The principle and calibration method of the proposed approach are elaborated. Experimental studies are conducted on an artificial step having both diffused and specular surfaces to verify the measurement accuracy. The results on several objects show that the proposed method can measure diffused/specular surfaces effectively with acceptable accuracy. Error sources are also analyzed to guide further improvement of the measurement accuracy.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical three-dimensional (3D) shape measurement techniques have been widely investigated to meet different needs in various fields, such as 3D digital model establishment in reverse engineering, palmprint identification in biomedical engineering, inspection of optical components, and online detection in advanced manufacturing, because of their advantages of non-contact operation, fast measurement and high accuracy [1–4]. Among the existing 3D shape measurement techniques, fringe projection profilometry (FPP) [5] and phase measuring deflectometry (PMD) [6] were successively proposed for measuring diffused and specular surfaces, respectively, and the related technologies are gradually maturing [7,8]. However, with the development of advanced manufacturing technology, many components having both diffused and specular surfaces have appeared in industrial and civil fields. For such objects, although one of the above methods can be used to obtain 3D data, the surface reflective properties must first be modified by spraying a coating material [9]. Moreover, changing the surface reflection properties in this way is not feasible for fast and precise measurement.

In recent years, a few researchers have devoted efforts to measuring diffused/specular surfaces. Huang et al. studied and compared the sensitivity and precision of the fringe projection (FP) and fringe reflection (FR) techniques when measuring partially diffuse and partially specular surfaces. They combined the two techniques: the height is first estimated using FP and is then used as an input to FR for slope calculation to realize mirror-like surface measurement [10]. Sandner presented a combined geometric-optical phase measuring technique, consisting of fringe projection and PMD, to measure such objects. With a measurement system using two cameras, a projector and a thin film transistor (TFT) monitor, surfaces with partly specular or locally varying reflective behavior were reconstructed. However, the presented system cannot measure the tested objects in a single acquisition, because the data must be obtained by the FP subsystem first to provide the initial height and gradient for the PMD subsystem [11]. Yue et al. proposed a 3D measurement system combining FP and reflective structured illumination to recover the shape of objects with different reflectivities by using a compound gradient iteration [12]. Yi combined the FP and FR techniques and, considering the sensitivity of FP to height change and of FR to gradient change, proposed a composite iterative algorithm to realize the shape reconstruction and defect detection of low-reflectivity mobile phone shells [13]. All the above methods are only suitable for measuring continuous diffused/specular surfaces because of the gradient integration procedure. Moreover, the projector and monitor work in a time-sharing manner, which lowers the measurement efficiency.

This paper presents a novel method to measure the 3D shape of diffused/specular objects having isolated and/or discontinuous surfaces by combining FPP and direct PMD [14], without any surface pretreatment. A mathematical model is derived to directly relate the depth to the absolute phase data. After the system parameters in the mathematical model are calibrated, the shape of the tested objects can be reconstructed in the same coordinate system from the captured deformed fringes. A digital light processing (DLP) projector and two liquid crystal display (LCD) screens simultaneously project and display fringe patterns through the red, green and blue channels, and the deformed fringe patterns are captured by a color camera from a different viewpoint to improve the measurement efficiency.

The following section introduces the measurement principle of the proposed method. The camera's focus position is analyzed and the system calibration method is given in Section 3. Section 4 shows the measurement system and experimental results. Error sources of the system are analyzed in Section 5. Finally, Section 6 concludes the paper.

2. Principle of diffused/specular surfaces measurement

2.1 Phase shifting

The phase-shifting fringe patterns captured by the camera can be represented as

$${I_j}(u,v) = A(u,v) + B(u,v)\cos (\phi (u,v) + \frac{{2\pi (j - 1)}}{N})$$
where (u, v) is the pixel coordinate, N is the number of phase-shifting steps, j = 1, 2, ⋯, N, Ij(u, v) is the fringe intensity of the jth phase-shifting step, ϕ(u, v) is the phase to be determined, A(u, v) is the background intensity and B(u, v) is the intensity modulation coefficient.

There are three unknown parameters in Eq. (1). When N ≥ 3, ϕ(u, v), A(u, v) and B(u, v) can be calculated by the least-squares method (LSM).

$$\phi ({u,v} )={-} \arctan \left( {\frac{{\sum\limits_{j = 1}^N {\sin (\frac{{2\pi (j - 1)}}{N}){I_j}(u,v)} }}{{\sum\limits_{j = 1}^N {\cos (\frac{{2\pi (j - 1)}}{N}){I_j}(u,v)} }}} \right)$$
$$A({u,v} )= \frac{{\sum\limits_{j = 1}^N {{I_j}(u,v)} }}{N}$$
$$B(u,v) = \frac{2}{N}\sqrt {{{\left[ {\sum\limits_{j = 1}^N {{I_j}(u,v)\sin (\frac{{2\pi (j - 1)}}{N})} } \right]}^2} + {{\left[ {\sum\limits_{j = 1}^N {{I_j}(u,v)\cos (\frac{{2\pi (j - 1)}}{N})} } \right]}^2}} $$
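
For illustration, a minimal sketch (not the authors' code) of the N-step least-squares demodulation in Eqs. (2)–(4) is given below. The array `images` is assumed to be a stack of N captured fringe images; all names are illustrative.

    import numpy as np

    def phase_shift(images):
        """Least-squares N-step phase-shifting demodulation following Eqs. (2)-(4)."""
        N = images.shape[0]
        deltas = 2 * np.pi * np.arange(N) / N              # phase shifts 2*pi*(j-1)/N
        s = np.tensordot(np.sin(deltas), images, axes=1)   # sum_j sin(delta_j) * I_j
        c = np.tensordot(np.cos(deltas), images, axes=1)   # sum_j cos(delta_j) * I_j
        phi = -np.arctan2(s, c)                            # wrapped phase, Eq. (2)
        A = images.mean(axis=0)                            # background intensity, Eq. (3)
        B = (2.0 / N) * np.sqrt(s**2 + c**2)               # modulation, Eq. (4)
        return phi, A, B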

2.2 Diffused/specular surface measurement

Figure 1 shows the schematic setup for measuring diffused/specular surfaces. It includes a DLP projector, two LCD screens (LCD1 and LCD2), a color charge coupled device (CCD) camera and the measured diffused/specular surface. The screens LCD1 and LCD2 are parallel, and a certain angle between the camera optical axis and the normal vector of the screens is required. In this configuration, the CCD camera and the DLP projector make up the FP subsystem for measuring the diffused parts, while the CCD camera and the two LCD screens form the FR (deflectometry) subsystem for specular surface measurement.

Fig. 1. Measurement diagram of the proposed method.

When parallel light is incident on a diffused surface, it is reflected irregularly and the scattered light can be observed from any direction, whereas a specular surface reflects the incident light in a certain direction and the reflected rays can only be observed along that reflection direction. The DLP projector and the two LCD screens simultaneously project and display green, red and blue phase-shifting sinusoidal fringe patterns onto the measured surface. Since diffused and specular surfaces reflect the incident light differently, the diffused parts scatter the projected fringe patterns and the specular parts reflect the displayed fringe patterns. The deformed color fringe patterns reflected by the respective parts are then captured by the red, green and blue channels of the color camera from a different viewpoint. Finally, the absolute phase information is calculated from the captured fringe patterns to reconstruct the 3D shape after system calibration. In Fig. 1, the position of the DLP projector can be arranged flexibly as long as it satisfies the principle of triangulation.
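
As a small illustration of this acquisition scheme, the sketch below separates one captured color frame into the three fringe channels; the assignment of red and blue to LCD1 and LCD2 is an assumption for illustration only, since the paper does not state which screen uses which color.

    import numpy as np

    def split_channels(rgb_frame):
        """Split an (H, W, 3) RGB frame into the three simultaneously acquired fringe images."""
        red, green, blue = rgb_frame[..., 0], rgb_frame[..., 1], rgb_frame[..., 2]
        return {"fpp": green,   # DLP-projected fringes scattered by the diffused parts
                "lcd1": red,    # LCD fringes reflected by the specular parts (assumed red)
                "lcd2": blue}   # LCD fringes reflected by the specular parts (assumed blue)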

2.3 Mathematical model

The mathematical model of the proposed method is shown in Fig. 2. A diffused/specular reference plane, parallel to LCD2, is added to derive the geometric relationship between the absolute phase and the tested depth data. LCD1 and LCD2 are placed with an angle of β between them, and a beam splitter (BS) is placed at β/2 relative to LCD1 so that the virtual image LCD1′ of LCD1 is parallel to LCD2. The antireflection film of the BS faces LCD1, so occlusion of LCD2 by LCD1 is avoided.

Fig. 2. Mathematical model of the proposed method.

In Fig. 2, the distance between LCD1′ and LCD2 is Δd, and the distance between LCD1′ and the reference plane is d. Assuming the imaging system is a pinhole projection, two light rays r1 and r2 from the two LCD displays intersect point N on the reference mirror and point M on the tested surface, respectively, and both are reflected into the same camera pixel along the reflected ray r0. The incidence angles of r1 and r2 at their respective points are θ and θ + α, where α is the angle between the normal vectors of points N and M on their own surfaces. The reference ray r1 passes a point on LCD1 whose phase is φr1 and a point on LCD2 whose phase is φr2; the measurement ray r2 passes a point on LCD1 whose phase is φm1 and a point on LCD2 whose phase is φm2. h is the distance to be calculated. Ignoring the thickness of the BS, the following equations can be deduced from the geometric relationship in Fig. 2.

$$({\varphi _{r2}} - {\varphi _{r1}}^{\prime}) \ast q/2\pi = \Delta d \ast \tan (\theta )$$
$$({\varphi _{m2}} - {\varphi _{m1}}^{\prime}) \ast q\textrm{/2}\pi = \Delta d \ast \tan (\theta + 2\alpha )$$
$$({\varphi _{m1}} - {\varphi _{r1}}^{\prime}) \ast q/2\pi = \Delta l$$
$$(d\textrm{ + }h) \ast \tan (\theta ) + \Delta l = (d - h) \ast \tan (\theta \textrm{ + }2\alpha )$$
$${\varphi _{r1}}^{\prime} = {\varphi _{r1}}$$
$${\varphi _{m1}}^{\prime} = {\varphi _{m1}}$$
where q is the spatial length of one fringe period (in mm) and Δl is the distance between the points having phases φr1 and φm1.

In the mathematical model in Fig. 2, the DLP projector projects green fringe patterns onto the tested objects, and the fringe patterns are distorted according to the shape of the measured object. The relationship between the 3D data of the surface and the phase at each pixel (u, v) can be built by using polynomial functions [15]:

$$h(u,v) = \sum\limits_{i = 0}^n {{a_i}(u,v)} {(\psi (u,v) - \varphi (u,v))^i}$$
where ψ(u, v) is the absolute phase of the distorted pattern, φ(u, v) is the reference phase obtained when the calibration board is at the reference position, h is the distance relative to the reference plane, ai(u, v) are the system parameters, and n is the order of the polynomial. For simplicity, (u, v) is omitted in the following text.

Combining Eqs. (5)–(11), the depth of the diffused/specular surface can be expressed as:

$$h\textrm{ = }\left\{ \begin{array}{ll} \frac{{d \cdot [({\varphi_{\textrm{m}2}} - {\varphi_{\textrm{m}1}}) - ({\varphi_{\textrm{r}2}} - {\varphi_{\textrm{r}1}})] - \Delta d \cdot ({\varphi_{\textrm{m}1}} - {\varphi_{\textrm{r}1}})}}{{({\varphi_{\textrm{r}2}} - {\varphi_{\textrm{r}1}}) + ({\varphi_{\textrm{m}2}} - {\varphi_{\textrm{m}1}})}} &(\textrm{specular surface}) \\ \sum\limits_{i = 0}^n {{a_i}} {(\psi - \varphi )^i} &(\textrm{diffused surface}) \end{array} \right.$$
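
A minimal per-pixel sketch of Eq. (12) is given below, assuming all inputs are absolute-phase maps of the same size, d and Δd are the calibrated distances in mm, `a` holds the polynomial coefficients of the diffused branch, and `specular_mask` selects the pixels whose phase comes from the LCD screens; the names are illustrative.

    import numpy as np

    def reconstruct_depth(phi_m1, phi_m2, phi_r1, phi_r2, psi, phi_ref,
                          d, delta_d, a, specular_mask):
        # Specular branch of Eq. (12)
        num = d * ((phi_m2 - phi_m1) - (phi_r2 - phi_r1)) - delta_d * (phi_m1 - phi_r1)
        den = (phi_r2 - phi_r1) + (phi_m2 - phi_m1)
        h_spec = num / den
        # Diffused branch of Eq. (12): h = sum_i a_i * (psi - phi_ref)^i
        dphi = psi - phi_ref
        h_diff = sum(a_i * dphi**i for i, a_i in enumerate(a))
        return np.where(specular_mask, h_spec, h_diff)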

3. System calibration

System calibration is an important step in diffused/specular surface measurement to establish the relationship between the phase map and the 3D data, and the calibration accuracy of the system parameters affects the measurement accuracy. It can be seen from Eq. (12) that the 3D shape data of the measured objects can only be obtained after d, Δd and ai have been calibrated. A composite plane calibration board having both diffused and specular areas has been designed to calibrate the system parameters, as shown in Fig. 3. Ring markers with known spacing between adjacent markers are evenly distributed on the composite surface.

Fig. 3. Composite plane calibration board.

3.1 Analysis of camera focus position

Assume OcA is the optical axis of the camera, as illustrated in Fig. 4. Because diffused and specular surfaces reflect the incident light differently, when the DLP projector projects structured light onto the surface and the two LCD screens display fringe patterns, the CCD camera images the diffused point A directly, while it images the two virtual points A1″ and A2′, which are the images of A1′ and A2 mirrored by the specular part of the reference plane.

Fig. 4. Analysis of camera focus position.

Therefore, the depth of field (DOF) of the CCD camera must cover at least the distance AA2′ in order to image the three points A, A1″ and A2′ clearly. In practice, the DOF of most camera lenses cannot meet such a large-range requirement. Because the circle pattern is sensitive to defocusing, the imaging plane of the CCD camera is set to the reference position, and the phase target displayed on the two LCD screens is used when calibrating d and Δd.

3.2 Calibration of a

Expanding Eq. (12), there are (n + 1) unknown parameters in the FPP subsystem, so at least (n + 1) known relative heights and corresponding phase differences are required to solve for these parameters.

An accurate translation stage is used to assist the system calibration. The composite calibration board is fixed vertically on the stage and moved to multiple known positions within the common DOF of the CCD camera and the DLP projector. At each position, green fringe patterns satisfying the phase-shifting and optimum three-fringe selection methods [16] are cast onto the board by the DLP projector, and the deformed fringe patterns on the board are captured by the CCD camera. The multiple-step phase-shifting and optimum three-fringe selection methods are then used to calculate the wrapped and absolute phase maps, respectively.
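
For reference, the sketch below shows one common hierarchical implementation of temporal unwrapping with three fringe numbers (e.g. 81, 80 and 72, as used later in Section 4.2.2). It follows the usual heterodyne idea behind the optimum three-fringe selection method; the exact procedure of Ref. [16] may differ in detail, and wrapped phases are assumed to lie in [0, 2π).

    import numpy as np

    def wrap0(p):
        """Wrap a phase map into [0, 2*pi)."""
        return p % (2 * np.pi)

    def unwrap_with(coarse_abs, fine_wrapped, ratio):
        """Unwrap a fine wrapped phase using an absolute coarse phase.
        ratio = fine fringe count / coarse fringe count."""
        k = np.round((ratio * coarse_abs - fine_wrapped) / (2 * np.pi))
        return fine_wrapped + 2 * np.pi * k

    def absolute_phase(phi81, phi80, phi72):
        phi81, phi80, phi72 = wrap0(phi81), wrap0(phi80), wrap0(phi72)
        beat_1 = wrap0(phi81 - phi80)                  # 81 - 80 = 1 fringe: already absolute
        beat_9 = wrap0(phi81 - phi72)                  # 81 - 72 = 9 fringes
        beat_9_abs = unwrap_with(beat_1, beat_9, 9.0)
        return unwrap_with(beat_9_abs, phi81, 81.0 / 9.0)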

The reference plane is one of the calibration positions, so at least (n + 2) positions of the calibration board should be provided. The system parameters ai (i = 0, 1, ⋯, n) can then be solved by using the following equations.

$$\left\{ \begin{array}{l} {h_1} - {h_r} = {a_\textrm{0}}\textrm{ + }{a_1}({\varphi_\textrm{1}} - {\varphi_r}) + {a_2}{({\varphi_\textrm{1}} - {\varphi_r})^2} + \cdots + {a_n}{({\varphi_\textrm{1}} - {\varphi_r})^n}\\ \cdots \cdots \\ {h_{r - 1}} - {h_r} = {a_\textrm{0}}\textrm{ + }{a_1}({\varphi_{r - 1}} - {\varphi_r}) + {a_2}{({\varphi_{r - 1}} - {\varphi_r})^2} + \cdots + {a_n}{({\varphi_{r - 1}} - {\varphi_r})^n}\\ {h_{r + 1}} - {h_r} = {a_\textrm{0}}\textrm{ + }{a_1}({\varphi_{r + 1}} - {\varphi_r}) + {a_2}{({\varphi_{r + 1}} - {\varphi_r})^2} + \cdots + {a_n}{({\varphi_{r + 1}} - {\varphi_r})^n}\\ \cdots \cdots \\ {h_{ \ge n + 2}} - {h_r} = {a_\textrm{0}}\textrm{ + }{a_1}({\varphi_{ \ge n + 2}} - {\varphi_r}) + {a_2}{({\varphi_{ \ge n + 2}} - {\varphi_r})^2} + \cdots + {a_n}{({\varphi_{ \ge n + 2}} - {\varphi_r})^n} \end{array} \right.$$
where hr is the height of the reference plane and is usually set to zero, h1, h2 ⋯ hn+2 are heights relative to the reference plane, φr is the absolute phase on the reference plane and φ1, φ2, ⋯ φr−1, φr+1, ⋯ φn+2 are the corresponding absolute phases of the board at the different heights.
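
A minimal sketch of the per-pixel least-squares solution of Eq. (13) follows. Here `heights` holds the known stage positions relative to the reference plane (at least n + 1 of them once the reference plane itself is excluded), `phases` is the corresponding stack of absolute-phase maps and `phase_ref` is the reference-plane phase; the loop over pixels is kept for clarity rather than speed, and all names are illustrative.

    import numpy as np

    def calibrate_polynomial(heights, phases, phase_ref, n=5):
        """Fit h = sum_i a_i * (phi - phi_ref)^i independently at every pixel, Eq. (13)."""
        K, H, W = phases.shape
        dphi = (phases - phase_ref[None]).reshape(K, -1)       # (K, H*W) phase differences
        h = np.asarray(heights, dtype=float)                   # known relative heights
        coeffs = np.empty((n + 1, H * W))
        for p in range(H * W):
            V = np.vander(dphi[:, p], n + 1, increasing=True)  # [1, x, x^2, ..., x^n]
            coeffs[:, p] = np.linalg.lstsq(V, h, rcond=None)[0]
        return coeffs.reshape(n + 1, H, W)                     # a_i(u, v), i = 0..n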

3.3 Calibration of d and Δd

When the calibration board is located at the reference position, the virtual images LCD1″ and LCD2′ of the two LCD screens formed by the specular part of the board can be seen and captured by the CCD camera. The phase target displayed on the two LCD screens determines their orientations in the camera coordinate system accurately, even though they are outside the DOF of the CCD camera. Based on the obtained orientations of the calibration board and the two LCD screens, the system parameters d and Δd are calibrated. Figure 5 shows a flow chart of the specific calibration steps.

Fig. 5. Flow chart of the calibration of system parameters d and Δd.

Step 1: The internal parameters of the camera are calibrated [17].

Step 2: The calibration board, which has black rings of known spacing and size on its surface, is captured by the CCD camera at the reference-plane position. After edge detection, the centers of all the markers are determined by ellipse fitting. The extrinsic parameters [Rf, Tf] relating the reference plane to the camera are obtained by using the world coordinates of the marker points and the intrinsic parameters of the camera.

Step 3: LCD1″ and LCD2′ may not be within the DOF of the camera, as shown in Fig. 6. The phase of the fringe patterns is not sensitive to defocusing and is used to determine the orientations of the two screens. Orthogonal fringe patterns are displayed on LCD1 and LCD2 to supply two-dimensional feature points for obtaining the extrinsic parameters [R1″, T1″] and [R2′, T2′] relating LCD1″ and LCD2′ to the camera, respectively.

Fig. 6. Diagram of the calibration of the DPMD system.

Step 4: R1″ and R2′ are compared with Rf. If R1″ and R2′ are equal to Rf, LCD1 and LCD2 are parallel to the reference plane. Otherwise, the orientations of LCD1 and LCD2 are adjusted and Step 3 is repeated until R1″ and R2′ are approximately equal to Rf.

Step 5: Pre-distorted fringe patterns for LCD1 and LCD2 are generated with a software-based method [18] to compensate for the remaining non-parallelism of LCD1 and LCD2 with respect to the reference plane. The extrinsic parameters [R1″, T1″] and [R2′, T2′] are then recalculated.

Step 6: The system parameters d and Δd are calculated by using the extrinsic parameters obtained in Step 2 and Step 5.

$$\Delta \textrm{d} = {\textbf{R}_\textrm{f}}^{ - 1}{({\textbf{T}_2}^{\prime} - {\textbf{T}_1}^{\prime\prime})^T}$$
$$\textrm{d} = {\textbf{R}_\textrm{f}}^{ - 1}{({\textbf{T}_f} - {\textbf{T}_1}^{\prime\prime})^T}$$
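
The sketch below evaluates Eqs. (14) and (15) from the extrinsic parameters obtained in Step 2 and Step 5. Taking the third component of the rotated translation difference as the signed distance assumes that the screens are parallel to the reference plane (Steps 4 and 5) and that the plane normal corresponds to the z axis after rotation; these conventions are assumptions made for illustration.

    import numpy as np

    def screen_distances(R_f, T_f, T1_pp, T2_p):
        """Eqs. (14)-(15): distances d and delta_d from the calibrated extrinsics."""
        delta_d_vec = R_f.T @ (T2_p - T1_pp)   # Eq. (14); R_f^(-1) = R_f^T for a rotation matrix
        d_vec = R_f.T @ (T_f - T1_pp)          # Eq. (15)
        return abs(d_vec[2]), abs(delta_d_vec[2])   # (d, delta_d) along the plane normal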

4. Experimental results and analysis

4.1 Setup of the hardware system

A hardware system has been set up to test the proposed 3D shape measurement method for diffused/specular surfaces. The system consists of a camera, a projector, a beam splitter and two LCD screens, as illustrated in Fig. 7. The CCD camera is an ECO445CVGE industrial camera with a resolution of 964×1296, fitted with an 8 mm Computar lens. The projector is a Texas Instruments LightCrafter 4500 with a resolution of 912×1140. The LCD screens are SHARP LQ101R1JX02 displays with a resolution of 1600×2560 and a pixel size of 84.75 µm × 84.75 µm. An electronic translation stage from Daheng Optics (model GCD0401M, moving accuracy 1 µm) is used to assist the system calibration. A composite calibration board was manufactured by pasting a 0.1 mm thick holographic film onto a flat mirror.

Fig. 7. Measurement system of the composite surface.

4.2 Calibration results

4.2.1 Camera calibration

The CCD camera was calibrated by using an 11×13 checkerboard with a 6 mm square side length and 1 µm precision. Twenty-four orientations, nearly symmetrical about the camera optical axis within the DOF of the camera, were captured to calculate the intrinsic parameters and distortion coefficients. The mean reprojection errors along the x and y axes are 0.05333 and 0.0586 pixels, respectively.

4.2.2 System calibration

To reduce the effect of the nonlinear response and random noise of the DLP projector, green fringe patterns with seven-step (N = 7) phase shifting and fringe numbers of 81, 80 and 72 are projected by the DLP projector and captured by the green channel of the color camera; one of them is illustrated in Fig. 8(a). The images are rectified to remove lens distortion using the calibrated distortion parameters. Since the calibration board has circle markers and a specular area, there are no fringes at the pixels corresponding to the specular area, and the fringe intensities at the circle markers and their neighboring pixels are contaminated, as illustrated in Fig. 8(b). To obtain more valid and accurate calibration results, the phase information of the specular area, the ring positions and their surrounding pixels should be recalculated before computing the system parameters ai.

Fig. 8. One reference fringe and the original absolute phase map. (a) captured reference fringe; (b) original absolute phase.

Because B(u, v) in Eq. (1) is related to the reflection characteristics of the measured surface, a threshold THB can be set to identify invalid phase values: if B(u, v) < THB, ϕ at the corresponding pixel (u, v) is set to NaN; otherwise it is left unchanged. The absolute phase was then demodulated with the optimum three-fringe selection method, and the complete phase map was recovered by polynomial fitting.
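
The sketch below illustrates this modulation-based masking and one simple way to refill the missing reference phase; the paper only states that polynomial fitting is used, so the row-wise fit here is an assumption for illustration.

    import numpy as np

    def mask_and_refit(phase, B, th_b=10.0, deg=3):
        """Invalidate low-modulation pixels and refill them by row-wise polynomial fitting."""
        phase = phase.copy()
        phase[B < th_b] = np.nan                         # B(u,v) < TH_B -> invalid phase
        cols = np.arange(phase.shape[1])
        for r in range(phase.shape[0]):
            valid = ~np.isnan(phase[r])
            if valid.sum() > deg:
                p = np.polyfit(cols[valid], phase[r, valid], deg)
                phase[r, ~valid] = np.polyval(p, cols[~valid])
        return phase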

The relationship between the 3D data of the diffused surface and the phase can be represented by a quintic (n = 5) polynomial, so at least 7 calibration positions should be provided. In the practical calibration, the translation stage was moved to 16 positions with a step distance of 5 mm. Figure 9(a) shows the pretreated phase when THB = 10; the invalid phases in Fig. 8(b) have been set to NaN. Figure 9(b) is the phase map after polynomial fitting. The contour plot of row 480 shows that the complete reference phase is recovered.

Fig. 9. One reference absolute phase map. (a) absolute phase when THB = 10, and contour plot of row 480; (b) absolute phase after polynomial fitting, and contour plot of row 480.

The calibration plate was then moved to four known distances relative to the reference plane to verify the calibration accuracy, as listed in Table 1. The maximum values of the mean absolute error and the standard error are 0.018 mm and 0.016 mm, respectively.

Table 1. Accuracy evaluation of the plate depth data (unit: mm)

Different colors have different refractive indices in the same medium. To account for the refraction error of the BS and the chromatic error of the camera lens, illustrated in Fig. 10, LCD1 and LCD2 display red orthogonal fringe patterns to calibrate this kind of error. The fringe sequences in the horizontal and vertical directions are [36 35 30] and [49 48 42] with the seven-step phase-shifting algorithm, and feature points are placed every 150 pixels along the two directions of the LCD screens. The final normal vectors of the reference plane, LCD2′ and LCD1″ in the camera coordinate system are [−0.5850; 0.0106; −0.8110], [−0.5850; 0.0110; 0.8109] and [−0.5849; 0.0111; 0.8110], respectively. The final Euler angles of the reference plane, LCD1″ and LCD2′ are listed in Table 2. The maximum error is 0.0067 degree, so LCD1 and LCD2 are almost parallel to the reference plane. The calibrated values of the system parameters d and Δd are 137.859 mm and 23.191 mm, respectively.

Fig. 10. Error from refraction. (a) refraction error of the BS; (b) chromatic error of the camera lens.

Table 2. Euler angles of the reference plane, LCD1″ and LCD2′ (unit: degree)

4.3 Measurement results

To verify the proposed method, three diffused/specular objects were measured using the calibrated system, as illustrated in Fig. 11: an artificial discontinuous fan-shaped standard step, a concave lens fixed on a lens holder and a bottle cap. The heights between adjacent planes of the standard step were measured by a ZEISS coordinate measuring machine to serve as the ground truth. The lens holder has a diffused surface, while the concave lens has a specular surface. The artificial flower design in the middle of the cap is diffused and the remaining part is specular.

Fig. 11. Three diffused/specular surface samples. (a) artificial fan-shaped standard step; (b) a concave lens fixed on a lens holder; (c) a bottle cap.

To reduce the nonlinear response and random noise of the DLP projector and the LCD screens, the DLP projector and the two LCD screens simultaneously project and display green, blue and red seven-step phase-shifting fringe patterns. The fringe patterns are deformed by the shape of the tested objects and recorded by the color CCD camera. Figure 12 shows one of the captured deformed fringe patterns for each of the three measured diffused/specular objects. The absolute phase information is demodulated by using the multiple-step phase-shifting algorithm and the optimum three-fringe selection method, as shown in Fig. 13.

Fig. 12. Captured diffused/specular deformed fringe maps. (a) artificial fan-shaped standard step; (b) concave lens fixed on a lens holder; (c) bottle cap.

Fig. 13. Absolute phase maps. (a) artificial fan-shaped standard step; (b) concave lens fixed on a lens holder; (c) bottle cap.

Figure 14 shows the reconstructed 3D shapes of the three tested objects having discontinuous diffused/specular surfaces. To estimate the measurement accuracy, the heights between adjacent planes of the reconstructed artificial fan-shaped standard step were calculated, as listed in Table 3. The maximum absolute error is 0.041 mm.
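
One possible way to extract such step heights from the reconstructed depth map is sketched below: a plane is fitted to each (manually selected) step region and the offsets between adjacent planes are compared. The region masks and the plane-fitting choice are assumptions made for illustration; the paper does not state how the reported heights were computed.

    import numpy as np

    def plane_height(h_map, mask):
        """Least-squares plane h = a*u + b*v + c over the masked (valid) pixels,
        evaluated at the region centroid."""
        v_idx, u_idx = np.nonzero(mask)
        A = np.column_stack([u_idx, v_idx, np.ones(u_idx.size)])
        a, b, c = np.linalg.lstsq(A, h_map[v_idx, u_idx], rcond=None)[0]
        return a * u_idx.mean() + b * v_idx.mean() + c

    def step_height(h_map, mask_lower, mask_upper):
        return plane_height(h_map, mask_upper) - plane_height(h_map, mask_lower)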

Fig. 14. 3D shape of the tested diffused/specular surfaces. (a) artificial fan-shaped standard step; (b) concave lens fixed on a lens holder; (c) bottle cap.

Table 3. Measurement results of the artificial fan-shaped standard step (unit: mm)

5. Error sources analysis

5.1 Calibration error

It can be seen from Eq. (12) that the calibration accuracy of the system parameters d and Δd directly influences the measurement accuracy. If d and Δd have errors δd and δΔd, respectively, the measured height becomes

$${h_{{\delta _d}}} = \frac{{(d + {\delta _d}) \cdot [({\varphi _{\textrm{m}2}} - {\varphi _{\textrm{m}1}}) - ({\varphi _{\textrm{r}2}} - {\varphi _{\textrm{r}1}})] - \Delta d \cdot ({\varphi _{\textrm{m}1}} - {\varphi _{\textrm{r}1}})}}{{({\varphi _{\textrm{r}2}} - {\varphi _{\textrm{r}1}}) + ({\varphi _{\textrm{m}2}} - {\varphi _{\textrm{m}1}})}}$$
$${h_{{\delta _{\Delta d}}}} = \frac{{d \cdot [({\varphi _{\textrm{m}2}} - {\varphi _{\textrm{m}1}}) - ({\varphi _{\textrm{r}2}} - {\varphi _{\textrm{r}1}})] - (\Delta d + {\delta _{\Delta d}}) \cdot ({\varphi _{\textrm{m}1}} - {\varphi _{\textrm{r}1}})}}{{({\varphi _{\textrm{m}2}} - {\varphi _{\textrm{m}1}}) + ({\varphi _{\textrm{r}2}} - {\varphi _{\textrm{r}1}})}}$$
Let Δφr = φr2 − φr1 = φr2 − φr1′ and Δφm = φm2 − φm1 = φm2 − φm1′; the measurement errors introduced by δd and δΔd can then be obtained as
$$\overline {{h_{{\delta _d}}}} = {h_{{\delta _d}}} - h = \frac{{{\delta _d} \cdot (\Delta {\varphi _\textrm{m}} - \Delta {\varphi _\textrm{r}})}}{{\Delta {\varphi _\textrm{m}} + \Delta {\varphi _\textrm{r}}}} = \left( {\textrm{1} - 2\frac{{\Delta {\varphi_\textrm{r}}}}{{\Delta {\varphi_\textrm{m}} + \Delta {\varphi_\textrm{r}}}}} \right) \cdot {\delta _d}$$
$$\overline {{h_{{\delta _{\Delta d}}}}} = {h_{{\delta _{\Delta d}}}} - h ={-} {\delta _{\Delta d}} \cdot \left( {\frac{{{\varphi_{\textrm{m}1}} - {\varphi_{\textrm{r}1}}}}{{\Delta {\varphi_\textrm{m}} + \Delta {\varphi_\textrm{r}}}}} \right)$$
From Fig. 2, the signs of Δφr and Δφm are the same, so Δφr/(Δφr + Δφm) ranges from 0 to 1 and $\overline {{h_{{\delta _d}}}} \in ( - {\delta _d},{\delta _d})$. It can be deduced from Eq. (18) that when Δφr is equal to Δφm, $\overline {{h_{{\delta _d}}}}$ is zero. Therefore, when measuring a flat mirror parallel to the reference plane, as shown in Fig. 15(a), the result is immune to δd regardless of how large the error in d is. At the same time, to reduce the effect of δΔd on the measurement results, Δd and the angle between the camera optical axis and the normal vector of the reference plane can be increased to enlarge the denominator in Eq. (19).
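
A small numerical check of Eqs. (18) and (19) is sketched below; the phase differences and calibration errors are illustrative values only.

    import numpy as np

    def height_errors(dphi_r, dphi_m, phi_m1_minus_phi_r1, delta_d_err, delta_dd_err):
        """Height errors caused by calibration errors in d and delta_d, Eqs. (18)-(19)."""
        err_d = (1.0 - 2.0 * dphi_r / (dphi_m + dphi_r)) * delta_d_err       # Eq. (18)
        err_dd = -delta_dd_err * phi_m1_minus_phi_r1 / (dphi_m + dphi_r)     # Eq. (19)
        return err_d, err_dd

    # When dphi_m equals dphi_r (a mirror parallel to the reference plane),
    # the contribution of delta_d vanishes, as noted above.
    print(height_errors(dphi_r=6.0, dphi_m=6.0, phi_m1_minus_phi_r1=2.0,
                        delta_d_err=0.1, delta_dd_err=0.01))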

Fig. 15. Influence analysis of δd and δΔd on measurement. (a) flat mirror or discontinuous objects having parallel planes; (b) objects with varying heights.

However, when measuring objects with varying heights, as shown in Fig. 15(b), Δφr and Δφm take different values at each pixel. This leads to different error distributions in the reconstructed 3D data if d and Δd are not calibrated correctly, so the distance Δd and the angle between the camera optical axis and the normal vector of the reference plane should also be large. In addition, provided that the reflected deformed fringe patterns can still be captured by the camera, the measured objects should be placed facing the camera to decrease the numerator of Eq. (19).

5.2 Optical aberration

To improve the measurement efficiency, the DLP projector and the two LCD screens simultaneously project and display red, blue and green fringe patterns. The different types of reflective surface reflect differently colored fringe patterns, and the deformed fringe patterns are recorded by the color CCD camera, so there is almost no crosstalk in the captured deformed fringe patterns. However, because the red and blue channels of the two LCD screens are used to recover the mirror-reflecting parts of the tested objects, chromatic aberration will cause the phase points φr2 and φr1 on the same incident ray to have different imaging positions, as shown in Fig. 2.

6. Conclusion

In this paper, a 3D shape measurement method combining fringe projection and direct phase measuring deflectometry is proposed for measuring diffused/specular surfaces. Firstly, the measurement principle for diffused/specular objects is analyzed and a mathematical model is derived to establish the direct relationship between phase and height. Secondly, the focus position of the camera is determined in consideration of the different reflection behaviors, and the system calibration methods are given in detail. Then, an experimental system consisting of two LCD screens, a DLP projector and a color CCD camera is set up to measure different diffused/specular objects. The two LCD screens and the DLP projector simultaneously display and project red, blue and green fringe patterns, and all the diffusely and specularly reflected deformed fringe patterns are captured by the three color channels of the color CCD camera for absolute phase calculation. The 3D shape data are obtained in the same coordinate system by using the built mathematical model after the system parameters are calibrated. Finally, an artificial fan-shaped step having multiple specular and diffused surfaces and two other objects have been measured, verifying that the proposed method can measure discontinuous objects with diffused/specular surfaces accurately and effectively. Meanwhile, the effect of the error sources on the measurement results is analyzed and evaluated to obtain more accurate results.

In future work, all the potential error sources will be compensated to improve the measurement accuracy [19,20]. Although the lens distortion of the camera is corrected, the lens distortion of the DLP projector and the refraction deviation of the BS are not yet calibrated and compensated; they also need to be compensated to improve the accuracy further.

Funding

National Key Research and Development Program of China (2017YFF0106404); National Natural Science Foundation of China (51675160).

Disclosures

The authors declare no conflicts of interest.

References

1. B. W. Li, T. Bell, and S. Zhang, "Computer-aided-design-model-assisted absolute three-dimensional shape measurement," Appl. Opt. 56(24), 6770–6776 (2017).

2. X. F. Bai, N. Gao, and Z. H. Zhang, "Person recognition using 3-D palmprint data based on full-field sinusoidal fringe projection," IEEE Trans. Instrum. Meas. 68(9), 3287–3298 (2019).

3. C. F. Guo, X. Y. Lin, A. D. Hu, and J. Zou, "Improved phase-measuring deflectometry for aspheric surfaces test," Appl. Opt. 55(8), 2059–2064 (2016).

4. G. Sansoni, M. Trebeschi, and F. Docchio, "State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation," Sensors 9(1), 568–601 (2009).

5. D. M. Meadows, W. O. Johnson, and J. B. Allen, "Generation of surface contours by Moiré patterns," Appl. Opt. 9(4), 942–947 (1970).

6. M. C. Knauer, J. Kaminski, and G. Hausler, "Phase measuring deflectometry: a new approach to measure specular free-form surfaces," Proc. SPIE 5457, 366–376 (2004).

7. S. Zhang, "High-speed 3D shape measurement with structured light methods: A review," Opt. Laser Eng. 106, 119–131 (2018).

8. Z. H. Zhang, Y. M. Wang, S. J. Huang, Y. Liu, C. X. Chang, F. Gao, and X. Q. Jiang, "Three-dimensional shape measurement of specular objects using phase-measuring deflectometry," Sensors 17(12), 2835 (2017).

9. D. Palousek, M. Omasta, D. Koutny, J. Bednar, T. Koutecky, and F. Dokoupil, "Effect of matte coating on 3D optical measurement accuracy," Opt. Mater. 40, 1–9 (2015).

10. L. Huang and A. Asundi, "Study on three-dimensional shape measurement of partially diffuse and specular reflective surfaces with fringe projection technique and fringe reflection technique," Proc. SPIE 8133, 813304 (2011).

11. M. Sandner, "Optical measurement of partially specular surfaces by combining pattern projection and deflectometry techniques," FernUniversität in Hagen (2015).

12. H. M. Yue, R. Li, Z. P. Pan, H. L. Chen, and Y. Liu, "A three-dimensional measurement system of surface structured light," CN 106197322 A (2016).

13. J. Y. Yi, "Study on mobile phone shell inside and outside surface quality inspection based on fringe projection and fringe reflection technologies," University of Electronic Science and Technology (2016).

14. Y. Liu, S. H. Huang, Z. H. Zhang, N. Gao, F. Gao, and X. Q. Jiang, "Full-field 3D shape measurement of discontinuous specular objects by direct phase measuring deflectometry," Sci. Rep. 7(1), 10293 (2017).

15. Z. H. Zhang, D. Zhang, and X. Peng, "Performance analysis of a 3D full-field sensor based on fringe projection," Opt. Laser Eng. 42(3), 341–353 (2004).

16. C. E. Towers, D. P. Towers, and J. D. C. Jones, "Absolute fringe order calculation using optimized multi-frequency selection in full-field profilometry," Opt. Laser Eng. 43(7), 788–800 (2005).

17. Z. Y. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000).

18. Z. H. Zhang, J. Guo, Y. M. Wang, S. J. Huang, N. Gao, and Y. J. Xiao, "Parallel-alignment and correction of two displays in three-dimensional measuring system of specular surfaces," Guangxue Jingmi Gongcheng 25(2), 289–296 (2017).

19. Z. H. Zhang, C. E. Towers, and D. P. Towers, "Compensating lateral chromatic aberration of a colour fringe projection system for shape metrology," Opt. Laser Eng. 48(2), 159–165 (2010).

20. X. H. Liu, S. J. Huang, Z. H. Zhang, F. Gao, and X. Q. Jiang, "Full-field calibration of color camera chromatic aberration using absolute phase maps," Sensors 17(5), C1 (2017).
