Abstract

Sinusoidal fringe patterns are widely used in optical profilometry; however, the traditional constant-frequency sinusoidal fringe pattern reduces 3D measurement accuracy in defocused regions. To this end, this paper presents a variable-frequency sinusoidal fringe pattern method in which the frequency is optimized according to the measurement depth. The proposed method improves the pixel matching accuracy and thus increases measurement accuracy. This paper theoretically determines the optimal frequency by analyzing the pixel matching error caused by intensity noise in the captured image; presents online frequency optimization along the abscissa and ordinate axes of the sinusoidal fringe patterns; and details the encoding and decoding required to use variable-frequency fringe patterns for 3D profilometry. Simulations and experiments demonstrate that our proposed method can improve the 3D measurement accuracy and increase measurement robustness.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

3D surface measurement techniques have been applied in various fields [1–3]. Among these techniques, fringe projection profilometry has several advantages, such as being non-contact and having high accuracy and flexibility [4]. Phase shifting algorithms are extensively adopted in fringe projection profilometry because of their high resolution, high speed, and full-field inspection. A series of encoded sinusoidal fringe patterns is emitted from a projector, and the deformed patterns are captured by a camera to decode the relative phase from −π to π via the arctangent function. To resolve the 2π discontinuity for 3D surface measurement, a spatial or temporal phase unwrapping method is typically required [5]. Spatial phase unwrapping methods determine the number of 2π multiples to be added at each pixel by analyzing the phase map itself, usually via optimization. A number of reliability-guided spatial phase unwrapping methods have been reviewed in [6]. In general, it is difficult to use a spatial phase unwrapping method for discontinuous 3D shape measurement. Temporal phase unwrapping methods rely on capturing additional patterns in a time sequence to resolve the 2π discontinuity problem. Such methods usually operate pixel by pixel without optimization, and thus they are more robust and versatile. A number of temporal phase unwrapping methods have been developed over the years, including two- or multi-wavelength phase shifting [7, 8], gray coding, and spatial coding.

In a typical digital fringe projection system with a phase-shifting technique, a set of uniform-frequency sinusoidal fringe patterns is usually used, and the N-step phase shifting algorithm is extensively employed because of its ability to perform pixel-by-pixel measurement rapidly and accurately. In such a technique, generating high-quality sinusoidal fringe patterns is critical to achieving high measurement accuracy. However, commercial projectors such as those based on digital light processing (DLP) techniques typically have large apertures to maximize the light efficiency (e.g., to increase output light intensity). Consequently, these projectors have a narrow depth of field and display non-distorted fringe images only on a fronto-parallel screen at the focus depth [9]. By exploiting the defocus character of projectors, binary defocusing techniques have been proposed to generate high-quality sinusoidal fringe patterns [10, 11]. The intensity of the fringe patterns in the projector is iteratively optimized by dithering technology to increase the quality of the captured fringe patterns [12–15]. Although these methods improve the quality of the sinusoidal fringe pattern captured by the camera, pixel-by-pixel or subgroup-by-subgroup optimization based on the depth range of the object to be measured has not been attempted. As a result, the measurement quality may vary across the object surface. This problem is more pronounced when the object is moving, since it is then difficult to generate optimized fringe patterns for each measurement.

Generally, the defocusing effect can be represented by the breadth of the point spread function (PSF), which is approximated by a Gaussian filter [16], and the degree of blur is characterized by the defocus kernel. Different from camera defocus, the defocus degree of a projector is scene-independent [17]; it depends on the distance from the scene point to the projector lens center [18]. Several methods have been proposed for estimating the defocus kernel to recover the depth from the defocus-distance calibration relationship [9, 19, 20]. Ideally, if the captured images reflected from the measurement surface are noise-free and thus correspond to perfect sinusoidal waves, the defocus PSF acts as a standard Gaussian filter that attenuates only the amplitude of the sinusoidal wave. Consequently, the phase decoded by phase shifting will equal the designed phase regardless of the frequency and the number of phase shifting steps. However, commercial cameras and projectors suffer from different kinds of noise, such as noise caused by temperature, exposure time, environment, and digital quantization [21]. Thus, noise is inherent to the images captured during the projection and capture processes, which degrades the phase quality and thus the pixel matching accuracy.

To increase the phase quality for more precise pixel matching, this paper analyzes the pixel matching error with respect to the frequency and amplitude of the sinusoidal fringe pattern captured by the camera. It is found that the higher the frequency and the larger the amplitude of the captured sinusoidal fringe pattern, the lower the pixel matching error. On the one hand, when the frequency becomes higher, the same intensity error along the ordinate axis corresponds to a smaller pixel coordinate movement along the abscissa axis. On the other hand, when the amplitude of the sinusoidal wave is increased, the pixel coordinate movement along the abscissa axis corresponding to the same intensity error along the ordinate axis is also reduced. However, the defocus PSF, generally described by a standard Gaussian filter, is a low-pass filter; that is, a higher frequency results in a captured sinusoidal fringe pattern with a smaller amplitude. When the frequency becomes higher, the amplitude is attenuated, so the decrease of the noise-induced pixel matching error with increasing frequency is not monotonic. Therefore, there exist optimal frequencies of the sinusoidal fringe pattern for the different defocus degrees in different measurement regions at various depths, and the sinusoidal fringe patterns to be projected should be designed with variable frequency to reduce the pixel matching error induced by image noise. In this case, according to the variable measurement depth, the corresponding variable defocus kernels of the projector and camera are calibrated using the methods presented in [22] and [23], respectively, and then the optimal variable-frequency sinusoidal fringe pattern along the abscissa and ordinate axes in the projector is designed to reduce the pixel matching error in the camera. Finally, a phase decoding algorithm for the variable-frequency sinusoidal fringe pattern is developed for 3D measurement.

Section 2 details the entire principle of the optimization method for a depth-driven variable-frequency sinusoidal fringe pattern in fringe projection profilometry. To verify our proposed methods, measurement simulations and experiments are conducted in Section 3. We summarize this research in Section 4.

2. Principle

To improve the pixel matching accuracy for 3D measurement, the principle of the optimized depth-driven variable-frequency sinusoidal fringe pattern in fringe projection profilometry is proposed in this section. The 3D measurement accuracy for phase shifting mainly depends on the pixel matching accuracy, which determines the correspondence between the projector and camera. The pixel matching accuracy relies on the quality of the sinusoidal fringe pattern captured by the camera, which is affected by the inevitable image noise. Therefore, the pixel matching error induced by image noise in the captured defocused image is analyzed, and then the optimal frequency, according to the defocus analysis, is presented in the frequency domain; see section 2.1. Then, taking into account the defocus of the projector and camera, the online frequency optimization for the sinusoidal fringe patterns along abscissa and ordinate axes in the projector is presented in section 2.2 to improve the pixel matching accuracy. For phase shifting, the variable-frequency sinusoidal fringe patterns are designed in section 2.3, and the decoding algorithm is developed in section 2.4.

2.1. Pixel matching accuracy induced by intensity error

Without loss of generality, the sinusoidal fringe patterns along both abscissa and ordinate axes in the projector are used to improve the accuracy and robustness of correspondence matching in this paper.

The error of 3D measurement by phase shifting in fringe projection profilometry comes directly from the pixel matching error between the camera and projector, and the pixel matching error depends upon the decoded phase,

\phi(U_c,V_c)=\arctan\left\{\frac{\sum_{i=1}^{N}I_i^c(U_c,V_c)\sin[(i-1)2\pi/N]}{\sum_{i=1}^{N}I_i^c(U_c,V_c)\cos[(i-1)2\pi/N]}\right\}, \quad (1)
where I_i^c(U_c, V_c) is the gray intensity of the images captured by the camera (the subscript and superscript letter c refers to the camera); N is the total number of phase shifting steps; and i = 1, 2, ⋯, N is the sequence order of the fringe patterns. As mentioned in the introduction, the sinusoidal fringe images captured by the camera are corrupted by different kinds of noise; thus, the captured fringes are not perfectly sinusoidal waves. We define the decoded phase ϕ(U_c, V_c) as a function of the N gray intensities I_1^c(U_c, V_c), I_2^c(U_c, V_c), …, I_N^c(U_c, V_c) of these images as follows:
\phi(U_c,V_c)=f\left(I_1^c(U_c,V_c),I_2^c(U_c,V_c),\ldots,I_N^c(U_c,V_c)\right). \quad (2)
Thus, the decoded phase error induced by image noise can be written as
\Delta\phi(U_c,V_c)=\frac{\partial f}{\partial I_1^c}\Delta I_1^c+\frac{\partial f}{\partial I_2^c}\Delta I_2^c+\cdots+\frac{\partial f}{\partial I_N^c}\Delta I_N^c. \quad (3)
Generally, the captured sinusoidal fringe images with phase ϕ(U_c, V_c) can be expressed as
I_i^c(U_c,V_c)=I_c'+I_c''\cos[\phi(U_c,V_c)+(i-1)2\pi/N], \quad (4)
where I_c' is the average intensity and I_c'' is the amplitude. Therefore, the derivative of f with respect to I_i^c can be obtained as
\frac{\partial f}{\partial I_i^c}=\frac{2\sin[\phi(U_c,V_c)+(i-1)2\pi/N]}{N I_c''}. \quad (5)
The intensities of all captured images are corrupted by noise from various sources. The maximum intensity error ΔI over all pixels is bounded because the radiated energy is limited. To derive the maximum decoded phase error resulting from intensity noise, without loss of generality, the errors at the same pixel coordinates across the entire sequence of phase shifting images are set to ΔI under the same capture conditions, i.e., ΔI_1^c = ΔI_2^c = ⋯ = ΔI_N^c = ΔI. Then, Eq. (5) is substituted into Eq. (3). Hence, the maximum decoded phase error can be expressed as
|\Delta\phi|_{\max}=\sum_{i=1}^{N}\left|\frac{\partial f}{\partial I_i^c}\right||\Delta I|=\frac{2}{N I_c''}\sum_{i=1}^{N}\left|\sin[\phi(U_c,V_c)+(i-1)2\pi/N]\right||\Delta I|=\frac{M_{\sin\phi}}{N I_c''}|\Delta I|, \quad (6)
where the intermediate variable M_{\sin\phi}=2\sum_{i=1}^{N}|\sin[\phi(U_c,V_c)+(i-1)2\pi/N]| is constant for given pixel coordinates (U_c, V_c). Let (U_p, V_p) be the matched pixel coordinates in the projector corresponding to (U_c, V_c). The absolute total phase along the U_p axis in the projector image plane at (U_p, V_p) is \phi=\int_0^{U_p}\omega\,\mathrm{d}U_p, so the derivative of ϕ with respect to U_p is dϕ/dU_p = ω. Thus, the maximum pixel matching error along the U_p axis can be written as
|\Delta U_p|_{\max}=\frac{|\Delta\phi|_{\max}}{\omega}=\frac{M_{\sin\phi}}{N I_c''}\cdot\frac{1}{\omega}|\Delta I|, \quad (7)
where ω is the frequency of the sinusoidal fringes.
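To make Eqs. (1)–(7) concrete, the following sketch (Python with NumPy; the decoder sign convention, pixel values, and frequency are illustrative assumptions, not the paper's implementation) decodes one pixel of an N-step sequence and checks the worst-case phase error against the bound of Eq. (6):

```python
import numpy as np

def decode_phase(I, N):
    """N-step phase-shifting decode of Eq. (1) for one pixel.
    The minus sign on the sine sum is the convention that recovers the
    designed phase for patterns I' + I''*cos(phi + (i-1)*2*pi/N)."""
    theta = 2 * np.pi * np.arange(N) / N
    return np.arctan2(-np.sum(I * np.sin(theta)), np.sum(I * np.cos(theta)))

N, I_avg, I_amp = 8, 128.0, 100.0    # illustrative values
phi_true = 0.7                       # designed phase at this pixel (rad)
theta = 2 * np.pi * np.arange(N) / N
I = I_avg + I_amp * np.cos(phi_true + theta)

# Noise-free decoding recovers the designed phase.
print(decode_phase(I, N))            # ~0.7

# Worst case: each step perturbed by +/- dI with adversarial signs,
# which attains the first-order bound of Eq. (6).
dI = 0.01
signs = np.sign(np.sin(phi_true + theta))
measured = abs(decode_phase(I + signs * dI, N) - phi_true)

M_sinphi = 2 * np.sum(np.abs(np.sin(phi_true + theta)))
bound = M_sinphi * dI / (N * I_amp)  # |delta phi|_max of Eq. (6)
print(measured, bound)               # measured ~ bound for small dI

omega = 0.2                          # fringe frequency (rad/pixel)
print("pixel matching error bound, Eq. (7):", bound / omega)
```

Dividing the phase bound by the fringe frequency, as in the last line, converts it to the pixel matching error of Eq. (7).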

When the projector projects the image onto an out-of-focus plane, the displayed image is blurred due to defocus. The defocus can be modeled as a Gaussian filter as

G(U_p,V_p)=\frac{1}{2\pi\sigma^2}e^{-\frac{U_p^2+V_p^2}{2\sigma^2}}=\left(\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{U_p^2}{2\sigma^2}}\right)\left(\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{V_p^2}{2\sigma^2}}\right)=G(U_p)G(V_p), \quad (8)
where σ is the defocus kernel and G(Up) and G(Vp) are the Gaussian filters for the Up and Vp directions, respectively. Theoretically, the sinusoidal fringe patterns of N-step phase shifting for the Up axis can be presented as
I_i^p(U_p,V_p)_u=I_p'+I_p''\cos[\omega_u U_p+(i-1)2\pi/N], \quad (9)
where I_i^p(U_p,V_p)_u is the gray intensity of the fringe pattern along the U_p direction in the projector, I_p' is the average intensity, I_p'' is the amplitude, and ω_u is the frequency of the fringe pattern along the U_p axis; the subscript and superscript letter p refers to the projector. The output image blurred by defocus can be modeled as the convolution of the defocus Gaussian filter with the designed sinusoidal fringe pattern:
I^c(U_c,V_c)=G(U_p,V_p)\otimes I^p(U_p,V_p). \quad (10)
Equation (10) is transformed into the frequency domain by the Fourier transform as
\mathcal{F}\left(I^c(U_c,V_c)\right)=\mathcal{F}\left(G(U_p,V_p)\right)\mathcal{F}\left(I^p(U_p,V_p)\right)=\mathcal{F}\left(G(U_p)\right)\mathcal{F}\left(G(V_p)\right)\mathcal{F}\left(I^p(U_p,V_p)\right), \quad (11)
where \mathcal{F}(G(U_p))=e^{-\sigma^2\omega_u^2/2} and \mathcal{F}(G(V_p))=e^{-\sigma^2\omega_v^2/2} are the univariate Fourier transforms of G(U_p) and G(V_p), respectively. Thus, the displayed image can be retrieved as
I^c(U_c,V_c)_u=\mathcal{F}^{-1}\left(\mathcal{F}\left(I^c(U_p,V_p)_u\right)\right)=I_p'+\mathcal{F}(G(U_p))\,I_p''\cos[\omega_u U_p+(i-1)2\pi/N]. \quad (12)
This equation indicates that the amplitude of the displayed image I^c(U_c, V_c)_u blurred by defocus is attenuated by \mathcal{F}(G(U_p)), namely
\begin{cases} I_c'=I_p' \\ I_c''=I_p''\,e^{-\frac{\sigma^2\omega_u^2}{2}}, \end{cases} \quad (13)
which decreases as the frequency ω_u increases. The amplitude attenuation for fringe patterns along the V_p axis can be obtained similarly. Thus, the maximum pixel matching error in Eq. (7) can be rewritten as
|\Delta U_p|_{\max}=\left(\frac{M_{\sin\phi}}{N I_p''}|\Delta I|\right)\frac{1}{\omega}e^{\frac{\omega^2\sigma^2}{2}} \quad (\omega=\omega_u). \quad (14)
Equation (14) indicates that the maximum pixel matching error |ΔU_p|_max depends upon the fringe frequency. To improve the measurement accuracy, |ΔU_p|_max must be minimized. Thus, we investigate
h_\sigma(\omega)=\frac{1}{\omega}e^{\frac{\sigma^2\omega^2}{2}}. \quad (15)

Figure 1 gives an intuitive illustration of the influence of the fringe frequency and image noise on pixel matching accuracy. From Eq. (14), Eq. (15), and Fig. 1, we can see that 1) when the frequency is higher, the attenuation term \mathcal{F}(G(U_p)) reduces the amplitude of the displayed fringe pattern, so the phase decoding error induced by image noise increases; and 2) when the fringe frequency is lower, the pixel coordinate error corresponding to the same decoded phase error is larger. Both effects reduce the effective SNR of the sinusoidal fringe pattern; thereby, there exists an optimized frequency that minimizes the pixel matching error.

Fig. 1 Interactive influence of the frequency of sinusoidal fringes and noise on pixel matching accuracy.

The optimal frequency of the designed fringe pattern is obtained as

\omega_{opt}=\frac{1}{\sigma}, \quad (16)
by solving h_\sigma'(\omega)=0, where h_\sigma'(\omega) is the first-order derivative of Eq. (15) with respect to ω. Thus, |ΔU_p|_max is minimized when the frequency is set to ω_opt, since the second-order derivative satisfies h_\sigma''(\omega_{opt})>0.
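This optimum can be checked numerically. The sketch below (Python; the kernel value is an arbitrary example, not taken from the paper) scans h_σ(ω) on a dense grid and confirms the minimizer sits at 1/σ:

```python
import numpy as np

# h_sigma(w) = exp(sigma^2 w^2 / 2) / w, Eq. (15); its minimizer should be
# w_opt = 1/sigma, Eq. (16). sigma here is an arbitrary example value.
def h(omega, sigma):
    return np.exp(0.5 * (sigma * omega) ** 2) / omega

sigma = 4.0
omega = np.linspace(0.01, 1.0, 100000)
omega_opt = omega[np.argmin(h(omega, sigma))]
print(omega_opt, 1.0 / sigma)        # both ~0.25
```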

2.2. Optimized frequency based on depth

A projector is usually manufactured with a large aperture; thus, the displayed image is blurred by defocus when the display plane is at an out-of-focus depth. Following the geometric optics of a projector, shown in Fig. 2(a), when the display plane is out of focus, the image of a point with radius r on the projector image plane is no longer a sharp point but a blurred circular patch with radius R_p(u). The blur kernel σ_p of the projector is proportional to the patch radius [18]:

\sigma_p(u)\propto R_p(u)=\left|D_p\left(\frac{u}{f_p}-1\right)\right|+r\frac{u}{v}\approx a_p u+b_p, \quad (17)
where fp is the focal length of the projector, u is the depth of the display plane with respect to the lens, Dp is the radius of the projector lens, v is the distance from the image plane to the lens, and ap and bp are the fitted slope and intercept.

Fig. 2 Geometric optics model: (a) projector; (b) camera.

Similar to projector defocus, an out-of-focus scene point is also blurred in the camera image plane according to the camera geometric optics model (Fig. 2(b)). Thus, when a point on a measured object surface is out of focus, its image on the camera image plane is not a point but a circular patch with radius Rc(u). The blur kernel of the camera defocus can be presented as

\sigma_c(u)\propto R_c(u)=D_c s\left(\frac{1}{f_c}-\frac{1}{s}-\frac{1}{u}\right)\approx a_c\left(b_c-\frac{1}{u}\right), \quad (18)
where fc is the focal length of the camera, u is the depth of the scene point with respect to the lens, Dc is the radius of the camera lens, s is the distance from the camera image plane to the lens, and ac and bc are the fitted parameters.

The projected pattern is blurred by the projector defocus and camera defocus successively; thus, the total defocus kernel σ can be obtained by the superposition principle of Gaussian convolution as

\sigma=\sqrt{\sigma_p^2+\sigma_c^2}. \quad (19)
From Eq. (17) and Eq. (18), the defocus kernel of either the projector or the camera varies with the object depth.
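Equation (19) is the variance-addition rule for cascaded Gaussian blurs, and it can be verified on discrete kernels. The snippet below (Python; the helper names and kernel half-width are our own choices, not from the paper) convolves two sampled Gaussians and measures the spread of the result:

```python
import numpy as np

# Discrete check of Eq. (19): cascading two Gaussian blurs is a Gaussian
# blur whose variance is the sum of the variances.
def gaussian_kernel(sigma, half=40):
    x = np.arange(-half, half + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def kernel_std(g):
    x = np.arange(len(g), dtype=float) - (len(g) - 1) / 2
    return np.sqrt(np.sum(g * x**2))     # second moment about the center

sigma_p, sigma_c = 2.0, 3.0
total = np.convolve(gaussian_kernel(sigma_p), gaussian_kernel(sigma_c))
print(kernel_std(total), np.hypot(sigma_p, sigma_c))   # ~3.606 each
```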

We use the well-known pinhole model to describe the camera image system. The projection from 3D coordinates (xc, yc, zc) in the camera coordinate system to 2D pixel coordinates (Uc, Vc) in the imaging plane can be presented as

\begin{bmatrix} U_c \\ V_c \\ 1 \end{bmatrix}=\frac{1}{z_c}\begin{bmatrix} f_{cu} & 0 & U_{c0} \\ 0 & f_{cv} & V_{c0} \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}, \quad (20)
where f_cu and f_cv are the focal lengths of the camera along the U-axis and V-axis, and (U_c0, V_c0) is the principal point. The normalized camera pixel coordinates (u_c, v_c) can be calculated using Eq. (20) with the values of f_cu, f_cv, and (U_c0, V_c0) obtained by camera calibration:
\begin{cases} u_c=\dfrac{U_c-U_{c0}}{f_{cu}}=\dfrac{x_c}{z_c} \\ v_c=\dfrac{V_c-V_{c0}}{f_{cv}}=\dfrac{y_c}{z_c}. \end{cases} \quad (21)
The projector can be modeled as a pseudo-camera. Thus, the normalized pixel coordinates of the projector can be obtained as
\begin{cases} u_p=\dfrac{U_p-U_{p0}}{f_{pu}}=\dfrac{x_p}{z_p} \\ v_p=\dfrac{V_p-V_{p0}}{f_{pv}}=\dfrac{y_p}{z_p}, \end{cases} \quad (22)
where (xp, yp, zp) and (Up, Vp) are the 3D coordinates and 2D pixel coordinates, respectively. By virtue of the calibrated transformation matrix between the projector and camera coordinate systems, the 3D coordinate relationship between the camera and projector coordinate systems can be expressed as
[x_p,y_p,z_p]^T=R_{pc}[x_c,y_c,z_c]^T+t_{pc}, \quad (23)
where R_{pc}=[r_{ij}]_{3\times 3} and t_{pc}=[t_1\ t_2\ t_3]^T are the rotation matrix and translation vector from the camera coordinate system to the projector coordinate system, respectively. Then, Eq. (21) and Eq. (22) are substituted into Eq. (23) to yield the relationship between (x_c, y_c, z_c) and u_c, v_c, and u_p:
\begin{cases} z_c=\dfrac{t_1-t_3 u_p}{J u_p-H} \\ x_c=u_c z_c \\ y_c=v_c z_c, \end{cases} \quad (24)
where the two intermediate variables are H = r_{11}u_c + r_{12}v_c + r_{13} and J = r_{31}u_c + r_{32}v_c + r_{33}. According to Eq. (24), the error of z_c can be derived as
\Delta z_c=\frac{t_3 H-t_1 J}{(J u_p-H)^2}\Delta u_p=\frac{t_3 H-t_1 J}{(J u_p-H)^2}\cdot\frac{\Delta U_p}{f_{pu}}. \quad (25)
Thus, the 3D composite coordinate error at (uc, vc) can be obtained as
\Delta(u_c,v_c,u_p)=\sqrt{\Delta x_c^2+\Delta y_c^2+\Delta z_c^2}=\sqrt{u_c^2+v_c^2+1}\,|\Delta z_c|. \quad (26)
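The sensitivity in Eq. (25) can be sanity-checked against a finite difference of Eq. (24). In the sketch below (Python), the rotation, translation, and normalized coordinates are illustrative values only:

```python
import numpy as np

# Depth from the matched projector coordinate u_p, Eq. (24), and its
# sensitivity, Eq. (25), checked against a central finite difference.
a = 0.1                                          # rotation about y (rad)
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0,       1.0, 0.0      ],
              [-np.sin(a), 0.0, np.cos(a)]])
t1, t3 = 100.0, 200.0                            # t2 does not enter z_c
uc, vc = 0.1, -0.05

H = R[0, 0] * uc + R[0, 1] * vc + R[0, 2]
J = R[2, 0] * uc + R[2, 1] * vc + R[2, 2]

def z_c(up):
    return (t1 - t3 * up) / (J * up - H)         # Eq. (24)

up = 0.3
dz_analytic = (t3 * H - t1 * J) / (J * up - H) ** 2   # Eq. (25), per unit u_p
eps = 1e-6
dz_numeric = (z_c(up + eps) - z_c(up - eps)) / (2 * eps)
print(dz_analytic, dz_numeric)                   # should agree closely
```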
The maximum pixel matching error |ΔUp|max can be obtained from Eq. (14) and Eq. (15). Thus, the maximum composite error is expressed as
\Delta_{\max}(u_c,v_c,u_p)=k(u_c,v_c,u_p)\,K_{\sin\phi}\,h_\sigma(\omega)|\Delta I|, \quad (27)
where k(u_c,v_c,u_p)=\sqrt{u_c^2+v_c^2+1}\,\frac{|t_3H-t_1J|}{(Ju_p-H)^2} is a function of (u_c, v_c, u_p) and K_{\sin\phi}=\frac{M_{\sin\phi}}{N I_p'' f_{pu}} is constant for a certain ϕ. As shown in Fig. 3, for the vertical sinusoidal fringes along the U direction, the total errors vary over the interval [V_1, V_2] at a certain horizontal projector coordinate Û_p, because k(u_c, v_c, u_p) varies with the matched camera coordinates (Û_c, V̂_c) corresponding to (Û_p, V̂_p) and σ varies with the depth. The mean value of the maximum composite error from (Û_p, V_1) to (Û_p, V_2) is
\bar{\Delta}(\hat{u}_p,\omega)=K_{\sin\phi}\frac{\sum_{j=1}^{|V_2-V_1|}k_j h_{\sigma_j}(\omega)}{|V_2-V_1|}|\Delta I|=K_{\sin\phi}\,\bar{k}\,\bar{h}_\sigma(\omega)|\Delta I|, \quad (28)
where the mean value is \bar{k}=\left(\sum_{i=1}^{|V_2-V_1|}k_i\right)/|V_2-V_1|, and the equivalent term \bar{h}_\sigma(\omega)=\sum_{j=1}^{|V_2-V_1|}\lambda_j h_{\sigma_j}(\omega) defines an equivalent defocus kernel \bar{\sigma} for the mean error \bar{\Delta}(\hat{u}_p,\omega), with the factors \lambda_j=k_j/\sum_{i=1}^{|V_2-V_1|}k_i and \sum_{j=1}^{|V_2-V_1|}\lambda_j=1. According to Jensen's inequality, the following inequalities hold:
h_{\bar{\sigma}_1}(\omega)=\frac{1}{\omega}\exp\left(\frac{\omega^2}{2}\sum_{i=1}^{|V_2-V_1|}\lambda_i\sigma_i^2\right)\le\bar{h}_\sigma(\omega)\le\frac{1}{\omega}\left(\sum_{i=1}^{|V_2-V_1|}\lambda_i\exp(\sigma_i^2)\right)^{\frac{\omega^2}{2}}=h_{\bar{\sigma}_2}(\omega), \quad (29)
where the lower limit of \bar{\sigma} is \bar{\sigma}_1=\left(\sum_{i=1}^{|V_2-V_1|}\lambda_i\sigma_i^2\right)^{1/2} and the upper limit of \bar{\sigma} is \bar{\sigma}_2=\left(\log\left[\sum_{i=1}^{|V_2-V_1|}\lambda_i\exp(\sigma_i^2)\right]\right)^{1/2}. From Eq. (16), the optimized frequency for vertical fringes at Û_p satisfies
\frac{1}{\bar{\sigma}_2}\le\omega_{opt}(\hat{U}_p)\le\frac{1}{\bar{\sigma}_1}. \quad (30)
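A small numeric check of the Jensen bounds in Eqs. (28)–(30) (Python; the weights λ_j and kernels σ_j along the column are synthetic values, not measured data):

```python
import numpy as np

# Numeric check of the Jensen bounds, Eqs. (28)-(30). Note the upper bound
# uses concavity of t**(w^2/2), which requires w^2 <= 2; this holds for the
# sub-radian-per-pixel fringe frequencies considered here.
rng = np.random.default_rng(0)
k = rng.uniform(1.0, 3.0, size=50)       # k_j values along [V1, V2]
lam = k / k.sum()                        # lambda_j, sums to 1
sig = rng.uniform(3.0, 6.0, size=50)     # sigma_j values along [V1, V2]

sig_bar1 = np.sqrt(np.sum(lam * sig**2))                  # lower limit
sig_bar2 = np.sqrt(np.log(np.sum(lam * np.exp(sig**2))))  # upper limit

def h(omega, s):
    return np.exp(0.5 * (s * omega) ** 2) / omega

omega = 0.3
h_mean = np.sum(lam * h(omega, sig))     # bar{h}_sigma(omega)
print(sig_bar1, sig_bar2)
print(h(omega, sig_bar1) <= h_mean <= h(omega, sig_bar2))   # True
```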

Fig. 3 Principle of fringe projection profilometry.

The aforementioned frequency optimization is for the vertical fringes at a certain pixel coordinate Û_p; the optimized frequency for horizontal fringes at V̂_p can be obtained similarly. Via the analysis of the pixel matching error induced by the intensity error in section 2.1, the optimal frequency for improved accuracy is determined by the defocus kernel at various depths. Thus, an online adaptive variable-frequency sinusoidal fringe pattern in fringe projection profilometry is proposed to improve the accuracy, with the depth as the feedback. The depth can be approximated from a computer-aided design (CAD) model, a coarse measurement, or other methods.

2.3. Variable-frequency shifting fringe pattern design

To alleviate the composite error over the entire measurement area, the frequency of the vertical sinusoidal fringes along the U_p-axis or the horizontal sinusoidal fringes along the V_p-axis should be optimized and varied according to the depth. From Eq. (17), the defocus kernel varies nonlinearly with the depth when measuring a curved surface in the projector coordinate system. However, it is mathematically difficult to wrap and unwrap the phase if the frequency is strictly set to the optimal frequency, which is the reciprocal of the nonlinearly varying defocus kernel. To simplify wrapping and unwrapping, the frequency of the sinusoidal fringe pattern is optimized along only the U_p (or V_p) direction, and the equivalent defocus kernels along U_p (or V_p) are segmented and piecewise linearly fitted within each segment, as shown in Fig. 4.

Fig. 4 Segmentation fitting of the defocus kernel σ.

One solution to approximate the depth is to measure the object surface with a uniform-frequency sinusoidal fringe pattern. Then, the defocus kernel is acquired via the kernel-distance calibration relationship according to Eq. (17) and Eq. (18). The phase coding along the U-axis at Û_p is

\phi(\hat{U}_p)=\int_0^{\hat{U}_p}\omega(U_p)\,\mathrm{d}U_p, \quad (31)
where ω(Up) is the frequency at Up. According to the piecewise linearly fitted σ, the optimal phase at U^p can be obtained as
\phi(\hat{U}_p)=\underbrace{\int_0^{U_{pl}}\frac{1}{\sigma(U_p)}\,\mathrm{d}U_p}_{C_0}+\sum_{j=1}^{m-1}\int_{U_{pl}+(j-1)W_s}^{U_{pl}+jW_s}\frac{1}{\sigma_j}\,\mathrm{d}U_p+\int_{U_{pl}+(m-1)W_s}^{\hat{U}_p}\frac{1}{\sigma_m}\,\mathrm{d}U_p \quad (32)

=\underbrace{C_0+\sum_{j=1}^{m-1}\int_{U_{pl}+(j-1)W_s}^{U_{pl}+jW_s}\frac{\mathrm{d}U_p}{\alpha_j U_p+\beta_j}}_{C_{m-1}}+\int_{U_{pl}+(m-1)W_s}^{\hat{U}_p}\frac{\mathrm{d}U_p}{\alpha_m U_p+\beta_m}=C_{m-1}+\frac{1}{\alpha_m}\log\left(\frac{\alpha_m\hat{U}_p+\beta_m}{\alpha_m[U_{pl}+(m-1)W_s]+\beta_m}\right)\ (\alpha_m\neq 0), \quad (33)
where U_pl is the left limit of the matched pixel values in the projector image plane; m is the order of the segment in which Û_p lies; W_s is the width of a segment; C_{m−1} is the accumulated phase from pixel coordinate 0 to the end of the (m − 1)th segment; and α_j and β_j are the fitted slope and intercept of the jth segment, respectively. If α_m = 0, Eq. (33) can be rewritten as
\phi(\hat{U}_p)=C_{m-1}+\int_{U_{pl}+(m-1)W_s}^{\hat{U}_p}\frac{1}{\beta_m}\,\mathrm{d}U_p=C_{m-1}+\frac{\hat{U}_p-[U_{pl}+(m-1)W_s]}{\beta_m}. \quad (34)
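The piecewise phase coding of Eqs. (32)–(34) can be cross-checked by brute-force integration of 1/σ(U_p). In this sketch (Python), the segment fits (α_j, β_j), the segment width, and U_pl = 0 (so C_0 = 0) are assumed example values:

```python
import numpy as np

# Phase coding of Eqs. (32)-(34): integrate 1/sigma(U_p) over piecewise
# linearly fitted segments and compare the closed form with brute-force
# trapezoidal integration.
Ws = 100.0
alpha = [0.01, 0.0, -0.005]              # fitted slopes, one per segment
beta = [4.0, 5.0, 5.5]                   # fitted intercepts

def seg_phase(a, b, u0, u1):
    """Integral of 1/(a*U + b) from u0 to u1: the log term of Eq. (33),
    or the linear term of Eq. (34) when a == 0."""
    if a == 0.0:
        return (u1 - u0) / b
    return np.log((a * u1 + b) / (a * u0 + b)) / a

def phase(U_hat):
    m = int(U_hat // Ws)                 # zero-based segment index
    C = sum(seg_phase(alpha[j], beta[j], j * Ws, (j + 1) * Ws)
            for j in range(m))           # accumulated phase C_{m-1}
    return C + seg_phase(alpha[m], beta[m], m * Ws, U_hat)

# Brute-force trapezoidal integration of 1/sigma(U_p) as a cross-check.
U_hat = 250.0
u = np.linspace(0.0, U_hat, 500001)
sigma = np.empty_like(u)
for j in range(3):
    sel = (u >= j * Ws) & (u < (j + 1) * Ws)
    sigma[sel] = alpha[j] * u[sel] + beta[j]
phi_numeric = np.sum(0.5 * (1 / sigma[:-1] + 1 / sigma[1:]) * np.diff(u))
print(phase(U_hat), phi_numeric)         # should agree closely
```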
The total number of segments Su is
S_u=\mathrm{round}\left(\frac{U_{pr}-U_{pl}}{W_s}\right), \quad (35)
where U_pr represents the right limit of the matched pixel values in the projector image plane. The optimal phase ϕ(V̂_p) for the horizontal fringe pattern can be determined similarly. When all the phases along the U_p- and V_p-axes are determined via Eq. (33) or Eq. (34), the sinusoidal fringe patterns can be designed as follows:
I_{i,\phi}^p(\hat{U}_p,\hat{V}_p)_u=I_p'+I_p''\cos[\phi(\hat{U}_p)+(i-1)2\pi/N],\quad i=1,2,\ldots,N, \quad (36)
I_{i,\phi}^p(\hat{U}_p,\hat{V}_p)_v=I_p'+I_p''\cos[\phi(\hat{V}_p)+(i-1)2\pi/N],\quad i=1,2,\ldots,N. \quad (37)

2.4. Variable-frequency sinusoidal fringe pattern decoding algorithm

The variable-frequency sinusoidal fringe patterns are designed according to the measured defocus kernel. In the measurement process with the adaptive variable-frequency sinusoidal fringe pattern, the pixel coordinate first matched using the dual uniform-frequency sinusoidal fringe patterns is Ū_p. By projecting the fringe patterns with variable frequency, the phase can be calculated as

\tilde{\phi}=\arctan\left\{\frac{\sum_{i=1}^{N}I_\phi(i)\sin[2\pi(i-1)/N]}{\sum_{i=1}^{N}I_\phi(i)\cos[2\pi(i-1)/N]}\right\}\in[-\pi,\pi), \quad (38)
which is wrapped and thus discontinuous. For the matched pixel calculation, we make the phase monotonic:
\phi=\begin{cases}\tilde{\phi} & \tilde{\phi}\ge 0 \\ \tilde{\phi}+2\pi & \tilde{\phi}<0,\end{cases} \quad (39)
and the order Q(Û_c, V̂_c) of 2π can be estimated from the pixel coordinate obtained by unwrapping the uniform fringes as
Q(\hat{U}_c,\hat{V}_c)=\mathrm{round}\left(\frac{\int_0^{\bar{U}_p}\omega(U_p)\,\mathrm{d}U_p-\phi}{2\pi}\right). \quad (40)
Thus, the unwrapped phase yields
\phi(\hat{U}_p)=\int_0^{\hat{U}_p}\omega(U_p)\,\mathrm{d}U_p=2\pi Q(\hat{U}_c,\hat{V}_c)+\phi. \quad (41)
When calculating the matched pixels, the segment order m is first determined from Ū_p, and then the order Q(Û_c, V̂_c) of 2π is obtained by Eq. (40). Thus, the matched pixels can be acquired by solving Eq. (41), taking Eq. (33) and Eq. (34) into consideration.

Case 1: α_m ≠ 0

\hat{U}_p(\hat{U}_c,\hat{V}_c)=\left[U_{pl}+(m-1)W_s+\frac{\beta_m}{\alpha_m}\right]\exp[\alpha_m(2\pi Q+\phi-C_{m-1})]-\frac{\beta_m}{\alpha_m}; \quad (42)
Case 2: α_m = 0
\hat{U}_p(\hat{U}_c,\hat{V}_c)=\beta_m(2\pi Q+\phi-C_{m-1})+[U_{pl}+(m-1)W_s]. \quad (43)

Using Eq. (42) and Eq. (43), the pixels are rematched by the variable-frequency sinusoidal fringe pattern to improve the pixel matching accuracy.
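A round-trip check of the decoding (Python; the segment parameters and the accumulated phase C_{m−1} are made-up values): the total phase encoded via Eq. (33) or Eq. (34) is wrapped and split into its 2π order as in Eqs. (39)–(40), then inverted back to the projector coordinate by Eq. (42) or Eq. (43):

```python
import numpy as np

# Round trip of the decoding formulas: encode a projector coordinate into
# total phase, wrap it, and recover the coordinate exactly.
Ws, Upl = 100.0, 0.0
m = 2                                    # segment order of U_true
alpha_m, beta_m = 0.008, 4.0             # fitted slope/intercept of segment m
C_m1 = 45.0                              # accumulated phase C_{m-1} (assumed)
seg_left = Upl + (m - 1) * Ws

U_true = 170.0                           # ground-truth projector coordinate
Phi = C_m1 + np.log((alpha_m * U_true + beta_m) /
                    (alpha_m * seg_left + beta_m)) / alpha_m   # Eq. (33)
Q = np.floor(Phi / (2 * np.pi))          # 2*pi order, cf. Eq. (40)
phi = Phi - 2 * np.pi * Q                # wrapped phase in [0, 2*pi)

# Eq. (42): invert the logarithm (case alpha_m != 0)
U_rec = ((seg_left + beta_m / alpha_m)
         * np.exp(alpha_m * (2 * np.pi * Q + phi - C_m1))
         - beta_m / alpha_m)
print(U_true, U_rec)

# Eq. (43): the linear case alpha_m = 0, with beta_m = 5.0
Phi0 = C_m1 + (U_true - seg_left) / 5.0  # Eq. (34)
U_rec0 = 5.0 * (Phi0 - C_m1) + seg_left
print(U_true, U_rec0)
```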

3. Experiment

To verify the performance of the proposed adaptive online feedback measurement method with variable-frequency sinusoidal fringes, a fringe projection profilometry system is developed, as shown in Fig. 5. It is composed of a CMOS camera (JAI GO5000M-USB) with a 25.4 mm focal length lens and a digital light processing (DLP) projector (DLP LightCrafter 4500). The resolutions of the camera and projector are 2560 × 2048 and 1140 × 912, respectively. The measurement distance of the projector is 119 mm. The system is calibrated using the method proposed in [24].

3.1. Projector and camera defocus kernel calibration

For the variable-frequency fringe pattern, the optimal frequency is designed based on the measured defocus kernel. To optimize the frequency online based on the defocus kernel in Eq. (17) and Eq. (18), the parameters ap, bp and ac, bc should be calibrated.

The defocus blur of projection can be represented as the convolution of a PSF with the image to be projected. The degradation caused by projector defocus blur is usually modeled as a 2D Gaussian with standard deviation σ [20]. The defocus kernel varies when the depth of the display plane with respect to the projector changes. The kernel of the PSF is estimated via correlation analysis when the projector is situated at a certain depth [22]. For the projector defocus kernel calibration, the camera is focused on the display plane; the projector defocus calibration result is shown in Fig. 6(a). The image captured by the camera is also blurred by defocus when the scene is not in focus. The camera is fixed on a moving platform and can be moved along the camera optical axis. The method proposed in [23], in which the camera defocus kernel is estimated from a single image, is employed to calibrate the defocus kernel-distance relationship of the camera. The calibration result is shown in Fig. 6(b).

Fig. 5 Layout of the projector kernel calibration.

Fig. 6 Defocus kernel calibration results: (a) calibrated projector defocus kernel-distance relationship; (b) calibrated camera defocus kernel-distance relationship.

The fitted projector defocus kernel-distance relationship is

\sigma_p=0.2486\,z_p-53.2983. \quad (44)
The nonlinear camera defocus kernel-distance relationship is fitted with a cubic polynomial:
\sigma_c=5.353\times 10^{-6}\,z_c^3+0.00541\,z_c^2-1.7816\,z_c+193.4647. \quad (45)

3.2. Simulations

To verify the optimal frequency method based on the pixel matching error induced by image noise, simulations are conducted in several scenarios. A theoretical plane is set perpendicular to the optical axis of the projector at zp = 355 in our simulations. Uniform-frequency sinusoidal fringe patterns are generated and blurred by a defocus Gaussian filter with a known kernel, and then random noise is added to the blurred patterns. The plane is measured by a series of different uniform-frequency sinusoidal fringe patterns, and the root-mean-square (RMS) error of plane fitting is obtained for every frequency. Four Gaussian filters with different defocus kernels and three different levels of random noise are simulated. The RMS error versus frequency for each kernel is plotted in Fig. 7, where the abscissa axis is the frequency and the ordinate axis represents the RMS error of plane fitting.
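The simulation procedure above can be sketched as follows (Python; the resolution, intensities, noise level, and defocus kernel are assumed values rather than the paper's exact settings). The decoded pixel error is compared at a low, an optimal (1/σ), and a high frequency:

```python
import numpy as np

# Monte Carlo sketch: blur N-step sinusoidal patterns with a known Gaussian
# kernel, add noise, decode, and compare pixel errors across frequencies.
rng = np.random.default_rng(1)
N, A, B, sigma, noise = 20, 128.0, 100.0, 5.0, 2.0
x = np.arange(912, dtype=float)
theta = 2 * np.pi * np.arange(N) / N

gx = np.arange(-4 * int(sigma), 4 * int(sigma) + 1, dtype=float)
g = np.exp(-gx**2 / (2 * sigma**2))
g /= g.sum()                                  # discrete defocus kernel

def pixel_rms(omega):
    """RMS pixel matching error of a plane decoded at fringe frequency omega."""
    S = np.zeros_like(x)
    C = np.zeros_like(x)
    for th in theta:
        I = A + B * np.cos(omega * x + th)
        I = np.convolve(I, g, mode="same") + rng.normal(0.0, noise, x.size)
        S += I * np.sin(th)
        C += I * np.cos(th)
    phi = np.arctan2(-S, C)                   # decoded wrapped phase
    err = np.angle(np.exp(1j * (phi - omega * x)))   # wrapped phase error
    err = err[50:-50] / omega                 # interior pixels -> pixels
    return np.sqrt(np.mean(err**2))

errs = {w: pixel_rms(w) for w in (0.05, 1.0 / sigma, 0.8)}
print(errs)      # the frequency 1/sigma should give the smallest RMS error
```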

Fig. 7 Simulation result of optimal frequency design: (a) optimal frequency ω = 1/4 rad/pixel; (b) optimal frequency ω = 1/5 rad/pixel; (c) optimal frequency ω = 1/6 rad/pixel; (d) optimal frequency ω = 1/7 rad/pixel.

Figure 7 reveals that the RMS error is lowest when the frequency is set to ω = 1/σ for a given defocus kernel. It is also found that the optimal frequency remains the same when the magnitude of the random noise is changed.

3.3. Measurement of field objects with variable depth

The proposed depth-driven variable-frequency pattern in fringe projection profilometry was tested by measuring a standard white balance plane placed diagonally in front of the projector, a standard sphere in front of the projector, and a step-like object in front of the projector. The experimental setup is shown in Fig. 8.

Fig. 8 Three different objects to be measured for the evaluation of our proposed method.

First, the standard white balance plane, which is tilted in front of the projector, was measured. The number of steps is N = 20, and the plane is measured by a dual-frequency phase shifting method. The period number nTh of the high-frequency fringes is nTh = 10, 15, 20, 25, 30, and 35 along the Up-axis; the frequency of the fringe patterns along the Vp-axis is set in the same way. As shown in Fig. 9, the Gaussian kernels of the measurement area are estimated by Eq. (44) and Eq. (45) according to the depths zp and zc measured with a uniform-frequency fringe pattern with nTh = 20. Thus, the phase of the variable-frequency fringe pattern can be designed for the Up-axis (Fig. 10(a)) and Vp-axis (Fig. 10(c)). To eliminate edge effects in the plane fitting, the evaluated area is shrunk to 85% of the shared view (SV) between the camera and projector, centered in the view.

Fig. 9 Defocus kernel maps for plane measurement: (a) camera defocus kernel; (b) projector defocus kernel; (c) total defocus kernel of camera and projector.

Fig. 10 Designed phase for plane measurement: (a) designed phase along Up; (b) variable period number along Up; (c) designed phase along Vp; (d) variable period number along Vp.

Then, the plane is measured using uniform-frequency and variable-frequency fringe patterns. The measurement results for uniform-frequency fringe patterns with different frequencies are illustrated in Fig. 11, which reveals that the measurement error increases in the areas of the plane where the degree of defocus is larger. Figures 11(d)–11(f) illustrate that the measured point cloud of the plane is distorted and that the distortion in the more blurred areas (the left areas in Figs. 11(a)–11(c)) is aggravated when the frequency is increased.

Fig. 11 Measured point clouds of the plane with uniform-frequency fringes: (a)–(c) point clouds corresponding to nTh = 10, nTh = 20, and nTh = 30, respectively; (d)–(f) the corresponding measurement errors.

The variable-frequency fringe patterns (Fig. 12(a) and Fig. 12(b)) are generated by Eq. (36) according to the phases designed for the Up-axis and Vp-axis. The measurement results obtained using the variable-frequency fringe pattern are shown in Fig. 12(c) and Fig. 12(d). Compared with the results obtained using the uniform-frequency fringe patterns shown in Fig. 11, the amplitude of the sinusoid-like measurement error wave (represented by the standard deviation (STD) of the fitting error, marked at the bottom of Figs. 11(d)–11(f) and Fig. 12(d)) is noticeably attenuated throughout the measurement area by the variable-frequency fringe pattern. This indicates that the variable frequency can reduce the measurement error fluctuation and improve robustness.

 figure: Fig. 12

Fig. 12 Designed variable-frequency fringe pattern and point cloud of plane measured by variable-frequency fringe pattern: (a)Designed variable-frequency vertical fringe pattern; (b)Designed variable-frequency parallel fringe pattern; (c) Plane point cloud; (d) Measurement errors of plane fitting of the point cloud in (c).

The plane is measured ten times at one position, and the RMS errors of plane fitting are plotted in Fig. 13(a). The same experiment is conducted at two other positions, and the results are plotted in Figs. 13(b) and 13(c).
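
The RMS plane-fitting error used to compare the fringe patterns can be computed with an ordinary least-squares plane fit; a minimal sketch (the SVD-based fit and the noise level are illustrative, not taken from the paper):

```python
import numpy as np

def plane_fit_rms(points):
    """Least-squares plane fit to an (N, 3) point cloud; returns the unit
    normal and the RMS of the signed point-to-plane distances."""
    c = points.mean(axis=0)
    # The right singular vector of the smallest singular value of the
    # centered cloud is the plane normal.
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    d = (points - c) @ n              # signed point-to-plane distances
    return n, float(np.sqrt(np.mean(d**2)))

# Example: noisy samples of the plane z = 0 (0.05 mm synthetic noise).
rng = np.random.default_rng(0)
xy = rng.uniform(-50, 50, size=(1000, 2))
z = rng.normal(0.0, 0.05, size=1000)
n, rms = plane_fit_rms(np.column_stack([xy, z]))
```

The recovered normal is close to the z-axis and the RMS error is close to the injected noise level.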

Fig. 13 Plane measurement comparison between uniform-frequency and variable-frequency fringes: (a) Plane measured ten times at the first position; (b) Plane measured ten times at the second position; (c) Plane measured ten times at the third position.

The mean values of the RMS errors of plane fitting at the three positions, measured with uniform-frequency and variable-frequency fringes, are listed in Table 1, which shows that the variable-frequency method improves the measurement accuracy by at least 5% over the uniform-frequency method. Furthermore, the variable-frequency design method gives the frequency range that can be selected for an arbitrary object with a known coarse depth measurement.

Table 1. Mean values of the RMS error for plane measurement (mm)

When the evaluated area is shrunk to 70% and 55% of the shared view between the camera and projector, the RMS errors of plane fitting for the three plane measurements with uniform frequencies nTh = 10, 15, 20, 25, 30, 35 and with the variable frequency are listed in Table 2.

Table 2. RMS error of plane fitting on shrunk evaluated area (mm)

From Fig. 13, Table 1 and Table 2, the measurement accuracy obtained using the variable-frequency fringe pattern is the highest. As shown in Table 2, when the plane-fitting area to be evaluated is shrunk further toward the view center, the accuracy of the uniform-frequency measurements increases but remains inferior to that obtained by varying the frequency. Therefore, Table 2 reveals that the measurement with the variable-frequency fringe pattern is superior to all measurements with the uniform-frequency fringe pattern in terms of both accuracy and robustness.

Second, as shown in Fig. 8(e), a standard sphere with radius r = 100 mm is measured. The defocus kernel maps of the camera and projector, and their total value, are computed from a coarse depth measurement and are shown in Fig. 14. The variable period number (Figs. 15(b) and 15(d)) and the variable-frequency phase (Figs. 15(a) and 15(c)) are then designed according to the proposed variable-frequency fringe pattern design method, using the defocus kernel calculated from the coarsely measured depth.
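
The paper later notes that the projector's defocus kernel is proportional to the measured scene depth. As a hypothetical illustration of how per-pixel kernel maps like those in Fig. 14 could be assembled from a coarse depth map, the sketch below takes each device's Gaussian kernel width as proportional to the depth deviation from a shared focal plane; the proportionality constants, the common focal depth, and the Gaussian model are all assumptions, not the paper's calibration.

```python
import numpy as np

def defocus_sigma_map(depth, focus_depth, k_cam, k_proj):
    """Hypothetical per-pixel defocus kernel widths: each device's Gaussian
    sigma is proportional to the depth deviation from the focal plane, and
    the cascade of two Gaussian blurs is again Gaussian with
    sigma_total = sqrt(sigma_cam**2 + sigma_proj**2)."""
    dev = np.abs(depth - focus_depth)
    s_cam = k_cam * dev
    s_proj = k_proj * dev
    return s_cam, s_proj, np.sqrt(s_cam**2 + s_proj**2)

# Coarse depth map of a tilted plane spanning 450-550 mm, focus at 500 mm.
depth = np.linspace(450.0, 550.0, 100).reshape(1, -1).repeat(4, axis=0)
s_cam, s_proj, s_tot = defocus_sigma_map(depth, 500.0, 0.01, 0.02)
```

The total map plays the role of the combined camera-projector kernel from which the optimal local frequency is selected.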

Fig. 14 Defocus kernel maps for sphere measurement: (a) Camera defocus kernel. (b) Projector defocus kernel. (c) Total defocus kernel of camera and projector.

Fig. 15 Designed phase for sphere measurement: (a) Designed phase along Up; (b) Variable period number along Up; (c) Designed phase along Vp; (d) Variable period number along Vp.

The designed variable-frequency fringe patterns along the Up and Vp axes are shown in Figs. 16(a) and 16(b), and the measurement results are shown in Figs. 16(c) and 16(d).

Fig. 16 Designed variable-frequency fringe pattern and point cloud of sphere measured by variable-frequency fringe pattern: (a) Designed variable-frequency vertical fringe pattern; (b) Designed variable-frequency parallel fringe pattern; (c) Sphere point cloud measured by variable-frequency; (d) Measurement errors of sphere fitting of the point cloud in (c).

The measurement results obtained with uniform-frequency fringe patterns of different frequencies are illustrated in Fig. 17; they also indicate that the measurement accuracy in the defocused area at the contour of the shared view decreases as the frequency increases.

Fig. 17 Illustration of the measured point cloud of the sphere by uniform-frequency fringes with different frequencies: (a)–(c) are point clouds corresponding to nTh = 10, nTh = 20 and nTh = 30, respectively; (d)–(f) are the corresponding measurement errors.

The sphere is measured ten times at each of two positions, and the measurement results are shown in Fig. 18.

Fig. 18 Sphere measurement comparison between uniform-frequency and variable-frequency fringes: (a) Sphere measured at the first position; (b) Sphere measured at the second position.

The mean values of the RMS errors for each uniform frequency and for the variable frequency are listed in Table 3. The results indicate that the variable-frequency method outperforms the uniform-frequency method: for RMS¯1, the improvement is approximately 9%, and the variable frequency also obtains the best result for RMS¯2. These experimental results demonstrate that it is worthwhile to determine an optimal frequency from the designed variable-frequency range even when the uniform-frequency method is used.
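
The sphere-fitting RMS error can be obtained with the classic algebraic least-squares sphere fit; a minimal sketch (fit formulation and synthetic noise level are illustrative, not taken from the paper):

```python
import numpy as np

def sphere_fit(points):
    """Algebraic least-squares sphere fit to an (N, 3) point cloud.
    Solves x^2+y^2+z^2 = 2a*x + 2b*y + 2c*z + (r^2 - a^2 - b^2 - c^2)
    for the center (a, b, c) and radius r, then reports the RMS of the
    radial residuals."""
    A = np.column_stack([2 * points, np.ones(len(points))])
    b = (points**2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    r = np.sqrt(sol[3] + center @ center)
    residuals = np.linalg.norm(points - center, axis=1) - r
    return center, r, float(np.sqrt(np.mean(residuals**2)))

# Example: noisy samples of a sphere with r = 100 mm, as in the experiment.
rng = np.random.default_rng(1)
v = rng.normal(size=(2000, 3))
pts = 100.0 * v / np.linalg.norm(v, axis=1, keepdims=True)
pts += rng.normal(0.0, 0.05, size=pts.shape)   # 0.05 mm synthetic noise
center, r, rms = sphere_fit(pts)
```

The recovered radius is close to 100 mm and the RMS residual is close to the injected noise level.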

Table 3. Mean values of the RMS error for sphere measurement (mm)

When the evaluated area is shrunk (Srk) by 100 pixels (pxs) and by 200 pixels toward the center of the shared view between the camera and projector, the RMS errors of sphere fitting at the other two positions are listed in Table 4. The results also indicate that the variable-frequency measurement method achieves the highest accuracy, superior to the best uniform-frequency result.

Table 4. RMS errors of sphere fitting by shrinking the evaluated area (mm)

Third, a step-like object (Fig. 8(c)) is measured. Because of the sharp change in depth, there are some shadow areas within the shared view on the cliff surfaces between the step planes; the defocus kernels of these areas are set to a constant for the variable-frequency fringe design along both the Up and Vp axes, and the shadow areas are excluded when measuring the object. The defocus kernel maps of the camera and projector, and their total value, are shown in Fig. 19. The correspondingly designed variable-frequency phase and variable frequency are shown in Fig. 20.

Fig. 19 Defocus kernel for step-like plane measurement: (a) Camera defocus kernel. (b) Projector defocus kernel. (c) Total defocus kernel of camera and projector.

Fig. 20 Designed phase for step-like plane measurement: (a) Designed phase along Up. (b) Variable period number along Up. (c) Designed phase along Vp. (d) Variable period number along Vp.

The measurement results are shown in Fig. 21.

Fig. 21 Point cloud and measurement error of step-like plane by uniform-frequency and variable-frequency fringes: (a) Point cloud measured by uniform-frequency nTh = 15; (b) Point cloud measured by variable-frequency; (c) Measurement errors of plane fitting of the point cloud in (a); (d) Measurement errors of step-like plane fitting of the point cloud in (b).

Also, the step-like object is measured ten times, and the three parallel planes are fitted respectively. The normal vector Ni (i = 1, 2, 3) of the plane with the minimum RMS fitting error is selected as the reference normal vector Nmin, and the angle deviation between the fitted normal vectors of the other two planes and the reference normal vector is calculated as

ANGi = cos−1( |Ni · Nmin| / (‖Ni‖ ‖Nmin‖) ),
which is used to evaluate the comprehensive measurement accuracy of the multiple step planes. In the step-like object measurement, the best-fitted plane is PLANE 1, thus Nmin = N1. The mean values of the RMS error of plane fitting and the orientation deviations ANG between the fitted normal vectors Ni (i = 2, 3) and the reference vector Nmin for the uniform-frequency and variable-frequency methods are listed in Table 5.
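
The angle deviation above is a direct arccosine of the normalized absolute dot product; a minimal sketch (function name is illustrative):

```python
import numpy as np

def angle_deviation(n_i, n_min):
    """Angle (radians) between two fitted plane normals, following
    ANG_i = arccos(|n_i . n_min| / (||n_i|| ||n_min||)); the absolute value
    makes the result independent of each normal's sign convention."""
    cos_ang = abs(np.dot(n_i, n_min)) / (np.linalg.norm(n_i) * np.linalg.norm(n_min))
    # Clip guards against round-off pushing |cos| marginally above 1.
    return float(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

# Example: two unit normals exactly 1 degree apart.
n1 = np.array([0.0, 0.0, 1.0])
n2 = np.array([np.sin(np.radians(1.0)), 0.0, np.cos(np.radians(1.0))])
ang = np.degrees(angle_deviation(n1, n2))   # 1.0 degree
```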

Table 5. Fitting error of step plane

From the fitting results in Table 5, the variable-frequency results for PLANE1 and PLANE2 are superior to those obtained by the uniform-frequency method. Although PLANE3 is slightly inferior to the uniform-frequency result at nTh = 10, owing to the gap in the fitted defocus kernel between PLANE3 and PLANE2 as well as the relatively narrow area of PLANE3, the angle deviation between PLANE1 and PLANE3 decreases by about 55%. Therefore, the variable-frequency measurement achieves the best comprehensive accuracy according to the three plane-fitting RMS errors and the normal-vector deviations.

4. Summary and discussion

This paper proposed a depth-driven variable-frequency fringe pattern to improve the overall accuracy of 3D measurement in fringe projection profilometry. To reduce the pixel matching error caused by intense noise in the captured image, the optimal frequency was derived from the depth, which determines the degree of defocus of the projector and camera. Then, online frequency optimization along the abscissa and ordinate axes of the sinusoidal fringe pattern according to depth feedback was presented. Next, the coding and decoding methods for variable-frequency sinusoidal fringe patterns were given. The experimental results demonstrate the effectiveness of the proposed method in improving 3D shape measurement accuracy over varying depth with the designed variable-frequency fringe pattern. Although the proposed method selects the optimal frequency for different measurement depths to improve accuracy, some limitations remain. First, a coarse measurement of object depth is required to design the optimal frequency, which slows down the measurement process. Second, when the depth is so large that the defocus becomes severe, the optimal frequency brings no obvious improvement, because the amplitude of the sinusoidal wave is attenuated too strongly for the phase to be determined accurately by the phase-shifting method.

The defocus kernel of the projector is proportional to the measured scene depth, and the coarse depth measurement requires additional image processing to exclude gross errors; some areas cannot be reconstructed. Moreover, the optimal frequency cannot be calculated straightforwardly for objects with drastic depth changes or discontinuous gaps in the surface, because the defocus kernel is linearly fitted with a constant section length. Deeper analysis of defocus kernel fitting with nonlinear relationships and adaptive section lengths will be carried out in future work.

Funding

National Key R&D Program of China (2016YFE0206200); National Natural Science Foundation of China (NSFC) (U1613205, 51675291); State Key Laboratory of China (SKLT2018C04); Basic Research Program of Shenzhen (JCYJ20160229123030978, JCYJ20160429161539298).

References and links

1. L. Zhou, F. Zhou, D. Qu, X. Liu, and W. Lu, “Error analysis of the non-diffraction grating structured light generated by triangular prism,” Opt. Commun. 306, 174–178 (2013).

2. P. Jin, J. Liu, S. Liu, and X. Wang, “A new multi-vision-based reconstruction algorithm for tube inspection,” Int. J. Adv. Manuf. Technol. 93, 1–15 (2017).

3. Y. J. Xu, C. Chen, S. J. Huang, and Z. H. Zhang, “Simultaneously measuring 3D shape and colour texture of moving objects using IR and colour fringe projection techniques,” Opt. Lasers Eng. 61, 1–7 (2014).

4. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3, 128–160 (2011).

5. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognit. 43, 2666–2680 (2010).

6. X. Su and W. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Opt. Lasers Eng. 42, 245–261 (2004).

7. E. Li, X. Peng, J. Xi, J. Chicharo, J. Yao, and D. Zhang, “Multi-frequency and multiple phase-shift sinusoidal fringe projection for 3D profilometry,” Opt. Express 13, 1561–1569 (2005).

8. Y. Y. Cheng and J. C. Wyant, “Multiple-wavelength phase-shifting interferometry,” Appl. Opt. 24, 804–807 (1985).

9. Z. Li and S. Nayar, “Projection defocus analysis for scene capture and image display,” ACM Trans. Graph. 25, 907–915 (2006).

10. Y. Wang and S. Zhang, “Three-dimensional shape measurement with binary dithered patterns,” Appl. Opt. 51, 6631–6636 (2012).

11. W. Lohry and S. Zhang, “Genetic method to optimize binary dithering technique for high-quality fringe generation,” Opt. Lett. 38, 540–542 (2013).

12. J. Dai and S. Zhang, “Phase-optimized dithering technique for high-quality 3D shape measurement,” Opt. Lasers Eng. 51, 790–795 (2013).

13. J. Dai, B. Li, and S. Zhang, “Intensity-optimized dithering technique for three-dimensional shape measurement with projector defocusing,” Opt. Lasers Eng. 53, 79–85 (2014).

14. J. Dai, B. Li, and S. Zhang, “High-quality fringe pattern generation using binary pattern optimization through symmetry and periodicity,” Opt. Lasers Eng. 52, 195–200 (2014).

15. X. X. Li, Z. J. Zhang, and C. Yang, “High-quality fringe pattern generation using binary pattern optimization based on a novel objective function,” Optik 127, 5322–5327 (2016).

16. G. A. Ayubi, J. A. Ayubi, J. M. D. Martino, and J. A. Ferrari, “Pulse-width modulation in defocused three-dimensional fringe projection,” Opt. Lett. 35, 3682–3684 (2010).

17. Z. Li and S. Nayar, “Projection defocus analysis for scene capture and image display,” ACM Trans. Graph. 25, 907–915 (2006).

18. M. Nagase, D. Iwai, and K. Sato, “Dynamic defocus and occlusion compensation of projected imagery by model-based optimal projector selection in multi-projection environment,” Virtual Real. 15, 119–132 (2011).

19. Y. Oyamada and H. Saito, “Estimation of projector defocus blur by extracting texture rich region in projection image,” Vaclav Skala - UNION Agency (2008).

20. Y. Oyamada and H. Saito, “Defocus blur correcting projector-camera system,” Lect. Notes Comput. Sci. 5259, 453–464 (2008).

21. K. Irie, A. E. Mckinnon, K. Unsworth, and I. M. Woodhead, “A technique for evaluation of CCD video-camera noise,” IEEE Trans. Circuits Syst. Video Technol. 18, 280–284 (2008).

22. Y. Oyamada and H. Saito, “Focal pre-correction of projected image for deblurring screen image,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 1–8 (2007).

23. S. Zhuo and T. Sim, “Defocus map estimation from a single image,” Pattern Recognit. 44, 1852–1858 (2011).

24. H. Chen, J. Su, J. Xu, K. Chen, R. Chen, and Z. Zhang, “Accurate calibration method for camera and projector in fringe patterns measurement system,” Appl. Opt. 55, 4293–4300 (2016).

[Crossref]

Opt. Express (1)

Opt. Lasers Eng. (4)

X. Su and W. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Opt. Lasers Eng. 42, 245–261 (2004).
[Crossref]

J. Dai and S. Zhang, “Phase-optimized dithering technique for high-quality 3d shape measurement,” Opt. Lasers Eng. 51, 790–795 (2013).
[Crossref]

J. Dai, B. Li, and S. Zhang, “Intensity-optimized dithering technique for three-dimensional shape measurement with projector defocusing,” Opt. Lasers Eng. 53, 79–85 (2014).
[Crossref]

J. Dai, B. Li, and S. Zhang, “High-quality fringe pattern generation using binary pattern optimization through symmetry and periodicity,” Opt. Lasers Eng. 52, 195–200 (2014).
[Crossref]

Opt. Lett. (2)

Optik - Int. J. for Light. Electron Opt. (1)

X. X. Li, Z. J. Zhang, and C. Yang, “High-quality fringe pattern generation using binary pattern optimization based on a novel objective function,” Optik - Int. J. for Light. Electron Opt. 127, 5322–5327 (2016).
[Crossref]

Pattern Recognit. (1)

S. Zhuo and T. Sim, “Defocus map estimation from a single image,” Pattern Recognit. 44, 1852–1858 (2011).
[Crossref]

Pattern recognition (1)

J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern recognition 43, 2666–2680 (2010).
[Crossref]

Virtual Real. (1)

M. Nagase, D. Iwai, and K. Sato, “Dynamic defocus and occlusion compensation of projected imagery by model-based optimal projector selection in multi-projection environment,” Virtual Real. 15, 119–132 (2011).
[Crossref]

Other (2)

Y. Oyamada and H. Saito, “Estimation of projector defocus blur by extracting texture rich region in projection image,” Vaclav Skala - UNION Agency (2008).

Y. Oyamada and H. Saito, “Focal pre-correction of projected image for deblurring screen image,” Comput. Vis. Pattern Recognition, 2007. CVPR ’07. IEEE Conf. on., 1–8 (2007).



Figures (21)

Fig. 1 Interactive influence of the frequency of sinusoidal fringes and noise on pixel matching accuracy.
Fig. 2 Geometric optics model: (a) projector; (b) camera.
Fig. 3 Principle of fringe projection profilometry.
Fig. 4 Segmentation fitting of the defocus kernel σ.
Fig. 5 Layout of the projector kernel calibration.
Fig. 6 Projector defocus kernel calibration setup: (a) calibrated projector defocus kernel–distance relationship; (b) calibrated camera defocus kernel–distance relationship.
Fig. 7 Simulation result of optimal frequency design: (a) optimal frequency ω = 1/4 rad/pixel; (b) ω = 1/5 rad/pixel; (c) ω = 1/6 rad/pixel; (d) ω = 1/7 rad/pixel.
Fig. 8 Three different objects measured to evaluate the proposed method.
Fig. 9 Defocus kernel maps for plane measurement: (a) camera defocus kernel; (b) projector defocus kernel; (c) total defocus kernel of camera and projector.
Fig. 10 Designed phase for plane measurement: (a) designed phase along Up; (b) variable period number along Up; (c) designed phase along Vp; (d) variable period number along Vp.
Fig. 11 Point clouds of the plane measured with uniform-frequency fringes: (a)–(c) point clouds for nTh = 10, 20, and 30, respectively; (d)–(f) the corresponding measurement errors.
Fig. 12 Designed variable-frequency fringe patterns and the plane point cloud measured with them: (a) designed variable-frequency vertical fringe pattern; (b) designed variable-frequency parallel fringe pattern; (c) plane point cloud; (d) measurement errors of plane fitting of the point cloud in (c).
Fig. 13 Plane measurement comparison between uniform-frequency and variable-frequency fringes: (a)–(c) plane measured ten times at the first, second, and third positions, respectively.
Fig. 14 Defocus kernel maps for sphere measurement: (a) camera defocus kernel; (b) projector defocus kernel; (c) total defocus kernel of camera and projector.
Fig. 15 Designed phase for sphere measurement: (a) designed phase along Up; (b) variable period number along Up; (c) designed phase along Vp; (d) variable period number along Vp.
Fig. 16 Designed variable-frequency fringe patterns and the sphere point cloud measured with them: (a) designed variable-frequency vertical fringe pattern; (b) designed variable-frequency parallel fringe pattern; (c) sphere point cloud; (d) measurement errors of sphere fitting of the point cloud in (c).
Fig. 17 Point clouds of the sphere measured with uniform-frequency fringes of different frequencies: (a)–(c) point clouds for nTh = 10, 20, and 30, respectively; (d)–(f) the corresponding measurement errors.
Fig. 18 Sphere measurement comparison between uniform-frequency and variable-frequency fringes: (a) sphere measured at the first position; (b) sphere measured at the second position.
Fig. 19 Defocus kernel maps for step-like plane measurement: (a) camera defocus kernel; (b) projector defocus kernel; (c) total defocus kernel of camera and projector.
Fig. 20 Designed phase for step-like plane measurement: (a) designed phase along Up; (b) variable period number along Up; (c) designed phase along Vp; (d) variable period number along Vp.
Fig. 21 Point clouds and measurement errors of the step-like plane with uniform-frequency and variable-frequency fringes: (a) point cloud measured with uniform-frequency fringes (nTh = 15); (b) point cloud measured with variable-frequency fringes; (c) measurement errors of plane fitting of the point cloud in (a); (d) measurement errors of step-like plane fitting of the point cloud in (b).

Tables (5)

Table 1 Mean values of the RMS error for plane measurement (mm)
Table 2 RMS error of plane fitting on the shrunk evaluated area (mm)
Table 3 Mean values of the RMS error for sphere measurement (mm)
Table 4 RMS errors of sphere fitting by shrinking the evaluated area (mm)
Table 5 Fitting error of the step-like plane

Equations (46)


$$\phi(U^c,V^c)=\arctan\left\{\frac{\sum_{i=1}^{N}I_i^c(U^c,V^c)\sin[(i-1)2\pi/N]}{\sum_{i=1}^{N}I_i^c(U^c,V^c)\cos[(i-1)2\pi/N]}\right\}\tag{1}$$
$$\phi(U^c,V^c)=f\left(I_1^c(U^c,V^c),I_2^c(U^c,V^c),\ldots,I_N^c(U^c,V^c)\right)\tag{2}$$
$$\Delta\phi(U^c,V^c)=\frac{\partial f}{\partial I_1^c}\Delta I_1^c+\frac{\partial f}{\partial I_2^c}\Delta I_2^c+\cdots+\frac{\partial f}{\partial I_N^c}\Delta I_N^c\tag{3}$$
$$I_i^c(U^c,V^c)=I'^c+I''^c\cos[\phi(U^c,V^c)+(i-1)2\pi/N]\tag{4}$$
$$\frac{\partial f}{\partial I_i^c}=\frac{2\sin[\phi(U^c,V^c)+(i-1)2\pi/N]}{NI''^c}\tag{5}$$
$$|\Delta\phi|_{\max}=\left|\sum_{i=1}^{N}\frac{\partial f}{\partial I_i^c}\right||\Delta I|=\frac{2}{NI''^c}\left|\sum_{i=1}^{N}\sin[\phi(U^c,V^c)+(i-1)2\pi/N]\right||\Delta I|=\frac{M_{\sin\phi}}{NI''^c}|\Delta I|\tag{6}$$
$$|\Delta U^p|_{\max}=\frac{|\Delta\phi|_{\max}}{\omega}=\frac{M_{\sin\phi}}{NI''^c}\,\frac{1}{\omega}\,|\Delta I|\tag{7}$$
$$G(U^p,V^p)=\frac{1}{2\pi\sigma^2}e^{-\frac{U^{p\,2}+V^{p\,2}}{2\sigma^2}}=\left(\frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{U^{p\,2}}{2\sigma^2}}\right)\left(\frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{V^{p\,2}}{2\sigma^2}}\right)=G(U^p)G(V^p)\tag{8}$$
$$I_i^p(U^p,V^p)_u=I'^p+I''^p\cos[\omega_u U^p+(i-1)2\pi/N]\tag{9}$$
$$I^c(U^c,V^c)=G(U^p,V^p)\otimes I^p(U^p,V^p)\tag{10}$$
$$\mathcal{F}\left(I^c(U^c,V^c)\right)=\mathcal{F}\left(G(U^p,V^p)\right)\mathcal{F}\left(I^p(U^p,V^p)\right)=\mathcal{F}\left(G(U^p)\right)\mathcal{F}\left(G(V^p)\right)\mathcal{F}\left(I^p(U^p,V^p)\right)\tag{11}$$
$$I^c(U^c,V^c)_u=\mathcal{F}^{-1}\left(\mathcal{F}\left(I^c(U^p,V^p)_u\right)\right)=I'^p+\mathcal{F}\left(G(U^p)\right)I''^p\cos[\omega_u U^p+(i-1)2\pi/N]\tag{12}$$
$$\begin{cases}I'^c=I'^p\\[2pt] I''^c=I''^p\,e^{-\frac{\sigma^2\omega_u^2}{2}}\end{cases}\tag{13}$$
$$|\Delta U^p|_{\max}=\left(\frac{M_{\sin\phi}}{NI''^p}|\Delta I|\right)\frac{1}{\omega}e^{\frac{\omega^2\sigma^2}{2}}\quad(\omega=\omega_u)\tag{14}$$
$$h_\sigma(\omega)=\frac{1}{\omega}e^{\frac{\sigma^2\omega^2}{2}}\tag{15}$$
$$\omega_{opt}=\frac{1}{\sigma}\tag{16}$$
$$\sigma^p(u)\propto R^p(u)=\left|D^p\left(\frac{u}{f^p}-1\right)\right|+r_{uv}\approx a^p u+b^p\tag{17}$$
$$\sigma^c(u)\propto R^c(u)=D^c s\left(\frac{1}{f^c}-\frac{1}{s}-\frac{1}{u}\right)\approx a^c\left(b^c-\frac{1}{u}\right)\tag{18}$$
$$\sigma=\sqrt{\sigma_p^2+\sigma_c^2}\tag{19}$$
$$\begin{bmatrix}U^c\\V^c\\1\end{bmatrix}=\frac{1}{z^c}\begin{bmatrix}f_u^c&0&U_0^c\\0&f_v^c&V_0^c\\0&0&1\end{bmatrix}\begin{bmatrix}x^c\\y^c\\z^c\end{bmatrix}\tag{20}$$
$$\begin{cases}u^c=\dfrac{U^c-U_0^c}{f_u^c}=\dfrac{x^c}{z^c}\\[4pt] v^c=\dfrac{V^c-V_0^c}{f_v^c}=\dfrac{y^c}{z^c}\end{cases}\tag{21}$$
$$\begin{cases}u^p=\dfrac{U^p-U_0^p}{f_u^p}=\dfrac{x^p}{z^p}\\[4pt] v^p=\dfrac{V^p-V_0^p}{f_v^p}=\dfrac{y^p}{z^p}\end{cases}\tag{22}$$
$$[x^p,y^p,z^p]^T=\mathbf{R}_{pc}[x^c,y^c,z^c]^T+\mathbf{t}_{pc}\tag{23}$$
$$\begin{cases}z^c=\dfrac{t_1-t_3u^p}{Ju^p-H}\\[4pt] x^c=u^cz^c\\ y^c=v^cz^c\end{cases}\tag{24}$$
$$\Delta z^c=\frac{t_3H-t_1J}{(Ju^p-H)^2}\,\Delta u^p=\frac{t_3H-t_1J}{(Ju^p-H)^2}\,\frac{\Delta U^p}{f_u^p}\tag{25}$$
$$\Delta(u^c,v^c,u^p)=\sqrt{\Delta x^{c\,2}+\Delta y^{c\,2}+\Delta z^{c\,2}}=\sqrt{u^{c\,2}+v^{c\,2}+1}\;|\Delta z^c|\tag{26}$$
$$\Delta_{\max}(u^c,v^c,u^p)=k(u^c,v^c,u^p)\,K_{\sin\phi}\,h_\sigma(\omega)\,|\Delta I|\tag{27}$$
$$\bar\Delta(u^p,\omega)=K_{\sin\phi}\,\frac{\sum_{j=1}^{|V_2-V_1|}k_j\,h_{\sigma_j}(\omega)}{|V_2-V_1|}\,|\Delta I|=K_{\sin\phi}\,\bar k_j\,h_{\bar\sigma}(\omega)\,|\Delta I|\tag{28}$$
$$h_{\bar\sigma_1}(\omega)=\frac{1}{\omega}\exp\left(\frac{\omega^2}{2}\sum_{i=1}^{|V_2-V_1|}\lambda_i\sigma_i^2\right)\le\frac{1}{\omega}\left(\sum_{i=1}^{|V_2-V_1|}\lambda_i\exp(\sigma_i^2)\right)^{\frac{\omega^2}{2}}=h_{\bar\sigma}(\omega)\tag{29}$$
$$\frac{1}{\bar\sigma_2}\le\omega_{opt}(\hat U^p)=\frac{1}{\bar\sigma}\le\frac{1}{\bar\sigma_1}\tag{30}$$
$$\phi(\hat U^p)=\int_0^{\hat U^p}\omega(U^p)\,dU^p\tag{31}$$
$$\phi(\hat U^p)=\underbrace{\int_0^{U_p^l}\frac{1}{\sigma_1(U^p)}\,dU^p}_{C_0}+\sum_{j=1}^{m-1}\int_{U_p^l+(j-1)W_s}^{U_p^l+jW_s}\frac{1}{\sigma_j}\,dU^p+\int_{U_p^l+(m-1)W_s}^{\hat U^p}\frac{1}{\sigma_m}\,dU^p\tag{32}$$
$$=\underbrace{C_0+\sum_{j=1}^{m-1}\int_{U_p^l+(j-1)W_s}^{U_p^l+jW_s}\frac{1}{\alpha_jU^p+\beta_j}\,dU^p}_{C_{m-1}}+\int_{U_p^l+(m-1)W_s}^{\hat U^p}\frac{1}{\alpha_mU^p+\beta_m}\,dU^p=C_{m-1}+\frac{1}{\alpha_m}\log\left(\frac{\alpha_m\hat U^p+\beta_m}{\alpha_m[U_p^l+(m-1)W_s]+\beta_m}\right)\quad(\alpha_m\neq0)\tag{33}$$
$$\phi(\hat U^p)=C_{m-1}+\int_{U_p^l+(m-1)W_s}^{\hat U^p}\frac{1}{\beta_m}\,dU^p=C_{m-1}+\frac{\hat U^p-\left(U_p^l+(m-1)W_s\right)}{\beta_m}\quad(\alpha_m=0)\tag{34}$$
$$S_u=\mathrm{round}\left(\frac{U_p^r-U_p^l}{W_s}\right)\tag{35}$$
$$I_{i,\phi}^p(\hat U^p,\hat V^p)_u=I'^p+I''^p\cos[\phi(\hat U^p)+(i-1)2\pi/N],\quad i=1,2,\ldots,N\tag{36}$$
$$I_{i,\phi}^p(\hat U^p,\hat V^p)_v=I'^p+I''^p\cos[\phi(\hat V^p)+(i-1)2\pi/N],\quad i=1,2,\ldots,N\tag{37}$$
$$\tilde\phi=\arctan\left\{\frac{\sum_{i=1}^{N}I_\phi(i)\sin[2\pi(i-1)/N]}{\sum_{i=1}^{N}I_\phi(i)\cos[2\pi(i-1)/N]}\right\}\in[-\pi,\pi)\tag{38}$$
$$\phi=\begin{cases}\tilde\phi&\tilde\phi\ge0\\ \tilde\phi+2\pi&\tilde\phi<0\end{cases}\tag{39}$$
$$Q(\hat U^c,\hat V^c)=\mathrm{round}\left(\frac{\int_0^{\bar U^p}\omega(U^p)\,dU^p-\phi}{2\pi}\right)\tag{40}$$
$$\phi(\hat U^p)=\int_0^{\hat U^p}\omega(U^p)\,dU^p=Q(\hat U^c,\hat V^c)\,2\pi+\phi\tag{41}$$
$$\hat U^p(\hat U^c,\hat V^c)=\left[U_p^l+(m-1)W_s+\frac{\beta_m}{\alpha_m}\right]\exp\left[\alpha_m\left(2\pi Q+\phi-C_{m-1}\right)\right]-\frac{\beta_m}{\alpha_m}\tag{42}$$
$$\hat U^p(\hat U^c,\hat V^c)=\beta_m\left(2\pi Q+\phi-C_{m-1}\right)+\left[U_p^l+(m-1)W_s\right]\tag{43}$$
$$\sigma_p=0.2486\,z_p-53.2983\tag{44}$$
$$\sigma_c=-5.353\times10^{-6}z_c^3+0.00541z_c^2-1.7816z_c+193.4647\tag{45}$$
$$ANG_i=\cos^{-1}\left(\left|\frac{\mathbf{N}_i\cdot\mathbf{N}_{\min}}{\|\mathbf{N}_i\|\,\|\mathbf{N}_{\min}\|}\right|\right)\tag{46}$$
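The decoding steps listed above — N-step wrapped-phase retrieval via the arctangent, shifting the result to [0, 2π), rounding against a coarse absolute phase to recover the fringe order Q, and the optimal-frequency rule ω_opt = 1/σ — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names are ours, and the standard sign convention of the N-step arctangent estimator is assumed.

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase shifting: images[i] = I' + I'' * cos(phi + i*2*pi/N).

    Returns the wrapped phase shifted to [0, 2*pi), mirroring the paper's
    piecewise correction of the arctangent result.
    """
    N = len(images)
    deltas = 2.0 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    phi = -np.arctan2(num, den)                 # wrapped phase in (-pi, pi]
    return np.where(phi >= 0.0, phi, phi + 2.0 * np.pi)

def unwrap_with_coarse(phi, coarse_phase):
    """Recover the fringe order Q by rounding a coarse absolute-phase
    estimate against the wrapped phase, then return Q*2*pi + phi."""
    Q = np.round((coarse_phase - phi) / (2.0 * np.pi))
    return 2.0 * np.pi * Q + phi

def optimal_frequency(sigma):
    """The error factor h(w) = exp(sigma^2 * w^2 / 2) / w is minimized
    at w = 1/sigma, which sets the local fringe frequency."""
    return 1.0 / sigma
```

As a usage check, a 4-step sequence generated from a known phase is recovered exactly by `wrapped_phase`, and any coarse estimate within ±π of the true absolute phase yields the correct order Q in `unwrap_with_coarse`.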
