Abstract

A three-dimensional (3D) shape measurement system with the binary defocusing technique can perform high-speed and flexible measurements if the binary fringe patterns are properly defocused by the projector. However, the actual defocusing degree is difficult to set, and the fringe period is difficult to determine accordingly. In this study, we present a parameter selection framework for square-binary defocusing. First, we analyze the fringe formation process mathematically. The defocusing degree is quantified and manipulated through the focusing distance of the projector, which is calibrated by point spread function measurement. To optimize parameter selection, the single-point sinusoidal error is modeled as the objective function for evaluating the defocusing effect. We verify the correctness of the framework in experiments with different parameter combinations and object measurements. The appropriate defocusing parameters can be obtained easily from an analysis of the practical system setup, which improves the quality and robustness of the system.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry (FPP) is one of the most well-known approaches to three-dimensional (3D) shape measurement and is widely used in manufacturing, medicine, and art [1,2]. The 3D shape measurement system with the binary defocusing technique, first proposed by Zhang et al. [3], has recently attracted research attention because of its flexibility and high speed [4]. Instead of conventionally generating 8-bit grayscale sinusoidal patterns with a computer, the sinusoidal patterns are obtained by defocusing binary structured patterns through the projector. FPP with the binary defocusing technique can achieve 3D shape measurements at tens of kHz and overcome the nonlinear gamma of a projector because only 1-bit binary structured patterns are required [3].

One of the limitations of the binary defocusing technique is the manipulation of the projector defocusing degree. Optical defocusing is usually modeled as low-pass filtering, which eliminates the high-order harmonics of binary patterns. Only within a small depth range, where the object must be placed, are the binary fringe patterns defocused properly into high-quality sinusoidal patterns [5]. Otherwise, phase retrieval is disrupted by high-order harmonics.

Two categories of approaches are usually employed to overcome such barriers. One category is to design new kinds of binary structured patterns that are insensitive to the defocusing degree. Several binary pattern generation techniques have been proposed [6]. Sinusoidal pulse width modulation (SPWM) [7] and optimal pulse width modulation (OPWM) [8,9] techniques shift the undesired harmonics away from the first harmonic component, which is preserved as sinusoidal. Dithering techniques with error diffusion [10–12] use a dither kernel to binarize a pattern in order to randomize the quantization error two-dimensionally. However, these methods consider only the pattern structure and disregard the projector setup.

The other category is to model the phase error mathematically and compensate for it. The error function in terms of the wrapped phase can be modeled as a polynomial whose coefficients relate to the measured depth and can be calibrated in advance [5]. Several appropriate algorithms, such as Hilbert three-step phase-shifting profilometry and double three-step phase-shifting profilometry, with the latter featuring additional invalid-point detection, can increase the phase accuracy [13]. However, such methods are less extensible and flexible because they depend strongly on the selected algorithm.

Quantitative approaches for practically measuring and controlling the defocusing degree, which is crucial for the FPP system, are rarely reported. A few studies have taken quantitative approaches. Zhang et al. [14] analyzed the phase error influenced by projector defocusing for the traditional FPP system and proposed a parameter-choosing scheme. Lei and Zhang [4] compared the phase error at various defocus levels of the FPP system using the binary defocusing technique. Kamagara et al. [15] further investigated defocus selection based on the Gaussian kernel and the normed Fourier transform. However, the defocus levels in these studies are difficult to observe directly. Therefore, the practical optimized choice of defocusing degree and corresponding fringe period is still impractical for a given setup because the defocusing degree can hardly be manipulated quantitatively.

Point spread function (PSF) measurement is a useful tool for estimating the defocusing effect. It has been widely used for extending the depth of field and super-resolution of cameras [16,17] and projectors [18–20] in computer vision or computer graphics. The filter kernel of defocus blur can be obtained by PSF measurement. Once the kernel of a real system is estimated, further processing, such as deconvolution, can be conducted based on priors.

In this research, the defocusing degree and corresponding fringe period are selected as defocusing parameters, and a parameter choice framework is proposed for the 3D shape measurement system with square-binary defocusing technique. A mathematical model is presented for the analysis of the entire process. The PSF approach is introduced to the FPP system. Projector focusing distance is utilized for the quantification and control of the defocusing degree, and the actual low-pass kernel is estimated by PSF measurement. The effects of different combinations of parameter choices are evaluated by single-point sinusoidal error. Simulation and experimental results indicate that the best combination with low error can be obtained accurately if certain conditions between focusing distance and fringe period are satisfied. The experimental results also prove the correctness of our framework. From our analysis, the appropriate parameter selection can be easily made according to the practical FPP setup.

The remainder of this paper is organized as follows: Section 2 explains the principles of our approach and presents the preliminary simulation results. Section 3 presents the experimental results, which coincide with the simulation results and prove the correctness and practicability of the approach. Section 4 summarizes this paper.

2. Principle

Instead of using sinusoidal patterns directly, binary fringes are used to generate quasi-sinusoidal patterns by projector defocusing, which enables high-speed projection. However, the defocusing degree of the projector and the fringe period are difficult to manipulate. If the defocusing degree is large but the fringe period is small, then the amplitude of the sinusoidal signal will be weak or completely submerged in noise. Conversely, if the defocusing degree is small but the fringe period is large, then the high-order harmonics of the binary signal will not be filtered out.

A quantitative analysis method is proposed to overcome these barriers. The basic idea of our approach is to regard the defocus blur as an ideal Gaussian low-pass filter whose actual kernel can be estimated by defocusing PSF measurements. Once the PSFs of the projector at different focusing distances are measured as priors, the appropriate defocusing degree and period of the square-binary fringe can be evaluated and determined. As illustrated in Fig. 1, the parameter combination is determined through the following pipeline: First, perform PSF measurement to build the connection between the Gaussian kernel and the physical focusing distance. Then, simulate the filtered fringe and evaluate it using the single-point sinusoidal error. Finally, select a suitable combination of fringe period and focusing distance with low single-point sinusoidal error based on the simulation results.


Fig. 1 Parameter-choosing pipeline.


In this section, first, we formulate square-binary defocusing FPP and analyze it in the frequency domain. Then, we derive the PSF principle and the evaluation method of the defocusing effect, which is utilized for parameter selection. Finally, we analyze the parameter selection.

2.1 Square-binary defocusing FPP analysis

A square-binary fringe can be represented mathematically as a square wave. The basic idea of the binary defocusing technique is to filter out all high-order harmonics of the square wave so that the first harmonic is preserved as a sinusoidal wave. The filtering process is completed in two steps: the optical defocus blur of the projector and the least-squares phase-shifting algorithm (LS-PSA). The problem lies in the first step: the defocus blur is difficult to control quantitatively.

The optical defocus blur can be modeled as a low-pass filtering process in image processing. The kernel of the filter is derived as the defocusing PSF, as follows:

$$I(x,y)=\mathrm{rect}(x,y)\otimes h(x,y), \tag{1}$$
where $x, y$ are the image plane coordinates of the projector, $\mathrm{rect}(x,y)$ is the square wave, $h(x,y)$ is the defocusing PSF, $I(x,y)$ is the filtered pattern image, and "$\otimes$" is the convolution operator.

After the PSF filtering process, I(x,y) can be approximated as a sinusoidal pattern projected onto the object, and the N-step phase-shifting method can be applied. The intensity distribution of the ith step (i = 0, 1, …, N − 1) is expressed as follows:

$$I_i(x,y)=a(x,y)+b(x,y)\cos\!\left[\phi(x,y)+\frac{2\pi i}{N}\right], \tag{2}$$
where a(x,y) is the average intensity, b(x,y) is the intensity modulation, and ϕ(x,y) is the wrapped phase, which is determined by the N-step LS-PSA:
$$\phi(x^c,y^c)=\arctan\!\left[\frac{\sum_{i=0}^{N-1} I_i^c(x^c,y^c)\,\sin\!\left(\frac{2\pi i}{N}\right)}{\sum_{i=0}^{N-1} I_i^c(x^c,y^c)\,\cos\!\left(\frac{2\pi i}{N}\right)}\right], \tag{3}$$
where $x^c, y^c$ are the image coordinates of the camera, and $I_i^c(x^c,y^c)$ is the recorded intensity.
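As a concrete illustration, Eqs. (2) and (3) can be implemented in a few lines. This is a minimal sketch; note that, with the intensity model of Eq. (2), the least-squares solution carries a minus sign on the sine sum, so the wrapped phase is recovered as below:

```python
import numpy as np

def lspsa_phase(frames):
    """Wrapped phase from the N-step LS-PSA (Eq. (3)).

    frames[i] is the image recorded at phase step i, following the intensity
    model I_i = a + b*cos(phi + 2*pi*i/N); the least-squares solution is
    phi = atan2(-sum_i I_i*sin(2*pi*i/N), sum_i I_i*cos(2*pi*i/N)).
    """
    frames = np.asarray(frames, dtype=float)
    N = frames.shape[0]
    delta = (2 * np.pi * np.arange(N)).reshape((N,) + (1,) * (frames.ndim - 1)) / N
    num = -(frames * np.sin(delta)).sum(axis=0)
    den = (frames * np.cos(delta)).sum(axis=0)
    return np.arctan2(num, den)  # wrapped into (-pi, pi]
```

For N = 3 or N = 4 this reduces to the familiar three- and four-step formulas.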

The effect of the high-order harmonic residuals is assessed by analyzing the filtering process in the frequency domain. Because the fringe pattern varies along only one direction (horizontal or vertical), the mathematical analysis can be simplified to a one-dimensional signal-processing problem [4]. As shown in Fig. 2, the Fourier transform of the one-dimensional square wave, rect(x), whose amplitude is A, spatial period is T, and duty cycle is 50%, can be represented as follows:

$$\mathrm{RECT}(j\omega)=A\pi\sum_{n=-\infty}^{\infty}\frac{\sin\!\left(\frac{n\pi}{2}\right)}{\frac{n\pi}{2}}\,\delta(\omega-n\omega_0)=A\pi\,\delta(\omega)+2A\sum_{k=0}^{\infty}\frac{(-1)^k}{2k+1}\,\delta\!\left[\omega\pm(2k+1)\omega_0\right], \tag{4}$$
where ω is the angular frequency coordinate and ω0 is the angular frequency of the square wave, which is derived as follows:


Fig. 2 (a) Cross-section of a square-binary fringe, which can be represented as a square wave, rect(x), mathematically: x is the spatial coordinate; A is the amplitude; T is the spatial period. (b) Absolute value of the Fourier spectrum of rect(x): ω is the angular frequency coordinate, and the angular frequency of rect(x) is ω0 = 2π/T.


$$\omega_0=\frac{2\pi}{T}. \tag{5}$$

PSF is usually modeled as a Gaussian filter in image processing [21,22]. As shown in Eqs. (6) and (7), the mathematical expression of one-dimensional PSF in the spatial and frequency domains is a Gaussian function.

$$h(x)=\frac{1}{\sqrt{2\pi\sigma_h^2}}\,e^{-\frac{x^2}{2\sigma_h^2}}, \tag{6}$$
$$H(j\omega)=e^{-\frac{\omega^2}{2D_0^2}}, \tag{7}$$
where D0 = 1/σh is the cutoff frequency of the low-pass Gaussian filter. Hence, the Fourier transform of the PSF-filtered pattern signal F(jω) in one dimension is expressed as follows:

$$F(j\omega)=\mathrm{RECT}(j\omega)\,H(j\omega)=A\pi\,\delta(\omega)+2A\,e^{-\frac{\omega_0^2}{2D_0^2}}\left[\delta(\omega+\omega_0)+\delta(\omega-\omega_0)\right]+2A\sum_{k=1}^{\infty}\frac{(-1)^k}{2k+1}\,e^{-\frac{[(2k+1)\omega_0]^2}{2D_0^2}}\,\delta\!\left[\omega\pm(2k+1)\omega_0\right]. \tag{8}$$

The desired and residual components, g(jω) and Rn(jω), are expressed in Eqs. (9) and (10):

$$g(j\omega)=A\pi\,\delta(\omega)+2A\,e^{-\frac{\omega_0^2}{2D_0^2}}\left[\delta(\omega+\omega_0)+\delta(\omega-\omega_0)\right], \tag{9}$$
$$R_n(j\omega)=2A\sum_{k=1}^{\infty}\frac{(-1)^k}{2k+1}\,e^{-\frac{[(2k+1)\omega_0]^2}{2D_0^2}}\,\delta\!\left[\omega\pm(2k+1)\omega_0\right]. \tag{10}$$

g(jω) contains only the zero and fundamental frequencies, which constitute the desired sinusoidal signal, whereas Rn(jω) contains the high-order harmonics, which should be filtered out. The LS-PSA has a frequency transfer function (FTF) HN(ω) that helps to further reject harmonics [23]. If 8 steps are acceptable, H8(ω) rejects the harmonics {−10, −9, −8, −6, −5, −4, −3, −2, 2, 3, 4, 5, 6, 7, 8, 10} in the range [−10, 10]. However, increasing the number of phase-shifting steps decreases the measurement speed; therefore, 3-step or 4-step algorithms are commonly selected. The 3-step LS-PSA fails to filter out the distorting harmonics {−8, −5, −2, 4, 7, 10}, and the 4-step LS-PSA fails at {−7, −3, 5, 9}, which means that an appropriate PSF must attenuate at least these harmonics before the LS-PSA is applied.
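The surviving harmonics can be enumerated directly: assuming the standard FTF result for the N-step LS-PSA [23], the orders that pass are exactly those congruent to 1 modulo N (order 1 being the desired signal). A small sketch:

```python
def unrejected_harmonics(N, max_order=10):
    """Harmonic orders in [-max_order, max_order] that the FTF of an N-step
    LS-PSA fails to reject: those congruent to 1 modulo N, excluding the
    desired first harmonic (and the DC term)."""
    return [m for m in range(-max_order, max_order + 1)
            if m % N == 1 and m not in (0, 1)]
```

For N = 3 this yields {−8, −5, −2, 4, 7, 10} and for N = 4 it yields {−7, −3, 5, 9}, matching the sets stated above; for N = 8 only {−7, 9} survive in [−10, 10].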

The inverse Fourier transform of g() is derived as follows:

$$G(x)=\frac{A}{2}+\frac{2A}{\pi}\,e^{-\frac{\omega_0^2}{2D_0^2}}\cos\!\left(\frac{2\pi}{T}x\right). \tag{11}$$

Comparing with Eq. (2), the average intensity and the intensity modulation are expressed as follows:

$$a=\frac{A}{2}, \tag{12}$$
$$b=\frac{2A}{\pi}\,e^{-\frac{\omega_0^2}{2D_0^2}}. \tag{13}$$
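Equations (12) and (13) make the trade-off explicit: the modulation decays as exp(−ω0²/2D0²), so narrow fringes (large ω0) under strong defocus (small D0) lose contrast quickly. A minimal sketch:

```python
import math

def average_intensity(A):
    """Average intensity a = A/2 (Eq. (12))."""
    return A / 2

def modulation(A, T, D0):
    """Intensity modulation b = (2A/pi) * exp(-w0^2 / (2*D0^2)) (Eq. (13)),
    where w0 = 2*pi/T is the angular frequency of the square wave."""
    w0 = 2 * math.pi / T
    return (2 * A / math.pi) * math.exp(-w0 ** 2 / (2 * D0 ** 2))
```

For example, doubling the fringe period quarters the exponent and thus sharply increases the surviving modulation.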

Filtering Rn(jω) properly is difficult because the actual Gaussian kernel is hard to estimate. In our approach, PSF measurement is introduced, which provides a practical way to manipulate the projector defocus blur directly.

2.2 Defocus PSF modeling and measurement

PSF measurement is conducted as a calibration to bridge the gap between the physical defocus effect and the image simulation. Its purpose is to estimate the actual cutoff frequency D0 of the low-pass Gaussian filter produced by the projector defocus blur, from the projector's perspective, which is then used to analyze the effect quantitatively.

The focusing distance is easy to control directly, which makes it an ideal quantity for quantifying the defocus degree in practice. The simplified projector optical path is shown in Fig. 3. From geometric optics [24], the radius of the blur spot can be derived as follows:

$$R=\left|\frac{D}{2}\left(\frac{V}{S}-1\right)\right|+\frac{rV}{U}, \tag{14}$$
where r is the approximated radius of the Digital Micro-mirror Device (DMD) unit, D is the diameter of the aperture, R is the radius of the blur spot, U is the object distance, V is the image distance, and S is the focusing distance.


Fig. 3 Simplified optical path diagram of a projector with defocus blur where 2r is the approximated diameter of the DMD unit, D is the diameter of the aperture, R is the radius of the blur spot, U is the object distance, V is the image distance, and S is the focusing distance. The focus plane is where the clearest image can be observed, whereas the image plane is where the light screen is located.


U and V can be approximated as constants. On the one hand, the variation of the object distance U is small when the focusing distance of the projector changes. On the other hand, the depth of the measured object is small compared with the working distance, and the object moves little in depth during each phase-shifting period. This indicates that the image distance V is determined mainly by the working distance of the FPP system. Therefore, the blur spot radius R can be simplified as a function of a single independent variable, the focusing distance S. Hence,

$$R(S)=\left|\frac{k_1}{S}-k_2\right|+k_3, \tag{15}$$
where $k_1=DV/2$, $k_2=D/2$, and $k_3=rV/U$. These parameters can be determined by calibration.

A variety of PSF measurement methods for projectors have been proposed [18,19,25]. Here, we adopt the simplified method proposed by Bimber and Emmerling [18]. As illustrated in Fig. 4(a), the PSF in the spatial domain, which is a blur spot, can be observed directly by projecting a point-source matrix as the input image. By assuming that the projector satisfies pinhole imaging and that the intensity distribution of the blur spot is Gaussian, we consider only the low-pass effect and ignore other aberrations. By fitting a Gaussian distribution to the intensity of each detected blur spot, we estimate the relationship between the blur spot radius and σh in Eq. (6) as follows:

$$R=3\sigma_h, \tag{16}$$
where R is the radius of the blur spot in image plane coordinates. From Eqs. (6) and (7), the cutoff frequency of the Gaussian filter can be estimated as follows:


Fig. 4 Principle of PSF measurement: (a) Principle of measurement used to estimate the defocus PSF. (b) Setup of PSF measurement pipeline. The defocus blur spot under different focusing distances can be observed on the light screen and recorded by the camera.


$$D_0=\frac{3}{R}. \tag{17}$$

As illustrated in Fig. 4(b), the blur spot under different focusing distances can be observed. By substituting Eq. (15) into Eq. (17), we express the relationship between the focusing distance and the cutoff frequency as follows:

$$D_0(S)=\frac{3}{\left|\frac{k_1}{S}-k_2\right|+k_3}. \tag{18}$$
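Equations (15)–(18) combine into a direct mapping from focusing distance to cutoff frequency. The sketch below uses the illustrative constants k1 = 1,782, k2 = 27, and k3 = 9 from the simulation in Section 2.4 as defaults; a real system must calibrate them:

```python
def blur_radius(S, k1=1782.0, k2=27.0, k3=9.0):
    """Blur-spot radius R(S) = |k1/S - k2| + k3 (Eq. (15)), in pixels."""
    return abs(k1 / S - k2) + k3

def cutoff_frequency(S, k1=1782.0, k2=27.0, k3=9.0):
    """Gaussian cutoff frequency D0(S) = 3 / R(S) (Eqs. (17) and (18))."""
    return 3.0 / blur_radius(S, k1, k2, k3)
```

With these constants, the radius is minimal (R = k3 = 9 pixels) at S = k1/k2 = 66, i.e., when the focus plane coincides with the image plane.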

2.3 Single-point sinusoidal error criteria

Given the square-binary fringes and the actual filter kernel, an objective function must be built for defocusing parameter optimization. However, generating and evaluating different phase-shifting images is still time consuming. For square-binary fringes, the evaluation can be simplified further. In this section, the phase error of temporal unwrapping is analyzed further on the basis of Eq. (8).

According to previous research [26,27], the variance of the phase error σϕ² is independent of ϕ and directly proportional to the reconstruction error, which can be represented as follows:

$$\sigma_\phi^2=\frac{2\sigma^2}{Nb^2}, \tag{19}$$
where σ is the standard deviation (STD) of the intensity noise, N is the number of steps used, and b is the fringe modulation.

We generalize Eq. (19) to binary defocusing techniques. After a full-period phase shift, the intensity of a single point in the pattern captured by the camera should ideally form a temporal sinusoidal wave. However, various noise sources always exist. In practice, the image recorded at the ith step is expressed as follows:

$$I_i^c(x^c,y^c)=I_i(x^c,y^c)+\eta_i(x^c,y^c), \tag{20}$$
where $\eta_i(x^c,y^c)$ is the noise at the ith step.

Instead of only the intensity noise, σ can be generalized to characterize the overall error, i.e., the difference between the ideal sinusoidal wave and the actual wave formed by phase shifting. As illustrated in Fig. 5, σ is hereby defined as the STD of the intensity residuals between the ideal sinusoidal wave and the measured wave formed by the single-point intensity variation at each phase-shifting step. For a given N-step phase-shifting method, the number of steps is constant in application (usually three or four). Furthermore, we add the factor T′ (the normalized fringe period) to better reflect the relative sensitivity caused by the fringe period. Therefore, Eq. (19) can be simplified further, and the single-point sinusoidal error (in short, sinusoidal error) is defined as follows:

$$\sigma_{std}=\begin{cases}\dfrac{\sigma T'}{b},&\text{if }\dfrac{\sigma}{b}\le 1\\[4pt]1,&\text{otherwise}\end{cases}\times 100\%, \tag{21}$$
where $T'=T/T_{max}$ is the normalized fringe period.


Fig. 5 Principle of the single-point sinusoidal error criteria: (a) The setup of the single-point sinusoidal error consists of a projector, a camera, and a light screen. (b) The recorded intensity variation of the red point in (a) at each phase step and its fitted ideal sinusoidal wave. (c) The residual between the measured and fitted values at each phase step.


Normally, the noise is smaller than the modulation. However, σ ≫ b may occur, for instance, when the measured wave degenerates into a direct-current signal with large noise. This situation usually arises in binary defocusing techniques when the defocus is large but the square-binary fringes are narrow.

For binary defocusing techniques, according to the frequency-domain analysis presented in Section 2.1, σ can be separated further into two parts. One part is caused by the high-order harmonics, which can be represented as the inverse Fourier transform of Eq. (10), $\mathcal{F}^{-1}\{R_n(j\omega)\}$, whereas all other noise is denoted by N, the stochastic noise obtained by estimation.

From Eq. (10), the following equation can be derived:

$$\sigma=\sqrt{\sum_{k=1}^{\infty}\left[\frac{2A}{\pi(2k+1)}\right]^2 e^{-\frac{[(2k+1)\omega_0]^2}{D_0^2}}+N}. \tag{22}$$

By substituting Eqs. (13) and (22) into Eq. (21), we can rewrite the single-point sinusoidal evaluation as follows:

$$\sigma_{std}=T'\sqrt{\sum_{k=1}^{\infty}\frac{1}{(2k+1)^2}\,e^{-\frac{\left[(2k+1)^2-1\right]\omega_0^2}{D_0^2}}+N'\,e^{\frac{\omega_0^2}{D_0^2}}}\times 100\%, \tag{23}$$
where $N'=\pi^2 N/(4A^2)$ is the normalized noise; ω0 = 2π/T is the angular frequency of the square wave; and D0 is the cutoff frequency of the Gaussian filter. Moreover, we similarly set σstd = 100% if σstd > 100%. Equation (23) is the objective function.
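Equation (23) is straightforward to evaluate numerically. The sketch below truncates the series and applies the 100% cap stated above:

```python
import math

def sigma_std(T, D0, n_prime, T_max, k_max=10000):
    """Single-point sinusoidal error of Eq. (23), in percent, capped at 100%."""
    w0 = 2 * math.pi / T
    r = (w0 / D0) ** 2
    if r > 700:  # exp(r) would overflow; the error is saturated anyway
        return 100.0
    harmonics = sum(math.exp(-((2 * k + 1) ** 2 - 1) * r) / (2 * k + 1) ** 2
                    for k in range(1, k_max + 1))
    value = (T / T_max) * math.sqrt(harmonics + n_prime * math.exp(r)) * 100.0
    return min(value, 100.0)
```

With N′ = 0.0001π, Tmax = 90 pixels, T = 32 pixels, and D0 = 3/14.6 (the setup used later in Section 3.2), the predicted error is on the order of 1%.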

2.4 Parameters selection

By collecting Eqs. (5), (15), (17), and (23) and setting σstd = 100% if σstd > 100%, the evaluation can be simulated using the equation set in Eq. (24). In this section, we first simulate the sinusoidal error and the PSF measurement results and then analyze the parameter selection.

The relationship between R, T, and σstd is illustrated in Fig. 6, whereas the relationship between R and S is shown in Fig. 7. For the evaluation, we set k from 1 to 10,000 and estimate the noise N′ as 0.0001π from experiment. We initially set k1 = 1,782, k2 = 27, and k3 = 9 in Fig. 7 to illustrate the formula. The accurate parameters are calibrated through the experiment presented in the subsequent section.


Fig. 6 Simulation of the single-point sinusoidal error: (a) Sinusoidal error in different combinations of R and T. (b) Cross section of the surface when T = 25 pixels, which is denoted on the surface by a blue line. (c) Cross section of the surface when R = 15 pixels, which is denoted on the surface by a red line.



Fig. 7 Simulation of the projector defocus: (a) The relationship between blur spot radius R in the raw input image coordinate and projector focusing distance S. (b) The estimated cutoff frequency.


$$\begin{cases}\sigma_{std}=T'\sqrt{\displaystyle\sum_{k=1}^{\infty}\frac{1}{(2k+1)^2}\,e^{-\frac{\left[(2k+1)^2-1\right]\omega_0^2}{D_0^2}}+N'\,e^{\frac{\omega_0^2}{D_0^2}}}\times 100\%\\[6pt]\omega_0=\dfrac{2\pi}{T}\\[4pt]D_0=\dfrac{3}{R}\\[4pt]R(S)=\left|\dfrac{k_1}{S}-k_2\right|+k_3.\end{cases} \tag{24}$$

The sinusoidal error forms a convex function. The optimized parameter combination makes σstd as small as possible with acceptable modulation and PSF tolerance. The goal is to minimize Eq. (24) according to the practical application, of which there are two types.

In most cases, the fringe period is known; for example, it is determined by the capability of the phase unwrapping method [27], which provides the possible fringe period selections. The minimum sinusoidal error then indicates the appropriate PSF radius, and the PSF measurement helps to set the focusing distance accurately based on the blur radius. As illustrated in Fig. 6(b), if the fringe period is selected as 25 pixels, then 72 cm (R = 11 pixels) is an ideal focusing distance.

In other cases, the defocusing capability of the setup (i.e., the PSF) is known, and Eq. (24) helps to exclude fringe periods that cannot be filtered properly by the practical projector. As shown in Fig. 6(c), fringe periods between 25 and 35 pixels would be appropriate if the focusing distance is set at 85 cm. However, not all of them are acceptable, because the PSF varies with depth, and the acceptable range of cutoff frequencies limits the fringe period selection. A better fringe period maintains a low sinusoidal error over a range of PSFs, thereby providing a measuring range sufficient for the measured object. As illustrated in Fig. 8, a small fringe period has a small range of PSF tolerance, which limits the measuring range.
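Both application types reduce to a small search over Eq. (24). The following sketch uses illustrative values (Tmax = 90 pixels, N′ = 0.0001π, series truncated at k = 1000), and the 5% threshold in the second case is an arbitrary example cutoff:

```python
import math

def sinusoidal_error_pct(T, R, n_prime=1e-4 * math.pi, T_max=90, k_max=1000):
    """Sinusoidal error (%) of Eq. (24) for fringe period T and blur radius R (pixels)."""
    w0, D0 = 2 * math.pi / T, 3.0 / R
    r = (w0 / D0) ** 2
    if r > 700:  # exp(r) would overflow; the error is saturated anyway
        return 100.0
    harm = sum(math.exp(-((2 * k + 1) ** 2 - 1) * r) / (2 * k + 1) ** 2
               for k in range(1, k_max + 1))
    return min((T / T_max) * math.sqrt(harm + n_prime * math.exp(r)) * 100.0, 100.0)

# Case 1: the fringe period is fixed (e.g., by the unwrapping method);
# find the blur radius with the minimum error.
best_R = min(range(1, 41), key=lambda R: sinusoidal_error_pct(25, R))

# Case 2: the defocus capability is fixed; keep only periods with low error.
usable_T = [t for t in range(9, 91) if sinusoidal_error_pct(t, 15) < 5.0]
```

For T = 25 pixels, this search picks R = 11 pixels, consistent with the 72 cm focusing distance read off Fig. 6(b).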


Fig. 8 Sinusoidal error with different normalized noise sets: (a) 0.00001π, (b) 0.0001π, which is also the top view of Fig. 6(a), (c) 0.001π, and (d) 0.01π. The white lines denote the cutoff frequency that is equal to the integral multiples of ω0 from D0 = ω0 to D0 = 10ω0. The minimum sinusoidal error region rotates clockwise and shrinks when the noises increase.


Furthermore, as indicated in the simulation, when the focusing distance is smaller than the working distance, the cutoff frequency changes rapidly, which makes determining the appropriate defocusing degree difficult. By contrast, the cutoff frequency changes gently when the focusing distance is larger than the working distance; the low-pass effect can then be approximated as constant, which provides a larger measuring range for the binary defocusing FPP system. Therefore, in contrast to the traditional approach in which the object is positioned behind the focus plane, positioning the object in front of the focus plane extends the range of depth over which the binary pattern becomes a high-quality sinusoidal pattern.

The numerical solution shows that the minimum-error region lies around D0 = ω0 (T ≈ 2πR/3), which suggests a parameter strategy under the constraints of both the PSF and the fringe period. The noise estimate also affects the parameter choice. As shown in Fig. 8, the minimum-error region rotates clockwise and shrinks with increasing noise, which indicates that the defocusing degree should be weakened to improve the signal-to-noise ratio. Therefore, the region between D0 = ω0 and D0 = 2ω0 can be acceptable if the noise is high. Hence, according to Eq. (24), T ≈ 2πR/3 is the best strategy for setting the parameters, where R can be set accurately via the focusing distance S after PSF measurement.

3. Experiment

To verify the performance of the model, we first calibrate the PSFs. Then, we evaluate the sinusoidal error for different defocusing parameters experimentally. Finally, we conduct 3D object shape measurements to test the performance. The experimental setup depicted in Fig. 9 consists of a CMOS camera (Basler acA2500-14gm) with a resolution of 2,592 × 2,048, a DMD-based projector with a resolution of 1,920 × 1,080, and a movable 425 mm × 310 mm light screen. The angle between the camera and the projector is 6 degrees.


Fig. 9 Experimental setup.


3.1 Defocus PSF Measurement

The light screen is fixed at 55 cm, which is close to the actual working distance of our FPP system. The range of the projector focusing distance, determined by the projector device, is from 45 cm to 520 cm. A total of 20 different focusing distances are sampled to fit the coefficients. A 12 × 21 dot matrix pattern with a resolution of 1,920 × 1,080 is used as the input image for the projector, which ensures that the blur spots do not overlap. Each dot consists of only 1 pixel to generate a point light source. The measurement is made in a dark room to reduce noise.

In practice, the PSF measurement pipeline has the following steps:

Step 1: Adjust the focusing distance of the projector and record the focusing distance S. Fix the light screen at the working distance. Project a dot matrix pattern and record the blur spot image Is. Project a black image to measure the bias, recorded as Ib.

Step 2: Undistort the geometric distortion caused by the camera based on the calibrated camera intrinsic parameters. A large geometric distortion affects the radius estimation and thus the accuracy.

Step 3: Calculate Ip = Is − Ib. Segment the pattern region and transform it into raw input image coordinates by projective transformation.

Step 4: Find the contour of each blur spot. Discard blur spots with other large aberrations, such as coma. Fit a circle to the contour and estimate the radius.

The estimated radii are illustrated in Fig. 10(a). We fit the data using Eq. (15). The adjusted r² is 0.9730, which confirms that the blur spot radius can be formulated as a function of the focusing distance. As illustrated in Fig. 10(a), when the focusing distance is larger than 250 cm, the blur spot radius increases slowly, and the variation of the cutoff frequency of the optical low-pass filter is relatively small. This indicates that, when the focusing distance is larger than 250 cm, a sinusoidal pattern of acceptable quality can be obtained if the fringe period is selected properly.
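For reference, the linear branch of this calibration can be sketched as follows: for S < k1/k2 (focus plane in front of the screen), Eq. (15) reduces to R = k1·(1/S) + (k3 − k2), which is linear in 1/S, so a polynomial fit recovers the coefficients. This is a minimal sketch on synthetic, noise-free data (constants from Section 2.4); the full calibration fits the absolute-value model of Eq. (15) over both branches with a nonlinear least-squares routine.

```python
import numpy as np

# Synthetic calibration samples on the near branch (S < k1/k2 = 66):
# focusing distance S vs. blur radius R (pixels) generated from Eq. (15).
k1, k2, k3 = 1782.0, 27.0, 9.0
S = np.array([45.0, 48.0, 51.0, 54.0, 57.0, 60.0])
R = k1 / S - k2 + k3                          # |k1/S - k2| = k1/S - k2 here

slope, intercept = np.polyfit(1.0 / S, R, 1)  # R = slope*(1/S) + intercept
# slope recovers k1; intercept recovers k3 - k2
```

On real, noisy data the same fit gives the least-squares estimates of k1 and k3 − k2 for the near branch.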


Fig. 10 Experimental results: (a) Radius estimation result based on PSF measurement. (b) Measured sinusoidal error under different fringe periods with the focusing distance fixed at 85 cm, where the blur spot radius is approximately 14.6 pixels.


3.2 Sinusoidal error of different parameter combinations

After calibration, the sinusoidal error evaluation experiment is conducted. In practice, the evaluation pipeline has the following steps:

Step 1: Record the N-step phase-shifting pattern images. Each phase step can be recorded multiple times and averaged to reduce noise.

Step 2: Select a single-point position in the recorded images and fit the phase–intensity wave with a cosine function.

Step 3: Record the fitted modulation b. Calculate the STD of the difference between the fitted and measured values at each phase step, which is σ.

Step 4: Calculate the sinusoidal error according to Eq. (21).
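Steps 2–4 amount to a linear least-squares cosine fit, since the model I = a + b·cos(ϕ + δ) is linear in (a, b cos ϕ, −b sin ϕ). A minimal sketch:

```python
import numpy as np

def sinusoidal_error(samples, T, T_max):
    """Fit I_i = a + b*cos(phi + 2*pi*i/N) to the single-point intensities and
    return (b, sigma, sigma_std in percent) following Eq. (21)."""
    samples = np.asarray(samples, dtype=float)
    N = len(samples)
    delta = 2 * np.pi * np.arange(N) / N
    # The model is linear in (a, p, q) with p = b*cos(phi), q = -b*sin(phi).
    design = np.column_stack([np.ones(N), np.cos(delta), np.sin(delta)])
    coef, *_ = np.linalg.lstsq(design, samples, rcond=None)
    fitted = design @ coef
    b = float(np.hypot(coef[1], coef[2]))     # fitted modulation
    sigma = float(np.std(samples - fitted))   # STD of the residuals
    if b == 0.0:
        return b, sigma, 100.0
    return b, sigma, min(sigma * (T / T_max) / b, 1.0) * 100.0
```

The returned σstd is the quantity plotted in Figs. 10(b) and 12.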

We first fix the defocusing degree and vary the fringe period. The focusing distance is set at S = 85 cm, where the estimated radius R is approximately 14.6 pixels, and square-binary fringe patterns are generated from T = 9 pixels to T = 72 pixels. We fix the light screen at 55 cm, the working distance, and project the phase-shifting fringe patterns. The measured result and the cross-section of the simulation surface at 85 cm are illustrated in Fig. 10(b). The simulation model fits the experimental result well. As shown in Fig. 11, the residuals at 9, 18, and 32 pixels are relatively small. However, the sinusoidal errors at 9 and 18 pixels are still unacceptable because of the small modulation: if the defocusing strength is diminished, the PSF tolerance also decreases, and the measurement quality changes rapidly even with the small depth variation of the object. By contrast, T = 32 pixels provides a larger measuring range. The small sinusoidal error indicates that T = 32 pixels is an ideal parameter choice with high modulation and small residual under S = 85 cm. Furthermore, T = 2πR/3 ≈ 31 pixels, which indicates that the first-order frequency of the T = 32 pixels pattern is close to the cutoff frequency of the optical low-pass filter; hence, ω0 ≈ D0.


Fig. 11 Sinusoidal error evaluation experimental results. The focusing distance is fixed at 85 cm, where the blur spot radius is approximately 14.6 pixels. The period of the square-binary fringe is 9, 15, 18, 21, 27, 32, 36, 40, 45, 51, 54, and 72 pixels. The first and third rows are the measured wave formed by the intensity variation and the fitted sinusoidal wave, respectively. The second and fourth rows are the residuals between the measured wave and the fitted sinusoidal wave at each phase step.


Six-by-six combinations of focusing distance and fringe period are sampled for the experiment after radius estimation and sinusoidal error simulation. The fringe period T is sampled every 15 pixels from T = 15 pixels to T = 90 pixels. The focusing distances range from 55 cm to 250 cm and are transformed into radii according to the data shown in Fig. 10. Each pattern is phase-shifted in 15 steps. The distribution of the experimental sinusoidal error for each combination is similar to the simulation presented in Section 2.4, as illustrated in Figs. 12(a) and 12(b). As shown in Fig. 13, although the measured error is relatively high around the edge of the surface, it coincides well with the simulation in the central region, where the sinusoidal error is relatively low. The minimum region is located near D0 = ω0, as expected. Moreover, the experimental results also confirm the correctness of the calibration, which indicates that the defocusing degree can be controlled quantitatively after PSF measurement.


Fig. 12 Result of the sinusoidal error experiment. (a) 3D surface of different combination results. (b) Top view of the surface.



Fig. 13 Experimental and simulation results of the parameter combinations in the same coordinate. The red spots denote the measured data. The simulation result displayed as the surface is the crop of Fig. 6(a).


3.3 3D shape measurement

3D shape measurement is conducted to test the performance. The working distance is fixed at 55 cm. The 4-step LS-PSA is adopted, which helps to further reject the harmonics {−10, −9, −8, −6, −5, −4, −2, 2, 3, 4, 6, 7, 8, 10} in the range [−10, 10] [23]. We first measure a standard plane board to test the accuracy. The fringe periods are selected as 16, 20, and 24 pixels, and the focusing distances are set at 65, 75, and 105 cm, respectively. Figure 14 shows the sinusoidal error simulated using Eq. (24). The 3D shape measurement results are illustrated in Fig. 15.

Fig. 14 Simulated single-point sinusoidal error for different defocusing degrees, calculated by Eq. (24).

Fig. 15 Measurement results of a standard plane board: (a) Standard plane board. (b) S = 65 cm with T = 16 pixels. (c) S = 65 cm with T = 20 pixels. (d) S = 65 cm with T = 24 pixels. (e) S = 75 cm with T = 16 pixels. (f) S = 75 cm with T = 20 pixels. (g) S = 75 cm with T = 24 pixels. (h) S = 110 cm with T = 16 pixels. (i) S = 110 cm with T = 20 pixels. (j) S = 110 cm with T = 24 pixels.

As illustrated in Fig. 14, the simulation results indicate that the sinusoidal error decreases as the fringe period increases when R is smaller than 10 pixels. By contrast, the sinusoidal error increases with the fringe period when R is larger. Furthermore, T = 16 pixels with R ≈ 7.6 pixels is the best of these nine combinations, with the minimum error.

The shape of the board is reconstructed successfully in all cases, which shows that the defocusing effects are acceptable. If the binary pattern were improperly defocused, phase retrieval would fail with the traditional FPP algorithm designed for sinusoidal patterns, causing a layered reconstruction. As shown in Figs. 15(h) and 15(i), a larger sinusoidal error predicts inferior measurement results. The fitting plane error confirms that S = 65 cm with T = 16 pixels is the best parameter combination for improving the quality. The measurement results also show that the fitting plane errors are minimized when the parameter combination is close to T ≈ 2πR/3. As shown in Table 1, the distribution of the fitting plane errors is consistent with our theoretical expectations, which indicates that the single-point sinusoidal error predicts the measurement accuracy with high flexibility.

Table 1. Fitting plane error comparison
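The fitting plane error reported in Table 1 can be reproduced in outline by least-squares plane fitting. The sketch below (the function name is ours) fits z = ax + by + c to the reconstructed points and returns the RMS out-of-plane residual:

```python
import numpy as np

def fitting_plane_error(x, y, z):
    """Least-squares fit of the plane z = a*x + b*y + c to measured
    points, returning the RMS of the out-of-plane residuals."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coeffs
    return np.sqrt(np.mean(residuals ** 2))

# A tilted plane with small synthetic measurement noise.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 500), rng.uniform(0, 100, 500)
z = 0.2 * x - 0.1 * y + 5.0 + rng.normal(0, 0.05, 500)
print(fitting_plane_error(x, y, z))   # approximately the injected noise level
```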

The variance of depth is relatively small because the standard plane board is flat. However, for most objects with thick and complex shapes, the PSF changes with depth. A plaster model is measured to verify the robustness. Because of its thickness, the model is placed roughly around the working distance of our system (55 cm). The measurement results demonstrate the robustness of our framework.
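The depth dependence of the blur can be sketched with the similar-triangles geometry of Fig. 3: the projection cone converges at the focusing distance S and re-expands beyond it, so the blur radius grows with the distance from the focus plane. The function below is our simplified thin-lens illustration, not the paper's calibrated model:

```python
def blur_radius(D, S, U):
    """Blur-spot radius on a screen at distance U from a projector lens of
    aperture diameter D focused at distance S (similar-triangles model:
    the projection cone converges at S and re-expands beyond it)."""
    return 0.5 * D * abs(U - S) / S

# Defocus grows as the screen moves away from the focus plane
# (distances in cm; the aperture value here is purely illustrative).
print(blur_radius(D=2.0, S=110.0, U=55.0))   # 0.5
print(blur_radius(D=2.0, S=110.0, U=110.0))  # 0.0 at the focus plane
```

This is why a thick object spanning a range of U experiences different defocusing degrees across its surface.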

A small fringe period can increase the FPP sensitivity; however, the measuring range is limited. For the measurement of this complex object, the fringe period is selected as 44 pixels with S = 110 cm according to the PSF measurement and Eq. (24). The measurement result is illustrated in Fig. 16. Although slight ripples are observed, the 3D shape can still be reconstructed by traditional FPP algorithms. The comparison shows that, when the focusing distance is set at 110 cm as predicted by our framework, the result is more complete than with the other parameter combinations. As shown in Fig. 16(c), some parts of the reconstruction are missing in the large-depth regions of the model when the focusing distance is set closer than predicted. As shown in Fig. 16(d), the measurement results are worse in the large-depth regions of the model, although the ripples on the face are weakened with the small fringe period combination. This is because the fringes remain square on the hat of the plaster model even though they are defocused properly on the face.
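Reading the combination rule as T ≈ 2πR/3, the period selection can be condensed into a one-line helper (ours, illustrative only), which reproduces the best combination found for the plane board:

```python
import math

def recommended_period(R):
    """Fringe period suggested by the minimum-error condition T = 2*pi*R/3
    (our reading of the combination rule in the text)."""
    return 2.0 * math.pi * R / 3.0

# With the calibrated radius R = 7.6 pixels at S = 65 cm,
# the rule suggests T = 16 pixels, matching the best combination above.
print(round(recommended_period(7.6)))   # -> 16
```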

Fig. 16 Measurement results of a plaster model: (a) Plaster model. (b) Measurement results when the focusing distance is set at 110 cm with T = 44 pixels. (c) Measurement results when the focusing distance is set at 90 cm with T = 44 pixels. (d) Layered measurement results when the focusing distance is set at 65 cm with T = 16 pixels. (e) The fringe pattern captured when the focusing distance is set at 65 cm with T = 16 pixels.

4. Conclusions

We have proposed a defocusing parameter selection framework based on PSF measurement to improve the quality of the 3D shape measurement system with the square-binary defocusing technique. The defocusing degree is calibrated by PSF measurement and controlled quantitatively in practice by the focusing distance of the projector. The mathematical derivation and experiments indicate that the effects of different parameter combinations can be approximated and analyzed flexibly by using Eq. (24). The single-point sinusoidal error forms a convex surface that predicts the best strategy for parameter combination. The simulation and experimental results indicate that the best combination should roughly satisfy T ≈ 2πR/3, which guides the selection of the focusing distance and fringe period.

In the presented research, a Gaussian distribution is used to fit the intensity of the blur spot, and a linear spatially invariant model is assumed. However, the PSF actually varies across the same image plane. Furthermore, the transfer function of the defocus PSF has zeros for larger defocusing, which causes contrast inversion of the fringes. The accuracy of the PSF measurement influences the selection of the focusing distance. Therefore, more convenient and accurate PSF measurement methods can be researched further. In addition, the FTF of LS-PSAs helps to reduce the harmonic content. The joint optimization, combining the defocusing strategy with the FTF of LS-PSAs for a certain number of steps, can also be investigated analytically in future work.
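A moment-based Gaussian fit of the blur-spot profile, as used here for PSF calibration, can be sketched as follows (our simplified 1-D version; the paper's actual fitting procedure may differ):

```python
import numpy as np

def gaussian_sigma_estimate(profile):
    """Estimate the Gaussian width of a 1-D blur-spot intensity profile
    from its second central moment (a simple moment-based PSF fit)."""
    x = np.arange(len(profile), dtype=float)
    w = profile / profile.sum()          # normalize to a probability weight
    mean = (w * x).sum()                 # centroid of the spot
    return np.sqrt((w * (x - mean) ** 2).sum())

# A synthetic blur spot with sigma = 4 pixels, centered at pixel 50.
x = np.arange(101, dtype=float)
profile = np.exp(-0.5 * ((x - 50.0) / 4.0) ** 2)
print(gaussian_sigma_estimate(profile))  # approximately 4.0
```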

Funding

National Natural Science Foundation of China (NSFC) (61735003, 61475013); Program for Changjiang Scholars and Innovative Research Team in University (IRT_16R02).

References

1. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  

2. X. Y. Su and Q. C. Zhang, “Dynamic 3-D shape measurement method: A review,” Opt. Lasers Eng. 48(2), 191–204 (2010). [CrossRef]  

3. S. Zhang, “Flexible 3D shape measurement using projector defocusing: extended measurement range,” Opt. Lett. 35(7), 934–936 (2010). [CrossRef]   [PubMed]  

4. S. Y. Lei and S. Zhang, “Digital sinusoidal fringe pattern generation: Defocusing binary patterns VS focusing sinusoidal patterns,” Opt. Lasers Eng. 48(5), 561–569 (2010). [CrossRef]  

5. Y. Xu, L. Ekstrand, J. Dai, and S. Zhang, “Phase error compensation for three-dimensional shape measurement with projector defocusing,” Appl. Opt. 50(17), 2572–2581 (2011). [CrossRef]   [PubMed]  

6. B. Li, Y. Wang, J. Dai, W. Lohry, and S. Zhang, “Some recent advances on superfast 3D shape measurement with digital binary defocusing techniques,” Opt. Lasers Eng. 54(1), 236–246 (2014). [CrossRef]  

7. G. A. Ayubi, J. A. Ayubi, J. M. Di Martino, and J. A. Ferrari, “Pulse-width modulation in defocused three-dimensional fringe projection,” Opt. Lett. 35(21), 3682–3684 (2010). [CrossRef]   [PubMed]  

8. Y. Wang and S. Zhang, “Superfast multifrequency phase-shifting technique with optimal pulse width modulation,” Opt. Express 19(6), 5149–5155 (2011). [CrossRef]   [PubMed]  

9. C. Zuo, Q. Chen, S. Feng, F. Feng, G. Gu, and X. Sui, “Optimized pulse width modulation pattern strategy for three-dimensional profilometry with projector defocusing,” Appl. Opt. 51(19), 4477–4490 (2012). [CrossRef]   [PubMed]  

10. Y. Wang and S. Zhang, “Three-dimensional shape measurement with binary dithered patterns,” Appl. Opt. 51(27), 6631–6636 (2012). [CrossRef]   [PubMed]  

11. W. Lohry and S. Zhang, “Genetic method to optimize binary dithering technique for high-quality fringe generation,” Opt. Lett. 38(4), 540–542 (2013). [CrossRef]   [PubMed]  

12. J. S. Sun, C. Zuo, S. J. Feng, S. L. Yu, Y. Z. Zhang, and Q. Chen, “Improved intensity-optimized dithering technique for 3D shape measurement,” Opt. Lasers Eng. 66, 158–164 (2015). [CrossRef]  

13. D. Zheng, F. Da, Q. Kemao, and H. S. Seah, “Phase error analysis and compensation for phase shifting profilometry with projector defocusing,” Appl. Opt. 55(21), 5721–5728 (2016). [CrossRef]   [PubMed]  

14. M. Zhang, Q. Chen, T. Tao, S. Feng, Y. Hu, H. Li, and C. Zuo, “Robust and efficient multi-frequency temporal phase unwrapping: optimal fringe frequency and pattern sequence selection,” Opt. Express 25(17), 20381–20400 (2017). [CrossRef]   [PubMed]  

15. A. Kamagara, X. Wang, and S. Li, “Optimal defocus selection based on normed Fourier transform for digital fringe pattern profilometry,” Appl. Opt. 56(28), 8014–8022 (2017). [CrossRef]   [PubMed]  

16. A. Mosleh, P. Green, E. Onzon, I. Begin, and J. M. P. Langlois, “Camera Intrinsic Blur Kernel Estimation: A Reliable Framework,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 4961–4968.

17. A. Mosleh, J. M. P. Langlois, and P. Green, “Image Deconvolution Ringing Artifact Detection and Removal via PSF Frequency Analysis,” in Proceedings of European Conference on Computer Vision (Springer, 2014), pp. 247–262. [CrossRef]  

18. O. Bimber and A. Emmerling, “Multifocal projection: A multiprojector technique for increasing focal depth,” IEEE Trans. Vis. Comput. Graph. 12(4), 658–667 (2006). [CrossRef]   [PubMed]  

19. M. Nagase, D. Iwai, and K. Sato, “Dynamic Control of Multiple Focal-Plane Projections for Eliminating Defocus and Occlusion,” in Proceedings of IEEE Conference on Virtual Reality (IEEE, 2010), pp. 293–294. [CrossRef]  

20. T. Nakamura, R. Horisaki, and J. Tanida, “Computational superposition projector for extended depth of field and field of view,” Opt. Lett. 38(9), 1560–1562 (2013). [CrossRef]   [PubMed]  

21. A. Pentland, T. Darrell, M. Turk, and W. Huang, “A simple, real-time range camera,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1989), pp. 256–261. [CrossRef]  

22. H. Hu and G. D. Haan, “Low Cost Robust Blur Estimator,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2006), pp. 617–620.

23. M. Servín, J. A. Quiroga, and J. M. Padilla, Fringe Pattern Analysis for Optical Metrology: Theory, Algorithms, and Applications (Wiley-VCH, 2014).

24. V. P. Namboodiri and S. Chaudhuri, “On defocus, diffusion and depth estimation,” Pattern Recognit. Lett. 28(3), 311–319 (2007). [CrossRef]  

25. L. Zhang and S. Nayar, “Projection defocus analysis for scene capture and image display,” in Proceedings of ACM SIGGRAPH (ACM, 2006), pp. 907–915. [CrossRef]  

26. J. Li, L. G. Hassebrook, and C. Guan, “Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity,” J. Opt. Soc. Am. A 20(1), 106–115 (2003). [CrossRef]   [PubMed]  

27. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

Figures (16)

Fig. 1 Parameter-choosing pipeline.
Fig. 2 (a) Cross-section of a square-binary fringe, which can be represented mathematically as a square wave, rect(x): x is the spatial coordinate, A is the amplitude, and T is the spatial period. (b) Absolute value of the Fourier spectrum of rect(x): ω is the angular frequency coordinate, and the angular frequency of rect(x) is ω0 = 2π/T.
Fig. 3 Simplified optical path diagram of a projector with defocus blur, where 2r is the approximate diameter of a DMD unit, D is the diameter of the aperture, R is the radius of the blur spot, U is the object distance, V is the image distance, and S is the focusing distance. The focus plane is where the clearest image can be observed, whereas the image plane is where the light screen is located.
Fig. 4 Principle of PSF measurement: (a) Principle of the measurement used to estimate the defocus PSF. (b) Setup of the PSF measurement pipeline. The defocus blur spot at different focusing distances can be observed on the light screen and recorded by the camera.
Fig. 5 Principle of the single-point sinusoidal error criterion: (a) The setup consists of a projector, a camera, and a light screen. (b) The recorded intensity variation of the red point in (a) at each phase step, together with its fitted ideal sinusoidal wave. (c) The residual between the measured and fitted intensities at each phase step.
Fig. 6 Simulation of the single-point sinusoidal error: (a) Sinusoidal error for different combinations of R and T. (b) Cross section of the surface at T = 25 pixels, denoted on the surface by a blue line. (c) Cross section of the surface at R = 15 pixels, denoted on the surface by a red line.
Fig. 7 Simulation of the projector defocus: (a) The relationship between the blur spot radius R in the raw input image coordinates and the projector focusing distance S. (b) The estimated cutoff frequency.
Fig. 8 Sinusoidal error with different normalized noise levels: (a) 0.00001π; (b) 0.0001π, which is also the top view of Fig. 6(a); (c) 0.001π; (d) 0.01π. The white lines denote cutoff frequencies equal to integer multiples of ω0, from D0 = ω0 to D0 = 10ω0. The minimum sinusoidal error region rotates clockwise and shrinks as the noise increases.
Fig. 9 Experimental setup.
Fig. 10 Experimental results: (a) Radius estimation result based on PSF measurement. (b) Measured sinusoidal error for different fringe periods, with the focusing distance fixed at 85 cm, where the blur spot radius is approximately 14.6 pixels.
Fig. 11 Sinusoidal error evaluation experimental results. The focusing distance is fixed at 85 cm, where the blur spot radius is approximately 14.6 pixels. The periods of the square-binary fringe are 9, 15, 18, 21, 27, 32, 36, 40, 45, 51, 54, and 72 pixels. The first and third rows show the measured wave formed by the intensity variation and the fitted sinusoidal wave, respectively. The second and fourth rows show the residual between the measured wave and the fitted sinusoidal wave at each phase step.
Fig. 12 Results of the sinusoidal error experiment: (a) 3D surface of the results for different parameter combinations. (b) Top view of the surface.
Fig. 13 Experimental and simulation results of the parameter combinations in the same coordinate frame. The red spots denote the measured data. The simulation result, displayed as the surface, is a crop of Fig. 6(a).
Fig. 14 Simulation results: single-point sinusoidal error for different defocusing degrees, calculated by Eq. (24).
Fig. 15 Measurement results of a standard plane board: (a) Standard plane board. (b) S = 65 cm with T = 16 pixels. (c) S = 65 cm with T = 20 pixels. (d) S = 65 cm with T = 24 pixels. (e) S = 75 cm with T = 16 pixels. (f) S = 75 cm with T = 20 pixels. (g) S = 75 cm with T = 24 pixels. (h) S = 110 cm with T = 16 pixels. (i) S = 110 cm with T = 20 pixels. (j) S = 110 cm with T = 24 pixels.
Fig. 16 Measurement results of a plaster model: (a) Plaster model. (b) Measurement results with the focusing distance set at 110 cm and T = 44 pixels. (c) Measurement results with the focusing distance set at 90 cm and T = 44 pixels. (d) Layered measurement results with the focusing distance set at 65 cm and T = 16 pixels. (e) The fringe pattern captured with the focusing distance set at 65 cm and T = 16 pixels.

Tables (1)

Table 1 Fitting plane error comparison

Equations (24)

$$I(x,y) = \mathrm{rect}(x,y) \otimes h(x,y), \tag{1}$$
$$I_i(x,y) = a(x,y) + b(x,y)\cos\!\left[\phi(x,y) + \frac{2\pi i}{N}\right], \tag{2}$$
$$\phi(x,y) = \phi(x_c,y_c) = \arctan\!\left[\frac{\sum_{i=0}^{N-1} I_i^c(x_c,y_c)\,\sin\!\left(\frac{2i\pi}{N}\right)}{\sum_{i=0}^{N-1} I_i^c(x_c,y_c)\,\cos\!\left(\frac{2i\pi}{N}\right)}\right], \tag{3}$$
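The N-step phase retrieval of Eqs. (2)–(3) can be sketched for a single pixel as below. This is a minimal illustration, not the authors' implementation; it uses `atan2` to resolve the quadrant, with the sign convention fixed by the pattern model I_i = a + b·cos(φ + 2πi/N).

```python
import math

def retrieve_phase(samples):
    """Recover the wrapped phase phi from N phase-shifted intensity
    samples I_i = a + b*cos(phi + 2*pi*i/N), i = 0..N-1 (cf. Eqs. (2)-(3))."""
    n = len(samples)
    s = sum(I * math.sin(2 * math.pi * i / n) for i, I in enumerate(samples))
    c = sum(I * math.cos(2 * math.pi * i / n) for i, I in enumerate(samples))
    # The sum identities give s = -(n/2)*b*sin(phi) and c = (n/2)*b*cos(phi)
    # for n >= 3, so atan2(-s, c) returns phi wrapped to (-pi, pi].
    return math.atan2(-s, c)
```

Applying this to all camera pixels yields the wrapped phase map that is subsequently unwrapped and converted to depth.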
$$\mathrm{RECT}(j\omega) = A\pi \sum_{n=-\infty}^{\infty} \frac{\sin(n\pi/2)}{n\pi/2}\,\delta(\omega - n\omega_0) = A\pi\,\delta(\omega) + 2A \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}\,\delta\!\left[\omega \pm (2k+1)\omega_0\right], \tag{4}$$
$$\omega_0 = \frac{2\pi}{T}. \tag{5}$$
$$h(x) = \frac{1}{\sqrt{2\pi\sigma_h^2}}\, e^{-\frac{x^2}{2\sigma_h^2}}, \tag{6}$$
$$H(j\omega) = e^{-\frac{\omega^2}{2D_0^2}}, \tag{7}$$
$$F(j\omega) = \mathrm{RECT}(j\omega)\,H(j\omega) = A\pi\,\delta(\omega) + 2A\, e^{-\frac{\omega_0^2}{2D_0^2}} \left[\delta(\omega+\omega_0) + \delta(\omega-\omega_0)\right] + 2A \sum_{k=1}^{\infty} \frac{(-1)^k}{2k+1}\, e^{-\frac{[(2k+1)\omega_0]^2}{2D_0^2}}\,\delta\!\left[\omega \pm (2k+1)\omega_0\right]. \tag{8}$$
$$g(j\omega) = A\pi\,\delta(\omega) + 2A\, e^{-\frac{\omega_0^2}{2D_0^2}} \left[\delta(\omega+\omega_0) + \delta(\omega-\omega_0)\right], \tag{9}$$
$$R_n(j\omega) = 2A \sum_{k=1}^{\infty} \frac{(-1)^k}{2k+1}\, e^{-\frac{[(2k+1)\omega_0]^2}{2D_0^2}}\,\delta\!\left[\omega \pm (2k+1)\omega_0\right]. \tag{10}$$
$$G(x) = \frac{A}{2} + \frac{2A}{\pi}\, e^{-\frac{\omega_0^2}{2D_0^2}} \cos\!\left(\frac{2\pi}{T}x\right). \tag{11}$$
$$a = \frac{A}{2}, \tag{12}$$
$$b = \frac{2A}{\pi}\, e^{-\frac{\omega_0^2}{2D_0^2}}. \tag{13}$$
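Equations (11)–(13) predict that defocusing a 0-to-A binary fringe with a Gaussian PSF leaves a sinusoid of offset a = A/2 and amplitude b = (2A/π)·exp(−ω0²/2D0²). The following numerical sketch checks this prediction; it assumes D0 = 1/σ_h, and the values A = 1, T = 32 pixels, and σ_h = 3 used in the test are hypothetical.

```python
import math

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian PSF, sampled out to about 4*sigma."""
    radius = int(4 * sigma) + 1
    k = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def defocused_fringe(A, T, sigma, n):
    """Square-binary fringe of period T blurred by a Gaussian of std sigma
    (circular convolution over n samples; n should be a multiple of T)."""
    sq = [A if (x % T) < T // 2 else 0.0 for x in range(n)]
    ker = gaussian_kernel(sigma)
    r = len(ker) // 2
    return [sum(ker[j + r] * sq[(x + j) % n] for j in range(-r, r + 1))
            for x in range(n)]

def fundamental_amplitude(signal, T):
    """Amplitude of the 1/T frequency component via a direct DFT bin."""
    n = len(signal)
    re = sum(v * math.cos(2 * math.pi * x / T) for x, v in enumerate(signal))
    im = sum(v * math.sin(2 * math.pi * x / T) for x, v in enumerate(signal))
    return 2 * math.hypot(re, im) / n
```

For these values the measured fundamental amplitude agrees with Eq. (13) to within about 1%, the small residual coming from discretizing the square wave and the PSF.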
$$R = \left|\frac{D}{2}\left(\frac{V}{S} - 1\right)\right| + \frac{rV}{U}, \tag{14}$$
$$R(S) = \left|\frac{k_1}{S} - k_2\right| + k_3, \tag{15}$$
$$R = 3\sigma_h, \tag{16}$$
$$D_0 = \frac{3}{R}. \tag{17}$$
$$D_0(S) = \frac{3}{\left|\frac{k_1}{S} - k_2\right| + k_3}. \tag{18}$$
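The calibrated defocus model of Eqs. (15)–(18) maps the focusing distance S directly to a blur radius and a Gaussian cutoff frequency. A minimal sketch follows; the constants k1, k2, k3 must come from the PSF calibration of the actual system, and the values used in any example are hypothetical.

```python
def blur_radius(S, k1, k2, k3):
    """Blur-spot radius model of Eq. (15): R(S) = |k1/S - k2| + k3,
    where k1, k2, k3 are fitted from the PSF measurement."""
    return abs(k1 / S - k2) + k3

def cutoff_frequency(S, k1, k2, k3):
    """Gaussian cutoff frequency of Eq. (18): D0(S) = 3 / R(S)."""
    return 3.0 / blur_radius(S, k1, k2, k3)
```

Note that R(S) attains its minimum k3 at S = k1/k2 (the in-focus distance), so D0(S) peaks there and falls off on either side, matching the behavior simulated in Fig. 7.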
$$\sigma_\phi^2 = \frac{2\sigma^2}{N b^2}, \tag{19}$$
$$I_i^c(x_c,y_c) = I_i(x_c,y_c) + \eta_i(x_c,y_c), \tag{20}$$
$$\sigma_{std} = \begin{cases} \dfrac{\sigma T}{b}, & \text{if } \dfrac{\sigma T}{b} \le 1 \\ 1, & \text{otherwise} \end{cases} \times 100\%. \tag{21}$$
$$\sigma = \sqrt{\sum_{k=1}^{\infty} \left(\frac{2A}{\pi(2k+1)}\right)^{2} e^{-\frac{[(2k+1)\omega_0]^2}{D_0^2}} + N}. \tag{22}$$
$$\sigma_{std} = T' \sqrt{\sum_{k=1}^{\infty} \frac{1}{(2k+1)^2}\, e^{-\frac{[(2k+1)^2-1]\omega_0^2}{D_0^2}} + N'\, e^{\frac{\omega_0^2}{D_0^2}}} \times 100\%, \tag{23}$$
$$\begin{cases} \sigma_{std} = T' \sqrt{\displaystyle\sum_{k=1}^{\infty} \frac{1}{(2k+1)^2}\, e^{-\frac{[(2k+1)^2-1]\omega_0^2}{D_0^2}} + N'\, e^{\frac{\omega_0^2}{D_0^2}}} \times 100\% \\ \omega_0 = \dfrac{2\pi}{T} \\ D_0 = \dfrac{3}{R} \\ R(S) = \left|\dfrac{k_1}{S} - k_2\right| + k_3 \end{cases} \tag{24}$$
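The system of Eq. (24) ties the framework together: given the PSF-calibration constants and a noise level, the predicted sinusoidal error can be evaluated for any (T, S) pair and the best fringe period selected. A sketch follows; it treats T' and N' as fixed normalization constants supplied by the user, and every numeric value in the test (k1, k2, k3, N', T') is hypothetical rather than taken from the paper's system.

```python
import math

def sinusoidal_error(T, S, k1, k2, k3, N_prime, T_prime, kmax=50):
    """Predicted single-point sinusoidal error of Eq. (24) for a fringe
    period T (pixels) and a projector focusing distance S.
    The infinite harmonic sum is truncated at kmax terms."""
    w0 = 2.0 * math.pi / T                  # Eq. (5)
    D0 = 3.0 / (abs(k1 / S - k2) + k3)      # Eqs. (15) and (18)
    harmonics = sum(
        math.exp(-((2 * k + 1) ** 2 - 1) * w0 ** 2 / D0 ** 2) / (2 * k + 1) ** 2
        for k in range(1, kmax + 1))
    return T_prime * math.sqrt(harmonics + N_prime * math.exp(w0 ** 2 / D0 ** 2)) * 100.0

def best_period(periods, S, k1, k2, k3, N_prime, T_prime):
    """Pick the candidate fringe period with the smallest predicted error."""
    return min(periods, key=lambda T: sinusoidal_error(T, S, k1, k2, k3, N_prime, T_prime))
```

The two terms under the square root capture the trade-off discussed in the paper: a short period (large ω0) suppresses residual harmonics but amplifies noise through the exp(ω0²/D0²) factor, so the objective has an interior minimum over T for a given defocusing degree.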
