## Abstract

In fringe projection profilometry, the original purpose of projecting multi-frequency fringe patterns is to determine fringe orders automatically, thus unwrapping the measured phase maps. This paper shows that the same patterns simultaneously allow us to correct the effects of projector nonlinearity on the measured results. As is well known, the projector nonlinearity decreases the measurement accuracy by inducing ripple-like artifacts on the measured phase maps; theoretical analysis reveals that these artifacts, depending on the number of phase shifts, have frequencies that are multiples of the fringe frequencies. Based on this fact, we deduce an error function for modeling the phase artifacts and then suggest an algorithm estimating the function coefficients from a pair of phase maps of fringe patterns having different frequencies. Subtracting out the estimated phase errors yields accurate phase maps on which the effects of the projector nonlinearity are suppressed significantly. Experimental results demonstrate that the proposed method offers several advantages over existing ones: it works without a photometric calibration, remains applicable when the projector nonlinearity varies over time, and is efficient in implementation.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Fringe projection profilometry, a triangulation-based three-dimensional (3D) measurement method [1–4], provides whole-field information and is therefore widely used in many applications. In its implementation, multi-frequency fringe patterns are usually projected onto the measured object surface in order to determine the fringe orders and further unwrap the phase maps temporally [5]. With multi-frequency fringe projection profilometry, however, the luminance nonlinearity of the projector, which makes the fringes non-sinusoidal, remains one of the most crucial factors decreasing the measurement accuracy.

In fringe projection techniques, including those using multi-frequency fringe patterns, the standard approach for solving this nonlinearity issue is to employ a photometric calibration. A target plane is illuminated by the projector with a full range of illumination values. By recording the corresponding intensities, the response curve of the projector, relating the real brightness to the gray levels, is determined. This nonlinearity curve can be roughly represented with a power function emulating the gamma-correction property of a cathode ray tube (CRT) monitor [6–9], or more precisely fitted with a polynomial [10,11] or with a piecewise linear function [12]. Since this calibration is time-consuming, some methods perform the same task by projecting a few sinusoidal fringe patterns instead [6–11], at the expense of computational complexity.

Differing from the aforementioned methods, some efforts focus on exploiting the dependence of the phase errors on the projector nonlinearities. These techniques calibrate the phase errors rather than the projector nonlinearities themselves. For example, Zhang and Huang [13] established a look-up table (LUT) of phase errors, related to the known gamma value of the projector and to the three-step phase-shifting algorithm used. When analyzing the fringe patterns, this LUT is used to compensate for the phase errors. Later, Zhang and Yau [14] proposed an improved method suitable for any phase-shifting algorithm. Since a single-valued gamma is not sufficiently accurate for modeling the projector nonlinearity, more generalized LUT-based methods independent of a special nonlinearity function have also been developed [15–21]. Instead of using a phase error LUT, Pan et al. [22] derived, from the fringe harmonics, a nonlinear phase error function for the same purpose. With these methods, a calibration must be performed in advance to determine the phase error LUTs or the error function coefficients. In this procedure, usually a plane target is measured, and a large number of phase-shifting fringe patterns have to be captured to obtain accurate phase values of the target.

In measurement practice, the projector nonlinearity may vary over time [23], in which case the calibration-based techniques mentioned above may fail to correct the phase errors. To solve this problem, it is necessary to develop self-correcting methods that depend only on the captured fringe patterns, without a prior calibration. In the phase-shifting technique, most algorithms correspond to finite impulse response (FIR) filters extracting the fundamental frequency component from a temporal signal; their effectiveness in restraining the influences of harmonics depends on the number of sampled fringe patterns [24]. If only very few fringe patterns are available, the histograms of the fringe patterns can be used to estimate the projector nonlinearities [25,26]. Recently, we suggested an algorithm that directly determines a projector nonlinearity curve from the calculated fringe phases [27]. These statistics-based methods suffer from a somewhat high computational complexity. Ref. [28] proposed a depth map reconstruction technique immune to any projector errors, including those induced by its nonlinearity, but it requires a tedious searching procedure across the reference phase maps for the points having the same phase values.

In this paper, we present what is, to the best of our knowledge, a new correction technique for projector nonlinearity, suitable for improving the measurement accuracy of multi-frequency phase-shifting fringe projection profilometry. First, we deduce an error function modeling the effects of projector nonlinearity by analyzing the fringe harmonics. Based on this error function, we then suggest an iterative algorithm that estimates the function coefficients from a pair of phase maps of fringe patterns having different frequencies. By subtracting out the estimated phase errors, the effects of the projector nonlinearity on the measurement results are suppressed significantly. Experimental results demonstrate that the proposed method is effective in improving the measurement accuracy.

## 2. Multi-frequency fringe projection profilometry

#### 2.1 Phase measuring using phase-shifting algorithm

The measurement system of fringe projection profilometry, as illustrated in Fig. 1, mainly consists of a projector and a camera. The projector is used for casting sinusoidal fringe patterns onto the measured object surface. Assuming that the fringes generated in the computer have a direction along the *v*-axis, with (*u*, *v*) being the pixel coordinates on the projector plane, the *k*th (*k* = 0, 1, …, *K−*1) fringe pattern is represented with

$${g}_{k}(u,v)=\alpha +\beta \cos \left(2\pi fu+2\pi k/K\right),\tag{1}$$

where *α* and *β* denote the bias and contrast of the fringes, respectively, satisfying the inequality 0 ≤ *α* ± *β* ≤ 1; and *f* denotes the spatial frequency along the *u*-axis. The second term in the brackets is the added phase shift.

When projecting the generated fringe patterns in Eq. (1) onto the object surface using a projector, we have to consider the nonlinearity of this projector. By approximating the output brightness of the projector as a polynomial of degree *N*, i.e., $\lambda ={\displaystyle {\sum}_{n=0}^{N}{c}_{n}{g}^{n}}$, with *c _{n}* being its coefficients, the deformed fringe patterns captured by a camera are represented with

$${I}_{k}(x,y)=R(x,y){\displaystyle {\sum}_{n=0}^{N}}{c}_{n}{\left[{g}_{k}(u,v)\right]}^{n}+B(x,y),\tag{2}$$

where (*x*, *y*) denote the pixel coordinates of a point on the imaging plane of the camera; this point is illuminated by the projector pixel (*u*, *v*). *R*(*x*, *y*) is a scale factor depending on the local reflectivity and slope of the measured object surface, and *B*(*x*, *y*) is the additional intensity related to the ambient illumination. By expanding the binomials in Eq. (2), we can restate this equation in a more concise form

$${I}_{k}(x,y)={\displaystyle {\sum}_{n=0}^{N}}{d}_{n}(x,y)\cos \left[n\left(2\pi fu+2\pi k/K\right)\right].\tag{3}$$

From the captured fringe patterns, the phases at each pixel (*x*, *y*) can be estimated using the synchronous detection algorithm [29]

$$\psi (x,y)=-\arctan \left[\frac{{\displaystyle {\sum}_{k=0}^{K-1}}{I}_{k}(x,y)\sin (2\pi k/K)}{{\displaystyle {\sum}_{k=0}^{K-1}}{I}_{k}(x,y)\cos (2\pi k/K)}\right],\tag{4}$$

where *I _{k}*(*x*, *y*) denotes the *k*th captured fringe pattern.

The synchronous detection algorithm, depending on the number of phase shifts *K*, is insensitive to some harmonics in the fringe patterns. In other words, it can restrain the influence of the projector nonlinearity to a certain extent. According to the Fourier analysis of this algorithm [24], only if *K* ≥ *N* + 2 can it recover the accurate phase map without the errors induced by the projector nonlinearity.
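The harmonic-suppression property can be made concrete with a short sketch (our own illustration, not the paper's code; the quadratic response curve and all numerical values are assumed). A degree-2 polynomial nonlinearity (*N* = 2) is applied to ideal fringes, and the phase is retrieved by synchronous detection with *K* = 3 and *K* = 4:

```python
import numpy as np

def sync_detect(patterns):
    # Synchronous detection: project the K shifted frames onto the fundamental
    # (the arctangent rule of Eq. (4), under our sign convention).
    K = len(patterns)
    delta = 2 * np.pi * np.arange(K) / K
    num = -np.tensordot(np.sin(delta), patterns, axes=(0, 0))
    den = np.tensordot(np.cos(delta), patterns, axes=(0, 0))
    return np.arctan2(num, den)

phi = np.linspace(0.3, 6.0, 2000)            # ground-truth phases

def make_patterns(K):
    # Degree-2 polynomial response (illustrative), so N = 2: each captured
    # frame contains a 2nd-order harmonic of the fringe phase.
    frames = []
    for k in range(K):
        g = 0.5 + 0.4 * np.cos(phi + 2 * np.pi * k / K)
        frames.append(0.6 * g + 0.4 * g ** 2)
    return np.array(frames)

wrap_err = lambda e: np.abs(np.angle(np.exp(1j * e)))   # wrapped phase error
err3 = np.max(wrap_err(sync_detect(make_patterns(3)) - phi))
err4 = np.max(wrap_err(sync_detect(make_patterns(4)) - phi))
print(err3, err4)   # K = 3 < N + 2 leaves a ripple; K = 4 = N + 2 removes it
```

With *K* = 3 the second harmonic aliases onto the fundamental and a ripple of a few hundredths of a radian remains; with *K* = 4 ≥ *N* + 2 the residual drops to machine precision, matching the condition stated above.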

#### 2.2 Temporal phase unwrapping by use of multi-frequency fringe patterns

Because Eq. (4) involves an arctangent function, the calculated phases are restricted to the range of principal values from −π to π rad, and phase unwrapping must be performed. Spatial phase-unwrapping methods may fail when measuring an object having abrupt steps, severe discontinuities, or isolated surfaces, because they work by exploiting the spatial continuity of the phase maps in both horizontal and vertical directions. To solve this problem, one can project several sequences of fringe patterns having different frequencies onto the measured object surface and unwrap the phase map along the time axis [5].

In implementation, the temporal phase-unwrapping technique aligns wrapped phases with reference phases. To get the reference phases, at least two sequences of phase-shifting fringe patterns have to be captured. For example, we use the patterns having a high frequency, i.e., *f _{H}*, for measuring the object depths. Simultaneously, we use the patterns having a relatively low frequency, i.e., *f _{L}*, as an aid for unwrapping the high frequency phase map. By using the method in the previous subsection, the captured fringe patterns are analyzed, and the wrapped phase maps are calculated as *ψ _{H}*(*x*, *y*) and *ψ _{L}*(*x*, *y*), corresponding to the fringe frequencies *f _{H}* and *f _{L}*, respectively. Determination of the reference phase map depends on how the two fringe frequencies are selected.

**Case 1**. As one solution in measurement practice, *f _{H}* and *f _{L}* can be selected to be close to each other. In this case, the beat phases between the two fringe signals are used as the reference, namely ${\psi}_{R}(x,y)=W\left\{{\psi}_{H}(x,y)-{\psi}_{L}(x,y)\right\}$, with *W*{·} denoting a wrapping operator making the phase differences lie in the principal value range from −π to π rad. These beat phases correspond to a relatively low frequency ${f}_{R}={f}_{H}-{f}_{L}$. This low frequency implies a wide beat period without phase ambiguities.
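As a quick numerical illustration of Case 1 (the frequencies below are arbitrary choices for illustration, not those used in the paper), wrapping the difference of two close-frequency wrapped phase maps yields a ramp at the beat frequency *f _{H}* − *f _{L}*, whose period is far wider than either fringe period:

```python
import numpy as np

x = np.arange(2000)
f_H, f_L = 1 / 50, 1 / 55                   # two close fringe frequencies (illustrative)
wrap = lambda p: np.angle(np.exp(1j * p))   # W{.}: wrap into (-pi, pi]

psi_H = wrap(2 * np.pi * f_H * x + 0.15)    # wrapped high-frequency phase
psi_L = wrap(2 * np.pi * f_L * x)           # wrapped low-frequency phase
psi_R = wrap(psi_H - psi_L)                 # beat phase of Case 1

# The beat frequency is f_H - f_L = 1/550 cycles/pixel, so psi_R wraps only
# once every 550 pixels, while psi_H wraps every 50 pixels.
wraps_R = np.sum(np.abs(np.diff(psi_R)) > np.pi)
wraps_H = np.sum(np.abs(np.diff(psi_H)) > np.pi)
print(wraps_R, wraps_H)   # 4 wraps of the beat phase versus 40 of the fringe phase
```

The tenfold-wider unambiguous period is exactly what makes the beat phase usable as a reference.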

**Case 2**. Another popularly used method is to select *f _{H}* to be a multiple of *f _{L}*, in which case we simply use *ψ _{L}*(*x*, *y*) as the reference, viz. ${\psi}_{R}(x,y)={\psi}_{L}(x,y)$, with the reference frequency ${f}_{R}={f}_{L}$.

The reference phase map allows us to perform phase unwrapping within its unambiguous range. If *ψ _{H}*(*x*, *y*) and *ψ _{L}*(*x*, *y*) can be measured accurately, the unwrapped high frequency phases, denoted *Ψ _{H}*(*x*, *y*), are exactly equal to *ψ _{R}*(*x*, *y*) multiplied by the factor *f _{H}*/*f _{R}*. In measurement practice, however, the measured phase maps, especially the low frequency one, are not accurate enough, because they are affected by such factors as noise, illumination fluctuations, and projector nonlinearities. Instead, we use the reference phase *ψ _{R}*(*x*, *y*) just for determining the fringe orders:

$${\Psi}_{H}(x,y)={\psi}_{H}(x,y)+2\pi \,\mathrm{Round}\left[\frac{({f}_{H}/{f}_{R}){\psi}_{R}(x,y)-{\psi}_{H}(x,y)}{2\pi}\right].\tag{9}$$

With this principle, only when *f _{R}* has a small value can the unambiguous range of the reference phases be wide enough to cover the whole measurement field. Simultaneously, *f _{H}* is preferred to be high for getting a high measurement resolution. As a result, using Eq. (9) may lead to erroneous fringe orders, because the errors in *ψ _{R}*(*x*, *y*) are amplified by the factor *f _{H}*/*f _{R}*. To solve this issue, multiple fringe pattern sequences having from low to high frequencies can be used. Typically, we can start by projecting patterns having a single fringe, whose phase map has no ambiguity in fringe orders. The following pattern sequences have exponentially growing frequencies, and the phase map of each frequency is unwrapped by using the previously unwrapped phase map as the reference. In this relay way, from a low frequency phase map to a high frequency one, eventually the phase map of the highest frequency is unwrapped, thus achieving the highest measurement resolution.
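The order-determination principle can be sketched numerically. In this toy example (the noise levels, the frequency ratio, and the field of view are all assumed for illustration), a noisy coarse reference picks the fringe orders of a four-times-finer wrapped phase map, so the unwrapped result inherits only the fine map's small noise:

```python
import numpy as np

rng = np.random.default_rng(0)
wrap = lambda p: np.angle(np.exp(1j * p))      # wrap into (-pi, pi]

f_ratio = 4.0                                  # f_H / f_R
# Ground-truth high-frequency phase; its 1/f_ratio scaling stays inside
# (-pi, pi], so the reference phase needs no unwrapping of its own.
Phi_H = np.linspace(-4 * np.pi + 1, 4 * np.pi - 1, 1000)

psi_H = wrap(Phi_H + 0.01 * rng.standard_normal(Phi_H.size))            # precise but wrapped
psi_R = wrap(Phi_H / f_ratio + 0.05 * rng.standard_normal(Phi_H.size))  # coarse, noisier reference

# Fringe orders from the scaled reference, then unwrapping:
m = np.round((f_ratio * psi_R - psi_H) / (2 * np.pi))
Psi_H = psi_H + 2 * np.pi * m

print(np.max(np.abs(Psi_H - Phi_H)))   # only the small high-frequency noise remains
```

The orders come out correct as long as the amplified reference error stays below π, which is the reason the text warns against a large *f _{H}*/*f _{R}* ratio.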

Using multi-frequency fringe patterns allows us to determine fringe orders and thus unwrap the measured phase maps. With this method, however, the luminance nonlinearity of the projector remains a crucial factor decreasing the measurement accuracy. In the next section, we present a novel method of projector nonlinearity correction using fringe patterns having different frequencies.

## 3. Correction of projector nonlinearity

#### 3.1 Effect of projector nonlinearity

In [27], we analyzed the performance of the synchronous detection algorithm in the presence of projector nonlinearities. We denote the measured phases, after phase unwrapping, as *Ψ*(*x*, *y*), and the real phase values as *Φ*(*x*, *y*). The phase errors are derived to be of the form

$$\Psi (x,y)-\Phi (x,y)={\displaystyle {\sum}_{m=1}^{\infty}}{\displaystyle {\sum}_{n=2}^{N}}{\rho}_{m,n}\sin \left[mK\Phi (x,y)\right],\tag{11}$$

where *K* is the number of phase shifts and *N* is the degree of the polynomial representing the projector nonlinearity. From Eq. (3), we know that *N* is also the order of the highest harmonic in the fringe patterns. The coefficients *ρ _{m,n}* of this error function are given in Eq. (12), where *C* denotes the number of combinations.

Equation (11) reveals that the phase error induced by the projector nonlinearity is a linear combination of sine waves, depending on *K* and *N*, as well as on the coefficients *d _{n}* in Eq. (3). These errors appear as ripple-like artifacts on the calculated phase maps and, further, on the reconstructed 3D shapes. Their frequencies are at least *K* times higher than the deformed fringe frequencies, and the amplitudes of their harmonics depend on the extent of the projector nonlinearity. To simplify the expression, we define these amplitudes of harmonics as ${\xi}_{m}={\displaystyle \sum}_{n=2}^{N}{\rho}_{m,n}$, so that Eq. (11) becomes

$$\Psi (x,y)=\Phi (x,y)+{\displaystyle {\sum}_{m=1}^{\infty}}{\xi}_{m}\sin \left[mK\Phi (x,y)\right].\tag{13}$$

If the amplitudes *ξ _{m}* in Eq. (13) are calibrated, we can solve this equation for the real phases *Φ*(*x*, *y*). Without a prior calibration, however, the values of *ξ _{m}* together with *Φ*(*x*, *y*) are unknowns, leading to an underdetermined system whose number of unknowns is greater than that of its equations. In other words, it is impossible to accurately determine *Φ*(*x*, *y*) from a single map of phases.

#### 3.2 Algorithm of projector nonlinearity correction

In multi-frequency fringe projection profilometry, at least two phase maps of different frequencies are recovered, making it possible to determine and eliminate the error terms. According to Eqs. (12) and (13), the amplitudes of harmonics of the phase errors, i.e., *ξ _{m}*, are independent of the fringe frequencies and phases. In multi-frequency fringe projection, we use at least two sequences of fringe patterns having different frequencies. Provided the two sequences have the same number of phase shifts and the same bias and contrast, their phase errors should have the same amplitudes. Following Section 2.2, *Ψ _{H}* and *Ψ _{L}* denote the two unwrapped phase maps corresponding to the fringe frequencies *f _{H}* and *f _{L}*, respectively; both are calculated using the method in Section 2. Note that *Ψ _{H}* and *Ψ _{L}* are functions of (*x*, *y*); in the following text, we omit the coordinate notation (*x*, *y*) to shorten the expressions. For each pixel, we have a pair of equations like

$${\Psi}_{H}={\Phi}_{H}+{\displaystyle {\sum}_{m=1}^{\infty}}{\xi}_{m}\sin \left(mK{\Phi}_{H}\right),\qquad {\Psi}_{L}={\Phi}_{L}+{\displaystyle {\sum}_{m=1}^{\infty}}{\xi}_{m}\sin \left(mK{\Phi}_{L}\right),\tag{14}$$

where *Φ _{H}* and *Φ _{L}* are the real phases corresponding to the fringe frequencies *f _{H}* and *f _{L}*, respectively; they are also functions of the pixel coordinates. By noting that ${\Phi}_{L}=({f}_{L}/{f}_{H}){\Phi}_{H}$, Equation (14) becomes

$${\Psi}_{H}={\Phi}_{H}+{\displaystyle {\sum}_{m=1}^{\infty}}{\xi}_{m}\sin \left(mK{\Phi}_{H}\right),\qquad {\Psi}_{L}=\frac{{f}_{L}}{{f}_{H}}{\Phi}_{H}+{\displaystyle {\sum}_{m=1}^{\infty}}{\xi}_{m}\sin \left(mK\frac{{f}_{L}}{{f}_{H}}{\Phi}_{H}\right).\tag{16}$$

Based on Eq. (16), we have a system of equations whose number of equations is twice the number of pixels. In this system, the unknowns to be solved include the phase *Φ _{H}* at each pixel and the amplitude *ξ _{m}* of each error term. In practice, the errors may have infinitely many terms, but their amplitudes decline quickly as their orders increase, so we can reasonably truncate the error functions and keep only the first several terms. Using the truncated error functions, the number of independent equations is much greater than that of the unknowns, making the system over-determined, and we can solve it in the least squares sense. A difficulty is that the involved equations are nonlinear, so we have to solve them iteratively. The iterative procedure is as follows, in which the superscript in parentheses represents the iteration number and the truncated error functions have *M* terms.

**Step 1.** Determine the initial values. Here we use the calculated phases *Ψ _{H}* as the initial values of *Φ _{H}*, viz. ${\Phi}_{H}^{(0)}={\Psi}_{H}$.

_{H}**Step 2.** Estimate the amplitudes of error terms. When the *i*th iterative values of phases, i.e., ${\Phi}_{H}^{(i)}=(x,y)$ are determined, substitute them into Eq. (16) yields a system of linear equations as

*M*, and the number of equations is twice the number of pixels. Solve it in the least squares sense for the unknowns

*ξ*

_{m}^{(}

^{i}^{)}.

**Step 3.** Update the phase values. When the *i*th iterative values ${\xi}_{m}^{(i)}$ are calculated, we obtain more accurate phases by subtracting the error terms. The result is ${\Phi}_{H}^{(i+1)}={\Psi}_{H}-{\displaystyle {\sum}_{m=1}^{M}}{\xi}_{m}^{(i)}\sin \left(mK{\Phi}_{H}^{(i)}\right)$.

**Step 4.** Repeat Steps 2 and 3 until the algorithm converges.

By use of this iterative algorithm, the error terms induced by the projector nonlinearity are removed from the phases. In the next section, we shall investigate its performance through numerical simulations.
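Steps 1–4 can be condensed into a compact numerical sketch (our own illustration; the harmonic amplitudes, frequency ratio, and iteration count are assumed, and the ripple model Σ *ξ _{m}* sin(*mKΦ*) of Eq. (13) is taken as the error form):

```python
import numpy as np

K = 3                        # number of phase shifts, as in the text
r = 0.5                      # frequency ratio f_L / f_H
M = 3                        # terms kept in the truncated error function
xi_true = np.array([0.05, 0.01])   # assumed ripple amplitudes (illustrative)

def ripple(phase, xi):
    # Eq. (13)-style error model: sum_m xi_m * sin(m * K * phase)
    return sum(a * np.sin((m + 1) * K * phase) for m, a in enumerate(xi))

# Simulated unwrapped phase maps of the two frequencies, both distorted by
# the same ripple amplitudes (they share bias, contrast, and K).
Phi_H = np.linspace(0.0, 6 * np.pi, 4000)
Psi_H = Phi_H + ripple(Phi_H, xi_true)
Psi_L = r * Phi_H + ripple(r * Phi_H, xi_true)

Phi_est = Psi_H.copy()                       # Step 1: initialize with Psi_H
for _ in range(60):
    # Step 2: with the phases fixed, the xi_m enter linearly; stack the
    # high- and low-frequency equations and solve by least squares.
    A = np.vstack([
        np.stack([np.sin((m + 1) * K * Phi_est) for m in range(M)], axis=1),
        np.stack([np.sin((m + 1) * K * r * Phi_est) for m in range(M)], axis=1),
    ])
    b = np.concatenate([Psi_H - Phi_est, Psi_L - r * Phi_est])
    xi_est, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Step 3: subtract the estimated ripple from the measured phases.
    Phi_est = Psi_H - ripple(Phi_est, xi_est)

print(np.max(np.abs(Psi_H - Phi_H)), np.max(np.abs(Phi_est - Phi_H)))
```

In this noise-free sketch the residual after the iterations is orders of magnitude below the initial ripple, mirroring the near-zero noise-free residuals reported in Section 4; the two-frequency stacking is what makes the amplitudes identifiable at all.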

## 4. Numerical simulations

We perform simulations to verify the effectiveness of the proposed method in correcting the errors of projector nonlinearity. Figure 2(a) simulates a one-dimensional phase distribution having 1024 pixels. After adding a carrier with a frequency of 0.05 rad/pixel, these phases serve as *Φ _{H}*, i.e., the phases of the fringes having the high frequency *f _{H}*. For generating the corresponding fringe patterns, we assume the fringe bias *α* and contrast *β* to be 0.5 and 0.4, respectively, and that the projector has the nonlinear output brightness versus gray level shown in Fig. 2(b). Because the captured fringe patterns may have nonuniform background intensities and modulations, we set *R*(*x*, *y*) in Eq. (2) to be Gaussian shaped, having a 50% decrease at the end points on both sides of the phase curves, and *B*(*x*, *y*) to be 0.1, simulating the additional intensity caused by the ambient illumination. As a result, three phase-shifting fringe patterns are generated, with the phase step between consecutive frames being 2π/3 radians. These fringe patterns, corresponding to the simulated phase curve, are also one-dimensional. The first of them is shown in Fig. 3(a).

The multi-frequency fringe projection technique requires capturing at least two sequences of fringe patterns having different frequencies. We simulate the second sequence under the same conditions, the difference being that this sequence has a low frequency equal to half that of the first one, i.e., *f _{L}*/*f _{H}* = 1/2. The first frame of its three fringe patterns is shown in Fig. 3(d).

Using the method in Section 2, the phases are recovered from the simulated fringe patterns. After phase unwrapping and carrier removal, the results are given in the middle column of Fig. 3, next to their fringe patterns. In both phase curves, we observe the ripple-like artifacts induced by the projector nonlinearity. Figures 3(c) and 3(f) evaluate the phase errors by subtracting the predefined phases from the calculated ones. In the recovered phases of the high frequency fringes, the maximum error is 0.2832 radians, and the root mean square (RMS) error is 0.1990 radians. Because the number of phase shifts is three, the error curves have fundamental frequencies three times higher than those of their corresponding fringes, exactly in accordance with the model in Eq. (13). Note that the errors in Figs. 3(c) and 3(f) have the same amplitudes, because the error amplitudes depend on the projector nonlinearity, on the bias and contrast of the generated fringes, and on the number of phase shifts; they are independent of the fringe frequencies.

To suppress the phase errors induced by the projector nonlinearity, we apply the method proposed in Section 3.2 to the measured phases. In general, the error function has infinitely many terms, but their amplitudes quickly decline as the orders increase; in this simulation, we truncate the error function and keep the first five terms. Following the procedure in Section 3.2, the resulting phase curve after 30 iterations is illustrated in Fig. 4(a), and Fig. 4(b) gives its residual errors. It is evident that the ripple-like artifacts induced by the projector nonlinearity are suppressed significantly: the maximum and RMS phase errors are reduced to 0.0004 and 0.0002 radians, respectively. If we keep more terms in the error function, the residual errors approach zero.

The result in Fig. 4 is obtained under the noise-free condition. Note that Eq. (4), by which we calculate the phase, is a nonlinear function of the fringe intensities. This fact implies that high-order cross-correlations exist between the phase errors caused by diverse sources. Only when these errors are very small can they be considered approximately unrelated to each other; when the errors become large, their interrelationship is not negligible. For this reason, it is necessary to investigate the effects of noise on the performance of the proposed method. We repeat the simulation, adding zero-mean Gaussian noise to the fringe patterns. Table 1 lists the maximum absolute values and the RMS values of the phase errors under different noise conditions. Along the columns, the maximum and RMS errors in the calculated phases become larger as the noise SD increases, whether the projector nonlinearity is absent, corrected, or left uncorrected. Comparing the columns shows that the proposed method is effective in compressing the influences of projector nonlinearity, but its effectiveness may decline under high noise conditions. When the noise SD is 0.04, for example, the maximum residual phase error after correction is still over 0.6 radians, implying that noise has some influence on estimating the coefficients of the error function.

The proposed method involves an iterative procedure for solving a nonlinear system of equations. Figure 5(a) investigates the convergence of this algorithm; the horizontal axis denotes the number of iterations, and the vertical axis denotes the RMS values of the residual errors. We use a truncated error function having five terms. Under the noise-free condition, the curve approaches zero as the number of iterations increases, demonstrating that the algorithm converges. When the fringe patterns contain noise, the RMS phase errors decrease and converge to certain values depending on the noise levels.

Another noteworthy issue is the truncation error in implementing the algorithm. In measurement practice, the number of error terms in Eq. (13) is generally infinite, but the amplitudes of the high order terms decline quickly; as a result, we can truncate this function and keep only the first several terms in the calculation. Figure 5(b) investigates the truncation errors by plotting the RMS phase errors versus the number of error terms kept in the truncated error function. From its curves, we observe that using more terms helps suppress the phase errors; on the other hand, doing so increases the computational burden of the algorithm. Noting that the high order terms, e.g., those after the fourth one in Fig. 5(b), make only trivial contributions to the measurement accuracy, using a truncated error function having 3 to 5 terms is a proper choice for guaranteeing both accuracy and efficiency.

## 5. Experiment

We experimentally examined the feasibility of the proposed method by measuring an object. The measurement system, shown in Fig. 1, mainly consists of a DLP projector (PHILIPS PPX4010, 854 × 480 pixels) and a digital camera (AVT Stingray F-125B) with a lens (KOWA LM12JC) having a focal length of 12 mm. A circular object attached to a calibration board was selected as the measured target. This target is typical in that it has flat and curved surfaces with edges and discontinuities. According to the principle of the multi-frequency fringe projection technique, we project multiple sequences of phase-shifting fringe patterns, having from low to high frequencies, onto the measured surface and record the deformed patterns, in order to unwrap the phase map temporally. In each sequence, the number of phase shifts is three, and the phase increment between consecutive frames is 2π/3 radians. Because the brightness of the projector is not calibrated, the fringe patterns are affected by the projector nonlinearity.

For correcting the errors induced by the projector nonlinearity, the two fringe pattern sequences having the two highest frequencies are selected from the captured pattern sequences. The first frame of each sequence is shown in the leftmost column of Fig. 6; the image size is 500 × 500 pixels. The pattern in Fig. 6(a) has a frequency twice that of the one in Fig. 6(e). Next to them are their wrapped phase maps calculated using the phase-shifting algorithm formulated in Eq. (4). Furthermore, their absolute phase maps are obtained by using the temporal phase-unwrapping technique introduced in Section 2.2, aided by the phase maps of the lower frequency fringe patterns. These unwrapped phase maps are shown in the third column of Fig. 6. By subtracting out the carrier component, i.e., the reference phase map, which in theory can be modeled using a rational function [30], we obtain the phase maps shown in Figs. 6(d) and 6(h). In them, ripple-like artifacts are noticeable. These artifacts are parallel to and three times denser than the deformed fringes, which is typical of the errors induced by projector nonlinearity, as analyzed in Section 3.1.

If the projector brightness is not calibrated, it is difficult to remove the errors caused by the projector nonlinearity from a single map of phases. According to Section 3.2, using two phase maps with different fringe frequencies, like those in Figs. 6(c) and 6(g), allows us to estimate the coefficients of the error function and further cancel its influence. Following the iterative procedure suggested in Section 3.2, we calculate the error coefficients and the real phases. When the algorithm converges, the result is shown in Fig. 7, in which Fig. 7(a) is the corrected phase map and Fig. 7(b) is its version with the carrier removed. Note that, in this procedure, we have identified invalid regions such as shadows by thresholding the fringe modulations and have excluded them from the calculations. Comparing Fig. 7(b) with Fig. 6(d) makes it evident that, with this technique, the ripple-like errors induced by the projector nonlinearity have been suppressed significantly.

With the proposed technique, it is necessary to discuss the accuracy and its related factors, such as the truncation of the error function and the number of iterations. Unlike in the numerical simulations, it is impossible in this experiment to check the measurement accuracy by comparing the reconstructed phases with their exact values, because the real phase map of the measured object is not available. Instead, we can investigate the accuracy by measuring the phases of a plane, because the phase map of a standard plane can be exactly represented with a rational function [30]. As seen in Figs. 6(a) and 6(e), the circular object is placed in front of a calibration board. We can segment the fringe patterns and retrieve the phase values of the plane region on the calibration board. By fitting these phase values with the theoretical model, i.e., a rational function [30], the phase map of the plane is estimated to a very high accuracy. We use it as a benchmark and evaluate the phase accuracy by calculating the deviations of the calculated phases from it. It should be noted that these deviations are caused not only by factors like noise and the projector nonlinearity, but also by the flatness and roughness errors of the plane. In practice, commercially available calibration boards can have a flatness of 0.05 mm or better, meeting the demands of computer vision and other applications; the phase errors related to the board are therefore much smaller than those induced by noise and by the projector nonlinearity. Using these phase deviations is thus sufficient for evaluating the performance of the proposed method in compressing the projector nonlinearity errors. More completely and accurately, the measurement precision could be evaluated via repeatability, uncertainty, and relative error in terms of percentage.

Figure 8(a) plots the RMS value of the phase deviations versus the number of iterations. Originally the phase deviations have an RMS value of 0.1056 radians; with the proposed technique, this RMS value decreases to 0.0181 radians after 30 iterations. The residuals are mainly caused by noise rather than by the projector nonlinearity. Figure 8(b) investigates the influence of the truncation errors. When we keep the first three terms in the error function, the phase errors are corrected to a satisfactory level; higher order terms make only an insignificant contribution to the accuracy. After implementing the proposed method, random noise, rather than the truncation of the error function, becomes the dominant factor determining the residual errors.

When converting the phases into depths [31], the phase errors become artifacts on the reconstructed surface. Figure 9 shows the reconstructed depth maps, with the errors induced by the projector nonlinearity processed using different methods. In this figure, the top row shows the depth maps, and the bottom row gives their cross sections along the 12th pixel column from the left. This cross section is selected within the region of the plane outside the circular object, so that the errors are easy to evaluate, as just discussed.

Figure 9(a) shows the depth map calculated directly from Fig. 6(c) without any correction for the projector nonlinearity. In its cross section, we observe very large artifacts induced by the projector nonlinearity; using the data along this cross section, we calculate the RMS error to be 2.1481 mm. For removing these artifacts, one postprocessing method is to smooth them with a low-pass or a band-reject filter. For example, when a Gaussian low-pass filter having an SD of 12 pixels and a size of 23 × 23 pixels is used, the smoothed depth map and its cross section are shown in Fig. 9(b). In the continuous areas of the surface, the errors induced by both the projector nonlinearity and the noise have been suppressed significantly, and the RMS error calculated from the selected cross section becomes 0.1487 mm. The problem is that the low-pass filtering drastically blurs the measurement result, as seen in Fig. 9(b); consequently, the errors become very large in regions having discontinuities or near the image boundaries. In fact, low-pass filtering is one of the most popular solutions for removing artifacts from, for example, signals, images, and point clouds, but the data near boundaries and edges must be cut off.
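This trade-off is easy to reproduce on synthetic data (the step height, ripple amplitude, and filter width below are arbitrary choices that mimic the situation, not values taken from the experiment): Gaussian smoothing flattens the ripple in continuous regions while badly distorting a depth discontinuity:

```python
import numpy as np

x = np.arange(1024)
step = np.where(x < 512, 0.0, 5.0)            # a depth discontinuity (mm)
ripple = 0.3 * np.sin(2 * np.pi * x / 20)     # nonlinearity-like ripple artifact
depth = step + ripple

# Gaussian low-pass filter with an SD of 12 pixels, implemented by convolution.
t = np.arange(-36, 37)
kernel = np.exp(-0.5 * (t / 12.0) ** 2)
kernel /= kernel.sum()
smoothed = np.convolve(depth, kernel, mode="same")

ripple_left = np.std(smoothed[100:400] - step[100:400])         # flat region: ripple gone
edge_error = np.max(np.abs(smoothed[500:524] - step[500:524]))  # edge: badly blurred
print(ripple_left, edge_error)
```

The residual ripple in the flat region is negligible, but the error next to the step reaches a large fraction of the step height, which is exactly the blurring problem described above.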

Figure 9(c) shows the result of using the proposed method. It suppresses the ripple-like errors significantly while protecting the edges of the measured surface from being blurred; the RMS error along the selected cross section is 0.2903 mm. As a comparison, we also measure the same object using the active calibration method, which is the most typical of the existing solutions to the nonlinearity issue. We use the projector to illuminate the flat board with a sequence of increasing gray levels; recording the corresponding intensities yields a LUT, which allows us to compensate for the distortions induced by the projector nonlinearity in the generated sinusoidal fringe patterns before projecting them onto the object. The result of the active calibration technique is illustrated in Fig. 9(d); its RMS error is 0.4854 mm. The proposed technique is superior to the calibration-based one because the latter is sensitive to the time-variance of the projector nonlinearity.

## 6. Discussions

Several issues related to this technique are worth discussing. Random noise is a main factor degrading the captured fringe patterns. Theoretical analysis [32] reveals that, in the presence of noise, the variance of the phase errors is proportional to the noise variance, inversely proportional to the number of phase shifts and to the square of the fringe modulation, and independent of the phase value. To restrain the effect of noise, we usually apply a spatial low-pass filter to the captured patterns; as we know, doing so blurs the edges of the measured surface. A more effective method is to capture multiple patterns for each phase shift and average them. This decreases the noise variance without blurring the image, at the expense of a longer image-capturing time. Illumination fluctuations during image capturing also induce ripple-like phase errors, which have the same frequency as the fringes; using fringe histograms allows us to correct such fluctuations [33].
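The quoted noise behavior can be checked with a small Monte Carlo simulation. The background, modulation, and noise values below are arbitrary assumptions for illustration, and `phase_error_std` is a hypothetical helper, not part of the proposed method.

```python
import numpy as np

def n_step_phase(frames):
    """Least-squares phase retrieval from N equally spaced phase shifts."""
    n = len(frames)
    deltas = 2 * np.pi * np.arange(n) / n
    s = sum(f * np.sin(d) for f, d in zip(frames, deltas))
    c = sum(f * np.cos(d) for f, d in zip(frames, deltas))
    return np.arctan2(-s, c)

def phase_error_std(n_steps, sigma_noise, phase=0.7, a=128.0, b=100.0,
                    trials=20000, seed=0):
    """Monte Carlo estimate of the phase-error standard deviation."""
    rng = np.random.default_rng(seed)
    deltas = 2 * np.pi * np.arange(n_steps) / n_steps
    frames = [a + b * np.cos(phase + d)
              + sigma_noise * rng.standard_normal(trials) for d in deltas]
    # Wrap the error into (-pi, pi] before taking statistics.
    err = np.angle(np.exp(1j * (n_step_phase(frames) - phase)))
    return float(err.std())

s3 = phase_error_std(3, 5.0)   # 3-step algorithm
s6 = phase_error_std(6, 5.0)   # 6-step: error variance halves, std / sqrt(2)
```

Doubling the number of phase shifts halves the error variance, and averaging two captures per shift (equivalent to dividing the noise SD by the square root of two) achieves the same reduction without blurring, consistent with the analysis cited above.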

In addition, phase sensitivity is a crucial factor determining the measurement accuracy and resolution of fringe projection profilometry. This sensitivity mainly depends on the system geometry; if the system is fixed, it is associated with the fringe pitch and fringe orientation [34]. A smaller fringe pitch implies a higher phase sensitivity, but may simultaneously lead to more difficulties in phase unwrapping. In this experiment, for example, we use horizontal fringes whose periods exceed 60 pixels in the captured patterns (corresponding to a lateral size of around 25 mm on the object, depending on the depth), as we see in Fig. 6(a), and the RMS error of the reconstructed depths is 0.2903 mm after correcting the projector nonlinearity. If we use fringes with a finer pitch and the optimal direction perpendicular to the epipolar lines, the RMS error under the same noise condition can be reduced to a much lower level. Besides the phase sensitivity, the overall measurement resolution depends on the instrument transfer function of the optical system [35] and is also related to the phase-retrieving resolution of the algorithm; for example, the phase-shifting algorithm has a resolution over ten times higher than that of the Fourier transform algorithm.

There are some limitations to this method. First, we deduce the phase error function by assuming the camera to be linear. Therefore, camera nonlinearity, if it exists, may decrease the measurement accuracy. Even so, the proposed method remains practical because the camera nonlinearity is much easier to calibrate, whereas calibrating a projector usually involves an indirect procedure with the aid of a well-calibrated camera. Even if the camera has a linear response, fringe saturation may occur when the intensities exceed its dynamic range [36]. Fringe saturation also induces ripple-like artifacts on the reconstructed object surface, but these are position-dependent and hence cannot be overcome by the proposed technique. Second, the proposed method requires at least two sequences of phase-shifting fringe patterns. It can be implemented compatibly with the temporal phase-unwrapping technique, which uses multi-frequency fringe patterns, but it is not suitable for situations where phase unwrapping is carried out spatially or by use of Gray code, with only one sequence of sinusoidal fringe patterns available.

Another issue concerns efficiency. Although it involves an iterative procedure for data processing, the proposed technique is helpful for improving implementation efficiency compared with others. The reason is that it takes advantage of the multi-frequency patterns already used in temporal phase unwrapping and does not require extra patterns. It works well with only a few, e.g. three, fringe patterns for each frequency, thus shortening the image-capturing time. More importantly, it avoids a time-consuming calibration of the projector brightness.

## 7. Conclusion

In this paper, we have suggested a method for correcting the errors induced by projector nonlinearity in multi-frequency phase-shifting fringe projection profilometry. The technique is based on an error function derived from the model of the projector nonlinearity and is implemented by using a couple of phase maps of fringe patterns corresponding to different frequencies. Experimental results have demonstrated that this method can effectively suppress the ripple-like artifacts induced by the projector nonlinearity on the reconstructed surfaces. In comparison with existing techniques, this method does not require a photometric calibration, so it remains applicable when the projector nonlinearity varies over time, and its pointwise operation protects the edges and details of the measurement results from being blurred. In addition, it helps improve measurement efficiency, because it eliminates the photometric calibration and shortens the image-capturing time. This method can also be used in related fields, such as phase measuring deflectometry [37], to enhance measurement accuracy.

## Funding

National Natural Science Foundation of China (NSFC) (61433016).

## References and links

**1. **S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. **48**(2), 133–140 (2010). [CrossRef]

**2. **Z. Wang, D. Nguyen, and J. Barnes, “Recent advances in 3D shape measurement and imaging using fringe projection technique,” in Proceedings of the SEM Annual Congress and Exposition on Experimental and Applied Mechanics 2009 (Society for Experimental Mechanics, 2009), pp. 2644–2653.

**3. **S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. **48**(2), 149–158 (2010). [CrossRef]

**4. **V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. **23**(18), 3105–3108 (1984). [CrossRef] [PubMed]

**5. **C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. **85**, 84–103 (2016). [CrossRef]

**6. **T. Hoang, B. Pan, D. Nguyen, and Z. Wang, “Generic gamma correction for accuracy enhancement in fringe-projection profilometry,” Opt. Lett. **35**(12), 1992–1994 (2010). [CrossRef] [PubMed]

**7. **X. Zhang, L. Zhu, Y. Li, and D. Tu, “Generic nonsinusoidal fringe model and gamma calibration in phase measuring profilometry,” J. Opt. Soc. Am. A **29**(6), 1047–1058 (2012). [CrossRef] [PubMed]

**8. **S. Ma, C. Quan, R. Zhu, L. Chen, B. Li, and C. J. Tay, “A fast and accurate gamma correction based on Fourier spectrum analysis for digital fringe projection profilometry,” Opt. Commun. **285**(5), 533–538 (2012). [CrossRef]

**9. **Y. Xiao, Y. Cao, Y. Wu, and S. Shi, “Single orthogonal sinusoidal grating for gamma correction in digital projection phase measuring profilometry,” Opt. Eng. **52**(5), 053605 (2013). [CrossRef]

**10. **S. Gai and F. Da, “A novel fringe adaptation method for digital projector,” Opt. Lasers Eng. **49**(4), 547–552 (2011). [CrossRef]

**11. **M. Dai, F. Yang, and X. He, “Single-shot color fringe projection for three-dimensional shape measurement of objects with discontinuities,” Appl. Opt. **51**(12), 2062–2069 (2012). [CrossRef] [PubMed]

**12. **X. Ye, H.-B. Cheng, H.-Y. Wu, D.-M. Zhou, and H.-Y. Tam, “Gamma correction for three-dimensional object measurement by phase measuring profilometry,” Optik (Stuttg.) **126**(24), 5534–5538 (2015). [CrossRef]

**13. **S. Zhang and P. S. Huang, “Phase error compensation for a 3-D shape measurement system based on the phase-shifting method,” Proc. SPIE **6000**, 133–142 (2005). [CrossRef]

**14. **S. Zhang and S.-T. Yau, “Generic nonsinusoidal phase error correction for three-dimensional shape measurement using a digital video projector,” Appl. Opt. **46**(1), 36–43 (2007). [CrossRef] [PubMed]

**15. **Z. Li, Y. Shi, and C. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. **47**(5), 053604 (2008). [CrossRef]

**16. **Z.-W. Li, Y.-S. Shi, C.-J. Wang, D.-H. Qin, and K. Huang, “Complex object 3D measurement based on phase-shifting and a neural network,” Opt. Commun. **282**(14), 2699–2706 (2009). [CrossRef]

**17. **Y. Fu, Y. Wang, W. Wang, and J. Wu, “Least-squares calibration method for fringe projection profilometry with some practical considerations,” Optik (Stuttg.) **124**(19), 4041–4045 (2013). [CrossRef]

**18. **D. Zheng and F. Da, “Gamma correction for two step phase shifting fringe projection profilometry,” Optik (Stuttg.) **124**(13), 1392–1397 (2013). [CrossRef]

**19. **K. Liu, S. Wang, D. L. Lau, K. E. Barner, and F. Kiamilev, “Nonlinearity calibrating algorithm for structured light illumination,” Opt. Eng. **53**(5), 050501 (2014). [CrossRef]

**20. **Y. Fu, Z. Wang, G. Jiang, and J. Yang, “A novel three-dimensional shape measurement method based on a look-up table,” Optik (Stuttg.) **125**(6), 1804–1808 (2014). [CrossRef]

**21. **C. Zhang, H. Zhao, L. Zhang, and X. Wang, “Full-field phase error detection and compensation method for digital phase-shifting fringe projection profilometry,” Meas. Sci. Technol. **26**(3), 035201 (2015). [CrossRef]

**22. **B. Pan, Q. Kemao, L. Huang, and A. Asundi, “Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry,” Opt. Lett. **34**(4), 416–418 (2009). [CrossRef] [PubMed]

**23. **B. Li, Y. Wang, J. Dai, W. Lohry, and S. Zhang, “Some recent advances on superfast 3D shape measurement with digital binary defocusing techniques,” Opt. Lasers Eng. **54**, 236–246 (2014). [CrossRef]

**24. **H. Guo and M. Chen, “Fourier analysis of the sampling characteristics of the phase-shifting algorithm,” Proc. SPIE **5180**, 437–444 (2003). [CrossRef]

**25. **H. Guo, H. He, and M. Chen, “Gamma correction for digital fringe projection profilometry,” Appl. Opt. **43**(14), 2906–2914 (2004). [CrossRef] [PubMed]

**26. **H. Guo and Z. Zhao, “Nonlinearity correction in digital fringe projection profilometry by using histogram matching technique,” Proc. SPIE **6616**, 66162I (2007). [CrossRef]

**27. **F. Lü, S. Xing, and H. Guo, “Self-correction of projector nonlinearity in phase-shifting fringe projection profilometry,” Appl. Opt. **56**(25), 7204–7216 (2017). [CrossRef] [PubMed]

**28. **R. Zhang and H. Guo, “Depth recovering method immune to projector errors in fringe projection profilometry by use of cross-ratio invariance,” Opt. Express **25**(23), 29272–29286 (2017). [CrossRef]

**29. **J. H. Bruning, D. R. Herriott, J. E. Gallagher, D. P. Rosenfeld, A. D. White, and D. J. Brangaccio, “Digital wavefront measuring interferometer for testing optical surfaces and lenses,” Appl. Opt. **13**(11), 2693–2703 (1974). [CrossRef] [PubMed]

**30. **H. Guo, M. Chen, and P. Zheng, “Least-squares fitting of carrier phase distribution by using a rational function in fringe projection profilometry,” Opt. Lett. **31**(24), 3588–3590 (2006). [CrossRef] [PubMed]

**31. **H. Guo, H. He, Y. Yu, and M. Chen, “Least-squares calibration method for fringe projection profilometry,” Opt. Eng. **44**(3), 033603 (2005). [CrossRef]

**32. **S. Xing and H. Guo, “Temporal phase unwrapping for fringe projection profilometry aided by recursion of Chebyshev polynomials,” Appl. Opt. **56**(6), 1591–1602 (2017). [CrossRef] [PubMed]

**33. **Y. Lu, R. Zhang, and H. Guo, “Correction of illumination fluctuations in phase-shifting technique by use of fringe histograms,” Appl. Opt. **55**(1), 184–197 (2016). [CrossRef] [PubMed]

**34. **R. Zhang, H. Guo, and A. K. Asundi, “Geometric analysis of influence of fringe directions on phase sensitivities in fringe projection profilometry,” Appl. Opt. **55**(27), 7675–7687 (2016). [CrossRef] [PubMed]

**35. **X. Colonna De Lega and P. de Groot, “Lateral resolution and instrument transfer function as criteria for selecting surface metrology instruments,” in *Imaging and Applied Optics Technical Papers*, OSA Technical Digest (online) (Optical Society of America, 2012), paper OTu1D.4.

**36. **H. Guo and B. Lü, “Phase-shifting algorithm by use of Hough transform,” Opt. Express **20**(23), 26037–26049 (2012). [CrossRef] [PubMed]

**37. **H. Guo, P. Feng, and T. Tao, “Specular surface measurement by using least squares light tracking technique,” Opt. Lasers Eng. **48**(2), 166–171 (2010). [CrossRef]