## Abstract

The fringe-projection-based (FPB) three-dimensional (3D) imaging technique has become one of the most prevalent methods for 3D shape measurement and 3D image acquisition, and an essential component of the technique is the calibration process. This paper presents a framework for hyper-accurate system calibration with a flexible setup and inexpensive hardware. Owing to a crucial improvement in the camera calibration technique, an enhanced governing equation for 3D shape determination, and an advanced flexible system calibration technique, as well as some practical considerations on accurate fringe phase retrieval, the novel FPB 3D imaging technique can achieve a relative measurement accuracy of 0.010%. The validity and practicality are verified by both simulation and experiments.

© 2012 OSA

## 1. Introduction

The fringe-projection-based (FPB) three-dimensional (3D) imaging technique has emerged as one of the most reliable methods for acquiring the 3D images of objects in real applications because of its considerable advantages such as low cost, easy implementation, high accuracy, and full-field imaging. The fundamental principle of the technique is to project a known illumination pattern onto the objects of interest; the 3D shape information can then be extracted from the observed deformation of the pattern.

In order to achieve accurate 3D imaging, the FPB system must be calibrated prior to its application. The notable challenges faced by the technique involve the flexibility in the scale of the field of view and the ability to accurately determine the 3D information of the objects of interest. In recent years, calibration approaches based on a governing equation that relates the height or depth information of the object surface to the phase map of the projection fringes at each point have been carefully investigated [1–3]. These methods can be easily implemented by employing a number of gage blocks of different heights to calibrate the system and can yield very accurate results. The challenge lies in manufacturing high-precision gage blocks for various applications, because the block sizes must be changed according to the field size of imaging, which makes such calibration techniques impractical. A different type of calibration approach is to treat the projector as a reversed camera and use the camera calibration scheme to calibrate both camera and projector [4]. With projection fringes as a tool to establish the correspondence, the 3D coordinates of points can be determined by mapping the point locations in the camera plane to those in the projector plane. A drawback of this technique is that the calibration of a projector is error-prone and the result of stereo vision is inherently noisy [5].

The key to our approach is to take advantage of the flexibility and high accuracy of the classical planar camera calibration technique [6]. As technology evolves, the existing camera calibration methods typically fail to satisfy the ever-increasing demand for higher imaging accuracy. The relevant literature has addressed two major sources of error that affect the camera calibration results: the imperfection of the calibration target board and the uncertainty in locating its target control points [7, 8]. This paper presents a crucial improvement in geometric camera calibration that overcomes these two problems without loss of the original advantages of the conventional techniques. The proposed technique uses a sophisticated lens distortion model that takes the radial, tangential, and prism distortion into account, and achieves a precise localization of the target control points with a novel refinement process using the frontal image correlation concept; in addition, the defects of the calibration board can be compensated for. The proposed camera calibration technique can yield high accuracy in practice, with a re-projection error smaller than 0.01 pixels.

Another influential factor in the performance of the FPB 3D imaging technique is the gamma distortion effect on the projected fringes. In reality, a digital projector applies gamma decoding to images to enhance the visual effect, which brings undesired fringe intensity changes and subsequently reduces the accuracy of the 3D imaging [9]. To overcome this nonlinear luminance problem, many approaches for compensating the error of the fringe phase have been developed, and they mainly fall into two categories. The approaches in the first category normally involve a transform scheme, such as the wavelet transform [10] and the Hilbert transform [11], to extract the fringe phase. Despite being computationally expensive, they usually cannot provide the desired accurate phase map for typical applications involving multiple objects or an object with a complex shape. The approaches in the second category are based on the phase-shifting scheme, and they are often suitable for measuring objects with complex shapes. Nevertheless, these methods generally face a trade-off between the accuracy of phase-error compensation and the computation complexity [12]. In light of our previous work [13], this paper presents a robust yet simple scheme that pre-encodes the initial fringe patterns before projection to effectively compensate the subsequent gamma distortion. The use of large phase-shifting steps as an alternative way to enhance the accuracy of phase retrieval is also closely studied.

Along with the two aforementioned improvements, a novel governing equation for the 3D shape determination is proposed. Being algebraically inspired, the theoretically derived governing equation for height or depth determination [1] is modified to take into account both the camera and projection lens distortion without the need for projector calibration and to eliminate other nuisance effects. The associated system calibration procedure can be performed with a process similar to the commonly used camera calibration [6], except that phase-shifted fringe patterns are projected onto the calibration board. The proposed FPB 3D imaging technique is capable of providing a relative accuracy of 0.010% or better for the entire field of view using low-cost, off-the-shelf hardware, where the relative accuracy is defined as the ratio of the out-of-plane imaging error to the in-plane dimension.

## 2. Camera calibration

#### 2.1. Camera model and traditional calibration technique

The camera is typically described by a pinhole model. With such a model, the relation between the 3D world coordinates of a control point $M = \{X_w, Y_w, Z_w\}^T$ and its corresponding location $m = \{u, v\}^T$ in the image plane is given by:

$$s\,\tilde{m} = A \begin{bmatrix} R & T \end{bmatrix} \tilde{M}, \qquad A = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{1}$$

where $\tilde{m}$ and $\tilde{M}$ denote the homogeneous forms of $m$ and $M$; $s$ is a scale factor; $A$ is the intrinsic matrix, with $\alpha$ and $\beta$ the horizontal and vertical focal lengths in pixel units, $\gamma$ the skew factor, and $(u_0, v_0)$ the coordinates of the principal point; and $R$ and $T$ are the extrinsic parameters that denote the rotation and translation relating the world coordinate system to the camera coordinate system. Because the camera lens usually exhibits nonlinear optical distortion, Eq. (1) is insufficient for accurate camera calibration. Although some very complex models exist [14], in practice they introduce instability rather than accuracy because of the high-order distortion components. In this paper, the lens distortion is modeled by Eq. (2), in which $(a_0, a_1, a_2)$, $(p_0, p_1, p_2, p_3)$, and $(s_0, s_1, s_2, s_3)$ represent the radial, tangential, and prism distortion coefficients, respectively; $(x_{cn}, y_{cn})$ denotes the normalized location of a distortion-free point $(u, v)$, and $(x'_{cn}, y'_{cn})$ is the normalized location of the corresponding distorted point $(u', v')$, with their relations given by Eq. (3).
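For illustration, such a polynomial distortion mapping can be written in a few lines of Python. This is only a sketch of the common reduced form with two tangential and two prism coefficients; the exact arrangement of the paper's Eq. (2), which carries four tangential and four prism coefficients, may differ:

```python
import numpy as np

def distort(x, y, a, p, s):
    """Map a distortion-free normalized point (x, y) to its distorted location
    using radial (a0-a2), tangential (p0-p1), and prism (s0-s1) terms.
    Sketch of a common polynomial model, not the paper's exact Eq. (2)."""
    r2 = x * x + y * y
    radial = a[0] * r2 + a[1] * r2**2 + a[2] * r2**3
    xd = x * (1 + radial) + 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x) + s[0] * r2
    yd = y * (1 + radial) + p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y + s[1] * r2
    return xd, yd
```

With all coefficients zero the mapping reduces to the identity, and each coefficient group adds its own departure from the pinhole model.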

Given the locations of the control points in the world coordinates as $M_j$ and in the image plane as $m_{ij}$, where $i$ denotes the $i$th of the $k$ images ($i = 1, 2, \ldots, k$) and $j$ denotes the $j$th of the $l$ control points ($j = 1, 2, \ldots, l$), the calibration process involves a nonlinear optimization with the cost function defined as:

$$E = \sum_{i=1}^{k} \sum_{j=1}^{l} {\left\| m_{ij} - P\!\left(A, \varphi, \rho_i, T_i, M_j\right) \right\|}^2 \tag{4}$$

where $\varphi = (a_0, a_1, a_2, p_0, p_1, p_2, p_3, s_0, s_1, s_2, s_3)$, $\rho$ is the Rodrigues vector representation of $R$ [6], and $P$ denotes the projection of the control points onto the image planes according to Eqs. (1) and (2). The optimization is performed using the Levenberg-Marquardt (L-M) algorithm.
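The structure of this reprojection-cost minimization can be illustrated with a toy Python sketch that refines only the intrinsic parameters $\alpha$, $\beta$, $u_0$, $v_0$ from synthetic observations, with the pose held fixed and no distortion; the full calibration additionally optimizes the distortion vector, the poses, and (as discussed below) the board coordinates. All numerical values here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Assumed ground truth: intrinsics (alpha, beta, u0, v0) and one fixed board pose.
true = np.array([800.0, 820.0, 320.0, 240.0])
rho, T = np.array([0.1, -0.2, 0.05]), np.array([0.1, -0.1, 1.5])
R = Rotation.from_rotvec(rho).as_matrix()

# Planar board: a 5 x 4 grid of control points with 25.4 mm pitch, Z_w = 0.
M = np.stack(np.meshgrid(np.arange(5) * 0.0254, np.arange(4) * 0.0254), -1).reshape(-1, 2)
Mw = np.c_[M, np.zeros(len(M))]

def project(params):
    """Pinhole projection of Eq. (1) with zero skew and no lens distortion."""
    alpha, beta, u0, v0 = params
    Xc = Mw @ R.T + T                                # world -> camera coordinates
    x, y = Xc[:, 0] / Xc[:, 2], Xc[:, 1] / Xc[:, 2]  # normalized coordinates
    return np.c_[alpha * x + u0, beta * y + v0]

m_obs = project(true)                                # "observed" control points
fit = least_squares(lambda p: (project(p) - m_obs).ravel(),
                    x0=[700.0, 700.0, 300.0, 200.0], method="lm")
```

On this noise-free data the L-M refinement recovers the assumed intrinsics exactly.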

#### 2.2. Position estimation and refinement of the calibration target control points

Because a planar calibration target board is employed, the world coordinate system is placed on the board with its surface as the XY plane for simplicity. Intrinsically, the aforementioned camera calibration method requires the control points to be perfectly positioned on the board. However, the actual positions of these points always contain certain errors due to the inevitable inaccuracy and imprecision of the calibration board fabrication [7,14]. To cope with this problem, the world coordinates of the control points are treated as unknowns, and they are determined together with the camera parameters using the so-called bundle adjustment process [15]. With this scheme, the optimization requires a geometric constraint where three non-collinear control points are selected to form a plane. Specifically, the planarity constraint sets the world coordinate *Z* = 0 for each of the three points and requires the distance between any two of the three points to be accurately known to provide the scale information. Although the three constraint points can be randomly selected, placing them at different corners of the calibration target board is helpful for determining the orientation of the target.

In order to achieve accurate camera calibration, concentric circles are employed in this paper as the calibration target patterns. In general, the detection of the control points, i.e., the centers of the circular targets on the calibration board, can be carried out with the ellipse fitting technique [16]. However, because the lens and perspective distortions affect the shapes of the circles recorded in the raw images, the true locations of these centers cannot be accurately determined. To reduce the calibration errors associated with this issue, a three-step refinement technique is utilized, which detects the target centers by matching synthesized templates with the images of the calibration target patterns in the frontal image plane of each captured image. First, the classical method described in the previous section is used to determine the camera parameters. Second, the raw images are undistorted by applying Eq. (2) and then reversely projected onto the frontal image plane of the world coordinate system using Eq. (1), with the aid of a B-spline interpolation algorithm [17]. Third, the digital image correlation (DIC) process is employed to accurately locate the position of each control point by comparing each target pattern in the frontal image with the corresponding synthesized template pattern, as illustrated in Fig. 1. Using this idea, Datta *et al.* [8] achieved subpixel detection of the control points by performing a quadratic fitting in the neighborhood regions based on their correlation coefficients; nevertheless, such a peak-finding approach is less accurate than the commonly used iterative scheme [18]. In this paper, an algorithm named the parametric sum of squared difference criterion is adopted [19], where the correlation function is written as:

$$C = \sum_{i=1}^{N} {\left[ a\, f(x_i, y_i) + b - g(x'_i, y'_i) \right]}^2 \tag{5}$$

where $a$ is the scale factor, $b$ is the intensity offset, and $f(x_i, y_i)$ and $g(x'_i, y'_i)$ indicate the intensity values at the $i$th pixel in the template pattern and the matching pixel in the frontal image, respectively. The template is a square pattern of $N$ pixels with its center $(x_0, y_0)$ at the center of the concentric circles. Denoting the shift amount between the two matching patterns as $(\xi, \eta)$, the DIC shape function can be expressed as:

$$x'_i = x_i + \xi + s_x \left( x_i - x_0 \right), \qquad y'_i = y_i + \eta + s_y \left( y_i - y_0 \right) \tag{6}$$

where $s_x$ and $s_y$ are the coefficients of the shape function. To determine the six unknowns $\xi$, $\eta$, $s_x$, $s_y$, $a$, and $b$, the Newton-Raphson algorithm can be employed to minimize $C$ in Eq. (5). With the detected $(\xi, \eta)$, the location of each control point in the frontal image can be determined, and these points are then projected back to the raw image plane to obtain the positions of the control points with hyper accuracy.
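A minimal Python sketch of this matching step follows, using scipy's L-M optimizer in place of a hand-written Newton-Raphson loop and a synthetic Gaussian blob as a stand-in for a circular target; the shift, scale, and offset values are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import least_squares

# Synthetic template f: a smooth blob standing in for one synthesized target.
y, x = np.mgrid[0:41, 0:41].astype(float)
xc = yc = 20.0                                       # template center (x0, y0)
f = np.exp(-((x - xc) ** 2 + (y - yc) ** 2) / 60.0)

# "Frontal image" g: the template shifted by (0.37, -0.52) with a linear
# intensity change (scale 1.2, offset 0.1).
g = 1.2 * np.exp(-((x - xc - 0.37) ** 2 + (y - yc + 0.52) ** 2) / 60.0) + 0.1

def residual(q):
    xi, eta, sx, sy, a, b = q
    # Shape function: sample g at the warped template coordinates.
    xq = x + xi + sx * (x - xc)
    yq = y + eta + sy * (y - yc)
    gw = map_coordinates(g, [yq, xq], order=3, mode="nearest")
    return (a * f + b - gw).ravel()                  # parametric SSD residual

q = least_squares(residual, x0=[0, 0, 0, 0, 1, 0], method="lm").x
```

Minimizing the parametric SSD residual recovers the subpixel shift together with the intensity scale and offset, which is exactly what the refinement step needs from each target.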

The basic principle of the new approach is that the circle centers can be better detected in the frontal images than in the oblique raw images. The hyper-accurate detection of the control points directly leads to greater accuracy in the recovery of the camera parameters and the actual position of these points. The procedure of the camera calibration can be summarized as follows:

- Detect the control points in the raw images using the edge detection and ellipse fitting method.
- Optimize the camera parameters and world coordinates of the control points using the L-M algorithm.
- Obtain the frontal images and detect the control points using the DIC method.
- Reversely project the detected control points back to the raw images.
- Re-optimize the camera parameters together with the world coordinates of the control points.

## 3. Projection fringes

#### 3.1. Gamma correction

In the FPB 3D imaging technique, because the 3D shape information is encoded into the fringe patterns projected onto the object surfaces, the full-field fringe information must be retrieved from captured images in order to get the 3D coordinate data. This can be achieved by using sinusoidal fringe patterns as well as the phase-shifting algorithm. The initial fringe pattern (e.g., vertical fringes) is typically generated with the following sinusoidal function:

$$I_0^i(x, y) = I_c \left[ 1 + \cos\!\left( \frac{2\pi f x}{W} + \delta_i \right) \right] \tag{7}$$

where $I_0$ is the intensity of the pattern at pixel coordinate $(x, y)$, $i$ indicates the $i$th image, $W$ is the width of the pattern, $f$ is the fringe frequency, i.e., the number of fringes in the image, $\delta_i$ is the phase-shifting amount, and $I_c$ is a constant denoting the intensity modulation amplitude. The corresponding fringe pattern in the captured image can usually be expressed as

$$I^i(u, v) = a + \sum_{j=1}^{p} b_j \cos\!\left[ j \left( \varphi(u, v) + \delta_i \right) \right] \tag{8}$$

where $(u, v)$ denotes a pixel point in the captured image, $a$ is the background intensity, $b_j$ is the intensity modulation amplitude of the $j$th order harmonic term, $\varphi$ is the fringe phase, and $p$ is the highest significant harmonic order of the captured fringes. In the equation, the high-order harmonics are incorporated to account for the nonlinear luminance distortion that exists in practice. When an $N$-step uniform phase-shifting scheme is employed ($N \geq p + 2$, $\delta_i = 2\pi i / N$), the full-field wrapped phase distribution can be easily determined by:

$${\varphi}^{w}(u, v) = -\arctan\!\left[ \frac{\sum_{i=0}^{N-1} I^i(u, v) \sin\left( 2\pi i / N \right)}{\sum_{i=0}^{N-1} I^i(u, v) \cos\left( 2\pi i / N \right)} \right] \tag{9}$$

where ${\varphi}^{w}$ denotes the wrapped phase. In reality, $p$ can be large and very difficult to determine. However, it can be deduced from Eq. (9) that the phase determination process benefits from the use of a larger phase-shifting step, as higher harmonic orders can be handled.
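The harmonic-rejection property of the N-step scheme can be checked directly in Python; the sketch below builds a synthetic fringe containing a third-order harmonic, which a five-step scheme (N ≥ p + 2) cancels exactly (the amplitudes are illustrative assumptions):

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase retrieval: wrapped phase from N uniformly phase-shifted
    images with delta_i = 2*pi*i/N."""
    N = len(images)
    num = sum(I * np.sin(2 * np.pi * k / N) for k, I in enumerate(images))
    den = sum(I * np.cos(2 * np.pi * k / N) for k, I in enumerate(images))
    return -np.arctan2(num, den)

# A fringe with a 3rd-order harmonic (p = 3): N = 5 satisfies N >= p + 2.
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 200)
N = 5
imgs = [0.5 + 0.4 * np.cos(phi + 2 * np.pi * k / N)
        + 0.05 * np.cos(3 * (phi + 2 * np.pi * k / N)) for k in range(N)]
```

Here `wrapped_phase(imgs)` reproduces the true phase `phi` to machine precision despite the harmonic, whereas a three-step scheme on the same data would not.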

The nonlinear intensity distortion mainly originates from the gamma encoding and decoding process applied by the camera and projector system. Generally, this effect can be described as:

$$\hat{I} = \hat{I}_0^{\,\gamma_0} \tag{10}$$

where $\gamma_0$ is the gamma value of the system, and $\hat{I}$ and $\hat{I}_0$ are the normalized intensities of the captured and computer-generated initial images, respectively. From the equation, it is evident that an effective yet simple method to cope with the gamma problem is to pre-encode the initial computer-generated image. Specifically, by applying an appropriate gamma encoding, $1/\gamma_p$, to the generation of the ideal image $I_0$, the gamma effect can be attenuated to achieve higher accuracies in the phase detection and the eventual 3D imaging. With this handling, Eq. (10) is rewritten as:

$$\hat{I} = \left( \hat{I}_0^{\,1/\gamma_p} \right)^{\gamma_0} = \hat{I}_0^{\,\gamma_0 / \gamma_p} \tag{11}$$

Considering the various uncertainties encountered in real measurements, the best *γ _{p}* can be determined by the following approach:

- With a plate as the target, use a relatively large phase-shifting step, e.g., 20 steps, to determine the phase distribution ${\varphi}_{r}^{w}\left(u,v\right)$ without gamma pre-encoding ($\gamma_p = 1$). Experimental evidence shows that the highest influential harmonic order in typical measurements is generally smaller than six [9], so the large number of phase-shifted images (i.e., 20) yields a reference phase distribution free of nonlinear distortion.
- Apply a series of different gamma values, such as $\gamma_p = \{1.5, 1.7, \ldots, 3.3, 3.5\}$, to the computer-generated fringe patterns, and use the three-step phase-shifting method to determine the phase distributions ${\varphi}_{d}^{w}\left(u,v\right)$. Then, calculate the sum of squared errors of the detected phase $e={\sum}_{u,v}{\left[{\varphi}_{d}^{w}\left(u,v\right)-{\varphi}_{r}^{w}\left(u,v\right)\right]}^{2}$. The three-step phase-shifting algorithm is adopted here because it is the most sensitive to the nonlinear intensity distortion.
- Use curve fitting to find the $\gamma_p$ that gives the smallest phase detection error.
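The steps above can be simulated in Python. In the sketch below the system gamma is an assumed value ($\gamma_0 = 2.2$), and the candidate with the smallest three-step phase error indeed recovers it (a curve fit over the error values could refine the estimate further):

```python
import numpy as np

GAMMA0 = 2.2                                     # simulated (unknown) system gamma
phi = np.linspace(0, 2 * np.pi, 400, endpoint=False)

def captured(gamma_p, delta):
    """Project a fringe pre-encoded with 1/gamma_p; the projector-camera chain
    then applies the system gamma, as in the Eq. (10)/(11) model."""
    I0 = 0.5 + 0.5 * np.cos(phi + delta)         # ideal normalized fringe
    return (I0 ** (1.0 / gamma_p)) ** GAMMA0

def three_step_phase(gamma_p):
    I1, I2, I3 = (captured(gamma_p, d) for d in (0, 2 * np.pi / 3, 4 * np.pi / 3))
    return np.arctan2(np.sqrt(3) * (I3 - I2), 2 * I1 - I2 - I3)

# Sweep candidate gamma values; keep the one with the smallest phase error.
cands = np.linspace(1.5, 3.5, 41)
errs = [np.sum(np.angle(np.exp(1j * (three_step_phase(g) - phi))) ** 2) for g in cands]
best = cands[int(np.argmin(errs))]
```

When the pre-encoding gamma matches the system gamma, the captured fringe is a clean sinusoid and the three-step phase error collapses, which is the selection criterion of the procedure above.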

#### 3.2. Multi-frequency fringes

Another notable problem with the aforementioned phase-shifting approach is that Eq. (9) yields the wrapped phase instead of the unwrapped phase, which is required for the FPB 3D imaging. Although numerous phase unwrapping algorithms exist, the challenge is how to perform the phase unwrapping correctly and quickly when fringe discontinuities, which are common when the object of interest has a complex shape or multiple objects are present, appear in the captured images. In this paper, this critical issue is addressed by using multi-frequency fringe projection. The technique uses a series of fringe patterns with different fringe numbers, and it always uses one and only one fringe in the lowest-frequency fringe pattern, where the unwrapped phase is equal to the wrapped phase. For the other frequencies, the unwrapped phase distribution can be calculated from the unwrapped phase distribution of the previous frequency:

$${\varphi}_i(u, v) = {\varphi}_i^w(u, v) + 2\pi \cdot INT\!\left[ \frac{\left( f_i / f_{i-1} \right) {\varphi}_{i-1}(u, v) - {\varphi}_i^w(u, v)}{2\pi} \right] \tag{12}$$

where $i$ indicates the $i$th projection fringe pattern with $i = \{2, 3, \ldots, n\}$, $n$ is the number of fringe frequencies with $n \geq 2$, $f_i$ is the number of fringes in the $i$th projection pattern with $f_n \geq f_{n-1} \geq \ldots \geq f_1 = 1$, and $INT$ represents the function that rounds a decimal number to the nearest integer. Since Eq. (12) involves only a single and simple governing equation for phase unwrapping, it can handle arbitrary fringe frequencies or fringe numbers as long as the ratio of two adjacent fringe frequencies $f_i / f_{i-1}$ is not too large (e.g., smaller than 10). Furthermore, because of its simplicity, this direct approach can obtain the full-field unwrapped phase distributions very quickly. As a consequence, the approach is suitable for measuring multiple objects with complex shapes without any additional processing.
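The frequency-to-frequency recursion can be sketched in a few lines of Python, with `np.round` playing the role of INT and the fringe numbers {1, 4, 20, 100} used later in the experiments as an example:

```python
import numpy as np

def unwrap_next(phi_prev, phi_w, ratio):
    """Unwrap the wrapped phase of frequency i using the already unwrapped
    phase of frequency i-1, where ratio = f_i / f_{i-1}."""
    return phi_w + 2 * np.pi * np.round((ratio * phi_prev - phi_w) / (2 * np.pi))

x = np.linspace(0, 1, 500, endpoint=False)      # normalized field coordinate
freqs = [1, 4, 20, 100]                         # example fringe numbers
phi = 2 * np.pi * x                             # f_1 = 1: unwrapped = wrapped
for f_prev, f in zip(freqs, freqs[1:]):
    wrapped = np.angle(np.exp(1j * 2 * np.pi * f * x))  # simulated wrapped phase
    phi = unwrap_next(phi, wrapped, f / f_prev)
```

After the loop, `phi` equals the continuous phase of the highest-frequency pattern across the whole field, even though each intermediate `wrapped` array jumps by 2π many times.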

In reality, because the model described by Eq. (11) is simple and the true $\gamma_p$ is difficult to determine exactly, the implementation of the FPB 3D imaging can combine the presented gamma correction scheme with the multi-step phase-shifting technique. Practically, the most widely used four-step phase-shifting technique can be employed in the FPB 3D imaging. Yet a larger phase-shifting step, such as the eight-step technique, should be adopted for the highest-frequency (i.e., the working frequency) fringe patterns to eliminate the nonlinear intensity distortion, reduce noise, and obtain phase distributions with high accuracy.

## 4. System calibration

In 3D imaging, the primary task is to obtain the out-of-plane height or depth information of an object or object system. For a generalized FPB system, where the positions and orientations of the projector and camera can be arbitrarily arranged as long as the regions of interest can be illuminated and captured, the governing equation of the 3D height determination is [1]:

$$Z(u, v) = \frac{1 + c_1 \varphi + c_2 u + c_3 v + c_4 u \varphi + c_5 v \varphi}{d_0 + d_1 \varphi + d_2 u + d_3 v + d_4 u \varphi + d_5 v \varphi} \tag{13}$$

where $Z$ is the out-of-reference-plane height or depth at point $(X, Y)$ corresponding to the pixel $(u, v)$ in the captured image, $c_1 - c_5$ and $d_0 - d_5$ are constant coefficients associated with the geometric and other relevant parameters, and $\varphi$ is the unwrapped phase determined by Eq. (12). Considering that Eq. (13) describes the ideal case where there are no uncertainties such as lens distortion, the actual application can include extra second-order terms in $u$ and $v$ to enhance the imaging accuracy. With this scheme, Eq. (13) becomes Eq. (14), where the second-order combinations of $u$ and $v$ extend the numerator and denominator to the coefficients $c_1 - c_{17}$ and $d_0 - d_{17}$, respectively. By comparison, the authors of [2] proposed to take into account the radial distortion effect of the camera lens by introducing 16 extra terms, ranging from the third to the fifth orders of $u$ and $v$, into Eq. (13); however, that method suffers from divergence and instability in the numerical computation.

The calibration of the FPB system usually relies on a number of accurate and precise gage blocks of different heights [1–3], and the sizes of the gage blocks should be selected according to the size of the field of imaging. This requires manufacturing a large number of accurate and precise gage blocks, which makes the calibration technique impractical for broad applications. To cope with this problem, this paper advocates the use of the flexible camera calibration board, as shown in Fig. 1, for the FPB system calibration. The rationale is that once the 3D coordinates of the calibration target control points are accurately determined during the camera calibration, they can serve as gage points for the calibration of the FPB system.

The flexible FPB calibration technique requires capturing a series of images of the calibration board at different positions with phase-shifted fringes projected on it. A clear calibration board image can be obtained at each position by averaging the captured fringe images. The camera calibration process is applied to these clear board images at all the positions in order to find the intrinsic and extrinsic parameters of the camera as well as the 3D coordinates of each control point on the calibration board. An implicit benefit of using the averaged clear image from phase-shifted images is the significant reduction of noise level, which substantially helps improve the accuracies of the control point detection and the camera calibration.
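The fringe-removal property of this averaging follows from the phase-shifted sinusoids summing to zero, which a few lines of Python confirm (the texture and modulation values are illustrative assumptions):

```python
import numpy as np

# N uniformly phase-shifted sinusoids average out: sum_k cos(phi + 2*pi*k/N) = 0,
# so the mean of the fringe images is the fringe-free board image.
texture = np.random.default_rng(0).uniform(0.2, 0.8, (40, 60))  # stand-in board image
phase = np.linspace(0, 6 * np.pi, 60)           # fringe phase across the columns
N = 4
frames = [texture * (1 + 0.9 * np.cos(phase + 2 * np.pi * k / N)) for k in range(N)]
clear = np.mean(frames, axis=0)
```

The averaged image `clear` reproduces the underlying texture, and averaging the real captured frames additionally suppresses random sensor noise by roughly the square root of the number of frames.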

From the calibrated camera parameters, the height of each calibration control point with respect to a reference plane can be determined at every imaging position. Specifically, for the *j*th control point in the image of the *i*th board position, its physical coordinates on the calibration board, refined by the L-M optimization during camera calibration, are first transformed to the corresponding point $(X_{c,ij}, Y_{c,ij}, Z_{c,ij})$ in the camera coordinate system by employing the camera extrinsic parameters. A virtual reference plane $Z_c = A X_c + B Y_c + C$ is then created by fitting a planar equation to all the points $(X_{c,1j}, Y_{c,1j}, Z_{c,1j})$ in the first board position. Subsequently, the height of every point $(X_{c,ij}, Y_{c,ij}, Z_{c,ij})$ with respect to the virtual reference plane can be calculated as:

$$Z_{ij} = \frac{A X_{c,ij} + B Y_{c,ij} + C - Z_{c,ij}}{\sqrt{A^2 + B^2 + 1}} \tag{15}$$

where $A$, $B$, and $C$ are the planar coefficients of the reference plane and $Z_{ij}$ is the height of the $j$th target control point on the calibration board captured at the $i$th position. Figure 2 shows a representative image of the concentric-circle calibration board with fringes projected onto it, the corresponding phase map, and the out-of-reference-plane height map. A cubic B-spline interpolation process has been applied to obtain the height distributions in the bright regions.
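The reference-plane fitting and the point-to-plane height computation can be sketched as follows (a least-squares plane fit on synthetic points; the plane coefficients are illustrative assumptions, and the sign convention of the height is arbitrary):

```python
import numpy as np

def plane_fit(P):
    """Least-squares reference plane Z = A*X + B*Y + C through points P (n x 3)."""
    return np.linalg.lstsq(np.c_[P[:, :2], np.ones(len(P))], P[:, 2], rcond=None)[0]

def height(P, A, B, C):
    """Signed point-to-plane distance of points P from the fitted plane."""
    return (A * P[:, 0] + B * P[:, 1] + C - P[:, 2]) / np.sqrt(A**2 + B**2 + 1)

# Points on the plane Z = 0.1*X + 0.2*Y + 1.5 have zero height; points offset
# by 0.5 along the plane normal have height 0.5.
rng = np.random.default_rng(1)
XY = rng.uniform(-1, 1, (30, 2))
P = np.c_[XY, 0.1 * XY[:, 0] + 0.2 * XY[:, 1] + 1.5]
A, B, C = plane_fit(P)
Q = P + 0.5 * np.array([0.1, 0.2, -1.0]) / np.sqrt(1.05)
```

In the calibration itself, the plane is fitted to the first board position and the heights of all control points at every position are then measured against it.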

After the unwrapped phase $\varphi$ and the height $Z$ of each control point at every calibration board position are obtained, the system calibration can be carried out to determine the coefficients $c_1 - c_{17}$ and $d_0 - d_{17}$ by minimizing a nonlinear least-squares error defined as:

$$S = \sum_{i=1}^{k} \sum_{j=1}^{l} {\left[ \frac{F_c}{F_d} - Z_{ij} \right]}^2 \tag{16}$$

where $F_c$ and $F_d$ denote the numerator and denominator of Eq. (14) evaluated at the $j$th control point in the $i$th position, $Z_{ij}$ denotes the absolute out-of-reference-plane heights of the control points on the calibration boards at the various positions, $k$ is the total number of board positions, and $l$ is the number of control points on the calibration board.

The coefficients *c*_{1} − *c*_{17} and *d*_{0} − *d*_{17} can be determined by using the L-M algorithm, where an initial guess can be obtained by minimizing a linear least-squares error in the form of
${S}^{\prime}={\sum}_{i=1}^{k}{\sum}_{j=1}^{l}{\left[{F}_{c}-{F}_{d}{Z}_{ij}\right]}^{2}$. It should be noted that at least three different positions of the calibration board must be utilized to correctly determine the coefficients because of the complexity of the governing equation and the camera calibration process. In practice, more than 20 positions are usually adopted. Moreover, the various positions of the calibration board should cover the volume of the field of imaging to assure accurate imaging over the entire field.
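The two-stage fitting — a linear initial guess from the S′ form, then L-M refinement — can be illustrated on synthetic data using the six-term model of Eq. (13); the basis terms, coefficient values, and ranges below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def basis(u, v, phi):
    # Basis terms of the six-term governing equation: {1, phi, u, v, u*phi, v*phi}.
    return np.c_[np.ones_like(u), phi, u, v, u * phi, v * phi]

def model_Z(c, d, B):
    Fc = 1.0 + B[:, 1:] @ c        # numerator: 1 + c1*phi + c2*u + ...
    Fd = B @ d                     # denominator: d0 + d1*phi + ...
    return Fc / Fd

# Synthetic "calibration data" from assumed ground-truth coefficients.
rng = np.random.default_rng(2)
u = rng.uniform(0, 800, 300)
v = rng.uniform(0, 600, 300)
phi = rng.uniform(0, 60, 300)
c_true = np.array([2e-2, 1e-4, -2e-4, 1e-6, -1e-6])
d_true = np.array([5e-2, 1e-3, 1e-5, 2e-5, 1e-8, 1e-8])
B = basis(u, v, phi)
Z = model_Z(c_true, d_true, B)

# Linear initial guess: F_c - F_d * Z = 0 is linear in (c, d), i.e.
# [Phi, -B*Z] [c; d] = -1, solvable by least squares.
Mlin = np.c_[B[:, 1:], -B * Z[:, None]]
x0 = np.linalg.lstsq(Mlin, -np.ones(len(Z)), rcond=None)[0]

# Nonlinear refinement of S = sum (F_c/F_d - Z)^2 with Levenberg-Marquardt.
fit = least_squares(lambda q: model_Z(q[:5], q[5:], B) - Z, x0, method="lm")
```

The linear stage supplies the initial guess required by the L-M refinement, mirroring the S′ minimization described above; the "1" in the numerator fixes the overall scale and removes the trivial all-zero solution.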

## 5. Experiments

#### 5.1. Camera calibration

To demonstrate the validity of the proposed technique, computer simulations along with a real experiment have been conducted. These experiments use a flat calibration panel with a 10 × 7 array of concentric-circle patterns whose grid distance is 25.4*mm* (as illustrated in Fig. 1).

##### 5.1.1. Synthesized images

In the simulation, the images are synthesized with camera parameters that are obtained from rounding a real calibration result where the radial, tangential, and prism lens distortion are considered. Gaussian noise with a standard deviation of 0.2% of the 25.4*mm* grid distance is added to the position of each circular pattern in the horizontal and vertical directions. For better approximation of the synthetic images to the real images, in addition to being blurred by a 5 × 5 Gaussian filter, their intensity values are perturbed with additive white Gaussian noise (*σ* = 2.0). The images are in bitmap format with a size of 2048 × 1536 pixels. The quartic O-MOMS (optimal maximal-order-minimal-support) B-spline algorithm [17] is used for the interpolation involved in the calibration process.

Figure 3 shows the errors in detecting the positions of the control points against their true values. With Heikkila’s technique [16], the control points are directly detected from the raw images, and the relatively large errors are mainly due to the perspective distortion, under which the ellipse centers are usually not the actual centers of the circles. In contrast, with the frontal image correlation approach, the distortion effects are effectively removed, which leads to a 30-fold improvement in the detection of the control points in terms of the root-mean-square error (RMSE) in both the vertical and horizontal directions.

Figure 4 presents the performance of the camera calibration process. In the first step, the conventional camera calibration method cannot provide accurate results. In the second step, the approach of control point adjustment alleviates the defects in the calibration target board and subsequently yields much smaller errors for the camera calibration. In the third and last step, the frontal image correlation process helps refine the positions of the control points with substantially higher accuracies; consequently, high-accuracy camera calibration can be achieved. The residual of the calibration, named the reprojection error and defined as the RMSE between the projection of the control points onto the image planes and their detected locations, is 0.000834 pixels. It is noted that the sharp jump at the beginning of the third step is due to the hyper-accurate locations of the control points detected by the frontal image correlation. These new locations are not associated with the calibration parameters previously obtained in the second step, so larger reprojection errors are produced. However, the errors are reduced significantly and quickly as the iteration continues. Table 1 summarizes the results of the calibration, where the physical rotation angles instead of the Rodrigues rotation vectors are presented for ease of interpretation. It can be seen that the camera parameters can be accurately retrieved with the proposed camera calibration scheme.

##### 5.1.2. Robustness to noise

Figure 5 shows the detection errors of the control points when the images are contaminated with Gaussian noise whose standard deviation varies from 0 to 3 grayscales. A 5 × 5 pixel Gaussian filter has been applied to the relevant images. Since interpolation is critical for refining the control points with sub-pixel accuracy, different interpolation methods [17] have been employed to assess their robustness. For clarity, the data have been separated into two subfigures with the same cubic B-spline results shown in both. The results indicate that the family of B-spline interpolations yields much higher accuracy than the widely used bicubic interpolation for the detection of control points. In particular, the quartic O-MOMS interpolation provides the highest accuracy.

The accuracy of control point detection in the captured images has a direct relation with the recovery accuracy of the true camera parameters. Figure 6 shows the calibration reprojection error as a function of the standard deviation of noise. It can be seen that the B-spline interpolation methods typically provide an accuracy two to three times higher than the bicubic interpolation method with respect to the RMSE of the reprojection.

##### 5.1.3. Real experimental images

Although the proposed camera calibration technique has been demonstrated to be robust against noise and position uncertainties of the control points, the real experimental results are not as good as the simulation ones. This issue can be found in existing techniques as well. For instance, in the work reported by Douxchamps *et al.* [14], where a similar approach was used, the simulation yielded a reprojection RMSE of 0.002 pixels and the real experiment gave 0.045 pixels. Nevertheless, the novel scheme proposed in this paper is still able to provide a remarkable improvement over the existing techniques.

In the experiment, a CMOS camera is used to capture a set of 20 images where the calibration board is located at different positions and oriented in various directions. The images are in bitmap format and each image has a size of 2048 × 1536 pixels.

Figure 7 shows the position error vectors between the detected and projected positions at each control point of the calibration target. With the conventional method, because of the insufficient accuracy of the control point detection and the imperfection of the calibration target, along with the effect of lens and perspective distortions, the reprojection errors are noticeably polarized. In contrast, with the proposed frontal image correlation method, the errors behave more isotropically, indicating that the distortions and the imprecision of the calibration target have been successfully compensated. For the overall reprojection error, the conventional method yielded 0.3839 pixels, and the frontal image correlation method yielded 0.0093 pixels.

#### 5.2. Gamma correction

The validity of the gamma correction technique with the large phase-shifting step scheme and the gamma pre-encoding scheme has been verified by real experiments. To examine the effect of the fringe frequency on the accuracy of phase extraction, three different projection patterns with fringe frequencies of 100, 25, and 8 pixels per fringe are used in the experiments.

Figure 8(a) shows the RMSE of the extracted phase associated with different numbers of phase-shifting steps without gamma pre-encoding, where the reference phase is acquired by a 30-step phase-shifting measurement. As expected, when the number of phase-shifting steps increases, the phase error decreases. Reasonably good accuracy can be achieved with a phase-shifting step as small as four, and high accuracy can be obtained if the phase-shifting step is no smaller than six. The result also reveals that a higher frequency can give slightly higher accuracy. In theory, the highest fringe frequency can be two pixels per fringe, but the effect of fringe interference can occur in this case. Moreover, a further experiment indicates that no perceptible difference can be seen between the cases of four and eight pixels per fringe.

Figure 8(b) shows the phase RMSE for different gamma values pre-encoded in the projection patterns. In the experiment, the three-step phase-shifting method is used because it is sensitive to gamma distortion and thus helpful for identifying the best pre-encoding gamma value. The experimental result identifies the best pre-encoding gamma as 2.65, as illustrated in the figure. In addition, the gamma problem can also be effectively suppressed by using a high fringe frequency.

To further demonstrate the effect of the gamma pre-encoding scheme, Fig. 9 shows two captured fringe patterns without and with gamma pre-encoding. The line plots shown in the figure are the intensity distributions along the lines highlighted in the captured patterns. It is evident that the gamma correction method remarkably reduces the nonlinear intensity distortion.
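The pre-encoding scheme itself is simple: the ideal pattern is raised to the power 1/γ before projection so that the projector's gamma nonlinearity restores a near-ideal sinusoid. A minimal sketch, using the experimentally determined γ = 2.65 and an idealized projector model (a pure power-law nonlinearity, which is an assumption):

```python
import numpy as np

GAMMA = 2.65  # best pre-encoding gamma found in the experiment

def pre_encode(pattern, gamma=GAMMA):
    """Raise an ideal pattern (values in [0, 1]) to the power 1/gamma
    before projection."""
    return np.clip(pattern, 0.0, 1.0) ** (1.0 / gamma)

# Idealized projector model: the device applies I -> I**gamma, so the
# pre-encoded pattern comes out as a near-ideal sinusoid
x = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ideal = 0.5 + 0.5 * np.cos(x)
projected = pre_encode(ideal) ** GAMMA
```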

Based on the experimental data, practical 3D imaging should combine the following: a high fringe frequency such as eight pixels per fringe, gamma pre-encoding, and a large number of phase-shifting steps (e.g., eight).

#### 5.3. FPB calibration and experiment

To demonstrate the performance of the proposed techniques for accuracy-enhanced FPB 3D imaging, five real experiments have been conducted at various scales of the field of view. Four fringe frequencies, with 1, 4, 20, and 100 fringes across the 800-pixel-wide image, are used to generate the projection fringe patterns according to the multi-frequency phase-shifting technique. For the working frequency, i.e., the highest frequency with 100 fringes in the pattern, the eight-step phase-shifting algorithm is employed, whereas the four-step algorithm is used for the other frequencies. In addition, the pre-encoded gamma value is predetermined as 2.65, and the camera calibration parameters are also predetermined with the techniques described in the previous sections.
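The role of the four frequencies can be illustrated with a standard temporal phase-unwrapping cascade, in which the unwrapped phase of a coarser pattern resolves the 2π ambiguity of the next finer one. The sketch below is an idealized, noise-free illustration of this idea (the exact unwrapping formula used in the paper is not specified here):

```python
import numpy as np

def unwrap_with_lower(phi_high_wrapped, phi_low_unwrapped, ratio):
    """Resolve the 2*pi ambiguity of a fine wrapped phase using the
    unwrapped phase of a pattern `ratio` times coarser."""
    k = np.round((ratio * phi_low_unwrapped - phi_high_wrapped) / (2 * np.pi))
    return phi_high_wrapped + 2 * np.pi * k

# Noise-free simulation: 1, 4, 20, and 100 fringes across the field
x = np.linspace(-0.49, 0.49, 800)   # normalized coordinate, |2*pi*x| < pi
true = {f: 2 * np.pi * f * x for f in (1, 4, 20, 100)}
wrapped = {f: np.angle(np.exp(1j * p)) for f, p in true.items()}

phi = wrapped[1]                    # one fringe only: already unambiguous
for lo, hi in ((1, 4), (4, 20), (20, 100)):
    phi = unwrap_with_lower(wrapped[hi], phi, hi // lo)
```

After the cascade, `phi` recovers the absolute phase of the finest (working-frequency) pattern.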

The first experiment aims to test the imaging accuracy, where a flat plate of 457.2*mm* × 304.8*mm* with eight high-precision gage blocks and a concentric-circle array board with a grid size of 25.4*mm* are selected as the object of interest and the calibration board, respectively. Figure 10 shows the testing plate together with its experimentally obtained 2D and 3D plots. The results summarized in Table 2 indicate that the maximum error of the mean measured heights over the entire field is 0.048*mm*, which yields a relative accuracy (defined as the ratio of the out-of-plane measurement accuracy to the in-plane dimension) of 0.010%. This confirms the validity and reliability of the 3D imaging technique.
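The quoted relative accuracy follows directly from its definition:

```python
# Relative accuracy as defined in the text: ratio of the out-of-plane
# measurement accuracy to the in-plane dimension of the test plate
max_height_error_mm = 0.048
plate_width_mm = 457.2
relative_accuracy_percent = 100 * max_height_error_mm / plate_width_mm
# approximately 0.010 percent
```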

For comparison purposes, the captured images of the first three frequencies are also used for the 3D imaging analysis. In this case the working frequency is 40 pixels per fringe, or 20 fringes across the 800-pixel width, and the four-step phase-shifting algorithm is used. The mean measured heights of the eight gage blocks are 25.729*mm*, 18.903*mm*, 6.340*mm*, 6.302*mm*, 12.332*mm*, 15.622*mm*, 9.522*mm*, and 50.855*mm*, respectively. The largest error is −0.368*mm* and the largest standard deviation is 0.449*mm*. The results clearly show the effectiveness of using high-frequency fringes and a large number of phase-shifting steps. In practice, it is favorable to apply all of the following: high-frequency fringes, gamma pre-encoding, and a large number of phase-shifting steps.

The other four experiments are intended to verify the flexibility of the proposed technique for 3D imaging at various scales. Figure 11 illustrates the full 360° 3D imaging result of a conch shell obtained by combining the 3D images captured from multiple views. In this case, the conch shell is 221*mm* long and the grid distance of the calibration target pattern is scaled down to 16*mm*. Visual inspection shows that the full 360° image matches the actual object very well, and the surface structure can be seen with ease. The high-quality full 360° 3D registration clearly benefits from the accurate 3D measurements.

Another experiment is carried out to validate the performance of the 3D imaging at small scales, where a small calibration board with a pattern grid distance of 6.0*mm* is employed to calibrate the system. In this experiment, an optical lens is utilized to focus the projection fringes onto a small region covering the object of interest. Figure 12 shows the imaging result of a 48.0*mm* × 56.0*mm* printed circuit board with many components on it. Except for the shadow and shiny regions, the result indicates good imaging accuracy.

Using a setup similar to that of the previous experiment, the 3D image of a small metal horse sculpture (45*mm* tall) is acquired, as shown in Fig. 13. With the full 360° 3D model successfully formed, the experiment again demonstrates both the accuracy and the flexibility of the FPB 3D imaging approach.

The last experiment is conducted to demonstrate the validity of the technique for 3D shape measurement at a relatively large scale, where a large calibration board with a pattern grid distance of 50.8*mm* is used. Figure 14 shows the 3D shape measurement result of a toy lion whose length is 600.0*mm*. This and the previous experiments demonstrate that the proposed technique can accurately acquire 3D images or measure the 3D shapes of objects at various scales as long as the size of the calibration board matches the field of imaging.

## 6. Conclusion

A series of new approaches toward hyper-accurate calibration and application of the FPB 3D imaging system has been presented. The contributions of this work can be summarized as three major improvements. First, an advanced geometric camera calibration technique allows low-cost calibration hardware to yield accurate camera calibration results by utilizing the bundle adjustment technique and the frontal image correlation scheme. Second, an effective gamma pre-encoding method, along with the high-frequency fringe and large phase-shifting step schemes, is proposed to ensure accurate retrieval of the fringe phase. Third, an enhanced governing equation capable of effectively handling various uncertainties and lens distortion issues is proposed for the 3D shape determination, and an advanced flexible calibration technique using the control points of the calibration target as gage points is introduced. Because only one calibration board of appropriate size is required for each imaging scenario and the calibration pattern can be printed by a regular printer, the calibration technique is remarkably flexible and convenient to use. Both the simulation and the real experimental results verify the validity, reliability, and practicality of the proposed technique.

## Acknowledgments

This work was supported by the U.S. Army Research Office (ARO) under Grant No. W911NF-10-1-0502 and the National Science Foundation (NSF) under Grant No. CMMI-0825806.

## References and links

**1. **H. Du and Z. Wang, “Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system,” Opt. Lett. **32**(16), 2438–2440 (2007). [CrossRef] [PubMed]

**2. **L. Huang, P. Chua, and A. Asundi, “Least-squares calibration method for fringe projection profilometry considering camera lens distortion,” Appl. Opt. **49**(9), 1539–1548 (2010). [CrossRef] [PubMed]

**3. **Z. Wang, D. Nguyen, and J. Barnes, “Some practical considerations in fringe projection profilometry,” Opt. Lasers Eng. **48**(2), 218–225 (2010). [CrossRef]

**4. **Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. **47**(5), 053604 (2008). [CrossRef]

**5. **X. Zhang and L. Zhu, “Projector calibration from the camera image point of view,” Opt. Eng. **48**(11), 117208 (2009). [CrossRef]

**6. **Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. **22**(11), 1330–1334 (2000). [CrossRef]

**7. **J. Lavest, M. Viala, and M. Dhome, “Do we really need an accurate calibration pattern to achieve a reliable camera calibration?” in Proceedings European Conference on Computer Vision (1998), pp. 158–174.

**8. **A. Datta, J. Kim, and T. Kanade, “Accurate camera calibration using iterative refinement of control points,” in Proceedings IEEE International Conference on Computer Vision Workshops (IEEE, 2009), pp. 1201–1208.

**9. **B. Pan, Q. Kemao, L. Huang, and A. Asundi, “Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry,” Opt. Lett. **34**(4), 416–418 (2009). [CrossRef] [PubMed]

**10. **J. Zhong and J. Weng, “Phase retrieval of optical fringe patterns from the ridge of a wavelet transform,” Opt. Lett. **30**(19), 2560–2562 (2005). [CrossRef] [PubMed]

**11. **L. Xiong and S. Jia, “Phase-error analysis and elimination for nonsinusoidal waveforms in Hilbert transform digital-fringe projection profilometry,” Opt. Lett. **34**(15), 2363–2365 (2009). [CrossRef] [PubMed]

**12. **J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of art in structured light patterns for surface profilometry,” Pattern Recogn. **43**(8), 2666–2680 (2010). [CrossRef]

**13. **T. Hoang, B. Pan, D. Nguyen, and Z. Wang, “Generic gamma correction for accuracy enhancement in fringe-projection profilometry,” Opt. Lett. **35**(12), 1992–1994 (2010). [CrossRef] [PubMed]

**14. **D. Douxchamps and K. Chihara, “High-accuracy and robust localization of large control markers for geometric camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. **31**(2), 376–383 (2009). [CrossRef]

**15. **C. Engels, H. Stewenius, and D. Nister, “Bundle adjustment rules,” in Proceedings on Photogrammetric Computer Vision (2006), pp. 266–271.

**16. **J. Heikkila, “Moment and curvature preserving technique for accurate ellipse boundary detection,” in Proceedings IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1998), pp. 734–737.

**17. **L. Luu, Z. Wang, M. Vo, T. Hoang, and J. Ma, “Accuracy enhancement of digital image correlation with B-spline interpolation,” Opt. Lett. **36**(16), 3070–3072 (2011). [CrossRef] [PubMed]

**18. **B. Pan, K. Qian, H. Xie, and A. Asundi, “Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review,” Meas. Sci. Technol. **20**(6), 062001 (2009). [CrossRef]

**19. **B. Pan, H. Xie, and Z. Wang, “Equivalence of digital image correlation criteria for pattern matching,” Appl. Opt. **49**(28), 5501–5509 (2010). [CrossRef] [PubMed]