Iterative calibration method for measurement system having lens distortions in fringe projection profilometry

Abstract

In fringe projection profilometry, system calibration is crucial for guaranteeing measurement accuracy. Its difficulty lies in calibrating the projector parameters, especially when the projector lens has distortions, since the projector, unlike a camera, cannot capture images, making it difficult to know the correspondences between its pixels and object points. To solve this issue, this paper exploits the fact that the fringe phases on a plane board theoretically follow a rational function and proposes an iterative calibration method based on phase measurement. Projecting fringes onto the calibration board and fitting the measured phases with a rational function allow us to determine the projector pixels corresponding to the featured points on the calibration board. Using these correspondences, the projector parameters are easy to estimate. Noting that the projector lens distortions may deform the fitted phase map and thus induce errors in the estimates of the projector parameters, this paper suggests an iterative strategy to overcome this problem. By implementing the phase fitting and the parameter estimation alternately, the intrinsic and extrinsic parameters of the projector, as well as its lens distortion coefficients, are determined accurately. To compensate for the effects of the lens distortions on measurement, this paper gives two solutions. The pre-compensation actively curves the fringes in the computer when generating them, whereas with the post-compensation, the lens distortion correction is performed in the data-processing stage. Both methods are experimentally verified to be effective in improving measurement accuracy.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry [1–4], having the advantages of high speed, noncontact operation, and full-field measurement, is widely used in three-dimensional (3D) shape measurement. In its implementation, a projector casts sinusoidal fringe patterns onto a measured surface, and a camera at a different angle captures the patterns deformed by the depth variation of the surface. Measuring the fringe phases enables us to reconstruct the 3D shape of the measured surface. Therefore, the system calibration, which determines the mapping relationship between the phase map and 3D point coordinates, is crucial for guaranteeing the measurement accuracy. Conventionally, reference-plane-based methods are used to determine this mapping relationship between fringe phases and object heights [5,6]. This phase-to-height conversion is usually represented implicitly with a polynomial [7–9] or with a function deduced from the system geometry [10–14]. In [14], for example, by projecting uneven fringes, a one-coefficient relationship is deduced for simplifying the data processing. Compared with these techniques, a more general approach is to explicitly calibrate all the parameters of the camera and the projector, including their intrinsic and extrinsic parameters, as well as their lens distortion coefficients.

As is well known, camera calibration techniques have been extensively developed in the fields of photogrammetry and computer vision [15,16]. With them, a specially designed target, usually a checkerboard with a known square size, is placed in the field of view of the camera to help determine the correspondences between camera pixels and object points. The camera parameters are estimated from these correspondences by using optimization algorithms. To enhance the calibration accuracy, efforts have been made using bundle adjustment strategies [17,18].

Although a projector has seemingly the same model as a camera, projector calibration remains challenging because the projector, unlike a camera, cannot capture images, making it difficult to know the correspondences between projector pixels and object points. To solve this problem, a well-calibrated camera is usually used as an aid [19–24]. Taking [22] for example, the projector to be calibrated casts a checkerboard pattern onto a calibration board, and the image is captured by a camera. By extracting the featured points from these images and using the camera parameters, the spatial coordinates of the object points are calculated, so that the correspondences between projector pixels and object points are determined in this indirect way. In doing so, many factors may induce uncertainties in the calibration results. In particular, an optical lens generally has a point spread function (PSF) equivalent to the impulse response of a low-pass filter. Such low-pass properties of lenses blur the captured images to a certain extent [25]. To suppress these uncertainties, [23] suggested a two-step method that estimates coarse parameters by identifying the captured markers and then achieves more accurate results via iterative adjustments.

Instead of projecting specially designed patterns onto the calibration board, using sinusoidal fringe patterns is more suitable for projector calibration [26–34]. With them, the projector pixels are coded as fringe phases and are thus less sensitive to the low-pass properties of the projector and camera lenses. In practice, however, the phases at edges or in dark areas cannot be measured accurately, raising the question of how the phases at the featured points can be measured reliably. To solve this problem, [30] and [31] use a red-blue checkerboard, instead of the popularly used black-white ones, as the calibration board, thus avoiding measuring the phases in dark areas. Even so, the phase measurement at square corners is still affected by the low-pass properties of the devices [25]. Another option is to use a board having a circle array, with the circle centers considered to be the featured points [32,33]. Although the image of each circle, due to perspective projection, does not exactly have the shape of an ellipse [35], its centroid is extracted and approximately used as a featured point. The results of this method strongly depend on the ellipse segmentation, and using sub-pixel edge detection is helpful for improving the accuracy [34]. Besides, the existing methods using sinusoidal fringes suffer from phase measurement errors induced by such factors as noise, device nonlinearities, and illumination fluctuations.

From the preceding discussion, we know that developing an accurate system calibration method for fringe projection profilometry remains challenging. This paper presents, to the best of our knowledge, a novel method for calibrating a measurement system having lens distortions. This method uses a checkerboard as the calibration board. By projecting sinusoidal fringes onto it, and fitting the fringe phases with a rational function which is demonstrated to be exact in representing the phase map of a plane board, the phases at corner points are accurately extracted and, further, the corresponding projector pixels are determined. An iterative strategy is used for restraining the effects of lens distortions on the phase fitting results. As a result, the projector parameters, including the intrinsic and extrinsic parameters and the radial and tangential distortion coefficients, are determined accurately. To compensate for the effects of the lens distortions on measurement, this paper also suggests a pre- and a post-compensation method. Both are experimentally demonstrated to be effective in improving measurement accuracy.

2. Model of measurement system

2.1 Camera model

In fringe projection profilometry, the measurement system mainly consists of a camera and a projector. The standard camera model has been well established in the field of computer vision and can be found in the related literature [36–38]. In this section, we briefly restate the camera model for convenience in using its concepts, notations, and formulas.

Calibrating a camera is to obtain its intrinsic and extrinsic parameters. The intrinsic parameters include the focal length, the principal point and the pixel skew factor; and the extrinsic parameters include a rotation matrix and a translation vector.

As shown in Fig. 1, Q is an object point having coordinates (Xw, Yw, Zw) in the world coordinate system and coordinates (Xc, Yc, Zc) in the camera coordinate system. Q produces its image at the point qc on the image plane of the camera. In the image coordinate system, qc has pixel coordinates (u, v). The correspondence between the camera coordinates and the image coordinates is represented with

$${Z_\textrm{c}}\left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = {\textbf{A}_\textrm{c}}\left[ {\begin{array}{c} {{X_\textrm{c}}}\\ {{Y_\textrm{c}}}\\ {{Z_\textrm{c}}} \end{array}} \right] = \left[ {\begin{array}{ccc} {{\eta_u}} &0 &{{u_0}}\\ 0 &{{\eta_v}} &{{v_0}}\\ 0 &0 &1 \end{array}} \right]\left[ {\begin{array}{c} {{X_\textrm{c}}}\\ {{Y_\textrm{c}}}\\ {{Z_\textrm{c}}} \end{array}} \right],$$
where ηu = fc/du and ηv = fc/dv, with fc being the focal length of the camera lens, and du and dv being the pixel sizes along the u and v axes, respectively; (u0, v0) are the coordinates of the principal point. Ac is called the intrinsic matrix of the camera. The transformation from the world coordinates to the camera coordinates is represented with
$$\left[ {\begin{array}{c} {{X_\textrm{c}}}\\ {{Y_\textrm{c}}}\\ {{Z_\textrm{c}}} \end{array}} \right] = {\textbf{R}_\textrm{c}}\left[ {\begin{array}{c} {{X_\textrm{w}}}\\ {{Y_\textrm{w}}}\\ {{Z_\textrm{w}}} \end{array}} \right] + {\textbf{T}_\textrm{c}} = \left[ {\begin{array}{ccc} {{r_1}} &{{r_2}} &{{r_3}}\\ {{r_4}} &{{r_5}} &{{r_6}}\\ {{r_7}} &{{r_8}} &{{r_9}} \end{array}} \right]\left[ {\begin{array}{c} {{X_\textrm{w}}}\\ {{Y_\textrm{w}}}\\ {{Z_\textrm{w}}} \end{array}} \right] + \left[ {\begin{array}{c} {{t_1}}\\ {{t_2}}\\ {{t_3}} \end{array}} \right] = [{{\textbf{R}_\textrm{c}},{\textbf{T}_\textrm{c}}} ]\left[ {\begin{array}{c} {{X_\textrm{w}}}\\ {{Y_\textrm{w}}}\\ {{Z_\textrm{w}}}\\ \textrm{1} \end{array}} \right],$$
where [Rc, Tc] is the extrinsic parameter matrix, including a 3×3 rotation matrix Rc and a 3×1 translation vector Tc. Combining Eqs. (1) and (2), the projection from the spatial point Q to the image point qc is represented with
$${Z_\textrm{c}}\left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = {\textbf{A}_\textrm{c}}[{{\textbf{R}_\textrm{c}},{\textbf{T}_\textrm{c}}} ]\left[ {\begin{array}{c} {{X_\textrm{w}}}\\ {{Y_\textrm{w}}}\\ {{Z_\textrm{w}}}\\ 1 \end{array}} \right].$$

Fig. 1. The models of the projector-camera system.

In measurement practice, distortions of the camera lens may decrease the measurement accuracy. In the presence of lens distortions, the image of a point deviates from its correct position. These distortions should be described in the camera coordinate system, but we have to use the coordinates scaled by 1/fc instead, because the lens focal length fc cannot be separated during calibration. In this case, the relation between the ideal coordinates of a camera pixel (xc, yc) and the distorted coordinates $({\hat{x}_\textrm{c}},{\hat{y}_\textrm{c}})$ is represented with

$$\left[ {\begin{array}{c} {{{\hat{x}}_\textrm{c}}}\\ {{{\hat{y}}_\textrm{c}}} \end{array}} \right] = ({1 + {k_{\textrm{c1}}}{r_\textrm{c}}^2 + {k_{\textrm{c2}}}r_\textrm{c}^4} )\left[ {\begin{array}{c} {{x_\textrm{c}}}\\ {{y_\textrm{c}}} \end{array}} \right] + \left[ {\begin{array}{c} {2{p_{\textrm{c1}}}{x_\textrm{c}}{y_\textrm{c}} + {p_{\textrm{c2}}}({{r_\textrm{c}}^2 + 2x_\textrm{c}^\textrm{2}} )}\\ {{p_{\textrm{c1}}}({{r_\textrm{c}}^2 + 2y_\textrm{c}^2} )+ 2{p_{\textrm{c}2}}{x_\textrm{c}}{y_\textrm{c}}} \end{array}} \right],$$
where xc = (u − u0)/ηu and yc = (v − v0)/ηv are the scaled camera coordinates, and rc2 = xc2 + yc2. kc1 and kc2 are the radial distortion coefficients, and pc1 and pc2 are the tangential distortion coefficients. Here, the higher order distortions are neglected due to their insignificant values.

The parameters of a camera, including its intrinsic matrix Ac and extrinsic matrix [Rc, Tc], as well as its lens distortion coefficients, are determined by using the technique known as camera calibration. With it, a target having specially designed markers with known sizes is usually used. For example, the black-white checkerboard shown in Fig. 1 is the most popularly used calibration target, with its square corners serving as the featured points. By capturing images of the target at various positions and angles and extracting their featured points, the correspondences between the object points and their image positions on the imaging plane of the camera are easy to determine. Using these correspondences, the camera parameters are estimated according to the model in Eqs. (1) through (4). Although the data processing involves complex nonlinear optimization, software implementing this technique is readily available, making it easy to use in many applications. Throughout this paper, we use the standard method proposed by Zhang [16] to calibrate the camera, with the camera parameters being estimated using the iterative Levenberg-Marquardt algorithm.
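
Such a calibration can be run with readily available software; the following is a minimal sketch using the OpenCV library, assuming the checkerboard photographs are stored as hypothetical files board_pose_00.png, board_pose_01.png, and so on; the board geometry matches the 9 × 12-square checkerboard used in Section 3.4, and cv2.calibrateCamera performs the Levenberg-Marquardt refinement internally.

```python
# Minimal sketch of Zhang-style camera calibration with OpenCV.
# File names and board geometry are assumptions for illustration only.
import cv2
import numpy as np

board_size = (8, 11)      # inner corners of a 9 x 12-square checkerboard
square_size = 25.0        # mm

# 3D corner coordinates in the board frame (Zw = 0)
object_corners = np.zeros((board_size[0] * board_size[1], 3), np.float32)
object_corners[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

object_points, image_points = [], []
for fname in ["board_pose_%02d.png" % k for k in range(15)]:   # hypothetical file names
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    object_points.append(object_corners)
    image_points.append(corners)

# Intrinsics A_c, distortion coefficients (kc1, kc2, pc1, pc2, kc3) and per-pose extrinsics
rms, A_c, dist_c, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, gray.shape[::-1], None, None)
print("reprojection RMS (pixels):", rms)
```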

2.2 Projector model

A projector has the same model as the camera, with inverse directions of light rays. As shown in Fig. 1, the point Q in the projector coordinate system has coordinates (Xp, Yp, Zp). It is illuminated by the light source at qp on the image plane of the projector, with its pixel coordinates being (s, t). Similar to Eq. (1), their relation is

$${Z_\textrm{p}}\left[ {\begin{array}{c} s\\ t\\ 1 \end{array}} \right] = {\textbf{A}_\textrm{p}}\left[ {\begin{array}{c} {{X_\textrm{p}}}\\ {{Y_\textrm{p}}}\\ {{Z_\textrm{p}}} \end{array}} \right] = \left[ {\begin{array}{ccc} {{\kappa_s}} &0 &{{s_0}}\\ 0 &{{\kappa_t}} &{{t_0}}\\ 0 &0 &1 \end{array}} \right]\left[ {\begin{array}{c} {{X_\textrm{p}}}\\ {{Y_\textrm{p}}}\\ {{Z_\textrm{p}}} \end{array}} \right],$$
where κs = fp/ds and κt = fp/dt, with fp being the focal length of the projector lens, and ds and dt being the pixel sizes along the s and t axes of the projector image plane, respectively; (s0, t0) are the coordinates of the principal point. Referring to Eqs. (1)–(3), we simply derive the transformation from the world coordinates to the projector pixel coordinates as
$${Z_\textrm{p}}\left[ {\begin{array}{c} s\\ t\\ 1 \end{array}} \right] = {\textbf{A}_\textrm{p}}[{{\textbf{R}_\textrm{p}},{\textbf{T}_\textrm{p}}} ]\left[ {\begin{array}{c} {{X_\textrm{w}}}\\ {{Y_\textrm{w}}}\\ {{Z_\textrm{w}}}\\ 1 \end{array}} \right].$$
When the projector lens has distortions, using the projector coordinates scaled by 1/fp, the relationship between the ideal pixel (xp, yp) and its real position $({\hat{x}_\textrm{p}},{\hat{y}_\textrm{p}})$ is formulated as
$$\left[ {\begin{array}{c} {{{\hat{x}}_\textrm{p}}}\\ {{{\hat{y}}_\textrm{p}}} \end{array}} \right] = ({1 + {k_{\textrm{p}1}}{r_\textrm{p}}^2 + {k_{\textrm{p}2}}r_\textrm{p}^4} )\left[ {\begin{array}{c} {{x_\textrm{p}}}\\ {{y_\textrm{p}}} \end{array}} \right] + \left[ {\begin{array}{c} {2{p_{\textrm{p}1}}{x_\textrm{p}}{y_\textrm{p}} + {p_{\textrm{p}2}}({{r_\textrm{p}}^2 + 2x_\textrm{p}^2} )}\\ {{p_{\textrm{p}1}}({{r_\textrm{p}}^2 + 2y_\textrm{p}^2} )+ 2{p_{\textrm{p}2}}{x_\textrm{p}}{y_\textrm{p}}} \end{array}} \right],$$
where xp = (s − s0)/κs, yp = (t − t0)/κt, and rp2 = xp2 + yp2. kp1 and kp2 are the radial distortion coefficients, and pp1 and pp2 are the tangential ones. The higher order terms are neglected.

Despite having the same model as a camera, a projector is much more difficult to calibrate, because it has inverse directions of light rays and cannot capture images, making it difficult to know the correspondences between projector pixels and object points. This paper mainly focuses on how the correspondences between the projector pixels and the featured points on the calibration board can be accurately established. Once these correspondences are obtained, the projector parameters can be estimated in a similar fashion to camera calibration.

2.3 Projector-camera model

As shown in Fig. 1, the measurement system is based on the principle of triangulation, and works like a binocular stereovision system. For convenience, we define Hc=Ac[Rc, Tc], so that Eq. (3) is restated as

$${Z_\textrm{c}}\left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = {\textbf{H}_c}\left[ {\begin{array}{c} {{X_\textrm{w}}}\\ {{Y_\textrm{w}}}\\ {{Z_\textrm{w}}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{cccc} {{h_{\textrm{c}11}}}&{{h_{\textrm{c}12}}}&{{h_{\textrm{c}13}}}&{{h_{c14}}}\\ {{h_{\textrm{c}21}}}&{{h_{\textrm{c}22}}}&{{h_{\textrm{c23}}}}&{{h_{c24}}}\\ {{h_{\textrm{c}31}}}&{{h_{\textrm{c}32}}}&{{h_{\textrm{c}33}}}&{{h_{c34}}} \end{array}} \right]\left[ {\begin{array}{c} {{X_\textrm{w}}}\\ {{Y_\textrm{w}}}\\ {{Z_\textrm{w}}}\\ 1 \end{array}} \right].$$
Similarly, by defining Hp=Ap[Rp, Tp], we have
$${Z_\textrm{p}}\left[ {\begin{array}{c} s\\ t\\ 1 \end{array}} \right] = {\textbf{H}_\textrm{p}}\left[ {\begin{array}{c} {{X_\textrm{w}}}\\ {{Y_\textrm{w}}}\\ {{Z_\textrm{w}}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{cccc} {{h_{\textrm{p}11}}}&{{h_{\textrm{p}12}}}&{{h_{\textrm{p}13}}}&{{h_{\textrm{p1}4}}}\\ {{h_{\textrm{p}21}}}&{{h_{\textrm{p}22}}}&{{h_{\textrm{p}23}}}&{{h_{\textrm{p}24}}}\\ {{h_{\textrm{p}31}}}&{{h_{\textrm{p}32}}}&{{h_{\textrm{p}33}}}&{{h_{\textrm{p34}}}} \end{array}} \right]\left[ {\begin{array}{c} {{X_\textrm{w}}}\\ {{Y_\textrm{w}}}\\ {{Z_\textrm{w}}}\\ 1 \end{array}} \right].$$
If the camera and projector are well calibrated, the matrices Hc and Hp are available, and simultaneously the geometric deformations caused by lens distortions in projected and captured patterns are also corrected. In this case the system can be used to measure the 3D shape of an object. Consider an object point having world coordinates (Xw, Yw, Zw). It is illuminated by the projector pixel (s, t) and produces its image at the camera pixel (u, v). By eliminating the scale factors Zc and Zp from Eqs. (8) and (9), we have a system of equations
$$\left[ {\begin{array}{ccc} {u{h_{\textrm{c}31}} - {h_{\textrm{c}11}}} & {u{h_{\textrm{c}32}} - {h_{\textrm{c}12}}} & {u{h_{\textrm{c}33}} - {h_{\textrm{c}13}}}\\ {v{h_{\textrm{c}31}} - {h_{\textrm{c}21}}} & {v{h_{\textrm{c}32}} - {h_{\textrm{c}22}}} & {v{h_{\textrm{c}33}} - {h_{\textrm{c}23}}}\\ {s{h_{\textrm{p}31}} - {h_{\textrm{p}11}}} & {s{h_{\textrm{p}32}} - {h_{\textrm{p}12}}} & {s{h_{\textrm{p}33}} - {h_{\textrm{p}13}}}\\ {t{h_{\textrm{p}31}} - {h_{\textrm{p}21}}} & {t{h_{\textrm{p}32}} - {h_{\textrm{p}22}}} & {t{h_{\textrm{p}33}} - {h_{\textrm{p}23}}} \end{array}} \right]\left[ {\begin{array}{c} {{X_\textrm{w}}}\\ {{Y_\textrm{w}}}\\ {{Z_\textrm{w}}} \end{array}} \right] = \left[ {\begin{array}{c} {{h_{\textrm{c}14}} - u{h_{\textrm{c}34}}}\\ {{h_{\textrm{c}24}} - v{h_{\textrm{c}34}}}\\ {{h_{\textrm{p}14}} - s{h_{\textrm{p}34}}}\\ {{h_{\textrm{p}24}} - t{h_{\textrm{p}34}}} \end{array}} \right].$$
Equation (10) means that by matching the pixels (u, v) and (s, t), the 3D coordinates of the corresponding object point, (Xw, Yw, Zw), can be calculated by solving this equation system.

In the fringe projection technique, the projector pixel (s, t) is coded using fringe phases. In the next section, we shall introduce the phase-measuring method and then show how the phases can be converted into projector pixels. Note that the equation system in Eq. (10) contains four equations involving three unknowns. In measurement, only one of the last two equations necessarily remains in the equation system, depending on the fringe orientation. If vertical fringes (perpendicular to the s-axis) are used, the third equation remains; if horizontal fringes (perpendicular to the t-axis) are used, we select the last equation instead. Generally, fringes roughly perpendicular to the system baseline (which connects the projector and camera lens centers) are used in measurement, for achieving a high phase sensitivity.
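
For clarity, the following is a minimal sketch of this triangulation step, assuming the 3 × 4 matrices Hc and Hp of Eqs. (8) and (9) are available as NumPy arrays; the function name is illustrative, and the third equation of Eq. (10) is kept, i.e., vertical fringes are assumed.

```python
# Sketch of Eq. (10): recover (Xw, Yw, Zw) from a matched camera pixel (u, v)
# and projector coordinate s (vertical fringes). Hc, Hp are 3x4 NumPy arrays.
import numpy as np

def triangulate(Hc, Hp, u, v, s):
    M = np.array([
        u * Hc[2, :3] - Hc[0, :3],
        v * Hc[2, :3] - Hc[1, :3],
        s * Hp[2, :3] - Hp[0, :3],      # third equation of Eq. (10)
    ])
    b = np.array([
        Hc[0, 3] - u * Hc[2, 3],
        Hc[1, 3] - v * Hc[2, 3],
        Hp[0, 3] - s * Hp[2, 3],
    ])
    # Least-squares solution; also valid if all four equations are kept
    Xw, *_ = np.linalg.lstsq(M, b, rcond=None)
    return Xw                            # (Xw, Yw, Zw)
```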

3. Iterative calibration method

3.1 Phase measuring

Fringe projection technique measures the 3D shape of an object by projecting sinusoidal fringes onto it and measuring the phases. In this work, we calibrate the system also by measuring the phases of fringes projected onto the calibration board. We use the phase-shifting technique to recover the fringe phases. Presume for the moment that vertical fringes are used, so that a sequence of phase-shifting fringe patterns is generated in the computer. Among them, the mth (m = 0, 1, …, M−1) frame has the form

$${g_m}(s,t) = \alpha + \beta cos[{{2\pi s} \mathord{\left/ {\vphantom {{2\pi s} {{\lambda_s}}}} \right.} {{\lambda _s}}} + {{2\pi m} \mathord{\left/ {\vphantom {{2\pi m} M}} \right.} M}],$$
where λs is the fringe pitch, and the two positive numbers α and β denote the background and the contrast of the fringes, respectively, satisfying 0≤α±β≤1 in order to avoid negative or over-saturated gray levels. When projecting these patterns onto an object, e.g., onto a calibration board, the distorted fringe patterns captured by the camera are represented with
$${I_m}(u,v) = A(u,v) + B(u,v)cos[{\varPhi _s}(u,v) + {{2\pi m} \mathord{\left/ {\vphantom {{2\pi m} M}} \right.} M}],$$
where A(u, v) and B(u, v) denote the background intensity and the modulation at a camera pixel (u, v), respectively. $\varPhi$s(u, v) denotes the fringe phases. Using the synchronous detection algorithm [39], the phases are estimated as
$${\phi _s}({u,v} )={-} \textrm{arctan}\left[ {\frac{{\sum\nolimits_{m = 0}^{M - 1} {{I_m}({u,v} )\sin ({{{2\pi m} \mathord{\left/ {\vphantom {{2\pi m} M}} \right.} M}} )} }}{{\sum\nolimits_{m = 0}^{M - 1} {{I_m}({u,v} )\cos ({{{2\pi m} \mathord{\left/ {\vphantom {{2\pi m} M}} \right.} M}} )} }}} \right].$$
Because Eq. (13) involves an arctangent function, the calculated phases ϕs(u, v) are within the range of principal values from –π to π rad. Using a temporal phase-unwrapping technique [40] allows us to unwrap this phase map, thus obtaining the absolute phase map, i.e., $\varPhi$s(u, v). The phases encode the projector pixels. From them, coordinates of projector pixels are calculated with
$$s = {{{\lambda _s}{\varPhi _s}(u,v)} \mathord{\left/ {\vphantom {{{\lambda_s}{\varPhi _s}(u,v)} {2\pi }}} \right.} {2\pi }}.$$
This equation matches the camera pixel (u, v) to the projector pixel coordinate s. If horizontal fringes are selected, phase calculation has the same procedure, but the camera pixel (u, v) is matched to the projector pixel coordinate t instead. By substituting these coordinates into Eq. (10), 3D coordinates of the corresponding object point, (Xw, Yw, Zw), are calculated.
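
A minimal sketch of Eqs. (13) and (14) is given below; it assumes the captured frames are available as NumPy arrays and that temporal phase unwrapping is performed separately, as described above.

```python
# Sketch of Eqs. (13) and (14): M-step phase-shifting phase retrieval and
# conversion of the unwrapped phase to the projector coordinate s.
import numpy as np

def wrapped_phase(frames):
    """frames: the M captured images I_m(u, v), m = 0 .. M-1, stacked along axis 0."""
    frames = np.asarray(frames, dtype=float)
    M = frames.shape[0]
    m = np.arange(M).reshape(-1, 1, 1)
    num = np.sum(frames * np.sin(2 * np.pi * m / M), axis=0)
    den = np.sum(frames * np.cos(2 * np.pi * m / M), axis=0)
    return -np.arctan2(num, den)        # Eq. (13): principal values in (-pi, pi]

def projector_coordinate(unwrapped_phase, fringe_pitch):
    """Eq. (14): s = lambda_s * Phi_s / (2*pi)."""
    return fringe_pitch * unwrapped_phase / (2 * np.pi)
```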

3.2 Determining projector pixels corresponding to featured points

As mentioned previously, the difficulty of calibrating a projector lies in getting the correspondences between projector pixels and featured points on the calibration board, because the projector cannot capture images. We overcome this issue by measuring the fringe phases on the calibration board using the method introduced above.

Here, we select a black-white checkerboard as the calibration board, which is the most widely used in engineering and is commercially available. The square corners on this checkerboard serve as the featured points. Referring to Fig. 1, when the image of the calibration board is captured by the camera, the corners are extracted using a corner detection operator, such as the Harris detector [41]. We denote the camera pixel coordinates of the extracted square corners as (Un, Vn), with the subscript n being the index of the nth corner point on the checkerboard. This subsection focuses on determining the corresponding projector pixels, (Sn, Tn), of these corner points.

In 3D measurement, only fringes of one direction are used. When calibrating a projector, we have to project both vertical and horizontal fringe patterns onto the calibration board, as shown in Figs. 2(a) and 2(e), respectively. On their right side, Figs. 2(b) and 2(f) are the recovered phase maps. From them, we see that the regions of black squares have been segmented out by thresholding fringe modulations, where the calculated phases are inaccurate because of the low reflectivity there. In fact, the measured phases at square corners also have prohibitively large errors caused by the low-pass properties of the camera.

Fig. 2. The columns, from left to right, show the captured fringe patterns on the calibration board, the measured phase map with the black areas having been segmented out, the least-squares fitting results using rational functions, and the residuals calculated by subtracting the fitting results from the measured phases. The top and bottom rows correspond to the results of using vertical and horizontal fringes, respectively.

For estimating accurate phases at the square corners, we exploit the fact that, in the absence of lens distortions, the fringe phases on a plane board theoretically follow a rational function [42], viz.,

$$\left\{ {\begin{array}{c} {{\varPhi _s}(u,v) = {{({a_2} + {a_3}u + {a_4}v)} \mathord{\left/ {\vphantom {{({a_2} + {a_3}u + {a_4}v)} {(1 + {a_0}u + {a_1}v)}}} \right.} {(1 + {a_0}u + {a_1}v)}}}\\ {{\varPhi _t}(u,v) = {{({a_5} + {a_6}u + {a_7}v)} \mathord{\left/ {\vphantom {{({a_5} + {a_6}u + {a_7}v)} {(1 + {a_0}u + {a_1}v)}}} \right.} {(1 + {a_0}u + {a_1}v)}}} \end{array}} \right.$$
where a0 through a7 are coefficients. The coefficients of denominators, i.e., a0 and a1, do not depend on the fringe pitches and orientations thus being the same in the two equations. By substituting the measured phases of the calibration board, i.e., $\varPhi$s(u, v) and $\varPhi$t(u, v), into the following equations
$$\left\{ {\begin{array}{c} { - {a_0}{\varPhi _s}(u,v)u - {a_1}{\varPhi _s}(u,v)v + {a_2} + {a_3}u + {a_4}v = {\varPhi _s}(u,v)}\\ { - {a_0}{\varPhi _t}(u,v)u - {a_1}{\varPhi _t}(u,v)v + {a_5} + {a_6}u + {a_7}v = {\varPhi _t}(u,v)} \end{array}} \right.$$
we have a linear equation system. Solving it in the least squares sense for the unknowns, a0, a1, …, a7, the phase functions in Eq. (15) are determined. In this procedure, by thresholding fringe modulations, the data in the regions of black squares have been excluded. As just mentioned, the data at the edges of squares also have large errors. For removing their effects, we must slightly erode the segmented regions using a morphological operator before fitting the phase map.
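
A minimal sketch of this linear least-squares fit is given below, assuming the valid (unmasked and eroded) pixel coordinates and their measured phases have been collected into flat NumPy arrays; the function name is illustrative.

```python
# Sketch of Eq. (16): estimate a0..a7 from the measured phase maps Phi_s and
# Phi_t over the valid pixels, by solving one joint linear system in the
# least-squares sense.
import numpy as np

def fit_rational_phase(u, v, phi_s, phi_t):
    """u, v, phi_s, phi_t: 1-D arrays over the valid pixels."""
    zeros = np.zeros_like(phi_s, dtype=float)
    ones = np.ones_like(phi_s, dtype=float)
    # -a0*Phi_s*u - a1*Phi_s*v + a2 + a3*u + a4*v = Phi_s
    rows_s = np.column_stack([-phi_s * u, -phi_s * v, ones, u, v, zeros, zeros, zeros])
    # -a0*Phi_t*u - a1*Phi_t*v + a5 + a6*u + a7*v = Phi_t
    rows_t = np.column_stack([-phi_t * u, -phi_t * v, zeros, zeros, zeros, ones, u, v])
    A = np.vstack([rows_s, rows_t])
    b = np.concatenate([phi_s, phi_t])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs                        # a0 .. a7
```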

Figures 2(c) and 2(g) give the fitting results of Figs. 2(b) and 2(f), respectively. On their right side, Figs. 2(d) and 2(h) show the residuals calculated by subtracting the fitting results in Figs. 2(c) and 2(g) from the measured phases in Figs. 2(b) and 2(f), respectively. We observe that large phase errors appear at the corners and borders of each square, which are induced by the low-pass properties of lenses. As mentioned in the introduction, an optical lens has a low-pass property because its point spread function (PSF) is generally equivalent to the impulse response of a low-pass filter; lens defocus contributes to this low-pass behavior as well. Reference [25] modeled the lenses as Gaussian low-pass filters and analyzed in detail the effects of their low-pass properties on the phase measuring results, revealing that the measured phases inevitably have larger errors at edges, just as we see in Figs. 2(d) and 2(h). This type of error cannot be suppressed by improving the phase-shifting algorithm, e.g., by increasing the number of phase shifts. This fact means that the measured phases at the square corners cannot be used to determine their pixel coordinates. For this reason, we calculate the pixel coordinates of the corners from the fitting results.

By substituting the camera pixel coordinates of the square corners, i.e., (Un, Vn), into Eq. (15), whose coefficients have been estimated through Eq. (16), and by combining with Eq. (14), the corresponding projector pixel coordinates are calculated as

$$\left\{ {\begin{array}{c} {{S_n} = \frac{{{\lambda_s}{\varPhi _s}({U_n},{V_n})}}{{2\pi }} = \frac{{{\lambda_s}({a_2} + {a_3}{U_n} + {a_4}{V_n})}}{{2\pi (1 + {a_0}{U_n} + {a_1}{V_n})}}}\\ {{T_n} = \frac{{{\lambda_t}{\varPhi _t}({U_n},{V_n})}}{{2\pi }} = \frac{{{\lambda_t}({a_5} + {a_6}{U_n} + {a_7}{V_n})}}{{2\pi (1 + {a_0}{U_n} + {a_1}{V_n})}}} \end{array}} \right.$$
where λs and λt are the fringe pitches of the vertical and horizontal fringes, respectively. As a result, Figs. 3(a) and 3(b) show the arrays of extracted corner points on the camera and projector image planes, respectively.
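
Continuing the sketch above, the corner coordinates of Eq. (17) can then be evaluated from the fitted coefficients; the function name is illustrative.

```python
# Sketch of Eq. (17): projector pixels (Sn, Tn) of the detected corners (Un, Vn),
# evaluated from the coefficients returned by fit_rational_phase().
import numpy as np

def corner_projector_pixels(coeffs, Un, Vn, lambda_s, lambda_t):
    a0, a1, a2, a3, a4, a5, a6, a7 = coeffs
    denom = 2 * np.pi * (1 + a0 * Un + a1 * Vn)
    Sn = lambda_s * (a2 + a3 * Un + a4 * Vn) / denom
    Tn = lambda_t * (a5 + a6 * Un + a7 * Vn) / denom
    return Sn, Tn
```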

Fig. 3. Arrays of extracted square corners on the image planes of (a) the camera and (b) the projector.

This method, by fitting the phase map, estimates the coordinates of the projector pixels corresponding to the featured points on the calibration board. The results are much more accurate than those obtained from the directly measured phase values, because they are immune to the low-pass properties of lenses. Simultaneously, the phase errors induced by such factors as noise, projector and camera nonlinearities, and illumination fluctuations are averaged out in fitting the rational function. Although we use a checkerboard in this work, this function fitting method is also suitable for enhancing the accuracy in extracting the featured points of other types of calibration boards (e.g., one with a circle array). Note that, with this function fitting method, the lens distortions still induce errors in extracting the projector pixels of the corners, because they deform the phase maps.

3.3 Iterative estimation of projector parameters

In fringe projection profilometry, system calibration generally involves a complex procedure. To make it clear and easy to follow, we summarize its steps here.

The camera captures a sequence of images of the checkerboard placed at various positions and angles. In parallel, the projector casts both horizontal and vertical fringe patterns onto the checkerboard, and the same camera captures the deformed patterns. Instead of capturing a separate image of the checkerboard, its photograph can also be obtained by averaging the captured fringe patterns. From the captured fringe patterns, we recover the fringe phases and exclude invalid areas by using the method in Section 3.1. As a result, for the checkerboard at each position, we have a photograph and a pair of phase maps.

Following Zhang’s technique [16], we calibrate the camera, extracting corner pixels from each photograph and then estimating the camera parameters from the correspondences between the square corners on the checkerboard and their camera pixels. Using the calibrated distortion coefficients of the camera lens, the geometric distortions on the photographs and on the phase maps of the checkerboard are easy to correct. Assuming (u, v) to be a pixel in the corrected pattern, substituting its scaled camera coordinates (xc, yc) into Eq. (4) results in $({\hat{x}_\textrm{c}},{\hat{y}_\textrm{c}})$ and, further, $(\hat{u},\hat{v})$, i.e., the pixel in the distorted pattern. By simply assigning the value (e.g., gray level or phase) at $(\hat{u},\hat{v})$ to (u, v), we have the image or the phase maps with the camera distortions having been corrected. By doing so, the camera lens distortions will not affect calibrating the projector parameters.
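
A minimal sketch of this correction is given below, using OpenCV's remapping functions to perform the backward mapping just described; it assumes the intrinsic matrix A_c and the distortion coefficients dist_c are available from the camera calibration, and that invalid (masked) regions of the phase maps are handled separately.

```python
# Sketch: correct the camera lens distortions of a photograph or phase map.
# For every pixel (u, v) of the corrected image, the maps give the distorted
# location of Eq. (4), whose value is sampled by bilinear interpolation.
import cv2
import numpy as np

def undistort_camera_image(img, A_c, dist_c):
    img = np.asarray(img, dtype=np.float32)
    h, w = img.shape[:2]
    map1, map2 = cv2.initUndistortRectifyMap(
        A_c, dist_c, None, A_c, (w, h), cv2.CV_32FC1)
    return cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
```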

After correcting effects of camera lens distortions, we calculate, from each pair of phase maps $\varPhi$s(u, v) and $\varPhi$t(u, v), the projector pixels corresponding to camera pixels (u, v) through the formulas s=λs$\varPhi$s(u, v)/2π and t=λt$\varPhi$t(u, v)/2π. From each photograph of the checkerboard, we extract the corner pixels (Un, Vn). These are basic data for calibrating the projector.

Using the method in Section 3.2, we fit the phases $\varPhi$s(u, v) and $\varPhi$t(u, v) using rational functions of (u, v), resulting in the coefficients {a0, a1, …, a7}. By substituting (Un, Vn) into Eq. (17) having coefficients {a0, a1, …, a7}, we calculate (Sn, Tn), namely the projector pixel coordinates corresponding to the square corners on the checkerboard. Through numerical optimization with Zhang’s technique [16], we estimate the projector parameters from the correspondences between the square corners on the checkerboard and their projector pixels. These parameters include the intrinsic parameters {κs, κt, s0, t0} and the lens distortion coefficients {kp1, kp2, pp1, pp2}.
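
Because the projector is now treated exactly like a camera, the estimation can reuse a standard calibration routine; the following sketch again uses OpenCV, assuming the board corner coordinates and the computed (Sn, Tn) of every pose have been collected, and the variable names are illustrative.

```python
# Sketch: calibrate the projector like a camera, with the board corners as
# object points and the computed (Sn, Tn) as "image" points.
import cv2
import numpy as np

def calibrate_projector(board_corners_3d, proj_pixels_per_pose, proj_size):
    """proj_size: projector resolution, e.g. (854, 480)."""
    object_points = [board_corners_3d.astype(np.float32)] * len(proj_pixels_per_pose)
    image_points = [np.asarray(p, np.float32).reshape(-1, 1, 2)
                    for p in proj_pixels_per_pose]
    rms, A_p, dist_p, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, proj_size, None, None)
    return A_p, dist_p, rvecs, tvecs, rms
```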

If the estimated {kp1, kp2, pp1, pp2} are not zero, the projector has lens distortions, in which case the measured phases of the calibration board are not exactly distributed as rational functions. This fact means that the projector lens distortions may induce errors in the calibration result. We use the following iterative strategy to solve this problem.

Step 1. Use the calibration results just obtained as the initial values, i.e., {κs, κt, s0, t0}(0) and {kp1, kp2, pp1, pp2}(0), with the superscripts representing the iteration number.

Step 2. Given {κs, κt, s0, t0}(i) and {kp1, kp2, pp1, pp2}(i), correct the effects of the projector lens distortions on the phase maps. Substituting ${x_\textrm{p}} = [s - s_0^{(i)}]/\kappa _s^{(i)}$ and ${y_\textrm{p}} = [t - t_0^{(i)}]/\kappa _t^{(i)}$ into Eq. (7), we calculate ${\hat{x}_\textrm{p}}$ and ${\hat{y}_\textrm{p}}$ on the left side. Then, by using ${\hat{s}^{(i + 1)}} = {\hat{x}_\textrm{p}}\kappa _s^{(i)} + s_0^{(i)}$ and ${\hat{t}^{(i + 1)}} = {\hat{y}_\textrm{p}}\kappa _t^{(i)} + t_0^{(i)}$, we have the updated projector pixel coordinates. Using them, we recalculate the phase maps through $\varPhi _s^{(i + 1)}(u,v) = 2\pi {\hat{s}^{(i + 1)}}/{\lambda _s}$ and $\varPhi _t^{(i + 1)}(u,v) = 2\pi {\hat{t}^{(i + 1)}}/{\lambda _t}$, where the effects of the projector lens distortions are partially eliminated (a code sketch of this update is given after Step 4).

Step 3. Estimate more accurate projector parameters. We fit $\varPhi _s^{(i + 1)}(u,v)$ and $\varPhi _t^{(i + 1)}(u,v)$ using rational functions, resulting in the coefficients {a0, a1, …, a7}(i+1). By substituting (Un, Vn) into Eq. (17) with these coefficients, we calculate more accurate (Sn, Tn)(i+1). Through numerical optimization, we estimate the projector parameters, including {κs, κt, s0, t0}(i+1) and {kp1, kp2, pp1, pp2}(i+1), by matching (Sn, Tn)(i+1) to the square corners on the checkerboard.

Step 4. Repeat Steps 2 and 3 until the algorithm converges, i.e., until the variations of the estimated parameters between two consecutive iterations are below a preset threshold. The threshold is determined empirically. Here, we set a threshold of $10^{-4}$, which ensures that the iterative procedure achieves satisfactory accuracy.
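
As referenced in Step 2, the following sketch illustrates the phase-map update for one pair of phase maps under the current parameter estimate; Steps 3 and 4 then reuse the fitting and estimation routines sketched above together with a convergence check on the parameter variations. The argument names are illustrative.

```python
# Sketch of Step 2: update Phi_s, Phi_t via Eq. (7) with the i-th estimate of
# the projector intrinsics (kappa_s, kappa_t, s0, t0) and distortion
# coefficients (k1, k2, p1, p2).
import numpy as np

def update_phase_maps(phi_s, phi_t, lambda_s, lambda_t,
                      kappa_s, kappa_t, s0, t0, k1, k2, p1, p2):
    s = lambda_s * phi_s / (2 * np.pi)              # current projector coordinates
    t = lambda_t * phi_t / (2 * np.pi)
    xp = (s - s0) / kappa_s                         # scaled projector coordinates
    yp = (t - t0) / kappa_t
    r2 = xp**2 + yp**2
    radial = 1 + k1 * r2 + k2 * r2**2
    x_hat = radial * xp + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp**2)   # Eq. (7)
    y_hat = radial * yp + p1 * (r2 + 2 * yp**2) + 2 * p2 * xp * yp
    s_new = x_hat * kappa_s + s0                    # updated pixel coordinates
    t_new = y_hat * kappa_t + t0
    return 2 * np.pi * s_new / lambda_s, 2 * np.pi * t_new / lambda_t
```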

3.4 Results

We apply the proposed technique to a measurement system. In this system, the camera (AVT Stingray F-125B) has a KOWA LM12JC lens with a focal length of 12 mm, and its image size is 1024 × 768 pixels with a pixel size of 3.75 × 3.75 μm2. The projector (Philips PPX 4010) has a resolution of 854 × 480 pixels. A black-and-white checkerboard having 9 × 12 squares is used as the calibration board, with each square having a size of 25 × 25 mm2.

During the calibration, we use the camera to capture the fringe patterns of the checkerboard at fifteen different positions and angles. At each pose, two sequences of fringe patterns, with vertical and horizontal fringe directions, respectively, are projected onto the checkerboard. In each sequence, the fringes have three different frequencies, with the full pattern being covered by 1, 8, and 64 fringe periods, respectively, for the phase-unwrapping purpose. For each fringe frequency, we use the three-step phase-shifting technique, and the photograph of the checkerboard without fringes is obtained by simply averaging the captured fringe patterns. As a result, we capture a total of 270 patterns for one calibration. The Harris corner detector is used to extract the featured points from the images. In phase measuring, we use the temporal technique to unwrap the phase maps. With it, the fringe pattern having a single fringe does not have phase ambiguities. Using its phase map as a reference allows us to unwrap the phase map of a pattern having several fringes, by use of the multiple relationship between their frequencies. Furthermore, using the unwrapped phase map of a lower-frequency pattern as a reference allows us to unwrap the phase map of a higher frequency [43].

Following Zhang’s calibration technique, the camera parameters are estimated using the iterative Levenberg-Marquardt algorithm. As a result, the intrinsic parameters are {ηu, ηv, u0, v0} = {3415.0950, 3415.6276, 523.4285, 393.0799}, the coefficients of lens distortions are {kc1, kc2, pc1, pc2} = {−0.0844, −0.0350, 0.0019, −0.0005}, and the extrinsic matrix is

$$[{{\textbf{R}_\textrm{c}},{\textbf{T}_\textrm{c}}} ] = \left[ {\begin{array}{rrrr} {0.9989}&{ - 0.0073}&{0.0471}&{ - 129.1097}\\ {0.0084}&{0.9997}&{ - 0.0244}&{ - 102.3350}\\ { - 0.0469}&{0.0248}&{0.9986}&{1297.9688} \end{array}} \right].$$
Using the proposed method, the parameters of the projector are also estimated. After 10 iterations, the variations of the estimated parameters between two consecutive iterations fall below the preset threshold of $10^{-4}$. The intrinsic parameters are {κs, κt, s0, t0} = {1853.5096, 1856.9661, 21.1879, 357.2866}. Note that the principal point of the projector lies at {s0, t0} = {21.1879, 357.2866}, deviating considerably from the image center in the vertical direction because of the off-axis projection of the projector [44]. The coefficients of lens distortions are {kp1, kp2, pp1, pp2} = {0.0131, −0.0813, 0.0013, −0.0015}, and the extrinsic matrix is
$$[{{\textbf{R}_\textrm{p}},{\textbf{T}_\textrm{p}}} ] = \left[ {\begin{array}{rrrr} {0.9859}&{ - 0.0148}&{ - 0.1665}&{ - 122.9638}\\ {0.0149}&{0.9999}&{ - 0.0007}&{ - 241.5961}\\ {0.1665}&{ - 0.0018}&{0.9860}&{833.3456} \end{array}} \right].$$
To examine the accuracy of the calibration results, Fig. 4 shows the reprojection errors of the calibrated camera and projector, which measure the matching degree between the calibration data and the calibration parameters. From Fig. 4, we observe that the reprojection errors for both the camera and the projector have centralized distributions, within the range of ±0.2 pixels. Their root-mean-square (RMS) errors are 0.0885 and 0.0813 pixels for the camera and the projector, respectively. These small reprojection errors demonstrate the accuracy of the proposed methods.

Fig. 4. Reprojection errors of (a) the camera and (b) the projector.

4. Projector lens distortion correction

4.1 Pre-compensation technique

After calibrating the system, we can use it to measure an object through the following steps. First, generate sinusoidal phase-shifting fringe patterns using Eq. (11); second, project the generated fringe patterns onto the measured object surface and capture the deformed patterns; third, calculate the wrapped phase map through Eq. (13) and unwrap it using the multi-frequency temporal phase-unwrapping algorithm; fourth, calculate the projector pixel coordinates through Eq. (14); and last, calculate the 3D coordinates of the measured surface by using Eq. (10).

In the above procedure, using the calibrated coefficients of camera lens distortions, it is easy to eliminate the image distortions induced by them; the procedure has been described in the third paragraph of Section 3.3. After correcting the camera lens distortions, the measurement accuracy is mainly affected by the projector lens distortions. Some techniques allow us to circumvent this problem. For example, [45] recovers the depth map by use of the cross-ratio invariance, at the expense of much more time for searching for corresponding points in three reference phase maps. Therefore, correcting the projector lens distortions is more feasible in practical measurement. However, correcting the distortions of a projector lens is more complex than correcting those of a camera lens. In this paper, we suggest two solutions for this issue, i.e., the pre- and post-compensation methods.

Here, the pre-compensation method is derived. With it, the fringes are actively curved in the computer when generating them. Because a projector has the same model as the camera with inverse directions of light rays, generating a fringe pattern deformed by Eq. (7) allows us to obtain undistorted fringes in the measurement space.

Let $(\hat{s},\hat{t})$ be a pixel of the pre-corrected fringe patterns, with its scaled projector coordinates being ${\hat{x}_\textrm{p}} = {{(\hat{s} - {s_0})} \mathord{\left/ {\vphantom {{(\hat{s} - {s_0})} {{\kappa_s}}}} \right.} {{\kappa _s}}}$ and ${\hat{y}_\textrm{p}} = {{(\hat{t} - {t_0})} \mathord{\left/ {\vphantom {{(\hat{t} - {t_0})} {{\kappa_t}}}} \right.} {{\kappa _t}}}$. For determining the fringe intensity at this pixel, we have to solve the system of nonlinear equations in Eq. (7) for (xp, yp). By defining the initial values as $[x_\textrm{p}^{(0)},y_\textrm{p}^{(0)}] = [{\hat{x}_\textrm{p}},{\hat{y}_\textrm{p}}]$, the iterative solution of this nonlinear equation system is represented with

$$\left[ {\begin{array}{c} {x_\textrm{p}^{(i + 1)}}\\ {y_\textrm{p}^{(i + 1)}} \end{array}} \right] = \left[ {\begin{array}{c} {{{\hat{x}}_\textrm{p}}}\\ {{{\hat{y}}_\textrm{p}}} \end{array}} \right] - [{k_{\textrm{p}1}}{(r_\textrm{p}^{(i)})^2} + {k_{\textrm{p}2}}{(r_\textrm{p}^{(i)})^{4}}]\left[ {\begin{array}{c} {x_\textrm{p}^{(i)}}\\ {y_\textrm{p}^{(i)}} \end{array}} \right] - \left[ {\begin{array}{c} {2{p_{\textrm{p}1}}x_\textrm{p}^{(i)}y_\textrm{p}^{(i)} + {p_{\textrm{p}2}}[{{(r_\textrm{p}^{(i)})}^2} + 2{{(x_\textrm{p}^{(i)})}^2}]}\\ {{p_{\textrm{p}1}}[{{(r_\textrm{p}^{(i)})}^2} + 2{{(y_\textrm{p}^{(i)})}^2}] + 2{p_{\textrm{p}2}}x_\textrm{p}^{(i)}y_\textrm{p}^{(i)}} \end{array}} \right].$$
After the algorithm converges, the undistorted pixel (s, t) corresponding to $(\hat{s},\hat{t})$ is calculated using $s = {x_\textrm{p}}{\kappa _s} + {s_0}$ and $t = {y_\textrm{p}}{\kappa _t} + {t_0}$. By assigning the fringe intensity at (s, t) to $(\hat{s},\hat{t})$, the pre-corrected fringe patterns are generated. For example, the vertical ones are
$${g_m}(\hat{s},\hat{t}) = \alpha + \beta cos[{{2\pi s} \mathord{\left/ {\vphantom {{2\pi s} {{\lambda_s}}}} \right.} {{\lambda _s}}} + {{2\pi m} \mathord{\left/ {\vphantom {{2\pi m} M}} \right.} M}].$$
In measurement, if the projector casts the patterns represented by Eq. (19), instead of those by Eq. (11), onto the object surface, the projector lens distortions will not affect the measurement results.
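
A minimal sketch of this pre-compensation is given below, combining the iteration of Eq. (18) with the pattern generation of Eq. (19); a fixed number of iterations is used for simplicity, and the parameter names are illustrative.

```python
# Sketch of Section 4.1: generate one pre-compensated vertical fringe pattern.
# For every projector pixel (s_hat, t_hat), Eq. (18) is iterated to find the
# undistorted coordinate s, and the fringe value of Eq. (19) is assigned.
import numpy as np

def precompensated_pattern(width, height, lambda_s, phase_shift,
                           kappa_s, kappa_t, s0, t0, k1, k2, p1, p2,
                           alpha=0.5, beta=0.5, n_iter=10):
    s_hat, t_hat = np.meshgrid(np.arange(width), np.arange(height))
    x_hat = (s_hat - s0) / kappa_s                  # scaled distorted coordinates
    y_hat = (t_hat - t0) / kappa_t
    xp, yp = x_hat.copy(), y_hat.copy()             # initial values of Eq. (18)
    for _ in range(n_iter):
        r2 = xp**2 + yp**2
        radial = k1 * r2 + k2 * r2**2
        dx = 2 * p1 * xp * yp + p2 * (r2 + 2 * xp**2)
        dy = p1 * (r2 + 2 * yp**2) + 2 * p2 * xp * yp
        xp, yp = x_hat - radial * xp - dx, y_hat - radial * yp - dy
    s = xp * kappa_s + s0                           # undistorted coordinate of Eq. (19)
    return alpha + beta * np.cos(2 * np.pi * s / lambda_s + phase_shift)
```

For an M-step sequence, the patterns of Eq. (19) are obtained by calling this routine with phase_shift = 2πm/M for m = 0, …, M−1.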

In a special application when measuring a batch of objects having the same shape, generating adaptive fringe patterns using similar technique is helpful for highlighting the flaws on the objects [46].

4.2 Post-compensation technique

The post-compensation technique projects standard fringe patterns onto the object without curving them in advance, and corrects the projector lens distortions in the data-processing stage.

In measurement, only fringes of one direction are used. Following Section 3.1, we presume for the moment that vertical fringes perpendicular to the s-axis are projected onto the measured object. Ignoring the projector lens distortions, the fringe phases are calculated. These calculated phases are affected by the projector lens distortions and are denoted as ${\hat{\varPhi }_s}(u,v)$, corresponding to the coordinates $\hat{s} = {{{\lambda _s}{{\hat{\varPhi }}_s}(u,v)} \mathord{\left/ {\vphantom {{{\lambda_s}{{\hat{\varPhi }}_s}(u,v)} {2\pi }}} \right.} {2\pi }}$, i.e., the horizontal coordinates of the distorted projector pixels. By substituting these matched pixel coordinates (u, v) and $\hat{s}$ into the first three equations of Eq. (10), the 3D coordinates of the measured object point are calculated as $({\hat{X}_w},{\hat{Y}_w},{\hat{Z}_w})$. Equation (7) implies that the projector lens distortions are two-dimensional functions of pixel coordinates. For correcting them, both the horizontal and vertical coordinates must be available. According to Eq. (9), we calculate the vertical coordinate of the distorted projector pixel as

$$\hat{t} = \frac{{{h_{\textrm{p}21}}{{\hat{X}}_\textrm{w}} + {h_{\textrm{p}22}}{{\hat{Y}}_\textrm{w}} + {h_{\textrm{p}23}}{{\hat{Z}}_\textrm{w}} + {h_{\textrm{p}24}}}}{{{h_{\textrm{p}31}}{{\hat{X}}_\textrm{w}} + {h_{\textrm{p}32}}{{\hat{Y}}_\textrm{w}} + {h_{\textrm{p}33}}{{\hat{Z}}_\textrm{w}} + {h_{\textrm{p}34}}}}.$$
By mapping the coordinates $(\hat{s},\hat{t})$ to the scaled projector coordinates $({\hat{x}_\textrm{p}},{\hat{y}_\textrm{p}})$ and using the same iterative algorithm related to Eq. (18), we solve (xp, yp) and, further, determine the undistorted projector pixel (s, t). By substituting them into Eq. (10), the 3D coordinates (Xw, Yw, Zw) are recovered with the effects of projector lens distortions having been removed.
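
The following sketch outlines this post-compensation for one matched pixel, reusing the triangulate() routine sketched in Section 2.3 and the same fixed-point iteration as in Eq. (18); the helper names and the ordering of the projector intrinsic parameters are illustrative.

```python
# Sketch of Section 4.2: triangulate a provisional point from (u, v, s_hat),
# obtain t_hat from Eq. (20), undistort the projector pixel, and re-triangulate.
import numpy as np

def undistort_projector_pixel(s_hat, t_hat, kappa_s, kappa_t, s0, t0,
                              k1, k2, p1, p2, n_iter=10):
    x_hat = (s_hat - s0) / kappa_s
    y_hat = (t_hat - t0) / kappa_t
    xp, yp = x_hat, y_hat
    for _ in range(n_iter):                          # iteration of Eq. (18)
        r2 = xp**2 + yp**2
        radial = k1 * r2 + k2 * r2**2
        dx = 2 * p1 * xp * yp + p2 * (r2 + 2 * xp**2)
        dy = p1 * (r2 + 2 * yp**2) + 2 * p2 * xp * yp
        xp, yp = x_hat - radial * xp - dx, y_hat - radial * yp - dy
    return xp * kappa_s + s0, yp * kappa_t + t0

def post_compensate_point(Hc, Hp, u, v, s_hat, proj_params):
    """proj_params: (kappa_s, kappa_t, s0, t0, k1, k2, p1, p2)."""
    X_hat = triangulate(Hc, Hp, u, v, s_hat)         # provisional (distorted) point
    t_hat = (Hp[1, :3] @ X_hat + Hp[1, 3]) / (Hp[2, :3] @ X_hat + Hp[2, 3])   # Eq. (20)
    s, _ = undistort_projector_pixel(s_hat, t_hat, *proj_params)
    return triangulate(Hc, Hp, u, v, s)              # corrected 3D point
```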

4.3 Results

Here we experimentally verify the effectiveness of the proposed method by measuring objects. First, we measure a standard cylinder in order to examine the accuracy of the methods. The cylinder has a nominal diameter of $\varPhi$149.48 mm, with a tolerance of ±0.020 mm. We use the system schemed in Fig. 1, which was calibrated in Section 3.4, to do the measurement.

In the phase-measuring stage, we use the phase-shifting algorithm to analyze the fringe patterns and the temporal phase-unwrapping technique to obtain the absolute phase maps, as described in Section 3.1. In this procedure, vertical fringes perpendicular to the $s$-axis are projected onto the object. The number of phase shifts is three, and the phase increment between consecutive frames is 2π/3 radians. Figure 5 shows the results, with the panels in each row, from left to right, being a fringe pattern, the wrapped phase map, and the unwrapped phase map. The first row is obtained by projecting standard sinusoidal fringe patterns onto the cylinder, without considering the projector lens distortions. The second row shows the results of using the pre-compensation method in Section 4.1, where the fringe patterns have been purposely deformed in the fringe generating stage according to the coefficients of the projector lens distortions, so that the projector lens distortions do not affect the measurement results.

Fig. 5. Phase measuring results. The panels in each row, from left to right, show a fringe pattern, the wrapped phase map, and the unwrapped phase map. The first row shows the results of projecting standard sinusoidal fringe patterns. The second row shows the results of using the pre-compensation fringe patterns. The colorbars have the unit of radian.

After measuring the phases, we reconstruct 3D shape of the cylinder. The achieved accuracy depends not only on phase measurement accuracy, but also on the accuracies of system calibration and projector lens distortion correction. Figure 6(a) shows the reconstructed surface using the standard sinusoidal fringes in Fig. 5(a) without correcting the projector lens distortions. Figure 6(c) gives the results calculated from the same standard fringes, but the effects of projector lens distortions have been corrected through the post-compensation method suggested in Section 4.2. Figure 6(e) is the result of using the pre-compensation technique in Section 4.1, where the projector lens distortions have been corrected in fringe generating stage by actively deforming the fringes according to the calibrated coefficients. For highlighting the differences between these results, Fig. 7 plots cross-sections of these reconstructed surfaces at the same position. From these cross-sections, we observe that the pre- and the post-compensation results are very close to each other and deviate from the one without correcting the projector lens distortions.

Fig. 6. The top row shows the 3D reconstructed surfaces together with their fitted least-squares cylinders. The bottom row gives the radial contours of residual errors, obtained by subtracting the nominal radius of the cylinder from the measured radii. (a) and (b) Using the standard fringe patterns without correcting the projector lens distortions. (c) and (d) Using the same patterns but with the projector lens distortions corrected using the post-compensation method. (e) and (f) Using the pre-compensation fringe patterns. The x, y, and z axes and the colorbar have the unit of millimeter; θ has the unit of degree.

Fig. 7. Cross-sections of the reconstructed surfaces in Fig. 6. The pre- and post-compensation results are very close to each other and deviate from the one without correcting the projector lens distortions.

Along with the reconstructed surfaces, Fig. 6 also shows, in green, the least-squares cylinders obtained by fitting the measured point clouds. Table 1 compares their radii, from which we see that, after correcting the projector lens distortions, the radii of the fitted least-squares cylinders are very close to the nominal value. These results demonstrate that the proposed technique improves the measurement accuracy of the global size of the object. For examining its effectiveness in improving the measurement accuracy of the local profile, the second row of Fig. 6 shows the contours of the radial errors obtained by subtracting the nominal radius from the measurement results. Table 1 also lists the RMS values of these residual errors. It is evident that the residual errors of the two compensation techniques are much smaller than those without correcting the projector lens distortions. These results demonstrate that the system has been accurately calibrated using the proposed technique, and that both the pre- and post-compensation techniques are effective in correcting the projector lens distortions, thus helping to improve the measurement accuracy.

Table 1. Radiuses and RMS errors of least-squares cylinders (mm)

From the results just obtained, we know that the pre- and post-compensation techniques achieve almost the same accuracy. From a practical point of view, the pre-compensation is more convenient in measurement, because it allows us to measure different objects with a single pre-generated sequence of fringe patterns, without repeatedly correcting the projector lens distortions. However, when we want to change the fringe frequency to adapt the resolution to different objects, the post-compensation technique offers higher flexibility.

More experimental results are provided for further evaluating the feasibility of the proposed techniques in measuring complex objects. Figure 8 shows the results of measuring a mechanical part having a more complex shape than a cylinder. The top row shows the captured fringes, and below them are the reconstructed 3D shapes of the object. Note that the first two columns share exactly the same fringe patterns, but in Fig. 8(d) the projector lens distortions are not corrected, whereas in Fig. 8(e) these distortions are corrected by using the post-compensation technique. Figure 8(f) uses different fringe patterns which have been corrected using the pre-compensation technique. For highlighting the differences between these reconstructed shapes, Figs. 8(g) and 8(h) show the gaps between Figs. 8(e) and 8(d), and between Figs. 8(f) and 8(d), respectively. The maximum gap is about 0.45 mm, demonstrating that the projector lens distortions may severely deform the reconstructed shapes. Figure 8(i) shows the difference between Figs. 8(e) and 8(f), implying that the pre- and post-compensation methods have almost equal effects in correcting the projector lens distortions. Figure 9, having the same layout as Fig. 8, shows the results of measuring a plaster bust. These results demonstrate that the proposed calibration method works well for measuring complex objects.

Fig. 8. Measurement results of a mechanical part. (a) and (b) are the same captured patterns when projecting standard fringes onto the object without correcting the projector lens distortions. (c) is the captured pattern when projecting fringes curved according to the calibrated coefficients of the projector lens distortions. Below them, (d) shows the reconstructed shape without correcting the projector lens distortions, and (e) and (f) are the results with the projector lens distortions corrected using the post- and pre-compensation methods, respectively. In the bottom row, (g), (h), and (i) show the gaps between (e) and (d), between (f) and (d), and between (e) and (f), respectively.

Fig. 9. Measurement results of a plaster bust. The layout is the same as that of Fig. 8.

5. Discussions

With the proposed techniques, some issues are worth discussing. For example, random noise, projector nonlinearity, and illumination fluctuations may degrade the accuracies of calibration and measurement by inducing errors in the phase measuring results. Even so, the proposed calibration technique is less sensitive to these error-inducing factors, because it averages out the induced errors by fitting the phase maps with rational functions in the least-squares sense. In measurement, however, we must cope with these errors carefully. Among them, the nonlinearity [47] of the projector is the most crucial factor affecting the measurement accuracy. The projector nonlinearity induces ripple-like artifacts having a frequency M times higher than that of the fringes, with M being the number of phase shifts [48,49]. Therefore, increasing M is helpful for restraining the effects of the projector nonlinearity [50]. In this work, we use a more effective method that depends on few fringe patterns and enables recognizing and removing the errors directly from the calculated phase maps [51]. Random noise in fringe patterns [52] always affects the measurement accuracy. Generally, noise in a pattern is induced by complicated physical factors and can be simply modeled with a Gaussian distribution [53] according to the central limit theorem of probability theory. When noise is present, the variance of the induced phase errors is proportional to the variance of the noise, and inversely proportional to the number of phase shifts and to the square of the modulations [54]. Therefore, increasing the number of phase shifts can suppress the influence of noise, at the expense of image-capturing time. The illumination fluctuation induces ripple-like phase errors having the same frequency as the fringes, which can be corrected with the aid of fringe histograms [55]. In fringe projection profilometry, phase sensitivity is also a main factor determining the measurement resolution and accuracy. This phase sensitivity mainly depends on the system geometry, and is simultaneously associated with the fringe pitch and fringe orientation [56]. For a fixed system, using fringes with a finer pitch, or with a direction perpendicular to the epipolar lines on the projector plane, achieves a higher phase sensitivity.

Another issue regards efficiency. With this calibration method, most of the time is spent capturing the images, especially when the checkerboard position is adjusted manually. In our experiment, for example, we have to spend several minutes to change the checkerboard poses and capture the required images. In comparison with the existing system calibration techniques based on phase measuring, however, the proposed technique does not require capturing extra images. Although it involves an additional iterative procedure for data processing, the increase in calibration time is not significant. For example, using a computer (Dell Precision T7910) with a 2.40 GHz Intel Xeon CPU and 32 GB RAM, the overall data processing for calibration can be completed within three minutes.

6. Conclusion

In fringe projection profilometry, the difficulty in calibrating a system lies in determining the projector parameters. This paper has proposed a solution to this issue based on phase measuring. It exploits the fact that the fringe phases on a plane board theoretically follow a rational-function distribution, and it estimates the projector parameters by implementing the phase fitting and the parameter estimation alternately. In comparison with the existing system calibration techniques based on phase measuring, this technique offers several advantages. It uses the most popular calibration target, a checkerboard, for which processing software is readily available. It extracts the feature points from the fitted phase map, making it less sensitive to the low-pass property of the projector lens. It averages out the phase errors induced by factors such as noise, projector nonlinearities, and illumination fluctuations. It does not require capturing extra images, and therefore has a satisfactory efficiency. We experimentally validated this technique by calibrating a system and then using it to measure objects. In measurement, the projector lens distortions are corrected, according to the calibrated coefficients, by using pre- and post-compensation techniques, thus improving the measurement accuracy.
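To make the phase-fitting step summarized above concrete, the following sketch (a simplified illustration, not the authors' implementation) fits the rational phase model Φ_s(u, v) = (a2 + a3·u + a4·v)/(1 + a0·u + a1·v) and Φ_t(u, v) = (a5 + a6·u + a7·v)/(1 + a0·u + a1·v) to a pair of unwrapped phase maps by solving the linearized equations in the least-squares sense; the phase maps, mask, and coefficient values below are synthetic placeholders.

import numpy as np

def fit_rational_phase(u, v, phi_s, phi_t, mask):
    # Linearized least-squares fit of the rational phase model:
    #   phi_s = (a2 + a3*u + a4*v) / (1 + a0*u + a1*v)
    #   phi_t = (a5 + a6*u + a7*v) / (1 + a0*u + a1*v)
    # Each valid pixel contributes one linear equation per fringe direction.
    u, v = u[mask], v[mask]
    ps, pt = phi_s[mask], phi_t[mask]
    zeros, ones = np.zeros(u.size), np.ones(u.size)
    rows_s = np.column_stack([-ps * u, -ps * v, ones, u, v, zeros, zeros, zeros])
    rows_t = np.column_stack([-pt * u, -pt * v, zeros, zeros, zeros, ones, u, v])
    a, *_ = np.linalg.lstsq(np.vstack([rows_s, rows_t]),
                            np.concatenate([ps, pt]), rcond=None)
    return a                                   # a = [a0, a1, ..., a7]

# Synthetic test data consistent with the model (placeholder values).
H, W = 120, 160
v, u = np.mgrid[0:H, 0:W].astype(float)
a_true = np.array([1e-4, -2e-4, 30.0, 0.12, 0.01, 40.0, -0.02, 0.15])
den = 1 + a_true[0] * u + a_true[1] * v
phi_s = (a_true[2] + a_true[3] * u + a_true[4] * v) / den
phi_t = (a_true[5] + a_true[6] * u + a_true[7] * v) / den
print(fit_rational_phase(u, v, phi_s, phi_t, np.ones((H, W), bool)))

Because every valid pixel contributes an equation, random errors at individual pixels are averaged over the whole board, which is the insensitivity to noise referred to above.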

Funding

National Natural Science Foundation of China (51975345).

Disclosures

The authors declare no conflicts of interest.

References

1. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  

2. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection technique,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

3. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983). [CrossRef]  

4. Z. Wang, D. A. Nguyen, and J. C. Barnes, “Some practical considerations in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 218–225 (2010). [CrossRef]  

5. S. Cui and X. Zhu, “A generalized reference-plane-based calibration method in optical triangular profilometry,” Opt. Express 17(23), 20735–20746 (2009). [CrossRef]  

6. Y. Wen, S. Li, H. Cheng, X. Su, and Q. Zhang, “Universal calculation formula and calibration method in Fourier transform profilometry,” Appl. Opt. 49(34), 6563–6569 (2010). [CrossRef]  

7. P. J. Tavares and M. A. Vaz, “Linear calibration procedure for the phase-to-height relationship in phase measurement profilometry,” Opt. Commun. 274(2), 307–314 (2007). [CrossRef]  

8. I. Leandry, C. Brequei, and V. Valle, “Calibration of a structured-light projection system: development to large dimension objects,” Opt. Lasers Eng. 50(3), 373–379 (2012). [CrossRef]

9. Y. Villa, M. Araiza, D. Alaniz, R. Ivanov, and M. Ortiz, “Transformation of phase to (x, y, z)-coordinates for the calibration of a fringe projection profilometer,” Opt. Lasers Eng. 50(2), 256–261 (2012). [CrossRef]

10. H. Liu, W. Su, K. Reichard, and Z. Yin, “Calibration-based phase-shifting projected fringe profilometry for accurate absolute 3D surface profile measurement,” Opt. Commun. 216(1-3), 65–80 (2003). [CrossRef]  

11. H. Guo, H. He, Y. Yu, and M. Chen, “Least-squares calibration method for fringe projection profilometry,” Opt. Eng. 44(3), 033603 (2005). [CrossRef]  

12. L. Huang, P. S. Chua, and A. Asundi, “Least-squares calibration method for fringe projection profilometry considering camera lens distortion,” Appl. Opt. 49(9), 1539–1548 (2010). [CrossRef]  

13. Y. Xiao, Y. Cao, and Y. Wu, “Improved algorithm for phase-to height mapping in phase measuring profilometry,” Appl. Opt. 51(8), 1149–1155 (2012). [CrossRef]  

14. Z. Zhang, C. E. Towers, and D. P. Towers, “Uneven fringe projection for efficient calibration in high-resolution 3D shape metrology,” Appl. Opt. 46(24), 6113–6119 (2007). [CrossRef]  

15. J. Salvi, X. Armangué, and J. Batlle, “A comparative review of camera calibrating methods with accuracy evaluation,” Pattern Recogn. 35(7), 1617–1635 (2002). [CrossRef]  

16. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

17. L. Huang, Q. Zhang, and A. Asundi, “Flexible camera calibration using not measured imperfect target,” Appl. Opt. 52(25), 6278–6286 (2013). [CrossRef]  

18. X. Li, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2D reference target,” Opt. Lasers Eng. 89, 131–137 (2017). [CrossRef]  

19. X. Zhang and L. Zhu, “Projector calibration from the camera image point of view,” Opt. Eng. 48(11), 117208 (2009). [CrossRef]  

20. I. Din, H. Anwar, I. Syed, H. Zafar, and L. Hasan, “Projector calibration for pattern projection systems,” J. Appl. Res. Technol. 12(1), 80–86 (2014). [CrossRef]  

21. R. Legarda-Saenz, T. Bothe, and W. P. Juptner, “Accurate procedure for the calibration of a structured light system,” Opt. Eng. 43(2), 464–471 (2004). [CrossRef]  

22. W. Gao, L. Wang, and Z. Hu, “Flexible method for structured light system calibration,” Opt. Eng. 47(8), 083602 (2008). [CrossRef]  

23. Q. Hu, P. S. Huang, Q. Fu, and F. P. Chiang, “Calibration of a three-dimensional shape measurement system,” Opt. Eng. 42(2), 487–493 (2003). [CrossRef]  

24. J. Huang, Z. Wang, Q. Xue, and J. Gao, “Calibration of a camera-projector measurement system and error impact analysis,” Meas. Sci. Technol. 23(12), 125402 (2012). [CrossRef]  

25. R. Li and F. Da, “Local blur analysis and phase error correction method for fringe projection profilometry systems,” Appl. Opt. 57(15), 4267–4276 (2018). [CrossRef]  

26. Z. R. Huang, J. T. Xi, Y. G. Yu, and Q. H. Guo, “Accurate projector calibration based on a new point-to-point mapping relationship between the camera and projector images,” Appl. Opt. 54(3), 347–356 (2015). [CrossRef]  

27. S. Huang, L. Xie, Z. Wang, F. Gao, and X. Jiang, “Accurate projector calibration method by using an optical coaxial camera,” Appl. Opt. 54(4), 789–795 (2015). [CrossRef]  

28. X. Chen, J. Xi, Y. Jin, and J. Sun, “Accurate calibration for a camera-projector measurement based on structured light projection,” Opt. Lasers Eng. 47(3-4), 310–319 (2009). [CrossRef]  

29. W. Zhang, W. Li, L. Yu, H. Luo, H. Zhao, and H. Xia, “Sub-pixel projector calibration method for fringe projection profilometry,” Opt. Express 25(16), 19158–19169 (2017). [CrossRef]  

30. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

31. S. Ma, R. Zhu, C. Quan, L. Chen, C. J. Tay, and B. Li, “Flexible structured-light-based three-dimensional profile reconstruction method considering lens projection-imaging distortion,” Appl. Opt. 51(13), 2419–2428 (2012). [CrossRef]  

32. Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008). [CrossRef]  

33. L. Rao, F. Da, W. Kong, and H. Huang, “Flexible calibration method for telecentric fringe projection profilometry systems,” Opt. Express 24(2), 1222–1237 (2016). [CrossRef]  

34. R. Chen, J. Xu, H. Chen, J. Su, Z. Zhang, and K. Chen, “Accurate calibration method for camera and projector in fringe patterns measurement system,” Appl. Opt. 55(16), 4293–4300 (2016). [CrossRef]  

35. D. He, X. Liu, X. Peng, Y. Ding, and B. Z. Gao, “Eccentricity error identification and compensation for high-accuracy 3D optical measurement,” Meas. Sci. Technol. 24(7), 075402 (2013). [CrossRef]  

36. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2003).

37. R. J. Salazar and V. D. Ramirez, “Operator-based homogeneous coordinates: application in camera document scanning,” Opt. Eng. 56(7), 070801 (2017). [CrossRef]  

38. J. Weng, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992). [CrossRef]  

39. J. H. Bruning, D. R. Herriott, J. E. Gallagher, D. P. Rosenfeld, A. D. White, and D. J. Brangaccio, “Digital wavefront measuring interferometer for testing optical surfaces and lenses,” Appl. Opt. 13(11), 2693–2703 (1974). [CrossRef]

40. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

41. C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of the 4th Alvey Vision Conference, pp. 147–151 (1988).

42. H. Guo, M. Chen, and P. Zheng, “Least-squares fitting of carrier phase distribution by using a rational function in fringe projection profilometry,” Opt. Lett. 31(24), 3588–3590 (2006). [CrossRef]  

43. R. J. Salazar, A. Giron, J. Zheng, and V. D. Ramirez, “Key concepts for phase-to-coordinate conversion in fringe projection systems,” Appl. Opt. 58(18), 4828–4834 (2019). [CrossRef]  

44. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 8–22 (2000). [CrossRef]  

45. R. Zhang and H. Guo, “Depth recovering method immune to projector errors in fringe projection profilometry by use of cross-ratio invariance,” Opt. Express 25(23), 29272–29286 (2017). [CrossRef]  

46. J. Peng, X. Liu, D. Deng, H. Guo, Z. Cai, and X. Peng, “Suppression of projector distortion in phase-measuring profilometry by projecting adaptive fringe patterns,” Opt. Express 24(19), 21846–21859 (2016). [CrossRef]  

47. H. Guo, H. He, and M. Chen, “Gamma correction for digital fringe projection profilometry,” Appl. Opt. 43(14), 2906–2914 (2004). [CrossRef]  

48. S. Xing and H. Guo, “Correction of projector nonlinearity in multi-frequency phase-shifting fringe projection profilometry,” Opt. Express 26(13), 16277–16291 (2018). [CrossRef]  

49. F. Lü, S. Xing, and H. Guo, “Self-correction of projector nonlinearity in phase-shifting fringe projection profilometry,” Appl. Opt. 56(25), 7204–7216 (2017). [CrossRef]  

50. H. Guo and M. Chen, “Fourier analysis of the sampling characteristics of the phase-shifting algorithm,” Proc. SPIE 5180, 437–444 (2004). [CrossRef]  

51. S. Xing and H. Guo, “Directly recognizing and removing the projector nonlinearity errors from a phase map in phase-shifting fringe projection profilometry,” Opt. Commun. 435, 212–220 (2019). [CrossRef]

52. K. Yan, Y. Yu, C. Huang, L. Sui, K. Qian, and A. Asundi, “Fringe pattern denoising based on deep learning,” Opt. Commun. 437, 148–152 (2019). [CrossRef]

53. H. Guo, “A simple algorithm for fitting a Gaussian function,” IEEE Signal Process. Mag. 28(5), 134–137 (2011). [CrossRef]  

54. S. Xing and H. Guo, “Temporal phase unwrapping for fringe projection profilometry aided by recursion of Chebyshev polynomials,” Appl. Opt. 56(6), 1591–1602 (2017). [CrossRef]  

55. Y. Lu, R. Zhang, and H. Guo, “Correction of illumination fluctuations in phase-shifting technique by use of fringe histograms,” Appl. Opt. 55(1), 184–197 (2016). [CrossRef]  

56. R. Zhang, H. Guo, and A. K. Asundi, “Geometric analysis of influence of fringe directions on phase sensitivities in fringe projection profilometry,” Appl. Opt. 55(27), 7675–7687 (2016). [CrossRef]  

Figures (9)

Fig. 1. The models of the projector-camera system.
Fig. 2. The columns, from left to right, show the captured fringe patterns on the calibration board, the measured phase map with the black areas segmented out, the least-squares fitting results using rational functions, and the residuals calculated by subtracting the fitting results from the measured phases. The top and bottom rows correspond to the results of using vertical and horizontal fringes, respectively.
Fig. 3. Arrays of extracted square corners on the image planes of (a) the camera and (b) the projector.
Fig. 4. Reprojection errors of (a) the camera and (b) the projector.
Fig. 5. Phase measuring results. The panels in each row, from left to right, show a fringe pattern, the wrapped phase map, and the unwrapped phase map. The first row shows the results of projecting standard sinusoidal fringe patterns; the second row shows the results of using the pre-compensation fringe patterns. The colorbars have the unit of radians.
Fig. 6. The top row shows the 3D reconstructed surfaces together with their fitted least-squares cylinders. The bottom row gives the radial contours of the residual errors, obtained by subtracting the nominal radius of the cylinder from the measured radii. (a) and (b) Using the standard fringe patterns without correcting the projector lens distortions. (c) and (d) Using the same patterns but with the projector lens distortions corrected using the post-compensation method. (e) and (f) Using the pre-compensation fringe patterns. The x, y, and z axes and the colorbar are in millimeters; θ is in degrees.
Fig. 7. Cross-sections of the reconstructed surfaces in Fig. 6. The pre- and post-compensation results are very close to each other and deviate from the result obtained without correcting the projector lens distortions.
Fig. 8. Measurement results of a mechanical part.
Fig. 9. Measurement results of a plaster bust. The layout is the same as that of Fig. 8.

Tables (1)

Table 1. Radii and RMS errors of the least-squares cylinders (mm)

Equations (22)

(1) \( Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{A}_c \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = \begin{bmatrix} \eta_u & 0 & u_0 \\ 0 & \eta_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \)

(2) \( \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = \mathbf{R}_c \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + \mathbf{T}_c = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + \begin{bmatrix} t_1 \\ t_2 \\ t_3 \end{bmatrix} = [\mathbf{R}_c, \mathbf{T}_c] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \)

(3) \( Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{A}_c [\mathbf{R}_c, \mathbf{T}_c] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \)

(4) \( \begin{bmatrix} \hat{x}_c \\ \hat{y}_c \end{bmatrix} = (1 + k_{c1} r_c^2 + k_{c2} r_c^4) \begin{bmatrix} x_c \\ y_c \end{bmatrix} + \begin{bmatrix} 2 p_{c1} x_c y_c + p_{c2}(r_c^2 + 2 x_c^2) \\ p_{c1}(r_c^2 + 2 y_c^2) + 2 p_{c2} x_c y_c \end{bmatrix} \)

(5) \( Z_p \begin{bmatrix} s \\ t \\ 1 \end{bmatrix} = \mathbf{A}_p \begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix} = \begin{bmatrix} \kappa_s & 0 & s_0 \\ 0 & \kappa_t & t_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix} \)

(6) \( Z_p \begin{bmatrix} s \\ t \\ 1 \end{bmatrix} = \mathbf{A}_p [\mathbf{R}_p, \mathbf{T}_p] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \)

(7) \( \begin{bmatrix} \hat{x}_p \\ \hat{y}_p \end{bmatrix} = (1 + k_{p1} r_p^2 + k_{p2} r_p^4) \begin{bmatrix} x_p \\ y_p \end{bmatrix} + \begin{bmatrix} 2 p_{p1} x_p y_p + p_{p2}(r_p^2 + 2 x_p^2) \\ p_{p1}(r_p^2 + 2 y_p^2) + 2 p_{p2} x_p y_p \end{bmatrix} \)

(8) \( Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{H}_c \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = \begin{bmatrix} h_{c11} & h_{c12} & h_{c13} & h_{c14} \\ h_{c21} & h_{c22} & h_{c23} & h_{c24} \\ h_{c31} & h_{c32} & h_{c33} & h_{c34} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \)

(9) \( Z_p \begin{bmatrix} s \\ t \\ 1 \end{bmatrix} = \mathbf{H}_p \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = \begin{bmatrix} h_{p11} & h_{p12} & h_{p13} & h_{p14} \\ h_{p21} & h_{p22} & h_{p23} & h_{p24} \\ h_{p31} & h_{p32} & h_{p33} & h_{p34} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \)

(10) \( \begin{bmatrix} u h_{c31} - h_{c11} & u h_{c32} - h_{c12} & u h_{c33} - h_{c13} \\ v h_{c31} - h_{c21} & v h_{c32} - h_{c22} & v h_{c33} - h_{c23} \\ s h_{p31} - h_{p11} & s h_{p32} - h_{p12} & s h_{p33} - h_{p13} \\ t h_{p31} - h_{p21} & t h_{p32} - h_{p22} & t h_{p33} - h_{p23} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} = \begin{bmatrix} h_{c14} - u h_{c34} \\ h_{c24} - v h_{c34} \\ h_{p14} - s h_{p34} \\ h_{p24} - t h_{p34} \end{bmatrix} \)

(11) \( g_m(s, t) = \alpha + \beta \cos\!\left( \dfrac{2\pi s}{\lambda_s} + \dfrac{2\pi m}{M} \right) \)

(12) \( I_m(u, v) = A(u, v) + B(u, v) \cos\!\left[ \Phi_s(u, v) + \dfrac{2\pi m}{M} \right] \)

(13) \( \phi_s(u, v) = \arctan\!\left[ \dfrac{-\sum_{m=0}^{M-1} I_m(u, v) \sin(2\pi m/M)}{\sum_{m=0}^{M-1} I_m(u, v) \cos(2\pi m/M)} \right] \)

(14) \( s = \dfrac{\lambda_s \Phi_s(u, v)}{2\pi} \)

(15) \( \begin{cases} \Phi_s(u, v) = \dfrac{a_2 + a_3 u + a_4 v}{1 + a_0 u + a_1 v} \\[1ex] \Phi_t(u, v) = \dfrac{a_5 + a_6 u + a_7 v}{1 + a_0 u + a_1 v} \end{cases} \)

(16) \( \begin{cases} -a_0 \Phi_s(u, v) u - a_1 \Phi_s(u, v) v + a_2 + a_3 u + a_4 v = \Phi_s(u, v) \\ -a_0 \Phi_t(u, v) u - a_1 \Phi_t(u, v) v + a_5 + a_6 u + a_7 v = \Phi_t(u, v) \end{cases} \)

(17) \( \begin{cases} S_n = \dfrac{\lambda_s \Phi_s(U_n, V_n)}{2\pi} = \dfrac{\lambda_s (a_2 + a_3 U_n + a_4 V_n)}{2\pi (1 + a_0 U_n + a_1 V_n)} \\[1ex] T_n = \dfrac{\lambda_t \Phi_t(U_n, V_n)}{2\pi} = \dfrac{\lambda_t (a_5 + a_6 U_n + a_7 V_n)}{2\pi (1 + a_0 U_n + a_1 V_n)} \end{cases} \)

(18) \( [\mathbf{R}_c, \mathbf{T}_c] = \begin{bmatrix} 0.9989 & 0.0073 & 0.0471 & 129.1097 \\ 0.0084 & 0.9997 & 0.0244 & 102.3350 \\ 0.0469 & 0.0248 & 0.9986 & 1297.9688 \end{bmatrix} \)

(19) \( [\mathbf{R}_p, \mathbf{T}_p] = \begin{bmatrix} 0.9859 & 0.0148 & 0.1665 & 122.9638 \\ 0.0149 & 0.9999 & 0.0007 & 241.5961 \\ 0.1665 & 0.0018 & 0.9860 & 833.3456 \end{bmatrix} \)

(20) \( \begin{bmatrix} x_p^{(i+1)} \\ y_p^{(i+1)} \end{bmatrix} = \begin{bmatrix} \hat{x}_p \\ \hat{y}_p \end{bmatrix} - \left[ k_{p1} \bigl(r_p^{(i)}\bigr)^2 + k_{p2} \bigl(r_p^{(i)}\bigr)^4 \right] \begin{bmatrix} x_p^{(i)} \\ y_p^{(i)} \end{bmatrix} - \begin{bmatrix} 2 p_{p1} x_p^{(i)} y_p^{(i)} + p_{p2} \bigl[ (r_p^{(i)})^2 + 2 (x_p^{(i)})^2 \bigr] \\ p_{p1} \bigl[ (r_p^{(i)})^2 + 2 (y_p^{(i)})^2 \bigr] + 2 p_{p2} x_p^{(i)} y_p^{(i)} \end{bmatrix} \)

(21) \( g_m(\hat{s}, \hat{t}) = \alpha + \beta \cos\!\left( \dfrac{2\pi s}{\lambda_s} + \dfrac{2\pi m}{M} \right) \)

(22) \( \hat{t} = \dfrac{h_{p21} \hat{X}_w + h_{p22} \hat{Y}_w + h_{p23} \hat{Z}_w + h_{p24}}{h_{p31} \hat{X}_w + h_{p32} \hat{Y}_w + h_{p33} \hat{Z}_w + h_{p34}} \)
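As a rough illustration of how Eqs. (20) and (21) can work together, the following sketch (ours, with illustrative projector intrinsics, distortion coefficients, pattern size, and fringe pitch, not the authors' code) inverts the distortion model by fixed-point iteration and then writes the ideal fringe phase at each distorted pixel to produce pre-compensated patterns.

import numpy as np

def undistort_iterative(x_hat, y_hat, k1, k2, p1, p2, iters=5):
    # Fixed-point inversion of the radial/tangential distortion model of Eq. (20):
    # start from the distorted coordinates and repeatedly subtract the distortion terms.
    x, y = x_hat.copy(), y_hat.copy()
    for _ in range(iters):
        r2 = x * x + y * y
        radial = k1 * r2 + k2 * r2 * r2
        dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        x = x_hat - radial * x - dx
        y = y_hat - radial * y - dy
    return x, y

# Illustrative projector intrinsics, distortion coefficients, pattern size, and fringe pitch.
kappa_s, kappa_t, s0, t0 = 1500.0, 1500.0, 512.0, 400.0
k1, k2, p1, p2 = -0.08, 0.02, 1e-4, -2e-4
W, H, lam_s, M = 1024, 768, 16.0, 4

# Treat each pattern pixel as the distorted coordinate (s_hat, t_hat) of Eq. (21),
# recover the corresponding ideal column coordinate s, and write the ideal phase there.
t_hat, s_hat = np.mgrid[0:H, 0:W].astype(float)
x_hat, y_hat = (s_hat - s0) / kappa_s, (t_hat - t0) / kappa_t
x, _ = undistort_iterative(x_hat, y_hat, k1, k2, p1, p2)
s_ideal = kappa_s * x + s0
patterns = [128 + 100 * np.cos(2 * np.pi * s_ideal / lam_s + 2 * np.pi * m / M)
            for m in range(M)]

A post-compensation variant could instead apply the same iterative inversion to the projector coordinates recovered from the measured phases before solving the triangulation system of Eq. (10).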