
Global calibration and equation reconstruction methods of a three-dimensional curve generated from a laser plane in vision measurement

Open Access

Abstract

We demonstrate a global calibration method for the laser plane that uses a 3D calibration board to generate the two horizontal coordinates and a height gauge to generate the height coordinate of each point in the laser plane. A sigmoid-Gaussian function for the candidate centers is employed to normalize the eigenvalues of the Hessian matrix and thus prevent missing centers or multiple centers. Camera calibration and laser plane calibration are then accomplished at the same time. Finally, the reconstructed 3D points are transformed to the horizontal plane by a forward process that involves one translation and two rotations. The parametric equation of the 3D curve is reconstructed by the inverse process performed on the 2D fitting curve.

© 2014 Optical Society of America

1. Introduction

Vision measurement based on structured light is an effective method in non-contact 3D shape metrology [1–5], manufacturing [6, 7], robot engineering [8, 9], etc. To extract the metrical information of an object, e.g., a tissue, a human body, or a vehicle, a laser plane, which can be regarded as a flexible non-contact marker, is projected onto the surface of the measured target [10]. The projected laser plane is bent into a curvilinear structure determined by the shape of the object. As the curved laser line carries both the crossing position and the depth information of the measured target, locating the central points of the laser line, calibrating the optical measurement system, and reconstructing the equation of the laser line are valuable to provide sufficient and precise data for the target features [11].

Several detection methods for laser line centers have been proposed in previous works, which can be summarized into two main categories. One approach detects the laser line centers at one-pixel precision based on the local distribution of gray values, e.g., local gray value differences [12, 13], the extreme value method [14, 15], the threshold method [16–18], and the directional template method [19–21]. There are two main drawbacks to this category. Firstly, owing to the statistical model for the laser line distribution in the test image, these methods are often computationally expensive when determining the potential points of the laser line from the hypothetical points. Secondly, they cannot extract the centers of the laser line with sub-pixel accuracy. The other approach concentrates on extracting the centers of the laser line at the sub-pixel level to improve the extraction accuracy, and consists of the edge detection method [22], the gray centroid method [23, 24], and the Hessian matrix method [25–27]. The Hessian matrix method [25–27] has strong noise immunity and good robustness compared with the other methods. It detects the centers by analyzing the two eigenvalues of the Hessian matrix at each candidate center point. In recent research, a reasonable normalization model for the eigenvalues in different regions was introduced to avoid redundant or missing centers [28]. In this paper, a standardization model based on the sigmoid function and the Gaussian function is adopted to balance the effects of the two eigenvalues, which improves the extraction stability. Furthermore, a Taylor expansion is employed to improve the extraction accuracy of the centers of the laser line.

Recently, several calibration methods have been proposed to obtain the relationship between the 2D centers in the image and the 3D curve in space. The back propagation neural network (BPNT) algorithm has been applied to the calibration of 3D vision inspection with structured light [29]. This method utilizes a simple 2D calibration target. However, the calibration process employs a complex photo-electrical aiming device and a 3D translation platform. Moreover, to train the BPNT, many target positions must be generated for the camera to acquire enough calibration points. The approach based on cross-ratio invariability (CRI) produces sufficient non-collinear control points for structured light calibration [30]. Fewer images of the planar target in different orientations need to be captured because the adopted CRI calculates additional control points for the calibration. The rigidity constraint method (RCM) was introduced to calibrate a generic structured light system consisting of one camera and one projector [31]. A constraint is used to derive a simple function for the simultaneous estimation of the parameters, resulting in more reliable parameters. The polynomial transformations method (PTM) performs the calibration procedure without particular geometrical constraints [32]. The work is founded on a systematic study of polynomial forms of different degrees and is suitable for large objects analyzed at a short distance. The markerless calibration method (MCM) estimates the aspect ratio of a planar display wall and calibrates multiple projectors [33]. The hue component of the captured image is used to detect the features and calculate the parameters for geometric transformation and blending. These previous works adopt a 2D calibration board or a 2D display as the calibration target and use the target characteristics as constraints. The calibrated result is a laser plane in the camera coordinate system. However, some measured values related to the world coordinate system, such as the vector from a mechanical part to the foundation at the origin of the world coordinate system, cannot be calculated directly by these methods.

In this paper, a calibration method is proposed to avoid iterative calculation and the complex transformation from camera coordinates to world coordinates. The remainder of this paper is organized as follows. In Section 2, we review a normalization model with the sigmoid function and the Gaussian function, which balances the effects of the two eigenvalues and detects the curve centers accurately. In Section 3, we describe the global calibration method using a 3D calibration board and a height gauge in the world coordinate system. In Section 4, we analyze the transformation process for the central points of the 3D curve and reconstruct the equation of the curve. We provide experiments and conclusions in Sections 5 and 6, respectively.

2. Extraction of curve centers

To segment the target from the non-target objects in an image and reduce the time consumed in recognizing the center line, the subtraction approach [34] is employed to obtain the difference between the original image with the laser line and the background image without the laser line. Since noise in the image often affects the extraction result, a Gaussian filter is applied to the image of the laser line.

To extract the centers of a laser line, an obvious feature is exploited: the intensity of the laser line in a transverse section should obey a Gaussian distribution. Furthermore, in the direction along the axis of the laser line, the gray value theoretically remains the same. For this reason, the second-order derivative at the center of the laser line along the axis should be 0, or have a very small absolute value in practice. In the direction perpendicular to the axis of the laser line, as the gray value reaches a peak at the center point, the second-order derivative should be negative with a large absolute value. Based on the analysis above, a probable center should satisfy the following criterion: in two mutually perpendicular directions, one second-order derivative should be near 0, while the other second-order partial derivative should be the most negative. To express this discrimination criterion, the Hessian matrix at a pixel in the image is given by [25]

$$H(x,y)=\begin{bmatrix} I_{xx}(x,y) & I_{xy}(x,y) \\ I_{xy}(x,y) & I_{yy}(x,y) \end{bmatrix}\tag{1}$$
Here, Ixx(x, y), Iyy(x, y), and Ixy(x, y) are the results of convolving the image with the second-order partial derivatives of the Gaussian function in the three directions, respectively. The two eigenvalues of the matrix describe the function variation along two perpendicular eigenvectors in the image. Therefore, the center positions of the laser line can be decided by the two eigenvalues λ1 (λ1 ≈ 0) and λ2 (λ2 << 0), and the direction of the laser line axis is determined by the two eigenvectors.

A model is created to provide a unified discrimination criterion for the various pixels in the laser line image after the image subtraction. In accordance with the intensity distribution characteristics of the laser line, the behavior of the laser line in its axial direction is described by the Gaussian function h1(λ1) in Eq. (2) [28].

$$h_1(\lambda_1)=\exp\left(-\lambda_1^2/c\right)\tag{2}$$
where λ1 is the eigenvalue of the Hessian matrix corresponding to the axial direction of the laser line and c is a constant. The gray value of a laser line center varies only slightly in the axial direction. The value of Eq. (2) is close to 1 when λ1 is close to 0 and approaches 0 when λ1 deviates markedly from 0. Equation (2) thus standardizes the eigenvalue λ1 of the Hessian matrix in the axial direction of the laser line centers.

The gray value generally obeys a Gaussian distribution in the direction perpendicular to the center axis of the laser line. Therefore, the eigenvalue λ2 of the Hessian matrix at a center point is much less than 0. The standardized decision function matching this characteristic is described by the sigmoid function h2(λ2) in Eq. (3).

$$h_2(\lambda_2)=\frac{1}{1+\exp\left[-b\left(|\lambda_2|-a/2\right)\right]}\tag{3}$$
where λ2 is the eigenvalue of the Hessian matrix corresponding to the direction perpendicular to the center axis of the laser line, and b is a constant. When λ2 is far less than 0, the value of the sigmoid function h2(λ2) is close to 1. When λ2 is near 0, the value of h2(λ2) is close to 0. Equation (3) thus standardizes the eigenvalue along the direction perpendicular to the center axis of the laser line for different pixels.

Taking the two eigenvalues into account, the integrated standardized decision function based on the sigmoid function and the Gaussian function is presented in Eq. (4).

$$h(\lambda_1,\lambda_2)=\frac{\exp\left(-\lambda_1^2/c\right)}{1+\exp\left[-b\left(|\lambda_2|-a/2\right)\right]}\tag{4}$$
where h(λ1, λ2) is the standardized value that maps the two eigenvalues λ1, λ2 of a candidate central pixel into the range from 0 to 1, and b is the inclination coefficient of the sigmoid function. As the eigenvalues of different pixels should be normalized to the range from 0 to 1, the sigmoid function curve should vary smoothly at the two ends and increase sharply in the middle, so that the normalized values of center points and non-center points are easily discriminated. For this purpose, b is set to 0.2 and a is given by Eq. (5)
$$a=\max\left[\left|\lambda_2\big(I_l(x,y)\big)\right|\right]\tag{5}$$
where a is the maximum absolute value of λ2 estimated from the Hessian matrices of all the pixels in the image. Equation (4) is illustrated in Fig. 1, where the value of c is set equal to a, as determined by the experiments in this research. The function in Fig. 1 shapes the extraction results in two respects. Firstly, in the direction of λ1, the function reaches its greatest value in the neighborhood of λ1 = 0, which agrees with the center characteristic that this eigenvalue should be a small value near zero. Secondly, in the direction of λ2, the function is increasing in |λ2|, which meets the center characteristic that the other eigenvalue should be negative with a large absolute value. By this method, the centers are determined solely by the normalized value h(λ1, λ2) and extracted more accurately.

Fig. 1 The normalization function h(λ1, λ2) for the eigenvalues λ1, λ2 of the Hessian matrix.
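To make the normalization model concrete, the following Python sketch computes the per-pixel Hessian eigenvalues from Gaussian derivative filters and evaluates the sigmoid-Gaussian score of Eq. (4). It is a minimal illustration under the stated assumptions (a background-subtracted grayscale image `img`, b = 0.2, and c = a as in the text), not the authors' implementation.

```python
# Minimal sketch of the sigmoid-Gaussian normalization, Eqs. (1)-(5).
# Assumes `img` is a background-subtracted grayscale image (2D float array).
import numpy as np
from scipy.ndimage import gaussian_filter

def sigmoid_gaussian_score(img, sigma=2.0, b=0.2):
    # Second-order Gaussian derivatives forming the Hessian of Eq. (1);
    # axis 0 is y (rows), axis 1 is x (columns).
    Ixx = gaussian_filter(img, sigma, order=(0, 2))
    Iyy = gaussian_filter(img, sigma, order=(2, 0))
    Ixy = gaussian_filter(img, sigma, order=(1, 1))
    # Closed-form eigenvalues of the symmetric 2x2 Hessian at every pixel.
    tr = Ixx + Iyy
    root = np.sqrt((Ixx - Iyy) ** 2 + 4.0 * Ixy ** 2)
    lam1 = 0.5 * (tr + root)  # near 0 along the line axis
    lam2 = 0.5 * (tr - root)  # strongly negative across the line
    # Eq. (5): a is the maximal |lambda2| over the image; c = a per the text.
    a = np.abs(lam2).max()
    c = a
    # Eq. (4): Gaussian term for lambda1, sigmoid term for lambda2.
    return np.exp(-lam1 ** 2 / c) / (1.0 + np.exp(-b * (np.abs(lam2) - a / 2.0)))

# As in Section 5, the center in each image column is the pixel with the
# maximal normalized score:
# h = sigmoid_gaussian_score(img); center_rows = np.argmax(h, axis=0)
```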

To improve the accuracy of the centers, the coordinates are calculated at the sub-pixel level. Let (x + t·nx, y + t·ny) be the sub-pixel coordinates obtained from the center coordinate (x, y) along the unit normal vector (nx, ny) deduced from the Hessian matrix. The Taylor expansion of I(x + t·nx, y + t·ny) at I(x, y) is [29]

$$I(x+tn_x,\,y+tn_y)=I(x,y)+tn_xI_x(x,y)+tn_yI_y(x,y)+\frac{1}{2}\left(t^2n_x^2I_{xx}(x,y)+2t^2n_xn_yI_{xy}(x,y)+t^2n_y^2I_{yy}(x,y)\right)\tag{6}$$
where t is an unknown value, and Ix(x, y), Iy(x, y) are the convolution results of the first-order partial derivatives in the two directions. Considering that the sub-pixel center lies on the peak along the normal vector, setting the derivative of Eq. (6) with respect to t equal to 0 yields [35]
$$t=-\frac{n_xI_x(x,y)+n_yI_y(x,y)}{n_x^2I_{xx}(x,y)+2n_xn_yI_{xy}(x,y)+n_y^2I_{yy}(x,y)}\tag{7}$$
The sub-pixel coordinates are obtained by substituting t into (x + t·nx, y + t·ny).
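In code, the refinement of Eqs. (6) and (7) is a single step along the line normal. The sketch below is a minimal version, assuming the Gaussian derivative images from the previous snippet and the unit Hessian eigenvector (nx, ny) across the line at an integer center (x, y).

```python
# Minimal sketch of the sub-pixel step of Eq. (7). Assumes Ix, Iy, Ixx,
# Ixy, Iyy are Gaussian derivative images and (nx, ny) is the unit Hessian
# eigenvector across the line at the integer center (x, y).
def subpixel_center(x, y, nx, ny, Ix, Iy, Ixx, Ixy, Iyy):
    # Eq. (7): t zeroes the first derivative of the Taylor model of Eq. (6).
    num = nx * Ix[y, x] + ny * Iy[y, x]
    den = nx**2 * Ixx[y, x] + 2.0 * nx * ny * Ixy[y, x] + ny**2 * Iyy[y, x]
    t = -num / den
    # The refined center is (x + t*nx, y + t*ny); |t| <= 0.5 indicates the
    # true center lies inside this pixel.
    return x + t * nx, y + t * ny
```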

3. Global calibration of laser plane

In vision measurement based on structured light, to achieve the 3D reconstruction of the coordinates of the intersection curve between the laser plane and the measured object, we need to acquire the position of the laser plane in the world coordinate system; this is the global calibration of the laser plane.

A method using a 3D calibration target and a height gauge is conducted to obtain 3D coordinates in the laser plane in the world coordinate system. The transformation matrix of the camera is calibrated at the same time. This method avoids the conversion from the camera coordinate system to the world coordinate system required by traditional methods and directly generates the object's coordinates in the world coordinate system.

The 3D calibration board is illustrated in Fig. 2. The origin of the world coordinate system coincides with the intersection point of the three planes of the 3D calibration board. The positions of the feature points on the calibration board have been accurately determined in the world coordinate system. The camera linear model [36, 37] can be expressed in homogeneous coordinates as follows

$$s_i\begin{bmatrix}x_i\\ y_i\\ 1\end{bmatrix}=\begin{bmatrix}m_{11}&m_{12}&m_{13}&m_{14}\\ m_{21}&m_{22}&m_{23}&m_{24}\\ m_{31}&m_{32}&m_{33}&m_{34}\end{bmatrix}\begin{bmatrix}X_i\\ Y_i\\ Z_i\\ 1\end{bmatrix}\tag{8}$$
where (Xi, Yi, Zi, 1) are the homogeneous world coordinates of the i-th point on the 3D target, (xi, yi, 1) are the corresponding homogeneous image coordinates, and mjk is the element in the j-th row and k-th column of the transformation matrix M. The least squares method is employed to solve for the elements of M.
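As an illustration, Eq. (8) can be solved by the standard direct linear transformation. The sketch below is a minimal version, assuming N ≥ 6 correspondences in the arrays `XYZ` (N × 3 world points) and `xy` (N × 2 image points), and fixing m34 = 1, which matches the scale of the experimental matrix in Section 5; the authors' exact solver may differ.

```python
# Minimal sketch of the least-squares solution of Eq. (8) with m34 = 1.
# Assumes `XYZ` (N x 3) and `xy` (N x 2) hold corresponding points, N >= 6.
import numpy as np

def calibrate_camera_dlt(XYZ, xy):
    N = XYZ.shape[0]
    A = np.zeros((2 * N, 11))
    rhs = np.zeros(2 * N)
    for i, ((X, Y, Z), (x, y)) in enumerate(zip(XYZ, xy)):
        # Eliminating s_i from Eq. (8) yields two linear equations per point.
        A[2 * i]     = [X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z]
        A[2 * i + 1] = [0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z]
        rhs[2 * i], rhs[2 * i + 1] = x, y
    m, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.append(m, 1.0).reshape(3, 4)  # 3 x 4 transformation matrix M
```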

Fig. 2 The calibration process of a laser plane using a height gauge and a 3D calibration board.

To globally calibrate the laser plane equation, at least three non-collinear 3D points in the laser plane must be obtained. These 3D coordinates are then fitted to construct the laser plane equation in the world coordinate system.

The procedure for obtaining the 3D coordinates in the laser plane is explained in Fig. 2. The height gauge is placed on the xoy plane of the 3D calibration board, with a base corner of the height gauge coinciding with a corner of the grid pattern on the board. As the origin of the world coordinate system is the common point of the three planes of the 3D calibration board, the two horizontal coordinates of the height gauge at this position are defined in the world coordinate system. Considering that the projected laser plane and the height gauge intersect at one point on the scale of the height gauge, the height coordinate of the intersection point between the laser plane and the height gauge is read directly from the scale. Consequently, the 3D coordinate of the intersection point between the laser plane and the height gauge is established and recorded. The horizontal positions of the height gauge corners in the world coordinate system are illustrated by the yellow pyramidal points on the bottom plane of the 3D calibration board, as shown in Fig. 3. The height coordinates are obtained as the height gauge is moved from one yellow pyramidal point to another. Therefore, the 3D coordinates of points in the laser plane are generated by moving the height gauge in the horizontal plane of the 3D board; they are represented by the spherical points in Fig. 3. The laser plane equation is calibrated by fitting these 3D coordinates and is given by

Fig. 3 The 3D calibration points of a laser plane using a height gauge and a 3D calibration board.

$$aX+bY+cZ+d=0\tag{9}$$
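For example, the plane fit can be performed by ordinary least squares in the explicit form Z = pX + qY + r used later in Eq. (20). The sketch below is minimal and assumes `pts` is an N × 3 array of the gauge-measured points and that the laser plane is not parallel to the z axis.

```python
# Minimal sketch of fitting the laser plane of Eq. (9) by least squares.
# Assumes `pts` is an N x 3 array of non-collinear measured 3D points and
# the plane is not parallel to the z axis.
import numpy as np

def fit_plane(pts):
    # Solve Z = p*X + q*Y + r, the explicit form used in Eq. (20).
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (p, q, r), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    # Rewrite as a*X + b*Y + c*Z + d = 0 with a unit normal (a, b, c).
    norm = np.linalg.norm([p, q, -1.0])
    return p / norm, q / norm, -1.0 / norm, r / norm
```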

4. Equation reconstruction of 3D curve

According to Eqs. (8) and (9), the 3D coordinates of the crossing curve between the laser plane and the measured object are obtained. Nevertheless, it is still difficult to reconstruct the equation of the crossing curve in 3D space because the shape of the measured surface is unknown. Three main steps are performed to fit and reconstruct the measured curve, based on the fact that the 3D points are coplanar with the laser plane. The first step transforms the 3D points to 2D points by one translation and two rotations in space, as shown in Fig. 4. Then the 2D points are fitted to a 2D curve equation in the horizontal plane. Finally, the 2D curve equation is transformed into a 3D parametric equation by the inverse of the first step. The transformation process of the curve equation is illustrated in Fig. 5.

Fig. 4 The forward process of the translation and rotations from the original reconstructed 3D points to the 2D points in the horizontal plane. T is the translation vector, α is the first rotation angle about the x axis, and β is the second rotation angle about the y axis.

Fig. 5 The inverse process of the translation and rotations from the fitted curve in the horizontal plane to the fitted curve in the original laser plane. -β is the first rotation angle about the y axis, -α is the second rotation angle about the x axis, and -T is the translation vector.

The normal vector of the original laser plane is calculated first. The 3D coordinates of the crossing curve are translated along the normal vector toward the origin of the world coordinate system, as given by Eq. (10).

$$\begin{pmatrix}X^{(T)}\\ Y^{(T)}\\ Z^{(T)}\end{pmatrix}=\begin{pmatrix}X\\ Y\\ Z\end{pmatrix}+T\tag{10}$$
where (X(T), Y(T), Z(T))' are the spatial coordinates translated toward the origin of the world coordinate system, (X, Y, Z)' are the reconstructed 3D coordinates of the crossing curve, and the translation vector T = (tx, ty, tz)' is the normal vector from the laser plane to the origin of the world coordinate system.

The 3D coordinates of the crossing curve after a rotation about the x axis through an angle α are given by Eq. (11).

$$\begin{pmatrix}X^{(\alpha,T)}\\ Y^{(\alpha,T)}\\ Z^{(\alpha,T)}\end{pmatrix}=R_x(\alpha)\begin{pmatrix}X^{(T)}\\ Y^{(T)}\\ Z^{(T)}\end{pmatrix}\tag{11}$$
where (X(α,T), Y(α,T), Z(α,T))' are the results of rotating (X(T), Y(T), Z(T))' about the x axis by the angle α, and the rotation matrix is
$$R_x(\alpha)=\begin{bmatrix}1&0&0\\ 0&\cos\alpha&-\sin\alpha\\ 0&\sin\alpha&\cos\alpha\end{bmatrix}.$$
The rotation angle α is the angle between the z axis and the projection of the normal vector onto the yoz plane, and is expressed by

$$\alpha=\arctan\left(t_y/t_z\right)\tag{12}$$

The normal vector of the original laser plane is transformed into the xoz plane after the rotation about the x axis by the angle α, since its y component, cosα·ty − sinα·tz, vanishes when α satisfies Eq. (12). Therefore, the resulting normal vector (tx(α), ty(α), tz(α))' is calculated by

$$\begin{pmatrix}t_x^{(\alpha)}\\ t_y^{(\alpha)}\\ t_z^{(\alpha)}\end{pmatrix}=R_x(\alpha)\begin{pmatrix}t_x\\ t_y\\ t_z\end{pmatrix}\tag{13}$$

After the second rotation, about the y axis by the angle β, the 3D coordinates of the crossing curve are given by Eq. (14).

$$\begin{pmatrix}X^{(\alpha,\beta,T)}\\ Y^{(\alpha,\beta,T)}\\ Z^{(\alpha,\beta,T)}\end{pmatrix}=R_y(\beta)\begin{pmatrix}X^{(\alpha,T)}\\ Y^{(\alpha,T)}\\ Z^{(\alpha,T)}\end{pmatrix}\tag{14}$$
where (X(α,β,T), Y(α,β,T), Z(α,β,T))' are the results of rotating (X(α,T), Y(α,T), Z(α,T))' about the y axis by the angle β, and the rotation matrix is
$$R_y(\beta)=\begin{bmatrix}\cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{bmatrix}.$$
The angle β is the angle between the z axis and the projection of the normal vector onto the xoz plane; it is chosen so that the x component of the rotated normal vector vanishes, and is expressed by

$$\beta=\arctan\left(-t_x^{(\alpha)}/t_z^{(\alpha)}\right)\tag{15}$$

After the translation and the two rotations, the 3D coordinates of the curve points are converted to the xoy plane. The 2D curve equation is obtained by curve fitting in the xoy plane. The 2D parametric equations, taking x or y as the parameter, can be expressed respectively by

$$\begin{cases}x=x\\ y=f(x)\\ z=0\end{cases}\tag{16}$$
$$\begin{cases}x=f_1(y)\\ y=y\\ z=0\end{cases}\tag{17}$$

The 2D curve equations in Eqs. (16) and (17) are multiplied by the inverse rotation matrices Ry(-β) and Rx(-α) and then translated by -T. Consequently, the reconstructed parametric equations of the 3D curve in the original position are given, respectively, by

$$\begin{bmatrix}X\\ Y\\ Z\end{bmatrix}=\begin{bmatrix}\cos\beta\,x-t_x\\ \cos\alpha\,f(x)+\sin\alpha\sin\beta\,x-t_y\\ -\sin\alpha\,f(x)+\cos\alpha\sin\beta\,x-t_z\end{bmatrix}\tag{18}$$
$$\begin{bmatrix}X\\ Y\\ Z\end{bmatrix}=\begin{bmatrix}\cos\beta\,f_1(y)-t_x\\ \cos\alpha\,y+\sin\alpha\sin\beta\,f_1(y)-t_y\\ -\sin\alpha\,y+\cos\alpha\sin\beta\,f_1(y)-t_z\end{bmatrix}\tag{19}$$
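The complete forward-fit-inverse pipeline of Eqs. (10)-(19) can be sketched as follows. The assumptions are that `pts` (N × 3) are the reconstructed curve points lying on the calibrated plane of Eq. (9), that the rotation matrices follow the conventions of Eqs. (11) and (14), and that a quadratic x = f1(y) is fitted as in the experiments of Section 5; the function names are illustrative.

```python
# Minimal sketch of the forward transformation (Eqs. (10)-(15)), the 2D fit
# (Eq. (17)), and the inverse restoration (Fig. 5). Assumes `pts` (N x 3)
# lie on the plane a*X + b*Y + c*Z + d = 0.
import numpy as np

def Rx(t):
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def Ry(t):
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [ 0,         1, 0        ],
                     [-np.sin(t), 0, np.cos(t)]])

def flatten_and_fit(pts, a, b, c, d, deg=2):
    n = np.array([a, b, c], float)
    T = (d / n.dot(n)) * n                       # Eq. (10): plane -> origin
    alpha = np.arctan(T[1] / T[2])               # Eq. (12)
    t1 = Rx(alpha) @ T                           # Eq. (13): y component -> 0
    beta = np.arctan(-t1[0] / t1[2])             # Eq. (15): x component -> 0
    q = (Ry(beta) @ Rx(alpha) @ (pts + T).T).T   # Eqs. (11), (14); q[:,2] ~ 0
    coef = np.polyfit(q[:, 1], q[:, 0], deg)     # x = f1(y), Eq. (17)
    return coef, alpha, beta, T

def restore_curve(y, coef, alpha, beta, T):
    # Inverse process: Ry(-beta), then Rx(-alpha), then translate by -T,
    # reproducing Eq. (19) for sampled parameter values y.
    p2d = np.stack([np.polyval(coef, y), y, np.zeros_like(y)])
    return (Rx(-alpha) @ Ry(-beta) @ p2d).T - T
```

With the plane of Eq. (20), this sketch reproduces the experimental values of Section 5: T ≈ (3.41, 74.06, −204.77)', α ≈ −19.88°, and β ≈ 0.90°.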

5. Experiments and discussions

The experimental system mainly consists of a laser projector, a camera, a 3D calibration board, a height gauge, a tripod, and a computer. The camera is a DH-HV3102UC-T with a Computar® lens of 5 mm focal length. A red laser projector with 635 nm wavelength is adopted to generate the laser plane. The 3D calibration board is 500 × 500 × 500 mm with a 60 × 60 mm gridiron pattern. The measuring range of the height gauge is 0–500 mm. The image resolution is 640 × 480. The main configuration of the computer is eight 2.2 GHz CPU cores and 6 GB RAM.

Comparative experiments are first conducted with the standardized model and the traditional model. Two groups of experimental results are shown in Figs. 6 and 7; the first group, in Fig. 6, is taken as the example in the following paragraphs. Image preprocessing is performed with the subtraction method to extract the laser line from the background image, as shown in Fig. 6(b). The extraction results of the Hessian matrix method and a partial enlarged view are demonstrated in Figs. 6(c) and 6(d). The extraction results of the sigmoid-Gaussian model and a partial enlarged view are shown in Figs. 6(e) and 6(f). "+" indicates the pixel coordinates of the laser line centers, and "×" indicates the sub-pixel coordinates of the laser line centers. The arrows indicate the eigenvectors at the center points, perpendicular to the axis of the laser line. In Figs. 6(c) and 6(d), the selection conditions for the laser line centers are that the absolute value of the eigenvalue λ1, which is near 0, should be less than 1, and that the eigenvalue λ2, which is far less than 0, should be less than −2. The result indicates that several centers appear in the same transverse section of the laser line. In Figs. 6(e) and 6(f), the decision function values of the laser line centers are derived from the sigmoid-Gaussian model; the laser line center is taken as the point with the maximal sigmoid-Gaussian function value in each column of the image. Compared with the results of the Hessian matrix method, the curve passing through the centers recognized by the normalization function agrees well with the projected laser curve.

Fig. 6 Experimental results of the first laser curve on the car model. (a) Original image, (b) Laser curve, (c) Centers extracted by the Hessian method with the thresholds of λ1 = 1 and λ2 = −2, (d) Enlarged view of Fig. 6(c), (e) Centers extracted by the sigmoid-Gaussian method, (f) Enlarged view of Fig. 6(e), (g) Forward transformation process from the 3D reconstructed centers to the 2D points in the xoy plane, (h) Inverse transformation process from the 2D curve in the xoy plane to the 3D curve in the original position.

Fig. 7 Experimental results of the second laser curve on the car model. (a) Original image, (b) Laser curve, (c) Centers extracted by the Hessian method with the thresholds of λ1 = 1 and λ2 = −2, (d) Enlarged view of Fig. 7(c), (e) Centers extracted by the sigmoid-Gaussian method, (f) Enlarged view of Fig. 7(e), (g) Forward transformation process from the 3D reconstructed centers to the 2D points in the xoy plane, (h) Inverse transformation process from the 2D curve in the xoy plane to the 3D curve in the original position.

After processing the image, the sub-pixel coordinates of the centers of the laser curve can be extracted precisely. To reconstruct the 3D points of the laser curve, the relationship between the 3D world coordinate system and the 2D image coordinate system is established by the camera calibration. The 3D calibration board is positioned in front of the camera, with the intersection point of its three planes as the origin of the world coordinate system. Nine feature points on each plane of the calibration board are chosen as the 3D calibration points, and the 27 corresponding 2D feature coordinates are captured by the camera. The transformation matrix M is calculated by Eq. (8); in the experiments it equals
$$M=\begin{pmatrix}0.4121&0.3226&0.0148&426.0187\\ 0.2063&0.0333&0.4551&554.8390\\ 0.0005&0.0002&0.0000&1.0000\end{pmatrix}.$$

The transformation matrix M represents a mapping from the 3D space to the 2D space. Owing to the lack of a constraint on the 3D coordinates, the 2D points in the image cannot be mapped directly back to the 3D space. Therefore, it is necessary to calibrate the laser plane equation and add a constraint for the 2D-3D transformation. The baseline length between the laser projector and the camera is 600 mm. As the laser plane intersects the height gauge, the 3D coordinates of the intersection point are provided by the scale of the height gauge for the z value and by the horizontal grid of the calibration board for the x, y values. The height gauge is located at 24 different positions in the horizontal plane, as shown in Fig. 2, yielding the 3D coordinates of 24 points in the laser plane. From these 3D coordinates, the equation of the laser plane is fitted by the least squares method and given by

$$Z=0.0167X+0.3617Y+231.6067\tag{20}$$
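The reconstruction of a 3D curve point, described in the next paragraph as substituting its image coordinates into the simultaneous equations of M and the laser plane, reduces to a 3 × 3 linear system: the two projection constraints of Eq. (8) plus the plane constraint of Eq. (9). A minimal sketch follows; for the plane of Eq. (20), (a, b, c, d) = (0.0167, 0.3617, −1, 231.6067).

```python
# Minimal sketch of reconstructing a 3D point from an image point (x, y)
# using the camera matrix M of Eq. (8) and the laser plane of Eq. (9).
import numpy as np

def reconstruct_point(x, y, M, a, b, c, d):
    # Eliminating s from Eq. (8) gives two linear equations in (X, Y, Z);
    # the plane a*X + b*Y + c*Z = -d supplies the third.
    A = np.array([
        [M[0, 0] - x * M[2, 0], M[0, 1] - x * M[2, 1], M[0, 2] - x * M[2, 2]],
        [M[1, 0] - y * M[2, 0], M[1, 1] - y * M[2, 1], M[1, 2] - y * M[2, 2]],
        [a, b, c],
    ])
    rhs = np.array([x * M[2, 3] - M[0, 3], y * M[2, 3] - M[1, 3], -d])
    return np.linalg.solve(A, rhs)  # world coordinates (X, Y, Z)
```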

It is worth noting that this calibration result for the laser plane is independent of the camera calibration and yields the laser plane directly. The 3D coordinates of the curve are reconstructed by substituting the 2D coordinates of the extracted curve into the simultaneous equations of the matrix M and the laser plane. However, considering the complex shape of the car model, it is difficult to fit the curve equation directly in 3D space. We therefore transform the 3D points to 2D points through one translation and two rotations by Eqs. (10), (11), and (14), respectively, and then fit the curve in the xoy plane. The transformations of the 3D points are shown in Figs. 6(g) and 7(g). Firstly, the normal vector that passes through the origin is calculated and taken as the translation vector T. The 3D points are all translated toward the origin by the translation vector T, which equals (3.4128 74.0570 −204.7658)'. Then the angle α of the x-axis rotation is calculated by Eq. (12) and equals −19.8834°. The 3D coordinates rotated about the x axis are obtained by multiplying the translated 3D coordinates by the rotation matrix Rx(α). Finally, the angle β of the y-axis rotation is calculated by Eq. (15) and equals 0.8979°. The 3D coordinates rotated about the y axis are obtained by multiplying the rotated 3D coordinates by the rotation matrix Ry(β). The parametric equations, with parameter y, of the crossing curves in the xoy plane are fitted to the 2D coordinates and given by

$$\begin{cases}x=0.0012y^2+0.4597y+127.751\\ y=y\\ z=0\end{cases}\tag{21}$$
$$\begin{cases}x=0.0012y^2+0.4597y+127.751\\ y=y\\ z=0\end{cases}\tag{22}$$

To reconstruct the 3D curve equations at the initial position, the inverse transformations of the translation and rotations are performed on the 2D curve equations in the xoy plane. The coefficient matrices of Eqs. (21) and (22) are multiplied by the rotation matrices Ry(-β) and Rx(-α), respectively, and then translated by -T. The results of the initial curves and the transformation processes are shown in Figs. 6(h) and 7(h). The parametric equations of the two curves in the world coordinate system are given by

$$\begin{cases}X=-0.0012y^2+0.4596y+124.3226\\ Y=6.3022\times10^{-6}y^2+0.9379y-74.7379\\ Z=-1.7426\times10^{-5}y^2+0.3469y+206.6485\end{cases}\tag{23}$$
$$\begin{cases}X=-0.0012y^2+0.4596y+124.3226\\ Y=6.3022\times10^{-6}y^2+0.9379y-74.7379\\ Z=-1.7426\times10^{-5}y^2+0.3469y+206.6485\end{cases}\tag{24}$$

In the experiments, as the camera matrix M represents the relative position between the camera and the world coordinate system, there is no need to recalibrate the camera if this relative position is fixed. In other words, any object placed inside the visual field of the camera can be measured without the calibration board. The 3D points are transformed to 2D points by the three forward transformations in the world coordinate system, and the parametric equations of the laser curves at the initial positions are reconstructed by the inverse translation and rotations.

6. Conclusions

In this work we developed an approach to calibrate the laser plane in the world coordinate system and to reconstruct the equation of a 3D curve generated from a laser line in vision measurement. A normalization model that combines the sigmoid function with the Gaussian function is constructed to enhance the extraction accuracy of the laser line centers and to avoid extracting redundant centers or losing centers. A global calibration method based on a 3D calibration board and a height gauge is proposed to realize the camera calibration and the laser plane calibration at the same time; the coordinate conversion from the camera coordinate system to the world coordinate system is bypassed by this method. A forward process consisting of one translation to the origin, a rotation about the x axis, and a rotation about the y axis is presented to transform the reconstructed 3D points of the space curve to 2D points in the horizontal plane. The equation of the 2D curve in the horizontal plane is fitted to the 2D points by the least squares method, and the 3D curve in the original position is reconstructed by the inverse process performed on the 2D curve. The reconstruction experiments on the selected samples clearly indicate a high potential for sensing and non-contact inspection in real-world applications.

Acknowledgments

The authors gratefully acknowledge the support of the National Natural Science Foundation of China under Grants No. 51205164 and No. 51478204, the Spring Bud Talents Plan of Jilin Province, and the Jilin Province Science Foundation for Youths under Grant No. 20130522154JH.

References and links

1. S. Zhang, D. Van Der Weide, and J. Oliver, “Superfast phase-shifting method for 3-D shape measurement,” Opt. Express 18(9), 9684–9689 (2010). [CrossRef]   [PubMed]  

2. Z. H. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012). [CrossRef]  

3. F. Felberer, J. S. Kroisamer, C. K. Hitzenberger, and M. Pircher, “Lens based adaptive optics scanning laser ophthalmoscope,” Opt. Express 20(16), 17297–17310 (2012). [CrossRef]   [PubMed]  

4. Y. J. Wang, S. Zhang, and J. H. Oliver, “3D shape measurement technique for multiple rapidly moving objects,” Opt. Express 19(9), 8539–8545 (2011). [CrossRef]   [PubMed]  

5. G. Xu, X. T. Li, J. Su, H. D. Pan, and G. D. Tian, “Precision evaluation of three-dimensional feature points measurement by binocular vision,” J. Opt. Soc. Korea 15(1), 30–37 (2011). [CrossRef]  

6. G. A. Al-Kindi and B. Shirinzadeh, “An evaluation of surface roughness parameters measurement using vision-based data,” Int. J. Mach. Tools Manuf. 47(3), 697–708 (2007). [CrossRef]  

7. K. Zhang, B. Xu, L. Tang, and H. Shi, “Modeling of binocular vision system for 3D reconstruction with improved genetic algorithms,” Int. J. Adv. Manuf. Technol. 29(7–8), 722–728 (2006). [CrossRef]  

8. Z. Hu, C. Marshall, R. Bicker, and P. Taylor, “Automatic surface roughing with 3D machine vision and cooperative robot control,” Robot. Auton. Syst. 55(7), 552–560 (2007). [CrossRef]  

9. M. J. Milford and G. F. Wyeth, “Mapping a suburb with a single camera using a biologically inspired SLAM system,” IEEE Trans. Robot. 24(5), 1038–1053 (2008). [CrossRef]  

10. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

11. S. Zhang and S. T. Yau, “High-resolution, real-time 3D absolute coordinate measurement based on a phase-shifting method,” Opt. Express 14(7), 2644–2649 (2006). [CrossRef]   [PubMed]  

12. M. A. Fischler, J. M. Tenenbaum, and H. C. Wolf, “Detection of roads and linear structures in low-resolution aerial imagery using a multisource knowledge integration technique,” Comput. Graph. Image Process. 15(3), 201–223 (1981). [CrossRef]  

13. D. Geman and B. Jedynak, “An active testing model for tracking roads in satellite images,” IEEE Trans. Pattern Anal. 18(1), 1–14 (1996). [CrossRef]  

14. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986). [CrossRef]   [PubMed]  

15. P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. 12(7), 629–639 (1990). [CrossRef]  

16. C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of Alvey Vision Conference (Manchester University, UK, Aug. 1988), pp. 147–151.

17. J. van de Weijer, T. Gevers, and J. M. Geusebroek, “Edge and corner detection by photometric quasi-invariants,” IEEE Trans. Pattern Anal. 27(4), 625–630 (2005). [CrossRef]  

18. L. W. Tsai, J. W. Hsieh, and K. C. Fan, “Vehicle detection using normalized color and edge map,” IEEE Trans. Image Process. 16(3), 850–864 (2007). [CrossRef]   [PubMed]  

19. S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, “Detection of blood vessels in retinal images using two-dimensional matched filters,” IEEE Trans. Med. Imaging 8(3), 263–269 (1989). [CrossRef]   [PubMed]  

20. D. Ziou, “Line detection using an optimal IIR filter,” Pattern Recognit. 24(6), 465–478 (1991). [CrossRef]  

21. O. Laligant and F. Truchetet, “A nonlinear derivative scheme applied to edge detection,” IEEE Trans. Pattern Anal. 32(2), 242–257 (2010). [CrossRef]  

22. T. M. Koller, G. Gerig, G. Szekely, and D. Dettwiler, “Multiscale detection of curvilinear structures in 2-D and 3-D image data,” in Proceedings of the Fifth International Conference on Computer Vision (Boston, MA., Jun. 1995), pp. 864–869. [CrossRef]  

23. M. R. Shortis, T. A. Clarke, and T. Short, “A comparison of some techniques for the subpixel location of discrete target images,” Proc. SPIE 2350, 239–250 (1994). [CrossRef]  

24. M. A. Luengo-Oroz, E. Faure, and J. Angulo, “Robust iris segmentation on uncalibrated noisy images using mathematical morphology,” Image Vis. Comput. 28(2), 278–284 (2010). [CrossRef]  

25. C. Steger, “An unbiased detector of curvilinear structures,” IEEE Trans. Pattern Anal. 20(2), 113–125 (1998). [CrossRef]

26. L. Qi, Y. Zhang, X. Zhang, S. Wang, and F. Xie, “Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger’s algorithm,” Opt. Express 21(11), 13442–13449 (2013). [CrossRef]   [PubMed]  

27. C. Lemaitre, M. Perdoch, A. Rahmoune, J. Matas, and J. Miteran, “Detection and matching of curvilinear structures,” Pattern Recognit. 44(7), 1514–1527 (2011). [CrossRef]  

28. G. Xu, L. N. Sun, X. T. Li, J. Su, Z. B. Hao, and X. Lu, “Adaptable center detection of a laser line with a normalization approach using Hessian-matrix eigenvalues,” J. Opt. Soc. Korea 18(4), 317–330 (2014).

29. G. J. Zhang and Z. Z. Wei, “A novel calibration approach to structured light 3D vision inspection,” Opt. Laser Technol. 34(5), 373–380 (2002). [CrossRef]  

30. F. Q. Zhou and G. J. Zhang, “Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations,” Image Vis. Comput. 23(1), 59–67 (2005). [CrossRef]  

31. R. Legarda-Saenz, T. Bothe, and W. P. Ju, “Accurate procedure for the calibration of a structured light system,” Opt. Eng. 43(2), 464–471 (2004). [CrossRef]  

32. I. Leandry, C. Breque, and V. Valle, “Calibration of a structured-light projection system: development to large dimension objects,” Opt. Lasers Eng. 50(3), 373–379 (2012). [CrossRef]  

33. J. Yan, G. H. Gong, and C. Tian, “Hue-based feature detection for geometry calibration of multiprojector arrays,” Opt. Eng. 53(6), 063108 (2014). [CrossRef]  

34. C. Alard and R. H. Lupton, “A method for optimal image subtraction,” Astrophys. J. 503(1), 325–331 (1998). [CrossRef]  

35. C. Steger, “Unbiased extraction of lines with parabolic and Gaussian profiles,” Comput. Vis. Image Underst. 117(2), 97–112 (2013).

36. Y. I. Abdel-Aziz and H. M. Karara, “Direct linear transformation into object space coordinates in close-range photogrammetry,” in Proceedings of the Symposium on Close-Range Photogrammetry (Falls Church, VA, USA, 1971), pp. 1–18.

37. G. Xu, X. T. Li, J. Su, H. D. Pan, and L. X. Geng, “Integrative evaluation of the optimal configuration for the measurement of the line segments using stereo vision,” Optik (Stuttg.) 124(11), 1015–1018 (2013). [CrossRef]  



