
Geometry-invariant-based reconstruction generated from planar laser and metrical rectification with conic dual to circular points in the similarity space

Open Access

Abstract

3D point reconstruction is a crucial component of optical inspection. A direct reconstruction process is proposed that combines two similarity invariants in active vision. A planar reference with an isosceles-right-angle pattern and a coplanar laser are adopted to generate the laser projection point on the measured object. The first invariant is the image of the conic dual to the circular points (ICDCP), which is derived from the lines in two pairs of perpendicular directions on the reference pattern. This invariant provides the transform from the projection space to the similarity space. The ratio of the line segments formed by the laser projection points and the reference points is then constructed as the second similarity invariant, by which the laser projection point in the similarity space is converted to the Euclidean space. The solution for the laser point is modeled on the ratio invariant of the line segments and improved by a special point selection that avoids nonlinear equations. Finally, the benchmark-camera distance, the benchmark-generator distance, the benchmark length, the image noise, and the number of orthogonal lines are experimentally investigated to explore the effectiveness and reconstruction error of the method. Average reconstruction errors of 0.94, 1.22, 1.77, and 2.15 mm are observed for benchmark-camera distances from 600 mm to 750 mm at a 50 mm interval, which proves the validity and practicability of the reconstruction method.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Reconstruction of 3D points is an important task in optical inspection [1,2], spanning scientific research and industrial applications such as manufacturing information acquisition [3], architecture [4], vehicle model inspection [5], mechanical part inspection [6,7], and biometric techniques [8,9]. Many studies on shape reconstruction have been reported. They can be classified as contact or vision-based measurements. The coordinate measuring machine (CMM) is a typical contact instrument [10] and a precise method for measuring 3D shape. However, the CMM probe must be moved along three rails, which is unsuitable for on-site inspection. Therefore, vision-based measurement is a crucial method for 3D reconstruction.

As the camera projects 3D scenes to 2D images, 3D information is difficult to recover by the inverse process. Vision-based measurement follows two main approaches, passive vision [11] and active vision [12]. Passive vision adopts two or more views to provide enough constraints for 3D reconstruction, and it works best for objects with obvious texture. Active vision based on a camera-laser structure compensates for the lack of obvious texture by actively projecting structured light. As the structured light can be calibrated and projected onto the surface as a metrical mark, active vision is appropriate for measuring objects without feature points.

Our reconstruction system consists of a reference object, a camera, and a planar laser. A 3D object is mapped to the image by the camera. However, a point of the 3D object on the laser plane, which is coplanar with the reference plane, is dimensionally reduced to a quasi-2D point, i.e., a 3D point with one zero element. Thus, the intersection between the object and the planar laser is essentially 2D information. The object-image transform can be considered a transform from the Euclidean space to the projection space, and it is the composition of a simple projection transform, an affine transform, and a similarity transform [13]. The transform from the Euclidean space to the projection space has 8 degrees of freedom: 2 from the simple projection transform, 2 from the affine transform, and 4 from the similarity transform. Therefore, there is a 2-2-4 transform chain from the spatial object to the image. Current active-vision-based methods focus on the reconstruction from the projection space all the way to the Euclidean space. However, it is unnecessary to solve all 8 degrees of freedom for active-vision reconstruction. In the transform process, some invariants are immune to the transforms above: the conic dual to the circular points (CDCP) and the ratio of line segments are two invariants of the similarity transform [13]. Once the simple projection and affine transforms are removed, these invariants take the same values in the similarity space as in the Euclidean space. Therefore, for point reconstruction it is unnecessary to rectify the laser points from the image to the Euclidean space, provided a suitable invariant is constructed from the laser points and coplanar reference points. The benefit is that only the 2 degrees of freedom of the simple projection transform and the 2 of the affine transform need to be rectified, which avoids rectifying the unnecessary 4 degrees of freedom of the similarity transform. This paper offers the following innovations. Firstly, although the similarity invariance of the CDCP and of the ratio of line segments have been separately reported in passive vision, this paper combines both invariants in an active-vision reconstruction system. Secondly, the isosceles-right-angle pattern on the reference is designed to generate the ICDCP from the lines in two pairs of perpendicular directions and to provide the reference points for the ratio of line segments. Finally, the solution model of the laser point is constructed from the ratio invariant of the line segments and improved by a special point selection that avoids nonlinear equations.
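For concreteness, the decomposition from [13] can be written out explicitly; the block below merely restates the 2-2-4 chain with the same $K$ and $\mathbf{w}$ notation used later in Section 3.2:

$$H = H_\mathrm{S}H_\mathrm{A}H_\mathrm{P} = \underbrace{\begin{pmatrix} sR & \mathbf{t} \\ \mathbf{0}^\mathrm{T} & 1 \end{pmatrix}}_{\text{similarity, 4 dof}} \underbrace{\begin{pmatrix} K & \mathbf{0} \\ \mathbf{0}^\mathrm{T} & 1 \end{pmatrix}}_{\text{affine, 2 dof}} \underbrace{\begin{pmatrix} I & \mathbf{0} \\ \mathbf{w}^\mathrm{T} & 1 \end{pmatrix}}_{\text{projective, 2 dof}}$$

Rectifying with the ICDCP removes only $H_\mathrm{P}$ and $H_\mathrm{A}$; the residual $H_\mathrm{S}$ is harmless because the ratio of line segments is unchanged by scale, rotation, and translation.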

The rest of the paper includes five sections. Section 2 introduces the related work. Section 3 outlines the reconstruction model: the similarity invariant of the CDCP is employed to rectify the image to the similarity space, and the ratio of the line segments formed by the laser points and points on the reference is constructed as the other similarity invariant to bridge the laser point between the similarity space and the Euclidean space. Section 4 presents the experiments that prove the validity of the method. Section 5 discusses the influence of the number of orthogonal lines and of image noise. Section 6 summarizes the paper.

2. Related work

The metric properties of the camera are important factors for vision-based metrology and should be calibrated before the measurement. The geometrical elements of points, lines, and conics have been used intensively for camera calibration. Z. Zhang [14] proposes a classical calibration method based on the points of a 2D calibration target. The method employs the planarity of the 2D reference to eliminate the z coordinate of the points on the target and calculates the rotation and translation parameters from the orthogonality of the unit vectors. The calibration is further simplified in [15] with points of known coordinates on a 1D reference: the length of the 1D reference and the positions of the points contribute the basic equations for estimating the intrinsic parameters, and the 1D reference is convenient for the global calibration of a multiple-view system. Y. Bok et al. [16] propose a method to calibrate micro-lens-based cameras from line features. The projection model of the camera is transformed into a homogeneous group of linear equations represented by line features, from which the intrinsic and extrinsic parameters are calculated. A line feature is more impervious to image noise than a point feature; however, line extraction is time-consuming, as it is often performed by the Hough transform [17]. X. Q. Meng and Z. Y. Hu [18] outline a camera calibration method with a circle and lines radiating from its center. As the circle and the line at infinity intersect at the two circular points, the projections of these points are derived from the image, and the camera parameters are then solved from the ICDCP by the Cholesky decomposition. J. S. Kim et al. [19] investigate the geometrical and algebraic characteristics of the projection between concentric circles and the image: the conic dual to the circle centers and the CDCP are generated from a rank-1 matrix and a rank-2 matrix, respectively, and the camera calibration is performed with the projected circular points.

Active vision employs an active laser to mark a smooth object surface. Y. Bok et al. [20] present a method to reconstruct large-scale structures with a hand-held fusion-sensor system, applicable to the digital acquisition of heritage object shapes. The fusion-sensor system consists of two laser sensors and four cameras; the poses of the laser sensors are calibrated exactly relative to the four non-overlapping cameras, and the system motion is estimated from the camera observations, so large structures can be scanned with convenience and flexibility. T. T. Nguyen et al. [21] propose a camera-structured-light reconstruction system for plants that integrates a hardware structure and a software algorithm. Ten cameras are installed on an arc framework to obtain images of the target object in different poses. A random dot texture is generated efficiently by focusing a slide with the dots, illuminated by a high-power LED, onto the plant with a telephoto lens. The measured plant is placed on a turntable to provide different view angles, and multiple pairs of stereo images captured from different perspectives reconstruct a 3D model of the entire plant without destroying any part of it. The method is also applicable to other objects with complex structures. M. Y. Kim et al. [22] discuss a 3D sensing system with adjustable zoom based on stereo vision and structured light. The system consists of stereo cameras, an illumination projector, and zoom lenses; the lenses are zoomed according to the distance between the object and the cameras, and normalized cross-correlation is applied to the stereo image pairs to construct the linear equations. The advantage of the system is that the camera lenses can be zoomed to collect high-resolution data from a distant object. A general object is represented in the 3D Euclidean space. G. Xu et al. [23] provide an active-vision method with a 2D reference and a non-coplanar laser plane. Its advantage is that the installation position and posture between the 2D reference and the laser plane are unrestricted; however, an additional calibration process must be performed to solve the relative coordinates of the laser plane.

3. Reconstruction principle

3.1 Instrumentation and model

The instrumentation of the reconstruction method includes a camera and a laser generator mounted on a 2D reference with a perpendicular-line pattern, as shown in Fig. 1. The planar laser lies on the same plane as the reference. The perpendicular lines, which generate the ICDCP, are identified from the right triangles on the reference. According to the ICDCP, the projection space is transformed to the similarity space. In the model, the frame of the camera coordinates (FCC), the frame of the reference coordinates (FRC), and the frame of the image coordinates (FIC) are OC-XCYCZC, OR-XRYRZR, and OI-XIYIZI, respectively. The reference and the laser stripe at the intersection of the object and the laser plane are both captured by the camera for the reconstruction.

Fig. 1. Instrumentation, coordinate system definition, geometry invariant generation and point reconstruction based on the planar laser and the metrical rectification with the ICDCP in the similarity space.

The reconstruction process based on the planar laser and the metrical rectification with the ICDCP in the similarity space is introduced in Fig. 2. Firstly, the isosceles-right-angle pattern on the reference provides a multiple-perpendicular-line structure; the images of the perpendicular lines are extracted and employed to solve the ICDCP. Secondly, the projection matrix that is independent of the similarity transform is solved. Finally, because the ratio of line segments is a similarity invariant, the feature points on the reference and the laser point compose the solution equations, which are improved by the special point selection to avoid nonlinear equations, and the laser point is reconstructed from these equations in the FRC.

Fig. 2. The reconstruction process based on the planar laser and the metrical rectification with the ICDCP in the similarity space.

3.2 Similarity rectification

The conic dual to the circular points is a degenerate conic of rank 2 on the plane at infinity. Therefore, in the Euclidean space, the CDCP $G_\infty^*$ is represented by the two lines in Fig. 1. The conic $G_\infty^*$ on the plane at infinity is projected to the image plane, and its image, the ICDCP, is denoted by $G_\infty^{*\mathrm{I}}$. The transform from the projection space to the similarity space can be deduced from the CDCP $G_\infty^*$ and the ICDCP $G_\infty^{*\mathrm{I}}$. The rectification method [13] of the similarity space employing perpendicular lines is adopted to obtain the conic image $G_\infty^{*\mathrm{I}}$. In the image, the perpendicular lines satisfy

$$\mathbf{l}_i^\mathrm{T}\, G_\infty^{*\mathrm{I}}\, \mathbf{m}_i = 0 \tag{1}$$

where $\mathbf{l}_i, \mathbf{m}_i$ ($i = 1, 2, \ldots, n$, $n \ge 5$) are the images of the perpendicular lines $\mathbf{l}_i^\mathrm{R}, \mathbf{m}_i^\mathrm{R}$ on the reference, and

$$G_\infty^{*\mathrm{I}} = \begin{pmatrix} a_1 & a_2/2 & a_4/2 \\ a_2/2 & a_3 & a_5/2 \\ a_4/2 & a_5/2 & a_6 \end{pmatrix}$$

can be derived by the SVD method [24].
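As an illustration of this step, the following is a minimal numpy sketch (the function name and the sign normalization are ours, not from the paper): every perpendicular pair contributes one homogeneous linear equation in $(a_1, \ldots, a_6)$, and the stacked system is solved by the SVD.

```python
import numpy as np

def estimate_icdcp(lines_l, lines_m):
    """Estimate the ICDCP matrix of Eq. (1) from n >= 5 pairs of image
    lines (l_i, m_i), the projections of perpendicular reference lines.
    Lines are homogeneous 3-vectors; each pair gives one linear
    constraint l^T G m = 0 on the parameters (a1, ..., a6)."""
    rows = []
    for l, m in zip(lines_l, lines_m):
        l1, l2, l3 = l
        m1, m2, m3 = m
        rows.append([l1 * m1,
                     (l1 * m2 + l2 * m1) / 2.0,
                     l2 * m2,
                     (l1 * m3 + l3 * m1) / 2.0,
                     (l2 * m3 + l3 * m2) / 2.0,
                     l3 * m3])
    # Least-squares null vector: the right singular vector of the
    # smallest singular value, as in the SVD method of [24].
    _, _, vt = np.linalg.svd(np.asarray(rows))
    a1, a2, a3, a4, a5, a6 = vt[-1]
    G = np.array([[a1,       a2 / 2.0, a4 / 2.0],
                  [a2 / 2.0, a3,       a5 / 2.0],
                  [a4 / 2.0, a5 / 2.0, a6      ]])
    if G[0, 0] < 0:      # fix the overall sign so that the KK^T block
        G = -G           # can be Cholesky-decomposed later (a1 > 0)
    return G
```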

A general projection matrix can be decomposed into the product of a simple projection matrix $H_\mathrm{P}$, an affine matrix $H_\mathrm{A}$, and a similarity matrix. Moreover, the CDCP is a similarity invariant. Thus, the general projection of the conic is represented by

$$G_\infty^{*\mathrm{I}} = (H_\mathrm{P}H_\mathrm{A})\, G_\infty^*\, (H_\mathrm{A}^\mathrm{T}H_\mathrm{P}^\mathrm{T}) \tag{2}$$
where $H_\mathrm{A} = \begin{pmatrix} K & \mathbf{0} \\ \mathbf{0}^\mathrm{T} & 1 \end{pmatrix}$, $K$ is a 2×2 lower triangular matrix, $H_\mathrm{P} = \begin{pmatrix} I & \mathbf{0} \\ \mathbf{w}^\mathrm{T} & 1 \end{pmatrix}$, $\mathbf{w} = (w_1, w_2)^\mathrm{T}$, and $G_\infty^* = \mathrm{diag}(1, 1, 0)$.

According to Eq. (2), and similar to the solution of the projection matrix for the image of the absolute conic in [13], the projection matrix $H_\mathrm{P}H_\mathrm{A}$ is expressed as

$$V = H_\mathrm{P}H_\mathrm{A} = \begin{pmatrix} K & \mathbf{0} \\ \mathbf{w}^\mathrm{T}K & 1 \end{pmatrix} \tag{3}$$

Combining Eqs. (2) and (3) and substituting the result $G_\infty^{*\mathrm{I}}$ of Eq. (1) gives

$$\begin{pmatrix} KK^\mathrm{T} & KK^\mathrm{T}\mathbf{w} \\ \mathbf{w}^\mathrm{T}KK^\mathrm{T} & \mathbf{w}^\mathrm{T}KK^\mathrm{T}\mathbf{w} \end{pmatrix} = \begin{pmatrix} a_1 & a_2/2 & a_4/2 \\ a_2/2 & a_3 & a_5/2 \\ a_4/2 & a_5/2 & a_6 \end{pmatrix} \tag{4}$$

Equation (4) gives the relationships

$$KK^\mathrm{T} = \begin{pmatrix} a_1 & a_2/2 \\ a_2/2 & a_3 \end{pmatrix} \tag{5}$$

$$KK^\mathrm{T}\mathbf{w} = \begin{pmatrix} a_4/2 \\ a_5/2 \end{pmatrix} \tag{6}$$
where $K$ is derived from the Cholesky decomposition of the right-hand side of Eq. (5) and given by

$$K = \begin{pmatrix} \sqrt{a_1} & 0 \\ a_2/(2\sqrt{a_1}) & \sqrt{a_3 - a_2^2/(4a_1)} \end{pmatrix} \tag{7}$$

Combining Eqs. (6) and (7), then

$$\mathbf{w} = \begin{pmatrix} (2a_3a_4 - a_5a_2)/(4a_1a_3 - a_2^2) \\ (2a_1a_5 - a_2a_4)/(4a_1a_3 - a_2^2) \end{pmatrix} \tag{8}$$

Substituting Eqs. (7) and (8) into Eq. (3), the projection matrix that is independent of the similarity transform is

$$V = \begin{pmatrix} \sqrt{a_1} & 0 & 0 \\ a_2/(2\sqrt{a_1}) & \sqrt{a_3 - a_2^2/(4a_1)} & 0 \\ a_4/(2\sqrt{a_1}) & (2a_1a_5 - a_2a_4)\big/\big(2\sqrt{4a_1^2a_3 - a_2^2a_1}\big) & 1 \end{pmatrix} \tag{9}$$

As the conic $G_\infty^*$ is a similarity invariant, $V$ serves as the transform bridge from the projection space to the similarity space.
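Continuing the sketch, the rectifying matrix of Eq. (9) can be assembled directly from the six conic parameters. The code assumes the estimated $KK^\mathrm{T}$ block is positive definite ($a_1 > 0$ and $4a_1a_3 > a_2^2$), which the sign normalization above encourages but noisy data may violate.

```python
import numpy as np

def similarity_rectification_matrix(G):
    """Assemble V of Eq. (9) from the ICDCP: K is the lower-triangular
    Cholesky factor of the KK^T block (Eqs. (5), (7)) and w follows
    from Eq. (8). V maps similarity-space points to the image, so
    points are rectified with its inverse."""
    a1, a2, a3 = G[0, 0], 2.0 * G[0, 1], G[1, 1]
    a4, a5 = 2.0 * G[0, 2], 2.0 * G[1, 2]
    K = np.array([[np.sqrt(a1), 0.0],
                  [a2 / (2.0 * np.sqrt(a1)),
                   np.sqrt(a3 - a2 ** 2 / (4.0 * a1))]])   # Eq. (7)
    den = 4.0 * a1 * a3 - a2 ** 2
    w = np.array([(2.0 * a3 * a4 - a5 * a2) / den,
                  (2.0 * a1 * a5 - a2 * a4) / den])        # Eq. (8)
    V = np.eye(3)
    V[:2, :2] = K
    V[2, :2] = w @ K       # bottom row w^T K, as in Eq. (3)
    return V
```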

3.3 Point reconstruction by the ratio of line segments

In the last section, the transform matrix $V$ from the projection space of the image to the similarity space was determined from the perpendicular lines on the reference, which is coplanar with the laser. Therefore, all coordinates in the FIC can be transformed to the similarity space. As the ratio of line segments is a similarity invariant [13], the ratio in the Euclidean space equals the ratio in the similarity space. Hence, the laser point to be reconstructed in the Euclidean space can be determined directly in the similarity space.

The reconstruction begins with the transform from the image point, i.e., the point in the projection space, to the point in the similarity space. In Fig. 1, the planar laser intersects the object in a light stripe, and $\mathbf{X}_j^\mathrm{R}$ is a spatial point on the stripe. The image of $\mathbf{X}_j^\mathrm{R}$ is $\mathbf{x}_j = (x_j, y_j, 1)^\mathrm{T}$, and the images of the reference points are $\mathbf{A}_q^\mathrm{I} = (x_{A,q}^\mathrm{I}, y_{A,q}^\mathrm{I}, 1)^\mathrm{T}$ and $\mathbf{B}_q^\mathrm{I} = (x_{B,q}^\mathrm{I}, y_{B,q}^\mathrm{I}, 1)^\mathrm{T}$ ($q = 1, 2, \ldots, m-1$, $m \ge 2$). The above points are transformed to the similarity space by

$$\mathbf{A}_q^\mathrm{S} = V^{-1}\mathbf{A}_q^\mathrm{I} \tag{10}$$

$$\mathbf{B}_q^\mathrm{S} = V^{-1}\mathbf{B}_q^\mathrm{I} \tag{11}$$

$$\mathbf{X}_j^\mathrm{S} = V^{-1}\mathbf{x}_j \tag{12}$$

where $\mathbf{A}_q^\mathrm{S} = (x_{A,q}^\mathrm{S}, y_{A,q}^\mathrm{S}, 1)^\mathrm{T}$, $\mathbf{B}_q^\mathrm{S} = (x_{B,q}^\mathrm{S}, y_{B,q}^\mathrm{S}, 1)^\mathrm{T}$, and $\mathbf{X}_j^\mathrm{S} = (x_j^\mathrm{S}, y_j^\mathrm{S}, 1)^\mathrm{T}$ are the corresponding coordinates rectified to the similarity space.
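In the same illustrative code, the rectification of Eqs. (10)-(12) is one matrix-vector product per point followed by a homogeneous renormalization:

```python
import numpy as np

def to_similarity_space(V, points_img):
    """Rectify an array of homogeneous image points by Eqs. (10)-(12):
    multiply by V^{-1}, then rescale so the third coordinate is 1."""
    pts = (np.linalg.inv(V) @ np.asarray(points_img, float).T).T
    return pts / pts[:, 2:3]
```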

As the ratio of line segments is a similarity invariant [13], the ratio for the recovery is constructed as

$$\frac{\mathbf{X}_{j,q}^\mathrm{R}\mathbf{B}_q^\mathrm{R}}{\mathbf{A}_q^\mathrm{R}\mathbf{B}_q^\mathrm{R}} = \frac{\mathbf{X}_j^\mathrm{S}\mathbf{B}_q^\mathrm{S}}{\mathbf{A}_q^\mathrm{S}\mathbf{B}_q^\mathrm{S}} = k_{j,q} \tag{13}$$

where $\mathbf{A}_q^\mathrm{R} = (x_{A,q}^\mathrm{R}, y_{A,q}^\mathrm{R}, 1)^\mathrm{T}$, $\mathbf{B}_q^\mathrm{R} = (x_{B,q}^\mathrm{R}, y_{B,q}^\mathrm{R}, 1)^\mathrm{T}$, and $\mathbf{X}_{j,q}^\mathrm{R} = (x_{j,q}^\mathrm{R}, y_{j,q}^\mathrm{R}, 1)^\mathrm{T}$ are the corresponding coordinates in the Euclidean space on the reference plane, and $k_{j,q}$ is the ratio of the line segments. $\mathbf{A}_q^\mathrm{R}$ and $\mathbf{B}_q^\mathrm{R}$ are known from the configuration of the reference.

In order to avoid knotty nonlinear equations, $\mathbf{B}_q^\mathrm{R}$ is positioned on the $O_\mathrm{R}$-$Y_\mathrm{R}$ axis. Letting $h_q = \mathbf{A}_q^\mathrm{R}\mathbf{B}_q^\mathrm{R}$, then

$$(x_{j,q}^\mathrm{R})^2 + (y_{B,q}^\mathrm{R} - y_{j,q}^\mathrm{R})^2 = (k_{j,q}h_q)^2 \tag{14}$$

For another pair of points on the reference, $\mathbf{A}_{q+1}^\mathrm{R}, \mathbf{B}_{q+1}^\mathrm{R}$,

$$(x_{j,q}^\mathrm{R})^2 + (y_{B,q+1}^\mathrm{R} - y_{j,q}^\mathrm{R})^2 = (k_{j,q+1}h_{q+1})^2 \tag{15}$$

Subtracting Eq. (15) from Eq. (14) eliminates $x_{j,q}^\mathrm{R}$ and yields

$$y_{j,q}^\mathrm{R} = \frac{(k_{j,q+1}h_{q+1})^2 - (k_{j,q}h_q)^2 + (y_{B,q}^\mathrm{R})^2 - (y_{B,q+1}^\mathrm{R})^2}{2(y_{B,q}^\mathrm{R} - y_{B,q+1}^\mathrm{R})} \tag{16}$$

$$y_j^\mathrm{R} = \frac{1}{m-1}\sum_{q=1}^{m-1} y_{j,q}^\mathrm{R} \tag{17}$$

For the $m-1$ pairs of points on the reference, the average of $y_{j,q}^\mathrm{R}$ is chosen as the final coordinate $y_j^\mathrm{R}$. According to Eq. (14),

$$x_{j,q}^\mathrm{R} = \sqrt{(k_{j,q}h_q)^2 - (y_{B,q}^\mathrm{R} - y_{j,q}^\mathrm{R})^2} \tag{18}$$

$$x_j^\mathrm{R} = \frac{1}{m-1}\sum_{q=1}^{m-1} x_{j,q}^\mathrm{R} \tag{19}$$
From Eqs. (17) and (19), the laser point is transformed to the Euclidean space in the FRC. Zhang's well-known method [14] can provide the reference-camera homography for the transform from the FRC to the FCC and for a non-zero third degree of freedom. Here a ruler is chosen for the third degree of freedom to avoid duplicating the reference-camera homography; thus, the point is represented in the FRC instead of the FCC.
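Below is a sketch of the full ratio-based solution of Eqs. (13)-(19) under the point-selection assumption above (every $\mathbf{B}_q^\mathrm{R}$ on the $O_\mathrm{R}$-$Y_\mathrm{R}$ axis with distinct y-coordinates); the consecutive pairing for Eq. (16) and the reuse of the averaged $y_j^\mathrm{R}$ in Eq. (18) are our simplifications, not prescriptions from the paper.

```python
import numpy as np

def reconstruct_laser_point(Xs, As, Bs, A_ref, B_ref):
    """Recover a laser point in the FRC from its similarity-space
    coordinates Xs and m-1 reference point pairs. As, Bs are the
    rectified reference points; A_ref, B_ref are their known Euclidean
    positions, all points homogeneous 3-vectors."""
    Xs, As, Bs = np.asarray(Xs), np.asarray(As), np.asarray(Bs)
    A_ref, B_ref = np.asarray(A_ref), np.asarray(B_ref)
    # Similarity-invariant ratios of Eq. (13).
    k = (np.linalg.norm(Xs[:2] - Bs[:, :2], axis=1)
         / np.linalg.norm(As[:, :2] - Bs[:, :2], axis=1))
    h = np.linalg.norm(A_ref[:, :2] - B_ref[:, :2], axis=1)
    kh2 = (k * h) ** 2
    yB = B_ref[:, 1]
    # y from consecutive pairs (q, q+1), Eq. (16), averaged as in Eq. (17).
    y = np.mean((kh2[1:] - kh2[:-1] + yB[:-1] ** 2 - yB[1:] ** 2)
                / (2.0 * (yB[:-1] - yB[1:])))
    # x from Eq. (18), taking the positive root as in the paper and
    # averaging as in Eq. (19); the averaged y is reused for every pair.
    x = np.mean(np.sqrt(np.clip(kh2 - (yB - y) ** 2, 0.0, None)))
    return x, y
```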

4. Experiments

The experiments are performed to investigate the precision of the direct reconstruction based on the planar laser and the metrical rectification with the ICDCP in the similarity space. Figures 3(a) and 3(b) show the instruments for the reconstruction experiments and the verification experiments. The instruments include a 2D reference, a planar laser generator, a benchmark plate, a ruler, a computer, a camera that acquires images with a resolution of 1280×960, and the objects to be reconstructed. To further verify the performance of the method, vision reconstruction based on a planar laser with a nonrestrictive installation pose relative to the 2D reference [23] and Zhang's method [14] are selected for the comparison experiments. The experiment process is shown in Fig. 4. The camera is an industrial camera with a narrow-band spectral filter. The laser wavelength emitted by the generator matches the passband of the narrow-band filter on the camera lens, so the full laser spectrum and little natural light pass through the filter and are captured by the camera. The interference of environmental light with the imaging process is thereby effectively reduced, and the measurement accuracy is improved. First, the camera captures the image with the laser stripe, and the stripe center is extracted by C. Steger's algorithm [25]: the Hessian matrix is calculated for each pixel, and its eigenvalues are examined to locate the stripe centers. A point with one eigenvalue approximately zero and the other far below zero is identified as a pixel-level stripe center, and a Taylor expansion at the pixel-level center refines it to sub-pixel accuracy (a simplified sketch follows below). Second, the 2D reference carries a pattern containing many orthogonal lines and feature points, which are adopted to solve the ICDCP and the geometric invariant. Theoretically, five pairs of orthogonal lines in the image suffice to determine the ICDCP [13]; nevertheless, for accuracy, twenty pairs of orthogonal lines are selected in the experiments. According to the invariability of the CDCP and the transform matrix of Eq. (9), the laser points in the projection space are rectified to the similarity space. Finally, on the basis of nine pairs of $\mathbf{A}_q^\mathrm{R}, \mathbf{B}_q^\mathrm{R}$ on the reference, the ratio of the line segments is formed from the two reference points and the laser point in the Euclidean space and the similarity space, and by the invariability of this ratio the laser point $\mathbf{X}_j^\mathrm{R}$ in the Euclidean space is solved by Eqs. (17) and (19). In addition, the planar laser generator is mounted on the 2D reference, and the laser must be adjusted onto the same plane as the 2D reference before the test.
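For concreteness, here is a much-simplified sketch of the Hessian-based stripe-center extraction described above. Steger's published algorithm [25] additionally links the centers into lines, and the smoothing scale and thresholds below are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stripe_centers(img, sigma=2.0, ridge_thresh=-1.0):
    """Hessian-based stripe-center detection in the spirit of [25]:
    pixels whose strongly negative Hessian eigenvalue marks a bright
    ridge are refined to sub-pixel accuracy by a second-order Taylor
    expansion along the eigenvector n normal to the stripe."""
    img = np.asarray(img, float)
    ry  = gaussian_filter(img, sigma, order=(1, 0))   # dI/dy
    rx  = gaussian_filter(img, sigma, order=(0, 1))   # dI/dx
    ryy = gaussian_filter(img, sigma, order=(2, 0))
    rxx = gaussian_filter(img, sigma, order=(0, 2))
    rxy = gaussian_filter(img, sigma, order=(1, 1))
    centers = []
    for y, x in zip(*np.nonzero(np.minimum(rxx, ryy) < ridge_thresh)):
        H = np.array([[ryy[y, x], rxy[y, x]],
                      [rxy[y, x], rxx[y, x]]])
        vals, vecs = np.linalg.eigh(H)      # ascending eigenvalues
        # Ridge test: one eigenvalue far below zero, the other near zero.
        if vals[0] >= ridge_thresh or abs(vals[1]) > 0.1 * abs(vals[0]):
            continue
        ny, nx = vecs[:, 0]                 # direction across the stripe
        # Sub-pixel offset t along n from the Taylor expansion; the
        # quadratic form in the denominator equals vals[0], nonzero here.
        t = -(ry[y, x] * ny + rx[y, x] * nx) / (
            ryy[y, x] * ny**2 + 2 * rxy[y, x] * nx * ny + rxx[y, x] * nx**2)
        if abs(t * nx) <= 0.5 and abs(t * ny) <= 0.5:
            centers.append((x + t * nx, y + t * ny))
    return np.array(centers)
```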

Fig. 3. The experiments of the object reconstruction and the accuracy verification by the benchmark plate. (a) reconstruction test, (b) verification test.

Fig. 4. The experiments of the comparison methods. (a), (c) calibration of the methods in Refs. [23,14], (b), (d) error estimation of the methods in Refs. [23,14].

In the object-reconstruction experiments, four objects are recovered, and the reconstruction results are shown in Fig. 5. Figures 5(a)–5(d) are the tested objects, Figs. 5(e)–5(h) are the image-processing results of the laser stripes on the tested objects, and Figs. 5(i)–5(l) are the reconstructed points of the tested objects. The experiment results show that the reconstructed contours profile the actual object shapes well.

Fig. 5. The reconstructions of four objects using the geometry invariant generated from the planar laser and the metrical rectification in the similarity space. (a) a vacuum bottle, (b) a cubic box, (c) a tea cup, (d) a flat surface, (e)-(h) the image-processing results of (a)-(d), (i)-(l) the reconstructions of (a)-(d).

In the verification experiments, the benchmark plate is employed as the length ground truth. Different distances of 30, 60, 90, and 120 mm are selected as the reference lengths on the benchmark plate and reconstructed in the 2D reference coordinate system. The accuracy of the method is tested by the discrepancy between the reconstructed distance and the ground-truth value on the benchmark plate. Furthermore, according to the field of view and the triangulation geometry among the camera, the laser generator, and the measured object, the reconstruction errors are evaluated at distances of 600, 650, 700, and 750 mm from the camera to the 2D reference and distances of 100, 200, 300, and 400 mm from the benchmark to the laser generator. The absolute error, i.e., the absolute discrepancy between the reconstructed length and the ground truth, is chosen to describe the reconstruction error quantitatively. The CDCP consists of imaginary points; the camera homography therefore transforms the imaginary points on the CDCP to imaginary points on the ICDCP. The parameters of the ICDCP for the different camera-reference distances are summarized in Table 1.


Table 1. The parameters of the ICDCP.

First, the reference-camera distance is set to 600 mm. Figure 6(a) describes the absolute errors of the reconstruction when the benchmark-generator distance is 100 mm: for the reference lengths of 30, 60, 90, and 120 mm, the absolute errors are 0.51, 0.55, 0.87, and 1.28 mm, respectively. In Fig. 6(b), the corresponding absolute errors obtained by the geometric invariant and the ICDCP are 0.56, 0.60, 0.94, and 1.31 mm. The error data are summarized in Table 2.

Fig. 6. The reconstruction errors using the geometry invariant generated from the planar laser and the metrical rectification in the similarity space. The benchmark-generator distances are from 100 mm to 400 mm with an interval of 100 mm. (a)-(d), (e)-(h), (i)-(l), (m)-(p) the camera-reference distances are 600, 650, 700, 750 mm, respectively.


Table 2. The error statistics of the proposed method and comparison methods. M1 is the proposed method. M2 is the reconstruction method by the planar laser with the nonrestrictive installation pose relative to the 2D reference. M3 is the reconstruction method with the homography solution. D1 is the reference-camera distance. D2 is the benchmark-generator distance. L is the benchmark length.

The errors of the next three experiment groups are presented in Figs. 6(e)–6(p), where the reference-camera distances are 650, 700, and 750 mm, respectively. In the second group, the mean absolute errors are 1.09, 1.19, 1.26, and 1.36 mm, whereas the error averages of the first group are 0.80, 0.85, 0.97, and 1.14 mm. In the third group, with a reference-camera distance of 700 mm, the error means are 1.58, 1.74, 1.81, and 1.96 mm. The final tests are performed with a reference-camera distance of 750 mm, where the average errors are 1.76, 1.93, 2.18, and 2.72 mm, respectively.

Figure 7 illustrates the reconstruction errors of the above four groups of experiments. Comparing Fig. 7(a) to Fig. 7(d), the plotted error balls rise, indicating that the error increases with growing benchmark-camera distance. Moreover, the reconstruction error evidently rises when the benchmark-generator distance increases from 100 mm to 400 mm. For fixed benchmark-camera and benchmark-generator distances, the reconstruction error clearly climbs with increasing benchmark length. The error is lowest when the benchmark-camera distance is 600 mm, the benchmark-generator distance is 100 mm, and the benchmark length is 30 mm.

Fig. 7. The error means of the reconstruction method using the geometry invariant generated from the planar laser and the metrical rectification in the similarity space. (a)-(d) the reference-camera distance is from 600 mm to 750 mm with an interval of 50 mm.

In the comparison experiments, the reconstruction errors of the three methods are obtained in the same experimental environment, and the comparison results are summarized in Table 2. The error comparisons show that the three methods follow similar variation trends. However, the first comparison method requires an additional cylindrical reference to calibrate the camera-laser system. The average error of the first comparison method is 2.20 mm, that of the second comparison method is 2.21 mm, and that of the proposed method is 1.52 mm. Therefore, the proposed method outperforms the comparison methods in both reconstruction accuracy and calibration process.

5. Discussion

In the tests, the metrical rectification is performed on the basis of the orthogonal lines on the reference. Although a larger number of orthogonal lines enhances the reconstruction accuracy, it is also more time-consuming. To explore the impact of the number of orthogonal lines on the results and to confirm a moderate number of line pairs, a series of investigation experiments is carried out. The results are shown in Fig. 8. As the number of orthogonal line pairs rises from 12 to 20 in steps of 1 pair, the reconstruction error declines with some fluctuation; once the orthogonal lines number no fewer than 16 pairs, the reconstruction error tends to be stable. Theoretically, only five pairs of orthogonal lines are indispensable to generate the ICDCP; nevertheless, 16 or more pairs are needed in practice to solve the conic image closer to its accurate value. Furthermore, as the number of pairs grows from 12 to 20, the averages of the reconstruction errors are 15.58, 5.09, 3.74, 3.26, 1.81, 1.67, 1.65, 1.64, and 1.52 mm, and the relative errors are 20.75%, 7.16%, 5.07%, 4.64%, 2.63%, 2.47%, 2.44%, 2.29%, and 1.98%, respectively. The relative error falls below 2% only for 20 pairs of orthogonal lines, which provides a selection rule for the number of orthogonal lines. Therefore, 20 pairs of lines are selected in the reconstruction and verification experiments to achieve accurate results.

Fig. 8. The reconstruction errors influenced by the pair number of the orthogonal lines under different distances. The benchmark-generator distances are from 100 mm to 400 mm with an interval of 100 mm. (a)-(d), (e)-(h), (i)-(l), (m)-(p) the camera-reference distances are 600, 650, 700, 750 mm, respectively.

As the image noise influences the coordinate extraction of the 9 pairs of image feature points $\mathbf{A}_q^\mathrm{I}$ and $\mathbf{B}_q^\mathrm{I}$, Gaussian noise with a mean of 0 and a standard deviation from 0 to 1 pixel, in steps of 0.1 pixel, is added to the image coordinates of the points on the reference. The error means of the reconstruction tests after adding the Gaussian noise are given in Table 3 and Fig. 9. It is clear from Fig. 9 that the mean error increases with the growing noise level. When the camera-reference distances are 600, 650, 700, and 750 mm, the overall error averages are 0.94, 1.22, 1.77, and 2.15 mm, respectively, while the error averages over the 10 non-zero noise levels (0.1 to 1 pixel with a 0.1-pixel interval) are 1.28, 1.59, 2.29, and 2.55 mm. For a noise level of 0.5 pixel, the errors are 1.21, 1.47, 2.31, and 2.57 mm, and the corresponding relative variations are 1.13%, 0.62%, 0.58%, and 0.33%. Thus, as the noise level goes up, the error means under the different experiment conditions fluctuate slowly upward; the overall error increases only slightly, showing a moderate anti-noise ability.
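This noise experiment can be reproduced with a small harness of the following form (a sketch: `recon_len` stands for the whole pipeline above and is an assumed interface, the inputs are arrays of homogeneous image points, and the trial count is our assumption):

```python
import numpy as np

def noise_sweep(recon_len, A_img, B_img, x_img, truth, trials=100):
    """Rerun the pipeline under zero-mean Gaussian pixel noise from
    0 to 1 pixel in 0.1-pixel steps (the sweep of Table 3) and return
    the mean absolute length error per noise level."""
    rng = np.random.default_rng(0)
    def jitter(pts, s):
        out = np.asarray(pts, float).copy()
        out[:, :2] += rng.normal(0.0, s, out[:, :2].shape)
        return out
    errors = {}
    for s in np.round(np.arange(0.0, 1.01, 0.1), 1):
        trial = [abs(recon_len(jitter(A_img, s), jitter(B_img, s),
                               jitter(x_img, s)) - truth)
                 for _ in range(trials)]
        errors[float(s)] = float(np.mean(trial))
    return errors
```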

Fig. 9. The impact of the image noise on the error means of the reconstruction method. (a)-(d) the reference-camera distances are from 600 mm to 750 mm with an interval of 50 mm.


Table 3. The statistics of error means of the reconstruction based on the geometry invariant generated from the planar laser and the metrical rectification influenced by the Gaussian noise.

6. Summary

A method to reconstruct the laser projection point is proposed by adopting two invariants in the similarity space. The laser plane is fixed on the same plane as the 2D reference. The image with the laser projection point is first rectified to the similarity space by the invariability of the CDCP; its image, the ICDCP, is solved from the isosceles-right-angle pattern on the reference, whose lines in two pairs of perpendicular directions generate the ICDCP and whose reference points serve the ratio of the line segments. Then, the laser projection point and the feature points on the reference compose the second similarity invariant, the ratio of the line segments. The solution model of the laser point is constructed from this ratio invariant and improved by the special point selection to avoid nonlinear equations. The reconstruction is performed by solving for the laser point from the invariant in the Euclidean space and the similarity space. The overall average reconstruction error is 1.52 mm for camera-reference distances from 600 mm to 750 mm and benchmark-generator distances from 100 mm to 400 mm. The experiments reveal that the active-vision-based recovery with the metrical rectification and the ICDCP in the similarity space is a promising and adaptable reconstruction method with strong anti-noise ability in the field of surface measurement.

Funding

National Natural Science Foundation of China (51205164, 51478204, 51875247); Natural Science Foundation of Jilin Province (20170101214JC).

Disclosures

The authors declare no conflicts of interest.

References

1. Y. Ko and S. Yi, “Development of color 3D scanner using laser structured-light imaging method,” Curr. Opt. Photon. 2(6), 554–562 (2018).

2. A. Glowacz and Z. Glowacz, “Diagnostics of stator faults of the single-phase induction motor using thermal images, moasos and selected classifiers,” Measurement 93, 86–93 (2016).

3. T. H. Lin, “Automatic 3D color shape measurement system based on a stereo camera,” Appl. Opt. 59(7), 2086–2096 (2020).

4. A. Costanzo, M. Minasi, G. Casula, M. Musacchio, and M. F. Buongiorno, “Combined use of terrestrial laser scanning and IR thermography applied to a historical building,” Sensors 15(1), 194–213 (2014).

5. G. Xu, J. Yuan, X. Li, and J. Su, “Optimization reconstruction method of object profile using flexible laser plane and bi-planar references,” Sci. Rep. 8(1), 1526 (2018).

6. G. Zhan, H. Tang, K. Zhong, Z. Li, Y. Shi, and C. Wang, “High-speed FPGA-based phase measuring profilometry architecture,” Opt. Express 25(9), 10553–10564 (2017).

7. Z. G. Ren, J. R. Liao, and L. L. Cai, “Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision,” Appl. Opt. 49(10), 1789–1801 (2010).

8. J. S. Hyun and S. Zhang, “Influence of projector pixel shape on ultrahigh-resolution 3D shape measurement,” Opt. Express 28(7), 9510–9520 (2020).

9. T. T. Wu and J. Y. Qu, “Optical imaging for medical diagnosis based on active stereo vision and motion tracking,” Opt. Express 15(16), 10421–10426 (2007).

10. V. I. Venzel’, M. F. Danilov, A. A. Savel’eva, and A. A. Semenov, “Use of coordinate measuring machines for the assembly of axisymmetric two-mirror objectives with aspherical mirrors,” J. Opt. Technol. 86(2), 119–123 (2019).

11. A. Glowacz, A. Glowacz, and Z. Glowacz, “Recognition of thermal images of direct current motor with application of area perimeter vector and Bayes classifier,” Meas. Sci. Rev. 15(3), 119–126 (2015).

12. G. Xu, J. Yuan, X. T. Li, and J. Su, “Profile reconstruction method adopting parameterized re-projection errors of laser lines generated from bi-cuboid references,” Opt. Express 25(24), 29746–29760 (2017).

13. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University Press, 2003).

14. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000).

15. Z. Y. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Machine Intell. 26(7), 892–899 (2004).

16. Y. Bok, H. G. Jeon, and I. S. Kweon, “Geometric calibration of micro-lens-based light field cameras using line features,” IEEE Trans. Pattern Anal. Machine Intell. 39(2), 287–300 (2017).

17. P. Mukhopadhyay and B. B. Chaudhuri, “A survey of Hough transform,” Pattern Recogn. 48(3), 993–1010 (2015).

18. X. Q. Meng and Z. Y. Hu, “A new easy camera calibration technique based on circular points,” Pattern Recogn. 36(5), 1155–1164 (2003).

19. J. S. Kim, P. Gurdjos, and I. S. Kweon, “Geometric and algebraic constraints of projected concentric circles and their applications to camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 27(4), 637–642 (2005).

20. Y. Bok, Y. Jeong, D. G. Choi, and I. S. Kweon, “Capturing village-level heritages with a hand-held camera-laser fusion sensor,” Int. J. Comput. Vis. 94(1), 36–53 (2011).

21. T. T. Nguyen, D. C. Slaughter, N. Max, J. N. Maloof, and N. Sinha, “Structured light-based 3D reconstruction system for plants,” Sensors 15(8), 18587–18612 (2015).

22. M. Y. Kim, S. M. Ayaz, J. Park, and Y. Roh, “Adaptive 3D sensing system based on variable magnification using stereo vision and structured light,” Opt. Lasers Eng. 55, 113–127 (2014).

23. G. Xu, Y. P. Zhu, X. T. Li, and R. Chen, “Vision reconstruction based on planar laser with nonrestrictive installation position and posture relative to 2D reference,” Opt. Express 27(26), 38567–38578 (2019).

24. R. A. Horn and C. R. Johnson, Matrix Analysis (Cambridge University Press, 2012).

25. C. Steger, “An unbiased detector of curvilinear structures,” IEEE Trans. Pattern Anal. Machine Intell. 20(2), 113–125 (1998).


