Abstract

The feasibility and accuracy of four-mirror-based monocular stereo vision (FMSV) depend on the system layout and the calibration accuracy, respectively. In this study, a spatial light path analysis method and a calibration method are proposed for an FMSV system. As two-dimensional light path analysis cannot fully characterize the imaging parameters, a spatial light path model is proposed, which allows refinement of the system design. Then, considering the relationship between the lens distortion and the imaging depth of field (DoF), a DoF-distortion equal-partition-based model is established. In the traditional calibration method, the optical axis must be perpendicular to the chessboard. Here, an accurate and practical FMSV calibration method without this constraint is proposed based on the above model. Using the proposed spatial light path analysis technique, a high-accuracy, high-portability FMSV system is constructed and calibrated, for which the average error of the vision-reconstructed distance is 0.0298 mm. In addition, robot path accuracy is evaluated by the system and compared with laser-tracker measurement results. Hence, a high accuracy of 0.031 mm is demonstrated for the proposed vision system.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Stereo vision allows three-dimensional (3D) perception through the use of two cameras located a set distance apart, as corresponding points in the two two-dimensional (2D) image planes can be triangulated. This technique is advantageous in that it is non-contact and facilitates real-time, high-accuracy, and whole-field measurement [1]; therefore, it has been widely used in intelligent manufacturing [2], intelligent transportation [3], aerospace [4], and marine [5] applications. However, multi-camera vision is poorly suited to highly dynamic measurement scenarios in limited spaces because of its high cost, strong asynchronism, and high space occupancy [6].

Monocular stereo vision (MSV) is an alternative technique [7] in which 3D perception systems are constructed by placing specifically shaped mirrors in front of a camera. This method is low-cost and highly integrable, and solves the synchronization problem of multi-camera vision as a single camera is used to obtain stereoscopic photographs. MSV can be classified into diffraction-based (e.g., with an optical grating [8]), refraction-based (e.g., using a biprism [9] or rotating prism [10]), and reflection-based (e.g., with hyperboloid [11], parabolic [12], or plane mirrors [13]) techniques. Reflection-based MSVs with curved or plane mirrors are most common [14]. A curved mirror can widen the field of view (FoV) and is suitable for panoramic measurement; however, serious shape-induced image distortion occurs, which creates problems for high-accuracy system calibration and image processing. In contrast, plane-mirror-based MSV exhibits smaller image distortion and conforms to the pinhole imaging model, and is especially suitable for high-accuracy measurement [15]. Of the various systems, the four-plane-mirror-based MSV (FMSV) system was first proposed by Inaba et al. [16]. In this system, the image sensor is divided into two parts and, hence, virtual binocular vision (VBV) is established, which is used to collect dual-perspective images of the target. Owing to its symmetrical structure, parameter consistency for the two virtual cameras, and flexible adjustment, the FMSV system has obvious advantages over other plane-mirror-based MSV systems.

To ensure measurement accuracy and measurement feasibility (e.g., regarding the FoV, depth of field (DoF), and system structure), research has focused on structural design and calibration of FMSV systems. For example, Yu et al. [17] previously proposed a cost-effective and ultra-portable smartphone-based FMSV system and provided design suggestions based on thermal error analysis of the system. In another study, the FMSV mechanism and geometry were further developed for stereoscopic imaging of fire dynamics [18]. Similarly, an FMSV setup was constructed in a different work to reconstruct the 3D spatial structures of streamer discharges [19]. To elucidate the flow mechanism of bubbly flows, Xue et al. [20] established a mathematical model of the FMSV; hence, they effectively reconstructed the 3D trajectories of multiple bubbles in a gas-liquid two-phase flow via image processing, stereo matching, and motion tracking. To enable deformation measurement under non-laboratory conditions, Pan integrated a blue light-emitting diode (LED) light source [21] and a coupled band-pass optical filter into an FMSV system. Following analysis of the optical design and basic principles of the established system, the tensile strains and Poisson's ratio of a specimen were then measured with high accuracy [22]. Other FMSV designs involved more integrated systems, with full-field 3D displacement measurement [23] and real-time strain measurement [24] being achieved. To date, however, research on FMSV structural design has been based on 2D light path analysis, and less attention has been paid to establishing the relationship between the structural and imaging parameters through spatial light path analysis. Specifically, FoV analysis has been limited to the horizontal direction. Moreover, because of inadequate characterization of the light-mirror interaction region, FMSV systems mostly use rectangular mirrors. However, if spatial light path analysis is employed, more compact mirrors can be used.

FMSV calibration refers to the use of high-accuracy targets (e.g., a chessboard) to estimate both types of pinhole model parameter, i.e., the intrinsic and extrinsic parameters, along with the lens distortion model parameters (DMPs). Such calibration techniques can be grouped into two types: parameter-inconsistency and parameter-consistency techniques. For the former, the equivalent VBV is calibrated directly, with the understanding that there is no relationship between the parameters of the two cameras. For the latter, the intrinsic parameters of the two cameras are deemed to be the same, and a two-step method, first without a mirror and then with a mirror, is employed for calibration [25]. However, human error can be introduced through the mirror movements. Thus, Zhou et al. [26] and Cui et al. [27] previously proposed various mirror-based calibration methods, which improved upon the calibration method proposed by Zhang [28]. In Zhang's approach, the pinhole model and the lens distortion model are coupled during calibration; however, this causes the intrinsic and extrinsic parameter errors to propagate to the DMPs, resulting in local convergence without an optimal solution.

Notably, in medium- and high-accuracy setups, lens distortion is the key factor restricting accuracy improvement. Thus, to achieve higher-accuracy DMP calibration, the geometric invariance of the image features (for a straight line [29], vanishing point [30], or sphere [31]) has been used to calibrate the distortion separately. In addition, new lens distortion models have been constructed to characterize the image distortion. For example, Magill et al. [32] proposed a DoF-dependent lens distortion model that is applicable to the focal plane only. Brown et al. [33] subsequently improved upon the Magill model using knowledge of the distortion on two focal planes; hence, they proposed a method to solve the DMPs on both the focal plane and an arbitrary object plane (the defocused plane perpendicular to the optical axis). Fryer et al. [34] then utilized the Brown model to correct for underwater lens distortion, and Fraser and Shortis [35] introduced an empirical distortion model that overcomes the failure of the Brown model to accurately describe large image distortions. Furthermore, bundle adjustment [36], Zernike polynomials [37] and multi-order splines [38] have been used to obtain the DoF-dependent DMPs. Note that the above DoF-related distortion models depend on the focusing status and the DMPs on the focal plane. However, in practical applications involving a settled FMSV system, the zoom ring and focusing ring cannot be adjusted. Moreover, the focusing position of the system and the DMPs on the focal plane cannot be accurately determined. Thus, to improve the DoF distortion model practicability, the DMP influence on the focal plane should be removed. Accordingly, by combining the Brown and Fraser models, Alvarez et al. [39] derived a radial distortion model suitable for planar scenarios. Under the condition of a locked focal length, the distortion of any image point could be accurately estimated using two straight lines. Additionally, Dong et al. [40] proposed a DoF-dependent distortion model for calibrating the DMPs on any object plane; hence, the 3D measurement accuracy in a 7.0 m × 3.5 m × 2.5 m space at a 6 m object distance was improved from 0.055 to 0.028 mm.

In previous studies, only a single set of DMPs has been used to characterize the lens distortion [28], and the distortion variation across different regions of a single object plane has been neglected. To overcome this problem, the overall distortion has been divided into circular regions and multiple sets of DMPs have been used to describe the distortion on the 2D object plane. In previous works, the present authors considered the correlation between the lens distortion and DoF, and partitioned the DoF-dependent distortion using equal-radius [41] and equal-increment [42] approaches to effectively improve the vision measurement accuracy. However, that method is limited to monocular cameras, and the chessboard must be perpendicular to the optical axis during implementation. Therefore, there are strict requirements for the experimental devices, the operation is lengthy, and human error is introduced.

In view of the above deficiencies, in this study, both a spatial light path analysis method and a calibration method for FMSV are proposed. Hence, 3D mapping from the structural parameters to the imaging parameters is achieved, along with high-accuracy and convenient FMSV calibration. The remainder of this paper is organized as follows. In Section 2, the FMSV measurement principles are introduced. In Section 3, spatial light path analysis to determine the relationship between the structural and imaging parameters is described. In Section 4, a calibration method is proposed for the FMSV system, which overcomes the dependence of the calibration on a perpendicular relationship between the chessboard and optical axis. In Section 5, the FMSV system constructed in this work is described, and a 3D measurement experiment for calibration accuracy verification is reported. Finally, in Section 6, the study is concluded.

2. FMSV measurement principles

FMSV is equivalent to VBV. In this study, the following notation is employed, as shown in Fig. 1: the left camera coordinate system is $O\textrm{ - }XYZ$ and coincides with the world coordinate system (WCS); the right camera coordinate system is ${O_r}\textrm{ - }{X_r}{Y_r}{Z_r}$; the left- and right-camera pixel coordinate systems are ${o_l}\textrm{ - }{u_l}{v_l}$ and ${o_r}\textrm{ - }{u_r}{v_r}$, respectively; and the effective focal lengths of the left and right cameras are ${f_l}$ and ${f_r}$, respectively. Camera mapping from a 3D point Q to certain image points (i.e., ${{\bf q}_l}$ and ${{\bf q}_r}$ in Fig. 1) can be expressed according to the pinhole camera model, such that:

$$\left\{ \begin{array}{l} {s_l}{{\tilde{{\bf m}}}_l} = {{\bf A}_l} \cdot \tilde{{\bf M}} = \left[ {\begin{array}{ccc} {{\alpha_l}}&0&{{u_0}}\\ 0&{{\beta_l}}&{{v_0}}\\ 0&0&1 \end{array}} \right] \cdot \tilde{{\bf M}}\\ {s_r}{{\tilde{{\bf m}}}_r} = {{\bf A}_r} \cdot {{\tilde{{\bf M}}}_r} = \left[ {\begin{array}{ccc} {{\alpha_r}}&0&{{u_0}}\\ 0&{{\beta_r}}&{{v_0}}\\ 0&0&1 \end{array}} \right] \cdot {{\tilde{{\bf M}}}_r} \end{array} \right.$$
where ${s_l}$ and ${s_r}$ are the scaling factors of the left and right cameras, respectively, and ${({u_0}\textrm{, }{v_0})^T}$ is the image center. In this paper, the “∼” symbol is used to denote an augmented vector to which 1 is added as the last element. Thus, ${\tilde{{\bf m}}_l} = {({u_l}\textrm{, }{v_l}\textrm{, }1)^T}$ and ${\tilde{{\bf m}}_r} = {({u_r}\textrm{, }{v_r}\textrm{, }1)^T}$ denote the 2D projections of ${\bf Q}$ in the left and right image planes, respectively; $\tilde{{\bf M}}\textrm{ = }{(X,\textrm{ }Y,\textrm{ }Z,\textrm{ 1})^T}$ and ${\tilde{{\bf M}}_r} = {({X_r}\textrm{, }{Y_r}\textrm{, }{Z_r}\textrm{, 1})^T}$ describe the homogeneous coordinates of spatial point ${\bf Q}$ in the two camera coordinate systems; ${{\bf A}_l}$ and ${{\bf A}_r}$ represent the intrinsic matrices of the left and right cameras, respectively; and ${\alpha _l} = {f_l}/{d_x}$, ${\beta _l} = {f_l}/{d_y}$, ${\alpha _r} = {f_r}/{d_x}$, and ${\beta _r} = {f_r}/{d_y}$, where ${d_x}$ and ${d_y}$ are the pixel sizes in the u and v directions, respectively, and subscripts “r” and “l” indicate the right and left camera coordinate systems, respectively.
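
As a concrete illustration of Eq. (1), the following minimal sketch (Python/NumPy; the intrinsic values are hypothetical, not calibrated values from this work) projects a camera-frame 3D point to pixel coordinates:

```python
import numpy as np

def project(A, M):
    """Pinhole projection of Eq. (1): s * m~ = A @ M~, with M already
    expressed in the camera frame, so the scale factor s equals Z."""
    m = A @ M
    return m[:2] / m[2]          # divide out the scale factor s

# Hypothetical intrinsics: alpha = f/dx, beta = f/dy, image center (u0, v0).
A_l = np.array([[2500.0,    0.0, 2048.0],
                [   0.0, 2500.0, 1536.0],
                [   0.0,    0.0,    1.0]])

Q = np.array([30.0, -20.0, 400.0])   # point in the left-camera frame (mm)
print(project(A_l, Q))               # 2D projection q_l in pixels
```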

Fig. 1. VBV measurement principles.

However, lens manufacturing and assembly errors cause radial and decentering distortions, which in turn project the straight lines in Fig. 1 as curved lines; thus, the imaging process no longer follows Eq. (1). To characterize this distortion, Brown proposed the following polynomial distortion model [33]:

$$\left\{ \begin{array}{l} u = \bar{u} + \bar{u} \cdot ({k_1} \cdot {r^2} + {k_2} \cdot {r^4} + \cdots ) + [{{p_1} \cdot ({r^2} + 2 \cdot {{\bar{u}}^2}) + 2{p_2} \cdot \bar{u} \cdot \bar{v}} ]+ \cdots \\ v = \bar{v} + \bar{v} \cdot ({k_1} \cdot {r^2} + {k_2} \cdot {r^4} + \cdots ) + [{{p_2} \cdot ({r^2} + 2 \cdot {{\bar{v}}^2}) + 2{p_1} \cdot \bar{u} \cdot \bar{v}} ]+ \cdots \end{array} \right.$$
where ${(\bar{u},\ \bar{v})^T}$ is the distorted point (${{\bf q^{\prime}}_l}$ in Fig. 1); $r = \sqrt {{{(\bar{u} - {u_0})}^2} + {{(\bar{v} - {v_0})}^2}}$ is the distortion radius; ${k_1}$ and ${k_2}$ are the first- and second-order radial distortion coefficients, respectively; and ${p_1}$ and ${p_2}$ represent the first- and second-order decentering distortion coefficients, respectively. Further, ${{\bf M}_{lr}} = \left[ {\begin{array}{cc} {\bf R}&{\bf t} \end{array}} \right] = \left[ {\begin{array}{cccc} {{r_1}}&{{r_2}}&{{r_3}}&{{t_x}}\\ {{r_4}}&{{r_5}}&{{r_6}}&{{t_y}}\\ {{r_7}}&{{r_8}}&{{r_9}}&{{t_z}} \end{array}} \right]$, where ${\bf R}$ and ${\bf t}$ are the rotation matrix and translation vector, respectively, between $O\textrm{ - }XYZ$ and ${O_r}\textrm{ - }{X_r}{Y_r}{Z_r}$. Then, ${\bf Q} = {(X,\textrm{ }Y,\textrm{ }Z)^T}$ can be calculated from the following:
$$\left\{ \begin{array}{l} X = \frac{{Z({u_l} - {u_0})}}{{{\alpha_l}}}\\ Y = \frac{{Z({v_l} - {v_0})}}{{{\beta_l}}}\\ Z = \frac{{{\alpha_l}{\beta_l}{\alpha_r}{t_x} - {\alpha_l}{\beta_l}({u_r} - {u_0}){t_z}}}{{{\beta_l}({u_l} - {u_0})(({u_r} - {u_0}){r_7} - {\alpha_r}{r_1}) + {\alpha_l}({v_l} - {v_0})(({u_r} - {u_0}){r_8} - {\alpha_r}{r_2}) + {\alpha_l}{\beta_l}(({u_r} - {u_0}){r_9} - {\alpha_r}{r_3})}} \end{array} \right.$$

From Eq. (3), the 3D position of ${\bf Q}$ can be calculated following calibration of the intrinsic (Eq. (1)), extrinsic (i.e., ${\bf R}$ and ${\bf t}$), and distortion parameters (Eq. (2)) of the VBV system.
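
The following sketch implements Eqs. (2) and (3) directly (Python/NumPy). The distortion terms are evaluated about the principal point, and the extrinsic values in the usage example are hypothetical; in practice all parameters come from the calibration of Section 4:

```python
import numpy as np

def correct_point(u_bar, v_bar, u0, v0, k1, k2, p1, p2):
    """Brown model of Eq. (2), truncated at second order: maps a distorted
    pixel (u_bar, v_bar) to its corrected position."""
    x, y = u_bar - u0, v_bar - v0
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 ** 2
    du = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    dv = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return u_bar + du, v_bar + dv

def triangulate(ul, vl, ur, u0, v0, al, bl, ar, R, t):
    """Closed-form VBV triangulation of Eq. (3) for Q = (X, Y, Z)^T."""
    r1, r2, r3 = R[0]
    r7, r8, r9 = R[2]
    num = al * bl * ar * t[0] - al * bl * (ur - u0) * t[2]
    den = (bl * (ul - u0) * ((ur - u0) * r7 - ar * r1)
           + al * (vl - v0) * ((ur - u0) * r8 - ar * r2)
           + al * bl * ((ur - u0) * r9 - ar * r3))
    Z = num / den
    return np.array([Z * (ul - u0) / al, Z * (vl - v0) / bl, Z])

# Hypothetical extrinsics: parallel virtual cameras with a 250 mm baseline.
R, t = np.eye(3), np.array([-250.0, 0.0, 0.0])
print(triangulate(2100.0, 1500.0, 1990.0, 2048.0, 1536.0,
                  2500.0, 2500.0, 2500.0, R, t))
```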

3. FMSV light path analysis

In this section, the 2D and 3D light paths of the FMSV system are analyzed to establish the relationship between the structural and imaging parameters. On this basis, the influence of the structural parameters on the imaging parameters is studied.

3.1 2D light path analysis

As shown in Fig. 2, the FMSV system is composed of a reflection unit and an area-array camera (where K denotes the image plane). The reflection unit, which is formed by two internal mirrors (${M_1}$ and ${M_2}$) and two external mirrors (${N_1}$ and ${N_2}$), is symmetrically placed in front of the real camera. Through the multiple reflections of the mirrors, the image plane is equally divided into left (i.e., ${K_1}$) and right (i.e., ${K_2}$) components, with each corresponding to one imaging perspective of the target. In this manner, FMSV is equivalent to VBV, and the 3D position can be reconstructed by matching the projections of the same target from two perspectives.

Fig. 2. 2D light path in FMSV system.

The coordinate system is established at the intersection point of the optical axis and internal mirrors. As shown in Fig. 2, the optical center is defined as ${O_c}$, half the camera field angle is $\theta $, and the FMSV structural parameters are $\alpha $, $\beta $, d, and L. Taking the virtual right camera (VRC) as an example, the mapping from the structural parameters to the imaging parameters can be expressed as

$$\left\{ \begin{array}{l} B = 2{c_{rx}}\\ FovM = \frac{{[{{c_{ry}} + {c_{rx}}\tan (\theta + \phi ) - (DepL - |{{c_{ry}}} |)} ]}}{{\tan (\phi ) + \tan ({\theta + \phi } )}}\\ DepM = \frac{1}{2}B\tan (\phi ) + FovM\tan (\phi )\\ {D_f} = \frac{1}{2}B\tan (\theta + \phi ) - \frac{1}{2}B\tan (\phi ) \end{array} \right.$$
where $\phi = 2\alpha + 90 - \beta$; ${({c_{rx}}\textrm{, }{c_{ry}})^T}$ denotes the VRC optical center, with ${c_{rx}} = L\sin (\alpha ) + L + d\sin (2\beta - 2\alpha )$ and ${c_{ry}} = - L\sin (2\beta ) - d\cos (2\beta - 2\alpha )$; ${l_1} = {{d\sin (\theta )} / {\cos (\alpha + \theta )}}$ and ${l_2} = [{L\sin (\beta ) + d\cos (2\alpha - \beta )} ]\tan (2\alpha - \beta + \theta ) - L\sin (\beta )\tan (2\alpha - \beta ) - d\sin (2\alpha - \beta )$ are the minimum lengths of the external and internal mirrors, respectively, for which no light-path interference occurs; and B, ${D_f}$, $FovM$, and $DepM$ denote the baseline distance, the DoF, half the maximum FoV, and the object distance at the maximum FoV, respectively.
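
A minimal numeric sketch of this mapping (Python/NumPy) is given below; the structural values and half field angle $\theta$ are hypothetical, and $DepL$ is taken here as the near object-distance parameter appearing in the $FovM$ expression:

```python
import numpy as np

def imaging_params(alpha, beta, d, L, theta, DepL):
    """2D light-path mapping of Eq. (4): structural parameters
    (alpha, beta, theta in degrees; d, L, DepL in mm) -> imaging parameters."""
    a, b, th = np.radians([alpha, beta, theta])
    phi = 2 * a + np.pi / 2 - b
    crx = L * np.sin(a) + L + d * np.sin(2 * b - 2 * a)    # VRC center, x
    cry = -L * np.sin(2 * b) - d * np.cos(2 * b - 2 * a)   # VRC center, y
    B = 2 * crx                                            # baseline
    FovM = (cry + crx * np.tan(th + phi) - (DepL - abs(cry))) \
           / (np.tan(phi) + np.tan(th + phi))
    DepM = 0.5 * B * np.tan(phi) + FovM * np.tan(phi)
    Df = 0.5 * B * (np.tan(th + phi) - np.tan(phi))
    return B, FovM, DepM, Df

# Hypothetical layout; note that phi stays within (0, 90) degrees here.
print(imaging_params(alpha=30.0, beta=72.0, d=40.0, L=100.0,
                     theta=8.0, DepL=50.0))
```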

3.2 3D light path analysis

In this approach, a 2D area-array sensor is used to take a snapshot of the target and, thus, the FoV is of 2D type. Further, the FoV analysis of Section 3.1 is limited to $FovM$ and the analysis of the light-mirror interaction region is inadequate; thus, rectangular mirrors are primarily used for this system. However, the mirror compactness can be enhanced through spatial light path analysis. To extend the analysis of the 2D light path to 3D space, the WCS $O\textrm{ - }XYZ$ is established by taking the midpoint of the intersection line of the two internal mirrors as the origin and the intersection line itself as the X axis. The Z axis of the WCS is collinear with the optical axis of the real camera. We assume that the real-camera coordinate system is ${O_c}\textrm{ - }{X_c}{Y_c}{Z_c}$, that the internal and external mirrors are symmetrically distributed, and that both are perpendicular to the ${O_c}{X_c}{Y_c}$ plane. Mirror rotation occurs around the X axis of the WCS only, and $\alpha$, $\beta$, d, and L are defined as in Section 3.1.

The normal vector of a mirror is defined as ${\bf n}\textrm{ = (}{n_x},\textrm{ }{n_y},\textrm{ }{n_z}{\textrm{)}^T}$ and the direction vector of the incident ray and a point on it are defined as ${\bf l}$ and ${{\bf L}_0}$, respectively. Then, the intersection point ${\bf p}$ of the incident ray (reflected ray) and mirror can be expressed as

$$\left\{ {\begin{array}{c} {{{\bf p}^T} \cdot {\bf n} + d = 0}\\ {{\bf p} = {{\bf L}_0} + g{\bf l}} \end{array}} \right.$$
where d is the signed distance from the origin of the reference coordinate system to the mirror. From the above equation, ${({{\bf L}_0} + g{\bf l})^T} \cdot {\bf n} + d = 0$ and, hence, $g = { - ({\bf L}_0^T \cdot {\bf n} + d)} / ({{{\bf l}^T} \cdot {\bf n}})$. Thus, ${\bf p} = {{\bf L}_0} - {{({\bf L}_0^T \cdot {\bf n} + d) \cdot {\bf l}} / ({{{\bf l}^T} \cdot {\bf n}})}$. The vector obtained following reflection by s mirrors, i.e., ${\tilde{{\bf l}}_s}$, is expressed as
$${\tilde{{\bf l}}_s} = \prod\limits_{k = 1}^s {\left[ {\begin{array}{cc} {{\mathbf I} - 2{{\mathbf n}_k} \cdot {{\mathbf n}_k}^\textrm{T}}&{2{d_k}{{\mathbf n}_k}}\\ {{{\mathbf 0}^\textrm{T}}}&1 \end{array}} \right]} \cdot \tilde{{\mathbf l}}$$
where ${{\bf n}_k}$ is the unit normal vector of the $k$-th mirror and satisfies ${(n_x^k)^2} + {(n_y^k)^2} + {(n_z^k)^2} = 1$, and ${d_k}$ is the signed distance from the origin of $O\textrm{ - }XYZ$ to the $k$-th mirror.
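
The following sketch (Python/NumPy) exercises Eqs. (5) and (6) by tracing one ray through two mirrors, as in the right-view chain of Eq. (7). The plane is written as ${\bf n} \cdot {\bf p} = d$, i.e., Eq. (5) with the sign of d flipped, so the translation block matches the $+2{d_k}{\bf n}_k$ term of Eq. (6); the mirror poses are hypothetical and serve only to exercise the formulas:

```python
import numpy as np

def intersect(L0, l, n, d):
    """Ray-plane intersection (Eq. (5)): plane n.p = d, ray p = L0 + g*l."""
    g = (d - np.dot(n, L0)) / np.dot(n, l)
    return L0 + g * l

def reflection_matrix(n, d):
    """4x4 homogeneous reflection matrix of Eq. (6) for the plane n.p = d."""
    M = np.eye(4)
    M[:3, :3] -= 2.0 * np.outer(n, n)
    M[:3, 3] = 2.0 * d * n
    return M

# Hypothetical internal (n1, d1) and external (n2, d2) mirror poses.
n1, d1 = np.array([0.0, np.sin(np.pi / 4), -np.cos(np.pi / 4)]), 0.0
n2, d2 = np.array([0.0, -np.sin(np.pi / 3), -np.cos(np.pi / 3)]), -60.0

p0 = np.array([0.0, 0.0, -200.0])    # a point on the incident ray
l0 = np.array([0.0, 0.05, 1.0])      # its direction, toward the mirrors

p1 = intersect(p0, l0, n1, d1)                                # internal hit
l1 = (reflection_matrix(n1, d1) @ np.append(l0, 0.0))[:3]     # reflected dir
p2 = intersect(p1, l1, n2, d2)                                # external hit
l2 = (reflection_matrix(n2, d2) @ np.append(l1, 0.0))[:3]     # outgoing dir
print(p1, p2, l2)   # directions carry w = 0, so the 2*d*n block drops out
```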

Let ${\bf \Pi } = {O_c}{X_c}{Z_c}$ and the symbols “r” and “l” in the left superscript of a variable indicate the right and left views of the FMSV, respectively. Then, the 3D light path from the image side to the object side can be determined. As shown in Fig. 3, for VRC, there are four rays from the sensor point ${}^r{\bf G}_1^i$ that pass through ${O_c}$. Following reflection by two mirrors (i.e., the internal and external mirrors), the outgoing ray ${}^r{\bf L}_2^i$ ($i = 1, \cdots ,4$) intersects with ${\bf \Pi }$ at ${}^r{\bf P}_\Pi ^i$. The spatial light path is then expressed as follows:

$$\left\{ \begin{array}{l} {}^r{\bf p}_k^i = {}^r{\bf G}_k^i - \frac{{{}^r{\bf G}_k^i \cdot {}^r{{\bf n}_k} + {d_k}}}{{{}^r{\bf l}_k^i \cdot {}^r{{\bf n}_k}}} \cdot {}^r{\bf l}_k^i\\ {}^r\tilde{{\bf l}}_{k + 1}^i = \left[ {\begin{array}{cc} {{\bf I} - 2{}^r{{\bf n}_k} \cdot {}^r{\bf n}_k^\textrm{T}}&{2{d_k}{}^r{{\bf n}_k}}\\ {{{\bf 0}^\textrm{T}}}&1 \end{array}} \right] \cdot {}^r\tilde{{\bf l}}_k^i\\ {}^r{\bf p}_\Pi ^i = {}^r{\bf G}_\Pi ^i - \frac{{{}^r{\bf G}_\Pi ^i \cdot {{\bf n}_\Pi } + {d_\Pi }}}{{{}^r{\bf l}_\Pi ^i \cdot {{\bf n}_\Pi }}} \cdot {}^r{\bf l}_\Pi ^i \end{array} \right.\quad \begin{array}{c} {k = 1,2}\\ {i = 1, \cdots ,4} \end{array}$$

Here, ${}^r{\bf l}_k^i$ is the direction vector of ${}^r{\bf L}_k^i$, which is itself the $i$-th incident ray of the $k$-th mirror; ${}^r{\bf G}_k^i$ denotes a point on ${}^r{\bf L}_k^i$; ${}^r{\bf p}_k^i$ describes the intersection point of the $i$-th incident ray and the $k$-th mirror, and also the point on the $i$-th incident ray corresponding to the $(k + 1)$-th mirror (i.e., ${}^r{\bf G}_{k + 1}^i = {}^r{\bf p}_k^i$); ${}^r{{\bf n}_k}$ describes the unit normal vector of the $k$-th mirror; ${}^r{{\bf M}_k} = \left[ {\begin{array}{cc} {{\bf I} - 2{}^r{{\bf n}_k} \cdot {}^r{\bf n}_k^\textrm{T}}&{2{d_k}{}^r{{\bf n}_k}}\\ {{{\bf 0}^\textrm{T}}}&1 \end{array}} \right]$ is the reflection matrix of the $k$-th mirror; ${}^r\tilde{{\bf l}}_{k + 1}^i$ is the homogeneous direction vector of ${}^r{\bf L}_{k + 1}^i$, which is itself the $i$-th incident ray of the $(k + 1)$-th mirror (i.e., the $i$-th reflected ray of the $k$-th mirror); ${}^r{\bf G}_\Pi ^i$ describes a point on the $i$-th reflected ray of the last (external) mirror; ${}^r{\bf p}_\Pi ^i$ is the intersection point of this $i$-th reflected ray and ${\bf \Pi }$; ${{\bf n}_\Pi }$ is the unit normal vector of ${\bf \Pi }$; ${d_\Pi }$ is the signed distance from O to ${\bf \Pi }$; and ${}^r{\bf l}_\Pi ^i$ is the direction vector of the $i$-th incident ray of ${\bf \Pi }$.

Fig. 3. 3D light path in FMSV system: (a) single-mirror reflection and (b) multiple-mirror reflection.

Then, the geometric attributes of the region surrounded by the four incident rays of the $k$-th ($k = 1,\textrm{ }2$) mirror are

$$\left\{ \begin{array}{l} S = \sum\limits_{i = 2}^3 {\frac{1}{2}||{{}^r{\bf p}_k^1{}^r{\bf p}_k^i \times {}^r{\bf p}_k^1{}^r{\bf p}_k^{i + 1}} ||} \\ C = \sum\limits_{m = 1}^3 {||{{}^r{\bf p}_k^m{}^r{\bf p}_k^{m + 1}} ||} \end{array} \right.$$
where S is the area of the region and C is the sum of the distances between adjacent intersection points.

For each mirror, the region formed by the intersection points of the four rays and the mirror is an isosceles trapezoid with four sides, i.e., ${}^r{\bf P}_k^1{}^r{\bf P}_k^2$ (${}^l{\bf P}_k^1{}^l{\bf P}_k^2$), ${}^r{\bf P}_k^2{}^r{\bf P}_k^3$ (${}^l{\bf P}_k^2{}^l{\bf P}_k^3$), ${}^r{\bf P}_k^3{}^r{\bf P}_k^4$ (${}^l{\bf P}_k^3{}^l{\bf P}_k^4$), and ${}^r{\bf P}_k^4{}^r{\bf P}_k^1$ (${}^l{\bf P}_k^4{}^l{\bf P}_k^1$), with ${}^r{\bf P}_k^1{}^r{\bf P}_k^2 \parallel {}^r{\bf P}_k^3{}^r{\bf P}_k^4 \parallel$ the X axis. The geometric properties of this region (i.e., the effective internal mirrors) can be determined from the following expressions:

$$\left\{ {\begin{array}{l} {AIn = \sum\limits_{j = 2}^3 {\frac{1}{2}||{{}^r{\bf p}_1^1{}^r{\bf p}_1^j \times {}^r{\bf p}_1^1{}^r{\bf p}_1^{j + 1}} ||} }\\ {InC = \sum\limits_{m = 1}^3 {||{{}^r{\bf p}_1^m{}^r{\bf p}_1^{m + 1}} ||} }\\ {InBH = ||{{}^r{\bf p}_1^1{}^r{\bf p}_1^2} ||}\\ {InFH = ||{{}^r{\bf p}_1^3{}^r{\bf p}_1^4} ||}\\ {MirrInL = {{2AIn} / {(InBH + InFH)}}} \end{array}} \right.$$
where $MirrInL$ is the length of the effective internal mirrors; $AIn$ is the area of the region occupied by the incident rays; and $InFH$, $InBH$, and $InC$ are the base length, the top length, and the circumference of the isosceles trapezoid formed by the intersection points of the rays and the internal mirror, respectively.

Similarly, the geometric properties of the effective external mirrors can be obtained as follows:

$$\left\{ {\begin{array}{l} {AEx = \sum\limits_{j = 2}^3 {\frac{1}{2}||{{}^r{\bf p}_2^1{}^r{\bf p}_2^j \times {}^r{\bf p}_2^1{}^r{\bf p}_2^{j + 1}} ||} }\\ {ExC = \sum\limits_{m = 1}^3 {||{{}^r{\bf p}_2^m{}^r{\bf p}_2^{m + 1}} ||} }\\ {ExBH = ||{{}^r{\bf p}_2^1{}^r{\bf p}_2^2} ||}\\ {ExFH = ||{{}^r{\bf p}_2^3{}^r{\bf p}_2^4} ||}\\ {MirrExL = {{2AEx} / {(ExBH + ExFH)}}} \end{array}} \right.$$
where $MirrExL$ is the external-mirror length; $AEx$ is the area occupied by the incident rays; and $ExFH$, $ExBH$, and $ExC$ are the base length, the top length, and the circumference of the isosceles trapezoid enclosed by the intersection points of the rays and the external mirror, respectively.
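
Equations (9) and (10) share the same trapezoid geometry, so a single helper suffices. The sketch below (Python/NumPy; hypothetical intersection points) computes the area via cross products, the side lengths, and the effective mirror length:

```python
import numpy as np

def effective_mirror(P):
    """Trapezoid attributes of Eqs. (9)-(10) from the four ray-mirror
    intersection points P (4 x 3 array, ordered P1, P2, P3, P4)."""
    P = np.asarray(P, dtype=float)
    # Area: split the quadrilateral into two triangles at P1 (cross products).
    A = sum(0.5 * np.linalg.norm(np.cross(P[j] - P[0], P[j + 1] - P[0]))
            for j in (1, 2))
    # C sums the three sides p^m p^(m+1), exactly as written in the text.
    C = sum(np.linalg.norm(P[m + 1] - P[m]) for m in range(3))
    BH = np.linalg.norm(P[1] - P[0])    # top length  (InBH / ExBH)
    FH = np.linalg.norm(P[3] - P[2])    # base length (InFH / ExFH)
    return A, C, BH, FH, 2 * A / (BH + FH)   # last entry: MirrInL / MirrExL

# Hypothetical points (parallel sides 20 mm and 30 mm, height 30 mm):
pts = [[-10, 0, 0], [10, 0, 0], [15, 0, 30], [-15, 0, 30]]
print(effective_mirror(pts))   # area 750, effective mirror length 30
```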

The FMSV imaging region can be enclosed by ${}^r{\bf P}_\Pi ^i$ and the intersection points of the reflected rays (i.e., ${}^r{\bf L}_2^i$ and ${}^l{\bf L}_2^i$, $i = 1, \cdots, 4$), which can be expressed as

$$\left\{ \begin{array}{l} {}^r{{\bf p}^\prime }_2^1 = {}^r{\bf L}_2^1 \cap {}^l{\bf L}_2^4\\ {}^r{{\bf p}^\prime }_2^2 = {}^r{\bf L}_2^2 \cap {}^l{\bf L}_2^3\\ {}^l{{\bf p}^\prime }_2^1 = {}^r{\bf L}_2^4 \cap {}^l{\bf L}_2^1\\ {}^l{{\bf p}^\prime }_2^2 = {}^r{\bf L}_2^3 \cap {}^l{\bf L}_2^2\textrm{ } \end{array} \right.\textrm{ }$$

Then, the FMSV imaging parameters can be calculated as

$$\left\{ \begin{array}{l} DoF = \left\| {{}^r{\bf p}_\Pi ^{\prime 1}{}^r{\bf p}_\Pi ^{\prime 4}} \right\|\\ FovL = \left\| {{}^r{\bf p}_2^{\prime 1}{}^l{\bf p}_2^{\prime 1}} \right\| = \left\| {{}^r{\bf p}_2^{\prime 2}{}^l{\bf p}_2^{\prime 2}} \right\|\\ FovH = \left\| {{}^r{\bf p}_\Pi ^1{}^r{\bf p}_\Pi ^2} \right\|\\ FoVHM = \left\| {{}^r{\bf p}_\Pi ^4{}^r{\bf p}_\Pi ^3} \right\|\\ BL = 2\left\| {\left( {\prod\limits_{k = 1}^2 {\left[ {\begin{array}{cc} {{\bf I} - 2{}^r{{\bf n}_k} \cdot {}^r{\bf n}_k^T}&{2{d_k}{}^r{{\bf n}_k}}\\ {{{\bf 0}^T}}&1 \end{array}} \right]} } \right) \cdot {{\tilde{{\bf O}}}_c}} \right\|\\ Dis = \frac{1}{2}\left\| {{}^r{\bf p}_2^{\prime 2} + {}^l{\bf p}_2^{\prime 2} - {}^r{\bf p}_2^{\prime 1} - {}^l{\bf p}_2^{\prime 1}} \right\| + f + d \end{array} \right.$$
where $FovL$ is the maximum FoV in the direction of the ${Y_c}$ axis; $BL$ is the baseline distance; $FovH$ and $FoVHM$ are the minimum and maximum FoVs in the direction of the ${X_c}$ axis, respectively; and $Dis$ is the object distance when the FoV is at its maximum.

The parameters of the FMSV imaging region (i.e., $FoVL$, $FoVH$, and $FoVHM$), $Dis$, and $BL$ influence the measurement range, DoF, and accuracy of the FMSV system. Therefore, the variation of the 3D light path in accordance with the structural parameters was analyzed in this study for the following setup. Taking $\alpha = 45^\circ$, $\beta = 62^\circ$, $d = 200\ \textrm{mm}$, and $L = 90\ \textrm{mm}$ as the reference configuration, we analyzed the influence of $\alpha$, $\beta$, d, and L on the imaging parameters. Figures 4(a), 5(a), 6(a), and 7(a) show the 3D light path variations when only $\alpha \in [35^\circ,\ 50^\circ]$, $\beta \in [50^\circ,\ 75^\circ]$, $d \in [100\ \textrm{mm},\ 200\ \textrm{mm}]$, or $L \in [50\ \textrm{mm},\ 150\ \textrm{mm}]$ was varied within its range, respectively. In these figures, the interaction regions between the rays and the internal and external mirrors are marked in yellow, and the color gradients characterize the common region of the VBV. Further, $\textrm{VL}$ and $\textrm{VR}$ are the two optical centers. The imaging parameter trends according to the variation of each factor, i.e., $\alpha$, $\beta$, d, and L, are shown in Figs. 4(b), 5(b), 6(b), and 7(b), respectively.

Fig. 4. Influence of $\alpha$ on FMSV system: (a) 3D light path and (b) imaging parameters.

Fig. 5. Influence of $\beta$ on FMSV system: (a) 3D light path and (b) imaging parameters.

Fig. 6. Influence of d on FMSV system: (a) 3D light path and (b) imaging parameters.

Fig. 7. Influence of L on FMSV system: (a) 3D light path and (b) imaging parameters.

Table 1 lists the correlations between the imaging and structural parameters. It is apparent that $\alpha $ is negatively correlated with $BL$ and $FoVH$; nonlinearly correlated with $FoVL$; uncorrelated with $FoVHM$, $ExBH$, and $InBH$; and positively correlated with the other parameters. Moreover, $\beta $ is negatively correlated with $DoF$, $Dis$, $MirrInL$, and $ExC$; nonlinearly correlated with $FoVL$; positively correlated with $BL$; and uncorrelated with the other parameters. An increase in d has no influence on $FoVH$, $ExBH$, and $InBH$; however, d is positively correlated with the other parameters. Finally, L is positively correlated with $FoVL$, $DoF$, $BL$, $Dis$, $MirrExL$, $AEx$, and $ExC$, but has no relationship with the other parameters.

Table 1. Correlation between imaging and structural parameters.

By adjusting the four structural parameters, the FMSV imaging parameters can be optimized. Let ${x_1} = \alpha $, ${x_2} = \beta $, ${x_3} = d$, ${x_4} = L$, and ${\bf x} = {[{x_1},\textrm{ }{x_2},\textrm{ }{x_3},\textrm{ }{x_4}]^T}$. Then, in accordance with the target size, measurement accuracy, and system compactness requirements, the objective functions of $FoVL$, $FoVH$, $FoVHM$, $Dis$, and $BL$ are set to ${f_1}({\bf x})$, ${f_2}({\bf x})$, ${f_3}({\bf x})$, ${f_4}({\bf x})$, and ${f_5}({\bf x})$, respectively. The functions to be optimized can be used to construct the vector $F({\bf x}) = {[{f_1}({\bf x}),\textrm{ }{f_2}({\bf x}),\textrm{ } \cdots ,\textrm{ }{f_5}({\bf x})]^T}$, and the cost function can be defined as

$$\left\{ {\begin{array}{c} {\textrm{minimize } F({\bf x}) = \sum\limits_{i = 1}^5 {{{[{{f_i}({\bf x})} ]}^2}} }\\ {s.t.\ \left\{ {\begin{array}{c} {{{\bf l}_b} \le {\bf x} \le {{\bf u}_b}}\\ {{{\bf A}_e} \cdot {\bf x} \le {\bf b}} \end{array}} \right.} \end{array}} \right.$$
where ${{\bf l}_b}$ and ${{\bf u}_b}$ represent the lower and upper bounds of ${\bf x}$, respectively, and ${{\bf A}_e} \cdot {\bf x} \le {\bf b}$ denotes the inequality constraint. To satisfy $0 < \phi < {90^ \circ }$, ${{\bf A}_e}$ and ${\bf b}$ are set to $[{ - 2}\ \ 2\ \ 0\ \ 0;\ 1\ { - 1}\ \ 0\ \ 0]$ and $[{ - 90};\ 0]$, respectively. As d and L strongly impact the FMSV dimensions, the ${\bf x}$ corresponding to the smallest d and L obtained from the optimization results is finally selected.
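
A minimal optimization sketch (Python with SciPy) follows. The residuals use the 2D mapping of Eq. (4) as stand-ins for the $f_i({\bf x})$ of Eq. (13) (in the full method these come from the 3D trace and Eq. (12)), the bounds and targets are hypothetical, and the $\phi$ constraint is written directly on $\phi = 2\alpha + 90^\circ - \beta$ rather than in matrix form:

```python
import numpy as np
from scipy.optimize import minimize

def residuals(x):
    """Stand-in f_i(x): deviations of baseline and DoF from hypothetical
    targets, computed with the 2D mapping of Eq. (4)."""
    a, b = np.radians(x[0]), np.radians(x[1])
    d, L, th = x[2], x[3], np.radians(8.0)      # theta fixed at 8 deg here
    phi = 2 * a + np.pi / 2 - b
    B = 2 * (L * np.sin(a) + L + d * np.sin(2 * b - 2 * a))    # baseline
    Df = 0.5 * B * (np.tan(th + phi) - np.tan(phi))            # DoF
    return np.array([B - 300.0, Df - 150.0])

cost = lambda x: np.sum(residuals(x) ** 2)      # objective of Eq. (13)

bounds = [(25, 50), (50, 110), (30, 120), (30, 120)]   # hypothetical l_b, u_b
cons = [{'type': 'ineq', 'fun': lambda x: 2 * x[0] + 90 - x[1]},   # phi > 0
        {'type': 'ineq', 'fun': lambda x: x[1] - 2 * x[0] - 8}]    # phi + theta < 90
res = minimize(cost, x0=[30.0, 75.0, 60.0, 60.0], bounds=bounds,
               constraints=cons, method='SLSQP')
print(res.x, res.fun)
```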

4. FMSV calibration method

When the existing methods [33,42] are used to calibrate the DoF-dependent distortion model, the chessboard must be perpendicular to the optical axis. This condition places high demands on the experimental device and the FMSV calibration procedure, making these methods costly and time-consuming. In this study, an FMSV calibration method based on a loose constraint for the chessboard pose is proposed; hence, the practicability is improved while the calibration accuracy of the system is ensured.

4.1 DoF distortion partition model

The lens distortion is closely related to the target position in the DoF, especially for close-range conditions where the target is closer to the lens and the distortion is larger. Traditional calibration methods adopt only one set of distortion coefficients to represent the lens distortion and neglect the influence of the DoF, thereby yielding low camera calibration accuracy. To solve this problem, in this study, the DoF-dependent lens distortion is calibrated based on the Magill and Brown models [32,33], such that

$$\left\{ {\begin{array}{c} {k_i^{{s_n}} = {\alpha_{{s_n}}}k_i^{{s_m}} + (1 - {\alpha_{{s_n}}})k_i^{{s_k}}\textrm{ }}\\ {p_i^{{s_n}} = \frac{{{s_n}({s_m} - f)}}{{{s_m}({s_n} - f)}}p_i^{{s_m}}} \end{array}i = 1,2} \right.$$
where ${\alpha _{{s_n}}} = \frac{{({s_k} - {s_m})({s_n} - f)}}{{({s_k} - {s_n})({s_m} - f)}}$ is the amplification coefficient; $k_i^{{s_n}}$, $k_i^{{s_m}}$, and $k_i^{{s_k}}$ are the $i$-th order radial distortion coefficients on the focal planes at distances ${s_n}$, ${s_m}$, and ${s_k}$, respectively; $p_i^{{s_n}}$ and $p_i^{{s_m}}$ denote the $i$-th order decentering distortion coefficients on the focal planes at distances ${s_n}$ and ${s_m}$, respectively; and f is the focal length. Here, $k_i^{s,{s_k}} = k_i^s + \lambda (k_i^{{s_k}} - k_i^s)$ (where $\lambda$ is an empirical coefficient) [33] and $p_i^{s,{s_k}} = (1 - \frac{f}{s})\frac{{{s_k}(s - f)}}{{s({s_k} - f)}}p_i^\infty$ (where $p_i^\infty$ represents the $i$-th order decentering distortion coefficient on the focal plane at infinity) [35]. Then, by extending Eq. (14), we can obtain the distortion coefficients on any defocused plane (object plane):
$$\left\{ {\begin{array}{c} {k_i^{s,{s_n}} = \frac{{k_i^{s,{s_k}}({s_m} - f)({s_k} - {s_m}) + ({s_k} - {s_n})({s_k} - f)(k_i^{s,{s_m}} - k_i^{s,{s_k}})}}{{({s_m} - f)({s_k} - {s_m})}}}\\ {p_i^{s,{s_n}} = \frac{{p_i^{s,{s_k}}(1 - s_m^2) + p_i^{s,{s_m}}({s_m}{s_k} - 1)}}{{p_i^{s,{s_k}}(1 - {s_m}{s_n}) + p_i^{s,{s_m}}({s_n}{s_k} - 1)}}\frac{{{s_n}}}{{{s_m}}}p_i^{s,{s_m}}\textrm{ }} \end{array}} \right.\textrm{ }i = 1,2.$$
where $k_i^{s,{s_n}}$ ($k_i^{s,{s_k}}$, $k_i^{s,{s_m}}$) and $p_i^{s,{s_n}}$ ($p_i^{s,{s_k}}$, $p_i^{s,{s_m}}$) are the $i$-th order radial and decentering distortion coefficients, respectively, on the defocused plane at a distance of ${s_n}$ (${s_k}$, ${s_m}$) when the lens is focused at distance s. As apparent from Eq. (15), when $k_i^{s,{s_k}}$, $k_i^{s,{s_m}}$, $p_i^{s,{s_k}}$, $p_i^{s,{s_m}}$, ${s_k}$, ${s_m}$, and f are known, $k_i^{s,{s_n}}$ and $p_i^{s,{s_n}}$ are independent of the focused distances and the distortion coefficients on the focal planes. Thus, the relationship between the DoF and distortion is established. Owing to the multi-order property of the distortion model (Eq. (2)), the distortion diffuses and grows non-uniformly from the image center in the form of circumferential isolines. Therefore, to represent the distortion more accurately, the distortion is circularly partitioned by an equal distortion increment. Hence, we obtain the following lens distortion model using equal partitioning of the distortion:
$$\left\{ {\begin{array}{c} {{}^gk_i^{s,{s_n}} = \frac{{{}^gk_i^{s,{s_k}}({s_m} - f)({s_k} - {s_m}) + ({s_k} - {s_n})({s_k} - f)({}^gk_i^{s,{s_m}} - {}^gk_i^{s,{s_k}})}}{{({s_m} - f)({s_k} - {s_m})}}}\\ {{}^gp_i^{s,{s_n}} = \frac{{{}^gp_i^{s,{s_k}}(1 - s_m^2) + {}^gp_i^{s,{s_m}}({s_m}{s_k} - 1)}}{{{}^gp_i^{s,{s_k}}(1 - {s_m}{s_n}) + {}^gp_i^{s,{s_m}}({s_n}{s_k} - 1)}}\frac{{{s_n}}}{{{s_m}}}{}^gp_i^{s,{s_m}}\textrm{ }} \end{array}} \right.$$
where ${}^gk_i^{s,{s_n}}$ (${}^gk_i^{s,{s_k}}$, ${}^gk_i^{s,{s_m}}$) and ${}^gp_i^{s,{s_n}}$ (${}^gp_i^{s,{s_k}}$, ${}^gp_i^{s,{s_m}}$) are the $i$-th order radial and decentering distortion coefficients, respectively, on the $g$-th partition of the defocused plane with depth ${s_n}$ (${s_k}$, ${s_m}$) when the lens is focused at distance s. In this manner, a lens distortion model can be obtained, which considers both the DoF and equal distortion increment partition but does not depend on the relevant parameters of the focal plane.
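
As a numeric sketch of the interpolation underlying Eqs. (14)-(16) (Python/NumPy; all coefficient values hypothetical), the radial coefficients on an arbitrary plane are a weighted combination of those on the two calibrated planes, and the decentering coefficients scale with the distance ratio; the per-partition model of Eq. (16) applies the same relations within each ring $g$:

```python
import numpy as np

def dof_coeffs(sn, sk, sm, f, k_sm, k_sk, p_sm):
    """Distortion coefficients on a plane at distance sn, interpolated from
    planes at sm and sk (Eq. (14): radial, first line; decentering, second).
    All distances in mm."""
    a = (sk - sm) * (sn - f) / ((sk - sn) * (sm - f))    # amplification coeff.
    k_sn = a * np.asarray(k_sm) + (1.0 - a) * np.asarray(k_sk)
    p_sn = sn * (sm - f) / (sm * (sn - f)) * np.asarray(p_sm)
    return k_sn, p_sn

# Hypothetical coefficients (k1, k2) and (p1, p2) calibrated on two planes:
k, p = dof_coeffs(sn=450.0, sk=300.0, sm=600.0, f=25.0,
                  k_sm=[1.2e-8, -3.0e-15], k_sk=[2.0e-8, -5.0e-15],
                  p_sm=[4.0e-7, -1.0e-7])
print(k, p)
```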

4.2 Image processing of straight lines in subregion

To avoid the coupling effect of the FMSV intrinsic and extrinsic parameters on the distortion coefficients, straight lines are used to calibrate the distortion separately [33]. As straight lines in a particular subregion are required to optimize the coefficients of the lens distortion model (Eq. (16)), this study proposes a multiple-constraint-based method to extract line points from image subregions. Figure 8 shows the image processing method, which is implemented according to the following steps.

  • (1) Chessboard image acquisition: As shown in Fig. 8(a), an image featuring straight lines is collected.
  • (2) Corner detection: The corner detection operator is employed to detect the corners (Fig. 8(b)). Meanwhile, the points on the lines are extracted using the Canny operator, and the OPTA algorithm [43] is employed to obtain single-pixel edge skeletons of the lines.
  • (3) Point linking: Points between two adjacent corners are used to form a unit segment. For each segment, starting from a certain detected point, the Lowe method [44] is used to track and link subsequent points in the four-connected area (Fig. 8(c)), and the minimum link length is set to exceed 12 pixels.
  • (4) Outlier elimination and segment linking: During image acquisition, outliers are unavoidable (Fig. 8(d)); thus, adjacent corners are joined to form a straight line, and outliers lying beyond a tolerance band around this line are excluded. Hence, the points belonging to the same unit segment are determined. For each unit segment, there are three adjacent unit segments, two of which must be removed. As lens distortion bends but does not break a line, the angle between adjacent unit segments is used as the sole constraint to judge whether they are located on the same line (Fig. 8(e); see the sketch after this list). In this study, the angle threshold was set to 0.15–0.22 rad.
  • (5) Line-point extraction from each subregion: All line points in the designated subregion can be extracted according to the number of subregions, the distortion radius, and the points on each line, as shown in Fig. 8(f).
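
A simplified sketch of the angle-constraint test of step (4) is given below (Python/NumPy). It assumes each unit segment is already an ordered point array from steps (2)-(3); the threshold value is hypothetical but lies in the 0.15-0.22 rad range quoted above:

```python
import numpy as np

def link_segments(segments, angle_thresh=0.18):
    """Group consecutive unit segments into lines: two adjacent segments
    belong to the same (distortion-bent) line if the angle between their
    chord directions is below angle_thresh (rad)."""
    def direction(seg):
        d = seg[-1] - seg[0]
        return d / np.linalg.norm(d)
    lines = [[segments[0]]]
    for prev, cur in zip(segments[:-1], segments[1:]):
        c = abs(np.dot(direction(prev), direction(cur)))
        if np.arccos(np.clip(c, -1.0, 1.0)) < angle_thresh:
            lines[-1].append(cur)    # same line, slightly bent by distortion
        else:
            lines.append([cur])      # start a new line
    return lines

# Two nearly collinear unit segments followed by a perpendicular one:
segs = [np.array([[0.0, 0.0], [20.0, 1.0]]),
        np.array([[20.0, 1.0], [40.0, 3.0]]),
        np.array([[40.0, 3.0], [41.0, 23.0]])]
print([len(g) for g in link_segments(segs)])   # -> [2, 1]
```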

Fig. 8. Image processing of lines in image: (a) Image collection, (b) corner detection, (c) point linking, (d) image with outliers, (e) outlier elimination and segment linking, and (f) line-point extraction from each subregion.

4.3 FMSV calibration based on loose constraint for chessboard pose

As shown in Fig. 9(a), unlike existing calibration methods, which require the optical axis to be perpendicular to the chessboard, the calibration method presented in this study eases this constraint. Here, the chessboard must only cover the imaging FoV and DoF. As shown in Fig. 9(b), the VRC is taken as an example to show the equal distortion increment partitioning of the image. The process is as follows.

  • (1) The distortion curve ${l_d}$ of each chessboard image is determined using all the lines in the image. Hence, the average distortion curve ${l^{\prime}_d}$ of all images can be obtained. Then, the maximum image distortion ${\delta ^{\prime}_{\textrm{max}}}$ can be determined from ${l^{\prime}_d}$ and the maximum distortion radius ${r_{\max }} = \sqrt {{{({I_l} - {u_0})}^2} + {{({I_h} - {v_0})}^2}}$, where ${I_l}$ and ${I_h}$ are the length and width of the image, respectively.
  • (2) The distortion in the central region of the image is small and yields a singular (ill-conditioned) solution for the distortion coefficients; consequently, the corrected image there can be of poorer quality than the original. The lower limit of the image distortion radius ${r_{\textrm{limited}}}$ is therefore obtained by taking the distortion at which the algorithm converges in the central region of the image, i.e., ${\delta _{\textrm{limited}}}$, as a threshold. The lower limit of the average distortion ${\delta ^{\prime}_{\textrm{limited}}}$ and the lower limit of the image distortion radius ${r^{\prime}_{\textrm{limited}}}$ can then be obtained over all images.
  • (3) The number of subregions ${n_p}$ is determined from the ratio of ${r_{\max }}$ to ${r^{\prime}_{\textrm{limited}}}$, i.e., ${n_p} = \lceil {{r_{\max }}/{{r^{\prime}}_{\textrm{limited}}}} \rceil$ (a numeric sketch of steps (3) and (4) follows this list).
  • (4) The equal distortion increment ${\delta _{equ}}$ is determined from ${\delta ^{\prime}_{\textrm{max}}}$, ${\delta ^{\prime}_{\textrm{limited}}}$, and ${n_p}$ as ${\delta _{equ}} = \frac{{{{\delta ^{\prime}}_{\max }} - {{\delta ^{\prime}}_{\textrm{limited}}}}}{{{n_p} - 1}}$, where the distortion increments satisfy ${\Delta _2} = {\Delta _3} = {\Delta _4} = {\Delta _5}$ (see Fig. 9(b)).
  • (5) The partition radius $\rho $ is determined from ${\delta _{equ}}$ and ${l^{\prime}_d}$, where ${\rho _2} \ne {\rho _3} \ne {\rho _4} \ne {\rho _5}$.
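
A numeric sketch of steps (3) and (4) follows (Python/NumPy; the distortion-curve values are hypothetical). Step (5), which inverts the averaged distortion curve ${l^{\prime}_d}$ to obtain the radii $\rho_g$, is omitted here because it depends on the measured curve:

```python
import numpy as np

def partition(r_max, r_limited, delta_max, delta_limited):
    """Step (3): number of subregions; step (4): equal distortion increment.
    Radii in pixels; distortions in the units of the averaged curve l'_d."""
    n_p = int(np.ceil(r_max / r_limited))
    delta_equ = (delta_max - delta_limited) / (n_p - 1)
    return n_p, delta_equ

# Hypothetical values for a 4096 x 3072 image with a centered principal point:
r_max = np.hypot(4096 - 2048, 3072 - 1536)      # = 2560 pixels
print(partition(r_max, r_limited=640.0, delta_max=18.0, delta_limited=2.0))
# -> (4, 5.33...): four subregions, as used in Section 5.2
```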

Fig. 9. Schematic diagram of distortion partitioning: (a) Various chessboard poses and (b) equal distortion increment partitioning for various subregions.

The chessboard in Fig. 8 contains corner and line features, which are used to obtain the pinhole model parameters and DMPs, respectively. Considering the distance constraints between the corners, the initial values of the intrinsic and extrinsic parameters of the FMSV system (neglecting the DoF and partition) can be estimated using the Zhang calibration method [28]. From Eq. (16), the distortion coefficients depend on the object distance from a given line point. To this end, in this study, an epipolar constraint is first implemented to match the homologous points in the dual-view image. Then, according to the preliminarily calibrated intrinsic and extrinsic parameters and the 2D pixel positions of the detected line points, the 3D coordinates of the line point can be calculated using Eq. (3). Finally, the object distance can be obtained.

It is assumed that ${n_\eta }$ chessboard images with different positions are collected by the FMSV system, and that the distortion is processed for ${n_{\eta ,g}}$ partitions, with ${n_{\eta ,g,t}}$ lines in the $g$-th partition of the $\eta$-th image and ${n_{\eta ,g,t,w}}$ points on the $t$-th line in the $g$-th partition of the $\eta$-th image. For the chessboard images, the method described in Section 4.2 is used to extract the line points within each partition. As shown in Fig. 10, ${\Omega _{\eta ,g,t}}$ is the point set on the $t$-th line ${l_{\eta ,g,t}}$ in the $g$-th partition of the $\eta$-th image; ${\Omega _{\eta ,g,t,w}}$ and ${s_{\eta ,g,t,w}}$ are the pixel coordinate and object distance of the $w$-th point on ${l_{\eta ,g,t}}$, respectively; and ${\Omega ^{\prime}_{\eta ,g,t}}$ is the point set on ${l_{\eta ,g,t}}$ after image correction. Then, the point ${\Omega ^{\prime}_{\eta ,g,t,w}}$ in ${\Omega ^{\prime}_{\eta ,g,t}}$ is used to fit the line ${l^{\prime}_{\eta ,g,t}}$. The quadratic sum of the distances from ${\Omega ^{\prime}_{\eta ,g,t,w}}$ to ${l^{\prime}_{\eta ,g,t}}$, i.e., ${\gamma _{\eta ,g,t,w}}$, is taken as the constraint to minimize the cost function. Finally, with known ${s_k}$, ${s_m}$, ${\Omega _{\eta ,g,t,w}}$, and ${s_{\eta ,g,t,w}}$, the distortion model of the $g$-th partition in Eq. (16) is substituted into Eq. (2) to optimize the distortion coefficients ${}^gk_i^{s,{s_k}}$, ${}^gk_i^{s,{s_m}}$, ${}^gp_i^{s,{s_k}}$, and ${}^gp_i^{s,{s_m}}$, where $i = 1,\textrm{ }2$. The objective function is as follows:

$$\textrm{minimize }\sum\limits_{\eta \textrm{ = }1}^{{n_\eta }} {{\gamma _{\eta ,g,t,w}}} \textrm{ = }\sum\limits_{\eta \textrm{ = }1}^{{n_\eta }} {\sum\limits_{t\textrm{ = }1}^{{n_{\eta ,g,t}}} {\sum\limits_{w\textrm{ = }1}^{{n_{\eta ,g,t,w}}} {F({{s_k},{s_m},{s_{\eta ,g,t,w}},{\Omega _{\eta ,g,t,w}};{}^gk_i^{s,{s_k}},{}^gk_i^{s,{s_m}},{}^gp_i^{s,{s_k}},{}^gp_i^{s,{s_m}}} )} } }$$

The objective function is optimized using the Levenberg-Marquardt (LM) algorithm. Here, ${}^gk_i^{s,{s_k}}$, ${}^gk_i^{s,{s_m}}$, ${}^gp_i^{s,{s_k}}$, and ${}^gp_i^{s,{s_m}}$ are the distortion coefficients ${}^g\delta$ in the $g$-th partition. Then, according to Eq. (16), the distortion coefficients of any point in the $g$-th partition can be calculated. Thereafter, to avoid the coupling effect between the camera parameters, both the distortion and intrinsic parameters are fixed, and the extrinsic parameters are optimized by minimizing the reprojection error, such that

$$\textrm{minimize }{}^gE({}^g{{\bf R}_\eta },{}^g{{\bf T}_\eta }) = \sum\limits_{\eta \textrm{ = }1}^{{n_\eta }} {\sum\limits_{t\textrm{ = }1}^{{n_{\eta ,g,t}}} {\sum\limits_{w\textrm{ = }1}^{{n_{\eta ,g,t,w}}} {||{^g{\bf H}_\eta^{ - 1}({{\hat{\Omega }}_{\eta ,g,t,w}}) - {\Omega _{\eta ,g,t,w}}} ||} } }$$

Fig. 10. FMSV calibration based on loose constraint for chessboard pose.

Here, ${}^g{\bf H}_\eta$ is the homography matrix corresponding to the $g$-th partition of the $\eta$-th chessboard; ${\hat{\Omega }_{\eta ,g,t,w}}$ represents the point estimated by the lens distortion model; and ${}^g{{\bf R}_\eta }$ and ${}^g{{\bf T}_\eta }$ denote the pose matrices of the $g$-th partition of the $\eta$-th chessboard. The optimal extrinsic parameters are obtained using the LM algorithm. In the same manner, the counterparts for the virtual left camera can be obtained. The calibrated extrinsic parameter matrices of the $g$-th partition of the $\eta$-th image are defined as $\left[ {\begin{array}{cc} {{{\bf R}_{\eta ,g,l}}}&{{{\bf T}_{\eta ,g,l}}} \end{array}} \right]$ and $\left[ {\begin{array}{cc} {{{\bf R}_{\eta ,g,r}}}&{{{\bf T}_{\eta ,g,r}}} \end{array}} \right]$ for the left and right cameras, respectively. The transformation matrix between the two cameras, $\left[ {\begin{array}{cc} {{{{\bf R^{\prime}}}_{\eta ,g}}}&{{{{\bf T^{\prime}}}_{\eta ,g}}} \end{array}} \right]$, can then be calculated, where ${{\bf R^{\prime}}_{\eta ,g}} = {{\bf R}_{\eta ,g,l}} \cdot {\bf R}_{\eta ,g,r}^{ - 1}$ and ${{\bf T^{\prime}}_{\eta ,g}} = {{\bf T}_{\eta ,g,l}} - {{\bf R}_{\eta ,g,l}} \cdot {\bf R}_{\eta ,g,r}^{ - 1} \cdot {{\bf T}_{\eta ,g,r}}$. Finally, the FMSV system can be calibrated by applying the same process to all subregions.
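
The straightness objective of Eq. (17) can be sketched as follows (Python with SciPy, using its Levenberg-Marquardt solver). For brevity, the sketch optimizes a single set of coefficients for one partition on one object plane, whereas the full method carries coefficients for the two planes $s_k$ and $s_m$ and interpolates them per point depth via Eq. (16); the input line is synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

def line_residuals(pts):
    """Distances from 2D points to their total-least-squares line fit."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c, full_matrices=False)
    return (pts - c) @ vt[-1]        # projections onto the line normal

def straightness_cost(theta, lines, u0, v0):
    """Residuals of Eq. (17) for one partition: correct each line point with
    the candidate coefficients theta = (k1, k2, p1, p2), refit the line, and
    return the point-to-line distances."""
    k1, k2, p1, p2 = theta
    res = []
    for pts in lines:                # pts: (N x 2) distorted line points
        x, y = pts[:, 0] - u0, pts[:, 1] - v0
        r2 = x * x + y * y
        rad = k1 * r2 + k2 * r2 ** 2
        und = np.column_stack(
            [pts[:, 0] + x * rad + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y,
             pts[:, 1] + y * rad + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y])
        res.append(line_residuals(und))
    return np.concatenate(res)

# Synthetic, slightly bent line standing in for detected line points:
t = np.linspace(-800.0, 800.0, 41)
line = np.column_stack([2048.0 + t, 1200.0 + 3e-6 * t ** 2])
sol = least_squares(straightness_cost, x0=np.zeros(4),
                    args=([line], 2048.0, 1536.0), method='lm')
print(sol.x)
```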

In practical applications, the 3D position of a point ${(x,\textrm{ }y,\textrm{ }z)^T}$ is estimated and used to determine the subregion $\left\lceil {\frac{{f\sqrt {{x^2} + {y^2}} }}{{\rho z}}} \right\rceil$ in which it lies. Then, the corresponding distortion coefficients are selected to correct the point position in the image. Hence, high-accuracy positioning is achieved through 3D recalculation.
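
A one-line helper captures this subregion lookup (Python/NumPy; the partition radius $\rho$ here is a hypothetical value in the same sensor-plane units as $f\sqrt{x^2+y^2}/z$):

```python
import numpy as np

def subregion(point3d, f, rho):
    """Partition index of Section 4.3: g = ceil(f*sqrt(x^2 + y^2)/(rho*z))."""
    x, y, z = point3d
    return int(np.ceil(f * np.hypot(x, y) / (rho * z)))

print(subregion((40.0, 25.0, 400.0), f=25.0, rho=0.8))   # -> 4
```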

5. Accuracy verification and 3D measurement experiment

5.1 FMSV system

An FMSV system was constructed, which was composed of a 4096 pixel × 3072 pixel complementary metal-oxide-semiconductor (CMOS) camera (MARS-1230-23U3M/C), an 8–50-mm zoom lens, and two pairs of mirrors. In the validation experiment performed in this study, we let ${{\bf l}_b} = [35^\circ\ \ 50^\circ\ \ 30\ \textrm{mm}\ \ 30\ \textrm{mm}]$, ${{\bf u}_b} = [50^\circ\ \ 70^\circ\ \ 120\ \textrm{mm}\ \ 120\ \textrm{mm}]$, $DoF > 150\ \textrm{mm}$, $FoVL > 10\ \textrm{mm}$, $FoVH > 10\ \textrm{mm}$, $0\ \textrm{mm} < Dis < 500\ \textrm{mm}$, and $0\ \textrm{mm} < BL < 500\ \textrm{mm}$. Based on the above constraints, the 3D optical path was analyzed and the structural parameters of the FMSV system were optimized using the NSGA-II algorithm [45]. The values determined for $\alpha$, $\beta$, d, and L were 45°, 62°, 40 mm, and 100 mm, respectively. The dimensions of the internal and external mirrors (with isosceles trapezoid shapes) were 20 mm (top edge) × 60 mm (bottom edge) × 60 mm (height, i.e., mirror length) and 20 mm (top edge) × 60 mm (bottom edge) × 70 mm (height, i.e., mirror length), respectively. Finally, the camera and mirrors were integrated with a 3D-printed bracket to form the FMSV system, as shown in Fig. 11.

Fig. 11. Designed FMSV system.

5.2 Calibration accuracy verification

This section reports evaluations of the calibration accuracies of the distortion model and of the FMSV system. First, the accuracy of the proposed calibration method for the lens distortion model in Eq. (16) was verified (the chessboard is shown in Fig. 8(a)). Four distortion models were considered: the Zhang distortion model [28] (which neglects partitioning and the DoF), the Brown DoF-dependent distortion model [33] (without partitioning), the DoF-dependent distortion partition model [42] (which requires a perpendicular chessboard pose), and the proposed DoF-dependent distortion partition model (which has a loose constraint for the chessboard pose). Hereafter, these models are referred to as $Model\textrm{ - }A$, $Model\textrm{ - }B$, $Model\textrm{ - }C$, and $Model\textrm{ - }D$, respectively. Taking the VRC as an example, the equal distortion increment principle was used to divide the image distortion into four subregions. The calibration accuracies of the selected distortion models were then compared. The chessboard was fixed to an electronic control platform, which was repeatedly adjusted to ensure that the chessboard was perpendicular to the optical axis and symmetrically distributed. The $Model\textrm{ - }B$ and $Model\textrm{ - }C$ calibrations required platform movement such that the chessboard adopted several positions within the DoF perpendicular to the optical axis of the camera. There were no strict requirements on the chessboard pose for the calibration of $Model\textrm{ - }A$ and $Model\textrm{ - }D$ (for $Model\textrm{ - }D$, this can be confirmed with reference to the Zhang calibration method).

Two object planes perpendicular to the optical axis were selected, and the distortions in each subregion of the two planes were calculated using each of the four lens distortion models. The results were compared with the distortions directly determined from the corresponding lines (observed values) in order to verify the lens distortion calibration method proposed in this work. As detailed in Table 2, the distortion models that considered the DoF and partition (i.e., $Model\textrm{ - }C$ and $Model\textrm{ - }D$) were more accurate than the models that neglected these factors (i.e., $Model\textrm{ - }A$ and $Model\textrm{ - }B$). For $Model\textrm{ - }D$, the maximum and average differences between the calculated and observed distortions were 52.26 and 31.65 µm, respectively; for $Model\textrm{ - }B$, these differences were 85.36 and 51.59 µm, respectively; and for $Model\textrm{ - }A$, these differences were 92.36 and 55.91 µm, respectively. The accuracy of $Model\textrm{ - }D$ was 43% and 38% higher than those of $Model\textrm{ - }A$ and $Model\textrm{ - }B$, respectively. The maximum difference between $Model\textrm{ - }D$ and $Model\textrm{ - }C$ was only 3 µm. This result shows that, by ensuring measurement accuracy, the proposed distortion calibration method also improves the measurement convenience through loosening of the chessboard position constraint.

Table 2. Accuracy verification of proposed DoF-dependent distortion partition model (Model-D).

On this basis, $Model\textrm{ - }D$ was used to correct the distortions of three chessboard images with different poses, as shown in Figs. 12(a), (b), and (c). The peak signal-to-noise ratio (PSNR) and three straightness indicators (the maximum, average, and root mean square (RMS) of the distances from the points to the fitted straight lines) were used to evaluate the distortion correction results. Table 3 presents the results of the three indicators for each partition in Image #2. The undistorted image had a good PSNR of 38.75 dB. Additionally, over all subregions, the averages of the three indicators were 0.94, 0.40, and 0.20 pixels, respectively. Thus, good distortion correction results can be achieved using $Model\textrm{ - }D$.

Fig. 12. Image distortion correction: Images (a–c) #1-#3, respectively.

Table 3. Distortion correction results for Image #2.

Finally, the FMSV calibration accuracy was evaluated using a target (Fig. 13(a)) with a high-accuracy dot-to-dot distance. First, the target was placed at multiple positions within the FoV and the 3D coordinates of all dots on the target at each position were reconstructed using the FMSV system. Then, the distance between two dots was taken as the standard to compare the deviation between the vision measurement and the reference. For the 2352 distance errors shown in Fig. 13(b), the maximum and average reconstructed distance errors were 0.052 and 0.0298 mm, respectively, and the RMS was 0.0152 mm. Hence, the effectiveness and high accuracy of the proposed calibration method are verified.

Fig. 13. FMSV calibration-accuracy validation results: (a) Two images of target and (b) 2352 distance errors.

5.3 3D measurement experiment

For an industrial robot, path accuracy evaluation is important for improving its dynamic operating performance. In this study, an artifact (Fig. 14(a)) was fixed to the end-effector of a six-degree-of-freedom robot to form a measurement system that incorporated the FMSV system shown in Fig. 11. The constructed system was then used to detect the path error of a trajectory executed in a spatial plane at a 2 m/min feed rate according to ISO 9283 [46] and GB/T 12642-2013 [47]. The experiment was performed three times at a camera frame rate of 35 fps.

Fig. 14. Path error measurement equipment: (a) Artifact and (b) laser tracker.

To verify the accuracy of the proposed vision system, a laser tracker (Leica AT-960 MR) with a tracker machine control sensor (T-Mac) was used to measure the path under the same conditions (Fig. 14). The 3D paths and the 2D paths in the movement plane are shown in Figs. 15(a) and (b), respectively. The paths measured by the two methods were consistent with the nominal paths. The maximum and average path errors measured by the laser tracker at 2 m/min were 3.06 and 2.15 mm, respectively, while those measured by the FMSV were 2.99 and 2.12 mm, respectively. These results show that, owing to insufficient dynamic capability and an inadequate control algorithm, the robot produced a large path error. Using the 3D path data measured by the laser tracker as the standard, the accuracy of the vision system when evaluating the path performance was verified through the differences between the two measurements. Finally, Fig. 15(c) reveals that the maximum and average measurement errors at 2 m/min were 0.057 and 0.031 mm, respectively, both of which were less than one-third of the measured path error. Thus, the accuracy of the FMSV system is validated.

Fig. 15. Path error results: (a) 3D and (b) 2D paths in movement plane, and (c) vision measurement error.

6. Conclusion

To improve the feasibility and accuracy of the FMSV system, this study proposed both a light path analysis method and a calibration method. The major contributions of this work are the following. (1) An analysis method for the spatial light path in the FMSV system was proposed. The 3D correspondence between the VBV imaging parameters and FMSV structural parameters was established, overcoming the problem that a 2D light path cannot fully represent the FMSV measurement field and effective mirror size. In addition, this analysis method can improve the system compactness. (2) A DoF-distortion equal-partition-based model was established, and an FMSV calibration method without the constraint that the chessboard be perpendicular to the optical axis was proposed; this outcome ensures the calibration accuracy while also improving the calibration practicability. (3) In accordance with the results of a 3D light path analysis, an FMSV system was designed and calibrated with high accuracy and high convenience. This system was then used to measure the path error of an industrial robot at a 2 m/min feed rate. Comparison with laser-tracker results for accuracy verification revealed that the proposed FMSV system had a high accuracy of up to 0.031 mm. Through this study, FMSV systems can be designed more compactly and calibrated more conveniently, with potential applications in 3D dynamic measurement in confined spaces. A limitation of the study is that the influence of mirror installation errors on the system measurement accuracy is ignored. In future research, installation errors and other optical elements will be introduced, more reflection-based MSV variants will be designed, and matching optical path analysis and calibration methods will be studied, so as to further improve the measurement feasibility and accuracy for related applications.

Funding

National Natural Science Foundation of China (52005513); China Postdoctoral Science Foundation (2020M682256); Fundamental Research Funds for the Central Universities (27RA2003015).

Acknowledgments

The authors would like to acknowledge funding support from the National Natural Science Foundation of China and the China Postdoctoral Science Foundation.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Z. Su, J. Pan, S. Zhang, S. Wu, Q. Yu, and D. Zhang, “Characterizing dynamic deformation of marine propeller blades with stroboscopic stereo digital image correlation,” Mech. Syst. Signal Process. 162, 108072 (2022). [CrossRef]  

2. D. Hou, X. Mei, W. Huang, J. Li, C. Wang, and X. Wang, “An online and vision-based method for fixtured pose measurement of nondatum complex component,” IEEE Trans. Instrum. Meas. 69(6), 3370–3376 (2020). [CrossRef]  

3. F. Shao, W. C. Lin, and R. Fu, “Optimizing multiview video plus depth retargeting technique for stereoscopic 3D displays,” Opt. Express 25(11), 12478–12492 (2017). [CrossRef]  

4. J. Peng, W. Xu, B. Liang, and A. Wu, “Pose measurement and motion estimation of space non-cooperative targets based on laser radar and stereo-vision fusion,” IEEE Sens. J. 19(8), 3008–3019 (2019). [CrossRef]  

5. Z. L. Su, J. Y. Pan, and L. Lu, “Refractive three-dimensional reconstruction for underwater stereo digital image correlation,” Opt. Express 29(8), 12131–12144 (2021). [CrossRef]  

6. T. Luhmann, S. Robson, and S. Kyle, Close-Range Photogrammetry and 3D Imaging, 4th ed. (Walter De Gruyter GmbH, Berlin/Boston, 2014), Chap. 6.

7. K. Takahashi, S. Nobuhara, and T. Matsuyama, “A new mirror-based extrinsic camera calibration using an orthogonality constraint,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2012), pp. 1051–1058.

8. R. Henao, F. Medina, H. J. Rabal, and M. Trivi, “Three-dimensional speckle measurements with a diffraction grating,” Appl. Opt. 32(5), 726–729 (1993). [CrossRef]  

9. K. B. Lim and Y. Xiao, “Virtual stereovision system: new understanding on single-lens stereovision using a biprism,” J. Electron. Imaging 14(4), 043020 (2005). [CrossRef]  

10. C. Gao and N. Ahuja, “A refractive camera for acquiring stereo and super-resolution images,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (2006), pp. 2316–2323.

11. F. Bravo-Valenzuela and M. Torres-Torriti, “Comparison of panoramic stereoscopic sensors based on hyperboloidal mirrors,” 2009 6th Latin American Robotics Symposium (LARS 2009) (2009), pp. 103–110.

12. S. K. Nayar, “Catadioptric omnidirectional camera,” IEEE Conference on Computer Vision and Pattern Recognition (1997), pp. 482–488.

13. Y. Yan and B. W. He, “Single camera stereo with planar mirrors,” Adv. Mat. Res. 684, 447–450 (2013). [CrossRef]  

14. P. Sturm, S. Ramalingam, S. Gasparini, and J. Barreto, “Camera models and fundamental concepts used in geometric computer vision,” Found. Trends Comput. Graph. Vis. 6(1-2), 1–183 (2010). [CrossRef]  

15. B. Pan, L. Yu, and Q. Zhang, “Review of single-camera stereo-digital image correlation techniques for full-field 3D shape and deformation measurement,” Sci. China Technol. Sci. 61(1), 2–20 (2018). [CrossRef]  

16. M. Inaba, T. Hara, and H. Inoue, “A stereo viewer based on a single camera with view-control mechanisms,” IEEE/RSJ International Conference on Intelligent Robots & Systems (1993), pp. 1857–1865.

17. L. Yu, R. Tao, and G. Lubineau, “Accurate 3D shape, displacement and deformation measurement using a smartphone,” Sensors 19(3), 719 (2019). [CrossRef]  

18. W. B. Ng and Y. Zhang, “Stereoscopic imaging and computer vision of impinging fires by a single camera with a stereo adapter,” Int. J. Imaging Syst. Technol. 15(2), 114–122 (2005). [CrossRef]  

19. S. Nijdam, J. S. Moerman, T. M. P. Briels, E. M. van Veldhuizen, and U. Ebert, “Stereo-photography of streamers in air,” Appl. Phys. Lett. 92(10), 101502 (2008). [CrossRef]  

20. T. Xue, L. Qu, and B. Wu, “Matching and 3-D reconstruction of multibubbles based on virtual stereo vision,” IEEE Trans. Instrum. Meas. 63(6), 1639–1647 (2014). [CrossRef]  

21. B. Pan, D. F. Wu, and Y. Xia, “An active imaging digital image correlation method for deformation measurement insensitive to ambient light,” Opt. Laser Technol. 44(1), 204–209 (2012). [CrossRef]  

22. L. Yu and B. Pan, “Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation,” Opt. Lasers Eng. 87, 120–128 (2016). [CrossRef]  

23. E. López-Alba, L. Felipe-Sesé, S. Schmeer, and F. A. Díaz, “Optical low-cost and portable arrangement for full field 3D displacement measurement using a single camera,” Meas. Sci. Technol. 27(11), 115901 (2016). [CrossRef]  

24. X. Shao, M. M. Eisa, Z. Chen, S. Dong, and X. He, “Self-calibration single-lens 3D video extensometer for high-accuracy and real-time strain measurement,” Opt. Express 24(26), 30124–30138 (2016). [CrossRef]  

25. J. Zhu, J. Yang, Y. Li, and S. Ye, “Study on structure precision of single camera stereo vision measurement,” Sci. Tech. Eng. 7(17), 4278–4282 (2007).

26. F. Zhou, Y. Wang, B. Peng, and Y. Cui, “A novel way of understanding for calibrating stereo vision sensor constructed by a single camera and mirrors,” Measurement 46(3), 1147–1160 (2013). [CrossRef]  

27. Y. Cui, F. Zhou, Y. Wang, L. Liu, and H. Gao, “Precise calibration of binocular vision system used for vision measurement,” Opt. Express 22(8), 9134–9149 (2014). [CrossRef]  

28. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

29. F. Devernay and O. Faugeras, “Straight lines have to be straight,” Mach. Vis. Appl. 13(1), 14–24 (2001). [CrossRef]  

30. S. C. Becker and V. M. Bove, Jr., “Semiautomatic 3D-model extraction from uncalibrated 2D-camera views,” Proc. SPIE Visual Data Exploration and Analysis II (1995), pp. 447–461.

31. M. A. Penna, “Camera calibration: a quick and easy way to determine the scale factor,” IEEE Trans. Pattern Anal. Mach. Intell. 13(12), 1240–1245 (1991). [CrossRef]  

32. A. A. Magill, “Variation in distortion with magnification,” J. Opt. Soc. Am. 45(3), 148–149 (1955). [CrossRef]  

33. D. C. Brown, “Close–range camera calibration,” Photogramm. Eng. 37, 855–866 (1971).

34. J. G. Fryer and D. C. Brown, “Lens distortion for close–range photogrammetry,” Photogramm. Eng. Remote Sensing 52(1), 51–58 (1986).

35. C. S. Fraser and M. R. Shortis, “Variation of distortion within the photographic field,” Photogramm. Eng. Remote Sensing 58(2), 851–855 (1992).

36. J. Dold, “Ein hybrides photogrammetrisches Industriemesssystem höchster Genauigkeit und seine Überprüfung,” Ph.D. thesis, Universität der Bundeswehr München (1997).

37. P. Brakhage, G. Notni, and R. Kowarschik, “Image aberrations in optical three–dimensional measurement systems with fringe projection,” Appl. Opt. 43(16), 3217–3223 (2004). [CrossRef]  

38. T. Hanning, “High precision camera calibration with a depth dependent distortion mapping,” IASTED international conference on visualization, imaging, and image processing (VIIP) (2008), pp. 304–309.

39. L. Alvarez, L. Gómez, and J. R. Sendra, “Accurate depth dependent lens distortion models: an application to planar view scenarios,” J. Math. Imaging Vis. 39(1), 75–85 (2011). [CrossRef]  

40. P. Sun, N. Lu, and M. Dong, “Modelling and calibration of depth–dependent distortion for large depth visual measurement cameras,” Opt. Express 25(9), 9834–9847 (2017). [CrossRef]  

41. X. Li, W. Liu, Y. Pan, J. Ma, and F. Wang, “A knowledge-driven approach for 3D high temporal–spatial measurement of an arbitrary contouring error of CNC machine tools using monocular vision,” Sensors 19(3), 744 (2019). [CrossRef]  

42. X. Li, W. Li, X. Yuan, X. Yin, and X. Ma, “DoF-dependent and equal-partition based lens distortion modeling and calibration method for close-range photogrammetry,” Sensors 20(20), C1 (2020). [CrossRef]  

43. R. T. Chin, H. K. Wan, D. L. Stover, and R. D. Iverson, “A one-pass thinning algorithm and its parallel implementation,” CVGIP 40(1), 30–40 (1987). [CrossRef]  

44. D. G. Lowe, “Object recognition from local scale–invariant features,” in IEEE International Conference on Computer Vision (ICCV) (1999), pp. 51–65.

45. K. Deb, A. Pratap, and S. Agarwal, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Trans. Evol. Comput. 6(2), 182–197 (2002). [CrossRef]  

46. ISO 9283, “Manipulating industrial robots—performance criteria and related test methods.”

47. GB/T 12642-2013, “Industrial robots—performance specifications and test methods.”





