Abstract

Bundle adjustment (BA) is a common estimation algorithm that is widely used in machine vision as the last step of a feature-based three-dimensional (3D) reconstruction algorithm. BA is essentially a non-convex non-linear least-squares problem that simultaneously solves for the 3D coordinates of all the feature points describing the scene geometry, as well as the parameters of the camera. Conventional BA treats each parameter either as a fixed value or as an unconstrained variable, depending on whether the parameter is known. In cases where the known parameters are inaccurate but constrained to a range, conventional BA produces an incorrect 3D reconstruction by using these parameters as fixed values. Alternatively, these inaccurate parameters can be treated as unknown variables, but this ignores the knowledge of the constraints, and the resulting reconstruction can be erroneous since the non-convex BA optimization may halt at a dramatically incorrect local minimum. In many practical 3D reconstruction applications, range constraints on unknown variables are usually available, such as a measurement with a range of uncertainty or a bounded estimate. Thus, to better utilize these pre-known, constrained, but inaccurate parameters, a bound constrained bundle adjustment (BCBA) algorithm is proposed, developed, and tested in this study. A scanning fiber endoscope (the camera) is used to capture a sequence of images above a surgery phantom (the object) of known geometry. 3D virtual models are reconstructed from these images and then compared with the ground truth. The experimental results demonstrate that BCBA can achieve a more reliable, rapid, and accurate 3D reconstruction than conventional bundle adjustment.

© 2015 Optical Society of America

1. Introduction

3D reconstruction is a process to retrieve the geometry and appearance of real objects or scenes, which can be achieved by two main categories of methods: active and passive [1]. A common active method uses an artificial light source, such as projecting structured light of a known pattern onto an object and then recovering the depth map from the reflectance image [2–5]. Passive methods require multiple overlapping images of a normally illuminated object without any interfering artificial light, such as stereo vision [6, 7] and structure from motion (SfM) [8–10]. Typically in SfM, multiple images captured during relative motion of camera and object are used for feature-based 3D reconstruction of the object or scene.

Bundle adjustment (BA) is an optimization technique that simultaneously refines the camera parameters (focal length, center pixel, distortion, position or/and orientation), as well as the 3D coordinates of all the feature points describing the object [11]. A feature point is a specific structure in the image data, such as a corner. BA is often used as the last step of feature-based 3D reconstruction, following the prior steps of feature detection and feature matching. The cost function of BA minimizes the L2 norm of the reprojection error of each feature point in every image where it is visible. Reprojection error is the pixel distance between the observed 2D position of each feature and the calculated reprojection of its reconstructed 3D point, based on the current estimates of the camera and object parameters [11]. This optimization can be expressed as a large-scale, non-convex, non-linear, real-valued least-squares problem [12], which usually consumes a significant share of the computation time of the whole 3D reconstruction process. In the past decade, many approaches have been developed to make BA as efficient as possible [13–16]. Moreover, the non-convexity of BA gives rise to multiple minima, so it is difficult to find the global minimum without good initialization.

Constraints are logical conditions that bound the estimation within allowable error, reflecting the real-world restriction of tolerancing. Such bound constraints can be used as direct prior knowledge in a 3D reconstruction algorithm. In most practical cases, general constraints on the camera and object parameters are retrievable. For example, the camera should be inside the hollow body of the reconstructed model in the case of defect detection on a pipeline's internal surface (location constraints); the shape of a football field is oval-like but planar (shape constraints); the size of a cell should be on the order of micrometers, not centimeters (size constraints); and other constraints are obvious in real-world applications. Even for 3D reconstructions without any case-specific information, there are constraints that can be utilized but have been ignored: the focal length of a camera is positive; all the object points should be in front of the camera instead of behind it; the field of view (FOV) angle is positive but less than 180 degrees. Moreover, in cases that use a well-calibrated camera, the camera parameters are treated as known values without variation. Actually, these calibrated parameters are not exact but lie within a range of uncertainty [17]. Taking these parameters as fixed values may cause an inaccurate 3D reconstruction. Considering that the uncertainty can be bracketed by lower and upper values, called a bound constraint, these constraints can be utilized as prior knowledge to improve the 3D reconstruction algorithm and generate more reliable and accurate models.

Previous work on constraining BA to achieve a more robust and realistic solution has been limited, and did not include the more general approach of a bound constraint condition. Wong and Chang constrained the distance between camera and object to the same value for all frames, reducing it to a single parameter and thereby the number of unknown variables in the BA algorithm [18]. With a similar concept, Albl and Pajdla constrained the camera positions to a circular trajectory for cases with a revolving camera [19]. These two constrained BA approaches both achieve more accurate 3D reconstruction with higher efficiency. However, they require that either the object or the camera rotates around a fixed axis while the other is stationary, which limits the generalizability of these two constrained BA algorithms [18, 19]. For the 3D reconstruction of planar structures such as floors and walls, planarity or multi-planarity constraints were applied to produce an accurate model [20–22], but again these applications are limited to the particular constraint of planarity. Cohen et al. improved the accuracy and efficiency of 3D reconstruction of symmetric and repetitive urban scenes by applying symmetric structure constraints [23], but the applications were limited to symmetric geometries. For the 3D reconstruction of the internal surface of an organ from medical images containing insufficient features, Soper et al. constrained the geometric model of the bladder to a spherical shape in the initial step and then relaxed this shape constraint for subsequent model refinement [24]. This proper initialization reduces the influence of non-convexity in finding the correct minimum and generates an accurate 3D reconstruction. It is essentially an approach to reach the global minimum by choosing an initial guess that is closer to the global minimum. However, it does not guarantee that this initial guess is good enough to eliminate the deleterious effects of a nearby local minimum.

None of the constrained BA algorithms mentioned above is generally applicable beyond the strict requirements imposed by its specific application. They achieved higher accuracy and efficiency essentially by reducing the number of variables for each of their specific cases.

The conventional BA algorithm takes a parameter either as known (fixed value) or as an unknown variable (unconstrained). The former case may result in an incorrect 3D reconstruction when the known parameters are inaccurate [25]. The latter case may cause erroneous estimation of the camera and object parameters, since the optimization gets trapped in a dramatically wrong local minimum. In this study, a new approach is proposed to solve this 3D reconstruction problem with bound constrained parameters. We call it the bound constrained bundle adjustment (BCBA) algorithm. The main significance of this new algorithm is that it achieves a reliable and accurate 3D reconstruction efficiently by taking advantage of known but inaccurate parameters, such as the intrinsic or extrinsic parameters of the camera, or the general shape of the object. Figure 1 shows the schematic diagram of an applicable case of BCBA. A sequence of images is captured by inaccurate cameras around a static object. The term “inaccurate” denotes cameras whose intrinsic parameters, positions, or/and orientations are not exactly known but lie within a known range.


Fig. 1 Conceptual diagram of an applicable case of BCBA where a sequence of images are captured by inaccurate cameras around a stationary object.


The motivation for solving this problem is to reconstruct an accurate 3D model of the surgical field for an autonomous image-guided surgical robotic system [25]. In our case, an endoscope (camera) was moved around by an ill-calibrated robot arm to capture images of the surgical field. The camera position and orientation could be obtained approximately from the robotic system; they were not accurate, but lay within a measurement range. Using this inaccurate information directly with conventional BA would result in an erroneous 3D model, which can be avoided by utilizing the suitable bounds with BCBA.

In this study, each step of the conventional BA algorithm is presented, and the modifications made to use bound constrained parameters are listed and discussed. Following the presentation of the constrained algorithm, an experiment is conducted comparing the conventional BA algorithm with the proposed BCBA algorithm for reconstructing the surgery phantom.

2. Methodology

The BCBA algorithm proposed in this study is an extension of traditional BA, which is essentially a non-linear real-valued least-squares problem. The Levenberg-Marquardt (LM) algorithm is widely used as the standard tool to solve BA due to its efficient damping strategy and ease of implementation [26]. Instead of directly tackling the nonlinear problem, the LM algorithm iteratively solves a sequence of linear least-squares problems. From the optimization perspective, it can be viewed as a combination of the Gauss-Newton method and the steepest descent method: the closer the current iterate approaches the local minimum, the more the LM algorithm behaves like the Gauss-Newton method. This adjusting strategy allows LM to achieve a better tradeoff between the stability of the steepest descent method and the faster convergence of the Gauss-Newton method. Our BCBA algorithm inherits this advantage from the LM algorithm.

Given an object with a set of n 3D feature points {Q1, Q2, …, Qn} in world coordinates, a sequence of m images {I1, I2, …, Im} is captured by a camera at different locations and orientations. Let Rj represent the rotation matrix and tj the translation vector of the camera at Ij. Then the 3D position Pij = [Xij, Yij, Zij]T of a feature point Qi with respect to the camera coordinates of Ij is given by Pij = RjQi + tj. The calculated pixel location of its projection on Ij is pij = [xij, yij]T, see Fig. 1, which can be computed by

$$\omega\begin{bmatrix}u_{ij}\\ v_{ij}\\ 1\end{bmatrix}=K\begin{bmatrix}R_j & t_j\end{bmatrix}\begin{bmatrix}Q_i\\ 1\end{bmatrix},\qquad \begin{aligned} x_{ij}&=u_{ij}\left(1+k_1\rho^2+k_2\rho^4\right)\\ y_{ij}&=v_{ij}\left(1+k_1\rho^2+k_2\rho^4\right)\\ \rho^2&=u_{ij}^2+v_{ij}^2 \end{aligned}\tag{1}$$
where ω = Zij, and K is the camera calibration matrix containing the focal lengths (fx, fy) and the center pixel (cx, cy). Coefficients (k1, k2) are the first- and second-order radial distortion coefficients of the real lens. Since a single camera was used in this study, the calibration matrix K and the distortion coefficients (k1, k2) are the same for all images.
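As a concrete sketch of this projection model (our own Python illustration with assumed function and variable names; note that the model as written applies the radial distortion to the projected coordinates (u, v)):

```python
import numpy as np

def project_point(Q, R, t, K, k1, k2):
    """Project a 3D world point Q into distorted pixel coordinates.

    Q : (3,) world point; R : (3,3) rotation; t : (3,) translation;
    K : (3,3) calibration matrix; k1, k2 : radial distortion coefficients.
    """
    P = R @ Q + t                      # point in camera coordinates; P[2] is the depth
    uvw = K @ P                        # homogeneous pixel coordinates, w = Z
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    rho2 = u**2 + v**2                 # squared radial distance
    d = 1.0 + k1 * rho2 + k2 * rho2**2
    return np.array([u * d, v * d])    # distorted pixel location [x, y]
```

With zero distortion this reduces to the standard pinhole projection; nonzero k1, k2 scale the coordinates radially.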

Let the observed position of the feature be p̂ij = [x̂ij, ŷij]T; then the reprojection error r = p − p̂ is calculated and rearranged as a column vector of size M, where M depends on the number of views in which the features are visible. The reprojection error r is essentially a function of N = 3n + 6m + 6 variables: the 3D coordinates of the feature points (3n), the camera positions and orientations (6m), and the camera intrinsic parameters (6). Denote all N variables as s. As mentioned before, information on the lower and upper bounds of s is often available. We would like to exploit this additional bound information to better minimize the total reprojection error.

Given the bound constraints of parameters, the BCBA optimization problem can be written as

$$\begin{aligned}\text{Objective:}\quad & \min f(s)=\tfrac{1}{2}\,r(s)^{T}r(s)\\ \text{Subject to:}\quad & l_i \le s_i \le u_i;\quad i=1,2,3,\ldots,N\end{aligned}\tag{2}$$

In most cases of feature-based 3D reconstruction, N can reach several thousand or more. l and u are vectors of the same size as s, representing the lower and upper bounds constraining s, respectively. For variables with no bound defined, we set −∞ and +∞ as the lower and upper bound, respectively.

2.1. Bundle adjustment

The optimization problem of Eq. (2) without the constraints is the conventional BA problem. Its objective function f(s) is non-linear and can be approximated by a local linear model [11], such as a quadratic Taylor series expansion of f(s). For a small step δ,

$$f(s+\delta)\approx f(s)+g(s)^{T}\delta+\tfrac{1}{2}\,\delta^{T}H(s)\,\delta,\qquad g(s)\equiv\frac{df}{ds}(s),\qquad H(s)\equiv\frac{d^{2}f}{ds^{2}}(s)\tag{3}$$
where g(s) and H(s) are the gradient vector and Hessian matrix of f (s), respectively. This approximation model of Eq. (3) is a simple quadratic with a unique global minimum that can be calculated explicitly.

The Gauss-Newton method solves for the step δ by repeatedly solving the following linear system:

$$\bar{H}(s)\,\delta=-g(s)\tag{4}$$
where the N × N matrix H̄(s) is the Gauss-Newton approximation of the Hessian matrix H(s) of f(s). At each iteration, s is updated as s ← s + δ if the cost satisfies f(s + δ) < f(s). Let J(s) denote the M × N Jacobian matrix of r(s), given by Jij = ∂ri/∂sj. Then the gradient can be computed as g(s) = J(s)T r(s), and H̄(s) = J(s)T J(s) is obtained by ignoring the second-derivative terms. It is easy to see that H̄(s) is a symmetric matrix. For now, assume that H̄(s) is positive definite.
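The Gauss-Newton quantities above can be sketched in a few lines (our own Python illustration, not the paper's MATLAB implementation; a dense solve stands in for the structured solvers used in real BA):

```python
import numpy as np

def gauss_newton_step(J, r):
    """One Gauss-Newton step for min 0.5 * ||r(s)||^2.

    J : (M, N) Jacobian of the residual r at the current s.
    r : (M,) residual vector.
    Returns the step delta solving (J^T J) delta = -J^T r.
    """
    g = J.T @ r            # gradient of f(s) = 0.5 * r^T r
    H_bar = J.T @ J        # Gauss-Newton approximation of the Hessian
    return np.linalg.solve(H_bar, -g)
```

For a linear residual r(s) = As − b, a single step lands on the least-squares solution, mirroring the quadratic model being exact in that case.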

To solve for the step δ in the Gauss-Newton system of Eq. (4), the inverse of H̄(s) needs to be applied. Since N can be very large, computing the inverse of H̄(s) is often prohibitively expensive. Therefore, it is desirable to first reduce the size of the linear system. Among the potential methods, the Schur complement method is typically used in the BA algorithm to split the original linear system into two smaller linear systems by Gaussian elimination.

Denote the parameter column vector s = [sc, sp], where sc and sp are the camera and point parameters, respectively. Similarly, the subscripts “c” and “p” are used to denote camera and point parameters in the other notations: J(s), g(s), and δ. Then H̄(s) can be expressed as a block matrix:

$$\bar{H}(s)=\begin{bmatrix}\bar{H}_{cc} & \bar{H}_{cp}\\ \bar{H}_{pc} & \bar{H}_{pp}\end{bmatrix}=\begin{bmatrix}J_c^{T}J_c & J_c^{T}J_p\\ J_p^{T}J_c & J_p^{T}J_p\end{bmatrix}.\tag{5}$$

Since H̄(s) is positive definite, H̄cc and H̄pp are also positive definite, being principal submatrices of H̄(s) [27]. Equation (4) can then be rewritten as:

$$\begin{bmatrix}\bar{H}_{cc} & \bar{H}_{cp}\\ \bar{H}_{pc} & \bar{H}_{pp}\end{bmatrix}\begin{bmatrix}\delta_c\\ \delta_p\end{bmatrix}=-\begin{bmatrix}g_c\\ g_p\end{bmatrix}.\tag{6}$$

By separating the camera and point parameters using Schur complement, the solution δ = [δcp] of Eq. (4) can be calculated in the form of reduced camera system:

$$\begin{aligned}\left(\bar{H}_{cc}-\bar{H}_{cp}\bar{H}_{pp}^{-1}\bar{H}_{pc}\right)\delta_c &= \bar{H}_{cp}\bar{H}_{pp}^{-1}g_p-g_c\\ \delta_p &= -\bar{H}_{pp}^{-1}\left(g_p+\bar{H}_{pc}\,\delta_c\right)\end{aligned}\tag{7}$$

Note that δc is computed first because the number of camera parameters is usually much smaller than the number of point parameters. Once δ = [δc, δp] is solved, the parameter vector is updated if the update leads to a smaller cost f(s).
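The Schur-complement elimination can be sketched as follows (a hedged Python illustration with dense matrices; in real BA, H̄pp is block-diagonal with one 3×3 block per point, so its inverse is cheap, and H̄pc = H̄cpᵀ by symmetry):

```python
import numpy as np

def schur_solve(H_cc, H_cp, H_pp, g_c, g_p):
    """Solve the block system of Eq. (6) by eliminating the point parameters.

    A dense inverse of H_pp stands in for the cheap block-diagonal inverse
    used in real bundle adjustment; H_pc is taken as H_cp.T (symmetry).
    """
    H_pp_inv = np.linalg.inv(H_pp)
    # Reduced camera system: solve for the camera update first.
    S = H_cc - H_cp @ H_pp_inv @ H_cp.T          # Schur complement of H_pp
    rhs = H_cp @ H_pp_inv @ g_p - g_c
    delta_c = np.linalg.solve(S, rhs)
    # Back-substitute for the point update.
    delta_p = -H_pp_inv @ (g_p + H_cp.T @ delta_c)
    return delta_c, delta_p
```

The result matches a direct solve of the full system, but only a system of the size of the camera block is factorized.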

The LM algorithm can be considered as an interpolation between the Gauss-Newton method and the steepest descent method. The steepest descent method is given by D′δ = −g(s), where D′ is an N × N diagonal matrix of positive constants. Thus, LM can be written as

$$B\,\delta=-g(s),\qquad \text{with } B=\bar{H}(s)+\lambda D\tag{8}$$

where D is the diagonal matrix of H̄. The damping factor λ is a non-negative value, adjusted at each iteration. A larger λ brings the algorithm closer to the steepest descent method, while the algorithm behaves more like the Gauss-Newton method as λ becomes smaller.

Similarly, to calculate the step at each iteration of the LM algorithm, the Schur complement is applied to Eq. (8) by splitting B into four sub-matrices corresponding to the camera and point parameters, and δ is calculated by Eq. (7). Without considering the bound constraints of the optimization problem of Eq. (2), the update s ← s + δ is accepted if the updated s leads to a smaller f(s).

The non-negative damping factor λ is adjusted in each iteration, starting from a pre-defined initial value λ0. If the current update of s leads to a cost reduction, a smaller λ is used to bring the algorithm closer to the Gauss-Newton method, with its faster convergence. If an iteration does not reduce the cost, a larger λ is chosen, giving a step closer to the gradient descent direction, which is slow but guaranteed to converge [26]. Once λ exceeds a pre-defined threshold λmax (usually very large), or the cost reduction falls below a threshold ∆fmin, the local minimum is considered found. Conventional bundle adjustment repeats the updates of s and λ in Eq. (8) until reaching the local minimum.
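The damping schedule just described can be sketched as a compact LM loop (our own illustration; the factor-of-10 updates and the default thresholds are assumed values, not the paper's):

```python
import numpy as np

def lm_minimize(residual, jacobian, s, lam0=1e-3, lam_max=1e10, df_min=1e-12):
    """Minimal Levenberg-Marquardt loop for min 0.5 * ||residual(s)||^2.

    residual(s) -> (M,), jacobian(s) -> (M, N). lam is decreased after a
    successful step and increased after a rejected one.
    """
    lam = lam0
    f = 0.5 * residual(s) @ residual(s)
    while lam < lam_max:
        J, r = jacobian(s), residual(s)
        H_bar = J.T @ J
        B = H_bar + lam * np.diag(np.diag(H_bar))   # B = H_bar + lam * D
        delta = np.linalg.solve(B, -J.T @ r)
        f_new = 0.5 * residual(s + delta) @ residual(s + delta)
        if f_new < f:                   # accept: behave more like Gauss-Newton
            if f - f_new < df_min:
                return s + delta        # cost reduction below threshold: done
            s, f, lam = s + delta, f_new, lam / 10
        else:                           # reject: move toward steepest descent
            lam *= 10
    return s                            # lam exceeded lam_max
```

On a linear least-squares residual this converges to the normal-equations solution within a few iterations.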

2.2. Bound constrained bundle adjustment (BCBA)

BA can refine the camera and 3D structure parameters simultaneously in an efficient and stable way by utilizing the LM algorithm. However, it considers parameters either as known data (fixed value) or as unknown variables (unconstrained). Conventional BA cannot take advantage of known but inaccurate information, which leads to the bound constraints that are commonly available in practical applications. We now address this problem by modifying the BA algorithm described above. Before the discussion, some preliminary tools are needed.

Firstly, we define the projection of the parameter vector s onto the feasible set with [l,u] bound as a function:

$$\mathrm{proj}(s)=\min\{\max\{l,s\},\,u\}.\tag{9}$$
Here max{a, b} is the vector whose i-th entry is max{ai, bi}, and min{a, b} is the vector whose i-th entry is min{ai, bi}.
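In code, this projection is a single elementwise clip (a trivial sketch; NumPy's `np.clip` computes the same quantity):

```python
import numpy as np

def proj(s, l, u):
    """Project s onto the feasible box [l, u], elementwise."""
    return np.minimum(np.maximum(l, s), u)
```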

Secondly, an active set A(s) for s ∈ [l,u] is defined. A(s) contains the parameters at which either the upper or the lower bound is tight and the update direction drives the parameter outward across the bound. Consider the following cases:

  • Case 1: si ∈ (li,ui)

    The computation of gi is not constrained by the bounds, and the same holds for J(si). The calculation of δ can be treated as an unconstrained problem.

  • Case 2: si < li or si > ui

    The si value will be projected onto the bounds of the feasible set by Eq. (9).

  • Case 3: si = li

    If gi > 0, the updated si would be smaller than li and is therefore restricted by the bound; if gi ≤ 0, the updated si will be larger than li, falling inside the feasible set, and thus is not affected by the constraints.

  • Case 4: si = ui

    We have the opposite observation to Case 3: the constraint is active if gi < 0, but inactive if gi ≥ 0.

Finally, the active set for the bound constrained problem s ∈ [l,u] is given below. The complementary set of A(s) is called the inactive set and is denoted by I(s).

$$A(s)=\{\,i \;|\; (s_i=l_i \text{ and } g_i>0)\ \text{or}\ (s_i=u_i \text{ and } g_i<0)\,\}\tag{10}$$

The gradient projection method [28], a common method for solving constrained optimization problems, is then applied in this algorithm. The projected gradient of f(s) in the feasible set is a vector of size N, denoted by ĝ(s):

$$\hat{g}_i=\begin{cases}g_i, & i\in I(s)\\ 0, & \text{otherwise}\end{cases}\tag{11}$$

The projected gradient equals the unconstrained gradient for parameters whose constraints are inactive. For parameters whose constraints are active, the projected gradient is forced to zero to keep those parameters at one of their bounds.
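The active set and the projected gradient can be computed elementwise (a sketch with assumed array conventions; the exact equality tests against the bounds mirror the definitions, while a practical implementation might use a tolerance):

```python
import numpy as np

def active_set(s, g, l, u):
    """Boolean mask of the active set: tight bound plus outward gradient."""
    return ((s == l) & (g > 0)) | ((s == u) & (g < 0))

def projected_gradient(s, g, l, u):
    """Projected gradient: zero on the active set, unchanged elsewhere."""
    g_hat = g.copy()
    g_hat[active_set(s, g, l, u)] = 0.0
    return g_hat
```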

One more component of Eq. (8) must be modified for the BCBA algorithm: the approximation of the Hessian matrix H̄(s), for which we define a reduced Hessian matrix. The reduced Hessian is constructed as follows. Since the parameters in the active set A(s) are fixed (so ∂f/∂si = 0) and the projected gradient of the active parameters is set to zero, we zero out all rows and columns of the Hessian matrix H̄ corresponding to the active parameters; that is, we set H̄ij = 0 if either i or j is in A(s). However, we add the diagonal entries H̄ii, i ∈ A(s), back, so that all diagonal entries of H̄ are retained in the reduced Hessian. It can easily be verified that this does not change the gradient update, but it later makes it easier to control the positive definiteness of the reduced Hessian matrix. Finally, we define the reduced Hessian matrix Ĥ as

$$\hat{H}_{ij}=\begin{cases}\bar{H}_{ij}, & i\in I(s) \text{ and } j\in I(s)\\ \bar{H}_{ii}, & i=j\in A(s)\\ 0, & \text{otherwise}\end{cases}\tag{12}$$
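Constructing the reduced Hessian amounts to zeroing the active rows and columns and then restoring the diagonal (our own sketch):

```python
import numpy as np

def reduced_hessian(H_bar, active):
    """Zero the rows and columns of the active parameters in H_bar,
    then restore every diagonal entry, so the diagonal of H_bar survives."""
    H_hat = H_bar.copy()
    H_hat[active, :] = 0.0
    H_hat[:, active] = 0.0
    H_hat[np.diag_indices_from(H_hat)] = np.diag(H_bar)  # keep all H_ii
    return H_hat
```

Keeping the diagonal is what preserves positive definiteness when H̄ is positive definite, since the result is block-diagonal: the inactive principal submatrix plus positive diagonal entries.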

With the reduced Hessian matrix and the projected gradient vector, Eq. (8) of the LM algorithm is modified as follows for the bound constraints:

$$\hat{B}\,\delta=-\hat{g}(s),\qquad \text{with } \hat{B}=\hat{H}(s)+\lambda\hat{D}\tag{13}$$

where D̂ is the diagonal matrix of Ĥ(s), which is also equal to the diagonal matrix D. Similar to the process for Eq. (4), the Schur complement is used to calculate the step δ in Eq. (13). The update s ← proj(s + δ) is accepted once it leads to a reduction in the cost function f(s).

As in the BA algorithm, the non-negative damping factor λ in BCBA is adjusted in each iteration: a smaller λ is chosen if the current iteration reduces the cost, while a larger λ is used to guarantee convergence to a local minimum under the bound constraints.
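Putting the pieces together, one BCBA trial step can be sketched as below (a dense illustration without the Schur complement, for brevity; the caller accepts the returned vector only if it lowers the cost, adjusting λ as described):

```python
import numpy as np

def bcba_step(J, r, s, l, u, lam):
    """One bound-constrained LM trial step.

    J : (M, N) Jacobian; r : (M,) residual; s : (N,) current parameters;
    l, u : (N,) bounds; lam : damping factor. Returns proj(s + delta).
    """
    g = J.T @ r
    H_bar = J.T @ J
    # Active set: tight bound and gradient pointing outward.
    active = ((s == l) & (g > 0)) | ((s == u) & (g < 0))
    g_hat = np.where(active, 0.0, g)                 # projected gradient
    H_hat = H_bar.copy()                             # reduced Hessian
    H_hat[active, :] = 0.0
    H_hat[:, active] = 0.0
    H_hat[np.diag_indices_from(H_hat)] = np.diag(H_bar)
    B_hat = H_hat + lam * np.diag(np.diag(H_hat))    # damped system
    delta = np.linalg.solve(B_hat, -g_hat)
    return np.minimum(np.maximum(l, s + delta), u)   # project onto [l, u]
```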

2.3. Convergence of BCBA

Because we have assumed that H̄(s) = J(s)T J(s) is positive definite, it is easy to see that Ĥ(s) is also positive definite based on the modification of Eq. (12) [27]. D̂, the diagonal matrix of Ĥ(s), is also positive definite. Thus B̂ in Eq. (13) is positive definite regardless of how the damping factor λ changes. Notice that the right-hand side of Eq. (13) is actually the steepest descent direction. Hence, the proposed algorithm generates a sequence of parameter vectors s that converges to a local minimum.

3. Experiment and result

To demonstrate the feasibility of the constrained algorithm, an experiment was performed to compare the 3D reconstruction results of conventional BA and BCBA. In this study, a Scanning Fiber Endoscope (SFE) [29] was used to capture a sequence of images above a surgery phantom (the object) of known geometry. The SFE is a tiny flexible endoscope of only 1.6 mm outer diameter, see Fig. 2(a), which makes it suitable for Minimally Invasive Surgery (MIS). It generates 30 Hz high-resolution color video with a wide FOV by scanning a single beam of red (635 nm), green (532 nm), and blue (444 nm) laser light. To validate and compare the two 3D reconstruction algorithms, an object with pre-known geometry was required. In this study, a 3D model was designed and 3D-printed to mimic the surgical field [30], see Figs. 2(b) and 2(c). This reconstruction of the surgical field aims to guide the autonomous surgical robot by obtaining accurate 3D coordinates of residual tumor within the surgical field [31].


Fig. 2 The experiment setup. (a) The SFE of 1.6 mm outer diameter with a camera frame rate of 30 Hz; (b) the CAD design of the object: a spherical dome with a maximum radius of 17.5 mm and a depth of 10 mm; (c) the 3D-printed phantom with near-realistic surgical features; (d) the experiment setup on a micro-positioning stage, which provides highly accurate camera positions with an accuracy of 0.01 mm.


In this experiment, the SFE was attached to a micro-positioning stage, from which the position of the camera was obtained. The orientation of the camera was set perpendicularly downward and kept the same throughout the experiment. Figure 2 shows the SFE, the CAD design, the printed phantom, and the experiment setup. The distance between the SFE and the top surface of the phantom was about 15 mm. Note that the BCBA algorithm is independent of the choice of camera and object; it can be applied as long as the bound constraints exist.

In the experiment, 16 images (of size 608 × 608 pixels) were captured above the phantom using the SFE. The position of the SFE was read from the micro-positioning stage with high accuracy and can be considered ground truth. To simulate known but inaccurate information, noise uniformly distributed on the interval [−0.25, 0.25] mm was added to the X, Y, Z coordinates of the camera positions; the same interval serves as the pre-known bound constraint when using the BCBA algorithm. Other constraints were also pre-known: a rough camera calibration concluded that the FOV was in the range of [50, 56] degrees; the surface geometry of the object was 10 mm in depth; and the distance from the object to the camera was 15 mm. Thus, the depth of the object points should be in the range of [15, 25] mm. However, for this experiment, only the constraints on camera position were used for BCBA. The other pre-known parameters were used to evaluate the accuracy and reliability of the 3D reconstruction result.
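The noise simulation and the resulting bounds can be sketched as follows (illustrative values only; the stage positions shown are hypothetical, not the experiment's data). Because the added noise never exceeds ±0.25 mm, an interval of that half-width around each noisy reading is guaranteed to contain the true position:

```python
import numpy as np

# Hypothetical ground-truth stage readings (mm), one row per camera pose.
rng = np.random.default_rng(0)                      # fixed seed for repeatability
true_positions = np.array([[0.0, 0.0, 15.0],
                           [1.0, 0.0, 15.0],
                           [2.0, 0.0, 15.0]])
# Uniform noise on [-0.25, 0.25] mm simulates "known but inaccurate" readings.
noisy = true_positions + rng.uniform(-0.25, 0.25, size=true_positions.shape)
# The same half-width defines the bound vectors handed to BCBA.
lower, upper = noisy - 0.25, noisy + 0.25
```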

Our method was implemented in MATLAB, running on a Dell Precision M4700 workstation with a 2.7 GHz Intel i7-3740QM CPU and 20.0 GB of memory in a 64-bit Windows operating system.

As in conventional BA, features were detected in each image and matched with their correspondences in the other images. In this study, the scale-invariant feature transform (SIFT) algorithm [32] was applied to find feature points in each SFE image. Figure 3 shows two endoscopic frames with image overlap, with the matching features connected by colored lines. This experiment generated 2,250 feature points and 5,689 point-to-image reprojections in total from the 16 frames.


Fig. 3 Matching features in a pair of frames with image overlap, connected by colored lines. One tenth of the matching features were randomly chosen for better visualization.


After adding the same set of random uniform noise (∈ [−0.25, 0.25] mm) to the accurate camera positions, both the BA and BCBA optimizations were performed with the same initialization. During BA, the reconstructed 3D model shrank from an initial spherical-like shape to a flat surface in the final optimization iteration, see Figs. 4(a)–4(d), whereas during BCBA the reconstructed points remained bounded in a range and quickly stabilized to a spherical shape, Figs. 4(e)–4(h). Note that the reconstructed point clouds at the first iteration of BA and BCBA are the same, Figs. 4(a) and 4(e), since the initial guess of the parameter vector was identical for both algorithms. Figures 4(d) and 4(h) are the final reconstructed 3D points of BA and BCBA, respectively.


Fig. 4 The procedures of BA and BCBA, respectively. (a)-(d) show the reconstructed 3D point cloud of BA shrinking to a flat geometry. (e)-(h) show that only minor adjustment of the 3D points occurred in the BCBA optimization process due to the bound constraints.


Figure 5(a) shows that BA resulted in a flat surface that is very different from the real object's spherical concave surface. In addition, the reconstructed 3D points are located around a depth of 10 mm, which deviates dramatically from the experimental setup distance of 15 mm. In contrast, the BCBA algorithm achieves a much better 3D reconstruction, with most points distributed in a spherical concave dome shape, see Fig. 5(d). Furthermore, 97.56% of the reconstructed object points are located in the range of [15, 25] mm, the range obtained from the experiment setup. Once the 3D point cloud was obtained, a thin-plate spline algorithm was applied to generate a smooth 3D surface by fitting the reconstructed 3D points [24]. Figures 5(b) and 5(e) show the reconstructed surfaces of BA and BCBA, respectively, and their corresponding depth maps are shown in Figs. 5(c) and 5(f).


Fig. 5 (Media 1) The comparison of 3D reconstruction results between BA and BCBA. (a)-(c) and (d)-(f) show the reconstructed 3D point clouds, 3D surfaces, and depth maps of BA and BCBA, respectively.


To quantify the reconstruction results, the Iterative Closest Point (ICP) algorithm [33] was employed to align and compare the reconstructed virtual models with the CAD-designed model (ground truth, see Fig. 2(b)). Figure 6 gives the comparison of the ICP error analysis of BA and BCBA. The alignments of the two point clouds are shown in Figs. 6(a) and 6(d), where the blue point cloud represents the CAD-designed model for each reconstructed surface. Qualitatively, the BA algorithm produces a radically different 3D surface that does not match the CAD model, while the BCBA result matches the CAD model very well. The distance maps between the aligned models are shown in Figs. 6(b) and 6(e), along with the histograms of the distances in Figs. 6(c) and 6(f), which show the distribution of the ICP error of each reconstructed point with respect to the CAD model. The quantitative error of BA's result was very large, and was diminished greatly by using the BCBA algorithm.


Fig. 6 The comparison of ICP error analysis between BA and BCBA. (a) and (d) show the 3D alignment of CAD point cloud and reconstructed surfaces for BA and BCBA, respectively; (b) and (e) show the distance maps of the alignment for BA and BCBA, respectively; (c) and (f) show the histograms of the distance of each reconstructed point to CAD model for BA and BCBA, respectively.


To further illustrate the significance of the proposed BCBA algorithm, a table was generated with additional comparison details of BA and BCBA. Table 1 shows the number of iterations, the time spent, the minimum error achieved, the estimated camera parameter (FOV), the estimated depth of the 3D points Qz, and the ICP error. With the bound constraints on camera position, the BCBA optimization reaches the local minimum faster than BA, and the accurate 3D reconstruction of BCBA has a much smaller ICP error. Moreover, since only the pre-known constraints of camera position were used in this study, the estimated FOV and Qz can be compared with their pre-known values to demonstrate the reliability of the constrained algorithm. For example, the estimated FOV using BA was 177.05°, which is far from the calibration value ∈ [50°, 56°]. In stark contrast, the BCBA algorithm estimates an FOV of 54.69°, which is within the experimentally determined range (Table 1).


Table 1. The experiment result comparison of BA and BCBA algorithm.

Table 1 shows that BA can reach a smaller cost value of the objective function of Eq. (2). The reason is that the corresponding local minimum is located outside the feasible region: BA can reach that local minimum, while BCBA is held back by the bound constraints, which results in a larger cost. Note that all values in Table 1 may change slightly with a different set of random noise added to the accurate camera position data used to simulate motion within a constrained range of uncertainty.

To validate the reliability of the BCBA algorithm, the same comparison was performed ten additional times with different sets of uniform noise (∈ [−0.25, 0.25] mm). Since accuracy and efficiency are the most important criteria for the potential applications of the proposed algorithm, the ICP errors of BA and BCBA are listed in Table 2, along with the computation times. The ICP error values of BA were in the range of [1.8776, 1.9072] mm with an average of 1.8903 mm and a standard deviation of 0.011 mm, while the ICP error values of BCBA were within [0.4521, 0.5343] mm with an average of 0.4955 mm and a standard deviation of 0.0306 mm. These results show that BCBA can generate more reliable and accurate 3D models than BA, with very small deviation across the ten data sets simulating real-world measurements. The computation time of both BA and BCBA varied across the ten cases. BA took [49.27, 67.34] sec, with an average of 60.61 sec and a deviation of 6.88 sec, whereas BCBA required much less computation time, in the range of [3.56, 30.61] sec, with an average of 14.40 sec and a deviation of 9.56 sec.

Table 2. Comparison of the ICP error and computation time of the BA and BCBA algorithms under different sets of noise.

From the comparison of these ten tests in Table 2, the ICP error was very stable, with a small deviation. Overall, the accuracy of BCBA was around 3.8× higher than that of the conventional BA algorithm. The time spent by both BA and BCBA was less stable, being highly dependent on the added noise. On average, the efficiency of BCBA was roughly 4× higher than that of BA across these ten tests.
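The ICP error reported throughout this comparison can be summarized, once the reconstructed cloud has been aligned to the ground truth, as a mean nearest-neighbor distance. A minimal sketch follows; the point sets, cloud sizes, and noise level are synthetic assumptions purely for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
ground_truth = rng.uniform(0.0, 10.0, size=(500, 3))          # mm
# Simulate an already-aligned reconstruction: ground truth plus small noise.
reconstruction = ground_truth + rng.normal(0.0, 0.05, size=(500, 3))

# For each reconstructed point, find the distance to the closest
# ground-truth point; their mean serves as the error summary.
tree = cKDTree(ground_truth)
dists, _ = tree.query(reconstruction)
icp_error = dists.mean()
print(f"mean nearest-neighbor error: {icp_error:.3f} mm")
```

A full ICP evaluation as in [33] also estimates the rigid alignment itself by iterating between correspondence search and transform fitting; the sketch assumes that alignment has already been done.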

4. Discussion

This paper proposed a bound constrained BA algorithm that takes advantage of known but inaccurate information about the camera-position parameters. A theoretical development was provided for the convergence property of the proposed BCBA algorithm, and the experimental results demonstrated its feasibility and reliability for a practical problem. For this specific test (Table 1), BCBA improved accuracy by 3.8× (ICP error) and efficiency by about 4× (computation time) over the conventional BA algorithm. This test simulated the application of a robotically-driven camera whose motion is defined within a range of accuracy (tolerance). In this case, only the constraints on camera positions were used for the 3D reconstruction, while the pre-known camera FOV from calibration and the object-to-camera distance Qz from the experimental setup were used to validate the estimates of the unknown parameters. To better evaluate the value and impact of BCBA, we then used the constraints on FOV and Qz for the reconstruction, relaxed the constraints on the camera positions, and replicated the experimental analysis. By bounding FOV ∈ [50°, 56°] and Qz ∈ [15, 25] mm, the comparison between the conventional BA and BCBA again demonstrates higher accuracy (3.8× smaller ICP error) and improved efficiency, as shown in Table 3. Importantly, the estimated camera positions compared with the ground truth (micro-positioning stage values) show that BCBA performed significantly better than BA. This additional test demonstrates the feasibility and versatility of BCBA for a wider range of applications.

Table 3. The comparison of BA versus BCBA with the constraints on FOV and Qz.
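The elementary operation behind enforcing such box bounds, used by projected-gradient-style methods (cf. ref. [28]), is the projection of the parameter vector onto the feasible box. A short sketch follows; the variable names are assumptions, and the numeric example reuses the FOV and Qz bounds quoted above.

```python
import numpy as np

def project_to_box(s, lower, upper):
    """Project the parameter vector s onto the box [lower, upper],
    clipping each component back into its feasible interval."""
    return np.minimum(np.maximum(s, lower), upper)

# Example with the bounds used above: FOV (deg) in [50, 56], Qz (mm) in [15, 25].
lower = np.array([50.0, 15.0])
upper = np.array([56.0, 25.0])
s = np.array([177.05, 12.0])   # hypothetical unconstrained estimate
print(project_to_box(s, lower, upper))   # -> [56. 15.]
```

Because the projection onto a box decomposes per component, it costs only one clip per parameter, which is part of why the bound constraints add little overhead per iteration.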

Compared with the conventional BA algorithm, which treats a parameter either as a known fixed value or as an unconstrained variable, BCBA achieves fewer optimization iterations, less computation time, a smaller ICP error, and more reliable parameter estimation. Furthermore, BCBA also produced good estimates of parameters that carry no constraints, such as the camera FOV in Table 1 and the camera positions in Table 3.

As an extension of our previous research [25, 30], these advantages of BCBA can be used to intraoperatively reconstruct accurate 3D virtual models of the surgical field and extract tumor coordinates, which requires highly efficient processing. They can also provide reliable estimates of camera poses for 3D image-guided surgery [31] and MIS. Beyond biomedical research, the BCBA algorithm may be applied in other fields, such as the common application of 3D reconstruction from Google Street View imagery [34]. The position of the street-view car is provided by GPS; it is inaccurate, but its error range is known in advance. Such bound constraints can be exploited to improve the accuracy and efficiency of the 3D reconstruction of the street scene, similar to the experiment performed in this paper with bound-constrained camera positions. Another application of 3D reconstruction with bound-constrained parameters could be the 3D metrology of internal threads for the quality control of automobile engine parts, which requires high reconstruction accuracy. In a preliminary study, high-accuracy robot motion was mandatory for reconstructing a dense 3D point cloud [7]. In practical applications, BCBA could be used to mitigate the deleterious effect of the inaccuracy of industrial robots.

Previous approaches to constraining the BA algorithm cannot be generalized to the examples of cameras involved in robotic MIS and 3D street-view reconstruction. Several constrained BA algorithms have improved reconstruction accuracy and efficiency [18–23], but their applications are limited to specific cases, such as the reconstruction of planar, symmetric, or self-rotating objects. Soper et al.'s method achieved reliable 3D reconstruction, but is limited to spherical-like internal surfaces for a specific medical application [24]. In contrast to these constrained algorithms, the proposed BCBA is applicable both to these specific cases and to more general cases, as long as the bound constraints on the parameters are known.

Note, however, that the constrained algorithms [18–24] may work more efficiently and/or accurately than BCBA in their particular cases, since they exploit constraints among different parameters, such as requiring the X, Y, Z coordinates of the object to satisfy specific mathematical equations of a plane, circle, or sphere. These authors assumed the objects to be mathematically perfect, without any manufacturing tolerance, which cannot hold in real-world applications. To improve the 3D reconstruction for such cases, future work will extend BCBA with geometric constraints. For example, for the 3D reconstruction of bladder phantoms [24], the sphericity constraint in BA can be replaced with a spherical-like shape constraint with lower/upper bounds, once the minimum and maximum radii of the bladder are known from human anatomical data.

Similar to conventional bundle adjustment, BCBA is also subject to halting at a local minimum. Even within the bounds, different initial guesses of the parameter vector s produce different optimization results, as shown in Table 2. Nonetheless, in BCBA the pre-known bounds prevent the reconstructed model from drifting to an unreasonable shape. Future work will be to find the global minimum within the feasible set, for which the computation could be dramatically reduced compared with the unconstrained case.
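One simple baseline for searching the feasible set, which the bounded parameter box makes practical, is a multi-start strategy: sample several initial guesses uniformly inside the box and keep the best bounded fit. The sketch below is an assumption-laden illustration, not the paper's method; the toy residual merely stands in for a non-convex cost with several local minima.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
lower = np.array([-5.0, -5.0])
upper = np.array([5.0, 5.0])

def residuals(p):
    # A non-convex toy objective with several local minima (an assumption;
    # it stands in for the BCBA cost only to show the restart pattern).
    x, y = p
    return np.array([np.sin(3.0 * x) + 0.1 * x**2 - 0.5,
                     np.cos(2.0 * y) + 0.1 * y**2 - 0.5])

best = None
for _ in range(20):
    x0 = rng.uniform(lower, upper)               # random start inside the box
    fit = least_squares(residuals, x0, bounds=(lower, upper))
    if best is None or fit.cost < best.cost:
        best = fit

print(best.cost, best.x)   # best bounded minimum found across restarts
```

Because every restart is drawn from, and confined to, the feasible box, the search space is far smaller than in the unconstrained case, which is the source of the computational savings anticipated above.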

Acknowledgments

Funding was provided by NIH NIBIB R01 EB016457 “NRI-Small: Advanced biophotonics for image-guided robotic surgery”, PI: Dr. Eric Seibel, and Dr. Blake Hannaford (co-I). The authors thank Professor Maryam Fazel at the University of Washington for her comments, the reviewers for their helpful critiques, and Richard Johnston and David Melville at the Human Photonics Lab, University of Washington, for their technical support.

References and links

1. G. Bianco, A. Gallo, F. Bruno, and M. Muzzupappa, “A comparison between active and passive techniques for underwater 3d applications,” in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 3816 (2011), pp. 357–363.

2. Y. Gong and S. Zhang, “Improving 4-D shape measurement by using projector defocusing,” Proc. SPIE 7790, 77901A (2010).

3. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photon. 3(2), 128–160 (2011).

4. Y. Gong and S. Zhang, “Ultrafast 3-D shape measurement with an off-the-shelf DLP projector,” Opt. Express 18(19), 19743–19754 (2010).

5. Z. Zhang, “Microsoft kinect sensor and its effect,” IEEE MultiMedia 19(2), 4–10 (2012).

6. S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A comparison and evaluation of multi-view stereo reconstruction algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2006), pp. 519–528.

7. Y. Gong, R. S. Johnston, C. D. Melville, and E. J. Seibel, “Axial-stereo 3D optical metrology of internally machined parts using high-quality imaging from a scanning laser endoscope,” in International Symposium on Optomechatronic Technologies (ISOT), Seattle, USA, November 5–7 (2014).

8. S. Agarwal, N. Snavely, I. Simon, S. M. Seitz, and R. Szeliski, “Building rome in a day,” in IEEE 12th International Conference on Computer Vision (IEEE, 2011), pp. 105–112.

9. C. Wu, “Towards linear-time incremental structure from motion,” in International Conference on 3D Vision (IEEE, 2013), pp. 127–134.

10. D. Crandall, A. Owens, N. Snavely, and D. Huttenlocher, “Discrete-continuous optimization for large-scale structure from motion,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 3001–3008.

11. B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon, “Bundle adjustment–a modern synthesis,” in Vision Algorithms: Theory and Practice, B. Triggs, A. Zisserman, and R. Szeliski, eds. (Springer Berlin Heidelberg, 2000), pp. 298–372.

12. K. Mitra and R. Chellappa, “A Scalable Projective Bundle Adjustment Algorithm using the L infinity Norm,” in Sixth Indian Conference on Computer Vision, Graphics and Image Processing (IEEE, 2008), pp. 79–86.

13. C. Engels, H. Stewénius, and D. Nistér, “Bundle adjustment rules,” in Photogrammetric Computer Vision (2006).

14. M. I. A. Lourakis and A. A. Argyros, “SBA: A software package for generic sparse bundle adjustment,” ACM Trans. Math. Software 36(1), 1–30 (2009).

15. C. Wu, S. Agarwal, B. Curless, and S. M. Seitz, “Multicore bundle adjustment,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 3057–3064.

16. Y. Jeong, D. Nister, D. Steedly, R. Szeliski, and I. S. Kweon, “Pushing the envelope of modern methods for bundle adjustment,” IEEE Trans. Pattern Anal. Mach. Intell. 34(8), 1605–1617 (2012).

17. J. Salvi, X. Armangu, and J. Batlle, “A comparative review of camera calibrating methods with accuracy evaluation,” Pattern Recognit. 35(7), 1617–1635 (2002).

18. K. H. Wong and M. M. Y. Chang, “3D model reconstruction by constrained bundle adjustment,” in Proceedings of the 17th International Conference on Pattern Recognition (IEEE, 2004), pp. 902–905.

19. C. Albl and T. Pajdla, “Constrained Bundle Adjustment for Panoramic Cameras,” in 18th Computer Vision Winter Workshop, Hernstein, Austria, 4–6 February 2013.

20. Y. Zhang, K. Hu, and R. Huang, “Bundle adjustment with additional constraints applied to imagery of the Dunhuang wall paintings,” ISPRS J. Photogramm. Remote Sens. 72, 113–120 (2012).

21. R. Szeliski and P. H. Torr, “Geometrically constrained structure from motion: Points on planes,” in 3D Structure from Multiple Images of Large-Scale Environments, R. Koch and L. V. Gool, eds. (Springer Berlin Heidelberg, 1998), pp. 171–186.

22. A. Bartoli and P. Sturm, “Constrained structure and motion from multiple uncalibrated views of a piecewise planar scene,” Int. J. Comput. Vision 52(1), 45–64 (2003).

23. A. Cohen, C. Zach, S. N. Sinha, and M. Pollefeys, “Discovering and exploiting 3D symmetries in structure from motion,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 1514–1521.

24. T. D. Soper, M. P. Porter, and E. J. Seibel, “Surface mosaics of the bladder reconstructed from endoscopic video for automated surveillance,” IEEE Trans. Biomed. Eng. 59(6), 1670–1680 (2012).

25. Y. Gong, T. D. Soper, V. W. Hou, D. Hu, B. Hannaford, and E. J. Seibel, “Mapping surgical fields by moving a laser-scanning multimodal scope attached to a robot arm,” Proc. SPIE 9036, 90362S (2014).

26. M. I. A. Lourakis and A. A. Argyros, “Is Levenberg-Marquardt the most efficient optimization algorithm for implementing bundle adjustment?,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2005), pp. 1526–1531.

27. A. Berman and N. Shaked-Monderer, Completely Positive Matrices (World Scientific, 2003), Chap. 1.

28. P. H. Calamai and J. J. Moré, “Projected gradient methods for linearly constrained problems,” Math. Program. 39(1), 93–116 (1987).

29. C. M. Lee, C. J. Engelbrecht, T. D. Soper, F. Helmchen, and E. J. Seibel, “Scanning fiber endoscopy with highly flexible, 1 mm catheterscopes for wide-field, full-color imaging,” J. Biophotonics 3(5-6), 385–407 (2010).

30. Y. Gong, D. Hu, E. J. Seibel, and B. Hannaford, “Accurate 3D virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot,” J. Med. Imag. 1(3), 035002 (2014).

31. Y. Gong, D. Hu, B. Hannaford, and E. J. Seibel, “Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model,” Proc. SPIE 9415, 94150C (2015).

32. D. G. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 1999), pp. 1150–1157.

33. P. J. Besl and N. D. McKay, “A method for registration of 3-D shapes,” IEEE Trans. Pattern Anal. Machine Intell. 14(2), 239–256 (1992).

34. B. Klingner, D. Martin, and J. Roseborough, “Street view motion-from-structure-from-motion,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2013), pp. 953–960.

References

  • View by:
  • |
  • |
  • |

  1. G. Bianco, A. Gallo, F. Bruno, and M. Muzzupappa, “A comparison between active and passive techniques for underwater 3d applications,” in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences3816 (2011), pp. 357–363.
  2. Y. Gong and S. Zhang, “Improving 4-D shape measurement by using projector defocusing,” Proc. SPIE 7790, 77901A (2010).
    [Crossref]
  3. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photon. 3(2), 128–160 (2011).
    [Crossref]
  4. Y. Gong and S. Zhang, “Ultrafast 3-D shape measurement with an off-the-shelf DLP projector,” Opt. Express 18(19), 19743–19754 (2010).
    [Crossref] [PubMed]
  5. Z. Zhang, “Microsoft kinect sensor and its effect,” MultiMedia IEEE 19(2), 4–10 (2012).
    [Crossref]
  6. S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A comparison and evaluation of multi-view stereo reconstruction algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2006), pp. 519–528.
  7. Y. Gong, R. S. Johnston, C. D. Melville, and E. J. Seibel, “Axial-stereo 3D optical metrology of internally machined parts using high-quality imaging from a scanning laser endoscope,” in International Symposium on Optomechatronic Technologies (ISOT), Seattle, USA, November 5–7, (2014).
  8. S. Agarwal, N. Snavely, I. Simon, S. M. Seitz, and R. Szeliski, “Building rome in a day,” in IEEE 12th International Conference on Computer Vision (IEEE, 2011), pp. 105–112.
  9. C. Wu, “Towards linear-time incremental structure from motion,” in International Conference on 3D Vision (IEEE, 2013), pp. 127–134.
  10. D. Crandall, A. Owens, N. Snavely, and D. Huttenlocher, “Discrete-continuous optimization for large-scale structure from motion,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 3001–3008.
  11. B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon, “Bundle adjustment–a modern synthesis,” In Vision Algorithms: Theory and Practice, B. Triggs, A. Zisserman, and R. Szeliski, eds. (Springer Berlin Heidelberg, 2000), pp. 298–372.
    [Crossref]
  12. K. Mitra and R. Chellappa, “A Scalable Projective Bundle Adjustment Algorithm using the L infinity Norm,” in Sixth Indian Conference on Computer Vision, Graphics and Image Processing (IEEE, 2008), pp. 79–86.
  13. C. Engels, H. Stewénius, and D. Nistér, “Bundle adjustment rules,” in Photogrammetric computer vision, (2006).
  14. M. I. A. Lourakis and A. A. Argyros, “SBA: A software package for generic sparse bundle adjustment,” ACM Trans. Math. Software 36(1), 1–30 (2009)
    [Crossref]
  15. C. Wu, S. Agarwal, B. Curless, and S. M. Seitz, “Multicore bundle adjustment,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 3057–3064.
  16. Y. Jeong, D. Nister, D. Steedly, R. Szeliski, and I. S. Kweon, “Pushing the envelope of modern methods for bundle adjustment,” IEEE Trans. Pattern Anal. Mach. Intell. 34(8), 1605–1617 (2012).
    [Crossref] [PubMed]
  17. J. Salvi, X. Armangu, and J. Batlle, “A comparative review of camera calibrating methods with accuracy evaluation,” Pattern Recognit. 35(7), 1617–1635 (2002).
    [Crossref]
  18. K. H. Wong and M. M. Y. Chang, “3D model reconstruction by constrained bundle adjustment,” in Proceedings of the 17th International Conference on Pattern Recognition (IEEE, 2004), pp. 902–905.
  19. C. Albl and T. Pajdla, “Constrained Bundle Adjustment for Panoramic Cameras,” in 18th Computer Vision Winter Workshop, Hernstein, Austria, 4–6 February 2013.
  20. Y. Zhang, K. Hu, and R. Huang, “Bundle adjustment with additional constraints applied to imagery of the Dunhuang wall paintings,” ISPRS J. Photogramm. Remote Sens. 72, 113–120 (2012).
    [Crossref]
  21. R. Szeliski and P. H. Torr, “Geometrically constrained structure from motion: Points on planes,” In 3D Structure from Multiple Images of Large-Scale Environments, R. Koch and L. V. Gool, eds. (Springer Berlin Heidelberg, 1998), pp. 171–186.
    [Crossref]
  22. A. Bartoli and P. Sturm, “Constrained structure and motion from multiple uncalibrated views of a piecewise planar scene,” Int. J. Comput. Vision 52(1), 45–64 (2003).
    [Crossref]
  23. A. Cohen, C. Zach, S. N. Sinha, and M. Pollefeys, “Discovering and exploiting 3D symmetries in structure from motion,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 1514–1521.
  24. T. D. Soper, M. P. Porter, and E. J. Seibel, “Surface mosaics of the bladder reconstructed from endoscopic video for automated surveillance,” IEEE Trans. Biomed. Eng. 59(6), 1670–1680 (2012).
    [Crossref] [PubMed]
  25. Y. Gong, T. D. Soper, V. W. Hou, D. Hu, B. Hannaford, and E. J. Seibel, “Mapping surgical fields by moving a laser-scanning multimodal scope attached to a robot arm,” Proc. SPIE 9036, 90362S (2014).
    [Crossref]
  26. M. I. A. Lourakis and A. A. Argyros, “Is Levenberg-Marquardt the most efficient optimization algorithm for implementing bundle adjustment?,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2005), pp. 1526–1531.
  27. A. Berman and N. Shaked-Monderer, Completely positive matrices (World Scientific, 2003), Chap. 1.
  28. P. H. Calamai and J. J. Mor, “Projected gradient methods for linearly constrained problems,” Math. Program. 39(1), 93–116 (1987).
    [Crossref]
  29. C. M. Lee, C. J. Engelbrech, T. D. Soper, F. Helmchen, and E. J. Seibel, “Scanning fiber endoscopy with highly flexible, 1 mm catheterscopes for wide-field, full-color imaging,” J. Biophotonics 3(5-6), 385–407 (2010).
    [Crossref]
  30. Y. Gong, D. Hu, E. J. Seibel, and B. Hannaford, “Accurate 3D virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot,” J. Med. Imag. 1(3), 035002 (2014).
    [Crossref]
  31. Y. Gong, D. Hu, B. Hannaford, and E.J. Seibel, “Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model,” Proc. SPIE 9415, 94150C (2015).
  32. D. G. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 1999), pp. 1150–1157.
  33. P. J. Besl and N.D. McKay, “A method for registration of 3-D shapes,” IEEE Trans. Pattern Anal. Machine Intell. 14(2), 239–256 (1992).
    [Crossref]
  34. B. Klingner, D. Martin, and J. Roseborough, “Street view motion-from-structure-from-motion,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2013), pp. 953–960.

2015 (1)

Y. Gong, D. Hu, B. Hannaford, and E.J. Seibel, “Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model,” Proc. SPIE 9415, 94150C (2015).

2014 (2)

Y. Gong, D. Hu, E. J. Seibel, and B. Hannaford, “Accurate 3D virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot,” J. Med. Imag. 1(3), 035002 (2014).
[Crossref]

Y. Gong, T. D. Soper, V. W. Hou, D. Hu, B. Hannaford, and E. J. Seibel, “Mapping surgical fields by moving a laser-scanning multimodal scope attached to a robot arm,” Proc. SPIE 9036, 90362S (2014).
[Crossref]

2012 (4)

T. D. Soper, M. P. Porter, and E. J. Seibel, “Surface mosaics of the bladder reconstructed from endoscopic video for automated surveillance,” IEEE Trans. Biomed. Eng. 59(6), 1670–1680 (2012).
[Crossref] [PubMed]

Z. Zhang, “Microsoft kinect sensor and its effect,” MultiMedia IEEE 19(2), 4–10 (2012).
[Crossref]

Y. Jeong, D. Nister, D. Steedly, R. Szeliski, and I. S. Kweon, “Pushing the envelope of modern methods for bundle adjustment,” IEEE Trans. Pattern Anal. Mach. Intell. 34(8), 1605–1617 (2012).
[Crossref] [PubMed]

Y. Zhang, K. Hu, and R. Huang, “Bundle adjustment with additional constraints applied to imagery of the Dunhuang wall paintings,” ISPRS J. Photogramm. Remote Sens. 72, 113–120 (2012).
[Crossref]

2011 (1)

2010 (3)

Y. Gong and S. Zhang, “Ultrafast 3-D shape measurement with an off-the-shelf DLP projector,” Opt. Express 18(19), 19743–19754 (2010).
[Crossref] [PubMed]

Y. Gong and S. Zhang, “Improving 4-D shape measurement by using projector defocusing,” Proc. SPIE 7790, 77901A (2010).
[Crossref]

C. M. Lee, C. J. Engelbrech, T. D. Soper, F. Helmchen, and E. J. Seibel, “Scanning fiber endoscopy with highly flexible, 1 mm catheterscopes for wide-field, full-color imaging,” J. Biophotonics 3(5-6), 385–407 (2010).
[Crossref]

2009 (1)

M. I. A. Lourakis and A. A. Argyros, “SBA: A software package for generic sparse bundle adjustment,” ACM Trans. Math. Software 36(1), 1–30 (2009)
[Crossref]

2003 (1)

A. Bartoli and P. Sturm, “Constrained structure and motion from multiple uncalibrated views of a piecewise planar scene,” Int. J. Comput. Vision 52(1), 45–64 (2003).
[Crossref]

2002 (1)

J. Salvi, X. Armangu, and J. Batlle, “A comparative review of camera calibrating methods with accuracy evaluation,” Pattern Recognit. 35(7), 1617–1635 (2002).
[Crossref]

1992 (1)

P. J. Besl and N.D. McKay, “A method for registration of 3-D shapes,” IEEE Trans. Pattern Anal. Machine Intell. 14(2), 239–256 (1992).
[Crossref]

1987 (1)

P. H. Calamai and J. J. Mor, “Projected gradient methods for linearly constrained problems,” Math. Program. 39(1), 93–116 (1987).
[Crossref]

Agarwal, S.

S. Agarwal, N. Snavely, I. Simon, S. M. Seitz, and R. Szeliski, “Building rome in a day,” in IEEE 12th International Conference on Computer Vision (IEEE, 2011), pp. 105–112.

C. Wu, S. Agarwal, B. Curless, and S. M. Seitz, “Multicore bundle adjustment,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 3057–3064.

Albl, C.

C. Albl and T. Pajdla, “Constrained Bundle Adjustment for Panoramic Cameras,” in 18th Computer Vision Winter Workshop, Hernstein, Austria, 4–6 February 2013.

Argyros, A. A.

M. I. A. Lourakis and A. A. Argyros, “SBA: A software package for generic sparse bundle adjustment,” ACM Trans. Math. Software 36(1), 1–30 (2009)
[Crossref]

M. I. A. Lourakis and A. A. Argyros, “Is Levenberg-Marquardt the most efficient optimization algorithm for implementing bundle adjustment?,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2005), pp. 1526–1531.

Armangu, X.

J. Salvi, X. Armangu, and J. Batlle, “A comparative review of camera calibrating methods with accuracy evaluation,” Pattern Recognit. 35(7), 1617–1635 (2002).
[Crossref]

Bartoli, A.

A. Bartoli and P. Sturm, “Constrained structure and motion from multiple uncalibrated views of a piecewise planar scene,” Int. J. Comput. Vision 52(1), 45–64 (2003).
[Crossref]

Batlle, J.

J. Salvi, X. Armangu, and J. Batlle, “A comparative review of camera calibrating methods with accuracy evaluation,” Pattern Recognit. 35(7), 1617–1635 (2002).
[Crossref]

Berman, A.

A. Berman and N. Shaked-Monderer, Completely positive matrices (World Scientific, 2003), Chap. 1.

Besl, P. J.

P. J. Besl and N.D. McKay, “A method for registration of 3-D shapes,” IEEE Trans. Pattern Anal. Machine Intell. 14(2), 239–256 (1992).
[Crossref]

Bianco, G.

G. Bianco, A. Gallo, F. Bruno, and M. Muzzupappa, “A comparison between active and passive techniques for underwater 3d applications,” in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences3816 (2011), pp. 357–363.

Bruno, F.

G. Bianco, A. Gallo, F. Bruno, and M. Muzzupappa, “A comparison between active and passive techniques for underwater 3d applications,” in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences3816 (2011), pp. 357–363.

Calamai, P. H.

P. H. Calamai and J. J. Mor, “Projected gradient methods for linearly constrained problems,” Math. Program. 39(1), 93–116 (1987).
[Crossref]

Chang, M. M. Y.

K. H. Wong and M. M. Y. Chang, “3D model reconstruction by constrained bundle adjustment,” in Proceedings of the 17th International Conference on Pattern Recognition (IEEE, 2004), pp. 902–905.

Chellappa, R.

K. Mitra and R. Chellappa, “A Scalable Projective Bundle Adjustment Algorithm using the L infinity Norm,” in Sixth Indian Conference on Computer Vision, Graphics and Image Processing (IEEE, 2008), pp. 79–86.

Cohen, A.

A. Cohen, C. Zach, S. N. Sinha, and M. Pollefeys, “Discovering and exploiting 3D symmetries in structure from motion,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 1514–1521.

Crandall, D.

D. Crandall, A. Owens, N. Snavely, and D. Huttenlocher, “Discrete-continuous optimization for large-scale structure from motion,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 3001–3008.

Curless, B.

S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A comparison and evaluation of multi-view stereo reconstruction algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2006), pp. 519–528.

C. Wu, S. Agarwal, B. Curless, and S. M. Seitz, “Multicore bundle adjustment,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 3057–3064.

Diebel, J.

S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A comparison and evaluation of multi-view stereo reconstruction algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2006), pp. 519–528.

Engelbrech, C. J.

C. M. Lee, C. J. Engelbrech, T. D. Soper, F. Helmchen, and E. J. Seibel, “Scanning fiber endoscopy with highly flexible, 1 mm catheterscopes for wide-field, full-color imaging,” J. Biophotonics 3(5-6), 385–407 (2010).
[Crossref]

Engels, C.

C. Engels, H. Stewénius, and D. Nistér, “Bundle adjustment rules,” in Photogrammetric computer vision, (2006).

Fitzgibbon, A. W.

B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon, “Bundle adjustment–a modern synthesis,” In Vision Algorithms: Theory and Practice, B. Triggs, A. Zisserman, and R. Szeliski, eds. (Springer Berlin Heidelberg, 2000), pp. 298–372.
[Crossref]

Gallo, A.

G. Bianco, A. Gallo, F. Bruno, and M. Muzzupappa, “A comparison between active and passive techniques for underwater 3d applications,” in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences3816 (2011), pp. 357–363.

Geng, J.

Gong, Y.

Y. Gong, D. Hu, B. Hannaford, and E.J. Seibel, “Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model,” Proc. SPIE 9415, 94150C (2015).

Y. Gong, D. Hu, E. J. Seibel, and B. Hannaford, “Accurate 3D virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot,” J. Med. Imag. 1(3), 035002 (2014).
[Crossref]

Y. Gong, T. D. Soper, V. W. Hou, D. Hu, B. Hannaford, and E. J. Seibel, “Mapping surgical fields by moving a laser-scanning multimodal scope attached to a robot arm,” Proc. SPIE 9036, 90362S (2014).
[Crossref]

Y. Gong and S. Zhang, “Ultrafast 3-D shape measurement with an off-the-shelf DLP projector,” Opt. Express 18(19), 19743–19754 (2010).
[Crossref] [PubMed]

Y. Gong and S. Zhang, “Improving 4-D shape measurement by using projector defocusing,” Proc. SPIE 7790, 77901A (2010).
[Crossref]

Y. Gong, R. S. Johnston, C. D. Melville, and E. J. Seibel, “Axial-stereo 3D optical metrology of internally machined parts using high-quality imaging from a scanning laser endoscope,” in International Symposium on Optomechatronic Technologies (ISOT), Seattle, USA, November 5–7, (2014).

Hannaford, B.

Y. Gong, D. Hu, B. Hannaford, and E.J. Seibel, “Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model,” Proc. SPIE 9415, 94150C (2015).

Y. Gong, D. Hu, E. J. Seibel, and B. Hannaford, “Accurate 3D virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot,” J. Med. Imag. 1(3), 035002 (2014).
[Crossref]

Y. Gong, T. D. Soper, V. W. Hou, D. Hu, B. Hannaford, and E. J. Seibel, “Mapping surgical fields by moving a laser-scanning multimodal scope attached to a robot arm,” Proc. SPIE 9036, 90362S (2014).
[Crossref]

Hartley, R. I.

B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon, “Bundle adjustment–a modern synthesis,” In Vision Algorithms: Theory and Practice, B. Triggs, A. Zisserman, and R. Szeliski, eds. (Springer Berlin Heidelberg, 2000), pp. 298–372.
[Crossref]

Helmchen, F.

C. M. Lee, C. J. Engelbrech, T. D. Soper, F. Helmchen, and E. J. Seibel, “Scanning fiber endoscopy with highly flexible, 1 mm catheterscopes for wide-field, full-color imaging,” J. Biophotonics 3(5-6), 385–407 (2010).
[Crossref]

Hou, V. W.

Y. Gong, T. D. Soper, V. W. Hou, D. Hu, B. Hannaford, and E. J. Seibel, “Mapping surgical fields by moving a laser-scanning multimodal scope attached to a robot arm,” Proc. SPIE 9036, 90362S (2014).
[Crossref]

Supplementary Material (1)

Media 1: MP4 (4810 KB)



Figures (6)

Fig. 1 Conceptual diagram of an applicable case of BCBA, where a sequence of images is captured by inaccurately calibrated cameras around a stationary object.
Fig. 2 The experimental setup. (a) The SFE, with a 1.6 mm outer diameter and a camera frame rate of 30 Hz; (b) the CAD design of the object, a spherical dome with a maximum radius of 17.5 mm and a depth of 10 mm; (c) the 3D-printed phantom with near-realistic surgical features; (d) the experimental setup on a micro-positioning stage, which provides camera positions with an accuracy of 0.01 mm.
Fig. 3 Matching features in a pair of overlapping frames, connected by colored lines. One tenth of the matching features were randomly chosen for better visualization.
Fig. 4 The optimization procedures of BA and BCBA. (a)-(d) show the 3D point cloud reconstructed by BA shrinking toward a flat geometry. (e)-(h) show that only minor adjustments of the 3D points occur during the BCBA optimization, owing to the bound constraints.
Fig. 5 (Media 1) Comparison of the 3D reconstruction results of BA and BCBA. (a)-(c) and (d)-(f) show the reconstructed 3D point clouds, 3D surfaces, and depth maps for BA and BCBA, respectively.
Fig. 6 Comparison of the ICP error analysis for BA and BCBA. (a) and (d) show the 3D alignment of the CAD point cloud with the reconstructed surface for BA and BCBA, respectively; (b) and (e) show the corresponding alignment distance maps; (c) and (f) show histograms of the distance from each reconstructed point to the CAD model.

Tables (3)

Table 1 Experimental comparison of the BA and BCBA algorithms.
Table 2 Comparison of ICP error and computation time for the BA and BCBA algorithms under different noise levels.
Table 3 Comparison of BA versus BCBA with constraints on the FOV and Qz.

Equations (13)


\[
\omega \begin{bmatrix} u_{ij} \\ v_{ij} \\ 1 \end{bmatrix}
= K \, [\, R_j \;\; t_j \,] \begin{bmatrix} Q_{ij} \\ 1 \end{bmatrix}, \qquad
\begin{aligned}
x_{ij} &= u_{ij}\,(1 + k_1 \rho^2 + k_2 \rho^4) \\
y_{ij} &= v_{ij}\,(1 + k_1 \rho^2 + k_2 \rho^4) \\
\rho^2 &= u_{ij}^2 + v_{ij}^2
\end{aligned}
\]
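The projection and radial-distortion model above can be sketched in a few lines of numpy; this is a minimal illustration that follows the equation literally (the function name `project` and the argument order are illustrative, not from the paper):

```python
import numpy as np

def project(K, R, t, Q, k1, k2):
    """Project a 3D point Q through camera (R, t) with intrinsics K,
    then apply the two-term radial distortion of the equation above."""
    p = K @ (R @ Q + t)                 # homogeneous image point
    u, v = p[0] / p[2], p[1] / p[2]     # perspective division
    rho2 = u**2 + v**2
    d = 1.0 + k1 * rho2 + k2 * rho2**2  # radial distortion factor
    return np.array([u * d, v * d])     # distorted observation (x, y)
```

With zero distortion coefficients this reduces to the plain pinhole projection, so the reprojection residual r(s) in the objective below is the difference between this prediction and the detected feature location.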
\[
\text{Objective:} \quad \min f(s) = \tfrac{1}{2}\, r(s)^{T} r(s)
\qquad \text{Subject to:} \quad l_i \le s_i \le u_i, \quad i = 1, 2, 3, \ldots, N
\]
\[
f(s+\delta) \approx f(s) + g(s)^{T}\delta + \tfrac{1}{2}\,\delta^{T} H(s)\,\delta,
\quad \text{with} \quad
g(s) \equiv \frac{df}{ds}(s), \qquad H(s) \equiv \frac{d^{2} f}{ds^{2}}(s)
\]
\[
\bar{H}(s)\,\delta = -g(s)
\]
\[
\bar{H}(s) = \begin{bmatrix} \bar{H}_{cc} & \bar{H}_{cp} \\ \bar{H}_{pc} & \bar{H}_{pp} \end{bmatrix}
= \begin{bmatrix} J_c^{T} J_c & J_c^{T} J_p \\ J_p^{T} J_c & J_p^{T} J_p \end{bmatrix}.
\]
\[
\begin{bmatrix} \bar{H}_{cc} & \bar{H}_{cp} \\ \bar{H}_{pc} & \bar{H}_{pp} \end{bmatrix}
\begin{bmatrix} \delta_c \\ \delta_p \end{bmatrix}
= -\begin{bmatrix} g_c \\ g_p \end{bmatrix}.
\]
\[
\begin{aligned}
\left( \bar{H}_{cc} - \bar{H}_{cp} \bar{H}_{pp}^{-1} \bar{H}_{pc} \right) \delta_c
&= \left( \bar{H}_{cp} \bar{H}_{pp}^{-1} \right) g_p - g_c \\
\delta_p &= -\bar{H}_{pp}^{-1} \left( g_p + \bar{H}_{pc}\, \delta_c \right)
\end{aligned}
\]
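The Schur-complement elimination above can be sketched with dense numpy arrays; a real BA implementation exploits the block-diagonal sparsity of the point block, but this small illustration (function name and interface are hypothetical) shows the algebra. Symmetry of the normal matrix gives H_pc = H_cp^T:

```python
import numpy as np

def solve_schur(Hcc, Hcp, Hpp, gc, gp):
    """Solve the block normal equations by eliminating the point block
    Hpp first (Schur complement), then back-substituting for δp."""
    Hpp_inv = np.linalg.inv(Hpp)
    S = Hcc - Hcp @ Hpp_inv @ Hcp.T      # reduced camera system
    rhs = Hcp @ Hpp_inv @ gp - gc
    dc = np.linalg.solve(S, rhs)         # camera update δc
    dp = -Hpp_inv @ (gp + Hcp.T @ dc)    # point update δp by back-substitution
    return dc, dp
```

Because points vastly outnumber cameras in BA, reducing the system to the small camera block S is what makes each iteration tractable.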
\[
B\,\delta = -g(s), \quad \text{with} \quad B = \bar{H}(s) + \lambda D
\]
\[
\operatorname{proj}(s) = \min\{\max\{l, s\}, u\}.
\]
\[
A(s) = \left\{\, i \;\middle|\; \left( s_i = l_i \text{ and } g_i > 0 \right) \text{ or } \left( s_i = u_i \text{ and } g_i < 0 \right) \,\right\}
\]
\[
\hat{g}_i = \begin{cases} g_i, & i \in I(s) \\ 0, & \text{otherwise} \end{cases}
\]
\[
\hat{H}_{ij} = \begin{cases}
\bar{H}_{ij}, & \text{if } i \in I(s) \text{ and } j \in I(s) \\
\bar{H}_{ii}, & \text{if } i = j \in A(s) \\
0, & \text{otherwise}
\end{cases}
\]
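The box projection, the active set, and the masking of the gradient and Hessian can be sketched as follows. This is a minimal numpy illustration under the equations' definitions (the helper names `proj`, `active_set`, and `mask_system` are illustrative, not from the paper); I(s) is taken as the complement of A(s):

```python
import numpy as np

def proj(s, l, u):
    """Clamp the parameter vector s into the box [l, u], elementwise."""
    return np.minimum(np.maximum(l, s), u)

def active_set(s, g, l, u):
    """Indices where s sits on a bound and the gradient pushes it outward."""
    return np.where(((s == l) & (g > 0)) | ((s == u) & (g < 0)))[0]

def mask_system(H, g, active):
    """Zero the active entries of g and the active rows/columns of H
    (keeping the diagonal), so the step leaves bound-active variables fixed."""
    Hm, gm = H.copy(), g.copy()
    gm[active] = 0.0
    for i in active:
        d = Hm[i, i]
        Hm[i, :] = 0.0
        Hm[:, i] = 0.0
        Hm[i, i] = d
    return Hm, gm
```

Each iteration would then solve the masked, damped system for δ, take the step, and re-project the result into the box with `proj`, which is the essence of the projected-gradient treatment of the bound constraints.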
\[
\hat{B}\,\delta = -\hat{g}(s), \quad \text{with} \quad \hat{B} = \hat{H}(s) + \lambda \hat{D}
\]
