
Online stereo vision measurement based on correction of sensor structural parameters

Open Access

Abstract

Vibration easily affects the structure of long-baseline binocular vision sensors, changing the external parameters of the binocular calibration model and causing the measurement method to fail. This paper presents an online stereo vision measurement method based on correction of the sensor structural parameters. A flexible structure model based on the calibration model and an iterative gradient-descent nonlinear optimization model based on 3D redundant information are established, and optimal estimation of the external parameters and measurement of the object position are realized under multi-information constraints. Experiments show that this method effectively resolves the measurement failures caused by vibration in stereo vision measurement.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With increasing requirements on measurement accuracy and range, long-baseline stereo vision [1–3] is widely used in many fields, especially aviation. Taking autonomous aerial refueling (AAR) [4] as an example, the installation position of the airborne vision sensor [5–7] is limited and subject to strict payload requirements. Most fixtures are made of lightweight materials and are therefore sensitive to mechanical stiffness. If the relative position between the sensors cannot be obtained in real time, not only does 3D positioning of the refueling drogue fail, but the formation flight of the tanker and receiver is also exposed to serious safety hazards.

Image depth estimation methods fall into three categories: active, passive, and deep-learning-based. Active measurement [8] includes, for example, the FRA proposed by Dorozynska [9]; such algorithms are limited by light-absorbing materials and high cost. Passive measurement includes, for example, the TPM proposed by Wang [10]. Deep learning includes the classic algorithm proposed by LeCun [11], the FCRN proposed by Laina [12], and the unsupervised CNN proposed by Garg [13]. In recent years, machine learning has been driving the development of stereo vision [14]. The core problem of the non-active algorithms is that they share the same premise: a rectified image pair. In many fields, this prerequisite cannot be met.

In view of the current difficulties in image depth estimation, the problem is mainly addressed by online calibration and online correction. For example, Arnø et al. present a novel streaming learning approach that uses a DNN to estimate drilling parameters [15]. Ling proposes a self-calibration algorithm [16] that performs real-time extrinsic estimation with UAV fisheye upper-lower binocular matching based on 2D image features, but scale information is lacking and the quality of long-range (>5 m) point clouds is not ideal. The correction of stereo camera decalibration in self-driving vehicles proposed by Muhovič et al. can recalibrate all parameters except the baseline distance [17]; other absolute ranging methods are still needed to determine it. A class of techniques common in automatic driving addresses optical jitter, such as the ECBP proposed by Pathum et al. [18]; these methods require high-definition cameras and practical targets, and are entirely inapplicable in the aerial refueling installation environment with its multi-layer thick glass partitions. In addition, peripherals are generally required to feed back vibration parameters in real time, for example the BDM proposed by Niu et al. [19] and the CCM proposed by Chen et al. [20]; such methods forfeit the great advantage of convenient passive measurement.

To address the above problems, this paper proposes online stereo vision measurement based on correction of sensor structural parameters (OCSP). The two main contributions of this paper are:

  • 1. A flexible structure model and a novel nonlinear optimization model based on multi-information are proposed and integrated into an online parameter correction model.
  • 2. A closed-loop stereo vision measurement based on 2D and 3D information iteration is proposed, including high-precision correction and position measurement.

The rest of this paper is organized as follows: Section 2 introduces the details of the online stereo vision measurement; Section 3 presents the experimental results; Section 4 presents the application results; Section 5 gives the conclusion and outlook.

2. Online stereo vision measurement

The premise of correct rectification of a stereo image pair is that the structural parameters are correct. Theoretically, the closer the corrected parameter values are to the true calibration values, the more corresponding points reach epipolar-line parallelism and the more correct matching points appear in the disparity map, i.e., the fewer false matching points and missing matching points there are. Based on this criterion, the online correction model of the structural parameters is established, and 2D and 3D information is brought into the model for iterative solution, forming a closed-loop feedback stereo vision measurement. At the same time, online optimal estimation of the calibration model parameters and a high-precision solution of the 3D position are realized. The framework of the proposed method is shown in Fig. 1.

Fig. 1. Framework of the proposed method.

2.1 Offline stage

The offline stage comprises three parts: detection classifier training, binocular calibration, and online correction model building:

  • 1. Detection classifier training is the basis of object detection and tracking. Although it sacrifices much image information, it effectively determines the core information area of the image and greatly improves the speed of the algorithm. Therefore, training the detection classifier is an indispensable part of autonomous aerial refueling.
  • 2. Binocular calibration establishes the relationship between camera image pixels and scene points. According to the camera imaging model, the correspondence between feature-point image coordinates and world coordinates is established. Zhang's calibration method [21] is applied to the binocular vision sensor, and the internal and external parameters of the calibration model are given by formula (1):
    $$\left[ {\begin{array}{l} \mu \\ v\\ 1 \end{array}} \right] = \textrm{A}[{\textrm{R|t}} ]\left[ {\begin{array}{c} {{X_w}}\\ {{Y_w}}\\ {{Z_w}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{cccc} {{f_x}}&0&{{C_x}}&0\\ 0&{{f_y}}&{{C_y}}&0\\ 0&0&1&0 \end{array}} \right]\left[ {\begin{array}{cccc} {{R_{11}}}&{{R_{12}}}&{{R_{13}}}&{{T_x}}\\ {{R_{21}}}&{{R_{22}}}&{{R_{23}}}&{{T_y}}\\ {{R_{31}}}&{{R_{32}}}&{{R_{33}}}&{{T_z}}\\ 0&0&0&1 \end{array}} \right]\left[ {\begin{array}{c} {{X_w}}\\ {{Y_w}}\\ {{Z_w}}\\ 1 \end{array}} \right],$$

    Here, µ and v are the pixel coordinates of a point; Xw, Yw, and Zw are its world coordinates; fx, fy and Cx, Cy are the components of the focal length and the principal point coordinates in the x and y directions among the common internal parameters, respectively. The main function of binocular calibration is to initialize the online correction model (a minimal projection sketch of formula (1) is given after this list).

  • 3. Online correction model (OCM) building is mainly divided into two parts: the flexible structure model and the nonlinear optimization model. According to the design of the airborne fixture, the grooves of the fixed parts and the fixed plates on both sides stabilize the vision sensor in the XZ plane. Vibration experiments on a simulated UAV show that the changes in the external parameters of the calibration model are mainly in the pitch angle Δψ and the Y-direction translation ΔTy. Because the Y-direction translation is large, the baseline distance between the binocular vision sensors cannot be approximated by the X-direction translation. The OCM starts from the non-standard-scale external parameters and approaches the true external parameters of the current frame for optimal estimation; the more degrees of freedom in the external parameters, the worse the convergence of the global optimal estimation, whereas local estimation of a single degree of freedom is unaffected. Therefore, the OCM selects the pitch angle ψ and the Y component Ty of the translation vector as the objects of online correction.
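The following is a minimal numpy sketch of the projection in formula (1), as mentioned above; the intrinsic and extrinsic values are hypothetical placeholders, not calibration results from this paper.

```python
import numpy as np

# Hypothetical intrinsics in pixels (fx, fy, Cx, Cy); placeholders only.
fx, fy, Cx, Cy = 2800.0, 2800.0, 960.0, 540.0
A = np.array([[fx, 0.0, Cx, 0.0],
              [0.0, fy, Cy, 0.0],
              [0.0, 0.0, 1.0, 0.0]])           # 3x4 intrinsic matrix of formula (1)

R = np.eye(3)                                  # rotation (world -> camera)
t = np.array([0.0, 0.0, 0.0])                  # translation (Tx, Ty, Tz)
Rt = np.vstack([np.column_stack([R, t]),
                [0.0, 0.0, 0.0, 1.0]])         # 4x4 extrinsic matrix [R|t]

Pw = np.array([0.1, 0.05, 2.0, 1.0])           # homogeneous world point (Xw, Yw, Zw, 1)
s_uv = A @ Rt @ Pw                             # scaled image coordinates
u, v = s_uv[0] / s_uv[2], s_uv[1] / s_uv[2]    # pixel coordinates (u, v)
print(u, v)
```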

2.1.1 Flexible structure model

The schematic diagram of the correction model is shown in Fig. 2.

Fig. 2. Schematic diagram of correction model.

In the process of measurement, irregular new sensor coordinate systems are generated continuously due to vibration. In this paper, this kind of irregular sensor model is defined as the flexible structure model. The mathematical model is given by formula (2):

$$\left[ {\begin{array}{c} \mu \\ v\\ 1 \end{array}} \right] = A\left[ {\begin{array}{cc} {{R_{\Delta \psi }}}&{{T_{\Delta {T_y}}}}\\ {0_3^T}&1 \end{array}} \right]\left[ {\begin{array}{c} {{X_w}}\\ {{Y_w}}\\ {{Z_w}}\\ 1 \end{array}} \right],$$

Here, µ and v are the horizontal and vertical image coordinates; Xw, Yw, and Zw are the world coordinates of the 3D point; A is the intrinsic parameter matrix of the camera; RΔψ is the rotation matrix after the pitch angle is updated online; and TΔTy is the translation vector after its vertical component is updated online. According to the Bouguet rectification principle, this method computes the mapping between ΔTy, Δψ and the rectification matrices of the binocular images, as in formula (3).

\begin{align}{R_{\Delta \psi }} &= {R_l}\ast {R_r},{R_l} = {[{{e_1}^T{e_2}^T{{({{e_1} \times {e_2}} )}^T}} ]^T} \times \textrm{ rodrigues }({ - o{m_{\Delta \psi }}/2} ),\nonumber\\{e_1} &= \frac{{{T_{\Delta {T_y}}}}}{{||{{T_{\Delta {T_y}}}} ||}},{e_2} = \frac{{[{ - {T_y} + \Delta {T_y}\quad {T_x}\quad 0} ]}}{{\sqrt {T_x^2 + {{({{T_y} + \Delta {T_y}} )}^2}} }}.\end{align}

Here, Rl is the rotation matrix of the left camera and Rr that of the right camera; Tx and Ty are the horizontal and vertical components of the translation vector. Before rectification, the binocular image pixels µ′ and v′ are known; after rectification, the pixels µ1 and v1 are obtained from µ′, v′, Rl, and Rr. If the rectification succeeds, the pixel ordinate difference Δv should be 0. Therefore, when the Δvi of multiple corresponding points reaches a minimum, the pitch-angle offset Δψ and the vertical translation offset ΔTy are approximated, as shown in formula (4).

$$\min F({\Delta \psi ,\Delta {T_y}} )= \sum\limits_{i = 1}^m {{{[{f({\Delta {\psi_i},\Delta {T_{{y_i}}}} )- \Delta {v_i}} ]}^2}} ,$$

Here, m is the number of point pairs. The external parameter correction of each binocular image pair is updated to the flexible structure model in real time. A minimal sketch of this estimation step is given below.
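For concreteness, the sketch below assembles the rectifying rotations of formula (3) in the standard Bouguet form and fits (Δψ, ΔTy) by least squares over the vertical disparities of formula (4). The helper names and the sign convention of e2 are assumptions made for illustration, not the authors' implementation; the input points are assumed to be undistorted, normalized image coordinates.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def rectifying_rotations(d_psi, d_ty, Tx, Ty):
    """Bouguet-style rectifying rotations for the corrected extrinsics."""
    om = np.array([d_psi, 0.0, 0.0])            # pitch offset as a rotation vector
    R_half = cv2.Rodrigues(-om / 2.0)[0]        # half rotation, split between views
    T = np.array([Tx, Ty + d_ty, 0.0])          # corrected translation vector
    e1 = T / np.linalg.norm(T)                  # new x-axis along the baseline
    e2 = np.array([-T[1], T[0], 0.0])
    e2 /= np.linalg.norm(e2)                    # y-axis orthogonal to the baseline
    e3 = np.cross(e1, e2)                       # z-axis completes the frame
    R_rect = np.vstack([e1, e2, e3])
    return R_rect @ R_half, R_rect @ R_half.T   # Rl, Rr as in formula (3)

def dv_residuals(params, pts_l, pts_r, Tx, Ty):
    """Vertical disparity of each corresponding pair after rectification."""
    Rl, Rr = rectifying_rotations(params[0], params[1], Tx, Ty)
    Pl, Pr = Rl @ pts_l.T, Rr @ pts_r.T
    return Pl[1] / Pl[2] - Pr[1] / Pr[2]        # delta-v per point, formula (4)

# pts_l, pts_r: (m, 3) homogeneous normalized image points from the tracked ROI.
# sol = least_squares(dv_residuals, x0=[0.0, 0.0], args=(pts_l, pts_r, Tx, Ty))
# d_psi, d_ty = sol.x
```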

2.1.2 Nonlinear optimization model

The iterative gradient-descent nonlinear optimization model is established mainly on the basis of the redundant information of the 3D point cloud. The 3D model can be arranged in matrix form [22], and the 3D coordinates in the depth direction can be simplified as formula (5).

$$\left[ {\begin{array}{c} {{X_w}}\\ {{Y_w}}\\ {{Z_w}} \end{array}} \right] = P\backslash q = {P^{ - 1}}\ast q,f(x,y) = \frac{{{k_1}x}}{{{k_2}\sin y}}.$$

Here, “P \ q” in formula (5) denotes left division of q by the matrix P, which is equivalent to multiplying q by the inverse of P. P is the pitch-angle mapping matrix and q is the mapping matrix of the vertical component of the translation vector. k1 and k2 are constant coefficients, x is the simplified translation variable, and y is the simplified rotation variable. The simplified model is the joint distribution function combining a linear function and a trigonometric function, shown in Fig. 3.
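As a note on the left division, it is best computed with a linear solve rather than an explicit inverse; the matrices below are illustrative placeholders, not the paper's mapping matrices.

```python
import numpy as np

P = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.5, 0.0],
              [0.0, 0.5, 1.0]])     # illustrative pitch-angle mapping matrix
q = np.array([0.4, 0.3, 2.0])       # illustrative translation mapping vector
Xw, Yw, Zw = np.linalg.solve(P, q)  # P\q: solve P x = q without forming the inverse
```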

Fig. 3. Joint distribution function of 3D model.

Weighting the range of the independent variables with the correction error ensures that there is only one extreme point. In an interval centered on the initial solution, curves are fitted to the different (x, y) values and their alignment errors, and the approximation problem of the discrete function is solved by the least-squares method. From the fitted curve, the abscissa x of the minimum point is computed; this is the initial calibration solution.
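A minimal sketch of this fit, assuming a quadratic error model and synthetic samples (in the real pipeline the samples are the alignment errors above):

```python
import numpy as np

x0 = 0.0                                # initial solution from the 2D stage
xs = x0 + np.linspace(-0.25, 0.25, 11)  # sampled offsets around the initial solution
errs = 0.8 * (xs - 0.04) ** 2 + 0.2     # synthetic alignment errors per sample
a, b, c = np.polyfit(xs, errs, 2)       # least-squares quadratic fit
x_min = -b / (2.0 * a)                  # abscissa of the fitted minimum point
```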

The nonlinear optimization model takes the false matching points and missing matching points in the disparity map as the optimization criterion. The left-right consistency (LRC) check in stereo matching post-processing is generally used for occlusion detection, obtaining the occlusion points of the left image and assigning them reasonable disparity values to make the disparity map smoother. In this paper, the target to be measured has no occlusion and no abrupt changes in disparity. Therefore, for a point p in the left image with disparity d1, its corresponding point in the right image is (p − d1), whose disparity is recorded as d2. If |d1 − d2| > threshold, p is marked as a false matching point and counted. The default threshold is 3, generally set according to the disparity variation near p.
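A minimal numpy sketch of this LRC count, assuming dense float disparity maps of equal shape; the function name is illustrative:

```python
import numpy as np

def count_false_matches(disp_l, disp_r, threshold=3.0):
    """Mark left-image pixels whose disparity disagrees with the right view."""
    h, w = disp_l.shape
    u = np.tile(np.arange(w), (h, 1))                          # column index per pixel
    u_r = np.clip(u - np.round(disp_l).astype(int), 0, w - 1)  # (p - d1) in the right image
    d2 = disp_r[np.arange(h)[:, None], u_r]                    # disparity at the matched pixel
    false_mask = np.abs(disp_l - d2) > threshold               # |d1 - d2| > threshold
    return false_mask, int(false_mask.sum())
```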

According to the gray values of the target region in the 2D image, the object is segmented by OTSU-based adaptive thresholding. After the LRC check, disparity interpolation is not performed; median filtering is applied directly, so hollowed-out points remain in the disparity map. If a hollowed-out point maps into the object pixel set in the 2D image, it is recorded as a missing matching point and counted.
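A minimal sketch of the missing-match count, assuming an 8-bit grayscale ROI and a sentinel value marking hollowed-out disparities (the sentinel convention is an assumption, not the paper's):

```python
import numpy as np
import cv2

def count_missing_matches(gray_roi, disp_roi, invalid=0.0):
    """Count disparity holes that fall inside the OTSU-segmented object mask."""
    # gray_roi must be a single-channel uint8 image for OTSU thresholding.
    _, mask = cv2.threshold(gray_roi, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    holes = disp_roi == invalid          # hollowed-out points in the disparity map
    missing = holes & (mask > 0)         # holes that lie inside the object pixels
    return int(missing.sum())
```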

The false matching points eliminated in the matching post-processing step and the missing matching points determined in the 2D image area are taken as the 3D error statistics. The area S enclosed by the 3D error statistics over the independent-variable range [x1, x2] is taken as the iterative reference, as shown in Fig. 4.

Fig. 4. Iteration method of nonlinear optimization model.

According to the area of the curve intersection region, the optimization can be considered complete: the process starts from S0 and finally converges to S. Therefore, the descending condition Sn+1 < Sn is introduced; when S < T, the nonlinear optimization of the correction parameters is completed.

$$S = {S_{n + 1}} - {S_n} = \int_{{x_1}}^{{x_2}} {(e{q_2} - e{q_1})} - \int_{{x_1}}^{{x_2}} {(e{q_2} - {y_{e{q_2}}}\min )} ,$$

Here, Sn is the area enclosed by the redundancy-value curve; eq1 is the blue curve equation of the n-th iteration and eq2 the orange curve equation of the (n+1)-th iteration; yeq2min is the ordinate of the minimum point of the eq2 curve. When the optimization interval is [−0.25, 0.25], the optimization accuracy of the translation vector reaches 0.1 mm and that of the pitch angle reaches 1e−3 rad, realizing high-precision correction. The 3D error statistic is stable at about 0.2, so the default value of T is 1e−4 (1e−4 = 0.25e−3 × 2 × 0.2). After the offline work is completed, the rest of the work is done online.
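A minimal sketch of this stopping rule under the stated assumptions, using the trapezoidal rule on the two sampled error curves; the refit step in the usage comment is a hypothetical placeholder:

```python
import numpy as np

T = 1e-4                                       # default threshold from the text

def iteration_area(xs, eq1, eq2):
    """Area term S of formula (6) over the sampled interval [x1, x2]."""
    between = np.trapz(eq2 - eq1, xs)          # area between iterations n and n+1
    above_min = np.trapz(eq2 - eq2.min(), xs)  # area of eq2 above its own minimum
    return between - above_min

# Iterate while the descending condition S(n+1) < S(n) holds, stopping once
# the remaining area falls below T, e.g.:
#   while abs(iteration_area(xs, eq_prev, eq_next)) >= T:
#       eq_prev, eq_next = eq_next, refit_after_next_correction(...)
```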

2.2 Online stage

Using the drogue tracking proposed by Zhang [23], the vertical spacing Δv of the binocular tracking frames is brought into the flexible structure model to obtain the initial solution. The region of interest (ROI) is formed by expanding the tracking frame about its center by an equal scale factor. The feature points [24], edges, and segmented foreground in the ROI are constrained by horizontal epipolar lines and brought into the flexible structure model with the initial solution to obtain the 2D solution and complete the stereo rectification of the images. Displ and Dispr from the left and right viewpoints are obtained through BM3D [25], and the error of the 3D-point horizontal constraint is brought into the flexible structure model.

Taking the alignment error of the current parameters as the range of the independent variable, the parameters, the range, and the 3D redundancy information are brought into the nonlinear optimization model. The redundant information comprises two types of points in the depth map: false matching points in the red area and missing matching points in the yellow area, as shown in Fig. 5.

Fig. 5. Sketch map of stereo matching error points.

The 2D information is updated using the 3D correction results, and mutual-feedback iterations are performed; finally, the 3D topography and the estimated parameters of the current calibration model are output. To show the logical structure of OCSP more clearly, the pseudocode is given in Algorithm 1.

[Algorithm 1. Pseudocode of OCSP (provided as an image in the original article).]

3. Physical experiment

This section is divided into two parts: vibration simulation experiment and vibration platform experiment. The main purpose of the vibration simulation experiment is to verify the correction accuracy under static measurement. The main purpose of the vibration platform experiment is to verify the measurement accuracy under dynamic measurement.

3.1 Implementation details of vibration simulation

Binocular vision sensors are placed in parallel. The camera focal length is 28 mm, the baseline distance is about 497 mm, the vibration source is located in the middle of the sensor system, and the vibration intensity increases gradually with the number of experiments.

3.2 Vibration simulation experiment result

  • 1. The correction-accuracy verification results for the calibration model parameters under weak vibration are given in Table 1.
  • 2. The correction-accuracy verification results for the calibration model parameters under strong vibration are given in Table 2.
  • 3. The comparison results for image rectification are shown in Fig. 6. The comparison algorithms include MMI [16], TRMMIRI [17], GVF [26], and MSR [27].

In the case of weak vibration, the correction error is about 0.7–3.5% and the measurement error is 0.25–0.59%, an improvement of 55.05% over the uncorrected result. In the case of strong vibration, the correction error is about 1.8–5.6% and the measurement error is 0.76–8.4%, an improvement of 85.63% over the uncorrected result.

Fig. 6. Comparison of image rectification algorithms.

Table 1. Correction accuracy of calibration model parameters (weak vibration)

Table 2. Correction accuracy of calibration model parameters (strong vibration)

3.3 Implementation details of vibration platform

In the application experiment, the vibration platform simulates the vibration state of the UAV during aerial refueling, as shown in Fig. 7.

Fig. 7. Vibration Platform Equipment Diagram.

The connecting-rod baseline distance is about 90 cm. The objects to be tested are divided into a checkerboard target, a standard unit, and a human. The vibration frequency is 20–150 Hz, and the vibration level is divided into four grades: 0.25 g, 0.50 g, 0.75 g, and 1.0 g. To accurately capture the actual vibration coefficient of the binocular camera, an additional vibration sensor is used to detect the actual vibration acceleration, as shown in Fig. 8.

Fig. 8. Inspection Chart of Vibration Sensor.

3.4 Vibration platform experiment result

The measurement accuracy of the algorithm under different vibration accelerations is compared in Fig. 9. As the vibration acceleration increases, the measurement accuracy decreases. For the checkerboard target, as the acceleration increases from 0.25 g to 1 g, the average error difference is no more than 1.3%; for the standard unit, it is no more than 3.4%.

Fig. 9. Comparison chart of measurement accuracy under different vibration acceleration.

Under the same vibration acceleration, the measurement accuracy of the proposed algorithm is compared with that of MSR in Figs. 10 and 11. Because the other comparison algorithms fail during the measurement process, only MSR is compared. For all conditions and objects to be measured, the measurement accuracy of this method is higher than that of MSR. After correction, the measurement accuracy is doubled, and the effect for the unit is more significant.

Fig. 10. Comparison of target measurement accuracy under different methods.

Fig. 11. Comparison of unit measurement accuracy under different methods.

The total error of the target to be measured is less than 1.775%, and the total error of the unit to be measured is less than 3.16%.

4. Application experiment

In the application experiment, two UAVs cooperate to complete the aerial refueling application. Two GT1910C color cameras are installed symmetrically and parallel on both sides of the wing of the receiver aircraft. The resolution is 1920×1080, the baseline distance between the two cameras is about 42 cm, and the camera focal length is 50 mm. The image acquisition module used in practice is an FPGA Kintex-7 XC7K325T embedded hardware platform, and the acquisition frame rate is 30 frames/s.

In the practical application, the refueling drogue is made from an umbrella frame and 3D printing, the fixtures of the binocular sensor are also 3D printed, and the wing material is light and thin, so severe turbulence easily arises under airflow. The method proposed in this paper sits between online calibration and online correction: the online correction methods are limited by the premise that the baseline distance is unchanged, and the online calibration methods are limited by calibration targets and peripherals; these two restrictions make both unsuitable for aerial refueling. The proposed method is based on the flexible sensor structure and, relying only on the binocular vision sensor, can break through the bottlenecks of online correction and online calibration in the aerial refueling environment. However, it is currently mainly applicable to vibration states in the aviation field and cannot calibrate the intrinsic parameters of the sensor; it must build on traditional stereo calibration results. Therefore, this chapter focuses on comparing the image rectification error and the matching effect after online correction or online calibration. Taking one of the datasets applicable to all four comparison algorithms as an example, each frame pair in the dataset is used as a sample for online correction, and the comparison of image-pair rectification errors after sparse sampling is shown in Fig. 12.

Fig. 12. Comparison of image pair rectification accuracy.

The matching effect of the same stereo matching algorithm, after the image pairs are stereo-rectified by the different comparison algorithms, is shown in Fig. 13.

Fig. 13. Comparison of stereo matching effect after rectification.

Because this algorithm requires the combination of target detection and tracking, only the region near the target is reconstructed in stereo matching.

5. Conclusion

To address the failure of stereo vision sensor systems under vibration and turbulence, this paper proposes an online stereo vision measurement method based on correction of sensor structural parameters. The closed-loop mutual-feedback stereo vision measurement system and the correction model solve the dynamic measurement failure caused by environmental conditions such as vibration and turbulence. The method achieves optimal estimation of the model parameters with high accuracy at 30 frames per second: the calibration error is less than 6%, the measurement error is less than 8.5%, and the depth positioning error is below 1%. In the future, deep-learning embedded chips will be considered, and the application field will not be limited to autonomous aerial refueling.

Funding

National Science Fund for Distinguished Young Scholars (51625501); Aviation Science Fund (201946051001).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Cui, C. Min, and D. Feng, “Research on pose estimation for stereo vision measurement system by an improved method: Uncertainty weighted stereopsis pose solution method based on projection vector,” Opt. Express 28(4), 5470–5491 (2020). [CrossRef]  

2. X. Chen, J. Lin, Y. Sun, H. Ma, and J. Zhu, “Analytical solution of uncertainty with the gum method for a dynamic stereo vision measurement system,” Opt. Express 29(6), 8967–8984 (2021). [CrossRef]  

3. J. Sun, Y. Zhang, and X. Cheng, “A high precision 3D reconstruction method for bend tube axis based on binocular stereo vision,” Opt. Express 27(3), 2292–2304 (2019). [CrossRef]

4. S. Chen, H. Duan, Y. Deng, L. Cong, G. Zhao, and X. Yan, “Drogue pose estimation for unmanned aerial vehicle autonomous aerial refueling system based on infrared vision sensor,” Opt. Eng. 56(12), 124105 (2017). [CrossRef]  

5. T. Luo, D. Y. Chen, Z. Chen, Z. Dong, and R. Fan, “A voxel-based spatial elongation filtering method for airborne single-photon lidar data,” Opt. Express 28(3), 3922–3931 (2020). [CrossRef]  

6. J. H. Churnside and R. D. Marchbanks, “Calibration of an airborne oceanographic lidar using ocean backscattering measurements from space,” Opt. Express 27(8), A536–A542 (2019). [CrossRef]

7. L. Yuan, J. Xie, Z. He, Y. Wang, and J. Wang, “Optical design and evaluation of airborne prism-grating imaging spectrometer,” Opt. Express 27(13), 17686 (2019). [CrossRef]  

8. G. Xu, Y. Zhu, X. Li, and R. Chen, “Vision reconstruction based on planar laser with nonrestrictive installation position and posture relative to 2d reference,” Opt. Express 27(26), 38567–38578 (2019). [CrossRef]  

9. K. Dorozynska, V. Kornienko, E. Kristensson, and M. Alden, “A versatile, low-cost, snapshot multidimensional imaging approach based on structured light,” Opt. Express 28(7), 9572–9586 (2020). [CrossRef]  

10. Y. Wang and X. Wang, “On-line three-dimensional coordinate measurement of dynamic binocular stereo vision based on rotating camera in large fov,” Opt. Express 29(4), 4986–5005 (2021). [CrossRef]  

11. J. Žbontar and Y. LeCun, “Stereo matching by training a convolutional neural network to compare image patches,” J. Mach. Learn. Res. 17, 1–32 (2016).

12. I. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, and N. Navab, “Deeper depth prediction with fully convolutional residual networks,” in Fourth International Conference on 3D Vision (3DV) (IEEE, 2016).

13. R. Garg, V. K. Bg, G. Carneiro, and I. Reid, “Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue,” European Conference on Computer Vision. Springer, Cham (2016).

14. M. Poggi, F. Tosi, K. Batsos, P. Mordohai, and S. Mattoccia, “On the synergies between machine learning and binocular stereo for depth estimation from images: a survey,” in Proceedings of IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE, 2021), p. 1.

15. M. L. Arnø, J. M. Godhavn, and O. M. Aamo, “At-bit estimation of rock density from real-time drilling data using deep learning with online calibration,” J. Pet. Sci. Eng. 206, 109006 (2021). [CrossRef]

16. Y. Ling and S. Shen, “High-precision online markerless stereo extrinsic calibration,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots & Systems (IEEE, 2016), pp.1771–1778.

17. J. Muhovič and J. Perš, “Correcting decalibration of stereo cameras in self-driving vehicles,” Sensors 20(11), 3241 (2020). [CrossRef]  

18. R. Pathum, B. Seung-Hae, and P. Soon-Yong, “An efficient calibration method for a stereo camera system with heterogeneous lenses using an embedded checkerboard pattern,” J. Sens. 2017(8), 1–12 (2017). [CrossRef]  

19. M. Niu, K. Zhao, Z. Yang, and R. Wang, “Implementation of binocular dynamic measurement for vibrating object under the multithreaded programming,” Journal of Qinghai University (2016).

20. X. Chen, C. Wang, W. Zhang, K. Lan, and Q. Huang, “An integrated two-pose calibration method for estimating head-eye parameters of a robotic bionic eye,” IEEE Trans. Instrum. Meas. 69(4), 1664–1672 (2020). [CrossRef]  

21. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

22. H. Jiang, H. Zhai, and X. Yang, “3D shape measurement of translucent objects based on Fourier single-pixel imaging in projector-camera system,” Opt. Express 27(23), 33564–33574 (2019). [CrossRef]

23. J. Zhang and L. Zhen, “Tracking and Position of Drogue for Autonomous Aerial Refueling,” 2018 IEEE 3rd Optoelectronics Global Conference (OGC). IEEE (2018).

24. J. W. Bian, W. Y. Lin, Y. Liu, L. Zhang, and I. Reid, “Gms: grid-based motion statistics for fast, ultra-robust feature correspondence,” Int. J. Comput. Vis. 128(1), 4181 (2017).

25. J. Yao, D. Qi, Y. Yao, F. Cao, and S. Zhang, “Total variation and block-matching 3d filtering-based image reconstruction for single-shot compressed ultrafast photography,” Opt. Lasers Eng. 139, 106475 (2021). [CrossRef]  

26. Y. Guo and C. C. Lu, “Multi-modality image registration using mutual information based on gradient vector flow,” Springer Berlin Heidelberg (2006).

27. C. Ceylan, U. Heide, G. H. Bol, J. Lagendijk, and A. Kotte, “Assessment of rigid multi-modality image registration consistency using the multiple sub-volume registration (msr) method,” Phys. Med. Biol. 50(10), N101–N108 (2005). [CrossRef]  

