Abstract

Visual odometry has received a great deal of attention over the past decade, but its fragility under rapid motion and in dynamic scenarios still prevents practical use. Here we present PALVO, which applies a panoramic annular lens (PAL) to visual odometry and greatly increases robustness to both cases. We modify the camera model for the PAL and design a dedicated initialization process based on the essential matrix. Our method estimates the camera poses through two-stage tracking, while building the local map using a probabilistic mapping method based on a Bayesian framework and feature correspondence search along the epipolar curve. Several experiments verify our algorithm, demonstrating that it is highly competitive in robustness to rapid motion and dynamic scenarios, while achieving the same level of accuracy as state-of-the-art visual odometry.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

References

  1. D. Scaramuzza and F. Fraundorfer, “Tutorial: Visual odometry,” IEEE Robot. Autom. Mag. 18(4), 80–92 (2011).
    [Crossref]
  2. Stereolabs, “ZED Stereo camera,” https://www.stereolabs.com/zed/ .
  3. L. Keselman, J. I. Woodfill, A. Grunnet-Jepsen, and A. Bhowmik, “Intel realsense stereoscopic depth cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops (IEEE, 2017), pp. 1–10.
  4. H. Chen, K. Wang, and K. Yang, “Improving realsense by fusing color stereo vision and infrared stereo vision for the visually impaired,” in Proceedings of International Conference on Information Science and System (ACM, 2018), pp. 142–146.
  5. K. Yang, K. Wang, H. Chen, and J. Bai, “Reducing the minimum range of a rgb-depth sensor to aid navigation in visually impaired individuals,” Appl. Optics 57(11), 2809–2819 (2018).
    [Crossref]
  6. M. R. U. Saputra, A. Markham, and N. Trigoni, “Visual SLAM and structure from motion in dynamic environments: A survey,” ACM Comput. Surv. 51(2), 1–36 (2018).
    [Crossref]
  7. W. Tan, H. Liu, Z. Dong, G. Zhang, and H. Bao, “Robust monocular SLAM in dynamic environments,” in Proceedings of IEEE International Symposium on Mixed and Augmented Reality (IEEE, 2013), pp. 209–218.
  8. Y. Luo, X. Huang, J. Bai, and R. Liang, “Compact polarization-based dual-view panoramic lens,” Appl. Optics 56(22), 6283–6287 (2017).
    [Crossref]
  9. T. Taketomi, H. Uchiyama, and S. Ikeda, “Visual SLAM algorithms: A survey from 2010 to 2016,” IPSJ Transactions on Computer Vision and Applications 9(1), 16 (2017).
    [Crossref]
  10. A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, “MonoSLAM: Real-time single camera SLAM,” IEEE Trans. Pattern Anal. Mach. Intell. 29(6), 1052–1067 (2007).
    [Crossref] [PubMed]
  11. G. Welch and G. Bishop, “An introduction to the Kalman Filter,” (University of North Carolina at Chapel Hill, 1995).
  12. G. Klein and D. Murray, “Parallel tracking and mapping for small AR workspaces,” in Proceedings of IEEE and ACM International Symposium on Mixed and Augmented Reality (IEEE, 2007), pp. 1–10.
  13. B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon, “Bundle adjustment – a modern synthesis,” in Proceedings of International Workshop on Vision Algorithms, B. Triggs, P.F. McLauchlan, R. I Hartley, and A.W. Fitzgibbon, eds. (Springer, 2000), pp. 298–372.
  14. R. Mur-Artal and J. D. Tardós, “ORB-SLAM: Tracking and mapping recognizable features,” presented at Robotics: Science and Systems (RSS) Workshop on Multi View Geometry in Robotics, Berkeley, USA, July 2014.
  15. R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, “ORB-SLAM: A versatile and accurate monocular slam system,” IEEE Trans. Robot. 31(5), 1147–1163 (2015).
    [Crossref]
  16. R. Mur-Artal and J. D. Tardos, “ORB-SLAM2: An open-source slam system for monocular, stereo, and rgb-d cameras,” IEEE Trans. Robot. 33(5), 1255–1262 (2017).
    [Crossref]
  17. R. A. Newcombe, S. J. Lovegrove, and A. J. Davison, “DTAM: Dense tracking and mapping in real-time,” in Proceedings of International Conference on Computer Vision (IEEE, 2011), pp. 2320–2327.
  18. J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-scale direct monocular slam,” in Proceedings of European Conference on Computer Vision, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, eds. (Springer, 2014), pp. 834–849.
  19. J. Engel, V. Koltun, and D. Cremers, “Direct sparse odometry,” IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 611–625 (2017).
    [Crossref] [PubMed]
  20. P. Bergmann, R. Wang, and D. Cremers, “Online photometric calibration of auto exposure video for realtime visual odometry and slam,” IEEE Robotics and Automation Letters 3(2), 627–634 (2018).
    [Crossref]
  21. C. Forster, M. Pizzoli, and D. Scaramuzza, “SVO: Fast semi-direct monocular visual odometry,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 2014), pp. 15–22.
  22. D. Gutierrez, A. Rituerto, J. Montiel, and J. J. Guerrero, “Adapting a real-time monocular visual slam from conventional to omnidirectional cameras,” in Proceedings of IEEE International Conference on Computer Vision Workshops (IEEE, 2011), pp. 343–350.
  23. H. Matsuki, L. von Stumberg, V. Usenko, J. Stückler, and D. Cremers, “Omnidirectional DSO: Direct sparse odometry with fisheye cameras,” IEEE Robotics and Automation Letters 3(4), 3693–3700 (2018).
    [Crossref]
  24. M. Lin, Q. Cao, and H. Zhang, “PVO: Panoramic visual odometry,” in Proceedings of IEEE International Conference on Advanced Robotics and Mechatronics (IEEE, 2018), pp. 491–496.
  25. Ricoh Company, “RICOH THETA V,” https://theta360.com/en/about/theta/v.html .
  26. Z. Huang, J. Bai, T. X. Lu, and X. Y. Hou, “Stray light analysis and suppression of panoramic annular lens,” Opt. Express 21(9), 10810–10820 (2013).
    [Crossref] [PubMed]
  27. Z. Huang, J. Bai, and X. Y. Hou, “Design of panoramic stereo imaging with single optical system,” Opt. Express 20(6), 6085–6096 (2012).
    [Crossref] [PubMed]
  28. D. Scaramuzza, A. Martinelli, and R. Siegwart, “A flexible technique for accurate omnidirectional camera calibration and structure from motion,” in Proceedings of IEEE International Conference on Computer Vision Systems (IEEE, 2006), pp. 45–55.
  29. D. Scaramuzza, A. Martinelli, and R. Siegwart, “A toolbox for easily calibrating omnidirectional cameras,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2007), pp. 5695–5701.
  30. H. C. Longuet-Higgins, “A computer algorithm for reconstructing a scene from two projections,” Nature 293(5828), 133 (1981).
    [Crossref]
  31. R. Hartley and A. Zisserman, Multiple view geometry in computer vision (Cambridge university press, 2003).
  32. B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proceedings of the 7th International Joint Conference on Artificial Intelligence, (Morgan Kaufmann Publishers Inc., 1981), pp. 674–679.
  33. H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” in Proceedings of European Conference on Computer Vision, A. Leonardis, H. Bischof, and A. Pinz, eds. (Springer, 2006), pp. 404–417.
  34. M. Trajković and M. Hedley, “Fast corner detection,” Image Vis. Comput. 16(2), 75–87 (1998).
    [Crossref]
  35. D. Caruso, J. Engel, and D. Cremers, “Large-scale direct SLAM for omnidirectional cameras,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2015), pp. 141–148.
  36. E. Marder-Eppstein, “Project tango,” in Proceedings of ACM SIGGRAPH 2016 Real-Time Live! (ACM, 2016), pp. 40:25.
  37. R. Cheng, K. Wang, S. Lin, W. Hu, K. Yang, X. Huang, H. Li, D. Sun, and J. Bai, “Panoramic annular localizer: Tackling the variation challenges of outdoor localization using panoramic annular images and active deep descriptors,” arXiv preprint arXiv:1905.05425 (2019).
  38. H. Chen, K. Wang, W. Hu, and L. Fei, “SORB: Improve ORB feature matching by semantic segmentation,” Proc. SPIE 10799, 1–7 (2018).
  39. K. Yang, L. M. Bergasa, E. Romera, and K. Wang, “Robustifying semantic cognition of traversability across wearable rgb-depth cameras,” Appl. Optics 58(12), 3141–3155 (2019).
    [Crossref]
  40. K. Yang, X. Hu, L. M. Bergasa, E. Romera, X. Huang, D. Sun, and K. Wang, “Can we PASS beyond the field of view? panoramic annular semantic segmentation for real-world surrounding perception,” in Proceedings of IEEE Intelligent Vehicles Symposium (IEEE, 2019), pp. 374–381.

Figures (12)

Fig. 1
Fig. 1 (a) The PAL transforms the cylindrical side view onto a planar annular image. P and u represent the object point and image point, respectively. (b) Schematic diagram of the PAL camera model. P1 and P2 are points in 3D space with projections u1 and u2 on the image plane D. Without loss of generality, P1 and P2 are points at the upper and lower margins of the FoV, respectively. The optical center is O, which is also the origin of the O-xyz coordinate system. S represents the unit sphere. f is the polynomial with coefficients to be determined.
Fig. 2
Fig. 2 The pipeline of PALVO.
Fig. 3
Fig. 3 The initialization module. (a) Initial feature correspondences are determined using Lucas-Kanade optical flow, and the essential matrix is used to compute the relative motion. (b) By triangulating the corresponding keypoints, an initial map can be created.
Fig. 4
Fig. 4 (a) Tracking the previous frame. We seek the relative motion between two consecutive frames that minimizes the photometric error. The colored dashed lines denote confirmed projections, while the gray ones denote projections not yet determined because the camera pose is still to be estimated. The same convention applies to Figs. 5 and 6. (b) Residual pattern. The pattern $\mathcal{N}_{u_i}$ used for photometric error computation [19].
Fig. 5
Fig. 5 Tracking the local map. (a) Keypoints of the local map that are visible from the current frame are projected onto the image, and the features are then aligned. (b) The pose and structure are optimized to minimize the reprojection error.
Fig. 6
Fig. 6 (a) Feature correspondence search along the epipolar curve. (b) The depth estimate is updated in a Bayesian framework.
Fig. 7
Fig. 7 (a) Experimental platform. The real-world dataset is collected with a TurtleBot robot carrying a PAL camera, a RealSense D435, a Project Tango device, and a laptop. (b) The images of the synthetic dataset are generated by a virtual camera that gives an aerial view of a football field. Left: the PAL image. Middle: the football field. Right: the perspective image.
Fig. 8
Fig. 8 The trajectories produced by PALVO, ORB-SLAM2, and SVO on the synthetic image sequences s1 (a), s4 (b), s5 (c) and on the real-world image sequences with loop closure r3 (d), r4 (e), and r5 (f). In (d)–(f), the starting point is indicated by a black dot, and the ending points of the trajectories produced by the different algorithms are represented by circles of different colors.
Fig. 9
Fig. 9 Experimental results. We run the three methods (PALVO, SVO, and monocular ORB-SLAM2) 10 times on each sequence and plot the number of sequences with a success rate greater than N% as a function of N. The dashed lines represent the control group (slow motion and static scenarios), while the solid lines represent the experimental groups with rapid motion (a) and dynamic scenarios (b). The higher the curve, the more robust the algorithm. Moreover, we gradually increase the velocity from the base speed to 10 times faster and run the three algorithms 40 times at each velocity. The relationship between success rate and velocity ratio is shown in (c).
Fig. 10
Fig. 10 Trajectories produced by PALVO, ORB-SLAM2, SVO, and the Tango device. Both SVO and ORB-SLAM2 failed to track when turning at a high angular velocity. The picture on the left shows the point cloud produced by the Tango device.
Fig. 11
Fig. 11 Trajectories produced by PALVO, ORB-SLAM2, SVO, and the Tango device. Both SVO and ORB-SLAM2 were interrupted when a pedestrian walked in front of the camera. The picture on the left shows the point cloud produced by the Tango device.
Fig. 12
Fig. 12 Trajectories produced by PALVO in field tests. There is a fast U-turn at A and a car passing by at B. Despite this, PALVO runs successfully and produces correct trajectories.

Tables (1)


Table 1 Accuracy test results.

Equations (28)


$P = \pi^{-1}(u) = \lambda\, g(u), \quad \lambda > 0$
$g(u) = \begin{bmatrix} u & v & f_b(\rho) \end{bmatrix}^T$
$f_b(\rho) = \alpha_0 + \alpha_1 \rho + \alpha_2 \rho^2 + \alpha_3 \rho^3 + \alpha_4 \rho^4 + \cdots$
$\rho = \sqrt{u^2 + v^2}$
$\lambda = \sqrt{\rho^2 + f_b^2(\rho)}$
$u = \pi(P) = f_p(\theta)\, h(P)$
$h(P) = \begin{bmatrix} \frac{x}{\sqrt{x^2 + y^2}} & \frac{y}{\sqrt{x^2 + y^2}} \end{bmatrix}^T$
$f_p(\theta) = \beta_0 + \beta_1 \theta + \beta_2 \theta^2 + \beta_3 \theta^3 + \beta_4 \theta^4 + \cdots$
$\theta = \arctan\left( \frac{z}{\sqrt{x^2 + y^2}} \right)$
$\frac{d\pi(P)}{dP} = \left[ \frac{\partial \pi}{\partial h} \right]_{1 \times 1} \left[ \frac{\partial h}{\partial P} \right]_{2 \times 3} + \left[ \frac{\partial \pi}{\partial f} \right]_{2 \times 1} \left[ \frac{\partial f}{\partial \theta} \right]_{1 \times 1} \left[ \frac{\partial \theta}{\partial P} \right]_{1 \times 3}$
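To make the camera model above concrete, the following minimal Python sketch implements the back-projection $\pi^{-1}$ (pixel to a bearing on the unit sphere S) and the projection $\pi$ (3D point to pixel) using the polynomials $f_b$ and $f_p$. The function names and the numeric coefficients are placeholders of ours; in practice the coefficients are obtained with an omnidirectional calibration procedure such as the toolbox in [28,29].

```python
import numpy as np

# Placeholder polynomial coefficients (ours, for illustration only); real values come
# from calibrating the PAL camera, e.g., with the omnidirectional toolbox of [29].
ALPHA = np.array([-160.0, 0.0, 6.0e-4, -1.0e-7, 2.0e-10])  # back-projection polynomial f_b
BETA = np.array([260.0, 150.0, 20.0, 2.0, 0.1])            # projection polynomial f_p

def back_project(u, v, alpha=ALPHA):
    """pi^{-1}: pixel (u, v) -> bearing on the unit sphere S.
    Since P = lambda * g(u) for any lambda > 0, we simply normalize g(u)."""
    rho = np.hypot(u, v)                       # rho = sqrt(u^2 + v^2)
    f_b = np.polyval(alpha[::-1], rho)         # f_b(rho) = a0 + a1*rho + a2*rho^2 + ...
    g = np.array([u, v, f_b])
    return g / np.linalg.norm(g)

def project(P, beta=BETA):
    """pi: 3D point P = (x, y, z) in the camera frame -> pixel (u, v).
    Assumes P is not on the optical axis (x^2 + y^2 > 0)."""
    x, y, z = P
    r = np.hypot(x, y)
    theta = np.arctan2(z, r)                   # theta = arctan(z / sqrt(x^2 + y^2))
    f_p = np.polyval(beta[::-1], theta)        # f_p(theta) = b0 + b1*theta + ...
    return f_p * np.array([x / r, y / r])      # u = f_p(theta) * h(P)
```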
$T_{k,w}\, P_i = P_i^{k} = d\, P_i^{s_k}$
$T_{k,w} = T_{k,k-1}\, T_{k-1,w}$
$P_i^{2\,T} E\, P_i^{1} = 0$
$d_2 d_1\, P_i^{s_2\,T} E\, P_i^{s_1} = 0 \;\Rightarrow\; P_i^{s_2\,T} E\, P_i^{s_1} = 0$
$\hat{T}_{k,k-1} = T_{k-1,k-2}$
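The epipolar relations above reduce the constraint on 3D points to the same constraint on unit-sphere bearings, which is what the essential-matrix-based initialization solves. Below is a hedged sketch of a linear 8-point estimate of E from bearing correspondences; it only illustrates the constraint $P_i^{s_2\,T} E\, P_i^{s_1} = 0$ and is not necessarily the exact formulation (e.g., outlier handling) used in the paper.

```python
import numpy as np

def essential_from_bearings(B1, B2):
    """Linear 8-point estimate of E from N >= 8 unit-sphere bearing correspondences.
    B1, B2: (N, 3) arrays of bearings in the first and second frame; each pair
    satisfies b2^T E b1 = 0."""
    A = np.stack([np.outer(b2, b1).ravel() for b1, b2 in zip(B1, B2)])  # (N, 9) design matrix
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)                   # null-space solution (up to scale)
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt   # enforce the essential-matrix singular values
```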
$\delta I(u_i, \hat{T}_{k,k-1}) = I_k(u_i') - I_{k-1}(u_i)$
$u_i' = \pi\left( \hat{T}_{k,k-1}\, d_{u_i}\, \pi^{-1}(u_i) \right)$
$E_{u_i}(\hat{T}_{k,k-1}) = \frac{1}{2} \sum_{u_i \in \mathcal{N}_{u_i}} \left\| \delta I(u_i, \hat{T}_{k,k-1}) \right\|^2$
$\hat{T}_{k,k-1} = \arg\min_{\hat{T}_{k,k-1}} \sum_i E_{u_i}(\hat{T}_{k,k-1})$
$J_{u_i} = \frac{\partial I_k}{\partial u_i'} \cdot \frac{\partial u_i'}{\partial P_i^k} \cdot \frac{\partial P_i^k}{\partial \hat{T}_{k,k-1}}$
$P_i^k = \hat{T}_{k,k-1}\, d_{u_i}\, \pi^{-1}(u_i)$
$\hat{T}_{k,w} = \hat{T}_{k,k-1}\, T_{k-1,w}$
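The equations above define the first tracking stage: the photometric error of each feature under a candidate relative pose $\hat{T}_{k,k-1}$, summed and minimized over all features. The sketch below evaluates that residual for a single pixel, reusing the hypothetical project/back_project helpers from the camera-model sketch; the full method aggregates the residual over the patch pattern $\mathcal{N}_{u_i}$ of Fig. 4(b) and minimizes it iteratively using the Jacobian $J_{u_i}$.

```python
import numpy as np

def photometric_residual(I_k, I_km1, u_i, d_ui, T_k_km1):
    """delta I(u_i, T) = I_k(u_i') - I_{k-1}(u_i) for one pixel u_i with depth d_ui.
    T_k_km1 is a 4x4 homogeneous transform from frame k-1 to frame k."""
    P_km1 = d_ui * back_project(*u_i)                # 3D point in frame k-1: d_{u_i} * pi^{-1}(u_i)
    P_k = (T_k_km1 @ np.append(P_km1, 1.0))[:3]      # P_i^k = T_{k,k-1} d_{u_i} pi^{-1}(u_i)
    u_i_prime = project(P_k)                         # u_i' = pi(P_i^k)
    return bilinear(I_k, u_i_prime) - bilinear(I_km1, np.asarray(u_i, float))

def bilinear(img, uv):
    """Bilinear intensity lookup at sub-pixel location uv = (u, v), image indexed as img[v, u]."""
    u, v = uv
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * img[v0, u0] + du * (1 - dv) * img[v0, u0 + 1]
            + (1 - du) * dv * img[v0 + 1, u0] + du * dv * img[v0 + 1, u0 + 1])
```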
$u_i' = \arg\min_{\hat{u}_i'} \frac{1}{2} \sum_{\hat{u}_i' \in \mathcal{N}_{\hat{u}_i'}} \left\| I_k(\hat{u}_i') - I_r\left( \pi\left( H\, \pi^{-1}(\hat{u}_i') \right) \right) \right\|^2$
$T_{k,w} = \arg\min_{\hat{T}_{k,w}} \frac{1}{2} \sum_i \left\| \pi\left( \hat{T}_{k,w}\, P_i \right) - u_i' \right\|^2$
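The second stage aligns the features in the image (first equation above) and then refines the pose by minimizing the reprojection error (second equation). The sketch below performs that refinement with a generic least-squares solver; the 6-vector pose parametrization (axis-angle rotation plus translation) and the robust loss are our assumptions rather than the paper's exact optimizer.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(xi, points_w, features, project_fn):
    """xi = [rx, ry, rz, tx, ty, tz]; stacked residuals pi(T(xi) P_i) - u_i' over all points."""
    R = Rotation.from_rotvec(xi[:3]).as_matrix()
    t = xi[3:]
    res = []
    for P_w, u_meas in zip(points_w, features):
        res.extend(project_fn(R @ P_w + t) - u_meas)
    return np.asarray(res)

def refine_pose(xi0, points_w, features, project_fn):
    """Minimize the reprojection error over the camera pose T_{k,w}."""
    sol = least_squares(reprojection_residuals, xi0, args=(points_w, features, project_fn),
                        loss="huber")  # robust loss to down-weight outlier correspondences
    return sol.x
```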
$P_L(\alpha) = \alpha\, P_{\max}^{s} + (1 - \alpha)\, P_{\min}^{s}, \quad \alpha \in [0, 1]$
$N_{\mathrm{sample}} = \frac{\pi}{2} \left\| \pi(P_{\max}^{s}) - \pi(P_{\min}^{s}) \right\|$
$\hat{d}_{\mathrm{new}} = \frac{\sigma_{\mathrm{old}}^2\, d_{\mathrm{tri}} + \sigma_{\mathrm{tri}}^2\, d_{\mathrm{old}}}{\sigma_{\mathrm{old}}^2 + \sigma_{\mathrm{tri}}^2}$
$\sigma_{\mathrm{new}}^2 = \frac{\sigma_{\mathrm{old}}^2\, \sigma_{\mathrm{tri}}^2}{\sigma_{\mathrm{old}}^2 + \sigma_{\mathrm{tri}}^2}$
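The last two equations are the standard Gaussian fusion of the previous depth estimate with a newly triangulated measurement, which is the core of the Bayesian depth update in Fig. 6(b). A minimal sketch (variable names are ours):

```python
def fuse_depth(d_old, var_old, d_tri, var_tri):
    """Fuse the current depth estimate (d_old, var_old) with a new triangulated
    measurement (d_tri, var_tri) as a product of two Gaussians."""
    d_new = (var_old * d_tri + var_tri * d_old) / (var_old + var_tri)
    var_new = (var_old * var_tri) / (var_old + var_tri)
    return d_new, var_new

# Example: a confident prior pulls the fused estimate toward itself.
# fuse_depth(2.0, 0.01, 2.5, 0.09) -> (2.05, 0.009)
```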