Abstract

Calibration of a vehicle camera is a key technology for advanced driver assistance systems (ADAS). This paper presents a novel method for estimating the orientation of a camera mounted on a moving vehicle. By exploiting the characteristics of vehicle cameras and the driving environment, we detect three orthogonal vanishing points that serve as a basis for the imaging geometry. The proposed method consists of three steps: i) detection of line segments, their projection onto the Gaussian sphere, and extraction of the plane normals; ii) estimation of the vanishing point along the optical axis using the linear Hough transform; and iii) voting for the remaining two vanishing points using a circular histogram. Because the three vanishing points are estimated sequentially, the method improves both accuracy and stability in practical driving situations. In addition, the orientation can be estimated rapidly because the voting space is reduced to a 2D plane at each stage. As a result, the proposed method quickly and accurately estimates the orientation of a vehicle camera under normal driving conditions.
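
To make this three-stage structure concrete, the following Python/NumPy sketch implements stages i) and ii) under stated assumptions: line segments come from OpenCV's LSD detector, and every function name, bin count, and threshold is an illustrative choice, not the authors' implementation (stage iii is sketched after the Equations list below).

import numpy as np
import cv2  # assumes an OpenCV build that includes the LSD detector

def detect_segments(gray):
    # Stage i): line segment detection (here via LSD, as in Fig. 6).
    lsd = cv2.createLineSegmentDetector()
    lines = lsd.detect(gray)[0]
    return lines.reshape(-1, 4)  # (N, 4) rows of (x1, y1, x2, y2)

def plane_normals(segments, K):
    # Back-project segment endpoints onto the Gaussian sphere and take the
    # unit normal of each line's interpretation plane, n_i = p_s x p_e.
    k_inv = np.linalg.inv(K)
    ones = np.ones((len(segments), 1))
    p_s = (k_inv @ np.hstack([segments[:, :2], ones]).T).T
    p_e = (k_inv @ np.hstack([segments[:, 2:], ones]).T).T
    n = np.cross(p_s, p_e)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return n * np.sign(n[:, 1:2])  # keep a consistent hemisphere (n_y >= 0)

def estimate_vz(normals, rho_bins=200, rho_max=2.0):
    # Stage ii): project the normals onto the y = 1 face of the unit cube
    # and find the strongest line there with a coarse linear Hough vote.
    u = normals / normals[:, 1:2]
    u = u[(np.abs(u[:, 0]) <= 1.0) & (np.abs(u[:, 2]) <= 1.0)]
    x, z = u[:, 0], u[:, 2]
    acc = np.zeros((180, rho_bins), dtype=int)
    for t_idx in range(180):
        t = np.deg2rad(t_idx)
        rho = x * np.cos(t) + z * np.sin(t)
        r_idx = np.clip(((rho + rho_max) / (2 * rho_max) * rho_bins).astype(int),
                        0, rho_bins - 1)
        np.add.at(acc, (t_idx, r_idx), 1)
    t_idx, r_idx = np.unravel_index(acc.argmax(), acc.shape)
    t = np.deg2rad(t_idx)
    rho = (r_idx + 0.5) / rho_bins * 2 * rho_max - rho_max
    # Two points on the winning line, lifted back to the y = 1 face; the
    # normal of the plane they span with the origin is the Z vanishing direction.
    p0 = np.array([rho * np.cos(t), 1.0, rho * np.sin(t)])
    d = np.array([-np.sin(t), 0.0, np.cos(t)])
    vz = np.cross(p0 - d, p0 + d)
    return vz / np.linalg.norm(vz)

Voting over a projected 2D face of the unit cube rather than over the full sphere is what keeps each stage fast, which is the speed argument made in the abstract.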

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. P. H. Yuan, K. F. Yang, and W. H. Tsai, “Real-time security monitoring around a video surveillance vehicle with a pair of two-camera omni-imaging devices,” IEEE Trans. Veh. Technol. 60(8), 3603–3614 (2011).
  2. Y. L. Chang, L. Y. Hsu, and O. T. C. Chen, “Auto-calibration around-view monitoring system,” in Proceedings of 2013 IEEE 78th Vehicular Technology Conference (VTC Fall) (IEEE, 2013), pp. 1–5.
  3. W. Kaddah, Y. Ouerhani, A. Alfalou, M. Desthieux, C. Brosseau, and C. Gutierrez, “Road marking features extraction using the VIAPIX® system,” Opt. Commun. 371, 117–127 (2016).
  4. H. Zhou, D. Zou, L. Pei, R. Ying, P. Liu, and W. Yu, “StructSLAM: Visual SLAM with building structure lines,” IEEE Trans. Veh. Technol. 64(4), 1364–1375 (2015).
  5. S. Park, K. Kim, S. Yu, and J. Paik, “Contrast enhancement for low-light image enhancement: A survey,” IEIE Trans. Smart Process. Comput. 7(1), 36–48 (2018).
  6. O. Stankiewicz and M. Domański, “Depth map estimation based on maximum a posteriori probability,” IEIE Trans. Smart Process. Comput. 7(1), 49–61 (2018).
  7. M. Shin, J. Jang, and J. Paik, “Calibration of a surveillance camera using a pedestrian homology-based rectangular model,” IEIE Trans. Smart Process. Comput. 7(4), 305–312 (2018).
  8. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
  9. J. Heikkila and O. Silven, “A four-step camera calibration procedure with implicit image correction,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 1106–1112.
  10. B. Caprile and V. Torre, “Using vanishing points for camera calibration,” Int. J. Comput. Vis. 4(2), 127–139 (1990).
  11. S. T. Barnard, “Interpreting perspective images,” Artif. Intell. 21(4), 435–462 (1983).
  12. H. Wildenauer and A. Hanbury, “Robust camera self-calibration from monocular images of Manhattan worlds,” in Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 2831–2838.
  13. M. Hornáček and S. Maierhofer, “Extracting vanishing points across multiple views,” in Proceedings of 2011 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 953–960.
  14. Z. Wu, W. Fu, R. Xue, and W. Wang, “A novel line space voting method for vanishing-point detection of general road images,” Sensors 16(7), 948–960 (2016).
  15. W. Elloumi, S. Treuillet, and R. Leconge, “Real-time camera orientation estimation based on vanishing point tracking under Manhattan world assumption,” J. Real-Time Image Process. 13(4), 1–16 (2014).
  16. J. P. Tardif, “Non-iterative approach for fast and accurate vanishing point detection,” in Proceedings of 2009 IEEE 12th International Conference on Computer Vision (IEEE, 2009), pp. 1250–1257.
  17. Y. Xu, S. Oh, and A. Hoogs, “A minimum error vanishing point detection approach for uncalibrated monocular images of man-made environments,” in Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1376–1383.
  18. C. H. Chang and N. Kehtarnavaz, “Fast J-linkage algorithm for camera orientation applications,” J. Real-Time Image Process. 14(4), 823–832 (2018).
  19. M. J. Magee and J. K. Aggarwal, “Determining vanishing points from perspective images,” Comput. Vision, Graph. Image Process. 26(2), 256–267 (1984).
  20. M. Antunes and J. P. Barreto, “A global approach for the detection of vanishing points and mutually orthogonal vanishing directions,” in Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1336–1343.
  21. J. C. Bazin and M. Pollefeys, “3-line RANSAC for orthogonal vanishing point detection,” in Proceedings of 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2012), pp. 4282–4287.
  22. J. C. Bazin, Y. Seo, C. Demonceaux, P. Vasseur, K. Ikeuchi, I. Kweon, and M. Pollefeys, “Globally optimal line clustering and vanishing point estimation in Manhattan world,” in Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 638–645.
  23. J. C. Bazin, Y. Seo, and M. Pollefeys, “Globally optimal consensus set maximization through rotation search,” in Proceedings of Asian Conference on Computer Vision (Springer, 2012), pp. 539–551.
  24. X. Lu, J. Yao, H. Li, and Y. Liu, “2-line exhaustive searching for real-time vanishing point estimation in Manhattan world,” in Proceedings of 2017 IEEE Winter Conference on Applications of Computer Vision (IEEE, 2017), pp. 345–353.
  25. J. Lezama, G. Randall, and R. G. von Gioi, “Vanishing point detection in urban scenes using point alignments,” Image Process. On Line 7, 131–164 (2017).
  26. T. Kroeger, D. Dai, and L. Van Gool, “Joint vanishing point extraction and tracking,” in Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 2449–2457.
  27. J. Lee and K. Yoon, “Real-time joint estimation of camera orientation and vanishing points,” in Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 1866–1874.
  28. W. Elloumi, S. Treuillet, and R. Leconge, “Tracking orthogonal vanishing points in video sequences for a reliable camera orientation in Manhattan world,” in Proceedings of 2012 5th International Congress on Image and Signal Processing (IEEE, 2012), pp. 128–132.
  29. R. Guo, K. Peng, D. Zhou, and Y. Liu, “Robust visual compass using hybrid features for indoor environments,” Electronics 8(2), 220–236 (2019).
  30. J. M. Coughlan and A. L. Yuille, “Manhattan world: Compass direction from a single image by Bayesian inference,” in Proceedings of the 7th IEEE International Conference on Computer Vision (IEEE, 1999), pp. 941–947.
  31. W. J. Kim and S. W. Lee, “Depth estimation with Manhattan world cues on a monocular image,” IEIE Trans. Smart Process. Comput. 7(3), 201–209 (2018).
  32. M. E. Antone and S. Teller, “Automatic recovery of relative camera rotations for urban scenes,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2000), pp. 282–289.
  33. R. G. von Gioi, J. Jakubowicz, J. M. Morel, and G. Randall, “LSD: a line segment detector,” Image Process. On Line 2, 35–55 (2012).
  34. T. Tuytelaars, M. Proesmans, and L. Van Gool, “The cascaded Hough transform as support for grouping and finding vanishing points and lines,” in Proceedings of International Workshop on Algebraic Frames for the Perception-Action Cycle (Springer, 1997), pp. 278–289.


Supplementary Material (3)

» Visualization 1: Video 1 consists of 600 frames captured in a two-lane road environment. In this first set, not all of the extracted lines correspond to the Manhattan world, especially in regions containing trees and bushes.
» Visualization 2: Video 2 consists of 600 frames captured on a six-lane road. Many lines satisfying the Manhattan world assumption appear not only on the road but also in the background, which contains many buildings.
» Visualization 3: The third set, Video 3, consists of 1200 frames. It contains a large number of road markings, together with many lines from street lamps, banners, and buildings.



Figures (12)

Fig. 1 Camera geometry in a vehicle: (a) six extrinsic parameters, including roll, pitch, yaw, and the three-dimensional coordinates, and (b) the origin and axes of the vehicle's 3D coordinate system. The Y-axis indicates the upward direction from the origin.
Fig. 2 Relationship between the image plane of a camera and the corresponding Gaussian sphere. XG, YG, and ZG represent the axes of the Gaussian space, and XI and YI represent the axes of the image plane.
Fig. 3 Vanishing direction (VD) representations using the Gaussian sphere model: (a) VD representation using plane normals and (b) VD representation using the intersection of great circles.
Fig. 4 Relationship between the Manhattan world and the Gaussian sphere: (a) a street image that satisfies the Manhattan world assumption, with detected line segments, and (b) the corresponding plane normal vectors distributed on the Gaussian sphere.
Fig. 5 Block diagram of the proposed camera orientation estimation algorithm.
Fig. 6 Line segment detection result: (a) input image and (b) the result of LSD.
Fig. 7 Relationship between the Gaussian sphere and the unit cube.
Fig. 8 Vanishing direction estimation using the linear Hough transform: (a) distribution of unit plane normals on the Gaussian sphere, (b) projection of the plane normals onto a 2D plane of the unit cube, (c) estimation of the strongest line by the linear Hough transform, and (d) the resulting VD estimate.
Fig. 9 Estimation of the X- and Y-axes: (a) distribution of plane normals on the spherical surface with the estimated Z-axis and (b) the VD candidate c(ω).
Fig. 10 Estimation of the X- and Y-axes: (a) vectors rotated onto the XY-plane, (b) the corresponding circular histogram, and (c) the finally estimated VD.
Fig. 11 Test videos acquired from the real world: (a) input frames of three videos captured while driving straight ahead (see Visualization 1, Visualization 2, and Visualization 3) and (b) line extraction results from (a).
Fig. 12 Classification results using the camera orientation angles estimated from real video: (a) the 160th and 230th input frames of the first video and the 400th frame of the second video, (b) the corresponding classification results, (c) the 530th frame of the second video and the 120th and 500th frames of the third video, and (d) the corresponding classification results.

Tables (4)

Table 1 Evaluated standard deviation of camera orientation estimation using four different methods.
Table 2 Orthogonality between vanishing directions estimated by three different methods.
Table 3 Running time of each part of the proposed algorithm (sec/frame).
Table 4 Comparison of running time (sec/frame).

Equations (23)

Equations on this page are rendered with MathJax.

$$\mathbf{x}_i = \mathbf{P}\,\mathbf{X}_W,$$

$$\mathbf{P} = \mathbf{K}\,[\mathbf{R}\,|\,\mathbf{T}] = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \left[\begin{array}{ccc|c} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{array}\right],$$

$$\mathbf{x}_g = \mathbf{K}^{-1}\mathbf{x}_i.$$

$$\mathbf{x}_g = \mathbf{K}^{-1}\mathbf{P}\,\mathbf{X}_W = [\mathbf{R}\,|\,\mathbf{T}]\,\mathbf{X}_W.$$

$$\mathbf{V} = [\mathbf{R}\,|\,\mathbf{T}] = \mathbf{R}\,[\mathbf{v}_x\ \ \mathbf{v}_y\ \ \mathbf{v}_z]^T,$$

$$\mathbf{V}_c = \mathbf{R}\,\mathbf{I},$$

$$\theta = \arctan\!\left(\frac{f_x(x,y)}{f_y(x,y)}\right),$$

$$f_x(x,y) = \begin{bmatrix} -1 & 1 \\ -1 & 1 \end{bmatrix} * f(x,y), \qquad f_y(x,y) = \begin{bmatrix} -1 & -1 \\ 1 & 1 \end{bmatrix} * f(x,y),$$

$$\mathbf{l}_G^i = \mathbf{K}^{-1}\mathbf{l}^i,$$

$$\mathbf{n}_i = \frac{n_{y_i}}{|n_{y_i}|}\,\mathbf{n}_i,$$

$$\mathbf{n}_i = \frac{\mathbf{p}_s^i \times \mathbf{p}_e^i}{\left|\mathbf{p}_s^i \times \mathbf{p}_e^i\right|},$$

$$\mathbf{u}_i = \left\{\, \mathbf{n}_i / n_{y_i} \;\middle|\; |n_{x_i}| \le 1,\ |n_{z_i}| \le 1 \,\right\}.$$

$$\mathbf{V}_Z = \mathbf{v}_1 \times \mathbf{v}_2.$$

$$\mathbf{v}_1 = \begin{bmatrix} -1 \\ 1 \\ -\theta_{\max} + \mu_{\max} \end{bmatrix}, \qquad \mathbf{v}_2 = \begin{bmatrix} 1 \\ 1 \\ \theta_{\max} + \mu_{\max} \end{bmatrix}.$$

$$\mathbf{m}_i = \frac{\mathbf{V}_Z \times \mathbf{n}_i}{\left|\mathbf{V}_Z \times \mathbf{n}_i\right|}.$$

$$\mathbf{m}_i' = \mathbf{R}_C\,\mathbf{m}_i,$$

$$\mathbf{R}_C = \mathbf{R}_{Cx}\mathbf{R}_{Cz} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} \cos\beta & -\sin\beta & 0 \\ \sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix},$$

$$\alpha = \arccos(v_{z_z}), \qquad \beta = \arctan(v_{z_x}/v_{z_y}).$$

$$\omega_{\max} = \operatorname*{arg\,max}_{\omega}\, h(\omega),$$

$$h(\omega) = \sum_{i=0}^{3} c(\omega + 90i).$$

$$\mathbf{V}_x = \mathbf{R}_C\,\mathbf{V}_{\mathrm{rot}},$$

$$\mathbf{V}_y = \mathbf{V}_x \times \mathbf{V}_z.$$

$$e = \mathbf{V}_x \cdot \mathbf{V}_y + \mathbf{V}_y \cdot \mathbf{V}_z + \mathbf{V}_x \cdot \mathbf{V}_z.$$
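
As a worked illustration of the final group of equations (the rotation R_C, the circular histogram h(ω), and V_y = V_x × V_z), the following Python/NumPy sketch mirrors them directly; the function names and the 1-degree bin width are hypothetical choices, not the authors' code.

import numpy as np

def rotation_to_z(v_z):
    # R_C = R_Cx R_Cz with alpha = arccos(v_zz) and beta = arctan(v_zx / v_zy),
    # following the equations above.
    alpha = np.arccos(v_z[2])
    beta = np.arctan2(v_z[0], v_z[1])
    r_cx = np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(alpha), -np.sin(alpha)],
                     [0.0, np.sin(alpha), np.cos(alpha)]])
    r_cz = np.array([[np.cos(beta), -np.sin(beta), 0.0],
                     [np.sin(beta), np.cos(beta), 0.0],
                     [0.0, 0.0, 1.0]])
    return r_cx @ r_cz

def vote_xy(normals, v_z, bin_deg=1):
    # m_i = (V_Z x n_i) / |V_Z x n_i|: candidate directions orthogonal to V_Z.
    m = np.cross(v_z, normals)
    m /= np.linalg.norm(m, axis=1, keepdims=True)
    # Rotate the candidates onto the XY-plane (m_i' = R_C m_i).
    m = (rotation_to_z(v_z) @ m.T).T
    # Circular histogram folded into one quadrant: h(w) = sum_{i=0..3} c(w + 90 i).
    ang = np.degrees(np.arctan2(m[:, 1], m[:, 0])) % 360.0
    c, _ = np.histogram(ang, bins=360 // bin_deg, range=(0.0, 360.0))
    h = c.reshape(4, -1).sum(axis=0)
    return np.argmax(h) * bin_deg  # w_max, angle of the X vanishing direction

V_x is then the unit vector at angle ω_max mapped back from the rotated frame, V_y follows from the cross product with V_z, and the orthogonality error e above serves as a sanity check on the result.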
