Abstract

This paper presents a panoramic stereo imaging system that uses a single camera coaxially combined with a fisheye lens and a convex mirror. It provides the design methodology, trade analysis, and experimental results using commercially available components. The trade study presents the design equations and the various tradeoffs that must be made during design. The system’s novelty is that it provides stereo vision over a full 360-degree horizontal field of view (FOV), while the vertical FOV is enlarged compared with existing systems. The system is calibrated with a computational model that accommodates non-single-viewpoint imaging, enabling 3D reconstruction in Euclidean space.

© 2011 OSA


References


  1. J. Kim and M. Chung, “SLAM with omni-directional stereo vision sensor,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (Institute of Electrical and Electronics Engineers, New York, 2003), pp. 442–447.
  2. H. Koyasu, J. Miura, and Y. Shirai, “Mobile robot navigation in dynamic environments using omnidirectional stereo,” in Proceedings of IEEE International Conference on Robotics and Automation (Institute of Electrical and Electronics Engineers, New York, 2003), pp. 893–898.
  3. Z. Zhu, D. R. Karuppiah, E. Riseman, and A. Hanson, “Adaptive panoramic stereovision,” Robot. Autom. Mag. 11, 69–78 (2004).
  4. F. B. Valenzuela and M. T. Torriti, “Comparison of panoramic stereoscopic sensors based on hyperboloidal mirrors,” in Proceedings of the 6th Latin American Robotics Symposium (Institute of Electrical and Electronics Engineers, New York, 2009), pp. 1–8.
  5. E. L. L. Cabral, J. C. S. Junior, and M. C. Hunold, “Omnidirectional stereo vision with a hyperbolic double lobed mirror,” in Proceedings of International Conference on Pattern Recognition, J. Kittler, M. Petrou, and M. Nixon, eds. (IEEE Computer Society, Los Alamitos, Calif., 2004), pp. 1–9.
  6. G. Jang, S. Kim, and I. Kweon, “Single-camera panoramic stereo system with single-viewpoint optics,” Opt. Lett. 31(1), 41–43 (2006).
  7. L. C. Su, C. J. Luo, and F. Zhu, “Obtaining obstacle information by an omnidirectional stereo vision system,” Int. J. Robot. Autom. 24, 222–227 (2009).
  8. G. Caron, E. Marchand, and E. M. Mouaddib, “3D model based pose estimation for omnidirectional stereovision,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (Institute of Electrical and Electronics Engineers, New York, 2009), pp. 5228–5233.
  9. S. Yi and N. Ahuja, “An omnidirectional stereo vision system using a single camera,” in Proceedings of International Conference on Pattern Recognition, Y. Y. Tang, S. P. Wang, G. Lorette, D. S. Yeung, and H. Yan, eds. (IEEE Computer Society, Los Alamitos, Calif., 2006), pp. 861–865.
  10. K. H. Tan, H. Hua, and N. Ahuja, “Multiview panoramic cameras using mirror pyramids,” IEEE Trans. Pattern Anal. Mach. Intell. 26(7), 941–946 (2004).
  11. G. Krishnan and S. K. Nayar, “Cata-fisheye camera for panoramic imaging,” in Proceedings of IEEE Workshop on Applications of Computer Vision (Institute of Electrical and Electronics Engineers, New York, 2008), pp. 1–8.
  12. S. Baker and S. K. Nayar, “A theory of single-viewpoint catadioptric image formation,” Int. J. Comput. Vis. 35(2), 175–196 (1999).
  13. J. P. Tardif, P. Sturm, M. Trudeau, and S. Roy, “Calibration of cameras with radially symmetric distortion,” IEEE Trans. Pattern Anal. Mach. Intell. 31(9), 1552–1566 (2009).
  14. R. Swaminathan, M. D. Grossberg, and S. K. Nayar, “Non-single viewpoint catadioptric cameras: geometry and analysis,” Int. J. Comput. Vis. 66(3), 211–229 (2006).
  15. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
  16. ACCOWLE Vision, http://www.accowle.com/english/
  17. H. Hirschmüller, P. R. Innocent, and J. Garibaldi, “Real-time correlation-based stereo vision with reduced border errors,” Int. J. Comput. Vis. 47(1/3), 229–246 (2002).
  18. Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008).




Figures (11)

Fig. 1

An illustration of the proposed single-camera panoramic stereo vision system.

Fig. 2

(a) Vertical FOV with respect to p and e. (b) Average IRI of the catadioptric image and the stereo baseline length with respect to p and e.

Fig. 3

Four examples of possible system designs.

Fig. 4

An illustration of the computational model and the epipolar geometry.

Fig. 5

Experimental setup of the single-camera panoramic stereo system.

Fig. 6

Image of an indoor environment captured by the system.

Fig. 7

A latitude-longitude unwrapping of the image shown in Fig. 6.

Fig. 8

(a) and (b) show a panoramic stereo image pair obtained by latitude-longitude unwrapping of the image in Fig. 6, with a set of matched points overlaid. (c) shows a visualization of the disparity values of the matched point pairs.

Fig. 9

Setup of the test environment for 3D reconstruction.

Fig. 10

Sample points on the LCD used in the 3D reconstruction experiment.

Fig. 11

Result of 3D reconstruction of the tested sample points.

Tables (1)


Table 1 Several examples of system design.

Equations (9)


$$A z^2 + r^2 + B z = C.$$

$$\begin{cases} z(t) = t \\ r(t) = \sqrt{C - B t - A t^2} \end{cases}$$
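As a quick numerical sanity check, the parametrization above can be verified against the implicit profile equation. The conic coefficients A, B, C below are arbitrary placeholders, not values from the paper.

```python
import math

def mirror_profile(A, B, C, t):
    """Point (r, z) on the conic mirror profile A*z**2 + r**2 + B*z = C,
    using the parametrization z(t) = t, r(t) = sqrt(C - B*t - A*t**2)."""
    radicand = C - B * t - A * t * t
    if radicand < 0:
        raise ValueError("t lies outside the mirror surface")
    return math.sqrt(radicand), t

# Placeholder coefficients (illustrative only):
A, B, C = -0.5, 2.0, 10.0
for t in (0.0, 0.5, 1.0, 2.0):
    r, z = mirror_profile(A, B, C, t)
    # Each recovered point must satisfy the implicit profile equation.
    assert abs(A * z**2 + r**2 + B * z - C) < 1e-9
```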
$$P(t,k) = S_r(t) + k\,V_i(t) = \bigl[\, p_z(t,k),\; p_r(t,k) \,\bigr]$$

$$\begin{cases} p_z(t,k) = N_z / D_z \\ p_r(t,k) = N_r / D_r \end{cases}$$

$$\begin{aligned}
N_z ={}& B^2\bigl(dk + t(3k + \Omega)\bigr) + 4\Bigl(Ck(-d + t - 2At) + C\Omega t \\
&+ At^2\bigl((1+A)dk + (-1+A)t(k+\Omega)\bigr)\Bigr) \\
&- 4B\Bigl(Ck + t\bigl(-(1+A)dk + t(k - 2Ak - (-1+A)\Omega)\bigr)\Bigr); \\
N_r ={}& \sqrt{C - t(B + At)}\,\Bigl(B^2(k+\Omega) - 4B\bigl(-dk + t(\Omega - A(k+\Omega))\bigr) \\
&+ 4\bigl(C(-k+\Omega) + At(2dk + (-1+A)t(k+\Omega))\bigr)\Bigr); \\
D_z ={}& D_r = \Omega\bigl(B^2 + 4(-1+A)Bt + 4(C + (-1+A)At^2)\bigr); \\
\Omega ={}& \sqrt{C + d^2 - 2dt - t\bigl(B + (-1+A)t\bigr)}.
\end{aligned}$$
$$\alpha_2 = \arctan\!\left( \frac{2Aw}{\,2Ad - B + \sqrt{B^2 - 4A(w^2 - C)}\,} \right).$$
$$d = -p + \frac{2 e^2 p}{e^2 - 1}.$$
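The design equation for d can be sketched numerically; note that it simplifies algebraically to d = p(e² + 1)/(e² − 1). The values of p and e below are arbitrary examples, not parameters from the paper.

```python
def viewpoint_distance(p, e):
    """d = -p + 2*e**2*p / (e**2 - 1), the design equation relating the
    hyperboloid parameter p and eccentricity e (e > 1) to the distance d."""
    return -p + 2 * e**2 * p / (e**2 - 1)

# Illustrative values only: p = 20, e = 1.5
d = viewpoint_distance(20.0, 1.5)
# Algebraically equivalent closed form: d = p * (e**2 + 1) / (e**2 - 1)
assert abs(d - 20.0 * (1.5**2 + 1) / (1.5**2 - 1)) < 1e-9
```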
$$\theta(c) = \begin{cases} \theta_m(c) = a_0 + a_1 c + a_2 c^2 + \cdots + a_{N_1} c^{N_1}, & c \le C_b \\ \theta_f(c) = b_0 + b_1 c + b_2 c^2 + \cdots + b_{N_2} c^{N_2}, & c > C_b \end{cases}$$

$$v(c) = \begin{cases} v_m(c) = g_0 + g_1 c + g_2 c^2 + \cdots + g_{N_3} c^{N_3}, & c \le C_b \\ v_f(c) = h_0 + h_1 c + h_2 c^2 + \cdots + h_{N_4} c^{N_4}, & c > C_b \end{cases}$$
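The two-branch calibration model above is a piecewise polynomial in the image radius c, switching between the mirror branch and the fisheye branch at the boundary C_b. A minimal evaluation sketch follows; the coefficient vectors and C_b are hypothetical placeholders, whereas in practice they come from the calibration procedure.

```python
def eval_poly(coeffs, c):
    """Evaluate a polynomial with coefficients [k0, k1, ...] at c (Horner's rule)."""
    result = 0.0
    for k in reversed(coeffs):
        result = result * c + k
    return result

def theta(c, a, b, Cb):
    """Piecewise mapping from image radius c to elevation angle theta:
    mirror-branch coefficients a for c <= Cb, fisheye-branch coefficients
    b for c > Cb (the model v(c) has the identical structure)."""
    return eval_poly(a if c <= Cb else b, c)

# Placeholder coefficients and branch boundary (illustrative only):
a = [0.0, 0.01]      # theta_m(c) = 0.01 * c
b = [-0.5, 0.015]    # theta_f(c) = -0.5 + 0.015 * c
Cb = 100.0
print(theta(50.0, a, b, Cb))   # mirror branch -> 0.5
print(theta(200.0, a, b, Cb))  # fisheye branch -> 2.5
```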
