Abstract

This paper proposes an approach to rectifying two images of the same scene, captured by cameras at general positions, so that the results form a stereo pair satisfying the constraints of stereoscopic visualization platforms. This differs from conventional image rectification research, which primarily focuses on making stereo matching easier and pays little attention to 3D viewing. The novel derivation of the rectification algorithm also has an intuitive physical meaning that is not available from conventional approaches. Practical issues related to wide-baseline rectification and the operating range of the proposed method are analyzed. Both simulated and real-data experiments are used to assess the performance of the proposed algorithm.

© 2008 Optical Society of America


References


  1. P. May, “A survey of 3-D display technologies,” Inf. Disp. 32, 28-33 (2005).
  2. http://sharp-world.com/products/device/catalog/pdf/lcd200807_e.pdf (October 2008).
  3. A. Fusiello, E. Trucco, and A. Verri, “A compact algorithm for rectification of stereo pairs,” Mach. Vision Appl. 12, 16-22 (2000).
  4. N. Ayache and C. Hansen, “Rectification of images for binocular and trinocular stereovision,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 1988), pp. 11-16.
  5. N. Ayache and C. Hansen, “Trinocular stereo vision for robotics,” IEEE Trans. Pattern Anal. Mach. Intell. 13, 73-85 (1991).
  6. R. Hartley, “Theory and practice of projective rectification,” Int. J. Comput. Vis. 35, 1-16 (1999).
  7. F. Isgrò and E. Trucco, “On robust rectification for uncalibrated images,” in Proceedings of International Conference on Image Analysis and Processing (ICIAP, 1999), pp. 297-302.
  8. C. Loop and Z. Zhang, “Computing rectifying homographies for stereo vision,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1999), pp. 125-131.
  9. M. Pollefeys, R. Koch, and L. Van Gool, “A simple and efficient rectification method for general motion,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 1999), pp. 496-501.
  10. Z. Chen, C. Wu, and H. T. Tsui, “A new image rectification algorithm,” Pattern Recogn. Lett. 24, 251-260 (2003).
  11. D. V. Papadimitriou and T. J. Dennis, “Epipolar line estimation and rectification for stereo image pairs,” IEEE Trans. Image Process. 5, 672-676 (1996).
  12. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge U. Press, 2003).
  13. H. H. P. Wu, Y. H. Yu, and W. C. Chen, “Projective rectification based on relative modification and size extension for stereo image pairs,” IEE Proc. Vision Image Signal Process. 152, 623-633 (2005).
  14. J. Mallon and P. F. Whelan, “Projective rectification from the fundamental matrix,” Image Vis. Comput. 23, 643-650 (2005).
  15. R. I. Hartley, “In defense of the eight-point algorithm,” IEEE Trans. Pattern Anal. Mach. Intell. 19, 580-593 (1997).
  16. Z. Zhang, “Determining the epipolar geometry and its uncertainty: a review,” Int. J. Comput. Vis. 27, 161-195 (1998).
  17. L. B. Stelmach, W. J. Tam, F. Speranza, R. Renaud, and T. Martin, “Improving the visual comfort of stereoscopic images,” Proc. SPIE 5006, 269-282 (2003).
  18. P. Neri, H. Bridge, and D. J. Heeger, “Stereoscopic processing of absolute and relative disparity in human visual cortex,” J. Neurophysiol. 92, 1880-1891 (2004).
  19. D. Regan, C. J. Erkelens, and H. Collewijn, “Necessary conditions for the perception of motion-in-depth,” Invest. Ophthalmol. Visual Sci. 27, 584-597 (1986).
  20. F. L. Kooi and A. Toet, “Visual comfort of binocular and 3D displays,” Displays 25, 99-108 (2004).
  21. J. Zhou and B. Li, “Rectification with intersecting optical axes for stereoscopic visualization,” in Proceedings of International Conference on Pattern Recognition (IAPR, 2006), pp. 17-20.
  22. S. Pastoor, “Human factors of 3D imaging: results of recent research at Heinrich-Hertz-Institut Berlin,” in Proceedings of the International Display Workshop (IDW, 1995), pp. 66-72.
  23. http://www.public.asu.edu/~bli24/Rectification.html.
  24. C. L. Zitnick, S. B. Kang, M. Uyttendaele, S. A. J. Winder, and R. Szeliski, “High-quality video view interpolation using a layered representation,” in Proceedings of SIGGRAPH (ACM, 2004), pp. 600-608.
  25. The Persistence of Vision Raytracer, http://www.povray.org.




Figures (13)

Fig. 1

(a) Illustration of how a parallax-barrier-based 3D display works. The light pixels form the right-eye image, and the dark ones form the left-eye image. Both eyes must fall into the correct viewing diamonds to perceive 3D. Assuming a normal viewing distance (and thus fixed viewing diamonds), the image pair must have proper disparities to give rise to 3D perception. (b) The actual glasses-free 15-inch LC display [2] used in this study.

Fig. 2

Rationale behind the Hartley method and the potential distortion.

Fig. 3

Rationale behind the Loop and Zhang method and the potential distortion. The vertical and horizontal crossed lines in the bottom left image become those in the right-hand image.

Fig. 4

Illustration of a standard stereo setup.

Fig. 5

Results of the grouping procedure. (a) White crosses show the set of all point correspondences, with the black tail indicating displacement of the feature points. (b)–(d) are three different groups.

Fig. 6

(a) Standard stereo setup. (b) Stereo with intersecting optical axes.

Fig. 7

Relationship between the focal length, the image width, and the camera view angle.

Fig. 8

Relationship between the epipole, the focal length, the baseline width, and the scene distance.

Fig. 9

(a), (b) Stereo pair after rectification with $f = (w+h)/2$. (c), (d) Results with $f = (w+h)/8$.

Fig. 10

(a1), (a2) Original pair. (b1), (b2) Some epipolar lines are plotted. (c1), (c2) Rectification results based on the fundamental matrix. (d1), (d2) Results based on camera calibration.

Fig. 11

(a), (b) Standard stereo pair. (c) View after rotating the camera of (b). (d), (e) Rectified results on (a) and (c) based purely on the images; (d) and (e) apparently form a stereo pair. Note that, in practice, (d) and (e) may both be rotated during rectification relative to (a) and (b), respectively, and thus one cannot simply compute the difference between (a) and (d), for example.

Fig. 12

Sample results using synthetic data, with comparison to the conventional approach of [12]. (a1), (a2) Original pair. (b1), (b2) Rectified results based on [12]. (c1), (c2) Results of our method. Note the obvious shear distortion in (b1). Please also refer to the accompanying website [23] for the full-resolution images, including the red–cyan pair.

Fig. 13

Sample results of real images captured by hand-held cameras. (a1), (a2) Original pair. (b1), (b2) Rectified results based on the method of [12]. (c1), (c2) Results of our method. Note the distortion of the electric pole in (b1), which is more obvious when viewed on the 3D display. Please refer to the accompanying website [23] for the full-resolution images, including the red–cyan pair.

Equations (58)


$$P_1 = K[\,I \mid 0\,],$$
$$P_2 = K[\,I \mid t\,], \qquad t = (k, 0, 0)^T,$$
$$\begin{cases} P_1 = K_1 R_1 [\,I \mid -C_1\,] = M_1 [\,I \mid -C_1\,] \\ P_2 = K_2 R_2 [\,I \mid -C_2\,] = M_2 [\,I \mid -C_2\,] \end{cases},$$
$$\begin{cases} K_1 = K_2 = K \\ R_1 = R_2 = R \\ R\,(C_2 - C_1) = (k, 0, 0)^T \end{cases}.$$
$$\tilde{P}_i = K R [\,I \mid -C_i\,], \qquad i = 1, 2;$$
$$H_i = K R (K_i R_i)^{-1}.$$
$$x = P_i X,$$
$$\tilde{x} = \tilde{P}_i X = K R [\,I \mid -C_i\,] X = K R (K_i R_i)^{-1} K_i R_i [\,I \mid -C_i\,] X = K R (K_i R_i)^{-1} P_i X = K R (K_i R_i)^{-1} x,$$
$$r = \frac{C_1 - C_2}{\|C_1 - C_2\|}, \qquad t = \frac{r \times p}{\|r \times p\|}, \qquad s = r \times t.$$
$$\begin{bmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & 1 \end{bmatrix},$$
$$K = K_2 = \begin{bmatrix} f & 0 & p_x \\ 0 & f & p_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad \begin{cases} p_x = w/2 \\ p_y = h/2 \\ f = (w+h)/2 \end{cases},$$
$$e_2 = P_2 \tilde{C}_1 = K_2 R_2 [\,I \mid -C_2\,]\,(C_1, 1)^T = K_2 R_2 (C_1 - C_2),$$
$$C_1 - C_2 = (K_2 R_2)^{-1} e_2 = K_2^{-1} e_2,$$
$$F^T e_2 = 0.$$
$$R = [\,r, s, t\,]^T,$$
$$r = \frac{K_2^{-1} e_2}{\|K_2^{-1} e_2\|}, \qquad t = r \times s_2, \qquad s = r \times t.$$
$$H_2 = K R (K_2 R_2)^{-1} = K R K_2^{-1},$$
$$H_1 = K R (K_1 R_1)^{-1} = H_2\, K_2 R_2 (K_1 R_1)^{-1} = H_2 M,$$
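The construction leading to $H_2$ above (new x-axis along the baseline direction $K_2^{-1}e_2$, then $H_2 = K R K_2^{-1}$) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation; the fixed up vector used to complete the axes is an assumption standing in for $s_2$, and the example intrinsics and epipole are made up.

```python
import numpy as np

def rectifying_homography(K2, e2, K_new=None):
    """Sketch: build R = [r, s, t]^T with the new x-axis along the
    baseline direction K2^{-1} e2, then return H2 = K_new R K2^{-1}."""
    if K_new is None:
        K_new = K2                       # keep the original intrinsics
    r = np.linalg.solve(K2, e2)          # K2^{-1} e2: baseline direction
    r = r / np.linalg.norm(r)
    up = np.array([0.0, 1.0, 0.0])       # assumed up vector (stand-in for s2)
    t = np.cross(r, up)
    t = t / np.linalg.norm(t)
    s = np.cross(r, t)                   # completes the orthonormal triad
    R = np.stack([r, s, t])              # rows are the new camera axes
    return K_new @ R @ np.linalg.inv(K2)

# Illustrative intrinsics and epipole (assumed values)
K2 = np.array([[1000.0, 0.0, 512.0],
               [0.0, 1000.0, 384.0],
               [0.0, 0.0, 1.0]])
e2 = np.array([2000.0, 100.0, 1.0])
H2 = rectifying_homography(K2, e2)
mapped = H2 @ e2                          # epipole after rectification
```

Mapping the epipole with this $H_2$ leaves only the first homogeneous component nonzero, i.e. the epipole is sent to infinity along the x-axis, which is exactly the condition that makes epipolar lines horizontal.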
$$P_1 = M_1 [\,I \mid 0\,], \qquad P_1^{+} = \begin{bmatrix} M_1^{-1} \\ 0^T \end{bmatrix},$$
$$F = [e_2]_{\times}\, P_2 P_1^{+} = [e_2]_{\times}\, M_2 M_1^{-1} = [e_2]_{\times}\, M,$$
$$H_1 = H_2 M = H_2 (I + e_2 v^T) M_0 = H_2 (M_0 + e_2 v^T M_0) = H_2 M_0 + H_2 e_2 v^T H_2^{-1} H_2 M_0 = (I + H_2 e_2 v_1^T) H_2 M_0 \qquad (v_1^T = v^T H_2^{-1}).$$
$$H_2 e_2 = K R K_2^{-1} e_2 = K [\,r, s, t\,]^T r = K (1, 0, 0)^T = (k, 0, 0)^T,$$
$$H_1 = (I + H_2 e_2 v^T) H_2 M_0 = H_A H_2 M_0 \quad \text{with} \quad H_A = \begin{bmatrix} a & b & c \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
$$\|x_1 - x_2\| = \|x_1' - x_2'\|.$$
$$\sum_{p_{i,j} \in A_p} \left( \|H_1 x_i - H_1 x_j\| - \|H_2 x_i' - H_2 x_j'\| \right)^2.$$
$$\sum_{p_{i,j} \in A_p} \left( \|H_A \hat{x}_i - H_A \hat{x}_j\| - \|\hat{x}_i' - \hat{x}_j'\| \right)^2.$$
$$H_A \hat{x}_i = (a \hat{x}_i + b \hat{y}_i + c,\; \hat{y}_i,\; 1)^T.$$
$$\|\hat{x}_i' - \hat{x}_j'\|^2 = (\hat{x}_i' - \hat{x}_j')^2 + (\hat{y}_i' - \hat{y}_j')^2,$$
$$\|H_A \hat{x}_i - H_A \hat{x}_j\|^2 = \left[ a(\hat{x}_i - \hat{x}_j) + b(\hat{y}_i - \hat{y}_j) \right]^2 + (\hat{y}_i - \hat{y}_j)^2,$$
$$\hat{y}_i - \hat{y}_j = \hat{y}_i' - \hat{y}_j'.$$
$$\sum_{p_{i,j} \in A_p} \left[ a(\hat{x}_i - \hat{x}_j) + b(\hat{y}_i - \hat{y}_j) - (\hat{x}_i' - \hat{x}_j') \right]^2.$$
$$a \hat{x}_0 + b \hat{y}_0 + c = \hat{x}_0'.$$
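The shear minimization above is linear in $(a, b)$, so it reduces to an ordinary least-squares problem over point-pair differences, with $c$ fixed afterwards by the anchor-point constraint. Below is a minimal NumPy sketch under stated assumptions: the pair set (all pairs here) stands in for $A_p$, and the anchor index is an arbitrary choice.

```python
import numpy as np

def estimate_shear(x_hat, y_hat, x_hat2, anchor=0):
    """Solve sum [a*(xi - xj) + b*(yi - yj) - (xi' - xj')]^2 for (a, b)
    over point pairs, then pick c so the anchor point keeps its x'."""
    i, j = np.triu_indices(len(x_hat), k=1)   # all pairs (stand-in for A_p)
    A = np.stack([x_hat[i] - x_hat[j], y_hat[i] - y_hat[j]], axis=1)
    rhs = x_hat2[i] - x_hat2[j]
    (a, b), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    c = x_hat2[anchor] - a * x_hat[anchor] - b * y_hat[anchor]
    return a, b, c

# Sanity check on synthetic correspondences generated by a known shear
x_hat = np.array([0.0, 1.0, 2.0, 5.0, 3.0])
y_hat = np.array([0.0, 2.0, 1.0, 4.0, 7.0])
a0, b0, c0 = 1.2, 0.3, 5.0
x_hat2 = a0 * x_hat + b0 * y_hat + c0
a, b, c = estimate_shear(x_hat, y_hat, x_hat2)
```

With exact correspondences generated by a known shear, the least-squares system is consistent and the estimate recovers $(a, b, c)$ up to floating-point error.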
$$x_1 = \tilde{P}_1 X = K \tilde{R}_1 [\,I \mid -C_1\,] X = K \begin{bmatrix} 1 & 0 & \mathrm{tg}(\theta) \\ 0 & 1 & 0 \\ -\mathrm{tg}(\theta) & 0 & 1 \end{bmatrix} \begin{bmatrix} x - x_0 \\ y \\ z \end{bmatrix} = K \begin{bmatrix} x - x_0 + z\,\mathrm{tg}(\theta) \\ y \\ z - (x - x_0)\,\mathrm{tg}(\theta) \end{bmatrix} \simeq \begin{bmatrix} f\,\dfrac{x - x_0 + z\,\mathrm{tg}(\theta)}{z - (x - x_0)\,\mathrm{tg}(\theta)} \\[2ex] f\,\dfrac{y}{z - (x - x_0)\,\mathrm{tg}(\theta)} \end{bmatrix}.$$
$$u_1 = f\,\frac{(x - x_0) + z\,\mathrm{tg}(\theta)}{z - (x - x_0)\,\mathrm{tg}(\theta)}, \qquad v_1 = f\,\frac{y}{z - (x - x_0)\,\mathrm{tg}(\theta)},$$
$$u_2 = f\,\frac{(x + x_0) - z\,\mathrm{tg}(\theta)}{z + (x + x_0)\,\mathrm{tg}(\theta)}, \qquad v_2 = f\,\frac{y}{z + (x + x_0)\,\mathrm{tg}(\theta)},$$
$$v_2 - v_1 = -f y\,\frac{2 x\,\mathrm{tg}(\theta)}{\left[ z + x_0\,\mathrm{tg}(\theta) \right]^2 - \left[ x\,\mathrm{tg}(\theta) \right]^2}.$$
$$u_1 = f\,\frac{(x - x_0) + (z_0 - \Delta z)\,\mathrm{tg}(\theta)}{z - (x - x_0)\,\mathrm{tg}(\theta)} = f\,\frac{x - \Delta z\,\mathrm{tg}(\theta)}{z - (x - x_0)\,\mathrm{tg}(\theta)},$$
$$u_2 = f\,\frac{x + \Delta z\,\mathrm{tg}(\theta)}{z + (x + x_0)\,\mathrm{tg}(\theta)},$$
$$u_2 - u_1 = f\,\frac{-2 x^2\,\mathrm{tg}(\theta) + 2 \Delta z\,\mathrm{tg}(\theta) \left[ z + x_0\,\mathrm{tg}(\theta) \right]}{\left[ z + x_0\,\mathrm{tg}(\theta) \right]^2 - \left[ x\,\mathrm{tg}(\theta) \right]^2}.$$
$$v_2 - v_1 \approx 0, \qquad u_2 - u_1 \approx \frac{f}{z}\, 2 \Delta z\,\mathrm{tg}(\theta).$$
$$d = u_2 - u_1 = \frac{f k}{z} = \frac{2 f x_0}{z},$$
$$d_0 = \frac{2 f x_0}{z_0}.$$
$$d = \frac{2 f x_0}{z_0 - \Delta z}.$$
$$d - d_0 = 2 f x_0 \left( \frac{1}{z} - \frac{1}{z_0} \right) = \frac{2 f\,\Delta z\, x_0}{z\, z_0} \approx \frac{2 f}{z}\,\Delta z\,\mathrm{tg}(\theta),$$
$$\frac{w}{2 f} = \mathrm{tg}\!\left( \frac{\theta}{2} \right),$$
$$D = \tilde{H}_2 H_2^{-1},$$
$$H_2 = F(K, e) = F(f, e) = F(\theta, p).$$
$$w = 1024, \qquad h = 768, \qquad f = \frac{w}{2\,\mathrm{tg}(\theta/2)}, \qquad e = (e_x, e_y, 1)^T,$$
where $e_y = 0.2 \times e_x$ and $e_x = \dfrac{2 p f}{2 p - w}$.
$$D(\tilde{\theta} = 60°,\, \theta = 30°,\, p = 5) = \begin{bmatrix} 0.996 & 0.001 & 150 \\ 0 & 1.000 & 29.4 \\ 0 & 0 & 1.004 \end{bmatrix}.$$
$$D(\tilde{\theta} = 60°,\, \theta = 120°,\, p = 5) = \begin{bmatrix} 1.056 & 0.009 & 214 \\ 0 & 1.000 & 41.7 \\ 0 & 0 & 0.947 \end{bmatrix}.$$
$$D(\tilde{\theta} = 60°,\, \theta = 120°,\, p = 10) = \begin{bmatrix} 1.01 & 0.003 & 116 \\ 0 & 1.00 & 23.0 \\ 0 & 0 & 0.988 \end{bmatrix},$$
$$D(\tilde{\theta} = 60°,\, \theta = 90°,\, p = 5) = \begin{bmatrix} 1.01 & 0.003 & 99.1 \\ 0 & 1.00 & 19.7 \\ 0 & 0 & 0.988 \end{bmatrix}.$$
$$k = \frac{z d}{f} = \frac{z d}{w / \rho} = \frac{z \rho\, d}{w}.$$
$$k = \frac{\rho\, d}{w}\, z \approx \frac{0.73 \times 40}{1024}\, z \approx \frac{1}{35}\, z.$$
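As a quick numeric check of the baseline rule of thumb $k = \rho d z / w$ above; the scene distance used here is an illustrative assumption.

```python
# Baseline from target disparity: k = rho * d * z / w
rho, d, w = 0.73, 40, 1024   # view-angle factor, disparity (px), image width (px)
z = 3.5                      # assumed scene distance, e.g. in meters
k = rho * d * z / w          # close to z / 35, matching the approximation
```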
$$d_1 - d_m = \frac{f k}{2} \left( \frac{1}{z_1} - \frac{1}{z_2} \right).$$
$$\frac{f k}{2} \left( \frac{1}{z_1} - \frac{1}{z_2} \right) \le d_{\max} \;\Rightarrow\; k \le \frac{2 \rho\, d_{\max}}{w} \cdot \frac{1}{\dfrac{1}{z_1} - \dfrac{1}{z_2}}.$$
$$k \le \frac{2 \rho\, d_{\max}}{w} \cdot \frac{1}{\dfrac{1}{z_1} - \dfrac{1}{z_1 + \Delta z}} = \frac{2 \rho\, d_{\max}}{w}\, z_1 \left( 1 + \frac{z_1}{\Delta z} \right),$$
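The wide-baseline upper bound above can be evaluated directly; the display parameters, nearest depth z1, and depth range dz used below are illustrative assumptions.

```python
# Baseline upper bound: k <= (2 * rho * d_max / w) * z1 * (1 + z1 / dz)
rho, d_max, w = 0.73, 40, 1024   # assumed view-angle factor, max disparity (px), width (px)
z1, dz = 3.0, 1.0                # assumed nearest scene depth and depth range
k_max = (2 * rho * d_max / w) * z1 * (1 + z1 / dz)
# Identity check: 1 / (1/z1 - 1/(z1 + dz)) equals z1 * (1 + z1/dz)
lhs = 1.0 / (1.0 / z1 - 1.0 / (z1 + dz))
```

The identity used in the last step, $1/(1/z_1 - 1/(z_1+\Delta z)) = z_1(1 + z_1/\Delta z)$, holds because the difference of reciprocals equals $\Delta z / (z_1 (z_1 + \Delta z))$.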
