Abstract

A complementary catadioptric imaging technique was previously proposed to solve the problem of low and nonuniform resolution in omnidirectional imaging. Building on that work, this paper focuses on generating a high-resolution panoramic image from the captured omnidirectional image. To avoid interference between the inner and outer images while fusing the two complementary views, a cross-selection kernel regression method is proposed. First, exploiting the complementary sampling resolutions of the inner and outer images in the tangential and radial directions, the horizontal gradients in the expected panoramic image are estimated from the scattered neighboring pixels mapped from the outer image, while the vertical gradients are estimated from those mapped from the inner image. Then, the size and shape of the regression kernel are adaptively steered based on the local gradients. Furthermore, the neighboring pixels used in the subsequent interpolation step of kernel regression are selected by comparing the horizontal and vertical gradients. In both simulation and real-image experiments, the proposed method outperforms existing kernel regression methods and our previous wavelet-based fusion method in terms of visual quality and objective evaluation.

© 2014 Optical Society of America

Figures (13)

Fig. 1. Complementary-structure catadioptric imaging. (a) Geometric imaging model. (b) Illustration of the captured omnidirectional image.

Fig. 2. Distribution curves of spatial resolution in complementary catadioptric imaging. (a) Tangential resolution. (b) Radial resolution.

Fig. 3. Illustration of panoramic unwrapping of the complementary omnidirectional image. (a) Captured omnidirectional image, including the inner and the outer views. (b) Panoramic image unwrapped from the inner, marked as Pano_Inner. (c) Panoramic image unwrapped from the outer, marked as Pano_Outer.

Fig. 4. Distribution of the scattered pixels in the panoramic space, mapped respectively from (a) the inner, (b) the outer, and (c) both the inner and the outer, i.e., the whole omnidirectional image.

Fig. 5. Illustration of kernel spread. (a) Kernels in the classic method depend only on the sample density. (b) Data-adapted kernels elongate with respect to the edge.

Fig. 6. Schematic representation illustrating the effects of the steering matrix on the size and shape of the regression kernel.

Fig. 7. Block diagram of the proposed cross-selection kernel regression method. KR is short for kernel regression.

Fig. 8. Simulation experiment based on decimation and interpolation. (a) Original image. (b) Row-decimated image. (c) Column-decimated image. (d) Column-interpolated image. (e) Row-interpolated image. (f) NSCT-based fused image. (g) Combined pixel set of (b) and (c). (h) Interpolated image using classic kernel regression. (i) Interpolated image using steering kernel regression.

Fig. 9. Simulation experiment based on two complementary images with a luminance difference. (a), (b) Pair of source images with a luminance difference. (c) Interpolated image on a row-decimated pixel set of (a) using steering kernel regression. (d) Interpolated image on a column-decimated pixel set of (b) using steering kernel regression. (e) Interpolated image on the combined pixel set of (c) and (d) using steering kernel regression. (f) Interpolated image on the combined pixel set of (c) and (d) using the proposed cross-selection kernel regression method.

Fig. 10. Comparison of different threshold values for cross-selection kernel regression. The threshold values in (a)–(f) are 0, 1, 3, 5, 7, and 9, respectively.

Fig. 11. Simulation experiment based on 3ds Max. (a) Prototype sensor and virtual panoramic environment. (b) Captured omni-image. (c) Original panoramic image. (d), (e) Blocks of the two panoramic images unwrapped from the inner and the outer, respectively. (f) NSCT-based fused image. (g)–(i) Interpolated images using classic kernel regression, steering kernel regression, and the proposed cross-selection kernel regression method, respectively. (h-2), (i-2) Interpolated images using steering kernel regression and the proposed method considering registration error, respectively.

Fig. 12. Experiment on indoor real-scene imaging. (a) Prototype sensor for complementary omnidirectional imaging. (b) Captured omni-image. (c) Fused image based on NSCT. (d) Interpolated image using steering kernel regression. (e) Interpolated image using the proposed cross-selection kernel regression method.

Fig. 13. Experiment on outdoor real-scene imaging. (a) Captured omnidirectional image. (b), (c) Blocks of the two panoramic images unwrapped from the inner and the outer, respectively. (d) Fused image based on NSCT. (e) Interpolated image using steering kernel regression. (f) Interpolated image using the proposed cross-selection kernel regression method.

Tables (2)

Table 1. Quantitative Comparison Using MI and Q^{AB/F}

Table 2. Quantitative Comparison Using PSNR and SSIM

Equations (10)

$$\min_{\{\beta_n\}} \sum_{i=1}^{P} \Big[\, y_i - \beta_0 - \beta_1^{T}(x_i - x) - \beta_2^{T}\,\mathrm{vech}\big\{(x_i - x)(x_i - x)^{T}\big\} \Big]^{2} K_{H}(x_i - x),$$

$$\hat{z}(x) = \hat{\beta}_0(x) = e_1^{T}\big(X_x^{T} W_x X_x\big)^{-1} X_x^{T} W_x\, y,$$

$$X_x = \begin{bmatrix} 1 & (x_1 - x)^{T} & \mathrm{vech}^{T}\big\{(x_1 - x)(x_1 - x)^{T}\big\} \\ 1 & (x_2 - x)^{T} & \mathrm{vech}^{T}\big\{(x_2 - x)(x_2 - x)^{T}\big\} \\ \vdots & \vdots & \vdots \\ 1 & (x_P - x)^{T} & \mathrm{vech}^{T}\big\{(x_P - x)(x_P - x)^{T}\big\} \end{bmatrix},$$

$$\mathrm{vech}\left(\begin{bmatrix} a & b & c \\ b & e & f \\ c & f & i \end{bmatrix}\right) = \begin{bmatrix} a & b & c & e & f & i \end{bmatrix}^{T},$$

$$W_x = \mathrm{diag}\big[K_H(x_1 - x),\, K_H(x_2 - x),\, \ldots,\, K_H(x_P - x)\big].$$
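
For concreteness, the estimator above can be written as a short numerical routine. The following is a minimal Python/numpy sketch, assuming 2D pixel coordinates and an isotropic Gaussian kernel K_H with a single bandwidth h; the function names are illustrative and not from the paper.

```python
import numpy as np

def vech_outer(d):
    # vech{d d^T} for a 2D offset d = (d1, d2): [d1^2, d1*d2, d2^2]
    return np.array([d[0] * d[0], d[0] * d[1], d[1] * d[1]])

def classic_kernel_regression(samples, values, x, h=1.0):
    """Second-order classic kernel regression estimate of z(x).

    samples : (P, 2) scattered sample coordinates x_i
    values  : (P,)   pixel values y_i
    x       : (2,)   query coordinate
    h       : global smoothing (bandwidth) parameter
    """
    d = samples - x                                    # offsets x_i - x
    # Rows of X_x: [1, (x_i - x)^T, vech^T{(x_i - x)(x_i - x)^T}]
    X = np.hstack([np.ones((len(d), 1)), d,
                   np.stack([vech_outer(di) for di in d])])
    # Diagonal of W_x: isotropic Gaussian weights K_H(x_i - x)
    w = np.exp(-np.sum(d ** 2, axis=1) / (2.0 * h ** 2))
    XtW = X.T * w                                      # X_x^T W_x
    beta = np.linalg.solve(XtW @ X, XtW @ values)      # regression coefficients
    return beta[0]                                     # e_1^T beta = beta_0 = z_hat(x)
```

Note that for 2D image coordinates the outer product is 2 × 2, so vech produces three entries and X_x has six columns; the 3 × 3 example above shows the general pattern of the half-vectorization.
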
$$K_{H_i^{\mathrm{steer}}}(x_i - x) = \frac{\sqrt{\det(C_i)}}{2\pi h^{2}\mu_i^{2}} \exp\left\{-\frac{(x_i - x)^{T} C_i\, (x_i - x)}{2 h^{2}\mu_i^{2}}\right\},$$

$$\hat{C}_i \approx \begin{bmatrix} \sum_{x_j \in w_i} z_{x_1}(x_j)\, z_{x_1}(x_j) & \sum_{x_j \in w_i} z_{x_1}(x_j)\, z_{x_2}(x_j) \\ \sum_{x_j \in w_i} z_{x_1}(x_j)\, z_{x_2}(x_j) & \sum_{x_j \in w_i} z_{x_2}(x_j)\, z_{x_2}(x_j) \end{bmatrix},$$

$$C_i = \gamma_i\, U_{\theta_i} \Lambda_i U_{\theta_i}^{T}.$$
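
A sketch of how these steering quantities might be computed follows; it is hedged in that the regularized eigendecomposition of C_i through γ_i, U_{θ_i}, and Λ_i is omitted, and the local density parameter μ_i is simply passed in as a scalar.

```python
import numpy as np

def gradient_covariance(z_x1, z_x2):
    """Naive windowed gradient covariance C_i (the unregularized estimate).

    z_x1, z_x2 : 1D arrays of the two directional gradients at the pixels
                 x_j in the local analysis window w_i. In practice this
                 matrix is regularized via the decomposition
                 C_i = gamma_i U Lambda U^T before use.
    """
    return np.array([[np.sum(z_x1 * z_x1), np.sum(z_x1 * z_x2)],
                     [np.sum(z_x1 * z_x2), np.sum(z_x2 * z_x2)]])

def steering_kernel_weight(dx, C, h=1.0, mu=1.0):
    """Steering kernel weight K_H^steer(x_i - x) for one sample.

    dx : (2,) offset x_i - x
    C  : (2, 2) steering matrix C_i
    """
    s = 2.0 * (h ** 2) * (mu ** 2)                     # 2 h^2 mu_i^2
    return (np.sqrt(np.linalg.det(C)) / (np.pi * s)) * np.exp(-(dx @ C @ dx) / s)
```
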
$$\frac{\sum_{x_j \in w_i} \big|z_{x_1}^{\mathrm{Inner}}\big|}{\sum_{x_j \in w_i} \big|z_{x_2}^{\mathrm{Outer}}\big|} > th,$$

$$\frac{\sum_{x_j \in w_i} \big|z_{x_2}^{\mathrm{Outer}}\big|}{\sum_{x_j \in w_i} \big|z_{x_1}^{\mathrm{Inner}}\big|} > th.$$
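
Reading the two inequalities above as ratios of windowed absolute-gradient sums (the fraction bars did not survive text extraction, so this reading is an assumption), the cross-selection step might look like the sketch below. Which complementary pixel set each inequality selects is inferred from the abstract and is likewise marked as an assumption in the comments.

```python
import numpy as np

def cross_select(z_x1_inner, z_x2_outer, th=3.0):
    """Decide which complementary pixel set feeds the next interpolation step.

    z_x1_inner : vertical gradients z_x1 (estimated from the inner image)
                 at the pixels x_j in the window w_i
    z_x2_outer : horizontal gradients z_x2 (estimated from the outer image)
    th         : selection threshold (Fig. 10 compares th = 0, 1, 3, 5, 7, 9)

    Assumption: when the vertical gradient energy dominates, the inner
    pixel set is selected, and vice versa; otherwise both sets are kept.
    """
    s_inner = np.sum(np.abs(z_x1_inner))   # windowed vertical gradient energy
    s_outer = np.sum(np.abs(z_x2_outer))   # windowed horizontal gradient energy
    eps = 1e-12                            # guards against flat windows
    if s_inner / (s_outer + eps) > th:     # first inequality above
        return "inner"
    if s_outer / (s_inner + eps) > th:     # second inequality above
        return "outer"
    return "both"
```
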
