Abstract

Reliable and efficient stereo matching is challenging in the presence of radiometric variations: correspondence between the left and right images becomes difficult when the radiometric changes in the two views are poorly correlated. Previously proposed cost metrics are either not robust enough against strong radiometric variations or are computationally expensive. In this work, we propose a new similarity metric, the Intensity Guided Cost Metric (IGCM). IGCM contributes significantly to depth accuracy by rejecting outliers and reducing the edge-fattening effect at object boundaries. IGCM is further combined explicitly with a color formation model to handle the various radiometric changes that occur between stereo images. Experimental results on the Middlebury dataset show a 13.8%, 22.8%, 20.9%, 19.5%, and 9.1% decrease in average error rate compared to the Adaptive Normalized Cross-Correlation (ANCC), Dense Adaptive Self-Correlation (DASC), Adaptive Descriptor (AD), Fast Cost Volume Filtering (FCVF), and Iterative Guided Filter (IGF)-based methods, respectively. Moreover, using integral images, IGCM achieves speedups of 20x, 6x, 41x, 25x, and 45x over the aforementioned methods.
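The reported speedups come from evaluating window sums with integral images, where any rectangular sum reduces to four array lookups regardless of window size. As a minimal illustration (not the authors' implementation; `integral_image` and `box_sum` are hypothetical helper names):

```python
import numpy as np

def integral_image(img):
    """Cumulative 2-D sum with a zero-padded first row and column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16, dtype=np.float64).reshape(4, 4)
ii = integral_image(img)
# window img[1:3, 1:3] = [[5, 6], [9, 10]] sums to 30
print(box_sum(ii, 1, 1, 3, 3))  # -> 30.0
```

Because the cost of one window query is constant, the per-disparity filtering step becomes independent of the support-window radius.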

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vision 47, 7–42 (2002).
    [Crossref]
  2. H. Hirschmuller and D. Scharstein, “Evaluation of stereo matching costs on images with radiometric differences,” IEEE Trans. Pattern Anal. Mach. Intell. 31, 1582–1599 (2009).
    [Crossref] [PubMed]
  3. J. Yang, H. Wang, Z. Ding, Z. Lv, W. Wei, and H. Song, “Local stereo matching based on support weight with motion flow for dynamic scene,” IEEE Access 4, 4840–4847 (2016).
    [Crossref]
  4. Y. S. Heo, K. M. Lee, and S. U. Lee, “Robust stereo matching using adaptive normalized cross-correlation,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 807–822 (2011).
    [Crossref]
  5. P. Pinggera, T. Breckon, and H. Bischof, “On cross-spectral stereo matching using dense gradient features,” in “IEEE Conference on Computer Vision and Pattern Recognition (CVPR),” (2012).
  6. A. V. Le and C. S. Won, “Key-point based stereo matching and its application to interpolations,” Multidimension. Syst. Signal Process. 28, 265–280 (2017).
    [Crossref]
  7. S. Kim, D. Min, B. Ham, S. Ryu, M. N. Do, and K. Sohn, “DASC: Dense adaptive self-correlation descriptor for multi-modal and multi-spectral correspondence,” in “2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),” (IEEE, 2015), pp. 2103–2112.
  8. Y.-H. Kim, J. Koo, and S. Lee, “Adaptive descriptor-based robust stereo matching under radiometric changes,” Pattern Recognit. Lett. 78, 41–47 (2016).
    [Crossref]
  9. H.-G. Jeon, J.-Y. Lee, S. Im, H. Ha, and I. So Kweon, “Stereo matching with color and monochrome cameras in low-light conditions,” in “Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,” (2016), pp. 4086–4094.
  10. J. Yang, Z. Gao, R. Chu, Y. Liu, and Y. Lin, “New stereo shooting evaluation metric based on stereoscopic distortion and subjective perception,” Opt. Rev. 22, 459–468 (2015).
    [Crossref]
  11. D. Scharstein, H. Hirschmüller, Y. Kitajima, G. Krathwohl, N. Nešić, X. Wang, and P. Westling, “High-resolution stereo datasets with subpixel-accurate ground truth,” in “German Conference on Pattern Recognition,” (Springer, 2014), pp. 31–42.
  12. K.-J. Yoon and I. S. Kweon, “Adaptive support-weight approach for correspondence search,” IEEE Trans. Pattern Anal. Mach. Intell. 28, 650–656 (2006).
    [Crossref] [PubMed]
  13. A. Hosni, C. Rhemann, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 504–511 (2013).
    [Crossref]
  14. R. A. Hamzah, H. Ibrahim, and A. H. A. Hassan, “Stereo matching algorithm based on per pixel difference adjustment, iterative guided filter and graph segmentation,” J. Visual Commun. Image Represent. 42, 145–160 (2017).
    [Crossref]
  15. K. He, J. Sun, and X. Tang, “Guided image filtering,” in “European conference on computer vision,” (Springer, 2010), pp. 1–14.
  16. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vision 60, 91–110 (2004).
    [Crossref]
  17. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 1397–1409 (2013).
    [Crossref] [PubMed]
  18. H. Hirschmuller, “Accurate and efficient stereo processing by semi-global matching and mutual information,” in “2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05),” vol. 2 (IEEE, 2005), pp. 807–814.
  19. S. Birchfield and C. Tomasi, “Depth discontinuities by pixel-to-pixel stereo,” Int. J. Comput. Vision 35, 269–293 (1999).
    [Crossref]




Figures (9)

Fig. 1 Matching Cost Comparison. Left and right images show spatially variant intensity transformations. Results of different matching costs computed along scan-lines passing through regions A, B, and C are shown in (c)–(e), respectively. Unlike conventional methods, IGCM yields a more reliable global minimum and better selectivity.
Fig. 2 IGCM-based Depth Extraction. (a) and (b) show patch A in Fig. 1. (c) and (d) represent weights for the respective patches in (a) and (b). (e) and (f) are NCC-based and IGCM-based cost values for patch A. Disparity maps are shown in (g) and (h).
Fig. 3 IGCM-based Depth Extraction Pipeline, where D is the disparity map.
Fig. 4 Results at each step of the IGCM-based Depth Extraction Pipeline.
Fig. 5 Qualitative results over MB14 datasets for different illumination settings (Setup1). (a) Left Image (b) Right Image (c) Ground Truth (d) ANCC [4] (e) IGF [14] (f) IGCM (Proposed)
Fig. 6 Qualitative results over MB14 datasets for different illumination settings (Setup2). (a) Left Image (b) Right Image (c) Ground Truth (d) ANCC [4] (e) IGF [14] (f) IGCM (Proposed)
Fig. 7 Qualitative results over MB14 datasets for different exposure settings (Setup3). (a) Left Image (b) Right Image (c) Ground Truth (d) ANCC [4] (e) IGF [14] (f) IGCM (Proposed)
Fig. 8 Qualitative results over MB14 datasets for different exposure settings (Setup4). (a) Left Image (b) Right Image (c) Ground Truth (d) ANCC [4] (e) IGF [14] (f) IGCM (Proposed)
Fig. 9 Limitations of the IGCM-based Framework

Tables (8)

Algorithm 1 Computation of a and b coefficients.
Table 1 Experimental Setups.
Table 2 Error rate with CT [2], BT [19], SIE [9], ANCC [4], DASC [7], AD [8], FCVF [13], IGF [14], and IGCM (proposed) over different illumination settings (Setup1) of MB14 datasets. Gray cells represent the minimum error rate for each dataset.
Table 3 Error rate with CT [2], BT [19], SIE [9], ANCC [4], DASC [7], AD [8], FCVF [13], IGF [14], and IGCM (proposed) over different illumination settings (Setup2) of MB14 datasets. Gray cells represent the minimum error rate for each dataset.
Table 4 Error rate with CT [2], BT [19], SIE [9], ANCC [4], DASC [7], AD [8], FCVF [13], IGF [14], and IGCM (proposed) over different exposure settings (Setup3) of MB14 datasets. Gray cells represent the minimum error rate for each dataset.
Table 5 Error rate with CT [2], BT [19], SIE [9], ANCC [4], DASC [7], AD [8], FCVF [13], IGF [14], and IGCM (proposed) over different exposure settings (Setup4) of MB14 datasets. Gray cells represent the minimum error rate for each dataset.
Table 6 Runtime Comparison.

Equations (36)

Equations on this page are rendered with MathJax.

$$\begin{pmatrix} R_L(p) \\ G_L(p) \\ B_L(p) \end{pmatrix} \approx \begin{pmatrix} \rho_L(p)\,a_L\,R_L^{\gamma_L}(p) \\ \rho_L(p)\,b_L\,G_L^{\gamma_L}(p) \\ \rho_L(p)\,c_L\,B_L^{\gamma_L}(p) \end{pmatrix},$$
$$\begin{pmatrix} R_R(p+d) \\ G_R(p+d) \\ B_R(p+d) \end{pmatrix} \approx \begin{pmatrix} \rho_R(p+d)\,a_R\,R_R^{\gamma_R}(p+d) \\ \rho_R(p+d)\,b_R\,G_R^{\gamma_R}(p+d) \\ \rho_R(p+d)\,c_R\,B_R^{\gamma_R}(p+d) \end{pmatrix},$$
$$R_L(p) \leftarrow \log\{\rho_L(p)\,a_L\,R_L^{\gamma_L}(p)\} = \log\rho_L(p) + \log a_L + \gamma_L \log R_L(p),$$
$$R_R(p+d) \leftarrow \log\{\rho_R(p+d)\,a_R\,R_R^{\gamma_R}(p+d)\} = \log\rho_R(p+d) + \log a_R + \gamma_R \log R_R(p+d),$$
$$\bar{I}_L(p) \triangleq \frac{R_L(p)+G_L(p)+B_L(p)}{3} = \log\rho_L(p) + \log\sqrt[3]{a_L b_L c_L} + \gamma_L \log\sqrt[3]{R_L(p)\,G_L(p)\,B_L(p)}.$$
$$R_L(p) \leftarrow R_L(p) - \bar{I}_L(p) = \log\frac{a_L}{\sqrt[3]{a_L b_L c_L}} + \gamma_L \log\frac{R_L(p)}{\sqrt[3]{R_L(p)\,G_L(p)\,B_L(p)}}.$$
$$R_L(p) = \alpha_L + \gamma_L K_L(p),$$
$$\alpha_L \triangleq \log\frac{a_L}{\sqrt[3]{a_L b_L c_L}},$$
$$K_L(p) \triangleq \log\frac{R_L(p)}{\sqrt[3]{R_L(p)\,G_L(p)\,B_L(p)}}.$$
$$R_R(p+d) = \alpha_R + \gamma_R K_R(p+d).$$
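The derivation above reduces each observed log-channel to $\alpha_j + \gamma_j K_j$, where $K$ is a log-chromaticity value independent of the shading $\rho$. A hedged numpy sketch of that transform (the function name `log_chromaticity` is ours, not the paper's; `eps` guards the log):

```python
import numpy as np

def log_chromaticity(rgb, eps=1e-6):
    """Per-pixel K channels following the derivation above: the log of
    each channel minus the log of the geometric mean of the three.
    rgb: H x W x 3 array of positive intensities."""
    logc = np.log(rgb + eps)
    return logc - logc.mean(axis=2, keepdims=True)

# A global gain on one channel only shifts K by a per-channel constant
# (the alpha term), so correlation-based window matching is unaffected.
rgb = np.random.default_rng(0).uniform(0.1, 1.0, (4, 4, 3))
K1 = log_chromaticity(rgb)
K2 = log_chromaticity(rgb * np.array([2.0, 1.0, 1.0]))
```

Here `K2 - K1` is (up to `eps` effects) constant per channel, mirroring how the gains $a_L, b_L, c_L$ collapse into $\alpha_L$.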
$$\mathrm{NCC}(p,d) = \frac{\sum_{q\in N_p} I_L(q)\, I_R(q+d)}{\sqrt{\left(\sum_{q\in N_p} I_L^2(q)\right)\left(\sum_{q\in N_p} I_R^2(q+d)\right)}},$$
$$\mathrm{WNCC}(p,d) = \frac{\sum_{q\in N_p} w_L(q)\, I_L(q)\, w_R(q+d)\, I_R(q+d)}{\sqrt{\left(\sum_{q\in N_p} \left(w_L(q)\, I_L(q)\right)^2\right)\left(\sum_{q\in N_p} \left(w_R(q+d)\, I_R(q+d)\right)^2\right)}},$$
$$\mathrm{IGCM}(p,d) = \frac{\mathrm{LR}(p,d)}{\sqrt{\mathrm{L}(p,d)\,\mathrm{R}(p,d)}} = \frac{\sum_{q\in N_p} w_L^{(G)}(q)\, I_L(q)\, w_R^{(G)}(q+d)\, I_R(q+d)}{\sqrt{\left(\sum_{q\in N_p} \left(w_L^{(G)}(q)\, I_L(q)\right)^2\right)\left(\sum_{q\in N_p} \left(w_R^{(G)}(q+d)\, I_R(q+d)\right)^2\right)}},$$
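For intuition, the NCC cost above is a normalized inner product over a window, which makes it invariant to a multiplicative gain on either patch. A minimal sketch (the name `patch_ncc` is illustrative; inputs are the already-transformed values):

```python
import numpy as np

def patch_ncc(pl, pr, eps=1e-12):
    """Normalized cross-correlation of two patches, as in the
    un-weighted NCC cost above; eps avoids division by zero."""
    num = np.sum(pl * pr)
    den = np.sqrt(np.sum(pl ** 2) * np.sum(pr ** 2)) + eps
    return num / den

a = np.array([[1.0, -1.0], [2.0, -2.0]])
print(patch_ncc(a, 3 * a))  # gain-invariant: close to 1.0
```

IGCM keeps this normalization but replaces the plain products with guided-filter-weighted ones, which is what suppresses outliers near depth edges.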
$$w^{(G)}(p) = \frac{1}{|\omega|^2}\sum_{q\in N_p}\left(1 + \frac{\left(J(p)-\mu(p)\right)\left(J(q)-\mu(p)\right)}{\sigma^2(p)+\epsilon}\right),$$
$$\mathrm{LR}(p,d) = \sum_{q\in N_p} w_L^{(G)}(q)\, I_L(q)\, w_R^{(G)}(q+d)\, I_R(q+d),$$
$$\mathrm{LR}(p,d) = \frac{1}{|\omega|^2}\sum_{q\in N_p}\Big\{\left(\bar{I}_L(q) + a_L(q)\left(J_L(p)-\mu_L(q)\right)\right)\left(\bar{I}_R(q+d) + a_R(q+d)\left(J_R(p+d)-\mu_R(q+d)\right)\right)\Big\},$$
$$a_j(q) = \frac{\frac{1}{|\omega|}\sum_{k\in N_q} J_j(k)\, I_j(k) - \mu_j(q)\,\bar{I}_j(q)}{\sigma_j^2(q)+\epsilon} \quad \text{s.t. } j\in\{L,R\},$$
$$\bar{I}_j(q) = \frac{1}{|\omega|}\sum_{k\in N_q} I_j(k) \quad \text{s.t. } j\in\{L,R\}.$$
$$\mathrm{LR}(p,d) = \frac{1}{|\omega|^2}\sum_{q\in N_p}\Big\{\left(a_L(q)\, J_L(p) + b_L(q)\right)\left(a_R(q+d)\, J_R(p+d) + b_R(q+d)\right)\Big\},$$
$$b_j(q) = \bar{I}_j(q) - a_j(q)\,\mu_j(q) \quad \text{s.t. } j\in\{L,R\}.$$
$$\mathrm{LR}(p,d) = \frac{1}{|\omega|}\left(\overline{a_L a_R}\, J_L(p)\, J_R(p+d) + \overline{a_L b_R}\, J_L(p) + \overline{a_R b_L}\, J_R(p+d) + \overline{b_L b_R}\right),$$
$$\mathrm{L}(p,d) = \frac{1}{|\omega|}\left(\overline{a_L^2}\, J_L^2(p) + 2\,\overline{a_L b_L}\, J_L(p) + \overline{b_L^2}\right),$$
$$\mathrm{R}(p,d) = \frac{1}{|\omega|}\left(\overline{a_R^2}\, J_R^2(p+d) + 2\,\overline{a_R b_R}\, J_R(p+d) + \overline{b_R^2}\right).$$
$$\mathrm{IGCM}(p,d) = \frac{\overline{a_L a_R}\, J_L(p)\, J_R(p+d) + \overline{a_L b_R}\, J_L(p) + \overline{a_R b_L}\, J_R(p+d) + \overline{b_L b_R}}{\sqrt{\left(\overline{a_L^2}\, J_L^2(p) + 2\,\overline{a_L b_L}\, J_L(p) + \overline{b_L^2}\right)\left(\overline{a_R^2}\, J_R^2(p+d) + 2\,\overline{a_R b_R}\, J_R(p+d) + \overline{b_R^2}\right)}},$$
$$a_j(p) = \frac{\frac{1}{|\omega|}\sum_{k\in N_p} J_j(k)\, I_j(k) - \mu_j(p)\,\bar{I}_j(p)}{\sigma_j^2(p)+\epsilon} \quad \text{s.t. } j\in\{L,R\},$$
$$b_j(p) = \bar{I}_j(p) - a_j(p)\,\mu_j(p) \quad \text{s.t. } j\in\{L,R\}.$$
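The coefficients $a_j$ and $b_j$ above are the guided-filter coefficients of He et al. [15, 17], and every statistic they need is a windowed mean, so all of them can be computed with box filters (and hence integral images). A hedged sketch under our own assumptions (`box_mean` is our helper; edge replication at the border is our choice, not necessarily the paper's):

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window via padded cumulative sums."""
    pad = np.pad(x, r, mode='edge')
    ii = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    ii[1:, 1:] = pad.cumsum(axis=0).cumsum(axis=1)
    n, (h, w) = 2 * r + 1, x.shape
    return (ii[n:n + h, n:n + w] - ii[:h, n:n + w]
            - ii[n:n + h, :w] + ii[:h, :w]) / n ** 2

def guided_coeffs(J, I, r=4, eps=1e-4):
    """a(p), b(p) as in the equations above:
    a = (mean(J*I) - mu*Ibar) / (sigma^2 + eps), b = Ibar - a*mu,
    with all statistics taken over the window N_p."""
    mu, Ibar = box_mean(J, r), box_mean(I, r)
    var = box_mean(J * J, r) - mu ** 2   # sigma^2 of the guidance image
    a = (box_mean(J * I, r) - mu * Ibar) / (var + eps)
    return a, Ibar - a * mu

# sanity check: if I is an affine function of J, a and b recover it
J = np.random.default_rng(1).uniform(0.0, 1.0, (16, 16))
a, b = guided_coeffs(J, 2 * J + 3, r=2, eps=1e-10)
```

With a tiny `eps`, `a` is close to 2 and `b` close to 3 everywhere, since the windowed covariance of `J` and `2*J + 3` is exactly twice the windowed variance of `J`.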
$$C(p,d) = 1 - \left[\theta \sum_{\xi} \frac{\mathrm{IGCM}_\xi(p,d)}{3} + (1-\theta) \sum_{k} \frac{\mathrm{IGCM}_k(p,d)}{3}\right],$$
$$A_r(p,d) = C(p,d) + \min\left\{\begin{array}{l} A_r(p-r,\,d) \\ A_r(p-r,\,d-1) + P_1 \\ A_r(p-r,\,d+1) + P_1 \\ \min_i A_r(p-r,\,i) + P_2 \end{array}\right\} - \min_k A_r(p-r,\,k),$$
$$\theta_{r_k} := \frac{(k-1)\pi}{4} \quad \text{for } k\in\{1,2,\ldots,8\}.$$
$$A(p,d) = \sum_r A_r(p,d).$$
$$D(p) = \operatorname*{argmin}_d\, A(p,d).$$
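The aggregation recurrence above is the semi-global matching scheme of Hirschmuller [18]: costs propagate along each of 8 directions with small penalty $P_1$ for one-disparity jumps and larger $P_2$ for bigger ones, and the subtracted minimum keeps the accumulated values bounded. A minimal single-direction sketch (array layout, helper name, and penalty values are our assumptions):

```python
import numpy as np

def aggregate_path(C, P1=0.1, P2=0.5):
    """Aggregate a cost slice C (positions x disparities) along one
    scan direction, following the recurrence above."""
    A = np.array(C, dtype=float)          # A[0] = C[0]
    for p in range(1, len(A)):
        prev = A[p - 1]
        m = prev.min()                    # min_k A_r(p-r, k)
        up = np.roll(prev, 1);  up[0] = np.inf     # from disparity d-1
        dn = np.roll(prev, -1); dn[-1] = np.inf    # from disparity d+1
        A[p] = C[p] + np.minimum.reduce(
            [prev, up + P1, dn + P1, np.full_like(prev, m + P2)]) - m
    return A

# a cost slice whose per-pixel minimum sits at disparity 2 keeps that
# minimum after aggregation, while noise elsewhere is smoothed
C = np.ones((5, 6)); C[:, 2] = 0.0
```

The full algorithm runs this recurrence along all 8 directions $\theta_{r_k}$, sums the results into $A(p,d)$, and takes the winner-take-all disparity $D(p)$.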
