Abstract

Occlusion is one of the most challenging issues in light-field depth estimation. In this paper, we propose a light-field multi-occlusion model based on an analysis of light transmission. Under this model, occlusions in the central view and in the other views are treated separately. An adaptive anti-occlusion algorithm for the central view obtains more precise consistency regions (sets of unoccluded views) in the angular domain, and a subpatch anti-occlusion approach for the other views refines the initial depth maps so that depth boundaries are better preserved. We then propose a curvature-based confidence analysis that makes depth evaluation more accurate, and embed it in an energy model to regularize the depth maps. Experimental results demonstrate that the proposed algorithm achieves better subjective and objective depth-map quality than state-of-the-art algorithms.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References

  1. T. Georgiev, Z. Yu, and A. Lumsdaine, "Lytro camera technology: theory, algorithms, performance analysis," Proc. SPIE 8667, 1–10 (2013).
  2. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," Stanford Univ. Comput. Sci. Tech. Rep. 2, 1–11 (2005).
  3. M. Harris, "Focusing on everything," IEEE Spectr. 49(5), 44–50 (2012).
  4. C. Hahne, A. Aggoun, S. Haxha, V. Velisavljevic, and J. C. J. Fernández, "Light field geometry of a standard plenoptic camera," Opt. Express 22, 26659–26673 (2014).
  5. Z. Ma, Z. Cen, and X. Li, "Depth estimation algorithm for light field data by epipolar image analysis and region interpolation," Appl. Opt. 56, 6603–6610 (2017).
  6. Y. Qin, X. Jin, Y. Chen, and Q. Dai, "Enhanced depth estimation for hand-held light field cameras," in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2017), pp. 2032–2036.
  7. C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross, "Scene reconstruction from high spatio-angular resolution light fields," ACM Trans. Graph. 32, 1–12 (2013).
  8. Z. Cai, X. Liu, X. Peng, Y. Yin, A. Li, J. Wu, and B. Z. Gao, "Structured light field 3D imaging," Opt. Express 24, 20324–20334 (2016).
  9. T. Tao, Q. Chen, S. Feng, Y. Hu, and C. Zuo, "Active depth estimation from defocus using a camera array," Appl. Opt. 57, 4960–4967 (2018).
  10. S. Wanner and B. Goldluecke, "Globally consistent depth labeling of 4D light fields," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 41–48.
  11. M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, "Depth from combining defocus and correspondence using light-field cameras," in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2013), pp. 673–680.
  12. M. W. Tao, P. P. Srinivasan, J. Malik, and R. Ramamoorthi, "Depth from shading, defocus, and correspondence using light-field angular coherence," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 1940–1948.
  13. H.-G. Jeon, J. Park, G. Choe, J. Park, Y. Bok, Y.-W. Tai, and I. S. Kweon, "Accurate depth map estimation from a lenslet light field camera," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 1547–1555.
  14. T. C. Wang, A. A. Efros, and R. Ramamoorthi, "Occlusion-aware depth estimation using light-field cameras," in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2015), pp. 3487–3495.
  15. T. C. Wang, A. A. Efros, and R. Ramamoorthi, "Depth estimation with occlusion modeling using light-field cameras," IEEE Trans. Pattern Anal. Mach. Intell. 38, 2170–2181 (2016).
  16. H. Zhu, Q. Wang, and J. Yu, "Occlusion-model guided anti-occlusion depth estimation in light field," IEEE J. Sel. Top. Signal Process. 11, 965–978 (2017).
  17. Williem and I. K. Park, "Robust light field depth estimation for noisy scene with occlusion," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 4396–4404.
  18. Williem, I. K. Park, and K. M. Lee, "Robust light field depth estimation using occlusion-noise aware data costs," IEEE Trans. Pattern Anal. Mach. Intell. 40, 2484–2497 (2018).
  19. T. Ryu, B. Lee, and S. Lee, "Mutual constraint using partial occlusion artifact removal for computational integral imaging reconstruction," Appl. Opt. 54, 4147–4153 (2015).
  20. M. Ghaneizad, Z. Kavehvash, and H. Aghajan, "Human detection in occluded scenes through optically inspired multi-camera image fusion," J. Opt. Soc. Am. A 34, 856–869 (2017).
  21. S. Xie, P. Wang, X. Sang, Z. Chen, N. Guo, B. Yan, K. Wang, and C. Yu, "Profile preferentially partial occlusion removal for three-dimensional integral imaging," Opt. Express 24, 23519–23530 (2016).
  22. A. L. Dulmage and N. S. Mendelsohn, "Coverings of bipartite graphs," Can. J. Math. 10, 516–534 (1958).
  23. A. Pothen and C.-J. Fan, "Computing the block triangular form of a sparse matrix," ACM Trans. Math. Softw. 16, 303–324 (1990).
  24. Y. Boykov and V. Kolmogorov, "An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision," IEEE Trans. Pattern Anal. Mach. Intell. 26, 1124–1137 (2004).
  25. Y. Boykov, O. Veksler, and R. Zabih, "Fast approximate energy minimization via graph cuts," IEEE Trans. Pattern Anal. Mach. Intell. 23, 1222–1239 (2001).
  26. V. Kolmogorov and R. Zabih, "What energy functions can be minimized via graph cuts?" IEEE Trans. Pattern Anal. Mach. Intell. 26, 147–159 (2004).
  27. K. Honauer, O. Johannsen, D. Kondermann, and B. Goldluecke, "A dataset and evaluation methodology for depth estimation on 4D light fields," in Proceedings of Asian Conference on Computer Vision (Springer, 2016), pp. 19–34.
  28. O. Johannsen, A. Sulc, and B. Goldluecke, "What sparse light field coding reveals about scene structure," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 3262–3270.
  29. L. Si and Q. Wang, "Dense depth-map estimation and geometry inference from light fields via global optimization," in Proceedings of Asian Conference on Computer Vision (Springer, 2016), pp. 83–98.
  30. S. Zhang, H. Sheng, C. Li, J. Zhang, and X. Zhang, "Robust depth estimation for light field via spinning parallelogram operator," Comput. Vis. Image Underst. 145, 148–159 (2016).
  31. A. S. Raj, M. Lowney, and R. Shah, "Light-field database creation and depth estimation," Tech. Rep., Department of Computer Science, Stanford University (2016).




Figures (12)

Fig. 1 Comparison of results of state-of-the-art depth estimation algorithms on a multi-occlusion real scene.
Fig. 2 Different occlusion situations. (a) The light model of the two occlusion situations. Point x1 is occluded in the central view u3 and is an edge pixel in the central-view image. Near point x1, point x2 is not occluded in the central view u3 and is not an edge pixel in the central-view image; however, point x2 is occluded in other views (u1). (b) Real images of the two occlusion situations. The left column shows the spatial patches of the two points in the central-view image; the right column shows their angular patches when refocused to the true depth. Some views in the angular patch of the green point are occluded even though the point is not occluded in the central view.
Fig. 3 The light-field single-occluder model. The left of (a) shows the pinhole imaging model with an occluder, where the center of the camera image is (u0, v0); the right of (a) is the spatial patch centered at (x0, y0). The left of (b) shows that the point (x0, y0, F) can be focused by the views within the edges of the occluder, while the other views are blocked by the occluder; the right of (b) is the angular patch formed at (x0, y0).
Fig. 4 (a) Patch situations when (x0, y0) is an edge pixel occluded in the central view. (b) Patch situations when (x0, y0) is near an edge pixel and occluded in other views.
Fig. 5 The light-field multi-occlusion model. The left of (a) shows the pinhole imaging model with multiple occluders, where the center of the camera image is (u0, v0); the right of (a) is the spatial patch centered at (x0, y0). The left of (b) shows that the point (x0, y0, F) can be focused by the views within the edges of the occluders (shown by two green planes), while the other views are blocked; the right of (b) is the angular patch formed at (x0, y0).
Fig. 6 Consistency-region selection analysis. (a) A multi-occlusion point. (b) Spatial patch for the point. (c) Our initial label result. (d) Our relabeled result (the white area is the consistency region). (e) Wang et al. [15]. (f) Zhu et al. [16]. (g) Ground truth.
Fig. 7 An example of the cotton scene. (a) Color image. (b) Initial depth map. (c) Detection map for points occluded in other views (the white area). (d) Refined depth map.
Fig. 8 Confidence analysis. (a) Two points marked in the initial depth map (red point and yellow point). (b) The D-C curves of the two points: top, the yellow point in (a); bottom, the red point in (a). The ground truth is 90 for both.
Fig. 9 An example of the boxes scene. (a) Color image. (b) Initial depth map. (c) Difference map. (d) Confidence map.
Fig. 10 Comparison of dino details on the 4D Light Field Dataset.
Fig. 11 Comparison on the synthetic scenes of the 4D Light Field Dataset.
Fig. 12 Comparison on the real scenes of the Stanford Dataset [31].

Tables (2)

Table 1 General, Stratified, and Photorealistic Performance Evaluation
Table 2 MSE Comparison on Each Synthetic Image

Equations (21)


$$D^{\mathrm{spatial}} = \frac{F}{Z_1}\, D^{\mathrm{gt}}, \qquad (x_1 - x_0,\ y_1 - y_0) \parallel (X_1 - X_0,\ Y_1 - Y_0) \tag{1}$$

$$D^{\mathrm{angular}} = \frac{F}{F - Z_1}\, D^{\mathrm{gt}}, \qquad (u_1 - u_0,\ v_1 - v_0) \parallel (x_1 - x_0,\ y_1 - y_0) \tag{2}$$

$$\vec{e}_1 = (X_1 - X_0,\ Y_1 - Y_0), \qquad \vec{e}_2 = (X_2 - X_0,\ Y_2 - Y_0) \tag{3}$$

$$\vec{e}_{up} \times \vec{e}_1 > 0 \ \Rightarrow\ D_1^{\mathrm{spatial}} = \frac{F}{Z_1}\, D_1^{\mathrm{gt}}, \qquad \vec{e}_{up} \times \vec{e}_2 < 0 \ \Rightarrow\ D_2^{\mathrm{spatial}} = \frac{F}{Z_2}\, D_2^{\mathrm{gt}} \tag{4}$$

$$\vec{e}_{uv} \times \vec{e}_1 > 0 \ \Rightarrow\ D_1^{\mathrm{angular}} = \frac{F}{F - Z_1}\, D_1^{\mathrm{gt}}, \qquad \vec{e}_{uv} \times \vec{e}_2 < 0 \ \Rightarrow\ D_2^{\mathrm{angular}} = \frac{F}{F - Z_2}\, D_2^{\mathrm{gt}} \tag{5}$$

$$\overline{Dc_i} = \frac{1}{N_i} \sum \left| I_e - I_i \right| \tag{6}$$

$$Ds_i = \sqrt{(x_e - x_{c_i})^2 + (y_e - y_{c_i})^2} \tag{7}$$

$$\mathrm{Label}_e = \arg\min_i\ \overline{Dc_i} \cdot Ds_i \tag{8}$$
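As a concrete illustration of the relabeling step in Eqs. (6)–(8), the sketch below assigns an edge pixel to whichever candidate region minimizes the product of its mean color distance (Eq. (6)) and its spatial distance to the region centroid (Eq. (7)). This is a minimal NumPy sketch under our own naming; `label_pixel` and its inputs are illustrative, not the paper's implementation.

```python
import numpy as np

def label_pixel(I_e, xy_e, regions):
    """Assign an edge pixel to the candidate region minimizing
    mean color distance * spatial distance (Eqs. (6)-(8)).

    I_e     : color of the pixel being labeled, shape (3,)
    xy_e    : (x, y) coordinates of that pixel
    regions : list of (colors, centroid) pairs, one per candidate
              region i, where colors has shape (N_i, 3) and
              centroid is (x_ci, y_ci)
    """
    costs = []
    for colors, (x_c, y_c) in regions:
        dc = np.mean(np.abs(I_e - colors))            # Eq. (6): mean color distance
        ds = np.hypot(xy_e[0] - x_c, xy_e[1] - y_c)   # Eq. (7): spatial distance
        costs.append(dc * ds)                         # Eq. (8): combined cost
    return int(np.argmin(costs))                      # index of the best region
```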
$$L_\alpha(x, y, u, v) = L\!\left(x + u\left(1 - \tfrac{1}{\alpha}\right),\ y + v\left(1 - \tfrac{1}{\alpha}\right),\ u,\ v\right) \tag{9}$$

$$E(\Delta_\alpha(x, y)) = \left| \frac{1}{N_i} \sum_{u_i, v_i} L_\alpha(x, y, u_i, v_i) - L_\alpha(x, y, 0, 0) \right|, \qquad i \in \Omega_p \tag{10}$$

$$E(\Delta_\alpha^2(x, y)) = \frac{1}{N_i - 1} \sum_{u_i, v_i} \left( L_\alpha(x, y, u_i, v_i) - L_\alpha(x, y, 0, 0) \right)^2 \tag{11}$$

$$\alpha(x, y) = \arg\min_\alpha \left( E(\Delta_\alpha(x, y)) + E(\Delta_\alpha^2(x, y)) \right) \tag{12}$$

$$C_\alpha(p) = \frac{1}{N_p} \sum_{u_p, v_p} \left( L_\alpha(x, y, u_p, v_p) - L_\alpha(x, y, 0, 0) \right)^2 + \left| \frac{1}{N_p} \sum_{u_p, v_p} L_\alpha(x, y, u_p, v_p) - L_\alpha(x, y, 0, 0) \right| \tag{13}$$

$$i = \arg\min_p C_\alpha(p) \tag{14}$$

$$\alpha(x, y) = \arg\min_\alpha C_\alpha(i) \tag{15}$$

$$C_{\min}(x, y) = \min_\alpha \left( E(\Delta_\alpha(x, y)) + E(\Delta_\alpha^2(x, y)) \right) \tag{16}$$
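Equations (9)–(12) amount to shearing each sub-aperture view toward a candidate focal plane and scoring the photo-consistency of the resulting angular samples against the central view. Below is a minimal sketch, assuming grayscale views stored as 2-D NumPy arrays and bilinear resampling via SciPy; the function names and the interpolation choice are our assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def refocus_view(view, u, v, alpha):
    """Shear one sub-aperture image to the focal plane alpha, sampling
    L at (x + u(1 - 1/alpha), y + v(1 - 1/alpha), u, v) as in Eq. (9)."""
    h, w = view.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    s = 1.0 - 1.0 / alpha
    return map_coordinates(view, [ys + v * s, xs + u * s],
                           order=1, mode="nearest")

def consistency_cost(views, coords, central, alpha):
    """Per-pixel sum of the mean-difference term (Eq. (10)) and the
    variance term (Eq. (11)) over the unoccluded views in the
    consistency region. `coords` lists the (u, v) of each view and
    `central` is the central view L(x, y, 0, 0)."""
    stack = np.stack([refocus_view(v, u_i, v_i, alpha)
                      for v, (u_i, v_i) in zip(views, coords)])
    n = len(views)
    mean_term = np.abs(stack.mean(axis=0) - central)                 # Eq. (10)
    var_term = ((stack - central) ** 2).sum(axis=0) / max(n - 1, 1)  # Eq. (11)
    return mean_term + var_term
```

Sweeping `alpha` over the disparity range and taking the per-pixel argmin of `consistency_cost`, as in Eq. (12), would then yield the initial depth map.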
$$\mathrm{Cur}(D, C) = \frac{\mathrm{d}^2 C / \mathrm{d}D^2}{\left( 1 + \left( \mathrm{d}C / \mathrm{d}D \right)^2 \right)^{3/2}} \tag{17}$$

$$\mathrm{Con}(x, y) = k \cdot \frac{\mathrm{Cur}(D, C_{\min})}{C_{\min}} \cdot \frac{\mathrm{Cur}(D, C_{\min})}{\mathrm{Cur}(D_{\mathrm{second}}, C_{\mathrm{second}})} \cdot \frac{C_{\mathrm{second}}}{C_{\min}} \tag{18}$$
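The sketch below evaluates one pixel's confidence from the curvature of its depth–cost (D-C) curve in the spirit of Eqs. (17)–(18): a sharp, low minimum that clearly beats the runner-up earns high confidence. It is only a sketch under our reconstruction of Eq. (18); `k`, `eps`, and the window used to isolate the second minimum are assumptions.

```python
import numpy as np

def curvature_confidence(costs, depths, k=1.0, eps=1e-12):
    """Confidence of one pixel's depth estimate from its D-C curve.
    costs[i] is the matching cost at depth label depths[i]; both are
    1-D float arrays."""
    dC = np.gradient(costs, depths)
    d2C = np.gradient(dC, depths)
    cur = np.abs(d2C) / (1.0 + dC ** 2) ** 1.5            # Eq. (17)

    i1 = int(np.argmin(costs))                            # global minimum
    masked = np.asarray(costs, dtype=float).copy()
    masked[max(i1 - 2, 0):i1 + 3] = np.inf                # blank it out
    i2 = int(np.argmin(masked))                           # second minimum

    return (k * (cur[i1] / max(costs[i1], eps))           # sharp and low ...
              * (cur[i1] / max(cur[i2], eps))             # ... sharper than runner-up ...
              * (costs[i2] / max(costs[i1], eps)))        # ... and clearly cheaper (Eq. (18))
```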
$$E = \sum_p E_{\mathrm{data}}(p, \alpha(p)) + \lambda \sum_{p, q} E_{\mathrm{smooth}}(p, q, \alpha(p), \alpha(q)) \tag{19}$$

$$E_{\mathrm{data}}(p, \alpha(p)) = 1 - \exp\!\left( -\frac{\left( \alpha - \alpha(p) \right)^2}{2 \left( 1 - \mathrm{Con}(p) \right)^2} \right) \tag{20}$$

$$E_{\mathrm{smooth}}(p, q, \alpha(p), \alpha(q)) = \frac{\left| \alpha(p) - \alpha(q) \right|}{\left| I(p) - I(q) \right| + \omega_c \left| I_e(p) - I_e(q) \right| + \delta} \tag{21}$$
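Equations (19)–(21) define a standard MRF energy that is minimized with graph cuts [24–26]. The sketch below only evaluates the two terms; the absolute differences in the denominator, the parameter defaults, and all names are our assumptions, and an off-the-shelf graph-cut solver would perform the actual minimization.

```python
import numpy as np

def data_cost(labels, alpha_p, con_p):
    """Per-pixel data term of Eq. (20): a Gaussian-shaped penalty around
    the initial estimate alpha_p whose width shrinks as the confidence
    Con(p) grows, so confident pixels resist being relabeled."""
    sigma = np.maximum(1.0 - con_p, 1e-6)
    return 1.0 - np.exp(-(labels - alpha_p) ** 2 / (2.0 * sigma ** 2))

def smooth_cost(a_p, a_q, I_p, I_q, Ie_p, Ie_q, w_c=0.5, delta=1e-3):
    """Pairwise term of Eq. (21): label jumps are cheap across strong
    color/edge contrast and expensive in flat regions."""
    contrast = np.abs(I_p - I_q) + w_c * np.abs(Ie_p - Ie_q) + delta
    return np.abs(a_p - a_q) / contrast
```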
