C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross, “Scene reconstruction from high spatio-angular resolution light fields,” ACM Trans. Graph. 32, 1–12 (2013).

H.-G. Jeon, J. Park, G. Choe, J. Park, Y. Bok, Y.-W. Tai, and I. S. Kweon, “Accurate depth map estimation from a lenslet light field camera,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2015), pp. 1547–1555.

Y. Boykov and V. Kolmogorov, “An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 1124–1137 (2004).

Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 1222–1239 (2001).

Y. Qin, X. Jin, Y. Chen, and Q. Dai, “Enhanced depth estimation for hand-held light field cameras,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, (IEEE, 2017), pp. 2032–2036.

S. Xie, P. Wang, X. Sang, Z. Chen, N. Guo, B. Yan, K. Wang, and C. Yu, “Profile preferentially partial occlusion removal for three-dimensional integral imaging,” Opt. Express 24, 23519–23530 (2016).

A. L. Dulmage and N. S. Mendelsohn, “Coverings of bipartite graphs,” Can. J. Math. 10, 516–534 (1958).

T. C. Wang, A. A. Efros, and R. Ramamoorthi, “Depth estimation with occlusion modeling using light-field cameras,” IEEE Trans. Pattern Anal. Mach. Intell. 38, 2170–2181 (2016).

T. C. Wang, A. A. Efros, and R. Ramamoorthi, “Occlusion-aware depth estimation using light-field cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2015), pp. 3487–3495.

A. Pothen and C.-J. Fan, “Computing the block triangular form of a sparse matrix,” ACM Trans. Math. Softw. 16, 303–324 (1990).

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Comput. Sci. Tech. Rep. 2, 1–11 (2005).

T. Georgiev, Z. Yu, and A. Lumsdaine, “Lytro camera technology: theory, algorithms, performance analysis,” Proc. SPIE 8667, 1–10 (2013).

K. Honauer, O. Johannsen, D. Kondermann, and B. Goldluecke, “A dataset and evaluation methodology for depth estimation on 4d light fields,” in Proceedings of Asian Conference on Computer Vision, (Springer, 2016), pp. 19–34.

O. Johannsen, A. Sulc, and B. Goldluecke, “What sparse light field coding reveals about scene structure,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 3262–3270.

S. Wanner and B. Goldluecke, “Globally consistent depth labeling of 4d light fields,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2012), pp. 41–48.

M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2013), pp. 673–680.

V. Kolmogorov and R. Zabih, “What energy functions can be minimized via graph cuts?” IEEE Trans. Pattern Anal. Mach. Intell. 26, 147–159 (2004).

Williem, I. K. Park, and K. M. Lee, “Robust light field depth estimation using occlusion-noise aware data costs,” IEEE Trans. Pattern Anal. Mach. Intell. 40, 2484–2497 (2018).

S. Zhang, H. Sheng, C. Li, J. Zhang, and X. Zhang, “Robust depth estimation for light field via spinning parallelogram operator,” Comput. Vis. Image Underst. 145, 148–159 (2016).

A. S. Raj, M. Lowney, and R. Shah, “Light-field database creation and depth estimation,” Tech. Rep., Department of Computer Science, Stanford University (2016).

M. W. Tao, P. P. Srinivasan, J. Malik, and R. Ramamoorthi, “Depth from shading, defocus, and correspondence using light-field angular coherence,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2015), pp. 1940–1948.

M. Harris, “Focusing on everything,” IEEE Spectr. 49(5), 44–50 (2012).

Williem and I. K. Park, “Robust light field depth estimation for noisy scene with occlusion,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 4396–4404.

L. Si and Q. Wang, “Dense depth-map estimation and geometry inference from light fields via global optimization,” in Proceedings of Asian Conference on Computer Vision, (Springer, 2016), pp. 83–98.

H. Zhu, Q. Wang, and J. Yu, “Occlusion-model guided anti-occlusion depth estimation in light field,” IEEE J. Sel. Top. Signal Process. 11, 965–978 (2017).

T. Ryu, B. Lee, and S. Lee, “Mutual constraint using partial occlusion artifact removal for computational integral imaging reconstruction,” Appl. Opt. 54, 4147–4153 (2015).

Z. Ma, Z. Cen, and X. Li, “Depth estimation algorithm for light field data by epipolar image analysis and region interpolation,” Appl. Opt. 56, 6603–6610 (2017).

T. Tao, Q. Chen, S. Feng, Y. Hu, and C. Zuo, “Active depth estimation from defocus using a camera array,” Appl. Opt. 57, 4960–4967 (2018).

Z. Cai, X. Liu, X. Peng, Y. Yin, A. Li, J. Wu, and B. Z. Gao, “Structured light field 3d imaging,” Opt. Express 24, 20324–20334 (2016).

C. Hahne, A. Aggoun, S. Haxha, V. Velisavljevic, and J. C. J. Fernández, “Light field geometry of a standard plenoptic camera,” Opt. Express 22, 26659–26673 (2014).
