Abstract

Scene depth estimation is gaining importance as more and more AR/VR and robot vision applications are developed. Conventional depth-from-defocus techniques can passively provide depth maps from a single image, which is especially advantageous for moving scenes. However, they suffer from a depth ambiguity problem: two distinct depth planes can produce the same amount of defocus blur in the captured image. We solve this ambiguity and, as a consequence, introduce a passive technique that provides a one-to-one mapping between depth and defocus blur. Our method relies on the fact that the relationship between defocus blur and depth is also wavelength dependent. The depth ambiguity is thus resolved by leveraging (multi-)spectral information. Specifically, we analyze the difference in defocus blur between two channels to obtain distinct scene depth regions. This paper provides the derivation of our solution, a robustness analysis, and a validation on consumer lenses.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

References

  1. A. Wilson and H. Benko, “Combining multiple depth cameras and projectors for interactions on, above and between surfaces,” in ACM symposium on User interface software and technology, (2010), pp. 273–282.
  2. A. Canessa, M. Chessa, A. Gibaldi, S. Sabatini, and F. Solari, “Calibrated depth and color cameras for accurate 3d interaction in a stereoscopic augmented reality environment,” J. Vis. Commun. Image Represent. 25(1), 227–237 (2014).
  3. E. Marchand, H. Uchiyama, and F. Spindler, “Pose estimation for augmented reality: a hands-on survey,” IEEE Trans. Visual. Comput. Graphics 22(12), 2633–2651 (2016).
  4. J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-scale direct monocular SLAM,” in European Conference on Computer Vision (ECCV), (2014), pp. 834–849.
  5. M. Mancini, G. Costante, P. Valigi, and T. Ciarfuglia, “Fast robust monocular depth estimation for obstacle detection with fully convolutional networks,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (2016), pp. 4296–4303.
  6. Q. Yang, “Local smoothness enforced cost volume regularization for fast stereo correspondence,” IEEE Signal Process. Lett. 22(9), 1429–1433 (2015).
  7. M. Nia, E. Wong, K. Sim, and T. Rakgowa, “Stereo correspondence matching with Clifford phase correlation,” in IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), (2015), pp. 193–195.
  8. B. Park, Y. Keh, D. Lee, Y. Kim, S. Kim, K. Sung, J. Lee, D. Jang, and Y. Yoon, “Outdoor operation of structured light in mobile phone,” in ICCV Workshop, (2017), pp. 2392–2398.
  9. J. Elder and S. Zucker, “Local scale control for edge detection and blur estimation,” IEEE Trans. Pattern Anal. Machine Intell. 20(7), 699–716 (1998).
  10. S. Zhuo and T. Sim, “Defocus map estimation from a single image,” Pattern Recognit. 44(9), 1852–1858 (2011).
  11. H. Hu and G. Haan, “Low cost robust blur estimator,” in IEEE International Conference on Image Processing (ICIP), (2006), pp. 617–620.
  12. J. Lin, X. Ji, W. Xu, and Q. Dai, “Absolute depth estimation from a single defocused image,” IEEE Trans. on Image Process. 22(11), 4545–4550 (2013).
  13. F. Mannan and M. S. Langer, “What is a good model for depth from defocus?” in IEEE Conference on Computer and Robot Vision (CRV), (2016), pp. 273–280.
  14. Y. Tai and M. S. Brown, “Single image defocus map estimation using local contrast prior,” in IEEE International Conference on Image Processing (ICIP), (2009), pp. 1797–1800.
  15. A. Sellent and P. Favaro, “Which side of the focal plane are you on?” in IEEE International Conference on Computational Photography (ICCP), (2014), pp. 1–8.
  16. M. El Helou, M. Shahpaski, and S. Süsstrunk, “Closed-form solution to disambiguate defocus blur in single-perspective images,” in Mathematics in Imaging, (Optical Society of America, 2019), pp. MM1D–2.
  17. Y. Xiong and S. A. Shafer, “Depth from focusing and defocusing,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), (1993), pp. 68–73.
  18. Y. Tai and M. Brown, “Single image defocus map estimation using local contrast prior,” in IEEE International Conference on Image Processing (ICIP), (2009), pp. 1797–1800.
  19. J. Garcia, J. Sanchez, X. Orriols, and X. Binefa, “Chromatic aberration and depth extraction,” in IEEE International Conference on Pattern Recognition (ICPR), (2000), pp. 762–765.
  20. M. El Helou, Z. Sadeghipoor, and S. Süsstrunk, “Correlation-based deblurring leveraging multispectral chromatic aberration in color and near-infrared joint acquisition,” in IEEE International Conference on Image Processing (ICIP), (2017).
  21. M. Chiang and T. E. Boult, “Local blur estimation and super-resolution,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), (1997), pp. 821–826.
  22. A. Chakrabarti, T. Zickler, and W. T. Freeman, “Analyzing spatially-varying blur,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), (2010), pp. 2512–2519.
  23. N. Joshi, R. Szeliski, and D. J. Kriegman, “PSF estimation using sharp edge prediction,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), (2008), pp. 1–8.
  24. P. Peebles and R. Dicke, “Origin of the globular star clusters,” Astrophys. J. 154, 891 (1968).
  25. J. Ables, “Fourier transform photography: a new method for x-ray astronomy,” Publ. Astron. Soc. Aust. 1(4), 172–173 (1968).
  26. E. E. Fenimore and T. Cannon, “Coded aperture imaging with uniformly redundant arrays,” Appl. Opt. 17(3), 337–347 (1978).
  27. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26(3), 70 (2007).
  28. S. Mahmoudpour and M. Kim, “Superpixel-based depth map estimation using defocus blur,” in IEEE International Conference on Image Processing (ICIP), (2016), pp. 2613–2617.
  29. O. Cossairt and S. Nayar, “Spectral focal sweep: Extended depth of field from chromatic aberrations,” in IEEE International Conference on Computational Photography (ICCP), (2010), pp. 1–8.
  30. P. Trouvé, F. Champagnat, G. Le Besnerais, J. Sabater, T. Avignon, and J. Idier, “Passive depth estimation using chromatic aberration and a depth from defocus approach,” Appl. Opt. 52(29), 7152–7164 (2013).
  31. M. El Helou, F. Dümbgen, and S. Süsstrunk, “AAM: An assessment metric of axial chromatic aberration,” in IEEE International Conference on Image Processing (ICIP), (2018), pp. 2486–2490.
  32. A. Pentland, T. Darrell, M. Turk, and W. Huang, “A simple, real-time range camera,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), (1989), pp. 256–261.
  33. A. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9(4), 523–531 (1987).
  34. “ISO 12233:2014, Photography - Electronic still-picture cameras - Resolution and spatial frequency responses.”
  35. E. Allen and S. Triantaphillidou, The Manual of Photography and Digital Imaging (CRC Press, 2012).
  36. F. Crété, T. Dolmiere, P. Ladret, and M. Nicolas, “The blur effect: perception and estimation with a new no-reference perceptual blur metric,” in Electronic Imaging, (2007), pp. 64920I.

Figures (8)

Fig. 1. The focal plane of the green channel (a) is at a shallower depth than that of the NIR channel (b). By analyzing the defocus blur difference (c), the depth range is split into two regions that are used to solve the depth ambiguity. A similar shift in focus exists between the red and green channels; we show the NIR channel for better visualization.
Fig. 2. Simple lens model physics. (a) Objects A and C have the same circle of confusion (blur radius r) despite being at different depths. (b) Axial chromatic aberration: light is dispersed by the lens due to different refractive indices for different wavelengths.
Fig. 3. (a) Sharp image. (b) and (c) Same image blurred as it would be if captured in channels $Y$ (green) and $Z$ (red) respectively, with defocus blur (depth increasing linearly from left to right) and uniform depth-independent blur. Best seen on screen.
Fig. 4. Blur estimation results across synthetic images of channels $Y$ (Fig. 3(b)) and $Z$ (Fig. 3(c)), moving in the direction of increasing depth. The experimental estimate of $\Delta _{Z,Y}(d)$ is plotted in black. The sign of the metric $\Delta _{Z,Y}$ defines the green and red shaded regions that we use to create our one-to-one blur to depth mapping.
Fig. 5. Top row: blur magnitude as a function of depth for each of the RGBN channels, repeated over three image sets. The plots are in the corresponding colors, with black for NIR. Blur magnitude is estimated as the standard deviation of a Gaussian curve fitted to the edge spread function, which in turn is computed from a 5$^\circ$ slanted edge; a minimal sketch of this estimation is given after the figure list. Bottom row: the corresponding $\Delta$ plots, in red for $\Delta _{R,G}$ and in black for $\Delta _{NIR, G}$.
Fig. 6. Examples with the sign of $\Delta _{NIR,G}$ overlaid transparently. Blue corresponds to $\Delta _{NIR, G}>0$ and red to $\Delta _{NIR, G}<0$.
Fig. 7. (a) Ruler with continuously increasing depth. (b) The corresponding depth map from [11] cannot resolve the depth ambiguity: depth is incorrectly assumed to be proportional to defocus blur. (c) The depth map corrected using our $\Delta _{NIR,G}$ solution is proportional to blur on one side of the focal plane and inversely related to it on the other, yielding a correct depth from defocus (depth values increase from blue to red).
Fig. 8. (a) Five books placed with decreasing depth from left to right. (b) Depth from defocus blur based on [36], generated from the G channel on every book. As in Fig. 7(b), the depth ambiguity cannot be resolved. (c) The depth map corrected using our $\Delta _{R,G}$ solution (depth values increase from blue to red).
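Fig. 5 estimates blur as the standard deviation of a Gaussian fitted to the edge spread function of a 5° slanted edge. The following is a minimal, illustrative sketch of such an estimate; the synthetic patch, the erf-shaped edge model, and the scipy-based fit are assumptions made for illustration, not the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def esf_from_slanted_edge(patch, angle_deg=5.0):
    """Oversampled edge spread function (ESF) from a near-vertical slanted edge:
    project every pixel onto the direction across the edge and sort."""
    h, w = patch.shape
    slope = np.tan(np.deg2rad(angle_deg))
    rows, cols = np.mgrid[0:h, 0:w].astype(float)
    dist = cols - slope * rows            # signed position across the edge
    order = np.argsort(dist.ravel())
    return dist.ravel()[order], patch.ravel()[order]

def gaussian_edge(x, sigma, mu, a, b):
    """Ideal step edge blurred by a Gaussian: a scaled Gaussian CDF (erf) profile."""
    return a * 0.5 * (1.0 + erf((x - mu) / (sigma * np.sqrt(2.0)))) + b

def blur_sigma(patch, angle_deg=5.0):
    x, y = esf_from_slanted_edge(patch, angle_deg)
    mu0 = x[np.argmin(np.abs(y - (y.max() + y.min()) / 2.0))]   # rough edge location
    p0 = [1.0, float(mu0), float(y.max() - y.min()), float(y.min())]
    popt, _ = curve_fit(gaussian_edge, x, y, p0=p0, maxfev=10000)
    return abs(popt[0])                   # fitted standard deviation = blur magnitude

# Synthetic check: a 5-degree slanted edge blurred with sigma = 1.5 pixels.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
patch = gaussian_edge(xx - np.tan(np.deg2rad(5.0)) * yy, 1.5, 32.0, 1.0, 0.0)
print(f"estimated blur sigma: {blur_sigma(patch):.2f} px")
```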

Tables (1)


Table 1. Accuracy (%) of the sign (positive or negative) of the metrics $\Delta _{R, G}$ (left) and $\Delta _{NIR, G}$ (right), which determines the disambiguation regions. Results are reported on our four test sets using four off-the-shelf blur estimators.

Equations (12)


$$r_W(d) = L\left|\,1 - \frac{x}{f_W} + \frac{x}{d}\,\right|,$$
$$d_W^0 = \frac{x}{\frac{x}{f_W} - 1}.$$
$$\Delta_{Z,Y}(d) =
\begin{cases}
\alpha = L\left(\frac{x}{f_Y} - \frac{x}{f_Z}\right), & d \le d_Y^0 \\
2L\left(1 + \frac{x}{d}\right) - L\left(\frac{x}{f_Y} + \frac{x}{f_Z}\right), & d \in [d_Y^0, d_Z^0] \\
-\alpha = L\left(\frac{x}{f_Z} - \frac{x}{f_Y}\right), & d \ge d_Z^0.
\end{cases}$$
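To make the sign-based disambiguation concrete, the minimal sketch below (with illustrative lens parameters that are not taken from the paper) evaluates $r_W(d)$ for two channels and locates where $\Delta_{Z,Y}(d)$ changes sign: the difference is constant and positive before the first focal plane, decreases through zero between the two focal planes, and is constant and negative beyond the second.

```python
import numpy as np

def blur_radius(d, L, x, f):
    """Thin-lens circle-of-confusion radius r_W(d) = L * |1 - x/f + x/d|."""
    return L * np.abs(1.0 - x / f + x / d)

# Illustrative parameters (millimeters), not from the paper.
L = 2.0                 # aperture radius
x = 52.0                # lens-to-sensor distance
f_Y, f_Z = 50.0, 50.4   # channel Y (green) focuses closer than channel Z (red/NIR)

d = np.linspace(300.0, 5000.0, 2000)                        # candidate depths
delta = blur_radius(d, L, x, f_Z) - blur_radius(d, L, x, f_Y)

# Positive Delta_{Z,Y}: near-side region; negative: far-side region.
print(f"sign change around d = {d[np.argmin(np.abs(delta))]:.0f} mm")
print(f"near-side samples: {(delta > 0).sum()}, far-side samples: {(delta < 0).sum()}")
```

In practice, the two blur levels are estimated from the captured channels themselves (as in Fig. 4) rather than from the lens model; only the sign of their difference is needed for the disambiguation.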
$$I_b(d, \lambda_W) = I(d) * PSF_{eq}(d, \lambda_W, x, y) = I(d) * H_{def}(d, \lambda_W, x, y) * H_{0}(x, y),$$
$$\sigma_{def}(d, \lambda_W) = k \, r_W(d),$$
$$\sigma_{eq}^2(d, \lambda_W, x, y) = \sigma_{def}^2(d, \lambda_W, x, y) + \sigma_0^2(x, y).$$
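For clarity (this inversion is implicit in the excerpt), the two previous equations imply that the calibrated depth-independent blur $\sigma_0$ can be removed from an estimated equivalent blur before converting it to a blur radius:

$$\sigma_{def}(d, \lambda_W) = \sqrt{\sigma_{eq}^2(d, \lambda_W, x, y) - \sigma_0^2(x, y)}, \qquad r_W(d) = \frac{\sigma_{def}(d, \lambda_W)}{k}.$$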
$$k_1 = \begin{bmatrix} 1/2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1/2 \end{bmatrix};\quad
k_2 = \begin{bmatrix} 0 & 0 & 1/2 \\ 0 & 0 & 0 \\ 1/2 & 0 & 0 \end{bmatrix};\quad
k_3 = \begin{bmatrix} 1/2 & 0 & 1/2 \end{bmatrix};\quad
k_4 = k_3^T.$$
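These four kernels average pixel pairs along the two diagonals ($k_1$, $k_2$), the horizontal ($k_3$), and the vertical ($k_4$) directions. Below is a minimal sketch of applying them as directional re-blur filters; the random test patch and the scipy-based convolution are illustrative assumptions, and how exactly $M$ and $M_{reblur}$ are formed from the re-blurred versions is specified in the full article, not in this excerpt.

```python
import numpy as np
from scipy.ndimage import convolve

# Directional averaging kernels from the equation above.
k1 = np.array([[0.5, 0.0, 0.0],
               [0.0, 0.0, 0.0],
               [0.0, 0.0, 0.5]])   # one diagonal
k2 = np.array([[0.0, 0.0, 0.5],
               [0.0, 0.0, 0.0],
               [0.5, 0.0, 0.0]])   # other diagonal
k3 = np.array([[0.5, 0.0, 0.5]])   # horizontal neighbors
k4 = k3.T                          # vertical neighbors

# Illustrative use: re-blur a patch along each direction and measure the change,
# which is the quantity the edge blur measure b (next equation) compares.
rng = np.random.default_rng(0)
patch = rng.random((32, 32))
for name, k in zip(("k1", "k2", "k3", "k4"), (k1, k2, k3, k4)):
    reblurred = convolve(patch, k, mode="nearest")
    print(name, f"change under re-blur: {np.linalg.norm(patch - reblurred):.2f}")
```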
$$b(x_{edge}, y_{edge}) = 1 - \frac{\left\| M(x_{edge}, y_{edge}) \right\|_2}{\left\| M_{reblur}(x_{edge}, y_{edge}) \right\|_2}$$
$$r_W(d) = L\left|\,1 - \frac{x}{f_W + \gamma_W^{max}} + \frac{x}{d}\,\right| + e_{r_W}.$$
$$\Delta_{Z,Y}(d) =
\begin{cases}
L\left(\frac{x}{f_Y + \gamma_Y^{max}} - \frac{x}{f_Z + \gamma_Z^{max}}\right) + E_{Z,Y}, & d \le d_Y^0 \\
2L\left(1 + \frac{x}{d}\right) - L\left(\frac{x}{f_Y + \gamma_Y^{max}} + \frac{x}{f_Z + \gamma_Z^{max}}\right) + E_{Z,Y}, & d \in [d_Y^0, d_Z^0] \\
L\left(\frac{x}{f_Z + \gamma_Z^{max}} - \frac{x}{f_Y + \gamma_Y^{max}}\right) + E_{Z,Y}, & d \ge d_Z^0,
\end{cases}$$
$$d_W^0 = \frac{x}{\frac{x}{f_W + \gamma_W^{max}} - 1}.$$
$$d_n = \frac{2Lx}{L\left(\frac{x}{f_Y + \gamma_Y^{max}} + \frac{x}{f_Z + \gamma_Z^{max}}\right) - E_{Z,Y} - 2L}.$$
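As a clarifying step (implicit in the excerpt), $d_n$ is the depth at which the middle branch of the perturbed piecewise expression for $\Delta_{Z,Y}(d)$ vanishes; setting that branch to zero and solving for $d$ gives

$$2L\left(1 + \frac{x}{d_n}\right) - L\left(\frac{x}{f_Y + \gamma_Y^{max}} + \frac{x}{f_Z + \gamma_Z^{max}}\right) + E_{Z,Y} = 0
\;\Longrightarrow\;
d_n = \frac{2Lx}{L\left(\frac{x}{f_Y + \gamma_Y^{max}} + \frac{x}{f_Z + \gamma_Z^{max}}\right) - E_{Z,Y} - 2L},$$

which shows how the focal-length calibration bounds $\gamma^{max}$ and the blur-estimation error $E_{Z,Y}$ shift the sign-change depth used for disambiguation.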