Abstract

This paper proposes a new focus measure for Depth From Focus (DFF) to recover scene depth. The method employs an all-focused image of the scene to address the focus measure ambiguity that existing focus measures exhibit in the presence of occlusions. Depth discontinuities are handled effectively by using adaptively shaped and weighted support windows. The support window size can be increased conveniently for more robust depth estimation without introducing the problems that large windows usually cause in DFF. Experiments on real and synthetically refocused images show that the introduced focus measure works effectively and efficiently in real-world applications.

© 2010 Optical Society of America


References


  1. M. Subbarao and G. Surya, “Depth from defocus: A spatial domain approach,” Int. J. Comput. Vis. 13, 271–294 (1994).
  2. A. Pentland, S. Scherock, T. Darrell, and B. Girod, “Simple range cameras based on focal error,” J. Opt. Soc. Am. A 11, 2925–2934 (1994).
  3. S. K. Nayar, M. Watanabe, and M. Noguchi, “Real-time focus range sensor,” IEEE Trans. Pattern Anal. Mach. Intell. 18, 1186–1198 (1996).
  4. A. N. Rajagopalan and S. Chaudhuri, “A variational approach to recovering depth from defocused images,” IEEE Trans. Pattern Anal. Mach. Intell. 19, 1158–1164 (1997).
  5. V. Aslantas and D. T. Pham, “Depth from automatic defocusing,” Opt. Express 15, 1011–1023 (2007).
  6. P. Favaro, S. Soatto, M. Burger, and S. J. Osher, “Shape from defocus via diffusion,” IEEE Trans. Pattern Anal. Mach. Intell. 30, 518–531 (2008).
  7. E. Krotkov, “Focusing,” Int. J. Comput. Vis. 1, 223–237 (1987).
  8. V. M. Bove, Jr., “Entropy-based depth from focus,” J. Opt. Soc. Am. A 10, 561–566 (1993).
  9. S. Nayar and Y. Nakagawa, “Shape from focus,” IEEE Trans. Pattern Anal. Mach. Intell. 16, 824–831 (1994).
  10. M. Subbarao and T. Choi, “Accurate recovery of three-dimensional shape from image focus,” IEEE Trans. Pattern Anal. Mach. Intell. 17, 266–274 (1995).
  11. Y. Schechner and N. Kiryati, “Depth from defocus vs. stereo: How different really are they?” Int. J. Comput. Vis. 39, 141–162 (2000).
  12. J. A. Marshall, C. A. Burbeck, D. Ariely, J. P. Rolland, and K. E. Martin, “Occlusion edge blur: a cue to relative visual depth,” J. Opt. Soc. Am. A 13, 681–688 (1996).
  13. N. Asada, H. Fujiwara, and T. Matsuyama, “Seeing behind the scene: Analysis of photometric properties of occluding edges by the reversed projection blurring model,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 155–167 (1998).
  14. S. S. Bhasin and S. Chaudhuri, “Depth from defocus in presence of partial self occlusion,” in Proceedings of the IEEE International Conference on Computer Vision, vol. 1, p. 488 (2001).
  15. P. Favaro and S. Soatto, “Seeing beyond occlusions (and other marvels of a finite lens aperture),” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, p. 579 (2003).
  16. T. Aydin and Y. Akgul, “A new adaptive focus measure for shape from focus,” in Proceedings of the British Machine Vision Conference (BMVC) (2008).
  17. H. Nair and C. Stewart, “Robust focus ranging,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’92) (1992), pp. 309–314.
  18. J. M. Tenenbaum, “Accommodation in computer vision,” Ph.D. thesis, Stanford University, Stanford, CA, USA (1971).
  19. T. M. Subbarao and A. Nikzad, “Focusing technique,” Image Sig. Process. Anal. 32, 2824–2836 (1993).
  20. S. Jutamulia, T. Asakura, R. D. Bahuguna, and P. C. DeGuzman, “Autofocusing based on power-spectra analysis,” Appl. Opt. 33, 6210–6212 (1994).
  21. Y. Xiong and S. Shafer, “Moment and hypergeometric filters for high precision computation of focus, stereo and optical flow,” Int. J. Comput. Vis. 22, 25–59 (1997).
  22. J. Kautsky, J. Flusser, B. Zitová, and S. Šimberová, “A new wavelet-based measure of image focus,” Pattern Recognit. Lett. 23, 1785–1794 (2002).
  23. M. Kristan, J. Perš, M. Perše, and S. Kovačič, “A Bayes-spectral-entropy-based measure of camera focus using a discrete cosine transform,” Pattern Recognit. Lett. 27, 1431–1439 (2006).
  24. J. Meneses, M. A. Suarez, J. Braga, and T. Gharbi, “Extended depth of field using shapelet-based image analysis,” Appl. Opt. 47, 169–178 (2008).
  25. W. Huang and Z. Jing, “Evaluation of focus measures in multi-focus image fusion,” Pattern Recognit. Lett. 28, 493–500 (2007).
  26. M. Subbarao and J.-K. Tyan, “Selecting the optimal focus measure for autofocusing and depth-from-focus,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 864–870 (1998).
  27. Y. Tian, K. Shieh, and C. F. Wildsoet, “Performance of focus measures in the presence of nondefocus aberrations,” J. Opt. Soc. Am. A 24, B165–B173 (2007).
  28. M. Subbarao, T.-C. Wei, and G. Surya, “Focused image recovery from two defocused images recorded with different camera settings,” IEEE Trans. Image Process. 4, 1613–1628 (1995).
  29. P. Favaro and S. Soatto, 3-D Shape Estimation and Image Restoration: Exploiting Defocus and Motion-Blur (Springer, London, 2007).
  30. D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47, 7–42 (2002).
  31. R. Sakurai, “Irisfilter,” http://www.reiji.net/ (2004).
  32. J. Chen, S. Paris, and F. Durand, “Real-time edge-aware image processing with the bilateral grid,” in Proceedings of SIGGRAPH ’07 (ACM, New York, NY, USA, 2007), p. 103.
  33. N. Joshi, R. Szeliski, and D. Kriegman, “PSF estimation using sharp edge prediction,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008).
  34. N. Joshi, C. Zitnick, R. Szeliski, and D. Kriegman, “Image deblurring and denoising using color priors,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1550–1557 (2009).



Figures (9)

Fig. 1.

Image formation process of a scene with two fronto-parallel layers where the focus is set to the (a) background (occluded) and (b) foreground (occluding) layer. The image point q may appear focused when the focus is set to either the foreground or the background. Real images of a sample scene taken with the focus set to the (c) occluded and (d) occluding layer. Although the antlers are defocused in (c), their corresponding image regions contain higher-frequency variation than in (d). The images in (c) and (d) are cropped from the images of the scene in Fig. 8.

Fig. 2.

Computed adaptive support windows for four different pixels in an image. Brighter pixels indicate larger weights; the weights in the darker regions are close to zero.

Fig. 3.

Comparison of different focus measures for synthetically generated images. The focus measures are the proposed adaptive focus measure (AFM), sum of Laplacian (LPC), sum of modified Laplacian (SML), and variance (VAR).

Fig. 4.

All-focused image of a sample real scene (a) and images of the same scene with different focus settings (b–d). Sample occlusion regions causing focus measure ambiguity are marked with red and yellow squares in (a).

Fig. 5.

Performance of different focus measures around the two occlusion regions marked in Fig. 4(a). Three existing focus measures and the proposed focus measure (AFM) are used. (a) is the graph for the red window and (b) is the graph for the yellow window. The dashed lines show the manually detected true depth values. Depending on the texture in the occluded and occluding regions, the existing focus measures produce ambiguous or incorrect responses.

Fig. 6.

Synthetically refocused images (a–c) generated with the Iris filter from the all-focused image (d), with the true depth map (e). Estimated depth maps of the scene using our method (f) and traditional DFF with the SML (g), LPC (h), and VAR (i) focus measures. Closer objects are represented by darker regions in the depth maps.

Fig. 7.

Synthetically refocused images (a–c), all-focused image (d), and true depth image (e). Estimated depth images of the scene using our method (f) and traditional DFF with the SML (g), LPC (h), and VAR (i) focus measures.

Fig. 8.

All-focused image of a real scene (a), sample images in the DFF set (b–d), and the depth image estimated by our method (e). Estimated depth images of the scene by DFF using adaptive windows with the ML focus measure (f) and by traditional DFF with the SML (g), LPC (h), and VAR (i) focus measures.

Fig. 9.

All-focused image of another real scene (a) and sample images in the DFF set (b–d). Estimated depth images of the scene by our method (e), by DFF using adaptive windows with the ML focus measure (f), and by the traditional DFF method with the SML (g), LPC (h), and VAR (i) focus measures.

Tables (2)


Table 1. RMS errors of depth values with various window sizes on synthetically refocused real images of Fig. 6.


Table 2. RMS errors of depth values with various window sizes on synthetically refocused real images of Fig. 7.

Equations (10)


\[ \frac{1}{f} = \frac{1}{u} + \frac{1}{v}, \tag{1} \]
\[ f_i(x, y) = \arg\max_i \big( FM_i(x, y) \big), \tag{2} \]
\[ I_q = L_{\mathrm{occluding}} * \delta(R), \tag{3} \]
\[ I_q = \alpha\, L_{\mathrm{occluded}} * \delta(P) + L_{\mathrm{occluding}} * H_R, \tag{4} \]
\[ \mathrm{NCC}_{\Omega}(A, B) = \frac{\sum (A - \bar{A})(B - \bar{B})}{\sqrt{\sum (A - \bar{A})^2 \, \sum (B - \bar{B})^2}}, \tag{5} \]
\[ FM_i(x, y) = \mathrm{NCC}_{\Omega}\big( I_f(x, y),\, I_i(x, y) \big), \tag{6} \]
\[ \omega_{x_0 y_0}(x, y) = \exp\!\left( -\left( \frac{\Delta d}{\gamma_1} + \frac{\Delta I_f}{\gamma_2} \right) \right), \tag{7} \]
\[ \Delta d = \left[ (x - x_0)^2 + (y - y_0)^2 \right]^{1/2}, \quad (x, y) \in \Omega_{x_0, y_0}, \tag{8} \]
\[ \Delta I_f = \left\| I_f(x, y) - I_f(x_0, y_0) \right\|, \tag{9} \]
\[ AFM_i(x_0, y_0) = \sum \omega_{x_0 y_0}(x, y)\, FM_i(x, y), \qquad i = 1, \ldots, N. \tag{10} \]
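For readers who want to experiment, below is a minimal NumPy sketch of the measure defined by Eqs. (5)–(10) together with the depth selection of Eq. (2): FM_i is the local normalized cross-correlation between the all-focused image and each focus-stack slice, the adaptive support weights combine spatial distance with intensity similarity in the all-focused image, and the recovered depth index per pixel maximizes the weighted sum AFM_i. The window sizes and the constants gamma1 and gamma2 here are illustrative assumptions, not values prescribed by the paper.

import numpy as np

def ncc(a, b, eps=1e-8):
    # Normalized cross-correlation of two same-size patches, Eq. (5).
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def focus_measure(all_focused, slice_i, half=3):
    # FM_i(x, y), Eq. (6): NCC between the all-focused image I_f and
    # focus-stack slice I_i over a small window Omega around each pixel.
    h, w = all_focused.shape
    fm = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
            fm[y, x] = ncc(all_focused[win], slice_i[win])
    return fm

def adaptive_weights(all_focused, x0, y0, half, gamma1, gamma2):
    # Support weights around (x0, y0), Eqs. (7)-(9): the weight decays with
    # spatial distance and with intensity difference in the all-focused image.
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    delta_d = np.hypot(xx, yy)                                   # Eq. (8)
    ref = all_focused[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    delta_i = np.abs(ref - all_focused[y0, x0])                  # Eq. (9)
    return np.exp(-(delta_d / gamma1 + delta_i / gamma2))        # Eq. (7)

def depth_map(all_focused, stack, half=7, gamma1=5.0, gamma2=10.0):
    # Per-pixel depth index: the stack slice maximizing AFM_i, Eqs. (2), (10).
    all_focused = all_focused.astype(np.float64)
    fms = [focus_measure(all_focused, s.astype(np.float64)) for s in stack]
    h, w = all_focused.shape
    depth = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            wts = adaptive_weights(all_focused, x, y, half, gamma1, gamma2)
            win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
            afm = [(wts * fm[win]).sum() for fm in fms]          # Eq. (10)
            depth[y, x] = int(np.argmax(afm))                    # Eq. (2)
    return depth

For equal-size grayscale float images, a call such as depth = depth_map(I_f, [I_1, ..., I_N]) returns the index of the best-focused slice per pixel; the brute-force loops are written for clarity rather than speed.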
