Abstract

A defocus blur identification technique based on histogram analysis of a real edge image is presented. The image defocus process of a camera is formulated by incorporating a nonlinear camera response and an intensity-dependent noise model. Histogram matching between the synthesized and real defocused regions is then carried out with intensity-dependent filtering. By iteratively varying the point-spread function parameters, the best blur extent is identified from the histogram comparison. Experiments were performed on both synthetic and real edge images, and the results demonstrate the robustness and feasibility of the proposed technique.

© 2012 Optical Society of America
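The iterative identification loop described in the abstract can be sketched as follows. This is a minimal illustration only: it assumes a Gaussian point-spread function parameterized by its standard deviation and a simple L1 histogram distance, and it omits the paper's nonlinear camera response, intensity-dependent noise model, and intensity-dependent filtering. The function names (`histogram_distance`, `identify_blur`) are illustrative, not from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def histogram_distance(a, b, bins=64):
    """L1 distance between normalized intensity histograms of two regions."""
    ha, _ = np.histogram(a, bins=bins, range=(0.0, 1.0), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0.0, 1.0), density=True)
    return np.abs(ha - hb).sum()


def identify_blur(real_region, sharp_edge, sigmas):
    """Return the PSF sigma whose synthesized blur best matches the real region.

    For each candidate sigma, blur the ideal (sharp) edge, compare histograms
    against the observed defocused region, and keep the best match.
    """
    best_sigma, best_dist = None, np.inf
    for sigma in sigmas:
        synth = gaussian_filter(sharp_edge, sigma)
        d = histogram_distance(real_region, synth)
        if d < best_dist:
            best_sigma, best_dist = sigma, d
    return best_sigma


# Example: recover the blur extent of a synthetically defocused step edge.
edge = np.zeros((64, 64))
edge[:, 32:] = 1.0                       # ideal vertical step edge
blurred = gaussian_filter(edge, 2.5)     # "observed" defocused region
sigma_hat = identify_blur(blurred, edge, np.arange(0.5, 5.0, 0.25))
```

In this noise-free toy setting the histogram distance vanishes at the true sigma, so the grid search recovers it; the paper's contribution is making the same matching reliable on real images by modeling the camera response and noise before comparing histograms.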


2011

A. Ciancio, A. da Costa, E. da Silva, A. Said, R. Samadani, and P. Obrador, “No-reference blur assessment of digital pictures based on multifeature classifiers,” IEEE Trans. Image Process. 20, 64–75 (2011).
[CrossRef]

2010

J. Yang and D. Schonfeld, “Virtual focus and depth estimation from defocused video sequences,” IEEE Trans. Image Process. 19, 668–679 (2010).
[CrossRef]

Z. Liu, W. Li, L. Shen, Z. Han, and Z. Zhang, “Automatic segmentation of focused objects from images with low depth of field,” Pattern Recogn. Lett. 31, 572–581 (2010).
[CrossRef]

2009

S. Wu, W. Lin, S. Xie, Z. Lu, E. P. Ong, and S. Yao, “Blind blur assessment for vision-based applications,” J. Vis. Commun. Image Represent 20, 231–241 (2009).
[CrossRef]

F. Chen and J. Ma, “An empirical identification method of Gaussian blur parameter for image deblurring,” IEEE Trans. Signal Process. 57, 2467–2478 (2009).
[CrossRef]

2008

I. Aizenberg, D. Paliy, J. Zurada, and J. Astola, “Blur identification by multilayer neural network based on multivalued neurons,” IEEE Trans. Neural Netw. 19, 883–898 (2008).
[CrossRef]

2007

S. Gabarda and G. Cristóbal, “Blind image quality assessment through anisotropy,” J. Opt. Soc. Am. A 24, B42–B51 (2007).
[CrossRef]

I. van Zyl Marais and W. H. Steyn, “Robust defocus blur identification in the context of blind image quality assessment,” Signal Process. Image Commun. 22, 833–844 (2007).
[CrossRef]

O. Shacham, O. Haik, and Y. Yitzhaky, “Blind restoration of atmospherically degraded images by automatic best step-edge detection,” Pattern Recogn. Lett. 28, 2094–2103 (2007).
[CrossRef]

K. Pradeep and A. Rajagopalan, “Improving shape from focus using defocus cue,” IEEE Trans. Image Process. 16, 1920–1925(2007).
[CrossRef]

2006

L. Chen and K.-H. Yap, “Efficient discrete spatial techniques for blur support identification in blind image deconvolution,” IEEE Trans. Signal Process. 54, 1557–1562 (2006).
[CrossRef]

H. Sheikh, M. Sabir, and A. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. Image Process. 15, 3440–3451 (2006).
[CrossRef]

2005

L. Chen and K.-H. Yap, “A soft double regularization approach to parametric blind image deconvolution,” IEEE Trans. Image Process. 14, 624–633 (2005).
[CrossRef]

2004

J. Lin, C. Zhang, and Q. Shi, “Estimating the amount of defocus through a wavelet transform approach,” Pattern Recogn. Lett. 25, 407–411 (2004).
[CrossRef]

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[CrossRef]

M. Grossberg and S. Nayar, “Modeling the space of camera response functions,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 1272–1282 (2004).
[CrossRef]

2003

D. Rajan, S. Chaudhuri, and M. Joshi, “Multi-objective super resolution: concepts and examples,” IEEE Signal Process. Mag. 20, 49–61 (2003).
[CrossRef]

1999

K. Rank, M. Lendl, and R. Unbehauen, “Estimation of image noise variance,” IEE Proc. Vis. Image Signal Process. 146, 80–84 (1999).
[CrossRef]

1998

J. Elder and S. Zucker, “Local scale control for edge detection and blur estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 699–716 (1998).
[CrossRef]

M. A. Kutay and H. M. Ozaktas, “Optimal image restoration with the fractional fourier transform,” J. Opt. Soc. Am. A 15, 825–833 (1998).
[CrossRef]

1997

M. Banham and A. Katsaggelos, “Digital image restoration,” IEEE Signal Process. Mag. 14, 24–41 (1997).
[CrossRef]

1994

G. Healey and R. Kondepudy, “Radiometric ccd camera calibration and noise estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 16, 267–276 (1994).
[CrossRef]

M. Subbarao and G. Surya, “Depth from defocus: a spatial domain approach,” Int. J. Comput. Vis. 13, 271–294(1994).
[CrossRef]

1993

A. Savakis and H. Trussell, “Blur identification by residual spectral matching,” IEEE Trans. Image Process. 2, 141–151 (1993).
[CrossRef]

1992

S. Reeves and R. Mersereau, “Blur identification by the method of generalized cross-validation,” IEEE Trans. Image Process. 1, 301–311 (1992).
[CrossRef]

1991

M. Chang, A. Tekalp, and A. Erdem, “Blur identification using the bispectrum,” Signal Processing 39, 2323–2325 (1991).

1976

M. Cannon, “Blind deconvolution of spatially invariant image blurs with phase,” IEEE Trans. Acoustics, Speech, Signal Process. ASSP-24, 58–63 (1976).

1975

R. Rom, “On the cepstrum of two-dimensional functions (corresp.),” IEEE Trans. Inf. Theory 21, 214–217 (1975).
[CrossRef]

Aizenberg, I.

I. Aizenberg, D. Paliy, J. Zurada, and J. Astola, “Blur identification by multilayer neural network based on multivalued neurons,” IEEE Trans. Neural Netw. 19, 883–898 (2008).
[CrossRef]

I. Aizenberg, T. Bregin, C. Butakoff, V. N. Karnaukhov, N. S. Merzlyakov, and O. Milukova, “Type of blur and blur parameters identification using neural network and its application to image restoration,” in Proceedings of the International Conference on Artificial Neural Networks, ICANN ’02, (Springer-Verlag, 2002), pp. 1231–1236.

Astola, J.

I. Aizenberg, D. Paliy, J. Zurada, and J. Astola, “Blur identification by multilayer neural network based on multivalued neurons,” IEEE Trans. Neural Netw. 19, 883–898 (2008).
[CrossRef]

Banham, M.

M. Banham and A. Katsaggelos, “Digital image restoration,” IEEE Signal Process. Mag. 14, 24–41 (1997).
[CrossRef]

Bovik, A.

H. Sheikh, M. Sabir, and A. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. Image Process. 15, 3440–3451 (2006).
[CrossRef]

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[CrossRef]

Bregin, T.

I. Aizenberg, T. Bregin, C. Butakoff, V. N. Karnaukhov, N. S. Merzlyakov, and O. Milukova, “Type of blur and blur parameters identification using neural network and its application to image restoration,” in Proceedings of the International Conference on Artificial Neural Networks, ICANN ’02, (Springer-Verlag, 2002), pp. 1231–1236.

Butakoff, C.

I. Aizenberg, T. Bregin, C. Butakoff, V. N. Karnaukhov, N. S. Merzlyakov, and O. Milukova, “Type of blur and blur parameters identification using neural network and its application to image restoration,” in Proceedings of the International Conference on Artificial Neural Networks, ICANN ’02, (Springer-Verlag, 2002), pp. 1231–1236.

Cannon, M.

M. Cannon, “Blind deconvolution of spatially invariant image blurs with phase,” IEEE Trans. Acoustics, Speech, Signal Process. ASSP-24, 58–63 (1976).

Chang, C. H.

H. Y. Lin and C. H. Chang, “Photo-consistent motion blur modeling for realistic image synthesis,” in Advances in Image and Video Technology (2006), pp. 1273–1282.

Chang, M.

M. Chang, A. Tekalp, and A. Erdem, “Blur identification using the bispectrum,” Signal Processing 39, 2323–2325 (1991).

Chaudhuri, S.

D. Rajan, S. Chaudhuri, and M. Joshi, “Multi-objective super resolution: concepts and examples,” IEEE Signal Process. Mag. 20, 49–61 (2003).
[CrossRef]

S. Chaudhuri and A. Rajagopalan, Depth from Defocus: A Real Aperture Imaging Approach (Springer-Verlag, 1998).

Chen, F.

F. Chen and J. Ma, “An empirical identification method of Gaussian blur parameter for image deblurring,” IEEE Trans. Signal Process. 57, 2467–2478 (2009).
[CrossRef]

Chen, L.

L. Chen and K.-H. Yap, “Efficient discrete spatial techniques for blur support identification in blind image deconvolution,” IEEE Trans. Signal Process. 54, 1557–1562 (2006).
[CrossRef]

L. Chen and K.-H. Yap, “A soft double regularization approach to parametric blind image deconvolution,” IEEE Trans. Image Process. 14, 624–633 (2005).
[CrossRef]

Chen, T.

C. Swain and T. Chen, “Defocus-based image segmentation,” inIEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 4 (1995), pp. 2403–2406.

Ciancio, A.

A. Ciancio, A. da Costa, E. da Silva, A. Said, R. Samadani, and P. Obrador, “No-reference blur assessment of digital pictures based on multifeature classifiers,” IEEE Trans. Image Process. 20, 64–75 (2011).
[CrossRef]

Cristóbal, G.

da Costa, A.

A. Ciancio, A. da Costa, E. da Silva, A. Said, R. Samadani, and P. Obrador, “No-reference blur assessment of digital pictures based on multifeature classifiers,” IEEE Trans. Image Process. 20, 64–75 (2011).
[CrossRef]

Da Rugna, J.

J. Da Rugna and H. Konik, “Blur identification in image processing,” in Proceedings of International Joint Conference on Neural Networks (2006), pp. 2536–2541.

da Silva, E.

A. Ciancio, A. da Costa, E. da Silva, A. Said, R. Samadani, and P. Obrador, “No-reference blur assessment of digital pictures based on multifeature classifiers,” IEEE Trans. Image Process. 20, 64–75 (2011).
[CrossRef]

Elder, J.

J. Elder and S. Zucker, “Local scale control for edge detection and blur estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 699–716 (1998).
[CrossRef]

Erdem, A.

M. Chang, A. Tekalp, and A. Erdem, “Blur identification using the bispectrum,” Signal Processing 39, 2323–2325 (1991).

Fabian, R.

R. Fabian and D. Malah, “Robust identification of motion and out of focus blur parameters from blurred and noisy images,” in CVGIP: Graphical Models and Image Processing, Vol. 53 (1991), pp. 403–412.
[CrossRef]

Freeman, W. T.

C. Liu, W. T. Freeman, R. Szeliski, and S. B. Kang, “Noise estimation from a single image,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1 (IEEE, 2006), pp. 901–908.

Gabarda, S.

Grossberg, M.

M. Grossberg and S. Nayar, “Modeling the space of camera response functions,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 1272–1282 (2004).
[CrossRef]

Haik, O.

O. Shacham, O. Haik, and Y. Yitzhaky, “Blind restoration of atmospherically degraded images by automatic best step-edge detection,” Pattern Recogn. Lett. 28, 2094–2103 (2007).
[CrossRef]

Han, Z.

Z. Liu, W. Li, L. Shen, Z. Han, and Z. Zhang, “Automatic segmentation of focused objects from images with low depth of field,” Pattern Recogn. Lett. 31, 572–581 (2010).
[CrossRef]

Healey, G.

G. Healey and R. Kondepudy, “Radiometric ccd camera calibration and noise estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 16, 267–276 (1994).
[CrossRef]

Horn, B.

B. Horn, Robot Vision (MIT Press, 1986).

Joshi, M.

D. Rajan, S. Chaudhuri, and M. Joshi, “Multi-objective super resolution: concepts and examples,” IEEE Signal Process. Mag. 20, 49–61 (2003).
[CrossRef]

Kanade, T.

Y. Tsin, V. Ramesh, and T. Kanade, “Statistical calibration of ccd imaging process,” in Proceedings of Eighth IEEE International Conference on Computer Vision, Vol. 1 (IEEE, 2001), pp. 480–487.

Kang, S. B.

C. Liu, W. T. Freeman, R. Szeliski, and S. B. Kang, “Noise estimation from a single image,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1 (IEEE, 2006), pp. 901–908.

Karnaukhov, V. N.

I. Aizenberg, T. Bregin, C. Butakoff, V. N. Karnaukhov, N. S. Merzlyakov, and O. Milukova, “Type of blur and blur parameters identification using neural network and its application to image restoration,” in Proceedings of the International Conference on Artificial Neural Networks, ICANN ’02, (Springer-Verlag, 2002), pp. 1231–1236.

Katsaggelos, A.

M. Banham and A. Katsaggelos, “Digital image restoration,” IEEE Signal Process. Mag. 14, 24–41 (1997).
[CrossRef]

Kayargadde, V.

V. Kayargadde and J. B. Martens, “Estimation of edge parameters and image blur using polynomial transforms,” in CVGIP: Graphical Models and Image Processing, Vol. 56 (1994), pp. 442–461.
[CrossRef]

Kondepudy, R.

G. Healey and R. Kondepudy, “Radiometric ccd camera calibration and noise estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 16, 267–276 (1994).
[CrossRef]

Konik, H.

J. Da Rugna and H. Konik, “Blur identification in image processing,” in Proceedings of International Joint Conference on Neural Networks (2006), pp. 2536–2541.

Kutay, M. A.

Lendl, M.

K. Rank, M. Lendl, and R. Unbehauen, “Estimation of image noise variance,” IEE Proc. Vis. Image Signal Process. 146, 80–84 (1999).
[CrossRef]

Li, D.

D. Li, R. Mersereau, and S. Simske, “Blur identification based on kurtosis minimization,” in IEEE International Conference on Image Processing, Vol. 1 (2005), I-905–8.

Li, W.

Z. Liu, W. Li, L. Shen, Z. Han, and Z. Zhang, “Automatic segmentation of focused objects from images with low depth of field,” Pattern Recogn. Lett. 31, 572–581 (2010).
[CrossRef]

Lin, H. Y.

H. Y. Lin and C. H. Chang, “Photo-consistent motion blur modeling for realistic image synthesis,” in Advances in Image and Video Technology (2006), pp. 1273–1282.

Lin, J.

J. Lin, C. Zhang, and Q. Shi, “Estimating the amount of defocus through a wavelet transform approach,” Pattern Recogn. Lett. 25, 407–411 (2004).
[CrossRef]

Lin, W.

S. Wu, W. Lin, S. Xie, Z. Lu, E. P. Ong, and S. Yao, “Blind blur assessment for vision-based applications,” J. Vis. Commun. Image Represent 20, 231–241 (2009).
[CrossRef]

Liu, C.

C. Liu, W. T. Freeman, R. Szeliski, and S. B. Kang, “Noise estimation from a single image,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1 (IEEE, 2006), pp. 901–908.

Liu, Z.

Z. Liu, W. Li, L. Shen, Z. Han, and Z. Zhang, “Automatic segmentation of focused objects from images with low depth of field,” Pattern Recogn. Lett. 31, 572–581 (2010).
[CrossRef]

Lu, Z.

S. Wu, W. Lin, S. Xie, Z. Lu, E. P. Ong, and S. Yao, “Blind blur assessment for vision-based applications,” J. Vis. Commun. Image Represent 20, 231–241 (2009).
[CrossRef]

Ma, J.

F. Chen and J. Ma, “An empirical identification method of Gaussian blur parameter for image deblurring,” IEEE Trans. Signal Process. 57, 2467–2478 (2009).
[CrossRef]

Malah, D.

R. Fabian and D. Malah, “Robust identification of motion and out of focus blur parameters from blurred and noisy images,” in CVGIP: Graphical Models and Image Processing, Vol. 53 (1991), pp. 403–412.
[CrossRef]

Martens, J. B.

V. Kayargadde and J. B. Martens, “Estimation of edge parameters and image blur using polynomial transforms,” in CVGIP: Graphical Models and Image Processing, Vol. 56 (1994), pp. 442–461.
[CrossRef]

Mersereau, R.

S. Reeves and R. Mersereau, “Blur identification by the method of generalized cross-validation,” IEEE Trans. Image Process. 1, 301–311 (1992).
[CrossRef]

D. Li, R. Mersereau, and S. Simske, “Blur identification based on kurtosis minimization,” in IEEE International Conference on Image Processing, Vol. 1 (2005), I-905–8.

Merzlyakov, N. S.

I. Aizenberg, T. Bregin, C. Butakoff, V. N. Karnaukhov, N. S. Merzlyakov, and O. Milukova, “Type of blur and blur parameters identification using neural network and its application to image restoration,” in Proceedings of the International Conference on Artificial Neural Networks, ICANN ’02, (Springer-Verlag, 2002), pp. 1231–1236.

Milukova, O.

I. Aizenberg, T. Bregin, C. Butakoff, V. N. Karnaukhov, N. S. Merzlyakov, and O. Milukova, “Type of blur and blur parameters identification using neural network and its application to image restoration,” in Proceedings of the International Conference on Artificial Neural Networks, ICANN ’02, (Springer-Verlag, 2002), pp. 1231–1236.

Nayar, S.

M. Grossberg and S. Nayar, “Modeling the space of camera response functions,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 1272–1282 (2004).
[CrossRef]

Obrador, P.

A. Ciancio, A. da Costa, E. da Silva, A. Said, R. Samadani, and P. Obrador, “No-reference blur assessment of digital pictures based on multifeature classifiers,” IEEE Trans. Image Process. 20, 64–75 (2011).
[CrossRef]

Ong, E. P.

S. Wu, W. Lin, S. Xie, Z. Lu, E. P. Ong, and S. Yao, “Blind blur assessment for vision-based applications,” J. Vis. Commun. Image Represent 20, 231–241 (2009).
[CrossRef]

Ozaktas, H. M.

Paliy, D.

I. Aizenberg, D. Paliy, J. Zurada, and J. Astola, “Blur identification by multilayer neural network based on multivalued neurons,” IEEE Trans. Neural Netw. 19, 883–898 (2008).
[CrossRef]

Pradeep, K.

K. Pradeep and A. Rajagopalan, “Improving shape from focus using defocus cue,” IEEE Trans. Image Process. 16, 1920–1925(2007).
[CrossRef]

Rajagopalan, A.

K. Pradeep and A. Rajagopalan, “Improving shape from focus using defocus cue,” IEEE Trans. Image Process. 16, 1920–1925(2007).
[CrossRef]

S. Chaudhuri and A. Rajagopalan, Depth from Defocus: A Real Aperture Imaging Approach (Springer-Verlag, 1998).

Rajan, D.

D. Rajan, S. Chaudhuri, and M. Joshi, “Multi-objective super resolution: concepts and examples,” IEEE Signal Process. Mag. 20, 49–61 (2003).
[CrossRef]

Ramesh, V.

Y. Tsin, V. Ramesh, and T. Kanade, “Statistical calibration of ccd imaging process,” in Proceedings of Eighth IEEE International Conference on Computer Vision, Vol. 1 (IEEE, 2001), pp. 480–487.

Rank, K.

K. Rank, M. Lendl, and R. Unbehauen, “Estimation of image noise variance,” IEE Proc. Vis. Image Signal Process. 146, 80–84 (1999).
[CrossRef]

Reeves, S.

S. Reeves and R. Mersereau, “Blur identification by the method of generalized cross-validation,” IEEE Trans. Image Process. 1, 301–311 (1992).
[CrossRef]

Rom, R.

R. Rom, “On the cepstrum of two-dimensional functions (corresp.),” IEEE Trans. Inf. Theory 21, 214–217 (1975).
[CrossRef]

Sabir, M.

H. Sheikh, M. Sabir, and A. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. Image Process. 15, 3440–3451 (2006).
[CrossRef]

Said, A.

A. Ciancio, A. da Costa, E. da Silva, A. Said, R. Samadani, and P. Obrador, “No-reference blur assessment of digital pictures based on multifeature classifiers,” IEEE Trans. Image Process. 20, 64–75 (2011).
[CrossRef]

Samadani, R.

A. Ciancio, A. da Costa, E. da Silva, A. Said, R. Samadani, and P. Obrador, “No-reference blur assessment of digital pictures based on multifeature classifiers,” IEEE Trans. Image Process. 20, 64–75 (2011).
[CrossRef]

Savakis, A.

A. Savakis and H. Trussell, “Blur identification by residual spectral matching,” IEEE Trans. Image Process. 2, 141–151 (1993).
[CrossRef]

Schonfeld, D.

J. Yang and D. Schonfeld, “Virtual focus and depth estimation from defocused video sequences,” IEEE Trans. Image Process. 19, 668–679 (2010).
[CrossRef]

Shacham, O.

O. Shacham, O. Haik, and Y. Yitzhaky, “Blind restoration of atmospherically degraded images by automatic best step-edge detection,” Pattern Recogn. Lett. 28, 2094–2103 (2007).
[CrossRef]

Sheikh, H.

H. Sheikh, M. Sabir, and A. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. Image Process. 15, 3440–3451 (2006).
[CrossRef]

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[CrossRef]

Shen, L.

Z. Liu, W. Li, L. Shen, Z. Han, and Z. Zhang, “Automatic segmentation of focused objects from images with low depth of field,” Pattern Recogn. Lett. 31, 572–581 (2010).
[CrossRef]

Shi, Q.

J. Lin, C. Zhang, and Q. Shi, “Estimating the amount of defocus through a wavelet transform approach,” Pattern Recogn. Lett. 25, 407–411 (2004).
[CrossRef]

Simoncelli, E.

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[CrossRef]

Simske, S.

D. Li, R. Mersereau, and S. Simske, “Blur identification based on kurtosis minimization,” in IEEE International Conference on Image Processing, Vol. 1 (2005), I-905–8.

Steyn, W. H.

I. van Zyl Marais and W. H. Steyn, “Robust defocus blur identification in the context of blind image quality assessment,” Signal Process. Image Commun. 22, 833–844 (2007).
[CrossRef]

Subbarao, M.

M. Subbarao and G. Surya, “Depth from defocus: a spatial domain approach,” Int. J. Comput. Vis. 13, 271–294(1994).
[CrossRef]

Surya, G.

M. Subbarao and G. Surya, “Depth from defocus: a spatial domain approach,” Int. J. Comput. Vis. 13, 271–294(1994).
[CrossRef]

Swain, C.

C. Swain and T. Chen, “Defocus-based image segmentation,” inIEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 4 (1995), pp. 2403–2406.

Szeliski, R.

C. Liu, W. T. Freeman, R. Szeliski, and S. B. Kang, “Noise estimation from a single image,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1 (IEEE, 2006), pp. 901–908.

Tekalp, A.

M. Chang, A. Tekalp, and A. Erdem, “Blur identification using the bispectrum,” Signal Processing 39, 2323–2325 (1991).

Trussell, H.

A. Savakis and H. Trussell, “Blur identification by residual spectral matching,” IEEE Trans. Image Process. 2, 141–151 (1993).
[CrossRef]

Tsin, Y.

Y. Tsin, V. Ramesh, and T. Kanade, “Statistical calibration of ccd imaging process,” in Proceedings of Eighth IEEE International Conference on Computer Vision, Vol. 1 (IEEE, 2001), pp. 480–487.

Unbehauen, R.

K. Rank, M. Lendl, and R. Unbehauen, “Estimation of image noise variance,” IEE Proc. Vis. Image Signal Process. 146, 80–84 (1999).
[CrossRef]

van Zyl Marais, I.

I. van Zyl Marais and W. H. Steyn, “Robust defocus blur identification in the context of blind image quality assessment,” Signal Process. Image Commun. 22, 833–844 (2007).
[CrossRef]

Wang, Z.

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[CrossRef]

Wu, S.

S. Wu, W. Lin, S. Xie, Z. Lu, E. P. Ong, and S. Yao, “Blind blur assessment for vision-based applications,” J. Vis. Commun. Image Represent 20, 231–241 (2009).
[CrossRef]

Xie, S.

S. Wu, W. Lin, S. Xie, Z. Lu, E. P. Ong, and S. Yao, “Blind blur assessment for vision-based applications,” J. Vis. Commun. Image Represent 20, 231–241 (2009).
[CrossRef]

Yang, J.

J. Yang and D. Schonfeld, “Virtual focus and depth estimation from defocused video sequences,” IEEE Trans. Image Process. 19, 668–679 (2010).
[CrossRef]

Yao, S.

S. Wu, W. Lin, S. Xie, Z. Lu, E. P. Ong, and S. Yao, “Blind blur assessment for vision-based applications,” J. Vis. Commun. Image Represent 20, 231–241 (2009).
[CrossRef]

Yap, K.-H.

L. Chen and K.-H. Yap, “Efficient discrete spatial techniques for blur support identification in blind image deconvolution,” IEEE Trans. Signal Process. 54, 1557–1562 (2006).
[CrossRef]

L. Chen and K.-H. Yap, “A soft double regularization approach to parametric blind image deconvolution,” IEEE Trans. Image Process. 14, 624–633 (2005).
[CrossRef]

Yitzhaky, Y.

O. Shacham, O. Haik, and Y. Yitzhaky, “Blind restoration of atmospherically degraded images by automatic best step-edge detection,” Pattern Recogn. Lett. 28, 2094–2103 (2007).
[CrossRef]

Zhang, C.

J. Lin, C. Zhang, and Q. Shi, “Estimating the amount of defocus through a wavelet transform approach,” Pattern Recogn. Lett. 25, 407–411 (2004).
[CrossRef]

Zhang, Z.

Z. Liu, W. Li, L. Shen, Z. Han, and Z. Zhang, “Automatic segmentation of focused objects from images with low depth of field,” Pattern Recogn. Lett. 31, 572–581 (2010).
[CrossRef]

Zucker, S.

J. Elder and S. Zucker, “Local scale control for edge detection and blur estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 699–716 (1998).
[CrossRef]

Zurada, J.

I. Aizenberg, D. Paliy, J. Zurada, and J. Astola, “Blur identification by multilayer neural network based on multivalued neurons,” IEEE Trans. Neural Netw. 19, 883–898 (2008).
[CrossRef]

IEE Proc. Vis. Image Signal Process.

K. Rank, M. Lendl, and R. Unbehauen, “Estimation of image noise variance,” IEE Proc. Vis. Image Signal Process. 146, 80–84 (1999).
[CrossRef]

IEEE Signal Process. Mag.

M. Banham and A. Katsaggelos, “Digital image restoration,” IEEE Signal Process. Mag. 14, 24–41 (1997).
[CrossRef]

D. Rajan, S. Chaudhuri, and M. Joshi, “Multi-objective super resolution: concepts and examples,” IEEE Signal Process. Mag. 20, 49–61 (2003).
[CrossRef]

IEEE Trans. Acoustics, Speech, Signal Process.

M. Cannon, “Blind deconvolution of spatially invariant image blurs with phase,” IEEE Trans. Acoustics, Speech, Signal Process. ASSP-24, 58–63 (1976).

IEEE Trans. Image Process.

S. Reeves and R. Mersereau, “Blur identification by the method of generalized cross-validation,” IEEE Trans. Image Process. 1, 301–311 (1992).
[CrossRef]

L. Chen and K.-H. Yap, “A soft double regularization approach to parametric blind image deconvolution,” IEEE Trans. Image Process. 14, 624–633 (2005).
[CrossRef]

A. Savakis and H. Trussell, “Blur identification by residual spectral matching,” IEEE Trans. Image Process. 2, 141–151 (1993).
[CrossRef]

J. Yang and D. Schonfeld, “Virtual focus and depth estimation from defocused video sequences,” IEEE Trans. Image Process. 19, 668–679 (2010).
[CrossRef]

K. Pradeep and A. Rajagopalan, “Improving shape from focus using defocus cue,” IEEE Trans. Image Process. 16, 1920–1925 (2007).
[CrossRef]

A. Ciancio, A. da Costa, E. da Silva, A. Said, R. Samadani, and P. Obrador, “No-reference blur assessment of digital pictures based on multifeature classifiers,” IEEE Trans. Image Process. 20, 64–75 (2011).
[CrossRef]

H. Sheikh, M. Sabir, and A. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. Image Process. 15, 3440–3451 (2006).
[CrossRef]

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[CrossRef]

IEEE Trans. Inf. Theory

R. Rom, “On the cepstrum of two-dimensional functions (corresp.),” IEEE Trans. Inf. Theory 21, 214–217 (1975).
[CrossRef]

IEEE Trans. Neural Netw.

I. Aizenberg, D. Paliy, J. Zurada, and J. Astola, “Blur identification by multilayer neural network based on multivalued neurons,” IEEE Trans. Neural Netw. 19, 883–898 (2008).
[CrossRef]

IEEE Trans. Pattern Anal. Mach. Intell.

G. Healey and R. Kondepudy, “Radiometric CCD camera calibration and noise estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 16, 267–276 (1994).
[CrossRef]

M. Grossberg and S. Nayar, “Modeling the space of camera response functions,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 1272–1282 (2004).
[CrossRef]

J. Elder and S. Zucker, “Local scale control for edge detection and blur estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 699–716 (1998).
[CrossRef]

IEEE Trans. Signal Process.

F. Chen and J. Ma, “An empirical identification method of Gaussian blur parameter for image deblurring,” IEEE Trans. Signal Process. 57, 2467–2478 (2009).
[CrossRef]

L. Chen and K.-H. Yap, “Efficient discrete spatial techniques for blur support identification in blind image deconvolution,” IEEE Trans. Signal Process. 54, 1557–1562 (2006).
[CrossRef]

Int. J. Comput. Vis.

M. Subbarao and G. Surya, “Depth from defocus: a spatial domain approach,” Int. J. Comput. Vis. 13, 271–294 (1994).
[CrossRef]

J. Opt. Soc. Am. A

M. A. Kutay and H. M. Ozaktas, “Optimal image restoration with the fractional Fourier transform,” J. Opt. Soc. Am. A 15, 825–833 (1998).
[CrossRef]

J. Vis. Commun. Image Represent.

S. Wu, W. Lin, S. Xie, Z. Lu, E. P. Ong, and S. Yao, “Blind blur assessment for vision-based applications,” J. Vis. Commun. Image Represent. 20, 231–241 (2009).
[CrossRef]

Pattern Recogn. Lett.

Z. Liu, W. Li, L. Shen, Z. Han, and Z. Zhang, “Automatic segmentation of focused objects from images with low depth of field,” Pattern Recogn. Lett. 31, 572–581 (2010).
[CrossRef]

O. Shacham, O. Haik, and Y. Yitzhaky, “Blind restoration of atmospherically degraded images by automatic best step-edge detection,” Pattern Recogn. Lett. 28, 2094–2103 (2007).
[CrossRef]

J. Lin, C. Zhang, and Q. Shi, “Estimating the amount of defocus through a wavelet transform approach,” Pattern Recogn. Lett. 25, 407–411 (2004).
[CrossRef]

Signal Process. Image Commun.

I. van Zyl Marais and W. H. Steyn, “Robust defocus blur identification in the context of blind image quality assessment,” Signal Process. Image Commun. 22, 833–844 (2007).
[CrossRef]

IEEE Trans. Signal Process.

M. Chang, A. Tekalp, and A. Erdem, “Blur identification using the bispectrum,” IEEE Trans. Signal Process. 39, 2323–2325 (1991).

Other

R. Fabian and D. Malah, “Robust identification of motion and out of focus blur parameters from blurred and noisy images,” in CVGIP: Graphical Models and Image Processing, Vol. 53 (1991), pp. 403–412.
[CrossRef]

V. Kayargadde and J. B. Martens, “Estimation of edge parameters and image blur using polynomial transforms,” in CVGIP: Graphical Models and Image Processing, Vol. 56 (1994), pp. 442–461.
[CrossRef]

I. Aizenberg, T. Bregin, C. Butakoff, V. N. Karnaukhov, N. S. Merzlyakov, and O. Milukova, “Type of blur and blur parameters identification using neural network and its application to image restoration,” in Proceedings of the International Conference on Artificial Neural Networks, ICANN ’02, (Springer-Verlag, 2002), pp. 1231–1236.

C. Swain and T. Chen, “Defocus-based image segmentation,” in IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 4 (1995), pp. 2403–2406.

S. Chaudhuri and A. Rajagopalan, Depth from Defocus: A Real Aperture Imaging Approach (Springer-Verlag, 1998).

H. Y. Lin and C. H. Chang, “Photo-consistent motion blur modeling for realistic image synthesis,” in Advances in Image and Video Technology (2006), pp. 1273–1282.

C. Liu, W. T. Freeman, R. Szeliski, and S. B. Kang, “Noise estimation from a single image,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1 (IEEE, 2006), pp. 901–908.

D. Li, R. Mersereau, and S. Simske, “Blur identification based on kurtosis minimization,” in IEEE International Conference on Image Processing, Vol. 1 (2005), I-905–8.

J. Da Rugna and H. Konik, “Blur identification in image processing,” in Proceedings of International Joint Conference on Neural Networks (2006), pp. 2536–2541.

Y. Tsin, V. Ramesh, and T. Kanade, “Statistical calibration of CCD imaging process,” in Proceedings of Eighth IEEE International Conference on Computer Vision, Vol. 1 (IEEE, 2001), pp. 480–487.

B. Horn, Robot Vision (MIT Press, 1986).



Figures (10)

Fig. 1. Defocused edges and their intensity profiles (a) obtained from a real edge image captured with a nonlinear CMOS sensor, and (b) synthesized by the proposed intensity-dependent filtering.

Fig. 2. Images captured with different exposure times and their histograms. The midrange-intensity images possess a higher noise level than the bright and dark images.

Fig. 3. Histograms of (a) a synthetic defocused edge image generated using Gaussian convolution, (b) a real defocused edge image, and (c) a defocused edge image synthesized using the proposed intensity-dependent filtering.

Fig. 4. Intensity profiles of a real defocused edge (curved red line) and an ideal defocused edge (angular blue line). The difference between them is used to derive the nonlinear camera response parameter.

Fig. 5. Histograms of a focused edge image (blue) and a defocused edge image (red). When defocus blur occurs, some image pixels in the two blue lobes shift to the midintensity transition region of the red histogram.

Fig. 6. Defocused images and the associated histograms for various F-number and ISO settings. All images are captured at a fixed scene distance. The asymmetric behavior is clearly visible in the histograms, especially for the F-5.6 cases in (a) and (d).

Fig. 7. Reference images (without Gaussian blur) from the LIVE database. “bikes,” “lighthouse,” “womanhat,” “buildings,” “caps,” “house,” “monarch,” “painthouse,” “parrots,” and “plane” are shown from left to right and top to bottom.

Fig. 8. Images captured with different focus settings and used in the experiment: “cartridge” on the left and “book” on the right.

Fig. 9. From top to bottom: the edge images derived from Sobel edge detection, the candidate edge segments for ROIs, and the ROIs used for histogram matching. The ROIs are indicated by red boxes.

Fig. 10. Quality evaluation of the 12 sets of test images. (a), (b), and (c) show the evaluation results for the LIVE database images. The quality assessment of the captured defocused images is shown in (d).
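The ROI pipeline shown in Fig. 9 (Sobel edge detection, candidate edge-segment selection, then ROI boxes for histogram matching) can be sketched minimally as below. The Sobel step follows the caption; the threshold `grad_thresh`, minimum segment length `min_len`, and box size `box` are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def candidate_edge_rois(image, grad_thresh=50.0, min_len=20, box=24):
    """Sketch of the Fig. 9 pipeline: Sobel edge map -> connected
    edge segments -> one square ROI box per sufficiently long segment."""
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=1)          # horizontal gradient
    gy = ndimage.sobel(img, axis=0)          # vertical gradient
    edges = np.hypot(gx, gy) > grad_thresh   # binary edge map
    labels, n = ndimage.label(edges)         # connected edge segments
    rois = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if len(ys) < min_len:                # keep long segments only
            continue
        cy, cx = int(ys.mean()), int(xs.mean())
        half = box // 2                      # center an ROI box on the segment
        rois.append((max(cy - half, 0), max(cx - half, 0), box, box))
    return rois
```

Each returned tuple is `(top, left, height, width)`; histogram matching would then be performed on the pixels inside each box.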

Tables (8)

Table 1. Estimation of Blur Extent from Synthetic Defocused Images Generated with Different Parameter Settings

Table 2. Estimation of Defocus Blur and Nonlinear Response Parameter s_max for the Defocused Images Captured with F-5.6 and ISO-100

Table 3. Estimation of Defocus Blur and Nonlinear Response Parameter s_max for the Defocused Images Captured with F-10 and ISO-100

Table 4. Estimation of Defocus Blur and Nonlinear Response Parameter s_max for the Defocused Images Captured with F-18 and ISO-100

Table 5. Estimation of Defocus Blur for the Images Captured with F-5.6 and ISO-3200

Table 6. Estimation of Defocus Blur for the Images Captured with F-10 and ISO-3200

Table 7. Estimation of Defocus Blur for the Images Captured with F-18 and ISO-3200

Table 8. Blur Extents of the LIVE Database Images and Our Captured Defocused Images

Equations (16)


$$g(x,y) = f(x,y) * h(x,y) + \xi(x,y),$$

$$h(x,y) = \begin{cases} \dfrac{1}{\pi \rho^2}, & \text{for } x^2 + y^2 \le \rho^2, \\ 0, & \text{for } x^2 + y^2 > \rho^2, \end{cases}$$

$$s(x,y) = \alpha \cdot \Bigl( f(x,y) - \min_{(x,y) \in K} f(x,y) \Bigr) + s_{\min},$$

$$\alpha = \frac{s_{\max} - s_{\min}}{\max_{(x,y) \in K} f(x,y) - \min_{(x,y) \in K} f(x,y)},$$

$$h(x,y) = \begin{cases} \dfrac{s(x,y)}{\iint_K s(x,y)\,dx\,dy}, & \text{for } x^2 + y^2 \le \rho^2, \\ 0, & \text{for } x^2 + y^2 > \rho^2, \end{cases}$$

$$g(x,y) = f(x,y) * h(x,y) + n(0, \sigma^2),$$

$$\sigma(f(x,y)) = \sigma_{\max} - k \bigl( f(x,y) - 128 \bigr)^2,$$

$$g(x,y) = f(x,y) * h(x,y) \cdot n(1, \sigma^2),$$
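The additive synthesis model above (pillbox-PSF convolution followed by intensity-dependent Gaussian noise with σ(f) = σ_max − k(f − 128)², largest at midrange intensities, cf. Fig. 2) can be sketched as follows. The parameter values `sigma_max = 8.0` and `k = 2e-4` are illustrative assumptions, not values from the paper, and the simpler uniform pillbox (rather than the intensity-weighted PSF) is used.

```python
import numpy as np
from scipy.signal import fftconvolve

def pillbox_psf(rho):
    """Uniform disc PSF: h = 1/(pi*rho^2) for x^2+y^2 <= rho^2, else 0."""
    r = int(np.ceil(rho))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    h = (x**2 + y**2 <= rho**2).astype(float)
    return h / h.sum()                       # normalize to unit integral

def synthesize_defocus(f, rho, sigma_max=8.0, k=2e-4, rng=None):
    """Blur f with a pillbox PSF of radius rho, then add zero-mean
    Gaussian noise with sigma(f) = sigma_max - k*(f - 128)^2."""
    rng = np.random.default_rng(rng)
    blurred = fftconvolve(f.astype(float), pillbox_psf(rho), mode="same")
    sigma = np.clip(sigma_max - k * (blurred - 128.0) ** 2, 0.0, None)
    g = blurred + rng.normal(0.0, 1.0, f.shape) * sigma
    return np.clip(g, 0, 255)                # keep 8-bit intensity range
```

Because σ peaks at intensity 128, the synthesized transition region of a defocused edge is noisier than its flat bright and dark sides, matching the histogram behavior the method exploits.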
$$\sigma_i = \sigma_{\max} - k (\mu_i - 128)^2, \quad \text{for } i = 1, 2,$$

$$\hat{z} = \frac{s_1 \mu_1 + s_2 \mu_2}{s_1 + s_2},$$

$$s_2 = \frac{\hat{z} - \mu_1}{\mu_2 - \hat{z}}.$$

$$B = \int_{c_1}^{c_2} \bigl( \mathrm{hist}_b(x) - \mathrm{hist}_f(x) \bigr)\,dx,$$

$$B_{lb} = \int_{\tilde{\mu}_1}^{\tilde{\mu}_2} \mathrm{hist}_b(x)\,dx,$$

$$B \le B_{lb} + \mathrm{hist}_b(\tilde{\mu}_1)(\tilde{\mu}_1 - c_1) + \mathrm{hist}_b(\tilde{\mu}_2)(c_2 - \tilde{\mu}_2),$$

$$B_{ub} = B_{lb} + \mathrm{hist}_b(\tilde{\mu}_1)(\tilde{\mu}_1 - \mu_1) + \mathrm{hist}_b(\tilde{\mu}_2)(\mu_2 - \tilde{\mu}_2),$$

$$\rho_{init} = \frac{B_{lb}}{h} \quad \text{and} \quad \rho_{\max} = \frac{B_{ub}}{h},$$
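The last four relations bound the transition-pixel count B from the blurred histogram alone and convert the bounds to blur radii. A minimal sketch, assuming the histogram modes are given as integer bin indices and that h is the pixel height of the edge segment (the helper name and calling convention are hypothetical):

```python
import numpy as np

def blur_extent_bounds(hist_b, mu1_t, mu2_t, mu1, mu2, edge_height):
    """Lower/upper-bound the transition pixels of a defocused edge.
    hist_b: 256-bin histogram of the blurred edge region.
    mu1_t, mu2_t: modes (peak bins) of the blurred histogram.
    mu1, mu2: mean intensities of the two flat sides of the edge.
    edge_height: h, the length in pixels of the edge crossed by the blur."""
    # B_lb: integrate hist_b between the two histogram modes
    b_lb = hist_b[mu1_t:mu2_t + 1].sum()
    # B_ub: extend by the rectangles hist_b(mode) * (mode - side mean)
    b_ub = (b_lb
            + hist_b[mu1_t] * (mu1_t - mu1)
            + hist_b[mu2_t] * (mu2 - mu2_t))
    # rho_init = B_lb / h and rho_max = B_ub / h
    return b_lb / edge_height, b_ub / edge_height
```

These two radii would then bracket the iterative search over the PSF parameter ρ.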
