Abstract

Because of the optics involved, the depth of field of an imaging system usually covers only part of the field of view, so a captured image has only parts of the scene in focus. Fusing images acquired at different focus levels is a promising way to extend the depth of field. This paper proposes a novel multifocus image fusion approach based on segmentation of a clarity enhanced image and regional sparse representation. On the one hand, by segmenting a clarity enhanced image that carries both intensity and clarity information, the proposed method reduces the risk of partitioning in-focus and out-of-focus pixels into the same region. On the other hand, by selecting sparse coefficients region by region, it gains robustness to the distortions and misplacement that often result from pixel-based coefficient selection. In short, the proposed method combines the merits of regional image fusion and sparse representation based image fusion. Experimental results demonstrate that it outperforms six recently proposed multifocus image fusion methods.
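
As a concrete illustration of the regional selection rule described above, the following Python sketch fuses two registered source images region by region. It is a minimal sketch, not the authors' implementation: the region label map (e.g., obtained by segmenting the clarity enhanced image) and the per-pixel activity maps (e.g., the l1 norms of each patch's sparse coefficients) are assumed to be precomputed, and all names are hypothetical.

```python
import numpy as np

def regional_fusion(img_a, img_b, labels, activity_a, activity_b):
    """Hypothetical regional fusion rule: for each segmented region,
    copy pixels from the source whose sparse coefficients are more
    'active' over that region as a whole.

    labels        -- integer region map (same shape as the sources),
                     e.g. from normalized cuts on the clarity
                     enhanced image
    activity_a/_b -- per-pixel activity maps, e.g. the l1 norm of
                     each patch's sparse coefficients
    """
    fused = np.empty_like(img_a)
    for region in np.unique(labels):
        mask = labels == region
        # Regional, not per-pixel, decision: comparing aggregate
        # activity avoids the isolated misclassified pixels that
        # pixel-based coefficient selection tends to produce.
        if activity_a[mask].sum() >= activity_b[mask].sum():
            fused[mask] = img_a[mask]
        else:
            fused[mask] = img_b[mask]
    return fused
```

Deciding per region rather than per pixel is what the abstract credits for the reduced distortions and misplacement.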

© 2013 OSA

Figures (13)

Fig. 1. Magnified portions of fused images: (a) by traditional SP method. (b) by proposed method. (c) by traditional SP method. (d) by proposed method.

Fig. 2. The framework of the proposed method.

Fig. 3. The image of ‘leaf’: (a) source image A. (b) the average image. (c) the segmentation result of the average image. (d) source image B. (e) the clarity enhanced image. (f) the segmentation result of the clarity enhanced image.

Fig. 4. Multifocus source images; the white boxes indicate the focus regions: (a) lab A. (b) lab B. (c) disk A. (d) disk B. (e) clock A. (f) clock B. (g) pepsi A. (h) pepsi B. (i) leaf A. (j) leaf B. (k) newspaper A. (l) newspaper B. (m) aircraft A. (n) aircraft B. (o) bottle A. (p) bottle B.

Fig. 5. The fused image of ‘clock’: (a) method 1. (b) method 2. (c) method 3. (d) method 4. (e) the segmentation result of average image. (f) method 5. (g) method 6. (h) the proposed method with segmentation of average image. (i) the proposed method with segmentation of clarity enhanced image. (j) the segmentation result of clarity enhanced image.

Fig. 6. The fused image of ‘disk’: (a) method 1. (b) method 2. (c) method 3. (d) method 4. (e) the segmentation result of average image. (f) method 5. (g) method 6. (h) the proposed method with segmentation of average image. (i) the proposed method with segmentation of clarity enhanced image. (j) the segmentation result of clarity enhanced image.

Fig. 7. The fused image of ‘lab’: (a) method 1. (b) method 2. (c) method 3. (d) method 4. (e) the segmentation result of average image. (f) method 5. (g) method 6. (h) the proposed method with segmentation of average image. (i) the proposed method with segmentation of clarity enhanced image. (j) the segmentation result of clarity enhanced image.

Fig. 8. The fused image of ‘leaf’: (a) method 1. (b) method 2. (c) method 3. (d) method 4. (e) the segmentation result of average image. (f) method 5. (g) method 6. (h) the proposed method with segmentation of average image. (i) the proposed method with segmentation of clarity enhanced image. (j) the segmentation result of clarity enhanced image.

Fig. 9. The fused image of ‘newspaper’: (a) method 1. (b) method 2. (c) method 3. (d) method 4. (e) the segmentation result of average image. (f) method 5. (g) method 6. (h) the proposed method with segmentation of average image. (i) the proposed method with segmentation of clarity enhanced image. (j) the segmentation result of clarity enhanced image.

Fig. 10. The fused image of ‘pepsi’: (a) method 1. (b) method 2. (c) method 3. (d) method 4. (e) the segmentation result of average image. (f) method 5. (g) method 6. (h) the proposed method with segmentation of average image. (i) the proposed method with segmentation of clarity enhanced image. (j) the segmentation result of clarity enhanced image.

Fig. 11. The fused image of ‘aircraft’: (a) method 1. (b) method 2. (c) method 3. (d) method 4. (e) the segmentation result of average image. (f) method 5. (g) method 6. (h) the proposed method with segmentation of average image. (i) the proposed method with segmentation of clarity enhanced image. (j) the segmentation result of clarity enhanced image.

Fig. 12. The fused image of ‘bottle’: (a) method 1. (b) method 2. (c) method 3. (d) method 4. (e) the segmentation result of average image. (f) method 5. (g) method 6. (h) the proposed method with segmentation of average image. (i) the proposed method with segmentation of clarity enhanced image. (j) the segmentation result of clarity enhanced image.

Fig. 13. The image of ‘bottle’: (a) the average image. (b) the clarity image. (c) the segmentation result of the clarity image.

Tables (3)

Table 1. Performance of different fusion methods on different source images.

Table 2. Performance of different fusion methods on different source images.

Table 3. The best performance of the fused image at specific values of α.

Equations (15)

$$\mathrm{Ncut}(A,B)=\frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(A,V)}+\frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(B,V)}.\tag{1}$$

$$\mathrm{cut}(A,B)=\sum_{u\in A,\;v\in B}w(u,v).\tag{2}$$

$$\mathrm{Ncut}(A,B)=2-\left(\frac{\mathrm{assoc}(A,A)}{\mathrm{assoc}(A,V)}+\frac{\mathrm{assoc}(B,B)}{\mathrm{assoc}(B,V)}\right).\tag{3}$$

$$v=\sum_{t=1}^{T}x_{t}d_{t}=Dx.\tag{4}$$

$$\min_{x}\|x\|_{0}\quad\text{subject to}\quad\|Dx-v\|_{2}<\varepsilon.\tag{5}$$

$$v=\sum_{t=1}^{T}s(t)\,d_{t}.\tag{6}$$

$$V=[d_{1},d_{2},\ldots,d_{T}]\begin{bmatrix}s_{1}(1)&\cdots&s_{J}(1)\\ \vdots&\ddots&\vdots\\ s_{1}(T)&\cdots&s_{J}(T)\end{bmatrix}.\tag{7}$$

$$S=[s_{1},\ldots,s_{J}]=\begin{bmatrix}s_{1}(1)&\cdots&s_{J}(1)\\ \vdots&\ddots&\vdots\\ s_{1}(T)&\cdots&s_{J}(T)\end{bmatrix}.\tag{8}$$

$$C=\big[\|s_{1}\|_{1},\ldots,\|s_{J}\|_{1}\big].\tag{9}$$

$$B_{s}=\frac{B_{s}}{A_{s}+B_{s}}.\tag{10}$$

$$C_{C}=(1-\alpha)\,\frac{A+B}{2}+\alpha B_{s}.\tag{11}$$

$$V_{F}=D\,S_{F}.\tag{12}$$

$$r_{A}=\frac{\sum_{m}\sum_{n}\big(F_{mn}-\bar{F}\big)\big(A_{mn}-\bar{A}\big)}{\sqrt{\Big(\sum_{m}\sum_{n}\big(F_{mn}-\bar{F}\big)^{2}\Big)\Big(\sum_{m}\sum_{n}\big(A_{mn}-\bar{A}\big)^{2}\Big)}}.\tag{13}$$

$$r_{B}=\frac{\sum_{m}\sum_{n}\big(F_{mn}-\bar{F}\big)\big(B_{mn}-\bar{B}\big)}{\sqrt{\Big(\sum_{m}\sum_{n}\big(F_{mn}-\bar{F}\big)^{2}\Big)\Big(\sum_{m}\sum_{n}\big(B_{mn}-\bar{B}\big)^{2}\Big)}}.\tag{14}$$

$$r=\frac{r_{A}+r_{B}}{2}.\tag{15}$$
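
To make Eqs. (10), (11), and (13)–(15) concrete, here is a small numpy sketch, again not the authors' code: the absolute Laplacian is only an assumed stand-in for the clarity measures A_s and B_s (the equations do not fix the clarity operator), and any rescaling of B_s to the intensity range of the sources is left out.

```python
import numpy as np

def _abs_laplacian(img):
    """4-neighbor Laplacian magnitude, used as an assumed clarity
    (sharpness) measure; any focus measure could substitute."""
    p = np.pad(img.astype(float), 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return np.abs(lap)

def clarity_enhanced_image(a, b, alpha=0.5):
    """Eqs. (10)-(11): normalize the clarity of source B against the
    total clarity, then blend it into the average image."""
    a_s = _abs_laplacian(a)
    b_s = _abs_laplacian(b)
    b_s = b_s / (a_s + b_s + 1e-12)          # Eq. (10), guarded against 0/0
    return ((1.0 - alpha) * (a.astype(float) + b.astype(float)) / 2.0
            + alpha * b_s)                   # Eq. (11)

def correlation_score(f, a, b):
    """Eqs. (13)-(15): mean correlation between the fused image F and
    the two source images."""
    def corr(x, y):
        xc = x.astype(float) - x.mean()
        yc = y.astype(float) - y.mean()
        return float((xc * yc).sum()
                     / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))
    return 0.5 * (corr(f, a) + corr(f, b))
```

Table 3 reports the best performance at specific α values, which suggests the blending weight in Eq. (11) is tuned experimentally rather than fixed.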
