Abstract

Fusion of infrared and visual images is an important research area in image analysis. The purpose of infrared and visual image fusion is to combine the information of the original images into the final fusion result, so the key problems are to effectively extract the image information of the original images and to combine it reasonably into the final fusion image. To this end, an algorithm using the multi-scale center-surround top-hat transform through region extraction is proposed in this paper. First, the multi-scale center-surround top-hat transform is discussed and used to extract the multi-scale bright and dim image regions of the original images. Second, the final extracted image regions for fusion are constructed from these multi-scale bright and dim regions. Finally, after a base image is calculated from the original images, the final extracted regions are combined with the base image through a weight strategy to form the final fusion result. Because the image information of the original images is well extracted and combined, the proposed algorithm is very effective for image fusion. Comparison experiments on different image sets verify its effectiveness.

© 2011 OSA


References

  1. J. Nunez, X. Otazu, O. Fors, A. Prades, V. Pala, and R. Arbiol, “Multiresolution-based image fusion with additive wavelet decomposition,” IEEE Trans. Geosci. Rem. Sens. 37(3), 1204–1211 (1999).
  2. C. A. Lieber, S. Urayama, N. Rahim, R. Tu, R. Saroufeem, B. Reubner, and S. G. Demos, “Multimodal near infrared spectral imaging as an exploratory tool for dysplastic esophageal lesion identification,” Opt. Express 14(6), 2211–2219 (2006), http://www.opticsinfobase.org/abstract.cfm?URI=oe-14-6-2211.
  3. Y. Chen, L. Wang, Z. Sun, Y. Jiang, and G. Zhai, “Fusion of color microscopic images based on bidimensional empirical mode decomposition,” Opt. Express 18(21), 21757–21769 (2010), http://www.opticsinfobase.org/abstract.cfm?URI=oe-18-21-21757.
  4. M. Leviner and M. Maltz, “A new multi-spectral feature level image fusion method for human interpretation,” Infrared Phys. Technol. 52(2-3), 79–88 (2009).
  5. G. Pajares and J. M. de la Cruz, “A wavelet-based image fusion tutorial,” Pattern Recognit. 37(9), 1855–1872 (2004).
  6. K. Amolins, Y. Zhang, and P. Dare, “Wavelet based image fusion techniques—an introduction, review and comparison,” ISPRS J. Photogramm. Remote Sens. 62(4), 249–263 (2007).
  7. Q. Guihong, Z. Dali, and Y. Pingfan, “Medical image fusion by wavelet transform modulus maxima,” Opt. Express 9(4), 184–190 (2001), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-9-4-184.
  8. F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, “Remote sensing image fusion using the curvelet transform,” Inf. Fusion 8(2), 143–156 (2007).
  9. N. Mitianoudis and T. Stathaki, “Pixel-based and region-based image fusion schemes using ICA bases,” Inf. Fusion 8(2), 131–142 (2007).
  10. M. González-Audícana, J. L. Saleta, R. G. Catalán, and R. García, “Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition,” IEEE Trans. Geosci. Rem. Sens. 42(6), 1291–1299 (2004).
  11. N. Cvejic, D. Bull, and N. Canagarajah, “Region-based multimodal image fusion using ICA bases,” IEEE Sens. J. 7(5), 743–751 (2007).
  12. S. Li and B. Yang, “Multifocus image fusion using region segmentation and spatial frequency,” Image Vis. Comput. 26(7), 971–979 (2008).
  13. A. Toet, M. A. Hogervorst, S. G. Nikolov, J. J. Lewis, T. D. Dixon, D. R. Bull, and C. N. Canagarajah, “Towards cognitive image fusion,” Inf. Fusion 11(2), 95–113 (2010).
  14. Z. Wang, Y. Ma, and J. Gu, “Multi-focus image fusion using PCNN,” Pattern Recognit. 43(6), 2003–2016 (2010).
  15. W. Huang and Z. Jing, “Multi-focus image fusion using pulse coupled neural network,” Pattern Recognit. Lett. 28(9), 1123–1132 (2007).
  16. P. Soille, Morphological Image Analysis-Principle and Applications (Springer, 2003).
  17. X. Bai, F. Zhou, and T. Jin, “Enhancement of dim small target through modified top-hat transformation under the condition of heavy clutter,” Signal Process. 90(5), 1643–1654 (2010).
  18. X. Bai and F. Zhou, “Analysis of different modified top-hat transformations based on structuring element constructing,” Signal Process. 90(11), 2999–3003 (2010).
  19. X. Bai and F. Zhou, “Top-hat selection transformation for infrared dim small target enhancement,” Imaging Sci. J. 58(2), 112–117 (2010).
  20. M. Zeng, J. Li, and Z. Peng, “The design of top-hat morphological filter and application to infrared target detection,” Infrared Phys. Technol. 48(1), 67–76 (2006).
  21. P. Jackway, “Improved morphological top-hat,” Electron. Lett. 36(14), 1194–1195 (2000).
  22. S. Mukhopadhyay and B. Chanda, “Fusion of 2D grayscale images using multiscale morphology,” Pattern Recognit. 34(10), 1939–1949 (2001).
  23. P. Jackway and M. Deriche, “Scale-space properties of the multiscale morphological dilation-erosion,” IEEE Trans. Pattern Anal. Mach. Intell. 18(1), 38–51 (1996).
  24. M. A. Oliveira and N. J. Leite, “A multiscale directional operator and morphological tools for reconnecting broken ridges in fingerprint images,” Pattern Recognit. 41(1), 367–377 (2008).
  25. I. De, B. Chanda, and B. Chattopadhyay, “Enhancing effective depth-of-field by image fusion using mathematical morphology,” Image Vis. Comput. 24(12), 1278–1287 (2006).
  26. X. Bai, F. Zhou, and B. Xue, “Infrared image enhancement through contrast enhancement by using multi scale new top-hat transform,” Infrared Phys. Technol. 54(2), 61–69 (2011).
  27. X. Bai and F. Zhou, “Analysis of new top-hat transformation and the application for infrared dim small target detection,” Pattern Recognit. 43(6), 2145–2156 (2010).
  28. J. W. Roberts, J. Van Aardt, and F. Ahmed, “Assessment of image fusion procedures using entropy, image quality, and multispectral classification,” J. Appl. Remote Sens. 2(1), 023522 (2008).
  29. Y. Chen, Z. Xue, and R. S. Blum, “Theoretical analysis of an information-based quality measure for image fusion,” Inf. Fusion 9(2), 161–175 (2008).
  30. G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electron. Lett. 38(7), 313–315 (2002).




Figures (10)

Fig. 1. Used structuring element with square shape.

Fig. 2. Implementation of the algorithm.

Fig. 3. An example on UNcamp images.

Fig. 4. An example on Trees images.

Fig. 5. An example on Dune images.

Fig. 6. An example on navi images.

Fig. 7. An example on OctecWS images.

Fig. 8. Another example on OctecWS images.

Fig. 9. Quantitative comparison using the entropy measure.

Fig. 10. Quantitative comparison using the joint entropy measure.

Equations (30)


\[ (f \oplus B)(x,y) = \max_{u,v}\bigl(f(x-u,\,y-v) + B(u,v)\bigr), \]
\[ (f \ominus B)(x,y) = \min_{u,v}\bigl(f(x+u,\,y+v) - B(u,v)\bigr). \]
\[ f \circ B = (f \ominus B) \oplus B, \]
\[ f \bullet B = (f \oplus B) \ominus B. \]
\[ WTH(x,y) = f(x,y) - (f \circ B)(x,y), \]
\[ BTH(x,y) = (f \bullet B)(x,y) - f(x,y). \]
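The classical dilation, erosion, opening, closing, and top-hat operations above can be sketched directly in NumPy for a flat square structuring element. All function names here are illustrative, and the edge-replication padding is an assumption, not a choice stated in the paper:

```python
import numpy as np

def _morph(f, size, reduce_fn, init):
    """Greyscale dilation/erosion of f by a flat square SE of odd size."""
    r = size // 2
    p = np.pad(f, r, mode='edge')
    out = np.full(f.shape, init, dtype=float)
    h, w = f.shape
    for du in range(-r, r + 1):
        for dv in range(-r, r + 1):
            out = reduce_fn(out, p[r + du:r + du + h, r + dv:r + dv + w])
    return out

def dilate(f, size):
    return _morph(f, size, np.maximum, -np.inf)   # f dilated by B

def erode(f, size):
    return _morph(f, size, np.minimum, np.inf)    # f eroded by B

def wth(f, size):
    """White top-hat: bright regions smaller than the SE."""
    return f - dilate(erode(f, size), size)       # f minus opening

def bth(f, size):
    """Black top-hat: dim regions smaller than the SE."""
    return erode(dilate(f, size), size) - f       # closing minus f
```

For a flat (zero-height) SE the reflection in the dilation formula has no effect, so both operators reduce to a moving maximum or minimum over the SE support.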
\[ f_{Boi}(x,y) = (f \oplus \Delta B) \ominus B_b, \]
\[ f_{Boc}(x,y) = (f \ominus \Delta B) \oplus B_b. \]
\[ NWTH(x,y) = f(x,y) - f_{Boi}(x,y), \]
\[ NBTH(x,y) = f_{Boc}(x,y) - f(x,y). \]
\[ NWTH(x,y) = f(x,y) - \min\bigl(f_{Boi}(x,y),\, f(x,y)\bigr) = f(x,y) - \min\bigl((f \oplus \Delta B) \ominus B_b,\, f(x,y)\bigr), \]
\[ NBTH(x,y) = \max\bigl(f_{Boc}(x,y),\, f(x,y)\bigr) - f(x,y) = \max\bigl((f \ominus \Delta B) \oplus B_b,\, f(x,y)\bigr) - f(x,y). \]
\[ nL_s = nL + s \times nS, \]
\[ nW_s = nW + s \times nS. \]
\[ NWTH_s(x,y) = f(x,y) - \min\bigl((f \oplus \Delta B_s) \ominus B_{bs},\, f(x,y)\bigr). \]
\[ NBTH_s(x,y) = \max\bigl((f \ominus \Delta B_s) \oplus B_{bs},\, f(x,y)\bigr) - f(x,y). \]
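The center-surround white top-hat and its multi-scale maximum can be sketched in NumPy with flat square structuring elements, where the margin ΔB is the ring between the outer SE and the inner SE B_b. The defaults `inner0` and `step`, the edge padding, and all function names are illustrative assumptions rather than values from the paper; the dual NBTH follows by swapping dilation and erosion:

```python
import numpy as np

def _shift_extreme(f, offsets, pad, reduce_fn, init):
    """Pixelwise min/max of f over a set of flat structuring-element offsets."""
    p = np.pad(f, pad, mode='edge')
    out = np.full(f.shape, init, dtype=float)
    h, w = f.shape
    for du, dv in offsets:
        out = reduce_fn(out, p[pad + du:pad + du + h, pad + dv:pad + dv + w])
    return out

def nwth(f, inner, outer):
    """Center-surround white top-hat: f - min((f dilated by dB) eroded by Bb, f)."""
    ri, ro = inner // 2, outer // 2
    # dB: the ring-shaped margin between the outer and inner square SEs
    ring = [(du, dv) for du in range(-ro, ro + 1)
            for dv in range(-ro, ro + 1) if max(abs(du), abs(dv)) > ri]
    # Bb: the inner square SE
    square = [(du, dv) for du in range(-ri, ri + 1)
              for dv in range(-ri, ri + 1)]
    dil = _shift_extreme(f, ring, ro, np.maximum, -np.inf)     # dilation by dB
    ero = _shift_extreme(dil, square, ri, np.minimum, np.inf)  # erosion by Bb
    return f - np.minimum(ero, f)

def multiscale_nwth(f, n_scales=3, inner0=3, step=2):
    """Multi-scale bright-region map: pixelwise max of NWTH over growing SEs."""
    maps = [nwth(f, inner0 + s * step, inner0 + (s + 1) * step)
            for s in range(n_scales)]
    return np.maximum.reduce(maps)
```

An isolated bright pixel on a dark background is recovered at full contrast by this transform, since its surrounding ring sees only the background.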
\[ NWTH_s(f_{IR})(x,y) = f_{IR}(x,y) - \min\bigl((f_{IR} \oplus \Delta B_s) \ominus B_{bs},\, f_{IR}(x,y)\bigr). \]
\[ NBTH_s(f_{IR})(x,y) = \max\bigl((f_{IR} \ominus \Delta B_s) \oplus B_{bs},\, f_{IR}(x,y)\bigr) - f_{IR}(x,y). \]
\[ NWTH_s(f_{VI})(x,y) = f_{VI}(x,y) - \min\bigl((f_{VI} \oplus \Delta B_s) \ominus B_{bs},\, f_{VI}(x,y)\bigr), \]
\[ NBTH_s(f_{VI})(x,y) = \max\bigl((f_{VI} \ominus \Delta B_s) \oplus B_{bs},\, f_{VI}(x,y)\bigr) - f_{VI}(x,y). \]
\[ NWTH(f_{IR}) = \max_s \{ NWTH_s(f_{IR}) \}. \]
\[ NWTH(f_{VI}) = \max_s \{ NWTH_s(f_{VI}) \}. \]
\[ R_W = \max\{ NWTH(f_{IR}),\, NWTH(f_{VI}) \}. \]
\[ R_B = \max\{ NBTH(f_{IR}),\, NBTH(f_{VI}) \}, \]
\[ NBTH(f_{IR}) = \max_s \{ NBTH_s(f_{IR}) \}, \]
\[ NBTH(f_{VI}) = \max_s \{ NBTH_s(f_{VI}) \}. \]
\[ R_M(x,y) = \frac{f_{IR}(x,y) + f_{VI}(x,y)}{2}. \]
\[ F_u = R_M \times w_1 + R_W \times w_2 - R_B \times w_3. \]
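Given the multi-scale bright and dim region maps of both inputs (passed in here as precomputed arrays), the combination step reduces to a few array operations. The function name and the default weights are illustrative placeholders; the paper's strategy for choosing w1, w2, w3 is not reproduced here:

```python
import numpy as np

def combine(f_ir, f_vi, nwth_ir, nwth_vi, nbth_ir, nbth_vi, w=(1.0, 2.0, 2.0)):
    """Combine extracted regions into the base image; weights are illustrative."""
    rw = np.maximum(nwth_ir, nwth_vi)   # final bright regions, RW
    rb = np.maximum(nbth_ir, nbth_vi)   # final dim regions, RB
    rm = (f_ir + f_vi) / 2.0            # base image, RM
    w1, w2, w3 = w
    return rm * w1 + rw * w2 - rb * w3  # fusion result Fu
```

Adding the bright regions and subtracting the dim regions pushes the fused image toward the salient features of both inputs while the averaged base image preserves the common background.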
\[ E = -\sum_{l=0}^{L-1} p_l \log_2 p_l. \]
\[ JE_{FCD} = -\sum_{k=0}^{L-1} \sum_{c=0}^{L-1} \sum_{d=0}^{L-1} p_{FCD}(k,c,d) \log_2 p_{FCD}(k,c,d). \]
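The entropy and joint-entropy measures above can be sketched from histograms of integer-valued images; function names and the `levels` parameter are illustrative, and for 8-bit data the three-dimensional joint histogram is usually quantized to fewer levels first to keep it tractable:

```python
import numpy as np

def entropy(img, levels=256):
    """Shannon entropy E of an integer-valued image."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) terms are dropped
    return float(-(p * np.log2(p)).sum())

def joint_entropy(fused, a, b, levels=256):
    """Joint entropy of the fused image F with the two inputs C and D."""
    # Encode each (F, C, D) triple as a single histogram bin index
    idx = (fused.astype(np.int64) * levels + a) * levels + b
    hist = np.bincount(idx.ravel(), minlength=levels ** 3).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

A larger joint entropy indicates that the fused image carries more of the combined information of the two inputs, which is how the measure is used for quantitative comparison.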
