Abstract

Combining the useful information of multisensor or multifocus images is important for producing effective optical images. To extract and fuse the features of the original images effectively, this paper proposes an image fusion algorithm based on feature extraction with a sequentially combined toggle and top-hat based contrast operator. Sequentially combining the toggle contrast operator and the top-hat based contrast operator identifies the effective bright and dark image features well. Through a multiscale extension, these bright and dark features are then extracted at multiple scales. The final bright and dark fusion features are constructed by a pixel-wise maximum operation on the multiscale features from the different images, and the final fusion result is obtained by importing these fusion features into the base image. Experimental results on different types of images show that the proposed algorithm performs well for image fusion and may be widely used in applications such as security surveillance and object recognition.

© 2012 Optical Society of America


References


  1. Y. Chen, L. Wang, Z. Sun, Y. Jiang, and G. Zhai, “Fusion of color microscopic images based on bidimensional empirical mode decomposition,” Opt. Express 18, 21757–21769 (2010).
  2. M. Leviner and M. Maltz, “A new multi-spectral feature level image fusion method for human interpretation,” Infrared Phys. Technol. 52, 79–88 (2009).
  3. S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, B. M. Salam, and F. E. Abd El-Samie, “Blind multichannel reconstruction of high-resolution images using wavelet fusion,” Appl. Opt. 44, 7349–7356 (2005).
  4. G. Pajares and J. M. Cruz, “A wavelet-based image fusion tutorial,” Pattern Recogn. 37, 1855–1872 (2004).
  5. K. Amolins, Y. Zhang, and P. Dare, “Wavelet based image fusion techniques—An introduction, review and comparison,” ISPRS J. Photogramm. Remote Sens. 62, 249–263 (2007).
  6. A. Aran, S. Munshi, V. K. Beri, and A. K. Gupta, “Spectral band invariant wavelet-modified MACH filter,” Opt. Lasers Eng. 46, 656–665 (2008).
  7. F. E. Ali, I. M. El-Dokany, A. A. Saad, W. Al-Nuaimy, and F. E. Abd El-Samie, “High resolution image acquisition from magnetic resonance and computed tomography scans using the curvelet fusion algorithm with inverse interpolation techniques,” Appl. Opt. 49, 114–125 (2010).
  8. M. González-Audícana, J. L. Saleta, R. G. Catalán, and R. García, “Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition,” IEEE Trans. Geosci. Remote Sens. 42, 1291–1299 (2004).
  9. N. Cvejic, D. Bull, and N. Canagarajah, “Region-based multimodal image fusion using ICA bases,” IEEE Sens. J. 7, 743–751 (2007).
  10. D. M. Bulanona, T. F. Burks, and V. Alchanatis, “Image fusion of visible and thermal images for fruit detection,” Biosyst. Eng. 103, 12–22 (2009).
  11. S. Li and B. Yang, “Multifocus image fusion using region segmentation and spatial frequency,” Image Vis. Comput. 26, 971–979 (2008).
  12. A. Toet, M. A. Hogervorst, S. G. Nikolov, J. J. Lewis, T. D. Dixon, D. R. Bull, and C. N. Canagarajah, “Towards cognitive image fusion,” Inform. Fusion 11, 95–113 (2010).
  13. Z. Wang, Y. Ma, and J. Gu, “Multi-focus image fusion using PCNN,” Pattern Recogn. 43, 2003–2016 (2010).
  14. W. Huang and Z. Jing, “Multi-focus image fusion using pulse coupled neural network,” Pattern Recogn. Lett. 28, 1123–1132 (2007).
  15. P. Soille, Morphological Image Analysis—Principles and Applications (Springer, 2003).
  16. J. Serra, Image Analysis and Mathematical Morphology (Academic, 1982).
  17. X. Bai, F. Zhou, and B. Xue, “Noise-suppressed image enhancement using multiscale top-hat selection transform through region extraction,” Appl. Opt. 51, 338–347 (2012).
  18. S. Mukhopadhyay and B. Chanda, “Fusion of 2D grayscale images using multiscale morphology,” Pattern Recogn. 34, 1939–1949 (2001).
  19. X. Bai, F. Zhou, and B. Xue, “Fusion of infrared and visual images through region extraction by using multiscale center-surround top-hat transform,” Opt. Express 19, 8444–8457 (2011).
  20. J. Serra, “Toggle mappings,” Technical Report N-18/88/MM (Centre de Morphologie Mathématique, 1988).
  21. L. Dorini and N. Leite, “A scale-space toggle operator for morphological segmentation,” in Proceedings of the 8th International Symposium on Mathematical Morphology (ISMM, 2007), pp. 101–112.
  22. X. Bai and F. Zhou, “Multiscale toggle contrast operator based mineral image enhancement,” J. Microsc. 243, 141–153 (2011).
  23. X. Bai, “Mineral image enhancement based on sequential combination of toggle and top-hat based contrast operator,” Micron, doi:10.1016/j.micron.2012.06.009 (in press).
  24. P. Jackway and M. Deriche, “Scale-space properties of the multiscale morphological dilation-erosion,” IEEE Trans. Pattern Anal. Mach. Intell. 18, 38–51 (1996).
  25. V. Aslantas and R. Kurban, “A comparison of criterion functions for fusion of multi-focus noisy images,” Opt. Commun. 282, 3231–3242 (2009).
  26. W. Wang, P. Tang, and C. Zhu, “A wavelet transform based image fusion method,” J. Image Graphics 6, 1130–1136 (2001).
  27. P. Pradham, N. H. Younan, and R. L. King, “Concepts of image fusion in remote sensing applications,” in Image Fusion: Algorithms and Applications, T. Stathaki, ed. (Academic, 2008), pp. 391–428.
  28. J. Roberts, J. Aardt, and F. Ahmed, “Assessment of image fusion procedures using entropy, image quality, and multispectral classification,” J. Appl. Remote Sens. 2, 023522 (2008).
  29. Y. Chen, Z. Xue, and R. S. Blum, “Theoretical analysis of an information-based quality measure for image fusion,” Inform. Fusion 9, 161–175 (2008).
  30. G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electron. Lett. 38, 313–315 (2002).
  31. G. Piella and H. Heijmans, “A new quality metric for image fusion,” in Proceedings of International Conference on Image Processing (IEEE, 2003), pp. 173–176.




Figures (10)

Fig. 1.

Diagram of the proposed algorithm.

Fig. 2.

Fusion example on multifocus images: (a), (b) the original images; (c) the result of the wavelet-based algorithm; (d) the result of the multiscale morphology based algorithm; (e) the result of the center-surround top-hat transform based algorithm; (f) the result of the proposed algorithm.

Fig. 3.

Fusion example on multimodal infrared and visual images: (a), (b) the original infrared and visual images; (c) the result of the wavelet-based algorithm; (d) the result of the multiscale morphology based algorithm; (e) the result of the center-surround top-hat transform based algorithm; (f) the result of the proposed algorithm.

Fig. 4.

Fusion example on multimodal remote sensing images: (a), (b) the original remote sensing images; (c) the result of the wavelet-based algorithm; (d) the result of the multiscale morphology based algorithm; (e) the result of the center-surround top-hat transform based algorithm; (f) the result of the proposed algorithm.

Fig. 5.

Fusion example on multimodal infrared and visual images: (a), (b) the original infrared and visual images; (c) the result of the wavelet-based algorithm; (d) the result of the multiscale morphology based algorithm; (e) the result of the center-surround top-hat transform based algorithm; (f) the result of the proposed algorithm.

Fig. 6.

Quantitative comparison using SF.

Fig. 7.

Quantitative comparison using MG.

Fig. 8.

Quantitative comparison using E.

Fig. 9.

Quantitative comparison using MI.

Fig. 10.

Quantitative comparison using Q.

Tables (2)


Table 1. Detailed Numerical Results of SF and MG for Each Image Group in Figs. 2–5


Table 2. Detailed Numerical Results of SF and MG for Overall Comparison

Equations (25)


\[ (f \oplus B)(x,y) = \max_{u,v} \{ f(x-u, y-v) + B(u,v) \}, \qquad (f \ominus B)(x,y) = \min_{u,v} \{ f(x+u, y+v) - B(u,v) \}. \]
\[ f \circ B = (f \ominus B) \oplus B, \qquad f \bullet B = (f \oplus B) \ominus B. \]
\[ \mathrm{WTH}_B(x,y) = f(x,y) - (f \circ B)(x,y), \qquad \mathrm{BTH}_B(x,y) = (f \bullet B)(x,y) - f(x,y). \]
\[ \mathrm{TCO}_B(x,y) = \begin{cases} (f \oplus B)(x,y), & \text{if } (f \oplus B)(x,y) - f(x,y) < f(x,y) - (f \ominus B)(x,y), \\ (f \ominus B)(x,y), & \text{if } (f \oplus B)(x,y) - f(x,y) > f(x,y) - (f \ominus B)(x,y), \\ f(x,y), & \text{otherwise}. \end{cases} \]
\[ \mathrm{THCO}_B(x,y) = f(x,y) + \mathrm{WTH}_B(x,y) - \mathrm{BTH}_B(x,y). \]
\[ \mathrm{SCO}_B(x,y) = \mathrm{TCO}_B(\mathrm{THCO}_B(x,y)). \]
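The operator chain above can be sketched in Python, assuming float grayscale images and a flat 3×3 structuring element; the helper names `tco`, `thco`, and `sco` and the use of `scipy.ndimage` are illustrative choices, not the paper's own code.

```python
# Sketch of the sequentially combined toggle and top-hat based contrast
# operator (SCO), assuming float grayscale images and a flat 3x3
# structuring element B; helper names are illustrative only.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def tco(f, size=(3, 3)):
    """Toggle contrast operator: move each pixel to its dilation or
    erosion, whichever is strictly closer; ties keep the original value."""
    d = grey_dilation(f, size=size)   # f dilated by B
    e = grey_erosion(f, size=size)    # f eroded by B
    out = f.copy()
    to_d = d - f < f - e              # dilation is strictly closer
    to_e = d - f > f - e              # erosion is strictly closer
    out[to_d] = d[to_d]
    out[to_e] = e[to_e]
    return out

def thco(f, size=(3, 3)):
    """Top-hat based contrast operator: f + WTH - BTH."""
    opening = grey_dilation(grey_erosion(f, size=size), size=size)
    closing = grey_erosion(grey_dilation(f, size=size), size=size)
    return f + (f - opening) - (closing - f)

def sco(f, size=(3, 3)):
    """Sequential combination: toggle contrast applied to the THCO result."""
    return tco(thco(f, size=size), size=size)
```

On a flat region every operator above reduces to the identity, so SCO leaves constant images unchanged, which gives a quick sanity check.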
\[ \mathrm{BIF}_B^f(x,y) = \max \{ \mathrm{SCO}_B^f(x,y) - f(x,y),\, 0 \}. \]
\[ \mathrm{BIF}_B^g(x,y) = \max \{ \mathrm{SCO}_B^g(x,y) - g(x,y),\, 0 \}. \]
\[ \mathrm{DIF}_B^f(x,y) = \max \{ f(x,y) - \mathrm{SCO}_B^f(x,y),\, 0 \}, \qquad \mathrm{DIF}_B^g(x,y) = \max \{ g(x,y) - \mathrm{SCO}_B^g(x,y),\, 0 \}. \]
\[ \mathrm{BIF}_{B_s}^f(x,y) = \max \{ \mathrm{SCO}_{B_s}^f(x,y) - f(x,y),\, 0 \}, \qquad \mathrm{DIF}_{B_s}^f(x,y) = \max \{ f(x,y) - \mathrm{SCO}_{B_s}^f(x,y),\, 0 \}. \]
\[ \mathrm{BIF}_{B_s}^g(x,y) = \max \{ \mathrm{SCO}_{B_s}^g(x,y) - g(x,y),\, 0 \}, \qquad \mathrm{DIF}_{B_s}^g(x,y) = \max \{ g(x,y) - \mathrm{SCO}_{B_s}^g(x,y),\, 0 \}. \]
\[ \mathrm{BIF}_{B_s}(x,y) = \max \{ \mathrm{BIF}_{B_s}^f(x,y),\, \mathrm{BIF}_{B_s}^g(x,y) \}. \]
\[ \mathrm{FBIF} = \max_s \{ \mathrm{BIF}_{B_s} \}. \]
\[ \mathrm{DIF}_{B_s}(x,y) = \max \{ \mathrm{DIF}_{B_s}^f(x,y),\, \mathrm{DIF}_{B_s}^g(x,y) \}. \]
\[ \mathrm{FDIF} = \max_s \{ \mathrm{DIF}_{B_s} \}. \]
\[ A(x,y) = \mathrm{mean} \{ f(x,y),\, g(x,y) \}. \]
\[ \mathrm{FRI}(x,y) = A(x,y) + \mathrm{FBIF}(x,y) - \mathrm{FDIF}(x,y). \]
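The multiscale feature extraction and pixel-wise maximum fusion rule can be sketched as follows, restating the SCO operator compactly so the snippet is self-contained; the growing square structuring elements and the scale set (3, 5, 7) are illustrative assumptions.

```python
# Sketch of multiscale bright/dark feature extraction and the fusion rule
# FRI = A + FBIF - FDIF, for float grayscale images f and g of equal size.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def sco(f, size):
    """Sequentially combined toggle and top-hat based contrast operator."""
    opening = grey_dilation(grey_erosion(f, size=size), size=size)
    closing = grey_erosion(grey_dilation(f, size=size), size=size)
    th = f + (f - opening) - (closing - f)          # THCO result
    d = grey_dilation(th, size=size)
    e = grey_erosion(th, size=size)
    out = th.copy()
    out[d - th < th - e] = d[d - th < th - e]       # toggle step on THCO
    out[d - th > th - e] = e[d - th > th - e]
    return out

def fuse(f, g, scales=(3, 5, 7)):
    """Fuse two images via multiscale bright/dark features."""
    bright, dark = [], []
    for s in scales:
        for img in (f, g):
            r = sco(img, (s, s))
            bright.append(np.maximum(r - img, 0))   # BIF at scale s
            dark.append(np.maximum(img - r, 0))     # DIF at scale s
    fbif = np.max(bright, axis=0)                   # FBIF: pixel-wise maximum
    fdif = np.max(dark, axis=0)                     # FDIF: pixel-wise maximum
    base = (f + g) / 2.0                            # base image A = mean(f, g)
    return base + fbif - fdif                       # fused result FRI
```

Fusing two identical constant images returns the same constant, since all bright and dark features vanish.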
\[ \mathrm{SF} = \sqrt{\mathrm{RF}^2 + \mathrm{CF}^2}, \]
\[ \mathrm{RF} = \sqrt{\frac{1}{M \times N} \sum_{x=1}^{M} \sum_{y=1}^{N} [\mathrm{FRI}(x,y) - \mathrm{FRI}(x-1,y)]^2}, \qquad \mathrm{CF} = \sqrt{\frac{1}{M \times N} \sum_{x=1}^{M} \sum_{y=1}^{N} [\mathrm{FRI}(x,y) - \mathrm{FRI}(x,y-1)]^2}. \]
\[ \mathrm{MG} = \frac{1}{(M-1) \times (N-1)} \sum_{x=1}^{M-1} \sum_{y=1}^{N-1} \sqrt{\frac{(\mathrm{FRI}(x,y) - \mathrm{FRI}(x-1,y))^2 + (\mathrm{FRI}(x,y) - \mathrm{FRI}(x,y-1))^2}{2}}. \]
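The SF and MG metrics can be transcribed directly, assuming the fused image is a float array; averaging over the valid neighbor differences (rather than dividing by the full M×N) and the choice of which axis is "row" frequency are minor conventions assumed here.

```python
# Sketch of the spatial frequency (SF) and mean gradient (MG) metrics
# for a fused image FRI given as a 2-D float array.
import numpy as np

def spatial_frequency(fri):
    """SF = sqrt(RF^2 + CF^2) from mean squared neighbor differences."""
    rf2 = np.mean((fri[:, 1:] - fri[:, :-1]) ** 2)  # horizontal differences
    cf2 = np.mean((fri[1:, :] - fri[:-1, :]) ** 2)  # vertical differences
    return float(np.sqrt(rf2 + cf2))

def mean_gradient(fri):
    """MG: mean of the root of averaged squared differences in both axes."""
    dx = fri[1:, 1:] - fri[:-1, 1:]                 # differences along rows
    dy = fri[1:, 1:] - fri[1:, :-1]                 # differences along columns
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```

Both metrics are zero on a flat image and grow with local contrast, which is why larger values indicate a sharper fused result.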
\[ E = -\sum_{l=0}^{L-1} p_l \log_2 p_l, \]
\[ \mathrm{MI} = \mathrm{MI}_f + \mathrm{MI}_g, \]
\[ \mathrm{MI}_f = \sum_{q=0}^{L-1} \sum_{r=0}^{L-1} p_{Ff}(q,r) \log_2 \frac{p_{Ff}(q,r)}{p_F(q)\, p_f(r)}, \qquad \mathrm{MI}_g = \sum_{q=0}^{L-1} \sum_{r=0}^{L-1} p_{Fg}(q,r) \log_2 \frac{p_{Fg}(q,r)}{p_F(q)\, p_g(r)}. \]
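The entropy and mutual information measures can be sketched for L = 256 gray levels; the histogram construction below is one common choice, not necessarily the paper's exact implementation.

```python
# Sketch of the entropy (E) and mutual information (MI) measures for
# integer-valued images with gray levels in [0, levels).
import numpy as np

def entropy(img, levels=256):
    """E = -sum p_l log2 p_l over the gray-level histogram."""
    p = np.bincount(img.ravel(), minlength=levels) / img.size
    p = p[p > 0]                            # treat 0 * log 0 as 0
    return float(-np.sum(p * np.log2(p)))

def mutual_information(fused, src, levels=256):
    """MI between the fused image and one source via the joint histogram."""
    joint = np.histogram2d(fused.ravel(), src.ravel(), bins=levels,
                           range=[[0, levels], [0, levels]])[0]
    p_qr = joint / joint.sum()              # joint distribution p(q, r)
    p_q = p_qr.sum(axis=1, keepdims=True)   # marginal of the fused image
    p_r = p_qr.sum(axis=0, keepdims=True)   # marginal of the source image
    nz = p_qr > 0
    return float(np.sum(p_qr[nz] * np.log2(p_qr[nz] / (p_q @ p_r)[nz])))
```

The total measure is then `mutual_information(F, f) + mutual_information(F, g)`, matching the MI equation above.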
\[ Q = \frac{1}{|W|} \sum_{\omega \in W} \big( \lambda(\omega)\, Q_0(a, u | \omega) + (1 - \lambda(\omega))\, Q_0(b, u | \omega) \big), \]
\[ Q_0(a, u | \omega) = \frac{4 \times \sigma_{au} \times \bar{a} \times \bar{u}}{(\bar{a}^2 + \bar{u}^2) \times (\sigma_a^2 + \sigma_u^2)}, \qquad Q_0(b, u | \omega) = \frac{4 \times \sigma_{bu} \times \bar{b} \times \bar{u}}{(\bar{b}^2 + \bar{u}^2) \times (\sigma_b^2 + \sigma_u^2)}. \]
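The quality index Q can be sketched with non-overlapping windows; the variance-based saliency weight λ(ω) = σ_a²/(σ_a² + σ_b²) and the 8×8 window size are common choices assumed here, and Q_0 is the universal image quality index from the last equation.

```python
# Sketch of the windowed fusion quality index Q for source images a, b
# and fused image u; windowing and weighting choices are assumptions.
import numpy as np

def q0(a, u):
    """Universal image quality index Q0 over one window pair."""
    sa, su = a.var(), u.var()
    sau = np.mean((a - a.mean()) * (u - u.mean()))       # covariance
    den = (a.mean() ** 2 + u.mean() ** 2) * (sa + su)
    return 4.0 * sau * a.mean() * u.mean() / den if den > 0 else 1.0

def fusion_quality(a, b, u, win=8):
    """Average of saliency-weighted Q0 scores over win x win windows."""
    vals = []
    for i in range(0, a.shape[0] - win + 1, win):
        for j in range(0, a.shape[1] - win + 1, win):
            wa, wb, wu = (x[i:i + win, j:j + win] for x in (a, b, u))
            sa, sb = wa.var(), wb.var()
            lam = sa / (sa + sb) if sa + sb > 0 else 0.5  # saliency weight
            vals.append(lam * q0(wa, wu) + (1.0 - lam) * q0(wb, wu))
    return float(np.mean(vals))
```

When the fused image equals both sources, every window scores Q_0 = 1, so the index attains its maximum of 1.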
