Abstract

To address the differing physical characteristics of infrared and visible sensors, we introduce focus measure operators into the curvelet domain and propose a novel image fusion method. First, the fast discrete curvelet transform is applied to the source images to obtain coefficient subbands at different scales and in various directions, and focus measure values are computed for each coefficient subband. Then, a local-variance-weighted strategy is applied to the low-frequency subbands to preserve the low-frequency information of the infrared image while adding low-frequency features of the visible image to the fused result; meanwhile, a fourth-order correlation coefficient matching strategy is applied to the high-frequency subbands to select the appropriate high-frequency information. Finally, the fused image is obtained through the inverse curvelet transform. Experiments indicate that the proposed method integrates more useful information from the source images and achieves markedly better fusion performance than traditional wavelet-, curvelet-, and pyramid-based methods.
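The low-frequency rule summarized above can be sketched as follows. This is a simplified illustration, not the authors' implementation: the paper weights by local-window standard deviations σ(W), whereas this sketch uses each subband's overall standard deviation for brevity.

```python
import numpy as np

def fuse_low(c_inf, c_vis):
    """Simplified low-frequency fusion rule (sketch only).

    Keeps the infrared low-frequency subband as the base and adds the
    visible subband's deviation from a common offset, weighted by the
    relative standard deviations of the two subbands.
    """
    s_inf, s_vis = c_inf.std(), c_vis.std()  # paper: local-window sigma
    w = s_vis / (s_vis + s_inf) if (s_vis + s_inf) > 0 else 0.0
    offset = min(c_vis.mean(), c_inf.mean())
    return c_inf + w * (c_vis - offset)
```

With this weighting, a flat (low-variance) visible subband contributes little, so the infrared low-frequency content dominates the fused subband.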

© 2012 Optical Society of America

References


  1. F. F. Zhou, W. D. Chen, and L. F. Li, “Fusion of IR and visible images using region growing,” J. Appl. Opt. 28, 737–741 (2007) (in Chinese).
  2. A. Apatean, C. Rusu, A. Rogozan, and A. Bensrhair, “Visible-infrared fusion in the frame of an obstacle recognition system,” in IEEE International Conference on Automation, Quality and Testing, Robotics (IEEE, 2010), pp. 1–6.
  3. X. H. Yang, H. Y. Jin, and L. C. Jiao, “Adaptive image fusion algorithm for infrared and visible light images based on DT-CWT,” J. Infrared Millim. Waves 26, 419–424 (2007).
  4. H. M. Wang, K. Zhang, and Y. J. Li, “Image fusion algorithm based on wavelet transform,” Infrared Laser Eng. 34, 328–332 (2005) (in Chinese).
  5. S. Firooz, “Comparative image fusion analysis,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2005), pp. 1–8.
  6. G. Pajares and J. M. de la Cruz, “A wavelet-based image fusion tutorial,” Pattern Recogn. 37, 1855–1872 (2004).
  7. R. Minhas, A. A. Mohammed, and Q. M. J. Wu, “Shape from focus using fast discrete curvelet transform,” Pattern Recogn. 44, 839–853 (2011).
  8. S. Li and B. Yang, “Multifocus image fusion by combining curvelet and wavelet transform,” Pattern Recogn. Lett. 29, 1295–1301 (2008).
  9. X. B. Qu, J. W. Yan, and G. D. Yang, “Multifocus image fusion method of sharp frequency localized contourlet transform domain based on sum-modified-Laplacian,” Opt. Precision Eng. 17, 1203–1212 (2009).
  10. E. J. Candès and D. L. Donoho, “New tight frames of curvelets and optimal representations of objects with C² singularities,” Commun. Pure Appl. Math. 57, 219–266 (2004).
  11. E. J. Candès, L. Demanet, and D. L. Donoho, “Fast discrete curvelet transforms,” Multiscale Model. Simul. 5, 861–899 (2006).
  12. W. Huang and Z. L. Jing, “Evaluation of focus measures in multi-focus image fusion,” Pattern Recogn. Lett. 28, 493–500 (2007).
  13. R. Minhas, A. A. Mohammed, and Q. M. J. Wu, “An efficient algorithm for focus measure computation in constant time,” IEEE Trans. Circuits Syst. Video Technol. 22, 152–156 (2012).
  14. R. Minhas, A. A. Mohammed, Q. M. J. Wu, and M. A. Sid-Ahmed, “3D shape from focus and depth map computation using steerable filters,” Lect. Notes Comput. Sci. 5627, 573–583 (2009).
  15. M. Muhammad and T. S. Choi, “Sampling for shape from focus in optical microscopy,” IEEE Trans. Pattern Anal. Machine Intell. 99, 1–12 (2011).
  16. V. Aslantas and R. Kurban, “A comparison of criterion functions for fusion of multi-focus noisy images,” Opt. Commun. 282, 3231–3242 (2009).
  17. S. K. Nayar and Y. Nakagawa, “Shape from focus,” IEEE Trans. Pattern Anal. Machine Intell. 16, 824–831 (1994).
  18. Z. Wang and A. C. Bovik, “A universal image quality index,” IEEE Signal Process. Lett. 9, 81–84 (2002).
  19. G. X. Liu, S. G. Zhao, and W. H. Yang, “Multi-sensor image fusion scheme based on gradient pyramid decomposition,” J. Optoelectron. Laser 12, 293–296 (2001) (in Chinese).
  20. J. Liu and Z. F. Shao, “Feature-based remote sensing image fusion quality metrics using structure similarity,” Acta Photon. Sin. 40, 126–131 (2011).
  21. M. Welling, “Robust higher order statistics,” in Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (2005), pp. 405–412.




Figures (10)

Fig. 1. Schematic diagram of the discrete curvelet transform [11].

Fig. 2. Fusion scheme of infrared and visible images based on focus measure operators in the curvelet domain.

Fig. 3. Source images and results of the first experiment: (a) infrared image, (b) visible image, (c) curvelet transform, (d) gradient pyramid, (e) wavelet transform, (f) curvelet and EOG, (g) curvelet and EOL, (h) curvelet and SML, and (i) curvelet and TEN.

Fig. 4. Objective error histogram of the first experiment: (a) curvelet transform, (b) gradient pyramid, (c) wavelet transform, (d) curvelet and EOG, (e) curvelet and EOL, (f) curvelet and SML, and (g) curvelet and TEN.

Fig. 5. Source images and results of the second experiment: (a) infrared image, (b) visible image, (c) curvelet transform, (d) gradient pyramid, (e) wavelet transform, (f) curvelet and EOG, (g) curvelet and EOL, (h) curvelet and SML, and (i) curvelet and TEN.

Fig. 6. Objective error histogram of the second experiment: (a) curvelet transform, (b) gradient pyramid, (c) wavelet transform, (d) curvelet and EOG, (e) curvelet and EOL, (f) curvelet and SML, and (g) curvelet and TEN.

Fig. 7. Objective evaluation results of the second experiment.

Fig. 8. Influence of the decomposition level of the curvelet transform.

Fig. 9. Influence of the FOCC threshold.

Fig. 10. Results of other datasets: (a) infrared image, (b) visible image, (c) fusion image, (d) infrared remote sensing image, (e) visible remote sensing image, and (f) fusion image.

Tables (1)

Table 1. Objective Evaluation Results of the First Experiment

Equations (30)


\( c(j,l,k) = \langle f, \varphi_{j,l,k} \rangle \)

\( \sum_{j=-\infty}^{\infty} W^2(2^j r) = 1, \quad r \in (3/4,\, 3/2) \)

\( \sum_{l=-\infty}^{\infty} V^2(t-l) = 1, \quad t \in (-1/2,\, 1/2) \)

\( U_j(r,\theta) = 2^{-3j/4}\, W(2^{-j} r)\, V\!\left(\frac{2^{\lfloor j/2 \rfloor}\theta}{2\pi}\right) \)

\( \varphi_{j,l,k}(x) = \varphi_j\!\left(R_{\theta_l}\left(x - x_k^{(j,l)}\right)\right) \)

\( c(j,l,k) = \langle f, \varphi_{j,l,k} \rangle = \int_{\mathbb{R}^2} f(x)\, \overline{\varphi_{j,l,k}(x)}\, dx = \frac{1}{(2\pi)^2} \int \hat{f}(\omega)\, \overline{\hat{\varphi}_{j,l,k}(\omega)}\, d\omega = \frac{1}{(2\pi)^2} \int \hat{f}(\omega)\, U_j(R_{\theta_l}\omega)\, e^{i \langle x_k^{(j,l)}, \omega \rangle}\, d\omega \)

\( c^D(j,l,k) = \sum_{0 \le t_1, t_2 < n} f[t_1, t_2]\, \overline{\varphi^D_{j,l,k}[t_1, t_2]} \)

\( V_j(S_{\theta_l}\omega) = V\!\left(2^{\lfloor j/2 \rfloor}\, \omega_2/\omega_1 - l\right) \)

\( S_{\theta_l} = \begin{bmatrix} 1 & 0 \\ -\tan\theta_l & 1 \end{bmatrix} \)

\( \tilde{U}_{j,l}(\omega) = \psi_j(\omega_1)\, V_j(S_{\theta_l}\omega) = \tilde{U}_j(S_{\theta_l}\omega) \)

\( \hat{f}[n_1, n_2], \quad -n/2 \le n_1, n_2 < n/2 \)

\( \hat{f}[n_1, n_2 - n_1 \tan\theta_l], \quad (n_1, n_2) \in P_j \)

\( \tilde{f}[n_1, n_2] = \hat{f}[n_1, n_2 - n_1 \tan\theta_l]\, \tilde{U}_j[n_1, n_2] \)
\( \mathrm{EOG} = \sum_x \sum_y \left(f_x^2 + f_y^2\right) \)

\( \mathrm{Tenengrad} = \sum_{x=2}^{M-1} \sum_{y=2}^{N-1} [S(x,y)]^2 \)

\( S_x(x,y) = f(x+1,y-1) - f(x-1,y-1) + 2f(x+1,y) - 2f(x-1,y) + f(x+1,y+1) - f(x-1,y+1) \)

\( S_y(x,y) = f(x-1,y+1) - f(x-1,y-1) + 2f(x,y+1) - 2f(x,y-1) + f(x+1,y+1) - f(x+1,y-1) \)

\( \mathrm{EOL} = \sum_x \sum_y \left(f_{xx} + f_{yy}\right)^2 \)

\( f_{xx} + f_{yy} = 20 f(x,y) - f(x-1,y-1) - 4f(x-1,y) - f(x-1,y+1) - 4f(x,y-1) - 4f(x,y+1) - f(x+1,y-1) - 4f(x+1,y) - f(x+1,y+1) \)

\( \nabla^2_{\mathrm{ML}} f(x,y) = \left|2f(x,y) - f(x-\mathrm{step},y) - f(x+\mathrm{step},y)\right| + \left|2f(x,y) - f(x,y-\mathrm{step}) - f(x,y+\mathrm{step})\right| \)

\( \mathrm{SML} = \sum_{i=x-N}^{x+N} \sum_{j=y-N}^{y+N} \nabla^2_{\mathrm{ML}} f(i,j) \)
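The focus measure operators above (EOG, EOL, SML; Tenengrad is analogous, built from the Sobel responses Sx and Sy) can be sketched in NumPy as follows. This is an illustrative sketch: the per-pixel window sums and thresholds used in the paper are collapsed into whole-array sums for brevity.

```python
import numpy as np

def eog(f):
    # Energy of gradient: sum of squared first differences in x and y
    return np.sum(np.diff(f, axis=0) ** 2) + np.sum(np.diff(f, axis=1) ** 2)

def eol(f):
    # Energy of Laplacian with the 3x3 mask [-1 -4 -1; -4 20 -4; -1 -4 -1]
    lap = (20 * f[1:-1, 1:-1]
           - f[:-2, :-2] - 4 * f[:-2, 1:-1] - f[:-2, 2:]
           - 4 * f[1:-1, :-2] - 4 * f[1:-1, 2:]
           - f[2:, :-2] - 4 * f[2:, 1:-1] - f[2:, 2:])
    return np.sum(lap ** 2)

def sml(f, step=1):
    # Sum-modified-Laplacian, summed over the full array (the paper sums
    # over a (2N+1)x(2N+1) window around each pixel)
    c = f[step:-step, step:-step]
    ml = (np.abs(2 * c - f[:-2 * step, step:-step] - f[2 * step:, step:-step])
          + np.abs(2 * c - f[step:-step, :-2 * step] - f[step:-step, 2 * step:]))
    return np.sum(ml)
```

All three respond to local high-frequency structure: a flat region scores zero, while sharper texture raises the measure, which is what makes them usable as activity measures on curvelet coefficient subbands.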
\( C^l_{\mathrm{inf}} = \mathrm{FOM}\!\left(C^{\mathrm{low}}_{\mathrm{inf}}\right) \)

\( C^l_{\mathrm{vis}} = \mathrm{FOM}\!\left(C^{\mathrm{low}}_{\mathrm{vis}}\right) \)

\( C^l_{\mathrm{fus}}(x,y) = C^l_{\mathrm{inf}}(x,y) + \frac{\sigma(W_{\mathrm{vis}})}{\sigma(W_{\mathrm{vis}}) + \sigma(W_{\mathrm{inf}})} \times \left[C^l_{\mathrm{vis}}(x,y) - \min\!\left(\bar{W}_{\mathrm{vis}}, \bar{W}_{\mathrm{inf}}\right)\right] \)

\( C^l_{\mathrm{fus}}(x,y) = C^l_{\mathrm{inf}}(x,y) + w \times \left[C^l_{\mathrm{vis}}(x,y) - C\right] \)

\( C^h_{\mathrm{inf}} = \mathrm{FOM}\!\left(C^{\mathrm{high}(k)}_{\mathrm{inf}}\right) \)

\( C^h_{\mathrm{vis}} = \mathrm{FOM}\!\left(C^{\mathrm{high}(k)}_{\mathrm{vis}}\right) \)

\( \mathrm{FOCC}_{A,B} = \frac{1}{M \times N} \times \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \left(A(i,j)-\mu_A\right)^2 \left(B(i,j)-\mu_B\right)^2}{\sqrt{\left(\sum_{i=1}^{M}\sum_{j=1}^{N}\left(A(i,j)-\mu_A\right)^4\right)\left(\sum_{i=1}^{M}\sum_{j=1}^{N}\left(B(i,j)-\mu_B\right)^4\right)}} \)

\( C^h_{\mathrm{fus}}(x,y) = C^h_{\mathrm{inf}}(x,y) + C^h_{\mathrm{vis}}(x,y) \)

\( C^h_{\mathrm{fus}}(x,y) = \begin{cases} C^h_{\mathrm{inf}}(x,y), & \text{if } \left|C^h_{\mathrm{inf}}(x,y)\right| > \left|C^h_{\mathrm{vis}}(x,y)\right| \\ C^h_{\mathrm{vis}}(x,y), & \text{otherwise} \end{cases} \)
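A direct transcription of the FOCC and the high-frequency selection rule might look like the sketch below. The matching threshold and the rescaling of FOCC to [0, 1] are assumptions for illustration (the paper examines the influence of the FOCC threshold in Fig. 9, but no specific value is fixed here).

```python
import numpy as np

def focc(a, b):
    # Fourth-order correlation coefficient between two coefficient blocks,
    # including the 1/(M*N) normalization factor; identical blocks score 1/(M*N)
    m, n = a.shape
    da, db = a - a.mean(), b - b.mean()
    num = np.sum(da ** 2 * db ** 2)
    den = np.sqrt(np.sum(da ** 4) * np.sum(db ** 4))
    return num / (m * n * den)

def fuse_high(c_inf, c_vis, threshold=0.5):
    # Hypothetical matching rule: combine the subbands when they are well
    # matched, otherwise select the coefficient with the larger magnitude
    if focc(c_inf, c_vis) * c_inf.size >= threshold:  # rescaled to [0, 1]
        return c_inf + c_vis
    return np.where(np.abs(c_inf) > np.abs(c_vis), c_inf, c_vis)
```

By the Cauchy-Schwarz inequality the rescaled match value never exceeds 1, so a threshold in (0, 1) cleanly separates the combine branch from the select-max branch.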