Abstract

This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer: fusion is achieved by preserving the intensity of the infrared image while transferring the gradients of the corresponding visible image to the result. Plain gradient transfer suffers from low dynamic range and loss of detail because it ignores the intensity of the visible image. The new algorithm addresses these problems by adding intensity from the visible image to balance the intensities of the two inputs. It formulates the fusion task as an ℓ1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem into an equivalent constrained one that can be solved within the framework of the alternating direction method of multipliers (ADMM). Experiments demonstrate that, in both qualitative and quantitative tests, the new algorithm achieves better fusion results with higher computational efficiency than gradient transfer and most state-of-the-art methods.

© 2017 Optical Society of America
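The ℓ1-TV objective described in the abstract, minimizing a data term that keeps the fused image close to the infrared intensities plus a term that transfers the visible image's gradients, can be sketched numerically. The snippet below is a minimal illustration using smoothed (Charbonnier) gradient descent rather than the paper's variable-splitting/ADMM solver; the function name `fuse_gtf`, the parameters `lam`, `eps`, `step`, `iters`, and the border handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_gtf(ir, vis, lam=4.0, eps=1e-3, step=0.05, iters=200):
    """Sketch of gradient-transfer fusion: approximately minimize
    ||x - ir||_1 + lam * ||grad(x) - grad(vis)||_1
    by gradient descent on a smoothed (Charbonnier) surrogate of the L1 norms."""
    def grad(img):
        # forward differences, replicated border (zero gradient at the far edge)
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return gx, gy

    def div(gx, gy):
        # backward-difference divergence, (negative) adjoint of grad up to boundary terms
        dx = np.diff(gx, axis=1, prepend=gx[:, :1])
        dy = np.diff(gy, axis=0, prepend=gy[:1, :])
        return dx + dy

    x = ir.astype(float).copy()
    vx, vy = grad(vis.astype(float))           # target gradients from the visible image
    for _ in range(iters):
        gx, gy = grad(x)
        # derivative of the smoothed L1 data term ||x - ir||_1
        fid = (x - ir) / np.sqrt((x - ir) ** 2 + eps)
        # smoothed sign of the gradient residual (gradient-transfer term)
        tx = (gx - vx) / np.sqrt((gx - vx) ** 2 + eps)
        ty = (gy - vy) / np.sqrt((gy - vy) ** 2 + eps)
        x -= step * (fid - lam * div(tx, ty))  # descent step on the full objective
    return x
```

The paper instead splits the gradient variable out of the objective (z = ∇x, constrained to match) so each ADMM subproblem has a closed-form or cheap solution, which is what yields the reported computational efficiency; the first-order sketch above only illustrates the objective being minimized.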



[Crossref]

Y. Yang, S. H. Ong, and K. W. C. Foong, “A robust global and local mixture distance based non-rigid point set registration,” Pattern Recognition 48, 156–173 (2015).
[Crossref]

Ong, S.-H.

Z. Wei, Y. Han, M. Li, K. Yang, Y. Yang, Y. Luo, and S.-H. Ong, “A small UAV based multi-temporal image registration for dynamic agricultural terrace monitoring,” Remote Sens. 9, 904 (2017).
[Crossref]

Osher, S.

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D 60, 259–268 (1992).
[Crossref]

Pan, A.

K. Yang, A. Pan, Y. Yang, S. Zhang, S. H. Ong, and H. Tang, “Remote sensing image registration using multiple image features,” Remote Sens. 9, 581 (2017).
[Crossref]

Parikh, N.

S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn. 3, 1–122 (2011).
[Crossref]

Peleato, B.

S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn. 3, 1–122 (2011).
[Crossref]

Petrovic, V.

C. Xydeas and V. Petrovic, “Objective image fusion performance measure,” Electron. Lett. 36, 308–309 (2000).
[Crossref]

Plaza, A.

M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Total variation spatial regularization for sparse hyperspectral unmixing,” IEEE Trans. Geosci. Remote Sens. 50, 4484–4502 (2012).
[Crossref]

Qi, S.

Qin, H.

Qu, G.

G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electron. Lett. 38, 313–315 (2002).
[Crossref]

Rudin, L. I.

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D 60, 259–268 (1992).
[Crossref]

Sun, J.

K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 1397–1409 (2013).
[Crossref]

Tang, H.

K. Yang, A. Pan, Y. Yang, S. Zhang, S. H. Ong, and H. Tang, “Remote sensing image registration using multiple image features,” Remote Sens. 9, 581 (2017).
[Crossref]

Tang, X.

K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 1397–1409 (2013).
[Crossref]

Tian, J.

C. Yang, J. Ma, S. Qi, J. Tian, S. Zheng, and X. Tian, “Directional support value of Gaussian transformation for infrared small target detection,” Appl. Opt. 54, 2255–2265 (2015).
[Crossref]

J. Ma, J. Zhao, Y. Ma, and J. Tian, “Non-rigid visible and infrared face registration via regularized Gaussian fields criterion,” Pattern Recognition 48, 772–784 (2015).
[Crossref]

J. Ma, J. Zhao, J. Tian, X. Bai, and Z. Tu, “Regularized vector field learning with sparse approximation for mismatch removal,” Pattern Recognition 46, 3519–3532 (2013).
[Crossref]

Tian, X.

Toet, A.

A. Toet, “Image fusion by a ratio of low-pass pyramid,” Patt. Recog. Lett. 9, 245–253 (1989).
[Crossref]

Tu, Z.

J. Ma, J. Zhao, J. Tian, X. Bai, and Z. Tu, “Regularized vector field learning with sparse approximation for mismatch removal,” Pattern Recognition 46, 3519–3532 (2013).
[Crossref]

Vandenberghe, L.

S. Boyd and L. Vandenberghe, Convex Optimization (Cambridge University, 2004).

Wajs, V. R.

P. L. Combettes and V. R. Wajs, “Signal recovery by proximal forward-backward splitting,” Multiscale Model. Simul. 4, 1168–1200 (2005).
[Crossref]

Wang, B.

Z. Zhou, B. Wang, S. Li, and M. Dong, “Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters,” Inform. Fusion 30, 15–26 (2016).
[Crossref]

Wang, S.

S. Wang and L. Liao, “Decomposition method with a variable parameter for a class of monotone variational inequality problems,” J. Optimization Theory Applications 109, 415–429 (2001).
[Crossref]

B. He, H. Yang, and S. Wang, “Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities,” J. Optimization Theory Applications 106, 337–356 (2000).
[Crossref]

Wang, Z.

Y. Liu, S. Liu, and Z. Wang, “A general framework for image fusion based on multi-scale transform and sparse representation,” Inform. Fusion 24, 147–164 (2015).
[Crossref]

Y. Liu, S. Liu, and Z. Wang, “Multi-focus image fusion with dense sift,” Inform. Fusion 23, 139–155 (2015).
[Crossref]

Wei, Z.

Z. Wei, Y. Han, M. Li, K. Yang, Y. Yang, Y. Luo, and S.-H. Ong, “A small UAV based multi-temporal image registration for dynamic agricultural terrace monitoring,” Remote Sens. 9, 904 (2017).
[Crossref]

Xie, X.

Xu, Z.

J. Zhao, H. Feng, Z. Xu, Q. Li, and T. Liu, “Detail enhanced multi-source fusion using visual weight map extraction based on multi scale edge preserving decomposition,” Opt. Commun. 287, 45–52 (2013).
[Crossref]

Xydeas, C.

C. Xydeas and V. Petrovic, “Objective image fusion performance measure,” Electron. Lett. 36, 308–309 (2000).
[Crossref]

Yan, P.

G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electron. Lett. 38, 313–315 (2002).
[Crossref]

Yan, X.

Yang, C.

Yang, H.

B. He, H. Yang, and S. Wang, “Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities,” J. Optimization Theory Applications 106, 337–356 (2000).
[Crossref]

Yang, K.

Z. Wei, Y. Han, M. Li, K. Yang, Y. Yang, Y. Luo, and S.-H. Ong, “A small UAV based multi-temporal image registration for dynamic agricultural terrace monitoring,” Remote Sens. 9, 904 (2017).
[Crossref]

K. Yang, A. Pan, Y. Yang, S. Zhang, S. H. Ong, and H. Tang, “Remote sensing image registration using multiple image features,” Remote Sens. 9, 581 (2017).
[Crossref]

Yang, T.

Yang, Y.

K. Yang, A. Pan, Y. Yang, S. Zhang, S. H. Ong, and H. Tang, “Remote sensing image registration using multiple image features,” Remote Sens. 9, 581 (2017).
[Crossref]

Z. Wei, Y. Han, M. Li, K. Yang, Y. Yang, Y. Luo, and S.-H. Ong, “A small UAV based multi-temporal image registration for dynamic agricultural terrace monitoring,” Remote Sens. 9, 904 (2017).
[Crossref]

Y. Yang, S. H. Ong, and K. W. C. Foong, “A robust global and local mixture distance based non-rigid point set registration,” Pattern Recognition 48, 156–173 (2015).
[Crossref]

Yi, M.

S. G. Kong, J. Heo, F. Boughorbel, Y. Zheng, B. R. Abidi, A. Koschan, M. Yi, and M. A. Abidi, “Multiscale fusion of visible and thermal IR images for illumination-invariant face recognition,” Int. J. Comput. Vision 71, 215–233 (2007).
[Crossref]

Yin, H.

S. Li, X. Kang, L. Fang, J. Hu, and H. Yin, “Pixel-level image fusion: Asurvey of the state of the art,” Inform. Fusion 33, 100–112 (2017).
[Crossref]

Yuille, A. L.

Y. Gao, J. Ma, and A. L. Yuille, “Semi-supervised sparse representation based classification for face recognition with insufficient labeled samples,” IEEE Trans. Image Process. 26, 2545–2560 (2017).
[Crossref]

Zhang, D.

G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electron. Lett. 38, 313–315 (2002).
[Crossref]

Zhang, S.

K. Yang, A. Pan, Y. Yang, S. Zhang, S. H. Ong, and H. Tang, “Remote sensing image registration using multiple image features,” Remote Sens. 9, 581 (2017).
[Crossref]

Zhao, J.

J. Ma, J. Zhao, Y. Ma, and J. Tian, “Non-rigid visible and infrared face registration via regularized Gaussian fields criterion,” Pattern Recognition 48, 772–784 (2015).
[Crossref]

J. Ma, J. Zhao, J. Tian, X. Bai, and Z. Tu, “Regularized vector field learning with sparse approximation for mismatch removal,” Pattern Recognition 46, 3519–3532 (2013).
[Crossref]

J. Zhao, H. Feng, Z. Xu, Q. Li, and T. Liu, “Detail enhanced multi-source fusion using visual weight map extraction based on multi scale edge preserving decomposition,” Opt. Commun. 287, 45–52 (2013).
[Crossref]

J. Ma, J. Zhao, H. Guo, J. Jiang, H. Zhou, and Y. Gao, “Locality preserving matching,” in International Joint Conference on Artificial Intelligence (2017), pp. 4492–4498.

Zheng, S.

Zheng, Y.

S. G. Kong, J. Heo, F. Boughorbel, Y. Zheng, B. R. Abidi, A. Koschan, M. Yi, and M. A. Abidi, “Multiscale fusion of visible and thermal IR images for illumination-invariant face recognition,” Int. J. Comput. Vision 71, 215–233 (2007).
[Crossref]

Zhou, H.

Zhou, Z.

Z. Zhou, M. Dong, X. Xie, and Z. Gao, “Fusion of infrared and visible images for night-vision context enhancement,” Appl. Opt. 55, 6480–6490 (2016).
[Crossref]

Z. Zhou, B. Wang, S. Li, and M. Dong, “Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters,” Inform. Fusion 30, 15–26 (2016).
[Crossref]

Zong, J.-G.

Appl. Opt. (3)

Def. Sci. J. (1)

V. Naidu, “Image fusion technique using multi-resolution singular value decomposition,” Def. Sci. J. 61, 479–484 (2011).
[Crossref]

Electron. Lett. (2)

G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electron. Lett. 38, 313–315 (2002).
[Crossref]

C. Xydeas and V. Petrovic, “Objective image fusion performance measure,” Electron. Lett. 36, 308–309 (2000).
[Crossref]

Found. Trends Mach. Learn. (1)

S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn. 3, 1–122 (2011).
[Crossref]

Graphical Models Image Process. (1)

H. Li, B. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models Image Process. 57, 235–245 (1995).
[Crossref]

IEEE Trans. Commun. (1)

P. Burt and E. Adelson, “The Laplacian pyramid as a compact image code,” IEEE Trans. Commun. 31, 532–540 (1983).
[Crossref]

IEEE Trans. Geosci. Remote Sens. (1)

M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Total variation spatial regularization for sparse hyperspectral unmixing,” IEEE Trans. Geosci. Remote Sens. 50, 4484–4502 (2012).
[Crossref]

IEEE Trans. Image Process. (3)

Y. Gao, J. Ma, and A. L. Yuille, “Semi-supervised sparse representation based classification for face recognition with insufficient labeled samples,” IEEE Trans. Image Process. 26, 2545–2560 (2017).
[Crossref]

S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Trans. Image Process. 22, 2864–2875 (2013).
[Crossref]

M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, “An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Trans. Image Process. 20, 681–695 (2011).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (1)

K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 1397–1409 (2013).
[Crossref]

Inf. Sci. (1)

J. Ma, J. Jiang, C. Liu, and Y. Li, “Feature guided Gaussian mixture model with semi-supervised EM and local geometric constraint for retinal image registration,” Inf. Sci. 417, 128–142 (2017).
[Crossref]

Inform. Fusion (7)

J. J. Lewis, R. J. O’Callaghan, S. G. Nikolov, D. R. Bull, and N. Canagarajah, “Pixel-and region-based image fusion with complex wavelets,” Inform. Fusion 8, 119–130 (2007).
[Crossref]

Z. Zhou, B. Wang, S. Li, and M. Dong, “Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters,” Inform. Fusion 30, 15–26 (2016).
[Crossref]

J. Ma, C. Chen, C. Li, and J. Huang, “Infrared and visible image fusion via gradient transfer and total variation minimization,” Inform. Fusion 31, 100–109 (2016).
[Crossref]

F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, “Remote sensing image fusion using the curvelet transform,” Inform. Fusion 8, 143–156 (2007).
[Crossref]

Y. Liu, S. Liu, and Z. Wang, “Multi-focus image fusion with dense sift,” Inform. Fusion 23, 139–155 (2015).
[Crossref]

S. Li, X. Kang, L. Fang, J. Hu, and H. Yin, “Pixel-level image fusion: Asurvey of the state of the art,” Inform. Fusion 33, 100–112 (2017).
[Crossref]

Y. Liu, S. Liu, and Z. Wang, “A general framework for image fusion based on multi-scale transform and sparse representation,” Inform. Fusion 24, 147–164 (2015).
[Crossref]

Int. J. Comput. Vision (1)

S. G. Kong, J. Heo, F. Boughorbel, Y. Zheng, B. R. Abidi, A. Koschan, M. Yi, and M. A. Abidi, “Multiscale fusion of visible and thermal IR images for illumination-invariant face recognition,” Int. J. Comput. Vision 71, 215–233 (2007).
[Crossref]

J. Approx. Theory (1)

E. J. Candès and D. L. Donoho, “Curvelets and curvilinear integrals,” J. Approx. Theory 113, 59–90 (2001).
[Crossref]

J. Math. Imaging Vis. (1)

A. Chambolle, “An algorithm for total variation minimization and applications,” J. Math. Imaging Vis. 20, 73–87 (2004).
[Crossref]

J. Opt. Soc. Am. A (3)

J. Optimization Theory Applications (2)

B. He, H. Yang, and S. Wang, “Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities,” J. Optimization Theory Applications 106, 337–356 (2000).
[Crossref]

S. Wang and L. Liao, “Decomposition method with a variable parameter for a class of monotone variational inequality problems,” J. Optimization Theory Applications 109, 415–429 (2001).
[Crossref]

Math. Program. (1)

J. Eckstein and D. P. Bertsekas, “On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators,” Math. Program. 55, 293–318 (1992).
[Crossref]

Multiscale Model. Simul. (1)

P. L. Combettes and V. R. Wajs, “Signal recovery by proximal forward-backward splitting,” Multiscale Model. Simul. 4, 1168–1200 (2005).
[Crossref]

Neurocomputing (1)

Y. Ma, J. Chen, C. Chen, F. Fan, and J. Ma, “Infrared and visible image fusion using total variation model,” Neurocomputing 202, 12–19 (2016).
[Crossref]

Opt. Commun. (1)

J. Zhao, H. Feng, Z. Xu, Q. Li, and T. Liu, “Detail enhanced multi-source fusion using visual weight map extraction based on multi scale edge preserving decomposition,” Opt. Commun. 287, 45–52 (2013).
[Crossref]

Patt. Recog. Lett. (1)

A. Toet, “Image fusion by a ratio of low-pass pyramid,” Patt. Recog. Lett. 9, 245–253 (1989).
[Crossref]

Pattern Recognition (3)

J. Ma, J. Zhao, Y. Ma, and J. Tian, “Non-rigid visible and infrared face registration via regularized Gaussian fields criterion,” Pattern Recognition 48, 772–784 (2015).
[Crossref]

J. Ma, J. Zhao, J. Tian, X. Bai, and Z. Tu, “Regularized vector field learning with sparse approximation for mismatch removal,” Pattern Recognition 46, 3519–3532 (2013).
[Crossref]

Y. Yang, S. H. Ong, and K. W. C. Foong, “A robust global and local mixture distance based non-rigid point set registration,” Pattern Recognition 48, 156–173 (2015).
[Crossref]

Phys. D (1)

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D 60, 259–268 (1992).
[Crossref]

Remote Sens. (2)

Z. Wei, Y. Han, M. Li, K. Yang, Y. Yang, Y. Luo, and S.-H. Ong, “A small UAV based multi-temporal image registration for dynamic agricultural terrace monitoring,” Remote Sens. 9, 904 (2017).
[Crossref]

K. Yang, A. Pan, Y. Yang, S. Zhang, S. H. Ong, and H. Tang, “Remote sensing image registration using multiple image features,” Remote Sens. 9, 581 (2017).
[Crossref]

Other (4)

J. Ma, J. Zhao, H. Guo, J. Jiang, H. Zhou, and Y. Gao, “Locality preserving matching,” in International Joint Conference on Artificial Intelligence (2017), pp. 4492–4498.

S. Boyd and L. Vandenberghe, Convex Optimization (Cambridge University, 2004).

J. M. Bioucas-Dias and M. A. Figueiredo, “Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing,” in 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (IEEE, 2010), pp. 1–4.

A. Toet, “TNO Image Fusion Dataset,” figshare (2014), https://figshare.com/articles/TNO_Image_Fusion_Dataset/1008029 .



Figures (3)

Fig. 1. Fusion results of gradient transfer and augmented Lagrangian fusion (ALF). The areas in the red boxes show that gradient transfer suffers from detail loss while ALF does not.

Fig. 2. Fusion results on the five image pairs bunker, lake, two men in front of house, soldier in trench, and sandpath (from left to right). From top to bottom: infrared images, visible images, and the fusion results of ALF, GTF [14], LP [17], Wavelet [18], DTCWT [20], MSVD [23], GFF [25], and Hybrid-MSD [22], respectively.

Fig. 3. Results of quantitative experiments with the metrics EN, MI, Q_G, and SD on the two image sequences Nato camp (left column) and Duine (right column). For all metrics, larger values indicate better performance; the numbers after the algorithm names in the legend are the average values of the corresponding metric (HMSD = Hybrid-MSD).

Tables (2)

Algorithm 1: Infrared and visible image fusion with variable splitting and augmented Lagrangian

Table 1. Runtime of the ALF and Nine Other Algorithms on the Nato camp and Duine Image Sequences

Equations (15)

(1)   $\|x - u\|_1 + \alpha \|x - v\|_1,$

(2)   $\|\nabla x - \nabla v\|_1.$

(3)   $\min_x \; \left( \|x - u\|_1 + \alpha \|x - v\|_1 \right) + \lambda_{\mathrm{TV}} \|\nabla x - \nabla v\|_1,$

(4)   $\min_X \; \|X - Y\|_1 + \alpha \|X\|_1 + \lambda_{\mathrm{TV}} \|HX\|_1,$

(5)   $HX = \begin{pmatrix} H_h X \\ H_v X \end{pmatrix},$

(6)   $\min_{U, V} \; g(U, V) \quad \text{s.t.} \quad GU + BV = 0,$

(7)   $g(U, V) = \|V_1 - Y\|_1 + \alpha \|V_2\|_1 + \lambda_{\mathrm{TV}} \|V_4\|_1, \quad G = \begin{pmatrix} I \\ I \\ I \\ 0 \end{pmatrix}, \quad B = \begin{pmatrix} -I & 0 & 0 & 0 \\ 0 & -I & 0 & 0 \\ 0 & 0 & -I & 0 \\ 0 & 0 & H & -I \end{pmatrix}.$

(8)   $L(U, V, D) = g(U, V) + \frac{\rho}{2} \|GU + BV - D\|_F^2,$

(9)   $U^{(k+1)} \leftarrow \frac{1}{3} \left( \xi_1^{(k)} + \xi_2^{(k)} + \xi_3^{(k)} \right),$

(10)  $V_1^{(k+1)} \leftarrow Y + \mathrm{soft}\!\left( U^{(k+1)} - Y - D_1^{(k)}, \tfrac{1}{\rho} \right),$

(11)  $V_2^{(k+1)} \leftarrow \mathrm{soft}\!\left( U^{(k+1)} - D_2^{(k)}, \tfrac{\alpha}{\rho} \right),$

(12)  $V_3^{(k+1)} \leftarrow (H^T H + I)^{-1} \left( U^{(k+1)} - D_3^{(k)} + H^T \xi_4^{(k)} \right),$

(13)  $V_4^{(k+1)} \leftarrow \mathrm{soft}\!\left( H V_3^{(k+1)} - D_4^{(k)}, \tfrac{\lambda_{\mathrm{TV}}}{\rho} \right).$

(14)  $D^{(k+1)} \leftarrow D^{(k)} - \left( GU^{(k+1)} + BV^{(k+1)} \right).$

(15)  $\|GU^{(k)} + BV^{(k)}\|_F < \epsilon,$
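The ADMM iteration above can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the authors' code: it takes $\xi_i = V_i + D_i$, uses circular-boundary forward differences for $H_h$ and $H_v$ so that $(H^T H + I)^{-1}$ can be applied via the FFT, and the parameter values (`alpha`, `lam_tv`, `rho`) are purely illustrative.

```python
import numpy as np

def soft(x, tau):
    # Soft-thresholding: proximal operator of tau * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def alf_fuse(u, v, alpha=0.5, lam_tv=0.4, rho=1.0, n_iter=200, eps=1e-4):
    """Sketch of the l1-TV fusion via variable splitting + augmented Lagrangian.

    u: infrared image, v: visible image (same-shape float arrays).
    Works on X = x - v with Y = u - v; returns the fused image x = X + v.
    """
    Y = (u - v).astype(float)
    m, n = Y.shape
    # Forward differences with circular boundary, and their adjoints.
    Hh = lambda X: np.roll(X, -1, axis=1) - X
    Hv = lambda X: np.roll(X, -1, axis=0) - X
    HhT = lambda G: np.roll(G, 1, axis=1) - G
    HvT = lambda G: np.roll(G, 1, axis=0) - G
    # Fourier diagonalization of (H^T H + I) for the V3 update.
    kh = np.zeros((m, n)); kh[0, 0] = -1.0; kh[0, -1] = 1.0
    kv = np.zeros((m, n)); kv[0, 0] = -1.0; kv[-1, 0] = 1.0
    denom = 1.0 + np.abs(np.fft.fft2(kh))**2 + np.abs(np.fft.fft2(kv))**2

    U = np.zeros_like(Y)
    V1, V2, V3 = np.zeros_like(Y), np.zeros_like(Y), np.zeros_like(Y)
    V4h, V4v = np.zeros_like(Y), np.zeros_like(Y)
    D1, D2, D3 = np.zeros_like(Y), np.zeros_like(Y), np.zeros_like(Y)
    D4h, D4v = np.zeros_like(Y), np.zeros_like(Y)

    for _ in range(n_iter):
        U = ((V1 + D1) + (V2 + D2) + (V3 + D3)) / 3.0          # Eq. (9)
        V1 = Y + soft(U - Y - D1, 1.0 / rho)                    # Eq. (10)
        V2 = soft(U - D2, alpha / rho)                          # Eq. (11)
        rhs = U - D3 + HhT(V4h + D4h) + HvT(V4v + D4v)          # Eq. (12)
        V3 = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        V4h = soft(Hh(V3) - D4h, lam_tv / rho)                  # Eq. (13)
        V4v = soft(Hv(V3) - D4v, lam_tv / rho)
        D1 -= U - V1                                            # Eq. (14)
        D2 -= U - V2
        D3 -= U - V3
        D4h -= Hh(V3) - V4h
        D4v -= Hv(V3) - V4v
        # Primal residual ||GU + BV||_F, Eq. (15).
        r = np.sqrt(np.sum((U - V1)**2 + (U - V2)**2 + (U - V3)**2
                           + (Hh(V3) - V4h)**2 + (Hv(V3) - V4v)**2))
        if r < eps:
            break
    return U + v
```

With `u == v` the data term drives `X` to zero and the sketch returns the visible image unchanged, which is a quick sanity check on the splitting.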
