Abstract

Imaging natural scenes under ill lighting conditions (e.g., low light, backlit, over-exposed front lit, or any combination of these) suffers from both over- and under-exposure at the same time, and processing such images often results in over- and under-enhancement. A single small image sensor with an ordinary optical lens can hardly provide satisfactory quality under ill lighting conditions. The challenge is to maintain visual smoothness between differently exposed regions while preserving color and contrast. The problem has been approached with various methods, including multiple sensors and handcrafted parameters, but existing models are limited to specific scenes (i.e., lighting conditions). Motivated by these challenges, in this paper we propose a deep image enhancement method for color images captured under ill lighting conditions. Input images are first decomposed into reflection and illumination maps with the proposed layer distribution loss net, and the illumination-blindness and structure-degradation problems are then addressed via these two components, respectively. Hidden degradation in reflection and illumination is tuned with a knowledge-based adaptive enhancement constraint (AEC) designed for ill illuminated images. The model maintains a balance of smoothness and helps suppress noise as well as over- and under-enhancement. Local consistency in illumination is achieved via a repairing operation performed in the proposed Repair-Net: the total variation operator is optimized to acquire local consistency, and the image gradient is guided with the proposed enhancement constraint. Finally, the product of the updated reflection and illumination maps reconstructs the enhanced image. Experiments are conducted under both very low exposure and ill illumination conditions, for which a new dataset is also proposed. Results of both experiments show that our method preserves structural and textural details better than the state of the art, suggesting that it is more practical for future visual applications.
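For readers who want the overall flow at a glance, the following is a minimal PyTorch-style sketch of the pipeline described above. The module names (`LDLN`, `RepairNet`), their interfaces, and the AEC placeholder are illustrative assumptions, not the authors' released implementation (see Ref. 38 for the official code).

```python
# Minimal sketch of the D2BI-Net pipeline described in the abstract.
# `ldln` and `repair_net` are hypothetical stand-ins for the paper's
# layer distribution loss net and Repair-Net; `apply_aec` is an
# illustrative placeholder for the adaptive enhancement constraint.
import torch
import torch.nn as nn

class D2BIPipeline(nn.Module):
    def __init__(self, ldln: nn.Module, repair_net: nn.Module):
        super().__init__()
        self.ldln = ldln              # I -> (reflection R, illumination L)
        self.repair_net = repair_net  # local-consistency repair of L

    @staticmethod
    def apply_aec(x: torch.Tensor, m: float = 0.5, n: float = 2.0) -> torch.Tensor:
        # Placeholder AEC: a monotone gain 1 - m*x^n that boosts dark
        # regions more than bright ones, clamped to keep values in [0, 1].
        return torch.clamp(x * (1.0 + (1.0 - m * x.pow(n)) * 0.1), 0.0, 1.0)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # 1) Decompose the ill-exposed input into reflection and illumination.
        reflection, illumination = self.ldln(image)
        # 2) Tune hidden degradation in both maps with the AEC.
        reflection = self.apply_aec(reflection)
        illumination = self.apply_aec(illumination)
        # 3) Smooth/repair the illumination map for local consistency.
        illumination = self.repair_net(illumination)
        # 4) Reconstruct the enhanced image as the element-wise product.
        return torch.clamp(reflection * illumination, 0.0, 1.0)
```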

© 2021 Optical Society of America


References


  1. E. H. Land and J. J. McCann, “Lightness and retinex theory,” J. Opt. Soc. Am. 61, 1–11 (1971).
  2. N. A. Riza, J. P. La Torre, and M. J. Amin, “CAOS-CMOS camera,” Opt. Express 24, 13444–13458 (2016).
  3. Y. J. Jung, “Enhancement of low light level images using color-plus-mono dual camera,” Opt. Express 25, 12029–12051 (2017).
  4. L. Chen, J. Li, and C. P. Chen, “Regional multifocus image fusion using sparse representation,” Opt. Express 21, 5182–5197 (2013).
  5. G. Chen, L. Li, W. Jin, J. Zhu, and F. Shi, “Weighted sparse representation multi-scale transform fusion algorithm for high dynamic range imaging with a low-light dual-channel camera,” Opt. Express 27, 10564–10579 (2019).
  6. Z. Niu, J. Shi, L. Sun, Y. Zhu, J. Fan, and G. Zeng, “Photon-limited face image super-resolution based on deep learning,” Opt. Express 26, 22773–22782 (2018).
  7. S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman, and K. Zuiderveld, “Adaptive histogram equalization and its variations,” Comput. Vis. Graph. Image Process. 39, 355–368 (1987).
  8. W. Wang, C. Zhang, and M. K. Ng, “Variational model for simultaneously image denoising and contrast enhancement,” Opt. Express 28, 18751–18777 (2020).
  9. S. Wang, J. Zheng, H.-M. Hu, and B. Li, “Naturalness preserved enhancement algorithm for non-uniform illumination images,” IEEE Trans. Image Process. 22, 3538–3548 (2013).
  10. G. Eilertsen, J. Kronander, G. Denes, R. K. Mantiuk, and J. Unger, “HDR image reconstruction from a single exposure using deep CNNs,” ACM Trans. Graph. 36, 178 (2017).
  11. M. D. Fairchild, “The HDR photographic survey,” in Color Imaging Conference Final Program and Proceedings (2007), pp. 233–238.
  12. F. Abedi, Q. Liu, and Y. Yang, “Multi-view high dynamic range reconstruction via gain estimation,” in IEEE Visual Communications and Image Processing (VCIP) (IEEE, 2019), pp. 1–4.
  13. Q. Shan, J. Jia, and M. S. Brown, “Globally optimized linear windowed tone mapping,” IEEE Trans. Vis. Comput. Graph. 16, 663–675 (2010).
  14. Q. Zhao, P. Tan, Q. Dai, L. Shen, E. Wu, and S. Lin, “A closed-form solution to retinex with nonlocal texture constraints,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 1437–1444 (2012).
  15. X. Zhang, Y. Ma, F. Fan, Y. Zhang, and J. Huang, “Infrared and visible image fusion via saliency analysis and local edge-preserving multi-scale decomposition,” J. Opt. Soc. Am. A 34, 1400–1410 (2017).
  16. H. Guo, Y. Ma, X. Mei, and J. Ma, “Infrared and visible image fusion based on total variation and augmented Lagrangian,” J. Opt. Soc. Am. A 34, 1961–1968 (2017).
  17. X. Fu, D. Zeng, Y. Huang, Y. Liao, X. Ding, and J. Paisley, “A fusion-based enhancing method for weakly illuminated images,” Signal Process. 129, 82–96 (2016).
  18. Z. Ying, G. Li, and W. Gao, “A bio-inspired multi-exposure fusion framework for low-light image enhancement,” arXiv:1711.00591 (2017).
  19. Z. Li and X. Wu, “Learning-based restoration of backlit images,” IEEE Trans. Image Process. 27, 976–986 (2017).
  20. E. H. Land, “Recent advances in retinex theory and some implications for cortical computations: color vision and the natural image,” Proc. Natl. Acad. Sci. USA 80, 5163–5169 (1983).
  21. E. H. Land, “The retinex theory of color vision,” Sci. Am. 237, 108–129 (1977).
  22. D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, “A multiscale retinex for bridging the gap between color images and the human observation of scenes,” IEEE Trans. Image Process. 6, 965–976 (1997).
  23. M. K. Ng and W. Wang, “A total variation model for retinex,” SIAM J. Imaging Sci. 4, 345–365 (2011).
  24. M. Li, J. Liu, W. Yang, X. Sun, and Z. Guo, “Structure-revealing low-light image enhancement via robust retinex model,” IEEE Trans. Image Process. 27, 2828–2841 (2018).
  25. X. Guo, Y. Li, and H. Ling, “LIME: low-light image enhancement via illumination map estimation,” IEEE Trans. Image Process. 26, 982–993 (2016).
  26. X. Ren, M. Li, W.-H. Cheng, and J. Liu, “Joint enhancement and denoising method via sequential decomposition,” in IEEE International Symposium on Circuits and Systems (ISCAS) (IEEE, 2018), pp. 1–5.
  27. L. Tao, C. Zhu, J. Song, T. Lu, H. Jia, and X. Xie, “Low-light image enhancement using CNN and bright channel prior,” in IEEE International Conference on Image Processing (ICIP) (IEEE, 2017), pp. 3215–3219.
  28. L. Tao, C. Zhu, G. Xiang, Y. Li, H. Jia, and X. Xie, “LLCNN: a convolutional neural network for low-light image enhancement,” in IEEE Visual Communications and Image Processing (VCIP) (IEEE, 2017), pp. 1–4.
  29. B. Cai, X. Xu, K. Guo, K. Jia, B. Hu, and D. Tao, “A joint intrinsic-extrinsic prior model for retinex,” in IEEE International Conference on Computer Vision (2017), pp. 4000–4009.
  30. F. Lv, F. Lu, J. Wu, and C. Lim, “MBLLEN: low-light image/video enhancement using CNNs,” in British Machine Vision Conference (2018), p. 220.
  31. C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 3291–3300.
  32. X. Fu, D. Zeng, Y. Huang, X.-P. Zhang, and X. Ding, “A weighted variational model for simultaneous reflectance and illumination estimation,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 2782–2790.
  33. C. Wei, W. Wang, W. Yang, and J. Liu, “Deep retinex decomposition for low-light enhancement,” in British Machine Vision Conference (2018).
  34. Y. Zhang, J. Zhang, and X. Guo, “Kindling the darkness: a practical low-light image enhancer,” in Proceedings of the 27th ACM International Conference on Multimedia (2019), pp. 1632–1640.
  35. L. Xu, Q. Yan, Y. Xia, and J. Jia, “Structure extraction from texture via relative total variation,” ACM Trans. Graph. 31, 139 (2012).
  36. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 2341–2353 (2010).
  37. J. Xu, Y. Hou, D. Ren, L. Liu, F. Zhu, M. Yu, H. Wang, and L. Shao, “STAR: a structure and texture aware retinex model,” IEEE Trans. Image Process. 29, 5022–5037 (2020).
  38. https://github.com/imrizvankhan/D2BI.
  39. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
  40. A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a ‘completely blind’ image quality analyzer,” IEEE Signal Process. Lett. 20, 209–212 (2012).
  41. C. Lee, C. Lee, and C.-S. Kim, “Contrast enhancement based on layered difference representation,” in IEEE International Conference on Image Processing (IEEE, 2012), pp. 965–968.


Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (13)

Fig. 1. Framework of the proposed approach. First, the LDLN splits the input image $I$ into reflection $R$ and illumination $L$. Next, $R$ and $L$ are updated with adaptive enhancement constraint (AEC) operations. A Repair-Net performs an illumination smoothness operation followed by AEC filtering. The dot product of the enhanced reflection and illumination maps produces the refined output image.
Fig. 2. Image division into reflection and illumination with the layer distribution loss net for under-exposed $I$ and well-exposed ${I_v}$ input images.
Fig. 3. Advantages of the proposed adaptive enhancement constraint for total variation optimization. (a) Input images, (b) output for TV loss without AEC, and (c) output results with the proposed adaptive enhancement constraint for our D2BI-Net.
Fig. 4. Illumination smoothness operation in Repair-Net.
Fig. 5. Structural similarity index measure versus number of epochs on the LOL dataset.
Fig. 6. Architecture of the capturing device with sample images, ranging from under-exposed through well-exposed to over-exposed (UXOV dataset).
Fig. 7. Comparison of backlit methods, LBR [19], FEW [17], and D2BI-Net (ours), on backlit2ndmodel (first row) and backlitcone (second row).
Fig. 8. Comparison of D2BI-Net with LBR [19] and FEW [17], using ill illuminated inputs modelv1 (first row) and soloemov4 (second row) from the UXOV dataset.
Fig. 9. Comparison on UXOV dataset test images. Columns show input, Retinex, KinD, LBR, SBLI, D2BI-Net (ours), and ground-truth images, respectively.
Fig. 10. Comparison of SLIME, JED, Retinex, and KinD with D2BI-Net (ours) on low-light LOL dataset evaluation images.
Fig. 11. First column shows ill illuminated input images: dog (right low lit), model with color chart (front lit), and cone (back low lit). Second column shows enhanced illumination, and third enhanced reflection. Fourth column shows the illumination transmission for the depth-of-residue map, and last is the final output image.
Fig. 12. Results on the DICM dataset. First column shows the input, second the enhanced illumination, and third our reflection. Fourth column shows the depth-of-residue map, and the final column our refined output.
Fig. 13. Ablation study of the effects of AEC in D2BI-Net. (a) Input images decomposed into (b) reflection and (c) illumination with the layer distribution loss net (LDLN). (d) Output images with LDLN and (e) final output images with LDLN + Repair-Net (RN). (f)–(h) Reflection, illumination, and output images with LDLN + AEC, respectively. (i) Illumination map refinement with RN + AEC and (j) output with LDLN + AEC + RN.
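Figures 3 and 4 illustrate the gradient-guided total variation smoothing applied to the illumination map. Below is a minimal sketch of one common edge-aware TV formulation in this spirit; the exponential weighting scheme is an illustrative assumption, not the paper's exact Repair-Net operator.

```python
# Edge-aware (gradient-guided) TV penalty on an illumination map: smooth
# the illumination while relaxing the penalty across strong image edges.
# The weighting exp(-|∇I|/eps) is a common choice, assumed here for
# illustration only.
import torch

def guided_tv_loss(L: torch.Tensor, I_gray: torch.Tensor, eps: float = 0.01) -> torch.Tensor:
    """L, I_gray: (B, 1, H, W) illumination map and grayscale input image."""
    def grads(x):
        # Horizontal and vertical forward differences.
        return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

    dLx, dLy = grads(L)
    dIx, dIy = grads(I_gray)
    # Small weights where the input image has strong edges, so structure
    # is preserved; large weights in flat regions, enforcing smoothness.
    wx = torch.exp(-torch.abs(dIx) / eps)
    wy = torch.exp(-torch.abs(dIy) / eps)
    return (wx * torch.abs(dLx)).mean() + (wy * torch.abs(dLy)).mean()
```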

Tables (6)

Table 1. Objective Metrics, Average SSIM, PSNR, and NIQE Using UXOV Training and Test Images
Table 2. No-Reference Image Quality (NIQE) Assessment for Objects in Fig. 7, with Average NIQE on UXOV Dataset Test Images
Table 3. Comparison of Objective Quality Metrics, PSNR and SSIM, for Retinex, KinD, LBR, FEW, and D2BI-Net (Ours), Using UXOV Dataset Test Images
Table 4. Objective Quality Evaluation Metrics, PSNR and SSIM, of Various Methods Using LOL Dataset Sample Images Shown in Fig. 10
Table 5. Comparison of SSIM and PSNR with State-of-the-Art Methods Using LOL Dataset Evaluation Images
Table 6. Objective Comparison, PSNR and SSIM, Based on Ablation Study Using Our UXOV Dataset Test Images

Equations (8)

$$I = R \circ L.$$
$$I = R \circ L + \gamma.$$
$$\mathcal{L} = \mu \, \mathcal{L}_{\mathrm{DLR}} + \mu_{R_r} \mathcal{L}_{R_r} + \mu_{\mathrm{LSAT}} \mathcal{L}_{\mathrm{LSAT}},$$
$$\phi(L) = 1 - (mL^{n}), \qquad \psi(R) = 1 - (mR^{n}),$$
$$L\big|_{p_{i \pm 1}} = L_i \pm \big(\zeta(\lambda m L_i)^{n}\big), \qquad R\big|_{p_{i \pm 1}} = R_i \pm \big(\zeta(\lambda m R_i)^{n}\big),$$
$$\mathcal{L}_{\mathrm{LSAT}} = \big(L + \zeta(\lambda m L)^{n}\big) \exp\!\big(\varphi \big(R + \zeta(\lambda m R)^{n}\big)\big).$$
$$\mathcal{L}_o = \mathcal{L}_c + \mathcal{L}_{\mathrm{LSAT}},$$
$$\mathcal{L}_c = \big\| \hat{R} \circ \hat{L} - I \big\|_1.$$
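As a rough illustration of how the composite objective above could be evaluated, the following PyTorch sketch implements the reconstruction term and the LSAT term as written here; all coefficients (zeta, lam, m, n, phi) are hypothetical placeholders, not values from the paper.

```python
# Hedged sketch of the composite loss above. The helper `aec_term` and
# every coefficient default are illustrative assumptions.
import torch

def aec_term(x: torch.Tensor, zeta: float = 1.0, lam: float = 0.8,
             m: float = 0.5, n: float = 2.0) -> torch.Tensor:
    # zeta * (lam * m * x)^n: the perturbation term appearing in L_LSAT.
    return zeta * (lam * m * x).pow(n)

def lsat_loss(L: torch.Tensor, R: torch.Tensor, phi: float = 0.1) -> torch.Tensor:
    # (L + zeta(lam m L)^n) * exp(phi (R + zeta(lam m R)^n)), pixel-averaged;
    # the sign and magnitude of phi are assumptions.
    return ((L + aec_term(L)) * torch.exp(phi * (R + aec_term(R)))).mean()

def reconstruction_loss(R_hat: torch.Tensor, L_hat: torch.Tensor,
                        I: torch.Tensor) -> torch.Tensor:
    # L_c = || R_hat ∘ L_hat - I ||_1 (element-wise product, L1 norm).
    return torch.abs(R_hat * L_hat - I).mean()

def total_loss(R_hat: torch.Tensor, L_hat: torch.Tensor,
               I: torch.Tensor) -> torch.Tensor:
    # L_o = L_c + L_LSAT.
    return reconstruction_loss(R_hat, L_hat, I) + lsat_loss(L_hat, R_hat)
```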
