Abstract

In digital photography, improving image quality in low-light shooting is a key user need. Unfortunately, conventional smartphone cameras, which use a single small image sensor, cannot deliver satisfactory quality in low-light images. A color-plus-mono dual camera, consisting of two horizontally separated image sensors that simultaneously capture a color and a mono image of the same scene, can improve the quality of low-light images. However, incorrect fusion of the color and mono image pair can also have negative effects, such as introducing severe visual artifacts into the fused image. This paper proposes a selective image fusion technique that applies adaptive guided filter-based denoising and selective detail transfer only to those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. Using an experimental color-plus-mono camera system, we demonstrate that BJND-aware denoising and selective detail transfer improve image quality in low-light shooting.
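The core pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses the standard guided filter (He et al.) to denoise the color luminance with the mono image as guidance, and a simple local-mean dissimilarity threshold (`tau`) as a stand-in for the paper's dissimilarity measure and BJND analysis when deciding which pixels receive the transferred mono detail. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def box_mean(x, r):
    # Mean over a (2r+1)x(2r+1) window, edge-padded, via integral image.
    k = 2 * r + 1
    xp = np.pad(x, r, mode="edge").astype(np.float64)
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r, eps):
    # Standard guided filter: output q = mean(a)*I + mean(b),
    # where a, b are per-window linear coefficients fit to p given guidance I.
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)

def selective_fuse(color_luma, mono, r=8, eps=1e-3, tau=0.1):
    # 1) Denoise the noisy color luminance, guided by the cleaner mono image.
    denoised = guided_filter(mono, color_luma, r, eps)
    # 2) Mark pixels as reliable when the two views agree locally
    #    (illustrative proxy for the dissimilarity/BJND analysis).
    dissim = np.abs(box_mean(color_luma, r) - box_mean(mono, r))
    reliable = dissim < tau
    # 3) Transfer the mono high-frequency detail layer only at reliable pixels;
    #    elsewhere, fall back to the original color luminance.
    detail = mono - box_mean(mono, r)
    return np.where(reliable, denoised + detail, color_luma)
```

In practice the two views would first be rectified and disparity-compensated (as in the paper's experimental setup) before any per-pixel comparison is meaningful; the sketch assumes already-aligned inputs.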

© 2017 Optical Society of America


References


  1. A. Chakrabarti, W. T. Freeman, and T. Zickler, “Rethinking color cameras,” in Proceedings of International Conference on Computational Photography (IEEE, 2014), pp. 1–8.
  2. Point Grey, “Capturing consistent color,” https://www.ptgrey.com/truecolor.
  3. R. S. Blum, Z. Xue, and Z. Zhang, “An overview of image fusion,” in Multi-Sensor Image Fusion and Its Applications (CRC, 2005).
  4. A. P. James and B. V. Dasarathy, “Medical image fusion: a survey of the state of the art,” Inf. Fusion 19, 4–19 (2014).
  5. Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, “A comparative analysis of image fusion methods,” IEEE Trans. Geosci. Remote Sens. 43(6), 1391–1402 (2005).
  6. L. Chen, J. Li, and C. L. P. Chen, “Regional multifocus image fusion using sparse representation,” Opt. Express 21(4), 5182–5197 (2013).
  7. L. Guo, M. Dai, and M. Zhu, “Multifocus color image fusion based on quaternion curvelet transform,” Opt. Express 20(17), 18846–18860 (2012).
  8. P. J. Burt and R. J. Kolczynski, “Enhanced image capture through fusion,” in Proceedings of the 4th International Conference on Computer Vision (IEEE, 1993), pp. 173–182.
  9. G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23(3), 664–672 (2004).
  10. Y. J. Jung, H. G. Kim, and Y. M. Ro, “Critical binocular asymmetry measure for the perceptual quality assessment of synthesized stereo 3D images in view synthesis,” IEEE Trans. Circ. Syst. Video Tech. 26(7), 1201–1214 (2016).
  11. Y. Zhao, Z. Chen, C. Zhu, Y. P. Tan, and L. Yu, “Binocular just-noticeable-difference model for stereoscopic images,” IEEE Signal Process. Lett. 18(1), 19–22 (2011).
  12. Y. J. Jung, H. Sohn, S. Lee, F. Speranza, and Y. M. Ro, “Visual importance- and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
  13. J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26(3), 96 (2007).
  14. Q. Yang, R. Yang, J. Davis, and D. Nister, “Spatial-depth super resolution for range images,” in Proceedings of International Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.
  15. Q. Yan, X. Shen, L. Xu, S. Zhuo, X. Zhang, L. Shen, and J. Jia, “Cross-field joint image restoration via scale map,” in Proceedings of International Conference on Computer Vision (IEEE, 2013), pp. 1537–1544.
  16. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013).
  17. X. Shen, C. Zhou, L. Xu, and J. Jia, “Mutual-structure for joint filtering,” in Proceedings of International Conference on Computer Vision (IEEE, 2015), pp. 3406–3414.
  18. E. Eisemann and F. Durand, “Flash photography enhancement via intrinsic relighting,” ACM Trans. Graph. 23(3), 673–678 (2004).
  19. Q. Zhang, X. Shen, L. Xu, and J. Jia, “Rolling guidance filter,” in Proceedings of European Conference on Computer Vision (Springer, 2014), pp. 815–830.
  20. C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of International Conference on Computer Vision (IEEE, 1998), pp. 839–846.
  21. Point Grey, “Color USB3 vision,” http://www.ptgrey.com/flea3-13-mp-color-usb3-vision-sony-imx035-camera.
  22. Point Grey, “Mono USB3 vision,” http://www.ptgrey.com/flea3-13-mp-mono-usb3-vision-sony-imx035-camera.
  23. Point Grey, “FlyCapture SDK,” https://www.ptgrey.com/flycapture-sdk.
  24. Y. J. Jung, H. Sohn, and Y. M. Ro, “Visual discomfort visualizer using stereo vision and time-of-flight depth cameras,” IEEE Trans. Consum. Electron. 58(2), 246–254 (2012).
  25. D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47(1), 7–42 (2002).
  26. A. Chambolle and T. Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging,” J. Math. Imaging Vis. 40(1), 120–145 (2011).
  27. H. Sohn, Y. J. Jung, S. Lee, F. Speranza, and Y. M. Ro, “Visual comfort amelioration technique for stereoscopic images: disparity remapping to mitigate global and local discomfort causes,” IEEE Trans. Circ. Syst. Video Tech. 24(5), 745–758 (2014).
  28. M. Lang, A. Hornung, O. Wang, S. Poulakos, A. Smolic, and M. Gross, “Nonlinear disparity mapping for stereoscopic 3D,” ACM Trans. Graph. 29(4), 75 (2010).
  29. S.-W. Jung, J.-Y. Jeong, and S.-J. Ko, “Sharpness enhancement of stereo images using binocular just-noticeable difference,” IEEE Trans. Image Process. 21(3), 1191–1199 (2012).
  30. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).
  31. Z. Liu, Y. Shan, and Z. Zhang, “Expressive expression mapping with ratio images,” in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (ACM, 2001), pp. 271–276.
  32. A. Toet, “Natural colour mapping for multiband night vision imagery,” Inf. Fusion 4(3), 155–166 (2003).
  33. X. Bai, F. Zhou, and B. Xue, “Fusion of infrared and visual images through region extraction by using multi scale center-surround top-hat transform,” Opt. Express 19(9), 8444–8457 (2011).
  34. CVIPLab, “Color and mono image database,” https://sites.google.com/site/gachoncvip/projects/dual-camera.
  35. C. Yang, J. Zhang, X. Wang, and X. Liu, “A novel similarity based quality metric for image fusion,” Inf. Fusion 9(2), 156–160 (2008).
  36. ITU-R BT.500-11, “Methodology for the subjective assessment of the quality of television pictures” (2002).
  37. H. Sohn, Y. J. Jung, and Y. M. Ro, “Crosstalk reduction in stereoscopic 3D displays: disparity adjustment using crosstalk visibility index for crosstalk cancellation,” Opt. Express 22(3), 3375–3392 (2014).
  38. Y. Song, L. Bao, X. Xu, and Q. Yang, “Decolorization: is rgb2gray() out?” in Proceedings of SIGGRAPH Asia Technical Briefs (ACM, 2013), p. 15.
  39. C. Lu, L. Xu, and J. Jia, “Contrast preserving decolorization with perception-based metrics,” Int. J. Comput. Vis. 110(2), 222–239 (2014).

2016 (1)

Y. J. Jung, H. G. Kim, and Y. M. Ro, “Critical binocular asymmetry measure for the perceptual quality assessment of synthesized stereo 3D images in view synthesis,” IEEE Trans. Circ. Syst. Video Tech. 26(7), 1201–1214 (2016).
[Crossref]

2014 (4)

H. Sohn, Y. J. Jung, S. Lee, F. Speranza, and Y. M. Ro, “Visual comfort amelioration technique for stereoscopic images: disparity remapping to mitigate global and local discomfort causes,” IEEE Trans. Circ. Syst. Video Tech. 24(5), 745–758 (2014).
[Crossref]

A. P. James and B. V. Dasarathy, “Medical image fusion: a survey of the state of the art,” Inf. Fusion 19, 4–19 (2014).
[Crossref]

C. Lu, L. Xu, and J. Jia, “Contrast preserving decolorization with perception-based metrics,” Int. J. Comput. Vis. 110(2), 222–239 (2014).
[Crossref]

H. Sohn, Y. J. Jung, and Y. Man Ro, “Crosstalk reduction in stereoscopic 3D displays: disparity adjustment using crosstalk visibility index for crosstalk cancellation,” Opt. Express 22(3), 3375–3392 (2014).
[Crossref] [PubMed]

2013 (3)

L. Chen, J. Li, and C. L. P. Chen, “Regional multifocus image fusion using sparse representation,” Opt. Express 21(4), 5182–5197 (2013).
[Crossref] [PubMed]

Y. J. Jung, H. Sohn, S. Lee, F. Speranza, and Y. M. Ro, “Visual importance- and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013).
[Crossref] [PubMed]

2012 (3)

Y. J. Jung, H. Sohn, and Y. M. Ro, “Visual discomfort visualizer using stereo vision and time-of-flight depth cameras,” IEEE Trans. Consum. Electron. 58(2), 246–254 (2012).
[Crossref]

S.-W. Jung, J.-Y. Jeong, and S.-J. Ko, “Sharpness enhancement of stereo images using binocular just-noticeable difference,” IEEE Trans. Image Process. 21(3), 1191–1199 (2012).
[Crossref] [PubMed]

L. Guo, M. Dai, and M. Zhu, “Multifocus color image fusion based on quaternion curvelet transform,” Opt. Express 20(17), 18846–18860 (2012).
[Crossref] [PubMed]

2011 (3)

X. Bai, F. Zhou, and B. Xue, “Fusion of infrared and visual images through region extraction by using multi scale center-surround top-hat transform,” Opt. Express 19(9), 8444–8457 (2011).
[Crossref] [PubMed]

A. Chambolle and T. Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging,” J. Math. Imaging Vis. 40(1), 120–145 (2011).
[Crossref]

Y. Zhao, Z. Chen, C. Zhu, Y. P. Tan, and L. Yu, “Binocular just-noticeable-difference model for stereoscopic images,” IEEE Signal Process. Lett. 18(1), 19–22 (2011).
[Crossref]

2010 (1)

M. Lang, A. Hornung, O. Wang, S. Poulakos, A. Smolic, and M. Gross, “Nonlinear disparity mapping for stereoscopic 3D,” ACM Trans. Graph. 29(4), 75 (2010).
[Crossref]

2008 (1)

C. Yang, J. Zhang, X. Wang, and X. Liu, “A novel similarity based quality metric for image fusion,” Inf. Fusion 9(2), 156–160 (2008).
[Crossref]

2007 (1)

J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26(3), 96 (2007).
[Crossref]

2005 (1)

Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, “A comparative analysis of image fusion methods,” IEEE Trans. Geosci. Remote Sens. 43(6), 1391–1402 (2005).
[Crossref]

2004 (3)

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).
[Crossref] [PubMed]

G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23(3), 664–672 (2004).
[Crossref]

E. Eisemann and F. Durand, “Flash photography enhancement via intrinsic relighting,” ACM Trans. Graph. 23(3), 673–678 (2004).
[Crossref]

2003 (1)

A. Toet, “Natural colour mapping for multiband night vision imagery,” Inf. Fusion 4(3), 155–166 (2003).
[Crossref]

2002 (1)

D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47(1), 7–42 (2002).
[Crossref]

Agrawala, M.

G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23(3), 664–672 (2004).
[Crossref]

Armenakis, C.

Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, “A comparative analysis of image fusion methods,” IEEE Trans. Geosci. Remote Sens. 43(6), 1391–1402 (2005).
[Crossref]

Bai, X.

Bao, L.

Y. Song, L. Bao, X. Xu, and Q. Yang, “Decolorization: is rgb2gray () out?” in Proceedings of SIGGRAPH Asia Technical Briefs (ACM, 2013), p. 15.

Bovik, A. C.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).
[Crossref] [PubMed]

Chakrabarti, A.

A. Chakrabarti, W. T. Freeman, and T. Zickler, “Rethinking color cameras,” in Proceedings of International Conference on Computational Photography (IEEE, 2014), pp. 1–8.

Chambolle, A.

A. Chambolle and T. Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging,” J. Math. Imaging Vis. 40(1), 120–145 (2011).
[Crossref]

Chen, C. L. P.

Chen, L.

Chen, Z.

Y. Zhao, Z. Chen, C. Zhu, Y. P. Tan, and L. Yu, “Binocular just-noticeable-difference model for stereoscopic images,” IEEE Signal Process. Lett. 18(1), 19–22 (2011).
[Crossref]

Cohen, M.

G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23(3), 664–672 (2004).
[Crossref]

Cohen, M. F.

J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26(3), 96 (2007).
[Crossref]

Dai, M.

Dasarathy, B. V.

A. P. James and B. V. Dasarathy, “Medical image fusion: a survey of the state of the art,” Inf. Fusion 19, 4–19 (2014).
[Crossref]

Davis, J.

Q. Yang, R. Yang, J. Davis, and D. Nister, “Spatial-depth super resolution for range images,” in Proceedings of International Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

Durand, F.

E. Eisemann and F. Durand, “Flash photography enhancement via intrinsic relighting,” ACM Trans. Graph. 23(3), 673–678 (2004).
[Crossref]

Eisemann, E.

E. Eisemann and F. Durand, “Flash photography enhancement via intrinsic relighting,” ACM Trans. Graph. 23(3), 673–678 (2004).
[Crossref]

Freeman, W. T.

A. Chakrabarti, W. T. Freeman, and T. Zickler, “Rethinking color cameras,” in Proceedings of International Conference on Computational Photography (IEEE, 2014), pp. 1–8.

Gross, M.

M. Lang, A. Hornung, O. Wang, S. Poulakos, A. Smolic, and M. Gross, “Nonlinear disparity mapping for stereoscopic 3D,” ACM Trans. Graph. 29(4), 75 (2010).
[Crossref]

Guo, L.

He, K.

K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013).
[Crossref] [PubMed]

Hoppe, H.

G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23(3), 664–672 (2004).
[Crossref]

Hornung, A.

M. Lang, A. Hornung, O. Wang, S. Poulakos, A. Smolic, and M. Gross, “Nonlinear disparity mapping for stereoscopic 3D,” ACM Trans. Graph. 29(4), 75 (2010).
[Crossref]

James, A. P.

A. P. James and B. V. Dasarathy, “Medical image fusion: a survey of the state of the art,” Inf. Fusion 19, 4–19 (2014).
[Crossref]

Jeong, J.-Y.

S.-W. Jung, J.-Y. Jeong, and S.-J. Ko, “Sharpness enhancement of stereo images using binocular just-noticeable difference,” IEEE Trans. Image Process. 21(3), 1191–1199 (2012).
[Crossref] [PubMed]

Jia, J.

C. Lu, L. Xu, and J. Jia, “Contrast preserving decolorization with perception-based metrics,” Int. J. Comput. Vis. 110(2), 222–239 (2014).
[Crossref]

X. Shen, C. Zhou, L. Xu, and J. Jia, “Mutual-structure for joint filtering,” in Proceedings of International Conference on Computer Vision (IEEE, 2015), pp. 3406–3414.

Q. Yan, X. Shen, L. Xu, S. Zhuo, X. Zhang, L. Shen, and J. Jia, “Cross-field joint image restoration via scale map,” in Proceedings of International Conference on Computer Vision (IEEE, 2013), pp. 1537–1544.
[Crossref]

Q. Zhang, X. Shen, L. Xu, and J. Jia, “Rolling guidance filter,” in Proceedings of European Conference on Computer Vision (Springer, 2014), pp. 815–830.

Jung, S.-W.

S.-W. Jung, J.-Y. Jeong, and S.-J. Ko, “Sharpness enhancement of stereo images using binocular just-noticeable difference,” IEEE Trans. Image Process. 21(3), 1191–1199 (2012).
[Crossref] [PubMed]

Jung, Y. J.

Y. J. Jung, H. G. Kim, and Y. M. Ro, “Critical binocular asymmetry measure for the perceptual quality assessment of synthesized stereo 3D images in view synthesis,” IEEE Trans. Circ. Syst. Video Tech. 26(7), 1201–1214 (2016).
[Crossref]

H. Sohn, Y. J. Jung, S. Lee, F. Speranza, and Y. M. Ro, “Visual comfort amelioration technique for stereoscopic images: disparity remapping to mitigate global and local discomfort causes,” IEEE Trans. Circ. Syst. Video Tech. 24(5), 745–758 (2014).
[Crossref]

H. Sohn, Y. J. Jung, and Y. Man Ro, “Crosstalk reduction in stereoscopic 3D displays: disparity adjustment using crosstalk visibility index for crosstalk cancellation,” Opt. Express 22(3), 3375–3392 (2014).
[Crossref] [PubMed]

Y. J. Jung, H. Sohn, S. Lee, F. Speranza, and Y. M. Ro, “Visual importance- and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

Y. J. Jung, H. Sohn, and Y. M. Ro, “Visual discomfort visualizer using stereo vision and time-of-flight depth cameras,” IEEE Trans. Consum. Electron. 58(2), 246–254 (2012).
[Crossref]

Kim, H. G.

Y. J. Jung, H. G. Kim, and Y. M. Ro, “Critical binocular asymmetry measure for the perceptual quality assessment of synthesized stereo 3D images in view synthesis,” IEEE Trans. Circ. Syst. Video Tech. 26(7), 1201–1214 (2016).
[Crossref]

Ko, S.-J.

S.-W. Jung, J.-Y. Jeong, and S.-J. Ko, “Sharpness enhancement of stereo images using binocular just-noticeable difference,” IEEE Trans. Image Process. 21(3), 1191–1199 (2012).
[Crossref] [PubMed]

Kopf, J.

J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26(3), 96 (2007).
[Crossref]

Lang, M.

M. Lang, A. Hornung, O. Wang, S. Poulakos, A. Smolic, and M. Gross, “Nonlinear disparity mapping for stereoscopic 3D,” ACM Trans. Graph. 29(4), 75 (2010).
[Crossref]

Lee, S.

H. Sohn, Y. J. Jung, S. Lee, F. Speranza, and Y. M. Ro, “Visual comfort amelioration technique for stereoscopic images: disparity remapping to mitigate global and local discomfort causes,” IEEE Trans. Circ. Syst. Video Tech. 24(5), 745–758 (2014).
[Crossref]

Y. J. Jung, H. Sohn, S. Lee, F. Speranza, and Y. M. Ro, “Visual importance- and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

Li, D.

Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, “A comparative analysis of image fusion methods,” IEEE Trans. Geosci. Remote Sens. 43(6), 1391–1402 (2005).
[Crossref]

Li, J.

Li, Q.

Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, “A comparative analysis of image fusion methods,” IEEE Trans. Geosci. Remote Sens. 43(6), 1391–1402 (2005).
[Crossref]

Lischinski, D.

J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26(3), 96 (2007).
[Crossref]

Liu, X.

C. Yang, J. Zhang, X. Wang, and X. Liu, “A novel similarity based quality metric for image fusion,” Inf. Fusion 9(2), 156–160 (2008).
[Crossref]

Liu, Z.

Z. Liu, Y. Shan, and Z. Zhang, “Expressive expression mapping with ratio images,” in Proceedings of the 28th annual conference on Computer graphics and interactive techniques (ACM, 2001), pp. 271–276.

Lu, C.

C. Lu, L. Xu, and J. Jia, “Contrast preserving decolorization with perception-based metrics,” Int. J. Comput. Vis. 110(2), 222–239 (2014).
[Crossref]

Man Ro, Y.

Manduchi, R.

C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of International Conference on Computer Vision (IEEE, 1998), pp. pp. 839–846.
[Crossref]

Nister, D.

Q. Yang, R. Yang, J. Davis, and D. Nister, “Spatial-depth super resolution for range images,” in Proceedings of International Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

Petschnigg, G.

G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23(3), 664–672 (2004).
[Crossref]

Pock, T.

A. Chambolle and T. Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging,” J. Math. Imaging Vis. 40(1), 120–145 (2011).
[Crossref]

Poulakos, S.

M. Lang, A. Hornung, O. Wang, S. Poulakos, A. Smolic, and M. Gross, “Nonlinear disparity mapping for stereoscopic 3D,” ACM Trans. Graph. 29(4), 75 (2010).
[Crossref]

Ro, Y. M.

Y. J. Jung, H. G. Kim, and Y. M. Ro, “Critical binocular asymmetry measure for the perceptual quality assessment of synthesized stereo 3D images in view synthesis,” IEEE Trans. Circ. Syst. Video Tech. 26(7), 1201–1214 (2016).
[Crossref]

H. Sohn, Y. J. Jung, S. Lee, F. Speranza, and Y. M. Ro, “Visual comfort amelioration technique for stereoscopic images: disparity remapping to mitigate global and local discomfort causes,” IEEE Trans. Circ. Syst. Video Tech. 24(5), 745–758 (2014).
[Crossref]

Y. J. Jung, H. Sohn, S. Lee, F. Speranza, and Y. M. Ro, “Visual importance- and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

Y. J. Jung, H. Sohn, and Y. M. Ro, “Visual discomfort visualizer using stereo vision and time-of-flight depth cameras,” IEEE Trans. Consum. Electron. 58(2), 246–254 (2012).
[Crossref]

Scharstein, D.

D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47(1), 7–42 (2002).
[Crossref]

Shan, Y.

Z. Liu, Y. Shan, and Z. Zhang, “Expressive expression mapping with ratio images,” in Proceedings of the 28th annual conference on Computer graphics and interactive techniques (ACM, 2001), pp. 271–276.

Sheikh, H. R.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).
[Crossref] [PubMed]

Shen, L.

Q. Yan, X. Shen, L. Xu, S. Zhuo, X. Zhang, L. Shen, and J. Jia, “Cross-field joint image restoration via scale map,” in Proceedings of International Conference on Computer Vision (IEEE, 2013), pp. 1537–1544.
[Crossref]

Shen, X.

Q. Yan, X. Shen, L. Xu, S. Zhuo, X. Zhang, L. Shen, and J. Jia, “Cross-field joint image restoration via scale map,” in Proceedings of International Conference on Computer Vision (IEEE, 2013), pp. 1537–1544.
[Crossref]

Q. Zhang, X. Shen, L. Xu, and J. Jia, “Rolling guidance filter,” in Proceedings of European Conference on Computer Vision (Springer, 2014), pp. 815–830.

X. Shen, C. Zhou, L. Xu, and J. Jia, “Mutual-structure for joint filtering,” in Proceedings of International Conference on Computer Vision (IEEE, 2015), pp. 3406–3414.

Simoncelli, E. P.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).
[Crossref] [PubMed]

Smolic, A.

M. Lang, A. Hornung, O. Wang, S. Poulakos, A. Smolic, and M. Gross, “Nonlinear disparity mapping for stereoscopic 3D,” ACM Trans. Graph. 29(4), 75 (2010).
[Crossref]

Sohn, H.

H. Sohn, Y. J. Jung, S. Lee, F. Speranza, and Y. M. Ro, “Visual comfort amelioration technique for stereoscopic images: disparity remapping to mitigate global and local discomfort causes,” IEEE Trans. Circ. Syst. Video Tech. 24(5), 745–758 (2014).
[Crossref]

H. Sohn, Y. J. Jung, and Y. Man Ro, “Crosstalk reduction in stereoscopic 3D displays: disparity adjustment using crosstalk visibility index for crosstalk cancellation,” Opt. Express 22(3), 3375–3392 (2014).
[Crossref] [PubMed]

Y. J. Jung, H. Sohn, S. Lee, F. Speranza, and Y. M. Ro, “Visual importance- and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

Y. J. Jung, H. Sohn, and Y. M. Ro, “Visual discomfort visualizer using stereo vision and time-of-flight depth cameras,” IEEE Trans. Consum. Electron. 58(2), 246–254 (2012).
[Crossref]

Song, Y.

Y. Song, L. Bao, X. Xu, and Q. Yang, “Decolorization: is rgb2gray () out?” in Proceedings of SIGGRAPH Asia Technical Briefs (ACM, 2013), p. 15.

Speranza, F.

H. Sohn, Y. J. Jung, S. Lee, F. Speranza, and Y. M. Ro, “Visual comfort amelioration technique for stereoscopic images: disparity remapping to mitigate global and local discomfort causes,” IEEE Trans. Circ. Syst. Video Tech. 24(5), 745–758 (2014).
[Crossref]

Y. J. Jung, H. Sohn, S. Lee, F. Speranza, and Y. M. Ro, “Visual importance- and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

Sun, J.

K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013).
[Crossref] [PubMed]

Szeliski, R.

G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23(3), 664–672 (2004).
[Crossref]

D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47(1), 7–42 (2002).
[Crossref]

Tan, Y. P.

Y. Zhao, Z. Chen, C. Zhu, Y. P. Tan, and L. Yu, “Binocular just-noticeable-difference model for stereoscopic images,” IEEE Signal Process. Lett. 18(1), 19–22 (2011).
[Crossref]

Tang, X.

K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013).
[Crossref] [PubMed]

Toet, A.

A. Toet, “Natural colour mapping for multiband night vision imagery,” Inf. Fusion 4(3), 155–166 (2003).
[Crossref]

Tomasi, C.

C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of International Conference on Computer Vision (IEEE, 1998), pp. pp. 839–846.
[Crossref]

Toyama, K.

G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23(3), 664–672 (2004).
[Crossref]

Uyttendaele, M.

J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26(3), 96 (2007).
[Crossref]

Wang, O.

M. Lang, A. Hornung, O. Wang, S. Poulakos, A. Smolic, and M. Gross, “Nonlinear disparity mapping for stereoscopic 3D,” ACM Trans. Graph. 29(4), 75 (2010).
[Crossref]

Wang, X.

C. Yang, J. Zhang, X. Wang, and X. Liu, “A novel similarity based quality metric for image fusion,” Inf. Fusion 9(2), 156–160 (2008).
[Crossref]

Wang, Z.

Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, “A comparative analysis of image fusion methods,” IEEE Trans. Geosci. Remote Sens. 43(6), 1391–1402 (2005).
[Crossref]

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).
[Crossref] [PubMed]

Xu, L.

C. Lu, L. Xu, and J. Jia, “Contrast preserving decolorization with perception-based metrics,” Int. J. Comput. Vis. 110(2), 222–239 (2014).
[Crossref]

Q. Zhang, X. Shen, L. Xu, and J. Jia, “Rolling guidance filter,” in Proceedings of European Conference on Computer Vision (Springer, 2014), pp. 815–830.

X. Shen, C. Zhou, L. Xu, and J. Jia, “Mutual-structure for joint filtering,” in Proceedings of International Conference on Computer Vision (IEEE, 2015), pp. 3406–3414.

Q. Yan, X. Shen, L. Xu, S. Zhuo, X. Zhang, L. Shen, and J. Jia, “Cross-field joint image restoration via scale map,” in Proceedings of International Conference on Computer Vision (IEEE, 2013), pp. 1537–1544.
[Crossref]

Xu, X.

Y. Song, L. Bao, X. Xu, and Q. Yang, “Decolorization: is rgb2gray () out?” in Proceedings of SIGGRAPH Asia Technical Briefs (ACM, 2013), p. 15.

Xue, B.

Yan, Q.

Q. Yan, X. Shen, L. Xu, S. Zhuo, X. Zhang, L. Shen, and J. Jia, “Cross-field joint image restoration via scale map,” in Proceedings of International Conference on Computer Vision (IEEE, 2013), pp. 1537–1544.
[Crossref]

Yang, C.

C. Yang, J. Zhang, X. Wang, and X. Liu, “A novel similarity based quality metric for image fusion,” Inf. Fusion 9(2), 156–160 (2008).
[Crossref]

Yang, Q.





Figures (19)

Fig. 1

(a) Input color image and (b) mono image captured in low light (6 lux) conditions by a color-plus-mono dual camera. (c) Conventional approach: guided filter-based denoising and detail transfer using the disparity-compensated mono image. (d) Proposed method (i.e., BJND-aware denoising and selective detail transfer).

Fig. 2

(a) A color-plus-mono dual camera for our experiments. (b) The left camera is a mono camera and the right camera is a color camera. The baseline between the color and mono cameras is 35 mm. (c) An example of image capture with frame synchronization.

Fig. 3

Procedure of the proposed approach via BJND-aware denoising and selective detail transfer.

Fig. 4

Example of histogram matching. (a) Input color image (10 lux). (b) Input mono image (10 lux). (c) Histogram matched color image. (d) Disparity map.
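The histogram-matching step illustrated in Fig. 4 can be sketched as classical CDF matching. The function below is an illustrative stand-in (the paper does not spell out its exact implementation), written for single-channel images with values in [0, 1]:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source gray levels so its histogram matches the reference's (CDF matching)."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size   # empirical CDF of the source
    r_cdf = np.cumsum(r_counts) / reference.size
    # invert the reference CDF at the source's quantile positions
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx].reshape(source.shape)
```

In the pipeline this would be applied per channel of the color image, with the mono image as the reference, before computing the disparity map.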

Fig. 5

Procedure of the BJND-aware merge algorithm for adaptive image denoising.

Fig. 6

The BJND-aware merge process between the individual and joint filtering outputs. (a) BJND map. (b) Histogram of BJND. (c) Dissimilarity map between corresponding pixels. (d) Weight map computed for the BJND-aware merge operation.

Fig. 7

Example of the weighting function as a function of the dissimilarity value between corresponding pixels. Note that in this example, the BJND is 0.06 and the α value in Eq. (8) is 30.

Fig. 8

Procedure of the selective detail transfer.

Fig. 9

(a) Example of the detail manipulation function according to the dissimilarity value. Note that in this example, the BJND is 0.06, the detail value is 0.8, and the α value in Eq. (11) is 30. (b) Detail transfer layer. (c) Manipulated detail transfer layer obtained via the detail manipulation function.

Fig. 10

Frequency spectra of (a) an original mono image, (b) the base layer, and (c) the detail layer.

Fig. 11

A color and mono image pair and its processing results. (a), (b) Original color and mono image pair captured in 10 lux conditions. (c) Selective detail transfer layer. (d) Individual guided filter. (e) BJND-aware denoising. (f) Proposed approach (i.e., BJND-aware denoising and selective detail transfer).

Fig. 12

A close-up of a color and mono image pair. The original image pair was captured in 6 lux conditions. (a) Original color image. (b) Histogram matching. (c) Individual guided filter. (d) Conventional method via the guided filter and detail transfer. (e) Proposed approach.

Fig. 13

A close-up of a color and mono image pair and its processing results. (a) Original color and mono image pair captured by a dual camera in 6 lux conditions. (b) Histogram matching. (c) Individual guided filter. (d) Proposed approach.

Fig. 14

A close-up of a color and mono image pair and its processing results. (a) Original color and mono image pair captured in 6 lux conditions. (b) Histogram matching. (c) Individual guided filter. (d) Proposed approach.

Fig. 15

A close-up of processing results. (a) Original image pair captured in 6 lux conditions. (b) Histogram matching. (c) Individual guided filter. (d) Proposed approach.

Fig. 16

A close-up of processing results. (a) Original color and mono image pair captured in 4 lux conditions. (b) Histogram matching. (c) Individual guided filter. (d) Proposed approach.

Fig. 17

Visual results according to different α parameter values of the weighting functions. (a) α = 1. (b) α = 15. (c) α = 30. (d) α = 45. (e) α = 60. As the α value increases, the slope of the function becomes steeper, which results in not only less detail transfer but also fewer artifacts.

Fig. 18

Visual results according to different ε parameter values of the guided filter. (a) ε = 0.0001. (b) ε = 0.001. (c) ε = 0.01. Note that as the ε value increases, we generate increasingly smoother versions of M_base and hence capture more detail in M_detail. However, visual artifacts are also amplified near edges in the fused image, as shown in (c).

Fig. 19

Subjective assessment results.

Tables (3)

Table 1 Specification of the color and mono cameras used in our dual camera setup

Table 2 Objective assessment results.

Table 3 Statistical results of subjective assessment for image quality.

Equations (14)


C_{\text{base}} = G(C_{\text{input}}, C_{\text{input}}),
C_{\text{joint}} = G(C_{\text{input}}, M_{\text{input}}).
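Eqs. (1)–(2) apply the same guided filter G twice: once self-guided to produce the base layer, and once with the mono image as guidance for joint denoising. A minimal single-channel guided filter in the spirit of He et al. is sketched below; the radius r and regularization eps are illustrative parameters, not the paper's settings (OpenCV's ximgproc.guidedFilter could serve instead):

```python
import numpy as np

def box(img, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via integral images."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so window sums are a 4-corner difference
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(guide, src, r=8, eps=1e-3):
    """Guided filter: locally fit src = a*guide + b, then average the coefficients."""
    m_g, m_s = box(guide, r), box(src, r)
    var_g = box(guide * guide, r) - m_g * m_g
    cov_gs = box(guide * src, r) - m_g * m_s
    a = cov_gs / (var_g + eps)
    b = m_s - a * m_g
    return box(a, r) * guide + box(b, r)

# Eq. (1): self-guided smoothing; Eq. (2): mono-guided joint filtering.
# C_base  = guided_filter(C_input, C_input)
# C_joint = guided_filter(M_input, C_input)
```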
\mathrm{BJND}_C(x,y) = \mathrm{BJND}_C\big(bg_m(x+d,y),\, eh_m(x+d,y),\, n_m(x+d,y)\big) = A\big(bg_m(x+d,y),\, eh_m(x+d,y)\big) \left( 1 - \left( \frac{n_m(x+d,y)}{A\big(bg_m(x+d,y),\, eh_m(x+d,y)\big)} \right)^{\lambda} \right)^{1/\lambda},
A(bg, eh) = A_{\text{limit}}(bg) + K(bg)\, eh,
A_{\text{limit}}(bg) = \begin{cases} 0.0027\,(bg^2 - 96\,bg) + 8, & \text{if } 0 \le bg < 48 \\ 0.0001\,(bg^2 - 32\,bg) + 1.7, & \text{if } 48 \le bg \le 255 \end{cases},
K(bg) = 10^{-6}\,(0.7\,bg^2 + 32\,bg) + 0.07, \quad bg \in [0, 255],
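Eqs. (3)–(6) follow the Zhao et al. BJND model, where bg, eh, and n are the background luminance, edge height, and noise amplitude of the mono view at the disparity-compensated position. A direct NumPy transcription is sketched below; the exponent λ = 1.25 is an assumption taken from the BJND literature, not a value stated here:

```python
import numpy as np

def a_limit(bg):
    """Eq. (5): noise visibility threshold vs. background luminance."""
    bg = np.asarray(bg, dtype=float)
    low = 0.0027 * (bg ** 2 - 96.0 * bg) + 8.0
    high = 0.0001 * (bg ** 2 - 32.0 * bg) + 1.7
    return np.where(bg < 48.0, low, high)

def k_gain(bg):
    """Eq. (6): edge-height gain."""
    bg = np.asarray(bg, dtype=float)
    return 1e-6 * (0.7 * bg ** 2 + 32.0 * bg) + 0.07

def bjnd(bg, eh, noise, lam=1.25):
    """Eqs. (3)-(4): the threshold shrinks as noise in the other view grows."""
    A = a_limit(bg) + k_gain(bg) * eh
    ratio = np.clip(np.asarray(noise, dtype=float) / A, 0.0, 1.0)
    return A * (1.0 - ratio ** lam) ** (1.0 / lam)
```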
S(x,y) = \sum_{i=1}^{N_W} \sum_{j=1}^{N_W} w(i,j)\, \big| l_{x,y}^{c}(i,j) - \hat{l}_{\hat{x},\hat{y}}^{m}(i,j) \big|,
W(x,y) = \begin{cases} 0, & S(x,y) < \mathrm{BJND}(x,y) \\ 1 - \exp\!\big(-\alpha\,(S(x,y) - \mathrm{BJND}(x,y))\big), & S(x,y) \ge \mathrm{BJND}(x,y) \end{cases},
C_{\text{denoised}}(x,y) = \big(1 - W(x,y)\big)\, C_{\text{joint}}(x,y) + W(x,y)\, C_{\text{base}}(x,y).
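In Eqs. (7)–(9), pixels whose weighted dissimilarity S stays below the BJND are deemed reliable and keep the jointly filtered value; above the threshold, the result falls back smoothly to the individually filtered base. A vectorized sketch, assuming the dissimilarity map S and the BJND map are precomputed (α = 30 follows the example in Fig. 7):

```python
import numpy as np

def merge_weight(S, bjnd_map, alpha=30.0):
    """Eq. (8): zero below the BJND, saturating toward 1 above it."""
    w = 1.0 - np.exp(-alpha * (S - bjnd_map))
    return np.where(S < bjnd_map, 0.0, w)

def bjnd_aware_merge(C_joint, C_base, S, bjnd_map, alpha=30.0):
    """Eq. (9): reliable pixels keep the joint result, unreliable ones the base."""
    W = merge_weight(S, bjnd_map, alpha)
    return (1.0 - W) * C_joint + W * C_base
```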
M_{\text{detail}} = \dfrac{M_{\text{compensated}} + \delta}{M_{\text{base}} + \delta},
\hat{M}_{\text{detail}}(x,y) = \begin{cases} M_{\text{detail}}(x,y), & S(x,y) < \mathrm{BJND}(x,y) \\ \big(1 - \exp\!\big(-\alpha\,(S(x,y) - \mathrm{BJND}(x,y))\big)\big)\big(1 - M_{\text{detail}}(x,y)\big) + M_{\text{detail}}(x,y), & S(x,y) \ge \mathrm{BJND}(x,y) \end{cases},
C_{\text{final}}(x,y) = \hat{M}_{\text{detail}}(x,y)\, C_{\text{denoised}}(x,y).
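Eqs. (10)–(12) build the detail layer as a ratio image (δ is a small stabilizer), push it toward 1 (i.e., no transfer) wherever the dissimilarity exceeds the BJND, and multiply it into the denoised color image. A sketch, with δ = 0.01 as an illustrative value:

```python
import numpy as np

def detail_layer(M_compensated, M_base, delta=0.01):
    """Eq. (10): ratio image between the compensated mono image and its base layer."""
    return (M_compensated + delta) / (M_base + delta)

def manipulate_detail(M_detail, S, bjnd_map, alpha=30.0):
    """Eq. (11): blend the detail value toward 1 (no transfer) for unreliable pixels."""
    g = 1.0 - np.exp(-alpha * (S - bjnd_map))
    pushed = g * (1.0 - M_detail) + M_detail
    return np.where(S < bjnd_map, M_detail, pushed)

def fuse(C_denoised, M_detail_hat):
    """Eq. (12): multiplicative detail transfer onto the denoised color image."""
    return M_detail_hat * C_denoised
```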
Q = \begin{cases} \lambda_w\, \mathrm{SSIM}(A_w, F_w) + (1 - \lambda_w)\, \mathrm{SSIM}(B_w, F_w), & \text{if } \mathrm{SSIM}(A_w, B_w \mid w) \ge 0.75 \\ \max\{\mathrm{SSIM}(A_w, F_w),\, \mathrm{SSIM}(B_w, F_w)\}, & \text{if } \mathrm{SSIM}(A_w, B_w \mid w) < 0.75 \end{cases},
\lambda_w = \dfrac{s(A_w)}{s(A_w) + s(B_w)}.
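Eqs. (13)–(14) are Yang et al.'s SSIM-based fusion quality index, computed over local windows w with saliency s(·). The sketch below uses a uniform window of radius r, the standard SSIM constants, and local variance as the saliency, which is the usual choice; window size and constants are illustrative, not the paper's exact configuration:

```python
import numpy as np

def _box(img, r):
    """Mean over a (2r+1)x(2r+1) edge-padded window via integral images."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def _ssim_map(x, y, r=3, data_range=1.0):
    """Windowed SSIM map, one value per window position."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = _box(x, r), _box(y, r)
    vx = _box(x * x, r) - mx * mx
    vy = _box(y * y, r) - my * my
    cxy = _box(x * y, r) - mx * my
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx * mx + my * my + c1) * (vx + vy + c2))

def yang_q(A, B, F, r=3):
    """Eqs. (13)-(14): saliency-weighted SSIM where sources agree, max-SSIM where not."""
    s_af, s_bf, s_ab = _ssim_map(A, F, r), _ssim_map(B, F, r), _ssim_map(A, B, r)
    var_a = _box(A * A, r) - _box(A, r) ** 2
    var_b = _box(B * B, r) - _box(B, r) ** 2
    lam = var_a / (var_a + var_b + 1e-12)  # Eq. (14) with s(.) = local variance
    q = np.where(s_ab >= 0.75, lam * s_af + (1 - lam) * s_bf, np.maximum(s_af, s_bf))
    return float(np.mean(q))
```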