Abstract

Despite advances in image sensors, mainstream RGB sensors still suffer from low quantum efficiency owing to the low sensitivity of the Bayer color filter array. To address this issue, a sparse color sensor uses mostly panchromatic white pixels and a small percentage of color pixels to provide better low-light photography performance than a conventional Bayer RGB sensor. However, because a suitable color reconstruction method has been lacking, such sparse color sensors have not yet been realized. This study proposes a deep-learning-based sparse color reconstruction method that can make such a sensor practical. The proposed method consists of a novel two-stage deep model followed by an adversarial training technique that reduces visual artifacts in the reconstructed color image. In simulations and experiments, visual results and quantitative comparisons demonstrate that the proposed method outperforms existing methods. In addition, a prototype system was developed using a hybrid color-plus-mono camera system. Experiments with this prototype demonstrate the feasibility of a very sparse color sensor under different lighting conditions.
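To make the sensing model concrete, the minimal Python sketch below simulates the raw mosaic that a sparse color sensor would produce from an RGB scene: every pixel records a panchromatic (luminance-like) value, and only a small subset additionally carries a single color sample. The function name, the 4% color-pixel fraction, and the random (rather than structured) sampling pattern are illustrative assumptions for this sketch, not the pattern used in the paper.

```python
import numpy as np

def simulate_sparse_sensor(rgb, color_fraction=0.04, seed=0):
    """Simulate the raw output of a hypothetical sparse color sensor.

    Most pixels are panchromatic white pixels, approximated here as the
    mean of the RGB channels; a small random subset instead records one
    color channel. Returns the single-channel mosaic, the mask of color
    sites, and the channel index sampled at each site.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = rgb.shape
    # Panchromatic measurement at every pixel (illustrative luminance proxy).
    mosaic = rgb.mean(axis=2)
    # Randomly choose sparse color sites and which channel each one samples.
    is_color = rng.random((h, w)) < color_fraction
    channel = rng.integers(0, 3, size=(h, w))
    rows, cols = np.nonzero(is_color)
    mosaic[rows, cols] = rgb[rows, cols, channel[rows, cols]]
    return mosaic, is_color, channel

# Example: a random test image stands in for a captured scene.
rgb = np.random.rand(64, 64, 3).astype(np.float32)
mosaic, mask, channel = simulate_sparse_sensor(rgb, color_fraction=0.04)
print(mosaic.shape, mask.mean())  # roughly 4% of pixels carry a color sample
```

Under this formulation, the color reconstruction task is to recover a full three-channel image from the dense panchromatic signal plus the very sparse color samples, which is the input the paper's two-stage deep model is designed to handle.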

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



2018 (2)

X. Li, H. Chen, X. Qi, Q. Dou, C.-W. Fu, and P.-A. Heng, “H-denseunet: Hybrid densely connected unet for liver and tumor segmentation from ct volumes,” IEEE Trans. Med. Imag. 37(12), 2663–2674 (2018).

S. Bianco, R. Cadene, L. Celona, and P. Napoletano, “Benchmark analysis of representative deep neural network architectures,” IEEE Access 6, 64270–64277 (2018).

2017 (5)

H. Zhao, O. Gallo, I. Frosio, and J. Kautz, “Loss functions for image restoration with neural networks,” IEEE Trans. Comp. Imaging 3(1), 47–57 (2017).

J. Li, C. Bai, Z. Lin, and J. Yu, “Automatic design of high-sensitivity color filter arrays with panchromatic pixels,” IEEE Trans. Image Process. 26(2), 870–883 (2017).
[Crossref] [PubMed]

H. Jiang, Q. Tian, J. Farrell, and B. A. Wandell, “Learning the image processing pipeline,” IEEE Trans. Image Process. 26(10), 5032–5042 (2017).
[Crossref]

R. Zhang, J.-Y. Zhu, P. Isola, X. Geng, A. S. Lin, T. Yu, and A. A. Efros, “Real-time user-guided image colorization with learned deep priors,” ACM Trans. Graph. 36(4), 119 (2017).
[Crossref]

Y. J. Jung, “Enhancement of low light level images using color-plus-mono dual camera,” Opt. Express 25(10), 12029–12051 (2017).
[Crossref] [PubMed]

2016 (3)

C. Zhang, Y. Li, J. Wang, and P. Hao, “Universal demosaicking of color filter arrays,” IEEE Trans. Image Process. 25(11), 5173–5186 (2016).
[Crossref] [PubMed]

M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans.Graph. 35(6), 191 (2016).

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35(6), 192 (2016).

2014 (1)

H. Liu, Y. Wang, and L. Wang, “The effect of light conditions on photoplethysmographic image acquisition using a commercial camera,” IEEE J Transl Eng Heal. Med 2, 1–11 (2014).
[Crossref]

2013 (1)

Y. J. Jung, H. Sohn, S.-I. Lee, F. Speranza, and Y. M. Ro, “Visual importance-and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

2012 (1)

Y. J. Jung, H. Sohn, and Y. M. Ro, “Visual discomfort visualizer using stereo vision and time-of-flight depth cameras,” IEEE Trans. Consum. Electron. 58(2), 246–254 (2012).

2011 (4)

D. Menon and G. Calvagno, “Color image demosaicking: An overview,” Signal Process. Image Commun. 26(8–9), 518–533 (2011).
[Crossref]

B. Leung, G. Jeon, and E. Dubois, “Least-squares luma–chroma demultiplexing algorithm for bayer demosaicking,” IEEE Trans. Image Process. 20(7), 1885–1894 (2011).
[Crossref] [PubMed]

L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20(2), 023016 (2011).

B. Karlik and A. V. Olgac, “Performance analysis of various activation functions in generalized mlp architectures of neural networks,” Int. J. Artif. Intell. Expert. Syst. 1(4), 111–122 (2011).

2010 (1)

S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010).

2009 (1)

2005 (3)

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process. Mag. 22(1), 44–54 (2005).
[Crossref]

L. Zhang and X. Wu, “Color demosaicking via directional linear minimum mean square-error estimation,” IEEE Trans. Image Process. 14(12), 2167–2178 (2005).
[Crossref] [PubMed]

P. Bao, L. Zhang, and X. Wu, “Canny edge detection enhancement by scale multiplication,” IEEE Trans. Pattern Anal. Mach. Intell. 27(9), 1485–1490 (2005).
[PubMed]

2004 (1)

A. Telea, “An image inpainting technique based on the fast marching method,” J. Graph. Tools 9(1), 23–34 (2004).

1973 (1)

R. H. Steinberg, M. Reid, and P. L. Lacy, “The distribution of rods and cones in the retina of the cat (felis domesticus),” J. Comp. Neurol. 148(2), 229–248 (1973).
[Crossref] [PubMed]

Abdulkadir, A.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3d u-net: learning dense volumetric segmentation from sparse annotation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2016), pp. 424–432.

Adams, A.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35(6), 192 (2016).

Agostinelli, F.

F. Agostinelli, M. Hoffman, P. Sadowski, and P. Baldi, “Learning activation functions to improve deep neural networks,” arXiv preprint arXiv:1412.6830 (2014).

Altunbasak, Y.

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process. Mag. 22(1), 44–54 (2005).
[Crossref]

Ba, J.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

Bai, C.

J. Li, C. Bai, Z. Lin, and J. Yu, “Automatic design of high-sensitivity color filter arrays with panchromatic pixels,” IEEE Trans. Image Process. 26(2), 870–883 (2017).
[Crossref] [PubMed]

Baldi, P.

F. Agostinelli, M. Hoffman, P. Sadowski, and P. Baldi, “Learning activation functions to improve deep neural networks,” arXiv preprint arXiv:1412.6830 (2014).

Bao, P.

P. Bao, L. Zhang, and X. Wu, “Canny edge detection enhancement by scale multiplication,” IEEE Trans. Pattern Anal. Mach. Intell. 27(9), 1485–1490 (2005).
[PubMed]

Barron, J. T.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35(6), 192 (2016).

Bayer, B. E.

B. E. Bayer, “Color imaging array,” U.S. patent 3,971,065 (1976).

Bengio, Y.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, (2014), pp. 2672–2680.

Bertalmio, M.

M. Bertalmio, A. L. Bertozzi, and G. Sapiro, “Navier-stokes, fluid dynamics, and image and video inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2001), pp. 335–362.

Bertozzi, A. L.

M. Bertalmio, A. L. Bertozzi, and G. Sapiro, “Navier-stokes, fluid dynamics, and image and video inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2001), pp. 335–362.

Bianco, S.

S. Bianco, R. Cadene, L. Celona, and P. Napoletano, “Benchmark analysis of representative deep neural network architectures,” IEEE Access 6, 64270–64277 (2018).

Brox, T.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3d u-net: learning dense volumetric segmentation from sparse annotation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2016), pp. 424–432.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2015), pp. 234–241.

Buades, A.

L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20(2), 023016 (2011).

Cadene, R.

S. Bianco, R. Cadene, L. Celona, and P. Napoletano, “Benchmark analysis of representative deep neural network architectures,” IEEE Access 6, 64270–64277 (2018).

Calvagno, G.

D. Menon and G. Calvagno, “Color image demosaicking: An overview,” Signal Process. Image Commun. 26(8–9), 518–533 (2011).
[Crossref]

Canny, J.

J. Canny, “A computational approach to edge detection,” in Readings in Computer Vision, (Elsevier, 1987), pp. 184–203.

Celona, L.

S. Bianco, R. Cadene, L. Celona, and P. Napoletano, “Benchmark analysis of representative deep neural network architectures,” IEEE Access 6, 64270–64277 (2018).

Chakrabarti, A.

A. Chakrabarti, W. T. Freeman, and T. Zickler, “Rethinking color cameras,” in Proceedings of IEEE International Conference on Computational Photography (ICCP), (IEEE, 2014), pp. 1–8.

A. Chakrabarti, “Learning sensor multiplexing design through back-propagation,” in Advances in Neural Information Processing Systems, (2016), pp. 3081–3089.

Chaurasia, G.

M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans.Graph. 35(6), 191 (2016).

Chen, H.

X. Li, H. Chen, X. Qi, Q. Dou, C.-W. Fu, and P.-A. Heng, “H-denseunet: Hybrid densely connected unet for liver and tumor segmentation from ct volumes,” IEEE Trans. Med. Imag. 37(12), 2663–2674 (2018).

Chen, J.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35(6), 192 (2016).

Chernouss, S.

Cho, B. H.

P. K. Park, B. H. Cho, J. M. Park, K. Lee, H. Y. Kim, H. A. Kang, H. G. Lee, J. Woo, Y. Roh, and W. J. Lee, “Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,” in Proceedings of IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1624–1628.
[Crossref]

Çiçek, Ö.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3d u-net: learning dense volumetric segmentation from sparse annotation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2016), pp. 424–432.

Compton, J. T.

T. Kijima, H. Nakamura, J. T. Compton, and J. F. Hamilton, “Image sensor with improved light sensitivity,” U.S. patent 7,688,368 (2010).

Couprie, C.

M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” arXiv preprint arXiv:1511.05440 (2015).

Courville, A.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, (2014), pp. 2672–2680.

Deehr, C. S.

Deng, J.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2009), pp. 248–255.

Dong, W.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2009), pp. 248–255.

Dou, Q.

X. Li, H. Chen, X. Qi, Q. Dou, C.-W. Fu, and P.-A. Heng, “H-denseunet: Hybrid densely connected unet for liver and tumor segmentation from ct volumes,” IEEE Trans. Med. Imag. 37(12), 2663–2674 (2018).

Dubois, E.

B. Leung, G. Jeon, and E. Dubois, “Least-squares luma–chroma demultiplexing algorithm for bayer demosaicking,” IEEE Trans. Image Process. 20(7), 1885–1894 (2011).
[Crossref] [PubMed]

Durand, F.

M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans.Graph. 35(6), 191 (2016).

Dyrland, M.

Efros, A. A.

R. Zhang, J.-Y. Zhu, P. Isola, X. Geng, A. S. Lin, T. Yu, and A. A. Efros, “Real-time user-guided image colorization with learned deep priors,” ACM Trans. Graph. 36(4), 119 (2017).
[Crossref]

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 1125–1134.

Farrell, J.

H. Jiang, Q. Tian, J. Farrell, and B. A. Wandell, “Learning the image processing pipeline,” IEEE Trans. Image Process. 26(10), 5032–5042 (2017).
[Crossref]

Farrell, J. E.

Q. Tian, S. Lansel, J. E. Farrell, and B. A. Wandell, “Automating the design of image processing pipelines for novel color filter arrays: Local, linear, learned (l3) method,” in Digital Photography X, vol. 9023 (International Society for Optics and Photonics, 2014), p. 90230K.
[Crossref]

Fei-Fei, L.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2009), pp. 248–255.

Fischer, P.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2015), pp. 234–241.

Freeman, W. T.

A. Chakrabarti, W. T. Freeman, and T. Zickler, “Rethinking color cameras,” in Proceedings of IEEE International Conference on Computational Photography (ICCP), (IEEE, 2014), pp. 1–8.

Frosio, I.

H. Zhao, O. Gallo, I. Frosio, and J. Kautz, “Loss functions for image restoration with neural networks,” IEEE Trans. Comp. Imaging 3(1), 47–57 (2017).

Fu, C.-W.

X. Li, H. Chen, X. Qi, Q. Dou, C.-W. Fu, and P.-A. Heng, “H-denseunet: Hybrid densely connected unet for liver and tumor segmentation from ct volumes,” IEEE Trans. Med. Imag. 37(12), 2663–2674 (2018).

Gallo, O.

H. Zhao, O. Gallo, I. Frosio, and J. Kautz, “Loss functions for image restoration with neural networks,” IEEE Trans. Comp. Imaging 3(1), 47–57 (2017).

Ganguli, S.

J. Pennington, S. Schoenholz, and S. Ganguli, “Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice,” in Advances in Neural Information Processing Systems, (2017), pp. 4785–4795.

Geiss, R.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35(6), 192 (2016).

Geng, X.

R. Zhang, J.-Y. Zhu, P. Isola, X. Geng, A. S. Lin, T. Yu, and A. A. Efros, “Real-time user-guided image colorization with learned deep priors,” ACM Trans. Graph. 36(4), 119 (2017).
[Crossref]

Gharbi, M.

M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans.Graph. 35(6), 191 (2016).

Glotzbach, J.

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process. Mag. 22(1), 44–54 (2005).
[Crossref]

Goodfellow, I.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, (2014), pp. 2672–2680.

Gunturk, B.

X. Li, B. Gunturk, and L. Zhang, “Image demosaicing: A systematic survey,” in Visual Communications and Image Processing 2008, (International Society for Optics and Photonics, 2008), p. 68221J.

Gunturk, B. K.

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process. Mag. 22(1), 44–54 (2005).
[Crossref]

Hamilton, J. F.

T. Kijima, H. Nakamura, J. T. Compton, and J. F. Hamilton, “Image sensor with improved light sensitivity,” U.S. patent 7,688,368 (2010).

Hao, P.

C. Zhang, Y. Li, J. Wang, and P. Hao, “Universal demosaicking of color filter arrays,” IEEE Trans. Image Process. 25(11), 5173–5186 (2016).
[Crossref] [PubMed]

Hasinoff, S. W.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35(6), 192 (2016).

Heia, K.

Heng, P.-A.

X. Li, H. Chen, X. Qi, Q. Dou, C.-W. Fu, and P.-A. Heng, “H-denseunet: Hybrid densely connected unet for liver and tumor segmentation from ct volumes,” IEEE Trans. Med. Imag. 37(12), 2663–2674 (2018).

Hirota, I.

I. Hirota, “Solid-state imaging device, method for processing signal of solid-state imaging device, and imaging apparatus,” U.S. patent 9,392,238 (2016).

Hoffman, M.

F. Agostinelli, M. Hoffman, P. Sadowski, and P. Baldi, “Learning activation functions to improve deep neural networks,” arXiv preprint arXiv:1412.6830 (2014).

Huang, G.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 4700–4708.

Isola, P.

R. Zhang, J.-Y. Zhu, P. Isola, X. Geng, A. S. Lin, T. Yu, and A. A. Efros, “Real-time user-guided image colorization with learned deep priors,” ACM Trans. Graph. 36(4), 119 (2017).
[Crossref]

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 1125–1134.

Jang, H. W.

H. W. Jang, College of Information Technology, Gachon University, Sujeong-Gu, Seongnam, 13120, South Korea and Y. J. Jung are preparing a manuscript to be called “Deep color transfer for color-plus-mono dual camera”.

Jeon, G.

B. Leung, G. Jeon, and E. Dubois, “Least-squares luma–chroma demultiplexing algorithm for bayer demosaicking,” IEEE Trans. Image Process. 20(7), 1885–1894 (2011).
[Crossref] [PubMed]

Jiang, H.

H. Jiang, Q. Tian, J. Farrell, and B. A. Wandell, “Learning the image processing pipeline,” IEEE Trans. Image Process. 26(10), 5032–5042 (2017).
[Crossref]

Jung, Y. J.

Y. J. Jung, “Enhancement of low light level images using color-plus-mono dual camera,” Opt. Express 25(10), 12029–12051 (2017).
[Crossref] [PubMed]

Y. J. Jung, H. Sohn, S.-I. Lee, F. Speranza, and Y. M. Ro, “Visual importance-and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

Y. J. Jung, H. Sohn, and Y. M. Ro, “Visual discomfort visualizer using stereo vision and time-of-flight depth cameras,” IEEE Trans. Consum. Electron. 58(2), 246–254 (2012).

Kainz, F.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35(6), 192 (2016).

Kang, H. A.

P. K. Park, B. H. Cho, J. M. Park, K. Lee, H. Y. Kim, H. A. Kang, H. G. Lee, J. Woo, Y. Roh, and W. J. Lee, “Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,” in Proceedings of IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1624–1628.
[Crossref]

Karlik, B.

B. Karlik and A. V. Olgac, “Performance analysis of various activation functions in generalized mlp architectures of neural networks,” Int. J. Artif. Intell. Expert. Syst. 1(4), 111–122 (2011).

Kautz, J.

H. Zhao, O. Gallo, I. Frosio, and J. Kautz, “Loss functions for image restoration with neural networks,” IEEE Trans. Comp. Imaging 3(1), 47–57 (2017).

Kijima, T.

T. Kijima, H. Nakamura, J. T. Compton, and J. F. Hamilton, “Image sensor with improved light sensitivity,” U.S. patent 7,688,368 (2010).

Kim, H. Y.

P. K. Park, B. H. Cho, J. M. Park, K. Lee, H. Y. Kim, H. A. Kang, H. G. Lee, J. Woo, Y. Roh, and W. J. Lee, “Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,” in Proceedings of IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1624–1628.
[Crossref]

Kingma, D. P.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

Kosch, M.

Lacy, P. L.

R. H. Steinberg, M. Reid, and P. L. Lacy, “The distribution of rods and cones in the retina of the cat (felis domesticus),” J. Comp. Neurol. 148(2), 229–248 (1973).
[Crossref] [PubMed]

Lansel, S.

Q. Tian, S. Lansel, J. E. Farrell, and B. A. Wandell, “Automating the design of image processing pipelines for novel color filter arrays: Local, linear, learned (l3) method,” in Digital Photography X, vol. 9023 (International Society for Optics and Photonics, 2014), p. 90230K.
[Crossref]

LeCun, Y.

M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” arXiv preprint arXiv:1511.05440 (2015).

Lee, H. G.

P. K. Park, B. H. Cho, J. M. Park, K. Lee, H. Y. Kim, H. A. Kang, H. G. Lee, J. Woo, Y. Roh, and W. J. Lee, “Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,” in Proceedings of IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1624–1628.
[Crossref]

Lee, K.

P. K. Park, B. H. Cho, J. M. Park, K. Lee, H. Y. Kim, H. A. Kang, H. G. Lee, J. Woo, Y. Roh, and W. J. Lee, “Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,” in Proceedings of IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1624–1628.
[Crossref]

Lee, S.-I.

Y. J. Jung, H. Sohn, S.-I. Lee, F. Speranza, and Y. M. Ro, “Visual importance-and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

Lee, W. J.

P. K. Park, B. H. Cho, J. M. Park, K. Lee, H. Y. Kim, H. A. Kang, H. G. Lee, J. Woo, Y. Roh, and W. J. Lee, “Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,” in Proceedings of IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1624–1628.
[Crossref]

Lempitsky, V.

D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv preprint arXiv:1607.08022 (2016).

Leung, B.

B. Leung, G. Jeon, and E. Dubois, “Least-squares luma–chroma demultiplexing algorithm for bayer demosaicking,” IEEE Trans. Image Process. 20(7), 1885–1894 (2011).
[Crossref] [PubMed]

Levoy, M.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35(6), 192 (2016).

Li, J.

J. Li, C. Bai, Z. Lin, and J. Yu, “Automatic design of high-sensitivity color filter arrays with panchromatic pixels,” IEEE Trans. Image Process. 26(2), 870–883 (2017).
[Crossref] [PubMed]

Li, K.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2009), pp. 248–255.

Li, L.-J.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2009), pp. 248–255.

Li, X.

X. Li, H. Chen, X. Qi, Q. Dou, C.-W. Fu, and P.-A. Heng, “H-denseunet: Hybrid densely connected unet for liver and tumor segmentation from ct volumes,” IEEE Trans. Med. Imag. 37(12), 2663–2674 (2018).

L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20(2), 023016 (2011).

X. Li, B. Gunturk, and L. Zhang, “Image demosaicing: A systematic survey,” in Visual Communications and Image Processing 2008, (International Society for Optics and Photonics, 2008), p. 68221J.

Li, Y.

C. Zhang, Y. Li, J. Wang, and P. Hao, “Universal demosaicking of color filter arrays,” IEEE Trans. Image Process. 25(11), 5173–5186 (2016).
[Crossref] [PubMed]

Lienkamp, S. S.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3d u-net: learning dense volumetric segmentation from sparse annotation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2016), pp. 424–432.

Lin, A. S.

R. Zhang, J.-Y. Zhu, P. Isola, X. Geng, A. S. Lin, T. Yu, and A. A. Efros, “Real-time user-guided image colorization with learned deep priors,” ACM Trans. Graph. 36(4), 119 (2017).
[Crossref]

Lin, Z.

J. Li, C. Bai, Z. Lin, and J. Yu, “Automatic design of high-sensitivity color filter arrays with panchromatic pixels,” IEEE Trans. Image Process. 26(2), 870–883 (2017).
[Crossref] [PubMed]

Liu, H.

H. Liu, Y. Wang, and L. Wang, “The effect of light conditions on photoplethysmographic image acquisition using a commercial camera,” IEEE J Transl Eng Heal. Med 2, 1–11 (2014).
[Crossref]

Liu, Z.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 4700–4708.

Lorentzen, D. A.

Mathieu, M.

M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” arXiv preprint arXiv:1511.05440 (2015).

Menon, D.

D. Menon and G. Calvagno, “Color image demosaicking: An overview,” Signal Process. Image Commun. 26(8–9), 518–533 (2011).
[Crossref]

Mersereau, R. M.

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process. Mag. 22(1), 44–54 (2005).
[Crossref]

Mirza, M.

M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784 (2014).

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, (2014), pp. 2672–2680.

Nakamura, H.

T. Kijima, H. Nakamura, J. T. Compton, and J. F. Hamilton, “Image sensor with improved light sensitivity,” U.S. patent 7,688,368 (2010).

Napoletano, P.

S. Bianco, R. Cadene, L. Celona, and P. Napoletano, “Benchmark analysis of representative deep neural network architectures,” IEEE Access 6, 64270–64277 (2018).

Olgac, A. V.

B. Karlik and A. V. Olgac, “Performance analysis of various activation functions in generalized mlp architectures of neural networks,” Int. J. Artif. Intell. Expert. Syst. 1(4), 111–122 (2011).

Osindero, S.

M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784 (2014).

Ozair, S.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, (2014), pp. 2672–2680.

Pan, S. J.

S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010).

Paris, S.

M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans.Graph. 35(6), 191 (2016).

Park, J. M.

P. K. Park, B. H. Cho, J. M. Park, K. Lee, H. Y. Kim, H. A. Kang, H. G. Lee, J. Woo, Y. Roh, and W. J. Lee, “Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,” in Proceedings of IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1624–1628.
[Crossref]

Park, P. K.

P. K. Park, B. H. Cho, J. M. Park, K. Lee, H. Y. Kim, H. A. Kang, H. G. Lee, J. Woo, Y. Roh, and W. J. Lee, “Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,” in Proceedings of IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1624–1628.
[Crossref]

Parmar, M.

M. Parmar and B. A. Wandell, “Interleaved imaging: an imaging system design inspired by rod-cone vision,” in Digital Photography V, (International Society for Optics and Photonics, 2009), p. 725008.
[Crossref]

Pennington, J.

J. Pennington, S. Schoenholz, and S. Ganguli, “Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice,” in Advances in Neural Information Processing Systems, (2017), pp. 4785–4795.

Peters, N.

Pouget-Abadie, J.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, (2014), pp. 2672–2680.

Qi, X.

X. Li, H. Chen, X. Qi, Q. Dou, C.-W. Fu, and P.-A. Heng, “H-denseunet: Hybrid densely connected unet for liver and tumor segmentation from ct volumes,” IEEE Trans. Med. Imag. 37(12), 2663–2674 (2018).

Reid, M.

R. H. Steinberg, M. Reid, and P. L. Lacy, “The distribution of rods and cones in the retina of the cat (felis domesticus),” J. Comp. Neurol. 148(2), 229–248 (1973).
[Crossref] [PubMed]

Ro, Y. M.

Y. J. Jung, H. Sohn, S.-I. Lee, F. Speranza, and Y. M. Ro, “Visual importance-and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

Y. J. Jung, H. Sohn, and Y. M. Ro, “Visual discomfort visualizer using stereo vision and time-of-flight depth cameras,” IEEE Trans. Consum. Electron. 58(2), 246–254 (2012).

Roh, Y.

P. K. Park, B. H. Cho, J. M. Park, K. Lee, H. Y. Kim, H. A. Kang, H. G. Lee, J. Woo, Y. Roh, and W. J. Lee, “Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,” in Proceedings of IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1624–1628.
[Crossref]

Ronneberger, O.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3d u-net: learning dense volumetric segmentation from sparse annotation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2016), pp. 424–432.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2015), pp. 234–241.

Sadowski, P.

F. Agostinelli, M. Hoffman, P. Sadowski, and P. Baldi, “Learning activation functions to improve deep neural networks,” arXiv preprint arXiv:1412.6830 (2014).

Sapiro, G.

M. Bertalmio, A. L. Bertozzi, and G. Sapiro, “Navier-stokes, fluid dynamics, and image and video inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2001), pp. 335–362.

Schafer, R. W.

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process. Mag. 22(1), 44–54 (2005).
[Crossref]

Schoenholz, S.

J. Pennington, S. Schoenholz, and S. Ganguli, “Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice,” in Advances in Neural Information Processing Systems, (2017), pp. 4785–4795.

Sharlet, D.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35(6), 192 (2016).

Sigernes, F.

Singh, M.

M. Singh and T. Singh, “Linear universal demosaicking of regular pattern color filter arrays,” in Proceedings on IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (IEEE, 2012), pp. 1277–1280.

Singh, T.

M. Singh and T. Singh, “Linear universal demosaicking of regular pattern color filter arrays,” in Proceedings on IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (IEEE, 2012), pp. 1277–1280.

Socher, R.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2009), pp. 248–255.

Sohn, H.

Y. J. Jung, H. Sohn, S.-I. Lee, F. Speranza, and Y. M. Ro, “Visual importance-and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

Y. J. Jung, H. Sohn, and Y. M. Ro, “Visual discomfort visualizer using stereo vision and time-of-flight depth cameras,” IEEE Trans. Consum. Electron. 58(2), 246–254 (2012).

Speranza, F.

Y. J. Jung, H. Sohn, S.-I. Lee, F. Speranza, and Y. M. Ro, “Visual importance-and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

Steinberg, R. H.

R. H. Steinberg, M. Reid, and P. L. Lacy, “The distribution of rods and cones in the retina of the cat (felis domesticus),” J. Comp. Neurol. 148(2), 229–248 (1973).
[Crossref] [PubMed]

Svenøe, T.

Telea, A.

A. Telea, “An image inpainting technique based on the fast marching method,” J. Graph. Tools 9(1), 23–34 (2004).

Tian, Q.

H. Jiang, Q. Tian, J. Farrell, and B. A. Wandell, “Learning the image processing pipeline,” IEEE Trans. Image Process. 26(10), 5032–5042 (2017).
[Crossref]

Q. Tian, S. Lansel, J. E. Farrell, and B. A. Wandell, “Automating the design of image processing pipelines for novel color filter arrays: Local, linear, learned (l3) method,” in Digital Photography X, vol. 9023 (International Society for Optics and Photonics, 2014), p. 90230K.
[Crossref]

Ulyanov, D.

D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv preprint arXiv:1607.08022 (2016).

Van Der Maaten, L.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 4700–4708.

Vedaldi, A.

D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv preprint arXiv:1607.08022 (2016).

Wandell, B. A.

H. Jiang, Q. Tian, J. Farrell, and B. A. Wandell, “Learning the image processing pipeline,” IEEE Trans. Image Process. 26(10), 5032–5042 (2017).
[Crossref]

M. Parmar and B. A. Wandell, “Interleaved imaging: an imaging system design inspired by rod-cone vision,” in Digital Photography V, (International Society for Optics and Photonics, 2009), p. 725008.
[Crossref]

Q. Tian, S. Lansel, J. E. Farrell, and B. A. Wandell, “Automating the design of image processing pipelines for novel color filter arrays: Local, linear, learned (l3) method,” in Digital Photography X, vol. 9023 (International Society for Optics and Photonics, 2014), p. 90230K.
[Crossref]

Wang, J.

C. Zhang, Y. Li, J. Wang, and P. Hao, “Universal demosaicking of color filter arrays,” IEEE Trans. Image Process. 25(11), 5173–5186 (2016).
[Crossref] [PubMed]

Wang, L.

H. Liu, Y. Wang, and L. Wang, “The effect of light conditions on photoplethysmographic image acquisition using a commercial camera,” IEEE J Transl Eng Heal. Med 2, 1–11 (2014).
[Crossref]

Wang, Y.

H. Liu, Y. Wang, and L. Wang, “The effect of light conditions on photoplethysmographic image acquisition using a commercial camera,” IEEE J Transl Eng Heal. Med 2, 1–11 (2014).
[Crossref]

Warde-Farley, D.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, (2014), pp. 2672–2680.

Weinberger, K. Q.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 4700–4708.

Woo, J.

P. K. Park, B. H. Cho, J. M. Park, K. Lee, H. Y. Kim, H. A. Kang, H. G. Lee, J. Woo, Y. Roh, and W. J. Lee, “Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,” in Proceedings of IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1624–1628.
[Crossref]

Wu, X.

L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20(2), 023016 (2011).

L. Zhang and X. Wu, “Color demosaicking via directional linear minimum mean square-error estimation,” IEEE Trans. Image Process. 14(12), 2167–2178 (2005).
[Crossref] [PubMed]

P. Bao, L. Zhang, and X. Wu, “Canny edge detection enhancement by scale multiplication,” IEEE Trans. Pattern Anal. Mach. Intell. 27(9), 1485–1490 (2005).
[PubMed]

Xu, B.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, (2014), pp. 2672–2680.

Yang, Q.

S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010).

Yu, J.

J. Li, C. Bai, Z. Lin, and J. Yu, “Automatic design of high-sensitivity color filter arrays with panchromatic pixels,” IEEE Trans. Image Process. 26(2), 870–883 (2017).
[Crossref] [PubMed]

Yu, T.

R. Zhang, J.-Y. Zhu, P. Isola, X. Geng, A. S. Lin, T. Yu, and A. A. Efros, “Real-time user-guided image colorization with learned deep priors,” ACM Trans. Graph. 36(4), 119 (2017).
[Crossref]

Zhang, C.

C. Zhang, Y. Li, J. Wang, and P. Hao, “Universal demosaicking of color filter arrays,” IEEE Trans. Image Process. 25(11), 5173–5186 (2016).
[Crossref] [PubMed]

Zhang, L.

L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20(2), 023016 (2011).

L. Zhang and X. Wu, “Color demosaicking via directional linear minimum mean square-error estimation,” IEEE Trans. Image Process. 14(12), 2167–2178 (2005).
[Crossref] [PubMed]

P. Bao, L. Zhang, and X. Wu, “Canny edge detection enhancement by scale multiplication,” IEEE Trans. Pattern Anal. Mach. Intell. 27(9), 1485–1490 (2005).
[PubMed]

X. Li, B. Gunturk, and L. Zhang, “Image demosaicing: A systematic survey,” in Visual Communications and Image Processing 2008, (International Society for Optics and Photonics, 2008), p. 68221J.

Zhang, R.

R. Zhang, J.-Y. Zhu, P. Isola, X. Geng, A. S. Lin, T. Yu, and A. A. Efros, “Real-time user-guided image colorization with learned deep priors,” ACM Trans. Graph. 36(4), 119 (2017).
[Crossref]

Zhao, H.

H. Zhao, O. Gallo, I. Frosio, and J. Kautz, “Loss functions for image restoration with neural networks,” IEEE Trans. Comp. Imaging 3(1), 47–57 (2017).

Zhou, T.

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 1125–1134.

Zhu, J.-Y.

R. Zhang, J.-Y. Zhu, P. Isola, X. Geng, A. S. Lin, T. Yu, and A. A. Efros, “Real-time user-guided image colorization with learned deep priors,” ACM Trans. Graph. 36(4), 119 (2017).
[Crossref]

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 1125–1134.

Zickler, T.

A. Chakrabarti, W. T. Freeman, and T. Zickler, “Rethinking color cameras,” in Proceedings of IEEE International Conference on Computational Photography (ICCP), (IEEE, 2014), pp. 1–8.

ACM Trans. Graph. (2)

R. Zhang, J.-Y. Zhu, P. Isola, X. Geng, A. S. Lin, T. Yu, and A. A. Efros, “Real-time user-guided image colorization with learned deep priors,” ACM Trans. Graph. 36(4), 119 (2017).
[Crossref]

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35(6), 192 (2016).

ACM Trans.Graph. (1)

M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans.Graph. 35(6), 191 (2016).

IEEE Access (1)

S. Bianco, R. Cadene, L. Celona, and P. Napoletano, “Benchmark analysis of representative deep neural network architectures,” IEEE Access 6, 64270–64277 (2018).

IEEE J Transl Eng Heal. Med (1)

H. Liu, Y. Wang, and L. Wang, “The effect of light conditions on photoplethysmographic image acquisition using a commercial camera,” IEEE J Transl Eng Heal. Med 2, 1–11 (2014).
[Crossref]

IEEE Signal Process. Mag. (1)

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process. Mag. 22(1), 44–54 (2005).
[Crossref]

IEEE Trans. Circ. Syst. Video Tech. (1)

Y. J. Jung, H. Sohn, S.-I. Lee, F. Speranza, and Y. M. Ro, “Visual importance-and discomfort region-selective low-pass filtering for reducing visual discomfort in stereoscopic displays,” IEEE Trans. Circ. Syst. Video Tech. 23(8), 1408–1421 (2013).
[Crossref]

IEEE Trans. Comp. Imaging (1)

H. Zhao, O. Gallo, I. Frosio, and J. Kautz, “Loss functions for image restoration with neural networks,” IEEE Trans. Comp. Imaging 3(1), 47–57 (2017).

IEEE Trans. Consum. Electron. (1)

Y. J. Jung, H. Sohn, and Y. M. Ro, “Visual discomfort visualizer using stereo vision and time-of-flight depth cameras,” IEEE Trans. Consum. Electron. 58(2), 246–254 (2012).

IEEE Trans. Image Process. (5)

C. Zhang, Y. Li, J. Wang, and P. Hao, “Universal demosaicking of color filter arrays,” IEEE Trans. Image Process. 25(11), 5173–5186 (2016).
[Crossref] [PubMed]

B. Leung, G. Jeon, and E. Dubois, “Least-squares luma–chroma demultiplexing algorithm for bayer demosaicking,” IEEE Trans. Image Process. 20(7), 1885–1894 (2011).
[Crossref] [PubMed]

L. Zhang and X. Wu, “Color demosaicking via directional linear minimum mean square-error estimation,” IEEE Trans. Image Process. 14(12), 2167–2178 (2005).
[Crossref] [PubMed]

J. Li, C. Bai, Z. Lin, and J. Yu, “Automatic design of high-sensitivity color filter arrays with panchromatic pixels,” IEEE Trans. Image Process. 26(2), 870–883 (2017).
[Crossref] [PubMed]

H. Jiang, Q. Tian, J. Farrell, and B. A. Wandell, “Learning the image processing pipeline,” IEEE Trans. Image Process. 26(10), 5032–5042 (2017).
[Crossref]

IEEE Trans. Knowl. Data Eng. (1)

S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010).

IEEE Trans. Med. Imag. (1)

X. Li, H. Chen, X. Qi, Q. Dou, C.-W. Fu, and P.-A. Heng, “H-denseunet: Hybrid densely connected unet for liver and tumor segmentation from ct volumes,” IEEE Trans. Med. Imag. 37(12), 2663–2674 (2018).

IEEE Trans. Pattern Anal. Mach. Intell. (1)

P. Bao, L. Zhang, and X. Wu, “Canny edge detection enhancement by scale multiplication,” IEEE Trans. Pattern Anal. Mach. Intell. 27(9), 1485–1490 (2005).
[PubMed]

Int. J. Artif. Intell. Expert. Syst. (1)

B. Karlik and A. V. Olgac, “Performance analysis of various activation functions in generalized mlp architectures of neural networks,” Int. J. Artif. Intell. Expert. Syst. 1(4), 111–122 (2011).

J. Comp. Neurol. (1)

R. H. Steinberg, M. Reid, and P. L. Lacy, “The distribution of rods and cones in the retina of the cat (felis domesticus),” J. Comp. Neurol. 148(2), 229–248 (1973).
[Crossref] [PubMed]

J. Electron. Imaging (1)

L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20(2), 023016 (2011).

J. Graph. Tools (1)

A. Telea, “An image inpainting technique based on the fast marching method,” J. Graph. Tools 9(1), 23–34 (2004).

Opt. Express (2)

Signal Process. Image Commun. (1)

D. Menon and G. Calvagno, “Color image demosaicking: An overview,” Signal Process. Image Commun. 26(8–9), 518–533 (2011).
[Crossref]

Other (30)

A. Chakrabarti, W. T. Freeman, and T. Zickler, “Rethinking color cameras,” in Proceedings of IEEE International Conference on Computational Photography (ICCP), (IEEE, 2014), pp. 1–8.

M. Singh and T. Singh, “Linear universal demosaicking of regular pattern color filter arrays,” in Proceedings on IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (IEEE, 2012), pp. 1277–1280.

A. Chakrabarti, “Learning sensor multiplexing design through back-propagation,” in Advances in Neural Information Processing Systems, (2016), pp. 3081–3089.

I. Hirota, “Solid-state imaging device, method for processing signal of solid-state imaging device, and imaging apparatus,” U.S. patent 9,392,238 (2016).

T. Kijima, H. Nakamura, J. T. Compton, and J. F. Hamilton, “Image sensor with improved light sensitivity,” U.S. patent 7,688,368 (2010).

P. K. Park, B. H. Cho, J. M. Park, K. Lee, H. Y. Kim, H. A. Kang, H. G. Lee, J. Woo, Y. Roh, and W. J. Lee, “Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,” in Proceedings of IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1624–1628.
[Crossref]

J. Canny, “A computational approach to edge detection,” in Readings in Computer Vision, (Elsevier, 1987), pp. 184–203.

F. Agostinelli, M. Hoffman, P. Sadowski, and P. Baldi, “Learning activation functions to improve deep neural networks,” arXiv preprint arXiv:1412.6830 (2014).

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, (2014), pp. 2672–2680.

M. Parmar and B. A. Wandell, “Interleaved imaging: an imaging system design inspired by rod-cone vision,” in Digital Photography V, (International Society for Optics and Photonics, 2009), p. 725008.

Q. Tian, S. Lansel, J. E. Farrell, and B. A. Wandell, “Automating the design of image processing pipelines for novel color filter arrays: local, linear, learned (L3) method,” in Digital Photography X, vol. 9023 (International Society for Optics and Photonics, 2014), p. 90230K.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: learning dense volumetric segmentation from sparse annotation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2016), pp. 424–432.

H. W. Jang and Y. J. Jung, College of Information Technology, Gachon University, Sujeong-Gu, Seongnam 13120, South Korea, are preparing a manuscript to be called “Deep color transfer for color-plus-mono dual camera.”

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 4700–4708.

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2015), pp. 234–241.

J. Pennington, S. Schoenholz, and S. Ganguli, “Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice,” in Advances in Neural Information Processing Systems, (2017), pp. 4785–4795.

M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784 (2014).

M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” arXiv preprint arXiv:1511.05440 (2015).

D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv preprint arXiv:1607.08022 (2016).

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2009), pp. 248–255.

“Rethink color camera code,” http://vision.seas.harvard.edu/colorsensor/. Accessed: 2019-07-07.

“Automatic image colorization code,” https://richzhang.github.io/colorization/. Accessed: 2019-07-07.

“Learning sensor multiplexing design through back-propagation code,” https://github.com/ayanc/learncfa/. Accessed: 2019-07-07.

M. Bertalmio, A. L. Bertozzi, and G. Sapiro, “Navier-Stokes, fluid dynamics, and image and video inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2001), pp. 335–362.

ISO-12233, “Photography - electronic still picture imaging - resolution and spatial frequency responses,” (2014).

“FlyCapture SDK,” https://www.ptgrey.com/flycapture-sdk. Accessed: 2019-04-07.

X. Li, B. Gunturk, and L. Zhang, “Image demosaicing: A systematic survey,” in Visual Communications and Image Processing 2008, (International Society for Optics and Photonics, 2008), p. 68221J.


P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 1125–1134.


Figures (20)

Fig. 1 Examples of sparse color filter patterns. (a) Existing CFZ-14 pattern with a sparse ratio (89% panchromatic pixels and 11% RGB pixels). (b) Extended version of CFZ-14 with a very sparse ratio (98.86% panchromatic pixels and 1.13% RGB pixels), used in our experiments. In the figures, W denotes panchromatic white pixels.

Fig. 2 Visual artifacts generated by the existing methods for sparse color reconstruction. (a) Ground truth (the inset is a zoomed-in image of the marked area). (b) Input image (zoom). (c) Chakrabarti-14 [15]. (d) Zhang [23]. (e) Chakrabarti-16 [18]. (f) sNet (proposed). (g) sGAN (proposed).

Fig. 3 Overall framework of the proposed color reconstruction method. LRN: luminance recovery network. CRN: color reconstruction network.

Fig. 4 Input and output samples for Stage I. (a) Input luminance image with missing pixels. (b) Recovered luminance image. (c) Edge map extracted from the recovered image.
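
The edge-map step shown in Fig. 4(c) can be illustrated in a few lines of code. The sketch below is a minimal example assuming a Canny detector (Canny's method is among the cited references); the input file name and the two hysteresis thresholds are placeholders, not the settings used in the paper.

```python
import cv2

# Sketch of the Stage-I edge-map extraction illustrated in Fig. 4(c).
# Assumption: a Canny detector is applied to the recovered luminance
# image; the file name and both thresholds are illustrative only.
recovered = cv2.imread("recovered_luminance.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(recovered, threshold1=50, threshold2=150)
cv2.imwrite("edge_map.png", edges)
```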
Fig. 5 Model architecture of luminance recovery network (LRN).

Fig. 6 Model architecture of color reconstruction network (CRN).

Fig. 7 Model architecture of the adversarial block (i.e., discriminator).

Fig. 8 Color bleeding improvement through sNet and sGAN. (a) Ground truth (full image). (b) Ground truth (zoom). (c) Chakrabarti-14 [15]. (d) Zhang [23]. (e) Chakrabarti-16 [18]. (f) sNet (proposed). (g) sGAN (proposed).

Fig. 9 Color bleeding improvement through sNet and sGAN. (a) Ground truth (full image). (b) Ground truth (zoom). (c) Chakrabarti-14 [15]. (d) Zhang [23]. (e) Chakrabarti-16 [18]. (f) sNet (proposed). (g) sGAN (proposed).

Fig. 10 False-color improvement through the sGAN. (a) Ground truth (full image). (b) Ground truth (zoom). (c) Chakrabarti-14 [15]. (d) Zhang [23]. (e) Chakrabarti-16 [18]. (f) sNet (proposed). (g) sGAN (proposed).

Fig. 11 False-color improvement through the sGAN. (a) Ground truth (full image). (b) Ground truth (zoom). (c) Chakrabarti-14 [15]. (d) Zhang [23]. (e) Chakrabarti-16 [18]. (f) sNet (proposed). (g) sGAN (proposed).

Fig. 12 Example illustration of the random CFA pattern with 1% color pixels used in our experiments. W denotes panchromatic white pixels. R, G, and B denote red, green, and blue color pixels, respectively.
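
To make the sampling behind Fig. 12 concrete, the following NumPy sketch draws a random sparse CFA mask with roughly 1% color pixels. The uniform random placement follows the caption; the equal split among R, G, and B and the fixed seed are assumptions made for illustration.

```python
import numpy as np

def random_sparse_cfa(height, width, color_ratio=0.01, seed=0):
    # Start from an all-panchromatic (W) mosaic, then randomly
    # convert ~color_ratio of the pixels into R, G, or B sites.
    # The equal R/G/B split is an assumption, not the paper's spec.
    rng = np.random.default_rng(seed)
    mask = np.full((height, width), "W", dtype="<U1")
    n_color = int(round(height * width * color_ratio))
    sites = rng.choice(height * width, size=n_color, replace=False)
    mask.flat[sites] = rng.choice(list("RGB"), size=n_color)
    return mask

pattern = random_sparse_cfa(512, 512)
print((pattern != "W").mean())  # ~0.01, i.e., about 1% color pixels
```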
Fig. 13 Example results reconstructed from random 1% sparse RGB pixels. (a) Ground truth. (b) Reconstructed image. (c) Zoomed versions of the ground truth. (d) Zoomed versions of the reconstructed image. (e) Zoomed versions of the ground truth (2000% magnification). (f) Zoomed versions of the reconstructed image (2000% magnification). (g) Zoomed versions of the ground truth. (h) Zoomed versions of the reconstructed image. (i) Zoomed versions of the ground truth (2000% magnification). (j) Zoomed versions of the reconstructed image (2000% magnification).

Fig. 14 Example results reconstructed from random 1% sparse RGB pixels. (a) Ground truth. (b) Reconstructed image. (c) Zoomed versions of the ground truth. (d) Zoomed versions of the reconstructed image.

Fig. 15 Reconstruction results from random 1% sparse RGB pixels using a standard simple test image with high-frequency patterns (a slanted-bar image). (a) Ground truth. (b) Reconstructed image. (c) Zoomed versions of the ground truth. (d) Zoomed versions of the reconstructed image.

Fig. 16 Hybrid camera system using two cameras and a beam splitter. (a) Side view. (b) Top view. (c) Example monochrome image captured by the monochrome camera. (d) Example RGB image with the same field of view captured by the color camera.

Fig. 17 Well-lit comparison at 86 lux. The figures in the first row show the full versions of the images captured with an RGB sensor (left) and reconstructed with a sparse color sensor prototype (right). The figures in the second row show the zoomed regions. In every pair, the left image shows the ground-truth image captured with an RGB sensor, and the right image shows the result obtained by the color reconstruction method.

Fig. 18 Low-light comparison at 12 lux. In each pair, the left image shows the ground-truth image captured with an RGB sensor, and the right image shows the result obtained by the color reconstruction method.

Fig. 19 Extreme low-light comparison at 6 lux. In each pair, the left image shows the ground-truth image captured with an RGB sensor, and the right image shows the result obtained by the sparse color reconstruction method.

Fig. 20 Example results that demonstrate the denoising capability of the proposed model while reconstructing the full-color image. (a) Left: input RGB image (without noise); right: reconstructed image. (b) Left: input RGB image with low noise (standard deviation of 0.03); right: denoised and reconstructed image. (c) Left: input RGB image with high noise (standard deviation of 0.08); right: denoised and reconstructed image.
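
The noisy inputs referred to in Fig. 20 can be simulated as follows. This sketch assumes zero-mean additive Gaussian noise on images scaled to [0, 1], with the standard deviations 0.03 and 0.08 named in the caption; the clipping step is an assumption about how out-of-range values are handled.

```python
import numpy as np

def add_gaussian_noise(img, sigma, seed=0):
    # img: float RGB image scaled to [0, 1].
    # sigma = 0.03 and 0.08 correspond to the "low" and "high"
    # noise levels of Fig. 20(b) and (c).
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)
```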

Tables (3)


Table 1 Mean CPSNR (dB) Calculated with Kodak and IMAX Datasets.


Table 2 Mean CIELAB Color Difference (ΔE) Calculated with Kodak and IMAX Datasets.


Table 3 Specification of the Two Cameras Used for Our Prototype.
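
A minimal sketch of the metric behind Table 2 follows, computed with scikit-image. The CIE76 ΔE formula is assumed here, since the table caption does not state which ΔE variant was used.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_cie76

def mean_delta_e(rgb_gt, rgb_rec):
    # Both inputs are float RGB images scaled to [0, 1].
    # Convert to CIELAB and average the per-pixel CIE76 difference.
    lab_gt = rgb2lab(rgb_gt)
    lab_rec = rgb2lab(rgb_rec)
    return float(np.mean(deltaE_cie76(lab_gt, lab_rec)))
```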

Equations (6)


$$\mathcal{L}_{\mathrm{LRN}}(P_G, P_R) = \left\lVert P_G - P_R \right\rVert_1 + \mathrm{SSIM}(P_G, P_R)$$

$$\mathcal{L}_{\mathrm{cGAN}}(G, D) = \mathbb{E}_{X,Y}\left[\log D(X, Y)\right] + \mathbb{E}_{X,Z}\left[\log\left(1 - D(X, G(X, Z))\right)\right]$$

$$\mathcal{L}_{\mathrm{CRN}}(G) = \left\lVert Y - G(X, Z) \right\rVert_1 + \mathrm{SSIM}(Y, G(X, Z))$$

$$\mathcal{L}_{\mathrm{CRN}}(I_G, I_R) = \left\lVert I_G - I_R \right\rVert_1 + \mathrm{SSIM}(I_G, I_R)$$

$$G^{*} = \arg\min_{G}\max_{D}\ \mathcal{L}_{\mathrm{cGAN}}(G, D) + \lambda\,\mathcal{L}_{\mathrm{CRN}}(G)$$

$$\mathrm{CPSNR}(I_G, I_R) = 10 \log_{10} \frac{255^2}{\frac{1}{3HW}\sum_{c=1}^{3}\sum_{i=1}^{H}\sum_{j=1}^{W}\left\lVert I_G(i,j,c) - I_R(i,j,c)\right\rVert_2^2}$$
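
As a worked instance of the CPSNR definition above, a short NumPy sketch (assuming 8-bit ground-truth and reconstructed images, consistent with the 255 peak value in the formula):

```python
import numpy as np

def cpsnr(i_gt, i_rec):
    # CPSNR per the equation above: the squared error is averaged
    # over all three color channels and all H x W pixels (the 1/3HW
    # factor), then referred to the 8-bit peak value of 255.
    diff = i_gt.astype(np.float64) - i_rec.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```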
