Abstract

Hyperspectral imaging provides rich spatial-spectral-temporal information with wide applications. However, most existing hyperspectral imaging systems require light-splitting/filtering devices for spectral modulation, making them complex and expensive and sacrificing spatial or temporal resolution. In this paper, we report an end-to-end deep learning method that reconstructs hyperspectral images directly from a raw mosaic image. It eliminates the separate demosaicing step required by other methods, which first reconstructs the full-resolution RGB data from the raw mosaic image, thereby reducing computational complexity and accumulated error. Three networks were designed based on state-of-the-art models in the literature: a residual network, a multiscale network, and a parallel-multiscale network. They were trained and tested on public hyperspectral image datasets. Benefiting from the parallel propagation and fusion of feature maps at different resolutions, the parallel-multiscale network performs best among the three, with an average peak signal-to-noise ratio of 46.83 dB. The reported method can be directly integrated to boost an RGB camera for hyperspectral imaging.
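For a concrete picture of the first design, the following is a minimal PyTorch sketch of a residual network that maps a single-channel raw mosaic image to a hyperspectral cube. It is an illustration only, not the authors' exact architecture: the block count, feature width, and the 31-band output are assumptions chosen to match common hyperspectral settings.

```python
# Minimal sketch of a mosaic-to-hyperspectral residual network (illustrative).
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # identity skip connection

class MosaicToHSI(nn.Module):
    """Maps a single-channel raw mosaic to an n-band hyperspectral cube."""
    def __init__(self, bands=31, ch=64, blocks=8):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)      # lift mosaic to features
        self.body = nn.Sequential(*[ResBlock(ch) for _ in range(blocks)])
        self.tail = nn.Conv2d(ch, bands, 3, padding=1)  # project to spectral bands

    def forward(self, mosaic):
        return self.tail(self.body(self.head(mosaic)))

# Example: a 1 x 1 x 128 x 128 raw mosaic in, a 1 x 31 x 128 x 128 cube out.
net = MosaicToHSI()
cube = net(torch.randn(1, 1, 128, 128))
print(cube.shape)  # torch.Size([1, 31, 128, 128])
```

The multiscale and parallel-multiscale variants differ mainly in how feature maps of different resolutions are propagated and fused (see Fig. 2), not in this basic mosaic-in, cube-out interface.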

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement




Figures (7)

Fig. 1. Comparison among different hyperspectral imaging methods using an RGB camera. Conventional methods require a demosaicing algorithm to first reconstruct the full-resolution RGB images from the raw mosaic image, and then apply different algorithms for hyperspectral reconstruction. The reported method recovers hyperspectral images directly from the raw mosaic image with end-to-end learning. CNN stands for convolutional neural network.

Fig. 2. Structures of the three designed networks: the residual network, the multiscale network, and the parallel-multiscale network. All of the networks take the single raw mosaic image as input and output the reconstructed hyperspectral images.

Fig. 3. Comparison of the three networks' reconstruction stability across different spectral channels and samples. The solid curves show the average PSNR of each channel, and the corresponding colored areas indicate the PSNR range.

Fig. 4. Reconstructed hyperspectral images and corresponding error maps of five selected channels using the reported mosaic-to-hyperspectral method. From left to right: ground-truth reference images, results of the residual network and corresponding error maps, results of the multiscale network and corresponding error maps, and results of the parallel-multiscale network and corresponding error maps.

Fig. 5. Reconstructed spectra at 4 selected spatial locations from 4 scenes. The X axis represents wavelength (nm), and the Y axis represents spectral intensity.

Fig. 6. Experimental results of imaging a color checker. (a) The RGB camera used in our experiment. (b) Its calibrated RGB spectral responses. (c) The Macbeth color checker. (d)-(f) Reconstructed spectra of 3 exemplar color blocks. For (b) and (d)-(f), the X axis represents wavelength (nm), and the Y axis represents spectral intensity.

Fig. 7. Reconstruction results of an outdoor scene using the reported mosaic-to-hyperspectral method. The first row presents the target scene, the acquired raw mosaic image, and the reconstructed spectra at two locations. The X axis represents wavelength (nm), and the Y axis represents spectral intensity. Ref abbreviates reference, Res denotes the residual network, MS the multiscale network, and PMS the parallel-multiscale network. The reconstructed hyperspectral images of three channels and the corresponding error maps are shown below for comparison.

Tables (1)


Table 1. Reconstruction performance and network complexity of the three reported networks. M stands for million, and B stands for billion.

Equations (2)


$$I(x, y) = \sum_{i=1}^{n} F(x, y, \lambda_i)\, S(x, y, \lambda_i)\, L(\lambda_i),$$
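To make the image-formation model above concrete, here is a small numpy sketch: each raw mosaic pixel integrates the scene spectrum S weighted by that pixel's color-filter response F and the illumination L, summed over the sampled wavelengths. The RGGB Bayer layout and the Gaussian response curves are illustrative assumptions, not the camera's calibrated responses shown in Fig. 6(b).

```python
# Illustrative simulation of raw mosaic formation from a hyperspectral cube.
import numpy as np

n_bands, h, w = 31, 4, 4
wavelengths = np.linspace(400, 700, n_bands)   # lambda_i in nm
S = np.random.rand(h, w, n_bands)              # scene spectra S(x, y, lambda_i)
L = np.ones(n_bands)                           # flat illumination L(lambda_i)

def gaussian(center, width=40.0):
    # Hypothetical filter response curve centered at `center` nm.
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

rgb_response = np.stack([gaussian(600), gaussian(540), gaussian(460)])  # R, G, B

# Per-pixel filter response F(x, y, lambda_i) for an assumed RGGB Bayer pattern.
F = np.empty((h, w, n_bands))
F[0::2, 0::2] = rgb_response[0]   # R
F[0::2, 1::2] = rgb_response[1]   # G
F[1::2, 0::2] = rgb_response[1]   # G
F[1::2, 1::2] = rgb_response[2]   # B

I = np.sum(F * S * L, axis=-1)    # raw mosaic image I(x, y)
print(I.shape)                    # (4, 4)
```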
$$\mathrm{MSE} = \frac{1}{m \times n} \sum_{j=1}^{m} \sum_{i=1}^{n} \left\| \hat{S}_j(x, y, \lambda_i) - S_j(x, y, \lambda_i) \right\|^2,$$
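The second equation is the training loss comparing the reconstructed and ground-truth cubes; the PSNR figures quoted above (e.g., 46.83 dB) follow from the same squared error. A minimal sketch, assuming signals normalized to [0, 1]; the helper names are hypothetical:

```python
# MSE loss and the standard PSNR metric derived from it.
import numpy as np

def mse(pred, gt):
    # Mean squared error averaged over all samples of the hyperspectral cube.
    return np.mean((pred - gt) ** 2)

def psnr(pred, gt, peak=1.0):
    # Peak signal-to-noise ratio in dB for signals in [0, peak].
    return 10.0 * np.log10(peak ** 2 / mse(pred, gt))

gt = np.random.rand(31, 128, 128)                # ground-truth cube
pred = gt + 0.005 * np.random.randn(*gt.shape)   # a hypothetical reconstruction
print(f"PSNR: {psnr(pred, gt):.2f} dB")
```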
