Abstract

This paper proposes a novel multispectral video acquisition method for dynamic scenes using a spectral-sweep camera. To fully exploit the redundancy of multispectral videos in the spatial, temporal, and spectral dimensions, we propose a Complex Optical Flow (COF) method that extracts the spatial and spectral signal variations between adjacent spectral-sweep frames. A complex $L_1$-norm constrained optimization algorithm is proposed to compute the COF maps, with which we recover the entire multispectral video by temporally propagating the captured spectral-sweep frames under the guidance of the reconstructed COF maps. We demonstrate, both quantitatively and qualitatively, the promising accuracy of our method in reconstructing multispectral videos at full spatial and temporal sensor resolution. Compared with state-of-the-art multispectral imagers, our computational multispectral imaging system significantly reduces hardware complexity while achieving comparable or even better performance.
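The propagation step summarized above can be illustrated with a minimal sketch: each COF vector is assumed to carry a spatial displacement plus a per-pixel intensity ratio modelling the band-to-band signal change, and a captured frame is backward-warped and rescaled accordingly. This is purely illustrative of the flow-guided propagation idea, not the authors' implementation; the function name, the separation into `flow_uv` and `spectral_gain`, and the bilinear sampling are assumptions.

```python
import numpy as np

def propagate_frame(frame, flow_uv, spectral_gain):
    """Warp a captured spectral-sweep frame to an adjacent time step.

    frame         : (H, W) intensity image captured at one wavelength
    flow_uv       : (H, W, 2) spatial displacements (the spatial part of the COF)
    spectral_gain : (H, W) per-pixel intensity ratio (the spectral part of the COF)
    """
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    # Backward warp: sample the source frame at the flow-displaced position,
    # clipping to the image border.
    xq = np.clip(xs + flow_uv[..., 0], 0, W - 1)
    yq = np.clip(ys + flow_uv[..., 1], 0, H - 1)
    x0, y0 = np.floor(xq).astype(int), np.floor(yq).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = xq - x0, yq - y0
    # Bilinear interpolation of the four neighbouring samples.
    warped = ((1 - wx) * (1 - wy) * frame[y0, x0]
              + wx * (1 - wy) * frame[y0, x1]
              + (1 - wx) * wy * frame[y1, x0]
              + wx * wy * frame[y1, x1])
    # Apply the spectral term: a multiplicative change in signal level
    # between adjacent spectral-sweep frames.
    return spectral_gain * warped
```

With zero flow and unit gain the function is the identity; nonzero flow translates the content and the gain map rescales it, which is the sense in which the captured frames are propagated under COF guidance.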

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement




Lamb, T.

S. Denman, T. Lamb, C. Fookes, V. Chandran, and S. Sridharan, “Multi-spectral fusion for surveillance systems,” Comput. Electr. Eng. 36(4), 643–663 (2010).
[Crossref]

Lapray, P.-J.

P.-J. Lapray, X. Wang, J.-B. Thomas, and P. Gouton, “Multispectral filter arrays: Recent advances and practical implementation,” Sensors 14(11), 21626–21659 (2014).
[Crossref]

Lee, H.

H. Lee and M. H. Kim, “Building a two-way hyperspectral imaging system with liquid crystal tunable filters,” in Proceedings of International Conference on Image and Signal Processing (Springer, 2014), pp. 26–34.

Lee, M. Y.

Y. Shechtman, L. E. Weiss, A. S. Backer, M. Y. Lee, and W. Moerner, “Multicolour localization microscopy by point-spread-function engineering,” Nat. Photonics 10(9), 590–594 (2016).
[Crossref]

Lee, S.

S. Cho, J. Wang, and S. Lee, “Video deblurring for hand-held cameras using patch-based synthesis,” ACM Trans. Graph. 31(4), 1–9 (2012).
[Crossref]

Lensch, H. P.

J. Chen, M. Hirsch, B. Eberhardt, and H. P. Lensch, “A computational camera with programmable optics for snapshot high-resolution multispectral imaging,” in Proceedings of Asian Conference on Computer Vision, (IEEE, 2018), pp. 685–699

León-López, K. M.

K. M. León-López, L. V. G. Carreño, and H. A. Fuentes, “Temporal colored coded aperture design in compressive spectral video sensing,” IEEE Trans. on Image Process. 28, 253–264 (2019).
[Crossref]

Li, H.

H. Li, Z. Xiong, Z. Shi, L. Wang, D. Liu, and F. Wu, “Hsvcnn: Cnn-based hyperspectral reconstruction from rgb videos,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2018), pp. 3323–3327.

Lin, S.

X. Cao, H. Du, X. Tong, Q. Dai, and S. Lin, “A prism-mask system for multispectral video acquisition,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2423–2435 (2011).
[Crossref]

H. Du, X. Tong, X. Cao, and S. Lin, “A prism-based system for multispectral video acquisition,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2009), pp. 175–182.

Lin, X.

X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graph. 33(6), 1–11 (2014).
[Crossref]

Liu, C.

C. Liu, J. Yuen, and A. Torralba, “Sift flow: Dense correspondence across scenes and its applications,” IEEE Trans. Pattern Anal. Mach. Intell. 33(5), 978–994 (2011).
[Crossref]

C. Liu, “Beyond pixels: Exploring new representations and applications for motion analysis,” Ph.D. thesis, Massachusetts Institute of Technology (2009).

C. Liu, W. T. Freeman, E. H. Adelson, and Y. Weiss, “Human-assisted motion annotation,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.

C. Liu and W. T. Freeman, “A high-quality video denoising algorithm based on reliable motion estimation,” in Proceedings of European Conference on Computer Vision (Springer, 2010), pp. 706–719.

C. Liu and D. Sun, “A bayesian approach to adaptive video super resolution,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 209–216.

Liu, D.

H. Li, Z. Xiong, Z. Shi, L. Wang, D. Liu, and F. Wu, “Hsvcnn: Cnn-based hyperspectral reconstruction from rgb videos,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2018), pp. 3323–3327.

Liu, Y.

X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graph. 33(6), 1–11 (2014).
[Crossref]

Ma, X.

Ma, Z.

Y. Zhao, X. Hu, H. Guo, Z. Ma, T. Yue, and X. Cao, “Spectral reconstruction from dispersive blur: A novel light efficient spectral imager,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2019), pp. 12202–12211.

Malik, J.

J. T. Barron and J. Malik, “Shape, illumination, and reflectance from shading,” IEEE Trans. Pattern Anal. Mach. Intell. 37(8), 1670–1687 (2015).
[Crossref]

Manakov, A.

A. Manakov, J. Restrepo, O. Klehm, R. Hegedus, E. Eisemann, H.-P. Seidel, and I. Ihrke, “A reconfigurable camera add-on for high dynamic range, multispectral, polarization, and light-field imaging,” ACM Trans. Graph. 32(4), 1 (2013).
[Crossref]

McGillican, T.

V. Backman, M. B. Wallace, L. Perelman, J. Arendt, R. Gurjar, M. Múller, Q. Zhang, G. Zonios, E. Kline, T. McGillican, and S. Shapshay, “Detection of preinvasive cancer cells,” Nature 406(6791), 35–36 (2000).
[Crossref]

Mian, A.

Mian, A. S.

N. Akhtar and A. S. Mian, “Hyperspectral recovery from rgb images using gaussian processes,” IEEE Trans. Pattern Anal. Mach. Intell. (2018).

Moerner, W.

Y. Shechtman, L. E. Weiss, A. S. Backer, M. Y. Lee, and W. Moerner, “Multicolour localization microscopy by point-spread-function engineering,” Nat. Photonics 10(9), 590–594 (2016).
[Crossref]

Mukaigawa, Y.

T. Takatani, T. Aoto, and Y. Mukaigawa, “One-shot hyperspectral imaging using faced reflectors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 4039–4047.

Múller, M.

V. Backman, M. B. Wallace, L. Perelman, J. Arendt, R. Gurjar, M. Múller, Q. Zhang, G. Zonios, E. Kline, T. McGillican, and S. Shapshay, “Detection of preinvasive cancer cells,” Nature 406(6791), 35–36 (2000).
[Crossref]

Nam, G.

I. Choi, D. S. Jeon, G. Nam, D. Gutierrez, and M. H. Kim, “High-quality hyperspectral reconstruction using a spectral prior,” ACM Trans. Graph. 36(6), 1–13 (2017).
[Crossref]

Negahdaripour, S.

S. Negahdaripour, “Revised definition of optical flow: Integration of radiometric and geometric cues for dynamic scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 20(9), 961–979 (1998).
[Crossref]

Ni, C.

Nie, S.

S. Nie, L. Gu, Y. Zheng, A. Lam, N. Ono, and I. Sato, “Deeply learned filter response functions for hyperspectral reconstruction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 4767–4776

Ono, N.

S. Nie, L. Gu, Y. Zheng, A. Lam, N. Ono, and I. Sato, “Deeply learned filter response functions for hyperspectral reconstruction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 4767–4776

Papenberg, N.

T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, “High accuracy optical flow estimation based on a theory for warping,” in Proceedings of European Conference on Computer Vision (Springer, 2004), pp. 25–36.

Perelman, L.

V. Backman, M. B. Wallace, L. Perelman, J. Arendt, R. Gurjar, M. Múller, Q. Zhang, G. Zonios, E. Kline, T. McGillican, and S. Shapshay, “Detection of preinvasive cancer cells,” Nature 406(6791), 35–36 (2000).
[Crossref]

Pitsianis, N. P.

Restrepo, J.

A. Manakov, J. Restrepo, O. Klehm, R. Hegedus, E. Eisemann, H.-P. Seidel, and I. Ihrke, “A reconfigurable camera add-on for high dynamic range, multispectral, polarization, and light-field imaging,” ACM Trans. Graph. 32(4), 1 (2013).
[Crossref]

Rigdon, L. D.

Rosenthal, S. E.

Sarangan, A.

Sato, I.

S. Nie, L. Gu, Y. Zheng, A. Lam, N. Ono, and I. Sato, “Deeply learned filter response functions for hyperspectral reconstruction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 4767–4776

Schulz, T.

Seidel, H.-P.

A. Manakov, J. Restrepo, O. Klehm, R. Hegedus, E. Eisemann, H.-P. Seidel, and I. Ihrke, “A reconfigurable camera add-on for high dynamic range, multispectral, polarization, and light-field imaging,” ACM Trans. Graph. 32(4), 1 (2013).
[Crossref]

Shapshay, S.

V. Backman, M. B. Wallace, L. Perelman, J. Arendt, R. Gurjar, M. Múller, Q. Zhang, G. Zonios, E. Kline, T. McGillican, and S. Shapshay, “Detection of preinvasive cancer cells,” Nature 406(6791), 35–36 (2000).
[Crossref]

Shechtman, Y.

Y. Shechtman, L. E. Weiss, A. S. Backer, M. Y. Lee, and W. Moerner, “Multicolour localization microscopy by point-spread-function engineering,” Nat. Photonics 10(9), 590–594 (2016).
[Crossref]

Shi, G.

L. Wang, Z. Xiong, H. Huang, G. Shi, F. Wu, and W. Zeng, “High-speed hyperspectral video acquisition by combining nyquist and compressive sampling,” IEEE Trans. Pattern Anal. Mach. Intell. 41(4), 857–870 (2019).
[Crossref]

L. Wang, Z. Xiong, D. Gao, G. Shi, and F. Wu, “Dual-camera design for coded aperture snapshot spectral imaging,” Appl. Opt. 54(4), 848–858 (2015).
[Crossref]

Shi, Z.

H. Li, Z. Xiong, Z. Shi, L. Wang, D. Liu, and F. Wu, “Hsvcnn: Cnn-based hyperspectral reconstruction from rgb videos,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2018), pp. 3323–3327.

Shokair, I. R.

Siragusa, G. R.

Sridharan, S.

S. Denman, T. Lamb, C. Fookes, V. Chandran, and S. Sridharan, “Multi-spectral fusion for surveillance systems,” Comput. Electr. Eng. 36(4), 643–663 (2010).
[Crossref]

Sun, D.

C. Liu and D. Sun, “A bayesian approach to adaptive video super resolution,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 209–216.

Sun, X.

Takatani, T.

T. Takatani, T. Aoto, and Y. Mukaigawa, “One-shot hyperspectral imaging using faced reflectors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 4039–4047.

Teng, C.-H.

C.-H. Teng, S.-H. Lai, Y.-S. Chen, and W.-H. Hsu, “Accurate optical flow computation under non-uniform brightness variations,” Comput. Vision Image Understanding 97(3), 315–346 (2005).
[Crossref]

Thomas, J.-B.

P.-J. Lapray, X. Wang, J.-B. Thomas, and P. Gouton, “Multispectral filter arrays: Recent advances and practical implementation,” Sensors 14(11), 21626–21659 (2014).
[Crossref]

Tisone, G. C.

Tong, X.

X. Cao, H. Du, X. Tong, Q. Dai, and S. Lin, “A prism-mask system for multispectral video acquisition,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2423–2435 (2011).
[Crossref]

H. Du, X. Tong, X. Cao, and S. Lin, “A prism-based system for multispectral video acquisition,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2009), pp. 175–182.

Torralba, A.

C. Liu, J. Yuen, and A. Torralba, “Sift flow: Dense correspondence across scenes and its applications,” IEEE Trans. Pattern Anal. Mach. Intell. 33(5), 978–994 (2011).
[Crossref]

Wagadarikar, A.

Wagadarikar, A. A.

Wagner, J. S.

Wallace, M. B.

V. Backman, M. B. Wallace, L. Perelman, J. Arendt, R. Gurjar, M. Múller, Q. Zhang, G. Zonios, E. Kline, T. McGillican, and S. Shapshay, “Detection of preinvasive cancer cells,” Nature 406(6791), 35–36 (2000).
[Crossref]

Wang, J.

S. Cho, J. Wang, and S. Lee, “Video deblurring for hand-held cameras using patch-based synthesis,” ACM Trans. Graph. 31(4), 1–9 (2012).
[Crossref]

Wang, L.

L. Wang, T. Zhang, Y. Fu, and H. Huang, “Hyperreconnet: Joint coded aperture optimization and image reconstruction for compressive hyperspectral imaging,” IEEE Trans. on Image Process. 28(5), 2257–2270 (2019).
[Crossref]

L. Wang, Z. Xiong, H. Huang, G. Shi, F. Wu, and W. Zeng, “High-speed hyperspectral video acquisition by combining nyquist and compressive sampling,” IEEE Trans. Pattern Anal. Mach. Intell. 41(4), 857–870 (2019).
[Crossref]

L. Wang, Z. Xiong, D. Gao, G. Shi, and F. Wu, “Dual-camera design for coded aperture snapshot spectral imaging,” Appl. Opt. 54(4), 848–858 (2015).
[Crossref]

H. Li, Z. Xiong, Z. Shi, L. Wang, D. Liu, and F. Wu, “Hsvcnn: Cnn-based hyperspectral reconstruction from rgb videos,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2018), pp. 3323–3327.

Wang, X.

X. Wang, Y. Zhang, X. Ma, T. Xu, and G. R. Arce, “Compressive spectral imaging system based on liquid crystal tunable filter,” Opt. Express 26(19), 25226–25243 (2018).
[Crossref]

P.-J. Lapray, X. Wang, J.-B. Thomas, and P. Gouton, “Multispectral filter arrays: Recent advances and practical implementation,” Sensors 14(11), 21626–21659 (2014).
[Crossref]

Wedderburn, R. W.

R. W. Wedderburn, “Quasi-likelihood functions, generalized linear models, and the gauss–newton method,” Biometrika 61(3), 439–447 (1974).
[Crossref]

Weickert, J.

T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, “High accuracy optical flow estimation based on a theory for warping,” in Proceedings of European Conference on Computer Vision (Springer, 2004), pp. 25–36.

Weiss, L. E.

Y. Shechtman, L. E. Weiss, A. S. Backer, M. Y. Lee, and W. Moerner, “Multicolour localization microscopy by point-spread-function engineering,” Nat. Photonics 10(9), 590–594 (2016).
[Crossref]

Weiss, Y.

C. Liu, W. T. Freeman, E. H. Adelson, and Y. Weiss, “Human-assisted motion annotation,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.

Willett, R.

Wu, F.

L. Wang, Z. Xiong, H. Huang, G. Shi, F. Wu, and W. Zeng, “High-speed hyperspectral video acquisition by combining nyquist and compressive sampling,” IEEE Trans. Pattern Anal. Mach. Intell. 41(4), 857–870 (2019).
[Crossref]

L. Wang, Z. Xiong, D. Gao, G. Shi, and F. Wu, “Dual-camera design for coded aperture snapshot spectral imaging,” Appl. Opt. 54(4), 848–858 (2015).
[Crossref]

H. Li, Z. Xiong, Z. Shi, L. Wang, D. Liu, and F. Wu, “Hsvcnn: Cnn-based hyperspectral reconstruction from rgb videos,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2018), pp. 3323–3327.

Wu, J.

X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graph. 33(6), 1–11 (2014).
[Crossref]

Xiong, Z.

L. Wang, Z. Xiong, H. Huang, G. Shi, F. Wu, and W. Zeng, “High-speed hyperspectral video acquisition by combining nyquist and compressive sampling,” IEEE Trans. Pattern Anal. Mach. Intell. 41(4), 857–870 (2019).
[Crossref]

L. Wang, Z. Xiong, D. Gao, G. Shi, and F. Wu, “Dual-camera design for coded aperture snapshot spectral imaging,” Appl. Opt. 54(4), 848–858 (2015).
[Crossref]

H. Li, Z. Xiong, Z. Shi, L. Wang, D. Liu, and F. Wu, “Hsvcnn: Cnn-based hyperspectral reconstruction from rgb videos,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2018), pp. 3323–3327.

Xu, T.

Yi, S.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graph. 38(4), 1–13 (2019).
[Crossref]

Yu, Y.

S. Bi, X. Han, and Y. Yu, “An $l _1$l1 image transform for edge-preserving smoothing and scene-level intrinsic decomposition,” ACM Trans. Graph. 34(4), 78 (2015).
[Crossref]

Yue, T.

Y. Zhao, X. Hu, H. Guo, Z. Ma, T. Yue, and X. Cao, “Spectral reconstruction from dispersive blur: A novel light efficient spectral imager,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2019), pp. 12202–12211.

Yuen, J.

C. Liu, J. Yuen, and A. Torralba, “Sift flow: Dense correspondence across scenes and its applications,” IEEE Trans. Pattern Anal. Mach. Intell. 33(5), 978–994 (2011).
[Crossref]

Zeng, W.

L. Wang, Z. Xiong, H. Huang, G. Shi, F. Wu, and W. Zeng, “High-speed hyperspectral video acquisition by combining nyquist and compressive sampling,” IEEE Trans. Pattern Anal. Mach. Intell. 41(4), 857–870 (2019).
[Crossref]

Zhang, D.

Y. Fu, T. Zhang, Y. Zheng, D. Zhang, and H. Huang, “Joint camera spectral sensitivity selection and hyperspectral image recovery,” in Proceedings of European Conference on Computer Vision, (IEEE, 2018), pp. 788–804.

Zhang, L.

Y. Fu, Y. Zheng, L. Zhang, and H. Huang, “Spectral reflectance recovery from a single rgb image,” IEEE Trans. Comput. Imaging 4(3), 382–394 (2018).
[Crossref]

Zhang, Q.

V. Backman, M. B. Wallace, L. Perelman, J. Arendt, R. Gurjar, M. Múller, Q. Zhang, G. Zonios, E. Kline, T. McGillican, and S. Shapshay, “Detection of preinvasive cancer cells,” Nature 406(6791), 35–36 (2000).
[Crossref]

Zhang, T.

L. Wang, T. Zhang, Y. Fu, and H. Huang, “Hyperreconnet: Joint coded aperture optimization and image reconstruction for compressive hyperspectral imaging,” IEEE Trans. on Image Process. 28(5), 2257–2270 (2019).
[Crossref]

Y. Fu, T. Zhang, Y. Zheng, D. Zhang, and H. Huang, “Joint camera spectral sensitivity selection and hyperspectral image recovery,” in Proceedings of European Conference on Computer Vision, (IEEE, 2018), pp. 788–804.

Zhang, Y.

Zhao, Y.

Y. Zhao, X. Hu, H. Guo, Z. Ma, T. Yue, and X. Cao, “Spectral reconstruction from dispersive blur: A novel light efficient spectral imager,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2019), pp. 12202–12211.

Zheng, Y.

Y. Fu, Y. Zheng, L. Zhang, and H. Huang, “Spectral reflectance recovery from a single rgb image,” IEEE Trans. Comput. Imaging 4(3), 382–394 (2018).
[Crossref]

S. Nie, L. Gu, Y. Zheng, A. Lam, N. Ono, and I. Sato, “Deeply learned filter response functions for hyperspectral reconstruction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 4767–4776

Y. Fu, T. Zhang, Y. Zheng, D. Zhang, and H. Huang, “Joint camera spectral sensitivity selection and hyperspectral image recovery,” in Proceedings of European Conference on Computer Vision, (IEEE, 2018), pp. 788–804.

Zonios, G.

V. Backman, M. B. Wallace, L. Perelman, J. Arendt, R. Gurjar, M. Múller, Q. Zhang, G. Zonios, E. Kline, T. McGillican, and S. Shapshay, “Detection of preinvasive cancer cells,” Nature 406(6791), 35–36 (2000).
[Crossref]

Supplementary Material (2)

» Visualization 1: The ground truth multispectral videos and the reconstructed multispectral videos are shown.
» Visualization 2: The dynamic scene with moving objects, i.e., the hand and the toy vehicle, is captured by our LCTF-based spectral-sweep camera system, and several frames of the reconstructed multispectral video with 17 channels (from 540 nm to 700 nm with a 10 nm spectral interval) are shown.



Figures (10)

Fig. 1. Overview of the proposed multispectral imaging method. (a) By sweeping the passband of the Liquid Crystal Tunable Filter (LCTF) frame by frame, the light emitted from a certain moving point passes through the LCTF at different wavelengths and projects onto the sensor at different locations. (b) By introducing the COF map and the bilateral propagation algorithm, different spectral projections of the same point are aligned and the entire spectrum is reconstructed.
Fig. 2. Multiplicative intensity transfer map. (a) The synthesized RGB image from the captured multispectral images and (b) the multiplicative transfer map between the spectral wavelengths 630 nm and 650 nm.
Fig. 3. The diagram of bilateral propagation for reconstructing multispectral videos. The input images (i.e., the diagonal frames marked with colored boxes) are captured by the LCTF-based spectral-sweep acquisition system. Every $N$ frames ($N$ is the number of sweeping channels of the input spectral-sweep video), from the first channel (with the shortest wavelength) to the last one (with the longest wavelength), are processed as one period unit to reconstruct the corresponding multispectral video. The channels on both sides of the input spectral-sweep frames (the diagonal frames) are missing and need to be reconstructed by bilateral propagation. The arrows denote the propagation directions.
Fig. 4. The calibration of the sensor and LCTF spectral responses. (a) The pattern of the X-Rite ColorChecker we used for calibration. (b) Spectral curves of the corresponding color areas before calibration, after calibration, and the ground truth.
Fig. 5. The NOF/COF between the red-green images. We extract different color channels of the binocular images (the red channel of the left view and the green channel of the right view) from the original full-channel (RGB) images (provided by Liu et al. [52]), and compare the optical flow maps computed by the NOF algorithm [51] and the proposed COF method. The ground truth and the optical flow maps estimated from the full-channel images are also given. The ground truth of the left view and its warped version from the right view using our COF, as well as the error map of the warped image, are shown in the bottom row.
Fig. 6. Quantitative errors with different frame intervals. The error curves of the NOF algorithm and our COF method with respect to the frame interval during propagation.
Fig. 7. The experimental results on real captured data with ground truth (Visualization 1). Top row: selected input images captured at views 1, 6, 11 and 17, with spectral wavelengths of 540 nm, 590 nm, 640 nm and 700 nm, respectively. Middle four rows: selected reconstructed channels.
Fig. 8. Qualitative comparisons with state-of-the-art spectral video acquisition methods on real captured data with ground truth. Rows 1$\sim$4: selected channels at different views. Bottom row: close-ups of the boxed regions.
Fig. 9. Quantitative comparisons with state-of-the-art spectral video acquisition methods on real captured data with ground truth. (a) Mean PSNR of all the reconstructed channels. (b) Mean SSIM of all the reconstructed channels.
Fig. 10. The experimental results on a real captured dynamic scene (Visualization 2). Top row: selected input frames captured at times 1, 6, 11 and 17, with spectral wavelengths of 540 nm, 590 nm, 640 nm and 700 nm, respectively. Middle four rows: selected reconstructed channels. Bottom: details of the reconstructed results.
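The bilateral propagation schedule described in the Fig. 3 caption reduces to simple index arithmetic. The following minimal sketch (not the authors' code; the function name and 0-based indexing are assumptions) determines, for each missing channel at each time step within one period of $N$ spectral-sweep frames, which captured diagonal frame it is propagated from and in which temporal direction:

```python
# Sketch of the bilateral propagation schedule for one period of N
# spectral-sweep frames (cf. Fig. 3). Frame t (0-based) captures channel t,
# so channel c at time t must be propagated from the diagonal frame at
# time c: forward in time when c < t, backward in time when c > t.
def propagation_source(t, c, N):
    """Return (source_time, direction) for channel c at time t."""
    assert 0 <= t < N and 0 <= c < N
    if c == t:
        return t, "captured"  # diagonal frame, directly measured
    return c, ("forward" if c < t else "backward")
```

For example, with $N = 17$ sweeping channels, channel 2 at time 5 is propagated forward from the frame captured at time 2, while channel 9 at time 5 is propagated backward from the frame captured at time 9.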

Equations (11)

$$I_{t+1}(x, y) = I_t(x + \delta x,\; y + \delta y),$$

$$s_o(x, y, \lambda) = s_l(x, y, \lambda)\, s_r(x, y, \lambda)\, r_{\mathrm{shading}}(x, y),$$

$$r_{\lambda_1 \lambda_2}(x, y) = \frac{s_o(x, y, \lambda_1)}{s_o(x, y, \lambda_2)} = \frac{s_l(x, y, \lambda_1)\, s_r(x, y, \lambda_1)\, r_{\mathrm{shading}}(x, y)}{s_l(x, y, \lambda_2)\, s_r(x, y, \lambda_2)\, r_{\mathrm{shading}}(x, y)} = \frac{s_l(x, y, \lambda_1)\, s_r(x, y, \lambda_1)}{s_l(x, y, \lambda_2)\, s_r(x, y, \lambda_2)}.$$

$$F_{t \to t+1}^{s_1 s_2}(x, y) = F(x, y)\, e^{\delta x\, \mathbf{i} + \delta y\, \mathbf{j}},$$

$$I_{t+1}^{s_2}(x, y)\, e^{x \mathbf{i} + y \mathbf{j}} = I_t^{s_1}(x, y)\, e^{x \mathbf{i} + y \mathbf{j}}\, F_{t \to t+1}^{s_1 s_2}(x, y) = I_t^{s_1}(x, y)\, F(x, y)\, e^{(x + \delta x)\, \mathbf{i} + (y + \delta y)\, \mathbf{j}},$$

$$I_{t+1}^{s_2} = I_t^{s_1}\, F_{t \to t+1}^{s_1 s_2}.$$

$$E_f = \left\| I_t^{s_1}\, F_{t \to t+1}^{s_1 s_2} - I_{t+1}^{s_2} \right\|_2^2.$$

$$E_c = \left\| F_{t \to t+1}^{s_1 s_2} \right\|_1,$$

$$\partial_x F = \lim_{\Delta x \to 0} \frac{F(x + \Delta x, y) - F(x, y)}{\Delta x}, \qquad \partial_y F = \lim_{\Delta y \to 0} \frac{F(x, y + \Delta y) - F(x, y)}{\Delta y},$$

$$E = E_f + \lambda_c E_c,$$

$$E_c^k = \left| F_{t \to t+1}^{s_1 s_2\,(k-1)} \right|^{-1} \left| F_{t \to t+1}^{s_1 s_2} \right|^2.$$
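As an illustrative sketch (not the authors' implementation), the total energy $E = E_f + \lambda_c E_c$ and the iteratively reweighted surrogate $E_c^k$ above can be written pixelwise in NumPy as follows; the discrete (rather than variational) form, the array and function names, and the stabilizing `eps` are assumptions:

```python
import numpy as np

def total_energy(I_t, I_t1, F, lam_c=0.1):
    # Data term E_f = ||I_t * F - I_{t+1}||_2^2: the COF map F acts
    # multiplicatively on the previous spectral-sweep frame.
    E_f = np.sum(np.abs(I_t * F - I_t1) ** 2)
    # Constraint term E_c = ||F||_1: complex L1 norm, i.e. the sum of
    # per-pixel complex magnitudes of the COF map.
    E_c = np.sum(np.abs(F))
    return E_f + lam_c * E_c

def reweighted_constraint(F, F_prev, eps=1e-8):
    # IRLS surrogate E_c^k = |F^(k-1)|^{-1} |F|^2: the non-smooth L1 term
    # is replaced by a quadratic weighted by the previous iterate, so each
    # step reduces to a least-squares (Gauss-Newton style) problem.
    return np.sum(np.abs(F) ** 2 / (np.abs(F_prev) + eps))
```

Note that when the iterate is stationary ($F = F^{(k-1)}$), the surrogate recovers the original $L_1$ value, which is the standard consistency property of iteratively reweighted least squares.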
