Abstract

Multi-dimensional imaging is a powerful technique for many applications, such as biological analysis, remote sensing, and object recognition. Most existing multi-dimensional imaging systems rely on scanning or camera arrays, which makes them bulky and unstable. To some extent, these problems can be mitigated by employing compressed sensing algorithms. However, such algorithms are computationally expensive, and the underlying reconstruction is ill-posed, relying heavily on the assumption that the signal is sparse in a given domain. Here, we propose a snapshot spectral-volumetric imaging (SSVI) system that introduces the paradigm of light-field imaging into Fourier transform imaging spectroscopy. We demonstrate that SSVI can reconstruct a complete plenoptic function, P(x, y, z, θ, φ, λ, t), of the incoming light rays using a single detector. Compared with other multidimensional imagers, SSVI offers prominent advantages in compactness, robustness, and low cost.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Supplementary Material (2)

Visualization 1: Real-time four-dimensional video
Visualization 2: Videos with reconstructed images from SSVI when the system focuses at 450 mm and 850 mm

Figures (9)

Fig. 1 Snapshot spectral-volumetric imaging system configuration. (a) System sketch. MLA, microlens array; CCD, charge-coupled device; EI, elemental image. (b) Configuration of the birefringent polarization interferometer (BPI). Components between the polarizer I and the Wollaston prism have been omitted for clarity. The red arrow and circle indicate the polarization eigenmodes of the Wollaston prism, while the black ones denote the polarization of light rays. The transmission axes of the two polarizers are both oriented at 45° with respect to the polarization eigenmodes of the Wollaston prism. (c) The rotated BPI and CCD. Other components have been omitted.
Fig. 2 Flowchart of the image processing pipeline. (a) Framework of the light-field-interferogram decoupling convolutional neural network (DC-CNN). Log, logarithm; Conv, convolution; ReLU, rectified linear unit; BN, batch normalization. (b) Light-field image. (c) An epipolar plane image extracted from the light-field image. (d) Disparity map. (e) Depth map. (f) An epipolar plane image extracted from the raw image. (g) Interferogram of a single pixel. (h) Spectrum of a single pixel. FFT, fast Fourier transform. (i) Spectral datacube. (A minimal numerical sketch of the interferogram-to-spectrum step follows the figure list.)
Fig. 3 Prototype of the snapshot spectral-volumetric imaging system.
Fig. 4 Real-time spectral-volumetric video. (a) Illustration of the scene, which includes a static paper with letters and a swinging leaf. (b-i) Samples of consecutive depth and image frames reconstructed at a 15 Hz frame rate with a lateral resolution of 110 × 110 pixels. (j-m) Samples of image frames reconstructed when the system focuses at 450 mm (upper row) and 850 mm (lower row). Note: the full videos can be seen in Visualization 1 and Visualization 2.
Fig. 5 Response to metamerism. (a) Reconstructed image at t = 1.6 s. (b) Spectra of points A and B indicated in (a). (c) High-resolution RGB image of the leaf captured by a commercial camera. (d) Spectral slice at 728.6 nm. (e) Spectral slice at 673.9 nm.
Fig. 6 Lateral resolution of SSVI. (a) Experimentally measured resolution at different working distances. (b) Reconstructed image when the working distance is 840 mm.
Fig. 7 Spectral resolution of SSVI. (a) Reconstructed image. (b-d) Spectra at the red, green, and blue laser points. The insets depict the spectral slices at 632.6 nm, 531.6 nm, and 487.1 nm, respectively.
Fig. 8 Quantitative evaluation of the spectral-datacube reconstruction. (a) Photo of the ColorChecker captured by a commercial camera. (b) Average normalized RMS errors of the reconstructed spectra for the color blocks. (c-r) Reconstructed spectra from the SSVI system (blue dots) and the Avantes spectrometer (red line) for the color blocks.
Fig. 9 Depth accuracy of SSVI. (a) Schematic of the experimental setup. (b) Ground-truth depth of the scene. (c) Reconstructed depth. (d) Error map of the reconstructed depth from SSVI. (e) Error map of the reconstructed depth from the light-field image without interference.
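
The interferogram-to-spectrum step illustrated in Figs. 1 and 2 can be summarized numerically: the birefringent polarization interferometer multiplies each pixel's spectrum by the modulation 1/2·[1 + cos(2π·OPD/λ)], with an OPD that varies linearly across the sensor, and the spectrum is then recovered by a Fourier transform of the per-pixel interferogram with respect to OPD. The Python sketch below is for intuition only; the parameter values (birefringence B, wedge angle alpha, rotation delta, relay magnification M_R2) and all function names are illustrative assumptions, not calibration data or code from the paper.

import numpy as np

# Assumed BPI parameters, for illustration only.
B = 0.15                      # birefringence of the Wollaston prism material
alpha = np.deg2rad(2.0)       # Wollaston prism wedge angle
delta = np.deg2rad(45.0)      # rotation angle of the BPI and CCD
M_R2, x0 = 1.0, 0.0           # relay magnification and zero-OPD position

def opd(x, y):
    # OPD varies linearly across the sensor (cf. the first equation below).
    return 2.0 * B * np.tan(alpha) * ((x - x0) * np.cos(delta) - y * np.sin(delta)) / M_R2

def interferogram(spectrum, wavelengths, opd_samples):
    # Forward model: each OPD sample integrates s(lambda) * 0.5 * (1 + cos(2*pi*OPD/lambda)).
    mod = 0.5 * (1.0 + np.cos(2.0 * np.pi * opd_samples[:, None] / wavelengths[None, :]))
    return mod @ spectrum

def spectrum_from_interferogram(interf, opd_step):
    # Inverse step (Fig. 2(g)-(h)): FFT of the mean-subtracted interferogram over a uniform
    # OPD grid gives the spectrum versus wavenumber sigma = 1/lambda.
    ac = (interf - interf.mean()) * np.hanning(interf.size)   # remove DC term, mild apodization
    sigma = np.fft.rfftfreq(interf.size, d=opd_step)          # wavenumber axis (1/m)
    return sigma, np.abs(np.fft.rfft(ac))

# Example: a 550 nm Gaussian spectral line observed along one sensor row.
wavelengths = np.linspace(450e-9, 750e-9, 301)                    # metres
spectrum = np.exp(-0.5 * ((wavelengths - 550e-9) / 10e-9) ** 2)
xs = np.linspace(-2e-3, 2e-3, 512)                                # sensor x positions (m)
opds = opd(xs, 0.0)
interf = interferogram(spectrum, wavelengths, opds)
sigma, spec = spectrum_from_interferogram(interf, opd_step=opds[1] - opds[0])
# The recovered magnitude spectrum peaks near sigma = 1/550e-9 (about 1.82e6 1/m).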

Equations (22)

\mathrm{OPD}(x,y) = 2B\tan(\alpha)\,\bigl[(x - x_0)\cos(\delta) - y\sin(\delta)\bigr]/M_{R2},
\begin{bmatrix} y \\ \varphi \end{bmatrix} = \begin{bmatrix} M & 0 \\ -1/f & 1/M \end{bmatrix} \begin{bmatrix} y' \\ \varphi' \end{bmatrix},
\begin{bmatrix} y \\ \varphi \end{bmatrix} = \begin{bmatrix} M & 0 \\ -1/f & 1/M \end{bmatrix} \begin{bmatrix} y' - d_n \\ \varphi' \end{bmatrix} + \begin{bmatrix} d_n \\ 0 \end{bmatrix},
P(y,\varphi,\lambda) = P'\!\left(\frac{y - d_n}{M} + d_n,\; \frac{y - d_n}{f} + M\varphi,\; \lambda\right)\frac{1}{2}\left[1 + \cos\!\left(\frac{2\pi}{\lambda}\,\mathrm{OPD}\right)\right],
I(x,y) = \iiint_{\Theta,\Phi,\Lambda} P'(x', y', \theta', \varphi', \lambda)\,\frac{1}{2}\left\{1 + \cos\!\left[\frac{2\pi}{\lambda}\,\mathrm{OPD}(x,y)\right]\right\} d\theta\, d\varphi\, d\lambda,
x' = \frac{x - d_m}{M} + d_m,\quad y' = \frac{y - d_n}{M} + d_n,\quad \theta' = \frac{x - d_m}{f} + M\theta,\quad \varphi' = \frac{y - d_n}{f} + M\varphi.
I(x,y) = \iint_{\Theta,\Phi} l(x', y', \theta', \varphi')\, d\theta\, d\varphi \int_{\Lambda} s(x,y,\lambda)\,\frac{1}{2}\left\{1 + \cos\!\left[\frac{2\pi}{\lambda}\,\mathrm{OPD}(x,y)\right]\right\} d\lambda,
\log\bigl[I(x,y)\bigr] = \log\!\left[\iint_{\Theta,\Phi} l(x', y', \theta', \varphi')\, d\theta\, d\varphi\right] + \log\!\left[\int_{\Lambda} s(x,y,\lambda)\,\frac{1}{2}\left\{1 + \cos\!\left[\frac{2\pi}{\lambda}\,\mathrm{OPD}(x,y)\right]\right\} d\lambda\right].
H_1(I_i) = \mathrm{ReLU}\bigl\{W_1 \log\bigl[H_0(I_i)\bigr] + b_1\bigr\},
H_2(I_i) = \mathrm{ReLU}\bigl[W_2 H_1(I_i) + b_2\bigr],
H_3(I_i) = \exp\bigl[W_3 H_2(I_i) + b_3\bigr],
L = \frac{1}{N}\sum_{i=1}^{N}\bigl\| H_3(I_i)\, I_i - E_i \bigr\|_2^2,
I_{\mathrm{LF}}(x,y) = \iint_{\Theta,\Phi} l\!\left(\frac{x - d_m}{M} + d_m,\; \frac{y - d_n}{M} + d_n,\; \frac{x - d_m}{f} + M\theta,\; \frac{y - d_n}{f} + M\varphi\right) d\theta\, d\varphi.
m_i = \left\lceil \frac{i}{Q} \right\rceil,
n_i = Q - \mathrm{mod}\bigl[(i - 1),\, Q\bigr],
x_i = x_c + D_a\,(m_c - m_i),
y_i = y_c + D_a\,(n_c - n_i),
x_i^{\mathrm{raw}} = x_i + (m_i - 1)\,d,
y_i^{\mathrm{raw}} = y_i + (n_i - 1)\,d,
\mathrm{OPD}\bigl(x_i^{\mathrm{raw}}, y_i^{\mathrm{raw}}\bigr) = \frac{2B\tan(\alpha)}{M_{R2}}\bigl[\bigl(x_i^{\mathrm{raw}} - x_0\bigr)\cos(\delta) - y_i^{\mathrm{raw}}\sin(\delta)\bigr],
x_i^{\mathrm{raw}} = x_c + m_c D_a - d + m_i\,(d - D_a),
y_i^{\mathrm{raw}} = y_c + n_c D_a - d + n_i\,(d - D_a).
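
To connect the last several equations: for a scene point located at (x_c, y_c) in the central elemental image (EI) with grid indices (m_c, n_c), the index i of every EI is converted to grid indices (m_i, n_i), the point's position in that EI follows from its disparity D_a, the raw-sensor coordinates add the EI pitch d, and the OPD is evaluated there, yielding the OPD samples of the point's interferogram. The sketch below mirrors the equations under that reading; the symbol names follow the equations, but the numerical values are placeholders rather than calibration data from the paper.

import numpy as np

# Placeholder parameters (assumed for illustration).
Q = 7                                  # elemental images per row/column
d = 0.5e-3                             # elemental-image pitch on the sensor (m)
B, alpha, delta = 0.15, np.deg2rad(2.0), np.deg2rad(45.0)
M_R2, x0 = 1.0, 0.0

def opd(x, y):
    # OPD(x, y) = 2 B tan(alpha) [ (x - x0) cos(delta) - y sin(delta) ] / M_R2
    return 2.0 * B * np.tan(alpha) * ((x - x0) * np.cos(delta) - y * np.sin(delta)) / M_R2

def opd_samples_for_point(xc, yc, mc, nc, Da):
    # OPD sampled by one scene point across all Q*Q elemental images.
    samples = []
    for i in range(1, Q * Q + 1):
        m_i = int(np.ceil(i / Q))                    # EI row index,    m_i = ceil(i/Q)
        n_i = Q - (i - 1) % Q                        # EI column index, n_i = Q - mod(i-1, Q)
        x_raw = xc + mc * Da - d + m_i * (d - Da)    # raw-sensor x of the point in EI i
        y_raw = yc + nc * Da - d + n_i * (d - Da)    # raw-sensor y of the point in EI i
        samples.append(opd(x_raw, y_raw))
    return np.array(samples)

# Example: a point near the centre of the central EI with a 20 um disparity between neighbouring EIs.
opds = opd_samples_for_point(xc=0.25e-3, yc=0.25e-3, mc=4, nc=4, Da=20e-6)
# Each entry is the OPD at which that elemental image samples the point's interferogram.

With a calibrated disparity-to-depth mapping, the same disparity D_a also fixes the point's depth, so a single raw image provides both the OPD samples for the spectrum and the disparity for the depth map.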
