Abstract

Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording media, with digital sensors replacing photographic film in most instances. The latest revolution is computational photography, which seeks to make image-reconstruction computation an integral part of the image formation process; in this way, the overall imaging system can gain new capabilities or better performance. A leading effort in this area is the plenoptic camera, which aims to capture the light field of an object; suitable reconstruction algorithms can then adjust the focus after image capture. In this tutorial paper, we first illustrate the concepts of the plenoptic function and the light field from the perspective of geometric optics. This is followed by a discussion of early attempts and recent advances in the construction of the plenoptic camera. We then describe the imaging model and the computational algorithms that reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.

© 2015 Optical Society of America
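
To make the post-capture refocusing mentioned in the abstract concrete, below is a minimal, illustrative sketch of shift-and-add refocusing over a 4D light field, written in Python with NumPy. It is not the paper's algorithm or code; the array layout L[u, v, x, y], the refocus function, the slope parameter, and the synthetic test data are all assumptions made purely for illustration. The idea it demonstrates: each sub-aperture image is shifted in proportion to its angular offset and the results are averaged, so objects at the matching depth align and sharpen while others blur.

```python
# Illustrative sketch only (not from the paper): shift-and-add refocusing
# of a 4D light field stored as sub-aperture images L[u, v, x, y].
import numpy as np

def refocus(L, slope):
    """Average sub-aperture images after shifting each by an amount
    proportional to its angular offset; `slope` acts as the focus
    parameter (0 keeps the original focal plane)."""
    nu, nv, nx, ny = L.shape
    uc, vc = (nu - 1) / 2.0, (nv - 1) / 2.0
    out = np.zeros((nx, ny))
    for u in range(nu):
        for v in range(nv):
            dx = int(round(slope * (u - uc)))
            dy = int(round(slope * (v - vc)))
            out += np.roll(L[u, v], shift=(dx, dy), axis=(0, 1))
    return out / (nu * nv)

# Synthetic light field: a bright square seen from 5x5 viewpoints,
# displaced with parallax so it only sharpens at the matching slope.
L = np.zeros((5, 5, 64, 64))
for u in range(5):
    for v in range(5):
        L[u, v, 30 + 2 * (u - 2):34 + 2 * (u - 2),
                30 + 2 * (v - 2):34 + 2 * (v - 2)] = 1.0

sharp = refocus(L, slope=-2.0)   # shifts cancel the parallax: square aligns
blurred = refocus(L, slope=0.0)  # no correction: square is smeared out
print(sharp.max(), blurred.max())
```

Running this sketch, the refocused peak (with the slope matched to the synthetic parallax) is markedly higher than the uncorrected average, which is the essence of choosing the focal plane computationally after the exposure.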


Rhodes, W. T.

Saavedra, G.

Saleh, B. E.

B. E. Saleh and M. C. Teich, Fundamentals of Photonics, 2nd ed. (Wiley, 2007).

Salesin, D.

A. Agarwala, M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin, and M. Cohen, “Interactive digital photomontage,” ACM Trans. Graph. 23, 292–300 (2004).

Sanders, J. S.

J. S. Sanders and C. E. Halford, “Design and analysis of apposition compound eye optical sensors,” Opt. Eng. 34, 222–235 (1995).
[Crossref]

Sasano, T.

S. Ogata, J. Ishida, and T. Sasano, “Optical sensor array in an artificial compound eye,” Opt. Eng. 33, 3649–3655 (1994).
[Crossref]

Shogenji, R.

Shroff, S. A.

Sivaramakrishnan, S.

M. Hirsch, S. Sivaramakrishnan, S. Jayasuriya, A. Wang, A. Molnar, R. Raskar, and G. Wetzstein, “A switchable light field camera architecture with angle sensitive pixels and dictionary-based sparse coding,” in IEEE International Conference on Computational Photography (IEEE, 2014), pp. 1–10.

Skauli, T.

Stenau, T.

Stern, A.

Sze, W.

F. Deng, C. Liu, W. Sze, J. Deng, K. S. Fung, and E. Y. Lam, “An INSPECT measurement system for moving objects,” IEEE Trans. Instrum. Meas. 64, 63–74 (2015).
[Crossref]

F. Deng, C. Liu, W. Sze, J. Deng, K. S. Fung, W. Leung, and E. Y. Lam, “Illumination-invariant phase-shifting algorithm for three-dimensional profilometry of a moving object,” Opt. Eng. 51, 097001 (2012).

Sze, W. F.

Szeliski, R.

S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The lumigraph,” in Proceedings of SIGGRAPH (ACM, 1996), pp. 43–54.

R. Szeliski, Computer Vision: Algorithms and Applications (Springer, 2011).

Talvala, E.-V.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

Tanida, J.

Tanimoto, M.

T. Fujii and M. Tanimoto, “Free viewpoint TV system based on ray-space representation,” Proc. SPIE 4864, 175–189 (2002).
[Crossref]

T. Fujii, T. Kimoto, and M. Tanimoto, “Ray space coding for 3D visual communication,” in Picture Coding Symposium (1996), pp. 447–451.

Taylor, E. C.

Tearney, G. J.

Teich, M. C.

B. E. Saleh and M. C. Teich, Fundamentals of Photonics, 2nd ed. (Wiley, 2007).

Testorf, M. E.

M. E. Testorf and M. A. Fiddy, “Lightfield photography and phase-space tomography: A paradigm for computational imaging,” in OSA Topical Meeting in Computational Optical Sensing and Imaging (Optical Society of America, 2009), paper CTuB2.

Tian, L.

Tichauer, K.

Tominaga, S.

Torgersen, T. C.

R. J. Plemmons, S. Prasad, S. Matthews, M. Mirotznik, R. Barnard, B. Gray, V. P. Pauca, T. C. Torgersen, J. van der Gracht, and G. Behrmann, “PERIODIC: Integrated computational array imaging technology,” in OSA Topical Meeting in Computational Optical Sensing and Imaging (Optical Society of America, 2007), paper CMA1.

Tumblin, J.

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” in Proceedings of SIGGRAPH (2007), Vol. 26, pp. 1–12.

Vaish, V.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, “Synthetic aperture confocal imaging,” in Proceedings of SIGGRAPH (2004), Vol. 23, pp. 825–834.

V. Vaish, B. Wilburn, N. Joshi, and M. Levoy, “Using plane + parallax for calibrating dense camera arrays,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2004), Vol. I, pp. 2–9.

B. Wilburn, N. Joshi, V. Vaish, M. Levoy, and M. Horowitz, “High-speed videography using a dense camera array,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2004), Vol. II, pp. 294–301.

van der Gracht, J.

R. J. Plemmons, S. Prasad, S. Matthews, M. Mirotznik, R. Barnard, B. Gray, V. P. Pauca, T. C. Torgersen, J. van der Gracht, and G. Behrmann, “PERIODIC: Integrated computational array imaging technology,” in OSA Topical Meeting in Computational Optical Sensing and Imaging (Optical Society of America, 2007), paper CMA1.

Veeraraghavan, A.

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” in Proceedings of SIGGRAPH (2007), Vol. 26, pp. 1–12.

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in IEEE International Conference on Computational Photography (IEEE, 2014), pp. 1–10.

K. Mitra and A. Veeraraghavan, “Light field denoising, light field superresolution and stereo camera based refocussing using a GMM light field patch prior,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (IEEE, 2012), pp. 22–28.

Velisavljevic, V.

Vo, H.

Wakin, M. B.

E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25, 21–30 (2008).
[Crossref]

Waller, L.

Wang, A.

A. Wang and A. Molnar, “A light-field image sensor in 180  nm CMOS,” IEEE J. Solid-State Circuits 47, 257–271 (2012).
[Crossref]

A. Wang, P. Gill, and A. Molnar, “Light field image sensors based on the Talbot effect,” Appl. Opt. 48, 5897–5905 (2009).
[Crossref]

M. Hirsch, S. Sivaramakrishnan, S. Jayasuriya, A. Wang, A. Molnar, R. Raskar, and G. Wetzstein, “A switchable light field camera architecture with angle sensitive pixels and dictionary-based sparse coding,” in IEEE International Conference on Computational Photography (IEEE, 2014), pp. 1–10.

Wang, J. Y.

E. H. Adelson and J. Y. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).
[Crossref]

Wang, L. V.

L. V. Wang and H.-I. Wu, Biomedical Optics: Principles and Imaging (Wiley, 2007).

Wang, Y.-H.

R. N. Bracewell, K.-Y. Chang, A. Jha, and Y.-H. Wang, “Affine theorem for two-dimensional Fourier transform,” Electron. Lett. 29, 304 (1993).
[Crossref]

Wanner, S.

S. Wanner and B. Goldluecke, “Variational light field analysis for disparity estimation and super-resolution,” IEEE Trans. Pattern Anal. Mach. Intell. 36, 606–619 (2014).
[Crossref]

Wetzstein, G.

K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph. 32, 1–12 (2013).
[Crossref]

M. Hirsch, S. Sivaramakrishnan, S. Jayasuriya, A. Wang, A. Molnar, R. Raskar, and G. Wetzstein, “A switchable light field camera architecture with angle sensitive pixels and dictionary-based sparse coding,” in IEEE International Conference on Computational Photography (IEEE, 2014), pp. 1–10.

Wilburn, B.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

V. Vaish, B. Wilburn, N. Joshi, and M. Levoy, “Using plane + parallax for calibrating dense camera arrays,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2004), Vol. I, pp. 2–9.

B. Wilburn, N. Joshi, V. Vaish, M. Levoy, and M. Horowitz, “High-speed videography using a dense camera array,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2004), Vol. II, pp. 294–301.

Williams, S. B.

D. G. Dansereau, O. Pizarro, and S. B. Williams, “Linear volumetric focus for light field cameras,” ACM Trans. Graph. 34, 1–20 (2015).
[Crossref]

D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” in IEEE Conf. Comput. Vis. Pattern Recogn., Portland, Oregon, 2013, pp. 1027–1034.

Wolf, E.

M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge, 1999).

Wong, B.-Y.

C.-K. Liang, T.-H. Lin, B.-Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” in Proceedings of SIGGRAPH (2008), Vol. 27, pp. 1–10.

Wu, H.-I.

L. V. Wang and H.-I. Wu, Biomedical Optics: Principles and Imaging (Wiley, 2007).

Xiao, X.

Xu, J.

J. Xu, Z. Xu, and E. Y. Lam, “Method and apparatus for processing light-field image,” U.S. patent14,575,091 (June18, 2015).

Xu, Z.

Z. Xu and E. Y. Lam, “A high-resolution lightfield camera with dual-mask design,” Proc. SPIE 8500, 85000U (2012).
[Crossref]

Z. Xu, J. Ke, and E. Y. Lam, “High-resolution lightfield photography using two masks,” Opt. Express 20, 10971–10983 (2012).
[Crossref]

J. Xu, Z. Xu, and E. Y. Lam, “Method and apparatus for processing light-field image,” U.S. patent14,575,091 (June18, 2015).

Z. Xu and E. Y. Lam, “A spatial projection analysis of light field capture,” in OSA Frontiers in Optics (Optical Society of America, 2010), paper FWH2.

Z. Xu and E. Y. Lam, “Light field superresolution reconstruction in computational photography,” in OSA Topical Meeting in Signal Recovery and Synthesis (Optical Society of America, 2011), paper SMB3.

Yamada, K.

Yang, S.

Yu, Z.

T. Georgiev, Z. Yu, A. Lumsdaine, and S. Goma, “Lytro camera technology: theory, algorithms, performance analysis,” Proc. SPIE 8667, 86671J (2013).
[Crossref]

Zalevsky, Z.

Zammit, P.

Zhang, Q.

D. Boas, D. Brooks, E. Miller, C. DiMarzio, M. Kilmer, R. Gaudette, and Q. Zhang, “Imaging the body with diffuse optical tomography,” IEEE Signal Process. Mag. 18, 57–75 (2001).
[Crossref]

Zhang, X.

Zhang, Z.

Z. Zhang and M. Levoy, “Wigner distributions and how they relate to the light field,” in IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–10.

Figures (10)

Fig. 1. Two-plane parameterization of the light field [23].

Fig. 2. Illustrative plots of the ray-space diagram. (a) A regular array of light rays, from a set of points in the u plane to a set of points in the x plane. (b) A set of light rays arriving at the same x position. (c) A set of light rays approaching a location behind the x plane. (d) A set of light rays diverging after converging at a location before the x plane.

Fig. 3. Bringing the x plane closer to the u plane results in a tilted line in ray space at an angle ψ. (a) Moving the second plane closer to the first, by a factor of α. (b) Corresponding shearing in ray space, with ψ = tan⁻¹(1 − α).

Fig. 4. Light field camera system. Light rays marked in red show how the microlenses separate them, so that the photodetector array captures a sampling of the light field. Light rays marked in blue show that the photodetector array can also be thought of as recording images of the exit pupil.

Fig. 5. Tilted projection of the light field.

Fig. 6. Example of post-capture refocus from custom light field data. (a) Image reconstruction with focus at the front. (b) Image reconstruction with focus at the back.

Fig. 7. Illustration of the projection-slice theorem. The projection at angle ψ of a 2D function in the x–u plane, which yields a 1D function of ρ, is related by a 1D Fourier transform to the slice at angle ψ through the function's 2D Fourier transform in the f_x–f_u plane.

Fig. 8. Observable light field.

Fig. 9. Dappled photography with heterodyned light field capture using an attenuation mask. The mask is typically placed close to the detector, i.e., with a small z_2 [74].

Fig. 10. Focused plenoptic camera system. The object is focused at a plane in front of the lenslet array, and the microlenses then act as a relay system with the main camera lens.
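
The refocusing shown in Fig. 6 amounts to the ray-space shearing of Fig. 3 followed by integration over the aperture plane. As a rough illustration only (this is not the paper's code), the Python sketch below assumes the captured light field is stored as an array L[u, v, x, y] of sub-aperture views, ignores the overall 1/α magnification of the refocused image, and treats the displacement per aperture step (pixels_per_u, a hypothetical calibration constant) as known.

    # Shift-and-add refocusing sketch: undo the shear x' = alpha*x + (1 - alpha)*u
    # view by view, then average over the aperture samples (u, v).
    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def refocus(L, alpha, pixels_per_u=1.0):
        Nu, Nv, Nx, Ny = L.shape                  # aperture samples x spatial samples
        u0, v0 = (Nu - 1) / 2.0, (Nv - 1) / 2.0   # centre of the aperture plane
        out = np.zeros((Nx, Ny))
        for iu in range(Nu):
            for iv in range(Nv):
                # Displacement that aligns view (u, v) on the new focal plane;
                # the sign convention assumes u increases with the x pixel index.
                dx = (1.0 / alpha - 1.0) * (iu - u0) * pixels_per_u
                dy = (1.0 / alpha - 1.0) * (iv - v0) * pixels_per_u
                out += nd_shift(L[iu, iv], (dx, dy), order=1, mode='nearest')
        return out / (Nu * Nv)                    # average approximates the (u, v) integral

Sweeping alpha over a range of values produces a focal stack, from which renderings such as Figs. 6(a) and 6(b) can be selected.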

Equations (33)


$$P(\theta, \phi, \lambda, t, x, y, z),$$
$$L(u, v, x, y),$$
$$E(x, y) = \frac{1}{Z^2} \iint L(u, v, x, y)\, T(u, v)\, \mathrm{d}u\, \mathrm{d}v,$$
$$T(u, v) = \operatorname{circ}(u, v) = \begin{cases} 1 & \text{if } u^2 + v^2 < R^2, \\ 0 & \text{otherwise.} \end{cases}$$
$$x = u + \frac{x' - u}{\alpha}.$$
$$L'(u, v, x', y') = L\!\left(u, v,\; u + \frac{x' - u}{\alpha},\; v + \frac{y' - v}{\alpha}\right).$$
$$x' = \alpha x + (1 - \alpha)\, u.$$
$$\psi = \frac{\pi}{2} - \tan^{-1}\!\left(\frac{1}{1 - \alpha}\right) = \tan^{-1}(1 - \alpha).$$
$$\begin{bmatrix} u_1 \\ \theta_1 \end{bmatrix} = \begin{bmatrix} 1 & Z \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ \theta_1 \end{bmatrix} = \begin{bmatrix} 1 & \alpha Z \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ \theta_1 \end{bmatrix},$$
$$\begin{bmatrix} u_2 \\ \theta_2 \end{bmatrix} = \begin{bmatrix} 1 & Z \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ \theta_2 \end{bmatrix} = \begin{bmatrix} 1 & \alpha Z \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ \theta_2 \end{bmatrix}.$$
$$u_1 = x + Z\theta_1 = x_1 + \alpha Z \theta_1,$$
$$u_2 = x + Z\theta_2 = x_2 + \alpha Z \theta_2.$$
$$u_2 - u_1 = Z(\theta_2 - \theta_1) = x_2 - x_1 + \alpha Z(\theta_2 - \theta_1),$$
$$\Delta u = \Delta x + \alpha\, \Delta u.$$
$$\psi = \tan^{-1}\!\left(\frac{\Delta x}{\Delta u}\right) = \tan^{-1}(1 - \alpha),$$
$$\frac{1}{z_0} + \frac{1}{z_1} = \frac{1}{f_1},$$
$$z_2 = f_2.$$
$$\frac{d_1}{z_1} = \frac{d_2}{z_2}.$$
$$\begin{bmatrix} u' \\ v' \\ x' \\ y' \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1-\alpha & 0 & \alpha & 0 \\ 0 & 1-\alpha & 0 & \alpha \end{bmatrix} \begin{bmatrix} u \\ v \\ x \\ y \end{bmatrix},$$
$$B = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1-\alpha & 0 & \alpha & 0 \\ 0 & 1-\alpha & 0 & \alpha \end{bmatrix}.$$
$$\frac{B^{-T}}{\left|B^{-T}\right|} = \alpha^2 \begin{bmatrix} 1 & 0 & 1 - \frac{1}{\alpha} & 0 \\ 0 & 1 & 0 & 1 - \frac{1}{\alpha} \\ 0 & 0 & \frac{1}{\alpha} & 0 \\ 0 & 0 & 0 & \frac{1}{\alpha} \end{bmatrix}.$$
$$W_U(s, t, \xi, \nu) = \iint U\!\left(s + \frac{\alpha}{2},\, t + \frac{\beta}{2}\right) U^*\!\left(s - \frac{\alpha}{2},\, t - \frac{\beta}{2}\right) e^{-j 2\pi (\alpha \xi + \beta \nu)}\, \mathrm{d}\alpha\, \mathrm{d}\beta,$$
$$\tilde{U}(s, t) = U(s, t)\, T(s - u, t - v).$$
$$A\!\left(\frac{x}{\lambda}, \frac{y}{\lambda}\right) = \iint U(s, t)\, T(s - u, t - v)\, e^{-j 2\pi \left(\frac{x}{\lambda} s + \frac{y}{\lambda} t\right)}\, \mathrm{d}s\, \mathrm{d}t.$$
$$L_{\mathrm{obs}}(u, v, x, y) = \left| A\!\left(\frac{x}{\lambda}, \frac{y}{\lambda}\right) \right|^2$$
$$= \iiiint U(s_1, t_1)\, U^*(s_2, t_2)\, T(s_1 - u, t_1 - v)\, T^*(s_2 - u, t_2 - v)\, e^{-j 2\pi \left[\frac{x}{\lambda}(s_1 - s_2) + \frac{y}{\lambda}(t_1 - t_2)\right]}\, \mathrm{d}s_1\, \mathrm{d}t_1\, \mathrm{d}s_2\, \mathrm{d}t_2$$
$$= W_U\!\left(u, v, \frac{x}{\lambda}, \frac{y}{\lambda}\right) * W_T\!\left(u, v, \frac{x}{\lambda}, \frac{y}{\lambda}\right),$$
$$h_1(x, y; \zeta, \eta) = K \iint P_1(u, v)\, e^{-j \frac{2\pi}{\lambda z_1} \left[(x - M\zeta) u + (y - M\eta) v\right]}\, \mathrm{d}u\, \mathrm{d}v,$$
$$U(x, y) = \iint h_1(x, y; \zeta, \eta)\, U(\zeta, \eta)\, \mathrm{d}\zeta\, \mathrm{d}\eta,$$
$$\tilde{U}(x, y) = U(x, y) \sum_{m=1}^{N} \sum_{n=1}^{N} P_2(x - m d_2,\, y - n d_2)\, e^{-j \frac{\pi}{\lambda f_2} \left[(x - m d_2)^2 + (y - n d_2)^2\right]},$$
$$h_2(\sigma, \tau) = \frac{e^{j \frac{2\pi}{\lambda} z_2}}{j \lambda z_2}\, e^{j \frac{\pi}{\lambda z_2} (\sigma^2 + \tau^2)},$$
$$U(\sigma, \tau) = \iint h_2(\sigma - x, \tau - y)\, \tilde{U}(x, y)\, \mathrm{d}x\, \mathrm{d}y,$$
$$\frac{1}{z_{1B}} + \frac{1}{z_2} = \frac{1}{f_2}.$$
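
As a quick numerical check (illustrative only, not taken from the paper), the shear matrix B above and the ray-space tilt angle ψ can be verified with a few lines of NumPy; the value α = 0.8 is an arbitrary example.

    import numpy as np

    alpha = 0.8  # arbitrary example value of the refocus parameter

    # B shears (u, v, x, y) to (u, v, alpha*x + (1 - alpha)*u, alpha*y + (1 - alpha)*v).
    B = np.array([[1.0,         0.0,         0.0,   0.0],
                  [0.0,         1.0,         0.0,   0.0],
                  [1.0 - alpha, 0.0,         alpha, 0.0],
                  [0.0,         1.0 - alpha, 0.0,   alpha]])

    B_inv_T = np.linalg.inv(B).T               # inverse transpose of B
    print(np.round(B_inv_T, 4))                # entries 1 - 1/alpha and 1/alpha, as listed above
    print(np.isclose(np.linalg.det(B_inv_T), 1.0 / alpha**2))  # determinant is 1/alpha^2: True

    psi = np.degrees(np.arctan(1.0 - alpha))   # ray-space tilt angle, psi = arctan(1 - alpha)
    print(psi)                                 # about 11.3 degrees for alpha = 0.8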
