Abstract

Parallel lens systems and parallel image signal processing enable cost efficient and compact cameras to capture gigapixel scale images. This paper reviews the context of such cameras in the developing field of computational imaging and discusses how parallel architectures impact optical and electronic processing design. Using an array camera operating system initially developed under the Defense Advanced Research Projects Agency Advanced Wide FOV Architectures for Image Reconstruction and Exploitation program, we illustrate the state of parallel camera development with example 100 megapixel videos.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Full Article  |  PDF Article
OSA Recommended Articles
Autofocus for a multiscale gigapixel camera

Tomoya Nakamura, David S. Kittle, Seo Ho Youn, Steven D. Feller, Jun Tanida, and David J. Brady
Appl. Opt. 52(33) 8146-8153 (2013)

Microcamera aperture scale in monocentric gigapixel cameras

Daniel L. Marks, Eric J. Tremblay, Joseph E. Ford, and David J. Brady
Appl. Opt. 50(30) 5824-5833 (2011)

Development of a scalable image formation pipeline for multiscale gigapixel photography

D. R. Golish, E. M. Vera, K. J. Kelly, Q. Gong, P. A. Jansen, J. M. Hughes, D. S. Kittle, D. J. Brady, and M. E. Gehm
Opt. Express 20(20) 22048-22062 (2012)

References

  • View by:
  • |
  • |
  • |

  1. D. J. Brady and N. Hagen, “Multiscale lens design,” Opt. Express 17, 10659–10674 (2009).
    [Crossref]
  2. G. Bell, Supercomputers: The Amazing Race (A History of Supercomputing, 1960–2020) (2014).
  3. D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
    [Crossref]
  4. J. Muybridge, “The horse in motion,” Nature 25, 605 (1882).
    [Crossref]
  5. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7, 821–825 (1908).
    [Crossref]
  6. A. Young, “Stereocamera,” U.S. patent2,090,017 (August 17, 1937).
  7. B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” in ACM Transactions on Graphics (Proc SIGGRAPH) (ACM, 2005), Vol. 24, pp. 765–776.
  8. M. Shankar, N. P. Pitsianis, and D. J. Brady, “Compressive video sensors using multichannel imagers,” Appl. Opt. 49, B9–B17 (2010).
    [Crossref]
  9. F. Mochizuki, K. Kagawa, S.-I. Okihara, M.-W. Seo, B. Zhang, T. Takasawa, K. Yasutomi, and S. Kawahito, “Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor,” Opt. Express 24, 4155–4176 (2016).
    [Crossref]
  10. R. Horisaki, K. Choi, J. Hahn, J. Tanida, and D. J. Brady, “Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition,” Opt. Express 18, 19367–19378 (2010).
    [Crossref]
  11. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by Bound Optics (Tombo): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).
    [Crossref]
  12. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Thin compound-eye camera,” Appl. Opt. 44, 2949–2956 (2005).
    [Crossref]
  13. G. Druart, N. Guérineau, R. Hadar, S. Thétas, J. Taboury, S. Rommeluère, J. Primot, and M. Fendler, “Demonstration of an infrared microcamera inspired by Xenos Peckii Vision,” Appl. Opt. 48, 3368–3374 (2009).
    [Crossref]
  14. K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “Picam: an ultra-thin high performance monolithic camera array,” ACM Transactions on Graphics (Proc SIGGRAPH) 32, 166 (2013).
    [Crossref]
  15. A. Portnoy, N. Pitsianis, X. Sun, D. Brady, R. Gibbons, A. Silver, R. Te Kolste, C. Chen, T. Dillon, and D. Prather, “Design and characterization of thin multiple aperture infrared cameras,” Appl. Opt. 48, 2115–2126 (2009).
    [Crossref]
  16. M. Shankar, R. Willett, N. Pitsianis, T. Schulz, R. Gibbons, R. Te Kolste, J. Carriere, C. Chen, D. Prather, and D. Brady, “Thin infrared imaging systems through multichannel sampling,” Appl. Opt. 47, B1–B10 (2008).
    [Crossref]
  17. G. Carles, J. Downing, and A. R. Harvey, “Super-resolution imaging using a camera array,” Opt. Lett. 39, 1889–1892 (2014).
    [Crossref]
  18. P. M. Shankar, W. C. Hasenplaugh, R. L. Morrison, R. A. Stack, and M. A. Neifeld, “Multiaperture imaging,” Appl. Opt. 45, 2871–2883 (2006).
    [Crossref]
  19. R. Shogenji, Y. Kitamura, K. Yamada, S. Miyatake, and J. Tanida, “Multispectral imaging using compact compound optics,” Opt. Express 12, 1643–1655 (2004).
    [Crossref]
  20. V. R. Bhakta, M. Somayaji, S. C. Douglas, and M. P. Christensen, “Experimentally validated computational imaging with adaptive multiaperture folded architecture,” Appl. Opt. 49, B51–B58 (2010).
    [Crossref]
  21. R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
    [Crossref]
  22. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR2, 1–11 (2005).
  23. K. Fife, A. El Gamal, and H.-S. P. Wong, “A 3D multi-aperture image sensor architecture,” in IEEE Custom Integrated Circuits Conference (IEEE, 2006), pp. 281–284.
  24. R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. Hernández, S. Agarwal, and S. M. Seitz, “Jump: virtual reality video,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 198 (2016).
    [Crossref]
  25. E. C. Cull, D. P. Kowalski, J. B. Burchett, S. D. Feller, and D. J. Brady, “Three dimensional imaging with the Argus sensor array,” Proc. SPIE 4873, 211–222 (2002).
    [Crossref]
  26. S. Tallon, “A pocket camera with many eyes,” IEEE Spectrum 53, 34–40 (2016).
    [Crossref]
  27. Z. Ivezic, J. Tyson, B. Abel, E. Acosta, R. Allsman, Y. AlSayyad, S. Anderson, J. Andrew, R. Angel, and G. Angeli, “LSST: from science drivers to reference design and anticipated data products,” arXiv:0805.2366 (2008).
  28. N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
    [Crossref]
  29. B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
    [Crossref]
  30. W. T. Cathey, B. R. Frieden, W. T. Rhodes, and C. K. Rushforth, “Image gathering and processing for enhanced resolution,” J. Opt. Soc. Am. A 1, 241–250 (1984).
    [Crossref]
  31. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995).
    [Crossref]
  32. J. Ojeda-Castaneda, R. Ramos, and A. Noyola-Isgleas, “High focal depth by apodization and digital restoration,” Appl. Opt. 27, 2583–2586 (1988).
    [Crossref]
  33. R. Raskar, Computational Photography: Epsilon to Coded Photography (Springer, 2009), pp. 238–253.
  34. B. E. Bayer, “Color imaging array,” U.S. patent3,971,065 (July 20, 1976).
  35. X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
    [Crossref]
  36. G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process. Mag. 31, 105–115 (2014).
    [Crossref]
  37. D. J. Brady, Optical Imaging and Spectroscopy (Wiley, 2009).
  38. M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: replacing lenses with masks and computation,” in IEEE International Conference on Computer Vision Workshop (ICCVW) (IEEE, 2015), pp. 663–666.
  39. D. L. Marks, R. A. Stack, D. J. Brady, D. C. Munson, and R. B. Brady, “Visible cone-beam tomography with a lensless interferometric camera,” Science 284, 2164–2166 (1999).
    [Crossref]
  40. J. J. McCann and A. Rizzi, The Art and Science of HDR Imaging (Wiley, 2011), Vol. 26.
  41. S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 192 (2016).
    [Crossref]
  42. C. Zhang, J. Bastian, C. Shen, A. van den Hengel, and T. Shen, “Extended depth-of-field via focus stacking and graph cuts,” in 20th IEEE International Conference on Image Processing (ICIP) (IEEE, 2013), pp. 1272–1276.
  43. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
    [Crossref]
  44. P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Image translation for single-shot focal tomography,” Optica 2, 822–825 (2015).
    [Crossref]
  45. D. J. Brady, A. Mrozack, K. MacCabe, and P. Llull, “Compressive tomography,” Adv. Opt. Photon. 7, 756–813 (2015).
    [Crossref]
  46. S. Basty, M. Neifeld, D. Brady, and S. Kraut, “Nonlinear estimation for interferometric imaging,” Opt. Commun. 228, 249–261 (2003).
    [Crossref]
  47. D. J. Brady, “Multiplex sensors and the constant radiance theorem,” Opt. Lett. 27, 16–18 (2002).
    [Crossref]
  48. R. K. Luneburg and M. Herzberger, Mathematical Theory of Optics (University of California, 1964).
  49. E. J. Tremblay, D. L. Marks, D. J. Brady, and J. E. Ford, “Design and scaling of monocentric multiscale imagers,” Appl. Opt. 51, 4691–4702 (2012).
    [Crossref]
  50. I. Stamenov, A. Arianpour, S. J. Olivas, I. P. Agurok, A. R. Johnson, R. A. Stack, R. L. Morrison, and J. E. Ford, “Panoramic monocentric imaging using fiber-coupled focal planes,” Opt. Express 22, 31708–31721 (2014).
    [Crossref]
  51. W. Pang and D. J. Brady, “Galilean monocentric multiscale optical systems,” Opt. Express 25, 20332–20339 (2017).
    [Crossref]
  52. G. A. Lloyd and S. J. Sasson, “Electronic still camera,” U.S. patent4,131,919 (December 26, 1978).
  53. D. Prakel, The Visual Dictionary of Photography (AVA Publishing, 2009).
  54. “Sony Product Announcement,” http://www.sony-semicon.co.jp/products_en/news/detail/170301.pdf (2017).
  55. J. Nichols, K. Judd, C. Olson, K. Novak, J. Waterman, S. Feller, S. McCain, J. Anderson, and D. Brady, “Range performance of the DARPA AWARE wide field-of-view visible imager,” Appl. Opt. 55, 4478–4484 (2016).
    [Crossref]
  56. D. Golish, E. Vera, K. Kelly, Q. Gong, P. Jansen, J. Hughes, D. Kittle, D. Brady, and M. Gehm, “Development of a scalable image formation pipeline for multiscale gigapixel photography,” Opt. Express 20, 22048–22062 (2012).
    [Crossref]
  57. Q. Gong, E. Vera, D. R. Golish, S. D. Feller, D. J. Brady, and M. E. Gehm, “Model-based multiscale gigapixel image formation pipeline on GPU,” IEEE Trans. Comput. Imaging 3, 493–502 (2017).
    [Crossref]
  58. Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” IEEE Trans. Vis. Comput. Graphics 23, 2357–2364 (2017).
    [Crossref]
  59. “Kinect,” 2017, https://www.microsoftstore.com/accessories/xbox-kinect/p/1708-00000 .
  60. L. Wang, Z. Xiong, D. Gao, G. Shi, W. Zeng, and F. Wu, “High-speed hyperspectral video acquisition with a dual-camera architecture,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 4942–4950.
  61. F. Duanmu, E. Kurdoglu, S. A. Hosseini, Y. Liu, and Y. Wang, “Prioritized buffer control in two-tier 360 video streaming,” in Proceedings of the Workshop on Virtual Reality and Augmented Reality Network (ACM, 2017), pp. 13–18.
  62. K. Martinez, J. Cupitt, and S. Perry, “High resolution colorimetric image browsing on the web,” Computer Networks and ISDN Systems 30, 399–405 (1998).
    [Crossref]
  63. "MantisDemo," 2017, http://vision.nju.edu.cn/demo/ .
  64. "MantisDemoUS," 2017, http://aqueti.tv/demos/xiaolinRoad/

2017 (3)

Q. Gong, E. Vera, D. R. Golish, S. D. Feller, D. J. Brady, and M. E. Gehm, “Model-based multiscale gigapixel image formation pipeline on GPU,” IEEE Trans. Comput. Imaging 3, 493–502 (2017).
[Crossref]

Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” IEEE Trans. Vis. Comput. Graphics 23, 2357–2364 (2017).
[Crossref]

W. Pang and D. J. Brady, “Galilean monocentric multiscale optical systems,” Opt. Express 25, 20332–20339 (2017).
[Crossref]

2016 (6)

F. Mochizuki, K. Kagawa, S.-I. Okihara, M.-W. Seo, B. Zhang, T. Takasawa, K. Yasutomi, and S. Kawahito, “Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor,” Opt. Express 24, 4155–4176 (2016).
[Crossref]

J. Nichols, K. Judd, C. Olson, K. Novak, J. Waterman, S. Feller, S. McCain, J. Anderson, and D. Brady, “Range performance of the DARPA AWARE wide field-of-view visible imager,” Appl. Opt. 55, 4478–4484 (2016).
[Crossref]

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. Hernández, S. Agarwal, and S. M. Seitz, “Jump: virtual reality video,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 198 (2016).
[Crossref]

S. Tallon, “A pocket camera with many eyes,” IEEE Spectrum 53, 34–40 (2016).
[Crossref]

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 192 (2016).
[Crossref]

2015 (2)

2014 (3)

2013 (2)

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “Picam: an ultra-thin high performance monolithic camera array,” ACM Transactions on Graphics (Proc SIGGRAPH) 32, 166 (2013).
[Crossref]

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
[Crossref]

2012 (3)

2010 (3)

2009 (3)

2008 (2)

M. Shankar, R. Willett, N. Pitsianis, T. Schulz, R. Gibbons, R. Te Kolste, J. Carriere, C. Chen, D. Prather, and D. Brady, “Thin infrared imaging systems through multichannel sampling,” Appl. Opt. 47, B1–B10 (2008).
[Crossref]

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

2006 (1)

2005 (1)

2004 (1)

2003 (1)

S. Basty, M. Neifeld, D. Brady, and S. Kraut, “Nonlinear estimation for interferometric imaging,” Opt. Commun. 228, 249–261 (2003).
[Crossref]

2002 (3)

D. J. Brady, “Multiplex sensors and the constant radiance theorem,” Opt. Lett. 27, 16–18 (2002).
[Crossref]

N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
[Crossref]

E. C. Cull, D. P. Kowalski, J. B. Burchett, S. D. Feller, and D. J. Brady, “Three dimensional imaging with the Argus sensor array,” Proc. SPIE 4873, 211–222 (2002).
[Crossref]

2001 (2)

1999 (1)

D. L. Marks, R. A. Stack, D. J. Brady, D. C. Munson, and R. B. Brady, “Visible cone-beam tomography with a lensless interferometric camera,” Science 284, 2164–2166 (1999).
[Crossref]

1998 (1)

K. Martinez, J. Cupitt, and S. Perry, “High resolution colorimetric image browsing on the web,” Computer Networks and ISDN Systems 30, 399–405 (1998).
[Crossref]

1995 (1)

1988 (1)

1984 (1)

1908 (1)

G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7, 821–825 (1908).
[Crossref]

1882 (1)

J. Muybridge, “The horse in motion,” Nature 25, 605 (1882).
[Crossref]

Abel, B.

Z. Ivezic, J. Tyson, B. Abel, E. Acosta, R. Allsman, Y. AlSayyad, S. Anderson, J. Andrew, R. Angel, and G. Angeli, “LSST: from science drivers to reference design and anticipated data products,” arXiv:0805.2366 (2008).

Acosta, E.

Z. Ivezic, J. Tyson, B. Abel, E. Acosta, R. Allsman, Y. AlSayyad, S. Anderson, J. Andrew, R. Angel, and G. Angeli, “LSST: from science drivers to reference design and anticipated data products,” arXiv:0805.2366 (2008).

Adams, A.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 192 (2016).
[Crossref]

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” in ACM Transactions on Graphics (Proc SIGGRAPH) (ACM, 2005), Vol. 24, pp. 765–776.

Agarwal, S.

R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. Hernández, S. Agarwal, and S. M. Seitz, “Jump: virtual reality video,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 198 (2016).
[Crossref]

Agurok, I. P.

Allsman, R.

Z. Ivezic, J. Tyson, B. Abel, E. Acosta, R. Allsman, Y. AlSayyad, S. Anderson, J. Andrew, R. Angel, and G. Angeli, “LSST: from science drivers to reference design and anticipated data products,” arXiv:0805.2366 (2008).

AlSayyad, Y.

Z. Ivezic, J. Tyson, B. Abel, E. Acosta, R. Allsman, Y. AlSayyad, S. Anderson, J. Andrew, R. Angel, and G. Angeli, “LSST: from science drivers to reference design and anticipated data products,” arXiv:0805.2366 (2008).

Anderson, J.

Anderson, R.

R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. Hernández, S. Agarwal, and S. M. Seitz, “Jump: virtual reality video,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 198 (2016).
[Crossref]

Anderson, S.

Z. Ivezic, J. Tyson, B. Abel, E. Acosta, R. Allsman, Y. AlSayyad, S. Anderson, J. Andrew, R. Angel, and G. Angeli, “LSST: from science drivers to reference design and anticipated data products,” arXiv:0805.2366 (2008).

Andrew, J.

Z. Ivezic, J. Tyson, B. Abel, E. Acosta, R. Allsman, Y. AlSayyad, S. Anderson, J. Andrew, R. Angel, and G. Angeli, “LSST: from science drivers to reference design and anticipated data products,” arXiv:0805.2366 (2008).

Angel, R.

Z. Ivezic, J. Tyson, B. Abel, E. Acosta, R. Allsman, Y. AlSayyad, S. Anderson, J. Andrew, R. Angel, and G. Angeli, “LSST: from science drivers to reference design and anticipated data products,” arXiv:0805.2366 (2008).

Angeli, G.

Z. Ivezic, J. Tyson, B. Abel, E. Acosta, R. Allsman, Y. AlSayyad, S. Anderson, J. Andrew, R. Angel, and G. Angeli, “LSST: from science drivers to reference design and anticipated data products,” arXiv:0805.2366 (2008).

Antoniades, J.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Antunez, E.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” in ACM Transactions on Graphics (Proc SIGGRAPH) (ACM, 2005), Vol. 24, pp. 765–776.

Arce, G. R.

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process. Mag. 31, 105–115 (2014).
[Crossref]

Arguello, H.

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process. Mag. 31, 105–115 (2014).
[Crossref]

Arianpour, A.

Asif, M. S.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: replacing lenses with masks and computation,” in IEEE International Conference on Computer Vision Workshop (ICCVW) (IEEE, 2015), pp. 663–666.

Aussel, H.

N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
[Crossref]

Ayremlou, A.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: replacing lenses with masks and computation,” in IEEE International Conference on Computer Vision Workshop (ICCVW) (IEEE, 2015), pp. 663–666.

Baraniuk, R.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: replacing lenses with masks and computation,” in IEEE International Conference on Computer Vision Workshop (ICCVW) (IEEE, 2015), pp. 663–666.

Barron, J. T.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 192 (2016).
[Crossref]

R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. Hernández, S. Agarwal, and S. M. Seitz, “Jump: virtual reality video,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 198 (2016).
[Crossref]

Barth, A.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” in ACM Transactions on Graphics (Proc SIGGRAPH) (ACM, 2005), Vol. 24, pp. 765–776.

Bastian, J.

C. Zhang, J. Bastian, C. Shen, A. van den Hengel, and T. Shen, “Extended depth-of-field via focus stacking and graph cuts,” in 20th IEEE International Conference on Image Processing (ICIP) (IEEE, 2013), pp. 1272–1276.

Basty, S.

S. Basty, M. Neifeld, D. Brady, and S. Kraut, “Nonlinear estimation for interferometric imaging,” Opt. Commun. 228, 249–261 (2003).
[Crossref]

Bayer, B. E.

B. E. Bayer, “Color imaging array,” U.S. patent3,971,065 (July 20, 1976).

Bell, G.

G. Bell, Supercomputers: The Amazing Race (A History of Supercomputing, 1960–2020) (2014).

Bhakta, V. R.

Boesgaard, H.

N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
[Crossref]

Brady, D.

Brady, D. J.

W. Pang and D. J. Brady, “Galilean monocentric multiscale optical systems,” Opt. Express 25, 20332–20339 (2017).
[Crossref]

Q. Gong, E. Vera, D. R. Golish, S. D. Feller, D. J. Brady, and M. E. Gehm, “Model-based multiscale gigapixel image formation pipeline on GPU,” IEEE Trans. Comput. Imaging 3, 493–502 (2017).
[Crossref]

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Image translation for single-shot focal tomography,” Optica 2, 822–825 (2015).
[Crossref]

D. J. Brady, A. Mrozack, K. MacCabe, and P. Llull, “Compressive tomography,” Adv. Opt. Photon. 7, 756–813 (2015).
[Crossref]

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process. Mag. 31, 105–115 (2014).
[Crossref]

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
[Crossref]

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

E. J. Tremblay, D. L. Marks, D. J. Brady, and J. E. Ford, “Design and scaling of monocentric multiscale imagers,” Appl. Opt. 51, 4691–4702 (2012).
[Crossref]

M. Shankar, N. P. Pitsianis, and D. J. Brady, “Compressive video sensors using multichannel imagers,” Appl. Opt. 49, B9–B17 (2010).
[Crossref]

R. Horisaki, K. Choi, J. Hahn, J. Tanida, and D. J. Brady, “Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition,” Opt. Express 18, 19367–19378 (2010).
[Crossref]

D. J. Brady and N. Hagen, “Multiscale lens design,” Opt. Express 17, 10659–10674 (2009).
[Crossref]

D. J. Brady, “Multiplex sensors and the constant radiance theorem,” Opt. Lett. 27, 16–18 (2002).
[Crossref]

E. C. Cull, D. P. Kowalski, J. B. Burchett, S. D. Feller, and D. J. Brady, “Three dimensional imaging with the Argus sensor array,” Proc. SPIE 4873, 211–222 (2002).
[Crossref]

D. L. Marks, R. A. Stack, D. J. Brady, D. C. Munson, and R. B. Brady, “Visible cone-beam tomography with a lensless interferometric camera,” Science 284, 2164–2166 (1999).
[Crossref]

D. J. Brady, Optical Imaging and Spectroscopy (Wiley, 2009).

Brady, R. B.

D. L. Marks, R. A. Stack, D. J. Brady, D. C. Munson, and R. B. Brady, “Visible cone-beam tomography with a lensless interferometric camera,” Science 284, 2164–2166 (1999).
[Crossref]

Bräuer, A.

Braun, M.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Brédif, M.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR2, 1–11 (2005).

Burchett, J. B.

E. C. Cull, D. P. Kowalski, J. B. Burchett, S. D. Feller, and D. J. Brady, “Three dimensional imaging with the Argus sensor array,” Proc. SPIE 4873, 211–222 (2002).
[Crossref]

Burke, B. E.

N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
[Crossref]

Cao, X.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

Carin, L.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Image translation for single-shot focal tomography,” Optica 2, 822–825 (2015).
[Crossref]

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process. Mag. 31, 105–115 (2014).
[Crossref]

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
[Crossref]

Carles, G.

Carriere, J.

Cathey, W. T.

Chambers, K.

N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
[Crossref]

Chatterjee, P.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “Picam: an ultra-thin high performance monolithic camera array,” ACM Transactions on Graphics (Proc SIGGRAPH) 32, 166 (2013).
[Crossref]

Chen, C.

Chen, J.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 192 (2016).
[Crossref]

Chester, D.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Choi, K.

Christensen, M. P.

Chun, M. R.

N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
[Crossref]

Collins, R. T.

R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
[Crossref]

Cull, E. C.

E. C. Cull, D. P. Kowalski, J. B. Burchett, S. D. Feller, and D. J. Brady, “Three dimensional imaging with the Argus sensor array,” Proc. SPIE 4873, 211–222 (2002).
[Crossref]

Cupitt, J.

K. Martinez, J. Cupitt, and S. Perry, “High resolution colorimetric image browsing on the web,” Computer Networks and ISDN Systems 30, 399–405 (1998).
[Crossref]

Dai, Q.

Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” IEEE Trans. Vis. Comput. Graphics 23, 2357–2364 (2017).
[Crossref]

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

Dannberg, P.

Dillon, T.

Douglas, S. C.

Downing, J.

Dowski, E. R.

Druart, G.

Duanmu, F.

F. Duanmu, E. Kurdoglu, S. A. Hosseini, Y. Liu, and Y. Wang, “Prioritized buffer control in two-tier 360 video streaming,” in Proceedings of the Workshop on Virtual Reality and Augmented Reality Network (ACM, 2017), pp. 13–18.

Duparré, J.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “Picam: an ultra-thin high performance monolithic camera array,” ACM Transactions on Graphics (Proc SIGGRAPH) 32, 166 (2013).
[Crossref]

J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Thin compound-eye camera,” Appl. Opt. 44, 2949–2956 (2005).
[Crossref]

Duval, G.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR2, 1–11 (2005).

Edwards, J.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

El Gamal, A.

K. Fife, A. El Gamal, and H.-S. P. Wong, “A 3D multi-aperture image sensor architecture,” in IEEE Custom Integrated Circuits Conference (IEEE, 2006), pp. 281–284.

Feller, S.

Feller, S. D.

Q. Gong, E. Vera, D. R. Golish, S. D. Feller, D. J. Brady, and M. E. Gehm, “Model-based multiscale gigapixel image formation pipeline on GPU,” IEEE Trans. Comput. Imaging 3, 493–502 (2017).
[Crossref]

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

E. C. Cull, D. P. Kowalski, J. B. Burchett, S. D. Feller, and D. J. Brady, “Three dimensional imaging with the Argus sensor array,” Proc. SPIE 4873, 211–222 (2002).
[Crossref]

Fendler, M.

Fife, K.

K. Fife, A. El Gamal, and H.-S. P. Wong, “A 3D multi-aperture image sensor architecture,” in IEEE Custom Integrated Circuits Conference (IEEE, 2006), pp. 281–284.

Ford, J. E.

Frieden, B. R.

Fujiyoshi, H.

R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
[Crossref]

Gallup, D.

R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. Hernández, S. Agarwal, and S. M. Seitz, “Jump: virtual reality video,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 198 (2016).
[Crossref]

Gao, D.

L. Wang, Z. Xiong, D. Gao, G. Shi, W. Zeng, and F. Wu, “High-speed hyperspectral video acquisition with a dual-camera architecture,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 4942–4950.

Gehm, M.

Gehm, M. E.

Q. Gong, E. Vera, D. R. Golish, S. D. Feller, D. J. Brady, and M. E. Gehm, “Model-based multiscale gigapixel image formation pipeline on GPU,” IEEE Trans. Comput. Imaging 3, 493–502 (2017).
[Crossref]

Geiss, R.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 192 (2016).
[Crossref]

Gershfield, C.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Gibbons, R.

Golish, D.

Golish, D. R.

Q. Gong, E. Vera, D. R. Golish, S. D. Feller, D. J. Brady, and M. E. Gehm, “Model-based multiscale gigapixel image formation pipeline on GPU,” IEEE Trans. Comput. Imaging 3, 493–502 (2017).
[Crossref]

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

Gong, Q.

Q. Gong, E. Vera, D. R. Golish, S. D. Feller, D. J. Brady, and M. E. Gehm, “Model-based multiscale gigapixel image formation pipeline on GPU,” IEEE Trans. Comput. Imaging 3, 493–502 (2017).
[Crossref]

D. Golish, E. Vera, K. Kelly, Q. Gong, P. Jansen, J. Hughes, D. Kittle, D. Brady, and M. Gehm, “Development of a scalable image formation pipeline for multiscale gigapixel photography,” Opt. Express 20, 22048–22062 (2012).
[Crossref]

Guérineau, N.

Haas, D.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Hadar, R.

Hagen, N.

Hahn, J.

Hanrahan, P.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR2, 1–11 (2005).

Harvey, A. R.

Hasenplaugh, W. C.

Hasinoff, S. W.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 192 (2016).
[Crossref]

Heasley, J. N.

N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
[Crossref]

Heidrich, W.

Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” IEEE Trans. Vis. Comput. Graphics 23, 2357–2364 (2017).
[Crossref]

Hernández, C.

R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. Hernández, S. Agarwal, and S. M. Seitz, “Jump: virtual reality video,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 198 (2016).
[Crossref]

Herzberger, M.

R. K. Luneburg and M. Herzberger, Mathematical Theory of Optics (University of California, 1964).

Hodapp, K.-W.

N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
[Crossref]

Horisaki, R.

Horowitz, M.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” in ACM Transactions on Graphics (Proc SIGGRAPH) (ACM, 2005), Vol. 24, pp. 765–776.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR2, 1–11 (2005).

Hosseini, S. A.

F. Duanmu, E. Kurdoglu, S. A. Hosseini, Y. Liu, and Y. Wang, “Prioritized buffer control in two-tier 360 video streaming,” in Proceedings of the Workshop on Virtual Reality and Augmented Reality Network (ACM, 2017), pp. 13–18.

Hughes, J.

Hunt, B.

N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
[Crossref]

Ichioka, Y.

Ishida, K.

Ivezic, Z.

Z. Ivezic, J. Tyson, B. Abel, E. Acosta, R. Allsman, Y. AlSayyad, S. Anderson, J. Andrew, R. Angel, and G. Angeli, “LSST: from science drivers to reference design and anticipated data products,” arXiv:0805.2366 (2008).

Jansen, P.

Jedicke, R.

N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
[Crossref]

Johnson, A. R.

Joshi, N.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” in ACM Transactions on Graphics (Proc SIGGRAPH) (ACM, 2005), Vol. 24, pp. 765–776.

Judd, K.

Kagawa, K.

Kainz, F.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 192 (2016).
[Crossref]

Kaiser, N.

N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
[Crossref]

Kanade, T.

R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
[Crossref]

Kawahito, S.

Kelly, K.

Kitamura, Y.

Kittle, D.

Kittle, D. S.

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process. Mag. 31, 105–115 (2014).
[Crossref]

Kondou, N.

Kontkanen, J.

R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. Hernández, S. Agarwal, and S. M. Seitz, “Jump: virtual reality video,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 198 (2016).
[Crossref]

Kowalski, D. P.

E. C. Cull, D. P. Kowalski, J. B. Burchett, S. D. Feller, and D. J. Brady, “Three dimensional imaging with the Argus sensor array,” Proc. SPIE 4873, 211–222 (2002).
[Crossref]

Kraut, S.

S. Basty, M. Neifeld, D. Brady, and S. Kraut, “Nonlinear estimation for interferometric imaging,” Opt. Commun. 228, 249–261 (2003).
[Crossref]

Kumagai, T.

Kurdoglu, E.

F. Duanmu, E. Kurdoglu, S. A. Hosseini, Y. Liu, and Y. Wang, “Prioritized buffer control in two-tier 360 video streaming,” in Proceedings of the Workshop on Virtual Reality and Augmented Reality Network (ACM, 2017), pp. 13–18.

Leininger, B.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Lelescu, D.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “Picam: an ultra-thin high performance monolithic camera array,” ACM Transactions on Graphics (Proc SIGGRAPH) 32, 166 (2013).
[Crossref]

Levoy, M.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 192 (2016).
[Crossref]

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” in ACM Transactions on Graphics (Proc SIGGRAPH) (ACM, 2005), Vol. 24, pp. 765–776.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR2, 1–11 (2005).

Liao, X.

Lin, S.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

Lin, X.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

Lippmann, G.

G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7, 821–825 (1908).
[Crossref]

Lipton, A. J.

R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
[Crossref]

Liu, E.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Liu, Y.

Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” IEEE Trans. Vis. Comput. Graphics 23, 2357–2364 (2017).
[Crossref]

F. Duanmu, E. Kurdoglu, S. A. Hosseini, Y. Liu, and Y. Wang, “Prioritized buffer control in two-tier 360 video streaming,” in Proceedings of the Workshop on Virtual Reality and Augmented Reality Network (ACM, 2017), pp. 13–18.

Lloyd, G. A.

G. A. Lloyd and S. J. Sasson, “Electronic still camera,” U.S. patent4,131,919 (December 26, 1978).

Llull, P.

Luneburg, R. K.

R. K. Luneburg and M. Herzberger, Mathematical Theory of Optics (University of California, 1964).

MacCabe, K.

Marks, D.

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

Marks, D. L.

E. J. Tremblay, D. L. Marks, D. J. Brady, and J. E. Ford, “Design and scaling of monocentric multiscale imagers,” Appl. Opt. 51, 4691–4702 (2012).
[Crossref]

D. L. Marks, R. A. Stack, D. J. Brady, D. C. Munson, and R. B. Brady, “Visible cone-beam tomography with a lensless interferometric camera,” Science 284, 2164–2166 (1999).
[Crossref]

Martinez, K.

K. Martinez, J. Cupitt, and S. Perry, “High resolution colorimetric image browsing on the web,” Computer Networks and ISDN Systems 30, 399–405 (1998).
[Crossref]

McCain, S.

McCann, J. J.

J. J. McCann and A. Rizzi, The Art and Science of HDR Imaging (Wiley, 2011), Vol. 26.

McMahon, A.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “Picam: an ultra-thin high performance monolithic camera array,” ACM Transactions on Graphics (Proc SIGGRAPH) 32, 166 (2013).
[Crossref]

Miyatake, S.

Miyazaki, D.

Mochizuki, F.

Molina, G.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “Picam: an ultra-thin high performance monolithic camera array,” ACM Transactions on Graphics (Proc SIGGRAPH) 32, 166 (2013).
[Crossref]

Morimoto, T.

Morrison, R. L.

Mrozack, A.

Mullis, R.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “Picam: an ultra-thin high performance monolithic camera array,” ACM Transactions on Graphics (Proc SIGGRAPH) 32, 166 (2013).
[Crossref]

Munson, D. C.

D. L. Marks, R. A. Stack, D. J. Brady, D. C. Munson, and R. B. Brady, “Visible cone-beam tomography with a lensless interferometric camera,” Science 284, 2164–2166 (1999).
[Crossref]

Muybridge, J.

J. Muybridge, “The horse in motion,” Nature 25, 605 (1882).
[Crossref]

Nayar, S.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “Picam: an ultra-thin high performance monolithic camera array,” ACM Transactions on Graphics (Proc SIGGRAPH) 32, 166 (2013).
[Crossref]

Neifeld, M.

S. Basty, M. Neifeld, D. Brady, and S. Kraut, “Nonlinear estimation for interferometric imaging,” Opt. Commun. 228, 249–261 (2003).
[Crossref]

Neifeld, M. A.

Ng, R.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR2, 1–11 (2005).

Nichols, J.

Novak, K.

Noyola-Isgleas, A.

Ojeda-Castaneda, J.

Okihara, S.-I.

Olivas, S. J.

Olson, C.

Pang, W.

Perry, S.

K. Martinez, J. Cupitt, and S. Perry, “High resolution colorimetric image browsing on the web,” Computer Networks and ISDN Systems 30, 399–405 (1998).
[Crossref]

Pitsianis, N.

Pitsianis, N. P.

Portnoy, A.

Prakel, D.

D. Prakel, The Visual Dictionary of Photography (AVA Publishing, 2009).

Prather, D.

Primot, J.

Ramos, R.

Raskar, R.

R. Raskar, Computational Photography: Epsilon to Coded Photography (Springer, 2009), pp. 238–253.

Rhodes, W. T.

Rizzi, A.

J. J. McCann and A. Rizzi, The Art and Science of HDR Imaging (Wiley, 2011), Vol. 26.

Rommeluère, S.

Rushforth, C. K.

Sankaranarayanan, A.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: replacing lenses with masks and computation,” in IEEE International Conference on Computer Vision Workshop (ICCVW) (IEEE, 2015), pp. 663–666.

Sapiro, G.

Sasson, S. J.

G. A. Lloyd and S. J. Sasson, “Electronic still camera,” U.S. patent4,131,919 (December 26, 1978).

Schreiber, P.

Schulz, T.

Seitz, S. M.

R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. Hernández, S. Agarwal, and S. M. Seitz, “Jump: virtual reality video,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 198 (2016).
[Crossref]

Seo, M.-W.

Shankar, M.

Shankar, P. M.

Sharlet, D.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 192 (2016).
[Crossref]

Shen, C.

C. Zhang, J. Bastian, C. Shen, A. van den Hengel, and T. Shen, “Extended depth-of-field via focus stacking and graph cuts,” in 20th IEEE International Conference on Image Processing (ICIP) (IEEE, 2013), pp. 1272–1276.

Shen, T.

C. Zhang, J. Bastian, C. Shen, A. van den Hengel, and T. Shen, “Extended depth-of-field via focus stacking and graph cuts,” in 20th IEEE International Conference on Image Processing (ICIP) (IEEE, 2013), pp. 1272–1276.

Shi, G.

L. Wang, Z. Xiong, D. Gao, G. Shi, W. Zeng, and F. Wu, “High-speed hyperspectral video acquisition with a dual-camera architecture,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 4942–4950.

Shogenji, R.

Silver, A.

Snavely, N.

R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. Hernández, S. Agarwal, and S. M. Seitz, “Jump: virtual reality video,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 198 (2016).
[Crossref]

Somayaji, M.

Stack, R.

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

Stack, R. A.

Stamenov, I.

Stevens, M.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Sun, X.

Taboury, J.

Takasawa, T.

Tallon, S.

S. Tallon, “A pocket camera with many eyes,” IEEE Spectrum 53, 34–40 (2016).
[Crossref]

Talvala, E.-V.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” in ACM Transactions on Graphics (Proc SIGGRAPH) (ACM, 2005), Vol. 24, pp. 765–776.

Tanida, J.

Targove, J. D.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Te Kolste, R.

Thétas, S.

Tremblay, E. J.

Tünnermann, A.

Tyson, J.

Z. Ivezic, J. Tyson, B. Abel, E. Acosta, R. Allsman, Y. AlSayyad, S. Anderson, J. Andrew, R. Angel, and G. Angeli, “LSST: from science drivers to reference design and anticipated data products,” arXiv:0805.2366 (2008).

Vaish, V.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” in ACM Transactions on Graphics (Proc SIGGRAPH) (ACM, 2005), Vol. 24, pp. 765–776.

van den Hengel, A.

C. Zhang, J. Bastian, C. Shen, A. van den Hengel, and T. Shen, “Extended depth-of-field via focus stacking and graph cuts,” in 20th IEEE International Conference on Image Processing (ICIP) (IEEE, 2013), pp. 1272–1276.

Veeraraghavan, A.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: replacing lenses with masks and computation,” in IEEE International Conference on Computer Vision Workshop (ICCVW) (IEEE, 2015), pp. 663–666.

Venkataraman, K.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “Picam: an ultra-thin high performance monolithic camera array,” ACM Transactions on Graphics (Proc SIGGRAPH) 32, 166 (2013).
[Crossref]

Vera, E.

Q. Gong, E. Vera, D. R. Golish, S. D. Feller, D. J. Brady, and M. E. Gehm, “Model-based multiscale gigapixel image formation pipeline on GPU,” IEEE Trans. Comput. Imaging 3, 493–502 (2017).
[Crossref]

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

D. Golish, E. Vera, K. Kelly, Q. Gong, P. Jansen, J. Hughes, D. Kittle, D. Brady, and M. Gehm, “Development of a scalable image formation pipeline for multiscale gigapixel photography,” Opt. Express 20, 22048–22062 (2012).
[Crossref]

Wang, L.

L. Wang, Z. Xiong, D. Gao, G. Shi, W. Zeng, and F. Wu, “High-speed hyperspectral video acquisition with a dual-camera architecture,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 4942–4950.

Wang, Y.

Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” IEEE Trans. Vis. Comput. Graphics 23, 2357–2364 (2017).
[Crossref]

F. Duanmu, E. Kurdoglu, S. A. Hosseini, Y. Liu, and Y. Wang, “Prioritized buffer control in two-tier 360 video streaming,” in Proceedings of the Workshop on Virtual Reality and Augmented Reality Network (ACM, 2017), pp. 13–18.

Waterman, J.

Wilburn, B.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” in ACM Transactions on Graphics (Proc SIGGRAPH) (ACM, 2005), Vol. 24, pp. 765–776.

Willett, R.

Wong, H.-S. P.

K. Fife, A. El Gamal, and H.-S. P. Wong, “A 3D multi-aperture image sensor architecture,” in IEEE Custom Integrated Circuits Conference (IEEE, 2006), pp. 281–284.

Wu, F.

L. Wang, Z. Xiong, D. Gao, G. Shi, W. Zeng, and F. Wu, “High-speed hyperspectral video acquisition with a dual-camera architecture,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 4942–4950.

Xiong, Z.

L. Wang, Z. Xiong, D. Gao, G. Shi, W. Zeng, and F. Wu, “High-speed hyperspectral video acquisition with a dual-camera architecture,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 4942–4950.

Yamada, K.

Yang, J.

Yasutomi, K.

Young, A.

A. Young, “Stereocamera,” U.S. patent2,090,017 (August 17, 1937).

Yuan, X.

Yue, T.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

Zeng, W.

L. Wang, Z. Xiong, D. Gao, G. Shi, W. Zeng, and F. Wu, “High-speed hyperspectral video acquisition with a dual-camera architecture,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 4942–4950.

Zhang, B.

Zhang, C.

C. Zhang, J. Bastian, C. Shen, A. van den Hengel, and T. Shen, “Extended depth-of-field via focus stacking and graph cuts,” in 20th IEEE International Conference on Image Processing (ICIP) (IEEE, 2013), pp. 1272–1276.

ACM Transactions on Graphics (Proc SIGGRAPH) (3)

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “Picam: an ultra-thin high performance monolithic camera array,” ACM Transactions on Graphics (Proc SIGGRAPH) 32, 166 (2013).
[Crossref]

R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. Hernández, S. Agarwal, and S. M. Seitz, “Jump: virtual reality video,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 198 (2016).
[Crossref]

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Transactions on Graphics (Proc SIGGRAPH) 35, 192 (2016).
[Crossref]

Adv. Opt. Photon. (1)

Appl. Opt. (12)

J. Nichols, K. Judd, C. Olson, K. Novak, J. Waterman, S. Feller, S. McCain, J. Anderson, and D. Brady, “Range performance of the DARPA AWARE wide field-of-view visible imager,” Appl. Opt. 55, 4478–4484 (2016).
[Crossref]

M. Shankar, N. P. Pitsianis, and D. J. Brady, “Compressive video sensors using multichannel imagers,” Appl. Opt. 49, B9–B17 (2010).
[Crossref]

V. R. Bhakta, M. Somayaji, S. C. Douglas, and M. P. Christensen, “Experimentally validated computational imaging with adaptive multiaperture folded architecture,” Appl. Opt. 49, B51–B58 (2010).
[Crossref]

J. Ojeda-Castaneda, R. Ramos, and A. Noyola-Isgleas, “High focal depth by apodization and digital restoration,” Appl. Opt. 27, 2583–2586 (1988).
[Crossref]

E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995).
[Crossref]

J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by Bound Optics (Tombo): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).
[Crossref]

J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Thin compound-eye camera,” Appl. Opt. 44, 2949–2956 (2005).
[Crossref]

P. M. Shankar, W. C. Hasenplaugh, R. L. Morrison, R. A. Stack, and M. A. Neifeld, “Multiaperture imaging,” Appl. Opt. 45, 2871–2883 (2006).
[Crossref]

M. Shankar, R. Willett, N. Pitsianis, T. Schulz, R. Gibbons, R. Te Kolste, J. Carriere, C. Chen, D. Prather, and D. Brady, “Thin infrared imaging systems through multichannel sampling,” Appl. Opt. 47, B1–B10 (2008).
[Crossref]

A. Portnoy, N. Pitsianis, X. Sun, D. Brady, R. Gibbons, A. Silver, R. Te Kolste, C. Chen, T. Dillon, and D. Prather, “Design and characterization of thin multiple aperture infrared cameras,” Appl. Opt. 48, 2115–2126 (2009).
[Crossref]

G. Druart, N. Guérineau, R. Hadar, S. Thétas, J. Taboury, S. Rommeluère, J. Primot, and M. Fendler, “Demonstration of an infrared microcamera inspired by Xenos Peckii Vision,” Appl. Opt. 48, 3368–3374 (2009).
[Crossref]

E. J. Tremblay, D. L. Marks, D. J. Brady, and J. E. Ford, “Design and scaling of monocentric multiscale imagers,” Appl. Opt. 51, 4691–4702 (2012).
[Crossref]

Computer Networks and ISDN Systems (1)

K. Martinez, J. Cupitt, and S. Perry, “High resolution colorimetric image browsing on the web,” Computer Networks and ISDN Systems 30, 399–405 (1998).
[Crossref]

IEEE Signal Process. Mag. (2)

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33, 95–108 (2016).
[Crossref]

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process. Mag. 31, 105–115 (2014).
[Crossref]

IEEE Spectrum (1)

S. Tallon, “A pocket camera with many eyes,” IEEE Spectrum 53, 34–40 (2016).
[Crossref]

IEEE Trans. Comput. Imaging (1)

Q. Gong, E. Vera, D. R. Golish, S. D. Feller, D. J. Brady, and M. E. Gehm, “Model-based multiscale gigapixel image formation pipeline on GPU,” IEEE Trans. Comput. Imaging 3, 493–502 (2017).
[Crossref]

IEEE Trans. Vis. Comput. Graphics (1)

Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” IEEE Trans. Vis. Comput. Graphics 23, 2357–2364 (2017).
[Crossref]

J. Opt. Soc. Am. A (1)

J. Phys. Theor. Appl. (1)

G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7, 821–825 (1908).
[Crossref]

Nature (2)

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

J. Muybridge, “The horse in motion,” Nature 25, 605 (1882).
[Crossref]

Opt. Commun. (1)

S. Basty, M. Neifeld, D. Brady, and S. Kraut, “Nonlinear estimation for interferometric imaging,” Opt. Commun. 228, 249–261 (2003).
[Crossref]

Opt. Express (8)

R. Shogenji, Y. Kitamura, K. Yamada, S. Miyatake, and J. Tanida, “Multispectral imaging using compact compound optics,” Opt. Express 12, 1643–1655 (2004).
[Crossref]

D. Golish, E. Vera, K. Kelly, Q. Gong, P. Jansen, J. Hughes, D. Kittle, D. Brady, and M. Gehm, “Development of a scalable image formation pipeline for multiscale gigapixel photography,” Opt. Express 20, 22048–22062 (2012).
[Crossref]

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
[Crossref]

D. J. Brady and N. Hagen, “Multiscale lens design,” Opt. Express 17, 10659–10674 (2009).
[Crossref]

R. Horisaki, K. Choi, J. Hahn, J. Tanida, and D. J. Brady, “Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition,” Opt. Express 18, 19367–19378 (2010).
[Crossref]

I. Stamenov, A. Arianpour, S. J. Olivas, I. P. Agurok, A. R. Johnson, R. A. Stack, R. L. Morrison, and J. E. Ford, “Panoramic monocentric imaging using fiber-coupled focal planes,” Opt. Express 22, 31708–31721 (2014).
[Crossref]

W. Pang and D. J. Brady, “Galilean monocentric multiscale optical systems,” Opt. Express 25, 20332–20339 (2017).
[Crossref]

F. Mochizuki, K. Kagawa, S.-I. Okihara, M.-W. Seo, B. Zhang, T. Takasawa, K. Yasutomi, and S. Kawahito, “Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor,” Opt. Express 24, 4155–4176 (2016).
[Crossref]

Opt. Lett. (2)

Optica (1)

Proc. IEEE (1)

R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
[Crossref]

Proc. SPIE (3)

E. C. Cull, D. P. Kowalski, J. B. Burchett, S. D. Feller, and D. J. Brady, “Three dimensional imaging with the Argus sensor array,” Proc. SPIE 4873, 211–222 (2002).
[Crossref]

N. Kaiser, H. Aussel, B. E. Burke, H. Boesgaard, K. Chambers, M. R. Chun, J. N. Heasley, K.-W. Hodapp, B. Hunt, and R. Jedicke, “Pan-starrs: a large synoptic survey telescope array,” Proc. SPIE 4836, 154–164 (2002).
[Crossref]

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, and J. D. Targove, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Science (1)

D. L. Marks, R. A. Stack, D. J. Brady, D. C. Munson, and R. B. Brady, “Visible cone-beam tomography with a lensless interferometric camera,” Science 284, 2164–2166 (1999).
[Crossref]

Other (21)

J. J. McCann and A. Rizzi, The Art and Science of HDR Imaging (Wiley, 2011), Vol. 26.

C. Zhang, J. Bastian, C. Shen, A. van den Hengel, and T. Shen, “Extended depth-of-field via focus stacking and graph cuts,” in 20th IEEE International Conference on Image Processing (ICIP) (IEEE, 2013), pp. 1272–1276.

R. K. Luneburg and M. Herzberger, Mathematical Theory of Optics (University of California, 1964).

D. J. Brady, Optical Imaging and Spectroscopy (Wiley, 2009).

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: replacing lenses with masks and computation,” in IEEE International Conference on Computer Vision Workshop (ICCVW) (IEEE, 2015), pp. 663–666.

“Kinect,” 2017, https://www.microsoftstore.com/accessories/xbox-kinect/p/1708-00000 .

L. Wang, Z. Xiong, D. Gao, G. Shi, W. Zeng, and F. Wu, “High-speed hyperspectral video acquisition with a dual-camera architecture,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 4942–4950.

F. Duanmu, E. Kurdoglu, S. A. Hosseini, Y. Liu, and Y. Wang, “Prioritized buffer control in two-tier 360 video streaming,” in Proceedings of the Workshop on Virtual Reality and Augmented Reality Network (ACM, 2017), pp. 13–18.

"MantisDemo," 2017, http://vision.nju.edu.cn/demo/ .

"MantisDemoUS," 2017, http://aqueti.tv/demos/xiaolinRoad/

G. A. Lloyd and S. J. Sasson, “Electronic still camera,” U.S. patent4,131,919 (December 26, 1978).

D. Prakel, The Visual Dictionary of Photography (AVA Publishing, 2009).

“Sony Product Announcement,” http://www.sony-semicon.co.jp/products_en/news/detail/170301.pdf (2017).

R. Raskar, Computational Photography: Epsilon to Coded Photography (Springer, 2009), pp. 238–253.

B. E. Bayer, “Color imaging array,” U.S. patent3,971,065 (July 20, 1976).

Z. Ivezic, J. Tyson, B. Abel, E. Acosta, R. Allsman, Y. AlSayyad, S. Anderson, J. Andrew, R. Angel, and G. Angeli, “LSST: from science drivers to reference design and anticipated data products,” arXiv:0805.2366 (2008).

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR2, 1–11 (2005).

K. Fife, A. El Gamal, and H.-S. P. Wong, “A 3D multi-aperture image sensor architecture,” in IEEE Custom Integrated Circuits Conference (IEEE, 2006), pp. 281–284.

A. Young, “Stereocamera,” U.S. patent2,090,017 (August 17, 1937).

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” in ACM Transactions on Graphics (Proc SIGGRAPH) (ACM, 2005), Vol. 24, pp. 765–776.

G. Bell, Supercomputers: The Amazing Race (A History of Supercomputing, 1960–2020) (2014).

Supplementary Material (3)

NameDescription
» Supplement 1        
» Visualization 1       Street scene in Shanghai, China shot with Mantis 70 100 megapixel array camera as explored with digital pan, tilt and zoom through real-time control interface.
» Visualization 2       Street scene in Kunshan, China, shot with Mantis 70 100 megapixel array camera as explored with digital pan, tilt and zoom through real-time control interface.

Cited By

OSA participates in Crossref's Cited-By Linking service. Citing articles from OSA journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (6)

Fig. 1.
Fig. 1.

Cost curve under the assumption of a polynomial function form. The blue line represents the cost of a single lens from the parallel lens array, while the black line represents the total cost of the array system for a full FoV coverage of 80° and here this curve shows a minimum cost around FoVs=20°.

Fig. 2.
Fig. 2.

Lens cost estimation in terms of system volume, weight, and number of elements. (a) The cost curves from the nine design examples. (b) The graphs of cost per pixel plots showing the information efficiency. (c) The cost curves by applying the lens array strategy.

Fig. 3.
Fig. 3.

Lens complexity and focusing complexity versus sub-FoV. (a) The focusing complexity skyrockets when the sub-FoV moves to the left side of the axis. (b) The lens complexity grows rapidly as the sub-FoV increases.

Fig. 4.
Fig. 4.

By merging the two plots together, the optimal sub-FoV falls into a region between 3° and 6° in our specific case. The green solid line is an equally weighted addition of the two plots in Fig. 3.

Fig. 5.
Fig. 5.

System structure for (a) the conventional camera and (b) the parallel camera.

Fig. 6.
Fig. 6.

In interactive video broadcasting, the user can explore the video on a window of any scale both on the spatial axis and the temporal axis.

Tables (1)

Tables Icon

Table 1. Characteristics of as-Constructed AWARE Cameras

Equations (5)

Equations on this page are rendered with MathJax. Learn more.

CA(FoV,FoVs)=(FoVFoVs)2C(FoVs),
CA(FoV,FoVS)=cFoV×FoVsγ2.
C(FoVs)=c1FoVs+c2FoVs2+c3FoVs3++cnFoVsn+.
CA(FoV,FoVs)=FoV2(c11FoVs+c2+c3FoVs++cnFoVsn2+).
c1=c3FoVs2+c4FoVs3++cnFoVsn1+.