Abstract

We propose a scheme for computationally modulating phase in light field imaging systems. In a camera system based on this scheme, light field (LF) data is obtained by array-based optics and is computationally projected into a single image with arbitrary phase modulation. In a projector system based on the scheme, LF data with arbitrary phase modulation is computationally generated before optical projection, and the phase-modulated image is projected by array-based optics. We describe the system design and the conditions required by the sampling theorem. We experimentally verified the proposed scheme with camera and projector systems, demonstrating a super-resolved camera and projector with extended depth of field that do not require estimating the object’s shape.

© 2013 Optical Society of America
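
To make the camera-side idea above concrete, here is a minimal shift-and-add sketch in Python (NumPy). The geometry, the shift rule, the modulation profile, and all parameter values are illustrative assumptions rather than the paper's implementation, and the deconvolution by the engineered PSF that follows the projection is omitted.

```python
import numpy as np

def project_light_field(views, positions, z_v, f, m=lambda s: 0.0, pitch=1.0):
    """Shift-and-add projection of elemental (LF) images onto one virtual plane,
    with each view shifted according to a phase-modulation profile m(s)."""
    acc = np.zeros_like(views[0], dtype=float)
    for img, s in zip(views, positions):
        theta = np.arctan(s / z_v) + m(s)              # ray angle plus modulation
        shift_px = int(round(f * np.tan(theta) / pitch))
        acc += np.roll(img, shift_px, axis=1)          # lateral shift of this view
    return acc / len(views)

# Toy usage: five synthetic elemental images and a cubic-phase-style modulation.
rng = np.random.default_rng(0)
views = [rng.random((64, 64)) for _ in range(5)]
s_k = np.linspace(-2.0, 2.0, 5)                        # element positions (arbitrary units)
projected = project_light_field(views, s_k, z_v=100.0, f=10.0,
                                pitch=0.05, m=lambda s: 1e-3 * s**3)
```

The projector-side scheme reverses this order: the phase-modulated LF data are generated computationally, and the superposition is carried out optically by the lens array.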

References

  1. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).
  2. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).
  3. M. Levoy and P. Hanrahan, “Light field rendering,” in Proc. ACM SIGGRAPH (1996), pp. 31–42.
  4. A. Isaksen, L. McMillan, and S. J. Gortler, “Dynamically reparameterized light fields,” in Proc. ACM SIGGRAPH (2000), pp. 297–306.
  5. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Report CTSR 2005-02 (2005).
  6. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-dimensional information acquisition using a compound imaging system,” Opt. Rev. 14, 347–350 (2007).
  7. T. E. Bishop and P. Favaro, “The light field camera: extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 972–986 (2012).
  8. Y. Kitamura, R. Shogenji, K. Yamada, S. Miyatake, M. Miyamoto, T. Morimoto, Y. Masaki, N. Kondou, D. Miyazaki, J. Tanida, and Y. Ichioka, “Reconstruction of a high-resolution image on a compound-eye image-capturing system,” Appl. Opt. 43, 1719–1727 (2004).
  9. T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in Proc. IEEE International Conference on Computational Photography (ICCP) (2009), pp. 1–9.
  10. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20, 21–36 (2003).
  11. S. A. Shroff and K. Berkner, “Image formation analysis and high resolution image reconstruction for plenoptic imaging systems,” Appl. Opt. 52, D22–D31 (2013).
  12. R. Horisaki, K. Kagawa, Y. Nakao, T. Toyoda, Y. Masaki, and J. Tanida, “Irregular lens arrangement design to improve imaging performance of compound-eye imaging systems,” Appl. Phys. Express 3, 022501 (2010).
  13. R. Horisaki and J. Tanida, “Full-resolution light-field single-shot acquisition with spatial encoding,” in Imaging and Applied Optics, OSA Technical Digest (CD) (Optical Society of America, 2011), paper CTuB5.
  14. Z. Xu, J. Ke, and E. Y. Lam, “High-resolution lightfield photography using two masks,” Opt. Express 20, 10971–10983 (2012).
  15. K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph. 32, 1–11 (2013).
  16. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995).
  17. P. Mouroulis, “Depth of field extension with spherical optics,” Opt. Express 16, 12995–13004 (2008).
  18. T. Nakamura, R. Horisaki, and J. Tanida, “Computational superposition compound eye imaging for extended depth-of-field and field-of-view,” Opt. Express 20, 27482–27495 (2012).
  19. O. Cossairt, C. Zhou, and S. K. Nayar, “Diffusion coded photography for extended depth of field,” ACM Trans. Graph. 29, 1–10 (2010).
  20. A. P. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 9, 523–531 (1987).
  21. G. E. Johnson, E. R. Dowski, and W. T. Cathey, “Passive ranging through wave-front coding: information and application,” Appl. Opt. 39, 1700–1710 (2000).
  22. A. Greengard, Y. Schechner, and R. Piestun, “Depth from diffracted rotation,” Opt. Lett. 31, 181–183 (2006).
  23. C. Zhou, O. Cossairt, and S. K. Nayar, “Depth from diffusion,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) (2010), pp. 1–8.
  24. A. Ashok and M. A. Neifeld, “Pseudorandom phase masks for superresolution imaging from subpixel shifting,” Appl. Opt. 46, 2256–2268 (2007).
  25. A. Ashok and M. A. Neifeld, “Information-based analysis of simple incoherent imaging systems,” Opt. Express 11, 2153–2162 (2003).
  26. J. Chai, X. Tong, S. Chan, and H. Shum, “Plenoptic sampling,” in Proc. ACM SIGGRAPH (2000), pp. 307–318.
  27. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).
  28. S. S. Sherif, W. T. Cathey, and E. R. Dowski, “Phase plate to extend the depth of field of incoherent hybrid imaging systems,” Appl. Opt. 43, 2709–2721 (2004).
  29. W. Zhang, Z. Ye, T. Zhao, Y. Chen, and F. Yu, “Point spread function characteristics analysis of the wavefront coding system,” Opt. Express 15, 1543–1552 (2007).
  30. Y. Takahashi and S. Komatsu, “Optimized free-form phase mask for extension of depth of field in wavefront-coded imaging,” Opt. Lett. 33, 1515–1517 (2008).
  31. T. Nakamura, R. Horisaki, and J. Tanida, “Computational superposition projector for extended depth of field and field of view,” Opt. Lett. 38, 1560–1562 (2013).
  32. M. Sieler, P. Schreiber, P. Dannberg, A. Bräuer, and A. Tünnermann, “Ultraslim fixed pattern projectors with inherent homogenization of illumination,” Appl. Opt. 51, 64–74 (2012).
  33. M. Grosse, G. Wetzstein, A. Grundhöfer, and O. Bimber, “Coded aperture projection,” ACM Trans. Graph. 29, 1–12 (2010).
  34. R. Horisaki and J. Tanida, “Compact compound-eye projector using superresolved projection,” Opt. Lett. 36, 121–123 (2011).
  35. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. 62, 55–59 (1972).
  36. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974).

Figures (20)

Fig. 1. Definition of light field (LF).

Fig. 2. Schematic diagram of the LF imaging system.

Fig. 3. Schematic diagram of phase-modulation imaging.

Fig. 4. Schematic diagram of the phase-modulated LF camera.

Fig. 5. Definitions of the system parameters.

Fig. 6. Designs for implementing phase modulation in the virtual optics: phase modulation by (a) using a phase plate and (b) tilting the optical axes of the virtual elemental optics to achieve modulation equivalent to that of a phase plate.

Fig. 7. Geometrical relation between pixels on the sensor and pixels on the virtual image plane in virtual space.

Fig. 8. Geometrical representation of the disparity.

Fig. 9. Sampling patterns of LF data and PSFs in the systems (a) without and (b)–(d) with phase modulation. The modulations were designed by emulating (b) a cubic phase mask, (c) spherical optics, and (d) a radially symmetric kinoform diffuser.

Fig. 10. PSFs in CPM-based wavefront coding obtained by (a) analytical derivation and (b)–(e) numerical simulations. In the simulations, the pitch of the elemental optics was chosen to make the mean disparity (b) half the pixel pitch, (c) equal to the pixel pitch, (d) double the pixel pitch, and (e) five times the pixel pitch at $z_{v_{\Psi=0}}$.

Fig. 11. Simulations with a two-dimensional image: (a) computationally projected images and (b) final images obtained with the conventional and proposed LF cameras with the CPM while changing the element pitch and the object distance. Red and blue rectangles indicate over-sampling and under-sampling conditions, respectively.

Fig. 12. Differences between the deconvolved images with $\bar{d}/\delta_u = 0.5$ and those with the other ratios at $z_{v_{\Psi=0}}$ and $z_{v_{\Psi=30}}$, and the corresponding PSNRs.

Fig. 13. Schematic diagram of (a) the EDOF camera and (b) the EDOF projector based on phase-modulated LF imaging.

Fig. 14. Setup used for experimental verification of the EDOF camera.

Fig. 15. Results obtained with the EDOF camera based on phase-modulated LF imaging: (a) a single captured image, the computationally projected images (b) without and (c) with computational phase modulation, and (d) the deconvolved image of (c).

Fig. 16. Results obtained with the super-resolved EDOF camera based on phase-modulated LF imaging: (a) a single low-resolution captured image, the computationally projected images (b) without and (c) with computational phase modulation, and (d) the deconvolved image of (c).

Fig. 17. Setup used for experimental verification of the EDOF projector.

Fig. 18. (a) An input Lena image and (b) its deconvolved image.

Fig. 19. Results obtained with the EDOF projector based on phase-modulated LF imaging: optically projected images of the Lena image in Fig. 18(a) with (a) the conventional single projector and the LF projector (b) without and (c) with computational phase modulation, and (d) the optically projected image of the deconvolved Lena image in Fig. 18(b) with the LF projector with computational phase modulation.

Fig. 20. Results obtained with the super-resolved EDOF projector based on phase-modulated LF imaging: optically projected images of the Lena image in Fig. 18(a) with (a) the conventional single projector and the LF projector (b) without and (c) with computational phase modulation, and (d) the optically projected image of the deconvolved Lena image in Fig. 18(b) with the LF projector with computational phase modulation.

Tables (2)

Table 1. System parameters used in the simulations.

Table 2. Pitch of the optics and the achieved sampling pitch.

Equations (26)

$m(s) = \phi_{\mathrm{em}} - \phi_{\mathrm{in}},$
$\phi_{\mathrm{in}} = \arctan\left( \frac{\partial g(s)}{\partial s} \right),$
$\phi_{\mathrm{em}} = \arcsin\left( n \sin(\phi_{\mathrm{in}}) \right).$
$\delta_u = \frac{\Delta_u}{n_{\mathrm{sr}}},$
$\max(\Delta_a, \delta_u) \le \frac{\Delta_o}{2}.$
$\Delta_u = n_{\mathrm{sr}}\,\delta_u \le \frac{n_{\mathrm{sr}}\,\Delta_o}{2}.$
$u_s\left( s^{(k)}, z_v, m(s) \right) = f \tan\left( \theta_{\mathrm{ray}} + \theta_{\mathrm{mod}} \right),$
$\theta_{\mathrm{ray}} = \arctan\left( \frac{s^{(k)}}{z_v} \right),$
$\theta_{\mathrm{mod}} = m\left( s^{(k)} \right).$
$d\left( s^{(k)}, \Delta_s, z_o, z_v, m(s) \right) = \left| \omega_o - \omega_v \right|,$
$\omega_o = u_s\left( s^{(k)}, z_o, 0 \right) - u_s\left( s^{(k-1)}, z_o, 0 \right),$
$\omega_v = u_s\left( s^{(k)}, z_v, m(s) \right) - u_s\left( s^{(k-1)}, z_v, m(s) \right),$
$\Delta_s = s^{(k)} - s^{(k-1)}.$
$\bar{d}\left( \Delta_s, z_o, z_v, m(s) \right) = \frac{ \sum_{k=2}^{N} d\left( s^{(k)}, \Delta_s, z_o, z_v, m(s) \right) }{ N - 1 }.$
$\bar{d}\left( \Delta_s, z_o, z_v, m(s) \right) \le \frac{\Delta_u}{n_{\mathrm{sr}}} = \delta_u.$
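
As a numerical illustration of the projection, disparity, and mean-disparity relations above, the following Python sketch evaluates $u_s$, the disparities $d$, and the sampling condition $\bar{d} \le \delta_u$ for an emulated cubic phase plate. Every parameter value below (aperture count, pitches, distances, cubic coefficient, refractive index) is an illustrative assumption, not a value from the paper.

```python
import numpy as np

# Illustrative check of the mean-disparity sampling condition above.
# N elemental apertures at pitch delta_s emulate a cubic phase plate
# g(s) = alpha*s**3 of refractive index n_ref; all values are assumptions.
N, delta_s, f = 9, 1.0e-3, 5.0e-3        # aperture count, pitch [m], focal length [m]
z_o, z_v = 0.5, 0.5                      # object and virtual image distances [m]
delta_u, n_sr = 5.0e-6, 2                # pixel pitch [m], super-resolution factor
alpha, n_ref = 50.0, 1.5                 # cubic coefficient [1/m^2], refractive index

s = (np.arange(N) - (N - 1) / 2) * delta_s          # aperture positions s^(k)

def m(s):
    """Modulation angle of the emulated phase plate (Snell refraction at g(s))."""
    phi_in = np.arctan(3 * alpha * s**2)            # arctan(dg/ds) for g = alpha*s^3
    phi_em = np.arcsin(n_ref * np.sin(phi_in))
    return phi_em - phi_in

def u_s(s, z, mod):
    """Position on the virtual image plane of a chief ray from aperture s."""
    return f * np.tan(np.arctan(s / z) + mod)

omega_o = np.diff(u_s(s, z_o, 0.0))                 # unmodulated spacings
omega_v = np.diff(u_s(s, z_v, m(s)))                # phase-modulated spacings
d_bar = np.abs(omega_o - omega_v).mean()            # mean disparity

print(f"mean disparity = {d_bar*1e6:.3f} um, delta_u/n_sr = {delta_u/n_sr*1e6:.3f} um")
print("sampling condition satisfied:", d_bar <= delta_u / n_sr)
```

If the printed condition fails, the element pitch $\Delta_s$ or the modulation profile would have to be adjusted so that the modulated projection samples the virtual image plane at or below the target pitch $\delta_u$.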
$\Psi = \frac{\pi A^2}{4 \lambda} \left( \frac{1}{f_{\mathrm{LF}}} - \frac{1}{z_o} - \frac{1}{z_{v_\Psi}} \right).$
$A = (N - 1)\,\Delta_s,$
$f_{\mathrm{LF}} = \frac{1}{ \frac{1}{z_o} + \frac{1}{z_{v_{\Psi=0}}} }.$
$z_{v_{\Psi=0}} = z_o.$
$g(s) = \alpha s^3,$
$g(s) = \beta \sqrt{ 1 - \left( \frac{s}{\gamma} \right)^2 },$
$\frac{\mathrm{d}}{\mathrm{d}s}\, g(s) \sim P,$
$h(u, W_{20}) = \frac{1}{2} \left| \int_{-1}^{+1} \exp\left( j \alpha s_p^3 + j k W_{20} s_p^2 - j 2 \pi u s_p \right) \mathrm{d}s_p \right|^2,$
$W_{20} = \frac{\lambda \Psi}{2 \pi},$
$s_p = \frac{s}{A}.$
$\hat{i} = \arg\max_i \left[ \, i \mid \bar{o} = t \, \right] \quad \text{subject to} \quad 0 \le \hat{i}(p) \le c, \ \forall p.$
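
For concreteness, the wavefront-coded PSF $h(u, W_{20})$ above can be evaluated by direct quadrature. The sketch below (Python/NumPy) does so for a few defocus parameters $\Psi$ and compares each PSF with the in-focus one; the CPM strength $\alpha$, the wavelength, and the sampling grids are assumed values, not those used in the paper.

```python
import numpy as np

# Direct quadrature of the wavefront-coded PSF h(u, W20) defined above,
# with W20 = lambda*Psi/(2*pi). alpha, the wavelength, the defocus values
# Psi, and the sampling grids are illustrative assumptions.
alpha = 20 * np.pi                        # CPM strength (assumed)
wavelength = 550e-9                       # [m]
k = 2 * np.pi / wavelength

s_p = np.linspace(-1.0, 1.0, 2001)        # normalized pupil coordinate s_p = s/A
ds_p = s_p[1] - s_p[0]
u = np.linspace(-10.0, 10.0, 801)         # normalized image coordinate

def psf(Psi):
    """Return h(u, W20) for defocus parameter Psi."""
    W20 = wavelength * Psi / (2 * np.pi)  # chosen so that k*W20 = Psi
    phase = (alpha * s_p[None, :]**3
             + k * W20 * s_p[None, :]**2
             - 2 * np.pi * u[:, None] * s_p[None, :])
    field = np.sum(np.exp(1j * phase), axis=1) * ds_p   # Riemann-sum quadrature
    return 0.5 * np.abs(field)**2

# Compare defocused PSFs with the in-focus PSF (depth-invariance check).
h0 = psf(0.0)
for Psi in (0.0, 15.0, 30.0):
    h = psf(Psi)
    corr = np.dot(h, h0) / (np.linalg.norm(h) * np.linalg.norm(h0))
    print(f"Psi = {Psi:5.1f}: correlation with the in-focus PSF = {corr:.3f}")
```

A sufficiently strong CPM keeps these PSFs nearly identical over a wide range of $\Psi$, which is the approximate depth invariance that the subsequent deconvolution relies on.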
