Abstract

The singlet plenoptic camera, which consists of a single lens, a microlens array (MLA), and an image sensor, offers a compact, lightweight imaging system that is well suited to miniaturization. However, such plenoptic cameras suffer from severe optical aberrations, and their imaging quality is too poor for post-capture processing. Therefore, this paper proposes an optical-aberrations-corrected light field re-projection method to obtain high-quality singlet plenoptic imaging. First, optical aberrations are modeled by Seidel polynomials and incorporated into point spread function (PSF) modeling. The modeled PSF is then used to reconstruct the imaged object information. Finally, the reconstructed object information is re-projected back onto the plenoptic imaging plane to obtain high-quality plenoptic images free of optical aberrations. The PSF model is validated with a self-built singlet plenoptic camera, and the utility of the proposed optical-aberrations-corrected light field re-projection method is verified by numerical simulations and real imaging experiments.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. F. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60–60:12 (2015).
    [Crossref]
  2. R. S. Overbeck, D. Erickson, D. Evangelakos, M. Pharr, and P. Debevec, “A system for acquiring, processing, and rendering panoramic light field stills for virtual reality,” ACM Trans. Graph. 37(6), 1–15 (2018).
    [Crossref]
  3. D. G. Dansereau, G. Schuster, J. Ford, and G. Wetzstein, “A wide-field-of-view monocentric light field camera,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 3757–3766.
  4. Y. M. Jeong, S. K. Moon, J. S. Jeong, G. Li, J. B. Cho, and B. H. Lee, “One shot 360-degree light field capture and reconstruction with depth extraction based on optical flow for light field camera,” Appl. Sci. 8(6), 890 (2018).
    [Crossref]
  5. A. Ö. Yöntem, K. Li, and D. P. Chu, “Reciprocal 360-deg 3D light-field image acquisition and display system [Invited],” J. Opt. Soc. Am. A 36(2), A77–A87 (2019).
    [Crossref]
  6. N. Bedard, T. Shope, A. Hoberman, M. A. Haralam, N. Shaikh, J. Kovačević, N. Balram, and I. Tošić, “Light field otoscope design for 3D in vivo imaging of the middle ear,” Biomed. Opt. Express 8(1), 260–272 (2017).
    [Crossref]
  7. D. W. Palmer, T. Coppin, K. Rana, D. G. Dansereau, M. Suheimat, M. Maynard, D. A. Atchison, J. Roberts, R. Crawford, and A. Jaiprakash, “Glare-free retinal imaging using a portable light field fundus camera,” Biomed. Opt. Express 9(7), 3178–3192 (2018).
    [Crossref]
  8. X. Dallaire and S. Thibault, “Simplified projection technique to correct geometric and chromatic lens aberrations using plenoptic imaging,” Appl. Opt. 56(10), 2946–2951 (2017).
    [Crossref]
  9. S. A. Shroff and K. Berkner, “Image formation analysis and high resolution image reconstruction for plenoptic imaging systems,” Appl. Opt. 52(10), D22–D31 (2013).
    [Crossref]
  10. P. Helin, V. Katkovnik, A. Gotchev, and J. Astola, “Super resolution inverse imaging for plenoptic cameras using wavefield modeling,” in Proceedings of IEEE 3DTV Conference: The True Vision–Capture, Transmission and Display of 3D Video (3DTV-CON, 2014), pp. 1–4.
  11. E. Sahin, V. Katkovnik, and A. Gotchev, “Super-resolution in a defocused plenoptic camera: a wave-optics-based approach,” Opt. Lett. 41(5), 998–1001 (2016).
    [Crossref]
  12. X. Jin, L. Liu, Y. Q. Chen, and Q. H. Dai, “Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0,” Opt. Express 25(9), 9947–9962 (2017).
    [Crossref]
  13. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).
  14. D. G. Voelz, Computational Fourier Optics: A MATLAB Tutorial (SPIE Tutorial Texts Vol. TT89, SPIE Press, 2011).
  15. J. M. Geary, Introduction to Lens Design with Practical ZEMAX Examples (Willmann-Bell, Inc., 2007).
  16. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Reports (CSTR), 2005.
  17. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013).
    [Crossref]
  18. D. C.-L. Fong and M. Saunders, “LSMR: An iterative algorithm for sparse least-squares problems,” SIAM J. Sci. Comput. 33(5), 2950–2971 (2011).
    [Crossref]
  19. J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16(12), 2992–3004 (2007).
    [Crossref]
  20. T. E. Bishop and P. Favaro, “The light field camera: extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012).
    [Crossref]
  21. A. Stefanoiu, J. Page, P. Symvoulidis, G. G. Westmeyer, and T. Lasser, “Artifact-free deconvolution in light field microscopy,” Opt. Express 27(22), 31644–31666 (2019).
    [Crossref]
  22. Y. Q. Chen, X. Jin, and Q. H. Dai, “Distance measurement based on light field geometry and ray tracing,” Opt. Express 25(1), 59–76 (2017).
    [Crossref]
  23. Y. Q. Chen, X. Jin, and Q. H. Dai, “Distance estimation based on light field geometric modeling,” 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Hong Kong, 2017, pp. 43–48.


Figures (14)

Fig. 1.
Fig. 1. The architecture of the proposed method.
Fig. 2.
Fig. 2. Schematic layout of singlet plenoptic cameras, in which the main lens is a single lens element. T is the thickness of the single lens, and the purple lines indicate the principal planes.
Fig. 3.
Fig. 3. Sub-aperture image extraction comparison. Note that the sub-aperture images are up-sampled for display.
Fig. 4.
Fig. 4. The self-built singlet plenoptic camera.
Fig. 5.
Fig. 5. Captured PSFs and simulated PSFs using the purchased MLA.
Fig. 6.
Fig. 6. Captured PSFs and simulated PSFs using the self-designed MLA.
Fig. 7.
Fig. 7. Imaging objects used in the simulations.
Fig. 8.
Fig. 8. Results of reconstruction and re-projection without noise. Note that the contrast of (g) and (j) is enhanced for illustration.
Fig. 9.
Fig. 9. Results of reconstruction and re-projection with noise.
Fig. 10.
Fig. 10. Results of reconstruction and re-projection with different imaging noise levels.
Fig. 11.
Fig. 11. Results of reconstruction and re-projection without a priori depth information. The yellow and orange lines indicate the vertical shift, i.e., parallax, among the sub-aperture images.
Fig. 12.
Fig. 12. Results of imaging and re-projection in real experiments. The yellow lines in (c) and (e) indicate the vertical shift, i.e., parallax, among the sub-aperture images.
Fig. 13.
Fig. 13. Definitions of some parameters in Table 3.
Fig. 14.
Fig. 14. Schematics for magnification factor and blur radius analysis.

Tables (3)

Tables Icon

Table 1. Parameters of the self-built singlet plenoptic camera.

Tables Icon

Table 2. Depth estimation results using the absolute distance measurement method proposed in [22,23].

Tables Icon

Table 3. Relationships between wavefront coefficients and Seidel coefficients.

Equations (10)


$$h(s,t;\xi,\eta)=\frac{\exp[ik(z_1+z_2+z_3)]}{i\lambda^3 z_1 z_2 z_3}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\sum_{m}^{M}\sum_{n}^{N}P_{MLA}(x-md_1,\,y-nd_1)
\times\exp\left\{-\frac{ik}{2f_{MLA}}\left[(x-md_1)^2+(y-nd_1)^2\right]\right\}
\times\exp\left\{\frac{ik}{2z_3}\left[(x-s)^2+(y-t)^2\right]\right\}
\times\left\{\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}P_{main}(u,v)\exp\left(ikW(\hat{x};\hat{u},\hat{v})\right)\exp\left[-\frac{ik}{2F_{main}}\left(u^2+v^2\right)\right]
\times\exp\left\{\frac{ik}{2z_1}\left[(\xi-u)^2+(\eta-v)^2\right]\right\}
\times\exp\left\{\frac{ik}{2z_2}\left[(x-u)^2+(y-v)^2\right]\right\}du\,dv\right\}dx\,dy,$$
$$W(\hat{x};\hat{u},\hat{v})=\sum_{g,l,r}W_{glr}\,\hat{x}^{g}\rho^{l}\cos^{r}\theta\,\Big|_{\rho\cos\theta=\hat{u},\;\rho=\sqrt{\hat{u}^2+\hat{v}^2}}=W_{040}\left(\hat{u}^2+\hat{v}^2\right)^2+W_{131}\,\hat{x}\left(\hat{u}^2+\hat{v}^2\right)\hat{u}+W_{222}\,\hat{x}^2\hat{u}^2+W_{220}\,\hat{x}^2\left(\hat{u}^2+\hat{v}^2\right)+W_{311}\,\hat{x}^3\hat{u},$$
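Equation (2) is a sum of the five primary (fourth-order) Seidel terms. As a minimal sketch of evaluating it on normalized field and pupil coordinates (function and argument names are mine, not from the paper's code):

```python
import numpy as np

def seidel_wavefront(x_hat, u_hat, v_hat,
                     W040=0.0, W131=0.0, W222=0.0, W220=0.0, W311=0.0):
    """Primary Seidel wavefront error W(x_hat; u_hat, v_hat) as in Eq. (2).

    x_hat        : normalized image-height (field) coordinate
    u_hat, v_hat : normalized pupil coordinates
    W040..W311   : spherical, coma, astigmatism, field curvature, distortion
    """
    rho2 = u_hat**2 + v_hat**2          # rho^2 = u_hat^2 + v_hat^2
    return (W040 * rho2**2                # spherical aberration
            + W131 * x_hat * rho2 * u_hat # coma
            + W222 * x_hat**2 * u_hat**2  # astigmatism
            + W220 * x_hat**2 * rho2      # field curvature
            + W311 * x_hat**3 * u_hat)    # distortion

# Example: pure spherical aberration at the pupil edge (rho = 1)
w_edge = seidel_wavefront(0.0, 1.0, 0.0, W040=0.5)  # 0.5 * 1^2 = 0.5
```

On axis (x_hat = 0) every field-dependent term vanishes, which is why a singlet's off-axis PSFs in Figs. 5 and 6 differ markedly from the central one.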
$$I(s,t)=H_{(\xi,\eta)\rightarrow(s,t)}\,O(\xi,\eta)+N(s,t),$$
$$\hat{O}(\xi,\eta)=\operatorname*{argmin}_{O(\xi,\eta)}\left\|I(s,t)-H_{(\xi,\eta)\rightarrow(s,t)}\,O(\xi,\eta)\right\|_2^2+\tau\left\|O(\xi,\eta)\right\|_2^2,$$
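Equation (4) is a Tikhonov-regularized least-squares problem; the paper cites LSMR [18] and TwIST [19] as solvers. A minimal sketch using SciPy's `lsmr`, whose `damp` parameter implements the ℓ2 penalty with damp = sqrt(τ) (the tiny random H and I here are stand-ins, not the paper's PSF matrix):

```python
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(0)
H = rng.standard_normal((40, 20))   # stand-in for the system matrix H
O_true = rng.standard_normal(20)    # stand-in object vector O
I = H @ O_true                      # noise-free plenoptic image, Eq. (3) with N = 0

tau = 1e-2
# lsmr minimizes ||I - H O||^2 + damp^2 ||O||^2, so damp = sqrt(tau)
O_hat = lsmr(H, I, damp=np.sqrt(tau), atol=1e-12, btol=1e-12)[0]

# Cross-check against the closed-form ridge solution (H^T H + tau I)^{-1} H^T I
O_ref = np.linalg.solve(H.T @ H + tau * np.eye(20), H.T @ I)
assert np.allclose(O_hat, O_ref, atol=1e-6)
```

In practice H is far too large to form densely; LSMR only needs products with H and its transpose, which is why an iterative solver is the natural choice here.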
$$\hat{O}(\xi,\eta)=h_{r,z_1}\ast\hat{O}(\xi,\eta),$$
$$r_{obj,z_1}=\frac{\min\left(\left|\gamma_{z_1}d_1-b_{z_1}\right|,\;d_1/2\right)}{S\,d_1},$$
$$\hat{I}(s,t)=\tilde{H}_{(\xi_0,\eta_0)\rightarrow(s,t)}\,\hat{O}(\xi,\eta),$$
$$\gamma_{z_1}=\frac{\left(z_1-\Delta z-z_2\right)z_3}{z_2\left(z_1-\Delta z\right)},$$
$$b_{z_1}=\frac{d_1 z_3}{2}\left|\frac{1}{f_{MLA}}-\frac{1}{z_2\left(z_1-\Delta z\right)}-\frac{1}{z_3}\right|.$$
$$r_{sen,z_1}=\min\left(\left|\gamma_{z_1}d_1-b_{z_1}\right|,\;d_1/2\right),$$
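The sensor-side blur radius of Eq. (10) is a geometric spread clamped to half the microlens pitch d1. As a numeric sketch (the values below are illustrative, not the paper's; γ and b would come from Eqs. (8) and (9)):

```python
def sensor_blur_radius(gamma_z1, b_z1, d1):
    """Sensor-side blur radius r_sen of Eq. (10):
    min(|gamma_z1 * d1 - b_z1|, d1 / 2),
    i.e. the geometric spread clamped to half the microlens pitch d1."""
    return min(abs(gamma_z1 * d1 - b_z1), d1 / 2)

# Illustrative numbers (not from the paper), all in mm, pitch d1 = 0.1 mm:
r_small = sensor_blur_radius(gamma_z1=0.3, b_z1=0.02, d1=0.1)
# |0.3 * 0.1 - 0.02| = 0.01 mm < d1/2 = 0.05 mm, so the spread itself is returned

r_clamped = sensor_blur_radius(gamma_z1=2.0, b_z1=0.0, d1=0.1)
# |2.0 * 0.1 - 0.0| = 0.2 mm > 0.05 mm, so the result is clamped to d1/2
```

The clamp reflects that a single microimage cannot blur beyond its own microlens footprint, which is why the pitch d1 caps the spread.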
