Abstract

Current computational methods for light field photography model the ray-tracing geometry inside the plenoptic camera. This representation of the problem, and some common approximations, can lead to errors in the estimation of object sizes and positions. We propose a representation that leads to the correct reconstruction of object sizes and distances to the camera, by showing that light field images can be interpreted as limited-angle cone-beam tomography acquisitions. We then quantitatively analyze its impact on image refocusing, depth estimation, and volumetric reconstruction, comparing it against other possible representations. Finally, we validate these results with numerical and real-world examples.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. R. Ng, “Digital light field photography,” Ph.D. thesis, Stanford University (2006).
  2. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015).
  3. M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2013), pp. 673–680.
  4. C. Hahne, A. Aggoun, V. Velisavljevic, S. Fiebig, and M. Pesch, “Refocusing distance of a standard plenoptic camera,” Opt. Express 24(19), 21521–21540 (2016).
  5. R. Ng, M. Levoy, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Tech. Rep. CSTR 2005-02 (Stanford University, 2005).
  6. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2009), pp. 1–8.
  7. S. Zhu, A. Lai, K. Eaton, P. Jin, and L. Gao, “On the fundamental comparison between unfocused and focused light field cameras,” Appl. Opt. 57(1), A1–A11 (2018).
  8. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006).
  9. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013).
  10. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE, 1988).
  11. T. G. Georgiev, A. Lumsdaine, J. Gille, and A. Veeraraghavan, “The Radon image as plenoptic function,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2014), pp. 1922–1926.
  12. F. A. Jenkins and H. E. White, Fundamentals of Optics, 3rd ed. (McGraw-Hill, 1957).
  13. T. M. Buzug, Computed Tomography (Springer-Verlag, 2008).
  14. W. J. Palenstijn, K. J. Batenburg, and J. Sijbers, “Performance improvements for iterative electron tomography reconstruction using graphics processing units (GPUs),” J. Struct. Biol. 176(2), 250–253 (2011).
  15. L. G. Shapiro and G. C. Stockman, Computer Vision (Pearson, 2001).
  16. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2(1), 183–202 (2009).
  17. A. Chambolle and T. Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging,” J. Math. Imaging Vis. 40(1), 120–145 (2010).
  18. E. Y. Sidky, J. H. Jørgensen, and X. Pan, “Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm,” Phys. Med. Biol. 57(10), 3065–3091 (2012).
  19. E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59(8), 1207–1223 (2006).
  20. A. Lumsdaine, T. G. Georgiev, and G. Chunev, “Spatial analysis of discrete plenoptic sampling,” Proc. SPIE 8299, 829909 (2012).
  21. T. G. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” J. Electron. Imaging 19(2), 021106 (2010).

Figures (17)

Fig. 1 Comparison of the two setups: (a) compound plenoptic camera using an MLA, (b) cone-beam tomography acquisition.
Fig. 2 Ray-tracing geometry for a compound plenoptic camera using a focused configuration.
Fig. 3 Comparison of the (s, u) phase space in: (a) the unfocused light field camera, (b) the focused light field camera. In each plot, the blue dots represent the sampling corresponding to each sensor pixel, the red line represents one micro image, and the green line represents one sub-aperture image.
Fig. 4 Ray-tracing geometry for one sub-aperture image, superimposed with the corresponding tomographic geometry.
Fig. 5 Ray parametrization proposed in [1].
Fig. 6 Plot of the three-dimensional Fourier sampling associated with the Tarot Cards light field image from the Stanford light field archive.
Fig. 7 Phantom used to simulate light field data for the logos scene: a VOXEL logo at the acquisition focal distance z = 100 mm, and two CWI logos at distances of 110 mm and 90 mm.
Fig. 8 Phantom of the wireframe scene: a cube centered at the acquisition focal distance of 100 mm, which spans multiple refocusing planes.
Fig. 9 Simulated light fields for both the synthetic logos and wireframe cases. Two sub-aperture images are extracted in each case for clarity.
Fig. 10 Experimental setup and acquired light field for the real data experiment: (a) experimental setup, with the Thorlabs 1951 USAF Resolution Test target in the bottom left corner, the Thorlabs AC254-200-A-ML main lens in the middle, and the Imagine Optic HASO3 128GE2 in the top right corner, (b) the acquired light field for the distance z′0 = z0 = 250 mm.
Fig. 11 Refocusing sizes and distances of the different geometric models: (a) Flattened all-in-focus phantom from Fig. 7, where the distances different from 90 mm have been made semi-transparent, (b) Refocusing in the object space with a cone-beam geometry, at a distance of z′0 = 90 mm, (c) Refocusing in the object space with a parallel-beam geometry, at a distance of z′0 = 88.89 mm, (d) Refocusing in the image space with a cone-beam geometry, at a distance z′1 = 24.37 mm, corresponding to z′0 = 111.5 mm.
Fig. 12 Effect of the different parametrizations on the scaling of estimated size and depth, as a function of the object position along the optical axis, for the logos case. More precisely: (a) effect on the α parameters, (b) effect on the estimated distance along the z-axis, (c) effect on the estimated size along the s, t-axes. The chosen parametrizations were the three outlined in section 2.
Fig. 13 Depth maps for three different aperture sizes, and depth-estimation cue response functions for three points in the depth map: (a) aperture size d1 = 2 mm, (b) aperture size d1 = 8 mm, (c) aperture size d1 = 32 mm. The blue and orange curves in the function response plots correspond to the defocus and correspondence function values for each of the sampled distances along the optical axis. The vertical green dotted line in the function response plots indicates the estimated depth for the given point.
Fig. 14 Comparison of the reconstruction sizes for the wireframe test case. The figures are projections on the ST, SZ, and TZ planes from top to bottom, and the columns represent: (a) the phantom, (b) reconstruction in the image space with cone-beam geometry, (c) reconstruction in the object space with cone-beam geometry, (d) reconstruction in the object space with parallel-beam geometry.
Fig. 15 Effect of the different parametrizations on the scaling of estimated size and depth, as a function of the object position along the optical axis, for the real-world example discussed in section 3.2. More precisely: (a) effect on the α parameters, (b) effect on the estimated distance along the z-axis, (c) effect on the estimated size along the s, t-axes. The chosen parametrizations were the three outlined in section 2. The red dotted line represents the expected value for the plotted quantity.
Fig. 16 Ray-tracing geometry for one sub-aperture image of both plenoptic acquisition setups.
Fig. 17 Comparison of the (s, u, σ) resolutions in: (a) the unfocused light field camera, (b) the focused light field camera. In the FLF case, we speak of a pseudo-resolution, due to the sheared sampling of Fig. 3(b).

Equations (62)

$s_{\mathrm{ULF}} = s_{\mathrm{MLA}},$
$t_{\mathrm{ULF}} = t_{\mathrm{MLA}},$
$u_{\mathrm{ULF}} = \frac{z_1}{f_2}\,\sigma,$
$v_{\mathrm{ULF}} = \frac{z_1}{f_2}\,\tau,$
$s_{\mathrm{FLF}} = s_{\mathrm{MLA}} + \frac{a}{b}\,\sigma,$
$t_{\mathrm{FLF}} = t_{\mathrm{MLA}} + \frac{a}{b}\,\tau,$
$u_{\mathrm{FLF}} = \frac{z_1 + a}{b}\,\sigma,$
$v_{\mathrm{FLF}} = \frac{z_1 + a}{b}\,\tau,$
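
As an aid to reading the eight mappings above, the following minimal Python sketch evaluates them for a single sensor sample. It assumes the reconstructed forms as written (σ, τ are the pixel offsets under a microlens centered at (s_MLA, t_MLA); f_2 is the microlens focal length, a, b and z_1 the MLA distances), and every numeric value is illustrative, not taken from the paper.

    def ulf_coords(s_mla, t_mla, sigma, tau, z1, f2):
        # Unfocused light field camera: the microlens position gives (s, t),
        # the pixel offset under the microlens gives the aperture coordinate (u, v).
        s, t = s_mla, t_mla
        u = (z1 / f2) * sigma
        v = (z1 / f2) * tau
        return s, t, u, v

    def flf_coords(s_mla, t_mla, sigma, tau, z1, a, b):
        # Focused light field camera: both (s, t) and (u, v) mix the microlens
        # position with the pixel offset, as in the FLF relations above.
        s = s_mla + (a / b) * sigma
        t = t_mla + (a / b) * tau
        u = ((z1 + a) / b) * sigma
        v = ((z1 + a) / b) * tau
        return s, t, u, v

    # Example: one pixel at offset (sigma, tau) behind the microlens at (s_MLA, t_MLA).
    print(ulf_coords(0.5, -0.3, 0.01, 0.02, z1=50.0, f2=2.0))
    print(flf_coords(0.5, -0.3, 0.01, 0.02, z1=50.0, a=5.0, b=2.5))
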
$s' = u + \alpha(s - u) = \alpha s + (1 - \alpha)\,u,$
$t' = v + \alpha(t - v) = \alpha t + (1 - \alpha)\,v,$
$\alpha_p = 2 - \frac{1}{\alpha_c},$
$\alpha_c = \frac{1}{2 - \alpha_p},$
$M s_o = s_i,$
$M t_o = t_i,$
$M s'_o = \alpha M s_o + (1 - \alpha)\,u,$
$M t'_o = \alpha M t_o + (1 - \alpha)\,v,$
$\alpha_{c,o} = \frac{\alpha_{c,i}}{(1 - |M|)\,\alpha_{c,i} + |M|},$
$\alpha_{c,i} = \frac{|M|\,\alpha_{c,o}}{1 - \alpha_{c,o}\,(1 - |M|)}.$
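
A quick numeric sanity check that the two conversions above are mutual inverses. This is only a sketch of the reconstructed formulas; the magnification M and the sampled α values are arbitrary illustration numbers.

    def alpha_o_from_i(alpha_i, M):
        # alpha_{c,o} = alpha_{c,i} / ((1 - |M|) * alpha_{c,i} + |M|)
        return alpha_i / ((1.0 - abs(M)) * alpha_i + abs(M))

    def alpha_i_from_o(alpha_o, M):
        # alpha_{c,i} = |M| * alpha_{c,o} / (1 - alpha_{c,o} * (1 - |M|))
        return abs(M) * alpha_o / (1.0 - alpha_o * (1.0 - abs(M)))

    M = -0.25  # example magnification (illustrative value)
    for alpha_i in (0.8, 1.0, 1.2):
        alpha_o = alpha_o_from_i(alpha_i, M)
        # The two maps should invert each other exactly; alpha = 1 maps to alpha = 1.
        assert abs(alpha_i_from_o(alpha_o, M) - alpha_i) < 1e-12
        print(alpha_i, "->", alpha_o)
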
$E_{z_0}(s,t) = \frac{1}{z_0^2} \iint L_{z_0}(s,t,u,v)\,du\,dv,$
$E_{z'_0}(s,t) = \frac{1}{z'^{\,2}_0} \iint L_{z_0}\!\left(\frac{s}{\alpha} + \Big(1 - \frac{1}{\alpha}\Big)\frac{u}{M},\ \frac{t}{\alpha} + \Big(1 - \frac{1}{\alpha}\Big)\frac{v}{M},\ u,\ v\right) du\,dv.$
$E_{z'_0}(s,t) = \int_{-U,-V}^{+U,+V} P_{u,v}(s',t')\ \delta\!\left(\frac{s}{\alpha} + \Big(1 - \frac{1}{\alpha}\Big)\frac{u}{M} - s',\ \frac{t}{\alpha} + \Big(1 - \frac{1}{\alpha}\Big)\frac{v}{M} - t'\right) du\,dv,$
$E(s,t,z) = \int_{-U,-V}^{+U,+V} P_{u,v}(s',t')\ \delta\!\left(\frac{z_0}{z}s + \Big(1 - \frac{z_0}{z}\Big)\frac{u}{M} - s',\ \frac{z_0}{z}t + \Big(1 - \frac{z_0}{z}\Big)\frac{v}{M} - t'\right) du\,dv,$
$L(s',t',u,v) = \int_{-Z}^{+Z} E(s,t,z)\ \delta\!\left(\frac{z}{z_0}s' + \Big(1 - \frac{z}{z_0}\Big)\frac{u}{M} - s,\ \frac{z}{z_0}t' + \Big(1 - \frac{z}{z_0}\Big)\frac{v}{M} - t\right) dz,$
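
The last relation is the tomographic forward model: each light field sample L(s, t, u, v) is a line integral of the volume E along sheared (s, t) coordinates. A small discrete sketch of such a projector follows; it assumes a volume stored as vol[z, s, t] on a few planes around z_0 with linear interpolation in (s, t), and it is an illustration only, not the authors' implementation.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def forward_project(vol, z_planes, z0, s_px, u, v, M=1.0):
        # One sub-aperture view: L(s, t, u, v) = sum_z E(s', t', z) with
        # s' = (z/z0) s + (1 - z/z0) u/M, and likewise for t'.
        n_z, n_s, n_t = vol.shape
        s = np.arange(n_s) * s_px
        t = np.arange(n_t) * s_px
        S, T = np.meshgrid(s, t, indexing='ij')
        proj = np.zeros((n_s, n_t))
        for iz, z in enumerate(z_planes):
            a = z / z0
            Sp = (a * S + (1.0 - a) * u / M) / s_px   # sheared sample positions, in pixels
            Tp = (a * T + (1.0 - a) * v / M) / s_px
            proj += map_coordinates(vol[iz], [Sp, Tp], order=1, mode='constant')
        return proj

    # Tiny example: a single bright voxel in the central plane contributes exactly
    # once to every sub-aperture view (possibly at a shifted position).
    vol = np.zeros((3, 16, 16))
    vol[1, 8, 8] = 1.0
    z_planes = np.array([90.0, 100.0, 110.0])
    view = forward_project(vol, z_planes, z0=100.0, s_px=1.0, u=2.0, v=0.0)
    print(view.sum())  # ~1.0
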
$w_s = \frac{2U}{|M|\,\delta_s}\,\frac{\Delta z}{z_0},$
$w_t = \frac{2V}{|M|\,\delta_t}\,\frac{\Delta z}{z_0},$
$w_{z,s} \simeq \frac{\delta^{s}_{\mathrm{obj}}}{U\,\delta_z}\,\Delta z,$
$w_{z,t} \simeq \frac{\delta^{t}_{\mathrm{obj}}}{V\,\delta_z}\,\Delta z.$
$\mathrm{DoF}_s = \frac{2\,|M|\,\delta_s\,z_0}{U},$
$\mathrm{DoF}_t = \frac{2\,|M|\,\delta_t\,z_0}{V},$
$z'_0 = z_0\,\alpha_{c,o} = \frac{z_0\,\alpha_{c,i}}{(1 - |M|)\,\alpha_{c,i} + |M|},$
$z'_0 = z_0\,\alpha_{c,o} = \frac{z_0}{2 - \alpha_{p,o}},$
$s'_0 = \frac{s_{c,i}\,\alpha_{c,o}}{\alpha_{c,i}},$
$s'_0 = s_{p,o}\,\alpha_{c,o},$
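
A worked numeric example of the refocusing-distance relations above, as a sketch assuming the reconstructed forms; the values of z_0 and α are illustrative only.

    z0 = 100.0                        # acquisition focal distance [mm] (illustrative)
    alpha_co = 0.9                    # object-space refocusing parameter (illustrative)
    alpha_po = 2.0 - 1.0 / alpha_co   # matching alpha_p, via alpha_p = 2 - 1/alpha_c

    z_cone = z0 * alpha_co              # z0' = z0 * alpha_{c,o}
    z_parallel = z0 / (2.0 - alpha_po)  # z0' = z0 / (2 - alpha_{p,o})
    assert abs(z_cone - z_parallel) < 1e-9   # the two parametrizations agree
    print(f"refocusing distance z0' = {z_cone:.2f} mm")
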
$r_P(\mathbf{x}) = r_P(q,p) = \delta_0(q)\,F(p),$
$A_{\mathrm{lensless}} = T_{z_1} = \begin{pmatrix} 1 & z_1 \\ 0 & 1 \end{pmatrix},$
$A_{\mathrm{lensless}}^{-1} = T_{z_1}^{-1} = T_{-z_1} = \begin{pmatrix} 1 & -z_1 \\ 0 & 1 \end{pmatrix},$
$A_{\mathrm{compound}} = L_{f_1} T_{z_1} = \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_1} & 1 \end{pmatrix}\begin{pmatrix} 1 & z_1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & z_1 \\ -\frac{1}{f_1} & 1 - \frac{z_1}{f_1} \end{pmatrix},$
$A_{\mathrm{compound}}^{-1} = T_{z_1}^{-1} L_{f_1}^{-1} = \begin{pmatrix} 1 & -z_1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ \frac{1}{f_1} & 1 \end{pmatrix} = \begin{pmatrix} 1 - \frac{z_1}{f_1} & -z_1 \\ \frac{1}{f_1} & 1 \end{pmatrix},$
$r_{\mathrm{MLA}}(\mathbf{x}) = r_P(A_{\mathrm{compound}}^{-1}\,\mathbf{x}) = \delta_0\!\left(q - z_1\Big(\frac{q}{f_1} + p\Big)\right) F\!\left(\frac{q}{f_1} + p\right),$
$r_D(\mathbf{x}) = r_P(A_{\mathrm{lensless}}^{-1}\,\mathbf{x}) = \delta_0(q - z_1 p)\,F(p),$
$I_D(q) = \int_{-\infty}^{+\infty} r_D(q,p)\,dp = \int_{-\infty}^{+\infty} \delta_0(q - z_1 p)\,F(p)\,dp = \frac{1}{z_1} F\!\left(\frac{q}{z_1}\right),$
$I_{\mathrm{MLA}}(q) = \int_{-\infty}^{+\infty} r_{\mathrm{MLA}}(q,p)\,dp = \int_{-\infty}^{+\infty} \delta_0\!\left(q - z_1\Big(\frac{q}{f_1} + p\Big)\right) F\!\left(\frac{q}{f_1} + p\right) dp,$
$p' = \frac{q}{f_1} + p, \qquad dp' = dp,$
$I_{\mathrm{MLA}}(q) = \int_{-\infty}^{+\infty} r_D(q,p')\,dp' = \int_{-\infty}^{+\infty} \delta_0(q - z_1 p')\,F(p')\,dp' = \frac{1}{z_1} F\!\left(\frac{q}{z_1}\right),$
$A_{\mathrm{tomography}} = T_{-z_T} = \begin{pmatrix} 1 & -z_T \\ 0 & 1 \end{pmatrix},$
$A_{\mathrm{tomography}}^{-1} = T_{-z_T}^{-1} = T_{z_T} = \begin{pmatrix} 1 & z_T \\ 0 & 1 \end{pmatrix},$
$r_T(\mathbf{x}) = r_P(A_{\mathrm{tomography}}^{-1}\,\mathbf{x}) = \delta_0(q + z_T p)\,F(p),$
$I_T(q) = \int_{-\infty}^{+\infty} r_T(q,p)\,dp = \int_{-\infty}^{+\infty} \delta_0(q + z_T p)\,F(p)\,dp = \frac{1}{z_T} F\!\left(-\frac{q}{z_T}\right),$
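
The ray-transfer algebra above is straightforward to verify numerically. The sketch below, with illustrative values for z_1, f_1 and z_T, checks the closed-form inverses and the fact that the δ argument of r_MLA vanishes on rays emitted by a point source at the origin; it is not code from the paper.

    import numpy as np

    z1, f1, zT = 30.0, 25.0, 30.0   # illustrative distances and focal length [mm]

    def T(d):
        # Free-space propagation over a distance d.
        return np.array([[1.0, d], [0.0, 1.0]])

    def L(f):
        # Thin lens of focal length f.
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    A_compound = L(f1) @ T(z1)
    A_tomo = T(-zT)

    # The closed-form inverses quoted above.
    assert np.allclose(np.linalg.inv(A_compound), T(-z1) @ np.linalg.inv(L(f1)))
    assert np.allclose(np.linalg.inv(A_tomo), T(zT))

    # A point source at the origin emits rays (q = 0, slope p). After the compound
    # system, the delta argument of r_MLA, q - z1*(q/f1 + p), vanishes for those rays.
    p = 0.017
    q_out, p_out = A_compound @ np.array([0.0, p])
    assert abs(q_out - z1 * (q_out / f1 + p_out)) < 1e-12
    print("ray-transfer identities verified")
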
$E^{(z)}(s,t) = \frac{1}{z^2} \sum_{u}\sum_{v} P_{u,v}\!\left(\frac{s}{\alpha} + \Big(1 - \frac{1}{\alpha}\Big)u,\ \frac{t}{\alpha} + \Big(1 - \frac{1}{\alpha}\Big)v\right),$
$\alpha_c = \frac{u_1 - u_2}{(u_1 - u_2) - (s_{u_1} - s_{u_2})},$
$\alpha_p = \frac{(u_1 - u_2) + (s_{u_1} - s_{u_2})}{u_1 - u_2} = 1 + \frac{s_{u_1} - s_{u_2}}{u_1 - u_2}.$
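
The first equation of this group is the familiar discrete shift-and-add refocusing sum over sub-aperture images, and the last two show how α can be estimated from the disparity between two views. A minimal sketch of both follows; it assumes a light field stored as a NumPy array lf[u, v, s, t] with aperture coordinates in pixel units, and the function names and test data are illustrative rather than the authors' implementation.

    import numpy as np
    from scipy.ndimage import shift as subpixel_shift

    def refocus(lf, u_coords, v_coords, alpha, z=1.0):
        # E(s, t) = 1/z^2 * sum_{u,v} P_{u,v}(s/alpha + (1 - 1/alpha) u, ...).
        nu, nv, ns, nt = lf.shape
        out = np.zeros((ns, nt))
        for iu, u in enumerate(u_coords):
            for iv, v in enumerate(v_coords):
                # output(s, t) samples P_{u,v} at s + (1 - 1/alpha) u; the global
                # 1/alpha magnification only rescales the grid and is omitted here.
                ds = -(1.0 - 1.0 / alpha) * u
                dt = -(1.0 - 1.0 / alpha) * v
                out += subpixel_shift(lf[iu, iv], (ds, dt), order=1, mode='nearest')
        return out / (z ** 2)

    def alpha_from_two_views(u1, u2, s_u1, s_u2):
        # alpha_c = (u1 - u2) / ((u1 - u2) - (s_u1 - s_u2)): the refocusing parameter
        # follows from the disparity of one scene point between two sub-aperture views.
        return (u1 - u2) / ((u1 - u2) - (s_u1 - s_u2))

    # Smoke test: a constant light field must stay constant under refocusing.
    lf = np.ones((3, 3, 8, 8))
    u_coords = v_coords = [-1.0, 0.0, 1.0]
    assert np.allclose(refocus(lf, u_coords, v_coords, alpha=0.9), 9.0)
    print(alpha_from_two_views(u1=1.0, u2=-1.0, s_u1=0.1, s_u2=-0.1))  # -> 1.111...
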
$s'_o = \alpha s_o + (1 - \alpha)\,\frac{u}{M},$
$t'_o = \alpha t_o + (1 - \alpha)\,\frac{v}{M},$
$E^{(z_o)}(s_o,t_o) = \frac{1}{z_o^2} \sum_{u}\sum_{v} P_{u,v}\!\left(\frac{s_o}{\alpha} + \Big(1 - \frac{1}{\alpha}\Big)\frac{u_o}{M},\ \frac{t_o}{\alpha} + \Big(1 - \frac{1}{\alpha}\Big)\frac{v_o}{M}\right),$
$\alpha_{c,i} = -\frac{u_1 - u_2}{(s_{i,u_1} - s_{i,u_2}) - (u_1 - u_2)} = \frac{u_1 - u_2}{(u_1 - u_2) - (s_{i,u_1} - s_{i,u_2})},$
$\alpha_{c,o} = -\frac{u_1 - u_2}{M\,(s_{i,u_1} - s_{i,u_2}) - (u_1 - u_2)} = \frac{u_1 - u_2}{(u_1 - u_2) - M\,(s_{i,u_1} - s_{i,u_2})},$
$\frac{1}{\alpha_{c,o}} = 1 - \frac{M\,(s_{i,u_1} - s_{i,u_2})}{u_1 - u_2},$
$\frac{1}{\alpha_{c,i}} = 1 - \frac{s_{i,u_1} - s_{i,u_2}}{u_1 - u_2},$
$\alpha_{c,o} = \frac{\alpha_{c,i}}{(1 - M)\,\alpha_{c,i} + M},$
$\alpha_{c,i} = \frac{M\,\alpha_{c,o}}{1 - \alpha_{c,o}\,(1 - M)},$
$\tan\theta = \frac{U}{z_o},$
$\mathrm{DoF} = \frac{2\,|M|\,\Delta_s}{\tan\theta}.$
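
A one-line numeric evaluation of the two relations above; the values of U, z_o, |M| and Δ_s are arbitrary illustration numbers.

    U, z_o = 4.0, 100.0        # aperture half-width and object distance [mm]
    M, delta_s = 0.25, 0.01    # |magnification| and pixel footprint Delta_s [mm]
    tan_theta = U / z_o        # tan(theta) = U / z_o
    dof = 2.0 * M * delta_s / tan_theta
    print(f"tan(theta) = {tan_theta:.3f}, DoF = {dof:.3f} mm")
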
