Abstract

Recent developments in computational photography have enabled the optical focus of a plenoptic camera to be varied after image exposure, a capability known as refocusing. Existing ray models in the field simplify the camera's complexity for the purpose of image and depth map enhancement, but fail to satisfactorily predict the distance to which a photograph is refocused. By treating a pair of light rays as a system of linear functions, this paper shows that the system's solution yields an intersection indicating the distance to the refocused object plane. Experimental work is conducted with different lenses and focus settings, comparing the distance estimates against a stack of refocused photographs for which a blur metric has been devised. Quantitative assessments over a 24 m distance range suggest that the predictions deviate by less than 0.35 % from results obtained with optical design software. The proposed refocusing estimator assists in predicting object distances, for instance at the prototyping stage of plenoptic cameras, and will be an essential feature in applications demanding high precision in synthetic focus or in which depth maps are recovered by analyzing a stack of refocused photographs.

© 2016 Optical Society of America




Supplementary Material (4)

» Code 1: Matlab implementation of proposed refocusing distance estimator
» Dataset 1: Zemax archive file containing plenoptic camera design
» Dataset 2: Raw image data taken by a standard plenoptic camera
» Visualization 1 (MP4, 2381 KB): Animated illustration of refocusing synthesis


Figures (10)

Fig. 1. Lens components. (a) Single micro lens s_j with diameter Δs and its chief ray m_{c+i,j}, based on the sensor sampling positions u_{c+i}, which are separated by Δu; (b) chief ray trajectories, where red crossbars signify the gaps between MICs and the respective micro lens optical axes. Rays arriving at MICs originate from the center of the exit pupil A′.

Fig. 2. Realistic SPC ray model. The refined model considers more accurate MICs, obtained from chief rays that cross the micro lens optical centers and the exit pupil center of the main lens (yellow rays). For convenience, the main lens is depicted as a thin lens whose aperture pupils and principal planes coincide.

Fig. 3. Irradiance planes. If light rays emanate from an arbitrary point in object space, the energy I_U(s, U) measured at the main lens aperture is concentrated into a focused point I_{b_U}(s) at the MLA and distributed over the sensor area as I_{f_s}(s, u). Neglecting light absorption and reflection, I_{f_s}(s, u) is proportional to I_U(s, U), which may be proven by comparing similar triangles.

Fig. 4. Refocusing from the raw capture where a = 0 (see the animated Visualization 1).

Fig. 5. Refocusing from the raw capture where a = 1 (see the animated Visualization 1).

Fig. 6. Refocusing distance estimation where a = 1. Taking the example from Fig. 5, the diagram illustrates the parameters used to find the distance at which refocused photographs exhibit best focus. The proposed model offers two ways to accomplish this, by regarding rays as intersecting linear functions in object or image space. The DoF border d_{1-} cannot be attained via the image-based intersection, as the inner rays do not converge on the image side, a consequence of a_U < f_U. Distances d_a^± are negative when located behind the MLA and positive otherwise.

Fig. 7. Refocused photographs with b_U = f_U and M = 9. The main lens was focused at infinity (d_f → ∞). S denotes the measured sharpness in (c) to (q). The denominator of a gives the upsampling factor used for the linear interpolation of micro images. The diagram in (r) indicates the prediction performance, where colored vertical bars represent estimated positions of best focus.

Fig. 8. Refocused photographs with b_U > f_U and M = 11. The main lens was focused at 4 m (d_f ≈ 4 m). S denotes the measured sharpness in (c) to (q). The denominator of a gives the upsampling factor used for the linear interpolation of micro images. The diagram in (r) indicates the prediction performance, where colored vertical bars represent estimated positions of best focus.

Fig. 9. Real ray tracing simulation showing intersecting inner (red), outer (black), and central (cyan) rays with varying a and M. The consistent scale allows the results to be compared; they are taken from the f90 lens with d_f → ∞ focus and MLA (II.). Screenshots (a) to (c) have a constant micro image size (M = 13) and suggest that the DoF shrinks as a increases. In contrast, the DoF grows for a fixed refocusing plane (a = 4) as the number of samples M is reduced, as seen in (d) to (f).

Fig. 10. Raw light field photograph with b_U = f_U. The magnified view shows detailed micro images.

Tables (6)

Table 1. Micro lens specifications at λ = 550 nm.

Table 2. Main lens parameters at λ = 550 nm.

Table 3. Predicted refocusing distances d_a and d_a^±.

Table 4. Refocusing distance comparison for f193 with MLA (II.) and M = 13.

Table 5. Refocusing distance comparison for f193 with MLA (I.) and M = 13.

Table 6. Refocusing distance comparison for f90 with MLA (II.) and M = 13.

Equations (41)


$$\frac{1}{f_s} = \frac{1}{a_s} + \frac{1}{b_s}, \tag{1}$$
$$I_{b_U}(s,t) = \frac{1}{b_U^{\,2}} \iint L_{b_U}(s,t,U,V)\, A(U,V) \cos^4 \theta \,\mathrm{d}U \,\mathrm{d}V \tag{2}$$
$$I_{b_U}(s) = \int L_{b_U}(s,U) \,\mathrm{d}U. \tag{3}$$
$$L_{b_U}(s,U) = I_U(s,U). \tag{4}$$
$$I_U(s,U) \propto I_{f_s}(s,u) \tag{5}$$
$$I_{b_U}(s) = \int I_{f_s}(s,u) \,\mathrm{d}u. \tag{6}$$
$$E_{b_U}(s) = \int E_{f_s}(s,u) \,\mathrm{d}u \tag{7}$$
$$E_{b_U}(s_j) = \sum_{i=-c}^{c} E_{f_s}\big[s_j, u_{c+i}\big] \tag{8}$$
$$E_0[s_0] = E_{f_s}[s_0,u_0] + E_{f_s}[s_0,u_1] + E_{f_s}[s_0,u_2]. \tag{9}$$
$$E_0[s_1] = E_{f_s}[s_1,u_0] + E_{f_s}[s_1,u_1] + E_{f_s}[s_1,u_2]. \tag{10}$$
$$E_0[s_j] = \sum_{i=-c}^{c} E_{f_s}\big[s_j, u_{c+i}\big] \tag{11}$$
$$E_1[s_0] = E_{f_s}[s_2,u_0] + E_{f_s}[s_1,u_1] + E_{f_s}[s_0,u_2]. \tag{12}$$
$$E_1[s_1] = E_{f_s}[s_3,u_0] + E_{f_s}[s_2,u_1] + E_{f_s}[s_1,u_2]. \tag{13}$$
$$E_a[s_j] = \sum_{i=-c}^{c} E_{f_s}\big[s_{j+a(c-i)}, u_{c+i}\big] \tag{14}$$
$$E_a[s_j] = \sum_{i=-c}^{c} \frac{1}{M}\, E_{f_s}\big[s_{j+a(c-i)}, u_{c+i}\big], \quad a \in \mathbb{Q} \tag{15}$$
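
Equation (15) translates directly into a shift-and-add loop over the micro images. The following Python sketch illustrates the principle for an integer shift parameter a on a one-dimensional light field; the array layout (rows indexing micro lenses s_j, columns indexing micro image positions u_{c+i}) and the border clamping are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def refocus_1d(E_fs, a):
    """Shift-and-add synthesis of Eq. (15) for an integer shift
    parameter a on a 1-D light field.

    E_fs : 2-D array with E_fs[j, c+i] holding the sample of micro
           lens j at micro image position c+i (M odd samples per lens).
    Returns the refocused 1-D photograph E_a[j].
    """
    J, M = E_fs.shape           # micro lens count, micro image size
    c = (M - 1) // 2            # central sample index
    E_a = np.zeros(J)
    for j in range(J):
        for i in range(-c, c + 1):
            # row index j + a*(c - i) from Eq. (15); clamped at the borders
            jj = min(max(j + a * (c - i), 0), J - 1)
            E_a[j] += E_fs[jj, c + i] / M   # average over the M samples
    return E_a
```

Fractional values of a, whose denominator gives the upsampling factor named in the captions of Figs. 7 and 8, are realized by linearly interpolating the micro images before the summation; this sketch omits that step for brevity.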
$$o = \frac{J-1}{2}. \tag{16}$$
$$s_j = (j - o) \times \Delta s \tag{17}$$
$$m_{c,j} = \frac{s_j}{d_{A'}} \tag{18}$$
$$u_{c,j} = m_{c,j} \times f_s + s_j. \tag{19}$$
$$u_{c+i,j} = u_{c,j} + i \times \Delta u \tag{20}$$
$$m_{c+i,j} = \frac{s_j - u_{c+i,j}}{f_s}. \tag{21}$$
$$f_{c+i,j}(z) = m_{c+i,j} \times z + s_j, \quad z \in (-\infty, U]. \tag{22}$$
$$f_A(z) = f_B(z), \quad z \in (-\infty, U], \tag{23}$$
$$b_U' = b_U - d_a. \tag{24}$$
$$a_U' = \left(\frac{1}{f_U} - \frac{1}{b_U'}\right)^{-1}. \tag{25}$$
$$d_a = b_U + \overline{H_{1U} H_{2U}} + a_U' \tag{26}$$
$$r_A \approx 1.22\, \frac{f \lambda}{A}. \tag{27}$$
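
Equations (16) to (26) reduce the distance prediction to a few lines of arithmetic: construct two chief-ray functions, equate them, and back-project the intersection through the main lens. A minimal Python sketch under the reconstructed equations follows; the names d_exit (exit pupil distance d_A′) and H1H2 (principal plane spacing) are illustrative, and the sign conventions should be verified against the authors' Matlab reference implementation (Code 1).

```python
def chief_ray(j, i, J, ds, du, fs, d_exit):
    """Slope m and offset s_j of the ray f(z) = m*z + s_j, Eqs. (16)-(22)."""
    o = (J - 1) / 2.0                          # central micro lens index, Eq. (16)
    s_j = (j - o) * ds                         # micro lens center, Eq. (17)
    u_ci = (s_j / d_exit) * fs + s_j + i * du  # sample position, Eqs. (18)-(20)
    m = (s_j - u_ci) / fs                      # ray slope, Eq. (21)
    return m, s_j

def refocus_distance(a, j, i_A, i_B, J, ds, du, fs, d_exit, f_U, b_U, H1H2):
    """Distance of best focus d_a via the intersection of two rays that
    Eq. (15) merges into the same synthesis position, Eqs. (23)-(26)."""
    m_A, s_A = chief_ray(j, i_A, J, ds, du, fs, d_exit)
    m_B, s_B = chief_ray(j + a * (i_A - i_B), i_B, J, ds, du, fs, d_exit)
    z = (s_B - s_A) / (m_A - m_B)              # ray intersection, Eq. (23)
    b_new = b_U - z                            # refined image distance, Eq. (24)
    a_new = 1.0 / (1.0 / f_U - 1.0 / b_new)    # object-side conjugate, Eq. (25)
    return b_U + H1H2 + a_new                  # refocusing distance, Eq. (26)
```

Under this parameterization, a = 0 makes both rays leave the same micro lens, so they intersect at the MLA plane (z = 0) and the estimate collapses to the object plane conjugate to the MLA.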
$$u_{\{c+i,j\}}^{\pm} = u_{c,j} + i \times \Delta u \pm \frac{\Delta u}{2}. \tag{28}$$
$$m_{\{c+i,j\}}^{\pm} = \frac{s_j - u_{\{c+i,j\}}^{\pm}}{f_s}. \tag{29}$$
$$s_j^{\pm} = s_j \pm \frac{\Delta s}{2}. \tag{30}$$
$$f_{\{c+i,j\}}^{\pm}(z) = m_{\{c+i,j\}}^{\pm} \times z + s_j^{\pm}, \quad z \in (-\infty, U]. \tag{31}$$
$$f_A^{\pm}(z) = f_B^{\pm}(z), \quad z \in [U, \infty), \tag{32}$$
$$b_U'^{\,\pm} = b_U - d_a^{\pm}. \tag{33}$$
$$a_U'^{\,\pm} = \left(\frac{1}{f_U} - \frac{1}{b_U'^{\,\pm}}\right)^{-1}. \tag{34}$$
$$d_a^{\pm} = b_U + \overline{H_{1U} H_{2U}} + a_U'^{\,\pm}. \tag{35}$$
$$\mathrm{DoF}_a = d_a^{+} - d_a^{-}. \tag{36}$$
$$b_U = \left(\frac{1}{f_U} - \frac{1}{a_U}\right)^{-1}, \tag{37}$$
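
The depth of field bounds in Eqs. (28) to (36) reuse the same intersection machinery, only with rays offset to the pixel and micro lens borders. A possible helper, under the same assumptions as the sketch above:

```python
def boundary_ray(j, i, sign, J, ds, du, fs, d_exit):
    """Boundary ray of Eqs. (28)-(31); sign = +1 or -1 shifts the sample
    by half a pixel pitch and the offset by half a micro lens pitch."""
    o = (J - 1) / 2.0
    s_j = (j - o) * ds
    u_pm = (s_j / d_exit) * fs + s_j + i * du + sign * du / 2.0  # Eq. (28)
    m_pm = (s_j - u_pm) / fs                                     # Eq. (29)
    return m_pm, s_j + sign * ds / 2.0                           # Eq. (30)
```

Intersecting matched pairs of these rays and propagating the results through Eqs. (33) to (35), exactly as in refocus_distance above, yields d_a^+ and d_a^-, whose difference is DoF_a of Eq. (36).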
$$\mathcal{X}\big[\sigma_\omega, \rho_\psi\big] = \left| \sum_{j=\xi}^{\Xi-1} \sum_{h=\varpi}^{\Pi-1} E_a[s_j, t_h]\, \exp\!\left(-2\pi\kappa \left( \frac{j\,\omega}{\Xi-\xi} + \frac{h\,\psi}{\Pi-\varpi} \right)\right) \right| \tag{38}$$
$$\mathrm{TE} = \sum_{\omega=0}^{\Omega-1} \sum_{\psi=0}^{\Psi-1} \mathcal{X}\big[\sigma_\omega, \rho_\psi\big]^2 \tag{39}$$
$$\mathrm{HE} = \mathrm{TE} - \sum_{\omega=0}^{Q_H} \sum_{\psi=0}^{Q_V} \mathcal{X}\big[\sigma_\omega, \rho_\psi\big]^2 \tag{40}$$
$$S = \frac{\mathrm{HE}}{\mathrm{TE}} \tag{41}$$
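
Equations (38) to (41) rate a refocused photograph by the share of its spectral energy lying above a low-frequency block, so the sharpest slice of a refocused stack maximizes S. Below is a simplified Python reading using the FFT; the cut-off indices q_h and q_v stand in for Q_H and Q_V, and the metric devised in the paper may differ in windowing and frequency-band handling.

```python
import numpy as np

def sharpness(E_a, q_h, q_v):
    """Sharpness S = HE / TE of a refocused photograph, Eqs. (38)-(41)."""
    X = np.abs(np.fft.fft2(E_a))              # DFT magnitudes, Eq. (38)
    TE = np.sum(X ** 2)                       # total spectral energy, Eq. (39)
    LE = np.sum(X[:q_h + 1, :q_v + 1] ** 2)   # low-frequency block
    HE = TE - LE                              # high-frequency energy, Eq. (40)
    return HE / TE                            # sharpness score S, Eq. (41)
```

Evaluating S over the stack of refocused photographs and selecting the slice with the maximum score gives the measured position of best focus against which the predicted distances are compared in Figs. 7(r) and 8(r).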
