Abstract

In this paper, we propose a geometric optical model for measuring the distances of object planes in a light field image. The proposed model consists of two ray-tracing sub-models: an object space model and an image space model. Both sub-models are derived for on-axis point light sources. In the object space model, light rays propagate to the main lens and refract inside it according to Snell's law. In the image space model, light rays exit from emission positions on the main lens and impinge on the image sensor with different imaging diameters. The relationship between an object's imaging diameter and its corresponding emission position on the main lens is established using refocusing and the similar-triangle principle. By combining the two sub-models and tracing light rays back into object space, we derive the relationship between an object's imaging diameter and the distance of its object plane. The proposed model is compared with existing approaches on different configurations of hand-held plenoptic 1.0 cameras, and real experiments are conducted with a preliminary imaging system. The results demonstrate that the proposed model outperforms existing approaches in terms of accuracy and performs well over the typical imaging range.
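
As a concrete illustration of the image-space sub-model, the following minimal Python sketch (ours, not the authors' code) recovers the distance d between the MLA and a refocused image plane from a measured imaging diameter D_1, using the simplified similar-triangle relations reproduced as Eqs. (16) and (23) in the Equations section below; the symbols D, f_x, and d_in follow that section, and the function name and parameter values are illustrative placeholders.

def ray_bundle_distance(D1, D, f_x, d_in, behind_mla=True):
    # Distance d from the MLA to the refocused image plane, given the
    # imaging diameter D1 of an on-axis point source measured on the sensor.
    if behind_mla:
        return (f_x + d_in) * D1 / (D - D1)  # image plane behind the MLA, Eq. (16)
    return (f_x + d_in) * D1 / (D + D1)      # image plane in front of the MLA, Eq. (23)

# Illustrative values in millimeters: main-lens aperture D, MLA focal length f_x,
# and main-lens-to-MLA separation d_in.
d = ray_bundle_distance(D1=0.68, D=20.0, f_x=0.5, d_in=42.0)  # ~1.5 mm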

© 2017 Optical Society of America


References


  1. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Reports (CSTR), 2005.
  2. Lytro, https://www.lytro.com/.
  3. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.
  4. Raytrix, https://www.raytrix.de/.
  5. C. C. Chen, Y. C. Lu, and M. S. Su, “Light field based digital refocusing using a DSLR camera with a pinhole array mask,” in IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2010), pp. 754–757.
  6. Y. Taguchi and T. Naemura, “View-dependent coding of light fields based on free-viewpoint image synthesis,” in IEEE International Conference on Image Processing (2006), pp. 509–512.
  7. K. Rematas, T. Ritschel, M. Fritz, and T. Tuytelaars, “Image-based synthesis and re-synthesis of viewpoints guided by 3D models,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3898–3905.
  8. T. E. Bishop and P. Favaro, “The light field camera: extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012).
  9. N. Y. Li, J. W. Ye, J. Yu, H. B. Ling, and J. Y. Yu, “Saliency detection on light field,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 2806–2813.
  10. Metrology Resource Co., http://www.metrologyresource.com/.
  11. Weibel Inc., Weibel RR-60034 Ranging Radar System.
  12. J. Xu, Y. M. Chen, and Z. L. Shi, “Distance measurement using binocular-camera with adaptive focusing,” J. Shanghai Univ. 15(2), 10072861 (2009).
  13. K. Venkataraman, P. Gallagher, A. Jain, and S. Nisenzon, “US patent 705,885” (2015).
  14. Z. Yu, X. Guo, H. Ling, A. Lumsdaine, and J. Yu, “Line assisted light field triangulation and stereo matching,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2013), pp. 2792–2799.
  15. M. J. Kim, T. H. Oh, and I. S. Kweon, “Cost-aware depth map estimation for Lytro camera,” in IEEE International Conference on Image Processing (2014), pp. 36–40.
  16. M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2013), pp. 673–680.
  17. M. W. Tao, P. P. Srinivasan, J. Malik, S. Rusinkiewicz, and R. Ramamoorthi, “Depth from shading, defocus, and correspondence using light-field angular coherence,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 1940–1948.
  18. Y. Xu, X. Jin, and Q. Dai, “Depth fused from intensity range and blur estimation for light-field cameras,” in IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2016), pp. 2857–2861.
  19. H. G. Jeon, J. S. Park, G. M. Choe, J. S. Park, Y. S. Bok, Y. W. Tai, and I. S. Kweon, “Accurate depth map estimation from a lenslet light field camera,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 1547–1555.
  20. C. Hahne, A. Aggoun, S. Haxha, V. Velisavljevic, and J. C. Fernández, “Light field geometry of a standard plenoptic camera,” Opt. Express 22(22), 26659–26673 (2014).
  21. “Zemax,” http://www.zemax.com/.
  22. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015).


Figures (12)

Fig. 1

The optical structure of plenoptic 1.0 cameras, where $f_x$ is the focal length of the MLA.

Fig. 2

Light field imaging of objects at different distances.

Fig. 3

Object space model: light ray propagation from objects to the main lens.

Fig. 4

Image space model: light ray propagation from the main lens to the image sensor.

Fig. 5

Refocusing model for deriving $d_{\mathrm{in}}$, where $y$ and $v$ represent the MLA and the image sensor, respectively.

Fig. 6

Zemax screenshots.

Fig. 7

Estimation error comparison for two imaging systems in Table 3: (a) Imaging system 1; (b) Imaging system 2.

Fig. 8

Ray tracing model for error analysis, where $s_i = d_{\mathrm{in}} + d$. The green and red light rays correspond to plane $a$ and plane $\tilde{a}$ in Fig. 2, respectively.

Fig. 9

Estimation error comparison for the testing cases.

Fig. 10

Estimation error comparison for increasing $T$ and $D$ at the same time.

Fig. 11

The prototype of the real imaging system: (a) the front view; (b) and (c) the front view and the vertical view of the plenoptic imaging system in (a), respectively.

Fig. 12

Images obtained at different distances $d_{\mathrm{out}}$: (a) 500 mm; (b) 600 mm; (c) 700 mm; (d) 800 mm; (e) 900 mm.

Tables (7)

Table 1 Geometrical Parameters of Two Simulated Imaging Systems.

Table 2 Estimation Error Comparison for the Proposed Model and That in [20].

Table 3 Geometric Parameters of Eight Testing Cases.

Table 4 Estimation Errors of All the Testing Cases in Table 3.

Table 5 Geometrical Parameters of Two Testing Cases Used for Comparing the Effects of Changing T and D.

Table 6 Geometric Parameters of the Prototype.

Table 7 Results of Estimated Distances and Estimation Errors.

Equations (24)

\[ \tan\varphi = \frac{D}{2\left( d_{\mathrm{out}} - T/2 + R - \sqrt{R^{2} - D^{2}/4} \right)}, \tag{1} \]
\[ d_{\mathrm{out}} = \frac{D}{2\tan\varphi} + \sqrt{R^{2} - D^{2}/4} - R + T/2. \tag{2} \]
\[ n_{1}\sin\psi = \sin(\varphi + \theta_{1}), \tag{3} \]
\[ \sin\theta_{1} = \frac{D}{2R}. \tag{4} \]
\[ n_{1}\sin(\theta_{1} - \psi + \theta_{2}) = \sin\omega, \tag{5} \]
\[ \sin\theta_{2} = \frac{q}{R}. \tag{6} \]
\[ \tan(\omega - \theta_{2}) = \frac{D_{1}}{2d}, \tag{7} \]
\[ (R - T/2 + p)^{2} + q^{2} = R^{2}, \tag{8} \]
\[ \tan(\omega - \theta_{2}) = \frac{q}{f_{x} + d + d_{\mathrm{in}} - p}, \tag{9} \]
\[ \frac{D_{1}}{2d} = \frac{q}{f_{x} + d + d_{\mathrm{in}} - p}. \tag{10} \]
\[ m_{i} = \frac{y_{j} - v_{i}}{f_{x}}, \tag{11} \]
\[ F_{i} = m_{i} \times f, \tag{12} \]
\[ k_{i2} = \frac{y_{2} - F_{i}}{d_{\mathrm{out}} - f}, \tag{13} \]
\[ d_{\mathrm{in}} = \frac{q_{0} - y_{2} + m_{0}p_{0}}{m_{0}}. \tag{14} \]
\[ \frac{D_{1}}{d} \approx \frac{D}{f_{x} + d + d_{\mathrm{in}}}, \tag{15} \]
\[ d \approx \frac{(f_{x} + d_{\mathrm{in}})D_{1}}{D - D_{1}}. \tag{16} \]
\[ \omega = \arctan\left( \frac{D_{1}}{2d} \right) + \theta_{2} = \arctan\left( \frac{D_{1}}{2d} \right) + \arcsin\left( \frac{q}{R} \right). \tag{17} \]
\[ \psi = \arcsin\left( \frac{D}{2R} \right) + \arcsin\left( \frac{q}{R} \right) - \arcsin\left( \sin(\omega)/n_{1} \right), \tag{18} \]
\[ \varphi = \arcsin\left( n_{1}\sin(\psi) \right) - \arcsin\left( \frac{D}{2R} \right). \tag{19} \]
\[ \tan(\omega - \theta_{2}) = \frac{q}{f_{x} - d + d_{\mathrm{in}} - p}, \tag{20} \]
\[ \frac{D_{1}}{2d} = \frac{q}{f_{x} - d + d_{\mathrm{in}} - p}. \tag{21} \]
\[ \frac{D_{1}}{d} \approx \frac{D}{f_{x} - d + d_{\mathrm{in}}}, \tag{22} \]
\[ d \approx \frac{(f_{x} + d_{\mathrm{in}})D_{1}}{D + D_{1}}. \tag{23} \]
\[ \mathrm{ERROR} = \frac{\left| d_{\mathrm{out}} - E_{d_{\mathrm{out}}} \right|}{d_{\mathrm{out}}} = \frac{\left| Ad + T/2 - (Ed + T/2) \right|}{Ad + T/2} = \frac{\left| Ad - Ed \right|}{Ad + T/2}. \tag{24} \]
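
To make the back-tracing step concrete, the following minimal Python sketch (ours, not the authors' implementation) chains Eqs. (17)-(19) with Eq. (2) to recover the object-plane distance d_out from the ray-bundle distance d and the imaging diameter D_1, and evaluates the metric of Eq. (24). It assumes the marginal emission height q = D/2 on the main lens surface; the function names and all parameter values are illustrative placeholders, with signs following the behind-the-MLA case.

import math

def trace_back_to_object(d, D1, D, R, T, n1, q=None):
    # Back-trace the marginal ray into object space (Eqs. (17)-(19)),
    # then recover the object-plane distance via Eq. (2).
    if q is None:
        q = D / 2.0                                    # assumed emission height
    theta2 = math.asin(q / R)                          # Eq. (6)
    omega = math.atan(D1 / (2.0 * d)) + theta2         # Eq. (17)
    psi = (math.asin(D / (2.0 * R)) + math.asin(q / R)
           - math.asin(math.sin(omega) / n1))          # Eq. (18)
    phi = math.asin(n1 * math.sin(psi)) - math.asin(D / (2.0 * R))  # Eq. (19)
    return (D / (2.0 * math.tan(phi))
            + math.sqrt(R**2 - D**2 / 4.0) - R + T / 2.0)           # Eq. (2)

def relative_error(d_out_true, d_out_est):
    # Relative estimation error, Eq. (24).
    return abs(d_out_true - d_out_est) / d_out_true

# Illustrative values in millimeters: lens surface radius R, lens thickness T,
# aperture diameter D, refractive index n1, and the d and D1 from the sketch
# given after the abstract.
d_out_est = trace_back_to_object(d=1.5, D1=0.68, D=20.0, R=40.0, T=8.0, n1=1.5)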
