Abstract

We propose a novel light field acquisition method based on a planar catadioptric system. Only one camera with a large field of view (FOV) is required; multiple virtual cameras are created by an array of planar mirrors to acquire the light field from different views. The spatial distribution of the virtual cameras can be configured flexibly to acquire particular light rays, simply by changing the positions of the mirrors. Compared with previous systems, the planar mirrors introduce no aberration or reduction in light transmittance, which preserves image quality. In this study, the design method of the planar catadioptric system is presented, and the calibration procedure for its computational model is analyzed in detail. The method is verified with a prototype system, with which correct digital refocusing results are obtained from the acquired, calibrated light field.
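As a rough illustration of the digital refocusing mentioned above, the sketch below applies a generic shift-and-add over the grid of virtual-camera views. It is not the authors' calibrated-ray implementation; the inputs `views` (one image per virtual camera) and `d` (the inter-view disparity, in pixels, of the plane to bring into focus) are hypothetical.

```python
import numpy as np

def refocus(views, d):
    """Shift-and-add refocusing over a 2-D grid of views.

    views : nested list, views[i][j] is the H x W grayscale image of virtual camera (i, j)
    d     : disparity (pixels per view index) of the plane to bring into focus
    """
    rows, cols = len(views), len(views[0])
    ci, cj = (rows - 1) / 2.0, (cols - 1) / 2.0
    acc = np.zeros_like(views[0][0], dtype=np.float64)
    for i in range(rows):
        for j in range(cols):
            # Shift each view in proportion to its offset from the array center;
            # points at the chosen depth then align across views and add coherently.
            # (np.roll wraps at the borders; a sketch-level simplification.)
            dy = int(round((i - ci) * d))
            dx = int(round((j - cj) * d))
            acc += np.roll(views[i][j], shift=(dy, dx), axis=(0, 1))
    return acc / (rows * cols)
```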

© 2015 Optical Society of America

Supplementary Material (1)

Visualization 1 (AVI, 13,847 KB): digital refocusing video using the calibrated light field.



Figures (8)

Fig. 1 Schematic diagram of the light field acquisition method using a planar catadioptric system.
Fig. 2 Schematic diagrams of (a) the positions of the virtual cameras and the placement of the mirrors, and (b) the constraints on mirror placement.
Fig. 3 Multiple mirrors in a planar catadioptric system with the virtual sensors on a plane (a 5 × 4 virtual-camera system is taken as an example).
Fig. 4 Module of the prototype catadioptric light field capture system.
Fig. 5 (a) One of the structured light images. (b) Decoding result of the structured light pattern in the horizontal (X) direction. (c) Decoding result in the vertical (Y) direction.
Fig. 6 (a) Calibration results of the virtual cameras. (b) Enlarged view of the calibration results (unit: mm).
Fig. 7 (a) Experimental setup and (b) image captured by the developed system.
Fig. 8 Digital refocusing images computed from the image captured by the developed system (Fig. 7(b)) (see Visualization 1).

Equations (13)


(1)  \sum_{i=0}^{m-1} \varphi_i = \Psi, \qquad \sum_{j=0}^{n-1} \pi_j = \Phi

(2)  \mathbf{r}_{ij} = \left( \tan\!\left( \sum_{i=0}^{i} \varphi_i - \frac{\varphi_i}{2} - \frac{\Psi}{2} \right),\ \tan\!\left( \sum_{j=0}^{j} \pi_j - \frac{\pi_j}{2} - \frac{\Phi}{2} \right),\ 1 \right)

(3)  \hat{\mathbf{n}}_{ij} = \frac{\hat{\mathbf{r}}_{ij} - \hat{\mathbf{l}}_{ij}}{\left\| \hat{\mathbf{r}}_{ij} - \hat{\mathbf{l}}_{ij} \right\|}

(4)  \mathbf{M}_{ij} = \left( \hat{n}_{x}^{\,ij},\ \hat{n}_{y}^{\,ij},\ \hat{n}_{z}^{\,ij},\ \lambda_{ij}^{2}/2 \right)^{T}

(5)  \lambda_{ij} = \frac{\eta}{\hat{n}_{x}^{\,ij}\hat{l}_{x}^{\,ij} + \hat{n}_{y}^{\,ij}\hat{l}_{y}^{\,ij} + \hat{n}_{z}^{\,ij}\hat{l}_{z}^{\,ij}}

(6)  \begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = \begin{bmatrix} f_1 & \alpha f_1 & x_0 \\ 0 & f_2 & y_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} (1 + k_1 r^2 + k_2 r^4 + k_5 r^6)\,x_n + 2 k_3 x_n y_n + k_4 (r^2 + 2 x_n^2) \\ (1 + k_1 r^2 + k_2 r^4 + k_5 r^6)\,y_n + 2 k_4 x_n y_n + k_3 (r^2 + 2 y_n^2) \\ 1 \end{bmatrix}

(7)  z_c \begin{bmatrix} x_n & y_n & 1 \end{bmatrix}^{T} = \begin{bmatrix} R_{ij} & T_{ij} \end{bmatrix} \begin{bmatrix} X & Y & Z & 1 \end{bmatrix}^{T}

(8)  f_1 = \frac{U_x}{2\tan(\Psi/2)}, \qquad f_2 = \frac{U_y}{2\tan(\Phi/2)}

(9)  x_0 = U_x/2, \qquad y_0 = U_y/2

(10)  \alpha = k_1 = k_2 = k_3 = k_4 = k_5 = 0

(11)  \begin{cases} X r_{11}^{\,ij} + Y r_{12}^{\,ij} + Z r_{13}^{\,ij} - x_n X r_{31}^{\,ij} - x_n Y r_{32}^{\,ij} - x_n Z r_{33}^{\,ij} + t_1^{\,ij} - x_n t_3^{\,ij} = 0 \\ X r_{21}^{\,ij} + Y r_{22}^{\,ij} + Z r_{23}^{\,ij} - y_n X r_{31}^{\,ij} - y_n Y r_{32}^{\,ij} - y_n Z r_{33}^{\,ij} + t_2^{\,ij} - y_n t_3^{\,ij} = 0 \end{cases}

(12)  \mathbf{l}_p = -\,\mathrm{inv}(R_{ij})\, T_{ij}

(13)  \mathbf{l}_v = \frac{\mathrm{inv}(R_{ij})\,[\,x_n\ y_n\ 1\,]^{T} - \mathrm{inv}(R_{ij})\, T_{ij}}{\left\| \mathrm{inv}(R_{ij})\,[\,x_n\ y_n\ 1\,]^{T} - \mathrm{inv}(R_{ij})\, T_{ij} \right\|}
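For concreteness, the sketch below back-projects a normalized image point of one calibrated virtual camera into a world-space ray (position plus unit direction) under the standard world-to-camera convention of Eq. (7). It is a minimal illustration under that assumption, not necessarily the paper's exact formulation, and the rotation R, translation T, and sample values used here are hypothetical.

```python
import numpy as np

def backproject_ray(x_n, y_n, R, T):
    """Turn a normalized image point (x_n, y_n) of a calibrated virtual camera
    into a world-space ray, assuming z_c * [x_n, y_n, 1]^T = R @ X + T as in Eq. (7)."""
    m = np.array([x_n, y_n, 1.0])        # homogeneous normalized image point
    R_inv = np.linalg.inv(R)             # R is a rotation, so R.T would also do
    origin = -R_inv @ T                  # virtual camera center in world coordinates
    point = R_inv @ (m - T)              # world point of (x_n, y_n) at unit depth
    direction = point - origin
    return origin, direction / np.linalg.norm(direction)

# Hypothetical example: identity rotation, camera center at z = -100 mm in world coordinates.
R = np.eye(3)
T = np.array([0.0, 0.0, 100.0])
l_p, l_v = backproject_ray(0.1, -0.05, R, T)
print("ray origin:", l_p, "ray direction:", l_v)
```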
