Abstract

The original aim of the integral-imaging concept, reported by Gabriel Lippmann more than a century ago, was the capture of images of 3D scenes for projection onto an autostereoscopic display. In this paper we report a new algorithm for the efficient generation of microimages for direct projection onto an integral-imaging monitor. Like our previous algorithm, the smart pseudoscopic-to-orthoscopic conversion (SPOC) algorithm, the new algorithm produces microimages ready for 3D display with full parallax. However, it is much simpler than its predecessor, produces microimages free of black pixels, and permits the reference plane and the field of view of the displayed 3D scene to be fixed at will, within certain limits. Proofs of concept are illustrated with 3D capture and 3D display experiments.

© 2014 Optical Society of America


Supplementary Material (4)

» Media 1: MOV (1181 KB)     
» Media 2: MOV (841 KB)     
» Media 3: MOV (759 KB)     
» Media 4: MOV (1283 KB)     


Figures (12)

Fig. 1.

(a) Scheme of the capture of the plenoptic field with an InI setup. (b) The captured, sampled plenoptic field.

Fig. 2.

Scheme of the capture of the plenoptic field with a plenoptic camera.

Fig. 3.

Sampled plenoptic field captured with the plenoptic camera of the previous figure. Note that the pixels are not arranged on a rectangular grid, but are sheared by an angle α = p_x/(f+z).

Fig. 4.

Scheme of the plenoptic capturing setup.

Fig. 5.

Plenoptic picture of a 3D scene. The picture is composed of 96×60 microimages with 48×48 pixels each.
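
Given the layout stated in the caption, the individual microimages can be extracted from the plenoptic picture with a simple reshape. The sketch below assumes the picture is available as a grayscale NumPy array with the microimages tiled on a regular grid; loading the file is omitted, and the array names are illustrative.

```python
import numpy as np

# Layout from the caption: 96 x 60 microimages, 48 x 48 pixels each.
Nx, Ny, p = 96, 60, 48

# Stand-in for the captured plenoptic picture; replace with the real image
# loaded as a (Ny*p, Nx*p) grayscale array.
raw = np.zeros((Ny * p, Nx * p), dtype=np.uint8)

# micro[i, j] is the p x p microimage recorded behind microlens (i, j).
micro = raw.reshape(Ny, p, Nx, p).swapaxes(1, 2)
```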

Fig. 6.

Single-frame excerpt from video recording of the implemented InI monitor (Media 1). The video is composed of views of the monitor obtained from different horizontal positions. The small size of the effective display is caused by the small number of microlenses in the capture MLA.

Fig. 7.

Sampled plenoptic map at the plane of the camera lens, before refraction, for the case in which the MLA is placed just at the back focal plane of the camera lens; the reference plane is therefore at infinity (this field has been calculated from the one shown in Fig. 3).

Fig. 8.

Plenoptic map at the plane of the camera lens, before refraction, for the case in which the MLA is placed behind the back focal plane of the camera lens; the reference plane is therefore at a finite distance.

Fig. 9.

Illustration of the cropping procedure. (a) When the crop center is set at the image center, the reference plane is set at infinity; the cropping factor δ/Δ determines the FOV of the displayed images. (b) When the crop center is displaced linearly by a factor α, the reference plane is shifted axially.
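
A minimal sketch of this cropping step, assuming the elemental images are stored in a NumPy array of shape (Ny, Nx, H, W), is given below. The parameters delta and alpha stand in for the δ and α of the caption; the code is only an illustration of the geometry, not the authors' implementation.

```python
import numpy as np

def crop_eis(eis, delta, alpha=0.0):
    """Crop a delta x delta window from every elemental image.

    eis   : array of shape (Ny, Nx, H, W) holding the elemental images.
    delta : side of the square crop in pixels; delta/W acts as the cropping
            factor that sets the field of view of the displayed scene.
    alpha : displacement of the crop center per elemental image, in pixels;
            alpha = 0 keeps the crop centered (reference plane at infinity),
            a nonzero alpha shifts the reference plane axially.
    """
    Ny, Nx, H, W = eis.shape
    out = np.empty((Ny, Nx, delta, delta), dtype=eis.dtype)
    for i in range(Ny):
        for j in range(Nx):
            # Crop center: image center plus a shift growing linearly with
            # the elemental-image index (measured from the central EI).
            cy = int(round(H / 2 + alpha * (i - (Ny - 1) / 2)))
            cx = int(round(W / 2 + alpha * (j - (Nx - 1) / 2)))
            y0, x0 = cy - delta // 2, cx - delta // 2
            # Assumes delta and alpha are chosen so the window stays inside
            # the elemental image; no padding is applied here.
            out[i, j] = eis[i, j, y0:y0 + delta, x0:x0 + delta]
    return out
```

For instance, crop_eis(eis, delta=24) keeps the reference plane at infinity with a reduced FOV, whereas a nonzero alpha moves the reference plane to a finite distance, as in panel (b).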

Fig. 10.

Subset of the cropped EIs.

Fig. 11.

Microimages calculated from the EIs.
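
The step from the cropped elemental images of Fig. 10 to the microimages of Fig. 11 can be summarized as a pixel transposition between the view index and the pixel index. The sketch below assumes that standard mapping and grayscale images; the array layout is illustrative rather than taken from the paper.

```python
import numpy as np

def eis_to_microimages(eis):
    """Transpose elemental images into microimages.

    eis : array of shape (Ny, Nx, h, w), where (i, j) indexes the view
          (elemental image) and (k, l) the pixel inside it.
    Returns an array of shape (h, w, Ny, Nx): microimage (k, l) gathers
    pixel (k, l) from every view, i.e. one pixel per elemental image.
    """
    return np.ascontiguousarray(eis.transpose(2, 3, 0, 1))

def tile_microimages(micro):
    """Paste the microimages side by side into the picture sent to the monitor."""
    h, w, Ny, Nx = micro.shape
    return micro.transpose(0, 2, 1, 3).reshape(h * Ny, w * Nx)
```

In this mapping the number of pixels kept per elemental image fixes the number of microimages, while the number of elemental images fixes the number of pixels, i.e. views, per microimage.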

Fig. 12.

Single-frame excerpts from video recording of the InI display. (a) The cook’s head is displayed at the plane of the monitor and the house behind that plane (Media 2). (b) Same, but with a smaller FOV (Media 3). (c) The window shutter is displayed at the monitor plane, the house and the wooden panel within the monitor, and the cook in front of the monitor (Media 4).

Equations (6)

$$z' = \frac{f^{2}}{z}$$

$$T_t = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}$$

$$L_f = \begin{pmatrix} 1 & 0 \\ -1/f & 1 \end{pmatrix}$$

$$A = T_{f+z'}\,L_f = \begin{pmatrix} 1 - \dfrac{f+z'}{f} & f+z' \\ -\dfrac{1}{f} & 1 \end{pmatrix}$$

$$\begin{pmatrix} x_1 \\ \theta_1 \end{pmatrix} = A^{-1} \begin{pmatrix} x_0 \\ \theta_0 \end{pmatrix} = \begin{pmatrix} x_0 - (f+z')\,\theta_0 \\ \dfrac{x_0}{f} - \dfrac{z'}{f}\,\theta_0 \end{pmatrix}$$

$$\alpha = \frac{z'}{f} = \frac{f}{z}$$
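
As a quick numerical check of the relations above, the following Python sketch builds the ray-transfer matrices and propagates a sample ray; the values of f, z, and the ray coordinates are arbitrary examples, not parameters of the reported experiment.

```python
import numpy as np

# Example values only: f is the focal length of the camera lens and z the
# axial coordinate of the reference plane that is conjugated through Eq. (1).
f, z = 50.0, 200.0
zp = f**2 / z                                     # z' = f^2 / z

def T(t):
    """Free-space translation over a distance t."""
    return np.array([[1.0, t], [0.0, 1.0]])

L = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])       # thin-lens matrix L_f

A = T(f + zp) @ L                                  # A = T_{f+z'} L_f
assert np.isclose(np.linalg.det(A), 1.0)           # ray-transfer matrices are unimodular

x0, th0 = 1.0, 0.02                                # sample ray: height (mm), angle (rad)
x1, th1 = np.linalg.inv(A) @ np.array([x0, th0])   # (x1, th1) = A^{-1} (x0, th0)
assert np.isclose(x1, x0 - (f + zp) * th0)
assert np.isclose(th1, x0 / f - (zp / f) * th0)

alpha = zp / f                                     # shear of the sampled plenoptic map
assert np.isclose(alpha, f / z)
```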
