Abstract

A new real-time integral imaging pick-up and display method is demonstrated. The proposed method uses a dual-camera optical pick-up part to collect the 3D information of a real scene in real time without pre-calibration. Elemental images are then generated by a computer-generated integral imaging (CGII) part and displayed by a projection-type integral imaging display part. Theoretical analysis indicates that the method is robust to camera position deviation, which benefits real-time data processing. Experimental results show that fully continuous pick-up and display of a real 3D scene is feasible at a throughput of 8 fps. Further analysis predicts that, with parallel optimization, the proposed method can achieve real-time 3D pick-up and display at a throughput of 25 fps.
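As a rough illustration of the three-stage pipeline summarized above (dual-camera capture, CGII elemental-image synthesis, projection-type display), the following Python sketch shows one possible per-frame loop and how the reported throughput translates into a per-frame time budget. All function names, interfaces, and the timing logic are hypothetical and are not taken from the paper.

```python
import time

def capture_stereo_pair(left_cam, right_cam):
    """Hypothetical stage 1: grab one synchronized frame from each of the two cameras."""
    return left_cam.read(), right_cam.read()

def generate_elemental_images(left_frame, right_frame):
    """Hypothetical stage 2 (CGII): synthesize the elemental-image array from the stereo pair."""
    ...

def project_elemental_images(elemental_images, projector):
    """Hypothetical stage 3: hand the elemental-image array to the projection-type display."""
    ...

def run_pipeline(left_cam, right_cam, projector):
    # At the reported 8 fps, the three stages together must fit within ~125 ms per frame;
    # the predicted 25 fps corresponds to a ~40 ms budget, e.g. by pipelining the stages
    # over successive frames on parallel workers.
    while True:
        t0 = time.perf_counter()
        left, right = capture_stereo_pair(left_cam, right_cam)
        elemental_images = generate_elemental_images(left, right)
        project_elemental_images(elemental_images, projector)
        print(f"throughput: {1.0 / (time.perf_counter() - t0):.1f} fps")
```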

© 2012 OSA

Supplementary Material (1)

Media 1: MOV (4060 KB)


Figures (7)

Fig. 1

Schematic configuration of the real-time integral imaging pick-up and display system (Media 1)

Fig. 2

Real-time optical pick-up integral imaging part: (a) sketch, (b) results.

Fig. 3

Schematic diagram of the pick-up part for integral imaging.

Fig. 4

Comparison of EIs and 3D reconstructed images: (a) EIs generated by the proposed method, (b) EIs obtained by the conventional direct camera-array pick-up method, (c) PSNR of each corresponding EI, (d) 3D reconstructed images from the EIs in (a), (e) 3D reconstructed images from the EIs in (b).

Fig. 5

EIs generated by CGII

Fig. 6

Reconstructed 3D images observed from different viewpoints: (a) 3D real scene 1 and (b) 3D real scene 2.

Fig. 7

Process flow of the proposed method and the processing time of each stage.

Tables (3)


Table 1 Comparison of characteristics between the conventional and OPII methods.


Table 2 Parameters of Part II (CGII)


Table 3 Parameters of Part III (PII)

Equations (2)

$$\left\{\begin{aligned} p_x &= \left( x_{R(i,j)} - x_{R(i',j')} \right) / (i - i')\\ p_y &= \left( y_{R(i,j)} - y_{R(i',j')} \right) / (j - j') \end{aligned}\right. ,\qquad (i \neq i'),\ (j \neq j')$$

$$R(m,n) = \left\{\begin{aligned} x_{R(m,n)} &= x_{R(i,j)} + (i - m)\,p_x\\ y_{R(m,n)} &= y_{R(i,j)} + (j - n)\,p_y \end{aligned}\right.$$
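Read this way, the first relation estimates the horizontal and vertical pitches p_x and p_y of the elemental-image reference points from two known reference points R(i, j) and R(i', j'), and the second extrapolates the reference point R(m, n) of any other elemental image from R(i, j) and those pitches. The Python sketch below mirrors the two displayed relations under this interpretation; the function names and tuple-based interface are illustrative and not from the paper.

```python
def estimate_pitch(ref_ij, ref_ipjp, i, j, i_p, j_p):
    """Pitch (p_x, p_y) of the elemental-image reference points, computed from two
    known reference points R(i, j) and R(i', j'); requires i != i' and j != j'."""
    (x_ij, y_ij), (x_ipjp, y_ipjp) = ref_ij, ref_ipjp
    p_x = (x_ij - x_ipjp) / (i - i_p)
    p_y = (y_ij - y_ipjp) / (j - j_p)
    return p_x, p_y

def reference_point(ref_ij, i, j, m, n, p_x, p_y):
    """Reference point R(m, n) of elemental image (m, n), extrapolated from the
    known reference point R(i, j) using the pitches, as in the second equation."""
    x_ij, y_ij = ref_ij
    return x_ij + (i - m) * p_x, y_ij + (j - n) * p_y
```

In this sketch, `estimate_pitch` implements the first equation and `reference_point` the second, so the reference points of the whole elemental-image array can be generated from just two measured points.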
