Abstract

Integral imaging (II) is an important 3D imaging technology. Reconstructing the 3D information of viewed objects requires modeling and calibrating the optical pickup process of II. This work focuses on the modeling and calibration of an II system consisting of a lenslet array, an imaging lens, and a charge-coupled device camera. Most existing work on such systems assumes a pinhole array model (PAM). In this work, we explore a generic camera model that accommodates more generality. This is an empirical model based on measurements, and we constructed a setup for its calibration. Experimental results show a significant difference between the generic camera model and the PAM. Images of planar patterns and 3D objects were computationally reconstructed with the generic camera model. Compared with images reconstructed using the PAM, they exhibit higher fidelity and preserve more high-spatial-frequency components. To the best of our knowledge, this is the first attempt at applying a generic camera model to an II system.

© 2011 Optical Society of America


References


  1. F. Okano, H. Hoshino, J. Arai, M. Yamada, and I. Yuyama, “Three-dimensional television system based on integral photography,” in Three Dimensional Television, Video, and Display Technologies, B. Javidi and F. Okano, eds. (Springer, 2002), pp. 101-122.
  2. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591-607 (2006).
  3. S. Min, Y. Kim, B. Lee, and B. Javidi, “Integral imaging using multiple display devices,” in Three Dimensional Imaging, Visualization, and Display, B. Javidi, F. Okano, and J. Sun, eds. (Springer, 2009), pp. 41-54.
  4. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real time integral photography for three-dimensional images,” Appl. Opt. 37, 2034-2045 (1998).
  5. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27, 324-326 (2002).
  6. J.-S. Jang, Y. S. Oh, and B. Javidi, “Spatiotemporally multiplexed integral imaging projector for large-scale high-resolution three-dimensional display,” Opt. Express 12, 557-563 (2004).
  7. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Multifacet structure of observed reconstructed integral images,” J. Opt. Soc. Am. A 22, 597-603 (2005).
  8. Y. Frauel and B. Javidi, “Digital three-dimensional image correlation by use of computer-reconstructed integral imaging,” Appl. Opt. 41, 5488-5496 (2002).
  9. A. Stern and B. Javidi, “Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42, 7036-7042 (2003).
  10. S. H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12, 483-491 (2004).
  11. D. H. Shin and H. Yoo, “Image quality enhancement in 3D computational integral imaging by use of interpolation methods,” Opt. Express 15, 12039-12049 (2007).
  12. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157-159 (2001).
  13. V. Vaish, B. Wilburn, N. Joshi, and M. Levoy, “Using plane + parallax for calibrating dense camera arrays,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2004), pp. 2-9.
  14. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Technical Report CTSR 2005-02 (2005).
  15. B. Tavakoli, M. Daneshpanah, B. Javidi, and E. Watson, “Performance of 3D integral imaging with position uncertainty,” Opt. Express 15, 11889-11902 (2007).
  16. J. Arai, M. Okui, M. Kobayashi, and F. Okano, “Geometrical effects of positional errors in integral photography,” J. Opt. Soc. Am. A 21, 951-958 (2004).
  17. M. D. Grossberg and S. K. Nayar, “A general imaging model and a method for finding its parameters,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2001), pp. 108-115.
  18. M. D. Grossberg and S. K. Nayar, “The raxel imaging model and ray-based calibration,” Int. J. Comput. Vis. 61, 119-137 (2005).
  19. P. Sturm and S. Ramalingam, “A generic concept for camera calibration,” in Proceedings of the Eighth European Conference on Computer Vision, T. Pajdla and J. Matas, eds. (Springer, 2004), pp. 1-13.
  20. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing, M. S. Landy and J. A. Movshon, eds. (MIT, 1991), pp. 3-20.
  21. Z. Song and R. Chung, “Use of LCD panel for calibrating structured-light-based range sensing system,” IEEE Trans. Instrum. Meas. 57, 2623-2630 (2008).
  22. J.-N. Ouellet and P. Hebert, “A simple operator for very precise estimation of ellipses,” in Proceedings of the Fourth Canadian Conference on Computer and Robot Vision (IEEE, 2007), pp. 21-28.
  23. R. I. Hartley and A. Zisserman, “Least-squares minimization,” in Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge University, 2003), pp. 588-596.
  24. C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of the Fourth Alvey Vision Conference, C. J. Taylor, ed. (AVC, 1988), pp. 147-151.




Figures (12)

Fig. 1

Configuration of the IIS used in this work.

Fig. 2

CIIR by using a PAM.

Fig. 3

Integral imaging system and the calibration setup.

Fig. 4

Illustration of the generic camera model.

Fig. 5

Illustration of the calibration setup.

Fig. 6

Experiment of CIIR for 2D planar pattern displayed on an LCD: (a) experiment setup, (b) EIA image, (c) GCM-CIIR image, (d) corners in (c), (e) PAM-CIIR image, and (f) corners in (e).

Fig. 7

Comparison of the DFT magnitudes of the GCM-CIIR image and the PAM-CIIR image.

Fig. 8

Experiment of CIIR for a 3D object—a space shuttle model. (a) Experiment setup. (b) EIA image. (c)–(h) GCM-CIIR images from different viewpoints. A GCM-CIIR image (i) is compared with a PAM-CIIR image (j).

Fig. 9

GCM-CIIR with different object distances.

Fig. 10

Visualization of the generic camera model.

Fig. 11

Difference between the GCM and PAM.

Fig. 12

(a) Experiment setup for 3D reconstruction. (b) A top-down view of the setup in (a). A ping-pong ball is placed at the points marked a1–a9 and b1–b9. (c) 3D reconstruction result compared with ground truth.

Equations (4)


$$E(x, y) = c \iint \delta\left(q_{\varphi_1} - \hat{q}_{\varphi_1},\, q_{\varphi_2} - \hat{q}_{\varphi_2}\right) \Phi\left(q_{\varphi_1}, q_{\varphi_2}\right) \mathrm{d}q_{\varphi_1}\, \mathrm{d}q_{\varphi_2}.$$
$$L_{x,y}(r) = \hat{q}_{\varphi_1}(x, y) + r\left(\hat{q}_{\varphi_2}(x, y) - \hat{q}_{\varphi_1}(x, y)\right).$$
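In the generic camera model, each pixel's ray is stored as a pair of measured intersection points with the two calibration planes rather than derived from pinhole geometry. A minimal sketch of such a per-pixel ray table; the plane positions (z = 0 and z = 10), array size, and random "measurements" are illustrative assumptions, not the paper's calibration data:

```python
import numpy as np

# Hypothetical per-pixel calibration tables: the ray of pixel (x, y)
# intersects plane phi_1 (z = 0) at q1[y, x] and plane phi_2 (z = 10)
# at q2[y, x]. In the real system these points come from measurements.
H, W = 4, 4                      # tiny sensor for illustration
rng = np.random.default_rng(0)
q1 = np.dstack([rng.uniform(-1, 1, (H, W)),
                rng.uniform(-1, 1, (H, W)),
                np.zeros((H, W))])
q2 = np.dstack([rng.uniform(-1, 1, (H, W)),
                rng.uniform(-1, 1, (H, W)),
                np.full((H, W), 10.0)])

def ray(x, y, r):
    """Point on the ray of pixel (x, y): L_{x,y}(r) = q1 + r * (q2 - q1)."""
    return q1[y, x] + r * (q2[y, x] - q1[y, x])
```

Setting r = 0 or r = 1 recovers the two measured calibration points, and intermediate r values trace the ray through the scene volume.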
$$I_{\text{output}}^{(m)}\left(x_{\Psi}^{(m)}+\Delta x,\, y_{\Psi}^{(m)}+\Delta y\right) = I_{\text{output}}^{(m-1)}\left(x_{\Psi}^{(m)}+\Delta x,\, y_{\Psi}^{(m)}+\Delta y\right) \cdot \left(1 - G(\Delta x, \Delta y)\right) + E\left(x^{(m)}+\Delta x,\, y^{(m)}+\Delta y\right) \cdot G(\Delta x, \Delta y).$$
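The superposition step above blends each back-projected elemental-image patch E into the running output image with a per-pixel Gaussian weight G(Δx, Δy). A minimal sketch of one such blending step; the window size, σ, and patch placement are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gaussian_window(size, sigma):
    """2D Gaussian weight window G(dx, dy), peak-normalized to 1 at center."""
    ax = np.arange(size) - (size - 1) / 2.0
    gx, gy = np.meshgrid(ax, ax)
    return np.exp(-(gx**2 + gy**2) / (2.0 * sigma**2))

def blend_patch(output, E, top, left, G):
    """One superposition step: out = out * (1 - G) + E * G over the patch."""
    h, w = E.shape
    region = output[top:top + h, left:left + w]
    output[top:top + h, left:left + w] = region * (1.0 - G) + E * G
    return output

out = np.zeros((8, 8))
G = gaussian_window(4, 1.0)
out = blend_patch(out, np.ones((4, 4)), 2, 2, G)  # blend one unit patch
```

Pixels far from the patch center receive small weights, so overlapping elemental-image contributions are feathered together rather than overwritten.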
$$\tilde{q} = \arg\min_{q} \sum_{(x^{(n)},\, y^{(n)}) \in B_q} D\left(q,\, L_{x^{(n)}, y^{(n)}}\right).$$
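The estimator above leaves the distance measure D(q, L) abstract; if D is taken as the squared Euclidean point-to-ray distance, the minimizer over the ray bundle B_q has a closed form. A sketch under that assumption (the helper `triangulate` is hypothetical, not the paper's code):

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares 3D point minimizing the summed squared distance to a
    bundle of rays, each given by an origin o_i and direction d_i.
    Solves (sum_i (I - d_i d_i^T)) q = sum_i (I - d_i d_i^T) o_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the ray's normal space
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two rays that intersect at (1, 1, 1):
origins = [np.array([0.0, 1.0, 1.0]), np.array([1.0, 0.0, 1.0])]
dirs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
q = triangulate(origins, dirs)
```

With noisy calibrated rays the system stays well conditioned as long as the bundle contains rays from sufficiently different directions.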
