Abstract

A novel model for three-dimensional (3D) interactive control of the viewing parameters of integral imaging systems is established in this paper. Specifically, transformation matrices are derived in an extended homogeneous light field coordinate space, based on the interactive control requirements of integral imaging displays. In this model, new elemental images can be synthesized directly from those captured in the recording process to display 3D images with the expected viewing parameters, and no extra geometric information about the 3D scene is required in the synthesis process. Computer simulation and optical experimental results show that reconstructed 3D scenes with depth control, lateral translation, and rotation can be achieved.

© 2012 OSA




Supplementary Material (4)

» Media 1: MOV (2690 KB)     
» Media 2: MOV (2529 KB)     
» Media 3: MOV (2667 KB)     
» Media 4: MOV (3028 KB)     



Figures (13)

Fig. 1

Scheme for the conventional two-step pickup method.

Fig. 2

Spatial coordinate system and the light field coordinate space.

Fig. 3

Scheme for the HLFT model.

Fig. 4

Scheme of the experimental setup and the picked up elemental images.

Fig. 5

Optical experimental setup. The EIA is behind the lenslet array, and a light source is behind the EIA to provide the illumination. A digital camera is placed on the slide rail in front of the lenslet array to take photos of the reconstructed 3D images.

Fig. 6

Array of 27 × 27 elemental images obtained by the HLFT model.

Fig. 7

Different perspectives of the 3D scene reconstructed from Fig. 6. (a) Simulation, (b) Optical reconstruction. The video (Media 1) shows the movie obtained in the optical experiment; the viewing angle ranges from −4.5° to 4.5°.

Fig. 8

27 × 27 EIA created by the HLFT model to display the 3D scene with a translation vector vt = (0 mm, 0 mm, 180 mm, 1).

Fig. 9

Different perspectives of the 3D scene reconstructed from Fig. 8. (a) Simulation, (b) Optical reconstruction. The video (Media 2) shows the movie obtained in the optical experiment; the viewing angle ranges from −4.5° to 4.5°.

Fig. 10

27 × 27 EIA created by the proposed model with vt = (20 mm, 30 mm, 150 mm, 1) and vr = (−20°, 30°, 0°, 1).

Fig. 11

Different perspectives of the 3D scene reconstructed from Fig. 10. (a) Simulation, (b) Optical reconstruction. The video (Media 3) shows the movie obtained in the optical experiment; the viewing angle ranges from −4.5° to 4.5°.

Fig. 12

27 × 27 EIA created by the proposed model with vt = (−40 mm, 0 mm, 150 mm, 1) and vr = (0°, −35°, 0°, 1).

Fig. 13

Different perspectives of the 3D scene reconstructed from Fig. 12. (a) Simulation, (b) Optical reconstruction. The video (Media 4) shows the movie obtained in the optical experiment; the viewing angle ranges from −4.5° to 4.5°.

Equations (11)

Equations on this page are rendered with MathJax.

$$\mathbf{t}_o = C \times \mathbf{t}_s$$
$$\mathbf{t}_o = C_t \times \mathbf{t}_s$$
$$\begin{cases} x_s = x_o - x \\ y_s = y_o - y \\ z_s = z_o - z \end{cases}$$
$$C_t = \begin{pmatrix} 1 & 0 & -z/g_s & 0 & -x \\ 0 & 1 & 0 & -z/g_s & -y \\ 0 & 0 & g_o/g_s & 0 & 0 \\ 0 & 0 & 0 & g_o/g_s & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
$$\begin{cases} x_s = x_o \\ y_s = y_o\cos\alpha - z_o\sin\alpha \\ z_s = z_o\cos\alpha + y_o\sin\alpha \end{cases}$$
$$C_x = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & \dfrac{1}{\cos\alpha} & 0 & 0 & 0 \\ 0 & 0 & \dfrac{g_o}{g_s\cos\alpha} & 0 & 0 \\ 0 & 0 & 0 & \dfrac{g_o\cos\alpha}{g_s\cos\alpha - \sin\alpha} & g_o\tan\alpha \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
$$\begin{cases} x_s = x_o\cos\beta + z_o\sin\beta \\ y_s = y_o \\ z_s = z_o\cos\beta - x_o\sin\beta \end{cases}$$
$$\begin{cases} x_s = x_o\cos\gamma - y_o\sin\gamma \\ y_s = x_o\sin\gamma + y_o\cos\gamma \\ z_s = z_o \end{cases}$$
$$C_z = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 & 0 & 0 \\ \sin\gamma & \cos\gamma & 0 & 0 & 0 \\ 0 & 0 & g_o\cos\gamma/g_s & -g_o\sin\gamma/g_s & 0 \\ 0 & 0 & g_o\sin\gamma/g_s & g_o\cos\gamma/g_s & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
$$\begin{cases} x_s = j \times p_s \\ y_s = i \times p_s \\ u_s = l \times pxl_s \\ v_s = k \times pxl_s \end{cases}$$
$$\begin{cases} q = \mathrm{Round}(y_o/p_o) \\ r = \mathrm{Round}(x_o/p_o) \\ s = \mathrm{Round}(v_o/pxl_o) \\ t = \mathrm{Round}(u_o/pxl_o) \end{cases}$$
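The pixel-mapping pipeline implied by the equations above (pixel indices → homogeneous light field coordinates → matrix transform → rounded target indices) can be sketched in code. This is a minimal illustration, not the authors' implementation: the ray parameterization (x, y, u, v, 1), the sign convention inside C_t, and all parameter names (g_s, g_o, the pitch p and pixel size pxl) are assumptions based on the notation above.

```python
import numpy as np

def translation_matrix(x, y, z, g_s, g_o):
    """Light-field translation matrix C_t for homogeneous ray
    coordinates (x, y, u, v, 1); the signs are an assumed convention."""
    return np.array([
        [1.0, 0.0, -z / g_s, 0.0,      -x],
        [0.0, 1.0, 0.0,      -z / g_s, -y],
        [0.0, 0.0, g_o / g_s, 0.0,     0.0],
        [0.0, 0.0, 0.0,      g_o / g_s, 0.0],
        [0.0, 0.0, 0.0,      0.0,      1.0],
    ])

def map_pixel(i, j, k, l, p_s, pxl_s, p_o, pxl_o, C):
    """Map a source pixel (elemental image i, j; pixel k, l) to target
    indices (q, r, s, t) through a light-field transform C."""
    # pixel indices -> homogeneous light-field coordinates of the source ray
    t_s = np.array([j * p_s, i * p_s, l * pxl_s, k * pxl_s, 1.0])
    # transform the ray: t_o = C x t_s
    x_o, y_o, u_o, v_o, _ = C @ t_s
    # round back to elemental-image and pixel indices
    return (round(y_o / p_o), round(x_o / p_o),
            round(v_o / pxl_o), round(u_o / pxl_o))

# an identity transform (zero translation, equal gaps) leaves indices unchanged
print(map_pixel(2, 3, 4, 5, 1.0, 1.0, 1.0, 1.0, np.eye(5)))
```

In a full synthesis loop one would run `map_pixel` over every source pixel and copy its value into the target EIA at (q, r, s, t), discarding rays that fall outside the display aperture.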
