Abstract

In the 3-D display field, super multi-view (SMV) technology has drawn keen attention owing to its advanced 3-D features: smooth motion parallax, wide depth of focus, and the potential to alleviate visual discomfort. Nevertheless, its applications are limited by narrow viewing lobes (VLs), which are unavoidable if an excessive loss of lateral resolution is to be prevented. To expand VLs, head-tracked multi-view display technologies have been developed for decades, but a restrictive viewing distance (VD) remains one of the most critical drawbacks of SMV technology. This paper proposes a novel method that adjusts the optimal VD (OVD) in flat-panel-based SMV displays without mechanical changes or loss of multi-view properties. To this end, it defines 3-D pixels from partially aperiodically located sets of subpixels and dynamically changes the view indices assigned to those 3-D pixels. As a result, the front and rear bounds of the initial VDs become adjustable, and the VLs can be expanded dynamically whenever a target OVD is renewed in real time using head tracking. In the experiments, the quantitative VL expansion and the qualitative quality of perceived images are compared, and the feasibility of supporting an adaptive VD in real time is further investigated. In addition, our prototype head-tracked SMV system is introduced as an advanced application toward an omnidirectionally free VL.
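The core idea of renewing view indices so that viewing zones re-converge at a tracked distance can be illustrated with a minimal sketch. This is not the paper's actual mapping: the display parameters (`N_VIEWS`, `SUBPIXEL_PITCH`, `GAP`) and the paraxial shift model below are illustrative assumptions only, showing how a subpixel's lateral offset and the panel-to-optics gap could shift its view index in proportion to `gap / OVD`.

```python
# Hedged sketch: per-subpixel view-index assignment that re-converges
# viewing zones at a tracked optimal viewing distance (OVD).
# All numeric parameters are illustrative assumptions, not the paper's values.

N_VIEWS = 8           # views per 3-D pixel (assumed)
SUBPIXEL_PITCH = 0.1  # mm, horizontal subpixel pitch (assumed)
GAP = 2.0             # mm, panel-to-lenticular gap (assumed)

def view_index(sub_x: int, panel_center_x_mm: float, ovd_mm: float) -> int:
    """Assign a view index so rays from this subpixel converge near
    `ovd_mm` rather than at a fixed design distance.  The subpixel's
    lateral offset from the panel centre shifts the index by an amount
    proportional to GAP / OVD (small-angle approximation)."""
    x_mm = sub_x * SUBPIXEL_PITCH - panel_center_x_mm
    # phase shift of the viewing-zone pattern at this panel position
    shift = x_mm * GAP / ovd_mm / SUBPIXEL_PITCH
    return int(round(sub_x + shift)) % N_VIEWS
```

In a head-tracked loop, `ovd_mm` would be updated from the tracker each frame and the index map recomputed, which is what makes the VD adaptive without any mechanical change to the optics.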

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement




M. Zwicker, S. Yea, A. Vetro, C. Forlines, W. Matusik, and H. Pfister, “Multi-view Video Compression for 3D Displays,” in Proceedings of Conference Record of the Forty-First Asilomar Conference on Signals, Systems and Computers (2007), pp. 1506–1510.
[Crossref]

Presti, L. L.

L. L. Presti and M. La Cascia, “3D skeleton-based human action classification: A survey,” Pattern Recognit. 53, 130–147 (2016).
[Crossref]

Ramachandra, V.

V. Ramachandra, K. Hirakawa, M. Zwicker, and T. Nguyen, “Spatioangular Prefiltering for Multiview 3D Displays,” IEEE Trans. Vis. Comput. Graphics 17(5), 642–654 (2011).
[Crossref]

Sang, X.

Servi, M.

M. Carfagni, R. Furferi, L. Governi, M. Servi, F. Uccheddu, and Y. Volpe, “On the performance of the Intel SR300 depth camera: metrological and critical characterization,” IEEE Sens. J. 17(14), 4508–4519 (2017).
[Crossref]

Sexton, I.

P. Surman, I. Sexton, K. Hopf, R. Bates, and W. Lee, “Head tracked 3D displays,” Springer LNCS 4105, 769–776 (2006).

I. Sexton and P. Surman, “Stereoscopic and Autostereoscopic Display Systems,” IEEE Signal Process. Mag. 16(3), 85–99 (1999).
[Crossref]

Shapiro, L. S.

D. Ezra, G. J. Woodgate, B. A. Omar, N. S. Holliman, J. Harrold, and L. S. Shapiro, “New autostereoscopic display system,” Proc. SPIE 2409, 31–41 (1995).
[Crossref]

Sheat, D. E.

M. R. Jewell, G. R. Chamberlin, D. E. Sheat, P. Cochrane, and D. J. McCartney, “3-D imaging systems for video communication applications,” Proc. SPIE 2409, 4–10 (1995).
[Crossref]

Shen, X.

Siena, F. L.

F. L. Siena, B. Byrom, P. Watts, and P. Breedon, “Utilising the Intel RealSense Camera for Measuring Health Outcomes in Clinical Research,” J. Med. Syst. 42(3), 53 (2018).
[Crossref] [PubMed]

Stern, A.

Sugiyama, H.

D. Suzuki, S. Hayashi, Y. Hyodo, S. Oka, T. Koito, and H. Sugiyama, “A wide view auto-stereoscopic 3D display with an eye-tracking system for enhanced horizontal viewing position and viewing distance,” J. Soc. Inf. Disp. 24(11), 657–668 (2016).
[Crossref]

Sun, X.W.

P. Surman and X.W. Sun, “Towards the reality of 3D imaging and display,” in Proceedings of IEEE 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) (2014), pp. 1–4.

Surman, P.

P. Surman, I. Sexton, K. Hopf, R. Bates, and W. Lee, “Head tracked 3D displays,” Springer LNCS 4105, 769–776 (2006).

I. Sexton and P. Surman, “Stereoscopic and Autostereoscopic Display Systems,” IEEE Signal Process. Mag. 16(3), 85–99 (1999).
[Crossref]

P. Surman and X.W. Sun, “Towards the reality of 3D imaging and display,” in Proceedings of IEEE 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) (2014), pp. 1–4.

Susami, K.

T. Honda, Y. Kajiki, K. Susami, T. Hamaguchi, T. Endo, T. Hatada, and T. Fujii, “Three-dimensional display technologies satisfying super multiview condition,” Proc. SPIE 10298, 102980B (2001).

Suzuki, D.

D. Suzuki, S. Hayashi, Y. Hyodo, S. Oka, T. Koito, and H. Sugiyama, “A wide view auto-stereoscopic 3D display with an eye-tracking system for enhanced horizontal viewing position and viewing distance,” J. Soc. Inf. Disp. 24(11), 657–668 (2016).
[Crossref]

Takaki, Y.

Y. Takaki, “Development of super multi-view displays,” ITE Trans. Media Technol. Appl. 2(1), 8–14 (2014).
[Crossref]

J. Nakamura, K. Tanaka, and Y. Takaki, “Increase in depth of field of eyes using reduced-view super multi-view displays,” Appl. Phys. Express 6(2), 022501 (2013).
[Crossref]

Y. Takaki, Y. Urano, and H. Nishio, “Motion-parallax smoothness of short-, medium-, and long-distance 3D image presentation using multi-view displays,” Opt. Express 20(24), 27180–27197 (2012).
[Crossref] [PubMed]

Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18(9), 8824–8835 (2010).
[Crossref] [PubMed]

Y. Takaki, “Multi-view 3-D display employing a flat-panel display with slanted pixel arrangement,” J. Soc. Inf. Disp. 18(7), 476–482 (2010).
[Crossref]

Y. Takaki, “High-Density Directional Display for Generating Natural Three-Dimensional Images,” Proc. IEEE 94(3), 654–663 (2006).
[Crossref]

Y. Takaki, “3D images with enhanced DOF produced by 128-directional display,” in Proceedings of the 13th International Display Workshops (2006), pp. 1909–1912.

Tanaka, K.

J. Nakamura, K. Tanaka, and Y. Takaki, “Increase in depth of field of eyes using reduced-view super multi-view displays,” Appl. Phys. Express 6(2), 022501 (2013).
[Crossref]

Tanimoto, M.

M. Tanimoto, “FTV standardization in MPEG,” in Proceedings of IEEE 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) (2014), pp. 1–4.

Uccheddu, F.

M. Carfagni, R. Furferi, L. Governi, M. Servi, F. Uccheddu, and Y. Volpe, “On the performance of the Intel SR300 depth camera: metrological and critical characterization,” IEEE Sens. J. 17(14), 4508–4519 (2017).
[Crossref]

Urano, Y.

Van Berkel, C.

C. Van Berkel and J. A. Clarke, “Characterisation and optimisation of 3D-LCD module design,” Proc. SPIE 3012, 179–186 (1997).
[Crossref]

Vetro, A.

M. Zwicker, S. Yea, A. Vetro, C. Forlines, W. Matusik, and H. Pfister, “Multi-view Video Compression for 3D Displays,” in Proceedings of Conference Record of the Forty-First Asilomar Conference on Signals, Systems and Computers (2007), pp. 1506–1510.
[Crossref]

Volpe, Y.

M. Carfagni, R. Furferi, L. Governi, M. Servi, F. Uccheddu, and Y. Volpe, “On the performance of the Intel SR300 depth camera: metrological and critical characterization,” IEEE Sens. J. 17(14), 4508–4519 (2017).
[Crossref]

Wang, K.

Watts, P.

F. L. Siena, B. Byrom, P. Watts, and P. Breedon, “Utilising the Intel RealSense Camera for Measuring Health Outcomes in Clinical Research,” J. Med. Syst. 42(3), 53 (2018).
[Crossref] [PubMed]

Woodfill, J. I.

L. Keselman, J. I. Woodfill, A. Grunnet-Jepsen, and A. Bhowmik, “Intel RealSense Stereoscopic Depth Cameras,” in Proceedings of Computer Vision and Pattern Recognition (cs.CV) (2017), arXiv:1705.05548v2.

Woodgate, G. J.

G. J. Woodgate, D. Ezra, J. Harrold, N. S. Holliman, G. R. Jones, and R. R. Moseley, “Observer-tracking autostereoscopic 3D display systems,” Proc. SPIE 3012, 187–199 (1997).
[Crossref]

D. Ezra, G. J. Woodgate, B. A. Omar, N. S. Holliman, J. Harrold, and L. S. Shapiro, “New autostereoscopic display system,” Proc. SPIE 2409, 31–41 (1995).
[Crossref]

Xiao, X.

Xing, S.

Yamashita, T.

J. Arai, E. Nakasu, T. Yamashita, H. Hiura, M. Miura, T. Nakamura, and R. Funatsu, “Progress Overview of Capturing Method for Integral 3-D Imaging Displays,” Proc. IEEE 105(5), 837–849 (2017).
[Crossref]

Yan, B.

Yang, S.

Yea, S.

M. Zwicker, S. Yea, A. Vetro, C. Forlines, W. Matusik, and H. Pfister, “Multi-view Video Compression for 3D Displays,” in Proceedings of Conference Record of the Forty-First Asilomar Conference on Signals, Systems and Computers (2007), pp. 1506–1510.
[Crossref]

Yoo, K.H.

Yoon, K.-H.

K.-H. Yoon, M.-K. Kang, H. Lee, and S.-K. Kim, “Autostereoscopic 3D display system with dynamic fusion of the viewing zone under eye tracking: principles, setup, and evaluation,” Appl. Opt. 57(1), A101–A117 (2018).
[Crossref] [PubMed]

K.-H. Yoon and S.-K. Kim, “Expansion method of the three-dimensional viewing freedom of autostereoscopic 3D display with dynamic merged viewing zone (MVZ) under eye tracking,” Proc. SPIE 10219, 1021914 (2017).
[Crossref]

S.-K. Kim, K.-H. Yoon, S. K. Yoon, and H. Ju, “Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display,” Opt. Express 23(10), 13230–13244 (2015).
[Crossref] [PubMed]

S.-K. Kim and K.-H. Yoon, “Method of forming dynamic maximal viewing zone of autostereoscopic display apparatus,” U.S. patent application 15869504 (Jan. 12, 2018).

Yoon, S. K.

S. K. Yoon, S. Khym, H. W. Kim, and S.-K. Kim, “Variable parallax barrier spacing in autostereoscopic displays,” Opt. Commun. 370, 319–326 (2016).
[Crossref]

S.-K. Kim, K.-H. Yoon, S. K. Yoon, and H. Ju, “Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display,” Opt. Express 23(10), 13230–13244 (2015).
[Crossref] [PubMed]

Yoon, S.K.

S. K. Yoon and S.-K. Kim, “Measurement method with moving image sensor in autostereoscopic display,” Proc. SPIE 8384, 83840Y (2012).
[Crossref]

Yu, X.

Yuan, J.

Zwicker, M.

V. Ramachandra, K. Hirakawa, M. Zwicker, and T. Nguyen, “Spatioangular Prefiltering for Multiview 3D Displays,” IEEE Trans. Vis. Comput. Graphics 17(5), 642–654 (2011).
[Crossref]

M. Zwicker, S. Yea, A. Vetro, C. Forlines, W. Matusik, and H. Pfister, “Multi-view Video Compression for 3D Displays,” in Proceedings of Conference Record of the Forty-First Asilomar Conference on Signals, Systems and Computers (2007), pp. 1506–1510.
[Crossref]

Appl. Opt. (1)

Appl. Phys. Express (1)

J. Nakamura, K. Tanaka, and Y. Takaki, “Increase in depth of field of eyes using reduced-view super multi-view displays,” Appl. Phys. Express 6(2), 022501 (2013).
[Crossref]

Computer (1)

N. A. Dodgson, “Autostereoscopic 3D Displays,” Computer 38(8), 31–36 (2005).
[Crossref]

IEEE J. Sel. Top. Signal Process. (1)

P. Carballeira, J. Gutiérrez, F. Morán, J. Cabrera, F. Jaureguizar, and N. García, “MultiView Perceptual Disparity Model for Super MultiView Video,” IEEE J. Sel. Top. Signal Process. 11(1), 113–124 (2017).
[Crossref]

IEEE Sens. J. (1)

M. Carfagni, R. Furferi, L. Governi, M. Servi, F. Uccheddu, and Y. Volpe, “On the performance of the Intel SR300 depth camera: metrological and critical characterization,” IEEE Sens. J. 17(14), 4508–4519 (2017).
[Crossref]

IEEE Signal Process. Mag. (1)

I. Sexton and P. Surman, “Stereoscopic and Autostereoscopic Display Systems,” IEEE Signal Process. Mag. 16(3), 85–99 (1999).
[Crossref]

IEEE Trans. Vis. Comput. Graphics (2)

H. Liao, T. Dohi, and K. Nomura, “Autostereoscopic 3D Display with Long Visualization Depth using Referential Viewing Area based Integral Photography,” IEEE Trans. Vis. Comput. Graphics 17(11), 1690–1701 (2011).
[Crossref]

V. Ramachandra, K. Hirakawa, M. Zwicker, and T. Nguyen, “Spatioangular Prefiltering for Multiview 3D Displays,” IEEE Trans. Vis. Comput. Graphics 17(5), 642–654 (2011).
[Crossref]

ITE Trans. Media Technol. Appl. (1)

Y. Takaki, “Development of super multi-view displays,” ITE Trans. Media Technol. Appl. 2(1), 8–14 (2014).
[Crossref]

J. Inf. Disp. (1)

J.-H. Park, “Recent progress in computer-generated holography for three-dimensional scenes,” J. Inf. Disp. 18(1), 1–12 (2016).
[Crossref]

J. Med. Syst. (1)

F. L. Siena, B. Byrom, P. Watts, and P. Breedon, “Utilising the Intel RealSense Camera for Measuring Health Outcomes in Clinical Research,” J. Med. Syst. 42(3), 53 (2018).
[Crossref] [PubMed]

J. Soc. Inf. Disp. (2)

D. Suzuki, S. Hayashi, Y. Hyodo, S. Oka, T. Koito, and H. Sugiyama, “A wide view auto-stereoscopic 3D display with an eye-tracking system for enhanced horizontal viewing position and viewing distance,” J. Soc. Inf. Disp. 24(11), 657–668 (2016).
[Crossref]

Y. Takaki, “Multi-view 3-D display employing a flat-panel display with slanted pixel arrangement,” J. Soc. Inf. Disp. 18(7), 476–482 (2010).
[Crossref]

J. Vision (1)

D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence–accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vision 8(3), 33 (2008).
[Crossref]

Opt. Commun. (1)

S. K. Yoon, S. Khym, H. W. Kim, and S.-K. Kim, “Variable parallax barrier spacing in autostereoscopic displays,” Opt. Commun. 370, 319–326 (2016).
[Crossref]

Opt. Eng. (1)

D. Kang, J. Kim, and S.-K. Kim, “Affine registration of three-dimensional point sets for improving the accuracy of eye position trackers,” Opt. Eng. 56(4), 043105 (2017).
[Crossref]

Opt. Express (5)

Pattern Recognit. (1)

L. L. Presti and M. La Cascia, “3D skeleton-based human action classification: A survey,” Pattern Recognit. 53, 130–147 (2016).
[Crossref]

Proc. IEEE (2)

J. Arai, E. Nakasu, T. Yamashita, H. Hiura, M. Miura, T. Nakamura, and R. Funatsu, “Progress Overview of Capturing Method for Integral 3-D Imaging Displays,” Proc. IEEE 105(5), 837–849 (2017).
[Crossref]

Y. Takaki, “High-Density Directional Display for Generating Natural Three-Dimensional Images,” Proc. IEEE 94(3), 654–663 (2006).
[Crossref]

Proc. SPIE (1)

S. K. Yoon and S.-K. Kim, “Measurement method with moving image sensor in autostereoscopic display,” Proc. SPIE 8384, 83840Y (2012).
[Crossref]

Proc. SPIE (10)

K.-H. Yoon and S.-K. Kim, “Expansion method of the three-dimensional viewing freedom of autostereoscopic 3D display with dynamic merged viewing zone (MVZ) under eye tracking,” Proc. SPIE 10219, 1021914 (2017).
[Crossref]

N. A. Dodgson, “Analysis of the viewing zone of multi-view autostereoscopic displays,” Proc. SPIE 4660, 254–265 (2002).
[Crossref]

D. Ezra, G. J. Woodgate, B. A. Omar, N. S. Holliman, J. Harrold, and L. S. Shapiro, “New autostereoscopic display system,” Proc. SPIE 2409, 31–41 (1995).
[Crossref]

P. Harman, “Autostereoscopic display system,” Proc. SPIE 2653, 56–64 (1996).
[Crossref]

G. J. Woodgate, D. Ezra, J. Harrold, N. S. Holliman, G. R. Jones, and R. R. Moseley, “Observer-tracking autostereoscopic 3D display systems,” Proc. SPIE 3012, 187–199 (1997).
[Crossref]

N. A. Dodgson, “On the number of viewing zones required for head-tracked autostereoscopic display,” Proc. SPIE 6055, 60550Q (2006).
[Crossref]

C. Van Berkel and J. A. Clarke, “Characterisation and optimisation of 3D-LCD module design,” Proc. SPIE 3012, 179–186 (1997).
[Crossref]

J. B. Eichenlaub, “An autostereoscopic display with high brightness and power efficiency,” Proc. SPIE 2177, 4–15 (1994).
[Crossref]

M. R. Jewell, G. R. Chamberlin, D. E. Sheat, P. Cochrane, and D. J. McCartney, “3-D imaging systems for video communication applications,” Proc. SPIE 2409, 4–10 (1995).
[Crossref]

N. A. Dodgson, “Variation and extrema of human interpupillary distance,” Proc. SPIE 5291, 36–46 (2004).
[Crossref]

Proc. SPIE (1)

T. Honda, Y. Kajiki, K. Susami, T. Hamaguchi, T. Endo, T. Hatada, and T. Fujii, “Three-dimensional display technologies satisfying super multiview condition,” Proc. SPIE 10298, 102980B (2001).

Springer LNCS (1)

P. Surman, I. Sexton, K. Hopf, R. Bates, and W. Lee, “Head tracked 3D displays,” Springer LNCS 4105, 769–776 (2006).

Other (11)

ISO/IEC SC29WG11, “Experimental Framework for FTV,” in Proceedings of 110th Moving Picture Experts Group Meeting (Strasbourg, France, Oct. 2014), N15048.

M. Zwicker, S. Yea, A. Vetro, C. Forlines, W. Matusik, and H. Pfister, “Multi-view Video Compression for 3D Displays,” in Proceedings of Conference Record of the Forty-First Asilomar Conference on Signals, Systems and Computers (2007), pp. 1506–1510.
[Crossref]

M. Tanimoto, “FTV standardization in MPEG,” in Proceedings of IEEE 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) (2014), pp. 1–4.

ISO/IEC JTC1/SC29/WG11, “Call for evidence on free-viewpoint television: Super-multiview and free navigation,” in Proceedings of 112th Moving Picture Experts Group Meeting (Warsaw, Poland, 2015), N15348.

T. Okoshi, Three-Dimensional Imaging Techniques (Academic, 1976).

P. Surman and X.W. Sun, “Towards the reality of 3D imaging and display,” in Proceedings of IEEE 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) (2014), pp. 1–4.

ISO/IEC JTC1/SC29/WG11, “3D FTV/SMV visualization and evaluation lab,” in Proceedings of 112th Moving Picture Experts Group Meeting (Warsaw, Poland, 2015), M36577.

Y. Takaki, “3D images with enhanced DOF produced by 128-directional display,” in Proceedings of the 13th International Display Workshops (2006), pp. 1909–1912.

S.-K. Kim and K.-H. Yoon, “Method of forming dynamic maximal viewing zone of autostereoscopic display apparatus,” U.S. patent application 15869504 (Jan. 12, 2018).

G. Chen, C. Ma, Z. Fan, X. Cui, and H. Liao, “Real-time Lens based Rendering Algorithm for Super-multiview Integral Photography without Image Resampling,” IEEE Trans. Vis. Comput. Graphics (to be published).

L. Keselman, J. I. Woodfill, A. Grunnet-Jepsen, and A. Bhowmik, “Intel RealSense Stereoscopic Depth Cameras,” in Proceedings of Computer Vision and Pattern Recognition (cs.CV) (2017), arXiv:1705.05548v2.

Supplementary Material (6)

» Visualization 1       This clip shows the composition of the test contents and its directional images perceptually observed in the test displays when a built-in stereo camera moves horizontally.
» Visualization 2       This clip shows the details of the experimental setups and the real-time performance of the proposed method. Although flipping images start to appear at the front bound of the original SMV display, they disappear in the modified SMV display under the proposed method.
» Visualization 3       This clip explains the concept of an advanced viewing-lobe-free SMV display based on the proposed dynamic subpixel rearrangement method.
» Visualization 4       This clip proves the feasibility of the proposed method for a viewing-lobe-free SMV 3D display when a viewer is horizontally moving in a practical environment.
» Visualization 5       This clip proves the feasibility of the proposed method for a viewing-lobe-free SMV 3D display when a viewer is vertically moving in a practical environment.
» Visualization 6       This clip proves the feasibility of the proposed method for a viewing-lobe-free SMV 3D display when a viewer is depth-directionally moving in a practical environment.



Figures (13)

Fig. 1
Fig. 1 Configurations of the viewing lobe (VL) in multi-view autostereoscopic displays. Three different types can be designed according to the optimal viewing distance (OVD) (do), the screen width (wl), and a target width of the VL at the OVD (wo): (a) VL wider than the screen, (b) VL the same width as the screen, and (c) VL narrower than the screen.
Fig. 2
Fig. 2 (a) Example viewing zones of a five-view display, (b) conceptual images for the left and right eyes observable at each position in (a), in which the numbers in each image stand for view indices among the five views, and (c) perceptual images of our 63-view SMV display captured at the OVD and the closest VD.
Fig. 3
Fig. 3 Configurations of (a) the initial viewing lobe (red lines) and (b)(c) the proposed dynamic viewing lobes (blue lines). If viewers are outside the valid viewing distance (P1 and P2), (d)(f) the perceptual images contain flipping images because of wrong viewpoint projections from adjacent 3-D pixels. However, the proposed method can construct (e)(g) natural composite-view images at the same VDs (P′1 and P′2) without any noticeable artifacts.
Fig. 4
Fig. 4 Schematic explanation of determining the position of the view-index update, using a 5-view configuration with a parallax barrier. The cases with the viewer (a) in front of the OVD and (b) behind the OVD are depicted.
Fig. 5
Fig. 5 Schematic explanation of renewing the subpixels that construct each 3-D pixel, using the same 5-view configuration.
Fig. 6
Fig. 6 Optical characteristics measured by the system used in [45]: (a)(b) luminance distributions of a single view on the xz-plane at the OVDs, and (c)(d) luminance distributions of all views at the OVDs for the VPIs of Display 1 and Display 2, respectively.
Fig. 7
Fig. 7 (a) Apparatus of our test displays and RGB+D sensors used for head-tracking, (b) composition of our test 3-D contents based on computer graphics, (c) a multi-view input image for Display 1, and (d) its perceptual images, captured by a camera moving from side to side at the initial OVD (see Visualization 1).
Fig. 8
Fig. 8 The verification system for the eye-tracked autostereoscopic display [39]. A test display renders 3-D contents while tracking the viewer’s face model, and the built-in stereo camera captures perceptual images of the test display while moving in the xyz-directions.
Fig. 9
Fig. 9 Quantitative comparison: the simulated and empirical viewing lobes (VLs) of (a) Display 1 and (b) Display 2. The simulations were conducted from the parameters in Table 1. The empirical VLs were measured by the verification system in Fig. 8.
Fig. 10
Fig. 10 Qualitative comparison of perceptual images captured at the four sample positions in Fig. 9: ①, ②, ③, and ④, close to the initial bounds (see Visualization 2).
Fig. 11
Fig. 11 Feasibility evaluation of operating the dynamically expandable viewing lobe (DEVL) in real time. Perceptual images were captured in the working range of the verification system.
Fig. 12
Fig. 12 An application of the proposed method toward viewing-lobe-free and visually comfortable 3-D perception. Our prototype head-tracked SMV display, combined with conventional head-tracking technology, allows a viewer to see natural 3-D contents at any position in the xyz-direction, and it runs at 22–24 fps, including the function of vertical view changes, in a practical environment (see Visualization 3, Visualization 4, Visualization 5, and Visualization 6).
Fig. 13
Fig. 13 Valid VD and VL of our prototype VL-free SMV display system using head-tracking. In practice, VD and VL of the proposed method are restrained because of FOVs and working ranges of tracking-sensors.

Tables (1)

Tables Icon

Table 1 Specifications of the test displays

Equations (2)


\[
x = \pm\left(\frac{z}{d_0}\,\frac{w_0 - w_l}{2} + \frac{w_l}{2}\right), \qquad
x = \pm\left(\frac{z}{d_0}\,\frac{w_0 + w_l}{2} - \frac{w_l}{2}\right),
\]
\[
x = \pm\,\frac{z}{d_0}\,\frac{w_0}{2},
\]
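The equations above describe the boundary lines of the viewing lobe in the notation of Fig. 1 (d0 = optimal viewing distance, wl = screen width, w0 = target VL width at the OVD): the lobe is bounded by the lines joining the screen edges at z = 0 to the VL edges at z = d0. A minimal numerical sketch of that geometry follows; the function names and example parameter values are our own illustration, not taken from the paper.

```python
# Sketch of the viewing-lobe (VL) boundary geometry of Fig. 1 / Eq. (1),
# assuming: d0 = optimal viewing distance, wl = screen width, and
# w0 = target VL width at the OVD (all in the same length unit, e.g. mm).

def vl_half_width(z: float, d0: float, wl: float, w0: float) -> float:
    """Half-width of the viewing lobe at distance z from the screen.

    Two line pairs bound the lobe: the "same-side" lines joining each
    screen edge (+/- wl/2 at z = 0) to the same-side VL edge (+/- w0/2 at
    z = d0), and the "crossing" lines joining opposite edges. The tighter
    (smaller) bound limits the lobe at each depth; a negative result
    means the lobe does not exist at that depth.
    """
    outer = (z / d0) * (w0 - wl) / 2 + wl / 2   # same-side edge lines
    inner = (z / d0) * (w0 + wl) / 2 - wl / 2   # crossing edge lines
    return min(outer, inner)

def vl_type(wl: float, w0: float) -> str:
    """Classify the three Fig. 1 configurations by w0 vs. wl."""
    if w0 > wl:
        return "(a) VL wider than the screen"
    if w0 == wl:
        return "(b) VL the same width as the screen"
    return "(c) VL narrower than the screen"

# Example: a 600 mm screen viewed at d0 = 1000 mm with a 400 mm target VL.
print(vl_type(wl=600.0, w0=400.0))
# At the OVD both bounds meet at w0/2:
print(vl_half_width(z=1000.0, d0=1000.0, wl=600.0, w0=400.0))  # 200.0
```

At z = d0 both line pairs pass through the VL edges, so the half-width reduces to w0/2, consistent with the design target.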
