Abstract

In this paper, we propose a novel method to construct an optical see-through light-field near-eye display (OST LF-NED) using a discrete lenslet array (DLA). The DLA acts as a spatial light modulator (SLM) that generates a dense light field of three-dimensional (3-D) scenes within the eyebox of the system and provides correct focus cues to the user. A corresponding light-field image rendering method is also proposed and demonstrated. Light emitted from real objects passes successively through the transparent region of the display panel and the planar area of the DLA without being redirected, so the user retains a clear view of the real scene as well as the virtual information. The stray light that may degrade image quality is analyzed in detail. The experimental results show that the proposed method provides correct depth perception of the virtual information in augmented reality (AR) applications.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


Corrections

18 July 2018: A typographical correction was made to the caption of Fig. 5.





Supplementary Material (2)

Visualization 1: Images seen through an OLED-based light-field near-eye display, focusing on the droid and the fighter. The items that are not in focus appear blurry.
Visualization 2: Images seen through a transparent film-based light-field near-eye display, focusing on the droid and the foreground, or the fighter and the background. The items that are not in focus appear blurry.



Figures (8)

Fig. 1 Principle of the OST LF-NED system. (a) The screen light path. (b) The real-world light path.
Fig. 2 (a) The geometry of a stray-light-free system. The eyeboxes defined by the screen light and by the real-world light coincide. The thick red line depicts an eyebox boundary defined by the chief ray of a marginal field. In this case, real-world light (thin red lines) may enter the eyebox and form stray light. (b) The practical system with stray light. D_E is the eyebox that contains the complete light field; D_E′ is the region free of stray light, called the clear area of the eyebox. The region between them, shown as the red shaded area, is where the stray light exists.
Fig. 3 Simulation of the stray light for different combinations of eye relief and eyebox size.
Fig. 4 The projections built on the DLA. The axes of the projections intersect at the center of the eyebox, and each screen acts as a near clipping plane (see the rendering sketch after this figure list).
Fig. 5 (a) The 3D virtual scene, in which a droid (source model “R2-D2” by Eric Finn, licensed under CC-BY 3.0) is located about 0.3 m and a fighter (source model “Tie fighter” by Alberto Calvo, licensed under CC-BY 3.0) about 5 m from the camera array. (b) The rendered light-field image of the virtual scene. Vignetting is added to relieve the stray light from the edges of the elemental images.
Fig. 6 (a) A 3D model of the DLA. (b) The micro-OLED-based prototype. (c) Experimental setup of the film-based prototype.
Fig. 7 Images seen through the OLED-based prototype, focusing on (a) the nearer object at 0.3 m and (b) the farther object at 5 m. Items that are not in focus appear blurry (see Visualization 1).
Fig. 8 (a) A micrograph of the transparent film. (b) An image seen through the film, with a clear view of the real scene. The image appears slightly blurry because the lenslet array is uncoated. Correct focus cues are demonstrated by focusing on (c) the droid and the foreground, and (d) the fighter and the background (see Visualization 2).
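
As described in the Fig. 4 caption, the light-field image is rendered with one off-axis perspective projection per lenslet: the projection axis passes through the center of the eyebox and the screen serves as the near clipping plane. The asymmetric projection matrix has the form listed as Eq. (10) in the Equations section below. The following Python sketch builds a matrix of that form; the helper name and the sample parameter values are illustrative assumptions, and only the matrix layout follows Eq. (10).

```python
import numpy as np

def dla_projection_matrix(theta, aspect, xw_min, xw_max, yw_min, yw_max, s_z, t_z):
    """Asymmetric (off-axis) perspective projection in the form of Eq. (10).

    theta   : full vertical field-of-view angle of the frustum (radians)
    aspect  : frame aspect ratio (width / height)
    xw_*, yw_* : window bounds that shift the projection axis toward the
                 line connecting the eyebox center and the lenslet
    s_z, t_z   : terms that remap scene depth into clip-space depth
    All sample values used below are hypothetical.
    """
    cot = 1.0 / np.tan(theta / 2.0)
    return np.array([
        [cot / aspect, 0.0, (xw_min + xw_max) / 2.0, 0.0],
        [0.0,          cot, (yw_min + yw_max) / 2.0, 0.0],
        [0.0,          0.0, s_z,                     t_z],
        [0.0,          0.0, 1.0,                     0.0],
    ])

# Example: one projection per lenslet, axis through the eyebox center.
P = dla_projection_matrix(theta=np.deg2rad(30), aspect=1.0,
                          xw_min=-0.1, xw_max=0.3, yw_min=-0.2, yw_max=0.2,
                          s_z=1.0, t_z=-0.05)
print(P)
```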

Equations (11)


(1)  $D_E = \dfrac{L_E}{2f} D_L$
(2)  $I_n = \dfrac{D_E + t}{L_E} f + t$
(3)  $I_e = \dfrac{D_E + D_L}{L_E} f$
(4)  $I_e = \left( \dfrac{1}{2} + \dfrac{f}{L_E} \right) D_L$
(5)  $\tau = \dfrac{I_e}{I_e + I_n} = \dfrac{(2f + L_E)\, D_L}{2 (f + L_E)(D_L + t)}$
(6)  $I_s = \dfrac{D_E f}{L_E} - \dfrac{D_L}{2}$
(7)  $I_i = I_e - 2 I_s = \dfrac{D_L - D_E}{L_E} f + D_L$
(8)  $D_E' = \dfrac{L_E}{f} D_L - D_E$
(9)  $\dfrac{L_E}{D_E} > \dfrac{f}{D_L}$
(10)  $\begin{bmatrix} \dfrac{\cot(\theta/2)}{\mathrm{aspect}} & 0 & \dfrac{x_{w\,\mathrm{min}} + x_{w\,\mathrm{max}}}{2} & 0 \\ 0 & \cot(\theta/2) & \dfrac{y_{w\,\mathrm{min}} + y_{w\,\mathrm{max}}}{2} & 0 \\ 0 & 0 & s_z & t_z \\ 0 & 0 & 1 & 0 \end{bmatrix}$
(11)  $N_f \geq \dfrac{(W_S + D_E + D_L)(f + L_E)}{W_S N}$
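
Equations (1)–(9) can be checked numerically. The following Python sketch is a minimal illustration of these relations, assuming the symbol meanings inferred from the Fig. 2 caption and the abstract: D_E is the eyebox, D_E′ the clear area of the eyebox, L_E the eye relief, f the lenslet-to-screen distance, D_L the lenslet aperture, and t the transparent gap between lenslets. The sample values are hypothetical and are not taken from the paper.

```python
import math

# Hypothetical parameters (millimeters); not values from the paper.
L_E = 50.0   # eye relief
f   = 3.0    # lenslet-to-screen distance (lenslet focal length)
D_L = 1.0    # lenslet aperture
t   = 1.0    # transparent gap between lenslets

D_E  = L_E / (2.0 * f) * D_L            # Eq. (1): eyebox containing the complete light field
I_e  = (D_E + D_L) / L_E * f            # Eq. (3): elemental-image footprint behind a lenslet
I_n  = (D_E + t) / L_E * f + t          # Eq. (2): footprint behind a transparent gap
tau  = I_e / (I_e + I_n)                # Eq. (5): ratio used in the stray-light analysis
I_s  = D_E * f / L_E - D_L / 2.0        # Eq. (6)
I_i  = I_e - 2.0 * I_s                  # Eq. (7)
D_Ep = L_E / f * D_L - D_E              # Eq. (8): clear area of the eyebox, D_E'

# Consistency checks: the closed form of Eq. (5) and the condition of Eq. (9).
tau_closed = (2.0 * f + L_E) * D_L / (2.0 * (f + L_E) * (D_L + t))
assert math.isclose(tau, tau_closed)
stray_light_free = L_E / D_E > f / D_L  # Eq. (9)

# With D_E taken from Eq. (1), I_s = 0 and D_E' = D_E,
# i.e. the stray-light-free case of Fig. 2(a).
print(f"D_E = {D_E:.2f} mm, tau = {tau:.3f}, D_E' = {D_Ep:.2f} mm, "
      f"stray-light free: {stray_light_free}")
```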