Abstract

In this paper, we propose a novel method to construct an optical see-through light-field near-eye display (OST LF-NED) by using a discrete lenslet array (DLA). The DLA is used as a spatial light modulator (SLM) to generate a dense light field of three-dimensional (3-D) scenes inside the eyebox of the system and provide correct focus cues to the user. A corresponding light-field image rendering method is also proposed and demonstrated. The light emitted from real objects passes successively through the transparent region of the display panel and the planar area of the DLA without redirection, so the user has a clear view of the real scene as well as of the virtual information. The stray light that may degrade the image quality is analyzed in detail. The experimental results show that the proposed method is capable of providing correct depth perception of the virtual information in augmented reality (AR) applications.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Corrections

18 July 2018: A typographical correction was made to the caption of Fig. 5.

1. Introduction

With the rapid development of virtual reality (VR) and augmented reality (AR) technology and the emergence of many commercial products in recent years, near-eye display (NED) technology has been widely studied. The most essential advantage of NEDs is their capacity for stereoscopic display. However, most of these technologies rely only on binocular parallax, while the consistency of vergence and accommodation often remains unaddressed. This discrepancy between the vergence and accommodation of the eyes, known as the vergence-accommodation conflict (VAC) in NEDs, leads to incorrect focus cues. Generally speaking, incorrect focus cues cause two commonly recognized issues: distorted depth perception and visual discomfort. Examples of discomfort include diplopic vision, visual fatigue, and degradation in oculomotor responses, especially after viewing such a display for an extended period of time [1]. In particular, when it comes to optical see-through (OST) AR displays, the VAC problem degrades the effectiveness of combining the virtual object with the real-world scene, resulting in visual confusion.

Several methods have been proposed to solve or to relieve the VAC problem. According to the classification by Hua [1], these techniques can be categorized into five general types: Maxwellian view displays [2, 3], vari-focal plane displays [4–6], multifocal plane (MFP) displays [7–9], computational multilayer displays [10–13], and integral-imaging (InI)-based displays [14–19]. Among these state-of-the-art techniques, the Maxwellian view display and the vari-focal plane display have simple optical structures but are not able to produce natural retinal blur cues. The MFP, integral-imaging, and multilayer approaches are commonly referred to as light-field displays, which render a true 3-D scene by sampling either the projections of the scene at different depths or the directions of the light rays apparently emitted by the scene and viewed from different eye positions [1].

Among the light-field methods, the InI-based approach has the simplest structure and has been regarded as an effective way to reconstruct the light field. It is also widely studied in the fields of light-field photography [20,21] and eyewear-free autostereoscopic displays [22]. It typically consists of a screen and a 2-D array serving as the SLM, which can be a lenslet array [14, 15] or a pinhole array [16–19]. The rendered image is displayed on the screen, and the rays emitted from each pixel intersect with the SLM. The SLM angularly samples the directions of these rays, so that they integrally create the perception of a 3-D scene.

To utilize light-field technology in AR, the OST capacity of InI-based displays has also been studied in recent years. The most common approach is to combine the LF display with OST optics [18, 23]. A lenslet array or a pinhole array is typically placed at the image plane of the OST element to reproduce the light field. The problem with these approaches is that, as the OST element can be regarded as a magnifying lens, the non-linear relationship between the object distance and the image distance causes lateral and axial distortion in the image space. This can be resolved by pitch-scaling and depth-scaling processes [24], but the computational burden of the display engine increases accordingly. Some other approaches have been proposed to realize OST, such as the pinlight display [16], the three-layered lenslet array approach [15], and polymer-stabilized liquid crystal shutters [25]. However, these methods either suffer from degraded image quality or have a limited field of view (FOV).

In this paper, we develop an OST LF-NED system that has the capacity of generating the light field of the virtual scene by using a DLA and a transparent microdisplay. The light from the real world passes through the transparent areas on the microdisplay panel and the gaps on the DLA panel while the light from the screen is angularly selected by the lenslets to form the light field. The stray light is mainly caused by the screen light that leaks into the gaps between the lenslets. In order to improve the viewing effect, several steps are implemented to analyze and minimize the stray light. An OLED-based prototype and a film-based prototype are developed to demonstrate the correct focus cues and the OST capacity.

2. Principle of the novel OST LF-NED system

In this proposal, a DLA is placed at a certain distance from the eye position, and a microdisplay panel is placed at the focal plane of the lenslets, as shown in Fig. 1. The image displayed on the microdisplay panel is rendered as a light-field image, which is segmented into elemental images. Each elemental image and its corresponding lenslet act as an independent magnifier, synthesizing an off-axis perspective projection of the virtual image plane at a relatively far distance. Each perspective spans the eyebox and has a projection center coincident with the lens center. Thus, the magnifier array forms the light field in the eyebox, as shown in Fig. 1(a). Since the lenslets are discretely arranged, the light emitted from the real world passes directly through the transparent areas on the microdisplay panel and the gaps among the lenslets. This real-world light also spans an eyebox, which is typically smaller than the former one in practical conditions, as shown in Fig. 1(b). In straylight-free conditions, these two eyeboxes coincide exactly, so there is no space between them for stray light to exist. Taking the requirements of eye relief and compactness into consideration, the rear surface of the DLA is placed 20–40 mm from the eye and the diameter of the eyebox is set larger than 6 mm. The design process is based on these settings.


Fig. 1 Principle of the OST-LF NED system. (a) The screen light path. (b) The real-world light path.


There are several variables in the proposed system: the diameter of each lenslet $D_L$, the eye relief $L_E$, the gap between two lenslets $t$, and the focal length of each lenslet $f$. The optical parameters of the system, including the size of the eyebox and the display effect, are direct results of changes in these variables. The following sections illustrate the calculation process for a straylight-free system and for a system with stray light.

2.1 Design of a straylight-free OST LF-NED system

In a straylight-free system, the light from the screen should fall exactly on the lens regions of the DLA panel, and the real-world light should pass exactly through the gaps among the lenslets. The two types of light travel along two separate paths so that no stray light exists. In this case, the geometric relationship of these parameters is illustrated in Fig. 2(a). The dashed lines depict the boundaries for the screen light. These boundaries are defined by connecting the edges of the elemental images with those of the corresponding lenslets, which ensures that no screen light leaks into the gaps among the lenslets within the region of the eyebox. In practical situations, a small amount of screen light inevitably leaks into the gaps, as shown in Fig. 2(b), which is discussed in subsection 2.2.


Fig. 2 (a) The geometric relationship of a straylight-free system. The eyeboxes defined by the screen light and by the real-world light are the same. The thick red line depicts an eyebox boundary defined by the chief ray of a marginal field; in this case, real-world light (shown as thin red lines) may enter the region of the eyebox, forming stray light. (b) The practical system with stray light. $D_E$ is the eyebox that contains the complete light field. $D_E'$ is the region with no stray light, named the clear area of the eyebox. The region between them, depicted as the red shadow, is where the stray light exists.


To make sure that all the pixels on the screen are visible through the corresponding lenslets, the marginal ray of the marginal field should cross exactly at the edge of the eyebox. For example, the ray emitted from the lower edge of an elemental image and passing through the lower edge of the corresponding lenslet should be refracted and then pass through the upper edge of the eyebox. What differs from a system using a pinhole array is that, since the size of a lenslet is not negligible, the eyebox of the screen light is no longer defined by the chief ray of the marginal field, but by the marginal ray of the marginal field. As shown in Fig. 2(a), the thick red line depicts the lower edge of a chief-ray-defined eyebox. In this case, light from outside the screen, shown as thin red lines, would pass through the edge of the lenslet and enter the eyebox, forming stray light. Thus, compared to a pinhole-based approach, the eyebox in our lenslet-based approach is narrower by $D_L$. The main feature of the straylight-free system is that the diameter of the eyebox $D_E$ depends on the diameter $D_L$ and the focal length $f$ of each lenslet. The relation between them is expressed as:

$$D_E = \frac{L_E}{2f} D_L, \tag{1}$$
where $L_E$ is the eye relief. Given the geometric relationship, the region of the unused pixels $I_n$ on the screen is calculated as:
$$I_n = \frac{D_E + t}{L_E} f + t. \tag{2}$$
Meanwhile, the region of the effective pixels $I_e$ on the screen is calculated as:
$$I_e = \frac{D_E + D_L}{L_E} f. \tag{3}$$
Taking the relationship in Eq. (1) into consideration, Eq. (3) can be further expressed as:
$$I_e = \left(\frac{1}{2} + \frac{f}{L_E}\right) D_L. \tag{4}$$
Apparently, from Eq. (3), the number of effective pixels in each elemental image in the straylight-free system is always positive. The duty cycle of the effective pixels in the 1-D direction is calculated as:

$$\tau = \frac{I_e}{I_e + I_n} = \frac{(2f + L_E)\, D_L}{2(f + L_E)(D_L + t)}. \tag{5}$$

Above are the geometric relations of the variables in a straylight-free system. Since these equations do not constrain the 2-D layout, there can be more than one type of pixel arrangement on a 2-D panel, such as an orthogonal array, a hexagonal array, or a diagonal array. It can be concluded from the equations that the transmittance of the real-world light is determined by the duty cycle of the gaps of the DLA, and that the ratio between the eye relief and the diameter of the eyebox is twice the F-number of the lenslets. For example, if the F-number of each lenslet is set to 3 and the diameter of the eyebox to 10 mm, then the eye relief must be 60 mm, which is too large for a NED. However, the F-number of an ordinary lenslet is often larger than what is desired in this proposal: reducing the F-number would increase the complexity of design and manufacturing and decrease the imaging quality. A catadioptric system may solve this problem, but it still requires a relatively large display panel to obtain a large FOV. Here we make a tradeoff between the eyebox and the viewing effect by manually enlarging the eyebox while allowing a small amount of stray light to exist.
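The straylight-free relations can be checked numerically. The following Python sketch is our own illustration; the function name is ours, and the numeric values come from the worked example above (F/3 lenslets and a 10 mm eyebox forcing a 60 mm eye relief):

```python
# Straylight-free OST LF-NED geometry, Eqs. (1)-(3) and (5); lengths in mm.
# Symbols follow the text: D_L lenslet diameter, f focal length,
# t gap between lenslets, L_E eye relief, D_E eyebox diameter.

def straylight_free(D_L, f, t, L_E):
    D_E = L_E / (2 * f) * D_L        # Eq. (1): eyebox diameter
    I_n = (D_E + t) / L_E * f + t    # Eq. (2): unused-pixel region
    I_e = (D_E + D_L) / L_E * f      # Eq. (3): effective-pixel region
    tau = I_e / (I_e + I_n)          # Eq. (5): 1-D duty cycle
    return D_E, I_n, I_e, tau

# Worked example from the text: F-number f / D_L = 3 and a 10 mm eyebox
# force a 60 mm eye relief, too long for a near-eye display.
D_L, f, t = 0.8, 2.4, 0.35           # f = 3 * D_L gives F/3
L_E = 2 * f * 10.0 / D_L             # invert Eq. (1) for D_E = 10 mm
print(L_E)                           # 60.0
```

Evaluating `straylight_free` at these values also reproduces Eq. (4), since the effective-pixel width reduces to (1/2 + f/L_E) D_L.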

2.2. Practical design of an OST LF-NED system with stray light

In a straylight-free system, the size of the eyebox depends entirely on the F-number of each lenslet. However, because of the self-adjusting ability of human eyes, a small amount of stray light at the edge of the eyebox does not have a serious impact on the viewing effect. Moreover, stray light can never be totally eliminated, but it can often be reduced to a tolerable level. Thus, the eyebox can be manually enlarged, which means the restriction in Eq. (1) can be relaxed. In this condition, the eyebox is described in two forms: the largest eyebox $D_E$, spanned by each view of the elemental images, which contains the complete light field; and $D_E'$, defined by the edges of each elemental image and the corresponding lenslet, named the clear area, where no screen light leaks into the gaps among the lenslets, as shown in Fig. 2(b). In the region between $D_E'$ and $D_E$, marked with the red shadow, some portion of the screen light can enter the eyebox without being refracted by the lenslets, forming the stray light, which is the only type of stray light in this proposal.

In such a system with stray light, the region of the effective pixels $I_e$ on the screen is still calculated as Eq. (3), while some of these pixels generate stray light. The region of these pixels (on one side) is calculated as:

$$I_s = \frac{D_E f}{L_E} - \frac{D_L}{2}. \tag{6}$$
And the region of the pixels that generate no stray light is calculated as:
$$I_i = I_e - 2 I_s = \frac{D_L - D_E}{L_E} f + D_L. \tag{7}$$
The width of the clear area is calculated as:
$$D_E' = \frac{L_E}{f} D_L - D_E. \tag{8}$$
Note that $D_E'$ could be negative, which means that there might be no clear area at all. To avoid this situation, $D_E'$ should be positive, which requires
$$\frac{L_E}{D_E} > \frac{f}{D_L}. \tag{9}$$
Considering the limitation of the F-number of the lenslets, the ratio between $L_E$ and $D_E$ cannot be very large. That means the clear area is relatively small in our proposal.

The distribution of the stray light can be calculated through numerical simulation and is illustrated intuitively in Fig. 3. The diameter of each lenslet is set to 0.8 mm, the focal length to 5 mm, and the width of the gaps to approximately 0.35 mm. The varying colors correspond to the proportion of the intensity of the stray light within the region of the eyebox. As the figure depicts, stray light exists in all of the listed situations. It increases with the size of the eyebox and decreases with the eye relief. This type of stray light is emitted from the part of the screen close to the eye and is not collimated by the lenslets, so its main effect is to form bright spots on the retina that decrease the contrast of the image. According to Eqs. (6)–(9), there is a tradeoff between ergonomic factors (i.e., eyebox and eye relief) and optical factors (i.e., stray light and viewing effect). The viewing effect can be improved by relaxing the restriction on eye relief or eyebox. Here, a combination of a 6 mm eyebox and a 35 mm eye relief is regarded as a tolerable and balanced choice, for the stray light is mainly distributed at the edge of the eyebox and little of it exists in the central portion. Although stray light is present, a relatively clear view of the virtual objects and the real scene can be obtained, as demonstrated in the experiments. Meanwhile, the sizes of both the eyebox and the eye relief are acceptable for wearable devices. These parameters are used in the rest of the paper and in the experiments.
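The quantities in Eqs. (6)–(8) can be evaluated directly for these parameters. The sketch below is ours; the 5 mm eyebox in the second call is a hypothetical comparison point, not a design from the paper:

```python
# Stray-light regions for the enlarged-eyebox design, Eqs. (6)-(8).
# All lengths in millimeters; symbols follow the text.

def stray_light_regions(D_L, f, L_E, D_E):
    I_s = D_E * f / L_E - D_L / 2      # Eq. (6): stray-light pixels, one side
    I_i = (D_L - D_E) / L_E * f + D_L  # Eq. (7): pixels with no stray light
    D_E_clear = L_E / f * D_L - D_E    # Eq. (8): clear-area width
    return I_s, I_i, D_E_clear

# Chosen design point: 6 mm eyebox, 35 mm eye relief, D_L = 0.8 mm, f = 5 mm.
# With these round numbers L_E / D_E = 35/6 < f / D_L = 6.25, so the Eq. (9)
# condition is not met and the clear-area width comes out negative, consistent
# with stray light appearing in every case simulated in Fig. 3.
print(stray_light_regions(0.8, 5.0, 35.0, 6.0))

# A hypothetical 5 mm eyebox would satisfy Eq. (9) (35/5 = 7 > 6.25) and
# leave a small positive clear area.
print(stray_light_regions(0.8, 5.0, 35.0, 5.0))
```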


Fig. 3 The simulation of the stray light with different combinations of eye relief and eyebox.


2.3. Light field image rendering

Unlike conventional NEDs, the image source in a LF-NED system should be pre-rendered as a light-field image. The rendering process is the inverse of the imaging process of a light-field camera. It can be implemented using an OpenGL-based 3-D game engine, such as Unity or Unreal. The DLA can be regarded as a set of cameras with perspective projections. The axes of these projections are defined by connecting the center of the eyebox with the centers of the lenslets, and the projection center of each camera is located at the center of the corresponding lenslet, as shown in Fig. 4. Based on the definitions of the projection matrix in OpenGL [28], the view planes, or near clipping planes, are located at the microdisplay plane, and the far clipping planes are located at infinity. Thus, the projection matrix of each camera is defined as

$$\begin{bmatrix} \dfrac{\cot(\theta/2)}{\mathrm{aspect}} & 0 & \dfrac{x_{w\min} + x_{w\max}}{2} & 0 \\ 0 & \cot(\theta/2) & \dfrac{y_{w\min} + y_{w\max}}{2} & 0 \\ 0 & 0 & s_z & t_z \\ 0 & 0 & 1 & 0 \end{bmatrix}, \tag{10}$$
where $\theta$ is the FOV of a single camera, $x_w$ and $y_w$ are the spatial coordinates of the edges of each elemental image relative to its center, and $s_z$ and $t_z$ are values relating to the clipping planes. Since the situation is identical along the $x$ and $y$ axes, $\mathrm{aspect}$ equals 1.
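As a sketch, the per-camera matrix can be assembled with NumPy. The function name and the clipping values in the example call are our own placeholders, not values from the paper:

```python
import numpy as np

def lenslet_projection(theta, xw_min, xw_max, yw_min, yw_max,
                       s_z, t_z, aspect=1.0):
    """Off-axis perspective projection matrix of one lenslet camera.

    theta: FOV of the single camera; (xw_min, xw_max) and (yw_min, yw_max):
    edges of the elemental image relative to its center; s_z, t_z: values
    relating to the near/far clipping planes.
    """
    c = 1.0 / np.tan(theta / 2.0)  # cot(theta / 2)
    return np.array([
        [c / aspect, 0.0, (xw_min + xw_max) / 2.0, 0.0],
        [0.0,        c,   (yw_min + yw_max) / 2.0, 0.0],
        [0.0,        0.0, s_z,                     t_z],
        [0.0,        0.0, 1.0,                     0.0],
    ])

# Illustrative call: a 90-degree FOV camera with symmetric image edges,
# for which the off-axis terms in the third column vanish.
P = lenslet_projection(np.pi / 2, -0.4, 0.4, -0.4, 0.4, s_z=1.0, t_z=0.0)
```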


Fig. 4 The projections built on the DLA. The axes of the projections intersect at the center of the eyebox. Each screen acts as a near clipping plane.


Based on the projection matrix, the camera array model can be built in Unity, along with two virtual models located about 0.3 m and 5 m away from the camera array, as shown in Fig. 5. The ineffective pixels are eliminated during post-processing, which means these regions on a valid display are rendered as transparent. The background of the virtual scene is set to black so that only the virtual objects are visible.
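The post-processing step, eliminating the ineffective pixels, amounts to masking every pixel outside the effective region under each lenslet. A minimal sketch of one way to build such a mask (ours; the pixel counts are illustrative and not taken from the prototype):

```python
import numpy as np

def effective_mask(width_px, pitch_px, eff_px):
    """1-D boolean mask: True where a pixel lies in an effective region.

    width_px: screen width in pixels; pitch_px: lenslet pitch in pixels;
    eff_px: width of the effective (I_e) region in pixels.
    """
    x = np.arange(width_px)
    offset = (x % pitch_px) - pitch_px / 2.0  # position within one cell
    return np.abs(offset) <= eff_px / 2.0

# Orthogonally arranged lenslets: the 2-D mask is the outer product of the
# vertical and horizontal 1-D masks. Pixels where the mask is False are set
# transparent (or black) before display.
mask = np.outer(effective_mask(1080, 96, 52), effective_mask(1920, 96, 52))
```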


Fig. 5 (a) The 3D virtual scene in which a droid (source model “R2-D2” by Eric Finn, licensed under CC-BY 3.0) is located about 0.3 m and a fighter (source model “Tie fighter” by Alberto Calvo, licensed under CC-BY 3.0) about 5 m away from the camera array. (b) The rendered light-field image of the virtual scene. Vignetting is added to relieve the stray light from the edges of the elemental images.


3. Experiment and discussion

Two LF-NEDs were implemented: the first prototype is based on a micro-OLED and the second is a static prototype using a transparent film, as shown in Fig. 6. In the OLED-based prototype, a non-transparent 0.7” Sony micro-OLED screen with a 1920 × 1080 resolution was placed in front of the DLA, which measures 20 mm × 20 mm and is made of PMMA. The focal length of each lenslet is 5 mm and the diameter is 0.8 mm. The horizontal distance between the centers of two lenslets is 1.6 mm; thus, the nearest distance between two lenslets is about 0.35 mm. The dynamic image shown in Fig. 5(b) was rendered in real time at 15–70 fps by a PC with a 2.6 GHz Intel Core i7 CPU, 8 GB of RAM, and an NVIDIA GeForce GTX 960M graphics card. The camera was placed in front of the DLA, with its entrance pupil set within the eyebox of the system. Figure 7 shows the images seen through the OLED-based prototype. When the camera focused on objects at different positions, the out-of-focus objects appeared blurry, which means the system provides correct focus cues.


Fig. 6 (a) A 3D model of the DLA. (b) The micro-OLED-based prototype. (c) Experimental setup of a film-based prototype.



Fig. 7 Images seen through an OLED-based prototype, focusing on (a) the nearer object at 0.3m and (b) the farther object at 5m. The items that are not in focus appear blurry (see Visualization 1).


The film-based prototype is similar to the OLED-based one, except that the display is a 14 mm × 21 mm transparent film. As shown in Fig. 8(a), the effective pixels for virtual objects lie within the black circles, each of which acts as the image source for its corresponding lenslet. The light-field images were exposed on the films so that the bright areas become transparent and the dark areas remain non-transparent. In Fig. 8(b), to illustrate the see-through light path, the virtual scene is not presented: all the effective pixels in the circles are set to black (non-transparent), that is, the film is exposed as a white (transparent) background with discrete black circles. The intensity of the real-world light is slightly reduced, and a dark pattern appears because of the uneven duty cycles in different directions. A hexagonally arranged DLA may relieve the dark pattern, but at the cost of increased manufacturing difficulty. The transparent film with two virtual objects, as shown in Fig. 8(a), was used to illustrate the augmented combination of the see-through light path and the virtual-scene light path. As shown in Figs. 8(c) and 8(d), both OST capacity and correct focus cues were achieved. The camera can focus either on the nearer object and foreground, located about 0.3 m away, or on the farther object and background, about 5 m away. The image quality of the virtual objects is not as high as that of the OLED-based prototype, for two reasons: the manufacturing error of the film, and the fact that the light-field image is lit entirely by ambient light instead of a dedicated backlight. This indicates the necessity of a dedicated backlight or a self-luminous image source to increase the contrast.


Fig. 8 (a) A micrograph of the transparent film. (b) An image seen through a film with a clear scene. The image appears slightly blurry because of the lack of coating on the lenslet array. Correct focus cues are demonstrated by focusing on (c) the droid and the foreground, and on (d) the fighter and the background (see Visualization 2).


From the experiments, it can be concluded that the main advantages of this system are its OST capacity and correct focus cues. Along with these advantages, some problems were observed in the experiments. Apart from the dark pattern, one of the most obvious problems is resolution loss, a typical shortcoming of light-field displays. The plane resolution of the virtual image is calculated as

$$N' = \frac{f\,(W_S + D_E + D_L)}{(f + L_E)\, W_S}\, N, \tag{11}$$
where $W_S$ is the screen size, and $N$ and $N'$ are the resolutions of the original image source and of the virtual image, respectively. In the 1920 × 1080 OLED-based prototype, the resolution of the virtual image is approximately 345 × 241, which reflects the tradeoff between plane resolution and depth resolution. Thus, a large and ultra-high-resolution screen is needed to obtain a high-resolution light-field display.
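Plugging the prototype values into this relation reproduces the quoted resolution. In the sketch below, the active-area dimensions of the 0.7” 16:9 panel (about 15.5 mm × 8.7 mm) are our own estimate:

```python
# Plane resolution of the virtual image along one axis.
# W_S: screen size along that axis (mm); N: source resolution along that axis.

def virtual_resolution(N, W_S, f, L_E, D_E, D_L):
    return f * (W_S + D_E + D_L) / ((f + L_E) * W_S) * N

f, L_E, D_E, D_L = 5.0, 35.0, 6.0, 0.8  # prototype parameters (mm)
print(round(virtual_resolution(1920, 15.5, f, L_E, D_E, D_L)))  # ~345
print(round(virtual_resolution(1080, 8.7, f, L_E, D_E, D_L)))   # ~241
```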

Another problem is the imaging performance of the lenslets. Since the lenslets are identical plano-convex lenses, the image quality is limited, especially for the marginal fields. In future work, the surface of each lenslet could be designed separately according to its field to improve imaging performance; extra apertures could be introduced into the system to further reduce the stray light; and other arrangements of the lenslets on the DLA will also be studied and implemented to improve the uniformity of the image.

Moreover, the most critical problem is that a large portion of the screen must always be kept transparent. If an ordinary transparent screen is used, then more than half of the pixels will never be used, which wastes a large number of pixels. Thus, to implement this proposal, a screen with the special pixel arrangement described in subsection 2.1 is required: only the effective portions are covered with pixels and made non-transparent, while the rest of the screen is left transparent with no pixels, so that no pixels are wasted.

4. Conclusion

We propose an OST LF-NED using a DLA and an ideal transparent microdisplay. The relations between the optical parameters and the structural parameters are derived, and a straylight-free model is given. To make the system more comfortable for users, the eyebox is enlarged manually, and the structural parameters and the resulting stray light are calculated. Experiments are implemented on an OLED-based prototype and a film-based prototype, although the brightness and contrast of the virtual scene viewed through the latter are not as good as those of the former. The experiments demonstrate the OST capacity and the correct focus cues of the proposal.

Funding

National Key Research and Development Program of China (2016YFB1001502); National Natural Science Foundation of China (61727808).

Acknowledgments

We would like to thank Synopsys for providing the educational license of CODE V. We would also like to acknowledge Mrs. Yang Wang and Mr. Weihong Hou at Beijing NED + AR Display Technology Corporation for fruitful discussions and providing the microdisplay module, and Mr. Jianxun Zhang at Xiangshiboan Technology Corporation for providing support of the fabrication of transparent films.

References and links

1. H. Hua, “Enabling focus cues in head-mounted displays,” Proc. IEEE 105(5), 805–824 (2017). [CrossRef]  

2. L. Marran and C. Schor, “Multiaccommodative stimuli in VR systems: problems & solutions,” Hum. Factors 39(3), 382–388 (1997). [CrossRef]   [PubMed]  

3. H. Takahashi and S. Hirooka, “Stereoscopic see-through retinal projection head-mounted display,” Proc. SPIE 6803, 68031N (2008). [CrossRef]  

4. S. Shiwa, K. Omura, and F. Kishino, “Proposal for a 3-D display with accommodative compensation: 3DDAC,” J. Soc. Inf. Disp. 4(4), 255–261 (1996). [CrossRef]  

5. E. Fernandez and P. Artal, “Membrane deformable mirror for adaptive optics: performance limits in visual optics,” Opt. Express 11(9), 1056–1069 (2003). [CrossRef]   [PubMed]  

6. S. Liu, H. Hua, and D. Cheng, “A novel prototype for an optical see-through head-mounted display with addressable focus cues,” IEEE Trans. Vis. Comput. Graph. 16(3), 381–393 (2010). [CrossRef]   [PubMed]  

7. J. P. Rolland, M. W. Krueger, and A. Goon, “Multifocal planes head-mounted displays,” Appl. Opt. 39(19), 3209–3215 (2000). [CrossRef]   [PubMed]  

8. S. Liu and H. Hua, “Time-multiplexed dual-focal plane head-mounted display with a liquid lens,” Opt. Lett. 34(11), 1642–1644 (2009). [CrossRef]   [PubMed]  

9. B. T. Schowengerdt, M. Murari, and E. J. Seibel, “Volumetric display using scanned fiber array,” SID Symp. Dig. Tech. Pap. 41(1), 653–656 (2010). [CrossRef]  

10. H. Gotoda, “A multilayer liquid crystal display for autostereoscopic 3D viewing,” Proc. SPIE 7524, 75240P, 1–8 (2010).

11. G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, “Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays,” ACM Trans. Graph. 30(4), 95 (2011). [CrossRef]  

12. G. Wetzstein, D. R. Lanman, M. W. Hirsch, and R. Raskar, “Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. 31(4), 1–11 (2012). [CrossRef]  

13. A. Maimone and H. Fuchs, “Computational augmented reality eyeglasses,” in Proceedings of the 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2013), pp. 29–38. [CrossRef]  

14. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

15. Y. Takaki and Y. Yamaguchi, “Flat-panel see-through three-dimensional display based on integral imaging,” Opt. Lett. 40(8), 1873–1876 (2015). [CrossRef]   [PubMed]  

16. A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 89 (2014). [CrossRef]  

17. W. Song, Y. Wang, D. Cheng, and Y. Liu, “Light field head-mounted display with correct focus cue using micro structure array,” Chin. Opt. Lett. 12(6), 060010 (2014). [CrossRef]  

18. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]   [PubMed]  

19. K. Akşit, J. Kautz, and D. Luebke, “Slim near-eye display using pinhole aperture arrays,” Appl. Opt. 54(11), 3422–3427 (2015). [CrossRef]   [PubMed]  

20. R. Ng, M. Levoy, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanf. Tech Rep. CTSR 2005–02, Dept. of Computer Science, Stanford Univ., (2005).

21. D. G. Dansereau, G. Schuster, J. Ford, and G. Wetzstein, “A wide-field-of-view monocentric light field camera,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 5048–5057.

22. Q. H. Wang, C. C. Ji, L. Li, and H. Deng, “Dual-view integral imaging 3D display by using orthogonal polarizer array and polarization switcher,” Opt. Express 24(1), 9–16 (2016). [CrossRef]   [PubMed]  

23. D. Cheng, Y. Wang, H. Hua, and M. M. Talha, “Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism,” Appl. Opt. 48(14), 2655–2668 (2009). [CrossRef]   [PubMed]  

24. H. Deng, Q. Wang, Z. Xiong, H. Zhang, and Y. Xing, “Magnified augmented reality 3D display based on integral imaging,” Optik (Stuttg.) 127(10), 4250–4253 (2016). [CrossRef]  

25. S. Liu, Y. Li, P. Zhou, X. Li, N. Rong, S. Huang, W. Lu, and Y. Su, “A multi-plane optical see-through head mounted display design for augmented reality applications,” J. Soc. Inf. Disp. 24(4), 246–251 (2016). [CrossRef]  

References

  • View by:
  • |
  • |
  • |

  1. H. Hua, “Enabling focus cues in head-mounted displays,” Proc. IEEE 105(5), 805–824 (2017).
    [Crossref]
  2. L. Marran and C. Schor, “Multiaccommodative stimuli in VR systems: problems & solutions,” Hum. Factors 39(3), 382–388 (1997).
    [Crossref] [PubMed]
  3. H. Takahashi and S. Hirooka, “Stereoscopic see-through retinal projection head-mounted display,” Proc. SPIE 6803, 68031N (2008).
    [Crossref]
  4. S. Shiwa, K. Omura, and F. Kishino, “Proposal for a 3-D display with accommodative compensation: 3DDAC,” J. Soc. Inf. Disp. 4(4), 255–261 (1996).
    [Crossref]
  5. E. Fernandez and P. Artal, “Membrane deformable mirror for adaptive optics: performance limits in visual optics,” Opt. Express 11(9), 1056–1069 (2003).
    [Crossref] [PubMed]
  6. S. Liu, H. Hua, and D. Cheng, “A novel prototype for an optical see-through head-mounted display with addressable focus cues,” IEEE Trans. Vis. Comput. Graph. 16(3), 381–393 (2010).
    [Crossref] [PubMed]
  7. J. P. Rolland, M. W. Krueger, and A. Goon, “Multifocal planes head-mounted displays,” Appl. Opt. 39(19), 3209–3215 (2000).
    [Crossref] [PubMed]
  8. S. Liu and H. Hua, “Time-multiplexed dual-focal plane head-mounted display with a liquid lens,” Opt. Lett. 34(11), 1642–1644 (2009).
    [Crossref] [PubMed]
  9. B. T. Schowengerdt, M. Murari, and E. J. Seibel, “Volumetric display using scanned fiber array,” SID Symp. Dig. Tech. Pap. 41(1), 653–656 (2010).
    [Crossref]
  10. H. Gotoda, “A multilayer liquid crystal display for autostereoscopic 3D viewing,” Proc. SPIE 7524 75240P, 1–8 (2010).
  11. G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, “Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays,” ACM Trans. Graph. 30(4), 95 (2011).
    [Crossref]
  12. G. Wetzstein, D. R. Lanman, M. W. Hirsch, and R. Raskar, “Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. 31(4), 1–11 (2012).
    [Crossref]
  13. A. Maimone and H. Fuchs, “Computational augmented reality eyeglasses,” in proceedings of the 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) pp. 29–38 (2013).
    [Crossref]
  14. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013).
    [Crossref]
  15. Y. Takaki and Y. Yamaguchi, “Flat-panel see-through three-dimensional display based on integral imaging,” Opt. Lett. 40(8), 1873–1876 (2015).
    [Crossref] [PubMed]
  16. A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 89 (2014).
    [Crossref]
  17. W. Song, Y. Wang, D. Cheng, and Y. Liu, “Light field head-mounted display with correct focus cue using micro structure array,” Chin. Opt. Lett. 12(6), 060010 (2014).
    [Crossref]
  18. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014).
    [Crossref] [PubMed]
  19. K. Akşit, J. Kautz, and D. Luebke, “Slim near-eye display using pinhole aperture arrays,” Appl. Opt. 54(11), 3422–3427 (2015).
    [Crossref] [PubMed]
  20. R. Ng, M. Levoy, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Rep. CTSR 2005-02, Dept. of Computer Science, Stanford Univ. (2005).
  21. D. G. Dansereau, G. Schuster, J. Ford, and G. Wetzstein, “A wide-field-of-view monocentric light field camera,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 5048–5057.
  22. Q. H. Wang, C. C. Ji, L. Li, and H. Deng, “Dual-view integral imaging 3D display by using orthogonal polarizer array and polarization switcher,” Opt. Express 24(1), 9–16 (2016).
    [Crossref] [PubMed]
  23. D. Cheng, Y. Wang, H. Hua, and M. M. Talha, “Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism,” Appl. Opt. 48(14), 2655–2668 (2009).
    [Crossref] [PubMed]
  24. H. Deng, Q. Wang, Z. Xiong, H. Zhang, and Y. Xing, “Magnified augmented reality 3D display based on integral imaging,” Optik (Stuttg.) 127(10), 4250–4253 (2016).
    [Crossref]
  25. S. Liu, Y. Li, P. Zhou, X. Li, N. Rong, S. Huang, W. Lu, and Y. Su, “A multi-plane optical see-through head mounted display design for augmented reality applications,” J. Soc. Inf. Disp. 24(4), 246–251 (2016).
    [Crossref]


Supplementary Material (2)

Visualization 1: Images seen through an OLED-based light-field near-eye display, focusing on the droid and the fighter. The items that are not in focus appear blurry.
Visualization 2: Images seen through a transparent film-based light-field near-eye display, focusing on the droid and the foreground, or the fighter and the background. The items that are not in focus appear blurry.



Figures (8)

Fig. 1 Principle of the OST LF-NED system. (a) The screen light path. (b) The real-world light path.
Fig. 2 (a) The geometry of a stray-light-free system, in which the eyebox defined by the screen light coincides with that defined by the real-world light. The thick red line depicts an eyebox boundary defined by the chief ray of a marginal field; in this case, real-world light (thin red lines) may enter the eyebox and form stray light. (b) A practical system with stray light. DE is the eyebox that contains the complete light field, and DE' is the stray-light-free region, called the clear area of the eyebox. The region between them, shown as red shading, is where the stray light exists.
Fig. 3 Simulation of the stray light for different combinations of eye relief and eyebox size.
Fig. 4 The projections built on the DLA. The axes of the projections intersect at the center of the eyebox, and each screen acts as a near clipping plane.
Fig. 5 (a) The 3D virtual scene, in which a droid (source model “R2-D2” by Eric Finn, licensed under CC-BY 3.0) is located about 0.3 m and a fighter (source model “Tie fighter” by Alberto Calvo, licensed under CC-BY 3.0) about 5 m away from the camera array. (b) The rendered light-field image of the virtual scene. Vignetting is added to relieve the stray light from the edges of the elemental images.
Fig. 6 (a) A 3D model of the DLA. (b) The micro-OLED-based prototype. (c) Experimental setup of a film-based prototype.
Fig. 7 Images seen through an OLED-based prototype, focusing on (a) the nearer object at 0.3 m and (b) the farther object at 5 m. The items that are not in focus appear blurry (see Visualization 1).
Fig. 8 (a) A micrograph of the transparent film. (b) An image seen through the film, with a clear view of the real scene. The image appears slightly blurry because the lenslet array is uncoated. Correct focus cues are demonstrated by focusing on (c) the droid and the foreground, and on (d) the fighter and the background (see Visualization 2).

Equations (11)


$$D_E = \frac{L_E}{2f}\, D_L,$$

$$I_n = \frac{D_E + t}{L_E}\, f + t.$$

$$I_e = \frac{D_E + D_L}{L_E}\, f.$$

$$I_e = \left( \frac{1}{2} + \frac{f}{L_E} \right) D_L.$$

$$\tau = \frac{I_e}{I_e + I_n} = \frac{(2f + L_E)\, D_L}{2\,(f + L_E)(D_L + t)}.$$

$$I_s = \frac{D_E\, f}{L_E} - \frac{D_L}{2}.$$

$$I_i = I_e - 2 I_s = \frac{D_L - D_E}{L_E}\, f + D_L.$$

$$D_E' = \frac{L_E}{f}\, D_L - D_E.$$

$$\frac{L_E}{D_E} > \frac{f}{D_L}.$$
$$\begin{bmatrix} \dfrac{\cot(\theta/2)}{aspect} & 0 & \dfrac{x_w^{\min} + x_w^{\max}}{2} & 0 \\ 0 & \cot(\theta/2) & \dfrac{y_w^{\min} + y_w^{\max}}{2} & 0 \\ 0 & 0 & s_z & t_z \\ 0 & 0 & 1 & 0 \end{bmatrix},$$
$$N' = \frac{f\,(W_S + D_E + D_L)}{(f + L_E)\, W_S}\, N,$$
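The geometric relations above can be exercised numerically. The sketch below is not the authors' code: the function names and all numeric values are illustrative assumptions, and the depth-mapping terms `s_z` and `t_z` of the projection matrix are left as parameters because their definitions fall outside this excerpt.

```python
import numpy as np

def eyebox_geometry(f, L_E, D_L, t):
    """Eyebox and elemental-image sizes for a lenslet-array light-field NED.
    f: lenslet focal length, L_E: eye relief, D_L: lenslet pitch,
    t: transparent-gap width (all in the same length unit, e.g. mm)."""
    D_E = L_E / (2 * f) * D_L          # stray-light-free eyebox width
    I_n = (D_E + t) / L_E * f + t      # footprint of the non-imaging gap
    I_e = (D_E + D_L) / L_E * f        # elemental-image width
    tau = I_e / (I_e + I_n)            # fill factor of the elemental images
    return D_E, I_n, I_e, tau

def clear_eyebox(f, L_E, D_L, D_E):
    """Clear area D_E' for a chosen eyebox D_E; positive only when
    L_E / D_E > f / D_L, i.e. when stray light does not fill the eyebox."""
    return L_E / f * D_L - D_E

def lenslet_projection(theta, aspect, xw_min, xw_max, yw_min, yw_max, s_z, t_z):
    """Off-axis perspective matrix of the form listed above, one per lenslet
    (the axis passes through the eyebox center; the screen is the near plane)."""
    c = 1.0 / np.tan(theta / 2.0)      # cot(theta / 2)
    return np.array([
        [c / aspect, 0.0, (xw_min + xw_max) / 2.0, 0.0],
        [0.0,        c,   (yw_min + yw_max) / 2.0, 0.0],
        [0.0,        0.0, s_z,                     t_z],
        [0.0,        0.0, 1.0,                     0.0],
    ])
```

For instance, with the arbitrary values f = 3 mm, L_E = 30 mm, D_L = 1 mm, and t = 0.2 mm, `eyebox_geometry` gives D_E = 5 mm and a fill factor of 5/11, and the two closed forms for I_e and for tau listed above agree with the ratios computed here.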
