Abstract

Alvarez lenses offer accurate, high-speed, dynamic tuning of optical power through the lateral shifting of two lens elements, making them an appealing solution for eliminating the decoupling of accommodation and convergence inherent to conventional stereoscopic displays. In this paper, we present the design of a compact eyepiece coupled with two lateral-shifting freeform Alvarez lenses to enable a compact, high-resolution, optical see-through head-mounted display (HMD). The proposed design is able to tune its focal depth from 0 to 3 diopters, rendering nearly correct focus cues with high image quality and a large undistorted see-through field of view (FOV). Our design utilizes a 1920x1080 color organic light-emitting diode (OLED) microdisplay to achieve a >30 degree diagonal virtual FOV, with an angular resolution of <0.85 arcminutes and an average optical performance of >0.4 contrast over the full field. We also experimentally demonstrate a fully functional benchtop prototype using mostly off-the-shelf optics.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Conventional head-mounted displays (HMDs) lack the ability to correctly render focus cues, including accommodation and retinal blur effects, because they merely present a pair of stereoscopic images with binocular disparities and other pictorial depth cues on a fixed image plane. These displays thus force an unnatural decoupling of the accommodation and convergence cues and induce a fundamental problem referred to as the vergence-accommodation conflict (VAC), which can lead to various visual artifacts, such as distorted depth perception and visual fatigue [1–3].

Several display methods have been proposed as potential solutions to the VAC problem, including but not limited to holographic, volumetric, multi-focal, light field, and vari-focal displays [4]. Each of these technologies has unique disadvantages. Holographic displays, for example, can potentially render correct focus cues while achieving a compact, lightweight form factor [5]. However, it remains very challenging to develop full-color, high-resolution holographic displays free of artifacts such as speckle. Volumetric displays render the 3D voxels of a scene within a physical volume and thus naturally allow users to perceive correct focus cues [6], but these displays tend to be extremely bulky, have low resolution, and often require several moving parts. Multi-focal plane displays, another alternative, project several virtual focal planes discretely placed along the visual axis of the viewing space, which together allow nearly correct focus cues to be rendered across an extended depth volume [7–10]. This extended depth, however, often comes at the cost of both time multiplexing and large data bandwidth. In recent years, light field displays have emerged as a promising technology to correct the VAC by rendering the different directions of the light rays apparently emitted by a 3D scene and viewed from slightly different positions [11–15]. Light field displays can be both lightweight and compact, using simple viewing optics in conjunction with a pinhole or lenslet array to achieve a large field of view. However, the view density of a light field display, defined as the number of views per unit area projected on its viewing window, is inversely correlated with the spatial resolution of the reconstructed 3D scene [16]. Therefore, tradeoffs have to be made among the key parameters of display performance, such as spatial resolution, depth of field, depth resolution, the accuracy of focus cues, and the accommodative response errors [17].

A vari-focal plane display is arguably one of the simplest remedies to the VAC problem in HMDs. A vari-focal HMD dynamically adjusts the focal distance of a single-plane display, either by adopting an electrically or mechanically tunable optical element or by mechanically varying the distance between a microdisplay and its eyepiece, so that the 2D image of a virtual object rendered by the display appears at the correct focal depth [4,18,19]. For instance, Liu et al. demonstrated a vari-focal HMD prototype integrating a liquid lens for dynamic focus control and experimentally validated the effectiveness of a vari-focal display for addressing the VAC problem [20]. More recently, Dunn et al. demonstrated a vari-focal augmented reality display using deformable beamsplitter membranes [21].

A key enabling technology in a vari-focal HMD is an active optical element that can dynamically tune the optical power of the system at high speed and over a large depth range (typically a few diopters) while also offering a large, clear aperture. Examples of such active optical elements include deformable membrane mirror devices (DMMD) [22], electrowetting lenses [23], elastomer-membrane fluidic lenses [24], liquid crystal lenses [25], and digitally switchable multi-focal lenses [26]. With recent developments in manufacturing freeform surfaces, as well as in the optical metrology needed to accurately measure them, Alvarez lenses, which offer accurate and high-speed dynamic tuning of optical power through the lateral shifting of two lens elements [27], have recently emerged as an attractive method to achieve large focal ranges rapidly while maintaining a compact structure [28–30].

In this paper, we present a novel design of a high-resolution vari-focal plane optical see-through HMD (OST-HMD) system using freeform Alvarez lenses coupled with ultra-fast, high-resolution piezo linear actuators. Our design is capable of rendering near-correct focus cues by providing dynamic control of the focal distance throughout an extended depth of field spanning 0 to 3 diopters at a rate of 150Hz per diopter. Our design utilizes a Sony 0.7” OLED microdisplay for the virtual display path to achieve a 30 degree diagonal FOV and 1920x1080 pixel resolution, with an angular resolution of 0.81 arcminutes per pixel and an optical performance greater than 20% modulation contrast at the corresponding Nyquist frequency over the full FOV.

2. Optical system design

Figure 1 illustrates a schematic diagram of our proposed OST-HMD optical architecture. The design is divided into two main groups: an eyepiece group and a tunable relay group. The eyepiece group is made up of a plane plate beamsplitter (PPB), imaging optics, and a cold mirror, which together create a folded, compact optical path that projects a magnified virtual image toward a user’s eye pupil. The beamsplitter acts as an optical combiner to merge the light paths of the real-world view and the virtual display view. The cold mirror is reserved for a potential eye-tracking path needed to determine the viewer’s gaze direction and thus the depth of eye convergence for rendering correct focus cues [31]. The tunable relay group consists of a relay system and an Alvarez lens group. The relay system is made up of two lens groups separated by the Alvarez lens group; it relays the 2D image rendered on an OLED microdisplay to form an intermediate image plane, which is then projected by the eyepiece. The Alvarez lens group, boxed in red in Fig. 1, is composed of two symmetric freeform lenses whose equal and opposite lateral translation changes the optical focus of the system from 0 to 3 diopters in the virtual image space. By carefully placing the Alvarez lens group at the intermediate pupil location that is optically conjugate to the exit pupil of the system, we ensure that the chief ray angles do not change when the optical power of the Alvarez lens group is dynamically adjusted to control the apparent focal depth of the virtual display over an extended depth range from 0 to 3 diopters in the visual space. Consequently, the HMD system maintains a constant field of view and angular resolution when adjusting the focal depth of the virtual display. This offers a significant advantage over designs [21] in which the tunable optical element is not optically conjugate to the exit pupil of the system, so that the apparent field of view and the optical magnification vary with focal depth, requiring calibration and digital correction.
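The benefit of placing the tunable element at a pupil conjugate can be illustrated with a simple paraxial ray-transfer (ABCD) argument: the chief ray crosses a pupil plane at zero height, so a thin lens of any power added there leaves the chief-ray angle, and hence the apparent FOV, unchanged, whereas the same lens displaced from the pupil does not. The Python sketch below is only a toy thin-lens model of this idea, not the actual lens prescription; the 13-degree field angle and 10mm offset are arbitrary example values.

```python
# Paraxial (ABCD) sketch: a tunable thin lens at a pupil conjugate does not
# change the chief-ray angle, while the same lens displaced from the pupil does.
# Toy model only, not the actual HMD prescription.
import numpy as np

def thin_lens(power_per_mm):
    # Ray-transfer matrix of a thin lens; power in mm^-1 (1 diopter = 1e-3 mm^-1).
    return np.array([[1.0, 0.0], [-power_per_mm, 1.0]])

def free_space(d_mm):
    # Ray-transfer matrix of free-space propagation over d_mm.
    return np.array([[1.0, d_mm], [0.0, 1.0]])

# Chief ray at a pupil plane: zero height, 13 deg field angle (example value).
chief = np.array([0.0, np.deg2rad(13.0)])   # (height y in mm, angle u in rad)

for diopters in (0.0, 1.5, 3.0):
    P = diopters * 1e-3
    at_pupil  = thin_lens(P) @ chief                      # tunable lens at the pupil conjugate
    off_pupil = thin_lens(P) @ free_space(10.0) @ chief   # same lens 10 mm from the pupil
    print(f"{diopters:3.1f} D tuning | chief-ray angle with lens at pupil: "
          f"{np.rad2deg(at_pupil[1]):.3f} deg | 10 mm off pupil: "
          f"{np.rad2deg(off_pupil[1]):.3f} deg")
```

With the lens at the pupil the printed angle stays at 13 degrees for every power setting, which is the behavior exploited by the design above.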

 

Fig. 1 Schematic diagram of the proposed vari-focal OST-HMD design utilizing symmetric freeform Alvarez lenses.


Our prototype design uses a 0.7” Sony full-color OLED microdisplay for the virtual display path. The Sony OLED, having an effective area of 15.36mm by 8.64mm and a pixel size of 8μm, offers a native resolution of 1920x1080 pixels and an aspect ratio of 16:9. Based on this choice of microdisplay, we further optimized a previously designed eyepiece using available stock lenses [32] to achieve a diagonal FOV of 30°, or 26.5° horizontally and 15° vertically, and an angular resolution of 0.81 arcmins per pixel, corresponding to a Nyquist frequency of 63 cycles/mm in the microdisplay space or 37 cycles/degree in the visual space. The design achieved an exit pupil diameter (EPD) of 10mm and an eye clearance distance of at least 20mm. The see-through path contains only the plate beamsplitter, allowing for a very large FOV and nearly un-aberrated optical performance. The tunable relay group consists of a 1:1 relay group with an Alvarez lens group inserted in-between. The 1:1 relay group is composed of 6 rotationally symmetric plastic aspheric lenses, in which the 3 lenses on the left side of the Alvarez lens group are symmetric to those on the right side, significantly reducing fabrication cost. The Alvarez lens group, consisting of two acrylic freeform lenses, provides the control of the optical power of the system. Its design, the main focus of this paper, is detailed in the following paragraphs. To correct the residual field curvature of the system, a field-correcting lens is inserted between the tunable relay group and the eyepiece group. Overall, the final lens design consists of 13 lenses, including 4 stock glass lenses (eyepiece), 6 symmetric plastic aspheric lenses (1:1 relay), 2 acrylic freeform lenses (Alvarez group), and a field-correcting lens. It was optimized for 3 wavelengths, 465, 550, and 615nm, with weights of 1, 2, and 1, respectively, in accordance with the dominant wavelengths of the OLED microdisplay. To balance the overall optical performance over the focal depth range of 3 diopters in the visual space, we optimized the system using 7 zoom configurations, each corresponding to a different optical power induced in the Alvarez lens group by a small lateral shift between the Alvarez lens pair (thus creating a different focal depth of the virtual display), to create a smooth transition of the optical performance throughout the extended depth of field.
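The sampling figures quoted above follow directly from the microdisplay geometry and the FOV; a back-of-the-envelope check is sketched below. It assumes, as a simplification, that the 30° diagonal FOV is spread uniformly over the display diagonal, whereas the real angular mapping is set by the eyepiece distortion.

```python
# Back-of-the-envelope check of the display-path sampling numbers:
# 8 um pixels, 1920x1080 resolution, 30 deg diagonal FOV.
import math

pixels = (1920, 1080)
pixel_pitch_mm = 0.008                       # 8 um pixel size
diag_fov_deg = 30.0

diag_pixels = math.hypot(*pixels)            # ~2203 pixels along the diagonal
arcmin_per_pixel = diag_fov_deg * 60.0 / diag_pixels
nyquist_display = 1.0 / (2.0 * pixel_pitch_mm)          # cycles/mm on the OLED
nyquist_visual = 60.0 / (2.0 * arcmin_per_pixel)        # cycles/degree at the eye

print(f"angular resolution ~ {arcmin_per_pixel:.2f} arcmin/pixel")     # ~0.82
print(f"Nyquist (display space) ~ {nyquist_display:.1f} cycles/mm")    # 62.5, ~63
print(f"Nyquist (visual space)  ~ {nyquist_visual:.1f} cycles/deg")    # ~37
```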

Figures 2(a) and 2(b) show the optical system layout of the final vari-focal plane OST-HMD design with the Alvarez lens group configured for virtual display focal depths of 0 diopters (i.e. optical infinity) and 3 diopters (i.e. 33cm from the exit pupil), respectively. The lateral shift in the freeform lens positions changes the optical power of the Alvarez lens group, which in turn changes the axial location of the relayed intermediate image with respect to the eyepiece and consequently the focal depth of the virtual display in the viewing space. Because the Alvarez lens group is placed at the intermediate pupil location, the chief rays remain unchanged as the optical power varies, allowing for a fixed field of view with low pupil swim. The Alvarez lens group consists of two separate, symmetric lenses, each with doubly freeform surface profiles. When the two elements are laterally shifted, the spherical power of the Alvarez lens group varies according to the cubic polynomial given by Eq. (1):

Φ = A_x x^3 + A_xy x^2 y + A_y y^3 + A_yx x y^2 + B x^2 + C x y + D y^2 + E x + F y + G        (1)
where Φ is the spherical power of the lens group, and x and y are the amounts of lateral translation in the corresponding directions. In our case, we set y = 0 because no translation is allowed in the y direction, giving a much-reduced equation for the spherical power:

 

Fig. 2 Optical system layout of varifocal-plane OST-HMD using freeform Alvarez lenses for (a) 0 diopter focal shift (b) 3 diopter focal shift.


Φ = A_x x^3 + B x^2 + E x + G        (2)
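As a quick sanity check, the reduction from Eq. (1) to Eq. (2) can be reproduced symbolically. The minimal sympy sketch below assumes the mixed terms of Eq. (1) are x^2 y and x y^2, as written above; any term containing y vanishes for a pure x-translation in either case.

```python
# Symbolic check (sympy) that setting y = 0 in Eq. (1) collapses it to Eq. (2).
import sympy as sp

x, y = sp.symbols('x y')
Ax, Axy, Ay, Ayx, B, C, D, E, F, G = sp.symbols('A_x A_xy A_y A_yx B C D E F G')

phi_full = (Ax*x**3 + Axy*x**2*y + Ay*y**3 + Ayx*x*y**2
            + B*x**2 + C*x*y + D*y**2 + E*x + F*y + G)        # Eq. (1)

phi_x_only = sp.simplify(phi_full.subs(y, 0))                 # translate in x only
print(phi_x_only)          # -> A_x*x**3 + B*x**2 + E*x + G, i.e. Eq. (2)
```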

The freeform Alvarez lens group was optimized in CodeV using x-y polynomials up to the 3rd order. During the optimization, 7 focal depths of the virtual display in the range from 0 to 3 diopters were sampled at increments of 0.5 diopters and configured as 7 separate zoom positions in CodeV. The relative lateral shift between the Alvarez lens pair along the x-direction was initialized for each zoom according to the paraxial spherical powers of the entire system and of the Alvarez lens group calculated using Eq. (2), but it was set as a variable during optimization to allow compensation of non-paraxial effects. The optimization focused on the shape of the freeform lens surfaces and the lateral shifts of the pair to obtain balanced performance across all 7 sampled positions. Based on the optimized result, Fig. 3 shows the spherical power shift of the Alvarez lenses (gray triangles) as a function of the lateral x-translation sampled at the 7 focal positions. These sampled positions were then fitted to Eq. (2), and the fitted relationship is plotted as the gray solid line; the fitted coefficients for A, B, E, and G are shown in the graph. For each 1.667mm lateral shift of the freeform surfaces, the intermediate image plane shifts by roughly 1mm toward the eyepiece, corresponding to approximately a 1-diopter shift of the virtual image plane in the visual space. Figure 3 also illustrates three configurations of the Alvarez lens pair corresponding to 0, 1.5, and 3 diopters of virtual display focal depth. Figure 3 further plots the focal depth shift of the virtual image plane (i.e. the system power shift) as a function of the lateral displacement of the Alvarez lenses, denoted by the orange (square) curve, which demonstrates a total system power shift of 3 diopters.
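The fitting step described above can be sketched in a few lines of Python. The sample values below are hypothetical placeholders (the actual fitted A, B, E, and G are only reported graphically in Fig. 3), chosen to reflect roughly 1.667mm of shift per diopter of focal-depth change; the same procedure also lets one invert the fit to find the shift required for a target focal depth.

```python
# Sketch of the Fig. 3 fitting step: cubic fit of Eq. (2) to sampled
# (lateral shift, optical power) pairs, then numerical inversion.
import numpy as np

# Seven sampled zoom configurations (hypothetical placeholder values).
shift_mm = np.linspace(0.0, 5.0, 7)
power_D  = np.array([0.00, 0.47, 0.96, 1.47, 1.99, 2.50, 3.00])

# Fit Eq. (2): Phi(x) = A*x^3 + B*x^2 + E*x + G. np.polyfit returns [A, B, E, G].
A, B, E, G = np.polyfit(shift_mm, power_D, deg=3)
print(f"A = {A:.4g}, B = {B:.4g}, E = {E:.4g}, G = {G:.4g}")

# Invert the fitted curve to find the lateral shift needed for a target focal depth.
grid = np.linspace(0.0, 5.0, 501)
target_D = 2.0
x_needed = np.interp(target_D, np.polyval([A, B, E, G], grid), grid)
print(f"shift for a {target_D:.1f} D focal depth ~ {x_needed:.2f} mm")
```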

 

Fig. 3 Optical power of the Alvarez lens group as a function of distance of lateral translation. The orange curve (squares) shows the overall system power change, while the gray curve (triangles) demonstrates the spherical power of the Alvarez group.


The simulated optical performance of the virtual display was assessed over the full field of view in the display space using polychromatic modulation transfer function (MTF) curves. Figure 4 shows the polychromatic MTF curves, evaluated with a 3-mm eye pupil, for 5 fields at three different focal depths corresponding to 0.5, 1.5, and 3 diopters in the visual space, respectively. Over the extended depth range, the virtual display path preserves over 20% modulation at the designed Nyquist frequency of 63 cycles/mm, corresponding to the 8μm pixel size of the OLED display. An average of 50% modulation at a frequency of 35 cycles/mm is maintained over the full field of view for the focal range from 0 to 3 diopters.

 

Fig. 4 Modulation transfer function of several transverse (Tan) and radial (Rad) fields evaluated with a 3mm pupil diameter and a cutoff spatial frequency of 63 cycles/mm for the display path with its virtual image at a depth of (a) 0.5, (b) 1.5 and (c) 3 diopters.


Along with the MTF, several other metrics were used to characterize the optical performance of the virtual display path, such as the wavefront error and spot diagram. The wavefront error over the full field for the 3-diopter extended depth of field was held to under 1.5 waves. The average root mean square (RMS) spot diameter across the field at the far ends of the depth of field is about 14μm. This error is largely due to lateral chromatic aberration and astigmatism. Lateral chromatic aberration results from a difference in lateral magnification for each wavelength and can be digitally corrected, much like distortion. Unfortunately, due to the off-axis, non-rotationally symmetric design of the freeform Alvarez lenses, astigmatism is inherent to the optical system. This problem can potentially be reduced by further optimizing the Alvarez lens group using higher-order terms. The distortion grid along with the magnification of the virtual image over the full focal range was analyzed, and Figs. 5(a) through 5(c) plot the distortion grids at the focal depths of 0.5, 1.5, and 3 diopters, respectively. The design shows <3% distortion and <1% magnification error over the full field for the extended depth of field. This small amount of residual distortion can easily be corrected by image processing to pre-warp the original image.
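For illustration only, a minimal sketch of such a digital pre-warp is given below; it applies a simple radial remapping with a slightly different coefficient per color channel, so that lateral color can be compensated in the same pass as distortion. The k coefficients and the nearest-neighbor resampling are placeholder choices, not the calibration actually used; in practice the mapping would be fitted to the distortion grids of Fig. 5 at each focal depth.

```python
# Hypothetical radial pre-warp applied to a frame before it is sent to the OLED.
# One distortion coefficient per color channel also compensates lateral color.
import numpy as np

def prewarp(image, k_per_channel=(0.028, 0.030, 0.032)):
    """Nearest-neighbor inverse radial warp with a per-channel coefficient k."""
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    # Normalized coordinates centered on the assumed optical axis (image center).
    xn = (xx - w / 2) / (w / 2)
    yn = (yy - h / 2) / (h / 2)
    r2 = xn**2 + yn**2
    out = np.zeros_like(image)
    for c, k in enumerate(k_per_channel):           # per-channel radial scaling
        xs = np.clip((xn * (1 + k * r2)) * (w / 2) + w / 2, 0, w - 1).astype(int)
        ys = np.clip((yn * (1 + k * r2)) * (h / 2) + h / 2, 0, h - 1).astype(int)
        out[..., c] = image[ys, xs, c]
    return out

frame = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)  # stand-in frame
print(prewarp(frame).shape)    # (1080, 1920, 3)
```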

 

Fig. 5 Distortion grid of display path for full field with its virtual image at a depth of (a) 0.5, (b) 1.5 and (c) 3 diopters.


For the proposed prototype of the varifocal OST-HMD design shown in Fig. 2, a PI M-633.4U piezo linear stage was used as the electronic linear actuator to drive the lateral shift of the Alvarez lenses. Due to the symmetric form of the freeform Alvarez lenses, only one linear actuator is needed per eye to achieve equal and opposite translation. Figure 6(a) shows the mechanical mount of a binocular, vari-focal OST-HMD prototype fitted on an average-sized human head model, while Fig. 6(b) is an enlarged view of the Alvarez lens module with the integrated linear stage, showing the relative size of the linear stage with respect to the lenses. The overall width of the OST-HMD system is 200mm, with a depth of 95mm and an interocular distance of 60mm. The mechanical setup utilizes one piezo actuator attached to a small gear that produces the symmetric, bidirectional translation of the two Alvarez lenses with a single movement. The M-633.4U stage offers a translation speed of 250mm/s, producing a 50Hz transition speed for a focal depth shift of the virtual display from 0 to 3 diopters and a 150Hz transition speed for a 1-diopter focal depth shift. In the prototyped system, the eyepiece lenses were cropped to achieve an eye clearance of >20mm and a 10mm EPD. For the mechanical design, the lenses were individually aligned into a larger housing and held by set screws, which yields a smaller tolerance stack and more compensation in the optical design, making it easier to achieve the desired maximum MTF and allowing the housing to be 3D printed. Each linear actuator has a step resolution of 100nm and a repeatability of 200nm, while the 3D-printed mounts had a mechanical tolerance of 100μm, so they have very little impact on the optical tolerances and MTF sensitivity.
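The quoted transition rates follow directly from the stage speed and the shift-per-diopter figure given earlier; a short sketch of the arithmetic:

```python
# Focal-transition rates implied by the stage speed and the Alvarez shift scale.
stage_speed_mm_s = 250.0          # PI M-633.4U maximum translation speed
shift_per_diopter_mm = 1.667      # lateral Alvarez shift per diopter of focal change

for diopter_step in (1.0, 3.0):
    travel_mm = diopter_step * shift_per_diopter_mm
    rate_hz = stage_speed_mm_s / travel_mm        # full transitions per second
    print(f"{diopter_step:.0f} D step: {travel_mm:.2f} mm travel -> ~{rate_hz:.0f} Hz")
    # ~150 Hz for a 1-diopter step, ~50 Hz for the full 0-to-3-diopter sweep
```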

 

Fig. 6 (a) 3D model of a binocular vari-focal OST-HMD prototype with piezo linear actuators (b) enlarged view of the Alvarez lens module.


3. System prototype and experimental demonstration

Figure 7 shows a monocular benchtop prototype of our varifocal OST-HMD system with the light paths of the real and virtual scenes superimposed. The light path for the virtual display is highlighted with red arrows, while the light path for the real-world view is shown with blue arrows. Due to parts availability in the laboratory and budget constraints, we modified our original optical design shown in Fig. 2 and built this benchtop prototype with a WUXGA eMagin OLED microdisplay having a 9.6μm pixel pitch and a Nyquist frequency of 52 cycles/mm. Instead of custom-made aspheric lenses, the relay group was composed of two identical eyepieces obtained from Sony HMZ-T3 HMDs to create a double-telecentric relay system. The freeform Alvarez lenses were samples provided by Dr. Rob Stevens of Adlens Ltd. and were placed at the intermediate pupil location. The lateral positional shifts of the lenses were controlled by linear translation stages. Lastly, the same eyepiece design as that used in our previous work [19] was modified to allow the see-through path to merge with the virtual scene. A Point Grey camera, along with a 16mm-focal-length lens from Edmund Optics, was placed at the exit pupil in place of the eye for image capture.

 

Fig. 7 Experimental setup of a monocular benchtop prototype of a varifocal-plane OST-HMD using freeform Alvarez lenses.


Figure 8 shows a qualitative demonstration of our prototyped benchtop varifocal OST-HMD. A set of virtual tumbling E letters was rendered as targets on the microdisplay while the user manipulated the focal depth of the virtual display through the interface device. The dimensions of the E letters were scaled such that they maintain the same angular size and resolution when viewed from the camera position. Therefore, the captured images of the E letters displayed at different focal depths are expected to be the same size, owing to the constant angular magnification of our designed system. For visual reference, two printed spoke targets were placed in the see-through path, one at 160mm and the other at 3000mm away from the camera. The two printed targets were also scaled so that they maintain the same angular size. By varying the lateral displacement of the Alvarez lens group by 3cm, the focal depth of the virtual display can be varied correspondingly from 6 to 0.33 diopters. Figure 8(a) shows the captured image of the scene with the camera focused at a depth of 160mm and the Alvarez lens group focused at the same depth, while Fig. 8(b) shows the captured image of the scene with the camera focused at a depth of 3000mm and the Alvarez lens group focused at the same depth. The image in Fig. 8(a) clearly shows that the printed target placed at 160mm, or 6 diopters, and the virtual E targets displayed at the same focal depth are sharply focused, while the printed target placed at 3000mm looks blurry, as expected. Similarly, the image in Fig. 8(b) clearly shows that the printed target placed at 3000mm, or 0.33 diopters, and the virtual E targets displayed at the same focal depth are sharply focused, while the printed target placed at 160mm looks blurry, as expected. A finger and hand, as seen in the images, were used as size and distance references.
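The target scaling rule used here is simply that a fixed visual angle θ at a distance d requires a physical size of 2d·tan(θ/2). A minimal sketch of this scaling, with a hypothetical 1-degree target angle, is given below.

```python
# Physical target size needed to subtend a constant visual angle at each depth.
import math

def size_for_constant_angle(distance_mm, angle_deg=1.0):
    """Physical size (mm) that subtends angle_deg when viewed from distance_mm."""
    return 2.0 * distance_mm * math.tan(math.radians(angle_deg) / 2.0)

for d in (160.0, 3000.0):          # the two test depths used in Fig. 8
    print(f"target at {d:5.0f} mm: "
          f"{size_for_constant_angle(d):6.1f} mm for a 1 deg angular size")
```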

 

Fig. 8 Qualitative demonstration of focus cue rendering in our vari-focal OST-HMD benchtop prototype: (a) A virtual image (tumbling E) was rendered at 160mm (i.e. 6 diopters) with the Alvarez lens and camera focused at the same depth along with physical reference objects in the see-through paths; (b) A virtual image (tumbling E) was rendered at 3000mm (i.e. 0.33 diopters) with the Alvarez lens and camera focused at the same depth along with physical reference objects in the see-through paths.


4. Conclusion

This paper presents a novel design of a vari-focal optical see-through head-mounted display system using freeform Alvarez lenses to dynamically shift the focal depth of the virtual image plane from 0 to 3 diopters. The study includes a comprehensive description of an optical design capable of making a large focal shift while still maintaining good image quality of the virtual display. Our design offers a >30° diagonal FOV and an angular resolution of 0.81 arcmins, with over 20% modulation contrast at the Nyquist frequency of the display and an average of >0.4 contrast over the full FOV. By optimizing the system at 7 different focal positions, we obtained a design of an Alvarez lens group that can tune the focal depth of the virtual display over a 3-diopter depth range with less than 6mm of lateral displacement of the Alvarez lenses. By using a high-speed piezo linear stage, the design is able to achieve a focal shift speed of 150Hz per diopter. This paper further demonstrates a benchtop prototype of a varifocal OST-HMD system using freeform Alvarez lenses and mostly off-the-shelf optics.

Disclosures

Dr. Hong Hua has a disclosed financial interest in Magic Leap, Inc. The terms of this arrangement have been properly disclosed to The University of Arizona and reviewed by the Institutional Review Committee in accordance with its conflict of interest policies.

References

1. J. Rolland and H. Hua, “Head-mounted display systems,” Encyclopedia of Optical Engineering, 1–13 (2005).

2. O. Cakmakci and J. Rolland, “Head-worn displays: a review,” J. Disp. Technol. 2(3), 199–216 (2006).

3. S. J. Watt, K. Akeley, M. O. Ernst, and M. S. Banks, “Focus cues affect perceived depth,” J. Vis. 5(10), 834–862 (2005).

4. H. Hua, “Enabling focus cues in head-mounted displays,” Proc. IEEE 105(5), 805–824 (2017).

5. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017).

6. G. E. Favalora, J. Napoli, D. M. Hall, R. K. Dorval, M. G. Giovinco, M. J. Richmond, and W. S. Chun, “100 million-voxel volumetric display,” Proc. SPIE 4712, 300–312 (2002).

7. J. P. Rolland, M. W. Krueger, and A. Goon, “Multifocal planes head-mounted displays,” Appl. Opt. 39(19), 3209–3215 (2000).

8. K. Akeley, S. J. Watt, A. R. Girshick, and M. S. Banks, “A stereo display prototype with multiple focal distances,” ACM Trans. Graph. 23(3), 804–813 (2004).

9. S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010).

10. X. Hu and H. Hua, “Design and assessment of a depth-fused multi-focal-plane display prototype,” J. Disp. Technol. 10(4), 308–316 (2014).

11. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. 31(4), 1–11 (2012).

12. A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 1–11 (2014).

13. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014).

14. W. Song, Y. Wang, D. Cheng, and Y. Liu, “Light field head-mounted display with correct focus cue using micro structure array,” Chin. Opt. Lett. 12(6), 060010 (2014).

15. H. Huang and H. Hua, “An integral-imaging-based head-mounted light field display using a tunable lens and aperture array,” J. Soc. Inf. Disp. 25(3), 200–207 (2017).

16. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017).

17. H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express 26(13), 17578–17590 (2018).

18. T. Shibata, T. Kawai, K. Ohta, M. Otsuki, N. Miyake, Y. Yoshihara, and T. Iwasaki, “Stereoscopic 3-D display with optical correction for the reduction of the discrepancy between accommodation and convergence,” J. Soc. Inf. Disp. 13(8), 665–671 (2005).

19. S. Shiwa, K. Omura, and F. Kishino, “Proposal for a 3-D display with accommodative compensation: 3DDAC,” J. Soc. Inf. Disp. 4(4), 255–261 (1996).

20. S. Liu, H. Hua, and D. Cheng, “A novel prototype for an optical see-through head-mounted display with addressable focus cues,” IEEE Trans. Vis. Comput. Graph. 16(3), 381–393 (2010).

21. D. Dunn, P. Chakravarthula, Q. Dong, and H. Fuchs, “Mitigating vergence-accommodation conflict for near-eye displays via deformable beamsplitters,” Proc. SPIE 10676, 104 (2018).

22. M. J. Moghimi, B. J. Lutzenberger, B. M. Kaylor, and D. L. Dickensheets, “MOEMS deformable mirrors for focus control in vital microscopy,” J. Micro. Nanolithogr. MEMS MOEMS 10(2), 023005 (2011).

23. M. Kang and R. Yue, “Variable-focus liquid lens based on EWOD,” J. Adhes. Sci. Technol. 26(12–17), 1941–1946 (2012).

24. S. T. Choi, J. Y. Lee, J. O. Kwon, S. Lee, and W. Kim, “Liquid-filled varifocal lens on a chip,” Proc. SPIE 7208, MOEMS and Miniaturized Systems VIII (2009).

25. S. Suyama, M. Date, and H. Takada, “Three-dimensional display system with dual-frequency liquid-crystal varifocal lens,” Jpn. J. Appl. Phys. 39(2A2R), 480–484 (2000).

26. X. Wang, Y. Qin, H. Hua, Y. H. Lee, and S. T. Wu, “Digitally switchable multi-focal lens using freeform optics,” Opt. Express 26(8), 11007–11017 (2018).

27. A. W. Lohmann, “A new class of varifocal lenses,” Appl. Opt. 9(7), 1669–1671 (1970).

28. M. Bawart, A. Jesacher, P. Zelger, S. Bernet, and M. Ritsch-Marte, “Modified Alvarez lens for high-speed focusing,” Opt. Express 25(24), 29847–29855 (2017).

29. Y. Zou, W. Zhang, F. S. Chau, and G. Zhou, “Miniature adjustable-focus endoscope with a solid electrically tunable lens,” Opt. Express 23(16), 20582–20592 (2015).

30. A. Wilson and H. Hua, “High-resolution optical see-through vari-focal-plane head-mounted display using freeform Alvarez lenses,” Proc. SPIE 10676, 140 (2018).

31. S. Liu, “Methods for generating addressable focus cues in stereoscopic displays,” Ph.D. dissertation (University of Arizona, 2010).

32. A. Wilson and H. Hua, “Design and prototype of an augmented reality display with per-pixel mutual occlusion capability,” Opt. Express 25(24), 30539–30549 (2017).
