
Design of retinal-projection-based near-eye display with contact lens

Open Access

Abstract

We propose a design of a retinal-projection-based near-eye display for achieving ultra-large field of view, vision correction, and occlusion. Our solution is highlighted by a contact lens combo, a transparent organic light-emitting diode panel, and a twisted nematic liquid crystal panel. Its design rules are set forth in detail, followed by the results and discussion regarding the field of view, angular resolution, modulation transfer function, contrast ratio, distortion, and simulated imaging.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In recent years, the notion of augmented reality (AR) [1] has gone viral thanks to staggering venture investments and extensive media hype. With AR, users are able to view the real world overlaid with computer-generated imagery and information. Such a user experience can be realized by two types of optical solutions, i.e. video see-through displays and optical see-through near-eye displays (NEDs) [2]. The former are usually deployed on well-established mobile devices such as smartphones and tablets, while the latter run on still-maturing wearable devices, e.g. smart glasses or headsets. As far as user experience is concerned, optical see-through NEDs outperform video see-through displays in that what you see is what you get. Unfortunately, an ideal optical see-through NED that fully lives up to the requirements of AR remains a challenge. For example, combiner-based NEDs, including beam splitters [3–5], semi-reflective mirrors [6–8], and holographic reflectors [9–11], are often bulky and heavy if designed for a large field of view (FOV). Waveguide-based NEDs, including both planar [12–14] and freeform [15–17] waveguides, are more compact in form factor, as the optical path can be compressed into the waveguide. However, once light enters a waveguide, the maximum angle at which it can leave is bounded by total internal reflection and the means of out-coupling. Retinal-projection-based or direct-view NEDs, including retinal scanning [18–20], pinlight [21], and iOptik [22], in which the image is directly projected onto the retina, can offer both compactness and large FOVs, and yet each has its own issues. Retinal scanning is vulnerable to the rotation of the eyeball. The pinlight display struggles with changes in gaze direction, pupil size, and the eye's focal state. The iOptik, a proprietary technology of Innovega, is a contact lens embedded with a polarizer and a band-pass filter. Despite years of development, the manufacturability of such a contact lens remains to be improved.

Unlike video see-through displays, optical see-through NEDs are wearable devices. Therefore, optics aside, ergonomics merits special care as well. One of the ergonomic pain points is to save visually impaired users the trouble of wearing extra eyeglasses. In earlier attempts, we introduced combiner-based [23–25], waveguide-based [26–28], and retinal-projection-based NEDs [29–31], which are merged with prescription or corrective lenses for vision correction. In this paper, we extend to a different scenario, in which a subset of the population prefers to wear contact lenses in the hope of better performance in outdoor activities. To satisfy this niche, a retinal-projection-based NED is proposed, featuring a contact lens combo, a transparent organic light-emitting diode (OLED) panel, and a twisted nematic liquid crystal (TN-LC) panel. In what follows, its structure, design rules, and results and discussion are elaborated.

2. Design principle

2.1 Proposed structure

Figure 1 is a schematic drawing of the proposed NED, which involves four major components: an OLED panel, a TN-LC panel, a contact lens combo (comprising a contact lens, a patterned analyzer, and a microlens), and an eye. The OLED panel is supposed to be transparent and is responsible for delivering the virtual images. To the inner side of the OLED panel is attached a polarizer that is vertically polarized. The TN-LC panel is capable of switching the polarization. The contact lens is tailored to correct the refractive anomalies of the eye, depending on the user's acuity. On top of the contact lens is coated a patterned analyzer, whose transmission axis (TA) at the inner part is the same as that of the polarizer but at the outer part orthogonal to it, as shown in the inset of Fig. 1, where the blue double arrows denote TAs. On top of the inner part of the analyzer is laminated a microlens, which is used to converge the light of the OLED. L is the diagonal dimension of the active area of the OLED panel. D is the distance between the OLED panel and the eye. Ri is the radius of the inner part of the patterned analyzer.

Fig. 1 Schematic drawing of the proposed NED, which involves four major components, i.e. an OLED panel, a TN-LC panel, a contact lens combo (including a contact lens, a patterned analyzer, and a microlens), and an eye. L is the diagonal dimension of the active area of the OLED panel. D is the distance between the OLED panel and the eye. Ri is the radius of the inner part of the patterned analyzer.

2.2 Eye

Prior to explaining the design rules of our NED, it is essential to understand the mechanism of the eye. From the perspective of geometric optics, the human eye is equivalent to a zoom lens system, mainly consisting of two focusing elements, i.e. the cornea and the lens, and a sensor, i.e. the retina [32]. For ease of calculation and explanation, a simplified eye is adopted, as shown in Fig. 2. It is composed of the cornea (anterior and posterior), the chambers (anterior and posterior) filled with aqueous humor, the pupil, the lens (anterior and posterior), the vitreous chamber filled with vitreous humor, and the retina. The cornea accounts for approximately two thirds of the eye's total diopter [33]. The lens, on the other hand, is responsible for fine-tuning the diopter of the eye in response to the object distance.

Fig. 2 Simplified eye structure, which is composed of the cornea (anterior and posterior), chambers (anterior and posterior) filled with aqueous humor, pupil, lens (anterior and posterior), vitreous chamber filled with vitreous humor, and retina.

When light emitted from object S first arrives at the anterior cornea, refraction occurs. By neglecting the thickness between adjacent surfaces and treating each surface as spherical, the object distance s, the image distance si after the ith surface, and the final image distance s′ can be correlated as [34]

\[ \frac{1}{s} + \frac{n_1}{s_1} = \frac{n_1 - 1}{R_1} \tag{1} \]

\[ -\frac{n_1}{s_1} + \frac{n_2}{s_2} = \frac{n_2 - n_1}{R_2} \tag{2} \]

\[ -\frac{n_2}{s_2} + \frac{n_3}{s_3} = \frac{n_3 - n_2}{R_3} \tag{3} \]

\[ -\frac{n_3}{s_3} + \frac{n_4}{s'} = \frac{n_4 - n_3}{R_4} \tag{4} \]

where n1, n2, n3, and n4 are in turn the refractive indices of the cornea, aqueous humor, lens, and vitreous humor, and R1, R2, R3, and R4 are the radii of curvature of the anterior cornea, posterior cornea, anterior lens, and posterior lens, respectively. Summing up Eqs. (1) to (4), we could write

\[ \frac{1}{s} + \frac{n_4}{s'} = \frac{n_1 - 1}{R_1} + \frac{n_2 - n_1}{R_2} + \frac{n_3 - n_2}{R_3} + \frac{n_4 - n_3}{R_4} = P \tag{5} \]

For a simpler expression, the right side of Eq. (5) is abbreviated as P. By letting s be infinitely large, the diopter of the eye Pe can be deduced as

\[ P_e = \frac{P}{n_4} \tag{6} \]

Substituting Eq. (6) into Eq. (5) yields

\[ s = \frac{s'}{\left(P_e s' - 1\right) n_4} \tag{7} \]

In practice, the image distance s′ shall be fixed to equal the length of the eyeball Le, which is about 24–25 mm for an adult [31]. Letting s′ = 24 mm and n4 = 1.3377, the object distance s can be calculated as a function of the diopter of the eye Pe, as shown in Fig. 3. If the target value of s is set as 3 m, Pe shall be 41.92 m−1. If the target value of s is set as 1.5 cm, Pe shall be 91.50 m−1, which is obviously impractical, as Pe usually maximizes at 53 m−1 [35]. Hence, the near point, i.e. the minimum object distance at which sharp focusing is possible, is 6.6 cm.
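For verification, Eq. (7) and its inverse can be evaluated numerically. The following Python sketch (the function names are ours; the values are taken from the text) reproduces the quoted figures.

```python
S_PRIME = 0.024  # image distance fixed to the eyeball length (m)
N4 = 1.3377      # refractive index of the vitreous humor

def object_distance(Pe, s_prime=S_PRIME, n4=N4):
    """Eq. (7): object distance s (m) for a given diopter of eye Pe (m^-1)."""
    return s_prime / ((Pe * s_prime - 1.0) * n4)

def eye_diopter(s, s_prime=S_PRIME, n4=N4):
    """Inverse of Eq. (7): Pe needed to focus an object at distance s (m)."""
    return 1.0 / (n4 * s) + 1.0 / s_prime

print(round(eye_diopter(3.0), 2))       # ~41.92 m^-1 for an object at 3 m
print(round(eye_diopter(0.015), 2))     # ~91.50 m^-1 for 1.5 cm (impractical)
print(round(object_distance(53.0), 3))  # ~0.066 m: the 6.6-cm near point
```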

Fig. 3 Calculated object distance s as a function of the diopter of eye. If the target value of object distance s is set as 3 m, Pe shall be 41.92 m−1. If the target value of object distance s is set as 1.5 cm, Pe shall be 91.50 m−1, which is obviously impractical as Pe usually maximizes at 53 m−1.

From the perspective of ophthalmology, the human eye is a complex and delicate sensory organ. On the retina, there are three types of photoreceptor cells, i.e. rods, cones, and ganglion cells [36]. Rods are sensitive to brightness at both high and low light levels. Cones are sensitive to colors but only work at high light levels. Ganglion cells contribute to sight indirectly, being credited with the circadian rhythm and the pupillary reflex. The distribution of photoreceptor cells throughout the retina is uneven and highly concentrated at its center. Near the center of the retina is located a 1.5-mm pit, known as the fovea, which is richest in cones. The fovea is responsible for the 100% acuity or sharpest central vision, sometimes dubbed foveal vision. A bigger area surrounding the fovea is called the macula, which is 5.5 mm across and houses the largest numbers of both cones and rods [37]. If the optical axis is aligned with the center of the macula, the angular size of the macula θ, seen from air, can be approximated as

\[ \theta = 2\sin^{-1}\left(\frac{L_m n_{avg}}{\sqrt{L_m^2 + 4L_e^2}}\right) = 17.9^{\circ} \tag{8} \]

where Lm is the size of the macula and navg is the average refractive index of the eye. Given Lm = 5.5 mm and navg = 1.3692, θ is 17.9°. As the number of photoreceptor cells decreases all the way from the fovea to the periphery, visual acuity drops rapidly toward the retina's periphery [38]. At the center of the fovea, visual acuity is 1.0, also known as 20/20 vision. At the periphery of the macula, visual acuity is ca. 0.31. When the deviation from the fovea exceeds 60°, visual acuity is effectively 0 [38].
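A quick numerical check of Eq. (8), using the values quoted above (a minimal sketch; the helper code is ours):

```python
from math import asin, degrees, sqrt

Lm, Le, n_avg = 5.5, 24.0, 1.3692  # macula size, eye length (mm), avg index

# Eq. (8): angular size of the macula as seen from air
theta = 2 * degrees(asin(Lm * n_avg / sqrt(Lm**2 + 4 * Le**2)))
print(round(theta, 1))  # 17.9 (degrees)
```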

2.3 Design rules

The design of the proposed NED deals with two optical paths, one for imaging the real object and the other for imaging the virtual object. The optical path diagrams for the real and virtual images are illustrated in Fig. 4 and Fig. 5, respectively. In imaging the real object, both OLED and TN-LC are turned off. As shown in Fig. 4, light rays emanating from the real object will first become vertically polarized after passing through the polarizer. When the light is incident on the off-state TN-LC, a phenomenon known as optical activity occurs [34], whereby the polarization of light at the exit is rotated by 90°, i.e. horizontally polarized if viewed head-on. When rays reach the inner part of the analyzer, which is vertically polarized, they will be blocked, implying that the brightness of the real image depends heavily on the size of the inner part of the analyzer. Only when rays reach the outer part of the analyzer, which is horizontally polarized, can they be transmitted and then refracted in turn by the contact lens, cornea, and lens. Finally, an inverted image will be formed on the retina. The design of the contact lens shall follow from the lensmaker's equation [34], as given by Eq. (9)

\[ P_c = (n_c - 1)\left[\frac{1}{R_{cf}} - \frac{1}{R_{cb}}\right] \tag{9} \]

where Pc is the diopter or optical power of the contact lens, which can be obtained directly from the prescription, nc is the refractive index of the contact lens, and Rcf and Rcb are the radii of curvature of the front and back surfaces of the contact lens, respectively. By factoring in Pc, Eq. (7) is modified as

\[ s = \frac{s'}{\left(P_e n_4 + P_c\right) s' - n_4} \tag{10} \]

Fig. 4 Optical path diagram for imaging the real image, for which both OLED and TN-LC are turned off.

Fig. 5 Optical path diagram for imaging the virtual image, for which both OLED and TN-LC are turned on.

In imaging the virtual object, both OLED and TN-LC are turned on. As shown in Fig. 5, light rays emanating from the virtual object, i.e. the screen of the OLED, will first become vertically polarized after passing through the polarizer. When incident on the on-state TN-LC, the optical activity is deactivated, so the polarization of light at the exit remains unchanged. When rays reach the outer part of the analyzer, they will be blocked; when they reach the inner part, they can be transmitted. This arrangement of the patterned analyzer guarantees that, when viewing the virtual object, no rays from the real object stand in the way. This is particularly important for outdoor usage, where strong ambient light would easily overwhelm the virtual object. Now the tables are turned, for the ambient light will be substantially attenuated by the patterned analyzer and even outshone by the OLED. In other words, the occlusion [2] between the real and virtual objects is enabled. Considering that the OLED is so close to the eye that it lies outside the range of accommodation, a microlens is required to pre-converge the rays and thereby compensate for the upper limit of the range of accommodation. Again employing the lensmaker's equation, the diopter of the microlens Pm is determined by

\[ P_m = (n_m - 1)\left[\frac{1}{R_{mf}} - \frac{1}{R_{mb}}\right] \tag{11} \]

where nm is the refractive index of the microlens, and Rmf and Rmb are the radii of curvature of its front and back surfaces, respectively. By intention, we let Rmb = Rcf so that the diopter of the analyzer is zero. Upon leaving the microlens and analyzer, rays are refracted in turn by the contact lens, cornea, and lens. Likewise, an inverted image will be formed on the retina. By factoring in Pm, Eq. (10) shall be modified as

\[ s = \frac{s'}{\left(P_e n_4 + P_c + P_m\right) s' - n_4} \tag{12} \]

from which it can be seen that the presence of Pm shortens the object distance s, which is identical to the distance D between the OLED panel and the eye. Moreover, since the diopter of the eye Pe may vary during accommodation, D could be manually adjusted so that s′ is maintained on the retina and a sharp image is formed. In fact, by revisiting Fig. 3, we can find that once the distance of the real object exceeds 3 m, Pe does not change very much. More importantly, although the physical distances of the real and virtual objects, calculated by Eqs. (10) and (12), are different, their psychological distances, as processed by the brain, will be equalized, since the depth cue of the virtual object tends to be coupled with that of the real object [39].
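To illustrate Eq. (12), the sketch below solves it for the microlens power Pm. The inputs (a −3 D contact lens, Pe = 41.92 m−1, D = 15 mm) are illustrative assumptions consistent with the examples in this paper, not the optimized values of Table 3.

```python
S_PRIME = 0.024  # eyeball length (m)
N4 = 1.3377      # refractive index of the vitreous humor

def microlens_diopter(D, Pe, Pc, s_prime=S_PRIME, n4=N4):
    """Solve Eq. (12) for Pm: the microlens power (m^-1) that images an
    OLED at distance D (m) onto the retina, given the eye power Pe and
    the contact lens power Pc."""
    return 1.0 / D + n4 / s_prime - Pe * n4 - Pc

# Illustrative inputs: a -3 D myope (Pc = -3) whose eye is accommodated
# for a 3-m real object (Pe = 41.92), viewing the OLED at D = 15 mm.
print(round(microlens_diopter(D=0.015, Pe=41.92, Pc=-3.0), 1))  # ~69.3 m^-1
```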

2.4 Field of view

FOV is a key indicator for evaluating the performance of an NED. Because the contact lens is usually much larger than the pupil and adheres tightly to the eye through the surface tension of tears [40], the FOV with or without the contact lens remains the same. To avoid ambiguity, the FOV of the real image, FOVr, by default along the diagonal direction, can be calculated by

\[ FOV_r = 2\tan^{-1}\sqrt{\tan^2\left(FOV_h/2\right) + \tan^2\left(FOV_v/2\right)} \tag{13} \]

where FOVh and FOVv stand for the horizontal and vertical FOVs, respectively. For a naked or unaided eye, whose FOV is measured as 150° (horizontal) by 120° (vertical) [2], FOVr is therefore 153° (diagonal). Referring to Fig. 6, the FOV of the virtual image, FOVv, is defined as the angular size of the OLED, which is written as

\[ FOV_v = 2\tan^{-1}\left(\frac{\sqrt{W^2 + H^2}}{2D}\right) \tag{14} \]

where W and H represent the horizontal and vertical dimensions of the OLED, respectively. It is apparent that FOVv hinges on the size of the OLED and enlarges as the eye gets closer to the OLED.
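Eqs. (13) and (14) can be checked numerically as follows; the 4:3 panel dimensions are inferred from the 1.7-inch diagonal of Section 2.6 and are therefore assumptions.

```python
from math import atan, degrees, radians, sqrt, tan

def fov_real(fov_h_deg, fov_v_deg):
    """Eq. (13): diagonal FOV (deg) from horizontal and vertical FOVs."""
    return 2 * degrees(atan(sqrt(tan(radians(fov_h_deg) / 2) ** 2
                                 + tan(radians(fov_v_deg) / 2) ** 2)))

def fov_virtual(W_mm, H_mm, D_mm):
    """Eq. (14): diagonal FOV (deg) of a W x H panel viewed at distance D."""
    return 2 * degrees(atan(sqrt(W_mm**2 + H_mm**2) / (2 * D_mm)))

print(round(fov_real(150, 120)))           # ~153 deg for the unaided eye
# Assumed 4:3 panel with a 1.7-inch (43.18-mm) diagonal, viewed at 15 mm:
print(round(fov_virtual(34.5, 25.9, 15)))  # ~110 deg
```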

Fig. 6 FOV of virtual image, which is defined as the angular size of OLED. It is apparent that FOVv hinges on the size of OLED and enlarges as the eye gets closer to OLED.

2.5 Contact lens combo

The contact lens combo consists of a contact lens, a patterned analyzer, and a microlens. The positions of the contact lens and the patterned analyzer are interchangeable. The microlens, patterned analyzer, and contact lens are center-aligned. The patterned analyzer can be fashioned via the photoalignment technique [41]. Consider a case in which a user has 3 diopters of myopia, disregarding astigmatism and other types of refractive anomalies. Per the said design rules, a contact lens combo can be tentatively designed using the parameters given in Table 1. It should be mentioned that these parameters are subject to change after the optimization, as will be done later. Incidentally, since a contact lens is a medical device, it is highly recommended to consult an optometrist or ophthalmologist for professional advice on whether it is suitable to wear a contact lens, the frequency of use, the choice of materials, water content, oxygen permeability, etc.

Table 1. Parameters for the contact lens combo

2.6 OLED panel

The OLED panel, acting as a virtual object, consists of an OLED and a polarizer. For the real image, it is switched off, whereas for the virtual image, it is switched on. Preferably, it should be highly transparent to enhance light utilization. Alternatively, OLEDs can be replaced by quantum dot light-emitting diodes [42] or other types of transparent displays. Since transparent OLEDs of merely a couple of inches are not yet available, a set of customized parameters is listed in Table 2, where the resolution is 1024 × 768, the diagonal is 1.7 inches, the pixel size is 33.73 µm, the transmittance of the OLED is 30%, the contrast ratio (CR) is 10000, and the transmittance of the polarizer is 49%. The overall transmittance or transparency of the OLED panel is 14.7%. For better transparency, the resolution has to be reduced so as to increase the aperture ratio, meaning that there is a tradeoff between transparency and resolution.
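As a sanity check on Table 2, the pixel size follows from the diagonal and resolution, and the overall transparency is the product of the two transmittances (a minimal sketch):

```python
from math import sqrt

diag_mm = 1.7 * 25.4                      # 1.7-inch diagonal in millimeters
n_diag = sqrt(1024**2 + 768**2)           # number of pixels along the diagonal
print(round(diag_mm / n_diag * 1000, 2))  # ~33.73 (um), matching Table 2

T_oled, T_polarizer = 0.30, 0.49
print(round(T_oled * T_polarizer, 3))     # 0.147, i.e. 14.7% transparency
```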

Table 2. Parameters for OLED panel

2.7 TN-LC panel

The TN-LC panel, acting as a polarization rotator, consists of a TN-LC [43] sandwiched between two glass substrates coated with indium tin oxide (ITO) electrodes and polyimide (PI) alignment layers, as shown in Fig. 7. The switching of the TN-LC should be synchronized with the OLED. In the off-state, with null voltage applied, the LC directors at the entrance and exit are perpendicular to one another. Under such a configuration, the polarization of the emerging light will be rotated by 90° via the optical activity [43]. In the on-state, with a voltage applied, the twist of the LC directors is unwound, thereby lifting the optical activity. As a result, the polarization of the emerging light will remain intact. To fulfill the first maximum of the Mauguin condition [43], the cell gap of the LC layer dlc, the birefringence of the LC Δn, and the wavelength λ shall satisfy

\[ d_{lc} = \frac{\sqrt{3}\,\lambda}{2\Delta n} \tag{15} \]

Say Δn = 0.1 and λ = 543 nm, then dlc = 4.7 μm. Though the Mauguin condition can be fulfilled at higher maximums, a thicker cell gap will definitely slow down the switching of the TN-LC [43]. Since the Mauguin condition is wavelength sensitive, the polarization rotation will not be perfect over the entire spectrum. This results in a loss of transmittance as well as an imperfect occlusion between the real and virtual objects.
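A short numerical check of Eq. (15), which also illustrates the wavelength sensitivity across the three design wavelengths used later (the helper function is ours):

```python
from math import sqrt

def cell_gap_um(wavelength_nm, delta_n):
    """Eq. (15): cell gap (um) at the first maximum of the Mauguin condition."""
    return sqrt(3) * wavelength_nm / (2 * delta_n) / 1000

print(round(cell_gap_um(543, 0.1), 1))  # 4.7 um, as quoted
# Wavelength sensitivity: a 4.7-um cell is off-condition in blue and red.
for wl in (458, 543, 632.8):
    print(wl, round(cell_gap_um(wl, 0.1), 2))  # ~3.97, 4.70, 5.48 um
```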

Fig. 7 Polarization switching of TN-LC panel. In the (a) off-state (null voltage applied), LC directors at the entrance and exit are perpendicular to one another. Under such configuration, the polarization of emerging light will be rotated by 90° via the optical activity. In the (b) on-state (a voltage applied), the twist of LC directors is unwound, thereby lifting the optical activity. As a result, the polarization of emerging light will remain intact.

3. Results and discussion

3.1 Simulation settings

Our simulation is implemented with the optical design software Code V (Synopsys), which is based on ray tracing [44] and capable of analyzing imaging properties, including the modulation transfer function (MTF), distortion, and imaging simulation. The design wavelengths are 458, 543, and 632.8 nm. The fields of 0° (center of fovea), 9° (periphery of macula), and 55° are selected. As the OLED, polarizer, TN-LC, and patterned analyzer are free of diopters, they are omitted during the calculation of imaging properties.

The surfaces are numbered as in Fig. 8. The real and virtual objects are situated 3 m and 15 mm away from the eye, respectively. Surfaces 1 to 2 (S1 to S2) make up the microlens. Surfaces 2 to 3 (S2 to S3) make up the contact lens. Surfaces 3 to 8 (S3 to S8) make up the eye, of which S3 is the anterior cornea, S4 the posterior cornea, S5 the pupil, S6 the anterior lens, S7 the posterior lens, and S8 the retina. In calculating the real image, the real object and surfaces S2 to S8 are active, of which S5 is assigned as the stop. In calculating the virtual image, the virtual object and surfaces S1 to S8 are active, of which S2 is assigned as the stop.

Fig. 8 Numbering of surfaces. The real and virtual objects are situated at 3 m and 15 mm away from the eye, respectively. Surfaces 1 to 2 (S1 to S2) make up the microlens. Surfaces 2 to 3 (S2 to S3) make up the contact lens. Surfaces 3 to 8 (S3 to S8) make up the eye, of which, S3 is anterior cornea, S4 posterior cornea, S5 pupil, S6 anterior lens, S7 posterior lens, and S8 retina. In calculating the real image, real object and surfaces from S2 to S8 are active. In calculating the virtual image, virtual object and surfaces from S1 to S8 are active.

To model the eye, its structural parameters are initially adopted from the schematic eye proposed by Navarro et al. [45]. Along with the preliminary parameters enumerated in the previous section, we could build an initial NED design by presetting the surfaces of each element. Two optimizations are carried out in turn for the real and virtual images. At first, an optimization for the real image is done by constraining the length of the eye to be 24 mm. Then, fixing the as-optimized parameters of the eye and contact lens, an optimization for the virtual image is done by tweaking the microlens only. The final parameters obtained after the optimization are summarized in Table 3. In addition, more detailed parameters defining the aspherical surfaces are disclosed in Table 4.

Table 3. Parameters used for the simulation

Table 4. Parameters for aspherical surfaces

3.2 Field of view

Table 5 lists the parameters necessary for evaluating FOVs. According to Eqs. (13) and (14), FOVr and FOVv are calculated as 153° (diagonal) and 110° (diagonal), respectively.

Table 5. Parameters for calculating FOVs

3.3 Angular resolution

Angular resolution, measured in arcminutes (′), of the image formed on the retina relies on the resolutions of both the object and the eye. Because the resolution of the real object is usually far higher than that of the eye, the angular resolution of the real image, ARr, shall be equal to the latter, which is the reciprocal of visual acuity [38]. Thus,

\[ AR_r = \frac{1}{\text{visual acuity}} \tag{16} \]

Under the best condition, where the visual acuity is 1.0, the angular resolution is 1′. Because the resolution of the OLED, defined as the average angular subtense of a single pixel, is usually far lower than that of the eye, the angular resolution of the virtual image, ARv, shall on the contrary be decided by the former, which can be calculated by dividing FOVv by the number of pixels N along the diagonal, expressed as

\[ AR_v = \frac{60\, FOV_v}{N} = \frac{60\, FOV_v}{\sqrt{N_h^2 + N_v^2}} \tag{17} \]

where Nh and Nv are the numbers of pixels along the horizontal and vertical directions, respectively. For FOVv = 110°, Nh = 1024, and Nv = 768, the angular resolution is 5.16′. To reach the visual limit of 1′, a much higher resolution up to 7680 × 4320, i.e. 8K ultra-high-definition, will suffice, for which the angular resolution is as fine as 0.75′. It should also be cautioned that the above definition of ARv will no longer hold once the resolution of the OLED is better than 2′, if it is legitimate to think of the imaging of the eye as a sampling process [46].

3.4 Modulation transfer function

By computing the auto-correlation of the pupil function [47], diffraction MTFs of both real and virtual images are calculated as a function of spatial frequency―the number of cycles or line pairs per millimeter [48]―for the diffraction limit and fields of 0°, 9° and 55° (tangential and radial), as shown in Fig. 9. For the real image, MTFs within the macula are above 0.4 at 6 cycles/mm. For the virtual image, MTFs within the macula are above 0.4 at 20 cycles/mm.
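For reference, the autocorrelation of an unaberrated circular pupil yields the well-known closed-form diffraction-limited MTF sketched below; the pupil diameter and image-space numerical aperture here are illustrative assumptions, not the optimized design values.

```python
from math import acos, pi, sqrt

# Illustrative assumptions (not the optimized design values): a 4-mm pupil,
# a 24-mm eye, n4 = 1.3377, and the 543-nm design wavelength.
n4, pupil_radius_mm, eye_len_mm, wl_mm = 1.3377, 2.0, 24.0, 543e-6
NA = n4 * pupil_radius_mm / eye_len_mm  # paraxial image-space NA
nu_c = 2 * NA / wl_mm                   # diffraction cutoff (cycles/mm)

def diffraction_mtf(nu):
    """Diffraction-limited MTF of a circular pupil (autocorrelation result)."""
    x = min(nu / nu_c, 1.0)
    return (2 / pi) * (acos(x) - x * sqrt(1 - x * x))

for nu in (6, 20, 100):  # spatial frequencies on the retina, cycles/mm
    print(nu, round(diffraction_mtf(nu), 3))
```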

Fig. 9 Calculated MTFs of (a) real and (b) virtual images. For the real image, MTFs within the macula are above 0.4 at 6 cycles/mm. For the virtual image, MTFs within the macula are above 0.4 at 20 cycles/mm.

3.5 Contrast ratio

Contrast ratio, if treated as a transient quantity, is defined as the ratio of maximum intensity to minimum intensity [44], and it can be derived as [31]

\[ CR = \frac{1 + M \cdot MTF}{1 - M \cdot MTF} \tag{18} \]

where M denotes the modulation of the object, i.e.

\[ M = \frac{CR_o - 1}{CR_o + 1} \tag{19} \]

where CRo is the CR of the object. For the real object, CRo can be infinitely large, so M is deemed to be 1. For the virtual object, CRo is the CR of the OLED. Since the real and virtual objects are situated at different distances, for an apples-to-apples comparison the foregoing spatial frequency shall be converted to the number of cycles per degree. For the field of 0° at a spatial frequency of 3.89 cycles/degree, which corresponds to a pixel size of 33.73 μm at a distance of 15 mm, the CRs of the real and virtual images are calculated as 1999 (MTF = 0.999) and 11 (MTF = 0.827), respectively. By the way, the influence of the TN-LC panel on the CR can be neglected, as it diminishes the maximum and minimum intensities of both real and virtual objects in proportion.
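Eqs. (18) and (19), together with the cycles-per-degree conversion, can be checked as follows (a minimal sketch; values from the text):

```python
from math import atan, degrees

def image_cr(mtf, cr_object=None):
    """Eqs. (18)-(19): image CR from the MTF and the object's CR.
    cr_object=None stands for an infinitely large object CR (M = 1)."""
    M = 1.0 if cr_object is None else (cr_object - 1) / (cr_object + 1)
    return (1 + M * mtf) / (1 - M * mtf)

print(round(image_cr(0.999)))         # ~1999 for the real image
print(round(image_cr(0.827, 10000)))  # ~11 for the virtual image

# One cycle spans two 33.73-um pixels seen at 15 mm: ~3.9 cycles/degree
print(round(1 / degrees(atan(2 * 33.73e-3 / 15)), 2))
```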

3.6 Distortion

Distortions of the real and virtual images, defined as the displacement of the image height or ray location, are plotted in Fig. 10, where the distortions of the real and virtual images are less than 29% and 45%, respectively. Considering that distortion is an intrinsic property of the eye [49], an absolutely distortion-free NED might not be necessary. Instead, a certain distortion would be advantageous for meshing the virtual world perfectly with the real world, as long as the distortions of the real and virtual images could be overlapped.
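For reading Fig. 10, the sketch below shows the conventional percentage-distortion metric, i.e. the relative displacement of the actual image height from the paraxial one; the sample ray heights are hypothetical, not taken from the design.

```python
def distortion_percent(y_actual_mm, y_paraxial_mm):
    """Percentage distortion: relative displacement of the actual image
    height (chief-ray location) from the paraxial image height."""
    return 100.0 * (y_actual_mm - y_paraxial_mm) / y_paraxial_mm

# Hypothetical ray heights at a large field angle (barrel distortion):
print(round(distortion_percent(8.2, 11.5), 1))  # -28.7 (%)
```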

Fig. 10 Calculated distortions of real and virtual images. For real and virtual images, the distortions are less than 29% and 45%, respectively.

3.7 Simulated imaging

For a qualitative analysis of imaging quality, both real and virtual images are visualized via an imaging simulation that takes into account the effects of distortion, aberration blurring, diffraction blurring, and relative illumination, as shown in Fig. 11. By comparing the original and simulated images, it can be seen that the real image is inherently distorted at large field angles, while the virtual image turns out to be more blurred and exhibits more pronounced chromatic aberration. It has to be mentioned that these simulated images are what appear on the retina.

Fig. 11 (a) Original (photographer: C. P. Chen, location: Flaming Mountains, Turpan, China), (b) real, and (c) virtual images. By comparing the original and simulated images, it can be seen that the real image is inherently distorted at large field angles, while the virtual image turns out to be more blurred and more pronounced in the chromatic aberration.

4. Conclusions

A retinal-projection-based NED and its design rules are proposed. Its structure is characterized by a contact lens combo, a transparent OLED panel, and a TN-LC panel. Based on the simulation, its key performance indicators, including FOV, angular resolution, MTF, CR, and distortion, have been studied. For the real image, the FOV is 153° (diagonal), the angular resolution is 1′, the MTF is above 0.4 at 6 cycles/mm, the CR is 1999, and the distortion is less than 29%. For the virtual image, the FOV is 110° (diagonal), the angular resolution is 5.16′, the MTF is above 0.4 at 20 cycles/mm, the CR is 11, and the distortion is less than 45%. Targeting the niche market of contact-lens-wearing users and outdoor AR applications, our solution offers several technical edges or possibilities that might be difficult to achieve with current practices. First, its ultra-large FOVs for both real and virtual images are unparalleled. Second, as opposed to eyeglasses, the contact lens combo saves more room. Moreover, similar to polarized sunglasses, the analyzer within the combo could block ultraviolet light and mitigate glare [50]. Third, apart from being an optical device, the contact lens combo can even cater to cosmetic needs by tinting the non-optical area of the lens. Fourth, the occlusion between real and virtual objects is achieved by the patterning of the analyzer and the polarization switching of the TN-LC.

Funding

Science and Technology Commission of Shanghai Municipality (1701H169200); Shanghai Jiao Tong University (AF0300204, WF101103001/085); Shanghai Rockers Inc. (15H100000157).

Acknowledgments

Special thanks to Prof. Yi-Hsin Lin (National Chiao Tung University) for her valuable advice during the submission.

References and links

1. Wikipedia, "Augmented reality," https://en.wikipedia.org/wiki/augmented_reality.

2. W. Barfield, Fundamentals of Wearable Computers and Augmented Reality, 2nd Edition (Chemical Rubber Company, 2015).

3. H.-S. Chen, Y.-J. Wang, P.-J. Chen, and Y.-H. Lin, "Electrically adjustable location of a projected image in augmented reality via a liquid-crystal lens," Opt. Express 23(22), 28154–28162 (2015).

4. Y.-J. Wang, P.-J. Chen, X. Liang, and Y.-H. Lin, "Augmented reality with image registration, vision correction and sunlight readability via liquid crystal devices," Sci. Rep. 7(1), 433 (2017).

5. Q. Gao, J. Liu, X. Duan, T. Zhao, X. Li, and P. Liu, "Compact see-through 3D head-mounted display based on wavefront modulation with holographic grating filter," Opt. Express 25(7), 8412–8424 (2017).

6. J. P. Rolland, "Wide-angle, off-axis, see-through head-mounted display," Opt. Eng. 39(7), 1760–1767 (2000).

7. S. Liu, H. Hua, and D. Cheng, "A novel prototype for an optical see-through head-mounted display with addressable focus cues," IEEE Trans. Vis. Comput. Graph. 16(3), 381–393 (2010).

8. R. Zhu, G. Tan, J. Yuan, and S.-T. Wu, "Functional reflective polarizer for augmented reality and color vision deficiency," Opt. Express 24(5), 5431–5441 (2016).

9. C. Jang, C.-K. Lee, J. Jeong, G. Li, S. Lee, J. Yeom, K. Hong, and B. Lee, "Recent progress in see-through three-dimensional displays using holographic optical elements," Appl. Opt. 55(3), A71–A85 (2016).

10. A. Maimone, A. Georgiou, and J. S. Kollin, "Holographic near-eye displays for virtual and augmented reality," ACM Trans. Graph. 36(4), 85 (2017).

11. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, "Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina," ACM Trans. Graph. 36(6), 190 (2017).

12. Y. Amitai, "Extremely compact high-performance HMDs based on substrate-guided optical element," in SID Symposium (2004), pp. 310–313.

13. T. Levola, "Diffractive optics for virtual reality displays," J. Soc. Inf. Disp. 14(5), 467–475 (2006).

14. H. Mukawa, K. Akutsu, I. Matsumura, S. Nakano, T. Yoshida, M. Kuwahara, and K. Aiki, "A full-color eyewear display using planar waveguides with reflection volume holograms," J. Soc. Inf. Disp. 17(3), 185–193 (2009).

15. D. Cheng, Y. Wang, H. Hua, and M. M. Talha, "Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism," Appl. Opt. 48(14), 2655–2668 (2009).

16. Q. Wang, D. Cheng, Y. Wang, H. Hua, and G. Jin, "Design, tolerance, and fabrication of an optical see-through head-mounted display with free-form surface elements," Appl. Opt. 52(7), C88–C99 (2013).

17. X. Hu and H. Hua, "High-resolution optical see-through multi-focal-plane head-mounted display using freeform optics," Opt. Express 22(11), 13896–13903 (2014).

18. S. C. McQuaide, E. J. Seibel, J. P. Kelly, B. T. Schowengerdt, and T. A. Furness III, "A retinal scanning display system that produces multiple focal planes with a deformable membrane mirror," Displays 24(2), 65–72 (2003).

19. M. Sugawara, M. Suzuki, and N. Miyauchi, "Retinal imaging laser eyewear with focus-free and augmented reality," in SID Display Week (2016), pp. 164–167.

20. T. North, M. Wagner, S. Bourquin, and L. Kilcher, "Compact and high-brightness helmet-mounted head-up display system by retinal laser projection," J. Disp. Technol. 12(9), 982–985 (2016).

21. A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, "Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources," ACM Trans. Graph. 33(4), 89 (2014).

22. R. Sprague, A. Zhang, L. Hendricks, T. O'Brien, J. Ford, E. Tremblay, and T. Rutherford, "Novel HMD concepts from the DARPA SCENICC program," Proc. SPIE 8383, 838302 (2012).

23. C. P. Chen, Z. Zhang, and X. Yang, "A head-mounted smart display device for augmented reality," CN Patent 201610075988.7 (2016).

24. L. Zhou, C. P. Chen, Y. Wu, K. Wang, and Z. Zhang, "See-through near-eye displays for visual impairment," in The 23rd International Display Workshops in conjunction with Asia Display (2016), pp. 1114–1115.

25. L. Zhou, C. P. Chen, Y. Wu, Z. Zhang, K. Wang, B. Yu, and Y. Li, "See-through near-eye displays enabling vision correction," Opt. Express 25(3), 2130–2142 (2017).

26. C. P. Chen, Y. Wu, and L. Zhou, "An optical display device for augmented reality," CN Patent 201610112824.7 (2016).

27. Y. Wu, C. P. Chen, L. Zhou, Y. Li, B. Yu, and H. Jin, "Near-eye display for vision correction with large FOV," in SID Display Week (2017), pp. 767–770.

28. Y. Wu, C. P. Chen, L. Zhou, Y. Li, B. Yu, and H. Jin, "Design of see-through near-eye display for presbyopia," Opt. Express 25(8), 8937–8949 (2017).

29. C. P. Chen, Y. Wu, L. Zhou, J. Ge, J. Liu, and J. Xu, "A near-eye display integrated with vision correction," CN Patent 201710065160.8 (2017).

30. C. P. Chen, Y. Wu, J. Zhao, Y. Li, and B. Yu, "A retinal-projection-based near-eye display," CN Patent 201710751238.1 (2017).

31. C. P. Chen, L. Zhou, J. Ge, Y. Wu, L. Mi, Y. Wu, B. Yu, and Y. Li, "Design of retinal projection displays enabling vision correction," Opt. Express 25(23), 28223–28235 (2017).

32. Wikipedia, "Human eye," https://en.wikipedia.org/wiki/human_eye.

33. Wikipedia, "Cornea," https://en.wikipedia.org/wiki/cornea.

34. F. L. Pedrotti, L. M. Pedrotti, and L. S. Pedrotti, Introduction to Optics, 3rd Edition (Addison-Wesley, 2006).

35. University of Notre Dame, "Physics of the eye," https://www3.nd.edu/~nsl/Lectures/mphysics.

36. Wikipedia, "Photoreceptor cell," https://en.wikipedia.org/wiki/Photoreceptor_cell.

37. Wikipedia, "Macula of retina," https://en.wikipedia.org/wiki/Macula_of_retina.

38. Wikipedia, "Visual acuity," https://en.wikipedia.org/wiki/Visual_acuity.

39. I. P. Howard, Perceiving in Depth, Volume 1: Basic Mechanisms (Oxford University, 2012).

40. Wikipedia, "Surface tension," https://en.wikipedia.org/wiki/Surface_tension.

41. V. G. Chigrinov, V. M. Kozenkov, and H.-S. Kwok, Photoalignment of Liquid Crystalline Materials: Physics and Applications (Wiley, 2008).

42. J. Cao, J.-W. Xie, X. Wei, J. Zhou, C.-P. Chen, Z.-X. Wang, and C. Jhun, "Bright hybrid white light-emitting quantum dot device with direct charge injection into quantum dot," Chin. Phys. B 25(12), 128502 (2016).

43. P. Yeh and C. Gu, Optics of Liquid Crystal Displays, 2nd Edition (Wiley, 2009).

44. R. E. Fischer, B. Tadic-Galeb, and P. R. Yoder, Optical System Design, 2nd Edition (McGraw-Hill Education, 2008).

45. I. Escudero-Sanz and R. Navarro, "Off-axis aberrations of a wide-angle schematic eye model," J. Opt. Soc. Am. A 16(8), 1881–1891 (1999).

46. Wikipedia, "Nyquist–Shannon sampling theorem," https://en.wikipedia.org/wiki/Nyquist-Shannon_sampling_theorem.

47. H. H. Hopkins, "The numerical evaluation of the frequency response of optical systems," Proc. Phys. Soc. B 70(10), 1002–1005 (1957).

48. Wikipedia, "Spatial frequency," https://en.wikipedia.org/wiki/Spatial_frequency.

49. M. Bass, C. DeCusatis, J. Enoch, V. Lakshminarayanan, G. Li, C. MacDonald, V. Mahajan, and E. V. Stryland, Handbook of Optics, 3rd Edition, Volume III: Vision and Vision Optics (McGraw-Hill Education, 2009).

50. Wikipedia, "Sunglasses," https://en.wikipedia.org/wiki/Sunglasses.
