Optica Publishing Group

Methods of optimizing and evaluating geometrical lightguides with microstructure mirrors for augmented reality displays

Open Access

Abstract

Waveguide or lightguide technology has been widely used in state-of-the-art see-through near-eye displays to reduce system weight and form factor. Although a few current products use a geometrical lightguide as an optical combiner, methods for its design and performance assessment have barely been discussed. In this paper, taking into account the factors affecting retinal image quality, we present novel methods for quantifying and evaluating the optical performance and artifacts of geometrical lightguides based on microstructure mirror arrays, and propose new merit functions and a novel process for the systematic optimization of such lightguides. A lightguide design example implementing the evaluation and optimization methods is demonstrated, and the resulting lightguide is then further utilized as the combiner in the design of a lightweight, glasses-like, see-through, near-eye display.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Head-mounted displays (HMDs), one of the key enabling technologies for virtual and augmented reality (VR/AR) systems, have been undergoing rapid development and commercialization in recent years. With the increasing expectations for consumer AR displays, developing compact, lightweight HMDs with optical see-through capability while maintaining high image performance has remained a challenging task. A typical optical see-through HMD for AR applications minimally consists of a microdisplay, an eyepiece, and an optical combiner, among which the optical combiner plays a critical role in determining the overall optical performance and compactness of an AR display. Several different technologies for constructing optical combiners have been developed, including a flat beamsplitter [1], a curved or freeform surface with a beamsplitter coating [2–5], a diffractive [6–8] or holographic waveguide [9–13], and a geometrical lightguide [14–18]. A flat beamsplitter is simple and low-cost, but its size scales up rapidly with the field of view (FOV) of a system and becomes impractical. A freeform combiner that uses either a coated freeform prism with a compensator [2–4] or a tilted off-axis partial reflector [5] can achieve a large FOV and high image quality, but it is typically bulkier than holographic waveguides or geometrical lightguides. Waveguides using diffractive optical elements such as deep slanted nanometric gratings [8] or volume holograms [9–13], which couple and guide light through a substrate by diffraction, have the advantage of being thin and compact, but achieving a large FOV, a high-quality uniform image, and low chromatic artifacts is very challenging due to their high sensitivity to wavelength and angle of incidence. For instance, rainbow effects can be seen due to the variation of spectral reflectivity with incident angle [17], which can be partially addressed by waveguide multiplexing [9,13].

A geometrical lightguide is an alternative means of guiding light through a substrate and out-coupling it toward the eye using either an array of cascaded beamsplitters or microstructure mirrors. As shown in Fig. 1(a), a beamsplitter-array lightguide uses a cascaded partially reflective mirror array (PRMA) as the out-coupling optics, in which the partial mirrors run transversely through the substrate and are placed parallel to one another at equal spacing [15,16]. Rays propagating inside the substrate are partially coupled out each time they are reflected by the PRMA. A major problem of this structure is the complicated stray light paths caused by unexpected reflections, which degrade the coupled image quality [14,19]. As shown in Fig. 1(b), a micro-mirror-array (MMA) geometrical lightguide contains an array of microstructure mirrors coated on top of wedged grooves [20–22]. These shallow micro-mirror grooves are spaced apart by uncoated flat regions through which light rays from real-world scenes, shown as red dashed lines, can propagate through the parallel-plate substrate. Compared with holographic waveguides, which suffer from angular non-uniformity and color crosstalk due to their diffractive nature, the geometrical lightguide is largely immune to chromatic aberration and angular sensitivity. Moreover, the shallow mirror grooves in an MMA are typically hundreds of microns deep and can be easily manufactured by diamond cutting or molding on a plastic substrate, which makes the fabrication process much simpler and lower in cost. While a typical freeform prism combiner is at least tens of millimeters thick, a geometrical lightguide can be as thin as 3–5 mm, and it has better photostability than holographic waveguides.


Fig. 1 Schematic layout of the geometrical lightguide based on (a) partially reflective mirror-array and (b) micro-mirror-array (MMA).


Designing and optimizing a lightguide-based AR display, however, confronts several major challenges, mainly due to the non-sequential ray propagation inside a geometrical lightguide, which distinguishes it from the design of conventional imaging optics. While the optical performance of conventional imaging systems, where sequential ray tracing is applicable, can be readily evaluated using well-accepted metrics such as the modulation transfer function (MTF), the non-sequential ray paths in a lightguide require new means of assessing its optical performance for imaging purposes, for which the toolbox available for non-imaging optics is insufficient. Moreover, the non-sequential ray paths make a lightguide-based imaging system much more prone to artifacts such as non-uniform images and stray light than sequential imaging systems. Unfortunately, few works have been reported that address such challenges systematically. Recently, several research efforts have been made to overcome the major engineering challenges related to PRMA lightguide designs. For instance, Cheng et al. analyzed the causes of stray light in a PRMA lightguide and designed a PRMA-based geometrical lightguide by optimizing its structure for minimal stray light [14]. Wang et al. analyzed the angular tolerance and the ghost images caused by parallelism mismatch in a PRMA [19]. Although MMA-based and PRMA-based lightguide designs share some similarities, several structural and characteristic differences between them demand different optimization methods. Firstly, while the arrangement of the partially reflective mirrors in a PRMA design is fully determined by the waveguide thickness, the arrangement of the microstructure mirror array is not constrained by the lightguide's geometrical structure. The relatively independent micro-mirror structure gives the out-coupler more design freedom. Secondly, stray light and ghost images are less of an issue in an MMA than in a PRMA because the shallower etched depth of the micro-mirrors results in fewer unintended reflections. Thirdly, the reflective mirrors spaced by transparent regions segment the ray bundles at the out-coupler region, causing a non-uniform illuminance distribution at the eyebox. Therefore, pupil uniformity and the out-coupling efficiency distribution across the eyebox become more essential image evaluation factors in MMA than in PRMA designs.

In this paper, by analyzing the unique characteristics and structural properties of an MMA-based geometrical lightguide (Section 2), we propose a new set of metrics for quantifying and evaluating its key optical performance attributes and artifacts, including image contrast, efficiency, image uniformity, and stray light (Section 3), which is more comprehensive than the stray-light-focused analysis in [14]. We further propose a new set of merit functions based on these performance metrics and develop a novel process for optimizing an MMA-based lightguide (Section 4). We then demonstrate the utility of the proposed performance metrics and optimization method through an example of a lightguide-based AR display design and its performance evaluation (Section 5). The collimator design and the integrated design of an MMA-based AR display are also demonstrated (Section 6). It is worth noting that the optimization and evaluation methods proposed in this paper can be applied generally to the design and analysis of other lightguides.

2. Overview of an MMA-based AR display

As shown in Fig. 1, an MMA-based AR display mainly consists of an image generator, a collimator, and a lightguide. The image generator, the core of which is a microdisplay device, renders the digital image seen by the viewer. The collimator, constructed as a lens group or a monolithic freeform prism, magnifies the image from the microdisplay and forms a virtual image at optical infinity. An example of a monolithic freeform collimator will be discussed in Sec. 6. The lightguide receives the collimated light from the collimator, guides it through the substrate via total internal reflection (TIR), and couples it out toward the eye via the MMA. A mirror-array lightguide may be divided into three functional segments: an in-coupling wedge, a guide substrate, and an out-coupling area of microstructure mirrors. As illustrated in Fig. 2(a), the in-coupling wedge, located at the left end of the lightguide substrate, can be treated as an inverted right triangular prism with one right-angle side serving as the in-coupling surface, which couples the light from the image collimator into the lightguide substrate. It may be characterized by the wedge angle, β, defined as the tilt angle of the in-coupling surface with respect to the substrate, and the wedge height, t1, which characterizes the width of the in-coupling surface and determines the size of the entrance pupil. Without loss of generality, the on-axis field of the collimator is assumed to be normally incident on the in-coupling surface. To ensure that the in-coupled ray bundles propagate within the waveguide substrate by TIR, the wedge angle β must satisfy the condition:

$$\beta > \sin^{-1}\frac{1}{n} + \sin^{-1}\frac{\sin\theta_{\max}}{n} \tag{1}$$
where n is the refractive index of the substrate and θmax is the half field of view of the collimator.
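As a quick numerical check, the TIR condition above can be evaluated directly. The substrate index and half FOV below are illustrative values, not taken from this paper's design:

```python
import math

def min_wedge_angle(n, theta_max_deg):
    """Minimum in-coupling wedge angle (degrees) that keeps every field up
    to +/- theta_max under total internal reflection in the substrate."""
    critical = math.asin(1.0 / n)                                     # TIR critical angle
    refracted = math.asin(math.sin(math.radians(theta_max_deg)) / n)  # steepest in-substrate field
    return math.degrees(critical + refracted)

# Illustrative values: an n = 1.49 (PMMA-like) substrate, 13 deg half FOV
beta_min = min_wedge_angle(1.49, 13.0)   # roughly 51 deg
```

A larger FOV or a lower-index substrate pushes the minimum wedge angle up, which is why the wedge angle and the half FOV cannot be chosen independently.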


Fig. 2 (a) Schematic side view of ray paths and angular relations at the in-coupler wedge and out-coupling area of the MMA-based lightguide. (b) Stray light path and the corresponding normal ray path at the out-coupling area. The normal ray is coupled out directly by the first micro-mirror, whereas the stray light is coupled out by subsequent micro-mirrors.


The guide substrate is the main bulk of the lightguide and allows the in-coupled light to propagate toward the out-coupler via multiple TIR reflections. It may be characterized by its thickness t, refractive index n, and the substrate length H, which is defined by the lateral distance between the vertex of the wedge and the center of the exit pupil. The substrate length, H, determines the number of TIR reflections of the ray bundles. Its choice typically follows ergonomic and opto-mechanical design constraints of the display system.

The parameters of the in-coupling wedge and the guide substrate affect the ray paths in two different aspects: the distribution of the ray footprints on the substrate and the out-coupled ray patterns. Figure 2(a) shows side views of the ray paths with field angle θi in the lightguide substrate. As illustrated, the distributions and locations of the ray bundle footprints on the substrate surface may be characterized by the width, W, of the projected footprint of the rays of a given field angle on the substrate, and the separation, S, of the ray footprints of two consecutive reflections. The footprint width and separation for the incident field θi may be expressed as:

$$W = \frac{t_1}{\sin\beta\cos\beta} - (t_1 + \delta t)\left\{\tan\beta - \tan\left[\beta - \sin^{-1}\!\left(\frac{\sin\theta_i}{n}\right)\right]\right\}, \qquad S = 2t\tan\left[\beta + \sin^{-1}\!\left(\frac{\sin\theta_i}{n}\right)\right] \tag{2}$$
where δ = 0 for θi < 0 and δ = 1 for θi > 0. Both the width and the separation of the ray footprints directly affect the out-coupling efficiency of a lightguide design. It is preferable to avoid a footprint gap between adjacent reflections; at the same time, footprint overlap should be minimized to maintain a uniform flux distribution across the substrate interface. To obtain a gapless footprint without overlap, the footprint width should equal the separation between adjacent footprints. Applying this constraint to the on-axis field yields the geometrical relationship
$$t_1 = 2t\sin^2\beta \tag{3}$$
In this case, a gap between the footprints is inevitable for off-axis fields.
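The footprint relations and the gapless-footprint constraint above can be sketched numerically; the substrate thickness, wedge angle, and refractive index below are illustrative assumptions:

```python
import math

def footprint_on_axis(t1, beta_deg):
    """On-axis footprint width W = t1 / (sin(beta) * cos(beta))."""
    b = math.radians(beta_deg)
    return t1 / (math.sin(b) * math.cos(b))

def footprint_separation(t, beta_deg, theta_i_deg=0.0, n=1.49):
    """Separation S = 2*t*tan(beta + asin(sin(theta_i)/n)) between the
    footprints of two consecutive TIR reflections."""
    b = math.radians(beta_deg)
    t_sub = math.asin(math.sin(math.radians(theta_i_deg)) / n)
    return 2.0 * t * math.tan(b + t_sub)

def gapless_wedge_height(t, beta_deg):
    """Wedge height t1 = 2*t*sin^2(beta) giving a gapless,
    non-overlapping on-axis footprint."""
    b = math.radians(beta_deg)
    return 2.0 * t * math.sin(b) ** 2

t, beta = 3.0, 51.0             # illustrative: 3 mm substrate, 51 deg wedge
t1 = gapless_wedge_height(t, beta)
# With this t1, the on-axis footprint width equals the footprint separation.
```

The check that W equals S for the on-axis field confirms the gapless condition; off-axis fields keep their field-dependent separation, which is why the gap noted above is unavoidable for them.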

The out-coupling area, located at the right end of the lightguide substrate, is composed of a one-dimensional array of micro-mirror grooves spaced apart by uncoated flat top regions. The in-coupled rays are reflected toward the eye by the coated mirror grooves, while the incoming light from a real-world scene is transmitted through the uncoated flat regions. The out-coupler may be characterized by the total width, d, of the out-coupling area, the tapered angle, ω, of the mirror grooves, the width, a, of the mirror grooves, and the spacing between adjacent grooves. The total width of the out-coupler area needs to be large enough that the ray bundles of all field directions can be coupled into the exit pupil, and must be chosen according to the desired FOV and eyebox size,

$$d \geq \mathrm{EPD} + 2\,\mathrm{ERF}\tan\theta_{\max} + 2t\tan\theta'_{\max} \tag{4}$$
where EPD is the exit pupil diameter of the system, ERF is the clearance distance between the designated exit pupil and the lightguide, which needs to be at least 17 mm [23], and θ'max is the angle of the full field refracted into the substrate. The tapered angle, ω, of the mirror grooves is kept the same for all grooves and is chosen such that the in-coupled on-axis field is coupled out normal to the lightguide substrate and the exit pupil plane, which gives

$$\omega = \frac{\beta}{2} \tag{5}$$
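The out-coupler sizing rule and taper-angle relation above can be sketched as follows; all numerical values (EPD, eye relief, substrate thickness, half FOV, index) are illustrative, not design values from the paper:

```python
import math

def min_outcoupler_width(epd, erf, t, theta_max_deg, n=1.49):
    """Minimum out-coupler width:
    d >= EPD + 2*ERF*tan(theta_max) + 2*t*tan(theta'_max),
    where theta'_max is the full-field angle refracted into the substrate."""
    theta = math.radians(theta_max_deg)
    theta_sub = math.asin(math.sin(theta) / n)   # refracted full-field angle
    return epd + 2.0 * erf * math.tan(theta) + 2.0 * t * math.tan(theta_sub)

def taper_angle(beta_deg):
    """Groove taper angle omega = beta / 2, sending the on-axis field
    out normal to the substrate."""
    return beta_deg / 2.0

# Illustrative: 4 mm EPD, 17 mm eye relief, 3 mm substrate, 13 deg half FOV
d = min_outcoupler_width(epd=4.0, erf=17.0, t=3.0, theta_max_deg=13.0)
omega = taper_angle(51.0)    # 25.5 deg for a 51 deg wedge
```

The ERF term dominates the width budget at typical eye reliefs, which is why the required out-coupler area grows quickly with the FOV.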

The width and spacing of the grooves directly affect the out-coupled ray bundle patterns and the flux distribution in the eyebox, as well as the transparency of the see-through view. For instance, the ratio of groove width to spacing is directly proportional to the overall reflectance of the lightguide: a higher ratio is desired for high light efficiency in the display path, while a lower ratio is desired for high transparency in the see-through path. Furthermore, the width and spacing of the mirror grooves need to be large enough to minimize diffraction effects, but excessively large mirror grooves significantly increase the thickness of the substrate and the visibility of the groove structures. Finally, varying the width and spacing of the grooves may offer an opportunity to improve image uniformity and efficiency. To model this flexibility, we kept the width, a, of the mirror grooves constant across the mirror array while allowing the spacing between mirrors to vary. The center position of each mirror groove along the x-direction (the ray propagation direction within the lightguide, as seen in Fig. 4(a)) can then be modeled by a third-order polynomial

$$x = A + Bl + Cl^2 + Dl^3 \tag{6}$$
where l (l = 1, …, L) is the index of the mirror grooves, numbered from left to right, A is the offset of the leftmost mirror groove from the center of the exit pupil, B is the linear term equivalent to a uniform spacing, and C and D are the non-linear displacement terms modeling the deviation from a uniform mirror placement.
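The cubic placement polynomial can be evaluated directly; the coefficients below are illustrative, not values from the paper:

```python
def mirror_centers(L, A, B, C=0.0, D=0.0):
    """Center x-position of each of the L mirror grooves from the cubic
    placement polynomial x = A + B*l + C*l^2 + D*l^3, l = 1..L,
    indexed from left to right (units follow A and B, e.g. mm)."""
    return [A + B * l + C * l ** 2 + D * l ** 3 for l in range(1, L + 1)]

# Illustrative coefficients (not from the paper):
uniform = mirror_centers(L=5, A=-6.0, B=1.5)           # constant 1.5 mm pitch
graded = mirror_centers(L=5, A=-6.0, B=1.5, C=0.05)    # pitch grows toward the far end
```

Setting C = D = 0 recovers a uniform pitch B; a positive quadratic term makes the pitch grow toward the far end of the array, one plausible way to compensate for the flux already extracted by earlier mirrors.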

Besides the geometric parameters above, the mirror reflectance of the out-coupler may also be considered as a variable to further improve the uniformity and efficiency of the resulting retinal images. We explored this option in our design examples and found that its effect on image uniformity was equivalent to varying the spacing of the mirror grooves. Therefore, for the remainder of this study, we did not treat it as a variable.

Besides the normal ray paths in an MMA, the stray light paths also need to be analyzed, since they significantly affect the out-coupled image contrast. Compared with a PRMA lightguide configuration [14], which has several stray light paths owing to the complexity of the unexpected reflections at the out-coupler area, there is only one major cause of stray light in an MMA lightguide, shown by the blue ray in Fig. 2(b). Unlike the normal rays, which are directly reflected and coupled out by the tapered mirrors as shown by the red ray in Fig. 2(b), stray rays strike the top substrate surface before they are coupled out by the micro-mirrors. This extra reflection at the top surface changes the propagation angle of the ray in the substrate and results in abnormal out-coupling angles when the rays are coupled out by subsequent micro-mirrors.

For a given field angle θi from the collimator, the out-coupling angle θsi of its corresponding stray ray path is given by:

$$\theta_{si} = \sin^{-1}\left\{n\sin\left[4\omega + \beta - \sin^{-1}\!\left(\frac{\sin\theta_i}{n}\right) - 180°\right]\right\} \tag{7}$$
It is worth noting that stray light occurs only when the field angle satisfies the condition θi > sin⁻¹[n sin(90° − β − ω)]. Considering the relationship in Eq. (5), Eq. (7) can be further simplified as θsi = sin⁻¹{n sin[3β − sin⁻¹(sinθi/n) − 180°]}. When β = 60°, θsi = −θi, which suggests a mirror symmetry between the stray ray path and the normal ray path, and only the positive field angles may have a visible stray ray path when β is around 60°.
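The stray-path out-coupling angle of Eq. (7) can be checked numerically, including the mirror-symmetry case at β = 60°. The refractive index is an illustrative assumption:

```python
import math

def stray_angle(theta_i_deg, beta_deg, omega_deg=None, n=1.49):
    """Out-coupling angle of the stray ray path for field angle theta_i:
    theta_s = asin{ n * sin[4*omega + beta - asin(sin(theta_i)/n) - 180 deg] }.
    With omega = beta/2 (the taper-angle relation) this reduces to the
    3*beta form given in the text."""
    if omega_deg is None:
        omega_deg = beta_deg / 2.0
    inner = math.radians(4.0 * omega_deg + beta_deg - 180.0) \
            - math.asin(math.sin(math.radians(theta_i_deg)) / n)
    return math.degrees(math.asin(n * math.sin(inner)))

# At beta = 60 deg the stray path mirrors the normal path: theta_s = -theta_i
```

For β = 60° the bracketed term reduces to the negated refracted field angle, so the stray ray exits at exactly the opposite field angle, consistent with the mirror symmetry noted above.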

3. Quality metrics for evaluating optical performances and artifacts

A key component of any optical design problem is establishing a set of appropriate and adequate quality metrics to evaluate optical performance and to build meaningful merit functions for optimization. The non-sequential ray propagation inside a lightguide prevents the direct adoption of the well-established quality metrics used for conventional ray-based imaging systems. Meanwhile, its imaging purpose within a display system means that the well-accepted metrics for evaluating non-imaging optics are not adequate either. Therefore, it becomes an essential step to establish a set of quality metrics for lightguide-based display systems that can be utilized to evaluate their optical performance and artifacts and to build merit functions for system optimization.

Several system factors may affect the perception of images displayed through an MMA-based lightguide. Firstly, since each of the spaced micro-mirrors couples out only a small segment of the ray bundle at each reflection, the rays from the same field angle of the display source, as well as the rays of different field angles, are coupled out toward the eye via piecewise reflections by different mirror grooves with different optical path lengths. The image perceived by a viewer is the integral sum of the piecewise rays propagated through these different reflection paths. Rays propagating through the lightguide via a larger number of reflections tend to have lower brightness. Consequently, the perceived image may appear non-uniform over the field of view, and the image brightness at different eye positions may also vary owing to the non-uniform light distribution over the eyebox. Secondly, as discussed in Sec. 2, stray light caused by unexpected reflections may occur, leading to ghost images and reduced image contrast. Finally, the overall coupling efficiency is another critical factor to consider for AR displays. Therefore, the ultimate goal of a lightguide design is to obtain a structure that yields out-coupled images with high uniformity, high contrast, and low ghost artifacts across the designed eyebox while maintaining a desirable efficiency.

Based on the factors described above, the optical performance of a lightguide-based display design shall be evaluated in two different ways: by the light distribution patterns measured at the exit pupil plane and by the perceived image at the viewer's retina. The light distribution patterns (Sec. 3.1) can be conveniently simulated in non-sequential raytracing software and measured by inserting an array of illuminance receivers at the exit pupil plane. Measurements based on the light distribution patterns therefore naturally yield individual metrics that can be conveniently and efficiently used to establish merit functions for system optimization. The perceived retinal image (Sec. 3.2), on the other hand, requires integrating a suitable eye model into the system and a fairly complex simulation step to render the integral sum of the sub-ray bundles from each field guided by each mirror groove. While metrics based on the light distribution patterns provide a piecewise assessment of the optical performance, the retinal image provides the integral perception of a guided image that takes into account all the potential artifacts and ocular factors.

3.1 Performance metric functions based on light distribution patterns

To characterize the light distribution patterns at the exit pupil plane for different field angles, we modeled an MMA-based lightguide display system in the non-sequential raytracing software LightTools. A directional collimated light source was placed on the in-coupler surface to simulate the collimated rays from a collimator. The luminance and the incident field angle of the light source upon the in-coupling surface can be varied to simulate different brightness levels and field positions of the display. Finally, an illuminance receiver and a stray light receiver were inserted at the exit pupil plane to collect the illuminance data for a given collimated field angle through its normal ray paths and stray light paths, respectively. Each receiver is divided into an array of N×K bins to split the rays entering the exit pupil plane into sub-bundles according to their ray path differences. The integral sum of the illuminance data collected by all the bins of a receiver yields the total illuminance received on the exit pupil plane from a given field angle. By repeating the simulation over different field angles, the light distribution patterns over the exit pupil plane across the entire FOV can be obtained.

To quantify the key optical performance attributes and artifacts of a lightguide-based AR display, we developed four independent metrics, which establish the foundation for building a sensible merit function for effective optimization. The first metric, S1, measures the overall image-guiding efficiency of a lightguide design. It is defined as the average flux ratio between the in-coupled and out-coupled images:

$$S_1 = \frac{1}{M}\sum_{i=1}^{M}\frac{P_r(i)}{P_{in}(i)} \tag{8}$$
where M is the total number of sampled field angles across the FOV, P_r(i) is the out-coupled total flux received by the receiver for the ith field, and P_in(i) is the in-coupled flux of the ith field from the collimator.

The second metric, S2, characterizes the illuminance uniformity of the out-coupled image. It is defined as the normalized root-mean-square deviation (NRMSD) of the flux across the whole FOV:

$$S_2 = \frac{1}{\overline{P_r}}\sqrt{\frac{1}{M}\sum_{i=1}^{M}\left[P_r(i) - \overline{P_r}\right]^2} \tag{9}$$
where $\overline{P_r} = \frac{1}{M}\sum_{i=1}^{M}P_r(i)$ is the average out-coupled flux across the whole FOV.

The third metric, S3, characterizes the uniformity of the illuminance distribution at different viewing positions within the eyebox and is defined as the average, across the FOV, of the NRMSD of the illuminance distribution at the exit pupil plane. A non-uniform illuminance distribution at the pupil position results in visible variations of image brightness as the eye moves within the eyebox. S3 is expressed as

$$S_3 = \frac{1}{M}\sum_{i=1}^{M}\frac{1}{\overline{E_r}(i)}\sqrt{\frac{1}{NK}\sum_{j=1}^{NK}\left[E_r(i,j) - \overline{E_r}(i)\right]^2} \tag{10}$$
where E_r(i,j) is the out-coupled illuminance of the jth bin for the ith field, and $\overline{E_r}(i) = \frac{1}{NK}\sum_{j=1}^{NK}E_r(i,j)$ is the average out-coupled illuminance for the ith field.

The fourth metric, S4, characterizes the amount of stray light coupled into the exit pupil and is defined as the average ratio, across the FOV, of the stray light flux to the flux out-coupled through normal ray paths:

$$S_4 = \frac{1}{M}\sum_{i=1}^{M}\frac{P_{sl}(i)}{P_r(i)} \tag{11}$$
where P_sl(i) is the flux received for the ith field by the stray light receiver inserted at the exit pupil plane to measure the illuminance of the stray light path.
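The four metrics S1–S4 map directly onto array operations once the per-field flux and per-bin illuminance data are exported from the raytracer. A minimal NumPy sketch, with synthetic data standing in for simulation output:

```python
import numpy as np

def lightguide_metrics(P_in, P_r, E_r, P_sl):
    """Compute the four evaluation metrics from simulated receiver data.
    P_in, P_r, P_sl: shape (M,) arrays of in-coupled, out-coupled, and
    stray-light flux per sampled field; E_r: shape (M, N*K) per-bin
    illuminance at the exit pupil plane for each field."""
    S1 = np.mean(P_r / P_in)                                     # guiding efficiency
    S2 = np.sqrt(np.mean((P_r - P_r.mean()) ** 2)) / P_r.mean()  # FOV uniformity (NRMSD)
    nrmsd = np.sqrt(np.mean((E_r - E_r.mean(axis=1, keepdims=True)) ** 2, axis=1)) \
            / E_r.mean(axis=1)
    S3 = np.mean(nrmsd)                                          # pupil uniformity
    S4 = np.mean(P_sl / P_r)                                     # stray-light ratio
    return S1, S2, S3, S4

# Synthetic sanity check: M = 4 fields, 9 x 9 = 81 pupil bins per field,
# perfectly uniform out-coupling
P_in = np.full(4, 10.0)
P_r = np.full(4, 2.0)
E_r = np.ones((4, 81))
P_sl = np.full(4, 0.1)
S1, S2, S3, S4 = lightguide_metrics(P_in, P_r, E_r, P_sl)
# S1 = 0.2, S2 = S3 = 0 (perfect uniformity), S4 = 0.05
```

On the synthetic uniform data both NRMSD metrics vanish, as expected; real receiver data from a candidate structure will produce the kind of spread seen across the 891 configurations discussed below.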

To demonstrate the effectiveness of these metrics, we modeled an MMA-based lightguide system in LightTools and performed a global search for the optimal lightguide structure parameters; the modeling methods and optimization procedures are discussed in detail in Section 4. Through this example, we collected metric data from a total of 891 different lightguide structures and compared their performance differences. The four metric values of each numbered structure are plotted in Fig. 3(a), showing clear numerical differences among the structures. To illustrate how the four metric functions influence image performance, we investigated three sampled fields (0°, +3.9°, and −3.9°) and selected several configurations based on the values of S1–S4 as examples. Figure 3(b) compares the illuminance distributions at the exit pupil plane for the three selected field angles for the two designs offering the highest and lowest pupil uniformity (S3) values. The configuration with the lowest S3 value shows better illuminance uniformity over the pupil than the one with the highest S3 value at all three field angles, which suggests less brightness variation when the eye moves within the eyebox. Figure 3(c) shows the intensity distributions of the three fields for four selected structures (#500, #816, #521, and #20), each representing a different combination of metric values. Compared with the intensity distributions shown in Fig. 3(c.1) for design #500, which yields a low S1 and a high S2, the overall intensity improves significantly in Fig. 3(c.2) for design #816, correlating well with its high efficiency (S1). The intensity across the three fields shows significantly improved uniformity in Fig. 3(c.3) for design #521, correlating well with its low value of the uniformity metric (S2). Finally, an obvious unexpected intensity peak appears in the field of view in Fig. 3(c.4) for configuration #20, which has the maximum S4 value, due to the presence of severe stray light. The histograms of the coupling efficiency of these four configurations at different incident fields are plotted in Fig. 3(d).


Fig. 3 Examples showing the effectiveness of the four metric functions. (a) The values of the four evaluation metrics among 891 simulated MMA lightguide structures. (b) Illuminance distributions of three sampled fields at the eyebox. Top: low pupil uniformity metric (S3) value. Bottom: high S3 value. (c) Intensities of three sampled fields at the eyebox for four cases. 1: low S1 and high S2; 2: high S1; 3: low S2; 4: high S4. (d) Planar graph showing the coupling efficiency of the four cases in (c).


3.2 Retinal image simulation

Figure 4(a) shows a schematic layout of a lightguide coupled with a simple eye model placed at the exit pupil plane, where the eye optics is represented simply by an eye lens, which contributes the optical power, and a retinal image plane. A collimated ray bundle from an object field (e.g., the on-axis field shown in orange) is guided through the substrate and reflected piecewise by the segmented array of mirror grooves. The perceived retinal image is the integral sum of the piecewise rays propagated through different reflection paths and integrated by the eye lens. Therefore, unlike in a conventional imaging system, the image performance and artifacts shall ultimately be evaluated at the retinal image to properly model the various artifacts.


Fig. 4 (a) Schematic layout of the out-coupled ray bundles and eye model. (b) The projected amplitude reflectance distribution on the pupil plane when out-coupled field angle equals to 0° and 3.3°. (c) The illuminance distribution of the 3.3° incident field at the eyebox of the selected MMA configuration. (d) The normalized point spread function (PSF) of the 3.3° incident field on retina. (e) Original test image and (f) the simulated perceived retinal image with 1.2 cycles/degree sinusoid pattern.


To simulate the retinal image accounting for proper ocular properties and inherent eye aberrations, we adopted the Arizona Eye Model [27], which is capable of modeling clinical levels of aberrations and population-average ocular parameters. The accommodation state of the eye model was set at optical infinity and the entrance pupil diameter of the model was set to 4 mm, considering the average eye pupil size under typical display brightness.

To effectively simulate the retinal point spread function (PSF) and the perceived retinal image, we re-utilized the illuminance distribution data collected through the receivers placed at the exit pupil plane for each field angle during the non-sequential ray tracing process. Consider a collimated plane wave, $e^{j\mathbf{k}\cdot\mathbf{r}}$, coupled and guided through the lightguide and incident upon the eye pupil, where k is the wave vector defining the incident field direction, (ξ, η), and the wavenumber, 2π/λ, of the plane wave. Its retinal image formed through the eye optics accommodated at infinity can be characterized by the incoherent normalized PSF expressed as

$$\mathrm{PSF}(u,v;\xi,\eta) = \left|\frac{\exp\left[\frac{j\pi}{\lambda f}(u^2+v^2)\right]}{j\lambda f}\iint e^{j\mathbf{k}\cdot\mathbf{r}}\,E_r(\xi,\eta,x,y)\,P(x,y)\,e^{j\frac{2\pi}{\lambda}W_{ab}}\,\exp\left[-j\frac{2\pi}{\lambda f}(ux+vy)\right]dx\,dy\right|^2 \tag{12}$$
where E_r(ξ, η, x, y) is the projected amplitude distribution on the pupil plane, which is further expressed as E_r(ξ, η, x, y) = E_i(ξ, η, x, y)·r_A(ξ, η, x, y), the product of the incident ray distribution E_i(ξ, η, x, y) on the microstructure mirror array and the projected amplitude reflectance r_A(ξ, η, x, y) on the pupil plane. The projected amplitude reflectance r_A(ξ, η, x, y) varies with the out-coupled field angle and the geometry of the micro-mirrors, since different MMA segments are projected to the pupil location for different field angles. Figure 4(b) shows two examples of the projected amplitude reflectance r_A(ξ, η, x, y) for out-coupled field angles of 0° and 3.3°, respectively. Consequently, the reflectance function r_A(ξ, η, x, y) also accounts for the aperture diffraction effects of the micro-mirrors. P(x, y) is an apodization pupil function accounting for the directional sensitivity of the photoreceptors on the retina, known as the Stiles–Crawford effect [26,27]. W_ab is the compound aberration phase term accounting for both the wavefront deviation potentially induced by the collimator and the residual aberrations of the eye model, and f is the effective focal length of the eye model. For every incident field angle, (ξ, η), sampled across the entire FOV, the corresponding amplitude distribution data, E_r(ξ, η, x, y), upon the eye pupil can be obtained by a receiver with N×K bins through raytracing, from which the normalized retinal PSF for the corresponding field can be calculated via Eq. (12). To ensure adequate sampling on the pupil plane, the illuminance receivers were divided into 9×9 bins to create the illuminance mesh data over a 4 mm diameter eye pupil. The PSFs for every sampled field angle across the FOV can be obtained by repeating this procedure. The perceived retinal image of an input image is then obtained by convolving the original image content with the corresponding PSFs.
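Numerically, the PSF integral of Eq. (12) amounts to a Fourier transform of the complex pupil field, so it can be sketched with a zero-padded FFT. The grid size and padding factor below are illustrative, and the Stiles–Crawford apodization is assumed to be pre-multiplied into the amplitude array:

```python
import numpy as np

def normalized_psf(pupil_amplitude, wavefront_phase=None, pad=8):
    """Incoherent normalized PSF as the squared magnitude of the Fourier
    transform of the complex pupil field. `pupil_amplitude` is the sampled
    amplitude mesh over the eye pupil (apodization assumed already applied);
    `wavefront_phase` is the aberration term 2*pi/lambda * W_ab in radians.
    Zero-padding by `pad` refines the sampling of the PSF."""
    field = pupil_amplitude.astype(complex)
    if wavefront_phase is not None:
        field = field * np.exp(1j * wavefront_phase)
    n = pad * max(field.shape)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field, s=(n, n)))) ** 2
    return psf / psf.max()        # normalize the peak to 1

# A strip-like pupil amplitude (as produced by the mirror array) yields a
# PSF that is broader along the direction in which the pupil is segmented.
```

Because the PSF is the transform of the pupil field, a pupil that is narrower (or segmented) along one axis produces a PSF that spreads more along that axis, which is exactly the strip-aperture behavior described for Fig. 4(d).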

Using design #816 from Fig. 3, which offers the highest out-coupling efficiency, we demonstrate the retinal image simulation process. For simplicity, a wavelength of 550 nm is assumed. Figure 4(c) shows an example of the illuminance distribution mesh data over the 4 mm eye pupil for a specific out-coupled field angle of 3.3°. The illuminance shows a strip-like distribution due to the one-dimensional mirror array structure. Figure 4(d) shows the corresponding normalized PSF of this field computed from the illuminance map through Eq. (12). The PSF shows a broader spread in the u-direction and a narrower one in the v-direction because of the diffraction effects of the strip-like mirror apertures. The field angle was iterated from −13° to +13° with a step increment of 2 arc minutes to compute the retinal responses over the entire FOV. A sinusoidal fringe pattern with a pixel resolution of 468 by 780 and an angular frequency of 1.2 cycles/degree was employed as the test image; it is shown in Fig. 4(e) with its intensity profile along the horizontal direction plotted underneath. Figure 4(f) shows the simulated perceived retinal image obtained by convolving the original image content with the corresponding PSFs. Compared with the original test image, the contrast degradation and the coupling efficiency non-uniformity of the retinal image can be clearly observed.
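The final convolution step can be sketched under a shift-invariant simplification (one PSF for the whole image, whereas the full simulation applies a field-dependent PSF for each sampled angle):

```python
import numpy as np

def convolve_with_psf(image, psf):
    """Blur a test pattern with a (same-size) PSF via circular FFT
    convolution. This is a shift-invariant simplification: the full
    simulation applies a field-dependent PSF for each sampled field."""
    kernel = psf / psf.sum()                      # conserve total energy
    return np.real(np.fft.ifft2(np.fft.fft2(image)
                                * np.fft.fft2(np.fft.ifftshift(kernel))))

# A broad PSF visibly lowers the contrast of a sinusoidal test pattern,
# mimicking the degradation seen in the simulated retinal image.
```

Convolving a sinusoidal test pattern with a delta-like PSF leaves it unchanged, while any broadened PSF attenuates the fringe contrast, which is the effect visible in Fig. 4(f).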

4. Lightguide optimization method

When modeling a conventional imaging system with sequential ray propagation, software packages such as CodeV and Zemax are utilized, in which tracing a small number of rays from a small sample of field positions through the aperture of the system is typically adequate to establish quality metrics and merit functions for performance assessment and optimization. The nature of non-sequential ray propagation through a geometric lightguide makes its optimization much less straightforward and more challenging. It requires non-sequential ray-tracing software such as LightTools, which typically must trace a large number of rays to simulate an illuminance distribution pattern with high accuracy. To optimize such a system, one has to address two critical problems: constructing a meaningful merit function and creating an optimization method, both of which require customization and a balance between accuracy and speed.

4.1 Customizing merit function

As demonstrated in the example in Fig. 4, the overall image performance of a lightguide-based AR display does not depend on any single metric function but is determined by the four metric functions together. For the purpose of system optimization, it is impractical to gauge the performance of a given lightguide design using the performance metrics defined in Sec. 3 individually. For convenient performance ranking, it is necessary to create a merit function that incorporates all of the performance metrics with proper weights, such that a single value can be computed to gauge the overall performance of a given design.

There are two potential ways of constructing a system merit function from the four metric factors discussed in Sec. 3.1. If all four metrics are treated uniformly, a weighted sum of the four factors is a straightforward choice for the construction of a system merit function. This approach may yield a configuration that approaches the most ideal image performance, but it overlooks the fact that the human visual system does not perceive the artifacts related to the four measurements in the same way, and it misses the opportunity to find the configuration most suitable for the eye. Alternatively, we can use the weighted sum of a subset of the metrics and implement the influence of the remaining metrics as additional penalty conditions to filter out undesirable configurations. For example, if 5% contrast degradation due to stray light is hardly perceived by human eyes, the metric function S4 representing stray light can be turned into a penalty condition that filters out all unsatisfactory configurations, rather than being minimized.

The pupil uniformity metric, S3, mainly affects the dependence of the perceived image brightness on the eye pupil position within the exit pupil plane, and it can therefore be treated as less critical than the overall efficiency metric, S1, and the image uniformity metric, S2. Similarly, a small amount of stray light can be tolerated due to the eye's low sensitivity to a small percentage of overall contrast drop. Based on these observations, a merit function can be defined as

MeritFun = W1·(S1)normalized + W2·(S2)normalized,  with the conditions { S3 < Rpupil, S4 < Rstray }    (13)
where the merit function is the weighted sum of the efficiency metric S1 and the image uniformity metric S2, with their normalized weighting factors W1 and W2, respectively. The pupil uniformity metric S3 and the stray light metric S4 are treated as penalty conditions to filter away designs with excessive stray light or pupil non-uniformity. Rpupil is the threshold for the pupil uniformity measure, which was set to pass the top 84% most uniform structures among all configurations for further consideration, and Rstray is the threshold for the stray light percentage, which was set to 5% in our design example in Sec. 5.
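The ranking of Eq. (13) can be expressed compactly in code. The Python sketch below assumes the four metric values have already been computed per configuration from the raytrace, and that the metrics are defined so that lower values are better (implied by the minimum-merit selection later in the text); the min-max normalization is one of several possible choices and the function names are illustrative:

```python
import numpy as np

def rank_configurations(S1, S2, S3, S4, w1=0.5, w2=0.5,
                        r_pupil=None, r_stray=0.05):
    """Rank lightguide configurations by the merit function of Eq. (13).

    S1..S4 : 1-D arrays of the four metrics, one entry per configuration.
    Configurations failing S3 < r_pupil or S4 < r_stray are rejected.
    Lower merit is better.
    """
    S1, S2, S3, S4 = map(np.asarray, (S1, S2, S3, S4))
    if r_pupil is None:
        # e.g. keep only the most pupil-uniform 84% of configurations
        r_pupil = np.percentile(S3, 84)
    ok = (S3 < r_pupil) & (S4 < r_stray)

    def normalize(s):  # min-max normalization (an illustrative choice)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)

    merit = w1 * normalize(S1) + w2 * normalize(S2)
    merit = np.where(ok, merit, np.inf)  # penalty: rejected outright
    return np.argsort(merit), merit
```

Rejected configurations receive an infinite merit value, so they sort to the end rather than competing with feasible designs.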

It is worth noting that the see-through path of a geometric lightguide can also be evaluated by the four metrics defined in Eqs. (8)–(11) and included in the merit function. However, different from the display path, the light rays of the see-through path are expected to propagate through the uncoated flat regions between the mirror grooves as if passing through a thin parallel plate. Consequently, the apparent image quality of the see-through path is not affected. Although the light efficiency, and thus the transparency, of the see-through path is affected by the distribution of the micro-mirror array, the field uniformity of the see-through path is hardly affected because the micro-mirrors are substantially smaller than the eye pupil. To make the merit function more efficient, we chose to build it solely on the virtual-image quality metrics.

4.2 Optimization method: global searching and local optimization

A brute-force approach to optimizing a non-sequential imaging system is to set up all of the relevant parameters as variables simultaneously, iterate the ray-tracing simulation in small increments of the variables through their defined ranges, and obtain an optimal configuration by computing and ranking the merit functions of all possible configurations. However, tracing a large number of rays for many configurations is very time consuming and ineffective. It is therefore essential to create a good starting configuration, down-select the variables, and develop an effective optimization strategy.

All of the parameters characterizing an MMA-lightguide design discussed in Sec. 2 may be classified into two groups based on their effects on the system performance. The first group, referred to as the lightguide structural parameters, comprises the geometrical dimensions and angles in the scope of the whole lightguide entity. This group of parameters has global effects on the overall shape and dimensions of the lightguide, the ray propagation paths, as well as the image performance. The second group, called the out-coupler parameters, characterizes the dimensions and arrangement of the micro-structural mirrors. The out-coupler parameters do not affect the ray propagation within the substrate, but determine the distribution and interaction of the ray bundles locally as they are coupled out of the lightguide. It is worth noting that both groups of parameters significantly influence the performance of the out-coupled image; their main difference is that one has global effects while the other has local influence.

Based on the above classification, we treat the two groups of variables separately in two sequential optimization procedures. The lightguide structural parameters that affect the overall lightguide structure and ray paths are treated as the primary variables and optimized over a wide range, which is referred to as a global searching step for the purpose of selecting good structural configurations. The out-coupler parameters that locally affect the ray out-coupling and the ray distribution over the exit pupil are treated as secondary variables and optimized over a small range for the selected structural configurations, which is thus referred to as a local optimization step.

In the global searching step, the three most critical structural parameters are selected as variables: the in-coupler wedge angle β, the lightguide thickness t, and the inter-coupler distance H. A global searching method is adopted by setting up a searching range and increment for each variable and iterating over all possible combinations. The searching ranges of the variables should follow the constraints established in Sec. 2 as well as system ergonomics and optomechanics requirements. Table 1 provides an example of the ranges and increments of the variables for a PMMA-based lightguide substrate.


Table 1. Searching Ranges and Increments of Structural Variables for Global Searching.

The global search was implemented by utilizing the LightTools-MATLAB API. A MATLAB program assigned each variable a user-defined initial value, increment, and search range. The values of the other geometrical parameters correlated with the three structural variables, such as the wedge height and the mirror tapered angle, were updated according to the relationships defined in Sec. 2 for each variable combination to ensure configuration consistency. Nested loops were used to run the LightTools raytracing simulation through all of the variable combinations. Ten illuminance receivers placed at the eyebox of the lightguide recorded the simulation data: seven for the sampled incident fields and three for stray light. Following each raytracing simulation by LightTools, the results from all sampled receivers were passed back to the MATLAB program for computation of the performance metrics and merit function values. The variable settings in Table 1 yielded a total of 792 configurations, which were automatically raytraced during the global search process. The performance metrics (Sec. 3) for each configuration across the entire FOV were computed, from which a user-defined merit function (Sec. 4.1) value was obtained. The top 50 configurations with the minimum merit function values were selected from the 792 configurations as the optimal lightguide structures for further out-coupler optimization.
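The loop bookkeeping of the global search can be sketched as follows. Here `raytrace_metrics` is a hypothetical stand-in for the LightTools-MATLAB API calls that update the model and run the Monte Carlo simulation; only the exhaustive-grid structure and the top-k selection are meant literally:

```python
import itertools

def global_search(beta_range, t_range, h_range, raytrace_metrics,
                  merit_fn, top_k=50):
    """Exhaustive grid search over the three structural variables.

    beta_range, t_range, h_range : iterables of candidate values for the
        wedge angle, substrate thickness, and inter-coupler distance.
    raytrace_metrics : callable (beta, t, H) -> (S1, S2, S3, S4); in practice
        this would drive LightTools through its MATLAB API.
    merit_fn : callable mapping the four metrics to a scalar (Eq. (13)).
    Returns the top_k configurations with the smallest merit values.
    """
    results = []
    for beta, t, H in itertools.product(beta_range, t_range, h_range):
        s1, s2, s3, s4 = raytrace_metrics(beta, t, H)
        results.append(((beta, t, H), merit_fn(s1, s2, s3, s4)))
    results.sort(key=lambda item: item[1])  # ascending merit: lower is better
    return results[:top_k]
```

With the ranges and increments of Table 1, the three nested grids would enumerate the 792 configurations mentioned in the text.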

Following the global search, a local optimization was performed on the out-coupler parameters of each selected structural configuration, where the width, a, and the spacing of the out-coupler micro-mirrors were set as variables. The width of the mirror grooves remained constant across the mirror array, while the spacing of the mirrors was allowed to change across the array in a non-linear fashion by varying the coefficients A through D in Eq. (6). The total width, d, of the out-coupler area is set by Eq. (4) to cover all of the out-coupling field angles. The slanted angle of the MMA is set to ω = β/2, as given by Eq. (5). Table 2 summarizes the initial values and ranges of these variables, which yielded a total of 250 out-coupler configurations. The local optimization was also implemented with the LightTools-MATLAB API in a fashion similar to the global search. For each of the 50 configurations selected by the global search, we set the out-coupler variables, their initial values, and their ranges as shown in Table 2, and raytracing simulations were automatically repeated across 5 different out-coupler configurations for the 7 sampled field angles. For each lightguide configuration, the performance metrics and merit function of each out-coupler configuration across the whole FOV were computed to obtain the optimal out-coupler parameters. Finally, an optimal lightguide design was identified by comparing the performance metrics and merit functions of the local optima across the 250 candidate configurations.


Table 2. Searching Ranges and Increments of Out-coupler Variables for Local Optimization

5. MMA-based lightguide design example

To demonstrate the utility of the proposed performance metrics and optimization method described above, we designed an MMA-based lightguide offering an FOV of 26°(H) x 15.6°(V), a 4-mm exit pupil diameter, and a 23-mm eye relief. The substrate material was PMMA. The optimization was implemented with LightTools along with custom MATLAB scripts using the LightTools-MATLAB API. Figure 5 shows the optical layout and the ray paths of an optimized lightguide example. The out-coupling mirror grooves were modelled by a 3-D texture zone with an array of right-triangular prismatic bevels with 100% reflectance on the tapered edge and 100% transmittance on the flat spacing separating the prisms. The entrance pupil of the lightguide was set on the in-coupling surface of the in-coupler wedge. To model the ray propagation in the YOZ-plane, seven collimated circular surface sources were placed at the in-coupling surface with field angles of 0°, ±3.9°, ±7.8°, and ±13°, respectively. Ten illuminance receivers with a 2-mm radius were placed at the exit pupil of the lightguide, 23 mm away from the bottom surface of the substrate. Seven of the receivers collected the normal out-coupled rays from each field, while the other three collected the stray light from the right-side fields. Each receiver was divided into 9x9 bins. Considering an overall ray transfer efficiency of about 30% on average in our design, a minimum of 4.93×10⁵ rays, calculated by the Rose model [24], needs to be traced in the Monte Carlo ray-tracing simulation to maintain an image feature contrast above 0.2 and an error level below 10%.
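The ray budget can be estimated from a Rose-type criterion. In one common form (our illustrative reading, not necessarily the exact bookkeeping of [24]), resolving a feature contrast C with a relative noise error ε requires roughly 1/(Cε)² detected rays per receiver bin; dividing by the overall transfer efficiency gives the number of source rays to trace:

```python
def min_rays_rose(contrast=0.2, error=0.10, bins_per_receiver=81,
                  transfer_efficiency=0.30):
    """Order-of-magnitude Monte Carlo ray budget from a Rose-type criterion.

    This is an illustrative form of the criterion; the exact calculation
    in [24] may differ in constants and bookkeeping.
    """
    detected_per_bin = 1.0 / (contrast * error) ** 2  # rays per receiver bin
    detected_total = detected_per_bin * bins_per_receiver
    return detected_total / transfer_efficiency       # source rays to trace
```

With the stated parameters (contrast 0.2, 10% error, 9x9 bins, ~30% efficiency) this form lands in the 10⁵-10⁶ range, the same order of magnitude as the 4.93×10⁵ rays quoted in the text.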


Fig. 5 Schematic layout and ray path of the optimal MMA-based lightguide with efficiency bias.


The optimization variables and settings were configured following the discussion in Sec. 4.2. The global search produced a total of 792 different structural configurations, among which the 50 top-performing configurations continued through the local optimization step. Each of the 50 configurations yielded 5 out-coupler configurations. The simulation was performed on an Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz with 16.0 GB of RAM, and it took about 2.7 hours to finish all simulations. The performance of each lightguide design was assessed by the merit function given in Eq. (13). Between the two performance metrics, S1 and S2, the logarithmic weighting factor ratio, log(W1/W2), was varied from −1 to 1 with an interval of 0.1, so that optimal lightguide configurations with an efficiency or uniformity bias could be picked out under the different weighting factors.

Among the total of 250 designs in the local optimization, we selected two examples to demonstrate the optimization results: one considered optimal with a bias toward overall coupling efficiency, log(W1/W2) = 1, and the other biased toward overall image uniformity, log(W1/W2) = −1. Tables 3 and 4 summarize their corresponding structural and out-coupler parameters, respectively. It can be clearly seen that in both configurations the mirror arrangement tends to be denser at the far end than at the side closer to the incoming ray direction. Table 5 summarizes the ratio of the out-coupled flux to the source flux for each of the ten receivers. Since each receiver received rays from one field direction, these values demonstrate the coupling efficiency of the different field angles and the stray light. Table 6 summarizes the values of the performance metrics S1 through S4. We can clearly see that the configuration with the efficiency bias tends to have higher coupling efficiency, but at the cost of lower image uniformity compared with the configuration with the uniformity bias.


Table 3. Structural Parameters of Two Optimal Designs


Table 4. Out-coupler Parameters of Two Optimal Designs


Table 5. Coupling Efficiency and Stray Light across Sample Fields of Two Optimal Designs


Table 6. Comparison of Performance Metrics at the Exit Pupil of Two Optimal Designs

Following the procedure described in Sec. 3.2, we simulated the retinal images for both example designs. Across the full FOV of 26° x 15.6°, we simulated the retinal image illuminance for an assumed all-white input image at a field sampling increment of 2 arcminutes. The simulated retinal images are shown in Figs. 6(a) and 6(b) for the efficiency and uniformity biases, respectively. The intensity profiles across the middle of the vertical direction are plotted beneath each figure, with the fields sampled during optimization indicated by red markers. Although the design with the uniformity bias demonstrated better image uniformity over the sampled fields than the design with the efficiency bias, clear non-uniformity artifacts are still visible at the non-sampled field positions. The distributions of stray light are also plotted in Figs. 6(c) and 6(d). The design with the efficiency bias shows a broader stray light distribution, due to its smaller in-coupler wedge angle, than the design with the uniformity bias. We computed the performance metric values S1 through S4 across the entire FOV of the simulated retinal images; the results are summarized in Table 7. Overall, we may conclude that the design with the efficiency bias shows better performance in terms of image uniformity, coupling efficiency, and stray light distribution than the design with the uniformity bias.


Fig. 6 Simulation results of the retinal image and stray light distributions over full field of view. (a) Normalized retinal image intensity with efficiency bias design. (b) Normalized retinal image intensity with uniformity bias design. (c) Stray light distribution with efficiency bias design and (d) stray light distribution with uniformity bias design.



Table 7. Comparison of Performance Metrics on the Simulated Retinal Images of Two Optimal Designs

Constrained by the speed of Monte Carlo ray tracing, only a small number of field angles can be sampled during the simulation and optimization process to maintain a reasonable speed. As demonstrated in Fig. 6, artifacts are clearly visible across the entire FOV of the retinal images. To reduce the artifacts and further improve the out-coupled image quality, we propose a digital image correction method that pre-processes the input image according to the uniformity metrics measured for each field angle from the retinal images, compensating for the non-uniformity artifacts. To generate the digitally corrected image, the first step is to generate a normalized image profile by scaling the simulated retinal image (Fig. 6) of a white image source such that its maximum image intensity equals one. The normalized image profile is equivalent to the per-pixel coupling efficiency across the entire FOV. The reversed greyscale image filter is then generated by dividing a uniform image with a target greyscale value by the normalized image profile, followed by a step of normalization and scaling.
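The correction steps above can be sketched as follows; this is a hedged NumPy version in which the gamma calibration of [25] is omitted and the variable names (`retinal_white`, `target_grey`) are illustrative rather than from the paper:

```python
import numpy as np

def reversed_greyscale_filter(retinal_white, target_grey=0.8):
    """Build the pre-corrected input image from a simulated white-image response.

    retinal_white : simulated retinal illuminance for an all-white input.
    target_grey   : greyscale level of the desired uniform output (assumed).
    """
    # Step 1: normalized per-pixel coupling-efficiency profile (max -> 1)
    profile = retinal_white / retinal_white.max()
    # Step 2: divide the uniform target by the profile ...
    filt = target_grey / np.clip(profile, 1e-6, None)
    # ... then renormalize and scale back into the displayable [0, 1] range
    return filt / filt.max()
```

As a sanity check, displaying the filter through the same lightguide response (an element-wise product with the normalized profile) yields a uniform image, at the cost of a reduced peak level, consistent with the dynamic-range trade-off noted later in the text.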

To obtain the truly desired luminance values, the target illuminance map needs to be calibrated, and gamma correction is needed as demonstrated in [25]. Figures 7(a) and 7(d) show the reversed greyscale images obtained for the two design examples shown in Fig. 6. Using the reversed greyscale images as the input rather than the original white input image, we repeated the retinal image simulation for the two lightguide designs following the same procedures described in Sec. 3.2. Figures 7(b) and 7(e) show the simulated retinal image profiles after the digital correction for the designs with the efficiency and uniformity biases, respectively. The corrected illuminance maps are clearly much more uniform than those before correction (Figs. 6(a) and 6(b)), with only a little fuzzy noise that is hardly visible. To evaluate the residual illuminance non-uniformity and determine whether it can be perceived by human eyes, we plotted the binary just-noticeable-difference (JND) maps in Figs. 7(c) and 7(f), based on Barten's contrast sensitivity function (CSF) model and the DICOM standard [29], assuming the out-coupled image has a luminance of 200 cd/m2. The noticeable illuminance variations are plotted as white pixels in the maps, covering about 10.26% and 11.90% of the whole image for the two selected designs, respectively, while the non-noticeable areas are plotted as black pixels denoting uniform perception. It is worth noting that the proposed digital uniformity correction comes at the cost of reducing the dynamic range in the image areas with higher coupling efficiency.


Fig. 7 (a) and (d): The reversed greyscale images used as the digital input to compensate for image non-uniformity, for the efficiency and uniformity bias designs in Fig. 6, respectively. (b) and (e): The perceived retinal images after digital image uniformity correction. (c) and (f): Binary noticeable difference maps, where white pixels denote perceivable illuminance variations.


6. Design example of an MMA-lightguide-based AR display

Figure 8 shows a design example of the MMA-lightguide-based AR display. We used an RGB liquid-crystal-on-silicon (LCoS) microdisplay with a 0.55" diagonal size and a resolution of 1280 by 768 as the image generator, and implemented the optimized efficiency-bias MMA lightguide as the guiding optics. To reduce the overall system package and weight, an LCoS with a built-in LED illumination system was used [30]. Figure 8(a) shows the schematic layout of the MMA-lightguide-based AR display. To reduce the volume of the collimator while maintaining the image performance, we customized a monolithic freeform prism as the collimator. Figure 8(b) shows the 3-D view of the collimator, which has a refractive surface (surface 3 in Fig. 8(b)), a reflective surface (surface 2), a refractive/TIR surface (surfaces 1 and 1'), and two separate pupil planes in two orthogonal directions. A ray emitted from the microdisplay is first refracted by surface 3, then reflected by TIR at surface 1' and by surface 2 in sequence, and finally transmitted through surface 1 and coupled into the MMA lightguide. The MMA lightguide guides the light in one dimension, causing the pupil planes to be separated in the two orthogonal directions. The YOZ pupil plane of the collimator is located at the in-coupling wedge. It is relayed from the in-coupling wedge to the eyebox by the lightguide, since the collimated ray bundles from different field angles are guided by the substrate via different numbers of TIR reflections and coupled out by different MMAs along the z-axis, as shown in Fig. 8(a). However, the ray bundles propagate along the XOZ direction without pupil relay due to the 1-D slanted wedge configuration of the MMAs. In this case, to ensure that all XOZ fields can be seen at the eyebox, the XOZ pupil plane of the collimator should be placed at the eyebox position.
To separate the pupil planes in the two orthogonal directions, surfaces 1 and 2 were set to be 8th- and 6th-order XY-polynomial surfaces, respectively, whereas surface 3 was described as a 10th-order anamorphic aspheric surface contributing most of the power difference in the two orthogonal directions. The two orthogonal pupil planes are separated by a distance of 41.71 mm, with an entrance pupil diameter of 3.9794 mm, calculated from the MMA structural parameters [28].
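The sag of an XY-polynomial freeform surface such as surfaces 1 and 2 is a sum of monomials c_ij·x^i·y^j added to a base conic (omitted below for brevity). A minimal evaluation sketch, with illustrative coefficients rather than the actual design values:

```python
def xy_polynomial_sag(x, y, coeffs):
    """Sag z(x, y) = sum of c_ij * x^i * y^j for an XY-polynomial surface.

    coeffs : dict mapping (i, j) exponent pairs to coefficient values.
    The base-sphere/conic term of a real design is omitted here.
    """
    return sum(c * x**i * y**j for (i, j), c in coeffs.items())

# Unequal quadratic coefficients in x and y give different optical power in
# the two orthogonal directions, which is how the collimator separates its
# two pupil planes. Coefficients below are illustrative only:
coeffs = {(2, 0): 1e-3, (0, 2): 2.5e-3, (2, 2): -1e-6}
z = xy_polynomial_sag(1.0, 2.0, coeffs)
```

In an actual design, the coefficient set would extend to the 8th (surface 1) or 6th (surface 2) total order, as stated above.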


Fig. 8 (a) Perspective view of the ray bundle propagation in MMA lightguide YOZ-plane. (b) 3-D view of the freeform collimator design. (c) MTF plot over full field of view. (d) Distortion grid of the virtual image over full field of view. (e) CAD model of the MMA-lightguide-based AR display prototype.


Figures 8(c) and 8(d) show the image performance of the freeform collimator. The polychromatic modulation transfer function (MTF) for the representative wavelengths of 0.47, 0.55, and 0.61 μm is plotted in Fig. 8(c) at the frequencies of 30 cy/mm (red) and 52 cy/mm (green). The MTF remains above 0.14 at the display cut-off frequency (52 cy/mm) over the entire FOV. The distortion map, plotted in Fig. 8(d), shows less than 5% distortion over the full field of view, indicating that distortion is well corrected. Residual distortion can be further corrected by distortion calibration and pre-warping [25]. Figure 8(e) shows a proposed 3D model of the MMA-lightguide-based AR system with the microdisplay, freeform collimator, and MMA lightguide assembled together. The AR system shown in Fig. 8(e) has dimensions of 70 mm by 32.5 mm by 24 mm and is lightweight, compact, and easy to wear, with a glasses-frame appearance.

7. Conclusion

In this paper, we presented a systematic optical design and image performance evaluation method for micromirror-array-lightguide-based optical see-through near-eye displays. The lightguide geometrical parameters were identified and classified into global and local types, and an optimization method comprising both global searching and local optimization was employed accordingly to find the optimal MMA-lightguide structure. Four metrics evaluating the optical performance of the MMA lightguide were defined, and a merit function composed of these metrics was employed in the lightguide optimization and assessment processes. An example lightguide design with an angular resolution of 1.21 arcminutes over a 30° field of view was demonstrated, and configurations selected by efficiency- and uniformity-biased merit functions were compared. The design of an MMA-based AR display with a glasses-like form factor was also proposed. Future work includes further analysis of the diffraction effects, system tolerance assessment, and pupil expansion.

Disclosures

Dr. Hong Hua has a disclosed financial interest in Magic Leap Inc. The terms of this arrangement have been properly disclosed to The University of Arizona and reviewed by the Institutional Review Committee in accordance with its conflict of interest policies.

References

1. M. I. Olsson, M. J. Heinrich, D. Kelly, and J. Lapetina, “Wearable device with input and output structures,” U.S. Patent No. 9,285,592 (2016).

2. D. Cheng, Y. Wang, H. Hua, and M. M. Talha, “Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism,” Appl. Opt. 48(14), 2655–2668 (2009). [CrossRef]   [PubMed]  

3. X. Hu and H. Hua, “High-resolution optical see-through multi-focal-plane head-mounted display using freeform optics,” Opt. Express 22(11), 13896–13903 (2014). [CrossRef]   [PubMed]  

4. H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express 26(13), 17578–17590 (2018). [CrossRef]   [PubMed]  

5. Z. Zheng, X. Liu, H. Li, and L. Xu, “Design and fabrication of an off-axis see-through head-mounted display with an x-y polynomial surface,” Appl. Opt. 49(19), 3661–3668 (2010). [CrossRef]   [PubMed]  

6. J. W. Pan and H. C. Hung, “Optical design of a compact see-through head-mounted display with light guide plate,” J. Disp. Technol. 11(3), 223–228 (2015). [CrossRef]  

7. M. U. Erdenebat, Y. T. Lim, K. C. Kwon, N. Darkhanbaatar, and N. Kim, “Waveguide-Type Head-Mounted Display System for AR Application,” (2018). [CrossRef]  

8. T. Levola, "28.2: Stereoscopic Near to Eye Display using a Single Microdisplay," SID Symposium Digest of Technical Papers 38(1), 1158–1159 (2007). [CrossRef]  

9. H. Mukawa, K. Akutsu, I. Matsumura, S. Nakano, T. Yoshida, M. Kuwahara, and K. Aiki, “A full‐color eyewear display using planar waveguides with reflection volume holograms,” J. Soc. Inf. Disp. 17(3), 185–193 (2009). [CrossRef]  

10. H. J. Yeom, H. J. Kim, S. B. Kim, H. Zhang, B. Li, Y. M. Ji, S. H. Kim, and J. H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015). [CrossRef]   [PubMed]  

11. M. L. Piao and N. Kim, “Achieving high levels of color uniformity and optical efficiency for a wedge-shaped waveguide head-mounted display using a photopolymer,” Appl. Opt. 53(10), 2180–2186 (2014). [CrossRef]   [PubMed]  

12. B. C. Kress and W. J. Cummings, "11-1: Invited Paper: Towards the Ultimate Mixed Reality Experience: HoloLens Display Architecture Choices," SID Symposium Digest of Technical Papers 48(1), 127–131 (2017). [CrossRef]  

13. J. Han, J. Liu, X. Yao, and Y. Wang, “Portable waveguide display system with a large field of view by integrating freeform elements and volume holograms,” Opt. Express 23(3), 3534–3549 (2015). [CrossRef]   [PubMed]  

14. D. Cheng, Y. Wang, C. Xu, W. Song, and G. Jin, “Design of an ultra-thin near-eye display with geometrical waveguide and freeform optics,” Opt. Express 22(17), 20705–20719 (2014). [CrossRef]   [PubMed]  

15. Y. Amitai, "P-21: Extremely Compact High-Performance HMDs Based on Substrate-Guided Optical Element," SID Symposium Digest of Technical Papers 35(1), 310–313 (2004). [CrossRef]  

16. Y. Amitai, "P-27: A Two-Dimensional Aperture Expander for Ultra-Compact, High-Performance Head-Worn Displays," SID Symposium Digest of Technical Papers 36(11), 360–363 (2005). [CrossRef]  

17. K. Sarayeddine and K. Mirza, “Key challenges to affordable see-through wearable displays: the missing link for mobile AR mass deployment,” SPIE Proc. 8720, 8720DD (2013). [CrossRef]  

18. J. Yang, P. Twardowski, P. Gérard, and J. Fontaine, “Design of a large field-of-view see-through near to eye display with two geometrical waveguides,” Opt. Lett. 41(23), 5426–5429 (2016). [CrossRef]   [PubMed]  

19. Q. Wang, D. Cheng, Q. Hou, Y. Hu, and Y. Wang, “Stray light and tolerance analysis of an ultrathin waveguide display,” Appl. Opt. 54(28), 8354–8362 (2015). [CrossRef]   [PubMed]  

20. B. Pascal, D. Guilhem, and S. Khaled, “Optical guide and ocular vision optical system,” U.S. Patent, No. 8,433,172 (2013).

21. K. Sarayeddline, K. Mirza, P. Benoit, and X. Hugel, “Monolithic light guide optics enabling new user experience for see-through AR glasses,” Photonics Applications for Aviation, Aerospace, Commercial, and Harsh Environments V 9202–92020 (2014).

22. C. J. Wang and B. Amirparviz, “Image waveguide with mirror arrays,” U.S. Patent No. 8,189,263 (2012).

23. J. P. Rolland and H. Hua, “Head-mounted display systems,” Encyclopedia of optical engineering 1–13 (2005).

24. H. Hua, C. W. Pansing, and J. P. Rolland, “Modeling of an eye-imaging system for optimizing illumination schemes in an eye-tracked head-mounted display,” Appl. Opt. 46(31), 7757–7770 (2007). [CrossRef]   [PubMed]  

25. M. Xu and H. Hua, “High dynamic range head mounted display based on dual-layer spatial modulation,” Opt. Express 25(19), 23320–23333 (2017). [CrossRef]   [PubMed]  

26. A. W. Snyder and C. Pask, “The Stiles-Crawford effect-explanation and consequences,” Vision Res. 13(6), 1115–1137 (1973). [CrossRef]   [PubMed]  

27. J. Schwiegerling, Field guide to visual and ophthalmic optics (SPIE, 2004).

28. M. Xu and H. Hua, “Ultrathin optical combiner with microstructure mirrors in augmented reality,” Proc. SPIE 10676, 1067614 (2018). [CrossRef]  

29. P. Nema, “Digital imaging and communications in medicine (DICOM) Part 14: Grayscale standard display function,” National Electrical Manufacturers Association, Rosslyn, VA (2000).

30. https://www.miyotadca.com/mdca_product/.



Figures (8)

Fig. 1 Schematic layout of the geometrical lightguide based on (a) a partially reflective mirror array and (b) a micro-mirror array (MMA).

Fig. 2 (a) Schematic side view of ray paths and angular relations at the in-coupler wedge and out-coupling area of the MMA-based lightguide. (b) Stray-light path and the corresponding normal ray path at the out-coupling area: the normal ray is coupled out directly by the first micro-mirror, whereas the stray light is coupled out by the following micro-mirrors.

Fig. 3 Examples showing the effectiveness of the four metric functions. (a) Values of the four evaluation metric functions for 891 simulated MMA lightguide structures. (b) Eyebox illuminance distributions of three sampled fields. Top: low pupil-uniformity metric (S3) value. Bottom: high S3 value. (c) Eyebox intensities of three sampled fields in four cases. 1: low S1 and high S2; 2: high S1; 3: low S2; 4: high S4. (d) Planar graph showing the coupling efficiency of the four cases in (c).

Fig. 4 (a) Schematic layout of the out-coupled ray bundles and eye model. (b) Projected amplitude-reflectance distribution on the pupil plane for out-coupled field angles of 0° and 3.3°. (c) Illuminance distribution of the 3.3° incident field at the eyebox of the selected MMA configuration. (d) Normalized point spread function (PSF) of the 3.3° incident field on the retina. (e) Original test image and (f) simulated perceived retinal image with a 1.2 cycles/degree sinusoidal pattern.

Fig. 5 Schematic layout and ray path of the optimal MMA-based lightguide with efficiency bias.

Fig. 6 Simulated retinal image and stray-light distributions over the full field of view. (a) Normalized retinal image intensity of the efficiency-bias design. (b) Normalized retinal image intensity of the uniformity-bias design. (c) Stray-light distribution of the efficiency-bias design and (d) stray-light distribution of the uniformity-bias design.

Fig. 7 (a) and (d): Inverse greyscale images used as the digital input to compensate for image non-uniformity in the efficiency-bias and uniformity-bias designs of Fig. 6, respectively. (b) and (e): Perceived retinal images after digital image uniformity correction. (c) and (f): Binary noticeable-difference maps, where white pixels denote perceivable illuminance variations.

Fig. 8 (a) Perspective view of ray-bundle propagation in the YOZ plane of the MMA lightguide. (b) 3-D view of the freeform collimator design. (c) MTF plot over the full field of view. (d) Distortion grid of the virtual image over the full field of view. (e) CAD model of the MMA-lightguide-based AR display prototype.

Tables (7)

Table 1 Searching Ranges and Increments of Structural Variables for Global Searching

Table 2 Searching Ranges and Increments of Out-coupler Variables for Local Optimization

Table 3 Structural Parameters of Two Optimal Designs

Table 4 Out-coupler Parameters of Two Optimal Designs

Table 5 Coupling Efficiency and Stray Light across Sample Fields of Two Optimal Designs

Table 6 Comparison of Performance Metrics at the Exit Pupil of Two Optimal Designs

Table 7 Comparison of Performance Metrics on the Simulated Retinal Images of Two Optimal Designs

Equations (13)

\[ \beta > \sin^{-1}\!\left(\frac{1}{n}\right) + \sin^{-1}\!\left(\frac{\sin\theta_{\max}}{n}\right) \tag{1} \]

\[ W = \frac{t_1}{\sin\beta\cos\beta} - (t_1+\delta t)\left\{\tan\beta - \tan\!\left[\beta - \sin^{-1}\!\left(\frac{\sin\theta_i}{n}\right)\right]\right\},\qquad S = 2t\tan\!\left(\beta + \sin^{-1}\!\left(\frac{\sin\theta_i}{n}\right)\right) \tag{2} \]

\[ t_1 = 2t\sin\frac{\beta}{2} \tag{3} \]

\[ d \geq \mathrm{EPD} + 2\,\mathrm{ERF}\tan\theta_{\max} + 2t\tan\theta_{\max} \tag{4} \]

\[ \omega = \frac{\beta}{2} \tag{5} \]

\[ x = A + Bl + Cl^2 + Dl^3 \tag{6} \]

\[ \theta_{si} = \sin^{-1}\!\left\{ n\sin\!\left[4\omega + \beta - \sin^{-1}\!\left(\frac{\sin\theta_i}{n}\right) - 180^{\circ}\right]\right\} \tag{7} \]

\[ S_1 = \frac{1}{M}\sum_{i=1}^{M}\frac{P_r(i)}{P_{in}(i)} \tag{8} \]

\[ S_2 = \frac{\sqrt{\dfrac{1}{M}\displaystyle\sum_{i=1}^{M}\left[P_r(i)-\overline{P_r(i)}\right]^2}}{\overline{P_r(i)}} \tag{9} \]

\[ S_3 = \frac{1}{M}\sum_{i=1}^{M}\frac{\sqrt{\dfrac{1}{NK}\displaystyle\sum_{j=1}^{NK}\left[E_r(i,j)-\overline{E_r(i)}\right]^2}}{\overline{E_r(i)}} \tag{10} \]

\[ S_4 = \frac{1}{M}\sum_{i=1}^{M}\frac{P_{sl}(i)}{P_r(i)} \tag{11} \]

\[ \mathrm{PSF}(u,v;\xi,\eta) = \left|\frac{\exp\!\left[j\dfrac{\pi}{\lambda f}(u^2+v^2)\right]}{j\lambda f}\iint e^{jkr}\,E_r(\xi,\eta,x,y)\,P(x,y)\,e^{j\frac{2\pi}{\lambda}W_{ab}}\exp\!\left[-\frac{j2\pi}{\lambda f}(ux+vy)\right]\mathrm{d}x\,\mathrm{d}y\right|^2 \tag{12} \]

\[ \mathrm{MeritFun} = W_1\,(S_1)_{\text{normalized}} + W_2\,(S_2)_{\text{normalized}},\quad \text{with the conditions}\;\begin{cases} S_3 < R_{\text{pupil}} \\ S_4 < R_{\text{stray}} \end{cases} \tag{13} \]
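To make the evaluation metrics concrete, the following is a minimal sketch of how the four metrics S1 (mean coupling efficiency), S2 (power non-uniformity across fields), S3 (mean eyebox illuminance non-uniformity per field), and S4 (stray-light ratio) could be computed from ray-tracing simulation data, together with the constrained merit function. The function names, array layout (M sampled fields, an N×K eyebox grid flattened to NK samples per field), and the default thresholds `R_pupil` and `R_stray` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lightguide_metrics(P_in, P_r, P_sl, E_r):
    """Compute metrics S1-S4 for M sampled fields.

    P_in, P_r, P_sl : shape (M,) arrays of input power, out-coupled power,
        and stray-light power per sampled field.
    E_r : shape (M, NK) array of eyebox illuminance samples per field
        (an N x K sampling grid flattened to NK values).
    """
    P_in = np.asarray(P_in, dtype=float)
    P_r = np.asarray(P_r, dtype=float)
    P_sl = np.asarray(P_sl, dtype=float)
    E_r = np.asarray(E_r, dtype=float)

    # S1: coupling efficiency averaged over the M sampled fields
    S1 = np.mean(P_r / P_in)
    # S2: relative standard deviation of out-coupled power across fields
    S2 = np.sqrt(np.mean((P_r - P_r.mean()) ** 2)) / P_r.mean()
    # S3: per-field relative non-uniformity of eyebox illuminance,
    # averaged over fields
    field_mean = E_r.mean(axis=1, keepdims=True)
    field_rms = np.sqrt(np.mean((E_r - field_mean) ** 2, axis=1))
    S3 = np.mean(field_rms / field_mean.ravel())
    # S4: stray-light-to-signal power ratio averaged over fields
    S4 = np.mean(P_sl / P_r)
    return S1, S2, S3, S4

def merit(S1_norm, S2_norm, S3, S4, W1=1.0, W2=1.0,
          R_pupil=0.2, R_stray=0.05):
    """Weighted merit of normalized S1/S2, gated by the S3/S4 constraints.

    Returns None when a constraint is violated, so such lightguide
    configurations can be rejected during the global search.
    """
    if S3 >= R_pupil or S4 >= R_stray:
        return None
    return W1 * S1_norm + W2 * S2_norm
```

In a search loop, each candidate MMA configuration would be ray-traced, scored with `lightguide_metrics`, and kept only if `merit` returns a (non-`None`) value, mirroring the constrained optimization expressed by the merit function above.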