
See-through holographic retinal projection display concept

Open Access

Abstract

The field of near-eye see-through devices has recently received significant media attention and financial investments. However, devices demonstrated to date suffer from significant practical limitations resulting from the conventional optics on which they are based. Potential manufacturers seek to surpass these limitations using novel optical schemes. In this paper, we propose such a potentially disruptive optical technology that may be used for this application. Conceptually, our optical scheme is situated at the interface of geometric incoherent refractive imaging and radiative coherent diffractive imaging. The generation of an image occurs as a result of data transmission through a two-dimensional network of optical waveguides that addresses a distribution of switchable holographic elements. The device acts as a wavefront generator, and the eye is the only optical system in which the image is formed. In the following we describe the device concept and characteristics, as well as the results of initial simulations.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

The fields of augmented reality (AR), mixed reality (MR), and virtual reality (VR) have recently attracted renewed interest owing to the large opportunities offered by smartphone applications. As an extension of this personal, everyday device, smart glasses could allow users to interact directly with their favorite applications without looking at or touching a screen. Such smart glasses could generate new applications in relation to the surrounding analog world (AR) or to a digitized world, real or virtual (MR or VR).

To support and anticipate customer expectations, smart glass concepts and devices have been proposed with the help of graphic designers. Impressive marketing material has created a discrepancy between customer perception and the actual technological capabilities of these devices. A thin, light, aesthetically pleasing pair of glasses with low power consumption, showing a bright, high-contrast image with high resolution and a wide field of view, unfortunately remains a dream.

Most development on smart glasses for AR applications is based on a conventional imaging scheme built on the following steps:

  • (1) Sensing: perception of the surrounding environment by sensors [camera].
  • (2) Processing: computation of the digital image in relation to the surrounding field of view [micro-processor].
  • (3) Generating: a display creates the analog image [microdisplay].
  • (4) Transforming: the real image is transformed into a virtual image at a large distance so that it can be seen by the viewer [optical system].
  • (5) Propagating: the photons produced in the image creation process are brought to the eye [free-space or waveguide propagation].
  • (6) Combining: the virtual image is superimposed onto the surrounding scenery [semireflective, grating, or holographic elements].

These technological functions are difficult to integrate compactly, and most of the devices produced for AR applications are still closer to a smart helmet than to smart glasses.

Based on this analysis, we have sought an unconventional design that takes as its starting point the idealized image of lightweight, discreet smart glasses that has been presented to consumers, and that uses alternative technologies to achieve a thin, light, and bright see-through device.

We found that the difficulties encountered in optical system design stem from steps 3 to 6, which concern the manipulation of the image from the display to the eye. To circumvent these difficulties, an obvious solution is to emit the wavefronts related to the image directly in front of the eye. This led us to develop a new kind of display that mixes integrated photonics and digital holography [1,2]: integrated photonics brings light from the light sources to the eye as a data transfer system, and holography transforms these data into wavefronts for the image to be projected on the retina.

In Section 2, we describe our display concept. In Section 3, we introduce the concept of the self-focusing effect allowing image formation on the retina. We then present the two main technological constituents of the system in Sections 4 and 5: the integrated photonics light distribution and the digitalized holography, respectively. Sections 6 and 7 describe estimates of the imaging properties and device power consumption, respectively.

2. GENERAL CONCEPT

Image formation in the eye is described in Fig. 1(a): the eye transforms a wavefront coming from the source image and focuses it on the retina. The coordinates of this image point on the retina are given by the angle of the wavefront kp that reaches the cornea.

Fig. 1. Imaging into the eye: (a) imaging of a point at infinity, (b) near-eye display, (c) near-eye display with a single optical system, (d) near-eye display with a multiple lens/pinhole aperture, (e) near-eye display based on a phased array, and (f) near-eye display according to the CEA concept.

A conventional display emits light at the level of the image pixels. In a near-eye configuration the spherical wavefront ks generated by each pixel has a curvature that the eye is not able to correct [Fig. 1(b)]. Even if a wavefront has the correct angular orientation, the resulting image point on the retina is blurred. To form the image, an optical system is introduced that reduces the curvature of the spherical wavefront and produces a planar wavefront kp with the correct angular coordinates [Fig. 1(c)].

The use of an optical system increases the volume of the optical device and leads to severe constraints in terms of field of view and eye box. Lens-free near-eye displays are possible alternatives that allow the generation of planar wavefronts without a single-axis optical system. An example proposed by Maimone et al. uses an integral imaging concept [3]. An array of lenses or pinholes creates a distribution of elementary planar wavefronts with given angular coordinates kpi [Fig. 1(d)]. The size of the optical system is decreased thanks to the reduction in size of the optics. The lens or pinhole array plays the role of a pupil expander and allows the eye box constraint to be relaxed. Another solution, proposed by Sun et al., consists of the direct emission of a complex holographic wavefront kh. The device uses a phased array to generate the phase distribution of the image to be formed on the retina [Fig. 1(e)] [4].

These two solutions rely on a segmentation of the wavefront. The first case uses an incoherent wavefront angular distribution built on a conventional display. The second case uses a coherent wavefront phase distribution built on an unconventional integrated photonics device.

We propose an intermediate solution, situated at the interface of geometric incoherent refractive imaging and radiative coherent diffractive imaging. The wavefront distribution is built in a coherent way by the use of integrated photonics and holography. However, the resulting wavefront is not treated as a complex phase function calculated from the Fourier transform of the image. It is instead built as an incoherent geometrical combination of elementary coherent wavefronts derived from the angular coordinates of the image points.

The principle of our concept is described in Fig. 1(f): the display emits a distribution of spherical wavefronts with a given angular orientation ksi. These wavefronts are mutually coherent, so that a planar wavefront is generated in accordance with the Huygens–Fresnel principle. The eye can then focus the generated planar wavefront on the retina at the specified angular coordinate. As the only optical system needed to form the image on the retina is the eye itself, we can imagine a highly integrated architecture as the basis of the display device.

An artist’s view of our device is given in Fig. 2. Each emitting point that generates an elementary spherical wavefront is addressed by a waveguide that brings the image data to the outcoupling region. At this location a switchable outcoupling grating extracts the propagated light in the vicinity of a holographic element. This reflective elementary hologram can be considered as an oriented Bragg grating that defines the spherical wavefront angular orientation ksi,j.

Fig. 2. Artist’s view of the see-through display device with a zoom on one emissive point element.

The surface of the glass is covered by a complex waveguide design that addresses the emissive point distribution (EPD). A distribution of outcoupling electrodes allows the activation/deactivation of the wavefront emission in order to refresh the image formation on the retina. The light that forms the image is generated by an amplitude-modulated laser array coupled to the waveguide distribution.

If the device is made of transparent materials with small refractive index variations, we can expect an overall transparency that could allow see-through applications.

The operating principle of the device is described in Fig. 3. In this simple case we show the rendering of a basic image of 3×3 pixels. In Fig. 3(a) we have an exploded view of the device: (1) the laser array produces the light used to form the image; (2) the routing waveguides direct the light to the waveguide output distribution; (3) the waveguide output distribution addresses the EPD; (4) the outcoupling grating layer extracts light from guided to free-space optics; (5) the electrode layer enables light extraction at specified locations; and (6) the hologram layer defines light direction and fixes the EPD phase coherence.

Fig. 3. Principle of the device operation: (a) exploded view of the device concept, (b) description of the imaging process, and (c) the three steps necessary to project the 3×3 pixel image from an array of three laser sources and three electrodes.

Our simple illustration shows the formation of the letter “T” by scanning in three steps [Figs. 3(b) and 3(c)]. In step 1 we emit three angular directions corresponding to three image pixels. The three lasers emit coherent light at three different output power levels at a green wavelength. In the drawing, each green laser is associated with a specific color (red, blue, yellow). A first set of electrodes (gray) is activated, and each light beam guided from a laser is emitted at the glass surface on an EPD of the corresponding color. We have represented only one emitted beam for each laser, but each laser, i.e., each angular direction, is associated with a set of emissive holographic points in red, blue, or yellow. At this step the display projects on the retina three points corresponding to the first three angular directions [Fig. 3(b)].

In step 2 a second set of three image pixels is projected. Another set of amplitudes is given to the three laser sources, and another set of electrodes (pink) is activated in order to address the corresponding angular holographic EPD.

The same process is repeated in step 3 to project the last three image pixels. Persistence of vision is used to recover the whole image [Fig. 3(b)].

This ambitious concept is based on technological steps that have to be investigated and demonstrated theoretically and practically. One of the first issues concerns the ability to form an image in relation to the Huygens–Fresnel principle. We describe this image-forming method as the self-focusing effect.

3. SELF-FOCUSING EFFECT

A. Theoretical Analysis

The self-focusing effect has already been introduced and demonstrated experimentally by Hong et al. in the field of optical data storage [5]. The authors showed the ability to focus a laser beam from a combination of phase-adjusted laser beams. More recently this concept has found new applications in LIDAR devices [6].

In our case the self-focusing effect can be described by considering both a Gaussian beam model and multiple-beam interference. Figure 4(a) shows the basic concept of multiple beams focusing into the eye. Our display is located at a distance Z1 from the eye. The eye is described as a thin lens of focal length f. The retina is located at a distance f from the surface of the eye. The display emits a collection of Gaussian beams from the EPD points Mu,v. The number of emissive points is limited by the entrance pupil Π of diameter Dp, as shown in Fig. 4(b).

Fig. 4. (a) Principle of the Gaussian beam interferences from the display plane to the retina plane, (b) EPD on the display plane, and (c) the retinal plane.

The particularity of our concept is that every beam emitted from the emissive points Mu,v propagates with the same wave vector ki,j. Geometrically, after passing through the eye lens, all these parallel beams converge to the same point Pi,j, located in the focal plane of the eye lens.

We approximate the eye as a thin lens. According to the Gaussian beam formulation, the waist w1 located at Mu,v is imaged by the eye at a distance Z2 given by

$$Z_2 = f + \frac{f^2\,\sigma}{\sigma^2 + \pi^2 w_1^4/\lambda^2}, \qquad \text{with } \sigma = Z_1 - f. \tag{1}$$
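As a quick numerical check of Eq. (1), the following minimal Python sketch evaluates Z2 for a display placed at, and slightly beyond, the object focal plane of the eye (the waist and wavelength values are illustrative, borrowed from the simulation parameters of Section 3.B):

```python
import math

f = 23e-3      # eye focal length [m]
lam = 532e-9   # green wavelength [m]
w1 = 10e-6     # waist of the emitted beam [m]

def z2(z1):
    """Image distance of the waist w1 for a display at distance z1, Eq. (1)."""
    sigma = z1 - f
    return f + f**2 * sigma / (sigma**2 + (math.pi * w1**2 / lam)**2)

print("display in the focal plane: Z2 = %.1f mm" % (z2(f) * 1e3))        # Z2 = f
print("display 5 mm beyond       : Z2 = %.1f mm" % (z2(f + 5e-3) * 1e3))
```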

Equation (1) shows that if the display is positioned close to the object focal plane of the eye (σ=0), the distance Z2 is close to the focal length. The wavefronts converging on the point Pi,j are close to the waist location and can be considered as plane waves with wave vector ku,v. The behavior of the beam superposition at the point Pi,j can then be described as multiple interfering planar waves. We describe the field of the planar wave as follows:

$$E_{u,v}(\mathbf{r}) = E_0\, e^{\,i(\mathbf{k}_{u,v}\cdot\mathbf{r} + \phi_{u,v})}, \tag{2}$$
where $E_0$ is the beam amplitude and $\phi_{u,v}$ is a phase offset.

The interference energy function I(r) is given by the sum of the beams passing through the eye lens pupil aperture Π as

$$I(\mathbf{r}) = \Bigg[\sum_{u,v \,\mid\, M_{u,v}\in\Pi} E_{u,v}(\mathbf{r})\Bigg] \times \Bigg[\sum_{u',v' \,\mid\, M_{u',v'}\in\Pi} E_{u',v'}(\mathbf{r})\Bigg]^{*}. \tag{3}$$

It follows from Eq. (2) that

$$I(\mathbf{r}) = \sum_{u,v \,\mid\, M_{u,v}\in\Pi}\;\sum_{u',v' \,\mid\, M_{u',v'}\in\Pi} E_0^2\, \cos\!\big(\Delta\phi_{u,v,u',v'}(\mathbf{r})\big), \tag{4}$$

with the interphase function

$$\Delta\phi_{u,v,u',v'}(\mathbf{r}) = (\mathbf{k}_{u,v} - \mathbf{k}_{u',v'})\cdot\mathbf{r} + \phi_{u,v} - \phi_{u',v'}. \tag{5}$$

If the EPD is coherently phase adjusted for a given angular direction $\mathbf{k}_{i,j}$, the relative phase shift verifies

$$\phi_{u,v} - \phi_{u',v'} = -(\mathbf{k}_{u,v} - \mathbf{k}_{u',v'})\cdot\mathbf{r}_{i,j} \qquad \forall\, (u,v),\,(u',v'). \tag{6}$$

The interference function can then be expressed as

$$I(\mathbf{r}) = \sum_{u,v \,\mid\, M_{u,v}\in\Pi}\;\sum_{u',v' \,\mid\, M_{u',v'}\in\Pi} E_0^2\, \cos\!\big((\mathbf{k}_{u,v} - \mathbf{k}_{u',v'})\cdot(\mathbf{r} - \mathbf{r}_{i,j})\big). \tag{7}$$
The energy figure on the retina shows a maximum at the point Pi,j, which is the focus point relative to the emissive distribution Mu,v.

As a demonstration, and for a better understanding of the phenomenon, we pursue our theoretical analysis with a 1D emissive distribution in the plane (y, z), as shown in Fig. 5. The wave vector ku is expressed in relation to the propagation angle αu as

$$\mathbf{k}_u = \frac{2\pi}{\lambda}\begin{pmatrix} \sin\alpha_u \\ \cos\alpha_u \end{pmatrix}. \tag{8}$$
We consider the center O of the thin lens as the phase reference. The phase offset of Eq. (2) is then expressed as
$$\phi_u = -\frac{2\pi}{\lambda}\, y_u \sin\gamma_i. \tag{9}$$
The coordinate yu corresponds to the point Nu, the projection of the emissive point Mu onto the lens along the direction ki, characterized by the angle γi.

Fig. 5. Simple 2D geometrical representation for the calculation of the interphase function.

The interphase function at the focal plane z=f becomes

$$\Delta\phi_{u,u'}(y) = \frac{2\pi}{\lambda}\Big[(\sin\alpha_u - \sin\alpha_{u'})\, y + (\cos\alpha_u - \cos\alpha_{u'})\, f - (y_u - y_{u'})\sin\gamma_i\Big]. \tag{10}$$
In the paraxial approximation, Eq. (10) simplifies to
$$\Delta\phi_{u,u'}(y) = \frac{2\pi}{\lambda f}\,(y_u - y_{u'})(y - y_i). \tag{11}$$
We obtain an equation similar to Eq. (7), showing a maximum of energy at y = y_i.

Equation (11) also shows that if the emissive points are distributed on a square periodic grid of period Λ1, that is, if yu = u×Λ1, then other energy maxima occur at the coordinates

$$y = y_i + m\,\frac{\lambda f}{\Lambda_1}, \tag{12}$$
where m is an integer.

Equation (12) is the expression of the diffraction orders of the periodic EPD structure. It shows that the self-focusing effect can indeed concentrate the energy at the targeted location, but also at periodic resonances. The self-focusing effect could lead to a sharp image on the retina but, as the image is duplicated, a periodic EPD forbids effective imaging. To avoid this resonance effect, one solution is to introduce randomness in the EPD, as shown in Fig. 4(b).
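This resonance behavior is easy to reproduce numerically. The sketch below is a minimal 1D model with unit-amplitude emitters, using the paraxial interphase of Eq. (11); the period, pupil, and focal values are assumptions consistent with the simulations of Section 3.B:

```python
import numpy as np

lam, f = 532e-9, 23e-3           # wavelength and eye focal length [m]
Lam1, Dp = 100e-6, 500e-6        # EPD period and pupil diameter [m]
rng = np.random.default_rng(0)

n = int(Dp // (2 * Lam1))        # emitter indices fitting inside the pupil
u = np.arange(-n, n + 1)

def intensity(y_u, y):
    # Eq. (11): every emitter pair (u, u') is in phase at y = y_i = 0, so the
    # coherent sum of unit-amplitude plane waves peaks there
    phase = 2 * np.pi / (lam * f) * np.outer(y_u, y)
    return np.abs(np.exp(1j * phase).sum(axis=0)) ** 2

y = np.linspace(-400e-6, 400e-6, 8001)   # retinal coordinate [m]
I_per = intensity(u * Lam1, y)                                     # periodic EPD
I_rnd = intensity((u + rng.uniform(-0.5, 0.5, u.size)) * Lam1, y)  # randomized EPD

side = np.abs(y) > 60e-6                 # exclude the central lobe
print("replica spacing lambda*f/Lambda1 = %.1f um" % (lam * f / Lam1 * 1e6))
print("periodic   side peak / main peak = %.2f" % (I_per[side].max() / I_per.max()))
print("randomized side peak / main peak = %.2f" % (I_rnd[side].max() / I_rnd.max()))
```

With the periodic grid the side peaks equal the main peak (the replicas of Eq. (12)), whereas the randomized positions spread the side energy into a weaker speckle background.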

B. Simulation

We have simulated the self-focusing effect using the double formalism of Gaussian beams and multiple-beam interference. Equation (3) is used iteratively to sum the contributions of an EPD.

We consider Gaussian beam propagation to describe the amplitude of the beam from the display to the retina. The amplitude of the field in the plane of the display is given, to a first approximation, by a Gaussian function of waist w1:

$$E_0(x, y, Z_1) = \sqrt{\frac{2 P_0}{\pi w_1^2}}\; e^{-\frac{x^2 + y^2}{w_1^2}}. \tag{13}$$
In this approximation the waist is taken equal to half the emissive point diameter. The power emitted by each emissive zone is P0.

The beam propagates from the display to the eye and forms an image of waist w2 close to the retina:

$$w_2 = \frac{1}{n}\,\frac{f \lambda}{\pi w_1}, \tag{14}$$
where n is the refractive index inside the eye.

The simulated intensity function IS is calculated from the summation of the planar beams given in Eq. (3), with a Gaussian beam intensity weighting:

$$I_S(x, y, f) = \frac{2 P_0}{\pi w_2^2}\; e^{-2\frac{x^2 + y^2}{w_2^2}} \times \frac{I(x, y, f)}{I(0, 0, f)}. \tag{15}$$
As an illustration we choose the following simulation parameters that give a simple view of the phenomenon:
  • – emissive point radius w1 = 10 μm,
  • – eye lens pupil aperture Dp = 500 μm,
  • – eye lens focal length f = 23 mm.

We evaluate the interference figure as a function of the period of the EPD and compare the periodic distribution with a semirandom distribution, defined by the “randomly periodic” equations:

$$x_{u,v} = (u + \mathrm{rnd}) \times \Lambda_1, \qquad y_{u,v} = (v + \mathrm{rnd}) \times \Lambda_1, \tag{16}$$
where rnd is a random number drawn uniformly from [−0.5, 0.5].
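A minimal sketch of this randomly periodic EPD generation, counting the emitters that fall inside the pupil aperture Π (the Λ1 = 200 μm case of Fig. 6(b) is used for illustration, and the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
Lam1, Dp = 200e-6, 500e-6        # EPD period and pupil diameter [m]

n = int(np.ceil(Dp / Lam1)) + 1
u, v = np.meshgrid(np.arange(-n, n + 1), np.arange(-n, n + 1))

x_per, y_per = u * Lam1, v * Lam1                        # periodic grid
x_rnd = (u + rng.uniform(-0.5, 0.5, u.shape)) * Lam1     # Eq. (16)
y_rnd = (v + rng.uniform(-0.5, 0.5, v.shape)) * Lam1

def in_pupil(x, y):
    # emitters belonging to the pupil aperture Pi of diameter Dp
    return (x**2 + y**2) <= (Dp / 2) ** 2

print("emitters in the pupil, periodic EPD         :", in_pupil(x_per, y_per).sum())
print("emitters in the pupil, randomly periodic EPD:", in_pupil(x_rnd, y_rnd).sum())
```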

Figure 6(a) shows the case Λ1 = 400 μm: only one point of the EPD falls within the eye lens aperture (red dot in the left figure). The resulting intensity function on the retina shows only the Gaussian beam contribution. This Gaussian beam, of waist given by Eq. (14), represents the blurred signal on the retina.

Fig. 6. Results of the self-focusing intensity signal for various EPD configurations. Figures on the left show the EPD in the periodic (red dots) and randomly periodic (green dots) cases. Figures in the center and on the right give the intensity distributions for periodic and randomly periodic EPDs. (a) Λ1 = 400 μm, (b) Λ1 = 200 μm, (c) Λ1 = 100 μm, and (d) Λ1 = 50 μm.

In Fig. 6(b) the EPD period is decreased to 200 μm. We compare the periodic distribution, with five points on the EPD (red dots), and the randomly periodic distribution, which gives four points on the EPD (green dots). The resulting intensity functions show the periodic self-focusing effect (center) and the random self-focusing effect (right). Figures 6(c) and 6(d) show the cases Λ1 = 100 μm and Λ1 = 50 μm.

The choice of the parameter Dp in Fig. 6 gives a clear view of the phenomenon, as the waists w1 and w2 are visible on the same graph. In the case of a periodic EPD, the interfering beams generate a focus that is replicated with a period Λ2 given by Eq. (12). When randomness is introduced in the EPD, the diffraction orders vanish and a speckle pattern is formed, allowing a single focused spot of the image to be created. The size w1 of the emissive zone fixes the size w2 of the speckle distribution, while the size Dp of the eye lens aperture Π fixes the radius δw of the focus, which tends to the diffraction limit given by

$$\delta w = 1.22\,\frac{\lambda f}{D_p}. \tag{17}$$
We compare in Fig. 7 the normalized intensity cross section of the self-focusing signal, in the case of a randomly periodic EPD with Λ1 = 50 μm, with the theoretical Airy function [7] for a diffracting pupil aperture of diameter Dp.

Fig. 7. Comparison between the intensity cross section of the self-focusing signal in the random case of Fig. 6(d) (green curve) and the Airy function (dashed blue curve).

C. Display Analysis

We evaluate the efficiency of the self-focusing effect by measuring the signal-to-noise ratio (SNR) with parameters more consistent with our concept:

  • – emissive point radius w1 = 2 μm,
  • – eye lens pupil aperture Dp = 4000 μm,
  • – eye lens focal length f = 23 mm.

The SNR is calculated according to the equation

$$\mathrm{SNR} = 10 \log\!\left(\frac{I_S(0)}{\max_{\,r > \delta w} I_S(r)}\right). \tag{18}$$

We compare the results of periodic and randomly periodic EPDs to the theoretical SNR limit given by the first side-lobe maximum of the Airy figure [7]:

$$\mathrm{SNR}_{\mathrm{Airy}} = -10 \log\!\left[\left(\frac{2 J_1(5.136)}{5.136}\right)^{2}\right] = 17.57\ \mathrm{dB}. \tag{19}$$
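Both the diffraction-limited focus radius of Eq. (17) and the Airy SNR bound of Eq. (19) can be checked directly; a minimal sketch (SciPy is assumed available for the Bessel function J1):

```python
import math
from scipy.special import j1   # first-order Bessel function of the first kind

lam, f, Dp = 532e-9, 23e-3, 4000e-6   # Section 3.C parameters

dw = 1.22 * lam * f / Dp              # Eq. (17): diffraction-limited radius
x = 5.136                             # first side-lobe maximum of the Airy pattern
snr_airy = -10 * math.log10((2 * j1(x) / x) ** 2)

print("delta_w  = %.2f um" % (dw * 1e6))    # ~3.7 um
print("SNR_Airy = %.2f dB" % snr_airy)      # 17.57 dB
```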
The SNR results of the simulated EPDs are shown in Fig. 8. For a better comparison, the periodic case is stretched along the x axis (Λ1 → 10×Λ1).

Fig. 8. Results of SNR simulations for six randomly periodic EPD sequences (green dots) and for a periodic EPD (red dots, extended by a factor of 10 on the abscissa). The minimum, mean, and maximum curves of the random sequences are shown as solid lines. The Airy diffraction-limited SNR is also given for comparison.

As expected, the results show that randomness greatly improves the SNR and that the choice of the EPD has a strong impact on the imaging process. For a given period Λ1, the choice of the EPD random sequence can modify the SNR by several orders of magnitude.

As shown in Fig. 8, we need a small EPD period to increase the SNR and thus the image rendering. However, for a given emissive point size, the number of available EPDs decreases with the EPD period. As the number of available EPDs is directly related to the number of pixels of the projected image, we must strike a compromise between the quality and the resolution of the image. This compromise differs from the standard space–bandwidth constraint (for a given display size, sharpness and resolution increase together) [8] and underlines the unconventional aspect of our approach: increasing the image resolution degrades the image rendering.

The SNR is a first step in the characterization of the self-focusing process. The effective impact of the EPD choice on image quality, in terms of resolution, sharpness, or contrast constraints, is currently under investigation and will be published soon.

The choice of the EPD configuration is also a research topic strongly tied to technological constraints. The degree of randomness introduced in the EPD must be consistent with the technological solutions used to bring light to the surface of the display. In particular, it must take into account the limitations induced by the waveguide design.

4. WAVEGUIDE DISTRIBUTION ARRAY

The technological principle considered to bring light to the EPD is described in Fig. 9. A waveguide distribution array guides the energy that fixes the amplitude of the emitted signal. The location ru,v of the emitted beam is determined by the intersection of the waveguide Wu and the outcoupling activation electrode Ev. These intersections between the activated electrodes and waveguides define the EPDs. As shown in Fig. 3, several EPDs are activated at the same time, as different waveguides are addressed by different lasers.

Fig. 9. Principle of signal extraction from the guided mode to free-space propagation.

Figure 9 shows two waveguides Wu and Wu+1 addressed by two lasers q and q′. The electrode Ev extracts the two signals toward the holographic elements (hoels) hu,v and hu+1,v. The two hoels, which belong to the EPDs related to two image point angular coordinates, reflect the signals in the given angular directions ki,j and ki′,j′.

The device parameters are given in Figs. 10 and 11. The waveguides are designed to propagate a single mode in the visible range, at the wavelength of maximum hologram efficiency (around 532 nm for the polymer holographic material considered here). One promising technology for manufacturing waveguides operating in the visible range is silicon nitride [9]. Typical values for the waveguide thickness eg and width wg at these wavelengths are 200 nm and 300 nm, respectively. The distance dg between the waveguides is chosen to limit coupling to neighboring waveguides; a typical value of 1.5 μm can be considered. The SiN waveguides can be manufactured on transparent glass and covered by a SiO2 cladding of thickness ec.

Fig. 10. Cross section of the device showing the wire waveguide and hoel design.

Fig. 11. (a) Lateral section of the device showing the coupling between the waveguide and the hoel. (b) The same view during the recording process.

The outcoupling grating is etched in the cladding above the waveguide. A typical grating period for a wavelength of 532 nm is around 400 nm.
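For orientation, the period follows from the standard first-order phase-matching condition between the guided mode and the extracted beam. In the sketch below the effective index and extraction angle are illustrative assumptions, not values computed for the SiN stack described above:

```python
import math

lam = 532e-9                 # design wavelength [m]
n_eff = 1.6                  # assumed effective index of the guided mode
n_clad = 1.46                # SiO2 cladding index
theta = math.radians(10.0)   # assumed extraction angle in the cladding

# first-order grating equation: n_eff*k0 - n_clad*k0*sin(theta) = 2*pi/period
period = lam / (n_eff - n_clad * math.sin(theta))
print("outcoupling grating period = %.0f nm" % (period * 1e9))  # a few hundred nm
```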

The design of the waveguide, including the coupling/propagation constraints due to the particular random EPD choice, is currently being investigated theoretically and experimentally.

The electrode that activates the outcoupling grating can be made of a liquid crystal layer, as proposed by Buss et al. [10]. The width we of the electrode fixes the length of the outcoupling grating [Fig. 11(a)]; typical lengths are of the order of a few microns. The electrodes are separated by a distance de. The total number of emissive points NEP for the whole EPD family is given by the parameters de and dg in relation to the eye pupil aperture:

$$N_{EP} = \frac{\pi D_p^2}{4\, d_e d_g}. \tag{20}$$
The hoels are recorded in a holographic material deposited over the cladding. The thickness eh of the hologram layer depends on the expected characteristics of the hologram; typical values are about 10 μm to 20 μm. The width wh of the hoel depends on the recording process and fixes the size w1 of the EPD elements. The parameters wh and dg are not correlated, and two neighboring hoels can overlap if wh > dg. The recording process of the hoel can be compared to the holographic recording process involved in optical data storage through spatial shift multiplexing [11]. In contrast to holographic data storage, each hologram does not encode complex data but a single angular reference. Each hoel can then be interpreted as an elementary directional Bragg element.

5. DIRECTIONAL HOLOGRAPHIC ELEMENTS

A. General Principle

The principle of light coupling from the waveguide to the hoel has been described in Fig. 9. Here, we present in Fig. 11 the principle of the phase adjustment that ensures the efficiency of the interference produced by the directional hoels.

Figure 11(a) shows a lateral section of the device. A guided wave is extracted at two locations and is reflected by two hoels belonging to the same EPD. The length Lh of the hoel is related to the size of the electrode but is not limited by the interelectrode distance de. As with the width of the hologram, its length can exceed de, so that neighboring hoels can overlap. In our simulations we have chosen a hoel radius that defines a waist w1 = 2 μm for the emitted beam.

As presented in Section 3.A, each EPD must be phase adjusted in order to self-focus on a specific location of the retinal plane. The phase shift δϕ between the extracted beams described in Fig. 11(a) is related to the optical path shift and to the grating distribution. It can hardly be controlled by a nanoscale-resolution mask design over the whole device surface. Instead, phase adjustment is guaranteed by the intrinsic nanoscale resolution of the 3D hoel recording process.

The recording process is described in Fig. 11(b). The guided laser light plays the role of the reference beam, and a free-space beam coming from the same laser is used as the object beam. This beam is segmented into a multitude of elementary beams coming from a given angular direction. Activation of the outcoupling electrodes allows the creation of the interference pattern between the reference and object beams, which is recorded by the hoels of a given EPD.

When the reference beam is extracted from the waveguide, the conjugate object beam is generated in reflection, as shown in Fig. 11(a). The phase adjustment is automatically recorded from the original object beam.

B. Recording Setup

In order to record the hoel distribution corresponding to a given EPD, a specific recording setup has to be built. Figure 12 shows the basic concept of the setup. An optical fiber is used to split a laser beam into a reference and an object beam. The object beam is collimated by a two-lens optical system from a point source given by the first fiber end. The first lens L1 images the fiber end onto the object focal plane of the second lens L2. The position of this image fixes the angular direction ki,j of the collimated beam. An aperture mask corresponding to the EPD is located at the image focal point of the first lens and is imaged onto the hologram layer by the second lens L2. The object beam that impacts the hologram layer is collimated and phase adjusted in relation to the optical fiber position, and is segmented into a collection of beam spots defined by the aperture mask. The image of the aperture mask is aligned with the EPD, which is activated by the electrodes and by the optical coupling of the second optical fiber into a selected waveguide distribution (shown in the inset of Fig. 12).

Fig. 12. Recording setup for the hoel distribution manufacturing.

The recording setup shown in Figs. 11(b) and 12 uses a collimated planar object wavefront. It is designed to form an image at infinity for each eye. Modifying the longitudinal position of the fiber in the recording setup allows the device to form an image at a fixed, given viewing distance by modifying the curvature of the object wavefront. In this case, a symmetric off-axis pixel distribution for each eye should be used to reduce the vergence–accommodation conflict (VAC) that usually limits conventional smart glasses approaches [12]. Our concept allows viewing an image at different plane locations with limited VAC. However, the manufacturing process fixes the plane location for a given display device.

Numerous questions remain regarding the technological process for the realization of the holographic device. Uncertainties around the recording duration, the material behavior, and the possible replication methods have to be investigated to validate the potential commercial interest of the concept. However, the impressive achievements of commercially feasible terabit-scale hologram storage and hologram printers imply a technical maturity that should be applicable to our approach [13,14].

We are currently evaluating the hoel recording process and have presented some initial design considerations [15]. A collaboration with a polymeric holographic material supplier has been initiated and should allow us to soon record the first hoel distribution in order to validate the self-focusing effect in an image-forming process.

6. IMAGING PROPERTIES

A. Sharpness/Contrast Issue

The sharpness of an image formed by an optical system is generally characterized by its modulation transfer function (MTF). This function gives the efficiency of the system for the rendering of a spatial frequency with a given contrast. The MTF can be calculated from the Fourier transform of the point spread function (PSF) that is the impulse intensity distribution.

Figure 7 shows the PSF of an optimal self-focusing effect (green curve). The calculation assumes a perfect lens as the optical imaging system, and the signal in its central part is very close to the theoretical diffraction limit. A more realistic approach needs to take into account the specific optical characteristics of the human eye, and this raises some specific issues due to the human vision process.

The eye is a complex optical system whose performance does not follow diffraction theory. In contrast to the ideal case of Eq. (17), aberrations in the pupil periphery degrade the MTF as the eye pupil diameter increases [16]. On the other hand, it has been shown that a coherent laser interference imaging process can alleviate the peripheral pupil aberration distortion and improve the PSF [17]. The effective sharpness of the self-focusing effect in the eye is an open question and needs to be studied with modern eye models and physiological experiments under a coherent imaging process.

The measurement of the ANSI contrast is another means of evaluating the efficiency of an imaging system. It consists of forming a checkerboard image and measuring the intensity ratio between the dark and white cells. In the case of self-focusing imaging, the PSF can be divided into two parts: a thin central spot and a large noisy speckle contribution, as shown in Fig. 7. This leads us to introduce a double Gaussian model in which the thin central Gaussian spot contributes to the imaging process and the large Gaussian noise reduces the overall contrast [18]. Improving the contrast requires limiting the impact of the large Gaussian contribution. This is done by an optimal EPD design that improves the energy ratio between the two Gaussian contributions, and by limiting the number of pixels required to form an image (typically by the use of a nonadjacent pixel distribution, as shown below). Characterization of our system in terms of ANSI contrast simulation will be described in an upcoming paper.

B. Resolution/Image Rendering Issue

As mentioned in Section 3, the rendering of the image is related to the number of emissive points nEP of a given EPD. This number depends on the eye pupil aperture size and on the choice of the EPD function, in particular the EPD period Λ1:

$$n_{EP} = \frac{\pi D_p^2}{4 \Lambda_1^2}. \tag{21}$$

The total number of pixels Npix that can be projected on the retina is given by the number of EPDs:

$$N_{pix} = \frac{N_{EP}}{n_{EP}} = \frac{\Lambda_1^2}{d_e d_g}. \tag{22}$$
Equations (21) and (22) confirm that the image rendering, given by nEP, and the image resolution, given by Npix, are competing parameters. Reducing Λ1 increases the image quality but reduces the resolution.

We can define the following consistent parameters:

  • – electrode distance de = 4 μm,
  • – waveguide distance dg = 1.5 μm,
  • – EPD period Λ1 = 600 μm.

This gives a total number of pixels Npix = 93,750, which corresponds to a conventional image resolution of about 300×300 pixels.
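A direct evaluation of Eqs. (20)–(22), as a minimal sketch; note that with the exact values listed above, Eq. (22) gives 60,000 pixels, so the quoted figure of 93,750 corresponds to a slightly larger EPD period of about 750 μm:

```python
import math

Dp = 4000e-6             # eye pupil aperture [m]
de, dg = 4e-6, 1.5e-6    # electrode and waveguide pitches [m]
Lam1 = 600e-6            # EPD period [m]

n_EP = math.pi * Dp**2 / (4 * Lam1**2)   # Eq. (21): emitters per EPD
N_EP = math.pi * Dp**2 / (4 * de * dg)   # Eq. (20): total emissive points
N_pix = N_EP / n_EP                      # Eq. (22): = Lam1**2 / (de * dg)

print("n_EP  = %.0f emitters per EPD" % n_EP)    # ~35
print("N_EP  = %.2e emissive points" % N_EP)     # ~2.1e6
print("N_pix = %.0f image pixels" % N_pix)       # 60,000 with these values
```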

We have simulated in Fig. 13(a) a retinal projection based on an image of 300×300 pixels projected over a 15°×15° field of view (FOV). This artist's view highlights the differences between our retinal projection concept and a conventional display. The image is formed by separated luminous dots rather than by adjacent pixels. The angular distance that separates the dots is not necessarily uniform and can be adapted to the content for a given region of the FOV. In the example, the text is projected on the retina with an angular distance between the dots of 3 arcmin, and this value is increased to 4.5 arcmin for the GPS pictogram.

Fig. 13. (a) Simulation of an image projection according to our concept. The image on top shows an external view covering a 100° wide FOV. The red square indicates a projection zone of 15°×15° with a resolution of 300×300 pixels, together with a zoom on a section of the projected image, in which the angular resolution of the text and of the GPS pictogram is 3 arcmin and 4.5 arcmin, respectively. (b) Detail of the text projected as a dot distribution. (c) Detail of the same text resolution projected as an adjacent pixel distribution.

Equation (22) leads to low pixel counts. However, even if the resolution of the available image is low, the total number of pixels can be optimized by selecting specific regions of the FOV. This possibility underlines once again the unconventional nature of the concept: the perceived FOV, traditionally given by the product of the resolution and the angular pixel increment, can be increased here for a constant total number of pixels. The dynamic image addressing, for a specific region of the FOV and with a specific resolution, is, however, fixed for a given device. Each EPD can be modulated in emission power but not in angular reference.

In terms of image rendering, the dotted aspect of the projected image can modify the perceived resolution and improve the result in comparison with a conventional display, owing to half-toning visual effects. As an illustration, we compare the same text coded with a resolution of 86×9 pixels for an unconventional dot-pattern display [Fig. 13(b)] and for a conventional adjacent-pixel display [Fig. 13(c)].

Imaging properties are currently being investigated in an extensive study to evaluate the impact of the speckle noise. This theoretical study will be supported by experimental evaluations incorporating visual tests.

7. POWER CONSIDERATIONS

To conclude the technological review of our concept, we focus on power considerations to check whether the device is consistent with the objective of near-eye integration.

We target the projection of a fully bright image over a circular FOV of 15°. The image is characterized by a brightness B. We calculate the required power in relation to the etendue of the eye (Fig. 14):

$$\phi_e = B \times \pi^2 \times \frac{D_p^2}{4} \times \sin^2\beta. \tag{23}$$
The brightness required in AR applications is estimated to be in the range of 1000 cd/m² to 10,000 cd/m². From Eq. (23), the power entering the eye is then between 1 μW and 10 μW.
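An order-of-magnitude check of Eq. (23), as a minimal sketch; the luminance-to-power conversion assumes green light near the photopic peak (683 lm/W), with the pupil and FOV values used in the text:

```python
import math

B = 1000.0                 # target brightness [cd/m^2]
Dp = 4e-3                  # eye pupil diameter [m]
beta = math.radians(7.5)   # half-angle of the 15 deg circular FOV

flux_lm = B * math.pi**2 * Dp**2 / 4 * math.sin(beta)**2   # Eq. (23), in lumens
flux_w = flux_lm / 683.0   # photopic conversion, assuming ~555 nm light

print("power entering the eye: %.2f uW" % (flux_w * 1e6))  # ~1 uW; ~10 uW at 10,000 cd/m^2
```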

Fig. 14. Description of the etendue and eye box distribution in the intraretinal projection process.

The etendue of the beam seen from the eye is equal to the etendue of the eye seen from the beam. The emitting surface, characterized by a diameter De, is given by the relation

$$D_e = 2\, Z_1 \sin\beta. \tag{24}$$
One particular advantage of our concept is that the size of the emitting surface, which fixes the eye box (EB), is not tied to a single-axis optical system. The EB can therefore be extended by designing a large waveguide and holographic element distribution. We denote by DEB the diameter of the EB on the display.

The total amount of optical power emitted from the display is given by

$$\phi_{e\_tot} = \left(\frac{D_{EB}}{D_e}\right)^{2} \times \phi_e. \tag{25}$$
Power considerations are mainly driven by the losses that occur in the device. Losses include the coupling loss from the laser array to the waveguide distribution, propagation and outcoupling losses at the device surface, and the loss related to the holographic element efficiency. The losses related to the waveguide design at 532 nm and to the waveguide routing architecture are currently being investigated and will be published soon. The efficiency of the holographic elements will depend on the photopolymer material and on the multiplexing strategy. Current photopolymer holographic materials show diffraction efficiencies that can reach 98% [14].

We denote this overall device efficiency by ηd, which leads to the optical power emitted by the lasers:

$$\phi_{lasers} = \frac{\phi_e}{\eta_d}. \tag{26}$$
If we suppose an overall device efficiency of 1% and consider a conversion efficiency of 10% for the lasers, the required electrical power is between about 1 mW and 10 mW for brightnesses of 1000 cd/m² and 10,000 cd/m². This result is an approximation, but it shows that there may be no strong limitation in terms of power consumption. Our low consumption value can be explained by the directivity of the EPD, which concentrates the optical power toward the eye, and by the proximity between the eye and the display. This may be compared with evaluations of other see-through devices. LiKamWa et al. have shown that the LCOS display in Google Glass draws between 690 mW and 870 mW, depending on the sensed ambient brightness [19]. This power affects the usability of the device and is not optimized, as it does not depend on the projected image. In contrast, our emissive display adapts its power to the content, so that for projecting text information, as in Fig. 13, an electrical power far below 10 mW can be expected.
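The power chain behind these figures, under the stated efficiency assumptions, reads as follows (a minimal sketch for the 10,000 cd/m² case):

```python
phi_e = 10e-6       # power entering the eye at 10,000 cd/m^2 [W], from Eq. (23)
eta_d = 0.01        # assumed overall device efficiency (1%)
eta_laser = 0.10    # assumed laser conversion efficiency (10%)

phi_lasers = phi_e / eta_d         # Eq. (26): ~1 mW of emitted optical power
p_elec = phi_lasers / eta_laser    # ~10 mW of electrical power

print("%.1f mW optical, %.1f mW electrical" % (phi_lasers * 1e3, p_elec * 1e3))
```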

Another power consideration is eye safety. The image is projected on the retina in a scanning mode (Fig. 3) and must not create a hazard for the retina. Laser safety is based on the calculation of the maximum permissible exposure (MPE) at the cornea. For the case of a collimated laser beam, a blinking reflex of 0.25 s, and a pupil diameter of 7 mm, the MPE is about 6.4 J/m² [20], which corresponds to a laser power limit of about 1 mW.

In our display the exposure of the cornea for one pixel during 0.25 s is given by

$$E_{pix} = \frac{4\, \phi_e}{N_{pix}\, \pi\, D_p^2} \times (0.25\ \mathrm{s}). \tag{27}$$
For the worst case B = 10,000 cd/m², Eq. (27) leads to an exposure value of 5×10⁻⁷ J/m², which is far below the MPE limit.
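Evaluating Eq. (27) under these worst-case assumptions (φe = 10 μW, the 7 mm pupil used for the MPE, and the 0.25 s blinking reflex) confirms the order of magnitude, as in this minimal sketch:

```python
import math

phi_e = 10e-6    # power entering the eye at 10,000 cd/m^2 [W]
N_pix = 93750    # number of image pixels
Dp = 7e-3        # pupil diameter assumed in the MPE standard [m]
t = 0.25         # blinking-reflex exposure duration [s]

E_pix = 4 * phi_e / (N_pix * math.pi * Dp**2) * t
# ~7e-7 J/m^2, the same order as the ~5e-7 J/m^2 quoted above,
# far below the 6.4 J/m^2 MPE limit
print("exposure per pixel: %.1e J/m^2" % E_pix)
```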

The device thus seems to present no hazard for the eye. However, the assumption of the blinking reflex can be debated, and long-term laser exposure will have to be studied in more detail.

8. CONCLUSION

We present a complete theoretical overview of an unconventional imaging concept that could allow the development of a near-eye integrated transparent display. We describe the self-focusing effect, which could allow image formation by retinal projection in a lens-free device configuration. The concept is simulated, and first results on evaluation characteristics such as the focus SNR give initial insight into the device's features. The waveguide design and the hoel concept are introduced, and we give first perspectives on image rendering, device manufacturability, and power consumption.

Initial limitations are identified in terms of image rendering and commercial implementation. The image resolution is constrained by the waveguide integration and the self-focusing efficiency. The use of holographic elements limits the projection to a monochromatic image, and a fast holographic recording process has yet to be demonstrated. These limitations can be balanced by the new opportunities opened by the unconventional imaging approach. In particular, the ability to locally adapt the resolution of the image inside a discontinuous field of view may open up interesting applications.

More generally, this research can be seen as a fundamental reflection on the new opportunities for retinal projection that are opened by recent technological achievements in integrated photonics and holography. As our laboratory is strongly involved in conventional microdisplay design and manufacturing for AR/VR/MR applications [21], such investigations may anticipate potential technological evolutions.

Acknowledgment

We thank Prof. Haeberle from the Laboratoire MIPS of the Université de Haute-Alsace for fruitful discussions on diffraction and holographic issues.

REFERENCES

1. C. Martinez, “Image projection device,” U.S. patent 2015/0370073 A1 (December 24, 2015).

2. C. Martinez, V. Krotov, D. Fowler, and O. Haeberle, “Lens-free near-eye intraocular projection display, concept and first evaluation,” in Imaging and Applied Optics, OSA Technical Digest (Optical Society of America, 2016), paper CW1C.5.

3. A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33, 89 (2014). [CrossRef]  

4. J. Sun, E. Timurdogan, A. Yaacobi, E. Shah Hosseini, and M. R. Watts, “Large-scale nanophotonic phased array,” Nature 493, 195–199 (2013). [CrossRef]  

5. S. S. Hong, B. K. Horn, D. M. Freeman, and M. S. Mermelstein, “Lensless focusing with subwavelength resolution by direct synthesis of the angular spectrum,” Appl. Phys. Lett. 88, 261107 (2006). [CrossRef]  

6. M. Heck, “Highly integrated optical phased arrays: photonic integrated circuits for optical beam shaping and beam steering,” Nanophotonics 6, 93–107 (2017). [CrossRef]  

7. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge University, 1999).

8. A. W. Lohmann, R. G. Dorsch, D. Mendlovic, Z. Zalevsky, and C. Ferreira, “Space-bandwidth product of optical signals and systems,” J. Opt. Soc. Am. A 13, 470–473 (1996). [CrossRef]  

9. A. Z. Subramanian, P. Neutens, A. Dhakal, R. Jansen, T. Claes, X. Rottenberg, F. Peyskens, S. Selvaraja, P. Helin, B. Du Bois, K. Leyssens, S. Severi, P. Deshpande, R. Baets, and P. Van Dorpe, “Low-loss singlemode PECVD silicon nitride photonic wire waveguides for 532–900 nm wavelength window fabricated within a CMOS pilot line,” IEEE Photon. J. 5, 2202809 (2013). [CrossRef]  

10. T. Buß, C. L. C. Smith, and A. Kristensen, “Electrically modulated transparent liquid crystal-optical grating projection,” Opt. Express 21, 1820–1829 (2013). [CrossRef]  

11. G. Barbastathis, M. Levene, and D. Psaltis, “Shift multiplexing with spherical reference waves,” Appl. Opt. 35, 2403–2417 (1996). [CrossRef]  

12. G. Wetzstein, “Light field, focus-tunable, and monovision near-eye displays,” SID Symp. Dig. Tech. Pap. 47, 358–360 (2016). [CrossRef]  

13. L. Hesselink, S. S. Orlov, and M. C. Bashaw, “Holographic data storage systems,” Proc. IEEE 92, 1231–1280 (2004). [CrossRef]  

14. H. Bjelkhagen and D. Brotherton-Ratcliffe, Ultra-Realistic Imaging, Advanced Techniques in Analogue and Digital Colour Holography (CRC Press, 2013).

15. C. Martinez, V. Krotov, and D. Fowler, “Holographic recording setup for integrated see-through near-eye display evaluation,” in Imaging and Applied Optics, OSA Technical Digest (Optical Society of America, 2017), paper JTu5A.36.

16. F. W. Campbell and R. W. Gubisch, “Optical quality of the human eye,” J. Physiol. 186, 558–578 (1966). [CrossRef]  

17. B. A. Wandell, Foundations of Vision (Sinauer Associates, 1995), p. 54.

18. V. Krotov, C. Martinez, and O. Haeberlé, “Imaging performance analysis of a lens-free near to eye display,” in Imaging and Applied Optics, OSA Technical Digest (Optical Society of America, 2017), paper JTu5A.5.

19. R. LiKamWa, Z. Wang, A. Carroll, F. X. Lin, and L. Zhong, “Draining our glass: an energy and heat characterization of Google Glass,” in 5th Asia-Pacific Workshop on Systems (APSYS) (2014).

20. F. C. Delori, R. H. Webb, and D. H. Sliney, “Maximum permissible exposures for ocular safety (ANSI 2000), with emphasis on ophthalmic devices,” J. Opt. Soc. Am. A 24, 1250–1265 (2007). [CrossRef]  

21. F. Templier, L. Dupré, S. Tirano, M. Marra, V. Verney, F. Olivier, B. Aventurier, D. Sarrasin, F. Marion, T. Catelain, F. Berger, L. Mathieu, B. Dupont, and P. Gamarra, “75-1: Invited paper: GaN-based emissive microdisplays: a very promising technology for compact, ultra-high brightness display systems,” SID Symp. Dig. Tech. Pap. 47, 1013–1016 (2016). [CrossRef]  
