The Maxwellian-view display can provide visual information to people with low vision because retinal images can be formed independently of the refractive power of an eye by using rays converging on its pupil. This study presents the holographic Maxwellian-view display, which generates a wavefront converging on the pupil and forming images on the retina. The beam convergent point can be moved electrically in accordance with the pupil movement, and the beam width in the pupil can be changed electrically to control the depth of field of the eye. A compact optical system configuration for the holographic Maxwellian-view display is also proposed. The prototype system was constructed and experimentally tested. Because this holographic technique allows the phase modulation in the pupil, eye aberrations can be corrected; thus, retinal images can be formed for eyes with astigmatism.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
The Maxwellian view enables the image formation on a retina independent of the refractive power of the crystalline lens of an eye. Therefore, it can be used to provide visual information to people with low vision [2,3]. The Maxwellian view has also been used for visual research studies in the fields of psychophysics, brain imaging, and clinical testing. In the Maxwellian view, the retinal image is formed by converging the light rays on the eye pupil. Conventional Maxwellian-view displays [2,3] employ a lens to converge the rays; hence, they are based on geometrical optics. Therefore, viewers must locate their eyes so that the convergent point is placed on their pupils. This inconvenience limits the use of Maxwellian-view displays. This study proposes a flexible Maxwellian-view display based on holography to enable the electronic movement of the convergent point and, consequently, overcome this limitation of the conventional displays. Moreover, the proposed technique can correct wavefront errors in the pupil to produce retinal images for eyes with aberrations, such as astigmatic eyes. In particular, the proposed technique can correct complicated wavefront errors, such as irregular astigmatism, because the two-dimensional wavefront in the pupil can be controlled arbitrarily. Because irregular astigmatism cannot be corrected with eyeglasses and is sometimes difficult to correct with contact lenses, refractive surgery is otherwise required. Therefore, the holographic implementation of the Maxwellian-view display offers a means of providing visual information to those whose low vision is caused by complicated wavefront errors.
Head-mounted displays (HMDs) using the Maxwellian view have been developed [2,3,5] for virtual reality (VR) and augmented reality (AR) applications to solve the visual fatigue problem, which is caused by the vergence–accommodation conflict. When left and right images are shown to the corresponding eyes, the eyes rotate to observe three-dimensional (3D) images, and the vergence function correctly perceives the depth of the 3D images using the rotation information from both eyes. However, the eye focusing function, i.e., the accommodation function, does not work properly, because the eyes focus on the image plane of the HMD imaging system, not on the 3D images. This disagreement between vergence and accommodation causes visual fatigue. Maxwellian-view HMDs can solve this problem because the retinal images are still formed on the retinas when the eyes focus on 3D images. A Maxwellian-view HMD using the super multi-view technique [7,8] was proposed. Recently, another display using two-dimensional laser scanning by a micro-electro-mechanical system (MEMS) mirror was also developed. In addition, a flat-panel type Maxwellian-view display was proposed, which uses the integral imaging technique to control the directions of the rays.
Recently, several vision-correcting displays that are not based on the Maxwellian view have also been developed. In , an integral imaging display is used to control the directions of the rays to form retinal images. In , images deconvolved by a point-spread function of a low-vision eye are displayed on a multilayer display. The use of light field displays to show deconvolved images provides high-resolution and high-contrast retinal images.
The above-mentioned Maxwellian-view and vision-correcting displays are based on geometrical optics and control light rays through lenses. The technique proposed in this study, instead, is based on wave optics: the light wavefront is controlled by spatial light modulators (SLMs). Because the holographic technique is used to control the light wavefront, the convergent point can be moved electrically in accordance with the eye movement. Moreover, this technique can control the wavefront arbitrarily, so that images can be displayed at any distance from the eyes and eye aberrations can be corrected to produce sharp images on the retina. Techniques to move the convergent point of the light using tilted mirrors have been previously proposed for Maxwellian-view displays based on geometrical optics [15,16]; however, these techniques require additional mechanical components. The proposed holographic technique enables purely electronic movement of the light convergent point. This study also proposes a compact configuration for the Maxwellian-view display system, shortening the optical length via a telephoto lens and an SLM. Preliminary experimental results were previously reported in a conference paper; here, the proposed technique is explained in detail in terms of the depth of field (DOF) of the eyes. Explanations of the hologram calculations and the compact display system configuration are also added, and experimental results for astigmatism correction are newly shown. Recently, retinal image formation using a Maxwellian view based on holography was also proposed; however, that work did not use the holographic technique to move the convergent point on the pupil or to correct the wavefront errors in it.
2. Conventional Maxwellian-view display based on geometrical optics
Figure 1 illustrates the conventional Maxwellian-view display [2,3] based on geometrical optics. The eye imaging system, including the cornea and the crystalline lens, is simplified as an imaging system with one lens (eye lens). The rays emitted from the point light source are collimated by the condenser lens into parallel rays that illuminate the transmissive display. Then, the modulated rays from the transmissive display are refracted by the converging lens and converged into the pupil of the eye lens. Because the rays pass through the center of the eye lens, their directions do not change; therefore, the image displayed on the transmissive display is reproduced on the retina. Even when the eye lens changes its refractive power, the directions of the rays do not change; thus, the retinal images are formed independently of the refractive power of the eye lens.
Because the converging light has a finite beam width in the pupil, there is a limitation in the retinal image formation by the Maxwellian view. Under normal conditions, the pupil diameter determines the DOF range within which the eye can focus without perceiving blurs on the retina. Under the Maxwellian-view condition, because the beam width acts as the effective pupil diameter, the DOF range of the eye is determined by the beam width and, hence, it increases. The Maxwellian view is achieved in this extended DOF range. Figure 2 illustrates the various parameters determining the DOF range, including the beam width in the eye pupil (d), the length between the pupil and the image (l), the focal length of the eye lens (fe), and the minimum perceivable blur on the retina (δ). The DOF range is given by ln = lfed/(fed + lδ) (1) and lf = lfed/(fed − lδ) (2), where lf = ∞ when fed ≤ lδ.
The focal length of the converging lens, the object length, and the image length are denoted by fc, s, and s’, respectively. As shown in Fig. 2(a), when the display is placed between the converging lens and its focal plane (s < fc), the virtual image is produced between the converging lens and infinity (s’ < 0) and the DOF range is finite. As shown in Fig. 2(b), when the display is placed on the focal plane of the converging lens (s = fc), the virtual image is produced at infinity (s’ = ∞) and the DOF range is infinite (ln = fed/δ and lf = ∞).
In conventional Maxwellian-view displays, the transmissive display is usually placed on the focal plane as in Fig. 2(b). Here, fe = 17 mm, δ = 5 μm, and λ = 0.5 μm are assumed. When the beam diameter is 2 mm, the DOF range extends from 6.8 m to infinity. For AR and VR applications, this DOF range is located far from the eyes. When the display is placed between the lens and the focal plane, the DOF range can be generated nearer to the eyes, but its extent is reduced. For example, when the virtual image is produced at a distance of 3.0 m from the eyes, the DOF range is between 2.1 and 5.4 m.
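A minimal numeric sketch of this DOF geometry, assuming the thin-lens relations ln = l/(1 + lδ/(fed)) and lf = l/(1 − lδ/(fed)), reproduces the values quoted above:

```python
import math

def dof_range(l, fe, d, delta):
    """Near and far limits of the depth-of-field range (thin-lens sketch).

    l     : distance from pupil to the virtual image [m]; math.inf if collimated
    fe    : focal length of the eye lens [m]
    d     : beam width in the pupil [m]
    delta : minimum perceivable blur on the retina [m]
    """
    if math.isinf(l):
        # Image at infinity: near limit fe*d/delta, far limit infinite
        return fe * d / delta, math.inf
    ln = l / (1 + l * delta / (fe * d))
    lf_den = 1 - l * delta / (fe * d)
    lf = l / lf_den if lf_den > 0 else math.inf
    return ln, lf

# Values assumed in the text: fe = 17 mm, delta = 5 um, d = 2 mm
fe, delta, d = 17e-3, 5e-6, 2e-3
print(dof_range(math.inf, fe, d, delta))  # near limit ~6.8 m, far = inf
print(dof_range(3.0, fe, d, delta))       # ~2.1 m to ~5.4 m
```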
3. Flexible Maxwellian-view display based on holography
3.1. Basic idea
Figure 3 shows the basic idea of the holographic Maxwellian-view display proposed in this study. The SLM generates a wavefront converging into the pupil of the eye lens and producing images at any distance from it, as shown in Fig. 3(a). The holography technique allows the control of the inclination and the curvature radius of the wavefront; hence, the light convergence point can be electronically moved three-dimensionally to accommodate movement of the pupil, as shown in Fig. 3(b). Moreover, the beam width in the pupil can be changed to control the extension of the DOF range.
Figure 4 illustrates the holographic Maxwellian-view display system proposed in this study. It consists of an SLM and a converging lens. The converging lens is not strictly required because the SLM can generate converging wavefronts; however, since current SLMs have typical pixel pitches larger than 5 μm, they can only produce a lens function with a considerably long focal length, which extends the system length. Thus, the converging lens is used to shorten the system length, and the wavefront modulation by the SLM is used to move the light convergent point.
3.2. Hologram calculation process
Here, the calculation of the wavefront that should be generated by the SLM is explained. Figure 5 shows the simplified optical system used for the explanation. The origin of the coordinate system is located at the center of the SLM screen. The light convergent point is denoted by (xc, yc) and moved along the pupil plane (xp, yp), which is located at the distance zp from the origin.
The coordinates of the image plane are denoted by (xi, yi) and the distance between the reconstructed image and the pupil is zi. The target complex amplitude distribution of the reconstructed image is denoted by g(xi, yi). The center of the reconstructed image should be shifted to (xs, ys) so that the center of the diffraction pattern passes through the center of the SLM screen. The reconstructed image should have the phase distribution of a spherical wave converging at (xc, yc) on the pupil plane. Hence, the complex-amplitude distribution on the image plane is given by u(xi, yi) = g(xi − xs, yi − ys)exp[−ik{(xi − xc)² + (yi − yc)²}/2zi], (5) where k denotes the wavenumber.
Figure 6 depicts the calculation process. First, the target image is Fourier transformed, and the circular aperture, the auxiliary phase distribution ϕa(xp, yp), and the phase distribution ϕp(xp, yp) = k(1/zi − 1/zp)(xp² + yp²)/2 are multiplied. Next, the distribution is shifted to (xc, yc) and the inverse Fourier transform is performed. Finally, the phase distribution ϕo(xo, yo) = −k(1/zp − 1/fc)(xo² + yo²)/2 is multiplied. This last phase modulation is not required when zp = fc.
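The calculation steps can be sketched with NumPy as follows. The grid size, distances, and target image here are illustrative placeholders (not the prototype's exact configuration), and the auxiliary phase ϕa used for aberration correction is omitted:

```python
import numpy as np

# Illustrative parameters (not the prototype's exact values)
wl = 640e-9                  # wavelength [m]
k = 2 * np.pi / wl
p = 8.1e-6                   # SLM pixel pitch [m]
N = 256                      # grid size for this sketch
zp = 0.253                   # SLM-to-pupil distance [m]
zi = 0.5                     # pupil-to-image distance [m]
fc = zp                      # converging-lens focal length; zp = fc here

# Target image g(xi, yi): a simple bright square as placeholder
g = np.zeros((N, N), dtype=complex)
g[96:160, 96:160] = 1.0

# Pupil-plane sample coordinates (after the Fourier transform)
u = (np.arange(N) - N // 2) * wl * zp / (N * p)
xp_, yp_ = np.meshgrid(u, u)

# Step 1: Fourier-transform the target image
G = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(g)))

# Step 2: apply circular aperture (beam width d) and pupil phase phi_p;
# the auxiliary phase phi_a (aberration correction) is omitted here
d = 1.0e-3
aperture = (xp_**2 + yp_**2) <= (d / 2) ** 2
phi_p = k * (1 / zi - 1 / zp) * (xp_**2 + yp_**2) / 2
G = G * aperture * np.exp(1j * phi_p)

# Step 3: shift the distribution to the convergence point (xc, yc)
xc, yc = 2.0e-3, 0.0         # convergence point on the pupil plane [m]
dx = u[1] - u[0]
G = np.roll(G, (int(round(yc / dx)), int(round(xc / dx))), axis=(0, 1))

# Step 4: inverse Fourier transform back to the SLM plane
h = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(G)))

# Step 5: last phase modulation phi_o (skipped when zp == fc)
if zp != fc:
    v = (np.arange(N) - N // 2) * p
    xo_, yo_ = np.meshgrid(v, v)
    h = h * np.exp(-1j * k * (1 / zp - 1 / fc) * (xo_**2 + yo_**2) / 2)

print(h.shape, np.iscomplexobj(h))
```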
3.3. Resolution of retinal images
When the beam width in the pupil is decreased to enlarge the DOF range, the resolution on the retina decreases. The minimum resolvable distance on the retina is given by δR = 1.22λfe/d from the Rayleigh criterion. When δR is larger than the minimum perceivable blur (δ) on the retina, replacing δ by δR in Eqs. (1) and (2) gives the beam width (d) and the image distance (l) required to obtain the DOF range located from ln to lf as follows: d = √(2.44λlnlf/(lf − ln)) (8) and l = 2lnlf/(ln + lf). (9)
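Substituting δR = 1.22λfe/d for δ in the DOF limits and solving for d and l (fe cancels) gives closed forms; a sketch, under the thin-lens assumptions used here, that reproduces the values used later in the experiments:

```python
import math

def beam_width_and_image_distance(ln, lf, wl):
    """Beam width d and image distance l yielding a DOF range [ln, lf].

    Derived (thin-lens sketch) by replacing the blur delta with the
    diffraction limit 1.22*wl*fe/d, which cancels the eye focal length fe.
    ln, lf : near and far DOF limits [m]; wl : wavelength [m]
    """
    d = math.sqrt(2.44 * wl * ln * lf / (lf - ln))
    l = 2 * ln * lf / (ln + lf)
    return d, l

d, l = beam_width_and_image_distance(0.3, 1.0, 640e-9)
print(round(d * 1e3, 3), round(l * 1e3))  # ~0.818 mm, ~462 mm
```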
Based on the pixel pitch (p) and the resolution of the SLM (N × M), the horizontal and vertical angles of view of the reconstructed image are given by 2 tan−1(Np/2zp) and 2 tan−1(Mp/2zp), respectively. The image size on the retina is given by (Npfe/zp) × (Mpfe/zp). Thus, the resolution on the retina is given by (Npd/1.22λzp) × (Mpd/1.22λzp).
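Plugging prototype-like parameters quoted later in the text (an SLM of 1,920 × 1,200 pixels at 8.1 μm pitch, zp = fc = 253 mm, λ = 640 nm, d = 0.818 mm, and fe = 17 mm) into these expressions reproduces the reported angles of view, retinal image size, and resolution:

```python
import math

# Prototype-like parameters (as quoted elsewhere in the text)
p = 8.1e-6           # SLM pixel pitch [m]
N, M = 1920, 1200    # SLM resolution
zp = 0.253           # SLM-to-pupil distance (= fc here) [m]
fe = 17e-3           # eye focal length [m]
wl = 640e-9          # wavelength [m]
d = 0.818e-3         # beam width in the pupil [m]

fov_h = 2 * math.degrees(math.atan(N * p / (2 * zp)))   # ~3.5 deg
fov_v = 2 * math.degrees(math.atan(M * p / (2 * zp)))   # ~2.2 deg
size_h, size_v = N * p * fe / zp, M * p * fe / zp       # ~1.04 x 0.65 mm
res_h = N * p * d / (1.22 * wl * zp)                    # ~64 points
res_v = M * p * d / (1.22 * wl * zp)                    # ~40 points
print(round(fov_h, 1), round(fov_v, 1), round(res_h), round(res_v))
```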
4. Display system design
In this section, the holographic Maxwellian-view display system with a reduced system length is explained.
The width of the area on the pupil plane within which the light convergent point can be moved is given by λfc/p when zp = fc. This study assumes that the width of the pupil movement is 20 mm based on average human eyes. The experimental system built in this study consisted of an SLM with a pixel pitch of 8.1 μm and a light source with a wavelength of 640 nm. Under these conditions, the focal length fc of the converging lens would be 253 mm, and hence the system length would be large. Therefore, a technique to reduce the length of the optical system is required.
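Solving the relation above for the focal length gives fc = wp/λ; a minimal numeric check with the stated values:

```python
# Width of the convergent-point movement area is wl*fc/p (when zp = fc),
# so the focal length required for a given width w is fc = w*p/wl.
p, wl = 8.1e-6, 640e-9   # SLM pixel pitch and wavelength [m]
w = 20e-3                # desired pupil-movement width [m]
fc = w * p / wl
print(round(fc * 1e3))   # ~253 mm
```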
Figure 7 illustrates the technique proposed to shorten the optical system. A telephoto lens, which consists of a positive and a negative lens, is used as the converging lens. Its combined focal length can be enlarged by making the absolute value of the focal length of the positive lens longer than that of the negative one. As shown in Fig. 7, the positive lens shortens the distance between the SLM and the converging point, while the negative lens enlarges the radius of curvature of the wavefront to be equivalent to that of the spherical wave produced by a converging lens with a focal length of f. The combined focal length is longer than the back focus (the distance between the negative lens and the focal point). The focal point (F’) and the principal point (H’) are located on opposite sides of the telephoto lens.
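A rough thin-lens check of this design: for two thin lenses separated by g, the combined focal length obeys 1/fc = 1/f1 + 1/f2 − g/(f1f2). Solving for the separation that yields the fc = 253 mm used in this study, with the 75 mm and −25 mm focal lengths quoted in Sec. 5 (the real prototype uses thick lenses, so these numbers are approximate):

```python
# Thin-lens sketch of the telephoto combination (illustrative only; the
# prototype uses thick plano-convex/plano-concave lenses, so lengths differ).
f1, f2 = 75.0e-3, -25.0e-3   # positive and negative lens focal lengths [m]
fc_target = 0.253            # desired combined focal length [m]

# 1/fc = 1/f1 + 1/f2 - g/(f1*f2)  =>  solve for the lens separation g
g = f1 + f2 - f1 * f2 / fc_target
bfd = fc_target * (1 - g / f1)   # back focal distance (negative lens to focus)
length = g + bfd                 # first-lens-to-focus system length
print(round(g * 1e3, 1), round(bfd * 1e3, 1), round(length * 1e3))
```

The resulting system length (~117 mm in this thin-lens estimate, close to the 119 mm reported for the prototype) is well below the 253 mm combined focal length, which is the point of the telephoto arrangement.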
The SLM should be placed on the principal plane (H) for the combined lens system to be optically equivalent to the single-lens system shown in Fig. 5. In the proposed display system, the SLM is attached to the positive lens to further shorten the system length, as shown in Fig. 7. The physical diffraction from H to the positive lens is included in the numerical diffraction for the hologram calculation. Based on the combined focal length fc of the telephoto lens and the separation (g) of the two lenses, the distribution u’(xp, yp) on the pupil plane can be calculated from the distribution h’(xo, yo) on the SLM plane using the Fresnel approximation; this relation is given by Eq. (10). Combining Eq. (10) with Eq. (5) provides the wavefront that should be produced by the SLM, given by Eq. (11). Accordingly, the calculation process in Fig. 6 needs to be modified: the phase distribution must be changed to ϕp(xp, yp) = k[1/zi − 1/zp(1 − g/f)](xp² + yp²)/2, and the last phase modulation ϕo(xo, yo) is not required after the inverse Fourier transform.
In this study, a liquid crystal on silicon SLM (LCOS-SLM) was used; since it is a reflection-type SLM, the holographic Maxwellian-view display was realized with the optical system shown in Fig. 8. The optical system also provides a see-through feature by using a polarization beam splitter (PBS). The end of the optical fiber emitting the light is located on the focal plane F of the telephoto lens. The light is linearly polarized by the PBS and then collimated by the telephoto lens. The plane wave illuminates the LCOS-SLM, and the reflected light passes through the telephoto lens again to be converged. The light bent by the PBS toward the eye is the light amplitude-modulated by the SLM. A beam splitter (BS) is placed between the PBS and the SSB filter to monitor the pupil movement with a camera.
The complex-amplitude distribution given by Eq. (11) was encoded into the amplitude distribution to be displayed on the LCOS-SLM. The encoding method, briefly explained here and detailed in Ref. 20, is based on the single-sideband (SSB) filter technique [20,21], which eliminates the zero-order diffraction light and the conjugate image component by placing an SSB filter on the focal plane F’ of the telephoto lens. First, the Fourier transform of the complex-amplitude distribution is computed and moved to the upper half of the Fourier plane. The symmetric complex-conjugate distribution is then placed on the lower half of the Fourier plane, and the inverse Fourier transform is performed. Owing to the conjugate symmetry on the Fourier plane, the inverse Fourier transform yields a real distribution. Finally, a constant real value is added to obtain a non-negative real distribution. When this non-negative real distribution is displayed on the SLM, the symmetric complex-conjugate component is eliminated by the SSB filter on the focal plane F’. The constant real value generates a peak on the focal plane; the SSB filter thus also eliminates this zero-order diffraction light. Therefore, only the Fourier transform of the complex-amplitude distribution passes through the SSB filter.
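The encoding can be sketched as follows. This uses an equivalent formulation: multiplying a band-limited complex field by a vertical carrier and taking the real part places the signal spectrum in the upper half of the Fourier plane and its mirrored complex conjugate in the lower half, by the Fourier shift theorem. The data and grid size are placeholders, not Eq. (11) itself:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128

# A band-limited complex field standing in for the result of Eq. (11);
# limiting the vertical bandwidth keeps the two sidebands from overlapping.
spec = np.zeros((N, N), dtype=complex)
spec[N // 2 - N // 8: N // 2 + N // 8, :] = (
    rng.standard_normal((N // 4, N)) + 1j * rng.standard_normal((N // 4, N)))
field = np.fft.ifft2(np.fft.ifftshift(spec))

# A vertical carrier at a quarter of the sampling frequency moves the signal
# spectrum into the upper half of the Fourier plane; taking the real part
# adds the mirrored complex-conjugate spectrum in the lower half.
y = np.arange(N).reshape(-1, 1)
real_pattern = np.real(field * np.exp(1j * 2 * np.pi * y / 4))

# Add a constant so the pattern is non-negative and displayable as amplitude.
# The constant produces the zero-order peak that the SSB filter blocks, and
# the filter's upper-half aperture blocks the conjugate sideband.
hologram = real_pattern - real_pattern.min()
print(hologram.shape, hologram.min() >= 0)
```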
In Fig. 8, the optical fiber is shifted perpendicularly to the optical axis and the SSB filter is shifted accordingly to allow the eye to be located around the optical axis. The size of the Fourier transformed pattern on the focal plane is given by (λfc/p) × (λfc/p). Because the lower half of the Fourier transformed pattern is blocked by the SSB filter, the aperture size of the SSB filter is (λfc/p) × (λfc/2p). The area where the convergence point can be moved is equal to this aperture size. The rectangular shape of the SSB filter also eliminates higher-order diffraction patterns generated by the pixelated structure of the modulation screen of the SLM.
5. Experimental system
The proposed technique was experimentally verified by constructing a holographic Maxwellian-view display prototype.
The LCOS-SLM used was an LC-R 1080 (HoloEye Photonics AG), with a resolution of 1,920 × 1,200 and a pixel pitch of 8.1 μm. As the light source, a light emitting diode (LED) with a central wavelength of 640 nm and a spectral width of 18 nm was used, and it was attached to a multi-mode fiber with a core diameter of 0.20 mm.
The area where the pupil can move was 20.0 × 10.0 mm2 and the fc of the telephoto lens was 253 mm, as described in Sec. 4. The focal lengths of the plano-convex and the plano-concave lenses constituting the telephoto lens were 75.0 and −25.0 mm, respectively; these values were determined using optical design software by considering the spot diagrams on the focal plane, whose diameter was 0.107 mm. The length of the projection optics (l, see Fig. 7) was 119 mm, which is almost half of fc.
Figure 9(a) shows the constructed prototype. The SLM was placed at the bottom of the system and the connector to the optical fiber at the top. As shown in Fig. 9(b), a camera for monitoring the pupil movement can be attached to one side of the system. In this study, the camera was not used and only the movement of the convergence point based on the holographic technique was verified. The horizontal and vertical angles of view of the reconstructed images were 3.5° and 2.2°, respectively, when zp = fc. The retinal image size was 1.04 × 0.653 mm2.
6. Experimental results
6.1. Movement of the light convergence point
The movement of the light convergence point by using the holographic technique was experimentally verified and the results are shown in Fig. 10.
A paper sheet was attached to the SSB filter to simulate the pupil plane. The beam width d of the converging light was set to 1.0 mm. The reconstructed image was produced at 500 mm from the light convergence point. The convergent point was moved by ±9.5 mm and ±4.5 mm in the horizontal and vertical directions, respectively, considering the aperture size of the SSB filter and the beam width. Figure 10(a) shows the movements of the light convergence point on the pupil plane. Figure 10(b) shows the retinal images that were captured using a video camera instead of a real eye after removing the paper sheet. A spherical lens with a focal length of 20 mm was attached to the SSB filter so that the retinal images were obtained on the image sensor plane of the video camera.
6.2 Extension of the DOF range of the eyes
The extension of the DOF range of the eyes by limiting the beam width of the converging light was experimentally verified.
The DOF range was set to extend from ln = 300 mm to lf = 1,000 mm. In this case, the beam diameter was d = 0.818 mm from Eq. (8), and the distance from the pupil to the reconstructed image was l = 462 mm from Eq. (9). The minimum resolvable length on the retina was δR = 16.2 μm.
The same video camera from Sec. 6.1 was used to capture retinal images (see Fig. 11) in this experiment. The light convergence point was set at the center of the aperture of the SSB filter. Paper strips showing the distances from the pupil were placed in front of the prototype system for reference. The see-through function allowed both the reconstructed image and the paper strips to be observed. The focus of the camera was changed between 300 and 1,000 mm from the pupil by moving the image sensor plane. Although the paper strips were blurred in the see-through images in accordance with the change of the focus of the video camera, apparent changes were not observed in the reconstructed images.
6.3 Retinal image formation for astigmatism
The proposed technique can modulate the phase distribution on the pupil plane by adding the auxiliary phase ϕa(xp, yp), which allows the aberration of the eye lens to be corrected. In this subsection, the retinal image formation for eyes with both normal and irregular astigmatism is demonstrated. Aberration correction has previously been proposed for holographic optical tweezers to improve the sharpness of the spots used for optical trapping of particles. Furthermore, astigmatism correction using a holographic near-eye display has also been demonstrated.
The eye lens with normal astigmatism can be modeled as the combination of a spherical lens and a cylindrical lens. Normal astigmatism is characterized by two parameters, the degree of astigmatism (CYL) and the angle of astigmatism (AX). The former indicates the refractive power of the cylindrical lens, and the latter is the rotation angle of its axis. Thus, to correct normal astigmatism, the auxiliary phase ϕa(xp, yp) should be the phase distribution of a cylindrical lens with a refractive power of opposite sign and an identical axis with respect to the eye's cylindrical component.
The eye lens with irregular astigmatism can be modeled by the combination of a spherical lens and plural cylindrical lenses. In this case, multiple correction phase distributions corresponding to the plural cylindrical lenses should be added to obtain the auxiliary phase distribution ϕa(xp, yp).
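A sketch of such an auxiliary phase, assuming a simple quadratic cylindrical-lens phase and an illustrative sign convention (the helper function and sampling grid are not from the paper):

```python
import numpy as np

def cyl_correction_phase(xp, yp, cyl, ax_deg, wl=640e-9):
    """Auxiliary phase phi_a(xp, yp) correcting one cylindrical error (sketch).

    cyl    : degree of astigmatism CYL of the eye's cylinder [diopters]
    ax_deg : angle of astigmatism AX [degrees]
    The correction is a cylindrical-lens phase of opposite-signed power on
    the same axis; the sign convention here is illustrative.
    """
    k = 2 * np.pi / wl
    th = np.radians(ax_deg)
    # Coordinate perpendicular to the cylinder axis
    t = -xp * np.sin(th) + yp * np.cos(th)
    # Opposite sign of the eye's cylinder phase -k*CYL*t^2/2
    return k * cyl * t**2 / 2

# Pupil-plane grid (illustrative sampling)
x = (np.arange(256) - 128) * 8.1e-6
xp, yp = np.meshgrid(x, x)

# Normal astigmatism: a single correction phase
phi_a = cyl_correction_phase(xp, yp, -10.0, 90.0)

# Irregular astigmatism: sum the corrections for the plural cylinders
phi_a_irr = (cyl_correction_phase(xp, yp, -10.0, 180.0)
             + cyl_correction_phase(xp, yp, -6.7, 90.0))
```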
To simulate astigmatic eyes, cylindrical lenses were attached to the spherical lens described in Sec. 6.1 to capture the retinal images. Two cylindrical lenses with focal lengths of 100 mm (CYL = −10.0 D) and 150 mm (CYL = −6.7 D) were used. Although the absolute CYL values of human eyes are usually smaller than 3 D, this study used cylindrical lenses with such large refractive powers because cylindrical lenses with focal lengths longer than 300 mm are difficult to obtain.
Figure 12 shows the captured retinal images. For normal astigmatism with CYL = −10.0 D and AX = 90°, Figs. 12(a) and 12(b) show the retinal images formed without and with the auxiliary phase, respectively; the horizontally blurred retinal image was corrected. Figures 12(c) and 12(d) show the retinal images formed for normal astigmatism with CYL = −6.7 D and AX = 180°; the vertically blurred retinal image was corrected. For irregular astigmatism using the two cylindrical lenses (CYL = −10.0 D and AX = 180°) and (CYL = −6.7 D and AX = 90°), the retinal images formed without and with the phase correction are shown in Figs. 12(e) and 12(f), respectively.
As shown in Fig. 11, the reconstructed images could be observed when the focus of the video camera was adjusted to near and far positions. The Maxwellian-view effect was further confirmed using our own eyes. Specifically, people who usually wear eyeglasses in their daily lives could see the reconstructed images generated by the experimental system without wearing their glasses.
To verify the proposed technique, a video camera was attached to the SSB filter instead of a real eye, as placing an eye at the SSB filter was impractical. Because the divergence angle of the converging light was as small as 1.75°, the beam width increased only gradually near the light convergent point. Therefore, the eye did not have to be located exactly at the SSB filter to obtain the Maxwellian-view effect; the effect could be confirmed visually. Moreover, because the proposed technique allows the light convergent point to move along the optical axis, the convergent point can also be moved toward the eyes.
The resolution of the reconstructed images shown in Fig. 11 was low (64 × 40) because the image size on the retina was 1.04 × 0.653 mm2 and the minimum resolvable length was 16.2 μm. The resolution on the retina is proportional to the beam width at the pupil; therefore, when the DOF range is increased, the resolution of the retinal images decreases, but it can be recovered by increasing the retinal image size, i.e., the resolution of the SLM.
Image distortion was also observed in the reconstructed images as a consequence of lens aberration, because the telephoto lens had a simple structure consisting of plano-convex and plano-concave lenses. The image distortion can be reduced by using a telephoto lens consisting of more lenses with aspherical surfaces, and also by pre-distorting the target images before calculating the holograms.
In the experiments on the correction for astigmatism, the beam width on the pupil plane was set to 5 mm; initially, it was set to 1 mm, but the retinal image correction was not effective, and hence it was increased. A random phase distribution was added to the target image to make the Fourier transformed image uniformly distributed within the limited width beam. In this case, the use of the random phase distribution made the retinal images noisy as shown in Fig. 12.
In the experiments shown in Fig. 12, the retinal image formation for astigmatic eyes with CYL = −10.0 and −6.7 D was demonstrated. Such severe astigmatism is difficult to correct by the conventional approach of using eyeglasses. Therefore, the proposed technique provides an effective way to show images to people with severe and irregular astigmatism.
The experimental system used an LED as a light source. Initially, a laser diode was used and the retinal images were generated successfully; however, diffraction by eyelashes affected the retinal images. Therefore, an LED was chosen because of the low coherence of its light, which reduces the influence of diffraction on the retinal images.
In this study, the video camera for monitoring the pupil movement was not used. When the pupil-tracking function is used, the hologram calculation should be performed in real time. From Eqs. (10) and (11), two Fourier transforms are required in the hologram calculation process. In the experiments, the calculation time was 0.58 s using a PC with an Intel Core i7-6700K 4.0 GHz CPU. The use of GPUs would greatly shorten the calculation time.
A holographic Maxwellian-view display was proposed. Its ability to electrically move the light convergent point according to the eye movement without using any mechanical part and to change the beam width of the converging light to control the extension of the DOF range of the eyes was demonstrated. A technique for reducing the optical system length was also proposed and the display prototype system was constructed. The movement of the light convergent point within a ± 9.5 mm horizontal and ± 4.5 mm vertical region on the pupil plane was demonstrated. The retinal images were generated without blurs when the focus of the eyes changed between 300 mm and 1,000 mm from the pupil by setting the beam width to 0.82 mm. The retinal image formation for eyes with astigmatism was also demonstrated using the phase modulation on the pupil plane. The proposed system was demonstrated to effectively correct severe and irregular astigmatism.
Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Number 50236189.
References and links
2. T. Ando, K. Yamasaki, M. Okamoto, and E. Shimizu, “Head mounted display using holographic optical element,” Proc. SPIE 3293, 183–189 (1998).
3. T. Ando, K. Yamasaki, M. Okamoto, T. Matsumoto, and E. Shimizu, “Evaluation of HOE for head mounted display,” Proc. SPIE 3637, 110–118 (1999).
4. R. D. Beer, D. I. MacLeod, and T. P. Miller, “The extended Maxwellian view (BIGMAX): a high-intensity, high-saturation color display for clinical diagnosis and vision research,” Behav. Res. Methods 37(3), 513–521 (2005).
5. M. Inami, N. Kawakami, T. Maeda, Y. Yanagida, and S. Tachi, “A stereoscopic display with large field of view using Maxwellian optics,” in Proceedings of Int. Conf. Artificial Reality and Tele-Existence ’97, pp. 71–76 (1997).
8. Y. Takaki, “High-density directional display for generating natural three-dimensional images,” Proc. IEEE 94(3), 654–663 (2006).
9. H. Takahashi, Y. Ito, S. Nakata, and K. Yamada, “Retinal projection type super multi-view head-mounted display,” Proc. SPIE 9012, 90120L (2014).
10. M. Sugawara, M. Suzuki, and N. Miyauchi, “Retinal imaging laser eyewear with focus-free and augmented reality,” in SID Symposium Digest of Technical Papers (2016), paper 14–5L.
11. A. Yuuki, K. Itoga, and T. Satake, “A new Maxwellian view display for accommodation trouble free,” J. Int. SID 20, 581–588 (2012).
12. V. F. Pamplona, M. M. Oliveira, D. G. Aliaga, and R. Raskar, “Tailored displays to compensate for visual aberrations,” ACM Trans. Graph. 31(4), 81 (2012).
13. F.-C. Huang, D. Lanman, B. A. Barsky, and R. Raskar, “Correcting for optical aberrations using multilayer displays,” ACM Trans. Graph. 31(6), 185 (2012).
14. F.-C. Huang, G. Wetzstein, B. A. Barsky, and R. Raskar, “Eyeglasses-free display: towards correcting visual aberrations with computational light field displays,” ACM Trans. Graph. 33(4), 59 (2014).
15. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 190 (2017).
16. T. A. Furness III and J. S. Kollins, “Display with variable transmissive element,” US Patent 2003/0095081 A1 (2003).
17. N. Fujimoto and Y. Takaki, “Holographic Maxwellian-view Display System,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (online) (Optical Society of America, 2017), paper Th3A.4.
18. J. Park, “Waveguide-type See-through 3D Head-mounted Displays Without Accommodation Vergence Mismatch,” in Frontiers in Optics 2017, OSA Technical Digest (online) (Optical Society of America, 2017), paper FW5C.3.
21. T. Mishina, F. Okano, and I. Yuyama, “Time-alternating method based on single-sideband holography with half-zone-plate processing for the enlargement of viewing zones,” Appl. Opt. 38(17), 3703–3713 (1999).
22. K. D. Wulff, D. G. Cole, R. L. Clark, R. Dileonardo, J. Leach, J. Cooper, G. Gibson, and M. J. Padgett, “Aberration correction in holographic optical tweezers,” Opt. Express 14(9), 4169–4174 (2006).
23. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 85 (2017).