Abstract

Light-field near-eye displays can solve the accommodation/convergence conflict that can cause severe discomfort to the user. In actual systems, however, convergence depth and accommodation depth may fail to match because of the repeated zones and flipped images produced by traditional light-field methods. Moiré fringes, caused by the interaction between two periodic structures, are a further problem. We present a method of constructing a light-field near-eye display based on random pinholes, in which the random structure is employed as a spatial light modulator to break the periodicity of the elemental images. Light-field images are provided in a unique viewing zone in space, free of Moiré fringes. A proof-of-concept prototype has been developed to verify the proposed method.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Near-eye display technology is becoming increasingly important as a result of demands for wearable computing, which finds its way into cutting-edge applications such as virtual reality (VR) and augmented reality (AR) [1,2]. Most commercial near-eye displays have only one imaging plane and provide binocular parallax as the only way to perceive depth in a virtual scene; this leads to accommodation/convergence (AC) conflict. Studies suggest that this conflict causes visual confusion and fatigue, resulting in a poor user experience and a potential health hazard, a problem that is becoming increasingly critical [3].

To solve this conflict, multiple focal planes can be generated in near-eye displays by spatial-multiplexed [4,5] and time-multiplexed methods [6,7]. However, employing multiple display devices and half mirrors in the spatial-multiplexed methods, or active elements in the time-multiplexed methods, greatly increases system complexity. The Maxwellian-view display is another method that avoids this conflict by projecting images directly onto the user's retina through the pupil [8–10]; however, it removes retinal blur, which is a focusing cue for the human visual system. With state-of-the-art techniques, this method suffers from a small exit pupil: it cannot offer enough tolerance for users with different interpupillary distances, and it allows little room for the eyes to swivel within their sockets without vignetting.

Light-field displays are considered one of the most promising solutions to this problem, since they reconstruct sufficient light rays within the user's pupils to generate correct focus. Wetzstein et al. developed a light-field generation method based on multi-layered displays [11], and many other researchers have proposed methods to improve the performance of this type of display according to the characteristics of the human visual system (HVS) [12–14]. This computational multi-layer light-field method makes full use of the pixels on the display device to generate excellent three-dimensional (3D) scenes. However, in these light-field display constructions, rays cannot be controlled individually, and the amount of computation required is huge, which in turn restricts their development.

As the simplest and most effective way to generate the light field, microstructure array-based light-field near-eye displays have received considerable attention [15–22]. This method combines near-eye displays with integral imaging, which has already been applied to 3D displays, 3D image capturing and authentication [23–25]. A pinhole array or a micro-lens array is often employed to angularly sample the light from pixels on the display panel, integrally creating the light field of a 3D scene within the exit pupil of the eyepiece. Different structures for immersive and optical see-through displays have been developed, and key factors of these displays, such as rendering methods, resolution and eye box, have been discussed in detail [18,21].

Theoretically, sufficient rays should be produced within the eye box for a given light-field near-eye display for the user to experience a vivid virtual scene. However, in real-world applications, display performance still suffers from the practical problem shown in Fig. 1. The position of each component is ideally set according to the design requirements, and both eyes are also placed within the correct eye box of the generated light-field images. The virtual object observed by the left eye can match correctly with the one viewed by the right eye (shown in Fig. 1(a)).


Fig. 1 (a) Ideal light-field near-eye displays. (b) Actual systems with the repeated-zones problem.


This system is subject to 3D image artifacts when there are even slight misalignments between the microstructure array and display panel. Due to the periodic nature of the micro-structure array and the displayed pattern, users can perceive a complete virtual image at different positions, even when they are out of the view zone. This phenomenon, which is called flipped images or repeated zones [26,27], also exists in other 3D displays, such as integral imaging and autostereoscopic displays. The virtual images viewed in the repeated zones are similar to the correctly generated ones, but at different positions in the space (shown in Fig. 1(b)).

It is preferable to enlarge the exit pupil of near-eye displays, but in current systems this parameter is seldom larger than 10 mm [28–30]. In reality, users' interpupillary distances differ considerably, pupil diameters vary from 2 mm to 8 mm, and the eyeballs swivel within their sockets when viewing the virtual image. The user therefore adjusts the near-eye display system before use, after which the display and observer positions are fixed. Because of the issues mentioned above, the user's eyes may not end up within the viewing zones of the system. The user can nevertheless observe a complete image with one eye through a repeated viewing zone, which is hard to distinguish from the correct one, so the eyes may mistakenly be placed in the wrong position.

The original intention of introducing a light field is to solve visual confusion and fatigue in near-eye displays. However, misalignment and repeated zones may aggravate user discomfort even further through strong visual mismatching, so a solution to this problem is needed. In conventional naked-eye 3D displays, a set of physical barriers or a specially designed illumination system is employed to solve the problem, but the introduced elements greatly increase system complexity [31,32]. Random-hole displays have been developed for traditional 3D displays [33] and offer a promising route to solving this problem. However, to the best of our knowledge, no solution has yet been reported or analyzed for light-field near-eye displays.

Here, we develop a light-field near-eye display using random pinholes to solve the repeated-zones problem. With the proposed method, a unique viewing zone is provided as the only correct position from which to observe the virtual image. In near-eye displays, enlarging the eye box by optical system design has proved difficult, and eye-tracking methods introduce additional components. The proposed random-hole method is the simplest way to solve the repeated-zones problem.

Moiré fringes, caused by the interaction between two periodic structures, are another problem that 3D display designers must take into account. This problem exists not only in autostereoscopic displays using lenticular screens or barriers, but also in light-field displays. Many computational simulation methods, such as Fourier transformation and ray-tracing algorithms, have been developed to analyze Moiré fringes in display systems [34,35], and approaches to reduce or eliminate this annoying problem have been discussed for many years [36–38]. Displays based on random pinholes can also solve this problem in light-field near-eye displays, as the periodicity of the elemental images is broken by the random structure. The proposed method has been verified experimentally on a near-eye display prototype based on random pinholes, which exhibits neither repeated zones nor Moiré fringes.

2. Principle of a light-field near-eye display using random pinholes

2.1 System set-up

A schematic diagram of the proposed display system is shown in Fig. 2. Similar to traditional light-field near-eye displays based on periodic pinhole arrays, random pinholes (which can be a film or a solid structure) and the display panel (with backlight) are packaged together with a fixed distance between them, typically achieved by using a transparent spacer. The random pinhole-display panel sandwich is then placed close to the cornea of the eye, forming a near-eye system. As the eyes cannot focus on an object at a distance of 50 mm, the random pinholes are not resolved by the user. Each pixel on the display panel and the corresponding pinhole form a single ray. Multiple holes and multiple pixels on the display panel construct the light field of virtual images in this case, thus enabling the observer to see virtual images with full depth cues.


Fig. 2 Schematic diagram of the light-field near-eye display using random pinholes.


An LCD with a built-in backlight, as commonly used in smart phones and virtual reality products, was chosen as the display panel. Either an opaque thin film with random pinhole openings or a transparent spatial light modulator can act as the random-pinhole device in the proposed method; the former was chosen for our configuration. Thus, our experimental system consists of one LCD layer with orthogonal polarizers on both sides, one LED backlight, and one pinhole film at the front. The positions of these components are shown in Fig. 2. The pattern of random pinholes was printed on a transparent film by a laser printer, so the transmission rate, size and distribution of the pinholes are controlled by the printed pattern. A fixed gap was set between the random-pinhole film and the display panel by inserting an acrylic sheet, to synthesize an appropriate light field and eye box for viewing. To generate binocular images, both the display and the random-pinhole film are divided into two parts, in which the random pinhole distribution and displayed patterns are defined independently for each eye.

With the random structure, visible Moiré fringes are eliminated, since these are produced by the similar regular structures of the micro-structure array (pinhole or microlens) and the display panel. Light rays can be rendered for the correct viewing area only, using traditional methods such as real ray tracing or orthographic projection (discussed in Section 2.2); outside the correct zone, only noise or incorrect light rays can be observed, owing to the randomized hole distribution.

2.2 Rendering method

To produce a light field of virtual images for the user, the random pinholes and the corresponding displayed light-field images (consisting of multiple elemental images) are pre-rendered. Uniformly distributed random points, generated by a two-dimensional random-number algorithm, are selected across the display window to define the transparent openings of the pinhole panel. The minimum distance between any two points is kept larger than a given threshold, determined by the required eye-box size (discussed in Section 2.3). Pinholes are created by making the digital printing mask transparent at the positions of the random points, while all other positions remain opaque.

The rendering process for the displayed images is similar to that of integral imaging, and can be treated as the inversion of the imaging process of a light-field camera. It can be implemented with basic ray-tracing computer-graphics algorithms or with a 3D game engine such as Unity or Unreal. The elemental images can be regarded as renderings from a set of cameras with perspective projection, whose viewing axes connect the center of the eye box and the corresponding pinholes. These rendering methods have been discussed in previous work [21,22]. The rendering ray of a pixel on the display panel is the line connecting the pixel and its corresponding pinhole. In our method, the elemental images are not repeated periodically, so the key to the rendering method is finding the corresponding pinhole of every pixel on the display panel.

As shown in Fig. 3(a), let point O be the center of the exit pupil in the eye-relief plane, the designated position of the eye. O is not necessarily on the axis of the display panel, but is determined by the required pupil position. To find the corresponding pinhole for a pixel Pij on the display panel, the intersection Hin of the line OPij with the pinhole plane is first calculated. The corresponding pinhole is then the one closest to Hin: the distances between Hin and all the random holes are computed and the nearest hole is selected (shown as Hnear in Fig. 3(a)). Thus, Hnear is the corresponding pinhole of Pij. Applying this procedure to each pixel determines the corresponding hole of every pixel on the display panel.
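The pixel-to-pinhole assignment described above can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the vectorized brute-force nearest-neighbor search is an assumption (a k-d tree would scale better for large pinhole counts):

```python
import numpy as np

def assign_pinholes(pixel_xy, hole_xy, eye_xy, l_r, g):
    """For each display pixel P_ij, find the pinhole used to render its ray.

    pixel_xy : (P, 2) lateral pixel positions on the display panel [mm]
    hole_xy  : (H, 2) lateral positions of the random pinholes [mm]
    eye_xy   : (2,)   lateral position of the eye-box centre O [mm]
    l_r      : eye relief, eye plane -> pinhole film [mm]
    g        : gap, pinhole film -> display panel [mm]
    Returns the index of the corresponding pinhole for every pixel.
    """
    # Intersection H_in of the line O-P_ij with the pinhole plane: the
    # pinhole plane divides the segment O-P_ij in the ratio l_r : g.
    t = l_r / (l_r + g)
    h_in = eye_xy + t * (pixel_xy - eye_xy)                # (P, 2)
    # H_near is the pinhole closest to each H_in (brute-force search).
    d2 = ((h_in[:, None, :] - hole_xy[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```

Once this mapping is known, the rendering ray of each pixel is the line through the pixel and its assigned pinhole; since the film and panel are fixed, the mapping can be precomputed once and reused for every frame.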


Fig. 3 (a) Schematic diagram of rendering method. (b) Schematic diagram of the view zone of a light-field random-hole near-eye display.


When light-field images are displayed, light from each pixel is emitted in all directions in front of the panel. However, only the beams transmitted through the random pinholes escape and form rays that enter the observer's eyes, while the rest are blocked by the opaque film. Figure 3(a) shows three rays formed by the pixel Pij through three holes, marked with "√" and "×". Based on the proposed principle, the ray connecting Hnear and Pij is the correct ray that enters the eye box (marked with "√"). In this way, the rendering ray for each pixel can be obtained and the rendering process can be carried out with computer-graphics ray-tracing algorithms. In our prototype system, the relative positions of the LCD panel and the pinhole film are fixed, so the coordinates of the light rays for all pixels can be precomputed; the amount of computation required when changing 3D images is thereby reduced, enabling real-time rendering.

2.3 Viewing zone of a light-field near-eye display using random pinholes

The size of the eye box is determined by the distribution of the random pinholes. To illustrate the relationship, a 2D schematic diagram is shown in Fig. 3(b). $H_{i-1}$, $H_i$ and $H_{i+1}$ are three neighboring pinholes through which light can pass. The light-field images are rendered for an eye relief $l_r$, with an equivalent gap $g$ between the random-pinhole film and the display panel. As the random pinholes break the periodicity of the elemental images on the display panel, the distance between each pair of neighboring holes varies. $E_i$ is the size of the elemental image corresponding to pinhole $H_i$. According to the proposed rendering method (Section 2.2), the size of the elemental image is determined as follows:

$$E_i = \frac{l_r}{2l_r + 2g}\left(h_i + h_{i+1}\right), \tag{1}$$
where $h_i$ is the distance between the neighboring holes $H_{i-1}$ and $H_i$, and $h_{i+1}$ is the distance between the neighboring holes $H_i$ and $H_{i+1}$.

The size of the exit pupil over which correct ray information is obtained from pinhole $H_i$ is $D_i$; that is, the user can observe the elemental image $E_i$ through $H_i$ within the zone $D_i$. The exit pupil of the system is the overlap of the exit pupils of all pinholes, with $i$ running from one to the number of pinholes. By the principle of similar triangles, $D$ is determined by the minimum distance between any two pinholes, as shown in Eq. (2). Generalizing to a 3D system, the size of the exit pupil in each direction is determined by the minimum hole distance in the corresponding direction. The minimum distance should therefore be controlled to achieve an appropriate eye box at the required eye relief.

$$D = \min_i D_i = \min_i\left(\frac{l_r E_i}{g}\right) = \frac{l_r^2}{l_r g + g^2}\,\min_i h_i. \tag{2}$$
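As a quick numeric check of Eq. (2), the prototype values quoted in Section 3 ($l_r$ = 50 mm, a 5 mm acrylic spacer of index 1.49, i.e. $g \approx$ 3.356 mm in air, and 0.0475 mm pixels) reproduce the stated eye-box sizes. The function below is an illustrative sketch, not the authors' code:

```python
def exit_pupil(l_r, g, h_min):
    """Eq. (2): D = l_r^2 / (l_r*g + g^2) * min_i h_i  (all lengths in mm)."""
    return l_r ** 2 / (l_r * g + g ** 2) * h_min

l_r = 50.0          # eye relief [mm]
g = 5.0 / 1.49      # equivalent air gap of the 5 mm acrylic spacer [mm]

print(exit_pupil(l_r, g, 0.57))          # ~8 mm exit pupil
print(exit_pupil(l_r, g, 15 * 0.0475))   # 15-pixel minimum spacing -> ~10 mm
```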

To generate random pinholes with a given minimum distance, a large number of candidate holes is first generated by a random-number generator, each given a serial number from 1 to M (M is the number of candidates). The k-th hole is deleted if its minimum distance to the holes retained among the first (k-1) candidates is less than the given value; otherwise it is retained. After all candidate positions have been processed, the retained holes are the ones printed on the random-hole film.
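The hole-generation procedure above amounts to sequential rejection ("dart-throwing") sampling. A minimal sketch, assuming each candidate is tested against the holes already retained:

```python
import numpy as np

def random_pinholes(width, height, d_min, n_candidates, seed=1):
    """Draw uniform random candidate centres and keep the k-th candidate
    only if it lies at least d_min from every hole retained so far."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform([0.0, 0.0], [width, height], size=(n_candidates, 2))
    kept = []
    for p in pts:
        if not kept or np.min(np.linalg.norm(np.asarray(kept) - p, axis=1)) >= d_min:
            kept.append(p)
    return np.asarray(kept)

# e.g. a 20 mm x 20 mm patch with a 0.7125 mm (15-pixel) minimum spacing
holes = random_pinholes(20.0, 20.0, d_min=0.7125, n_candidates=3000)
```

The resulting point set is what gets printed as the transparent openings of the mask; increasing n_candidates packs the holes more densely while preserving the minimum spacing.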

2.4 The resolution of the generated light field

In light-field near-eye displays, each light ray is formed by a single pixel, and the light field consists of these rays. The organization of the generated light field, including its angular, spatial and depth resolution, has been analyzed in [18] and [21]; FOV and eye relief have also been discussed in those works. In terms of display performance, the multiple elemental images on the display panel overlap to form the virtual image in front of the user's eye, similar to the reconstruction process of integral imaging. The resolution of the image at different depths is improved by this overlap (a similar principle is shown in [35]), and the enhancement often depends on the content of the 3D image. That is to say, the organization of the generated light field affects the performance only slightly; the key parameter determining the quality of the virtual images is the number of light rays entering the eye.

The user's eye must be within the eye box to view the virtual light field; the pupil diameter, marked as $p$, is assumed to be 2–8 mm. The number of light rays entering the pupil from one pinhole can be calculated by projecting the pupil through the pinhole onto the display panel and counting the pixels inside the projection. If the pixel size is $s$ (assuming square pixels) and the number of holes on the pinhole film is $N$, the number of light rays entering the user's eye is given by Eq. (3), where round(·) is the rounding function.

$$R = \operatorname{round}\!\left(\frac{g^2}{l_r^2}\cdot\frac{\pi p^2}{4 s^2}\,N\right). \tag{3}$$
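The pixel counting behind Eq. (3) can be sketched numerically. With prototype-like values (4 mm pupil, $g \approx$ 3.356 mm, 50 mm eye relief, 0.0475 mm pixels, assumed here for illustration), roughly 25 rays per pinhole enter the pupil:

```python
import math

def rays_entering_eye(p, g, l_r, s, n_holes):
    """Project the pupil (diameter p) through one pinhole onto the display:
    the diameter scales by g / l_r, so the projected disc covers
    pi * (p*g/l_r)^2 / (4*s^2) pixels. Each such pixel contributes one ray,
    and the total sums over all n_holes pinholes."""
    proj_diam = p * g / l_r                         # projected pupil diameter [mm]
    pixels = math.pi * proj_diam ** 2 / (4 * s ** 2)
    return round(pixels * n_holes)

print(rays_entering_eye(4.0, 5.0 / 1.49, 50.0, 0.0475, n_holes=1))  # ~25 rays
```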

As mentioned above, visible Moiré fringes and flipping zones are known problems in integral imaging systems. In actual light-field near-eye displays these problems remain annoying, and previous approaches to solving them result in bulky systems. Our proposed method eliminates both problems simply, at the expense of slightly reducing the number of light rays entering the pupil. In traditional light-field near-eye displays using pinhole or microlens arrays, the distances between adjacent microstructures are equal; in our method the distances vary, so that for the same exit-pupil size the average distance between holes is larger than in the conventional method.

With this method of generating random holes, the total number of holes is reduced by around 20% compared with traditional methods. The pixel count of current display panels is sufficient for conventional near-eye displays but not for light-field near-eye displays, so a slight resolution reduction is currently visible. As panel resolution is increasing rapidly, this will be overcome in the future. The proposed method will remain the simplest way to solve the visible Moiré-fringe and flipping-zone problems, and the current slight reduction in resolution is justified.

3. Experimental results

To verify the proposed method, a prototype shown in Fig. 4(a) was fabricated using 3D printing. During the experiment, a camera was used to simulate the human eye. The experimental set-up is shown in Fig. 4(b). As mentioned above, the system consists of a random pinhole film, an LCD panel and an acrylic spacer.


Fig. 4 (a) Schematic diagram of the designed light-field near-eye displays based on random holes. (b) Photograph of the developed prototype and the camera (that simulates human eye). (c) Photograph of the components.


In the prototype, randomly distributed pinhole patterns were printed on transparent films, which could be interchanged during the experiment. Periodic pinhole arrays were made by the same printing process for comparison. A single LCD panel with a resolution of 1440 × 2560 and a refresh rate of 60 Hz was used; the RGB pixel size is 0.0475 × 0.0475 mm with 8-bit full color. An LED backlight providing uniform illumination is attached to the back of the display panel. Two orthogonal polarizers are attached to the two sides of the LCD, and a film with the hole pattern, cut to the same size as the LCD panel, is mounted on a 5 mm acrylic spacer in contact with the LCD.

The interpupillary distance of eyepieces is set at 64 mm; the average adult value. A black plastic sheet is inserted between the left and right viewing areas to separate the content for each eye and minimize unnecessary crosstalk. For one eye, 1440 x 1280 light rays are generated to form the image.

The acrylic insert has a refractive index of 1.49 at visible wavelengths, so the equivalent gap in air is 3.356 mm. The eye relief was set at 50 mm to provide a sufficiently large eye box; this also places the display panel close to the eye positions used in commercial near-eye displays for virtual reality applications. The eye relief is larger than usual because no eyepiece is required in the proof-of-concept prototype. Other parameters, such as pinhole spacing and spacer thickness, will be optimized in the next prototype iteration.

Based on Eq. (2), the minimum distance between any two random holes should be larger than 0.57 mm to give an exit pupil no smaller than 8 mm in diameter at the eye-relief plane. In the actual system, the minimum distance between random holes is set at 15 pixels (0.7125 mm), which ensures an exit pupil of around 10 mm diameter. Figure 6(h) shows an example of the random pinhole pattern.

The viewing zone is not a regular shape because of the randomness of the pinholes, so a simulation tool was developed to obtain its shape. For an arbitrary point in 3D space, a ray is traced from this point through each pinhole to the display panel. If the intersection lies inside the elemental image of the corresponding pinhole, the ray is counted as a signal ray; otherwise it is counted as a noise ray. The signal ratio of a viewing position is then defined as the number of signal rays divided by the total number of rays (signal plus noise): a signal ratio of 100% means an area without noise, and a signal ratio of 50% means that the numbers of signal and noise rays are equal at that eye position. With this tool, the signal ratio at any point in space can be obtained once the system is set up, and the viewing zone can be analyzed during the design process.
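The signal-ratio computation can be sketched as follows. This is an illustrative reimplementation, under the assumption that each pinhole's elemental image is described by an axis-aligned bounding box on the display panel:

```python
import numpy as np

def signal_ratio(eye_xy, z_eye, holes, elem_lo, elem_hi, g):
    """Fraction of rays reaching an eye position that carry signal.

    eye_xy  : (2,)   lateral eye position [mm]
    z_eye   : float  distance from the eye plane to the pinhole film [mm]
    holes   : (H, 2) pinhole positions on the film [mm]
    elem_lo : (H, 2) lower corners of each pinhole's elemental image [mm]
    elem_hi : (H, 2) upper corners of each pinhole's elemental image [mm]
    g       : float  gap from the pinhole film to the display panel [mm]
    """
    # Extend each eye->pinhole ray a further distance g to the display plane.
    hit = holes + (holes - eye_xy) * (g / z_eye)           # (H, 2)
    # A signal ray lands inside its own pinhole's elemental image.
    signal = np.all((hit >= elem_lo) & (hit <= elem_hi), axis=1)
    return signal.mean()
```

Sweeping eye_xy and z_eye over a grid of candidate positions yields signal-ratio maps like those in Fig. 5.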


Fig. 5 (a) Schematic diagram of the viewing zones with different signal ratios of the light-field near-eye display system. The signal ratio of positions at an eye relief of (b) 40 mm, (c) 50 mm and (d) 60 mm.


Figure 5(a) shows the distribution of viewing zones in space at different signal-ratio levels. The blue, green and red clusters represent viewing zones in 3D space with signal ratios of 100%, 80% and 60%, respectively. The blue zone has negligible noise and can be treated as the optimum viewing zone of the system. As for the shape of the eye box, the size of the exit pupil decreases as the eye plane moves away from the eye-relief plane (the optimum viewing plane); a similar phenomenon occurs in integral-imaging display systems. When the eye plane is nearer than the optimum viewing distance, crosstalk occurs rapidly as more unwanted rays passing through the micro-structure array are perceived by the user; when the eye plane is beyond the designated distance, the situation is similar. To show the viewing zone clearly, Figs. 5(b)–5(d) give the signal-ratio cross-sections on viewing planes parallel to the pinhole plane at distances of 40 mm, 50 mm (the designed eye-relief plane) and 60 mm, respectively (the positions of the planes are shown in Fig. 5(a)); the negligible-noise area of the exit pupil is around 10 mm in diameter at the eye-relief plane. Complete 3D images with the correct light field are observed when the user's pupil is inside this viewing zone. If the eye moves outside it, the perceived image quality drops drastically, as there are no repeated viewing zones in this space.

To make a direct comparison with the traditional method, a light-field near-eye display based on a periodic pinhole array was built by changing only the pinhole film. A pinhole-array pattern with a pitch of 15 pixels and a pinhole size of 0.15 mm was printed on the same type of transparent film; this hole size is around 300 wavelengths of visible light, so diffraction is negligible.

The original pattern and an enlarged micrograph are shown in Figs. 6(a) and 6(b); the random-pinhole counterpart, with a minimum inter-pinhole distance of 15 pixels and the same hole size of 0.15 mm, is shown in Figs. 6(h) and 6(i). The random film is the one employed in the experiment and in the analysis discussed above. In both cases, the target image (Mandrill) was set at a distance of 1 m. The elemental images for the two display methods, generated by ray tracing, are given in Figs. 6(c), 6(d), 6(j) and 6(k), respectively; the periodicity and randomness of the two displayed images are readily apparent.


Fig. 6 (a) Image of the pinhole array. (b) Enlarged image of the pinhole array. (c) Elemental image displayed in the light-field near-eye display using pinhole array. (d) Enlarged image of the elemental image. (e) Display performance in the view zone. (f) and (g) Display performance out of the view zone. (Visualization 1) (h) Image of the random holes. (i) Enlarged image of the random holes. (j) Elemental image displayed in the developed light-field near-eye display using random holes. (k) Enlarged image of the elemental image. (l) Display performance in the view zone. (m) and (n) Display performance out of the view zone. (Visualization 2)


Figures 6(e)–6(g) and 6(l)–6(n) illustrate the display behavior at various eye positions. Figures 6(e) and 6(l) show the light-field images from both displays when the camera was placed well inside the viewing zone; the light field of the target image is observed with high image quality. Figures 6(f) and 6(m) give the results when the camera was placed near the border of the viewing zone. Visible Moiré fringes (the low-frequency black-and-white stripes) can be seen in Figs. 6(e)–6(g) for the pinhole array, while no fringes appear in the images produced by the proposed method; the Moiré-fringe problem is thus solved.

For the display based on a periodic pinhole array, images from repeated zones start to emerge; although the image is still clear, the perceived light field is totally disordered. For the display using random pinholes, as the user's pupil moves away from the optimum position the signal ratio starts to decrease and the image starts to fade; these are clear signals warning the user where the viewing-zone borders are. When the camera is moved outside the viewing zone, the light field from the wrong repeated zones persists for the periodic pinhole-array method (Fig. 6(g)), which can mislead the user into thinking the eye is still in the correct position. Conversely, with the proposed random-pinhole method the image blurs out completely and cannot be observed; a clear image is obtained only within the correct viewing zone, while the light field observed outside it is noise (Fig. 6(n)). The results can also be inspected in Visualization 1 and Visualization 2, in which the camera moves through the viewing zone in the same direction for the traditional and proposed methods.

When comparing the performance of the traditional method (Fig. 6(e)) and the proposed method (Fig. 6(l)), the results using the random holes look slightly darker and the resolution is slightly lower. As mentioned in Section 2.4, the resolution of light-field near-eye displays is determined by the number of holes on the pinhole film; the number of holes also affects the brightness of the display, as the holes are the structures that let the backlight pass through. The pinhole-array pattern has a pitch of 15 pixels, while the random-hole film has a minimum inter-pinhole distance of 15 pixels, so the number of holes in the proposed method is smaller than in the traditional method, making the image a little darker with slightly lower resolution.

In addition to the Mandrill image, a standard test chart was used to compare the experimental results of the traditional and proposed methods. Similar to Fig. 6, the elemental images of the test chart for the two light-field near-eye methods are shown in Figs. 7(a) and 7(e). By moving the camera, display results were obtained within the viewing zone (Figs. 7(b) and 7(f)), near its border (Figs. 7(c) and 7(g)), and outside it (Figs. 7(d) and 7(h)). The suppression of the flipped-image problem by the proposed method is evident in the experimental results, with a slight decrease in display resolution. The pixel count of current display panels is not sufficient for light-field near-eye displays, as the pixels encode not only spatial resolution but also depth information. With the rapid development of display techniques, this lower-resolution problem could be overcome in the near future; multiplexing methods can also be applied to enhance display performance [39–41].


Fig. 7 Display performance using a standard test chart. (a) Elemental image displayed in the light-field near-eye display using pinhole array. (b) Display performance in the view zone. (c) and (d) Display performance out of the view zone. (e) Elemental image displayed in the developed light-field near-eye display using random holes. (f) Display performance in the view zone. (g) and (h) Display performance out of the view zone.


The main contribution of our work is to solve the visible Moiré-fringe and flipping-zone problems whilst maintaining the specifications of traditional displays, such as the eye box, field of view and resolution. This has been demonstrated by the proof-of-concept prototype, and the idea can also be applied to other types of light-field near-eye displays.

4. Conclusions

We have demonstrated random pinholes as a light-field generation device for near-eye displays. By employing the random structure, we solved the light-field mismatching problems caused by structural periodicity in existing light-field near-eye displays. The key design principles of random pinhole displays have been discussed, including the characteristics of the viewing zone and the rendering method. A proof-of-concept prototype was built, and experiments validated the advantages of random pinholes: correct virtual light-field images with no repeated zones. In addition, the images showed no sign of Moiré fringes, which can be a serious problem in displays incorporating periodic structures. Future work includes improving display performance and generalizing the method so that it can be incorporated into optical see-through near-eye displays for AR applications.
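The core idea of the random structure can be illustrated with a small sketch (not the authors' actual layout procedure; dimensions, hole count, and minimum separation below are hypothetical): an aperiodic pinhole layout is generated by rejection sampling with a minimum-separation constraint, which removes the regular spacing that produces repeated zones and Moiré fringes while keeping holes from clustering too closely.

```python
# Illustrative sketch (all parameters hypothetical): generate an aperiodic
# pinhole layout by dart-throwing rejection sampling with a
# minimum-separation constraint. The lack of any lattice period is what
# breaks the repeated-zone and Moire-fringe behavior of a regular array.
import random

def random_pinholes(width, height, n_holes, min_sep, seed=0, max_tries=100_000):
    """Place n_holes points in a width x height region such that every
    pair of points is at least min_sep apart. Returns a list of (x, y)."""
    rng = random.Random(seed)  # fixed seed for a reproducible mask
    holes = []
    tries = 0
    while len(holes) < n_holes and tries < max_tries:
        tries += 1
        x, y = rng.uniform(0.0, width), rng.uniform(0.0, height)
        # Reject candidates that violate the minimum separation.
        if all((x - hx) ** 2 + (y - hy) ** 2 >= min_sep ** 2 for hx, hy in holes):
            holes.append((x, y))
    return holes

# Hypothetical 10 mm x 10 mm mask with 50 holes, 0.8 mm minimum spacing.
holes = random_pinholes(10.0, 10.0, 50, 0.8)
```

The minimum-separation constraint matters in practice: a purely uniform random layout would occasionally place holes nearly on top of each other, creating locally bright, overlapping elemental images.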

Funding

National Key Research and Development Program of China (No. 2018YFB1005002); National Natural Science Foundation of China (61727808); A*STAR RIE2020 AME Programmatic Funding (A18A7b0058).

References

1. O. Cakmakci and J. Rolland, “Head-Worn Displays: A Review,” J. Disp. Technol. 2(3), 199–216 (2006). [CrossRef]  

2. H. Hua, “Enabling focus cues in head-mounted displays,” Proc. IEEE 105(5), 805–824 (2017). [CrossRef]  

3. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3) (2008).

4. J. P. Rolland, M. W. Krueger, and A. Goon, “Multifocal planes head-mounted displays,” Appl. Opt. 39(19), 3209–3215 (2000). [CrossRef]   [PubMed]  

5. D. Cheng, Q. Wang, Y. Wang, and G. Jin, “Lightweight spatial-multiplexed dual focal-plane head-mounted display using two freeform prisms,” Chin. Opt. Lett. 11, 031201 (2013).

6. S. Liu, Y. Li, P. Zhou, Q. Chen, and Y. Su, “Reverse-mode PSLC multi-plane optical see-through display for AR applications,” Opt. Express 26(3), 3394–3403 (2018). [CrossRef]   [PubMed]  

7. S. Liu, H. Hua, and D. Cheng, “A novel prototype for an optical see-through head-mounted display with addressable focus cues,” IEEE Trans. Vis. Comput. Graph. 16(3), 381–393 (2010). [CrossRef]   [PubMed]  

8. Y. Takaki and N. Fujimoto, “Flexible retinal image formation by holographic Maxwellian-view display,” Opt. Express 26(18), 22985–22999 (2018). [CrossRef]   [PubMed]  

9. J. S. Lee, Y. K. Kim, and Y. H. Won, “Time multiplexing technique of holographic view and Maxwellian view using a liquid lens in the optical see-through head mounted display,” Opt. Express 26(2), 2149–2159 (2018). [CrossRef]   [PubMed]  

10. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 190 (2017). [CrossRef]  

11. F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2015).

12. D. Lanman, M. Hirsch, Y. Kim, and R. Raskar, “Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization,” ACM Trans. Graph. 29(6), 1–10 (2010). [CrossRef]  

13. M. Liu, C. Lu, H. Li, and X. Liu, “Near eye light field display based on human visual features,” Opt. Express 25(9), 9886–9900 (2017). [CrossRef]   [PubMed]  

14. D. Chen, X. Sang, X. Yu, X. Zeng, S. Xie, and N. Guo, “Performance improvement of compressive light field display with the viewing-position-dependent weight distribution,” Opt. Express 24(26), 29781–29793 (2016). [CrossRef]   [PubMed]  

15. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

16. Y. Takaki and Y. Yamaguchi, “Flat-panel see-through three-dimensional display based on integral imaging,” Opt. Lett. 40(8), 1873–1876 (2015). [CrossRef]   [PubMed]  

17. A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 89 (2014). [CrossRef]  

18. W. Song, Y. Wang, D. Cheng, and Y. Liu, “Light field head-mounted display with correct focus cue using micro structure array,” Chin. Opt. Lett. 12(6) (2014).

19. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]   [PubMed]  

20. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017). [CrossRef]   [PubMed]  

21. K. Akşit, J. Kautz, and D. Luebke, “Slim near-eye display using pinhole aperture arrays,” Appl. Opt. 54(11), 3422–3427 (2015). [CrossRef]   [PubMed]  

22. C. Yao, D. Cheng, T. Yang, and Y. Wang, “Design of an optical see-through light-field near-eye display using a discrete lenslet array,” Opt. Express 26(14), 18292–18301 (2018). [CrossRef]   [PubMed]  

23. H.-L. Zhang, H. Deng, W.-T. Yu, M.-Y. He, D.-H. Li, and Q.-H. Wang, “Tabletop augmented reality 3D display system based on integral imaging,” J. Opt. Soc. Am. B 34(5), B16–B21 (2017). [CrossRef]  

24. Z.-L. Xiong, Q.-H. Wang, Y. Xing, H. Deng, and D.-H. Li, “Active integral imaging system based on multiple structured light method,” Opt. Express 23(21), 27094–27104 (2015). [CrossRef]   [PubMed]  

25. F. Yi, Y. Jeoung, and I. Moon, “Three-dimensional image authentication scheme using sparse phase information in double random phase encoded integral imaging,” Appl. Opt. 56(15), 4381–4387 (2017). [CrossRef]   [PubMed]  

26. H. Urey, K. V. Chellappan, E. Erden, and P. Surman, “State of the art in stereoscopic and autostereoscopic displays,” Proc. IEEE 99(4), 540–555 (2011). [CrossRef]  

27. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013). [CrossRef]   [PubMed]  

28. D. Cheng, Y. Wang, H. Hua, and M. M. Talha, “Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism,” Appl. Opt. 48(14), 2655–2668 (2009). [CrossRef]   [PubMed]  

29. L. Wei, Y. Li, J. Jing, L. Feng, and J. Zhou, “Design and fabrication of a compact off-axis see-through head-mounted display using a freeform surface,” Opt. Express 26(7), 8550–8565 (2018). [CrossRef]   [PubMed]  

30. W. Song, D. Cheng, Z. Deng, Y. Liu, and Y. Wang, “Design and assessment of a wide FOV and high-resolution optical tiled head-mounted display,” Appl. Opt. 54(28), E15–E22 (2015). [CrossRef]   [PubMed]  

31. Á. Tolosa, R. Martinez-Cuenca, H. Navarro, G. Saavedra, M. Martínez-Corral, B. Javidi, and A. Pons, “Enhanced field-of-view integral imaging display using multi-Köhler illumination,” Opt. Express 22(26), 31853–31863 (2014). [CrossRef]   [PubMed]  

32. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef]   [PubMed]  

33. A. Nashel and H. Fuchs, “Random hole display: A non-uniform barrier autostereoscopic display”, 2009 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video. IEEE, 2009: 1–4. [CrossRef]  

34. V. Saveljev and S. K. Kim, “Simulation and measurement of moiré patterns at finite distance,” Opt. Express 20(3), 2163–2177 (2012). [CrossRef]   [PubMed]  

35. R. Liao and R. Dong, “A novel analytical method for moiré phenomenon in autostereoscopic displays,” SID Symp. Dig. Tech. Pap. 45(1), 1074–1076 (2015).

36. X. Zeng, X. Zhou, T. Guo, L. Yang, E. Chen, and Y. Zhang, “Crosstalk reduction in large-scale autostereoscopic 3D-LED display based on black-stripe occupation ratio,” Opt. Commun. 389, 159–164 (2017). [CrossRef]  

37. Y. Kim, G. Park, J. H. Jung, J. Kim, and B. Lee, “Color moiré pattern simulation and analysis in three-dimensional integral imaging for finding the moiré-reduced tilted angle of a lens array,” Appl. Opt. 48(11), 2178–2187 (2009). [CrossRef]   [PubMed]  

38. E. Chen, J. Cai, X. Zeng, S. Xu, Y. Ye, Q. F. Yan, and T. Guo, “Ultra-large moiré-less autostereoscopic three-dimensional light-emitting-diode displays,” Opt. Express 27(7), 10355–10369 (2019). [CrossRef]   [PubMed]  

39. F. Heide, D. Lanman, D. Reddy, J. Kautz, K. Pulli, and D. Luebke, “Cascaded displays: spatiotemporal superresolution using offset pixel layers,” ACM Trans. Graph. 33(4), 60 (2014). [CrossRef]  

40. Y. H. Lee, T. Zhan, and S. T. Wu, “Enhancing the resolution of a near-eye display with a Pancharatnam-Berry phase deflector,” Opt. Lett. 42(22), 4732–4735 (2017). [CrossRef]   [PubMed]  

41. J. Y. Wu, P. Y. Chou, K. E. Peng, Y. P. Huang, H. H. Lo, C. C. Chang, and F. M. Chuang, “Resolution enhanced light field near eye display using e-shifting method with birefringent plate,” J. Soc. Inf. Disp. 26(5), 269–279 (2018). [CrossRef]  

References

  • View by:
  • |
  • |
  • |

  1. O. Cakmakci and J. Rolland, “Head-Worn Displays: A Review,” J. Disp. Technol. 2(3), 199–216 (2006).
    [Crossref]
  2. H. Hua, “Enabling focus cues in head-mounted displays,” Proc. IEEE 105(5), 805–824 (2017).
    [Crossref]
  3. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis.8(3) (2008).
  4. J. P. Rolland, M. W. Krueger, and A. Goon, “Multifocal planes head-mounted displays,” Appl. Opt. 39(19), 3209–3215 (2000).
    [Crossref] [PubMed]
  5. D. Cheng, Q. Wang, Y. Wang, and G. Jin, “Lightweight spatial-multiplexed dual focal-plane head-mounted display using two freeform prisms,” Chin. Opt. Lett. 11, 031201 (2013)
  6. S. Liu, Y. Li, P. Zhou, Q. Chen, and Y. Su, “Reverse-mode PSLC multi-plane optical see-through display for AR applications,” Opt. Express 26(3), 3394–3403 (2018).
    [Crossref] [PubMed]
  7. S. Liu, H. Hua, and D. Cheng, “A novel prototype for an optical see-through head-mounted display with addressable focus cues,” IEEE Trans. Vis. Comput. Graph. 16(3), 381–393 (2010).
    [Crossref] [PubMed]
  8. Y. Takaki and N. Fujimoto, “Flexible retinal image formation by holographic Maxwellian-view display,” Opt. Express 26(18), 22985–22999 (2018).
    [Crossref] [PubMed]
  9. J. S. Lee, Y. K. Kim, and Y. H. Won, “Time multiplexing technique of holographic view and Maxwellian view using a liquid lens in the optical see-through head mounted display,” Opt. Express 26(2), 2149–2159 (2018).
    [Crossref] [PubMed]
  10. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 190 (2017).
    [Crossref]
  11. F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2010).
  12. D. Lanman, M. Hirsch, Y. Kim, and R. Raskar, “Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization,” ACM Trans. Graph. 29(6), 1–10 (2010).
    [Crossref]
  13. M. Liu, C. Lu, H. Li, and X. Liu, “Near eye light field display based on human visual features,” Opt. Express 25(9), 9886–9900 (2017).
    [Crossref] [PubMed]
  14. D. Chen, X. Sang, X. Yu, X. Zeng, S. Xie, and N. Guo, “Performance improvement of compressive light field display with the viewing-position-dependent weight distribution,” Opt. Express 24(26), 29781–29793 (2016).
    [Crossref] [PubMed]
  15. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013).
    [Crossref]
  16. Y. Takaki and Y. Yamaguchi, “Flat-panel see-through three-dimensional display based on integral imaging,” Opt. Lett. 40(8), 1873–1876 (2015).
    [Crossref] [PubMed]
  17. A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 89 (2014).
    [Crossref]
  18. W. Song, Y. Wang, D. Cheng, and Y. Liu, “Light field head-mounted display with correct focus cue using micro structure array,” Chin. Opt. Lett.12(6) (2014).
  19. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014).
    [Crossref] [PubMed]
  20. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017).
    [Crossref] [PubMed]
  21. K. Akşit, J. Kautz, and D. Luebke, “Slim near-eye display using pinhole aperture arrays,” Appl. Opt. 54(11), 3422–3427 (2015).
    [Crossref] [PubMed]
  22. C. Yao, D. Cheng, T. Yang, and Y. Wang, “Design of an optical see-through light-field near-eye display using a discrete lenslet array,” Opt. Express 26(14), 18292–18301 (2018).
    [Crossref] [PubMed]
  23. H.-L. Zhang, H. Deng, W.-T. Yu, M.-Y. He, D.-H. Li, and Q.-H. Wang, “Tabletop augmented reality 3D display system based on integral imaging,” J. Opt. Soc. Am. B 34(5), B16–B21 (2017).
    [Crossref]
  24. Z.-L. Xiong, Q.-H. Wang, Y. Xing, H. Deng, and D.-H. Li, “Active integral imaging system based on multiple structured light method,” Opt. Express 23(21), 27094–27104 (2015).
    [Crossref] [PubMed]
  25. F. Yi, Y. Jeoung, and I. Moon, “Three-dimensional image authentication scheme using sparse phase information in double random phase encoded integral imaging,” Appl. Opt. 56(15), 4381–4387 (2017).
    [Crossref] [PubMed]
  26. H. Urey, K. V. Chellappan, E. Erden, and P. Surman, “State of the art in stereoscopic and autostereoscopic displays,” Proc. IEEE 99(4), 540–555 (2011).
    [Crossref]
  27. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013).
    [Crossref] [PubMed]
  28. D. Cheng, Y. Wang, H. Hua, and M. M. Talha, “Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism,” Appl. Opt. 48(14), 2655–2668 (2009).
    [Crossref] [PubMed]
  29. L. Wei, Y. Li, J. Jing, L. Feng, and J. Zhou, “Design and fabrication of a compact off-axis see-through head-mounted display using a freeform surface,” Opt. Express 26(7), 8550–8565 (2018).
    [Crossref] [PubMed]
  30. W. Song, D. Cheng, Z. Deng, Y. Liu, and Y. Wang, “Design and assessment of a wide FOV and high-resolution optical tiled head-mounted display,” Appl. Opt. 54(28), E15–E22 (2015).
    [Crossref] [PubMed]
  31. Á. Tolosa, R. Martinez-Cuenca, H. Navarro, G. Saavedra, M. Martínez-Corral, B. Javidi, and A. Pons, “Enhanced field-of-view integral imaging display using multi-Köhler illumination,” Opt. Express 22(26), 31853–31863 (2014).
    [Crossref] [PubMed]
  32. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009).
    [Crossref] [PubMed]
  33. A. Nashel and H. Fuchs, “Random hole display: A non-uniform barrier autostereoscopic display”, 2009 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video. IEEE, 2009: 1–4.
    [Crossref]
  34. V. Saveljev and S. K. Kim, “Simulation and measurement of moiré patterns at finite distance,” Opt. Express 20(3), 2163–2177 (2012).
    [Crossref] [PubMed]
  35. R. Liao and R. Dong, “A novel analytical method for moiré phenomenon in autostereoscopic displays,” SID Symp. Dig. Tech. Pap. 45(1), 1074–1076 (2015).
  36. X. Zeng, X. Zhou, T. Guo, L. Yang, E. Chen, and Y. Zhang, “Crosstalk reduction in large-scale autostereoscopic 3D-LED display based on black-stripe occupation ratio,” Opt. Commun. 389, 159–164 (2017).
    [Crossref]
  37. Y. Kim, G. Park, J. H. Jung, J. Kim, and B. Lee, “Color moiré pattern simulation and analysis in three-dimensional integral imaging for finding the moiré-reduced tilted angle of a lens array,” Appl. Opt. 48(11), 2178–2187 (2009).
    [Crossref] [PubMed]
  38. E. Chen, J. Cai, X. Zeng, S. Xu, Y. Ye, Q. F. Yan, and T. Guo, “Ultra-large moiré-less autostereoscopic three-dimensional light-emitting-diode displays,” Opt. Express 27(7), 10355–10369 (2019).
    [Crossref] [PubMed]
  39. F. Heide, D. Lanman, D. Reddy, J. Kautz, K. Pulli, and D. Luebke, “Cascaded displays: spatiotemporal superresolution using offset pixel layers,” ACM Trans. Graph. 33(4), 60 (2014).
    [Crossref]
  40. Y. H. Lee, T. Zhan, and S. T. Wu, “Enhancing the resolution of a near-eye display with a Pancharatnam-Berry phase deflector,” Opt. Lett. 42(22), 4732–4735 (2017).
    [Crossref] [PubMed]
  41. J. Y. Wu, P. Y. Chou, K. E. Peng, Y. P. Huang, H. H. Lo, C. C. Chang, and F. M. Chuang, “Resolution enhanced light field near eye display using e-shifting method with birefringent plate,” J. Soc. Inf. Disp. 26(5), 269–279 (2018).
    [Crossref]

2019 (1)

2018 (6)

2017 (8)

2016 (1)

2015 (4)

2014 (4)

Á. Tolosa, R. Martinez-Cuenca, H. Navarro, G. Saavedra, M. Martínez-Corral, B. Javidi, and A. Pons, “Enhanced field-of-view integral imaging display using multi-Köhler illumination,” Opt. Express 22(26), 31853–31863 (2014).
[Crossref] [PubMed]

F. Heide, D. Lanman, D. Reddy, J. Kautz, K. Pulli, and D. Luebke, “Cascaded displays: spatiotemporal superresolution using offset pixel layers,” ACM Trans. Graph. 33(4), 60 (2014).
[Crossref]

A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 89 (2014).
[Crossref]

H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014).
[Crossref] [PubMed]

2013 (3)

D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013).
[Crossref]

D. Cheng, Q. Wang, Y. Wang, and G. Jin, “Lightweight spatial-multiplexed dual focal-plane head-mounted display using two freeform prisms,” Chin. Opt. Lett. 11, 031201 (2013)

J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013).
[Crossref] [PubMed]

2012 (1)

2011 (1)

H. Urey, K. V. Chellappan, E. Erden, and P. Surman, “State of the art in stereoscopic and autostereoscopic displays,” Proc. IEEE 99(4), 540–555 (2011).
[Crossref]

2010 (3)

F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2010).

D. Lanman, M. Hirsch, Y. Kim, and R. Raskar, “Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization,” ACM Trans. Graph. 29(6), 1–10 (2010).
[Crossref]

S. Liu, H. Hua, and D. Cheng, “A novel prototype for an optical see-through head-mounted display with addressable focus cues,” IEEE Trans. Vis. Comput. Graph. 16(3), 381–393 (2010).
[Crossref] [PubMed]

2009 (3)

2006 (1)

O. Cakmakci and J. Rolland, “Head-Worn Displays: A Review,” J. Disp. Technol. 2(3), 199–216 (2006).
[Crossref]

2000 (1)

Akeley, K.

D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis.8(3) (2008).

Aksit, K.

Bang, K.

C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 190 (2017).
[Crossref]

Banks, M. S.

D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis.8(3) (2008).

Cai, J.

Cakmakci, O.

O. Cakmakci and J. Rolland, “Head-Worn Displays: A Review,” J. Disp. Technol. 2(3), 199–216 (2006).
[Crossref]

Chang, C. C.

J. Y. Wu, P. Y. Chou, K. E. Peng, Y. P. Huang, H. H. Lo, C. C. Chang, and F. M. Chuang, “Resolution enhanced light field near eye display using e-shifting method with birefringent plate,” J. Soc. Inf. Disp. 26(5), 269–279 (2018).
[Crossref]

Chellappan, K. V.

H. Urey, K. V. Chellappan, E. Erden, and P. Surman, “State of the art in stereoscopic and autostereoscopic displays,” Proc. IEEE 99(4), 540–555 (2011).
[Crossref]

Chen, D.

Chen, E.

E. Chen, J. Cai, X. Zeng, S. Xu, Y. Ye, Q. F. Yan, and T. Guo, “Ultra-large moiré-less autostereoscopic three-dimensional light-emitting-diode displays,” Opt. Express 27(7), 10355–10369 (2019).
[Crossref] [PubMed]

X. Zeng, X. Zhou, T. Guo, L. Yang, E. Chen, and Y. Zhang, “Crosstalk reduction in large-scale autostereoscopic 3D-LED display based on black-stripe occupation ratio,” Opt. Commun. 389, 159–164 (2017).
[Crossref]

Chen, K.

F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2010).

Chen, Q.

Cheng, D.

C. Yao, D. Cheng, T. Yang, and Y. Wang, “Design of an optical see-through light-field near-eye display using a discrete lenslet array,” Opt. Express 26(14), 18292–18301 (2018).
[Crossref] [PubMed]

W. Song, D. Cheng, Z. Deng, Y. Liu, and Y. Wang, “Design and assessment of a wide FOV and high-resolution optical tiled head-mounted display,” Appl. Opt. 54(28), E15–E22 (2015).
[Crossref] [PubMed]

D. Cheng, Q. Wang, Y. Wang, and G. Jin, “Lightweight spatial-multiplexed dual focal-plane head-mounted display using two freeform prisms,” Chin. Opt. Lett. 11, 031201 (2013)

S. Liu, H. Hua, and D. Cheng, “A novel prototype for an optical see-through head-mounted display with addressable focus cues,” IEEE Trans. Vis. Comput. Graph. 16(3), 381–393 (2010).
[Crossref] [PubMed]

D. Cheng, Y. Wang, H. Hua, and M. M. Talha, “Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism,” Appl. Opt. 48(14), 2655–2668 (2009).
[Crossref] [PubMed]

W. Song, Y. Wang, D. Cheng, and Y. Liu, “Light field head-mounted display with correct focus cue using micro structure array,” Chin. Opt. Lett.12(6) (2014).

Chou, P. Y.

J. Y. Wu, P. Y. Chou, K. E. Peng, Y. P. Huang, H. H. Lo, C. C. Chang, and F. M. Chuang, “Resolution enhanced light field near eye display using e-shifting method with birefringent plate,” J. Soc. Inf. Disp. 26(5), 269–279 (2018).
[Crossref]

Chuang, F. M.

J. Y. Wu, P. Y. Chou, K. E. Peng, Y. P. Huang, H. H. Lo, C. C. Chang, and F. M. Chuang, “Resolution enhanced light field near eye display using e-shifting method with birefringent plate,” J. Soc. Inf. Disp. 26(5), 269–279 (2018).
[Crossref]

Deng, H.

Deng, Z.

Erden, E.

H. Urey, K. V. Chellappan, E. Erden, and P. Surman, “State of the art in stereoscopic and autostereoscopic displays,” Proc. IEEE 99(4), 540–555 (2011).
[Crossref]

Feng, L.

Fuchs, H.

A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 89 (2014).
[Crossref]

Fujimoto, N.

Geng, J.

J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013).
[Crossref] [PubMed]

Girshick, A. R.

D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis.8(3) (2008).

Goon, A.

Guo, N.

Guo, T.

E. Chen, J. Cai, X. Zeng, S. Xu, Y. Ye, Q. F. Yan, and T. Guo, “Ultra-large moiré-less autostereoscopic three-dimensional light-emitting-diode displays,” Opt. Express 27(7), 10355–10369 (2019).
[Crossref] [PubMed]

X. Zeng, X. Zhou, T. Guo, L. Yang, E. Chen, and Y. Zhang, “Crosstalk reduction in large-scale autostereoscopic 3D-LED display based on black-stripe occupation ratio,” Opt. Commun. 389, 159–164 (2017).
[Crossref]

He, M.-Y.

Heide, F.

F. Heide, D. Lanman, D. Reddy, J. Kautz, K. Pulli, and D. Luebke, “Cascaded displays: spatiotemporal superresolution using offset pixel layers,” ACM Trans. Graph. 33(4), 60 (2014).
[Crossref]

Hirsch, M.

D. Lanman, M. Hirsch, Y. Kim, and R. Raskar, “Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization,” ACM Trans. Graph. 29(6), 1–10 (2010).
[Crossref]

Hoffman, D. M.

D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis.8(3) (2008).

Hong, K.

Hua, H.

Huang, F. C.

F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2010).

Huang, H.

Huang, Y. P.

J. Y. Wu, P. Y. Chou, K. E. Peng, Y. P. Huang, H. H. Lo, C. C. Chang, and F. M. Chuang, “Resolution enhanced light field near eye display using e-shifting method with birefringent plate,” J. Soc. Inf. Disp. 26(5), 269–279 (2018).
[Crossref]

Jang, C.

C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 190 (2017).
[Crossref]

Javidi, B.

Jeoung, Y.

Jin, G.

D. Cheng, Q. Wang, Y. Wang, and G. Jin, “Lightweight spatial-multiplexed dual focal-plane head-mounted display using two freeform prisms,” Chin. Opt. Lett. 11, 031201 (2013)

Jing, J.

Jung, J. H.

Kautz, J.

K. Akşit, J. Kautz, and D. Luebke, “Slim near-eye display using pinhole aperture arrays,” Appl. Opt. 54(11), 3422–3427 (2015).
[Crossref] [PubMed]

F. Heide, D. Lanman, D. Reddy, J. Kautz, K. Pulli, and D. Luebke, “Cascaded displays: spatiotemporal superresolution using offset pixel layers,” ACM Trans. Graph. 33(4), 60 (2014).
[Crossref]

Keller, K.

A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 89 (2014).
[Crossref]

Kim, J.

C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 190 (2017).
[Crossref]

Y. Kim, G. Park, J. H. Jung, J. Kim, and B. Lee, “Color moiré pattern simulation and analysis in three-dimensional integral imaging for finding the moiré-reduced tilted angle of a lens array,” Appl. Opt. 48(11), 2178–2187 (2009).
[Crossref] [PubMed]

Kim, S. K.

Kim, Y.

D. Lanman, M. Hirsch, Y. Kim, and R. Raskar, “Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization,” ACM Trans. Graph. 29(6), 1–10 (2010).
[Crossref]

Y. Kim, G. Park, J. H. Jung, J. Kim, and B. Lee, “Color moiré pattern simulation and analysis in three-dimensional integral imaging for finding the moiré-reduced tilted angle of a lens array,” Appl. Opt. 48(11), 2178–2187 (2009).
[Crossref] [PubMed]

Kim, Y. K.

Krueger, M. W.

Lanman, D.

A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 89 (2014).
[Crossref]

F. Heide, D. Lanman, D. Reddy, J. Kautz, K. Pulli, and D. Luebke, “Cascaded displays: spatiotemporal superresolution using offset pixel layers,” ACM Trans. Graph. 33(4), 60 (2014).
[Crossref]

D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013).
[Crossref]

D. Lanman, M. Hirsch, Y. Kim, and R. Raskar, “Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization,” ACM Trans. Graph. 29(6), 1–10 (2010).
[Crossref]

Lee, B.

Lee, J. S.

Lee, S.

C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 190 (2017).
[Crossref]

Lee, Y. H.

Li, D.-H.

Li, H.

Li, Y.

Liu, M.

Liu, S.

S. Liu, Y. Li, P. Zhou, Q. Chen, and Y. Su, “Reverse-mode PSLC multi-plane optical see-through display for AR applications,” Opt. Express 26(3), 3394–3403 (2018).
[Crossref] [PubMed]

S. Liu, H. Hua, and D. Cheng, “A novel prototype for an optical see-through head-mounted display with addressable focus cues,” IEEE Trans. Vis. Comput. Graph. 16(3), 381–393 (2010).
[Crossref] [PubMed]

Liu, X.

Liu, Y.

W. Song, D. Cheng, Z. Deng, Y. Liu, and Y. Wang, “Design and assessment of a wide FOV and high-resolution optical tiled head-mounted display,” Appl. Opt. 54(28), E15–E22 (2015).
[Crossref] [PubMed]

W. Song, Y. Wang, D. Cheng, and Y. Liu, “Light field head-mounted display with correct focus cue using micro structure array,” Chin. Opt. Lett.12(6) (2014).

Lo, H. H.

J. Y. Wu, P. Y. Chou, K. E. Peng, Y. P. Huang, H. H. Lo, C. C. Chang, and F. M. Chuang, “Resolution enhanced light field near eye display using e-shifting method with birefringent plate,” J. Soc. Inf. Disp. 26(5), 269–279 (2018).
[Crossref]

Lu, C.

Luebke, D.

K. Akşit, J. Kautz, and D. Luebke, “Slim near-eye display using pinhole aperture arrays,” Appl. Opt. 54(11), 3422–3427 (2015).
[Crossref] [PubMed]

A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 89 (2014).
[Crossref]

F. Heide, D. Lanman, D. Reddy, J. Kautz, K. Pulli, and D. Luebke, “Cascaded displays: spatiotemporal superresolution using offset pixel layers,” ACM Trans. Graph. 33(4), 60 (2014).
[Crossref]

D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013).
[Crossref]

Maimone, A.

A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 89 (2014).
[Crossref]

Martínez-Corral, M.

Martinez-Cuenca, R.

Moon, I.

Moon, S.


ACM Trans. Graph. (6)

C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 190 (2017).

F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2015).

D. Lanman, M. Hirsch, Y. Kim, and R. Raskar, “Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization,” ACM Trans. Graph. 29(6), 1–10 (2010).

D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013).

A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 89 (2014).

F. Heide, D. Lanman, D. Reddy, J. Kautz, K. Pulli, and D. Luebke, “Cascaded displays: spatiotemporal superresolution using offset pixel layers,” ACM Trans. Graph. 33(4), 60 (2014).
Adv. Opt. Photonics (1)

J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013).

Appl. Opt. (7)

W. Song, D. Cheng, Z. Deng, Y. Liu, and Y. Wang, “Design and assessment of a wide FOV and high-resolution optical tiled head-mounted display,” Appl. Opt. 54(28), E15–E22 (2015).

Chin. Opt. Lett. (1)

D. Cheng, Q. Wang, Y. Wang, and G. Jin, “Lightweight spatial-multiplexed dual focal-plane head-mounted display using two freeform prisms,” Chin. Opt. Lett. 11, 031201 (2013).

IEEE Trans. Vis. Comput. Graph. (1)

S. Liu, H. Hua, and D. Cheng, “A novel prototype for an optical see-through head-mounted display with addressable focus cues,” IEEE Trans. Vis. Comput. Graph. 16(3), 381–393 (2010).

J. Disp. Technol. (1)

O. Cakmakci and J. Rolland, “Head-Worn Displays: A Review,” J. Disp. Technol. 2(3), 199–216 (2006).

J. Opt. Soc. Am. B (1)

J. Soc. Inf. Disp. (1)

J. Y. Wu, P. Y. Chou, K. E. Peng, Y. P. Huang, H. H. Lo, C. C. Chang, and F. M. Chuang, “Resolution enhanced light field near eye display using e-shifting method with birefringent plate,” J. Soc. Inf. Disp. 26(5), 269–279 (2018).

Opt. Commun. (1)

X. Zeng, X. Zhou, T. Guo, L. Yang, E. Chen, and Y. Zhang, “Crosstalk reduction in large-scale autostereoscopic 3D-LED display based on black-stripe occupation ratio,” Opt. Commun. 389, 159–164 (2017).

Opt. Express (13)

Z.-L. Xiong, Q.-H. Wang, Y. Xing, H. Deng, and D.-H. Li, “Active integral imaging system based on multiple structured light method,” Opt. Express 23(21), 27094–27104 (2015).

C. Yao, D. Cheng, T. Yang, and Y. Wang, “Design of an optical see-through light-field near-eye display using a discrete lenslet array,” Opt. Express 26(14), 18292–18301 (2018).

L. Wei, Y. Li, J. Jing, L. Feng, and J. Zhou, “Design and fabrication of a compact off-axis see-through head-mounted display using a freeform surface,” Opt. Express 26(7), 8550–8565 (2018).

E. Chen, J. Cai, X. Zeng, S. Xu, Y. Ye, Q. F. Yan, and T. Guo, “Ultra-large moiré-less autostereoscopic three-dimensional light-emitting-diode displays,” Opt. Express 27(7), 10355–10369 (2019).

V. Saveljev and S. K. Kim, “Simulation and measurement of moiré patterns at finite distance,” Opt. Express 20(3), 2163–2177 (2012).

Á. Tolosa, R. Martinez-Cuenca, H. Navarro, G. Saavedra, M. Martínez-Corral, B. Javidi, and A. Pons, “Enhanced field-of-view integral imaging display using multi-Köhler illumination,” Opt. Express 22(26), 31853–31863 (2014).

S. Liu, Y. Li, P. Zhou, Q. Chen, and Y. Su, “Reverse-mode PSLC multi-plane optical see-through display for AR applications,” Opt. Express 26(3), 3394–3403 (2018).

M. Liu, C. Lu, H. Li, and X. Liu, “Near eye light field display based on human visual features,” Opt. Express 25(9), 9886–9900 (2017).

D. Chen, X. Sang, X. Yu, X. Zeng, S. Xie, and N. Guo, “Performance improvement of compressive light field display with the viewing-position-dependent weight distribution,” Opt. Express 24(26), 29781–29793 (2016).

Y. Takaki and N. Fujimoto, “Flexible retinal image formation by holographic Maxwellian-view display,” Opt. Express 26(18), 22985–22999 (2018).

J. S. Lee, Y. K. Kim, and Y. H. Won, “Time multiplexing technique of holographic view and Maxwellian view using a liquid lens in the optical see-through head mounted display,” Opt. Express 26(2), 2149–2159 (2018).

H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014).

H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017).

Opt. Lett. (2)

Proc. IEEE (2)

H. Hua, “Enabling focus cues in head-mounted displays,” Proc. IEEE 105(5), 805–824 (2017).

H. Urey, K. V. Chellappan, E. Erden, and P. Surman, “State of the art in stereoscopic and autostereoscopic displays,” Proc. IEEE 99(4), 540–555 (2011).

Other (4)

A. Nashel and H. Fuchs, “Random hole display: a non-uniform barrier autostereoscopic display,” in 2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (IEEE, 2009), pp. 1–4.

R. Liao and R. Dong, “A novel analytical method for moiré phenomenon in autostereoscopic displays,” SID Symp. Dig. Tech. Pap. 45(1), 1074–1076 (2015).

D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3), 33 (2008).

W. Song, Y. Wang, D. Cheng, and Y. Liu, “Light field head-mounted display with correct focus cue using micro structure array,” Chin. Opt. Lett. 12(6) (2014).

Supplementary Material (2)

» Visualization 1       Repeated zones can be observed along with Moiré fringes in traditional light-field near-eye displays.
» Visualization 2       No repeated zones can be observed in the proposed light-field near-eye display using random holes, and the Moiré fringe problem is also solved.



Figures (7)

Fig. 1 (a) Ideal light-field near-eye display. (b) Actual system with the repeated-zones problem.
Fig. 2 Schematic diagram of the light-field near-eye display using random pinholes.
Fig. 3 (a) Schematic diagram of the rendering method. (b) Schematic diagram of the view zone of a light-field random-hole near-eye display.
Fig. 4 (a) Schematic diagram of the designed light-field near-eye display based on random holes. (b) Photograph of the developed prototype and the camera (simulating the human eye). (c) Photograph of the components.
Fig. 5 (a) Schematic diagram of the view zones with different signal-to-noise ratios in the light-field near-eye display system. Signal-to-noise ratio at eye reliefs of (b) 40 mm, (c) 50 mm, and (d) 60 mm.
Fig. 6 (a) Image of the pinhole array. (b) Enlarged image of the pinhole array. (c) Elemental image displayed in the light-field near-eye display using a pinhole array. (d) Enlarged image of the elemental image. (e) Display performance in the view zone. (f) and (g) Display performance out of the view zone. (Visualization 1) (h) Image of the random holes. (i) Enlarged image of the random holes. (j) Elemental image displayed in the developed light-field near-eye display using random holes. (k) Enlarged image of the elemental image. (l) Display performance in the view zone. (m) and (n) Display performance out of the view zone. (Visualization 2)
Fig. 7 Display performance using a standard test chart. (a) Elemental image displayed in the light-field near-eye display using a pinhole array. (b) Display performance in the view zone. (c) and (d) Display performance out of the view zone. (e) Elemental image displayed in the developed light-field near-eye display using random holes. (f) Display performance in the view zone. (g) and (h) Display performance out of the view zone.

Equations (3)


$$E_i = \frac{l_r}{2l_r + 2g}\,(h_i + h_{i+1}),$$

$$D = \min_i D_i = \min_i\!\left(\frac{l_r E_i}{g}\right) = \frac{l_r^2}{l_r g + g^2}\,\min_i h_i,$$

$$R = \operatorname{round}\!\left(\frac{g}{l_r}\cdot\frac{\pi p^2}{4 s^2}\,N\right).$$
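The relations above can be evaluated numerically as a quick sanity check. The sketch below is illustrative only: the parameter meanings (l_r as the eye relief, g as the gap between the random-hole mask and the display panel, h as the list of spacings between adjacent holes, and p, s, N as the hole diameter, pixel pitch, and pixel count in the last formula) are assumptions inferred from the figure captions, and the grouping of terms in the R formula is reconstructed rather than taken verbatim from the paper.

```python
import math

def elemental_widths(l_r, g, h):
    """E_i = l_r * (h_i + h_{i+1}) / (2*l_r + 2*g) for each adjacent hole pair."""
    return [l_r * (h[i] + h[i + 1]) / (2 * l_r + 2 * g) for i in range(len(h) - 1)]

def view_zone(l_r, g, h):
    """Closed-form view-zone size D = l_r^2 * min_i(h_i) / (l_r*g + g^2)."""
    return l_r ** 2 * min(h) / (l_r * g + g ** 2)

def hole_count(l_r, g, p, s, N):
    """R = round((g / l_r) * (pi * p^2 / (4 * s^2)) * N); term grouping assumed."""
    return round((g / l_r) * (math.pi * p ** 2 / (4 * s ** 2)) * N)
```

For example, with all quantities in millimeters, `view_zone(50.0, 5.0, [1.0, 1.0, 1.0])` evaluates to 2500/275 ≈ 9.09, i.e. the view zone shrinks as the mask-panel gap g grows relative to the eye relief l_r.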
