Abstract

The narrow field of view (FOV) has long been one of the major limitations holding back the development of holographic three-dimensional (3D) near-eye display (NED). The complex amplitude modulation (CAM) technique is one way to realize holographic 3D display in real time with the advantage of high image quality. Previously, we applied the CAM technique to the design and integration of a compact full-color 3D-NED system. In this paper, a viewing-angle-enlarged CAM-based 3D-NED system using an Abbe-Porter scheme and a curved reflective structure is proposed. The viewing angle is increased in two steps. An Abbe-Porter filter system, composed of a lens and a grating, enlarges the FOV in the first step and, meanwhile, realizes complex amplitude modulation. A curved reflective structure enlarges the FOV in the second step. In addition, the system retains the ability of full-color 3D display with high image quality. Optical experiments are performed, and the results show the system can present a 45.2° diagonal viewing angle. The system is able to present dynamic display as well. A compact prototype is fabricated and integrated for a wearable, lightweight design.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In the last few decades, increasing attention has been paid to the development of augmented reality (AR) devices. AR technology allows users to view both real scenes and virtual signals in front of their eyes at the same time [1]. According to the application circumstances, AR devices can be categorized into handheld AR devices, car head-up displays (HUD), projection-type AR devices and near-eye display (NED) devices. Among these, the near-eye display is the most popular type for its strong sense of immersion and portable design. The earlier NED devices, however, deliver two-dimensional (2D) images only. The stereoscopic three-dimensional perception mainly relies on binocular parallax, which leads to the accommodation-convergence conflict as a result.

To conquer the challenge above, numerous 3D display techniques beyond binocular parallax have been applied to the design of 3D-NED devices, such as light field display [2–6], integral imaging [7–11], multifocal display [12–14], varifocal display [15–17], holographic 3D display [18–24] and so on. Each 3D display technique has its unique pros and cons. Light field display, as the name suggests, aims to reconstruct all the light-ray information of a 3D scene. Liu et al. proposed a bifocal computational near-eye light field display via time multiplexing with the use of a liquid lens and dual-layer LCDs [5]. A determination framework was also proposed to determine the structure parameters of the bifocal light field display. Such light field display systems are able to present good 3D perception and correct geometric occlusion, but trade-offs have to be optimized among key parameters such as spatial resolution, depth range and angular resolution in current implementations. Integral imaging can be considered one realization of light field display. The core components are the micro-lens array (MLA) and the elemental image array (EIA). Shen and Javidi improved the performance of the conventional integral-imaging-based 3D-NED system by adopting a focus-tunable lens to relay EIAs at various positions to the MLA [8]. The depth range of the 3D image is significantly enhanced in the proposed system, with good image quality. An integral imaging system can be made lightweight and compact, while it still suffers from the trade-off among spatial resolution, depth of field and viewing angle. Multifocal display provides 3D perception by presenting multiple discrete focal planes of the virtual 3D scene. The system usually consists of a focus-tunable lens and high-frame-rate display devices such as a digital micromirror device (DMD). A multifocal display system can be implemented via temporal multiplexing, spatial multiplexing or polarization multiplexing.
The system structure usually turns out to be complex and bulky. Compared with multifocal display, varifocal display presents a single focal plane whose distance is dynamically adjusted by a tunable optical element or mechanical movement. Dunn et al. proposed a varifocal display using a varifocal deformable membrane mirror with a 60° horizontal FOV [15]. More recently, Wilson and Hua demonstrated a varifocal NED using freeform Alvarez lenses, achieving a diagonal FOV of more than 30° [16]. Holographic 3D display can provide sufficient depth cues of a 3D scene without any special eyewear. It is usually considered one of the most promising techniques to realize true 3D near-eye display. Wakunami et al. proposed a projection-type see-through holographic 3D display based on a holographic optical element (HOE) to increase the display size and viewing angle [19]. A 73.6×41.4 mm2 display size and a 20.8° viewing angle were realized. Li et al. also proposed an optical see-through NED system employing a multi-functional HOE, which performs the optical functions of a mirror and a lens simultaneously [20]. The system is significantly simplified. Park et al. proposed a holographic NED that can control the depth of field of individual 3D images and replicate the eyebox [23]. The eyebox is replicated to around 9 mm and the FOV is 9.6°×7.4°. Chang et al. proposed a foveated rendering method to accelerate CGH encoding in a holographic NED: the target image is computed into a high-resolution foveated region and a low-resolution peripheral region, and the computation speed is significantly improved [24]. In most schemes of holographic 3D-NED systems, computer-generated holograms (CGHs) are calculated digitally. A phase-only spatial light modulator (SLM) is employed as the image source for its high efficiency. This kind of scheme brings the following issues: one is the loss of amplitude information; the other is the time consumption caused by the iterative operations during hologram encoding.

The complex amplitude modulation (CAM) technique is one way to realize holographic 3D display with high image quality and fast CGH calculation [25,26]. Previously, we proposed a monocular 3D-NED system based on CAM for the first time, and a compact structure was integrated for lightweight and wearable applications [27,28]. Later, we further optimized the CAM algorithm for full-color display and built a full-color 3D-NED system [29]. The proposed system was able to present colorful 3D images with sufficient depth cues. Nevertheless, the FOV of the full-color 3D system was still limited. For a system using an SLM with 8 µm pixel pitch and a blue laser with 473 nm wavelength, the FOV is restricted to ±1.7° horizontally and 4.8° diagonally.
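The ±1.7° figure follows directly from the diffraction limit set by the SLM pixel pitch: the maximum diffraction half-angle satisfies sin θ = λ/(2p). A minimal sketch (Python, using the pitch and wavelength quoted above):

```python
import math

def diffraction_half_angle_deg(wavelength_m: float, pixel_pitch_m: float) -> float:
    """Maximum diffraction half-angle of an SLM: sin(theta) = lambda / (2 * pitch)."""
    return math.degrees(math.asin(wavelength_m / (2 * pixel_pitch_m)))

# 473 nm blue laser and 8 um pixel pitch, as quoted in the text
print(round(diffraction_half_angle_deg(473e-9, 8e-6), 2))  # ~1.69 deg, i.e. +/-1.7 deg horizontally
```

A finer pitch or a longer wavelength widens this cone, which is why the pixel pitch of current SLMs is the fundamental FOV bottleneck the paper works around.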

In this paper, a CAM-based 3D-NED system with enlarged FOV is proposed. A two-step method realizes the enlargement of the FOV. An Abbe-Porter filter system is employed to increase the viewing angle in the first step and to realize CAM simultaneously; it is more compact than other image filter systems. A curved reflective structure enlarges the FOV in the second step. A reflective optical element is free of chromatic aberration, so a better full-color effect is provided. A reflective structure also folds the light path to save space for a portable design. Hence, the system structure is rather compact, and a lightweight prototype is fabricated and integrated. Besides, as the calculation time is significantly shortened, CGHs can be calculated and uploaded in real time, so the system is able to present colorful dynamic display with a wide FOV.

2. Principle and system

2.1 Scheme of the proposed system

The schematic and viewing effect of the proposed system are illustrated in Fig. 1. The incident light of the system is collimated parallel light composed of three color lasers. An SLM works as the image source, with CGHs uploaded onto it. After the SLM, an Abbe-Porter filter system, which contains a doublet lens (DL) and a grating (G), is employed to enlarge the viewing angle in the first step. The doublet lens reduces the chromatic aberration of the full-color display system. The grating is placed at the back focus of the doublet lens to realize CAM in cooperation with specially designed CGHs on the SLM plane. After a planar mirror (M), which folds the optical path, the light illuminates a concave mirror (CM) at normal incidence. The position of the concave mirror is specifically set so that the FOV of the system is enlarged in the second step. The light from the concave mirror is then reflected by a light combiner into the human eye. Along the other path, the light from the outside world passes directly through the light combiner to the eye. Hence, the real scene can be seen with virtual images superposed on it. The optical arrangement is shown in Fig. 1(a). Figure 1(b) depicts one application of the proposed 3D-NED system: workers in a car factory could wear the device while assembling cars on the line. Colorful virtual images and instructions with a large FOV could be displayed directly in front of their eyes and superposed on the real cars for more convenient assembly and higher efficiency.


Fig. 1. (a) Schematic of the FOV enlargement structure. BS: beam splitter; DL: doublet lens; G: grating; M: mirror; CM: concave mirror. (b) Viewing effect of the proposed system.


Before further illustration, it is necessary to define the concept of FOV in this article. In the traditional definition of the viewing angle of a display panel, we care about the angular range over which a viewer can move and still see the whole image; the viewing angle is mainly decided by the emitting angle of the edge pixels. For 3D near-eye display, however, the display device is worn by the viewer, like glasses or a helmet. The device is static relative to the human eye, which means we care more about the angular size of the image at the eye. In this case, the FOV of the system is mainly determined by the image size. In other words, for a 3D near-eye system, a larger FOV means a larger image, and thus more information, is received within a person's view.

2.2 Principle of complex amplitude modulation algorithm

The complex amplitude modulation is the core algorithm of the system. To get a complex hologram, there are several possible methods. Here, we apply a CAM algorithm for full-color reconstruction based on Abbe-Porter filter scheme and single SLM. The realization of CAM mainly relies on specially designed CGHs and a holographic grating [25,29]. The process is illustrated in Fig. 2. Firstly, the target 3D colorful image is decomposed into three components according to wavelengths. The CGH for each wavelength is calculated independently. For each component, a complex Fresnel hologram of target image is acquired through Fresnel diffraction. The complex Fresnel hologram is expressed as:

$$H = A\exp (i\theta )$$
where A and $\theta $ represent the amplitude and phase distributions, respectively, and i denotes $\sqrt { - 1} $. As a phase-only SLM has higher diffraction efficiency and is commonly used in holographic 3D display, we intend to express the complex Fresnel hologram with phase holograms only. Two phase holograms are extracted from the complex hologram, and the decomposition can be expressed as follows:
$$\exp (i{\theta _1}) + \exp (i{\theta _2}) = A\exp (i\theta )$$
where ${\theta _1}$ and ${\theta _2}$ refer to the two phase holograms. After some mathematical deduction, ${\theta _1}$ and ${\theta _2}$ can be solved as:
$$\left\{ \begin{array}{l} {\theta_1} = \theta + \arccos (\frac{A}{2})\\ {\theta_2} = \theta - \arccos (\frac{A}{2}) \end{array} \right.$$
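As a sanity check, the decomposition in Eq. (3) can be verified numerically: since exp(iθ1) + exp(iθ2) = 2cos(arccos(A/2))·exp(iθ) = A·exp(iθ), the two phase-only holograms recombine exactly into the complex hologram whenever the amplitude is normalized into [0, 2]. A minimal NumPy sketch (the array shape and random seed are arbitrary illustrative choices):

```python
import numpy as np

def decompose(A, theta):
    """Eq. (3): split A*exp(i*theta) into two phase-only holograms.
    Requires the amplitude A to be normalized into [0, 2]."""
    delta = np.arccos(A / 2)
    return theta + delta, theta - delta

rng = np.random.default_rng(0)
A = 2 * rng.random((4, 4))             # amplitude normalized into [0, 2]
theta = 2 * np.pi * rng.random((4, 4))
theta1, theta2 = decompose(A, theta)

# The sum of the two phase-only waves reproduces the complex hologram
assert np.allclose(np.exp(1j * theta1) + np.exp(1j * theta2), A * np.exp(1j * theta))
```

Because no iteration is involved, this extraction is what keeps the CGH calculation fast enough for the real-time display reported later.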

During display, the two extracted sub-holograms (${\theta _1}$ and ${\theta _2}$) are uploaded onto the phase-only SLM. The two sub-holograms are separated by a distance D on the SLM plane, as shown in Fig. 2. Denoting the distributions of the two sub-holograms as ${h_1}$ and ${h_2}$, the distribution of the input plane (SLM plane) can be described as:

$$h(x,y) = {h_1}(x,y - \frac{D}{2}) + {h_2}(x,y + \frac{D}{2})$$


Fig. 2. Illustration of the complex amplitude modulation method (green component).


An Abbe-Porter filter scheme is implemented following the SLM. A standard Abbe-Porter scheme consists of a lens and a filter. The input image is positioned slightly beyond the front focal plane of the lens, and the filter is put at the back focus of the lens. In the proposed structure, a sinusoidal amplitude grating is interposed to realize CAM. When the SLM is illuminated by a collimated plane wave, the spectral distribution is acquired on the Fourier plane. Unlike the 4f filter scheme, the SLM in the proposed structure is not placed on the front focal plane of the lens, so the distribution on the Fourier plane is not the precise Fourier transform of the input plane. According to the Fourier transform property of the lens, given an input plane placed a distance ${d_1}$ in front of the lens, the complex amplitude on the Fourier plane is expressed as:

$$\begin{array}{l} {E_1}({x_1},{y_1}) = {g_1}({x_1},{y_1})\cdot {\mathcal {F}}\{{h({x_0},{y_0})} \}\left|\begin{array}{l} {f_x} = {{{x_1}} / {\lambda {d_1}}}\\ {f_y} = {{{y_1}} / {\lambda {d_1}}} \end{array} \right.\\ \;\;\;\;\;\;\;\;\;\;\;\;\; = {g_1}({x_1},{y_1})\cdot H\left( {\frac{{{x_1}}}{{\lambda {d_1}}},\frac{{{y_1}}}{{\lambda {d_1}}}} \right) \end{array}$$
where
$${g_1}({x_1},{y_1}) = \frac{{{E_0}}}{{j\lambda f}}\exp [{jk({{d_1} + f} )} ]\exp \left[ {j\frac{k}{{2f}}\left( {1 - \frac{{{d_1}}}{f}} \right)({x_1^2 + y_1^2} )} \right]$$
$({{x_0},{y_0}} )$ and $({{x_1},{y_1}} )$ represent the coordinates of the input plane and the Fourier plane, respectively. ${\mathcal {F}}\{{} \}$ denotes the Fourier transform operation, $\lambda $ is the wavelength of the corresponding incident light, and H stands for the frequency spectrum of the CGH. ${E_0}$ is a complex constant indicating the complex amplitude at the center of the lens, and f is the focal length of the lens. After propagating to the Fourier plane, the optical wave is multiplied by the grating. The transmission coefficient of the sinusoidal amplitude grating is given by:
$$G(y) = \frac{1}{2} + \frac{1}{2}\cos \left( {2\pi \frac{y}{\Delta }} \right)$$
where $\Delta $ is the period of the grating. The complex amplitude after the grating therefore equals ${E_1}({{x_1},{y_1}} )\cdot G({{y_1}} )$. After the grating, the optical wave propagates a further distance ${d_2}$ to the output plane. This process is described by Fresnel diffraction, and the distribution on the output plane is:
$$\begin{array}{l} {E_2}({{x_2},{y_2}} )= \frac{1}{{j\lambda {d_2}}}\exp \left[ {jk\left( {{d_2} + \frac{{x_2^2 + y_2^2}}{{2{d_2}}}} \right)} \right]\\ \;\;\;\;\;\;\;\;\;\;\cdot \int\limits_{ - \infty }^\infty {\int {{E_1}({{x_1},{y_1}} )\cdot G({{y_1}} )\exp \left[ {j\frac{k}{{2{d_2}}}({x_1^2 + y_1^2} )} \right]\cdot \exp \left[ { - j\frac{k}{{{d_2}}}({{x_2}{x_1} + {y_2}{y_1}} )} \right]d{x_1}d{y_1}} } \end{array}$$
where $({{x_2},{y_2}} )$ denotes the coordinates of the output plane. As the output plane and input plane are conjugate planes of the lens, Eq. (8) can be simplified according to Newton's form of the lens imaging equation:
$$({{d_1} - f} )\cdot {d_2} = {f^2}$$
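Equation (9) fixes the conjugate output-plane distance once the input-plane distance is chosen. A small numeric example (the 75 mm focal length is the doublet value from Section 3.1; the 80 mm input distance is an assumed illustrative choice, not a quoted system parameter):

```python
f = 75.0   # doublet focal length, mm (value from the experimental setup)
d1 = 80.0  # assumed input-plane distance, slightly beyond the front focus

d2 = f**2 / (d1 - f)  # Eq. (9): (d1 - f) * d2 = f^2
M = -d2 / d1          # lateral magnification appearing in Eq. (12)
print(d2, M)  # 1125.0 -14.0625
```

Placing the input plane only a little beyond the front focus pushes the conjugate plane far away and yields a large inverted magnification, which is exactly how the first enlargement step works.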

Substituting Eq. (9) into Eq. (8), after some deduction the complex amplitude on the output plane simplifies to:

$$\begin{array}{l} {E_2}({{x_2},{y_2}} )={-} \frac{{{E_0}}}{{{\lambda ^2}f{d_2}}}\exp \left[ {jk\left( {{d_1} + f + {d_2} + \frac{{x_2^2 + y_2^2}}{{2{d_2}}}} \right)} \right]\\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\cdot \int\limits_{ - \infty }^\infty {\int {H\left( {\frac{{{x_1}}}{{\lambda {d_1}}},\frac{{{y_1}}}{{\lambda {d_1}}}} \right)\cdot G({{y_1}} )\cdot \exp \left[ { - j\frac{k}{{{d_2}}}({{x_2}{x_1} + {y_2}{y_1}} )} \right]d{x_1}d{y_1}} } \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\; = {g_2}({{x_2},{y_2}} ){\mathcal {F}}\left\{ {H\left( {\frac{{{x_1}}}{{\lambda {d_1}}},\frac{{{y_1}}}{{\lambda {d_1}}}} \right)\cdot G({{y_1}} )} \right\}\left|\begin{array}{l} {f_x} = {{{x_2}} / {\lambda {d_2}}}\\ {f_y} = {{{y_2}} / {\lambda {d_2}}} \end{array} \right. \end{array}$$
where
$${g_2}({{x_2},{y_2}} )={-} \frac{{{E_0}}}{{{\lambda ^2}f{d_2}}}\exp \left[ {jk\left( {{d_1} + f + {d_2} + \frac{{x_2^2 + y_2^2}}{{2{d_2}}}} \right)} \right]$$

This result corresponds well with Abbe's theory of secondary imaging. If we substitute the input distribution [Eq. (4)] and the grating [Eq. (7)] into the calculation, the optical field on the output plane is given by:

$$\begin{array}{l} {E_2}({{x_2},{y_2}} )= \frac{{{\lambda ^2}{d_1}{d_2}}}{2}{g_2}({{x_2},{y_2}} )\left\{ {\left[ {{h_1}\left( {\frac{{{x_2}}}{M},\frac{{{y_2}}}{M} - \frac{D}{2}} \right) + {h_2}\left( {\frac{{{x_2}}}{M},\frac{{{y_2}}}{M} + \frac{D}{2}} \right)} \right]} \right.\\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \frac{1}{2}\left[ {{h_1}\left( {\frac{{{x_2}}}{M},\frac{{{y_2}}}{M} + \frac{{\lambda {d_2}}}{\Delta } - \frac{D}{2}} \right) + {h_2}\left( {\frac{{{x_2}}}{M},\frac{{{y_2}}}{M} - \frac{{\lambda {d_2}}}{\Delta } + \frac{D}{2}} \right)} \right]\\ \left. {\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \frac{1}{2}\left[ {{h_1}\left( {\frac{{{x_2}}}{M},\frac{{{y_2}}}{M} - \frac{{\lambda {d_2}}}{\Delta } - \frac{D}{2}} \right) + {h_2}\left( {\frac{{{x_2}}}{M},\frac{{{y_2}}}{M} + \frac{{\lambda {d_2}}}{\Delta } + \frac{D}{2}} \right)} \right]} \right\} \end{array}$$
where $M ={-} {{{d_2}} / {{d_1}}}$. Note that if the separation distance D of the two sub-holograms and the grating period $\Delta $ satisfy the following condition:
$$D = \frac{{2\lambda {d_2}}}{\Delta }$$
Equation (12) can be further simplified as:
$$\begin{array}{l} {E_2}({{x_2},{y_2}} )= \frac{{{\lambda ^2}{d_1}{d_2}}}{2}{g_2}({{x_2},{y_2}} )\left\{ {\left[ {{h_1}\left( {\frac{{{x_2}}}{M},\frac{{{y_2}}}{M} - \frac{D}{2}} \right) + {h_2}\left( {\frac{{{x_2}}}{M},\frac{{{y_2}}}{M} + \frac{D}{2}} \right)} \right]} \right.\\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \frac{1}{2}\left[ {{h_1}\left( {\frac{{{x_2}}}{M},\frac{{{y_2}}}{M}} \right) + {h_2}\left( {\frac{{{x_2}}}{M},\frac{{{y_2}}}{M}} \right)} \right]\\ \left. {\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \frac{1}{2}\left[ {{h_1}\left( {\frac{{{x_2}}}{M},\frac{{{y_2}}}{M} - D} \right) + {h_2}\left( {\frac{{{x_2}}}{M},\frac{{{y_2}}}{M} + D} \right)} \right]} \right\} \end{array}$$

Comparing the second term of Eq. (14) with Eq. (2), we find that the complex amplitude distribution is reconstructed on the output plane. The other terms are unwanted orders that will be filtered out in experiments. The complex coefficient ${g_2}({{x_2},{y_2}} )$ is a quadratic phase factor, which means the wave reaches the output plane as a spherical wave.

In fact, Eq. (14) shows that each sub-hologram is duplicated into three patterns on the output plane by the grating. When the condition of Eq. (13) is satisfied, the −1st-order duplicate of the upper sub-hologram and the +1st-order duplicate of the lower sub-hologram superpose at the center and synthesize a complex amplitude hologram. The 3D object for a single color is then reconstructed through Fresnel diffraction. It can be seen from Eq. (13) that D is proportional to $\lambda $, so the distance D is different for each wavelength. The whole process for the green component is illustrated in Fig. 2; it works the same way for the red and blue components. Finally, via time multiplexing, the CGH for each wavelength is uploaded onto the SLM in time sequence and CAM for a full-color image is realized.
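Because D = 2λd2/Δ, the sub-hologram separation must be recomputed per laser line. A sketch with the 100 µm grating period from the setup; the propagation distance d2 here is an assumed illustrative value, not a quoted system parameter:

```python
DELTA = 100e-6  # grating period, m (from the experimental setup)
d2 = 0.1        # assumed grating-to-output distance, m (illustrative)

# Eq. (13): separation D scales linearly with wavelength
for name, lam in [("red", 639e-9), ("green", 532e-9), ("blue", 473e-9)]:
    D = 2 * lam * d2 / DELTA
    print(f"{name}: D = {D * 1e3:.3f} mm")
```

With these numbers the red, green and blue separations come out as 1.278 mm, 1.064 mm and 0.946 mm, so each color channel needs its own CGH layout on the SLM, which is why the channels are time-multiplexed.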

2.3 Enlarging field of view by two-step method

As mentioned before, enlarging the FOV of a near-eye device means converging as wide a range of rays as possible into the human eye, so that a larger image, and more information, can be seen within the field of view. A two-step method is proposed to enlarge the FOV, combining an Abbe-Porter filter system and a curved reflective structure.

The theory of the proposed two-step method is explained in Fig. 3. An Abbe-Porter filter system consists of a lens and a specific filter positioned at the back focus of the lens. Here, a doublet lens is used to reduce the chromatic aberration of the system. The Abbe-Porter system is a commonly used filter structure in imaging systems with the following advantages: it has a compact structure with fewer optical elements than a 4f system, and it can realize image enlargement conveniently. The object of an Abbe-Porter filter system is placed slightly beyond the front focus of the lens. According to the principle of lens imaging, an inverted, enlarged real image is acquired at a distance beyond twice the focal length of the lens, called the intermediate real image. The holographic grating is placed at the back focus of the lens; it cooperates with specially designed CGHs to realize CAM as described in Section 2.2. The intermediate real image then works as the input image of the following curved reflective structure. A reflective optical element can fold the optical path and compress the size of the whole system, which benefits the compact design goal of a 3D-NED system. Besides, it introduces no extra chromatic aberration into the full-color display system, in contrast to refractive optical elements. Here a concave mirror is employed. The concave mirror also follows the lens imaging principle and enlarges the image in the second step. At last, with the help of the light combiner, the enlarged image is reflected into the human eye and, simultaneously, light from the real-world scene passes directly through the combiner as well.


Fig. 3. Proposed two-step method for FOV enlargement.


According to geometrical optics, the Gaussian imaging principle of a thin lens is:

$$\frac{1}{{l^{\prime}}} - \frac{1}{l} = \frac{1}{{f^{\prime}}}$$
where $l^{\prime}$, l and $f^{\prime}$ denote the image distance, object distance and focal length, respectively. Their signs follow this rule: starting from the principal plane of the lens, distances measured from right to left are negative and those measured from left to right are positive. In the proposed system, the object is set slightly beyond the front focus of the lens, so the object distance ${l_1}$ is negative while the image distance ${l_1}^{\prime}$ and focal length ${f_1}^{\prime}$ are positive. After some mathematical deduction, the magnification of the Abbe-Porter filter system can be expressed as:
$${\beta _1} = \frac{{{l_1}^{\prime}}}{{{l_1}}} = \frac{1}{{\frac{{{l_1}}}{{{f_1}^{\prime}}} + 1}}$$
The sign of ${\beta _1}$ indicates the orientation of the image, i.e., upright or inverted. The absolute value of ${\beta _1}$ indicates whether the image is magnified or minified compared with the object. According to the signs of the parameters, a longer focal length of the lens brings a higher $|{{\beta_1}} |$, which means higher magnification. However, the image of the Abbe-Porter filter system cannot be magnified too much, because it acts as the input image of the concave mirror; the image of the first-step system is therefore restricted by the aperture of the second system.
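Equation (16) can be checked numerically. With the sign convention above (l1 negative, f1' positive), placing the object a little beyond the front focus yields a large negative β1, i.e., an inverted magnified image. The 75 mm focal length is the doublet value from Section 3.1; the 80 mm object distance is an assumed illustrative value:

```python
def abbe_porter_magnification(l1: float, f1: float) -> float:
    """Eq. (16): beta1 = 1 / (l1 / f1' + 1), with l1 negative by the
    sign convention of the text."""
    return 1.0 / (l1 / f1 + 1.0)

# object 80 mm in front of a 75 mm doublet (l1 negative by convention)
beta1 = abbe_porter_magnification(l1=-80.0, f1=75.0)
print(round(beta1, 1))  # -15.0: inverted, 15x magnified intermediate image
```

Moving the object even closer to the front focus drives |β1| higher still, which is why the aperture of the following mirror, not the lens itself, caps the usable first-step magnification.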

After the Abbe-Porter filter system, a first-step enlarged intermediate real image is acquired. The imaging principle of a concave mirror is similar to that of the thin lens in Eq. (15) but with a different sign rule. For the concave mirror, in this case, the object distance ${l_2}$, image distance ${l_2}^{\prime}$ and focal length ${f_2}^{\prime}$ are all negative. The concave mirror is positioned so that the intermediate image lies within its focal length, so the second system provides an upright, magnified virtual image for a better viewing effect and a compact system structure. According to Eq. (16) and the signs of the parameters in the second system, the shorter the focal length of the concave mirror, the larger the image will be. So in the experiment, an off-the-shelf short-focus concave mirror with a 25.4 mm focal length is employed.
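The second step can be sketched the same way. With the intermediate image inside the focal length of the concave mirror, the upright virtual-image magnification is m = f/(f − s) in all-positive lengths (the standard magnifier form, equivalent to the signed convention in the text). The 25.4 mm focal length is from the setup; the 20 mm object distance is an assumed illustrative value:

```python
def mirror_magnification(s: float, f: float) -> float:
    """Upright virtual-image magnification m = f / (f - s) for an object
    a distance s inside the focal length f of a concave mirror."""
    assert 0 < s < f, "object must sit within the focal length"
    return f / (f - s)

beta2 = mirror_magnification(s=20.0, f=25.4)
print(round(beta2, 2))  # ~4.7x: the closer s gets to f, the larger the image
```

The formula also shows why a shorter focal length helps: for a given intermediate image placed near the focus, a smaller f leaves a smaller denominator margin and hence a larger magnified virtual image.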

3. Experiments and analysis

3.1 Optical setup and specifications

Based on the principle illustrated above, a FOV-enlarged CAM-based 3D near-eye display system is designed. As the system form factor is considered throughout the design process, a compact and lightweight prototype is fabricated and the system is integrated. The shell of the prototype is made by 3D printing from polylactic acid (PLA). The size of the prototype is 185 mm×34.8 mm×31.4 mm with a 66 mm accessory. The wavelengths of the incident light are 639 nm, 532 nm and 473 nm. After beam expansion and collimation, the combined white parallel light illuminates the SLM panel through a BS. The SLM is a Holoeye Pluto with 8 µm pixel pitch, 1920×1080 resolution, 60 Hz refresh rate and a phase modulation range of [0, 2π]. The focal length of the doublet lens is 75 mm and its aperture size is 25.4 mm. The grating is fabricated via holographic exposure with a period of 100 µm; its thickness is 3 mm and it is cut to 20 mm×20 mm. The concave mirror is silver-coated, with both focal length and aperture size of 25.4 mm. The light combiner before the human eye is a beam splitter plate with a thickness of 2 mm.

During display, the CGH for each wavelength is calculated and uploaded onto the SLM in real time, and the corresponding laser is synchronized and switched on at the same time. Via time multiplexing, full-color reconstruction is realized and the frame rate for full-color display is 20 fps. A CCD sensor (Thorlabs DCU224C) with a 132° diagonal-FOV camera lens (Thorlabs MVL4WA, f = 3.5 mm) works as the human eye to record the reconstructed 3D images. The detailed parameters of the proposed system are listed in Table 1.


Table 1. Parameters of the proposed prototype.

3.2 Experimental results

Firstly, the FOV enlargement of the proposed system is tested. Two arrows and several characters are used as the target image to illustrate the diagonal FOV of the system. Each color component is displayed separately. The display distance is set 700 mm away from the CCD camera, and a paper with a distance mark is placed at the same distance as the images. The images of each color are recorded by the CCD camera and shown in Figs. 4(a)–4(c); Fig. 4(d) is a partial magnification of the green distribution. The FOV of the displayed images is measured to be 45.2° diagonally. Compared with the former full-color display system with a 4.8° diagonal FOV [28], a 9.4× enlargement is realized. It is worth mentioning that a blurry white ring-shaped image can be seen indistinctly around the reconstructed images. This is the image of the concave mirror reflected by the half mirror: the edge of the concave mirror is reflected directly to the eye according to the configuration in Fig. 3 and captured by the wide-angle lens. If the optical function of the concave mirror were recorded in a reflective HOE that replaces the mirror, which is quite feasible in a future design, there would be no such disturbance. The experimental results prove that the proposed method can realize significant FOV enlargement for a better viewing effect in an NED system.


Fig. 4. (a)-(c) Experimental results of FOV enlargement with single-color reconstruction, (d) partial magnification of the green distribution.


Next, 3D full-color reconstruction with a large FOV is performed. In the experiment, the two words “Far” and “Near” are chosen as the target colorful virtual signals to represent farther and nearer information, respectively. Each character of “Far” and “Near” is set to a different color to realize colorful reconstruction. The words “Near” and “Far” are reconstructed 50 mm and 2 m away from the camera, respectively, and two distance marks are placed at the same distances. The display effect is shown in Fig. 5. When the camera focuses on the farther signal, the word “Far” is in focus and clear; meanwhile, the “2 m” mark at the same distance as “Far” is also seen clearly, as shown in Fig. 5(a). When the camera focus moves nearer, the word “Near” becomes clear along with the mark “50 mm”, as shown in Fig. 5(b). This process is also recorded in a video attached as Visualization 1. Figure 5(c) indicates the spatial distribution of the images. In summary, the virtual images blur and sharpen in the same way as real objects, and the colors of the virtual images are reconstructed faithfully to the original images. The experimental results show that the system is able to present full-color 3D reconstruction with correct depth cues and a large FOV.


Fig. 5. 3D colorful reconstruction with large FOV (Visualization 1), (a) focus at ‘Far’, ${d_1} = 2\textrm{m}$, (b) focus at ‘Near’, ${d_2} = 50\textrm{mm}$, (c) spatial distribution of the reconstructed scene.


As no iterative operation is applied during the CGH calculation process, the proposed system is also able to present colorful dynamic display with a large FOV. A planet model is used to test the dynamic display ability. A large red star is positioned statically at the center; a blue planet revolves around the star while a little green planet revolves around the blue planet. The inner orbit is set to yellow and the outer orbit to magenta. The central star is reconstructed 700 mm away from the camera, and again a distance mark with “700 mm” is placed at the same distance as the red star. During display, the CGHs are calculated and uploaded onto the SLM in real time so that dynamic display is realized. The camera focuses at 700 mm and a video is recorded in Visualization 2. Figure 6 shows several frames extracted from the video. The colors of the images are reconstructed as designed and a large FOV is realized. The results demonstrate that the proposed system can present dynamic display with a large FOV.


Fig. 6. Dynamic 3D colorful reconstruction with large FOV (Visualization 2), (a)-(c) frames extracted from the video.


3.3 Analysis and discussions

The experimental results show that the proposed system can present full-color 3D reconstruction with a large FOV. The proposed method is proved effective, and the maximum FOV is measured to be 45.2° diagonally. To record the entire field of view of the reconstructed images, a CCD camera with a wide-angle camera lens is employed; the lens has a 132° FOV and a 3.5 mm focal length. The depth of field (DOF) of a camera lens depends on the lens aperture, focal length and focus distance: a lens with a smaller aperture, shorter focal length and longer focus distance captures a larger DOF. As a result, the DOF of the lens used in the experiment is quite large. When the reconstruction distance is set about 1 m from the lens, even if we adjust the focus of the lens away from the reconstructed plane, the reconstructed image remains well-shaped and only slightly blurred because of the large DOF, so the 3D effect is weakened and not as obvious as usual. In our second experiment, therefore, the final reconstruction distances are set to about 2 m, on the farthest wall, and 50 mm, quite close to the lens, to achieve a better effect. When the proposed system is observed directly by the human eye, no such issue exists: the virtual colorful images blur and sharpen in the same way as real objects when the eye focus changes.

For the present design, the FOV of the system is mainly restricted by the aperture size and focal length of the concave mirror. According to the analysis in Section 2.3, the intermediate image, i.e., the magnified image of the Abbe-Porter system, works as the object of the concave mirror. Theoretically, the magnification power of the Abbe-Porter system could be made large, but the size of the intermediate image must stay within the aperture of the concave mirror to avoid image loss. Therefore, a concave mirror with a larger aperture allows larger magnification power and a larger system FOV. However, as form factor and weight are critical in a near-eye display system, this improvement is limited. The other restriction is the focal length of the concave mirror. According to Eq. (16), a concave mirror with a smaller focal length brings a larger FOV. The concave mirror used in our system is an off-the-shelf element with a 25.4 mm focal length; the FOV of the system is 45.2° diagonally and the proposed method has been proved effective. If a concave mirror with a smaller focal length is used in the future, the FOV of the system will be further increased.

4. Summary

A two-step method is proposed to enlarge the FOV of a CAM-based 3D-NED system. The proposed system combines an Abbe-Porter scheme with a curved reflective structure. Complex amplitude modulation is retained to maintain the high quality of the reconstructed images. Experimental results show that the proposed system can present full-color 3D reconstruction with a large FOV. The FOV reaches 45.2° diagonally, a 9.4-fold enlargement compared with former CAM-based 3D-NED systems. Since the CGHs are calculated in real time, the system can also present full-color dynamic 3D display with a large FOV. The method is proved effective, and if suitable optical elements, such as a concave mirror with a shorter focal length, are employed, the FOV can be increased significantly further. The proposed system provides a better viewing experience for the viewer and is expected to promote the development of 3D-NED AR devices.

Funding

National Natural Science Foundation of China (61575024, 61975014).

Disclosures

The authors declare no conflicts of interest.

References

1. Z. He, X. Sui, G. Jin, and L. Cao, “Progress in virtual reality and augmented reality based on holographic display,” Appl. Opt. 58(5), A74–A81 (2019). [CrossRef]  

2. S. Lee, C. Jang, S. Moon, J. Cho, and B. Lee, “Additive light field displays,” ACM Trans. Graph. 35(4), 1–13 (2016). [CrossRef]  

3. S. Xie, P. Wang, X. Sang, and C. Li, “Augmented reality three-dimensional display with light field fusion,” Opt. Express 24(11), 11483–11494 (2016). [CrossRef]  

4. T. Zhan, Y. H. Lee, and S. T. Wu, “High-resolution additive light field near-eye display by switchable Pancharatnam-Berry phase lenses,” Opt. Express 26(4), 4863–4872 (2018). [CrossRef]  

5. M. Liu, C. Lu, H. Li, and X. Liu, “Bifocal computational near eye light field displays and Structure parameters determination scheme for bifocal computational display,” Opt. Express 26(4), 4060–4074 (2018). [CrossRef]  

6. W. Song, Q. Cheng, P. Surman, Y. Liu, Y. Zheng, Z. Lin, and Y. Wang, “Design of a light-field near-eye display using random pinholes,” Opt. Express 27(17), 23763–23774 (2019). [CrossRef]  

7. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]  

8. X. Shen and B. Javidi, “Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens,” Appl. Opt. 57(7), B184–B189 (2018). [CrossRef]  

9. H. Huang and H. Hua, “Generalized methods and strategies for modeling and optimizing the optics of 3D head-mounted light field displays,” Opt. Express 27(18), 25154–25171 (2019). [CrossRef]  

10. H. Deng, C. Chen, M.-Y. He, J.-J. Li, H.-L. Zhang, and Q.-H. Wang, “High-resolution augmented reality 3D display with use of a lenticular lens array holographic optical element,” J. Opt. Soc. Am. A 36(4), 588–593 (2019). [CrossRef]  

11. D. Shin, C. Kim, G. Koo, and Y. Hyub Won, “Depth plane adaptive integral imaging system using a vari-focal liquid lens array for realizing augmented reality,” Opt. Express 28(4), 5602–5616 (2020). [CrossRef]  

12. S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010). [CrossRef]  

13. X. Hu and H. Hua, “High-resolution optical see-through multi-focal-plane head-mounted display using freeform optics,” Opt. Express 22(11), 13896–13903 (2014). [CrossRef]  

14. S. Liu, Y. Li, P. Zhou, Q. Chen, and Y. Su, “Reverse-mode PSLC multi-plane optical see-through display for AR applications,” Opt. Express 26(3), 3394–3403 (2018). [CrossRef]  

15. D. Dunn, C. Tippets, K. Torell, P. Kellnhofer, K. Akşit, P. Didyk, K. Myszkowski, D. Luebke, and H. Fuchs, “Wide Field Of View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors,” IEEE Trans. Vis. Comput. Graph. 23(4), 1322–1331 (2017). [CrossRef]  

16. A. Wilson and H. Hua, “Design and demonstration of a vari-focal optical see-through head-mounted display using freeform Alvarez lenses,” Opt. Express 27(11), 15627–15637 (2019). [CrossRef]  

17. X. Xia, Y. Guan, A. State, P. Chakravarthy, K. Rathinavel, T.-J. Cham, and H. Fuchs, “Towards a Switchable AR/VR Near-eye Display with Accommodation-Vergence and Eyeglass Prescription Support,” IEEE Trans. Vis. Comput. Graph. 25(11), 3114–3124 (2019). [CrossRef]  

18. J. S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015). [CrossRef]  

19. K. Wakunami, P.-Y. Hsieh, R. Oi, T. Senoh, H. Sasaki, Y. Ichihashi, M. Okui, Y.-P. Huang, and K. Yamamoto, “Projection-type see-through holographic three-dimensional display,” Nat. Commun. 7(1), 12954 (2016). [CrossRef]  

20. G. Li, D. Lee, Y. Jeong, J. Cho, and B. Lee, “Holographic display for see-through augmented reality using mirror-lens holographic optical element,” Opt. Lett. 41(11), 2486–2489 (2016). [CrossRef]  

21. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017). [CrossRef]  

22. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014). [CrossRef]  

23. J. H. Park and S. B. Kim, “Optical see-through holographic near-eye-display with eyebox steering and depth of field control,” Opt. Express 26(21), 27076–27088 (2018). [CrossRef]  

24. C. Chang, W. Cui, and L. Gao, “Foveated holographic near-eye 3D display,” Opt. Express 28(2), 1345–1356 (2020). [CrossRef]  

25. J. P. Liu, W. Y. Hsieh, T. C. Poon, and P. Tsang, “Complex Fresnel hologram display using a single SLM,” Appl. Opt. 50(34), H128–H135 (2011). [CrossRef]  

26. H. Song, G. Sung, S. Choi, K. Won, H.-S. Lee, and H. Kim, “Optimal synthesis of double-phase computer generated holograms using a phase-only spatial light modulator with grating filter,” Opt. Express 20(28), 29844–29853 (2012). [CrossRef]  

27. Q. Gao, J. Liu, J. Han, and X. Li, “Monocular 3D see-through head-mounted display via complex amplitude modulation,” Opt. Express 24(15), 17372–17383 (2016). [CrossRef]  

28. Q. Gao, J. Liu, X. Duan, T. Zhao, X. Li, and P. Liu, “Compact see-through 3D head-mounted display based on wavefront modulation with holographic grating filter,” Opt. Express 25(7), 8412–8424 (2017). [CrossRef]  

29. Z. Zhang, J. Liu, Q. Gao, X. Duan, and X. Shi, “A full-color compact 3D see-through near-eye display system based on complex amplitude modulation,” Opt. Express 27(5), 7023–7035 (2019). [CrossRef]  

Supplementary Material (2)

Visualization 1: 3D colorful reconstruction with large FOV
Visualization 2: Dynamic 3D colorful reconstruction with large FOV


Figures (6)

Fig. 1.
Fig. 1. (a) Schematic of the FOV enlargement structure. BS: beam splitter, DL: doublet lens, G: grating, M: mirror, CM: concave mirror. (b) Viewing effect of the proposed system.
Fig. 2.
Fig. 2. Illustration of the complex amplitude modulation method (green component).
Fig. 3.
Fig. 3. Proposed two-step method for FOV enlargement.
Fig. 4.
Fig. 4. (a)-(c) Experimental results of FOV enlargement with single-color reconstruction, (d) partial magnification of the green distribution.
Fig. 5.
Fig. 5. 3D colorful reconstruction with large FOV (Visualization 1), (a) focus at ‘Far’, ${d_1} = 2\textrm{m}$, (b) focus at ‘Near’, ${d_2} = 50\textrm{mm}$, (c) spatial distribution of the reconstructed scene.
Fig. 6.
Fig. 6. Dynamic 3D colorful reconstruction with large FOV (Visualization 2), (a)-(c) frames extracted from the video.

Tables (1)

Table 1. Parameters of the proposed prototype.

Equations (16)

$$H = A\exp(i\theta)$$
$$\exp(i\theta_1) + \exp(i\theta_2) = A\exp(i\theta)$$
$$\begin{cases}\theta_1 = \theta + \arccos(A/2)\\[2pt] \theta_2 = \theta - \arccos(A/2)\end{cases}$$
$$h(x,y) = h_1\!\left(x,\ y - \frac{D}{2}\right) + h_2\!\left(x,\ y + \frac{D}{2}\right)$$
$$E_1(x_1,y_1) = g_1(x_1,y_1)\,\mathcal{F}\{h(x_0,y_0)\}\Big|_{f_x = x_1/\lambda d_1,\ f_y = y_1/\lambda d_1} = g_1(x_1,y_1)\, H\!\left(\frac{x_1}{\lambda d_1}, \frac{y_1}{\lambda d_1}\right)$$
$$g_1(x_1,y_1) = \frac{E_0}{j\lambda f}\exp[jk(d_1+f)]\exp\!\left[\frac{jk}{2f}\left(1 - \frac{d_1}{f}\right)(x_1^2 + y_1^2)\right]$$
$$G(y) = \frac{1}{2} + \frac{1}{2}\cos\!\left(\frac{2\pi y}{\Delta}\right)$$
$$E_2(x_2,y_2) = \frac{1}{j\lambda d_2}\exp\!\left[jk\left(d_2 + \frac{x_2^2+y_2^2}{2d_2}\right)\right]\iint E_1(x_1,y_1)\,G(y_1)\exp\!\left[\frac{jk}{2d_2}(x_1^2+y_1^2)\right]\exp\!\left[-\frac{jk}{d_2}(x_2x_1 + y_2y_1)\right]\mathrm{d}x_1\,\mathrm{d}y_1$$
$$(d_1 - f)\,d_2 = f^2$$
$$E_2(x_2,y_2) = \frac{E_0}{\lambda^2 f d_2}\exp\!\left[jk\left(d_1 + f + d_2 + \frac{x_2^2+y_2^2}{2d_2}\right)\right]\iint H\!\left(\frac{x_1}{\lambda d_1},\frac{y_1}{\lambda d_1}\right)G(y_1)\exp\!\left[-\frac{jk}{d_2}(x_2x_1 + y_2y_1)\right]\mathrm{d}x_1\,\mathrm{d}y_1 = g_2(x_2,y_2)\,\mathcal{F}\!\left\{H\!\left(\frac{x_1}{\lambda d_1},\frac{y_1}{\lambda d_1}\right)G(y_1)\right\}\Big|_{f_x = x_2/\lambda d_2,\ f_y = y_2/\lambda d_2}$$
$$g_2(x_2,y_2) = \frac{E_0}{\lambda^2 f d_2}\exp\!\left[jk\left(d_1 + f + d_2 + \frac{x_2^2+y_2^2}{2d_2}\right)\right]$$
$$E_2(x_2,y_2) = \frac{\lambda^2 d_1 d_2}{2}\, g_2(x_2,y_2)\left\{\left[h_1\!\left(\frac{x_2}{M}, \frac{y_2}{M} - \frac{D}{2}\right) + h_2\!\left(\frac{x_2}{M}, \frac{y_2}{M} + \frac{D}{2}\right)\right] + \frac{1}{2}\left[h_1\!\left(\frac{x_2}{M}, \frac{y_2}{M} + \frac{\lambda d_2}{\Delta} - \frac{D}{2}\right) + h_2\!\left(\frac{x_2}{M}, \frac{y_2}{M} - \frac{\lambda d_2}{\Delta} + \frac{D}{2}\right)\right] + \frac{1}{2}\left[h_1\!\left(\frac{x_2}{M}, \frac{y_2}{M} - \frac{\lambda d_2}{\Delta} - \frac{D}{2}\right) + h_2\!\left(\frac{x_2}{M}, \frac{y_2}{M} + \frac{\lambda d_2}{\Delta} + \frac{D}{2}\right)\right]\right\}$$
$$D = \frac{2\lambda d_2}{\Delta}$$
$$E_2(x_2,y_2) = \frac{\lambda^2 d_1 d_2}{2}\, g_2(x_2,y_2)\left\{\left[h_1\!\left(\frac{x_2}{M}, \frac{y_2}{M} - \frac{D}{2}\right) + h_2\!\left(\frac{x_2}{M}, \frac{y_2}{M} + \frac{D}{2}\right)\right] + \frac{1}{2}\left[h_1\!\left(\frac{x_2}{M}, \frac{y_2}{M}\right) + h_2\!\left(\frac{x_2}{M}, \frac{y_2}{M}\right)\right] + \frac{1}{2}\left[h_1\!\left(\frac{x_2}{M}, \frac{y_2}{M} - D\right) + h_2\!\left(\frac{x_2}{M}, \frac{y_2}{M} + D\right)\right]\right\}$$
$$\frac{1}{l'} - \frac{1}{l} = \frac{1}{f}$$
$$\beta_1 = \frac{l_1'}{l_1} = \frac{1}{l_1/f_1 + 1}$$
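The double-phase decomposition in the first three equations can be checked numerically: any complex field whose amplitude is scaled into [0, 2] splits into two phase-only components that sum back to the original field. This is a sketch on a random field, not the actual CGH pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

# Scale the amplitude into [0, 2] so that arccos(A / 2) is well defined.
A = 2 * np.abs(field) / np.abs(field).max()
theta = np.angle(field)

# Double-phase decomposition: H = A exp(i*theta) = exp(i*theta1) + exp(i*theta2),
# with theta1,2 = theta +/- arccos(A / 2).
delta = np.arccos(A / 2)
theta1, theta2 = theta + delta, theta - delta

# The two phase-only holograms recombine into the complex field exactly.
recombined = np.exp(1j * theta1) + np.exp(1j * theta2)
max_err = np.max(np.abs(recombined - A * np.exp(1j * theta)))
```

The identity holds because exp(i·theta1) + exp(i·theta2) = 2 cos(delta)·exp(i·theta) = A·exp(i·theta), which is why a phase-only SLM plus a grating filter can realize complex amplitude modulation.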
