
High-resolution 3D display using time-division light ray quadruplexing technology

Open Access

Abstract

We propose a flicker-reduced time-division light ray quadruplexing technology to improve both the spatial and angular resolutions of three-dimensional (3D) images. The proposed method uses an image-shift optical device with polarization gratings. By optimizing the design of the image-shift optical device and incorporating it into the display system, we confirmed that the resolution characteristics of 3D images displayed at a depth of 30 mm or more can be improved by up to 1.58 times. Furthermore, by developing a display system using a 120 Hz 8K projector with a wobbling device and a wavelength-selective λ/2 plate for reducing flicker, we achieved high-resolution 3D image display with deeper depth.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

An ideal three-dimensional (3D) display can be used in many fields such as television broadcasting, amusement, medicine, education, and digital signage because it provides an immersive and realistic experience. To realize an ideal 3D display, it is important to ensure that the displayed object appears as if it were real and that the observer remains comfortable even when observing it for a considerable time. Both physical and physiological factors can make observers uncomfortable. Many studies have reported high-quality 3D displays using head-mounted displays or special glasses [1–4]. However, because the device must be physically worn, it is not suitable for prolonged use. Physiological factors are influenced by the geometric relationship between the two eyes and the objects in terms of convergence, accommodation, and motion parallax. For example, the parallax stereogram [5], a 3D display method that uses a parallax barrier, reproduces 3D images by displaying a different parallax image to each of the left and right eyes. However, the accommodation distance of the human eyes does not match the convergence distance, a phenomenon called the accommodation-convergence conflict [6]. For some observers, this phenomenon may cause discomfort and headaches when they observe 3D images for a considerable amount of time. In addition, the appearance of 3D objects does not change when the observation position is altered because there is no motion parallax. Therefore, it is difficult to obtain a sense of reality.

Light field displays have been proposed to solve these problems [7–10]. For example, the integral 3D display [11] uses light rays to reproduce an optical image in space. In this method, discomfort caused by physical and physiological factors is resolved because it reproduces the cues for perceiving 3D images based on human visual functions. The thickness of the display device can be reduced because this method requires only a flat panel display and a lens array to display 3D images. In addition, stereoscopic viewing is possible even when observers tilt their heads because the method provides both horizontal and vertical parallaxes. However, it is difficult to display high-resolution 3D images with this method, because the number of pixels in a 3D image is equal to the number of elemental lenses constituting the lens array [12]. Therefore, it is important to miniaturize the elemental lenses and densify the elemental images, i.e., the images corresponding to each lens, to improve the resolution of 3D images. With current flat panel display technology, it is difficult to realize a high-density, high-definition display with a pixel density significantly higher than 1000 ppi [13]. Although integral 3D display methods using an ultra-high-definition device or multiple display devices have been proposed, the number of pixels in the 3D image is limited to approximately 100,000 [14–16].

Several studies have been conducted on light field displays that do not use a lens array on the display surface [17–19]. They include methods that reproduce 3D images by projecting and superimposing multi-view images onto a screen using projectors or similar devices. In these methods, the resolution of the 3D image is equal to the resolution of each viewpoint image; therefore, it is relatively easy to enhance the resolution of the 3D image by increasing the resolution of the multi-view image. However, the conventional method displays parallax only in the horizontal direction. Thus, a faithful optical image is not formed in space, and distortion may occur in the 3D image at positions other than the predetermined observation position. To display natural 3D images, the angular resolution as well as the resolution of the viewpoint images should be improved. The disadvantage of this method is that improving the angular resolution is difficult: the angular resolution is limited by the physical layout of the installation because multiple projectors are lined up to display the individual viewpoint images. Consequently, some studies have been conducted to optimize the installation of the projectors [20–22]. In these studies, high angular resolution is achieved by placing projectors in both the horizontal and vertical directions. However, designing and adjusting the optical system is complicated because of the different incident angles in the vertical direction. Furthermore, it is impossible to realize a vertical parallax. Research [23,24] has also been conducted to increase the number of viewpoints and improve the angular resolution using time-division technology. For example, Teng [23] proposed a method to increase the number of viewpoints by rotating a spiral array of apertures at high speed; however, it is difficult to maintain this condition for a long period because the apertures must be moved mechanically.
Xia [24] proposed a method to optically increase the number of viewpoints using a screen composed of liquid crystals and prism arrays. This method is realized by switching the liquid crystal on and off by voltage control, which is easy to implement; however, the steering angle is determined by the birefringence and apex angle of the prism array. Consequently, the screen must be redesigned and remanufactured whenever the display system specifications change, because the display system and the screen are in a one-to-one relationship.

In this study, we propose a time-division multiplexing technology to improve the angular resolution of a display system with both horizontal and vertical parallaxes. This technology uses two optical elements called polarization gratings (PGs), and the amount of shift of the viewpoint image can be controlled freely by adjusting the distance between these elements. Therefore, this technology can optically shift the viewpoint image and remain stable even after prolonged use. The same element design can be used even if the specifications of the display system change. In addition, we developed a light ray quadruplexing technology that simultaneously applies a conventional wobbling method [25] to improve not only the angular resolution but also the spatial resolution. This technology also reduces flicker in 3D images, a well-known issue in time-division multiplexing, using a wavelength-selective $\lambda$/2 plate. We developed a display system based on the proposed method and achieved 3D image display with high resolution and deep depth using a single 120 Hz 8K projector. Furthermore, we analyzed the display characteristics of the 3D images to effectively improve the image quality. First, the basic principle of the display system developed in this study is explained in Sec. 2. In Sec. 3, a time-division multiplexing technology to improve the angular resolution is proposed, and its basic verification is performed. Next, we designed a flicker-reduced light ray quadruplexing scheme to improve the spatial and angular resolutions using the proposed method, fabricated a prototype display system, and evaluated its display characteristics, as described in Secs. 4 and 5. Finally, the summary of this study and future prospects are presented in Sec. 6.

2. Basic principle of light field display

2.1 Configuration of display optical system and design of diffusion screen

Figure 1 shows the basic configuration of the display optical system. The projector projects a multi-view image of $M{\times }N$ viewpoints (e.g., $P_1$, $P_2$,…, $P_M$) with horizontal and vertical parallaxes, which are divided into each viewpoint image by the imaging lens array. Thereafter, each viewpoint image is superimposed on the 3D screen by condenser lens 1 placed at the focal length of the imaging lens array. In addition, condenser lens 2, which is used for controlling the viewing zone, forms the viewpoints (e.g., $V_1$, $V_2$,…, $V_M$) at the optimal viewing distance $D$ from condenser lens 2. In this method, each viewpoint image is projected and displayed on the entire screen. Therefore, the maximum number of pixels in the 3D image is equal to the number of pixels in each viewpoint image. The optimal viewing distance $D$ can be expressed using

$$D = \frac{1}{\frac{1}{f_{V}} - \frac{1}{f_{S}} },$$
where $f_{S}$ is the focal length of condenser lens 1, and $f_{V}$ is the focal length of condenser lens 2. As the viewpoint interval observed by the observer is $Dp_{x}/f_{S}$ where $p_{x}$ is the horizontal pitch of the imaging lens, the angular interval of the viewpoint images $\theta _{x}$ is expressed by
$$\theta_{x} = \arctan (\frac{p_{x}}{f_{S}}) \approx \frac{p_{x}}{f_{S}}.$$

Therefore, when $M$ viewpoint images are displayed in the horizontal direction from the projector, the horizontal viewing zone is $M\theta _{x}$. Three-dimensional images without geometric distortion can also be viewed at viewing positions other than the optimal viewing distance $D$ because this display system is based on a light field display. The light rays of each viewpoint image are emitted at different angles from the 3D screen, as shown in Fig. 1. At the optimal viewing distance $D$, only the light rays of one viewpoint image reach each eye of the observer. At other viewing distances, the light rays of two or more viewpoint images reach the observer. Thus, light field 3D images corresponding to the viewing position can be viewed in the effective viewing zone indicated in Fig. 1. Outside the effective viewing zone, all or part of the display screen is darkened, which makes it impossible to view the 3D images.
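As a concrete illustration, Eqs. (1) and (2) can be evaluated numerically. The focal lengths and lens pitch below are hypothetical example values, not the prototype's specifications.

```python
import math

def optimal_viewing_distance(f_v, f_s):
    """Eq. (1): D = 1 / (1/f_V - 1/f_S); requires f_V < f_S."""
    return 1.0 / (1.0 / f_v - 1.0 / f_s)

def viewpoint_angular_interval(p_x, f_s):
    """Eq. (2): theta_x = arctan(p_x / f_S) ~ p_x / f_S (radians)."""
    return math.atan(p_x / f_s)

# Hypothetical example values (not from the paper); all lengths in mm.
f_S, f_V = 300.0, 250.0  # focal lengths of condenser lenses 1 and 2
p_x, M = 5.0, 8          # imaging-lens pitch and number of horizontal viewpoints

D = optimal_viewing_distance(f_V, f_S)
theta_x = viewpoint_angular_interval(p_x, f_S)
print(f"D = {D:.0f} mm")
print(f"theta_x = {math.degrees(theta_x):.3f} deg")
print(f"horizontal viewing zone = {math.degrees(M * theta_x):.2f} deg")
```

With these assumed values the viewpoints form 1.5 m from condenser lens 2, each separated by roughly one degree.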


Fig. 1. Basic configuration of developed display optical system.


It is necessary to widen the light rays of each viewpoint image to reproduce a 3D image with smooth motion parallax. However, crosstalk between adjacent viewpoints increases if the light rays are spread too wide, which reduces the quality of the 3D image. Therefore, it is important to design a diffusion screen with top-hat-shaped rather than general Gaussian-shaped diffusion characteristics, so that the light rays are spread sufficiently while crosstalk is reduced. A diffusion screen formed by micro-lenses was used to obtain top-hat-shaped diffusion characteristics (see Supplement 1). When the diffusion angle of the diffusion screen is equal to the angular interval of the viewpoint images $\theta _{x}$, in principle no crosstalk exists between adjacent viewpoint images, and thus high-quality 3D images can be displayed.
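The benefit of the top-hat profile can be quantified with a short sketch: for a Gaussian profile whose FWHM equals the viewpoint interval, a noticeable fraction of the light falls outside the interval and becomes crosstalk, whereas an ideal top-hat of the same full width leaks nothing. The 1.0° interval below is illustrative, not a prototype value.

```python
import math

def gaussian_leak_fraction(fwhm, half_width):
    """Fraction of a Gaussian diffusion profile (given FWHM) falling outside
    +/- half_width, i.e. light leaking into the neighbouring viewpoints."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.erfc(half_width / (sigma * math.sqrt(2.0)))

theta_x = 1.0  # viewpoint angular interval in degrees (illustrative)
# An ideal top-hat of full width theta_x leaks nothing beyond +/- theta_x/2.
# A Gaussian with FWHM = theta_x has substantial tails crossing into neighbours:
leak = gaussian_leak_fraction(fwhm=theta_x, half_width=theta_x / 2.0)
print(f"Gaussian crosstalk fraction: {leak:.1%}")  # roughly 24%
```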

2.2 Characteristics of 3D resolution in depth direction

Light field displays, including integral 3D displays, have the highest resolution near the display surface, and the resolution decreases with distance from the display surface [26]. The resolution characteristics of this display system can be considered in the same manner as those of integral 3D displays. Consider the case of observing a 3D image reproduced at a distance $z$ from the 3D screen surface, as shown in Fig. 2. The spatial frequency $\beta$ [cycles/rad (cpr)] observed by the observer is expressed by

$$\beta = \alpha\frac{D-z}{|z|},$$
where $\alpha$ is the projected spatial frequency of the 3D image. The projected spatial frequency $\alpha$ is limited owing to the angular interval of the viewpoint images and diffusion angle of the diffusion screen. It can be expressed by the following relationship:
$$\frac{1}{\alpha}\geq 2{\rm max}(\theta_{x},\ \theta_{d}),$$
where $\theta _{d}$ is the diffusion angle of the diffusion screen. When $\theta _{d}$ is smaller than the angular interval of the viewpoint images $\theta _{x}$, it is impossible to reproduce 3D images with a smooth motion parallax. Therefore, it is typical to set $\theta _x\leq \theta _d$. The observation spatial frequency $\beta$ is similarly limited by the following equation:
$$\beta \leq \frac{D-z}{2\theta_d|z|} = \beta_{max},$$
where $\beta _{max}$ is defined as the maximum observation spatial frequency, which corresponds to the limit of the projected spatial frequency.


Fig. 2. Observation spatial frequency of 3D image.


The displayed 3D image is sampled by the pixel size projected on the screen. When the pixel size projected on the screen is $w$, the angular sampling period is $w/D$. Therefore, the Nyquist frequency of sampling $\beta _{nyq}$ can be expressed by

$$\beta_{nyq} = \frac{D}{2w}.$$

When the Nyquist frequency $\beta _{nyq}$ is smaller than the maximum observation spatial frequency $\beta _{max}$, the observed 3D image contains aliasing; thus, 3D images cannot be observed at the maximum observation spatial frequency. Therefore, the upper spatial frequency $\gamma$ of the 3D image can be expressed as

$$\gamma = {\rm min}(\beta_{max},\beta_{nyq}) = {\rm min}(\frac{D-z}{2\theta_d|z|},\ \frac{D}{2w}).$$

Figure 3 illustrates this upper spatial frequency. The resolution of the 3D image displayed near the screen is equal to the Nyquist frequency $\beta _{nyq}$. However, the resolution of a 3D image displayed farther from the screen is dominated by the maximum observation spatial frequency $\beta _{max}$, which decreases with distance from the screen. In this system, the Nyquist frequency $\beta _{nyq}$ should be improved to enhance the upper spatial frequency near the screen; this is achieved by reducing the pixel size projected on the screen. Moreover, it is important to narrow the diffusion angle of the diffusion screen and to reduce the lens pitch $p_x$ of the imaging lens to improve the maximum observation spatial frequency $\beta _{max}$. Thus, the viewpoint interval of the projected multi-view image should be fine to enhance the resolution of 3D images with depth.
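The depth dependence of Eq. (7) can be sketched numerically. The viewing distance, diffusion angle, and pixel size below are assumed example values, chosen only to show the crossover between the Nyquist-limited regime near the screen and the $\beta_{max}$-limited regime at depth.

```python
import math

def upper_spatial_frequency(z, D, theta_d, w):
    """Eq. (7): gamma = min(beta_max, beta_nyq) in cycles/rad.
    z: depth from the screen (mm), D: optimal viewing distance (mm),
    theta_d: screen diffusion angle (rad), w: projected pixel size (mm)."""
    beta_nyq = D / (2.0 * w)
    if z == 0:
        return beta_nyq  # beta_max diverges on the screen surface
    beta_max = (D - z) / (2.0 * theta_d * abs(z))
    return min(beta_max, beta_nyq)

# Illustrative values (not the prototype's): D = 1500 mm, 1.0 deg diffusion
# angle, 0.5 mm projected pixel size.
D, theta_d, w = 1500.0, math.radians(1.0), 0.5
for z in (10.0, 30.0, 100.0):
    gamma = upper_spatial_frequency(z, D, theta_d, w)
    print(f"z = {z:5.0f} mm -> gamma = {gamma:7.1f} cpr")
```

With these assumptions the resolution is Nyquist-limited only very close to the screen and falls off quickly with depth, mirroring the shape of Fig. 3.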


Fig. 3. Resolution characteristics of 3D image.


3. Proposed method for improving angular resolution of 3D image

The lens pitch of the imaging lens should be fine to improve the angular resolution of 3D images in this system. In this study, we propose a method that reduces the effective lens pitch not physically but optically, by shifting the viewpoint image in time division. This method uses two PGs, which are diffractive elements composed of a liquid crystal polymer. Their function is to select the diffraction direction according to the polarization of the incident light (see Supplement 1). Considering these two features: (1) the light is emitted only in the direction of positive or negative first-order diffracted light, and (2) the light transmitted through the PG changes its phase by $\pi$, the proposed image-shift optical device is composed of one imaging lens, two PGs, and a polarization switching device, as shown in Fig. 4. For example, as shown in Fig. 4(a), when horizontally polarized light is incident on the first PG, it is emitted in the positive first-order diffraction direction. At this time, the light is converted to vertical polarization by the first PG. Therefore, the second PG emits the light as negative first-order diffracted light, which results in a shift of the image toward the lower left along the light path. Conversely, as shown in Fig. 4(b), when voltage is applied to the polarization switching device and vertically polarized light enters the first PG, the image is shifted toward the upper right along the light path. Therefore, the number of viewpoints can be doubled by switching the polarization switching device at high speed. The appendix movie shows how the image is shifted when the polarization switching device is turned on and off (Figs. 4(c) and 4(d) show still frames from the movie). The movie verifies that there is only slight light leakage and that the image is shifted with high diffraction efficiency.

Figure 5 describes in detail the amount of shift of the image when using the proposed method. In Fig. 5, the exit angle of the light after transmission through the PG is expressed as

$$\sin\theta_{1} - \sin\theta_{0} = \frac{m\lambda}{\mathit{\Lambda_{x}}},$$
where $\theta _{0}$, $\theta _{1}$ are the incident angle to the PG and the exit angle of the $m$th-order diffracted light, respectively, and $\mathit {\Lambda _{x}}$ is the pitch of the diffraction grating in the x-direction of the PG. If the orientation direction of the liquid crystal that forms the diffraction grating is rotated by $\phi$ with respect to the y-axis, then $\mathit {\Lambda _{x}}=\mathit {\Lambda }/\cos \phi$, where $\mathit {\Lambda }$ is the pitch of the diffraction grating. Therefore, the incident light at the position and direction $(x_{0},\ \theta _{0})$ near the optical axis is transferred to $(\mathit {\Delta }s_x, \theta _2)$ at the focal length position $f_{I}$ of the imaging lens after passing through the second PG, as
$$\begin{pmatrix} \mathit{{\Delta}s_x} \\ \theta_2 \end{pmatrix} = \begin{pmatrix} 1 & f_{I} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_{0} \\ \theta_{0} \end{pmatrix} + \begin{pmatrix} 1 & l+\mathit{\Delta}z \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ \frac{m_{1}\lambda}{\mathit{\Lambda_{x}}} \end{pmatrix} + \begin{pmatrix} 1 & \mathit{\Delta}z \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ \frac{m_{2}\lambda}{\mathit{\Lambda_{x}}} \end{pmatrix},$$
where $l$ is the distance between two PGs, $\mathit {\Delta }z$ is the distance between the second PG and the focal length position of the imaging lens, and $m_1$, $m_2$ are the orders of the diffracted light emitted from the first and second PGs, respectively. When horizontally or vertically polarized light is input to the first PG, the order of the diffracted light is $(m_1, m_2)=(\pm 1, \mp 1)$, and therefore, Eq. (9) can be simplified as
$$\begin{pmatrix} \mathit{\Delta}s_x \\ \theta_2 \end{pmatrix} = \begin{pmatrix} 1 & f_{I} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_{0} \\ \theta_{0} \end{pmatrix} + \begin{pmatrix} {\pm}\frac{l\lambda}{\mathit{\Lambda_x}} \\ 0 \end{pmatrix}.$$


Fig. 4. Principle of proposed image-shift optical device. (a) When no voltage is applied to the polarization switching device, the image is shifted in the lower left direction along the light path, and (b) when voltage is applied to the polarization switching device, the image is shifted upper right direction along the light path. The results of re-shooting the shifted image (c) in the state of (a) and (d) in the state of (b) (see Visualization 1).


Fig. 5. Theoretical diagram of image shift near the optical axis by image-shift optical device.


From Eq. (10), the traveling direction of the light does not change because $\theta _2 =\theta _0$. Therefore, the image-shift optical device can shift the image in parallel to the optical axis. In addition, the amount of image shift $\mathit {\Delta }s_x$ in the x-axis direction is $\mathit {\Delta }s_x={\pm } l\lambda /\mathit {\Lambda _x}$ because the position and direction of the incident light are related by $x_0{\approx }-f_{I}\theta _{0}$. Thus, this value is determined by the liquid crystal pitch of the PGs and the distance $l$ between the two PGs. Therefore, it is possible to shift the image by an arbitrary amount by adjusting the distance between the PGs.
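Equations (9) and (10) can be checked directly with a short ray-transfer computation: with diffraction orders $(m_1, m_2)=(+1,-1)$, the $\mathit{\Delta}z$ terms cancel, the exit angle equals the incident angle, and the shift reduces to $l\lambda/\mathit{\Lambda_x}$. All numerical values below are illustrative, not the prototype's.

```python
def shift_and_angle(x0, theta0, f_I, l, dz, m1, m2, lam, Lambda_x):
    """Eq. (9): propagate the ray (x0, theta0) over the focal distance f_I and
    add the two PG diffraction kicks (orders m1, m2), propagated over (l + dz)
    and dz, respectively. Returns (shift Delta_s_x, exit angle theta_2)."""
    kick1 = m1 * lam / Lambda_x  # angle added by the first PG
    kick2 = m2 * lam / Lambda_x  # angle added by the second PG
    s = (x0 + f_I * theta0) + (l + dz) * kick1 + dz * kick2
    theta2 = theta0 + kick1 + kick2
    return s, theta2

# Illustrative numbers: 550 nm light, 3.5 um PG pitch, 50 mm PG separation,
# 100 mm imaging-lens focal length, 10 mm distance dz (SI units).
lam, Lambda_x, l, f_I, dz = 550e-9, 3.5e-6, 50e-3, 100e-3, 10e-3
theta0 = 0.01
x0 = -f_I * theta0  # paraxial relation x0 ~ -f_I * theta0
s, theta2 = shift_and_angle(x0, theta0, f_I, l, dz, +1, -1, lam, Lambda_x)
# theta2 equals theta0 (the dz terms cancel) and s equals l * lam / Lambda_x.
print(f"shift = {s * 1e6:.2f} um, exit angle = {theta2:.6f} rad")
```

Changing `dz` leaves both outputs unchanged, confirming that the shift depends only on the PG pitch and the separation $l$.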

To halve the horizontal lens pitch of the imaging lens in time division, the image-shift optical device should shift the image by $\left |{\Delta }s_x\right |=p_x/4$. Thus, using Eq. (10), this is expressed as follows:

$$\left| {\Delta}s_x \right| = \frac{l\lambda}{\mathit{\Lambda_x}} = \frac{1}{4}p_x.$$

Therefore, doubling the maximum observation spatial frequency $\beta _{max}$ expressed in Eq. (5) is achieved by adjusting the distance between the two PGs, as shown in the following equation:

$$l = \frac{\mathit{\Lambda_x}}{4\lambda}p_x = \frac{\mathit{\Lambda}}{4{\lambda}\cos\phi}p_x.$$
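Equation (12) can be evaluated with a short sketch. The PG pitch matches the prototype's 3.5 µm (Sec. 4.3), but the design wavelength, grating rotation angle $\phi$, and lens pitch $p_x$ below are hypothetical values, since the paper does not list them.

```python
import math

def pg_separation(Lambda, lam, phi_deg, p_x):
    """Eq. (12): distance l between the two PGs that yields a quarter
    lens-pitch image shift, |Delta_s_x| = p_x / 4."""
    return Lambda * p_x / (4.0 * lam * math.cos(math.radians(phi_deg)))

# 3.5 um PG pitch (Sec. 4.3); 550 nm wavelength, 45 deg rotation, and
# 25 mm imaging-lens pitch are assumptions for illustration.
l = pg_separation(Lambda=3.5e-6, lam=550e-9, phi_deg=45.0, p_x=25e-3)
print(f"l = {l * 1e3:.1f} mm")
```

The PG separation scales linearly with the lens pitch, so the same pair of PGs can serve a redesigned system simply by re-spacing them.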

4. Prototype display system

4.1 Configuration of prototype display system

Based on Section 3, the proposed image-shift optical device was designed and employed in the prototype display system. We developed a technology that quadruplexes the light rays in time division to simultaneously improve both the angular resolution (via the image-shift optical device) and the spatial resolution of the 3D image. A 120 Hz 8K projector with a conventional wobbling device [25] was developed to quadruple the number of light rays, and it was incorporated into the display system. Figure 6 shows the overall view of the display system, and Table 1 shows the specifications of the projector and optical elements used in the display system.

The size of the developed projector is 240 mm wide, 341 mm high, and 1130 mm deep. Three LED light sources and three LCOS chips were used to realize a compact and high pixel density projector, and a small color synthesizing optical block with a dichroic prism was applied in the image reading part. In addition, a full 8K image with a minimum size of 11 inches diagonally (over 800 ppi) could be projected by employing a dedicated projection lens. Furthermore, the built-in wobbling device enabled the pixels to be optically shifted by half a pixel diagonally. Therefore, 16K equivalent images were projected in the time division.


Fig. 6. Appearance of prototype display system with image-shift optical device. (a) Overall view of the display system and (b) enlarged view of the image-shift optical device part.


Table 1. Specifications of prototype display system.


Table 2. Theoretical characteristics of displayed 3D image.

This projector projects viewpoint images of 960 $\times$ 540 pixels for a total of 64 viewpoints, i.e., 8 horizontal and 8 vertical, and improves each viewpoint image to the equivalent of high-definition (HD) resolution using the wobbling device. In addition, each viewpoint image was shifted diagonally and multiplexed using the image-shift optical device to double the number of viewpoints to 128. The arrangement of the optical system and the diffusion characteristics of the diffusion screen were optimized based on Section 2; the resulting theoretical 3D image display characteristics of this display system are shown in Table 2.

4.2 Flicker-reduced time-division quadruplexing technology

We developed a light ray quadruplexing technology that can display 3D images with reduced flicker. Figure 7(a) shows the detailed configuration of the prototype display system. SW1 and SW2 are polarization switching devices placed inside the projector and near the projection lens, respectively. SW1 is used to shift the image by half a pixel as part of the wobbling device; SW2 is used to shift the image by half a viewpoint and double the number of viewpoints as part of the image-shift optical device. A wavelength-selective $\lambda$/2 plate is placed behind SW2. This waveplate causes the polarization states of the green (G) and red (R)/blue (B) images to differ. Figure 8 shows how the images are switched in chronological order. Four images are displayed in one cycle. At a given viewpoint, the G and R/B images are switched at 120 Hz. Simultaneously, a half-pixel shift with the wobbling device is performed at 60 Hz. Thus, multiplexed 3D images can be displayed while preventing flicker even at 30 fps, because either the G or the R/B image is continually switched and displayed at 120 Hz. An optically compensated bend liquid crystal with a fast response is used as the polarization switching device. The rising and falling response times are approximately 1 ms and 1.5 ms, respectively. Furthermore, in the prototype display system, the LEDs are turned on intermittently to reduce the effect of the response time of the polarization switching device. For synchronization, the two polarization switching devices in the prototype display system are driven and controlled based on a digital signal incorporated in the displayed image (see Supplement 1).
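The four-subframe cycle of Fig. 8 can be written down as a small schedule. The exact subframe ordering and the viewpoint labels "A"/"B" are assumptions for illustration; the key property, which the sketch makes explicit, is that every viewpoint receives either the G or the R/B image in every 120 Hz subframe, which is why flicker stays low at an effective 30 fps.

```python
def quadruplex_schedule():
    """Yield (subframe, wobble_shift, G_viewpoint, RB_viewpoint) for one
    30 fps cycle of four 120 Hz subframes. SW2 plus the wavelength-selective
    half-wave plate send the G and R/B light to opposite half-viewpoints;
    SW1 toggles the half-pixel wobble at 60 Hz."""
    for t in range(4):
        wobble = t // 2                   # SW1 state: changes every 2 subframes (60 Hz)
        sw2 = t % 2                       # SW2 state: changes every subframe (120 Hz)
        g_vp = "A" if sw2 == 0 else "B"   # lambda/2 plate flips G relative to R/B
        rb_vp = "B" if sw2 == 0 else "A"
        yield t, wobble, g_vp, rb_vp

for t, wobble, g, rb in quadruplex_schedule():
    print(f"subframe {t}: wobble={wobble}, G -> viewpoint {g}, R/B -> viewpoint {rb}")
```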


Fig. 7. Configuration of prototype display system: (a) Display optical system. (b) Aperture array for blocking unnecessary light. (c) Half viewpoint image-shift.


Fig. 8. Flicker-reduced time-division quadruplexing display.


The average measured diffraction efficiency of the first-order diffracted light of the PG was 93%. When the two PGs were stacked, the diffraction efficiency decreased, and the measured value was 87%. The zero-order light was approximately 2–3%. The zero-order and higher-order diffracted light rays are superimposed on the displayed image if they are not blocked, which causes ghost images and degrades image quality. To address this problem, an aperture array is placed on the plane where the first-order diffracted light rays are focused, as shown in Fig. 7(b). This aperture array blocks the unwanted light and prevents the image quality loss caused by ghost images.
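A quick consistency check on these measurements: if the two PGs diffract independently (our assumption, not stated in the paper), the stacked efficiency should be close to the square of the single-PG value.

```python
eff_single = 0.93                       # measured first-order efficiency of one PG
eff_stacked_expected = eff_single ** 2  # two PGs in series, assuming independence
print(f"expected stacked efficiency: {eff_stacked_expected:.1%}")  # vs. 87% measured
print(f"stray light to be blocked:   {1 - eff_stacked_expected:.1%}")
```

The ~13% of stray light (zero-order plus higher orders) is what the aperture array is there to absorb.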

4.3 Effects of chromatic dispersion of PG

The design value of the diffraction pitch of the prototype PG is 3.5 $\mu$m, and the measured diffraction angle is 9.0$^\circ$ at a wavelength of 550 nm. The center wavelength and full width at half maximum (FWHM) of the projected light of the 8K projector are 624 nm and 15 nm for the R light, 536 nm and 56 nm for the G light, and 452 nm and 19 nm for the B light. The distance between the two PGs is 56.8 mm. The diffraction angles for the incident R, G, and B lights are 10.3$^\circ$, 8.8$^\circ$, and 7.4$^\circ$, respectively. Under these conditions, the R and B lights are projected at a distance of approximately 1.5 mm from the G light on the second PG, as shown in Fig. 9(b). This amount of shift affects the traveling angle of the light rays that form 3D images. The difference in the traveling angle of the R and B lights relative to the G light is approximately $\pm$ 0.09$^\circ$ from $d_1/d_2\approx 0.06$, where $d_1$ and $d_2$ are the distances between the PGs and between the condenser lens and 3D screen, respectively. This difference in the traveling angle is sufficiently low compared with the diffusion angle of the 3D screen, which is approximately 1.0$^\circ$. Therefore, the chromatic dispersion of the PG hardly affects image quality. Theoretically, this difference in the traveling angle of light rays can be corrected by geometrically correcting the R, G, and B images, as shown in Fig. 9(b). To perform this correction, a diffuser film with a wide diffusion angle must be temporarily placed at the same position as the second PG, with the camera placed in front of it. The shift in the second PG can be eliminated by displaying the calibration pattern for each color from the projector and reverse-correcting it such that each color image is displayed in the same position on the diffuser film.

The FWHM of the G light is 56 nm, which is the widest of those of the R, G, and B lights. In the G light, the chromatic dispersion of the PG causes a difference in the diffraction angle of approximately 0.9$^\circ$ within the FWHM wavelength range. This difference in the diffraction angle appears as an angular dispersion in the traveling direction of light rays that form 3D images. The width of the angular dispersion of the G light at the 3D screen is approximately 0.05$^\circ$, which is the value derived from $d_1/d_2\approx 0.06$. This width of the angular dispersion is sufficiently low compared with the diffusion angle of the 3D screen and has little effect on the image quality.
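The diffraction angles quoted above follow directly from the grating equation, Eq. (8), at normal incidence with the 3.5 µm design pitch; the sketch below reproduces them.

```python
import math

def pg_diffraction_angle(lam, Lambda, m=1, theta0_deg=0.0):
    """Eq. (8): sin(theta1) = sin(theta0) + m * lambda / Lambda.
    Returns the exit angle theta1 in degrees."""
    s = math.sin(math.radians(theta0_deg)) + m * lam / Lambda
    return math.degrees(math.asin(s))

Lambda = 3.5e-6  # design pitch of the prototype PG (Sec. 4.3)
for name, lam in (("R (624 nm)", 624e-9), ("G (536 nm)", 536e-9),
                  ("B (452 nm)", 452e-9), ("550 nm", 550e-9)):
    print(f"{name}: {pg_diffraction_angle(lam, Lambda):.1f} deg")
```

The computed values match the 10.3°, 8.8°, and 7.4° quoted for R, G, and B, and the 9.0° measured at 550 nm.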


Fig. 9. Geometric correction for R, G, and B images. Optical path (a) without correction and (b) with correction.


5. Experimental results and discussion

To verify the effectiveness of the proposed method, wedge charts were reproduced every 10 mm at depth positions from 10–100 mm in front of the screen. The results of re-shooting the 3D images with an additional diffuser film placed at the reproduced position are shown in Fig. 10(a). Note that the wobbling function was not used in this experiment, so that the effect of the image-shift optical device on the resolution characteristics could be compared in isolation. As the viewpoint interval is halved by incorporating the image-shift optical device, the diffusion angle of the diffusion screen required to fill the gap between viewpoints is also halved. Therefore, in this experiment, the diffusion angle with image shifting was set to 1.0$^\circ$, and the diffusion angle without image shifting was set to 2.0$^\circ$. Figure 10(b) shows the results of evaluating the resolution characteristics in the depth direction based on the re-shooting results in Fig. 10(a). In this figure, the curves represent the upper spatial frequency $\gamma$ described in Section 2, and the plots represent the observation spatial frequency at a modulation of 0.05 obtained from the experiments. In the experimental results without image shifting in Fig. 10(b), the 3D images displayed at depths of 70 mm or deeper could not be evaluated because the observed spatial frequency is aliased. From these results, it can be observed that the resolution characteristics of 3D images displayed at a depth of 30 mm or more are improved by employing the image-shift optical device compared to the case without it. However, the resolution characteristics of 3D images displayed at depth positions of 20 mm or less are better when the proposed method is not used. These depth positions are those where an improvement of the resolution characteristics is theoretically not expected or difficult to obtain, as shown in Section 2.
Another possible degradation factor is the response characteristics of the polarization switching device used to control the image-shift optical device. A state in which the polarization is not completely switched can cause ghosting in 3D images; thus, it can be inferred that the resolution characteristics become lower than when the proposed method is not employed. However, as the spatial frequency of the experimental results with the image-shift optical device is higher than the theoretical upper spatial frequency without it, higher-resolution 3D images can be reproduced even at depth positions where blur originally occurs, which demonstrates the effectiveness of the proposed method. A resolution improvement of up to 1.58 times was achieved by employing the proposed method. The theoretical improvement is a factor of two; however, the improvement in the experimental results is less than that. One reason is the imperfect polarization of the light projected from the projector: incompletely polarized light causes viewpoint images that should be displayed at only one viewpoint to appear at other viewpoints, which results in ghosted 3D images. In addition, the images projected from the projector must be geometrically corrected in advance, and such compensation was conducted in this study; the accuracy of this correction is also considered to restrict the improvement.


Fig. 10. Results of 3D images displayed at different depth positions. (a) Re-shooting results of 3D image displayed at each depth position and (b) evaluation results of the characteristics based on (a).


Figure 11 shows the results of re-shooting the resolution characteristics of the 3D images observed near the center of the viewing zone. The purpose of this experiment was to confirm the enhancement of the 3D image resolution by the wobbling device; therefore, the time-division method with the image-shift optical device was not used. Aliasing occurred at 600 to 800 scan lines when wobbling was not applied, as shown in Figs. 11(b) and 11(c). Note that Figs. 11(b) and 11(c) show the re-shooting results when the polarization switching device for wobbling is fixed with no voltage applied and with voltage applied, respectively. With wobbling, there is no aliasing in this range (see Fig. 11(d)), and a high-resolution 3D image can be reproduced. Full HD resolution could not be accurately observed for two reasons: the spatial frequency is dominated by the lens pitch of the diffusion screen, which is larger than the pixel size on the display surface, and the improvement from wobbling is limited by the aperture ratio of the LCOS device in the projector.
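The aliasing behavior in Fig. 11 can be illustrated with a simple one-dimensional sampling model. The sketch below (Python) treats wobbling as doubling the effective scan-line sampling rate via the half-pixel temporal shift; the line counts are illustrative assumptions, not the prototype's exact values.

```python
def aliased_frequency(f, fs):
    """Frequency observed after sampling a pattern of frequency f at rate fs
    (folding about the Nyquist frequency fs / 2)."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

fs_static = 1080.0         # illustrative scan-line sampling rate without wobbling
fs_wobble = 2 * fs_static  # half-pixel shift in time division doubles the rate

for f in (600.0, 700.0, 800.0):  # chart frequencies where aliasing was observed
    print(f"{f:5.0f} lines: static -> {aliased_frequency(f, fs_static):5.0f}, "
          f"wobbling -> {aliased_frequency(f, fs_wobble):5.0f}")
```

With the static rate, patterns in the 600 to 800 line range fold back below the Nyquist limit, as in Figs. 11(b) and 11(c); doubling the rate reproduces them directly, as in Fig. 11(d).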


Fig. 11. Displayed 3D chart images on screen. (a) Images projected from the projector. Re-shooting images of the 3D image displayed without wobbling (b) when the polarization switching device is fixed with no voltage applied and (c) when it is fixed with voltage applied. (d) Re-shooting image of the 3D image displayed with wobbling.


Fig. 12. Displayed 3D image from different viewpoints.


Fig. 13. Result of confirming that the optical image is formed in space. Additional diffuser film is used to show the imaging position of the optical image. (a) Displayed 3D image when no diffuser film is placed. (b) Re-shooting result when the diffuser film is placed at the "D" position displayed 10 mm in front of the screen. Only "D" is in focus, and it can be confirmed that the optical image of "D" is formed at this position. (c) Re-shooting result when the diffuser film is moved to the "3" position displayed 100 mm in front of the screen. Only "3" is in focus, and it can be confirmed that the optical image of "3" is formed at this position.


Next, live-action 3D images displayed by the prototype display system were re-shot from various angles to confirm the effectiveness of the proposed time-division light ray quadruplexing technology; the results are shown in Fig. 12. The positional relationship between the woman's face, the paper balloon, and the lighting changes according to the observation position, which indicates that 3D images with both horizontal and vertical motion parallax can be reproduced. Thus, binocular parallax is maintained even when the observer's head is tilted, which suggests that the observer can recognize the correct front-back relationship anywhere in the viewing zone regardless of posture. Figure 13 shows the reproduction of a 3D image in space. Figure 13(a) shows the result of re-shooting an image displaying a woman whose left and right hands are at different depth positions. Figures 13(b) and 13(c) show the results when an additional diffuser film, used to confirm the imaging position of the optical image, was placed 10 mm and 100 mm in front of the screen, respectively. The "D" object in the left hand is in focus when the diffuser film is placed at 10 mm, and the "3" object in the right hand is in focus when the film is moved to 100 mm. Therefore, it is confirmed that an optical image is formed in space. Because the image is observed as if the 3D image were actually formed near the screen, the display system is expected to cause less eye strain. Furthermore, it was confirmed that the quality degradation caused by flipping can be reduced, while preventing the occurrence of flicker, by increasing the number of viewpoints using the image-shift optical device (see Visualization 2).
Therefore, we believe that the quadruplexing technology proposed in this study can enhance the quality of 3D images and make them appear more real. However, color unevenness was observed at certain observation positions. This can be caused by individual differences in the optical characteristics of the imaging lens array, lens aberration of the condenser lenses, or differences in the diffusion characteristics of the diffusion screen with respect to the incident angle. It is necessary to improve the lens design and the characteristics of the diffusion screen to achieve high-quality 3D images.

6. Conclusion

In this study, we proposed a flicker-reduced time-division light ray quadruplexing technology to improve both the spatial and angular resolutions of 3D images. This technology uses an image-shift optical device with two PGs, and the viewpoint image can be shifted by an arbitrary amount by adjusting the distance between the two PGs. We experimentally confirmed that the resolution of 3D images with depth can be improved by up to 1.58 times by shifting the image by half the viewpoint interval. Furthermore, we developed a 120 Hz 8K projector that utilizes a high-frame-rate LCOS system, which enables high-resolution video display, and prototyped a display system that applies the proposed time-division light ray quadruplexing technology. Consequently, we confirmed that a single projector can realize a high-resolution 3D display and can display 3D images at a depth of 100 mm in front of the screen. In principle, the resolution characteristics in the depth direction of the prototype display system are almost symmetrical with respect to the display screen plane; thus, 3D images can be displayed in a depth range of $\pm$100 mm. The proposed quadruplexing technology using the wobbling and image-shift optical devices is an efficient time-division method because it can enhance the resolution of the reproduced image at all depth positions while preventing the occurrence of flicker using a wavelength-selective $\lambda$/2 plate. Therefore, this technology is expected to contribute to the realization of ideal 3D displays. In the future, we plan to reduce the quality degradation of 3D images by designing lenses with lower aberration and by reducing the dependence of the diffusion characteristics of the diffusion screen on the incident angle. In addition, we will further study multiplexing methods for the time-division technology to realize higher-quality 3D image displays.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. E. A. Edirisinghe and J. Jiang, “Stereo imaging, an emerging technology,” International Conference on Advances in Infrastructure for Electronic Business, Science, and Education on the Internet (SSGRR-2000), 2000.

2. O. Cakmakci and J. Rolland, “Head-worn displays: A review,” J. Disp. Technol. 2(3), 199–216 (2006).

3. X. Hu and H. Hua, “High-resolution optical see-through multi-focal-plane head-mounted display using freeform optics,” Opt. Express 22(11), 13896–13903 (2014).

4. O. Mercier, Y. Sulai, K. Mackenzie, M. Zannoli, J. Hillis, D. Nowrouzezahrai, and D. Lanman, “Fast gaze-contingent optimal decompositions for multifocal displays,” ACM Transactions on Graphics 36(6), 1–15 (2017).

5. G. J. Lv, W. X. Zhao, D. H. Li, and Q. H. Wang, “Polarizer parallax barrier 3D display with high brightness, resolution and low crosstalk,” J. Disp. Technol. 10(2), 120–124 (2014).

6. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456 (2013).

7. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “Rendering for an interactive 360° light field display,” ACM Transactions on Graphics 26(3), 40–49 (2007).

8. T. Balogh and P. T. Kovacs, “Real-time 3D light field transmission,” Proc. SPIE Photonics Europe, Brussels (2010).

9. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting,” ACM Transactions on Graphics 31(4), 1–11 (2012).

10. X. Xia, X. Liu, H. Li, Z. Zheng, H. Wang, Y. Peng, and W. Shen, “A 360-degree floating 3D display based on light field regeneration,” Opt. Express 21(9), 11237–11247 (2013).

11. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997).

12. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37(11), 2034–2045 (1998).

13. M. Shiokawa, K. Toyotaka, M. Tsubuku, K. Sugimoto, M. Nakashima, S. Matsuda, H. Shishido, T. Aoyama, H. Ikeda, S. Eguchi, S. Yamazaki, M. Nakada, T. Sato, T. Abe, and J. Koezuka, “A 1058 ppi 8K4K OLED display using a top-gate self-aligned CAAC oxide semiconductor FET,” SID Symposium Digest of Technical Papers 47(1), 1209–1212 (2016).

14. J. Arai, F. Okano, M. Kawakita, M. Okui, Y. Haino, M. Yoshimura, M. Furuya, and M. Sato, “Integral three-dimensional television using a 33-megapixel imaging system,” J. Disp. Technol. 6(10), 422–430 (2010).

15. N. Okaichi, M. Miura, J. Arai, M. Kawakita, and T. Mishina, “Integral 3D display using multiple LCD panels and multi-image combining optical system,” Opt. Express 25(3), 2805–2817 (2017).

16. H. Watanabe, M. Kawakita, N. Okaichi, H. Sasaki, and T. Mishina, “Integral imaging system using locally controllable point light source array,” IS&T International Symposium on Electronic Imaging 2017 SD&A-247, 247.1–247.5 (2017).

17. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18(9), 8824–8835 (2010).

18. K. Nagano, A. Jones, J. Liu, J. Busch, X. Yu, M. Bolas, and P. Debevec, “An autostereoscopic projector array optimized for 3D facial display,” in Proceedings of ACM SIGGRAPH 2013 Emerging Technologies, No. 3 (2013).

19. C. K. Lee, S. G. Park, S. Moon, J. Y. Hong, and B. Lee, “Compact multi-projection 3D display system with light-guide projection,” Opt. Express 23(22), 28945 (2015).

20. Y. Peng, H. Li, Q. Zhong, X. Xia, and X. Liu, “Large-sized light field three-dimensional display using multi-projectors and directional diffuser,” Opt. Eng. 52(1), 1 (2013).

21. J. H. Lee, J. Park, D. Nam, S. Y. Choi, D. S. Park, and C. Y. Kim, “Optimal projector configuration design for 300-Mpixel multi-projection 3D display,” Opt. Express 21(22), 26820–26835 (2013).

22. M. Kawakita, S. Iwasawa, R. L. Gulliver, and N. Inoue, “Glasses-free large-screen three-dimensional display and super multi-view camera for highly realistic communication,” Opt. Eng. 57(06), 1 (2018).

23. D. Teng, L. Liu, and B. Wang, “Super multi-view three-dimensional display through spatial-spectrum time-multiplexing of planar aligned OLED microdisplays,” Opt. Express 22(25), 31448–31457 (2014).

24. X. Xia, X. Zhang, L. Zhang, P. Surman, and Y. Zheng, “Time-multiplexing multi-view three-dimensional display with projector array and steering screen,” Opt. Express 26(12), 15528–15538 (2018).

25. J. Arai, M. Kawakita, T. Yamashita, H. Sasaki, M. Miura, H. Hiura, M. Okui, and F. Okano, “Integral three-dimensional television with video system using pixel-offset method,” Opt. Express 21(3), 3474–3485 (2013).

26. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15(8), 2059–2065 (1998).

Supplementary Material (3)

Supplement 1: Supplemental document on the diffusion screen, polarization grating, and control method of the polarization switching device.

Visualization 1: This movie shows how the image is shifted when the polarization switching device is turned on and off.

Visualization 2: This movie shows that the quality degradation caused by flipping can be reduced with the proposed method by increasing the number of viewpoints using the image-shift optical device.





Tables (2)

Table 1. Specifications of prototype display system.

Table 2. Theoretical characteristics of displayed 3D image.

Equations (12)

$$D = \frac{1}{\frac{1}{f_V} - \frac{1}{f_S}},$$

$$\theta_x = \arctan\left(\frac{p_x}{f_S}\right) \approx \frac{p_x}{f_S}.$$

$$\beta = \frac{\alpha (D - z)}{|z|},$$

$$\frac{1}{\alpha} \geq 2\max(\theta_x,\ \theta_d),$$

$$\beta \leq \frac{D - z}{2\theta_d |z|} = \beta_{max},$$

$$\beta_{nyq} = \frac{D}{2w}.$$

$$\gamma = \min(\beta_{max},\ \beta_{nyq}) = \min\left(\frac{D - z}{2\theta_d |z|},\ \frac{D}{2w}\right).$$

$$\sin\theta_1 - \sin\theta_0 = \frac{m\lambda}{\Lambda_x},$$

$$\begin{pmatrix}\Delta s_x \\ \theta_2\end{pmatrix} = \begin{pmatrix}1 & f_I \\ 0 & 1\end{pmatrix}\begin{pmatrix}x_0 \\ \theta_0\end{pmatrix} + \begin{pmatrix}1 & l+\Delta z \\ 0 & 1\end{pmatrix}\begin{pmatrix}0 \\ \frac{m_1\lambda}{\Lambda_x}\end{pmatrix} + \begin{pmatrix}1 & \Delta z \\ 0 & 1\end{pmatrix}\begin{pmatrix}0 \\ \frac{m_2\lambda}{\Lambda_x}\end{pmatrix},$$

$$\begin{pmatrix}\Delta s_x \\ \theta_2\end{pmatrix} = \begin{pmatrix}1 & f_I \\ 0 & 1\end{pmatrix}\begin{pmatrix}x_0 \\ \theta_0\end{pmatrix} + \begin{pmatrix}\pm\frac{l\lambda}{\Lambda_x} \\ 0\end{pmatrix}.$$

$$|\Delta s_x| = \frac{l\lambda}{\Lambda_x} = \frac{1}{4}p_x.$$

$$l = \frac{\Lambda_x}{4\lambda}p_x = \frac{\Lambda}{4\lambda\cos\phi}p_x.$$
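The final relation, which sets the PG separation $l$ for a quarter-pixel ($p_x/4$) image shift, can be checked numerically. The sketch below (Python) uses illustrative parameter values only, not the prototype's actual grating period, wavelength, or pixel pitch.

```python
import math

def pg_separation(pitch_um, wavelength_nm, pixel_um, phi_deg=0.0):
    """PG separation l = Lambda_x * p_x / (4 * lambda), with the horizontal
    grating period Lambda_x = Lambda / cos(phi) for a grating tilted by phi."""
    Lambda = pitch_um * 1e-6    # grating period [m]
    lam = wavelength_nm * 1e-9  # wavelength [m]
    p_x = pixel_um * 1e-6       # pixel pitch on the display surface [m]
    return (Lambda / math.cos(math.radians(phi_deg))) * p_x / (4.0 * lam)

# Illustrative check: the resulting shift l * lambda / Lambda_x equals p_x / 4.
l = pg_separation(pitch_um=5.0, wavelength_nm=532.0, pixel_um=10.0)
```

For these illustrative values the required separation is a few tens of micrometers; in practice $l$ would be tuned so that the measured shift equals half the viewpoint interval.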