High-performance autostereoscopic display based on the lenticular tracking method

Open Access

Abstract

We propose a novel full-parallax autostereoscopic display based on a lenticular tracking method to decouple the viewing angle from the image resolution and to improve both parameters simultaneously. The proposed method makes the viewing angle independent of the image resolution and has the potential to resolve the long-standing trade-off problem in integral photography. By replacing the micro-lens array of integral photography with a lenticular lens array and adding viewpoint tracking, the proposed method achieves a full-parallax 3D display with high image resolution and a wide viewing angle. A real-time tracking and rendering algorithm for the display method is also proposed in this study. The experimental results, compared with those of the conventional integral photography display and the tracking-based integral photography display, demonstrate the feasibility of this lenticular tracking display technology and its advantages in display resolution and viewing angle, suggesting its potential in practical three-dimensional applications.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In recent years, three-dimensional (3D) display technology has drawn increasing attention due to its applications in entertainment, industry and medicine. Among the various kinds of 3D display systems, stereoscopic 3D display technology has the advantages of high quality and low cost. Thus, it has been widely used in cinema, television and game consoles. However, visual fatigue, which is related to the accommodation-vergence conflict, parallax distribution, binocular mismatches, depth, and cognitive inconsistencies, may restrict its application scenarios [1].

A promising alternative for 3D displays is the autostereoscopic display, such as the integral photography display [2]. Integral photography display, also called integral imaging, uses a micro-lens array to provide 3D images, offering a full-parallax, full-color, real-time animated display. Despite the merits of integral photography display in terms of low visual fatigue and the minimal additional equipment demanded of observers, its unsatisfactory display quality has constrained the use of integral photography display technology in real-life applications. The primary challenge is the trade-off [3] between the image resolution and the viewing angle, because the 2D visual information has to be divided into elemental images from different viewpoints in the integral photography display.

Sustained efforts have been made to address the trade-off between the image resolution and viewing angle in integral photography display. Most of the existing research has focused on improving one of the two parameters. For example, a larger viewing angle can be achieved by adding a periodic black mask [4] or a high refractive index medium [5], or by designing new optical structures, such as a fiber-coupled monocentric lens array [6], three lens arrays [7] and embossed screens [8]. Other research has focused on improving the image resolution, including time multiplexing of displays [9,10], spatial multiplexing of multiple projectors [11], and using a holographic diffuser [12]. However, to the best of our knowledge, none of these approaches has fundamentally solved the trade-off between the viewing angle and image resolution.

The interdependence between the viewing angle and image resolution in the traditional integral photography display is the crucial obstacle to solving this problem. Viewpoint tracking technology can track the viewer's position and display a corresponding image to expand the viewing angle of the integral photography display [13–15]. Furthermore, we employ viewpoint tracking technology to separate the viewing angle from the image resolution, where the viewing angle mainly depends on the viewpoint tracking device, and the image resolution mainly depends on the autostereoscopic display device. Since viewpoint tracking technology can provide the full-parallax display capability, we can replace the micro-lens array of the conventional integral photography display with a lenticular lens array to take full advantage of the parallax information and achieve 3D images with better resolution.

In this study, we propose a novel full-parallax autostereoscopic display method to achieve separation between the viewing angle and image resolution, and to improve these two parameters simultaneously. This method combines lenticular display technology and viewpoint tracking technology, where the viewing angle mainly depends on the viewpoint tracking device and the image resolution mainly depends on the lenticular display device. We believe this method is able to provide a better 3D image than the conventional integral photography display and the tracking based integral photography display [14]. A real-time tracking and rendering algorithm for the display method is also proposed in this study. A series of experiments was conducted to evaluate the performance of the proposed method; the results show the feasibility of the lenticular tracking method, suggesting its potential as a candidate for practical 3D display applications.

2. System configuration and proposed methods

2.1. System configuration of the lenticular tracking display

In the proposed lenticular tracking method, we combine lenticular display technology with viewpoint tracking technology to achieve the separation between the viewing angle and image resolution. The conventional autostereoscopic display simultaneously provides motion parallax and binocular parallax, and all of the parallax information is obtained from the optical elements. By contrast, we employ viewpoint tracking technology to provide motion parallax, which determines the viewing angle of the 3D image. For a particular viewpoint, the optical elements of the lenticular display provide binocular parallax, determining the minimum lenticular lens pitch, which relates to the 3D image resolution. Because binocular parallax and motion parallax are provided independently by different hardware, the separation of the viewing angle and image resolution is achieved in our proposed method.

The lenticular tracking display consists of a lenticular display device and a depth camera (Fig. 1). The lenticular display device comprises a liquid crystal display (LCD) panel and a lenticular lens array. The lenticular lens array distributes the pixels from the LCD panel to multiple viewpoints. Thus, the lenticular display device provides a virtual 3D image with horizontal parallax, which provides binocular parallax for the viewer. The depth camera tracks the viewer’s head, and we dynamically render the elemental images as the viewer’s position changes. Thus, the viewer can obtain the 3D sense with binocular parallax from the lenticular display device and motion parallax from the viewpoint tracking technology.

Fig. 1 The system configuration of the 3D lenticular tracking display.

2.2. Analysis of the interdependence between the viewing angle and image resolution

The value of the lenticular tracking display is that it addresses the trade-off between the image resolution and viewing angle so that both parameters can be significantly improved. In this section, we analyze the relationship between the viewing angle and image resolution. We find that the viewing angle is mainly determined by the shooting angle of the depth camera when the lenticular lens pitch is sufficiently small, and that the maximum image resolution depends on the viewing distance and the tracking error on the x-axis.

The definition of the lenticular tracking display’s image resolution is consistent with that of the lenticular display. However, the viewing angle’s definition is entirely different. The vertical viewing angle is defined by the depth camera’s tracking range, which is slightly smaller than the shooting angle of the depth camera. Outside the tracking range, an unflipped 3D image can still be observed; however, the displayed 3D model’s pose is incorrect, and the pose does not change as the viewer moves beyond the boundary. The horizontal viewing angle is the depth camera tracking range plus the viewing angle of the lenticular display, which can be defined through the following formula:

$$\theta_v = \theta_{tr} + \theta_l, \tag{1}$$
where $\theta_{tr}$ is the horizontal tracking angle of the depth camera, $\theta_l$ is the viewing angle of the lenticular display, and $\theta_v$ is the viewing angle of the whole system. The viewing angle of the lenticular display depends on the lenticular lens pitch and the gap between the lenticular lens array and the LCD. When the image resolution rises, the lenticular lens pitch becomes quite small, and the viewing angle of the lenticular display is also quite small. Therefore, the horizontal viewing angle of the lenticular tracking display mainly depends on the horizontal tracking angle of the depth camera when the 3D image has a high resolution. Within the depth camera’s tracking range, the elemental images move as the position of the viewer changes, so that the viewer remains in the visible range of the lenticular display in the horizontal direction. The viewer may move a small distance beyond the tracking range and still observe the 3D image without flipping, owing to the viewing range of the lenticular display. Unlike in the vertical direction, however, exceeding the viewing range in the horizontal direction causes the 3D image to flip.

Since the viewing angle is no longer limited by the lenticular lens pitch, we can use a lenticular lens array with a sufficiently short lens pitch to achieve a high-resolution display. For the fixed focus display model, the lens pitch is the most important factor affecting the image display resolution. The image resolution [16] R(D) is defined by:

$$R(D) = \frac{1}{\,l_{pitch} + l_{pixel}\,|D| / d_{gap}\,}, \tag{2}$$
where $D$ is the distance between the image plane and the lenticular lens array, $l_{pitch}$ is the lenticular lens pitch, $l_{pixel}$ is the pixel pitch, and $d_{gap}$ is the distance between the lenticular lens array and the LCD panel. The relationship among the lens pitch, the viewing distance, and the viewer’s pupil distance can be expressed by the following formula:
$$l_{pitch} = \frac{l_{view}\, d_{gap}}{d_{view} - d_{gap}}, \tag{3}$$
where $l_{view}$ is the length of the viewing area, and $d_{view}$ is the viewing distance. Then, we can obtain Eq. (4) as:
$$R(D) = \frac{1}{\dfrac{l_{view}\, d_{gap}}{d_{view} - d_{gap}} + \dfrac{l_{pixel}\,|D|}{d_{gap}}}. \tag{4}$$
In the lenticular tracking display, the image resolution is no longer constrained by the viewing angle, but the observer's binocular parallax still needs to be considered, as shown in Fig. 2(a). The viewer's eyes should receive the correct viewpoint images at a given viewing distance to ensure that 3D image flipping does not occur. Thus, the distance ($l_{pupil}$) between the human pupils should be smaller than the length of the viewing area.
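
To make Eq. (4) concrete, the short Python sketch below evaluates the maximum achievable image resolution when the viewing-area width is shrunk to the pupil distance. The pixel pitch follows from the 264 PPI panel described in Section 3.1, while the 65 mm pupil distance and the example viewing parameters are illustrative assumptions rather than values taken from this paper.

```python
# Minimal sketch of Eq. (4): maximum image resolution of the lenticular
# tracking display when the viewing-area width l_view is shrunk to the
# pupil distance. Pupil distance and example values are assumptions.

def max_image_resolution(D_mm, d_view_mm, d_gap_mm,
                         l_pixel_mm=25.4 / 264,   # 264 PPI panel -> ~0.096 mm
                         l_pupil_mm=65.0):        # assumed interpupillary distance
    """Resolvable spatial frequency R(D), in 1/mm, at image depth D (Eq. (4)).

    Setting l_view = l_pupil (the smallest viewing area that still covers both
    eyes) gives the smallest usable lens pitch via Eq. (3), hence the highest R.
    """
    l_pitch = l_pupil_mm * d_gap_mm / (d_view_mm - d_gap_mm)    # Eq. (3)
    return 1.0 / (l_pitch + l_pixel_mm * abs(D_mm) / d_gap_mm)  # Eq. (2)/(4)

# Example: image plane on the lens array (D = 0) vs. 20 mm away from it,
# viewed from 1 m with a 3 mm equivalent gap.
print(max_image_resolution(0.0, 1000.0, 3.0))    # ~5.1 1/mm (~0.2 mm features)
print(max_image_resolution(20.0, 1000.0, 3.0))   # ~1.2 1/mm
```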

Fig. 2 (a) Optical model of the viewing area; simulation of the image resolution, the gap and the viewing distance with the parameter D equal to (b) 0 mm, (c) 10 mm, and (d) 20 mm.

In the simulation, we examine the relationship among the image resolution, the gap between the LCD panel and the lenticular lens array, and the viewing distance. To obtain the maximum image resolution, we assume that the length of the viewing area is equal to the distance between the pupils. The simulation results obtained in MATLAB are shown in Figs. 2(b)–2(d). From the simulation, we can infer that the maximum image resolution at a given image depth increases as the viewing distance increases, and that the gap between the lenticular lens array and the LCD panel affects the resolution differently at different image depths.
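
A sketch of this parameter sweep, reimplemented in Python with NumPy and Matplotlib rather than MATLAB, is given below. The sweep ranges and the assumed 65 mm pupil distance are illustrative and not taken from the paper.

```python
# Sketch of the sweep behind Figs. 2(b)-(d): maximum resolution R(D) as a
# function of the lens-array-to-LCD gap and the viewing distance, for three
# image depths D. Ranges and pupil distance are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

l_pixel = 25.4 / 264                        # pixel pitch of a 264 PPI panel, mm
l_pupil = 65.0                              # assumed interpupillary distance, mm
d_gap = np.linspace(1.0, 6.0, 100)          # gap between lens array and LCD, mm
d_view = np.linspace(300.0, 1500.0, 100)    # viewing distance, mm
G, V = np.meshgrid(d_gap, d_view)

for D in (0.0, 10.0, 20.0):                 # image depth relative to lens array, mm
    l_pitch = l_pupil * G / (V - G)                   # Eq. (3) with l_view = l_pupil
    R = 1.0 / (l_pitch + l_pixel * np.abs(D) / G)     # Eq. (4), in 1/mm
    plt.figure()
    plt.pcolormesh(G, V, R, shading='auto')
    plt.colorbar(label='maximum image resolution R(D) [1/mm]')
    plt.xlabel('gap between lens array and LCD [mm]')
    plt.ylabel('viewing distance [mm]')
    plt.title(f'D = {D:g} mm')
plt.show()
```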

We now discuss the impact of tracking errors on the 3D display. In our algorithm, we assume that the center of the viewing area is set at the center point of the viewer’s eyes, and that the distance ($l_{pupil}$) between the human pupils is smaller than the length ($l_{view}$) of the viewing area. An incorrect tracking result on the x-axis may push one of the eyes out of the viewing area and cause 3D image flipping. When we use a lenticular lens array with a small lens pitch to obtain a high image resolution, the viewing area becomes smaller and we incur the risk that the tracking error on the x-axis may cause 3D image flipping. Thus, we can define a threshold ($l_{threshold}$) on the tracking error within which 3D image flipping does not occur:

$$l_{threshold} = \frac{l_{view} - l_{pupil}}{2}. \tag{5}$$
From Eq. (5), we can infer that the larger the tracking error on the x-axis, the larger the viewing area must be, and hence the smaller the maximum image resolution.
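
The following sketch evaluates Eq. (5) for a given lens pitch, recovering the viewing-area width from Eq. (3). The 65 mm pupil distance and the 1 m viewing distance are assumptions, so the printed thresholds land close to, but not exactly on, the 133.8 mm and 23.2 mm values quoted in Section 3.2.

```python
# Sketch of Eq. (5): the largest x-axis tracking error that keeps both eyes
# inside the viewing area. The 65 mm pupil distance is an assumption.

def flip_threshold(l_pitch_mm, d_view_mm, d_gap_mm, l_pupil_mm=65.0):
    l_view = l_pitch_mm * (d_view_mm - d_gap_mm) / d_gap_mm  # inverse of Eq. (3)
    return (l_view - l_pupil_mm) / 2.0                       # Eq. (5)

# Prototype lens pitch (1.016 mm) vs. a much finer pitch (0.35 mm), both with
# a 3 mm equivalent gap at a 1 m viewing distance.
print(flip_threshold(1.016, 1000.0, 3.0))  # ~136 mm (Section 3.2 reports 133.8 mm)
print(flip_threshold(0.35, 1000.0, 3.0))   # ~26 mm  (Section 3.2 reports 23.2 mm)
```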

2.3. Real-time viewpoint-tracking and rendering method

We propose a real-time tracking and rendering algorithm for the lenticular tracking display. In the proposed algorithm, the position of the observer’s face is calculated in real time from the depth camera’s data. According to the face position, we adjust the settings in the multiple-viewpoint rendering method so that the elemental images are rendered for the corresponding viewpoint position. The basic workflow is shown in Fig. 3(a). For real-time tracking and rendering, we divide the face tracking process and the multiple-viewpoint rendering process into two sub-threads. In the face tracking sub-thread, we extract the depth camera video stream, use Haar features and local binary pattern features to identify faces, use the OpenCV default face classifier for face detection, and estimate the viewpoint position as the central position of the viewer’s eyes. However, this method of face tracking has some repetition error, which causes the displayed 3D image to tremble. To stabilize the 3D image, we further introduce temporal filtering to reduce the repetition error. The temporal filter is defined as follows:

$$y_k = \frac{1}{5}\sum_{i=0}^{4} x_{k-i}. \tag{6}$$
We average the tracking results of the five most recent frames to obtain stable, low-delay tracking results.
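
A minimal sketch of the face-tracking sub-thread is shown below, using OpenCV's default Haar-cascade face detector and the five-frame moving average of Eq. (6). Reading the camera through cv2.VideoCapture and the helper names are assumptions; the prototype uses the RealSense SDK and additionally exploits depth data and LBP features.

```python
# Sketch of the face-tracking sub-thread: Haar-cascade face detection,
# viewpoint estimated as the face centre, smoothed with Eq. (6).
from collections import deque
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
history = deque(maxlen=5)            # x_{k-4} ... x_k for the temporal filter

def track_viewpoint(frame_bgr):
    """Return the filtered (u, v) image position of the viewer's face centre."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                  # keep the previous viewpoint on a miss
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detected face
    history.append(np.array([x + w / 2.0, y + h / 2.0]))
    return np.mean(history, axis=0)  # Eq. (6): five-frame moving average

cap = cv2.VideoCapture(0)            # stand-in for the depth camera's RGB stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    viewpoint = track_viewpoint(frame)
    # ... hand `viewpoint` to the multiple-viewpoint rendering sub-thread ...
```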

Fig. 3 (a) The flow chart of the real-time viewpoint tracking and rendering algorithm for the lenticular tracking display; (b) the calculation of model rotation when the viewer moves; (c) the calculation of the elemental image shift when the viewer moves.

In the multiple-viewpoint rendering sub-thread, we use a multiple-viewpoint rendering method for the lenticular display and add the model rotation and elemental image shift based on the results of face tracking. This multiple-viewpoint rendering method is based on the traditional integral photography rendering algorithm [17,18], which uses a virtual camera array to capture multi-view images. In this sub-thread, we rotate the model to obtain the multi-viewpoint map corresponding to the viewer’s position, as shown in Fig. 3(b). The angle by which the model rotates is calculated by the following formula:

$$\theta_{rot} = \arctan\!\left(\frac{l_{move}}{d_{view}}\right), \tag{7}$$
where $l_{move}$ is the displacement of the viewer, and $d_{view}$ is the distance between the observer and the LCD panel. Then, we resample the viewpoint image array to obtain the corresponding elemental images. When the viewer is moving horizontally, the position of the elemental images under the lenticular lens array also needs to be shifted to ensure that the viewer’s position is always at the center of the viewing zone, as shown in Fig. 3(c). The elemental image shift is calculated by the following formula:
$$x_{offset} = \frac{l_{move}\, d_{gap}}{d_{view} - d_{gap}}, \tag{8}$$
where $d_{gap}$ represents the distance between the lenticular lens array and the LCD panel.
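
The per-frame rendering parameters of Eqs. (7) and (8) can be computed as in the sketch below; the function name and the example numbers are illustrative, not the paper's implementation.

```python
# Sketch of Eqs. (7) and (8): the model rotation that re-aims the virtual
# camera array at the tracked viewer, and the lateral elemental-image shift
# that keeps the viewer centred in the viewing zone.
import math

def rotation_and_shift(l_move_mm, d_view_mm, d_gap_mm):
    """l_move: viewer displacement from the display axis (mm);
    d_view: viewer-to-LCD distance (mm); d_gap: lens-array-to-LCD gap (mm)."""
    theta_rot = math.atan2(l_move_mm, d_view_mm)               # Eq. (7), radians
    x_offset = l_move_mm * d_gap_mm / (d_view_mm - d_gap_mm)   # Eq. (8), mm
    return theta_rot, x_offset

# Example: a viewer 120 mm off-axis at 1 m, with a 3 mm equivalent gap.
theta, dx = rotation_and_shift(120.0, 1000.0, 3.0)
print(math.degrees(theta))   # ~6.8 degrees of model rotation
print(dx)                    # ~0.36 mm elemental-image shift
```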

3. Experiments and results

3.1. Experiment setup

We built a prototype to validate the proposed method. As shown in Fig. 4, we constructed a lenticular display device comprising a lenticular lens array (lens pitch 1.016 mm, thickness 3.8 mm) and an LCD panel (2048 × 1536 pixels, 264 PPI). To avoid the color moiré pattern [19], we set the lenticular lens array and the LCD panel at a relative inclination of 30°. The lens direction in the lenticular lens array was perpendicular to the ground. A depth camera (Intel RealSense SR300, maximum tracking angle of 48° (H) × 40° (V)) was placed above the lenticular display device to track the viewer.

Fig. 4 The prototype of the 3D lenticular tracking display.

To display an accurate 3D image, hardware calibration is critical. Uncalibrated hardware causes problems in the 3D image display and in real-time rendering as the viewer moves. The three steps of the calibration process are as follows:

  • 1. The lenticular lens array and the LCD panel need to be set at the preset relative inclination angle. In addition, the center of the lenticular lens array should be aligned with the center of the LCD panel.
  • 2. The coordinate center of the depth camera needs to be placed on the central axis of the LCD panel. Moreover, the x-, y- and z-axes of the two coordinate systems must be kept parallel; otherwise, the motion parallax is adversely affected.
  • 3. The equivalent gap between the lenticular lens array and the LCD in the rendering algorithm is calibrated according to the elemental image offset. According to Fan's paper [20], the material of the lenticular lens array substrate affects the direction of the light. We therefore set an equivalent gap instead of the real gap to avoid 3D image flipping within the tracking range. The actual gap is greater than 3.8 mm, and the equivalent gap after calibration is 3 mm.

3.2. The evaluation of the viewpoint tracking

We evaluated the accuracy and robustness of the proposed viewpoint tracking algorithm. We adopted an optical tracking system (Polaris Vicra, Northern Digital Inc.) as the gold standard for tracking; its tracking error is less than 0.5 mm. The experimental setup is shown in Fig. 5. The optical tracking system and the depth camera were placed side by side, and the translation and rotation between the two coordinate systems were calibrated using a checkerboard pattern. We used a plastic head model and attached an optical marker to it to obtain accurate tracking from the optical tracking system. In the experiment, we moved the head model at a position approximately 1 m away from the optical tracking system and the depth camera, and obtained its motion trajectory in both systems. The motion trajectories from the depth camera (blue) and the optical tracking system (red) after the coordinate system transformation are shown in Figs. 6(a)–6(c). The average tracking errors on the x-axis, y-axis, and z-axis were 11.7 mm, 14.8 mm, and 13.2 mm, respectively.
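
A sketch of this per-axis error computation is given below; it assumes the rigid transform between the two coordinate systems has already been obtained from the checkerboard calibration, and the variable names are illustrative.

```python
# Sketch of the per-axis error in Section 3.2: map the depth-camera trajectory
# into the optical tracker's frame with the calibrated rigid transform (R, t),
# then take the mean absolute deviation along each axis.
import numpy as np

def per_axis_tracking_error(p_depth, p_optical, R, t):
    """p_depth, p_optical: (N, 3) trajectories in mm; R: 3x3 rotation; t: (3,)."""
    p_mapped = p_depth @ R.T + t               # depth-camera points in tracker frame
    return np.mean(np.abs(p_mapped - p_optical), axis=0)   # (e_x, e_y, e_z) in mm

# err_x, err_y, err_z = per_axis_tracking_error(traj_cam, traj_ndi, R_cal, t_cal)
```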

Fig. 5 The experimental setup of the viewpoint tracking evaluation.

Fig. 6 The motion trajectory on the x-axis (a), y-axis (b), and z-axis (c) from the depth camera (blue) and the optical tracking system (red); the tracking errors on the x-axis (d).

We adopted the failure rate ($P_\tau$) [21] to evaluate the robustness of the proposed viewpoint tracking method:

$$P_\tau = \frac{\left|\{\, t \mid \phi_t > \tau \,\}_{t=1}^{N}\right|}{N}, \tag{9}$$
where $\phi_t$ is the x-axis tracking error in frame $t$, $N$ is the number of frames, and $\tau = l_{threshold}$. The smaller the failure rate, the more robust the viewpoint tracking. The lens pitch of the lenticular lens array we used was 1.016 mm. From Eq. (5), we obtained $l_{threshold} = 133.8$ mm and $P_\tau = 0$; in this case, the tracking errors on the x-axis would not affect the 3D display. If we used a lenticular lens array with a smaller lens pitch (0.35 mm), we would obtain $l_{threshold} = 23.2$ mm and $P_\tau = 9.4\%$. We can infer that, because of the tracking error, 3D image flipping will occasionally occur when the lens pitch is too small.
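
The failure-rate computation of Eq. (9) reduces to counting frames whose x-axis error exceeds the flip threshold, as in the sketch below (a hypothetical helper, not the paper's code).

```python
# Sketch of Eq. (9): fraction of frames whose x-axis tracking error exceeds
# the flip threshold tau = l_threshold from Eq. (5).
import numpy as np

def failure_rate(x_errors_mm, threshold_mm):
    """x_errors_mm: per-frame x-axis tracking errors (mm); threshold_mm: tau."""
    phi = np.abs(np.asarray(x_errors_mm, dtype=float))
    return float(np.count_nonzero(phi > threshold_mm)) / phi.size

# With the 1.016 mm lens pitch (tau = 133.8 mm) no frame fails; with a
# 0.35 mm pitch (tau = 23.2 mm) Section 3.2 reports about 9.4% of frames failing.
# p = failure_rate(x_err, 133.8)
```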

3.3. The comparative experiments on the image resolution

We compared the lenticular tracking display with the traditional integral photography display and the tracking based integral photography display in terms of the image resolution.

We built an integral photography display device, as shown in Fig. 7(d). The hardware parameters of the two display devices were selected to be as consistent as possible. The LCD panel of the integral photography display was the same as that of the lenticular tracking display (2048 × 1536 pixels, 264 PPI), the pitch of the micro-lens array was 1.016 mm, and the focal length was 3 mm. We used a Canon EOS 70D camera to capture the experimental results. We also built a tracking based integral photography display device, as shown in Fig. 7(c). This method is explained in detail in [13,14]. The optical component and display panel were the same as those of the integral photography display device. A depth camera was placed above this display device to track the viewer.

Fig. 7 (a) The experimental setup of the comparative experiments; (b) the lenticular tracking display device; (c) the tracking based integral photography device; (d) the traditional integral photography device.

We implemented the real-time tracking and rendering algorithm described in Section 2 on a computer (Intel Core i7-8700 CPU, NVIDIA 1080 GPU). The frame rate when rendering a virtual brain model (97,000 triangular patches) reached 60 frames per second with viewpoint tracking.

We used a brain model to compare the image quality of these three display methods. The tracking based integral photography display has the same image resolution as the traditional integral photography display because they use the same display device, both consisting of a micro-lens array and an LCD panel. Figure 8(a) is from the lenticular tracking display, and Fig. 8(b) is from the traditional integral photography display and the tracking based integral photography display. By comparing the enlarged views, we can see that the lenticular tracking display provides more detailed information.

Fig. 8 A brain model displayed by the 3D lenticular tracking display (a) and the integral photography display (b).

Then, we evaluated the image resolution of these three display devices using the 1951 USAF resolution test chart. We rendered the elemental images of the resolution test chart so that its 2D image was displayed at a distance of 5 mm from the lens array, to avoid interference from the lens array. The camera was then positioned 1 m from the display device and focused on the 2D image of the resolution test chart. Figures 9(a) and 9(b) show the captured images of the resolution test chart displayed by the lenticular tracking display and the integral photography display (the traditional one and the tracking based one), respectively. The image resolution was determined using the Rayleigh criterion [22]. The peak-to-valley ratio of the stripe patterns with different widths was calculated, and 0.8 was adopted as the threshold. In Fig. 9, the red and blue rectangles represent the unresolved and resolved stripes, respectively, and the numbers next to them are the peak-to-valley ratios of these stripes. As shown in Fig. 9, for the lenticular tracking display the first stripes of the second group could be resolved in the horizontal direction and the fourth stripes of the first group in the vertical direction. According to the corresponding resolution stripe width table, the horizontal resolution of the lenticular tracking display was 0.125 mm and the vertical resolution was 0.198 mm. Similarly, for the integral photography display the fourth stripes of the −1 group were resolved in the horizontal direction and the third stripes of the −1 group in the vertical direction, giving a horizontal resolution of 0.707 mm and a vertical resolution of 0.794 mm. From these quantitative experiments, we can infer that, with consistent hardware parameters, the resolution of the lenticular tracking display is much better than that of the integral photography display.
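
A simplified sketch of the peak-to-valley read-out is given below. It interprets the ratio as valley intensity over peak intensity, so that values below the 0.8 threshold indicate resolved stripes; the exact profile extraction and ratio convention used in the paper are not specified, so both are assumptions here.

```python
# Sketch of a peak-to-valley read-out for a three-bar stripe group from the
# captured resolution chart. Peak/valley picking is deliberately simplified.
import numpy as np

def peak_to_valley_ratio(profile):
    """profile: 1-D intensity samples taken across the dark/bright stripes."""
    p = np.asarray(profile, dtype=float)
    peaks = np.sort(p)[-3:]      # three brightest samples (bright gaps)
    valleys = np.sort(p)[:3]     # three darkest samples (dark bars)
    return valleys.mean() / peaks.mean()

def is_resolved(profile, threshold=0.8):
    # Assumed convention: a washed-out (unresolved) group has a ratio near 1.
    return peak_to_valley_ratio(profile) < threshold
```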

Fig. 9 The resolution chart displayed by the 3D lenticular tracking display (a) and the integral photography display (b).

3.4. The comparative experiments of the viewing angle

We compared the viewing angles of the lenticular tracking display, the traditional integral photography display and the tracking based integral photography display. The experimental setup is shown in Fig. 7(a). In the quantitative experiment, we displayed a virtual skull model on the three display devices. The camera captured photos 1 m from the display device. During the experiment, we used a three-axis translation stage to change the position of the camera in the horizontal and vertical directions and kept the camera pointing toward the center of the display screen. We recorded the camera's viewing angle when the edge of the 3D image flipped; this position was the critical value of the viewing angle. In the evaluation of the lenticular tracking display and the tracking based integral photography display, we placed the camera below a plastic human head model and moved it together with the head model. The depth camera tracked the head model, and the corresponding display image was captured by the camera. To ensure the accuracy of the viewpoint calculation in the rendering procedure, we subtracted 16 cm from the tracking result in the y-axis direction, so that the center of the viewpoints in the rendering procedure was aligned with the center of the camera.

The experimental results of the lenticular tracking display are shown in Fig. 10 and Fig. 11. In the horizontal direction, when the camera was set at viewing angles of −27.5° and 29.0°, the edge of the model flipped; therefore, we inferred that the horizontal viewing angle of the lenticular tracking display was 56.5°. In the vertical direction, the model did not flip even when the viewer moved beyond the depth camera's tracking range (−10.6° and 28.4°, respectively). When the viewer moved outside the tracking range, new elemental images with the correct vertical parallax were no longer generated in the rendering procedure. Although the viewer could still see the 3D virtual image after exceeding the tracking range in the vertical direction, the vertical parallax was not correct.

Fig. 10 The horizontal parallax of the 3D lenticular tracking display (a), the tracking based integral photography display (b) and the traditional integral photography display (c) from different viewing angles.

Fig. 11 The 3D lenticular tracking display shows vertical parallax from different viewing angles.

In the viewing angle evaluation, the horizontal viewing angle of the lenticular tracking display reached 56.5°, and the vertical viewing angle reached 39.0°. The viewing angle of the integral photography display was also evaluated: up to 19.3° in the horizontal direction and 21.2° in the vertical direction. The horizontal viewing angle of the tracking based integral photography display reached 56.5°, and its vertical viewing angle reached 55.2°. We infer that the lenticular tracking display performs much better than the integral photography display in terms of viewing angle, while the tracking based integral photography display has a horizontal viewing angle similar to that of the lenticular tracking display and a larger vertical viewing angle. In the tracking based integral photography display, both the horizontal and the vertical viewing angles can be defined as in Eq. (1); therefore, the vertical viewing angle of the tracking based integral photography display gains an additional contribution from the micro-lens array ($\theta_l$) compared with the lenticular tracking display. However, when we use a lens array with a smaller lens pitch and a depth camera with a wider tracking area to solve the trade-off between the viewing angle and image resolution, the difference in viewing angle between these two tracking based methods is quite small.

Visualization 1 shows the comparative experiment between the lenticular tracking display and the conventional integral photography display. In the comparative experiment, the image quality of the lenticular tracking display is relatively high. We found that the horizontal viewing angle of the lenticular tracking display was much wider than that of the conventional integral photography display, because two 3D image flipping events occurred in the integral photography display while no flipping occurred in the lenticular tracking display during the camera movement. In the experiment, the motion parallax of the lenticular tracking display was slightly delayed when the viewer moved rapidly, owing to the temporal filtering of the tracking results. When the delay arose, the viewer's eyes were at the wrong viewpoint next to the correct viewpoint in the horizontal direction, and they did not receive the correct elemental image until the tracking result caught up. During this tracking hysteresis, the viewer noticed a slight jitter of the 3D image. This phenomenon existed only in the horizontal direction, owing to the time delay of the viewpoint tracking.

4. Conclusion

In this study, we proposed a new lenticular tracking display method, combining lenticular display technology and viewpoint tracking technology to overcome the trade-off between the viewing angle and image resolution. In the analysis, we found that the viewing angle was mainly determined by the shooting angle of the depth camera, and the maximum image resolution depended on the viewing distance and the tracking error on the x-axis. Therefore, the viewing angle and image resolution can be improved simultaneously by employing high-performance hardware and improving the tracking algorithms. A real-time rendering algorithm for this display technology was also reported. Experiments were conducted to evaluate the performance. As summarized in Table 1, the image resolution of the lenticular tracking display method was 0.125 mm (H) and 0.198 mm (V), and the viewing angle was 56.5° (H) and 39.0° (V); both parameters were much better than those of the conventional integral photography display. Compared with tracking based integral photography, the proposed method shows better image resolution while maintaining a comparable viewing angle. The results demonstrate that the method can realize a naked-eye 3D display with high image resolution, a large viewing angle and full parallax at the same time, suggesting its potential in real-life applications.

Table 1. The Comparative Results of the Image Resolution and Viewing Angle

Display method                           Image resolution (H / V)   Viewing angle (H / V)
Lenticular tracking display              0.125 mm / 0.198 mm        56.5° / 39.0°
Tracking based integral photography      0.707 mm / 0.794 mm        56.5° / 55.2°
Traditional integral photography         0.707 mm / 0.794 mm        19.3° / 21.2°

With the depth camera tracking the position of the viewer, the lenticular tracking display method is able to render the 3D images dynamically, thereby reducing the required viewing area and displaying the image information more efficiently. At the same time, the lenticular lens array provides only horizontal parallax information, making it a better counterpart for viewpoint tracking technology than a micro-lens array. In this way, the proposed method achieves better image resolution than the integral photography display with viewpoint tracking while keeping a comparable viewing angle. Note that we adopt a marker-free viewpoint tracking method to avoid additional equipment for the observer. However, the relative instability of marker-free viewpoint tracking causes a time delay (approximately 0.2 s) during horizontal viewpoint changes. We believe that this issue can be solved as marker-free viewpoint tracking technology develops.

Here, we have presented a prototype with high image resolution and a large viewing angle, and demonstrated the feasibility of the lenticular tracking display method. As this is the first study of the proposed method, several problems remain to be explored before practical 3D display applications can be implemented. For instance, the case in which the lens direction of the lenticular lens array is not perpendicular to the ground should be handled in the rendering algorithm, and the hardware calibration procedure should be simplified. Moreover, a multi-viewer algorithm might also be implemented in the proposed method. These aspects will be addressed in our future studies.

Funding

National Natural Science Foundation of China (81427803, 81771940); National Key Research and Development Program of China (2017YFC0108000); Beijing Municipal Natural Science Foundation (7172122, L172003); Soochow-Tsinghua Innovation Project (2016SZ0206).

Acknowledgments

The authors thank Jingjing Chen and Zhencheng Fan for fruitful discussion.

References

1. W. J. Tam, F. Speranza, S. Yano, K. Shimono, and H. Ono, “Stereoscopic 3D-TV: visual comfort,” IEEE Trans. Broadcast. 57(2), 335–346 (2011).

2. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7(1), 821–825 (1908).

3. J. Hong, Y. Kim, H. J. Choi, J. Hahn, J. H. Park, H. Kim, S. W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011).

4. C. Luo, C. Ji, F. Wang, Y. Wang, and Q. Wang, “Crosstalk-free integral imaging display with wide viewing angle using periodic black mask,” J. Disp. Technol. 8(11), 634–638 (2012).

5. J. Y. Jang, H. S. Lee, S. Cha, and S. H. Shin, “Viewing angle enhanced integral imaging display by using a high refractive index medium,” Appl. Opt. 50(7), B71–B76 (2011).

6. J. Zhang, X. Wang, X. Wu, C. Yang, and Y. Chen, “Wide-viewing integral imaging using fiber-coupled monocentric lens array,” Opt. Express 23(18), 23339–23347 (2015).

7. W. Xie, Y. Wang, H. Deng, and Q. Wang, “Viewing angle-enhanced integral imaging system using three lens arrays,” Chin. Opt. Lett. 12(1), 011101 (2014).

8. S. W. Min, J. Kim, and B. Lee, “Wide-viewing projection-type integral imaging system with an embossed screen,” Opt. Lett. 29(20), 2420–2422 (2004).

9. Y. Oh, D. Shin, B. G. Lee, S. I. Jeong, and H. J. Choi, “Resolution-enhanced integral imaging in focal mode with a time-multiplexed electrical mask array,” Opt. Express 22(15), 17620–17629 (2014).

10. M. A. Alam, G. Baasantseren, M. U. Erdenebat, N. Kim, and J. H. Park, “Resolution enhancement of integral-imaging three-dimensional display using directional elemental image projection,” J. Soc. Inf. Disp. 20(4), 221–227 (2012).

11. H. Liao, M. Iwahara, N. Hata, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express 12(6), 1067–1076 (2004).

12. Z. Yan, X. Yan, X. Jiang, H. Gao, and J. Wen, “Integral imaging based light field display with enhanced viewing resolution using holographic diffuser,” Opt. Commun. 402, 437–441 (2017).

13. G. Park, J. Hong, Y. Kim, and B. Lee, “Enhancement of viewing angle and viewing distance in integral imaging by head tracking,” in Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2009), paper DWB27.

14. S. Hong, D. Shin, J. Lee, and B. Lee, “Viewing angle-improved 3D integral imaging display with eye tracking sensor,” J. Inf. Commun. Converg. Eng. 12(4), 208–214 (2014).

15. Z. L. Xiong, Q. H. Wang, S. L. Li, H. Deng, and C. C. Ji, “Partially-overlapped viewing zone based integral imaging system with super wide viewing angle,” Opt. Express 22(19), 22268–22277 (2014).

16. X. Zhang, G. Chen, and H. Liao, “High-quality see-through surgical guidance system using enhanced 3-D autostereoscopic augmented reality,” IEEE Trans. Biomed. Eng. 64(8), 1815–1825 (2017).

17. J. Wang, H. Suenaga, H. Liao, K. Hoshi, L. Yang, E. Kobayashi, and I. Sakuma, “Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation,” Comput. Med. Imaging Graph. 40, 147–159 (2015).

18. S. Jiao, X. Wang, M. Zhou, W. Li, T. Hong, D. Nam, J. H. Lee, E. Wu, H. Wang, and J. Y. Kim, “Multiple ray cluster rendering for interactive integral imaging system,” Opt. Express 21(8), 10070–10086 (2013).

19. Y. Kim, G. Park, J. H. Jung, J. Kim, and B. Lee, “Color moiré pattern simulation and analysis in three-dimensional integral imaging for finding the moiré-reduced tilted angle of a lens array,” Appl. Opt. 48(11), 2178–2187 (2009).

20. Z. Fan, G. Chen, Y. Xia, T. Huang, and H. Liao, “Accurate 3D autostereoscopic display using optimized parameters through quantitative calibration,” J. Opt. Soc. Am. A 34(5), 804–812 (2017).

21. L. Čehovin, A. Leonardis, and M. Kristan, “Visual object tracking performance measures revisited,” IEEE Trans. Image Process. 25(3), 1261–1274 (2016).

22. Z. Fan, S. Zhang, Y. Weng, G. Chen, and H. Liao, “3D quantitative evaluation system for autostereoscopic display,” J. Disp. Technol. 12(10), 1185–1196 (2016).

Supplementary Material (1)

Visualization 1: The comparative experiment of the lenticular tracking display and the conventional integral photography display.
