We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer’s eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the relative ratio between two parameters, i.e., the pixel size and the aperture of a parallax barrier slit, to improve the uniformity of image brightness at the viewing zone. The eye tracking that monitors the positions of a viewer’s eyes enables pixel data control software to turn on only the pixels for view images near the viewer’s eyes (with the other pixels turned off), thus reducing point crosstalk. The eye-tracking-combined software provides the correct images to the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (without eye tracking). Our 3D display system also provides multiple views for motion parallax under eye tracking.
More importantly, we demonstrate a substantial reduction of the point crosstalk of images at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly mitigate the point crosstalk problem, one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D display technologies to replace their eyewear-assisted counterparts.
© 2015 Optical Society of America
A stereoscopic display, the so-called three-dimensional (3D) display, is a device that enables a viewer to perceive the depth of an image of a given object as well as its two-dimensional shape. The device directs two different images, a pair of angled views of the object, toward the viewer’s respective eyes, creating the so-called binocular parallax, which is subsequently processed by the viewer’s brain to perceive the depth of the object image [1–3].
The conventional 3D display with binocular parallax provides two different view images by requiring a viewer to wear eyewear such as eyeglasses or a specially designed headset. This eyewear is designed to deliver the correct image to each eye of a viewer by optical polarization filtering (spatial multiplexing of images by a polarizer [4, 5]) or by time-resolved redistribution of two different images (temporal multiplexing of images by a view-synchronized shutter) [6, 7]. The eyewear-based 3D display market has been widening for cinemas, laptops, and monitors. The eyewear that must be worn for such a 3D display, however, is known to be one of the critical impediments to its replacing a monoscopic 2D display, owing to the inconvenience to viewers, the headaches triggered by extended use of eyewear, and cost-inefficient equipment/maintenance.
Thus, the eyewear-free 3D display, the so-called autostereoscopic 3D display, has attracted much attention, and numerous types of autostereoscopic 3D displays have been proposed and demonstrated. Examples include autostereoscopic 3D displays based on temporal multiplexing of view images with light-guiding optics [9, 10], holography [11–13], the focused light array (FLA), and specially designed optical plates [15–19].
The temporal multiplexing method could provide an image resolution similar to that of a 2D display, but it required display panel image signals to be rendered at a sufficient speed, with a processing bandwidth proportional to the number of view images of the 3D display. The holography-based display was capable of providing the most natural 3D images. However, this method suffered from a serious lack of image resolution, the bulky size of the whole system, the small size of its display window, and high fabrication cost, owing to the difficulties of fabricating ultra-high-speed spatial light modulators on the scale of a pixel (tens of μm). Meanwhile, the FLA method, which used video raster scanning of light emerging from the co-focal point of image illuminance, where light arrays of different views were focused, offered an image resolution comparable to that of a current monoscopic 2D display. However, this method exhibited the drawbacks of high fabrication cost, bulky system size, and point crosstalk much higher than that of a commercialized eyewear-based 3D display.
The use of an optical plate placed in close proximity to the display source panel for spatial multiplexing of light has been the most popular method of creating binocular parallax, owing to merits such as inexpensive fabrication/maintenance, compactness of the 3D display unit, and a tolerable image resolution. The optical plates used for this purpose are usually grouped into two kinds, i.e., a parallax barrier (PB), a kind of slit-patterned mask, and a lenticular sheet of cylindrical lenses, both of which are designed via geometrical optics such that images of two different angled views land on the respective eyes of a viewer at a given position in front of the display panel.
The conventional methods that used optical plates, however, encountered a serious problem, i.e., point crosstalk much higher than that allowed in eyewear-based stereoscopic 3D displays. The point crosstalk, which in this article refers to the ratio of unwanted image illuminance to that of the correct image, became more serious in autostereoscopic 3D displays that utilized multiview images rather than two-view images. (The multiview autostereoscopic 3D display, an extended version of the two-view counterpart, can offer binocular parallax during a viewer’s motion in a widened viewing zone, at the expense of a reduced but tolerable image resolution, by providing more than two views through the various autostereoscopic 3D display methods mentioned above.)
Reduction of point crosstalk in a multiview autostereoscopic 3D display has been achieved by various methods, such as using a V-shaped PB, assigning variable weight factors to individual display pixel illuminance, and reducing the width of optical beams emitted from multiple projectors via lenticular lenses. However, for commercialization of multiview autostereoscopic 3D displays with optical plates, the point crosstalk needed to be reduced to the typical level of an eyewear-assisted stereoscopic 3D display, i.e., ≤ 7% [24–26].
Head tracking technologies have been combined with multiviews to offer autostereoscopic display to viewers of various interpupillary distances, and a related analysis of the number of views required to provide 3D images without inter-view dark zones has been reported [27, 28].
In this paper, we present a multiview autostereoscopic 3D display with a PB that produces point crosstalk of less than 7%, via PB engineering combined with dynamic eye tracking. We widen the aperture of all PB slits such that extended uniformity of image brightness is achieved, while turning off all pixels except those giving view images near the viewer’s eyes under dynamic eye tracking. We control the display pixels via software coding for display signal rendering upon real-time monitoring of the eye positions of a viewer, who is allowed to move within a limited range along a horizontal line at a given optimum viewing distance (OVD) from the display panel (motion parallax). (Eye tracking cameras are synchronized with the image rendering software of the 3D display such that they send it real-time information on the viewer’s eye positions, so that only pixels for view images near the viewer’s eyes are turned on while the others are turned off.)
The proposed 3D system, which offers improved uniformity of image brightness and greatly reduced point crosstalk over an enlarged viewing zone, reduces eye fatigue and improves 3D image quality, and can thus be considered a possible replacement for an eyewear-assisted 3D display system.
2. Main concept

In a conventional multiview autostereoscopic 3D display with an optical plate for binocular parallax, e.g., with a PB whose aperture is similar to the pixel size, triangle-shaped distributions of view image illuminance versus horizontal position occur at the viewing zone, as shown in the schematic of Fig. 1. This results from the geometrical summing of the illuminance of optical rays that land on a position in the viewing zone from source pixels through the slits of the PB. Here, T(x′, x) denotes the transmission of light emitted from x′ to x at the viewing zone; adopting the coordinate x′ in a frame with its origin at the center of the slit aperture in Fig. 1, the transmission can be expressed geometrically.

Note that the actual distributions that follow in simulation and experiments (next section) are of a Gaussian shape rather than a triangular one, due to an artificial tilting of the PB with respect to the display panel (DP) by θ for both image color balancing and a resolution compromise between the horizontal and vertical directions. However, in this section, we assume triangular distributions of image illuminance to simplify the discussion and highlight the PB engineering concept.
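The geometrical summing described above can be sketched numerically with a minimal ray-counting model: one subpixel behind one slit, using the nominal subpixel and aperture widths quoted in Section 3. The air gap value here is our own illustrative assumption, not a quoted parameter.

```python
import numpy as np

# Minimal ray-counting sketch of the geometry in Fig. 1: one subpixel of
# width p behind one PB slit of aperture W, barrier at gap g from the
# panel, viewing plane at distance D (OVD). p and W follow Section 3;
# g = 2.2 mm is an illustrative assumption. All units are mm.
p, W, g, D = 59.75e-3, 59.53e-3, 2.2, 600.0

def illuminance(x_view, pixel_center=0.0, slit_center=0.0, n=2001):
    """Fraction of rays from points across the subpixel that clear the
    slit aperture on their way to x_view at the viewing plane."""
    xp = pixel_center + np.linspace(-p / 2, p / 2, n)  # points on the subpixel
    xb = xp + (x_view - xp) * (g / D)                  # crossing point at the barrier plane
    return np.mean(np.abs(xb - slit_center) <= W / 2)  # open fraction ~ relative illuminance

xs = np.linspace(-40.0, 40.0, 801)
profile = np.array([illuminance(x) for x in xs])
# With W close to p, the profile is (near-)triangular around x = 0;
# widening W flattens its top, which is the effect exploited in Fig. 2.
```

Sweeping W in this sketch reproduces the qualitative transition from triangular to flat-topped view distributions discussed next.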
Let us define the point crosstalk of the k-th view image (k: integer) at a given x of the viewing zone for an N-view autostereoscopic 3D display as the ratio of the summed illuminance of the unwanted view images to that of the k-th view image at x.

Figure 2(a) illustrates a schematic of a PB-based 3D display system with an aperture size about twice that of Fig. 1. A uniform illuminance distribution for each view image can be expected over a substantial part of the viewing zone, while strong point crosstalk is created over the entire viewing zone, as depicted in Fig. 2(b). Note that care has to be taken in designing the display pixels and PB to ensure that more than one view image is present between the two eyes at the viewing zone. Then, we can anticipate that the removal of all view images (by turning off the corresponding pixels) except the images viewed by the two eyes can reduce the point crosstalk to zero, as shown in Fig. 2(c), where dashed lines represent the distributions of the view images to be removed. According to simulation, vanishing point crosstalk over a widened region of the viewing zone at the OVD is achieved for W greater than approximately twice the subpixel size. In addition, removing the view images adjacent to those seen by the two eyes can also be expected to enhance the uniformity of the illuminance distribution around the sweet spots.
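This crosstalk metric can be sketched numerically; the unwanted-over-wanted form below is our reading of the definition (the paper's exact equation is not reproduced here), expressed as a percentage.

```python
import numpy as np

def point_crosstalk(illum, k):
    """Point crosstalk of the k-th view at each position of the viewing
    zone. illum: array of shape (N_views, n_positions) holding view-image
    illuminance. Unwanted-over-wanted ratio in percent -- one plausible
    reading of the paper's definition, not its verbatim equation."""
    wanted = illum[k]
    unwanted = illum.sum(axis=0) - wanted
    safe = np.where(wanted > 0.0, wanted, 1.0)         # avoid division by zero
    return np.where(wanted > 0.0, 100.0 * unwanted / safe, np.inf)

# Two views at two positions: view 0 dominates at the first position,
# both views are equal at the second.
illum = np.array([[1.0, 0.5],
                  [0.1, 0.5]])
xt0 = point_crosstalk(illum, 0)   # -> [10.0, 100.0]
```

Zeroing the unwanted rows of `illum`, as the pixel-switching scheme of Fig. 2(c) does, drives this metric to zero wherever the wanted view is lit.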
For motion parallax, we employ cameras to track the two eye positions, allowing the display system to determine which pixel is closest to the center of the two eyes. We exploit software fed by the position-tracking cameras, which is coded to determine which pixels of the display panel to turn on while switching off the rest. The software, which renders signals to the pixels according to the tracked positions, then enables motion parallax while maintaining both the substantial reduction of point crosstalk and the greatly improved uniformity of image brightness around the sweet spots. The software consists of the Face API and self-developed software. The self-developed software recalibrates information from the Face API to refresh the image information at a rate of 15–30 Hz. Figure 2(c) also shows that, as the viewer moves, he/she begins to see the images of the distributions that the dashed lines represent, in turn, as a result of turning on their corresponding pixels while turning off the rest (solid lines) via software-controlled signal rendering based on the eye tracking cameras.
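The on/off decision driven by the tracked eye positions can be sketched as follows. The function name and the view-center spacing are our illustrative assumptions, not the authors' actual rendering code.

```python
def views_to_enable(eye_left_x, eye_right_x, view_centers):
    """Pick the view index nearest each tracked eye position; the pixels
    of every other view are switched off. A sketch of the rendering-
    software logic; names here are ours, not the paper's code."""
    def nearest(x):
        return min(range(len(view_centers)), key=lambda k: abs(view_centers[k] - x))
    return {nearest(eye_left_x), nearest(eye_right_x)}

# 8 views spaced 16.25 mm apart at the OVD (the spacing reported for
# Fig. 5(a)), eyes 65 mm apart:
centers = [16.25 * k - 48.75 for k in range(8)]  # sweet spots at -48.75 ... +65.0 mm
on = views_to_enable(-32.5, 32.5, centers)       # -> {1, 5}: 3 intermediate views off
```

With 65 mm eye separation and 16.25 mm view spacing, the two enabled views are four spacings apart, leaving exactly the 3 intermediate views to be switched off, consistent with the scheme of Fig. 2(c).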
Figure 3 shows a geometrical tracing of optical rays from display pixels to viewing zones that form diamond shapes for a 4-view autostereoscopic display, using the following relations:
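The relations are presumably the standard parallax-barrier design geometry relating the slit pitch Λ, subpixel pitch p, air gap g, and OVD D. The sketch below is our hedged reconstruction, checked against the values quoted in Section 3 (N = 8, p = 59.75 μm, D = 600 mm, Λ = 476.249 μm); the gap g is inferred, not quoted.

```python
# Standard parallax-barrier similar-triangle relations (our hedged
# reconstruction, not the paper's verbatim equations), units in mm:
#   Lambda = N * p * D / (D + g)   -> slit pitch converging N views at the OVD
#   e      = p * D / g             -> lateral spacing of neighboring views
N, p, D, Lam = 8, 59.75e-3, 600.0, 476.249e-3

g = D * (N * p / Lam - 1.0)   # air gap implied by the quoted pitch
e = p * D / g                 # view spacing at the OVD

# g comes out near 2.21 mm and e near 16.25 mm -- matching the
# ~16.25 mm view separation reported for Fig. 5(a).
```

That the inferred view spacing reproduces the 16.25 mm separation quoted later suggests these are indeed the relations behind Fig. 3.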
3. Results and discussion
We design an 8-view autostereoscopic 3D display with the set of display parameters given in Table 1 for a given subpixel size of a laptop computer. This ensures that the OVD lies within the range of typical viewing distances (∼ 550–850 mm) from a viewer to a laptop computer screen (we measured this range of allowed viewing distances for a 15.6 inch laptop with an optical distance meter). The air gap needs to be carefully adjusted for view image formation at the OVD. Based on this design, we conduct computer simulations of optical ray tracing from the display panel pixels through the PB slits to the viewing zone to obtain theoretical estimates of the 3D display characteristics, which are compared with those obtained from experimental measurements of the illuminance of the 3D display images for the same set of parameters.
First, we take a PB with a slit aperture of 59.53 μm, slightly smaller than the subpixel width of 59.75 μm. Note that 8 subpixel widths (8 × 59.75 μm = 478 μm) are slightly larger than Λ = 476.249 μm, leading to the resultant viewing zone seen in Fig. 3. Simulation provides the illuminance distributions of the view images at a viewing zone separated from the display panel by 600 mm (OVD of 600 mm), as shown in Fig. 4(a), while the experimentally measured illuminance distributions are shown in Fig. 4(b). As mentioned above, we observe Gaussian distributions of view image illuminance due to the PB orientation relative to the DP, in both simulation and experimental measurements. It is highly probable that the computed OVD designed to serve a viewer with minimum point crosstalk of view images cannot be the OVD (600 mm) in a practical setup, owing to both the geometrical mismatch between the designed gap and the fabricated one and the presence of the medium (glass) in between. We find that even the sweet spots of the distributions produce 30.5% point crosstalk in simulation, whereas point crosstalk higher than that found in the numerical simulation is observed in the experiments for both eyes of a viewer, and a highly non-uniform distribution of each view image across the viewing zone occurs. We may attribute the difference in point crosstalk at the sweet spots between simulation and experiments to factors including the slit period error of the fabricated PB and the misorientation of the PB with respect to the DP.
We enlarge the PB slit aperture from 59.53 μm to 200 μm while keeping the rest of the parameters unchanged, and perform the corresponding simulations and experiments. Figures 5(a) and 5(b) show simulation results of the illuminance distributions of view images versus x at an OVD of 600 mm before and after elimination of 3 intermediate view images, respectively. Note that the centers of the nearest-neighboring view image distributions are separated by about 16.25 mm in Fig. 5(a), while the eyes of a viewer are assumed to be separated by 65 mm. Thus, removal of the 3 intermediate view images between the view images seen by the viewer’s two eyes produces no point crosstalk over a zone about 44 mm wide around each eye, and these zero-point-crosstalk zones feature a perfectly uniform distribution around their centers, as shown in Fig. 5(b). The two advantageous properties of the illuminance, reduced point crosstalk and enhanced brightness uniformity, are beneficial for better image quality and mitigation of eye fatigue. Note that Fig. 5(a) assumes all subpixels are turned on to send optical rays through the slits (as in Figs. 6(a), 7(a) and 8(a)). However, if dynamic eye tracking feeds the viewer’s position back to the pixel data control software, only the pixels giving view images near the viewer’s eyes are turned on while the others are turned off. The software redistribution of images includes the aforementioned elimination of intermediate view images. In this case, there are no pseudoscopic effects, thanks to the eye-tracking-combined software that provides the correct images to the respective eyes. If the viewer moves to a different place within the allowed range of the entire viewing zone, eye tracking will update the viewer’s position and send it to the software such that a new nearest pair of images will be generated for the viewer for motion parallax.
We set up an autostereoscopic 3D display system based on the parameter set given in Table 1, except W = 200 μm. Figures 6(a) and 6(b) show the measured distributions of view image illuminance at an OVD of 570 mm, before and after removal of the 3 intermediate view images between the viewer’s eyes. It is noted that, unlike the simulation results of Fig. 5(a), the central viewing zones (near the origin) receive more illuminance than the others, indicating the Lambertian property of the light source used in the experiments. Significant point crosstalk is present even around the centers of the illuminance distributions of the view images before the intermediate view images are eliminated, as illustrated in Fig. 6(a). Removal of these intermediate view images, however, greatly reduces the point crosstalk, down to a level of ≤ 7% over a viewing zone wider than 45 mm around each eye of the viewer, as indicated by the corresponding pairs of dotted lines in Fig. 6(b). The averaged point crosstalk of a view image over each viewing zone bordered by a pair of dotted lines is about 2% for the left (blue) and right (red) eyes of the viewer. We also observe an approximately uniform distribution of view images around the centers of both eyes, despite the overall change in illuminance along the horizontal axis due to the Lambertian light source, as noted above.
We further widen the slit aperture to 250 μm and repeat the simulation of optical ray tracing to obtain the illuminance distributions of the view images using the parameter set given in Table 1 (except the slit aperture). Similarly to Fig. 5(a), substantial point crosstalk occurs over the entire viewing zone at an OVD of 600 mm before removal of the intermediate view images, as seen in Fig. 7(a). However, the further enlargement of the slit aperture from 200 μm to 250 μm extends the viewing zone of uniform illuminance distribution for each view image from 23.5 mm to 37 mm. Furthermore, removing the 3 intermediate view images between the viewer’s eyes generates a viewing zone of no point crosstalk as wide as 30.5 mm, as shown in Fig. 7(b). Note that this width of the zero-point-crosstalk viewing zone is smaller than that in the case of the 200 μm slit aperture. Thus, we find that widening W increases the point crosstalk of the view images while enlarging the viewing zones of uniform illuminance distribution.
Figures 8(a) and 8(b) show experimental measurements of the illuminance distributions of the view images at an OVD of 595 mm using W = 250 μm, before and after removal of the 3 intermediate view images, respectively. Similarly to Fig. 6(a), we observe the Lambertian property of the light source, which makes the uniform distributions of image illuminance more oblique at viewing zones farther from the horizontal origin, as shown in Fig. 8(a). However, the (approximately) uniform distribution of each view image is extended compared to Fig. 6(a), making the 250 μm slit more beneficial to image quality than the 200 μm slit.
Meanwhile, removing the 3 intermediate view images between the viewer’s eyes reduces the point crosstalk of the view images, generating viewing zones of ≤ 7% point crosstalk as wide as 26–27 mm. These widths are smaller than the widths (45–46 mm) shown in Fig. 6(b), and the averaged point crosstalk over the viewing zones of ≤ 7% is about 5%. Thus, the point-crosstalk-related features of the viewing zones bordered by the dotted lines in Fig. 8(b) do not support substituting the PB with 250 μm slits for that with 200 μm slits. This leads us to a compromise between the above-mentioned merits and demerits for image quality and eye fatigue when determining the W to be used.
Figure 9 shows a photo of the experimental setup for reconstruction of binocular parallax by the 8-view autostereoscopic 3D display with a PB under dynamic eye tracking. The PB slit aperture used in the setup is W = 200 μm. The dynamic eye tracking is performed both by a position-tracking camera built into the laptop of the 3D display panel (dashed circle) and by two cameras embedded at the eye positions of a plastic doll face installed on a 3D moving stage. The tracking camera provides the coordinates of the center position of the two eyes (the two cameras) to the software, which controls the display panel to redetermine which pixels to turn on/off when the doll is in motion. We use commercialized software (Face API), calibrated to read the position coordinates of the two eyes of the doll face. The dynamic position tracking operates at about 15 Hz with a position-reading precision of ±5 mm.
We demonstrate binocular parallax recognizable by using different colors that correspond to different view images. The binocular parallax is produced at different horizontal positions of the center of the two eyes (cameras) at the viewing zone (OVD = 570 mm), i.e., x = −50 mm, 0 mm, and +50 mm, as shown in Figs. 10(a)–10(c), respectively. The clear distinction between the red and blue color images at these different positions indicates the substantial reduction of point crosstalk of the view images at each eye of a viewer in motion. In addition, the uniformity of the colored image brightness within the image viewed at each eye also signifies the uniformity of the illuminance distribution of the view images across the viewing zone. We also present footage that demonstrates tolerable point crosstalk of a view image at each eye of a moving viewer across the viewing zone (OVD of 570 mm). Vertical and horizontal stripes in the footage, which represent the two different view images seen by the viewer, enable us to estimate the point crosstalk of the view images efficiently. Figure 11 shows an image captured from the footage, indicating that the point crosstalk is at the level of a commercialized eyewear-based 3D display (≤ 7%).
It should be noted that this 3D display assisted by dynamic eye tracking also produces satisfactory view image quality at the OVD when tested with a real person, in addition to the quantitative evaluations provided above. Moreover, multiple-viewer operation is believed to be possible in principle, although the work presented in this paper has been demonstrated for a single viewer.
We have presented a multiview autostereoscopic 3D display with a PB under real-time tracking of the positions of a viewer’s eyes, aimed at replacing an eyewear-assisted 3D display system. The PB slit width, enlarged relative to the pixel size of the display panel, produces extended viewing zones of uniform view image brightness while increasing the point crosstalk over the entire viewing zone at the OVD. Then, an appropriate number of view images between the viewer’s eyes can be eliminated by software control of the pixels under dynamic tracking of the viewer’s eyes, to achieve both a substantial reduction of point crosstalk and motion parallax while keeping the merit of a widened viewing zone of enhanced image brightness uniformity. This leads to greatly improved view image quality in a multiview autostereoscopic 3D display.
Related future work may include finding an optimum ratio between the PB slit width and the pixel size, suited for an autostereoscopic 3D display with a smaller number of views to improve image resolution, taking into account the compromise between image brightness uniformity and point crosstalk. Further investigation of the 3D display system will also cover multiple-viewer operation and evaluation at viewing distances other than the OVD, to see changes in the 3D image quality as a viewer moves in the depth direction.
This research was supported by The Cross-Ministry Giga KOREA Project of The Ministry of Science, ICT and Future Planning, Korea [GK14D0200].
References and links
1. C. Wheatstone, “Contribution to the physiology of vision,” Philos. trans. R. Soc. A 128, 371–394 (1838). [CrossRef]
2. T. Okoshi, Three Dimensional Imaging Techniques (Academic, 1976).
3. E. Lueder, 3D Displays (John Wiley & Sons, Ltd, 2012).
4. R. Zone, Stereoscopic Cinema and the Origins of 3-D Film, 1838–1952 (University Press of Kentucky, 2007).
5. J. H. Oh, W. H. Park, B. S. Oh, D. H. Kang, H. J. Kim, S. M. Hong, J. H. Hur, and J. Jang, “Stereoscopic TFT-LCD with wire grid polarizer and retarder,” SID Symposium Digest of Technical Papers 08, 444–447 (2012).
6. S. Shestak and D. Kim, “Application of π cells in time-multiplexed stereoscopic and autostereoscopic displays based on LCD panels,” Proc. SPIE 6490, 64900 (2007). [CrossRef]
7. S.-M. Jung, J.-U. Park, S.-C. Lee, W.-S. Kim, M.-S. Yang, I.-B. Kang, and I.-J. Chung, “A novel polarizer-glasses type 3D display with an active retarder,” SID Symposium Digest of Technical Papers 09, 348–351 (2009). [CrossRef]
8. L. Lipton, “Foundation of the stereoscopic cinema,” www.stereoscopic.org
10. T. Sasagawa, A. Yuuki, S. Tahata, O. Murakami, and K. Oda, “Dual directional backlight for stereoscopic LCD,” SID Symposium Digest of Technical Papers 34, 399–401 (2003). [CrossRef]
11. R. Kunzig, “The hologram revolution,” Discover 23, 55–57 (2002).
13. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38, 46–53 (2005). [CrossRef]
14. Y. Kajiki, H. Yoshikawa, and T. Honda, “Autostereoscopic 3-D video display using multiple light beams with scanning,” IEEE Trans. Circuits Syst. Video Technol. 10, 254–260 (2000). [CrossRef]
15. C. Van Berkel, A. R. Franklin, and J. R. Mansell, “Design and applications of multi-view 3D-LCD,” in Proceedings of SID Eurodisplay Design & Apps of 3D-LCD, (1996), pp. 109–112.
16. G. J. Woodgate, J. Harrold, A. M. S. Jacobs, R. R. Moseley, and D. Ezra, “Flat panel autostereoscopic displays: characterisation and enhancement,” Proc. SPIE Stereoscopic Displays and Virtual Reality Systems VII 3957, 153–164 (2000). [CrossRef]
17. J. Y. Son, V. V. Saveljev, J. S. Kim, S. S. Kim, and B. Javidi, “Viewing zone in three-dimensional imaging system based on lenticular, parallax barrier, and microlens-array plates,” Appl. Opt. 43, 4985–4992 (2004). [CrossRef] [PubMed]
18. H. Nam, J. Lee, H. Jang, M. Song, and B. Kim, “Auto-stereoscopic swing 3D display,” SID Symposium Digest of Technical Papers 36, 94–97 (2005). [CrossRef]
19. Y. Takai and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18, 8824–8835 (2010). [CrossRef]
20. R. F. Stevens, Cross-talk in 3D Displays, Report CETM 56 (National Physical Laboratory, UK, 2004).
22. R. De La Barre, S. Pastoor, and H. Roder, “Method and device for the autostereoscopic representation of image information,” US patent 8,441,522 B2 (issued May 14, 2013).
24. H. Kang, S.-D. Roh, I.-S. Baik, H.-J. Jung, W.-N. Jeong, J.-K. Shin, and I.-J. Chung, “A novel polarizer glasses-type 3D display with a patterned retarder,” SID Symposium Digest of Technical Papers 10, 1–4 (2010). [CrossRef]
25. Y. Ko, J. Yoon, K. Cha, and K. Kang, “Crosstalk simulation for polarization switching 3D LCD display,” SID Symposium Digest of Technical Papers 10, 120–123 (2010). [CrossRef]
26. Y. -C. Chang, C. -Y. Ma, and Y. -P. Huang, “Crosstalk suppression by image processing in 3D display,” SID Symposium Digest of Technical Papers 10, 124–127 (2010). [CrossRef]
27. G. J. Woodgate, D. Ezra, J. Harrold, N. S. Holliman, G. R. Jones, and R. R. Moseley, “Observer tracking autostereoscopic 3D display systems,” Proc. SPIE 3012, 187–198 (1997). [CrossRef]
28. N. A. Dodgson, “On the number of views required for head-tracked autostereoscopic display,” Proc. SPIE 6055, 1–12 (2006).
29. S.-K. Kim and K.-H. Yoon, “3D autostereoscopic display apparatus,” Korea Patent 10-1316795 (issued Oct. 11, 2013).