This paper describes the first demonstrations of two dynamic exit pupil (DEP) tracker techniques for autostereoscopic displays. The first DEP tracker forms an exit pupil pair for a single viewer in a defined space with low interocular crosstalk using a pair of moving shutter glasses located within the optical system. A display prototype using the first DEP tracker is constructed from a pair of laser projectors, pupil-forming optics, moving shutter glasses at an intermediate pupil plane, an image relay lens, and a Gabor superlens based viewing screen. The left and right eye images are presented time-sequentially to a single viewer, who sees a 3D image without wearing glasses and can move within a region of 40 cm × 20 cm in the lateral plane and 30 cm along the axial axis. The second DEP tracker moves the exit pupil location dynamically in a much larger 3D space by using a custom spatial light modulator (SLM) forming an array of shutters. Simultaneous control of multiple exit pupils in both the lateral and axial axes is demonstrated for the first time and provides a viewing volume with an axial extent of 0.6−3 m from the screen within a lateral viewing angle of ± 20° for multiple viewers, with acceptable crosstalk (< 5%) between the stereo image pairs. In this novel version of the display the optical system is used as an advanced dynamic backlight for a liquid crystal display (LCD). This has advantages in terms of overall display size, as there is no requirement for an intermediate image, and in image quality.
© 2013 OSA
Autostereoscopic 3D displays present 3D images without the need for special viewer spectacles by using one of the following methods: (i) holographic, (ii) multi-view, (iii) light field, (iv) volumetric or (v) dynamic exit pupil [6–9].
This paper is concerned with the most recent method, the dynamic exit pupil (DEP), which had previously only been proposed but not experimentally demonstrated; until now, no overall integrated experimental system has been constructed to demonstrate the DEP tracker technique. This paper demonstrates two integrated DEP tracker system designs for the first time, showing that both operate with low system crosstalk and high signal-to-noise ratio and provide a good viewer experience, so establishing experimentally that the DEP approach is viable.
In such a system the images for each viewer's left and right eyes are independently directed towards the appropriate eyes rather than creating wavefronts emanating from virtual sources. In a fully configured system, a camera is used to capture the position of the viewers' eyes. The system can follow the eye positions by forming multiple exit pupils for multiple simultaneous viewers over a large viewing region, laterally and longitudinally towards and away from the viewing screen, thus allowing several users to view 3D whilst having a high degree of freedom of movement. In addition, it also has the potential to produce other modes of operation where the images transmitted to the viewers can be completely different or can change as the viewers move from side to side to show true 3D motion parallax effects.
For motion parallax, two images, left and right, are required per viewer, so that the total number of images required for N viewers is 2N; for stereoscopic presentation only, a total of two views is required. The images required in conjunction with pupil tracking systems can be generated sequentially using temporal multiplexing at a 60 Hz refresh rate per eye, requiring a 2D pixelated image generation device to operate at least 2N × 60 times per second. Section 2 briefly introduces the two dynamic exit pupil display system architectures, the light engine and the transfer screen. Section 3 presents for the first time details of two real-time dynamic exit pupil tracker prototypes: one capable of tracking the exit pupil locations in the lateral plane and the other capable of tracking in 3D space.
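The view-count arithmetic above can be expressed as a small helper (the function name and interface are ours, for illustration only):

```python
def required_update_rate(num_viewers, per_eye_hz=60, motion_parallax=True):
    """Minimum update rate for a time-sequential image generation device.

    With motion parallax each of the N viewers needs an independent
    left/right pair, i.e. 2N views; for plain stereoscopic presentation
    two views are shared by all viewers.
    """
    views = 2 * num_viewers if motion_parallax else 2
    return views * per_eye_hz

# Four viewers with motion parallax call for a 480 Hz device,
# matching the original HELIUM3D target mentioned in Section 3.3.
print(required_update_rate(4))                          # 480
print(required_update_rate(4, motion_parallax=False))   # 120
```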
2. Multi-viewer autostereoscopic dynamic exit pupil display system
The problem of designing an autostereoscopic display can be solved relatively simply if all the viewers are located close to a particular plane parallel to the screen, referred to as the conjugate plane; in this case the exit pupils merely have to move laterally (the X direction).
When a large viewing region or eyebox is required, the exit pupils need to move nearer to, or further from, the screen (the Z direction). Therefore, the dynamic exit pupils need to be controlled in 2D in the X-Z plane; it is generally not necessary to track the eyes in the Y direction as the exit pupils are elongated vertically along the Y axis. Figure 1 shows the schematic diagrams of two versions of the dynamic exit pupil display. The version shown in Fig. 1(b) has not been proposed before and the use of dynamic exit pupil optics in conjunction with temporal multiplexing on the display panel has not been previously described. This forms the basis for a new class of display where several users can move around independently over a large region and where there is the potential to provide a unique perspective to each viewer, provided there is a sufficiently fast display panel to support this mode of operation.
As seen in Fig. 1(a), the first version is a two-stage rear projection display where the intermediate image is formed by a column of light, scanned horizontally by a laser scanning light engine, across a 2D liquid crystal on silicon (LCOS) optical intensity modulator. The intermediate image is then relayed to the Gabor superlens screen. The beams are directed to the positions of the user's eyes via a 1D fast liquid crystal spatial light modulator (SLM) which is located in the Fourier transform plane of an intermediate projection lens. The Gabor superlens screen allows beams to be directed to the desired X-Z exit pupil locations. These locations are determined by the eye-tracker. A vertical diffuser located at the output of the Gabor superlens screen expands the exit pupil in the Y axis, thus enabling viewers to have vertical freedom of movement. The display in Fig. 1(b) replaces the image projection devices with a 120 Hz conventional LCD, allowing the display to be realized in a more compact form. The display in Fig. 1(b) produces the image at the viewing screen, as opposed to passing it through the system as in the projection version, with the benefit of reduced image distortion and improved resolution.
The light engine module in the projector version comprises red (640 nm), green (532 nm) and blue (473 nm) laser arrays coupled into a multimode optical fiber which, together with two cascaded cylindrical microlens arrays [11, 12], homogenizes and shapes the beam into a 10 mm by 250 μm vertical light line. The optical design details of the beam-shaping optics were covered in [7, 9, 13]. Previously the lasers were only homogenized using a dual microlens array homogenizer [11, 12]; however, an undesired periodic light modulation was observed along the vertical line due to the periodicity of the microlenses. This resulted in bright and dark stripes across the final image.
The best solution was found to be the use of a step index multimode optical fiber for mixing and homogenizing the laser beams. If a sufficiently long multimode fiber is used, the different mode velocities ensure that the mode delays are separated by more than the coherence time, thus reducing speckle noise due to interference of the modes. The output profile of the light emitted from a multimode fiber as a function of angle in the far field is not a top hat profile but is nearer to a Gaussian profile; however, if the fiber is wrapped around a cylindrical mandrel the modes are mixed and more of the modes of the multimode fiber are filled. We used two identical microlens arrays after the optical fiber to homogenize the intensity along the line [7, 8]. The uniformity of the line, which is defined as the ratio of the minimum value to the maximum value along the line, is 85% over the central 10 mm required length, as opposed to the 48% uniformity obtained earlier by using the two microlens arrays alone.
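The uniformity figure of merit defined above is a simple min/max ratio; a minimal sketch (the sample intensity values below are made up for illustration and are not the paper's measurements):

```python
def line_uniformity(samples):
    """Uniformity of an illumination line, defined (as above) as the
    ratio of the minimum to the maximum intensity along the line."""
    return min(samples) / max(samples)

# Illustrative, made-up intensity readings along the central 10 mm:
fiber_plus_arrays = [0.91, 0.95, 1.00, 0.97, 0.88]  # fiber + microlens arrays
arrays_only = [0.50, 1.00, 0.48, 0.98, 0.52]        # microlens arrays alone
print(line_uniformity(fiber_plus_arrays))  # 0.88
print(line_uniformity(arrays_only))        # 0.48
```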
The projection lens effectively relays the intermediate image onto the superlens screen, which relays the SLM openings onto the exit pupil plane. The superlens has some magnification to provide a viewing field that is much larger than the SLM and comprises an array of multiple element microlenses in which two layers of lenses with differing focal lengths perform in the same manner as an array of small telescopes as discussed in [16, 17]. The front screen also comprises two Fresnel collimating lenses and a vertical diffuser.
The pupil tracker was developed by Fraunhofer Heinrich Hertz Institute (HHI), Berlin and is capable of tracking up to four viewers simultaneously. It consists of a pair of IEEE 1394 FireWire cameras (Point Grey type Firefly MV) whose outputs are processed in order to determine the X, Y and Z coordinates of the viewers' eyes, although we do not use the Y coordinate in this research. Each camera has a 752 × 240 resolution with a frame rate of 90 Hz (faster than the usual 30 Hz), and the latency measured by HHI from the camera input to its output is 23 ms when tracking a single viewer and 25 ms when tracking four viewers. Latency was measured by capturing a triggered LED with the different sensor technologies: a test set-up measures the time between triggering (LED on/off) and the recognition of the changed state by the tracking technology. This is short enough to avoid giving any latency-induced artifacts, and none were evident in the limited evaluation studies carried out on the display; however, this will be investigated further in future research. These effects are important because, in some circumstances, head movement could produce feelings of nausea and disorientation in the viewers. The pupil location accuracy of the tracker is ± 5 mm in the X direction and ± 25 mm in the Z direction.
3. Dynamic exit pupil system
The imaging characteristics of a Gabor superlens, when real sources A and B are present, are shown in Fig. 2. Unlike a conventional lens, images are produced on the same side of the axis as the object, and the closer the object distance to the superlens, the closer the image distance [16, 17]. The conjugate plane is the plane of the real image of the SLM, which is marked by the broken line in Fig. 2. If the viewer is not in this plane, either the illumination source can be moved away from or closer to the screen (not a practicable option) or a virtual source can be produced by moving the SLM opening as the scanned image column moves across the screen in the X direction; this is depicted in the figure for the virtual source C.
Note that if the viewer is located at the conjugate plane, the position of the SLM aperture remains stationary over the duration of the scan. If the viewer is not in the conjugate plane, the aperture moves with a velocity along X depending on the distance of the viewer from the conjugate plane and with a sense depending on which side of the plane the viewer is positioned. The ratio of the distances in Fig. 2 (d1 and d2), is the magnification of the overall lens. In the case of our prototypes, this ratio is designed to be four.
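The stationary-versus-moving aperture behaviour can be illustrated with a simple similar-triangles model. This is our simplification, not the prototype's design code; the magnification of four is taken from the text, while the distances, sign conventions and function name are illustrative assumptions:

```python
def slm_aperture_position(x_scan, eye_x, eye_z, z_conj=1.0, mag=4.0):
    """Virtual-source position on the SLM for one instant of the scan.

    Assumed thin-lens model: the screen images the SLM aperture plane
    onto the conjugate plane at distance `z_conj` with magnification
    `mag`.  A beam leaving the screen column at `x_scan` and passing
    through conjugate-plane point X continues to the eye at
    (eye_x, eye_z), so X = x_scan + (eye_x - x_scan) * z_conj / eye_z
    and the aperture sits at X / mag.  All coordinates in metres.
    """
    x_conj = x_scan + (eye_x - x_scan) * z_conj / eye_z
    return x_conj / mag

# Viewer exactly in the conjugate plane: the aperture is stationary
# for every scan position.
print(round(slm_aperture_position(-0.2, eye_x=0.1, eye_z=1.0), 3))  # 0.025
print(round(slm_aperture_position(+0.2, eye_x=0.1, eye_z=1.0), 3))  # 0.025
# Viewer behind the conjugate plane: the aperture must move during the scan.
print(round(slm_aperture_position(-0.2, eye_x=0.1, eye_z=2.0), 4))
print(round(slm_aperture_position(+0.2, eye_x=0.1, eye_z=2.0), 4))
```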
3.1 Moving shutter glasses-based dynamic exit pupil demonstrator
The first dynamic exit pupil prototype used a pair of liquid crystal shutter glasses mounted on a moving horizontal stage. The layout of the first prototype, equipped with two liquid crystal on silicon (LCOS) projectors, is shown in Fig. 1(a). The stage was controlled with a pupil tracker developed in-house and provided autostereoscopic vision to a single viewer, as seen in the two videos (Media 1, Media 2). Conventional liquid crystal (LC) shutter glasses from NVIDIA were modified with a custom driver and a microcontroller (PIC12F675) so that the LCs run synchronously with the scanner of the light engine and the pupil tracker. Figure 3 shows a photograph of the shutter glasses on the linear stage and how they physically change position as the user moves in the X direction in the conjugate plane. The prototype can be used by a single user with head motion of ± 20 cm along the X axis, ± 10 cm along the Y axis and ± 15 cm along the Z axis.
A version of the tracker that is capable of tracking up to six users in real-time was also developed. The software developed in-house, which controls the linear stage synchronously with the pupil tracker, can be downloaded as open-source code. Figure 4 shows the photographs at the conjugate plane position when a camera was moved in 2 cm increments along the X axis to show the transition from the left eye image to the right eye image.
A crosstalk measurement experiment was performed to find out how much of the left eye image arrived at the viewer's right eye and vice versa. A calibrated camera (SphereOptics PM-1000) was first located at the position of one of the eyes at 90 ± 1 cm distance from the front of the screen. The vision center used for the camera positions was determined by observation of the images; Figs. 4(b) and 4(e) show that 'R's are not visible in Fig. 4(b) and 'L's are not visible in Fig. 4(e). As these photographs were captured at slightly less than the average interocular distance, this indicates that the vision center position chosen is reasonably accurate. However, a more accurate method that does not rely on a subjective judgment is to obtain the center from intensity plots taken across the exit pupils. During the experiments, the camera was focused on the superlens screen, which is what the viewer focuses on under normal viewing conditions, and the lens F# was adjusted to 2.8, 4.0, and 5.6 to mimic the human eye pupil, which varies in diameter between 2 mm and 8 mm under different illumination conditions.
After subtracting the ambient level, the left channel crosstalk is calculated as the ratio of the luminance with a white image on the right channel and a black image on the left channel, to the luminance with a white image on the left channel and a black image on the right channel. In order to determine the crosstalk in the other channel, the camera was moved and the procedure repeated but with the black and white images swapped. The ambient level was the same for each camera position and was obtained with the display switched on and the screen area covered; this allows for stray light from the display leaking onto the surrounding surfaces. The calculated system crosstalk values are shown as percentages in Table 1. When the camera is at a given separation distance from the screen, a small displacement in the camera's center position can cause a large relative difference in the dark channel luminance whilst the bright channel remains reasonably constant. Note that Figs. 4(a)-4(f) show the transition from the left eye image to the right eye image across the exit pupil. Conventional stereoscopic systems with LC shutter glasses offer system crosstalk levels in the region of 0.5% and have nonsymmetrical crosstalk behavior for the two eyes. The prototype employs additional optics, such as the transfer screen, beam-shaping optics and LC elements, which introduce additional scattering and hence additional crosstalk.
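The measurement procedure above reduces to a single ratio; a minimal sketch (the luminance values in the example are invented for illustration, not the readings behind Table 1):

```python
def channel_crosstalk(leak_lum, signal_lum, ambient_lum):
    """Crosstalk of one channel, following the procedure described above.

    leak_lum   : luminance at the eye position with white on the *other*
                 channel and black on this one,
    signal_lum : luminance with white on this channel and black on the
                 other,
    ambient_lum: subtracted from both readings before taking the ratio.
    Returned as a percentage.
    """
    return 100.0 * (leak_lum - ambient_lum) / (signal_lum - ambient_lum)

# Illustrative numbers only (not the paper's raw data):
print(round(channel_crosstalk(leak_lum=3.2, signal_lum=62.0,
                              ambient_lum=1.0), 1))  # 3.6
```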
It has been reported that crosstalk of less than 5% is hardly noticeable but that ideally it should be kept below 2%; it has additionally been suggested that most of the depth perception is maintained at crosstalk levels below 4%. Informal human factors trials were carried out with five consenting colleagues and the consensus was that the stereoscopy is already of a satisfactory quality.
3.2 SLM-based 3D dynamic exit pupil demonstrator
A more advanced demonstrator prototype was built in which the exit pupils were formed using a new custom-built 128-element ferroelectric liquid crystal (FLC) SLM with a response time of 85 µs. This is driven from a field programmable gate array (FPGA) which was designed to receive the coordinates of the locations of up to four viewers from the output of the multi-user pupil tracker. Figure 5 is a photograph of the demonstrator. The purpose of the demonstrator is to show that dynamic operation of the exit pupils enables them not only to be steered in the X direction, as would be the case in a non-scanned system, but also to be steered in the Z direction with the use of a fast SLM synchronized with horizontal scanning of the image.
Illumination is supplied from a single 3 W laser; the original blue laser was later replaced with a green one. The new feature of this display prototype is the capability of independently steering up to four sets of exit pupil pairs over a large area in the X and Z directions. The prototype has the capability to locate each pupil at any distance between 0.6 m and 3 m from the screen over a viewing angle of ± 20°. In this embodiment of the system the optics are intended to be used as a dynamic steerable backlight for a direct-view LCD controlled by a multi-user pupil tracker, as in Fig. 1(b). This approach has not been proposed before: compared with the first prototype, the LCOS has been removed and the whole system is used as a backlight for an LCD, so the system design is completely different from any published previously.
Figure 6 shows that the Z axial steering operates as designed as the exit pupil can be seen to be focused at (a) 1 m and (b) 2.5 m from the viewing screen. In Fig. 6(a), the exit pupil Z coordinate is set to 1 m and the pupil is focused onto a white target screen at 1 m (the position of the conjugate plane). The pupil becomes defocused (not shown) when the target screen is moved to 2.5 m from the display screen. This is expected as the exit pupil formed is merely the image of a stationary aperture in the SLM.
When the exit pupil is formed on the target screen located at 2.5 m from the display screen, the focused pupil can be seen clearly in Fig. 6(b). When the target screen is then moved to 1 m from the display screen, the image becomes virtually imperceptible (not shown). This is because the source of the illumination, a vertical illuminated column on the display screen, scans laterally, and the emergent beam therefore has considerable lateral movement at a distance of 1 m. At 2.5 m the beams all cross at the same region in the viewing space to form the exit pupil.
The FPGA has the capability to set and maintain the width of the pupil throughout the full range of Z distances. Power meter traverses of exit pupils at eight Z positions in the viewing field were used to generate the plots in Fig. 7. The colored peaks in Fig. 7 are experimental scans drawn to scale and are not schematics. The scans were carried out at distances of 0.8, 1, 1.7 and 2.5 m from the viewing screen. The detector capture area was circular with a diameter of 10 mm. For the purposes of determining the shape of the exit pupil profiles, this is sufficiently close to the size of the human eye pupil to provide accurate results.
It is also noticed that when the elongated exit pupil line is moved along X away from the central axis, the line begins to curve very slightly, with the curvature increasing with increasing X for off-axis viewers. This is due to the spherical aberration of optical components in the viewing screen. However, the FPGA SLM driver can take this into account: by using the pupil tracker to monitor the Y position of the viewers' eyes and knowing the curvature, the appropriate apertures in the SLM can be opened to compensate.
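A hypothetical compensation scheme along these lines is sketched below. The quadratic bow model, the coefficient value and the function name are our assumptions, not calibration data from the prototype:

```python
def compensated_aperture_x(x_nominal, eye_y, curve_coeff=0.02):
    """X offset applied to the SLM opening to counter pupil-line curvature.

    Assumed model: the bow of the vertical pupil line is quadratic in the
    viewer's eye height Y with a strength growing linearly with the
    off-axis distance X; `curve_coeff` would come from a one-off
    calibration of the viewing screen.  All units in metres.
    """
    bow = curve_coeff * x_nominal * eye_y ** 2
    return x_nominal - bow

# On axis there is nothing to correct:
print(compensated_aperture_x(0.0, eye_y=0.15))  # 0.0
# Off axis, the opening is shifted slightly against the bow:
print(round(compensated_aperture_x(0.3, eye_y=0.2), 5))
```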
The widths of the exit pupils along X are sufficiently narrow to provide satisfactory 3D to a viewer when used in a display; however, the ambient levels where there is no pupil present are in the region of −18 dB. This is principally due to the ambient illumination level present at the time of the measurements but is also due to other factors, including scattering at the front screen, light passing through regions of the SLM that are intended to be fully opaque and imperfect collimation by the Gabor superlens. These factors contributed to observed crosstalk.
3.3 DEP display
The dynamic exit pupil demonstrator was also converted into a novel type of display by using it as the backlight for a large area 120 Hz LCD. The original intention of the HELIUM3D display [1, 6, 8] was to present images at a 480 Hz frame rate from a projection engine to supply four separate left-right image pairs providing motion parallax for four viewers. This was the reason for choosing a display configuration that is basically a projection system where images produced on a small LCOS device are transferred through several lenses and optical elements to the front screen via an intermediate image stage (L2 in Fig. 1(a)). The disadvantage of this configuration is the large volume taken up by the light path between the projection engine lens and the shutter glasses, as the light path must encompass the intermediate image stage (L2).
In order for the demonstrator to show dynamic exit pupil formation it was only necessary to use a single color illumination source. As the current version of the display is simply a demonstrator with an LCD located in front of the superlens screen assembly, color images are not obtained with a monochromatic source. However, if a white source were to be used the images would be full color, as the LCD panel has color filters. White light can be obtained by replacing the present illumination source with red, green and blue lasers whose outputs are combined into a single white beam with the use of an X-cube.
While the system was being developed, it became apparent that a fast projection engine was not going to become available within the timescale of the project. As 120 Hz LCDs became available whilst the projector version was being built, this opened the possibility of building another novel version where the image is produced on a direct-view LCD located at the front screen. In this case, the optics behind the LCD act merely as a steerable backlight with no image information passing through. This results in a much clearer, higher resolution image, as it is not distorted by passing through a sequence of optical elements and accumulating aberrations and distortions; the projection lens L3 is large (> 150 mm diameter) and, for cost reasons, only a single-element double-convex lens has been used, which is subject to fairly high aberrations.
As there is no intermediate image, there is no requirement for light to occupy the complete area of L2; the intermediate image is replaced by a horizontal line that is formed from a scanning spot as shown in Fig. 1(b). The complete light path between the laser and the SLM occupies only a very small volume and has the capability to be folded. The line is expanded vertically with the use of a vertical diffuser located immediately in front of the projection lens L3 so that the full LCD height is filled.
In the DEP system, the scanner and the SLM must be run synchronously for the exit pupils to be formed and alternate scans of the scanner form the left and right exit pupils sequentially. When the left exit pupils are formed a left image is displayed on the LCD. When the right exit pupils are formed a right image is displayed on the LCD. Therefore, the LCD must also be synchronized; the most convenient means of achieving this is to use the video signal to the LCD as the master and to slave the SLM and scanner from this.
In between the left and right images being displayed, there are periods when the LCD must be addressed; during this time, both left and right images are displayed simultaneously on different parts of the screen as successive images are overwritten. Light must not be directed to the viewers during this time, as viewers would partially observe right images in left exit pupil positions and vice versa. This unwanted effect is overcome by using an optical chopper in the laser beam (as shown in Fig. 1(b)) that blocks the light during the addressing periods.
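The master/slave timing and chopper blanking described above can be sketched as a simple per-frame schedule. This is a model of ours; the class names and the two-phase split of each frame are illustrative, not the FPGA implementation:

```python
from dataclasses import dataclass

@dataclass
class FramePhase:
    eye: str           # which eye's exit pupils the scanner/SLM form
    lcd_stable: bool   # False while the LCD is still being addressed
    chopper_open: bool # laser blocked while the panel shows mixed images

def frame_schedule(num_frames):
    """Sketch of the timing: the LCD video signal is the master, with
    alternate frames carrying left and right images; the SLM and scanner
    are slaved to it, and the chopper opens only once the displayed
    frame is stable (i.e. outside the addressing period)."""
    phases = []
    for n in range(num_frames):
        eye = "left" if n % 2 == 0 else "right"
        phases.append(FramePhase(eye, lcd_stable=False, chopper_open=False))
        phases.append(FramePhase(eye, lcd_stable=True, chopper_open=True))
    return phases

for phase in frame_schedule(2):
    print(phase)
```

The invariant worth noting is that the chopper is open exactly when the LCD frame is stable, which is what prevents mixed left/right content from reaching either exit pupil.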
Sample photographs showing the early 3D performance of the demonstrator equipped with a 120 Hz LCD, which show the left and right images as viewed by a single user, are shown in Fig. 8. Figures 8(a) and 8(c) are photographs taken at a spacing of 64 mm, the average human interocular distance; this gives an indication of the images as seen by a typical viewer. The crosstalk measurements were 5.0% and 3.5% for the left and right channels respectively and were obtained using a similar procedure to that used for the shutter glasses prototype; as before, this is considered to be satisfactory. The crosstalk is determined by the shape of the exit pupil intensity profile and the principal contributors to this value are: scattering at the front screen assembly, light bleeding through the dark regions of the SLM and imperfect collimation of the superlens. Asymmetry could possibly be caused by slight misalignment between the elements making up the superlens. Alignment is a delicate operation consisting of three separate sub-operations. Two of these involve the translational and rotational alignment of two one-dimensional lens sheets that have to be glued in the same operation. Some errors can be introduced at this stage, such as slight relative movement during the ultraviolet curing process and inconsistencies in the glue layer thickness. The superlens screen is a novel component that had not been made before.
Two novel dynamic exit pupil tracker prototypes are demonstrated. An autostereoscopic static exit pupil display system was previously developed by our group; in this paper, however, we made the first real-time demonstration of a dynamic exit pupil system using moving shutter glasses positioned using information from a pupil tracker camera. The dynamic exit pupil-based system provides a 60 Hz refresh rate per eye and full resolution 3D for a single viewer with ± 10 cm head movement along the Y axis, about ± 20 cm along the X axis and about ± 15 cm along the Z axis. The crosstalk ratios of the prototype are of the order of 1.0% for the left eye and 3.0% for the right eye, which are acceptable for a high-quality 3D display.
The second DEP tracker can move the exit pupil location dynamically across a much larger 3D space by using a custom-built SLM with an array of shutters. Real-time beam steering is demonstrated for several independently steered exit pupils. Simultaneous control of multiple exit pupils in both the lateral and axial axes is demonstrated for the first time and provides a viewing volume with an axial extent of 0.6−3 m from the screen within a lateral viewing angle of ± 20°. The dynamic 3D tracker is integrated with a prototype 3D display and sample photographs showing the early 3D performance of the multi-viewer system are presented. Crosstalk levels for the left and right eyes are measured at 5.0% and 3.5% respectively, which are acceptable for 3D displays. The dynamic exit pupil technique has the potential to provide motion parallax to multiple users if a projector or LCD can run in excess of 120 Hz.
The authors gratefully thank the European Union for funding via the HELIUM3D project (FP7 contract No. 215280). The authors also thank Klaus Hopf and Frank Neumann of Fraunhofer HHI, Berlin, Germany for providing the pupil tracker hardware. The authors would like to thank all of the HELIUM3D project members. Hadi Baghsiahi wishes to thank the Dorothy Hodgkin Postgraduate Award, EPSRC and the Part Loader Company for partial funding.
References and links
1. H. Urey, K. Chellappan, E. Erden, and P. Surman, “State of the art in stereoscopic and autostereoscopic displays,” Proc. IEEE 99(4), 540–555 (2011). [CrossRef]
2. L. Onural, F. Yaraş, and H. Kang, “Digital holographic three-dimensional video displays,” Proc. IEEE 99(4), 576–589 (2011). [CrossRef]
4. T. Balogh, “Method and apparatus for displaying three-dimensional images,” (2001). US Patent 6,201,565.
5. A. Sullivan, “3-deep: new displays render images you can almost reach out and touch,” IEEE Spectrum 42, 30–35 (2005).
6. P. Surman, “Multi-user autostereoscopic display,” (2008). WO Patent WO/2008/139,181.
7. K. Akşit, S. Olcer, E. Erden, V. C. Kishore, H. Urey, E. Willman, H. Baghsiahi, S. Day, D. R. Selviah, and F. A. Fernandez, “Light engine and optics for HELIUM3D auto-stereoscopic laser scanning display,” in Proceedings of IEEE 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON) (IEEE, 2011) 1–4. [CrossRef]
8. H. Baghsiahi, D. R. Selviah, E. Willman, A. Fernández, S. Day, K. Akşit, S. Ölçer, A. Mostafazadeh, E. Erden, V. Kishore, H. Urey, and P. Surman, “48.4: Beam Forming for a laser based auto-stereoscopic multi-viewer display,” in “SID Display Week, Los Angeles, USA,” 42 (SID, 2011), pp. 702–706.
9. E. Erden, V. Kishore, H. Urey, H. Baghsiahi, E. Willman, S. E. Day, D. R. Selviah, F. A. Fernández, and P. Surman, “Laser scanning based autostereoscopic 3D display with pupil tracking,” in “22nd Annual Meeting of the IEEE-Photonics-Society, Belek Antalya, Turkey, LEOS Annual Meeting Conference Proceedings, 2009. LEOS’09. IEEE,” (IEEE, 2009), pp. 10–11. [CrossRef]
10. P. Surman, K. Hopf, I. Sexton, W. Lee, and R. Bates, “Solving the 3D problem - the history and development of viable domestic 3DTV displays,” Three-Dimensional Television: Capture, Transmission, Display (2007), pp. 471–503.
11. P. C. H. Poon, D. R. Selviah, M. G. Robinson, and C. Tombling, “Microlens array diffuser for incoherent illumination,” National Physical Laboratory, Teddington, UK. Microlens array: 11–12 May 1995, Institute of Physics, London, no.5,UK, pp. 89–92.
12. P. C. H. Poon, D. R. Selviah, J. E. Midwinter, D. Daly, and M. G. Robinson, “Design of a microlens based total interconnection for optical neural networks” in “Optical Society of America Optical Computing Conference, Palm Springs, USA,” (OSA, 1993), 46–49.
13. E. Willman, H. Baghsiahi, F. A. Fernández, D. R. Selviah, S. E. Day, V. C. Kishore, E. Erden, H. Urey, and P. A. Surman, “The optics of an autostereoscopic multiview display,” SID International Symposium Digest of Technical Papers. Society for Information Display: San Jose, US (2010). [CrossRef]
14. J. Goodman, Speckle Phenomena in Optics: Theory and Applications, 1st ed. (Roberts & Company, Greenwood Village, CO, 2007).
15. R. Voelkel and K. Weible, “Laser beam homogenizing: limitations and constraints,” Proc. SPIE 7102, 207–219 (2008). [CrossRef]
16. D. Daly, M. Hutley, R. Hunt, K. Khand, R. Stevens, and R. Wilson, “The use of a series of lens arrays to match optical arrays of different pitch,” in “Microengineering Applications in Optoelectronics, IEE Colloquium on,” IET, 1–11, 1996. [CrossRef]
17. C. Hembd-Sölner, R. Stevens, and M. Hutley, “Imaging properties of the Gabor superlens,” J. Opt. A, Pure Appl. Opt. 1(1), 94–102 (1999). [CrossRef]
18. K. Hopf, F. Neumann, and D. Przewozny, “Multi-user eye tracking suitable for 3D display applications” in “3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON),” (IEEE, 2011), pp. 1–4. [CrossRef]
19. K. Akşit, “oynak: software for the HELIUM3D project which provides head tracker control over the linear stage,” https://github.com/kunguz/oynak (2012).
21. A. J. Woods, “How are crosstalk and ghosting defined in the stereoscopic literature?” Proc. SPIE 7863, 78630 (2011). [CrossRef]
22. M. Barkowsky, “55.3: Crosstalk measurements of shutter glasses 3D displays,” SID Symposium Digest of Technical Papers 42(1) (2011). [CrossRef]
23. P. Seuntiëns, L. Meesters, and W. IJsselsteijn, “Perceptual attributes of crosstalk in 3D images,” Displays 26(4-5), 177–183 (2005). [CrossRef]
24. I. Tsirlin, L. Wilcox, and R. Allison, “The effect of crosstalk on the perceived depth from disparity and monocular occlusions,” IEEE Trans. Broadcast 57(2), 445–453 (2011). [CrossRef]