
Integrated 3D display and imaging using dual purpose passive screen and head-mounted projectors and camera

Open Access

Abstract

We propose an integrated 3D display and imaging system that uses a head-mounted device and a special dual-purpose passive screen to facilitate 3D display and imaging simultaneously. The screen is composed of two main optical layers. The first layer is a projection surface of finely patterned retro-reflective microspheres that provides high optical gain when illuminated by the head-mounted projectors. The second layer is an imaging surface made up of an array of curved mirrors, which forms the perspective views of the scene captured by a head-mounted camera. The display and imaging operations are separated by polarization multiplexing. The demonstrated prototype consists of a head-worn unit carrying a pair of 15 lumen pico-projectors and a 24 MP camera, and an in-house designed and fabricated 30 cm × 24 cm screen. The screen provides a bright display using 25% filled retro-reflective microspheres and 20 different perspective views of the user/scene using a 5 × 4 array of convex mirrors. Real-time operation is demonstrated by displaying stereo-3D content with high brightness (up to 240 cd/m2) and low crosstalk (<4%), while 3D image capture is demonstrated through computational reconstruction of discrete free-viewpoint stereo pairs displayed on a desktop or virtual reality display. Furthermore, the capture quality is characterized by measuring the imaging MTF of the captured views, and the capture light efficiency is calculated by accounting for the loss in transmitted light at each interface. Further developments in microfabrication and computational optics can make the proposed system a unique mobile platform for immersive human-computer interaction of the future.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

3D content recording and visualization is a critical part of human-computer interaction. Display technologies are quite advanced, but mobile displays and emerging augmented reality headsets have small screens and offer a limited field-of-view. Projection displays can offer a wide field-of-view using only a small display engine, but a projection surface is needed and the brightness is limited by the low lumen output of the projectors. 3D imaging technologies, on the other hand, provide accurate depth maps but are typically limited to a single perspective view or large installations. Below is an overview of head-worn displays and 3D imaging technologies and their limitations, followed by the new dual-purpose passive screen-based approach that overcomes some of these important limitations.

Head-worn 3D display technologies can be divided into two categories: 1) near-eye displays, where the image is displayed on a screen close to the eye, and 2) head-mounted projection displays, where head-worn projectors illuminate handheld or fixed screens at an intermediate distance. In recent years, near-eye displays have developed rapidly using various techniques [1]. Google Glass, a monocular eyewear display, used a beam-splitter and curved reflectors [2]. Head-mounted displays using integral imaging, semi-reflective mirror screens, and holographic optical elements have also been demonstrated [3–5]. Recently, efforts to provide focus cues in near-eye displays have been made using light-field displays [6], deformable membrane mirrors [7], and spatial-light-modulator-based holographic displays [8]. On the other hand, head-mounted displays based on laser-scanned pico-projectors [9] and high-gain retro-reflective screens have been demonstrated for numerous applications, including training simulations [10], mixed reality games and entertainment [11], and tele-existence [12]. A 3D version of the head-mounted projection display has been presented using two head-mounted projectors and a retro-reflective screen [13]. In our previous work, the design and fabrication of a see-through retro-reflective screen and its applications for 3D augmented reality have also been demonstrated [14,15].

Various techniques are available today for 3D capture and content recording. The simplest 3D imaging scheme uses stereoscopy, which consists of two cameras separated by a fixed distance. More sophisticated systems either use light field capture and integral imaging approaches, or record the depth map of the scene directly using active imaging and reconstruct the 3D scene computationally. Light field capture and integral imaging systems use different techniques to record the intensity, position, and direction of light rays represented by a 4D radiance function [16,17]. A well-known technique for capturing a high-resolution light field is based on multiple cameras arranged in a matrix to capture the scene from different perspectives simultaneously and then reconstruct the 3D information computationally [18–20]. An alternative to camera arrays is a single camera moving in a 1D or 2D plane [21,22]. Camera arrays and moving-camera systems can capture the scene with both horizontal and vertical parallax and address occlusions, but they either require computationally intensive inter-camera synchronization and fixed installations or can capture only static scenes. Light field capture using catadioptric systems has also been demonstrated using curved mirrors as axial cameras [23] and arrays of planar mirrors [24,25]; however, modeling a single mirror as hundreds of axial cameras is computationally expensive, while planar mirror arrays provide a very narrow field-of-view. 3D imaging with a single camera, a field lens, and lenslet array sheets has also been demonstrated as an integral imaging pickup system, but such techniques can only capture a limited field-of-view and close objects [26–28]. Finally, depth cameras use either structured light or time-of-flight techniques to record the depth map of the scene together with a single view.

A few combined 3D display and imaging systems have been demonstrated in the literature, most of them designed for desktop settings only. In [29], camera arrays and projectors are used together to provide an autostereoscopic display of dynamic scenes. In [30], a transparent autostereoscopic display and depth cameras are used to facilitate combined capture and display for 3D telepresence. Room-sized informal telepresence using a combination of depth and panoramic color cameras and a wall display has also been demonstrated [31]. Recently, 3D teleportation using a Microsoft HoloLens as the display and depth cameras as the capture device has also been demonstrated [32]. However, the need for a separate capture device placed across from the user and for large installation setups has limited the use of head-mounted displays in telepresence applications.

Although the above-mentioned systems and techniques can provide an enhanced 3D display experience using wearable devices and can capture the scene in 3D efficiently, existing head-mounted displays and smart eyewear cannot use the built-in cameras on the head to capture the user who is wearing the device. Therefore, such systems rely on separate fixed installations (i.e., depth cameras, camera arrays) around the user and cannot offer true mobility. The problem becomes particularly important when considering such wearable devices for 2D/3D telepresence applications, where simultaneous capture and relay of the information is critical. In other words, current wearable headsets lack the functionality of ‘3D selfie cameras’, let alone ‘multi-perspective selfie camera arrays’, to support telepresence applications in mobile settings.

In this paper, we present a novel integrated 3D display and imaging system in a portable format using a head-mounted unit and a dedicated dual-purpose screen. Compared to existing head-mounted displays, the proposed system provides an alternative platform that facilitates 3D telepresence by using a passive screen that captures the user from different perspectives. The compact size of the screen and the head-mounted device enables the mobile operation needed for ubiquitous 3D telepresence applications. The screen simultaneously acts as a high-gain 3D display and a multi-perspective virtual camera array when combined with the head-mounted projectors and camera. The stereo-3D display is facilitated by a pair of head-mounted projectors and retro-reflective microspheres deposited on top of the screen, while multi-perspective views of the user are recorded using a head-mounted camera that captures the scene reflections from the bottom mirror array surface of the screen. An additional polarizer surface is sandwiched between the display and imaging surfaces to separate the two functions and enable simultaneous operation. Section 2 explains the concept of the system, Section 3 presents the fabrication of the dual-purpose screen, Section 4 presents the polarization multiplexing technique, Section 5 presents the details of the test prototype, and Section 6 shows the experimental results.

2. Concept of the system

The proposed system consists of two elements: 1) a head-mounted unit consisting of a camera and a pico-projector pair for capture and display, respectively, and 2) a multi-layered dual-purpose passive screen that displays stereo 3D content and captures multi-perspective views through the reflections of mirrors buried in the screen. Figure 1 shows a conceptual illustration of the system in a 3D telepresence scenario, where the 3D view (images of a lady) is displayed on the screen using the head-mounted projectors, while the perspective reflections of the user (the boy wearing the proposed head-mounted unit and holding the passive screen) are captured at the same time. The screen is composed of two primary layers, where the top and bottom layers are responsible for 3D display and 3D capture, respectively. The top layer of the screen has retro-reflective microspheres, which reflect the incident projected light back towards the source with narrow scattering. The microspheres are partially patterned to create tiny islands, where the spacing between the islands is used by the bottom layer for imaging. A pair of head-mounted projectors illuminates the screen with the two images of a stereoscopic content, which are reflected towards the respective eyes positioned close to the associated projectors. The distance between each eye and the exit pupil of the projector is minimized using a beam-splitter, while the separation of the stereoscopic content for each eye is facilitated by the retro-reflective property of the top layer [15].

Fig. 1 Conceptual illustration showing the working principle of the integrated 3D display and imaging system for a 3D telepresence application. The stereo-3D views are displayed on the screen using the head-mounted projectors, while the perspective views of the user are recorded at the same time using the head-mounted camera.

The bottom layer of the screen contains an array of curved mirrors. The mirrors are equally aligned and distributed over the screen area to allow multi-perspective image capture. Each mirror in the array forms a virtual image of the user, as illustrated in the insets of Fig. 1. The distribution of the mirrors allows each mirror to see the user from a different perspective, and the mirror array as a whole behaves as a multi-perspective virtual camera array. All perspective mirror images are captured by a single high-resolution head-mounted camera in one shot and are processed by a backend computational module to render the stereo view pairs.
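As a back-of-the-envelope check (ours, using the prototype values from Section 5: convex-mirror focal length f = −7.5 cm and a user at u ≈ 70 cm), the mirror equation locates and sizes these virtual views:

$$\frac{1}{u} + \frac{1}{v} = \frac{1}{f} \;\Rightarrow\; v = \left(\frac{1}{f} - \frac{1}{u}\right)^{-1} \approx -6.8\ \mathrm{cm}, \qquad m = -\frac{v}{u} \approx 0.097.$$

Each virtual image thus forms about 6.8 cm behind its mirror, minified roughly ten times, which is why the head-mounted camera is later focused near the mirrors’ focal plane (Section 6.2).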

An additional linear polarizer sheet is placed between the patterned retro-reflective surface and the mirror array surface to facilitate simultaneous 3D display and imaging. The polarizer sheet prevents the scene reflections from being seen by the viewer and the projected light from being seen by the camera. The details of polarization multiplexing are discussed in Section 4.

3. Screen Fabrication

The screen was fabricated in three steps. First, the retro-reflective microspheres were deposited on the polarizer sheet using optical adhesive. In the second step, the mirrors were tiled on the base substrate. Finally, the polarizer surface with retro-reflective microspheres and the tiled mirror array were bonded together. Figure 2(a) shows the stacking of the different layers of the screen.

Fig. 2 (a) Fabrication of the dual-purpose screen showing the arrangement of its different layers, (b) the developed handheld screen, and (c), (d) closeups of a mirror in the screen when the camera is focused on the retro-reflective pattern and on the reflected image, respectively.

We used half-shell aluminum-coated glass microspheres as the retro-reflective material. The microspheres were based on titanium dioxide (TiO2) with a refractive index of 1.93, and their diameters were randomly distributed between 35 and 45 µm [14]. We used a 200 µm thick linear polarizer sheet with a thin coating of pressure-sensitive optical adhesive on one side [33]. To pattern the microspheres, we used a 100 µm thick stencil (a 30 cm × 30 cm negative mask) made of stainless steel. The stencil was designed in-house and produced by a printed circuit board (PCB) manufacturer using laser micromachining. The stencil had tiny square-shaped holes covering 25% of the total area. The microspheres were transferred by attaching the stencil to the polarizer sheet and filling the stencil holes with microspheres. The stencil was then gently peeled off, while the microspheres stayed bonded to the polarizer sheet. The pitch of the retro-reflective islands was made small enough (<300 µm) to be unresolvable by the eye at the working distance from the screen.

We used off-the-shelf convex mirrors (sold as rear-view blind-spot mirrors for vehicles) for the bottom layer of the screen. To better utilize the screen area, we cut the circular mirrors of 7 cm diameter into 5 cm × 5 cm squares using a 3D printed mold and a diamond-tipped glass cutting tool. The mirrors were uniformly tiled on a 1 mm thick plexiglass frame. Figure 2(b) shows the fabricated handheld screen, while Figs. 2(c) and 2(d) show closeups of a mirror in the screen showing the retro-reflective pattern and the reflection through the mirror.

4. Polarization Multiplexed Display and Imaging

We use mutually perpendicular polarizations to separate the captured and displayed light, as illustrated in Fig. 3. Since both projectors use the same polarization state, only one projector is shown for simplicity. In addition to the intermediate polarizer film in the screen, additional linear polarizer films are used in front of the camera and the projectors/eyes.

Fig. 3 Principle of the polarization multiplexing of projected and captured light. The ambient scene is seen only by the camera, while the projected images are perceived only by the eyes.

We first use a polarizer film in front of each projector’s aperture to orient the projected light in a single direction (horizontal). The projection beam is reflected by the beam-splitter to illuminate the screen. When the projected light rays hit the screen, a portion is retro-reflected back towards the beam-splitter, while the remaining part is transmitted through the spacing between the retro-reflective islands and blocked by the intermediate vertical polarizer in the screen. Since a retro-reflective surface retains the polarization orientation after retro-reflection [15,34], the retro-reflected light passes through the beam-splitter and the optional polarizer in front of the eye, and the displayed content is perceived by the eye. The blockage of the projected light after the retro-reflective layer eliminates the visibility of bright spots in the mirrors. Table 1 shows the polarization vectors of the projected images and captured scene at the different stages of the system.


Table 1. Polarization state of projected and ambient light at different surfaces of the screen.

On the capture side, the unpolarized ambient light is first transmitted through the patterned retro-reflective layer and the vertical polarizer sheet, where its horizontal component is removed. The vertical component is then reflected off the mirrors and captured by the camera through a matching (vertically oriented) polarizer in front of it. The camera polarizer also guarantees the blockage of the projected light. Although the current multiplexing scheme uses linear polarization orientations, which are sensitive to head tilts and rotations, it can be extended to circular polarization by adding quarter-wave plate films to the system.
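The bookkeeping of Table 1 can be reproduced with a few lines of Jones calculus. The sketch below is our illustration, not the authors’ code; it assumes ideal polarizers and a perfectly polarization-preserving retro-reflector, and traces the display path (projector → screen → eye) and the capture path (ambient → screen → camera).

```python
import numpy as np

# Ideal linear polarizers with horizontal (H) and vertical (V) transmission axes.
P_H = np.array([[1.0, 0.0], [0.0, 0.0]])
P_V = np.array([[0.0, 0.0], [0.0, 1.0]])

H = np.array([1.0, 0.0])  # horizontally polarized projector light
V = np.array([0.0, 1.0])  # vertically polarized component of ambient light

def intensity(jones):
    return float(np.linalg.norm(jones) ** 2)

# Display path: projected H light is retro-reflected with its polarization
# preserved [15,34] and passes the H polarizer in front of the eye.
print(intensity(P_H @ H))          # 1.0 -> displayed image visible to the eye

# Projector light leaking between the retro islands hits the screen's
# intermediate V polarizer and is blocked, so no hot spots appear in mirrors.
print(intensity(P_V @ H))          # 0.0 -> blocked inside the screen

# Capture path: unpolarized ambient light is an incoherent 50/50 mix of H and V.
# Only the V half passes the screen polarizer, reflects off a mirror, passes
# the screen polarizer again, and then the camera's V polarizer.
print(0.5 * intensity(P_V @ (P_V @ (P_V @ V))))  # 0.5 -> scene visible to camera

# The camera's V polarizer rejects any retro-reflected H projector light.
print(intensity(P_V @ H))          # 0.0 -> display light invisible to camera
```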

5. Experimental Prototype

The experimental prototype was developed to demonstrate integrated 3D display and imaging in real time, as shown in Fig. 4(a). The setup consisted of a safety-helmet-type head-mounted piece and an in-house fabricated dual-purpose screen. We used a pair of 15 lumen laser-scanned pico-projectors from Microvision [35], providing 6.5 lumen effective power (in a single polarization orientation). The projectors were stripped of their original packaging and placed in a custom 3D printed housing (visible in Fig. 4(b)). The distance between the exit pupils of the projectors was adjusted to 6.5 cm, which is close to the average inter-pupillary distance (IPD) of human eyes. A pair of 50/50 beam-splitters was used to bend the projected light towards the screen and decrease the distance between each eye and projector. Each projector had 848 × 480 pixel (WVGA) resolution covering a 42° × 24° field-of-view, which provided 76% overlap between the left and right views at the working distance of 70 cm.
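A quick geometric check (ours, not from the paper): at z = 70 cm each projector’s horizontal footprint is W = 2z tan(21°) ≈ 53.7 cm, and with exit pupils d = 6.5 cm apart the two footprints share a strip of W − d ≈ 47.2 cm,

$$\frac{W - d}{W + d} = \frac{53.7 - 6.5}{53.7 + 6.5} \approx 78\%,$$

measured against the union of the two footprints, which is in the range of the reported 76% (the exact figure depends on how the overlap is defined).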

Fig. 4 (a) Tabletop experimental prototype, and (b) close-up view of the head-mounted unit with the two pico-projectors placed in a 3D printed housing and a camera (see Visualization 1).

Since the mirror array layer was designed to mimic a camera array, we used 5 cm × 5 cm mirrors with 7.5 cm focal length to provide a 40° × 40° field-of-view for each perspective view, which is close to the field-of-view of a typical webcam. To achieve sufficient disparity between adjacent views, the center-to-center distance between the mirrors was set to 6 cm, which is close to the average inter-pupillary distance of human eyes (6.5 cm). The current implementation provided perspective views with a 1:1 aspect ratio; the field-of-view and aspect ratio can be modified by using customized mirrors with a different focal length and size. The total number of mirrors in the screen was limited by the size of the screen and the total pixel count of the camera. Since the size of our stencil mask was limited to 30 cm × 30 cm, we used an array of 5 × 4 mirrors providing 20 perspective views to develop the 30 cm × 24 cm screen. The reflections of the screen were captured using a 24 MP photography camera (Nikon D5300) attached to the head-mounted unit. The camera provided a resolution of ≈800 × 800 pixels per view for still photos and ≈200 × 200 pixels per view for real-time video capture.
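The quoted per-view field-of-view follows from a simple approximation (ours): each convex mirror acts as a virtual pinhole camera whose pupil is the virtual image of the real camera’s pupil. With the camera at ≈70 cm, the mirror equation places that virtual pupil |v| ≈ 6.8 cm behind the mirror (cf. Section 2), so the D = 5 cm mirror aperture subtends

$$\mathrm{FOV} \approx 2\arctan\!\left(\frac{D/2}{|v|}\right) = 2\arctan\!\left(\frac{2.5}{6.8}\right) \approx 40^\circ,$$

matching the stated value.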

6. Results and discussion

6.1 Display characteristics

The display performance of the system was evaluated by measuring the size of the eye-box and the brightness and crosstalk per eye. Since the display layer of the screen is retro-reflective, the brightness and crosstalk depend on the retro-reflective properties of the screen. We used the retro-reflective coefficient data measured with a photometric setup as reported in [14]. The brightness was then determined from the retro-reflective coefficient values, projector power, eye-to-projector distance, inter-pupillary distance, and screen-to-viewer distance [14]. Figure 5(a) shows the size and overlap of the left/right eye-boxes and the change in perceived brightness within the eye-box at the working distance of 70 cm. Each projector creates a circular eye-box of 2 cm diameter, which is sufficient for head-mounted projector scenarios where the user’s eyes are positioned close to the projectors. The small size of the eye-box is the result of the narrow angular spread of the retro-reflected light, which is about 2°. The angular width and shape of the eye-box depend on the aperture diffraction of the microspheres [14]. The size of the eye-box also depends on the distance between the user and the screen; it can be increased either by using smaller microspheres (providing a wider diffraction angle) or by using the screen from a larger distance. A sharp change (up to a 4× reduction) in brightness is observed if the eye moves from the center to the edge of the eye-box.
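These two numbers are mutually consistent (our check): a retro-reflected cone with a full angular spread of θ ≈ 2° grows to a spot of roughly

$$z\,\theta \approx 70\ \mathrm{cm} \times 0.035\ \mathrm{rad} \approx 2.4\ \mathrm{cm}$$

at the z = 70 cm working distance, i.e. about the measured 2 cm eye-box diameter, growing with distance as stated above.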

Fig. 5 (a) Eye-box size and overlap between the left/right eye-boxes at a 70 cm working distance, and (b) brightness, optical gain, and crosstalk per eye as the distance between the screen and the viewer is changed.

Figure 5(b) shows the brightness and crosstalk per eye for the dual-purpose screen, and the brightness of a non-transparent diffusing screen, when viewed from a distance of 50 cm to 150 cm. The optical gain of the screen is calculated as the ratio of the brightness of the dual-purpose screen to that of the diffusing screen. Up to 240 cd/m2 brightness is achieved using only 15 lumen (≈6.5 lumen after the polarizer) of projection power. The dual-purpose screen shows 5.5 times higher brightness compared to a diffusing screen scattering light into the hemisphere when illuminated with a similar projector. The optical gain increases further as the distance between the projector and the screen is increased: as the user moves away from the screen, the brightness of the diffusing screen decreases more than that of the retro-reflective screen because of the wide-angle scattering of the diffusing surface, so the ratio between the brightness of the two screens, and hence the optical gain, increases. The mathematical expressions for the brightness of diffusing and retro-reflective screens can be found in [14]. The crosstalk between the left and right views was measured to be about 2% at the working distance of 70 cm, while overall <4% crosstalk is observed anywhere between 50 cm and 150 cm. The low crosstalk is due to the close alignment of eye and projector (within 0.5 cm), which keeps the eyes inside the eye-box of the associated projector.
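As a rough numerical anchor (our arithmetic, using the Lambertian model detailed in [14]): an ideal diffusing screen returning a flux Φ from an illuminated area A has luminance

$$L_{\mathrm{diff}} = \frac{\Phi}{\pi A},$$

and the optical gain is G = L_retro / L_diff. For example, if the full ≈6.5 lm landed on the 0.30 m × 0.24 m screen, the diffuse baseline would be L_diff ≈ 6.5/(π × 0.072) ≈ 29 cd/m2, so gains of 5.5 and above readily place the retro-reflective screen in the 150–240 cd/m2 range reported here; the exact values depend on the footprint actually illuminated at each distance.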

The acceptable range of working distances can be assessed from Fig. 5(b), which suggests that the screen can be viewed from 50 cm to 150 cm while providing acceptable brightness and negligible crosstalk. For a retro-reflective screen, the effect of a change in position and viewing angle depends on the acceptance angle of the retro-reflective material, defined as the range of incident angles over which the screen remains highly retro-reflective. The acceptance angle of the microsphere-type retro-reflective screen was presented in [14], which showed that the screen can be used from different viewing angles (up to 70°) without a significant effect on the perceived brightness. Figures 6(a) and 6(b) show the stereo-3D content displayed on the screen as perceived by the eyes with and without a polarizer sheet in front of the eyes, respectively. With the polarizer, the screen appears black to the viewer and the mirror array becomes completely invisible to the eyes, while the absence of the polarizer provides slightly higher brightness at the cost of making the mirrors partially visible.

Fig. 6 (a) and (b) show stereo 3D content displayed on the screen when viewed through a polarizer and without a polarizer, respectively, and (c) shows the perceived views captured from the left-eye, right-eye, and center positions, respectively (see Visualization 2).

For full retro-reflective efficiency, each microsphere must be oriented correctly, with its uncoated part facing outward. In our implementation, the retro-reflective microspheres were deposited in random orientations and about 30-35% of them were useful. The inactive microspheres (appearing black in the Fig. 2(c) inset due to wrong orientation) produce a speckle and grainy effect in the displayed content because of the coherent nature of the lasers used in the projectors. Significant improvements in display brightness and quality can therefore be obtained with process improvements. Figure 6(c) shows the displayed stereo views captured with the camera placed in the left eye-box, in the right eye-box, and midway between the two eyes. The left and right eye positions show imperceptible crosstalk, while the center view shows both left and right views mixed together with the mirror reflections.

6.2 3D image capture and visualization

3D image capture through the dual-purpose screen was demonstrated by performing real-time computations on a backend processing module. First, the mirror reflections through the screen are captured by the head-mounted camera in a single shot. The field-of-view of the head-mounted camera is adjusted so that all mirrors in the screen are contained in the captured shot. A convex mirror forms the virtual image behind the mirror, so we tuned the focus of the camera close to the focal plane of the mirrors to capture the virtual views sharply. The captured image is then corrected for geometric distortion by performing a four-point projective distortion correction, and the resulting image is segmented to extract the perspective views. Figure 7(a) shows the corrected image with all captured views. Next, two horizontally adjacent perspective views are selected and processed through a geometric ray tracing procedure based on the known geometry of the system (i.e., screen distance, camera placement), in which the pixels are computationally projected from the camera sensor to the scene to render the resulting stereo views. The ray tracing removes the non-overlapping part of each view and provides fully overlapping stereo views with approximately correct parallax, which are then converted to the side-by-side stereo 3D format. Using the 20 mirrors in the screen, 16 viewpoints are generated, where each viewpoint provides a stereo 3D view of the user/scene from a different angle. Figures 7(b) and 7(c) show two different viewpoints of a captured object and user. Real-time visualization was provided by interactively selecting and relaying the desired viewpoint to a desktop or virtual reality (Oculus Rift) display. To facilitate real-time operation, the ray tracing is performed in the fragment shader of the Graphics Processing Unit (GPU).
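A minimal CPU sketch of this per-frame pipeline is given below; it is our illustration of the steps just described, not the authors’ implementation (which performs the ray tracing in a GPU fragment shader). The screen-corner coordinates, view size, and crop fraction are hypothetical placeholders, and the fixed crop stands in for the full geometric ray tracing.

```python
import cv2
import numpy as np

ROWS, COLS = 4, 5   # 5 x 4 mirror array -> 20 perspective views
VIEW = 800          # approx. pixels per view for still captures

def rectify_screen(frame, corners):
    """Four-point projective distortion correction: map the screen's corner
    pixels (top-left, top-right, bottom-right, bottom-left) to an upright
    (COLS*VIEW) x (ROWS*VIEW) image."""
    dst = np.float32([[0, 0], [COLS * VIEW, 0],
                      [COLS * VIEW, ROWS * VIEW], [0, ROWS * VIEW]])
    H = cv2.getPerspectiveTransform(np.float32(corners), dst)
    return cv2.warpPerspective(frame, H, (COLS * VIEW, ROWS * VIEW))

def extract_views(rect):
    """Segment the rectified image into individual mirror views. Each view is
    mirrored left-right by the reflection, so we flip it back."""
    return [[cv2.flip(rect[r*VIEW:(r+1)*VIEW, c*VIEW:(c+1)*VIEW], 1)
             for c in range(COLS)] for r in range(ROWS)]

def stereo_pair(views, r, c, crop=0.15):
    """Take two horizontally adjacent views, drop the non-overlapping margins
    (a crude stand-in for the ray-traced overlap), and pack them side by side
    in stereo 3D format."""
    m = int(crop * VIEW)
    left = views[r][c][:, m:]              # drop left view's non-shared margin
    right = views[r][c + 1][:, :VIEW - m]  # drop right view's non-shared margin
    return np.hstack([left, right])

# Hypothetical usage with manually picked screen corners:
# frame = cv2.imread("capture.jpg")
# views = extract_views(rectify_screen(frame, [(410, 310), (5480, 290),
#                                              (5560, 3730), (350, 3770)]))
# cv2.imwrite("stereo_sbs.png", stereo_pair(views, r=1, c=2))
```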

Fig. 7 (a) The perspective views of the scene captured in a single shot; (b) and (c) two different reconstructed viewpoints relayed to a stereo display.

In the current implementation, the field-of-view and focus of the camera are set for a working distance of 70 cm. For a typical arm’s-length working distance (40 cm-70 cm), changes in distance have a negligible effect on the capture quality. However, at larger working distances (e.g., >100 cm), the image quality is reduced because fewer pixels cover each view. On the other hand, changing the viewing angle with respect to the screen has no effect on the quality of the captured images but changes the viewing space, which is equivalent to tilting the camera array.

The image quality of the system was evaluated by measuring the experimental Modulation Transfer Function (MTF) for a perspective view using the slant-edge technique [36]. In the experimental setup shown in Fig. 8(a), an LCD monitor showing an edge with a 5° tilt was placed 70 cm from the screen. The perspective views of the slanted edge were captured in a single image using the system camera attached to the head-mounted unit. The captured image was segmented into perspective views, and one of them was used as the input to the slant-edge MTF measurement algorithm. First, Canny edge detection was applied to locate the edge [37]. The angle of the slanted edge was estimated by line-fitting the edge, the image was up-sampled by a factor of four, and the edge was straightened by applying an affine transformation using the computed edge angle. From the column average of the resulting image, an over-sampled edge profile, i.e., the one-dimensional Edge Spread Function (ESF), was calculated, as shown in Fig. 8(b). The Point Spread Function (PSF) was then obtained as the derivative of the ESF. Finally, the MTF was determined by taking the absolute value of the Fourier transform of the PSF.
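The measurement chain can be condensed into a short script. The sketch below is ours, not the authors’ code: instead of the Canny + affine-resampling steps above, it uses the equivalent projection-and-binning variant of the slant-edge method, and it assumes the input is a grayscale crop containing a single near-vertical edge.

```python
import numpy as np

def slant_edge_mtf(img, oversample=4):
    """Estimate the MTF from a slanted-edge crop.
    img: 2D float array with one near-vertical dark/bright edge.
    Returns (frequencies in cycles/pixel, normalized MTF)."""
    img = img.astype(float)
    rows, cols = img.shape

    # 1) Sub-pixel edge location in each row: centroid of the row gradient.
    grad = np.abs(np.diff(img, axis=1))
    x = np.arange(grad.shape[1]) + 0.5
    edge = (grad * x).sum(axis=1) / grad.sum(axis=1)

    # 2) Line fit gives the edge angle (the ~5 degree tilt).
    slope, intercept = np.polyfit(np.arange(rows), edge, 1)

    # 3) Project all pixels onto the edge normal and bin at 1/oversample pixel
    #    pitch: this yields the over-sampled Edge Spread Function (ESF).
    yy, xx = np.mgrid[0:rows, 0:cols]
    dist = xx - (slope * yy + intercept)       # signed distance from the edge
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    esf = np.bincount(bins.ravel(), weights=img.ravel()) / np.maximum(counts, 1)

    # 4) Derivative of the ESF gives the spread function; a Hanning window
    #    suppresses noise at the tails.
    psf = np.gradient(esf) * np.hanning(esf.size)

    # 5) The MTF is the magnitude of the Fourier transform, normalized at DC.
    mtf = np.abs(np.fft.rfft(psf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(psf.size, d=1.0 / oversample)  # cycles/pixel
    return freqs, mtf
```

Converting cycles/pixel into the cycles/degree reported below requires the angular pixel pitch of one mirror view, which follows from the per-view field-of-view and pixel count given in Section 5.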

Fig. 8 (a) Experimental setup for the MTF measurement, (b) captured edge profiles on the camera sensor for different configurations, and (c) measured MTF curves. With a 10% cutoff limit, the imaging resolution drops by one-third (from 12 to 8 cycles/degree) when a polarizer sheet is placed on top of the mirror surface, with a further drop (to 5 cycles/degree) when the retro-reflective microspheres are patterned on the polarizer.

With the method described above, the MTF curves in Fig. 8(c) were obtained for the bare mirror array surface, the mirror array surface covered with the polarizer film, and the dual-purpose screen with 100 µm and 300 µm retro-reflective islands. Figure 8(c) shows the loss in quality caused by the addition of the polarizer and the retro-reflective microspheres. Taking 10% of the MTF as the limit, the resolution of the mirror array surface without and with the polarizer sheet is 12 cycles/degree and 8 cycles/degree, respectively. A further loss in image quality is observed after the microspheres are deposited on the polarizer sheet, where the MTF limit decreases to 5 cycles/degree. The performance degradation can be attributed to the broadening of the PSF caused by the retro-reflective surface and the contrast loss introduced by the intermediate polarizer. The repeated micropattern of retro-reflective microspheres behaves as a diffraction grating, which expands the PSF and hence lowers the resulting MTF; this grating effect can also be seen by comparing the MTF curves for the screens with 100 µm and 300 µm islands. The image quality can be improved by using a better polarizer with increased transmission, by further optimizing the pitch of the retro-reflective pattern, or by using a higher-quality camera. As seen in Fig. 8(c), the system can resolve features of about 1 mm at 70 cm, which is close to standard midrange telepresence cameras or webcams.

6.3 Captured Light Efficiency

The shared use of the dual-purpose screen for imaging and display, and the multiple polarizer films in the system, reduce the amount of light captured by the camera. The capture light efficiency depends on the retro-reflective fill-factor and on the transmission of the polarizer film for parallel and orthogonally polarized light. Table 2 shows the amount of light at different stages of the system. Our screen has a 25% retro-reflective fill-factor, which allows 75% of the ambient light to pass through. The polarizer film used in the system transmits 43% of unpolarized light, while 87% and 0.002% transmission are achieved for parallel and orthogonal polarizations, respectively. As shown in the table, the camera captures only 22% of the scene light compared to direct imaging. The amount of captured light can be improved by replacing the intermediate polarizer with wavelength-selective notch coatings and performing wavelength multiplexing.
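The budget can be reproduced by multiplying the per-interface transmissions along the ambient-to-camera path. The stage list below is our reading of the prose (the ambient light crosses the patterned retro layer and the screen polarizer once on the way in and once on the way back); Table 2 remains the authoritative tabulation, and this naive product lands slightly below the 22% it reports, the difference coming from rounding and from how the parallel polarizer passes are counted.

```python
# Ambient-light transmission through the screen, relative to direct imaging.
# Stage values are taken from the text; the path ordering is our assumption.
stages = [
    ("through retro-reflective pattern (75% open area)",   0.75),
    ("through screen polarizer (unpolarized -> vertical)", 0.43),
    ("back through screen polarizer (parallel pass)",      0.87),
    ("back through retro-reflective pattern",              0.75),
    ("through camera polarizer (parallel pass)",           0.87),
]

t = 1.0
for name, factor in stages:
    t *= factor
    print(f"{name:52s} cumulative {t:6.1%}")
# Final value ~18%, of the same order as the ~22% reported in Table 2.
```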


Table 2. Loss in transmitted light at different interfaces of the system.

7. Conclusion

We proposed and demonstrated a combined 3D display and imaging system enabled by a head-mounted unit and a dual-purpose screen that simultaneously works as a 3D display and a multi-perspective virtual camera array. Stereoscopic 3D content is displayed using two head-mounted pico-projectors illuminating the retro-reflective microspheres patterned on the screen, while perspective views of the scene/user are recorded by a head-mounted camera capturing the reflections through the mirror array embedded in the screen. The separation of the imaging and display functions and their simultaneous operation are facilitated by polarization multiplexing of the captured and projected light. The display brightness and crosstalk per eye were characterized through the retro-reflective properties of the screen. Real-time 3D capture and discrete free-viewpoint stereo reconstruction were accomplished using ray tracing on the GPU and visualized on a head-mounted virtual reality display. Our test prototype provided high brightness (up to 240 cd/m2) and negligible crosstalk in 3D mode (<4%) using only 15 lumen pico-projectors and a 25% retro-reflective fill-factor on the screen. The 5 × 4 mirror array provided 20 perspective views of the scene using a single high-resolution camera.

Full mobility can be added to the system by introducing a screen detection and tracking module for dynamic projection mapping and imaging. The image resolution per view is currently limited to 5 cycles/degree, and the captured light efficiency is 22% due to the polarizer layer in the screen. Further improvements in the screen fabrication and computational imaging aspects would make the proposed system a unique platform for mobile devices and open new directions in wearable devices, computational imaging, and human-computer interaction.

Funding

European Research Council (ERC) under the European Union's Seventh Framework Program (FP7/2007-2013) / ERC advanced grant agreement (340200) and ERC Proof-of-Concept Grant No. 755154.

Acknowledgment

We would like to thank Kaan Akşit and Osman Eldeş for helpful discussions on computational image reconstruction.

References and links

1. J. P. Rolland, K. P. Thompson, H. Urey, and M. Thomas, “See-Through Head Worn Display (HWD) Architectures,” in Handbook of Visual Display Technology, J. Chen, W. Cranton, and M. Fihn, eds. (Springer Berlin Heidelberg, 2012), pp. 2145–2170.

2. B. Kress and T. Starner, “A review of head-mounted displays (HMD) technologies and applications for consumer electronics,” Proc. SPIE 8720, 87200A (2013).

3. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014).

4. M. Guillaumée, S. P. Vahdati, E. Tremblay, A. Mader, V. J. Cadarso, J. Grossenbacher, J. Brugger, R. Sprague, and C. Moser, “Curved transflective holographic screens for head-mounted display,” Proc. SPIE 8643, 864306 (2013).

5. K. Hong, J. Yeom, C. Jang, J. Hong, and B. Lee, “Full-color lens-array holographic optical element for three-dimensional optical see-through augmented reality,” Opt. Lett. 39(1), 127–130 (2014).

6. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013).

7. D. Dunn, C. Tippets, K. Torell, P. Kellnhofer, K. Aksit, P. Didyk, K. Myszkowski, D. Luebke, and H. Fuchs, “Wide Field Of View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors,” IEEE Trans. Vis. Comput. Graph. 23(4), 1322–1331 (2017).

8. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017).

9. S. T. S. Holmström, U. Baran, and H. Urey, “MEMS laser scanners: A review,” J. Microelectromech. Syst. 23(2), 259–275 (2014).

10. D. M. Krum, E. A. Suma, and M. Bolas, “Augmented reality using personal projection and retroreflection,” Pers. Ubiquitous Comput. 16(1), 17–26 (2012).

11. D. Kade, K. Akşit, H. Ürey, and O. Özcan, “Head-mounted mixed reality projection display for games production and entertainment,” Pers. Ubiquitous Comput. 19(3-4), 509–521 (2015).

12. S. Tachi, “Telexistence and Retro-reflective Projection Technology (RPT),” in Proceedings of the Virtual Reality International Conference (VRIC) (2003), pp. 69/1–69/9.

13. D. Héricz, T. Sarkadi, V. Lucza, V. Kovács, and P. Koppa, “Investigation of a 3D head-mounted projection display using retro-reflective screen,” Opt. Express 22(15), 17823–17829 (2014).

14. S. R. Soomro and H. Urey, “Design, fabrication and characterization of transparent retro-reflective screen,” Opt. Express 24(21), 24232–24241 (2016).

15. S. R. Soomro and H. Urey, “Light-efficient augmented reality 3D display using highly transparent retro-reflective screen,” Appl. Opt. 56(22), 6108–6113 (2017).

16. A. Gershun, “The Light Field,” J. Math. Phys. 18(1-4), 51–151 (1939).

17. M. Levoy and P. Hanrahan, “Light Field Rendering,” in SIGGRAPH (1996), pp. 31–42.

18. B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24(3), 765–776 (2005).

19. J. Yang, M. Everett, C. Buehler, and L. McMillan, “A real-time distributed light field camera,” in Eurographics Workshop on Rendering (2002), pp. 1–10.

20. Y. Taguchi, K. Takahashi, and T. Naemura, “Design and Implementation of a Real-Time Video-Based Rendering System Using a Network Camera Array,” IEICE Trans. Inf. Syst. 92(7), 1442–1452 (2009).

21. M. Zhang, Y. Piao, N. W. Kim, and E. S. Kim, “Distortion-free wide-angle 3D imaging and visualization using off-axially distributed image sensing,” Opt. Lett. 39(14), 4212–4214 (2014).

22. R. Schulein, M. DaneshPanah, and B. Javidi, “3D imaging with axially distributed sensing,” Opt. Lett. 34(13), 2012–2014 (2009).

23. Y. Taguchi, A. Agrawal, A. Veeraraghavan, S. Ramalingam, and R. Raskar, “Axial-cones: modeling spherical catadioptric cameras for wide-angle light field rendering,” ACM Trans. Graph. 29(6), 172 (2010).

24. W. Song, Y. Liu, W. Li, and Y. Wang, “Light field acquisition using a planar catadioptric system,” Opt. Express 23(24), 31126–31135 (2015).

25. M. Fuchs, M. Kächele, and S. Rusinkiewicz, “Design and fabrication of faceted mirror arrays for light field capture,” Comput. Graph. Forum 32(8), 246–257 (2013).

26. J. S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. 27(13), 1144–1146 (2002).

27. J. Kim, J. H. Jung, C. Jang, and B. Lee, “Real-time capturing and 3D visualization method based on integral imaging,” Opt. Express 21(16), 18742–18753 (2013).

28. A. Stern and B. Javidi, “3-D computational synthetic aperture integral imaging (COMPSAII),” Opt. Express 11(19), 2446–2451 (2003).

29. W. Matusik and H. Pfister, “3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes,” ACM Trans. Graph. 23(3), 814–824 (2004).

30. A. Maimone, J. Bidwell, K. Peng, and H. Fuchs, “Enhanced personal autostereoscopic telepresence system using commodity depth cameras,” Comput. Graph. 36(7), 791–807 (2012).

31. M. Dou, Y. Shi, J. M. Frahm, H. Fuchs, B. Mauchly, and M. Marathe, “Room-sized informal telepresence system,” in Proceedings of IEEE Virtual Reality (2012), pp. 15–18.

32. “Holoportation by Microsoft,” https://www.microsoft.com/en-us/research/project/holoportation-3/.

33. “High Contrast Linear Polarizer,” http://www.polarization.com/polarshop/.

34. S. R. Soomro, E. Ulusoy, M. Eralp, and H. Urey, “Dual purpose passive screen for simultaneous display and imaging,” Proc. SPIE 10126, 101260N (2017).

35. “Microvision Pico Projectors ShowWX+,” http://www.microvision.com/product-support/showwx/.

36. M. Estribeau and P. Magnan, “Fast MTF measurement of CMOS imagers using ISO 12233 slanted-edge methodology,” Proc. SPIE 5251, 243–252 (2004).

37. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986).

Supplementary Material (2)

Visualization 1       Demonstration of simultaneous 3D display and imaging using the dual-purpose passive screen.
Visualization 2       Demonstration of combined 3D display and imaging.



