Abstract

We propose an interactive optical 3D-touch user interface (UI) that combines a holographic light-field (LF) 3D display with a system that detects the color of the light scattered from the touched 3D image. In the proposed system, color information embedded in the LF is used to identify the 3D position and detect the movement of the interaction point in 3D space with only a single RGB camera. We demonstrate a real-time interactive implementation of the interface working at 12 frames per second, which verifies the feasibility of the proposed concept.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Touchable user interfaces (UIs) are so convenient and widespread in two-dimensional (2D) displays nowadays that we usually assume such a feature is included in any 2D display we interact with. On the other hand, glasses-free three-dimensional (3D) displays are constantly improving and are increasingly being adopted in the automotive and medical industries [1–3] in the form of head-up displays and autostereoscopic displays. Looking for a natural expansion of the widespread 2D touchable UIs, a lot of attention has been directed to the creation of light-field (LF) 3D displays and aerial image displays with a UI that allows interaction with the reconstructed content and offers the user an experience closer to that of touching something that is not really there [4–11]. As described in the next section, most 3D displays that support user interaction in 3D space rely on sensors that are separated from the display and its content. This differs from the usual 2D touchable screens, where the content and the sensor that measures the interaction lie in the same physical plane. A mismatch between the content and the interaction point can cause an uncomfortable experience for the user. Our group has previously reported in [4] an approach that directly uses the light scattered by the user when interacting with the reconstructed light of a holographic light-field (HLF) display [12], captured by an RGB sensor. The implementation previously reported in [4] was able to prove the main concept used in this study. However, the system was limited to a binary response, only discriminating between the existence or absence of an interaction; it made use of LF images computed outside the processing pipeline; and it had no real-time update to allow the implementation of a UI. In addition, the image processing algorithm was not optimized for different light conditions or for the use of diverse objects for interaction.

To increase the applicability of the previously proposed method based on the detection of scattered light, this study presents an image processing algorithm capable of detecting the position and color of the scattered light in real time and using it to track the finger in 3D space. For this purpose, the color of the scattered light along the direction perpendicular to the RGB sensor is used as an extra cue to realize real-time 3D tracking of the user's finger with a single sensor. The LF image is updated according to the movement of the user, so the interaction is not affected by any mismatch between the user and the 3D image. This approach is a new implementation of a 3D interface that tracks the position of the user in 3D space in real time, based on directly touching the reconstructed LF. We apply this approach to an HLF display in this study, but it can be applied to any 3D display that projects a real 3D image.

2. Related work

2.1 Gesture interfaces in 3D space

Gesture interfaces in 3D space rely on a system that reads out the movements of a user in space. In systems that track the user without requiring any additional hardware or markers attached to the user's body, the hand is one of the preferred ways for a user to interact with content. Therefore, hand tracking and hand gesture recognition are very extensive topics [13,14]. Tracking and recognition should be performed at a speed greater than the user's movement speed. It should be stressed that 3D-touch interfaces are different from gesture interfaces. In a gesture interface, the interaction takes place in a space separate from the content of the display. In contrast, in a 3D-touch interface the user interacts directly with the content of the display. The design and methodology of the interface devices are completely different in 3D-touch and gesture interfaces, although some tools for gesture interfaces, such as gesture sensing devices, can be adapted.

2.2 Touch interfaces with aerial displays

Aerial displays project 2D images into 3D space to generate a floating image [5,6,15]. There are reported systems that also support interaction with bare hands, relying on infrared sensors in the case of [5] and a combination of infrared and depth sensors in [6] to track the finger and obtain information from 3D space to interact with the content. These interfaces should be extended to 3D, which is not a trivial task, since rendering and updating all the views of a 3D object in real time is a computationally demanding process.

2.3 Touch interfaces with 3D displays

There are some reported studies that combine 3D displays with interactive interfaces [7–10,16–18]. The most commonly reported approaches are those based on the Leap Motion controller [8,16] and the Kinect sensor [7,9]. The drawback in [7,8,16] has been that the implemented displays are too small to create a more complex UI, while [9] presents a system in which the image is created at a different place from where the interaction takes place. Other LF systems realize real-time interaction using a conventional PC interface, such as [17] and [10], the latter being a commercially available LF display. [10] also offers on its website a guide to using Leap Motion with it, but the interaction is shown by a widget hand inside the screen that interacts with the content (like a mouse cursor) according to the user's movement, without realizing a physical match between the content and the hand of the user. This problem is illustrated in Fig. 1, where we can see that when one system is used for the display part and another for the sensing part, a mismatch or registration error arises.

 

Fig. 1. Registration mismatch between the display system and the sensing system. The sensing system registers an interaction even if the user has not interacted with the content.

2.4 Light-field touch sensing (LFTS)

In order to realize an interface in which direct interaction with the displayed content can be detected, our group has proposed directly using the light scattered by the user when touching the 3D content [4]. A 3D UI involving a projection-type HLF display (see the principle in Fig. 2) was used to create a real image that the user can touch with a bare hand. The light scattered by the user is detected by an RGB sensor located behind the transparent screen, eliminating the matching problem and creating a more natural UI (Fig. 3(a)). In this paper we refer to the concept of using the light of the reconstructed image as an input signal to create a UI as light-field touch sensing (LFTS). A related method that uses the LFTS technique has also been reported in [11], where a system using LF for both projection and capture was proposed as the basis of an occlusion-robust hand-tracking method. However, the current LFTS systems are not capable of tracking or updating the LF in real time. In addition, a single RGB sensor was used in the system of [4], capturing only a 2D projection of the 3D space. Although it is possible to use other approaches (e.g., a stereo camera) to obtain depth information, another strategy that measures the depth information using the LF projection itself can be adopted.

 

Fig. 2. Reconstruction of light rays using elementary holograms.


 

Fig. 3. (a): Previous UI based on scattered light detection [4]. The "OK" signal means the interaction has been detected. (b): Proposed UI allowing the touch, tracking and LF modification corresponding to the user’s movements.


3. Interactive 3D touch interface using a HLF display

3.1 Overview

As stated in the previous section, the system presented in this paper is based on the one reported in [4], which consists of a 3D user interface involving a projection-type HLF display [19–21] and a camera to detect the scattered light that results from the contact between a real 3D image and an object. The HLF display unit consists of an array of holographic optical elements (HOEs) that works as an array of convex mirrors (Fig. 2). Light fulfilling the Bragg condition is reflected, while other light passes through. By impinging the collimated output of a commercial-grade projector on this array, the ray-based LF of an object can be reconstructed. The light rays can be manipulated by modifying the pixels of the projector. The intersection of the light rays in front of the HOE array (screen) recreates the LF of an object, which is an autostereoscopic image that does not require glasses to appreciate its 3D features. The touch-sensing unit consisted of an RGB camera located behind the display (Fig. 3(a)). The interaction was detected by subtracting the background from the image captured by the camera and detecting the maximum value between two colors (blue or green). This interface was demonstrated with a static 3D image of two buttons, for which a binary response (i.e., touch or no-touch) was detected by the system as a preliminary concept verification.

3.2 Limitations of previous implementation

The implementation reported in [4] is a system that only provides a binary response (Fig. 3(a)). Since it displays a still LF, functions that require updating the displayed content are not supported. Moreover, the setup is not fixed, and a calibration between the projector's pixels and the hogels of the HOE screen is required so that the integral image can be pre-distorted and correctly displayed [22], a computationally time-consuming operation. All these operations need to be performed in a time short enough to realize an effective UI.

Regarding the detection of scattered light, in [4] only the maximum value between two color channels and a circle detection were used to discriminate the existence of an interaction. Color detection of a static LF was reported in [23] using this system, but again real-time interaction was missing. Considering that the color and shape measurements can be affected by external light, different touching angles, and the use of diverse objects for interaction, a more robust detection method is required. As with the LF update, the interaction detection should also be performed in a short time to be usable in a UI.

In [4], the camera was placed behind the screen, making use of its transparency. Although this sensing position allows detection over the whole screen plane, it can be heavily affected by light reflected by the user's body and the scene, as well as by the light scattered by the holographic screen itself, interfering with the measurement. However, since we are aiming for a UI that tracks the finger in the screen plane, changing the camera location requires a modification of the tracking method, as explained in the next section.

4. Method

In the proposed system, the camera position is changed from the back of the HOE screen to the top of the system to reduce the amount of undesired light, which might cause false positives. In the following discussion, we consider the center of the HOE screen to be placed at the origin of the xy-plane, with the z-axis perpendicular to the screen. Therefore, the camera placed in the position indicated in Fig. 3(b) captures the xz-plane. Once a user touches the object reconstructed by the LF display, the position of the finger can be detected in the xz-plane from the coordinates in the captured image, realizing 2D tracking of the user's finger (see Fig. 4(a)), as explained in section 4.1.

The image captured by an RGB camera is a 2D projection of the 3D scene onto the camera's sensor, in which the depth information (information along the y-axis in this case) is lost in the capture process. To realize 3D positioning of an object while avoiding the use of external motion sensors or stereo-camera approaches, we propose to use the color of the LF itself as an extra cue, so that a single RGB camera can be used as a 3D motion sensor (section 4.2). If different colors are used along the y-direction of the object reconstructed by the LF, as depicted in Fig. 2, we can compensate for the information that is lost when capturing an image (i.e., the distance along the y-direction) by detecting the colors along that direction, practically turning the combination of an RGB camera and an LF display into a 3D position detector.

 

Fig. 4. (a): 2D tracking. Detection of scattered light to track the fingertip of the user in a 2D plane. (b): Color tracking. Detection of different colors along the y-axis to add a new measurement dimension. Both (a) and (b) combined realize 3D tracking of the fingertip.

For the interactive 3D-touch display, the LF projected by the system needs to be updated based on the tracking of the user's finger. In order to have an LF that follows the contact point of the user, real-time reconstruction of the LF was implemented. The hurdle in this case is that, as mentioned before, the image needs to be pre-distorted according to a look-up table (LUT) like the one obtained in [22] to be correctly displayed by our system. This was solved by using a high-speed Python library [24] that enabled us to perform this process on a real-time scale. For the real-time implementation, the finger detection step also had to be optimized. The result of the implementation is presented in section 5.2.

4.1 Tracking of interaction using 2D movement detection

A figure summarizing the whole detection process is shown in Fig. 5. First, the touch-event detection is performed, starting with background subtraction of the camera's captured image. This is repeated for a continuous stream of frames to look for any scattered light. When scattered light is detected after subtraction, contrast stretching is applied to the image to better locate the interaction point. Next, a binary mask is made from the subtracted image and multiplied with the captured image to identify the touch-detected region.

 

Fig. 5. Details on how the color identification is used for detecting the movement of the user.


As the next step, image processing modules such as small-cluster removal and erosion and dilation filters are applied to obtain a good-quality mask that is sufficiently robust to noise. To further increase the robustness of the process, the contours of the main structures in the binary mask are extracted. This extraction detects the chain with the largest number of pixels (contours are one pixel wide), which usually corresponds to the main interaction point. In this way, the secondary noisy contours shown in the binary mask of Fig. 6 can be discarded.
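
As a reference, the following minimal sketch illustrates how such a detection pipeline could be implemented. The paper does not specify a software library, so the use of OpenCV as well as the threshold and kernel values below are assumptions for illustration only.

```python
import cv2
import numpy as np

def detect_touch_contour(frame_bgr, background_bgr, thresh=40):
    """Return the largest scattered-light contour and its cleaned mask, or (None, None)."""
    # Background subtraction on grayscale versions of the frames
    gray_f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_f, gray_b)
    # Contrast stretching to emphasize the scattered light
    stretched = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX)
    # Binary mask of candidate touch pixels (threshold value is illustrative)
    _, mask = cv2.threshold(stretched, thresh, 255, cv2.THRESH_BINARY)
    # Erosion/dilation to remove small clusters and close holes
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep only the contour with the largest number of pixels (main interaction point)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None, None
    largest = max(contours, key=len)
    clean = np.zeros_like(mask)
    cv2.drawContours(clean, [largest], -1, 255, thickness=cv2.FILLED)
    return largest, clean
```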

 

Fig. 6. Image processing pipeline to extract the color interaction.


Next, the position of touch-detected area in $xz$-coordinates is identified. Since the camera does not move with respect to the LF display, a fixed correspondence relating the reference frames of the interaction-detection RGB camera and the display is previously established. This will allow the system to know the $xz$-location inside the HLF display in which the interaction is taking place by using the camera coordinates.
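
The paper only states that a fixed camera-to-display correspondence is established in advance. One possible realization, assuming the interaction region can be approximated by a plane imaged by the camera, is a planar homography as sketched below; the point coordinates and function names are purely illustrative.

```python
import cv2
import numpy as np

# Hypothetical calibration: four reference points measured once in camera pixels
# and in display xz-coordinates (mm). Since the camera is fixed with respect to
# the screen, the resulting mapping can be reused for every frame.
cam_pts  = np.float32([[120,  80], [520,  80], [520, 400], [120, 400]])  # pixels
disp_pts = np.float32([[-60,   0], [ 60,   0], [ 60,  90], [-60,  90]])  # mm (x, z)
H, _ = cv2.findHomography(cam_pts, disp_pts)

def camera_to_display(u, v):
    """Map a camera pixel (u, v) to display xz-coordinates via the fixed mapping."""
    p = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)
    return float(p[0, 0, 0]), float(p[0, 0, 1])
```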

4.2 Tracking of interaction using color change detection

The interaction detection algorithm described in the previous section provides a good estimate of the position of the user's finger in the xz-plane. However, this method alone is not capable of detecting the movement of the finger along the remaining y-axis. To provide a complete interface that can detect the direction of the user's movement along the y-axis, we propose a color measurement and classification algorithm that provides the system with extra information about the user's position.

To detect the movement along the y-axis, we used a sphere with different colors in the upper and lower hemispheres (bicolor sphere), separated by a dark region to provide a color transition (see Fig. 5). In Fig. 5, the color detected in a single loop is denoted as $C$, and the sequence of $C$ over two contiguous frames is denoted as $CS$.

If the user scatters the light of the lower hemisphere (red light), the system detects the red color ($C=$'R'), which indicates that the position of the finger is below the center of the bicolor sphere. This triggers a displacement of the bicolor sphere in the downward direction (negative y-axis). In fact, for robustness the decision is made when $CS=$'RR', as shown at the bottom of Fig. 5. The displacement step was set to values between 5 and 8 mm (the approximate vertical width of a finger). After repeated color measurements and displacements, once the dark center of the bicolor sphere has been displaced to the interaction point (i.e., the user's fingertip), the mixture of red and blue light and the dark region causes no color detection (denoted as $C=$'N' in Fig. 5). In this case, the system stops triggering the movement of the LF. This is the cue that indicates the y-axis position of the user's fingertip, which corresponds to the distance the LF was displaced. Conversely, if the user scatters the light of the upper hemisphere (blue light, i.e., $CS=$'BB'), the opposite sphere movement takes place until the blue identification fails and the sphere ceases its movement. In this way, the system uses the LF to scan the region of the color ball for the correct y-axis location of the interaction point. Naturally, if the user moves the interaction point along this axis, the system detects the displacement when either red or blue scattered light is detected in contiguous frames, and the ball can then track the fingertip movement.
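
This decision logic can be summarized as a small sketch, assuming the step size of 6 mm (within the 5–8 mm range stated above) and the function name are illustrative choices rather than the authors' actual code:

```python
def next_sphere_y(color_sequence, sphere_y, step_mm=6.0):
    """
    Decide the next vertical position of the bicolor sphere from the colors
    detected in the two most recent frames (CS in the text): 'R' = red,
    'B' = blue, 'N' = no color detected.
    """
    cs = "".join(color_sequence[-2:])
    if cs == "RR":                 # lower (red) hemisphere scattered: finger below center
        return sphere_y - step_mm  # move the sphere downward (negative y)
    if cs == "BB":                 # upper (blue) hemisphere scattered: finger above center
        return sphere_y + step_mm  # move the sphere upward (positive y)
    return sphere_y                # dark band or mixed colors: fingertip reached, stop
```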

As shown in Fig. 5, the process is repeated for each captured frame. This color detection enables the detection of the user's movement along the y-axis, while the detection in the xz-plane is achieved by extracting the interaction point corresponding to the pixel located closest to the screen in Fig. 6, which is already a 2D value. Although this method does not provide the y-coordinate with the precision achieved in the xz-plane, the change of direction is a cue useful enough to implement some basic gestures and to significantly enhance the capabilities of the interface.

4.3 Color detection and classification model

To extract the color value of the masked region, which is the interaction point in an image, we have to consider some difficulties; for example, the scattered light can be mixed with other sources of light coming from the environment surrounding our system (office illumination, reflections, etc.). Bearing this in mind, we apply a normalization of the $[R,G,B]$ values of each pixel in the interaction frame. Let the captured image be a 3-dimensional array, each channel having a size of $N_{\textrm{x}} \times N_{\textrm{z}}$ pixels. In the pixel at $(x,z)$, the captured RGB value corresponds to $[R_{\textrm{C}}(x,z),G_{\textrm{C}}(x,z),B_{\textrm{C}}(x,z)]$ (one value for each color channel). In our system, each pixel value is normalized as follows:

$$R_{\textrm{N}}=\frac{R_{\textrm{C}}}{R_{\textrm{C}}+G_{\textrm{C}}+B_{\textrm{C}}} , \quad G_{\textrm{N}}=\frac{G_{\textrm{C}}}{R_{\textrm{C}}+G_{\textrm{C}}+B_{\textrm{C}}} , \quad B_{\textrm{N}}=\frac{B_{\textrm{C}}}{R_{\textrm{C}}+G_{\textrm{C}}+B_{\textrm{C}}} .$$

This provides a stronger color signal across the three channels, as shown in Fig. 6 (normalization). It can be seen that we obtain an image with a region of enhanced color at the tip of the touching object (a stylus in the case of Fig. 6).

Here we are mainly interested in the region with the greatest color change. The interaction point that is most prone to show a color variation related to the generated LF is the one closest to the screen, i.e., the one with the minimum z-coordinate, because the fingertip is touching the LF there. To prevent a noisy value from being taken as this z-coordinate, the corresponding minimum coordinate of the largest detected contour is used.

For color classification, the color content of a square box around the touch-detected position in an image is measured. To extract a color vector from this box we sum up the pixel values in the box and then normalize them as follows:

$$\vec{C}_{\textrm{d}} = \left[\frac{R_{\textrm{t}}}{R_{\textrm{t}}+G_{\textrm{t}}+B_{\textrm{t}}}, \frac{G_{\textrm{t}}}{R_{\textrm{t}}+G_{\textrm{t}}+B_{\textrm{t}}}, \frac{B_{\textrm{t}}}{R_{\textrm{t}}+G_{\textrm{t}}+B_{\textrm{t}}}\right].$$

In Eq. (2), the subscript $\textrm{t}$ denotes the sum of the respective color channels in the box that defines the touched area. After averaging several samples of the detected-color vector $\vec{C}_{\textrm{d}}$, the averaged value is used to identify each captured color. A 2D projection of the detected colors onto the normalized RG plane is shown in Fig. 7. The results of measuring six colors with this method, reconstructed by the system as solid LF balls (red, green, blue, yellow, cyan, magenta) and scattered by a finger, are depicted in Fig. 7, along with a "No-Color" (NC) case that corresponds to the color of the interaction target when it is not scattering the LF. The distribution shown in Fig. 7 was acquired for 20 touching events for each color. Within the system implementation, this retrieval and storage of color information can be implemented as a photometric calibration prior to the initialization of the UI.
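
A minimal sketch of Eqs. (1) and (2) is given below, assuming an image array indexed as [row, column, channel] with rows along z and columns along x; the box half-width of 10 pixels is an illustrative assumption, not a value reported in the paper.

```python
import numpy as np

def normalize_rgb(image):
    """Per-pixel normalization of Eq. (1): each channel divided by R + G + B."""
    img = image.astype(np.float64)
    s = img.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0                         # avoid division by zero on black pixels
    return img / s

def detected_color_vector(image, x, z, half_box=10):
    """Detected-color vector C_d of Eq. (2): sum the channels in a square box
    around the touch-detected position, then normalize the sums."""
    patch = image[max(z - half_box, 0):z + half_box,
                  max(x - half_box, 0):x + half_box].astype(np.float64)
    totals = patch.reshape(-1, 3).sum(axis=0)   # [R_t, G_t, B_t]
    return totals / totals.sum()
```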

 

Fig. 7. Distribution of the sampled color frames in the RG normalized color space with confidence ellipses of 95% (inner ellipse) and 99% (outer ellipse) around each color distribution.

The color measurements used for the finger movement detection process outlined in Sec. 4.2 result in the color category classes shown in Fig. 7. Several measurements of the vector $\vec{C}_{\textrm{d}}$ in Eq. (2) provide us with the mean-value vectors $\vec{\mu}_{\textrm{N}}$ ($N\in [R,G,B,Y,C,M,NC]$) and standard deviations $\sigma_{\textrm{N}}$ of those color clusters. This information is saved in the system as a color database and used for later identification. The use of the normalized $RGB$ space of Eq. (2) allows us to have a color measurement unaffected by unevenness in the intensity of the captured values. Since we are using this space, it suffices to use only its $R$ and $G$ components without any information loss (since $B=1-R-G$). The question that arises now is what is the most adequate way to match a captured color value $\vec{C}_{\textrm{cap}}=(C_{\textrm{R}},C_{\textrm{G}})$ with the mean value of each of the clusters in the database, $\vec{\mu}_{\textrm{N}}=(\mu_{\textrm{NR}}, \mu_{\textrm{NG}})$, to implement a movement cue along the y-axis. The most straightforward approach is to map a captured color value into the color space of Fig. 7 and measure the Euclidean distance from the mean database values to the captured value. The Euclidean distance $d_{\textrm{E}}$ is given by:

$$d_{\textrm{E}}(\vec{C}_\textrm{{cap}}, \vec{\mu}_\textrm{{N}})=\sqrt{(C_{\textrm{R}}-\mu_\textrm{{NR}})^{2}+(C_\textrm{{G}}-\mu_\textrm{{NG}})^{2}}.$$

However, as is easily seen from the color distribution in Fig. 7, the color values are not evenly spread along the different axes of the RG-space. To take into account the correlations between the variables [25] and to improve the accuracy of this method, we also measure the Mahalanobis distance of $\vec{C}_{\textrm{cap}}$ with respect to each of the $N$ clusters with mean value $\vec{\mu}_{\textrm{N}}$ [23]. In this case, the expression for the Mahalanobis distance $d_{\textrm{M}}$ is given by:

$$d_{\textrm{M}}(\vec{C}_\textrm{{cap}}, \vec{\mu}_\textrm{{N}})=\sqrt{(\vec{C}_\textrm{{cap}}-\vec{\mu}_\textrm{{N}})^{T}\Sigma^{-1}_\textrm{{N}}(\vec{C}_\textrm{{cap}}-\vec{\mu}_\textrm{{N}})}.$$

In Eq. (4), $\Sigma^{-1}_{\textrm{N}}$ is a $2 \times 2$ matrix that denotes the inverse covariance matrix of the data associated with the color cluster $N$ in the $RG$ plane of Fig. 7. Using the eigenvectors and eigenvalues of the covariance matrix, we are able to plot confidence ellipses around each distribution. A captured color vector $\vec{C}_{\textrm{cap}}$ has to be located inside one of the ellipses defined by a color distribution to be classified as a value corresponding to that distribution. Since the captured values are assumed to be normally distributed, the squared Mahalanobis distance follows a chi-squared ($\chi^{2}$) distribution with 2 degrees of freedom [25]. Therefore, to plot the confidence ellipses, as well as to set the detection thresholds for the detected values of the vector $\vec{C}_{\textrm{d}}$, the major and minor axes are scaled according to the values of this distribution for different confidence levels (see Fig. 7).
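
The classification step described by Eqs. (3) and (4) can be sketched as follows; this is not the authors' code, and the dictionary layout, function name, and use of SciPy are assumptions for illustration.

```python
import numpy as np
from scipy.stats import chi2

def classify_color(c_cap, clusters, confidence=0.99):
    """
    Assign a captured (R, G) chromaticity to the closest color cluster using the
    Mahalanobis distance of Eq. (4), and reject it (return 'N') when it falls
    outside the chosen confidence ellipse.
    clusters: dict mapping a label to (mean 2-vector, 2x2 covariance matrix).
    """
    # The squared Mahalanobis distance of a 2D Gaussian follows chi-squared with
    # 2 degrees of freedom, so the ellipse boundary is sqrt(chi2.ppf(p, 2)).
    limit = np.sqrt(chi2.ppf(confidence, df=2))
    best_label, best_d = "N", np.inf
    for label, (mu, cov) in clusters.items():
        diff = np.asarray(c_cap, dtype=float) - np.asarray(mu, dtype=float)
        d = np.sqrt(diff @ np.linalg.inv(cov) @ diff)
        if d < best_d:
            best_label, best_d = label, d
    return best_label if best_d <= limit else "N"
```

The Euclidean variant of Eq. (3) corresponds to replacing the covariance matrix with the identity and the ellipse with a fixed-radius circle.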

5. Experiments

5.1 Experimental setup

The scheme of the experimental setup is shown in Fig. 8. It is based on the setup described in [4,23,26]. The main component is the HOE screen [27,28], which reflects the light impinging on it from a commercial-grade projector. The output of the projector is collimated by a lens to create parallel light that can be used as the reconstructing beam of each HOE. Upon reflection, the pixels are reflected into a fan of angles that cross with the pixels of neighboring HOEs, generating the real image in front of the screen (see Fig. 2). To measure the correspondence between the pixels of the projector and the center of each HOE in the screen, the calibration process described in [22] is followed to obtain the LUTs that match the center of each HOE in the array with a corresponding pixel of the projector. The projector is connected to a computer that controls the integral image used for the LF reconstruction. The LF is initially generated by an algorithm based on the open-source 3D computer graphics software Blender [29]. The rendered integral image is pre-distorted according to the LUTs matching the HOEs and the pixels of the projector. After the pre-distortion step, the integral image is projected and the LF is reconstructed. The interaction part consists of an RGB camera located on top of the HOE screen, imaging the zone in front of the screen; care should be taken to image the entire screen. When the system is initialized, the camera captures frames of the scene and the algorithm described in section 4.2 is applied, which realizes the 3D tracking and updates the LF image accordingly. The process is repeated for as long as the system remains active.

 

Fig. 8. Experimental setup.


5.2 Software optimization

Software optimization of the system was necessary in order to attain real-time processing. The first step was the optimization of the pre-distortion step between the rendered integral image and the calibrated LUTs. This was achieved by using the high-speed Python library Numba, which optimizes the code using the LLVM compiler library [24]. Another optimization was related to the update of the integral image to create a moving LF. Since the Blender implementation was computationally expensive and not suited for real time, generating a new frame every cycle was not possible. Therefore, the same integral image was displaced by padding it with empty arrays (black frames) and re-projecting it, which avoids a new LF rendering and achieves a higher speed. The final processing times are summarized in Table 1, using a captured image size of 1328 $\times$ 600 pixels. If the size of the captured image is reduced to 1328 $\times$ 300 pixels, a frame rate of up to 12 frames per second (FPS) can be achieved.
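
The sketch below only illustrates how a LUT-based remapping of this kind can be accelerated with Numba's njit/prange; the actual LUT layout follows [22] and is not reproduced here, so the per-pixel row/column arrays and 8-bit image assumption are hypothetical.

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True, cache=True)
def predistort(integral_image, lut_row, lut_col):
    """Remap the rendered integral image onto projector pixels using precomputed
    look-up tables that give, for every projector pixel, a source coordinate in
    the integral image (out-of-range entries are left black)."""
    h, w = lut_row.shape
    src_h, src_w = integral_image.shape[0], integral_image.shape[1]
    out = np.zeros((h, w, 3), dtype=np.uint8)   # assumes an 8-bit RGB image
    for i in prange(h):                         # parallel loop over projector rows
        for j in range(w):
            r = lut_row[i, j]
            c = lut_col[i, j]
            if r >= 0 and r < src_h and c >= 0 and c < src_w:
                for ch in range(3):
                    out[i, j, ch] = integral_image[r, c, ch]
    return out
```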


Table 1. Processing times per frame for the HLF display interactive system.

A sample of the image output of the interaction is shown in Fig. 9, where we can see how the fingertip is illuminated with green light when the finger passes through the center of the HOE screen. This blob of light is used to confirm that the user is actually touching the reconstructed LF.

 

Fig. 9. Example of the image processing output detailed in Fig. 6 using a finger that passes through a LF reproduction. The green blob is enhanced and very noticeable in (b), distinguished from the other light sources that illuminate the finger when it is not within the LF region (a and c). See Visualization 1.


5.3 Color identification tests

In order to evaluate the accuracy of the color classification shown in Fig. 7, we conducted an experiment in which the LFs of six solid-color spheres were reconstructed and displayed one by one on the screen. The user then scattered the light of the LF while the system performed the color identification with both the Euclidean and Mahalanobis distances. The scattered light was captured by the RGB camera and then, following the algorithm described in Fig. 6 and sections 4.1 and 4.2, converted to the vector $\vec{C}_{\textrm{d}}$ of Eq. (2). The results of the color detection for each of the tested colors are shown in Table 2. To calculate them, the number of correct classifications was considered, counting both misclassified frames and empty frames as failed detections. In the case of the Euclidean distance, the detection threshold was set to a sphere of radius $r=2\sigma_{\textrm{c}}$, with $c \in [R,G,B,Y,C,M,NC]$. For the Mahalanobis distance decision threshold, we used the ellipse that corresponds to the 99% confidence interval, as illustrated in Fig. 7. By comparing both metrics, it is concluded that the Mahalanobis distance is a better fit for the shape that the color distribution acquires in the normalized $RG$ space.

Table 2. Comparison of the detection effectiveness for each of the distance metrics used. The identification effectiveness was tested on a sample of 250 frames per color. Except for the color red, the Mahalanobis distance generally yields better results than identification using the Euclidean distance.

5.4 Interaction in the xz-plane (drag gesture)

The location of the interaction point is readily extracted from the contour extraction described in Fig. 6. During the detection of scattered light, the displacement of the finger is measured, and once the finger has moved a distance roughly equivalent to the size of the LF sphere in camera coordinates, the LF displacement is triggered and the finger is re-illuminated by the displaced LF. An application of this approach is demonstrated by realizing a drag gesture akin to the one usually used to unlock a mobile device. Since we are detecting a 2D displacement for the drag gesture, the color identification at this stage is only used to confirm that the user is actually touching the green LF and not other external light.
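
The triggering rule just described can be summarized in a short sketch; re-centering the sphere on the finger is an illustrative choice, and the names are hypothetical.

```python
def drag_step(sphere_x_px, finger_x_px, sphere_width_px):
    """Trigger a horizontal LF displacement once the fingertip has moved roughly
    one sphere width away from the currently displayed sphere position."""
    if abs(finger_x_px - sphere_x_px) >= sphere_width_px:
        return finger_x_px      # re-center the LF sphere on the finger
    return sphere_x_px          # otherwise keep the LF where it is
```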

A frame sequence showing this application is presented in Fig. 10 and Visualization 2. The system is able to follow the position of the fingertip along the horizontal direction to drag the LF of a sphere and unlock the screen. The use of color recognition also allows us to place another informative LF (an 'unlock' sign) on top of the lower green sphere used as the interaction light. Even if the user scatters the blue light of the upper sign while interacting, this is not recognized as an input, thanks to the color identification.

 

Fig. 10. Realization of the drag gesture using the light scattered from the LF (see Visualization 2).


5.5 Movement direction detection in the y-direction using color information

By implementing the pipeline detailed in Fig. 5, we realized a demonstration in which the user can move the LF of a bicolor sphere up and down. The implementation of the method described in section 4.2 provided an interface that allowed the user to move the ball up and down at will (see Fig. 11).

 

Fig. 11. Demonstration of user’s interaction with the LF along the y-axis using color cues. The position of the LF follows the finger that scatters the different reproduced colors (see Visualization 3).


Another demonstration of how color can be used to locate different positions along the y-axis consisted of a music volume control, with the detection of each color indicating an increase or decrease in volume (see Fig. 12). Using colors also makes it possible to have more buttons with different functions in a reduced screen area, simply by placing several colors at different heights.

 

Fig. 12. Demo for controlling the volume of an audio system. (a) When the volume is set to 'MAX', the user can lower it by scattering the green LF (see the leftmost picture, fingertip's color). Correspondingly, the user can increase the volume again by scattering the blue LF. This interaction only uses the color values of Fig. 7 with the Mahalanobis distance to realize the required task. (b) Front view of the reproduced LF, with the volume indicator placed behind the bicolor sphere (see Visualization 4).

5.6 Combination of xz-plane and y-axis movements

Combining the location in the xz-plane with the direction of movement along the y-axis, we realized a scroll-ball capable of moving a reconstructed cuboid. A detailed scheme of its functions is depicted in Fig. 13, and the algorithm is shown in Fig. 14. Figure 15 depicts how this scroll-ball was able to move the LF in all the indicated directions. As the interactions are only implemented with the bicolor sphere, nothing happens when a finger touches an area different from the bicolor sphere. This is because the cuboid object and the background content have colors (orange, magenta, and dark blue) different from those of the scroll-ball, thereby minimizing interference and errors. Increasing the degrees of freedom in the usage of color is one of the issues to be addressed in future studies.

 

Fig. 13. Scroll-ball used to spin the reproduced LF.


 

Fig. 14. Scroll-ball algorithm.

 

Fig. 15. Demonstration of how the scroll-ball of Fig. 13 is used to interact with a LF reproduction. Arrows in the middle of the screen indicate to the user the direction of the movement. The dotted circle in the leftmost picture shows the finger scattering the LF (see Visualization 5).


These demonstrations prove that the concept of reading out the color of the reconstructed LF is a feasible way to create an interface that is not only robust but also contact-free, requiring nothing more than a single RGB camera. Moreover, since the technique uses the LF directly, the problem of calibrating the content with the position of the user is naturally solved.

6. Conclusion

This study shows the feasibility of using the color information in the LF reproduced by an HLF display to attain an interactive interface that naturally solves the problem of aligning the displayed content with the signal used to detect the interaction. The use of color permits the creation of floating buttons with several functions and, as has been shown, very complex tasks can potentially be implemented, such as writing and drawing in 3D space with no restriction other than the area of the screen. The method is intuitive, works directly with the generated content of the screen, and can easily be popularized and promoted as an easy-to-use, contact-free interface. Its applications could range from interactive digital signage for cars and kiosk terminals for department stores to scenarios in which hygiene plays an important role, such as hospitals, food-processing facilities, etc. This study is also one step forward in making glasses-free 3D interfaces more widespread, considering that the lack of an adequate application has hindered them from becoming a more popular technology. Finally, although not fully exploited in this study, the HLF display based on HOEs is a transparent screen that can be blended with the environment and give rise to even more applications (e.g., an information display overlaid on a moving scene).

A limitation of this system is that the use of color for position information forces the content to have a defined object with specific colors. However, it may be possible to design a user interface that makes use of the designated colors, or to minimize the appearance of the position-detection objects by showing them only for a short time. Another limitation is the dependence on the illumination conditions at the moment of saving the reference color values. If the illumination conditions change strongly, a new reference color measurement might be required. Nevertheless, this color measurement process was automated and is not difficult to perform. The color reference measurement can be performed not only with a finger, but also with a pencil, a pen, a stylus, or any other object used for pointing. One more problem found in this work was the variation in diffraction efficiency that the hogels of the HOE screen can present if the polymer is deteriorated by humidity and temperature changes [30]. However, an improvement in the fabrication conditions of the HOE screen and in the polymer isolation can potentially increase the efficacy of this method. Brighter colors providing a higher signal-to-noise ratio (SNR) can give a more robust signal. In any case, this method is not limited to the use of HOE-based displays; it can potentially be used with other kinds of LF displays (lenticular arrays, micromirrors, etc.).

Although a high frame rate (12 FPS) was achieved, more complex applications may require a higher frame rate. To address this, the use of GPU-oriented programming strategies is being considered and will be presented in a future study. This study is an example of the need for interplay between information processing, optics, and other disciplines in order to realize an interface that can further enhance the capabilities of the user.

Funding

Japan Society for the Promotion of Science (15K04691).

Acknowledgments

The authors would like to thank Covestro Deutschland AG for providing the photopolymer holographic recording material. The authors also acknowledge Mr. Kentaro Kakinuma, from the Tokyo Institute of Technology, for very valuable discussions on software implementation and high speed processing.

Disclosures

The authors declare no conflicts of interest.

References

1. D. Fattal, Z. Peng, T. Tran, S. Vo, M. Fiorentino, J. Brug, and R. G. Beausoleil, “A multi-directional backlight for a wide-angle, glasses-free three-dimensional display,” Nature 495(7441), 348–351 (2013). [CrossRef]  

2. Y. Isomae, Y. Shibata, T. Ishinabe, and H. Fujikake, “Phase-only holographic head up display without zero-order diffraction light for automobiles,” IEEE Consumer Electron. Mag. 8(5), 99–104 (2019). [CrossRef]  

3. W. Zhang, X. Sang, X. Gao, X. Yu, C. Gao, B. Yan, and C. Yu, “A flipping-free 3d integral imaging display using a twice-imaging lens array,” Opt. Express 27(22), 32810–32822 (2019). [CrossRef]  

4. M. Yamaguchi and R. Higashida, “3d touchable holographic light-field display,” Appl. Opt. 55(3), A178–A183 (2016). [CrossRef]  

5. M. Yasugi, H. Yamamoto, and Y. Takeda, “Immersive aerial interface showing transparent floating screen between users and audience,” in Three-Dimensional Imaging, Visualization, and Display 2020, vol. 11402 (International Society for Optics and Photonics, 2020), p. 114020O.

6. “Aska 3d display–operation principle and structure,” https://aska3d.com/en/technology.php.

7. A. Butler, O. Hilliges, S. Izadi, S. Hodges, D. Molyneaux, D. Kim, and D. Kong, “Vermeer: direct interaction with a 360 viewable 3d display,” in Proceedings of the 24th annual ACM symposium on User interface software and technology, (2011), pp. 569–576.

8. S. Yamada, T. Kakue, T. Shimobaba, and T. Ito, “Interactive holographic display based on finger gestures,” Sci. Rep. 8(1), 2010 (2018). [CrossRef]  

9. R. Dou, H. Gu, A. Tan, P. Gu, Y. Chen, S. Ma, and L. Cao, “Interactive three-dimensional display based on multi-layer lcds,” in Optics and Photonics for Information Processing XIII, vol. 11136 (International Society for Optics and Photonics, 2019), p. 111360A.

10. “Looking glass factory,” https://lookingglassfactory.com/tech.

11. M. Yasui, Y. Watanabe, and M. Ishikawa, “Occlusion-robust sensing method by using the light-field of a 3d display system toward interaction with a 3d image,” Appl. Opt. 58(5), A209–A227 (2019). [CrossRef]  

12. M. Yamaguchi, “Light-field and holographic three-dimensional displays,” J. Opt. Soc. Am. A 33(12), 2348–2364 (2016). [CrossRef]  

13. H. Cheng, Z. Dai, Z. Liu, and Y. Zhao, “An image-to-class dynamic time warping approach for both 3d static and trajectory hand gesture recognition,” Pattern Recognition 55, 137–147 (2016). [CrossRef]  

14. A. Ahmad, C. Migniot, and A. Dipanda, “Hand pose estimation and tracking in real and virtual interaction: A review,” Image Vis. Comput. 89, 35–49 (2019). [CrossRef]  

15. S.-J. Jang, S.-C. Kim, J.-S. Koo, J.-I. Park, and E.-S. Kim, “100-inch 3d real-image rear-projection display system based on fresnel lens,” in Integrated Optical Devices, Nanostructures, and Displays, vol. 5618 (International Society for Optics and Photonics, 2004), pp. 204–211.

16. T. Matsumaru, A. I. Septiana, and K. Horiuchi, “Three-dimensional aerial image interface, 3daii,” J. Robotics Mechatronics 31(5), 657–670 (2019). [CrossRef]  

17. X. Sang, X. Gao, X. Yu, S. Xing, Y. Li, and Y. Wu, “Interactive floating full-parallax digital three-dimensional light-field display based on wavefront recomposing,” Opt. Express 26(7), 8883–8889 (2018). [CrossRef]  

18. M. Yamaguchi, “Full-parallax holographic light-field 3-d displays and interactive 3-d touch,” Proc. IEEE 105(5), 947–959 (2017). [CrossRef]  

19. M. Yamaguchi, T. Koyama, N. Ohyama, and T. Honda, “A stereographic display using a reflection holographic screen,” Opt. Rev. 1(2), 191–194 (1994). [CrossRef]  

20. K. Hong, J. Yeom, C. Jang, J. Hong, and B. Lee, “Full-color lens-array holographic optical element for three-dimensional optical see-through augmented reality,” Opt. Lett. 39(1), 127–130 (2014). [CrossRef]  

21. R. Oi, P.-Y. Chou, J. B. Jessie, K. Wakunami, Y. Ichihashi, M. Okui, Y.-P. Huang, and K. Yamamoto, “Three-dimensional reflection screens fabricated by holographic wavefront printer,” Opt. Eng. 57(6), 061605 (2018). [CrossRef]  

22. T. Nakamura and M. Yamaguchi, “Rapid calibration of a projection-type holographic light-field display using hierarchically upconverted binary sinusoidal patterns,” Appl. Opt. 56(34), 9520–9525 (2017). [CrossRef]  

23. S. Sakurai, T. Nakamura, and M. Yamaguchi, “The use of color in scattered light for 3d touchable holographic light-field display,” in JSAP-OSA Joint Symposia, (Optical Society of America, 2016), p. 13a_C301_4.

24. “Numba, anaconda inc.,” https://numba.pydata.org/.

25. B. F. Manly and J. A. N. Alberto, Multivariate statistical methods: a primer (Chemical Rubber Company (CRC) Taylor and Francis Group, 2016).

26. I. A. Sánchez Salazar Chavarría, T. Nakamura, and M. Yamaguchi, “An interactive holographic light-field display color-aided 3d-touch user interface,” in The International Display Workshops (IDW’19), (2019), pp. INP5–4.

27. M. Takano, H. Shigeta, T. Nishihara, M. Yamaguchi, S. Takahashi, N. Ohyama, A. Kobayashi, and F. Iwata, “Full-color holographic 3d printer,” in Practical Holography XVII and Holographic Materials IX, vol. 5005 (International Society for Optics and Photonics, 2003), pp. 126–136.

28. M. Yamaguchi, T. Koyama, N. Ohyama, and T. Honda, “A stereographic display using a reflection holographic screen,” Opt. Rev. 1(2), 191–194 (1994). [CrossRef]  

29. “Blender,” http://www.blender.org/.

30. O. Andreeva, Y. Korzinin, and B. Manukhin, “Volume transmission hologram gratings basic properties, energy channelizing, effect of ambient temperature and humidity,” Holography Basic Principles and Contemporary Applications pp. 37–60 (2013).


S. Yamada, T. Kakue, T. Shimobaba, and T. Ito, “Interactive holographic display based on finger gestures,” Sci. Rep. 8(1), 2010 (2018).
[Crossref]

Kim, D.

A. Butler, O. Hilliges, S. Izadi, S. Hodges, D. Molyneaux, D. Kim, and D. Kong, “Vermeer: direct interaction with a 360 viewable 3d display,” in Proceedings of the 24th annual ACM symposium on User interface software and technology, (2011), pp. 569–576.

Kim, E.-S.

S.-J. Jang, S.-C. Kim, J.-S. Koo, J.-I. Park, and E.-S. Kim, “100-inch 3d real-image rear-projection display system based on fresnel lens,” in Integrated Optical Devices, Nanostructures, and Displays, vol. 5618 (International Society for Optics and Photonics, 2004), pp. 204–211.

Kim, S.-C.

S.-J. Jang, S.-C. Kim, J.-S. Koo, J.-I. Park, and E.-S. Kim, “100-inch 3d real-image rear-projection display system based on fresnel lens,” in Integrated Optical Devices, Nanostructures, and Displays, vol. 5618 (International Society for Optics and Photonics, 2004), pp. 204–211.

Kobayashi, A.

M. Takano, H. Shigeta, T. Nishihara, M. Yamaguchi, S. Takahashi, N. Ohyama, A. Kobayashi, and F. Iwata, “Full-color holographic 3d printer,” in Practical Holography XVII and Holographic Materials IX, vol. 5005 (International Society for Optics and Photonics, 2003), pp. 126–136.

Kong, D.

A. Butler, O. Hilliges, S. Izadi, S. Hodges, D. Molyneaux, D. Kim, and D. Kong, “Vermeer: direct interaction with a 360 viewable 3d display,” in Proceedings of the 24th annual ACM symposium on User interface software and technology, (2011), pp. 569–576.

Koo, J.-S.

S.-J. Jang, S.-C. Kim, J.-S. Koo, J.-I. Park, and E.-S. Kim, “100-inch 3d real-image rear-projection display system based on fresnel lens,” in Integrated Optical Devices, Nanostructures, and Displays, vol. 5618 (International Society for Optics and Photonics, 2004), pp. 204–211.

Korzinin, Y.

O. Andreeva, Y. Korzinin, and B. Manukhin, “Volume transmission hologram gratings basic properties, energy channelizing, effect of ambient temperature and humidity,” Holography Basic Principles and Contemporary Applications pp. 37–60 (2013).

Koyama, T.

M. Yamaguchi, T. Koyama, N. Ohyama, and T. Honda, “A stereographic display using a reflection holographic screen,” Opt. Rev. 1(2), 191–194 (1994).
[Crossref]

M. Yamaguchi, T. Koyama, N. Ohyama, and T. Honda, “A stereographic display using a reflection holographic screen,” Opt. Rev. 1(2), 191–194 (1994).
[Crossref]

Lee, B.

Li, Y.

Liu, Z.

H. Cheng, Z. Dai, Z. Liu, and Y. Zhao, “An image-to-class dynamic time warping approach for both 3d static and trajectory hand gesture recognition,” Pattern Recognition 55, 137–147 (2016).
[Crossref]

Ma, S.

R. Dou, H. Gu, A. Tan, P. Gu, Y. Chen, S. Ma, and L. Cao, “Interactive three-dimensional display based on multi-layer lcds,” in Optics and Photonics for Information Processing XIII, vol. 11136 (International Society for Optics and Photonics, 2019), p. 111360A.

Manly, B. F.

B. F. Manly and J. A. N. Alberto, Multivariate statistical methods: a primer (Chemical Rubber Company (CRC) Taylor and Francis Group, 2016).

Manukhin, B.

O. Andreeva, Y. Korzinin, and B. Manukhin, “Volume transmission hologram gratings basic properties, energy channelizing, effect of ambient temperature and humidity,” Holography Basic Principles and Contemporary Applications pp. 37–60 (2013).

Matsumaru, T.

T. Matsumaru, A. I. Septiana, and K. Horiuchi, “Three-dimensional aerial image interface, 3daii,” J. Robotics Mechatronics 31(5), 657–670 (2019).
[Crossref]

Migniot, C.

A. Ahmad, C. Migniot, and A. Dipanda, “Hand pose estimation and tracking in real and virtual interaction: A review,” Image Vis. Comput. 89, 35–49 (2019).
[Crossref]

Molyneaux, D.

A. Butler, O. Hilliges, S. Izadi, S. Hodges, D. Molyneaux, D. Kim, and D. Kong, “Vermeer: direct interaction with a 360 viewable 3d display,” in Proceedings of the 24th annual ACM symposium on User interface software and technology, (2011), pp. 569–576.

Nakamura, T.

T. Nakamura and M. Yamaguchi, “Rapid calibration of a projection-type holographic light-field display using hierarchically upconverted binary sinusoidal patterns,” Appl. Opt. 56(34), 9520–9525 (2017).
[Crossref]

S. Sakurai, T. Nakamura, and M. Yamaguchi, “The use of color in scattered light for 3d touchable holographic light-field display,” in JSAP-OSA Joint Symposia, (Optical Society of America, 2016), p. 13a_C301_4.

I. A. Sánchez Salazar Chavarría, T. Nakamura, and M. Yamaguchi, “An interactive holographic light-field display color-aided 3d-touch user interface,” in The International Display Workshops (IDW’19), (2019), pp. INP5–4.

Nishihara, T.

M. Takano, H. Shigeta, T. Nishihara, M. Yamaguchi, S. Takahashi, N. Ohyama, A. Kobayashi, and F. Iwata, “Full-color holographic 3d printer,” in Practical Holography XVII and Holographic Materials IX, vol. 5005 (International Society for Optics and Photonics, 2003), pp. 126–136.

Ohyama, N.

M. Yamaguchi, T. Koyama, N. Ohyama, and T. Honda, “A stereographic display using a reflection holographic screen,” Opt. Rev. 1(2), 191–194 (1994).
[Crossref]

M. Yamaguchi, T. Koyama, N. Ohyama, and T. Honda, “A stereographic display using a reflection holographic screen,” Opt. Rev. 1(2), 191–194 (1994).
[Crossref]

M. Takano, H. Shigeta, T. Nishihara, M. Yamaguchi, S. Takahashi, N. Ohyama, A. Kobayashi, and F. Iwata, “Full-color holographic 3d printer,” in Practical Holography XVII and Holographic Materials IX, vol. 5005 (International Society for Optics and Photonics, 2003), pp. 126–136.

Oi, R.

R. Oi, P.-Y. Chou, J. B. Jessie, K. Wakunami, Y. Ichihashi, M. Okui, Y.-P. Huang, and K. Yamamoto, “Three-dimensional reflection screens fabricated by holographic wavefront printer,” Opt. Eng. 57(6), 061605 (2018).
[Crossref]

Okui, M.

R. Oi, P.-Y. Chou, J. B. Jessie, K. Wakunami, Y. Ichihashi, M. Okui, Y.-P. Huang, and K. Yamamoto, “Three-dimensional reflection screens fabricated by holographic wavefront printer,” Opt. Eng. 57(6), 061605 (2018).
[Crossref]

Park, J.-I.

S.-J. Jang, S.-C. Kim, J.-S. Koo, J.-I. Park, and E.-S. Kim, “100-inch 3d real-image rear-projection display system based on fresnel lens,” in Integrated Optical Devices, Nanostructures, and Displays, vol. 5618 (International Society for Optics and Photonics, 2004), pp. 204–211.

Peng, Z.

D. Fattal, Z. Peng, T. Tran, S. Vo, M. Fiorentino, J. Brug, and R. G. Beausoleil, “A multi-directional backlight for a wide-angle, glasses-free three-dimensional display,” Nature 495(7441), 348–351 (2013).
[Crossref]

Sakurai, S.

S. Sakurai, T. Nakamura, and M. Yamaguchi, “The use of color in scattered light for 3d touchable holographic light-field display,” in JSAP-OSA Joint Symposia, (Optical Society of America, 2016), p. 13a_C301_4.

Sánchez Salazar Chavarría, I. A.

I. A. Sánchez Salazar Chavarría, T. Nakamura, and M. Yamaguchi, “An interactive holographic light-field display color-aided 3d-touch user interface,” in The International Display Workshops (IDW’19), (2019), pp. INP5–4.

Sang, X.

Septiana, A. I.

T. Matsumaru, A. I. Septiana, and K. Horiuchi, “Three-dimensional aerial image interface, 3daii,” J. Robotics Mechatronics 31(5), 657–670 (2019).
[Crossref]

Shibata, Y.

Y. Isomae, Y. Shibata, T. Ishinabe, and H. Fujikake, “Phase-only holographic head up display without zero-order diffraction light for automobiles,” IEEE Consumer Electron. Mag. 8(5), 99–104 (2019).
[Crossref]

Shigeta, H.

M. Takano, H. Shigeta, T. Nishihara, M. Yamaguchi, S. Takahashi, N. Ohyama, A. Kobayashi, and F. Iwata, “Full-color holographic 3d printer,” in Practical Holography XVII and Holographic Materials IX, vol. 5005 (International Society for Optics and Photonics, 2003), pp. 126–136.

Shimobaba, T.

S. Yamada, T. Kakue, T. Shimobaba, and T. Ito, “Interactive holographic display based on finger gestures,” Sci. Rep. 8(1), 2010 (2018).
[Crossref]

Takahashi, S.

M. Takano, H. Shigeta, T. Nishihara, M. Yamaguchi, S. Takahashi, N. Ohyama, A. Kobayashi, and F. Iwata, “Full-color holographic 3d printer,” in Practical Holography XVII and Holographic Materials IX, vol. 5005 (International Society for Optics and Photonics, 2003), pp. 126–136.

Takano, M.

M. Takano, H. Shigeta, T. Nishihara, M. Yamaguchi, S. Takahashi, N. Ohyama, A. Kobayashi, and F. Iwata, “Full-color holographic 3d printer,” in Practical Holography XVII and Holographic Materials IX, vol. 5005 (International Society for Optics and Photonics, 2003), pp. 126–136.

Takeda, Y.

M. Yasugi, H. Yamamoto, and Y. Takeda, “Immersive aerial interface showing transparent floating screen between users and audience,” in Three-Dimensional Imaging, Visualization, and Display 2020, vol. 11402 (International Society for Optics and Photonics, 2020), p. 114020O.

Tan, A.

R. Dou, H. Gu, A. Tan, P. Gu, Y. Chen, S. Ma, and L. Cao, “Interactive three-dimensional display based on multi-layer lcds,” in Optics and Photonics for Information Processing XIII, vol. 11136 (International Society for Optics and Photonics, 2019), p. 111360A.

Tran, T.

D. Fattal, Z. Peng, T. Tran, S. Vo, M. Fiorentino, J. Brug, and R. G. Beausoleil, “A multi-directional backlight for a wide-angle, glasses-free three-dimensional display,” Nature 495(7441), 348–351 (2013).
[Crossref]

Vo, S.

D. Fattal, Z. Peng, T. Tran, S. Vo, M. Fiorentino, J. Brug, and R. G. Beausoleil, “A multi-directional backlight for a wide-angle, glasses-free three-dimensional display,” Nature 495(7441), 348–351 (2013).
[Crossref]

Wakunami, K.

R. Oi, P.-Y. Chou, J. B. Jessie, K. Wakunami, Y. Ichihashi, M. Okui, Y.-P. Huang, and K. Yamamoto, “Three-dimensional reflection screens fabricated by holographic wavefront printer,” Opt. Eng. 57(6), 061605 (2018).
[Crossref]

Watanabe, Y.

Wu, Y.

Xing, S.

Yamada, S.

S. Yamada, T. Kakue, T. Shimobaba, and T. Ito, “Interactive holographic display based on finger gestures,” Sci. Rep. 8(1), 2010 (2018).
[Crossref]

Yamaguchi, M.

M. Yamaguchi, “Full-parallax holographic light-field 3-d displays and interactive 3-d touch,” Proc. IEEE 105(5), 947–959 (2017).
[Crossref]

T. Nakamura and M. Yamaguchi, “Rapid calibration of a projection-type holographic light-field display using hierarchically upconverted binary sinusoidal patterns,” Appl. Opt. 56(34), 9520–9525 (2017).
[Crossref]

M. Yamaguchi, “Light-field and holographic three-dimensional displays,” J. Opt. Soc. Am. A 33(12), 2348–2364 (2016).
[Crossref]

M. Yamaguchi and R. Higashida, “3d touchable holographic light-field display,” Appl. Opt. 55(3), A178–A183 (2016).
[Crossref]

M. Yamaguchi, T. Koyama, N. Ohyama, and T. Honda, “A stereographic display using a reflection holographic screen,” Opt. Rev. 1(2), 191–194 (1994).
[Crossref]

M. Yamaguchi, T. Koyama, N. Ohyama, and T. Honda, “A stereographic display using a reflection holographic screen,” Opt. Rev. 1(2), 191–194 (1994).
[Crossref]

M. Takano, H. Shigeta, T. Nishihara, M. Yamaguchi, S. Takahashi, N. Ohyama, A. Kobayashi, and F. Iwata, “Full-color holographic 3d printer,” in Practical Holography XVII and Holographic Materials IX, vol. 5005 (International Society for Optics and Photonics, 2003), pp. 126–136.

I. A. Sánchez Salazar Chavarría, T. Nakamura, and M. Yamaguchi, “An interactive holographic light-field display color-aided 3d-touch user interface,” in The International Display Workshops (IDW’19), (2019), pp. INP5–4.

S. Sakurai, T. Nakamura, and M. Yamaguchi, “The use of color in scattered light for 3d touchable holographic light-field display,” in JSAP-OSA Joint Symposia, (Optical Society of America, 2016), p. 13a_C301_4.

Yamamoto, H.

M. Yasugi, H. Yamamoto, and Y. Takeda, “Immersive aerial interface showing transparent floating screen between users and audience,” in Three-Dimensional Imaging, Visualization, and Display 2020, vol. 11402 (International Society for Optics and Photonics, 2020), p. 114020O.

Yamamoto, K.

R. Oi, P.-Y. Chou, J. B. Jessie, K. Wakunami, Y. Ichihashi, M. Okui, Y.-P. Huang, and K. Yamamoto, “Three-dimensional reflection screens fabricated by holographic wavefront printer,” Opt. Eng. 57(6), 061605 (2018).
[Crossref]

Yan, B.

Yasugi, M.

M. Yasugi, H. Yamamoto, and Y. Takeda, “Immersive aerial interface showing transparent floating screen between users and audience,” in Three-Dimensional Imaging, Visualization, and Display 2020, vol. 11402 (International Society for Optics and Photonics, 2020), p. 114020O.

Yasui, M.

Yeom, J.

Yu, C.

Yu, X.

Zhang, W.

Zhao, Y.

H. Cheng, Z. Dai, Z. Liu, and Y. Zhao, “An image-to-class dynamic time warping approach for both 3d static and trajectory hand gesture recognition,” Pattern Recognition 55, 137–147 (2016).
[Crossref]

Appl. Opt. (3)

IEEE Consumer Electron. Mag. (1)

Y. Isomae, Y. Shibata, T. Ishinabe, and H. Fujikake, “Phase-only holographic head up display without zero-order diffraction light for automobiles,” IEEE Consumer Electron. Mag. 8(5), 99–104 (2019).
[Crossref]

Image Vis. Comput. (1)

A. Ahmad, C. Migniot, and A. Dipanda, “Hand pose estimation and tracking in real and virtual interaction: A review,” Image Vis. Comput. 89, 35–49 (2019).
[Crossref]

J. Opt. Soc. Am. A (1)

J. Robotics Mechatronics (1)

T. Matsumaru, A. I. Septiana, and K. Horiuchi, “Three-dimensional aerial image interface, 3daii,” J. Robotics Mechatronics 31(5), 657–670 (2019).
[Crossref]

Nature (1)

D. Fattal, Z. Peng, T. Tran, S. Vo, M. Fiorentino, J. Brug, and R. G. Beausoleil, “A multi-directional backlight for a wide-angle, glasses-free three-dimensional display,” Nature 495(7441), 348–351 (2013).
[Crossref]

Opt. Eng. (1)

R. Oi, P.-Y. Chou, J. B. Jessie, K. Wakunami, Y. Ichihashi, M. Okui, Y.-P. Huang, and K. Yamamoto, “Three-dimensional reflection screens fabricated by holographic wavefront printer,” Opt. Eng. 57(6), 061605 (2018).
[Crossref]

Opt. Express (2)

Opt. Lett. (1)

Opt. Rev. (2)

M. Yamaguchi, T. Koyama, N. Ohyama, and T. Honda, “A stereographic display using a reflection holographic screen,” Opt. Rev. 1(2), 191–194 (1994).
[Crossref]

M. Yamaguchi, T. Koyama, N. Ohyama, and T. Honda, “A stereographic display using a reflection holographic screen,” Opt. Rev. 1(2), 191–194 (1994).
[Crossref]

Pattern Recognition (1)

H. Cheng, Z. Dai, Z. Liu, and Y. Zhao, “An image-to-class dynamic time warping approach for both 3d static and trajectory hand gesture recognition,” Pattern Recognition 55, 137–147 (2016).
[Crossref]

Proc. IEEE (1)

M. Yamaguchi, “Full-parallax holographic light-field 3-d displays and interactive 3-d touch,” Proc. IEEE 105(5), 947–959 (2017).
[Crossref]

Sci. Rep. (1)

S. Yamada, T. Kakue, T. Shimobaba, and T. Ito, “Interactive holographic display based on finger gestures,” Sci. Rep. 8(1), 2010 (2018).
[Crossref]

Other (13)

R. Dou, H. Gu, A. Tan, P. Gu, Y. Chen, S. Ma, and L. Cao, “Interactive three-dimensional display based on multi-layer lcds,” in Optics and Photonics for Information Processing XIII, vol. 11136 (International Society for Optics and Photonics, 2019), p. 111360A.

“Looking glass factory,” https://lookingglassfactory.com/tech .

M. Yasugi, H. Yamamoto, and Y. Takeda, “Immersive aerial interface showing transparent floating screen between users and audience,” in Three-Dimensional Imaging, Visualization, and Display 2020, vol. 11402 (International Society for Optics and Photonics, 2020), p. 114020O.

“Aska 3d display–operation principle and structure,” https://aska3d.com/en/technology.php .

A. Butler, O. Hilliges, S. Izadi, S. Hodges, D. Molyneaux, D. Kim, and D. Kong, “Vermeer: direct interaction with a 360 viewable 3d display,” in Proceedings of the 24th annual ACM symposium on User interface software and technology, (2011), pp. 569–576.

S.-J. Jang, S.-C. Kim, J.-S. Koo, J.-I. Park, and E.-S. Kim, “100-inch 3d real-image rear-projection display system based on fresnel lens,” in Integrated Optical Devices, Nanostructures, and Displays, vol. 5618 (International Society for Optics and Photonics, 2004), pp. 204–211.

“Blender,” http://www.blender.org/ .

O. Andreeva, Y. Korzinin, and B. Manukhin, “Volume transmission hologram gratings basic properties, energy channelizing, effect of ambient temperature and humidity,” Holography Basic Principles and Contemporary Applications pp. 37–60 (2013).

S. Sakurai, T. Nakamura, and M. Yamaguchi, “The use of color in scattered light for 3d touchable holographic light-field display,” in JSAP-OSA Joint Symposia, (Optical Society of America, 2016), p. 13a_C301_4.

“Numba, anaconda inc.,” https://numba.pydata.org/ .

B. F. Manly and J. A. N. Alberto, Multivariate statistical methods: a primer (Chemical Rubber Company (CRC) Taylor and Francis Group, 2016).

I. A. Sánchez Salazar Chavarría, T. Nakamura, and M. Yamaguchi, “An interactive holographic light-field display color-aided 3d-touch user interface,” in The International Display Workshops (IDW’19), (2019), pp. INP5–4.

M. Takano, H. Shigeta, T. Nishihara, M. Yamaguchi, S. Takahashi, N. Ohyama, A. Kobayashi, and F. Iwata, “Full-color holographic 3d printer,” in Practical Holography XVII and Holographic Materials IX, vol. 5005 (International Society for Optics and Photonics, 2003), pp. 126–136.

Supplementary Material (5)

» Visualization 1: Example of the image processing output detailed in Fig. 6 using a finger that passes through a LF reproduction. The green blob is enhanced and very noticeable, distinguished from the other light sources that illuminate the finger when it is not within the LF region.
» Visualization 2: Realization of the drag gesture using the light scattered from the LF.
» Visualization 3: Demonstration of the user's interaction with the LF along the y-axis using color cues. The position of the LF follows the finger that scatters the different reproduced colors.
» Visualization 4: Demo for controlling the volume of an audio system. When the volume is placed at 'MAX', the user can lower it by scattering the green LF. Correspondingly, the user can increase the volume once more by scattering the blue LF. This interaction only uses the color values of Fig. 7 with the Mahalanobis distance to realize the required task.
» Visualization 5: Demonstration of how the scroll-ball of Fig. 13 is used to interact with a LF reproduction. Arrows in the middle of the screen indicate to the user the direction of the movement. The dotted circle in the leftmost picture shows the finger scattering the LF.

Figures (15)

Fig. 1. Registration mismatch between the display system and the sensing system. The sensing system registers an interaction even if the user has not interacted with the content.
Fig. 2. Reconstruction of light rays using elementary holograms.
Fig. 3. (a): Previous UI based on scattered light detection [4]. The "OK" signal means the interaction has been detected. (b): Proposed UI allowing the touch, tracking, and LF modification corresponding to the user's movements.
Fig. 4. (a): 2D tracking. Detection of scattered light to track the fingertip of the user in a 2D plane. (b): Color tracking. Detection of different colors along the y-axis to add a new measurement dimension. Both (a) and (b) combined realize 3D tracking of the fingertip.
Fig. 5. Details on how the color identification is used for detecting the movement of the user.
Fig. 6. Image processing pipeline to extract the color interaction.
Fig. 7. Distribution of the sampled color frames in the RG normalized color space with confidence ellipses of 95% (inner ellipse) and 99% (outer ellipse) around each color distribution.
Fig. 8. Experimental setup.
Fig. 9. Example of the image processing output detailed in Fig. 6 using a finger that passes through a LF reproduction. The green blob is enhanced and very noticeable in (b), distinguished from the other light sources that illuminate the finger when it is not within the LF region (a and c). See Visualization 1.
Fig. 10. Realization of the drag gesture using the light scattered from the LF (see Visualization 2).
Fig. 11. Demonstration of the user's interaction with the LF along the y-axis using color cues. The position of the LF follows the finger that scatters the different reproduced colors (see Visualization 3).
Fig. 12. Demo for controlling the volume of an audio system. (a) When the volume is placed at 'MAX', the user can lower it by scattering the green LF (see leftmost picture, fingertip's color). Correspondingly, the user can increase the volume once more by scattering the blue LF. This interaction only uses the color values of Fig. 7 with the Mahalanobis distance to realize the required task. (b) Front view of the reproduced LF, with the volume indicator placed behind the bicolor sphere (see Visualization 4).
Fig. 13. Scroll-ball used to spin the reproduced LF.
Fig. 14. Scroll-ball algorithm.
Fig. 15. Demonstration of how the scroll-ball of Fig. 13 is used to interact with a LF reproduction. Arrows in the middle of the screen indicate to the user the direction of the movement. The dotted circle in the leftmost picture shows the finger scattering the LF (see Visualization 5).

Tables (2)

Table 1. Processing times per frame for the HLF display interactive system.

Table 2. Comparison of the detection effectiveness for each of the distance metrics used. The identification effectiveness was tested on a sample of 250 frames per color. Except for the color red, the Mahalanobis distance generally yields better results than identification using the Euclidean distance.
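For readers who wish to reproduce this kind of per-color comparison on their own captures, the snippet below is only a minimal sketch of the tallying step: each labeled sample frame is assigned to the nearest color class under either metric, and the fraction of correct identifications is reported per color. The data layout (`samples`, `class_stats`) and the helper `accuracy_per_color` are hypothetical assumptions of ours, not the authors' evaluation code; the distance functions come from SciPy.

```python
import numpy as np
from scipy.spatial.distance import euclidean, mahalanobis

# Hypothetical inputs:
#   samples:     {color_label: list of (R_N, G_N) chromaticity vectors from labeled frames}
#   class_stats: {color_label: (mean_vector, inverse_covariance_matrix)}
def accuracy_per_color(samples, class_stats, metric="mahalanobis"):
    """Return the fraction of correctly identified frames for each color."""
    results = {}
    for true_label, frames in samples.items():
        correct = 0
        for c in frames:
            # Assign the frame to the color class with the smallest distance.
            if metric == "mahalanobis":
                pred = min(class_stats,
                           key=lambda k: mahalanobis(c, class_stats[k][0],
                                                     class_stats[k][1]))
            else:
                pred = min(class_stats,
                           key=lambda k: euclidean(c, class_stats[k][0]))
            correct += (pred == true_label)
        results[true_label] = correct / len(frames)
    return results
```

Running such a tally separately with `metric="mahalanobis"` and `metric="euclidean"` over the same labeled frames would yield a comparison of the kind summarized in Table 2.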

Equations (4)


$$ R_N = \frac{R_C}{R_C + G_C + B_C}, \quad G_N = \frac{G_C}{R_C + G_C + B_C}, \quad B_N = \frac{B_C}{R_C + G_C + B_C}. $$

$$ C_d = \left[ \frac{R_t}{R_t + G_t + B_t}, \; \frac{G_t}{R_t + G_t + B_t}, \; \frac{B_t}{R_t + G_t + B_t} \right]. $$

$$ d_E(C_{\mathrm{cap}}, \mu_N) = \sqrt{(C_R - \mu_{NR})^2 + (C_G - \mu_{NG})^2}. $$

$$ d_M(C_{\mathrm{cap}}, \mu_N) = \sqrt{(C_{\mathrm{cap}} - \mu_N)^{T} \, \Sigma_N^{-1} \, (C_{\mathrm{cap}} - \mu_N)}. $$
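The following Python/NumPy sketch shows one possible way to implement the chromaticity normalization and the two distance metrics above. It is only illustrative: the function names (`normalize_rgb`, `euclidean_rg`, `mahalanobis_rg`) and the numerical values in the example are our own assumptions, not the authors' implementation.

```python
import numpy as np

def normalize_rgb(rgb):
    """Map raw RGB values to the normalized color space, R_N = R/(R+G+B), etc.
    A small epsilon guards against division by zero on dark pixels."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb / np.maximum(s, 1e-9)

def euclidean_rg(c_cap, mu):
    """Euclidean distance d_E between a captured color and a class mean,
    evaluated on the (R_N, G_N) components only."""
    d = np.asarray(c_cap)[..., :2] - np.asarray(mu)[..., :2]
    return np.sqrt((d ** 2).sum(axis=-1))

def mahalanobis_rg(c_cap, mu, cov):
    """Mahalanobis distance d_M in the (R_N, G_N) plane using the class
    covariance matrix Sigma_N estimated from sampled color frames."""
    d = np.asarray(c_cap)[..., :2] - np.asarray(mu)[..., :2]
    cov_inv = np.linalg.inv(np.asarray(cov))
    return np.sqrt(np.einsum('...i,ij,...j->...', d, cov_inv, d))

# Example with hypothetical numbers: compare one captured pixel against a
# "green" class whose mean and covariance were estimated beforehand.
pixel = normalize_rgb([40, 180, 60])            # captured RGB triplet
mu_green = np.array([0.15, 0.70])               # assumed class mean (R_N, G_N)
cov_green = np.array([[4e-4, -1e-4],
                      [-1e-4, 6e-4]])           # assumed class covariance
print(euclidean_rg(pixel, mu_green), mahalanobis_rg(pixel, mu_green, cov_green))
```

Restricting both distances to the (R_N, G_N) components mirrors the RG normalized color space of Fig. 7: since the three normalized channels sum to one, the third component carries no additional information.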
