Abstract

Mid-air images are formed in the air by the reflection and refraction of light emitted by a light source, which allows the user to view the floating image in real space without wearing special equipment. However, conventional mid-air imaging optical systems have weaknesses, such as requiring the user to adjust the height of the viewpoint depending on the optical arrangement. We propose an optical design that can be installed simply by placing it on a glossy plane, displays an upright mid-air image on that plane, and is smaller than existing systems.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Visual displays are the key devices for mixed reality (MR) and augmented reality (AR). The interaction with computer graphic (CG) characters in a virtual environment has become common in AR games [1]. Furthermore, spatial AR, which is known as "projection mapping," has recently exploded in popularity [2]. When systems of such technology are installed in public spaces, even people who are casually in the area can interact with CG characters. In this research, we focused on the mid-air imaging system that displays CG images in 3D space beyond screens and allows users to see the images without wearing any device.

A mid-air image is a real image formed by the reflection and refraction of light emitted by a light source. Mid-air imaging is promising for glasses-free MR interaction, because users do not need to wear any equipment to see CG displayed in the real world.

We propose a portable mid-air imaging optical system called “PortOn,” which is installed simply by placing it on a glossy surface and displays an upright mid-air image on that surface. Figure 1 shows a mid-air image formed by PortOn. In this paper, we describe PortOn in detail; it is designed so that the optical system stays out of the user’s line of sight when he/she views mid-air images reflected from glossy surfaces in the environment. To display mid-air images clearly, our design erases images formed by undesired light using the polarization properties of the light source. Since polarization characteristics differ depending on the display used, the internal optical system is designed for multiple types of displays. Moreover, we calculated the range of arrangements of each optical element, the range of the mid-air image visible from a given viewpoint, and the viewing range for a given mid-air image, and summarized this design information. Finally, we evaluated whether the undesired-light images could be erased by measuring the luminance of the mid-air images and the undesired-light images.

Fig. 1. Mid-air image formed by PortOn (our proposed portable mid-air imaging optical system)

2. Related work

2.1 Floating image in the air

There are many approaches for merging real and virtual images, such as head mounted displays (HMDs), projection mapping, and several tracking technologies [3]. However, these methods are not suitable for an unspecified large number of people because they require each user to individually wear a special device. In this study, we consider floating images in the air as an information display method, because they attain MR interaction without any special wearable devices. In the taxonomy of floating images, volumetric display is a technology for displaying three-dimensional images, and mid-air imaging is a technology for displaying images in the air. In this research, we focus on mid-air imaging to show images in the real world.

There are several methods of projecting a floating image, such as laser plasma emission [4], fog diffusion [5], and optical configurations [6]. However, these methods have several weaknesses: a laser system poses a heat hazard, a fog system is sensitive to airflow and difficult to stabilize, and a projection mapping system needs a screen, so the user cannot interact with the CG directly. Optical mid-air imaging, in contrast, is less dangerous, more resistant to air disturbance, and does not require a screen. In this research, we selected an optical configuration to form a mid-air image.

2.2 Mid-air imaging

Mid-air imaging is an optical technique that forms a real image in the air by reflecting and refracting light emitted by a light source. Mid-air images are used to display visual information in various science fiction movies and are a symbolic item representing the future human interface. Mid-air imaging achieves naked-eye MR because users can see mid-air images without wearing special equipment such as an HMD or tablet, and because there is no screen, a mid-air image can be displayed right beside a real object. Currently, there are several methods of displaying mid-air images, including the roof mirror array (RMA) [7], aerial imaging by retro-reflection (AIRR) [8], and micro-mirror array plates (MMAPs), such as the dihedral corner reflector array (DCRA) [9] and the two-layer micro-mirror array plates called Aska3D [10].

An RMA can be provided by an array of micrometer-size grooves with planar mirror surfaces arranged in a straight line [7]. An RMA has a simple structure, so it can be produced at low cost, but it suffers from distortion. This was addressed by installing two RMAs in parallel [11] or by combining one RMA with a plane mirror [12]. However, issues remain, such as a shift in depth and distortion that changes with the viewing distance.

The AIRR method is composed of a beam splitter, such as a half mirror, and a retroreflective element, and displays a mid-air image by retroreflection [8]. Light from the display, which is the light source, is reflected by the half mirror and subsequently enters the retroreflective element. The retroreflected light passes through the half mirror and forms an image. In this method, a bright mid-air image cannot be displayed because of the large attenuation of brightness by the beam splitter. To solve this problem, polarized AIRR (pAIRR), which displays a brighter mid-air image by using a reflective polarizer instead of a half mirror, has been proposed [13], but problems remain, such as blurring caused by the retroreflective material. Additionally, the retroreflective materials themselves need improvement [14].

The orthogonal micro-mirror structure is an optical system that displays a mid-air image. This function is called retro-transmissive optics and forms the mid-air image at the position symmetric to the light source with respect to this element. There are roughly two types of retro-transmissive optical elements: the DCRA [9,15] and Aska3D [10]. In this study, the mid-air image was displayed using MMAPs because they are easy to obtain and install. Figure 2 illustrates the optical property of MMAPs. They are composed of two slit-mirror-array plates. Incident light, reflected twice at mirrors crossed at 90$^\circ$, converges at the position plane-symmetric to the light source with respect to the MMAPs.

Fig. 2. The structure and function of MMAPs

This optical property is unique and has several uses. For example, Hiratani et al. proposed a shadowless projection mapping system named Shadowless Projector [16], using MMAPs and a projector. A dummy is placed at the position plane-symmetric to the target with respect to the MMAPs, and information is projected onto the dummy. The real image of the dummy formed by the MMAPs then overlaps the target, realizing projection mapping without shadows. In this research, we adopted this optical element to form a mid-air image.

2.3 Mid-air-image interaction

There are several mid-air-image interaction systems using MMAPs. Kim et al. proposed MARIO [17], which moves the depth of the mid-air image. In this system, the display serving as the light source is placed horizontally, and the MMAPs are placed above the display at an angle of 45$^\circ$. By moving the position of the light source up and down, the depth of the mid-air image can be changed. Furthermore, by employing a depth camera to measure people or physical objects in the space, direct interaction between human hands and mid-air images is achieved. In addition, Kajita et al. proposed an optical design called SkyAnchor, which fixes the display of a mid-air image to a fast-moving real object. Other methods include the application of ultrasonic waves to give a tactile impression to a mid-air image [18–20], an interaction system that transfers the camera function [21], and the use of motion parallax to display a stereoscopic mid-air image [22]. However, in these methods, the MMAPs are arranged at an angle of 45$^\circ$; therefore, the user has to squat or climb onto a table to adjust the viewpoint position.

In contrast, by directing the mid-air image from the horizontal plane toward an oblique angle, it is easier for the user to adjust the viewpoint by changing the distance from the plane than by adjusting the viewpoint height. For example, using MMAPs as a tabletop is one way to form a mid-air image above the table [23], but MMAPs are fragile and thus not suitable as a table surface. Another method is to place a reflective material in the light path of the mid-air image and reflect it. For example, EnchanTable [24] displays an upright mid-air image on a table using the reflection of the table surface. Another example is Scoopirit [25], which applies the EnchanTable optical system with water as the reflection surface, allowing the user to scoop up a mid-air image shown on the water surface. In our study, we also adopted this reflective mid-air-image display method using MMAPs.

3. Design

The proposed system is divided into two subsystems: a display subsystem for displaying mid-air images and an erasing subsystem for erasing undesired light.

3.1 Display Subsystem

The display subsystem is shown in Fig. 3. It consists of a display, a mirror, MMAPs, a louver film, and a reflective surface. Everything except the reflective surface is inside the box. Light from the display is reflected by the mirror and forms display’, which constitutes the light source of the mid-air image. Light from display’ passes through the MMAPs and the louver film, is reflected by the reflective surface, and finally forms an upright mid-air image on the reflective surface. The louver film blocks light that passes through the MMAPs without being reflected. However, as shown in Fig. 3, an image I’ appears on the reflective surface due to light from the display entering the MMAPs. When an object stands on a glossy plane, it is natural for an image darker than the object to appear as its reflection on the plane; the luminance of I’, however, appears almost the same as that of the mid-air image. Therefore, we decided to erase I’ from the reflective surface by exploiting the polarization state of the display. Since this design differs depending on the display used, the design for each type of display is described in the next section. Hereafter, I’ is referred to as an “undesired-light image.”

Fig. 3. Design of display subsystem

3.2 Erasing subsystem

3.2.1 Light Source A: oblique polarization display

An optical design for a display using, for example, twisted nematic (TN) mode is shown in Fig. 4(a). The polarization direction here is referred to as "oblique polarization." A polarizing plate is placed on the display side of the MMAPs. If the polarizing plate were placed behind the louver film, the light would diffuse in the louver film and change its polarization direction, so the light from the display could not be absorbed by the polarizing plate. If the polarizing plate were placed between the MMAPs and the louver film, it could not completely absorb the light from the display because the polarization state changes inside the MMAPs. Consequently, the polarizing plate is placed on the display side of the MMAPs. In the case of oblique polarization, the polarization direction of display’ is reversed with respect to the display. Therefore, by installing the polarizing plate so that its transmission axis is orthogonal to the polarization direction of the display, the light from the display is absorbed, and the undesired-light image is erased.

Fig. 4. Design of erasing system. (a) Erasing system for oblique polarization. (b) Erasing system for vertical and horizontal polarization.

Table 1 shows the Stokes vector of the display ($S_{DO}$) and the Mueller matrices of the mirror ($M_M$) and the polarizing plate ($M_{PO}$). In this paper, to focus on the polarization direction, we do not discuss the incident angle.


Table 1. Stokes vector of the display and Mueller matrix of the mirror and the polarizing plate.

The polarization state $S_{D^{\prime }O}$ of display’, formed when the light from the display is reflected by the mirror, is written as follows.

$$S_{D^{\prime}O} = M_{M}S_{DO} = \; ^t \begin{bmatrix} 1 & 0 & -1 & 0 \end{bmatrix}$$

The following calculations show the states when the light from the display and the display’ is incident on the polarizing plate.

$$M_{PO}S_{DO} = \; ^t \begin{bmatrix} 0 & 0 & 0 & 0 \end{bmatrix}$$
$$M_{PO}S_{D^{\prime}O} = \; ^t \begin{bmatrix} 1 & 0 & -1 & 0 \end{bmatrix}$$

Therefore, the light from the display is absorbed by the polarizing plate, and the light from the display’ passes through the polarizing plate.
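As a numerical check, the Mueller-matrix calculus above can be reproduced in a few lines. The matrices below are the standard textbook forms for a 45$^\circ$ linear polarization source, an ideal mirror, and a linear polarizer at $-45^\circ$; since Table 1 is not reproduced here, treat this as an illustrative sketch with assumed conventions rather than the paper's exact values.

```python
import numpy as np

# Stokes vector of the display: 45-degree ("oblique") linear polarization
S_DO = np.array([1, 0, 1, 0])

# Ideal mirror: reflection flips the sign of the s2 and s3 components
M_M = np.diag([1, 1, -1, -1])

# Linear polarizer with transmission axis at -45 degrees, i.e. orthogonal
# to the display's polarization direction
M_PO = 0.5 * np.array([[ 1, 0, -1, 0],
                       [ 0, 0,  0, 0],
                       [-1, 0,  1, 0],
                       [ 0, 0,  0, 0]])

S_DpO = M_M @ S_DO      # Eq. (1): polarization of display' after the mirror
direct = M_PO @ S_DO    # Eq. (2): direct display light at the polarizing plate
passed = M_PO @ S_DpO   # Eq. (3): light from display' at the polarizing plate
```

With these conventions, `direct` evaluates to the zero vector and `passed` to $(1, 0, -1, 0)^t$, matching Eqs. (2) and (3): the direct light is absorbed while the reflected light passes.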

3.2.2 Light Source B: vertical and horizontal polarization display

Another optical design, for displays such as in-plane switching (IPS) or vertical alignment (VA) mode, is shown in Fig. 4(b). The polarization direction shown in the upper part of the figure is referred to as “vertical polarization,” and that shown in the bottom part as “horizontal polarization.” The polarizing plate is placed on the display side of the MMAPs. For both vertical and horizontal polarization, the polarization direction of display’ is the same as that of the display. Therefore, we place a quarter-wave plate on the mirror to reverse the polarization direction of display’ with respect to the display. In this paper, we refer to the side of the quarter-wave plate where the light from the display is incident as the “front” and the other side as the “back.” The light from the display passes through the front of the quarter-wave plate, is reflected by the mirror, and passes back through the quarter-wave plate from the back. Therefore, by installing the polarizing plate so that its transmission axis is orthogonal to the polarization direction of the display, the light from the display is absorbed; hence, the undesired-light image can be erased.

Table 2 shows the Stokes vector of the display ($S_{DV}$) and the Mueller matrix of the front of the quarter-wave plate ($M_{WF}$), back of the quarter-wave plate ($M_{WB}$), and the polarizing plate ($M_{PH}$). The mirror is shown in Table 1.


Table 2. Stokes vector of the display and Mueller matrix of the quarter-wave plate and the polarizing plate.

The polarization state $S_{D'V}$, obtained when the light from the display passes through the front of the quarter-wave plate, is reflected by the mirror, and passes through the back of the quarter-wave plate, is written as follows:

$$S_{D^{\prime}V} = M_{WB}M_{M}M_{WF}S_{DV} =\; ^t \begin{bmatrix} 1 & 1 & 0 & 0 \end{bmatrix}$$

The following calculations show the states when the light from the display and from display’ is incident on the polarizing plate.

$$M_{PH}S_{DV} =\; ^t \begin{bmatrix} 0 & 0 & 0 & 0 \end{bmatrix}$$
$$M_{PH}S_{D^{\prime}V} = \;^t \begin{bmatrix} 1 & 1 & 0 & 0 \end{bmatrix}$$

Therefore, the light from the display is absorbed by the polarizing plate, and the light from the display’ passes through the polarizing plate.
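The quarter-wave-plate round trip can be checked the same way. Below, the fast axis is assumed to be at 45$^\circ$ on the front pass and, because the reflection flips the coordinate frame, at $-45^\circ$ on the back pass; the vertical-polarization Stokes vector is taken as $(1, -1, 0, 0)^t$. These conventions are assumptions consistent with Eqs. (4)-(6), since Table 2 is not reproduced here.

```python
import numpy as np

S_DV = np.array([1, -1, 0, 0])     # vertically polarized display (assumed convention)
M_M = np.diag([1, 1, -1, -1])      # ideal mirror

# Quarter-wave plate, fast axis at 45 degrees (front pass)
M_WF = np.array([[1, 0, 0,  0],
                 [0, 0, 0, -1],
                 [0, 0, 1,  0],
                 [0, 1, 0,  0]])
# Back pass: the reflection flips the frame, so the plate acts at -45 degrees
M_WB = np.array([[1,  0, 0, 0],
                 [0,  0, 0, 1],
                 [0,  0, 1, 0],
                 [0, -1, 0, 0]])

# Linear polarizer with a horizontal transmission axis
M_PH = 0.5 * np.array([[1, 1, 0, 0],
                       [1, 1, 0, 0],
                       [0, 0, 0, 0],
                       [0, 0, 0, 0]])

S_DpV = M_WB @ M_M @ M_WF @ S_DV   # Eq. (4): polarization reversed to (1, 1, 0, 0)
```

Applying `M_PH` then reproduces Eqs. (5) and (6): the direct display light is absorbed, while the light from display’ passes through.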

3.2.3 Light Source C: display without polarization

Another optical design is for light sources without polarization, such as an organic LED display or a projected screen. A polarizing plate is placed on the display, and another on the display side of the MMAPs. The polarizing plate placed on the display is oriented in the same direction as in Fig. 4(a). Consequently, the polarization state of display’ is reversed with respect to the display. The rest of the design is the same as in the case of the oblique polarization display.

3.3 Possible light source placement range

To design the possible range of light source placement, we introduce a coordinate system. Regarding the angle of light incident on the MMAPs: if it is too small, the light passes through without being reflected; if it is too large, the light is reflected more than twice inside the MMAPs and becomes undesired light. The angle of incidence for displaying the mid-air image therefore has limits. Since the angle of incidence depends on the position of the light source, the visible range of the mid-air image should be designed with respect to the light source position. To clarify the distance between the display or mid-air image and the MMAPs, we perform the calculations in a coordinate system whose origin (0, 0) is the junction of the MMAPs and the reflective surface. We make the following assumptions: light incident on the MMAPs at an angle between $\theta _{m}$ and $\theta _{M}$ is reflected twice inside the MMAPs and forms a mid-air image; the height of the mid-air image to be displayed is $h$; and the nearest and farthest positions where the light source can be placed are ($D_{min}$, 0) and ($D_{max}$, 0), respectively. This yields Fig. 5. The distance and angle parameters are summarized below.

  • $H_m$: height of the MMAPs (from the origin)
  • $h$: height of the mid-air image
  • $\theta _m$: minimum incidence angle for forming a mid-air image by entering MMAPs
  • $\theta _M$: maximum incidence angle for forming a mid-air image by entering MMAPs
  • $D_{min}$: minimum distance between displays and MMAPs
  • $D_{max}$: maximum distance between displays and MMAPs

Fig. 5. Display placement range

To find $D_{min}$ and $D_{max}$, we first calculate $D_{min}$ from the triangle outlined in pink in Fig. 5. Equation (7) follows from trigonometry, and $D_{min}$ is given by Eq. (8).

$$\tan\theta_M = \frac{H_m + h}{D_{min}}$$
$$D_{min} = \frac{H_m + h}{\tan\theta_M}$$

Similarly, $D_{max}$ is calculated from the green triangle in Fig. 5. Equation (9) follows from trigonometry, and $D_{max}$ is given by Eq. (10).

$$\tan\theta_m = \frac{H_m}{D_{max}}$$
$$D_{max} = \frac{H_m}{\tan\theta_m}$$
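Equations (8) and (10) translate directly into code. The numerical values in the example call are hypothetical, chosen only to illustrate the calculation.

```python
import math

def placement_range(H_m, h, theta_m_deg, theta_M_deg):
    """Nearest and farthest light-source x-coordinates, Eqs. (8) and (10).

    H_m: height of the MMAPs; h: height of the mid-air image;
    theta_m_deg / theta_M_deg: minimum / maximum incidence angles (degrees).
    """
    D_min = (H_m + h) / math.tan(math.radians(theta_M_deg))  # Eq. (8)
    D_max = H_m / math.tan(math.radians(theta_m_deg))        # Eq. (10)
    return D_min, D_max

# Hypothetical values: 0.5 m MMAPs, 0.2 m image, incidence angles 20-60 degrees
D_min, D_max = placement_range(0.5, 0.2, 20, 60)
```

As expected, a steeper maximum incidence angle pulls $D_{min}$ toward the MMAPs, while a shallower minimum angle pushes $D_{max}$ away.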

3.4 Visibility range of mid-air image

To design the visible range of the mid-air image, we determined the following viewpoint positions: one from which only $- D_{min}$ can be seen; one from which only $- D_{max}$ can be seen; one from which a mid-air image displayed anywhere between $- D_{min}$ and $- D_{max}$ can be seen; and one from which a mid-air image displayed at one particular position can be seen.

The display area $abcd$ of a mid-air image visible from a viewpoint located at ($E_x$, $E_y$), as shown in Fig. 6, is obtained as follows. The display range of the mid-air image visible to the user is the range between the user’s viewpoint and the MMAPs reflected on the reflective surface. Here, we introduce a new parameter $H_M$ for the height of the MMAPs: $H_m$ is the minimum height required to display at depths $- D_{min}$ to $- D_{max}$, while $H_M$ is the height that determines the area of the user’s field of view (FoV). Therefore, the relation $H_m \leq H_M$ holds, and the larger $H_M$ is, the larger the user’s FoV. Note that $E_x < 0$ and $E_y > 0$. The x-coordinate of $a$ and $d$ is given by the intersection of the upper blue line, Eq. (11), with $y=h$; the x-coordinate of $b$ and $c$ is given by the intersection of the lower blue line, Eq. (12), with $y=0$. From these, the coordinates of $a, b, c, d$ are as follows.

$$y = \frac{E_y}{E_x}x$$
$$y = \frac{E_y + H_M}{E_x}x - H_M$$
$$ a: (\frac{E_xh}{E_y} , h),$$
$$ b: (\frac{H_ME_x}{E_y + H_M} , h)$$
$$ c: (\frac{H_ME_x}{E_y + H_M} , 0)$$
$$ d: (\frac{E_xh}{E_y} , 0)$$
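The four corners in Eqs. (13)-(16) can be computed as a small helper. The viewpoint and heights in the example call are hypothetical values satisfying $E_x < 0$, $E_y > 0$, and $h < H_M$.

```python
def visible_area(E_x, E_y, h, H_M):
    """Corners a, b, c, d of the visible mid-air image region, Eqs. (13)-(16).

    Assumes E_x < 0 and E_y > 0 (viewpoint in front of and above the origin).
    """
    x_ad = E_x * h / E_y              # Eq. (11) intersected with y = h
    x_bc = H_M * E_x / (E_y + H_M)    # Eq. (12) intersected with y = 0
    a = (x_ad, h)
    b = (x_bc, h)
    c = (x_bc, 0)
    d = (x_ad, 0)
    return a, b, c, d

# Hypothetical viewpoint 1.0 m in front and 1.5 m up; image 0.2 m, MMAPs 0.5 m
a, b, c, d = visible_area(-1.0, 1.5, 0.2, 0.5)
```

The result is a rectangle on the reflective surface: $a$ and $d$ share one x-coordinate, $b$ and $c$ the other.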

Fig. 6. Visibility range of mid-air display in the viewpoint position ($E_x$, $E_y$) and viewpoint position ($E_{xM}$, $E_{yM}$) in which the mid-air image of height $h$ is displayed in a certain position $-M$

Subsequently, we find a viewpoint position ($E_{xM}$, $E_{yM}$) in which the mid-air image of height $h$ is displayed in a certain position $-M$, as shown in Fig. 6. Note that $E_{xM} < 0$ and $E_{yM} > 0$. ($E_{xM}$, $E_{yM}$) is the intersection of orange lines and can be expressed by the following equation.

$$ (E_{xM}, E_{yM}) = (\frac{H_MM}{h-H_M}, \frac{hH_M}{H_M-h} )$$

Furthermore, if the viewpoint is located in the pink-filled region of Fig. 6, the user can see the mid-air image displayed at $-M$.
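Equation (17) can likewise be sketched in code. The example values are hypothetical; note that $h < H_M$ is required so that the signs come out as $E_{xM} < 0$ and $E_{yM} > 0$.

```python
def viewpoint_for_position(M, h, H_M):
    """Viewpoint (E_xM, E_yM) from which a mid-air image of height h
    appears at depth -M on the surface, Eq. (17). Requires h < H_M."""
    E_xM = H_M * M / (h - H_M)   # negative, since h < H_M and M > 0
    E_yM = h * H_M / (H_M - h)   # positive
    return E_xM, E_yM

# Hypothetical values: image at depth 0.3 m, image height 0.2 m, MMAPs 0.5 m
E_xM, E_yM = viewpoint_for_position(0.3, 0.2, 0.5)
```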

4. Implementation

Based on the design, we implemented a prototype from commercially available products. Its dimensions are shown on the left of Fig. 7 and its internal structure on the right. A LITEMAX Durapixel 0708-T (luminance: 1600 cd/m$^{2}$) was used as the display, an Acri-Mirror by Acri-Sunday as the mirror, an MCR140N by Mitate Imaging as the quarter-wave plate, ASKA3D by Asukanet (488 mm $\times$ 488 mm, 0.5 mm pitch) as the MMAPs, a Wincos Vision Control Film W-0055 by Lintec as the louver film, and an SHLP41 by Mikan Imaging as the polarizer. These were set up in a wooden box to make the system portable.

Fig. 7. LEFT: Prototype design, RIGHT: Implementation

5. Evaluation

The effectiveness of the proposed system in removing undesired light and displaying a mid-air image was evaluated by luminance measurements.

5.1 Procedure

Initially, we displayed mid-air images using the prototype device. Subsequently, we evaluated the brightness of the mid-air image and of the undesired image to confirm that our device is effective in removing undesired images without significantly affecting the display of the mid-air image. For this, we compared the mid-air image display unit alone with the unit equipped with both the display unit and the undesired-light removal unit.

In this section, a system with only a mid-air image display section is described as “optics without polarization operation (w/o),” and a system with both a mid-air image display section and an undesired-light removal section is described as “optics with polarization operation (w/).”

The measurement conditions are shown in Fig. 8. The luminance meter was placed at a distance of 50 cm from the mid-air image. Let the angle in the latitudinal direction be X and the angle in the longitudinal direction be Y. X and Y were measured in $5^\circ$ increments, with X starting from $45^\circ$ and Y in the range of $0^\circ$ to $25^\circ$. A white circle with a diameter of 8 cm was displayed on the display, and the luminance at the center of the mid-air image and of the undesired-light image was measured. An acrylic mirror was used as the reflective material, and a Konica Minolta CS-150 as the luminance meter.

Fig. 8. Brightness measurement conditions (A) latitude direction (side view) (B) longitude direction (top view)

5.2 Result

The results of measurements with the optical system without polarization manipulation are shown on the left of Fig. 9 and those with polarization manipulation on the right. The solid line indicates the luminance of the mid-air image, and the dashed line indicates the luminance of the undesired image. With the non-polarized optics, the brightness of the mid-air image and that of the undesired image are approximately identical. In contrast, with the polarization-manipulated optics, the mid-air image is much brighter than the undesired image. The ratio of the luminance measured with polarization manipulation to that measured without, for both the undesired image and the mid-air image at $X = 60^\circ$, is shown on the left of Fig. 10. The luminance of the undesired image displayed by the optical system with polarization manipulation ranged from $5$ to $10\%$ of that displayed by the optical system without polarization manipulation.

Fig. 9. Measurement results for an optical system. LEFT: With polarization manipulation. RIGHT: Without polarization manipulation.

Fig. 10. Comparison of measurement results. LEFT: Ratio of luminance measured by an optical system with polarization manipulation to that measured by an optical system without polarization manipulation. RIGHT: Luminance ratio of the mid-air image to the undesired image

The luminance ratio of the mid-air image to the undesired image is shown on the right of Fig. 10. The dashed line indicates the luminance ratio for the optical system without polarization manipulation, and the solid line indicates that for the optical system with polarization manipulation. Without polarization manipulation, the luminance of the undesired image and the mid-air image are approximately identical, whereas with polarization manipulation, the mid-air image luminance is much higher than that of the undesired image. This confirms that our proposed method is effective in removing the undesired light and displaying the mid-air image alone.

The mid-air image and the undesired image displayed by the optics without and with polarization manipulation are shown in Fig. 11. The upper images are those displayed with the polarization-manipulated optical system; the lower images are those displayed without polarization manipulation. The proposed system can remove the undesired light and display only the mid-air image; the undesired image is not visible even when the directly displayed image is observed with the polarization-manipulated optics.

Fig. 11. Displayed mid-air image and undesired image. Upper images: Images displayed with polarization control; the mid-air images are clearly visible and no undesired-light image appears. Lower images: Images without polarization control; undesired-light images appear alongside the mid-air images.

5.3 Spatial resolution

The edge method of the modulation transfer function (MTF) was used to measure the resolution. An aerial image was captured using a Sony ILCE-7M3 camera and a Sony SEL70300G lens. We used mirrors, iPads, transparent acrylic plates, white marble, gray marble, black marble, and white tiles as reflective materials to display the aerial image. The measurement setup is shown in Fig. 12(a), and the calculated MTF curve is shown in Fig. 12(b). The background of the mid-air image is translucent and low in luminance, so the background becomes visible through the mid-air image; when a picture is taken, the background pattern overlaps the mid-air image, and light diffused at the reflective surface is also superimposed on it. Therefore, we believe it is impossible to measure the resolution of the mid-air image alone with the current method.
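For reference, a minimal sketch of the edge method on synthetic data is shown below: the edge spread function (ESF) is differentiated to obtain the line spread function (LSF), whose normalized Fourier magnitude is the MTF. A real measurement additionally requires edge alignment and oversampling; the blurred step edge here is purely illustrative, not the paper's measured data.

```python
import numpy as np

def mtf_from_edge(edge_profile):
    """Edge method: differentiate the ESF to get the LSF, then take the
    normalized FFT magnitude as the MTF."""
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.diff(esf)                   # ESF -> LSF
    lsf = lsf * np.hanning(lsf.size)     # window to suppress spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                  # normalize so MTF(0) = 1

# Synthetic example: a smoothly blurred step edge from 0 to 1
x = np.linspace(-1, 1, 256)
esf = 0.5 * (1 + np.tanh(x / 0.05))
mtf = mtf_from_edge(esf)
```

The resulting curve starts at 1 at zero spatial frequency and falls off toward the Nyquist frequency, as expected for a blurred edge.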

Fig. 12. The measurement of spatial resolutions. (a) The setup of the measurement; several reflective materials were selected. (b) The calculated MTF curve.

6. Discussion

6.1 Applications

This study is the first to confirm this removal in an optical system using micro-mirror array plates, enabling a mid-air image to be produced on a glossy surface simply by placing the device on it. This innovation can be employed in several applications. Mid-air images can be displayed next to real objects, since they have no physical frame or screen. Because our system can be installed simply by placing it on a shiny plane, with an upright mid-air image displayed on that plane, a user can place his/her favorite CG character in a private space and enjoy communicating with it. In public spaces, the system can be deployed as digital signage, placing information next to products in a department store or describing the exhibits in a museum.

Using a tablet or a smartphone as the light-source display, the user can interact with mid-air CG characters without installing any new input sensors in the system. Using the multiple sensors on tablets and smartphones, such as microphones and cameras, interactive applications can be created. For example, using the microphone of a tablet, the CG character can serve as the visual embodiment of an assistant tool such as Siri or Alexa. The proposed system can be placed not only on a floor or a table but also on a ceiling or a wall.

6.2 Limitations

As we mentioned above, the proposed system has three limitations: the narrow FoV, low brightness, and inability to display mid-air images on non-reflective surfaces.

The first issue is the user’s narrow FoV. The FoV of the proposed system is the area between the user’s viewpoint and the MMAPs. The FoV therefore depends on the size of the MMAPs, and if the viewpoint is outside the range of the MMAPs, the mid-air image is not visible. A possible solution is to effectively widen the MMAPs by placing mirrors on their left and right edges [26].

The second problem of this system is the low brightness of mid-air images. The proposed system uses multiple optical elements, and each reflection at or passage through an optical element attenuates the brightness. In addition, glass and marble surfaces are less reflective and hence yield dimmer images. Consequently, displaying a bright mid-air image with our system requires a high-brightness display as the light source. It may also be possible to display bright mid-air images using a strong external light source.

The third point is that it is impossible to display mid-air images on a non-reflective surface. Since our system uses reflections from glossy surfaces to display a mid-air image, it does not form a mid-air image when it is placed on a non-reflective surface such as a carpet. A possible solution is to place a shiny plane such as a tablet on top of the non-reflective surface.

7. Conclusions

We proposed a portable mid-air imaging optical system, PortOn, which can be installed simply by placing it on a glossy horizontal surface such as a table or the floor. We used a combination of MMAPs, a light source, a mirror, and a louver film to display an image formed by the light source on a reflective surface. We designed the system for multiple types of light sources to eliminate undesired-light images. When using a display with oblique polarization, a polarizing plate was placed on the display side of the MMAPs. When using a display with vertical or horizontal polarization, a quarter-wave plate was placed on the mirror in addition to the oblique-polarization configuration. When using a display without polarization, a polarizing plate was placed on the display in addition to the oblique-polarization configuration. We calculated the size of the system and the visible area, and we prototyped the system. We evaluated whether the proposed system erases undesired-light images by measuring the luminance of the mid-air and undesired-light images, and we confirmed that it is possible to display the mid-air images alone.

Funding

Precursory Research for Embryonic Science and Technology; Japan Science and Technology Agency (JPMJP16D5); Japan Society for the Promotion of Science (20H04223).

Acknowledgment

Portions of this work were presented at the ACM Symposium on Virtual Reality Software and Technology in 2019, as “Portable Mid-air Imaging Optical System on Glossy Surface.”

Disclosures

The authors declare no conflicts of interest.

References

1. V. Geroimenko, Augmented Reality Games I, vol. 254 of Understanding the Pokémon GO Phenomenon (Springer International Publishing, New York, NY, 2019).

2. M. R. Mine, J. van Baar, A. Grundhofer, D. Rose, and B. Yang, “Projection-based augmented reality in disney theme parks,” Computer 45(7), 32–40 (2012). [CrossRef]  

3. G. A. Koulieris, K. Akşit, M. Stengel, R. K. Mantiuk, K. Mania, and C. Richardt, “Near-eye display and tracking technologies for virtual and augmented reality,” Comput. Graph. Forum 38(2), 493–519 (2019). [CrossRef]  

4. Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: Aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” ACM Trans. Graph. 35(2), 1–14 (2016). [CrossRef]  

5. Y. Tokuda, M. A. Norasikin, S. Subramanian, and D. Martinez Plasencia, “Mistform: Adaptive shape changing fog screens,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, (Association for Computing Machinery, New York, NY, USA, 2017), CHI ’17, pp. 4383–4395.

6. Y. Ueda, K. Iwazaki, M. Shibasaki, Y. Mizushina, M. Furukawa, H. Nii, K. Minamizawa, and S. Tachi, “Haptomirage: Mid-air autostereoscopic display for seamless interaction with mixed reality environments,” in ACM SIGGRAPH 2014 Emerging Technologies, (Association for Computing Machinery, New York, NY, USA, 2014), SIGGRAPH ’14.

7. Y. Maeda, D. Miyazaki, and S. Maekawa, “Aerial imaging display based on a heterogeneous imaging system consisting of roof mirror arrays,” in Proceedings of the IEEE 3rd Global Conference on Consumer Electronics (GCCE’14), (IEEE, Los Alamitos, CA, 2014), pp. 211–215.

8. H. Yamamoto and S. Suyama, “Aerial imaging by retro-reflection (airr),” in SID Symposium Digest of Technical Papers, (2013), pp. 895–897.

9. S. Maekawa, K. Nitta, and O. Matoba, “Transmissive optical imaging device with micromirror array,” in Three-Dimensional TV, Video, and Display V, vol. 6392 International Society for Optics and Photonics (SPIE, 2006), pp. 130–137.

10. M. Otsubo, “Optical imaging apparatus and optical imaging method using the same,” (2014).

11. Y. Maeda, D. Miyazaki, and S. Maekawa, “Aerial imaging display based on a heterogeneous imaging system consisting of roof mirror arrays,” in 2014 IEEE 3rd Global Conference on Consumer Electronics (GCCE), (2014), pp. 211–215.

12. Y. Maeda, D. Miyazaki, and S. Maekawa, “Volumetric aerial three-dimensional display based on heterogeneous imaging and image plane scanning,” Appl. Opt. 54(13), 4109–4115 (2015). [CrossRef]  

13. Y. Tokuda, A. Hiyama, M. Hirose, and H. Yamamoto, “R2d2 w/ airr: Real time & real space double-layered display with aerial imaging by retro-reflection,” in SIGGRAPH Asia 2015 Emerging Technologies, (Association for Computing Machinery, New York, NY, USA, 2015), SA ’15.

14. H. Kim, S.-W. Min, and B. Lee, “Geometrical optics analysis of the structural imperfection of retroreflection corner cubes with a nonlinear conjugate gradient method,” Appl. Opt. 47(34), 6453–6469 (2008). [CrossRef]  

15. Y. Yoshimizu and E. Iwase, “Radially arranged dihedral corner reflector array for wide viewing angle of floating image without virtual image,” Opt. Express 27(2), 918–927 (2019). [CrossRef]  

16. K. Hiratani, D. Iwai, P. Punpongsanon, and K. Sato, “Shadowless projector: Suppressing shadows in projection mapping with micro mirror array plate,” in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), (2019), pp. 1309–1310.

17. H. Kim, I. Takahashi, H. Yamamoto, S. Maekawa, and T. Naemura, “Mario: Mid-air augmented reality interaction with objects,” Entertain. Comput. 5(4), 233–241 (2014). [CrossRef]  

18. Y. Makino, Y. Furuyama, and H. Shinoda, “Haptoclone (haptic-optical clone): Mid-air haptic-optical human-human interaction with perfect synchronization,” in Proceedings of the 3rd ACM Symposium on Spatial User Interaction, (ACM, New York, NY, USA, 2015), SUI ’15, p. 139.

19. K. Yoshida, Y. Horiuchi, S. Inoue, Y. Makino, and H. Shinoda, “Haptoclonear: Mutual haptic-optic interactive system with 2d image superimpose,” in ACM SIGGRAPH 2017 Emerging Technologies, (ACM, New York, NY, USA, 2017), SIGGRAPH ’17, pp. 1–2.

20. Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, S. Inoue, and H. Shinoda, “Haptomime: Mid-air haptic interaction with a floating virtual screen,” in Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, (Association for Computing Machinery, New York, NY, USA, 2014), UIST ’14, pp. 663–667.

21. K. Tsuchiya, A. Sano, and N. Koizumi, “Interaction system with mid-air cg character that has own eyes,” in SIGGRAPH Asia 2018 Posters, (Association for Computing Machinery, New York, NY, USA, 2018), SA ’18.

22. M. Takasaki, K. Ohashi, and S. Mizuno, “Interaction of a stereoscopic 3dcg image with motion parallax displayed in mid-air,” in SIGGRAPH Asia 2018 Posters, (Association for Computing Machinery, New York, NY, USA, 2018), SA ’18.

23. H. Kim, H. Yamamoto, N. Koizumi, S. Maekawa, and T. Naemura, “Hovertable: Dual-sided vertical mid-air images on horizontal tabletop display,” in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, (Association for Computing Machinery, New York, NY, USA, 2015), CHI EA ’15, pp. 1115–1120.

24. H. Yamamoto, H. Kajita, N. Koizumi, and T. Naemura, “Enchantable: Displaying a vertically standing mid-air image on a table surface using reflection,” in Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces, (Association for Computing Machinery, New York, NY, USA, 2015), ITS ’15, pp. 397–400.

25. Y. Matsuura and N. Koizumi, “Scoopirit: A method of scooping mid-air images on water surface,” in Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces, (Association for Computing Machinery, New York, NY, USA, 2018), ISS ’18, pp. 227–235.

26. H. Kajita, N. Koizumi, and T. Naemura, “Skyanchor: Optical design for anchoring mid-air images onto physical objects,” in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, (Association for Computing Machinery, New York, NY, USA, 2016), UIST ’16, pp. 415–423.

References

  • View by:

  1. V. Geroimenko, Augmented Reality Games I, vol. 254 of Understanding the Pokémon GO Phenomenon (Springer International Publishing, New York, NY, 2019).
  2. M. R. Mine, J. van Baar, A. Grundhofer, D. Rose, and B. Yang, “Projection-based augmented reality in disney theme parks,” Computer 45(7), 32–40 (2012).
    [Crossref]
  3. G. A. Koulieris, K. Akşit, M. Stengel, R. K. Mantiuk, K. Mania, and C. Richardt, “Near-eye display and tracking technologies for virtual and augmented reality,” Comput. Graph. Forum 38(2), 493–519 (2019).
    [Crossref]
  4. Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: Aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” ACM Trans. Graph. 35(2), 1–14 (2016).
    [Crossref]
  5. Y. Tokuda, M. A. Norasikin, S. Subramanian, and D. Martinez Plasencia, “Mistform: Adaptive shape changing fog screens,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, (Association for Computing Machinery, New York, NY, USA, 2017), CHI’ 17, pp. 4383–4395.
  6. Y. Ueda, K. Iwazaki, M. Shibasaki, Y. Mizushina, M. Furukawa, H. Nii, K. Minamizawa, and S. Tachi, “Haptomirage: Mid-air autostereoscopic display for seamless interaction with mixed reality environments,” in ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH 2014, (Association for Computing Machinery, 2014),ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH 2014. ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2014 ; Conference date: 10-08-2014 Through 14-08-2014.
  7. Y. Maeda, D. Miyazaki, and S. Maekawa, “Aerial imaging display based on a heterogeneous imaging system consisting of roof mirror arrays,” in Proceedings of the IEEE 3rd Global Conference on Consumer Electronics (GCCE’14), (IEEE, Los Alamitos, CA, 2014), pp. 211–215.
  8. H. Yamamoto and S. Suyama, “Aerial imaging by retro-reflection (airr),” in SID Symposium Digest of Technical Papers, (2013), pp. 895–897.
  9. S. Maekawa, K. Nitta, and O. Matoba, “Transmissive optical imaging device with micromirror array,” in Three-Dimensional TV, Video, and Display V, vol. 6392 International Society for Optics and Photonics (SPIE, 2006), pp. 130–137.
  10. M. Otsubo, “Optical imaging apparatus and optical imaging method using the same,” (2014).
  11. Y. Maeda, D. Miyazaki, and S. Maekawa, “Aerial imaging display based on a heterogeneous imaging system consisting of roof mirror arrays,” in 2014 IEEE 3rd Global Conference on Consumer Electronics (GCCE), (2014), pp. 211–215.
  12. Y. Maeda, D. Miyazaki, and S. Maekawa, “Volumetric aerial three-dimensional display based on heterogeneous imaging and image plane scanning,” Appl. Opt. 54(13), 4109–4115 (2015).
    [Crossref]
  13. Y. Tokuda, A. Hiyama, M. Hirose, and H. Yamamoto, “R2d2 w/ airr: Real time & real space double-layered display with aerial imaging by retro-reflection,” in SIGGRAPH Asia 2015 Emerging Technologies, (Association for Computing Machinery, New York, NY, USA, 2015), SA ’15.
  14. H. Kim, S.-W. Min, and B. Lee, “Geometrical optics analysis of the structural imperfection of retroreflection corner cubes with a nonlinear conjugate gradient method,” Appl. Opt. 47(34), 6453–6469 (2008).
    [Crossref]
  15. Y. Yoshimizu and E. Iwase, “Radially arranged dihedral corner reflector array for wide viewing angle of floating image without virtual image,” Opt. Express 27(2), 918–927 (2019).
    [Crossref]
  16. K. Hiratani, D. Iwai, P. Punpongsanon, and K. Sato, “Shadowless projector: Suppressing shadows in projection mapping with micro mirror array plate,” in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), (2019), pp. 1309–1310.
  17. H. Kim, I. Takahashi, H. Yamamoto, S. Maekawa, and T. Naemura, “Mario: Mid-air augmented reality interaction with objects,” Entertain. Comput. 5(4), 233–241 (2014).
    [Crossref]
  18. Y. Makino, Y. Furuyama, and H. Shinoda, “Haptoclone (haptic-optical clone): Mid-air haptic-optical human-human interaction with perfect synchronization,” in Proceedings of the 3rd ACM Symposium on Spatial User Interaction, (ACM, New York, NY, USA, 2015), SUI’ 15, pp. 139.
  19. K. Yoshida, Y. Horiuchi, S. Inoue, Y. Makino, and H. Shinoda, “Haptoclonear: Mutual haptic-optic interactive system with 2d image superimpose, SIGGRAPH’ 17,” in ACM SIGGRAPH 2017 Emerging Technologies, (ACM, New York, NY, USA, 2017), pp. 1–2.
  20. Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, S. Inoue, and H. Shinoda, “Haptomime: Mid-air haptic interaction with a floating virtual screen,” in Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, (Association for Computing Machinery, New York, NY, USA, 2014), UIST’ 14, pp. 663–667.
  21. K. Tsuchiya, A. Sano, and N. Koizumi, “Interaction system with mid-air cg character that has own eyes,” in SIGGRAPH Asia 2018 Posters, (Association for Computing Machinery, New York, NY, USA, 2018), SA’ 18.
  22. M. Takasaki, K. Ohashi, and S. Mizuno, “Interaction of a stereoscopic 3dcg image with motion parallax displayed in mid-air,” in SIGGRAPH Asia 2018 Posters (Association for Computing Machinery, New York, NY, USA, 2018), SA’ 18.
  23. H. Kim, H. Yamamoto, N. Koizumi, S. Maekawa, and T. Naemura, “Hovertable: Dual-sided vertical mid-air images on horizontal tabletop display,” in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, (Association for Computing Machinery, New York, NY, USA, 2015), CHI EA’ 15, pp. 1115–1120.
  24. H. Yamamoto, H. Kajita, N. Koizumi, and T. Naemura, “Enchantable: Displaying a vertically standing mid-air image on a table surface using reflection,” in Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces, (Association for Computing Machinery, New York, NY, USA, 2015), ITS’ 15, pp. 397–400.
  25. Y. Matsuura and N. Koizumi, “Scoopirit: A method of scooping mid-air images on water surface,” in Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces, (Association for Computing Machinery, New York, NY, USA, 2018), ISS ’18, pp. 227–235.
  26. H. Kajita, N. Koizumi, and T. Naemura, “Skyanchor: Optical design for anchoring mid-air images onto physical objects,” in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, (Association for Computing Machinery, New York, NY, USA, 2016), UIST’ 16, pp. 415–423.

2019 (2)

G. A. Koulieris, K. Akşit, M. Stengel, R. K. Mantiuk, K. Mania, and C. Richardt, “Near-eye display and tracking technologies for virtual and augmented reality,” Comput. Graph. Forum 38(2), 493–519 (2019).
[Crossref]

Y. Yoshimizu and E. Iwase, “Radially arranged dihedral corner reflector array for wide viewing angle of floating image without virtual image,” Opt. Express 27(2), 918–927 (2019).
[Crossref]

2016 (1)

Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: Aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” ACM Trans. Graph. 35(2), 1–14 (2016).
[Crossref]

2015 (1)

2014 (1)

H. Kim, I. Takahashi, H. Yamamoto, S. Maekawa, and T. Naemura, “Mario: Mid-air augmented reality interaction with objects,” Entertain. Comput. 5(4), 233–241 (2014).
[Crossref]

2012 (1)

M. R. Mine, J. van Baar, A. Grundhofer, D. Rose, and B. Yang, “Projection-based augmented reality in disney theme parks,” Computer 45(7), 32–40 (2012).
[Crossref]

2008 (1)

Aksit, K.

G. A. Koulieris, K. Akşit, M. Stengel, R. K. Mantiuk, K. Mania, and C. Richardt, “Near-eye display and tracking technologies for virtual and augmented reality,” Comput. Graph. Forum 38(2), 493–519 (2019).
[Crossref]

Fujiwara, M.

Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, S. Inoue, and H. Shinoda, “Haptomime: Mid-air haptic interaction with a floating virtual screen,” in Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, (Association for Computing Machinery, New York, NY, USA, 2014), UIST’ 14, pp. 663–667.

Furukawa, M.

Y. Ueda, K. Iwazaki, M. Shibasaki, Y. Mizushina, M. Furukawa, H. Nii, K. Minamizawa, and S. Tachi, “Haptomirage: Mid-air autostereoscopic display for seamless interaction with mixed reality environments,” in ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH 2014, (Association for Computing Machinery, 2014),ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH 2014. ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2014 ; Conference date: 10-08-2014 Through 14-08-2014.

Furuyama, Y.

Y. Makino, Y. Furuyama, and H. Shinoda, “Haptoclone (haptic-optical clone): Mid-air haptic-optical human-human interaction with perfect synchronization,” in Proceedings of the 3rd ACM Symposium on Spatial User Interaction, (ACM, New York, NY, USA, 2015), SUI’ 15, pp. 139.

Geroimenko, V.

V. Geroimenko, Augmented Reality Games I, vol. 254 of Understanding the Pokémon GO Phenomenon (Springer International Publishing, New York, NY, 2019).

Grundhofer, A.

M. R. Mine, J. van Baar, A. Grundhofer, D. Rose, and B. Yang, “Projection-based augmented reality in disney theme parks,” Computer 45(7), 32–40 (2012).
[Crossref]

Hasegawa, K.

Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, S. Inoue, and H. Shinoda, “Haptomime: Mid-air haptic interaction with a floating virtual screen,” in Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, (Association for Computing Machinery, New York, NY, USA, 2014), UIST’ 14, pp. 663–667.

Hasegawa, S.

Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: Aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” ACM Trans. Graph. 35(2), 1–14 (2016).
[Crossref]

Hayasaki, Y.

Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: Aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” ACM Trans. Graph. 35(2), 1–14 (2016).
[Crossref]

Hiratani, K.

K. Hiratani, D. Iwai, P. Punpongsanon, and K. Sato, “Shadowless projector: Suppressing shadows in projection mapping with micro mirror array plate,” in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), (2019), pp. 1309–1310.

Hirose, M.

Y. Tokuda, A. Hiyama, M. Hirose, and H. Yamamoto, “R2d2 w/ airr: Real time & real space double-layered display with aerial imaging by retro-reflection,” in SIGGRAPH Asia 2015 Emerging Technologies, (Association for Computing Machinery, New York, NY, USA, 2015), SA ’15.

Hiyama, A.

Y. Tokuda, A. Hiyama, M. Hirose, and H. Yamamoto, “R2d2 w/ airr: Real time & real space double-layered display with aerial imaging by retro-reflection,” in SIGGRAPH Asia 2015 Emerging Technologies, (Association for Computing Machinery, New York, NY, USA, 2015), SA ’15.

Horiuchi, Y.

K. Yoshida, Y. Horiuchi, S. Inoue, Y. Makino, and H. Shinoda, “Haptoclonear: Mutual haptic-optic interactive system with 2d image superimpose, SIGGRAPH’ 17,” in ACM SIGGRAPH 2017 Emerging Technologies, (ACM, New York, NY, USA, 2017), pp. 1–2.

Hoshi, T.

Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: Aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” ACM Trans. Graph. 35(2), 1–14 (2016).
[Crossref]

Inoue, S.

K. Yoshida, Y. Horiuchi, S. Inoue, Y. Makino, and H. Shinoda, “Haptoclonear: Mutual haptic-optic interactive system with 2d image superimpose, SIGGRAPH’ 17,” in ACM SIGGRAPH 2017 Emerging Technologies, (ACM, New York, NY, USA, 2017), pp. 1–2.

Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, S. Inoue, and H. Shinoda, “Haptomime: Mid-air haptic interaction with a floating virtual screen,” in Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, (Association for Computing Machinery, New York, NY, USA, 2014), UIST’ 14, pp. 663–667.

Iwai, D.

K. Hiratani, D. Iwai, P. Punpongsanon, and K. Sato, “Shadowless projector: Suppressing shadows in projection mapping with micro mirror array plate,” in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), (2019), pp. 1309–1310.

Iwase, E.

Iwazaki, K.

Y. Ueda, K. Iwazaki, M. Shibasaki, Y. Mizushina, M. Furukawa, H. Nii, K. Minamizawa, and S. Tachi, “Haptomirage: Mid-air autostereoscopic display for seamless interaction with mixed reality environments,” in ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH 2014, (Association for Computing Machinery, 2014),ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH 2014. ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2014 ; Conference date: 10-08-2014 Through 14-08-2014.

Kajita, H.

H. Yamamoto, H. Kajita, N. Koizumi, and T. Naemura, “Enchantable: Displaying a vertically standing mid-air image on a table surface using reflection,” in Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces, (Association for Computing Machinery, New York, NY, USA, 2015), ITS’ 15, pp. 397–400.

H. Kajita, N. Koizumi, and T. Naemura, “Skyanchor: Optical design for anchoring mid-air images onto physical objects,” in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, (Association for Computing Machinery, New York, NY, USA, 2016), UIST’ 16, pp. 415–423.

Kim, H.

H. Kim, I. Takahashi, H. Yamamoto, S. Maekawa, and T. Naemura, “Mario: Mid-air augmented reality interaction with objects,” Entertain. Comput. 5(4), 233–241 (2014).
[Crossref]

H. Kim, S.-W. Min, and B. Lee, “Geometrical optics analysis of the structural imperfection of retroreflection corner cubes with a nonlinear conjugate gradient method,” Appl. Opt. 47(34), 6453–6469 (2008).
[Crossref]

H. Kim, H. Yamamoto, N. Koizumi, S. Maekawa, and T. Naemura, “Hovertable: Dual-sided vertical mid-air images on horizontal tabletop display,” in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, (Association for Computing Machinery, New York, NY, USA, 2015), CHI EA’ 15, pp. 1115–1120.

Koizumi, N.

H. Yamamoto, H. Kajita, N. Koizumi, and T. Naemura, “Enchantable: Displaying a vertically standing mid-air image on a table surface using reflection,” in Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces, (Association for Computing Machinery, New York, NY, USA, 2015), ITS’ 15, pp. 397–400.

H. Kajita, N. Koizumi, and T. Naemura, “Skyanchor: Optical design for anchoring mid-air images onto physical objects,” in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, (Association for Computing Machinery, New York, NY, USA, 2016), UIST’ 16, pp. 415–423.

Y. Matsuura and N. Koizumi, “Scoopirit: A method of scooping mid-air images on water surface,” in Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces, (Association for Computing Machinery, New York, NY, USA, 2018), ISS ’18, pp. 227–235.

K. Tsuchiya, A. Sano, and N. Koizumi, “Interaction system with mid-air cg character that has own eyes,” in SIGGRAPH Asia 2018 Posters, (Association for Computing Machinery, New York, NY, USA, 2018), SA’ 18.

H. Kim, H. Yamamoto, N. Koizumi, S. Maekawa, and T. Naemura, “Hovertable: Dual-sided vertical mid-air images on horizontal tabletop display,” in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, (Association for Computing Machinery, New York, NY, USA, 2015), CHI EA’ 15, pp. 1115–1120.

Koulieris, G. A.

G. A. Koulieris, K. Akşit, M. Stengel, R. K. Mantiuk, K. Mania, and C. Richardt, “Near-eye display and tracking technologies for virtual and augmented reality,” Comput. Graph. Forum 38(2), 493–519 (2019).
[Crossref]

Kumagai, K.

Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: Aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” ACM Trans. Graph. 35(2), 1–14 (2016).
[Crossref]

Lee, B.

Maeda, Y.

Y. Maeda, D. Miyazaki, and S. Maekawa, “Volumetric aerial three-dimensional display based on heterogeneous imaging and image plane scanning,” Appl. Opt. 54(13), 4109–4115 (2015).
[Crossref]

Y. Maeda, D. Miyazaki, and S. Maekawa, “Aerial imaging display based on a heterogeneous imaging system consisting of roof mirror arrays,” in 2014 IEEE 3rd Global Conference on Consumer Electronics (GCCE), (2014), pp. 211–215.

Y. Maeda, D. Miyazaki, and S. Maekawa, “Aerial imaging display based on a heterogeneous imaging system consisting of roof mirror arrays,” in Proceedings of the IEEE 3rd Global Conference on Consumer Electronics (GCCE’14), (IEEE, Los Alamitos, CA, 2014), pp. 211–215.

Maekawa, S.

Y. Maeda, D. Miyazaki, and S. Maekawa, “Volumetric aerial three-dimensional display based on heterogeneous imaging and image plane scanning,” Appl. Opt. 54(13), 4109–4115 (2015).
[Crossref]

H. Kim, I. Takahashi, H. Yamamoto, S. Maekawa, and T. Naemura, “Mario: Mid-air augmented reality interaction with objects,” Entertain. Comput. 5(4), 233–241 (2014).
[Crossref]

Y. Maeda, D. Miyazaki, and S. Maekawa, “Aerial imaging display based on a heterogeneous imaging system consisting of roof mirror arrays,” in 2014 IEEE 3rd Global Conference on Consumer Electronics (GCCE), (2014), pp. 211–215.

S. Maekawa, K. Nitta, and O. Matoba, “Transmissive optical imaging device with micromirror array,” in Three-Dimensional TV, Video, and Display V, vol. 6392 International Society for Optics and Photonics (SPIE, 2006), pp. 130–137.

Y. Maeda, D. Miyazaki, and S. Maekawa, “Aerial imaging display based on a heterogeneous imaging system consisting of roof mirror arrays,” in Proceedings of the IEEE 3rd Global Conference on Consumer Electronics (GCCE’14), (IEEE, Los Alamitos, CA, 2014), pp. 211–215.

H. Kim, H. Yamamoto, N. Koizumi, S. Maekawa, and T. Naemura, “Hovertable: Dual-sided vertical mid-air images on horizontal tabletop display,” in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, (Association for Computing Machinery, New York, NY, USA, 2015), CHI EA’ 15, pp. 1115–1120.

Makino, Y.

Y. Makino, Y. Furuyama, and H. Shinoda, “Haptoclone (haptic-optical clone): Mid-air haptic-optical human-human interaction with perfect synchronization,” in Proceedings of the 3rd ACM Symposium on Spatial User Interaction, (ACM, New York, NY, USA, 2015), SUI’ 15, pp. 139.

K. Yoshida, Y. Horiuchi, S. Inoue, Y. Makino, and H. Shinoda, “Haptoclonear: Mutual haptic-optic interactive system with 2d image superimpose, SIGGRAPH’ 17,” in ACM SIGGRAPH 2017 Emerging Technologies, (ACM, New York, NY, USA, 2017), pp. 1–2.

Mania, K.

G. A. Koulieris, K. Akşit, M. Stengel, R. K. Mantiuk, K. Mania, and C. Richardt, “Near-eye display and tracking technologies for virtual and augmented reality,” Comput. Graph. Forum 38(2), 493–519 (2019).
[Crossref]

Mantiuk, R. K.

G. A. Koulieris, K. Akşit, M. Stengel, R. K. Mantiuk, K. Mania, and C. Richardt, “Near-eye display and tracking technologies for virtual and augmented reality,” Comput. Graph. Forum 38(2), 493–519 (2019).
[Crossref]

Martinez Plasencia, D.

Y. Tokuda, M. A. Norasikin, S. Subramanian, and D. Martinez Plasencia, “Mistform: Adaptive shape changing fog screens,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, (Association for Computing Machinery, New York, NY, USA, 2017), CHI’ 17, pp. 4383–4395.

Matoba, O.

S. Maekawa, K. Nitta, and O. Matoba, “Transmissive optical imaging device with micromirror array,” in Three-Dimensional TV, Video, and Display V, vol. 6392 International Society for Optics and Photonics (SPIE, 2006), pp. 130–137.

Matsuura, Y.

Y. Matsuura and N. Koizumi, “Scoopirit: A method of scooping mid-air images on water surface,” in Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces, (Association for Computing Machinery, New York, NY, USA, 2018), ISS ’18, pp. 227–235.

Min, S.-W.

Minamizawa, K.

Y. Ueda, K. Iwazaki, M. Shibasaki, Y. Mizushina, M. Furukawa, H. Nii, K. Minamizawa, and S. Tachi, “Haptomirage: Mid-air autostereoscopic display for seamless interaction with mixed reality environments,” in ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH 2014, (Association for Computing Machinery, 2014),ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH 2014. ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2014 ; Conference date: 10-08-2014 Through 14-08-2014.

Mine, M. R.

M. R. Mine, J. van Baar, A. Grundhofer, D. Rose, and B. Yang, “Projection-based augmented reality in disney theme parks,” Computer 45(7), 32–40 (2012).
[Crossref]

Miyazaki, D.

Y. Maeda, D. Miyazaki, and S. Maekawa, “Volumetric aerial three-dimensional display based on heterogeneous imaging and image plane scanning,” Appl. Opt. 54(13), 4109–4115 (2015).
[Crossref]

Y. Maeda, D. Miyazaki, and S. Maekawa, “Aerial imaging display based on a heterogeneous imaging system consisting of roof mirror arrays,” in 2014 IEEE 3rd Global Conference on Consumer Electronics (GCCE), (2014), pp. 211–215.

Y. Maeda, D. Miyazaki, and S. Maekawa, “Aerial imaging display based on a heterogeneous imaging system consisting of roof mirror arrays,” in Proceedings of the IEEE 3rd Global Conference on Consumer Electronics (GCCE’14), (IEEE, Los Alamitos, CA, 2014), pp. 211–215.

Mizuno, S.

M. Takasaki, K. Ohashi, and S. Mizuno, “Interaction of a stereoscopic 3dcg image with motion parallax displayed in mid-air,” in SIGGRAPH Asia 2018 Posters (Association for Computing Machinery, New York, NY, USA, 2018), SA’ 18.

Mizushina, Y.

Y. Ueda, K. Iwazaki, M. Shibasaki, Y. Mizushina, M. Furukawa, H. Nii, K. Minamizawa, and S. Tachi, “Haptomirage: Mid-air autostereoscopic display for seamless interaction with mixed reality environments,” in ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH 2014, (Association for Computing Machinery, 2014),ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH 2014. ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2014 ; Conference date: 10-08-2014 Through 14-08-2014.

Monnai, Y.

Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, S. Inoue, and H. Shinoda, “Haptomime: Mid-air haptic interaction with a floating virtual screen,” in Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, (Association for Computing Machinery, New York, NY, USA, 2014), UIST’ 14, pp. 663–667.

Naemura, T.

H. Kim, I. Takahashi, H. Yamamoto, S. Maekawa, and T. Naemura, “Mario: Mid-air augmented reality interaction with objects,” Entertain. Comput. 5(4), 233–241 (2014).
[Crossref]

H. Kim, H. Yamamoto, N. Koizumi, S. Maekawa, and T. Naemura, “Hovertable: Dual-sided vertical mid-air images on horizontal tabletop display,” in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, (Association for Computing Machinery, New York, NY, USA, 2015), CHI EA’ 15, pp. 1115–1120.

H. Kajita, N. Koizumi, and T. Naemura, “Skyanchor: Optical design for anchoring mid-air images onto physical objects,” in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, (Association for Computing Machinery, New York, NY, USA, 2016), UIST’ 16, pp. 415–423.

H. Yamamoto, H. Kajita, N. Koizumi, and T. Naemura, “Enchantable: Displaying a vertically standing mid-air image on a table surface using reflection,” in Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces, (Association for Computing Machinery, New York, NY, USA, 2015), ITS’ 15, pp. 397–400.

Nii, H.

Y. Ueda, K. Iwazaki, M. Shibasaki, Y. Mizushina, M. Furukawa, H. Nii, K. Minamizawa, and S. Tachi, “Haptomirage: Mid-air autostereoscopic display for seamless interaction with mixed reality environments,” in ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH 2014, (Association for Computing Machinery, 2014),ACM SIGGRAPH 2014 Emerging Technologies, SIGGRAPH 2014. ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2014 ; Conference date: 10-08-2014 Through 14-08-2014.

Nitta, K.

S. Maekawa, K. Nitta, and O. Matoba, “Transmissive optical imaging device with micromirror array,” in Three-Dimensional TV, Video, and Display V, vol. 6392 International Society for Optics and Photonics (SPIE, 2006), pp. 130–137.

Norasikin, M. A.

Y. Tokuda, M. A. Norasikin, S. Subramanian, and D. Martinez Plasencia, “Mistform: Adaptive shape changing fog screens,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, (Association for Computing Machinery, New York, NY, USA, 2017), CHI’ 17, pp. 4383–4395.

Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: Aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” ACM Trans. Graph. 35(2), 1–14 (2016).

M. Takasaki, K. Ohashi, and S. Mizuno, “Interaction of a stereoscopic 3DCG image with motion parallax displayed in mid-air,” in SIGGRAPH Asia 2018 Posters, (Association for Computing Machinery, New York, NY, USA, 2018), SA’ 18.

M. Otsubo, “Optical imaging apparatus and optical imaging method using the same,” (2014).

K. Hiratani, D. Iwai, P. Punpongsanon, and K. Sato, “Shadowless projector: Suppressing shadows in projection mapping with micro mirror array plate,” in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), (2019), pp. 1309–1310.

G. A. Koulieris, K. Akşit, M. Stengel, R. K. Mantiuk, K. Mania, and C. Richardt, “Near-eye display and tracking technologies for virtual and augmented reality,” Comput. Graph. Forum 38(2), 493–519 (2019).

M. R. Mine, J. van Baar, A. Grundhofer, D. Rose, and B. Yang, “Projection-based augmented reality in disney theme parks,” Computer 45(7), 32–40 (2012).

K. Tsuchiya, A. Sano, and N. Koizumi, “Interaction system with mid-air cg character that has own eyes,” in SIGGRAPH Asia 2018 Posters, (Association for Computing Machinery, New York, NY, USA, 2018), SA’ 18.

K. Yoshida, Y. Horiuchi, S. Inoue, Y. Makino, and H. Shinoda, “HaptoCloneAR: Mutual haptic-optic interactive system with 2D image superimpose,” in ACM SIGGRAPH 2017 Emerging Technologies, (ACM, New York, NY, USA, 2017), SIGGRAPH ’17, pp. 1–2.

Y. Makino, Y. Furuyama, and H. Shinoda, “Haptoclone (haptic-optical clone): Mid-air haptic-optical human-human interaction with perfect synchronization,” in Proceedings of the 3rd ACM Symposium on Spatial User Interaction, (ACM, New York, NY, USA, 2015), SUI’ 15, p. 139.

Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, S. Inoue, and H. Shinoda, “Haptomime: Mid-air haptic interaction with a floating virtual screen,” in Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, (Association for Computing Machinery, New York, NY, USA, 2014), UIST’ 14, pp. 663–667.

H. Yamamoto and S. Suyama, “Aerial imaging by retro-reflection (airr),” in SID Symposium Digest of Technical Papers, (2013), pp. 895–897.

H. Kim, I. Takahashi, H. Yamamoto, S. Maekawa, and T. Naemura, “MARIO: Mid-air augmented reality interaction with objects,” Entertain. Comput. 5(4), 233–241 (2014).

Y. Tokuda, A. Hiyama, M. Hirose, and H. Yamamoto, “R2d2 w/ airr: Real time & real space double-layered display with aerial imaging by retro-reflection,” in SIGGRAPH Asia 2015 Emerging Technologies, (Association for Computing Machinery, New York, NY, USA, 2015), SA ’15.

V. Geroimenko, Augmented Reality Games I: Understanding the Pokémon GO Phenomenon (Springer International Publishing, New York, NY, 2019).

Y. Matsuura and N. Koizumi, “Scoopirit: A method of scooping mid-air images on water surface,” in Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces, (Association for Computing Machinery, New York, NY, USA, 2018), ISS ’18, pp. 227–235.

Y. Maeda, D. Miyazaki, and S. Maekawa, “Aerial imaging display based on a heterogeneous imaging system consisting of roof mirror arrays,” in Proceedings of the IEEE 3rd Global Conference on Consumer Electronics (GCCE’14), (IEEE, Los Alamitos, CA, 2014), pp. 211–215.


Figures (12)

Fig. 1. Mid-air image formed by PortOn (our proposed portable mid-air imaging optical system)
Fig. 2. The structure and function of MMAPs
Fig. 3. Design of the display subsystem
Fig. 4. Design of the erasing system. (a) Erasing system for oblique polarization. (b) Erasing system for vertical and horizontal polarization.
Fig. 5. Display placement range
Fig. 6. Visibility range of the mid-air display from viewpoint position ($E_x$, $E_y$), and the viewpoint position ($E_{xM}$, $E_{yM}$) from which a mid-air image of height $h$ is displayed at a certain position $-M$
Fig. 7. LEFT: Prototype design. RIGHT: Implementation
Fig. 8. Brightness measurement conditions. (A) Latitude direction (side view). (B) Longitude direction (top view)
Fig. 9. Measurement results for the optical system. LEFT: With polarization manipulation. RIGHT: Without polarization manipulation.
Fig. 10. Comparison of measurement results. LEFT: Ratio of luminance measured with polarization manipulation to that measured without polarization manipulation. RIGHT: Luminance ratio of the mid-air image to the undesired image
Fig. 11. Displayed mid-air image and undesired image. Upper images: with polarization control; the mid-air image appears clearly, with no undesired-light image. Lower images: without polarization control; an undesired-light image appears alongside the mid-air image.
Fig. 12. Measurement of spatial resolution. (a) Measurement setup; several reflective materials were selected. (b)

Tables (2)

Table 1. Stokes vector of the display and Mueller matrices of the mirror and the polarizing plate.

Table 2. Stokes vector of the display and Mueller matrices of the quarter-wave plate and the polarizing plate.

Equations (17)

$$S'_{DO} = M_M S_{DO} = {}^t[\,1\;\;0\;\;1\;\;0\,]\tag{1}$$
$$M_{PO} S_{DO} = {}^t[\,0\;\;0\;\;0\;\;0\,]\tag{2}$$
$$M_{PO} S'_{DO} = {}^t[\,1\;\;0\;\;1\;\;0\,]\tag{3}$$
$$S'_{DV} = M_{WB} M_M M_{WF} S_{DV} = {}^t[\,1\;\;1\;\;0\;\;0\,]\tag{4}$$
$$M_{PH} S_{DV} = {}^t[\,0\;\;0\;\;0\;\;0\,]\tag{5}$$
$$M_{PH} S'_{DV} = {}^t[\,1\;\;1\;\;0\;\;0\,]\tag{6}$$
$$\tan\theta_M = \frac{H_m + h}{D_{min}}\tag{7}$$
$$D_{min} = \frac{H_m + h}{\tan\theta_M}\tag{8}$$
$$\tan\theta_m = \frac{H_m}{D_{max}}\tag{9}$$
$$D_{max} = \frac{H_m}{\tan\theta_m}\tag{10}$$
$$y = \frac{E_y}{E_x}x\tag{11}$$
$$y = \frac{E_y + H_M}{E_x}x - H_M\tag{12}$$
$$a: \left(\frac{E_x h}{E_y},\; h\right)\tag{13}$$
$$b: \left(\frac{(h + H_M)E_x}{E_y + H_M},\; h\right)\tag{14}$$
$$c: \left(\frac{H_M E_x}{E_y + H_M},\; 0\right)\tag{15}$$
$$d: \left(\frac{E_x h}{E_y},\; 0\right)\tag{16}$$
$$(E_{xM}, E_{yM}) = \left(\frac{H_M M}{h - H_M},\; \frac{h H_M}{H_M - h}\right)\tag{17}$$
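As a sanity check on the polarization equations (1)–(6) and the viewpoint formula (17), the sketch below multiplies ideal Mueller matrices against the display's Stokes vectors. All concrete matrices and sign conventions here (a display polarized at −45° or vertically, an ideal mirror diag(1, 1, −1, −1), a quarter-wave fast axis at ±45°, and the sample lengths) are our modeling assumptions for illustration, not values taken from the paper:

```python
import math

def apply(M, S):
    """Multiply a 4x4 Mueller matrix M by a Stokes vector S."""
    return [sum(m * s for m, s in zip(row, S)) for row in M]

def polarizer(theta_deg):
    """Ideal linear polarizer with its transmission axis at theta_deg."""
    c = math.cos(2 * math.radians(theta_deg))
    s = math.sin(2 * math.radians(theta_deg))
    return [[0.5, 0.5 * c, 0.5 * s, 0],
            [0.5 * c, 0.5 * c * c, 0.5 * c * s, 0],
            [0.5 * s, 0.5 * c * s, 0.5 * s * s, 0],
            [0, 0, 0, 0]]

def close(u, v, eps=1e-9):
    return all(abs(a - b) < eps for a, b in zip(u, v))

# Ideal mirror: flips the sign of S2 (45-deg component) and S3 (circular).
M_M = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
# Quarter-wave plate, fast axis at 45 deg: forward (WF) and backward (WB)
# passes differ because the coordinate handedness flips on reflection.
M_WF = [[1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0], [0, 1, 0, 0]]
M_WB = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0]]

# Obliquely (-45 deg) polarized display light: the direct path is blocked
# by a 45-deg polarizer while the mirror-reflected path passes (Eqs. 1-3).
S_DO = [1, 0, -1, 0]
S_DO_ref = apply(M_M, S_DO)
M_PO = polarizer(45)
assert close(S_DO_ref, [1, 0, 1, 0])
assert close(apply(M_PO, S_DO), [0, 0, 0, 0])
assert close(apply(M_PO, S_DO_ref), [1, 0, 1, 0])

# Vertically polarized display light: a double pass through the quarter-wave
# plate around the mirror rotates it to horizontal (Eqs. 4-6).
S_DV = [1, -1, 0, 0]
S_DV_ref = apply(M_WB, apply(M_M, apply(M_WF, S_DV)))
M_PH = polarizer(0)
assert close(S_DV_ref, [1, 1, 0, 0])
assert close(apply(M_PH, S_DV), [0, 0, 0, 0])
assert close(apply(M_PH, S_DV_ref), [1, 1, 0, 0])

# Viewpoint formula, Eq. (17), with sample lengths in metres:
H_M, h, M_pos = 0.10, 0.04, 0.15
E_xM = H_M * M_pos / (h - H_M)
E_yM = h * H_M / (H_M - h)
# The sight line through the origin (Eq. 11) hits the image top (-M, h) ...
assert abs(E_yM / E_xM * -M_pos - h) < 1e-9
# ... and the reflected sight line (Eq. 12) hits the image bottom (-M, 0).
assert abs((E_yM + H_M) / E_xM * -M_pos - H_M) < 1e-9
print("all checks pass")
```

Under these conventions the checks also illustrate why Eqs. (2) and (5) erase the undesired direct image: only the light that has traversed the mirror (and, in the vertical case, the quarter-wave plate twice) matches the output polarizer's transmission axis.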
