
IlluminatedZoom: spatially varying magnified vision using periodically zooming eyeglasses and a high-speed projector

Open Access

Abstract

Spatial zooming, or magnification of only a portion of a scene while maintaining its context, is an essential interaction technique in augmented reality (AR) systems. It has been applied in various AR applications, including surgical navigation, visual search support, and human behavior control. However, spatial zooming has so far been implemented only on video see-through displays and has not been supported by optical see-through displays; it is not trivial to achieve spatial zooming of an observed real scene using near-eye optics. This paper presents the first optical see-through spatial zooming glasses, which enable interactive control of the perceived sizes of real-world appearances in a spatially varying manner. The key to our technique is the combination of periodically fast zooming eyeglasses and a synchronized high-speed projector. We stack two electrically focus-tunable lenses (ETLs) for each eyeglass and sweep their focal lengths to modulate the magnification periodically from one (unmagnified) to a higher value (magnified) at 60 Hz, in a manner that prevents a user from perceiving the modulation. We use a 1,000 fps high-speed projector to provide high-resolution spatial illumination of the real scene around the user. A portion of the scene that is to appear magnified is illuminated by the projector when the magnification is greater than one, while the other part is illuminated when the magnification is equal to one. Through experiments, we demonstrate spatial zooming results of up to 30% magnification using a prototype system. Our technique has the potential to expand the application field of spatial zooming interaction in optical see-through AR.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Magnification and zooming are characteristics of an imaging system that control the size of a captured scene while keeping the scene in focus. We can observe the details of a specific area of the scene by adjusting the focal length of a zoom lens to narrow the field of view (FOV, also known as angle of view), thereby magnifying the area on the imaging sensor. Inspired by optical zooming, researchers in human-computer interaction have developed various digital zooming interfaces, such as zoomable user interfaces [1–5], visual guidance [6,7], navigation [8,9], and focus-and-context visualization [10–13]. These systems magnify only a portion of the displayed information while maintaining the size of the other portion. Such spatial zoom control allows a user to understand both the details of the zoomed area and its context simultaneously.

Spatial zooming is also recognized as an essential interaction technique in augmented reality (AR) systems, including virtual object manipulation at a distance [14–16], surgical navigation [17], visual search support [18,19], and human behavior control [20–23]. Although there are multiple AR display frameworks, including video see-through (VST), optical see-through (OST), and projection, spatial zooming AR systems have so far been implemented only on VST displays. In VST-AR systems, a user sees a real scene on a display panel, captured as a digital image by a camera. Consequently, spatial zoom control of the real scene is easily achieved through simple digital image processing, i.e., scaling a portion of the captured image. However, current VST displays still suffer from technical limitations in presenting real-world appearances, such as perceivable delay, vergence-accommodation (VA) mismatch [24], low color fidelity, and low dynamic range. In addition, the quality of communication among users is significantly degraded because VST displays completely cover users' eyes. In contrast, these drawbacks do not arise in OST displays because a user observes a real scene directly when using these devices. Consequently, OST displays have the potential to provide better and more natural experiences than VST displays for users of spatial zooming AR systems. However, the flexibility of augmentation is limited in OST displays: it is not trivial to control the magnification factors of an observed real scene interactively in a spatially varying manner using the near-eye optics of OST displays. Several studies have applied zooming optics to OST displays in medical [25–28] and astronomical [29] applications, but the magnification factor is spatially uniform in such systems. To the best of our knowledge, spatial zoom control has never been achieved in OST-AR systems.

In this paper, we propose an OST display, IlluminatedZoom, that enables interactive control of the perceived size of real-world appearances in a spatially varying manner (Fig. 1). The key to our technique is the combination of periodically zooming eyeglasses and a synchronized high-speed projector. We stack two electrically focus-tunable lenses (ETLs) for each eyeglass and sweep their focal lengths to modulate the magnification factor periodically from $\times$1 to a higher one at 60 Hz to prevent a user from perceiving the modulation (60 Hz is higher than the critical flicker fusion frequency). We use a high-speed projector to provide high-resolution spatial illumination of the real scene around the user. An area of the scene that is to appear zoomed is illuminated by the projector when the magnification factor is greater than $\times$1, while the other area is illuminated when the magnification factor is equal to $\times$1 (Fig. 2). We present the computational model behind our technique for determining an appropriate optical power for each ETL that achieves a desired magnification factor while keeping the real-world appearance focused. In this paper, we explain how to implement the system and validate our model, as well as show spatial zooming results from experiments.

Fig. 1. Concept of IlluminatedZoom. The dashed red circle indicates a magnified part of the painting. Note that the red circle is added for a better understanding of the concept and is not visible in the actual system.

Fig. 2. Principle of IlluminatedZoom. (a) In a normal condition, two objects are illuminated by environment light. (b) In our proposed system, a user sees the objects through stacked ETLs. The ETLs are periodically modulated so that the magnification and equal magnification states are switched at 60 Hz. A high-speed projector illuminates the cylinder object in the magnification state and the cube object in the equal magnification state. As a result, the user perceives that only the cylinder object is magnified. Note that the proposed system works in a dark environment.

To summarize, the principal contributions of this paper are as follows.

  • We achieve spatial zoom control in OST-AR through a novel optical design using stacked ETLs and a high-speed projector as periodically zooming eyeglasses and a synchronized illuminator, respectively.
  • We derive a computational model to compute appropriate optical powers of the stacked ETLs to switch magnified and unmagnified states periodically at 60 Hz.
  • We verify the concept of spatial zoom control experimentally.

2. Related work

As discussed in Section 1, interactive spatial zooming is an important and widely used graphical user interface component in PC, smartphone, and AR applications. However, the AR display framework supporting spatial zooming has been limited to VST; there is no simple optical solution for achieving spatial zoom control in OST-AR systems. For the remediation of poor vision, previous studies have proposed spectacle-mounted telescopic systems [30,31], in which the magnified image is seen through the telescope simultaneously with the unmagnified image. Although these systems can provide spatial zooming of a real scene, a user must readjust the setup manually to change the magnification factor and the placement of the magnified image in the FOV.

In this paper, we relax this physical constraint by jointly applying computational optics and illumination approaches while considering the temporal perceptual properties of the human visual system. In particular, we combine fast periodically zooming lenses and a high-speed projector. A user wears the zoom lenses as eyeglasses, which switch between a magnified and an unmagnified state at a frequency of 60 Hz. The projector provides spatial illumination: the portion of a real scene that is to be magnified is illuminated only when the lens is in the magnified state, and the other portion is illuminated only when the lens is in the unmagnified state. Human vision perceives the integral of the brightness over a lens modulation period (Talbot-Plateau law [32]). Consequently, the user sees one portion of the scene magnified and the other portion unmagnified without perceiving any flicker (Fig. 2). The magnified area can be changed interactively without manual readjustment, because the zoom lenses and the projector are controlled by a computer.

AR and virtual reality (VR) researchers have developed multifocal and varifocal head-mounted displays (HMDs) using ETLs to resolve the VA mismatch of AR and VR displays [33–38]. These studies use a fast focal sweep technique to achieve rapid modulation of the focusing distance of the eyepiece, and the display panel switches the displayed images of different distances in synchronization with the focal length. In contrast to these studies, we focus on controlling the light field emitted from a real scene rather than from display panels. The work most closely related to ours attempted spatial focal control of real appearances seen through ETLs [39]. To the best of our knowledge, there has been no work until now on spatial zoom control of a real scene's appearance.

We use two stacked ETLs as the zoom lens of each eyeglass. There are two major categories of zoom lenses, varifocal and parfocal [40]. A varifocal lens has a variable focal length in which the focus changes as the magnification changes, while a parfocal zoom lens remains in focus as the magnification changes. In this paper, the term "zoom lens" refers to the parfocal one. An ordinary zoom lens is composed of multiple lenses with different focal lengths, and changing the magnification factor while fixing the focusing distance requires physically adjusting the distance between the lenses. This requires a complicated driving mechanism that prevents zoom lenses from being sufficiently compact to be worn comfortably as eyeglasses. Several studies have used multiple ETLs to develop zoom lens systems that do not require physical adjustment of the lens distance [41–44]. We use a zoom lens of stacked ETLs as our eyeglasses because we found that such devices are wearable. A major contribution of this paper is a "dual focal sweep" technique, a periodic modulation of the focal lengths of the two ETLs that switches the magnification factor between two states at 60 Hz.

3. Spatial zoom control of real-world appearance

We achieve spatial zoom control of a real-world appearance using a fast dual focal sweep of stacked ETLs worn by an observer and synchronized high-speed illumination. Focal sweep is an optical technique that modulates the optical power of a lens periodically such that every portion of an observed real scene is in focus once in each sweep. Dual focal sweep is an extension of this technique to stacked ETLs, through which two magnification states (magnified and unmagnified) are switched in each sweep for every portion of the observed real scene. As described in the previous sections, the scene appearance can be switched between magnified and unmagnified by changing the illumination timings. The core of our technique is the computation of the focal sweep ranges of the ETLs and the phase shift between the sweep signals that ensure the magnified and unmagnified states are switched at 60 Hz.

In the remainder of this section, we model the image formation of real-world appearances in a user's eye with the stacked ETLs as the mathematical basis of our technique. Then, we describe how to design the dual focal sweep to achieve the desired periodic magnification modulation. We also discuss a method for alleviating visible seams between magnified and unmagnified areas. We apply polymer-based liquid lenses as the ETLs, because this type of ETL achieves a faster focal change than other types while maintaining a relatively large aperture size. We use a high-speed projector as the illuminator, which enables zoom control on a per-pixel basis and thus at a high spatial resolution.

3.1 Prerequisite: ray transfer matrix

This research focuses on lateral magnification on a user’s retina. The human eye consists of several refracting bodies such as the cornea, aqueous humor, vitreous humor, and crystalline lens. We consider these together as a single lens and the retina as an image plane without loss of generality [45]. We compute the lateral magnification based on ray transfer matrix (RTM) analysis (Fig. 3(left)). RTM analysis is a mathematical tool used to perform ray tracing calculations under paraxial approximation. The calculation requires that all ray directions be at small angles $u$ relative to the optical axis of a system such that the approximation of $\sin u\simeq u$ remains valid. An RTM is represented as follows:

$$\begin{bmatrix} x'\\ u' \end{bmatrix} =M \begin{bmatrix} x\\ u \end{bmatrix} = \begin{bmatrix} A & B\\ C & D \end{bmatrix} \begin{bmatrix} x\\ u \end{bmatrix} ,$$
where $M$ is an RTM. A light ray enters an optical component of the system, crossing its input plane at a distance $x$ from the optical axis, and travels in a direction that makes an angle $u$ with the optical axis. After propagation to the output plane, the ray is found at a distance $x'$ from the optical axis and at an angle $u'$ with respect to it.

Fig. 3. RTM analysis. (left) Two points are conjugate to each other for a given RTM. (middle) A ray passes through space. (right) A ray passes through a thin lens.

If there is free space between two optical components (see Fig. 3(middle)), the RTM is as follows:

$$T(d)= \begin{bmatrix} 1 & d\\ 0 & 1 \end{bmatrix} ,$$
where $d$ is the distance along the optical axis between the two components. Another simple example is that of a thin lens whose RTM is
$$R(P)= \begin{bmatrix} 1 & 0\\ -P & 1 \end{bmatrix} ,$$
where $P$ is the optical power (inverse of focal length) of the lens (Fig. 3(right)). An important property of an RTM for the following explanation is that its determinant is equal to one.
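As an illustration of how these building blocks compose, the following short Python sketch (ours, not part of the original paper; the numerical values are arbitrary examples in millimeters and mm$^{-1}$) implements $T(d)$ and $R(P)$ and checks the unit-determinant property used below.

```python
import numpy as np

def T(d):
    """Free-space propagation over a distance d (Eq. (2))."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def R(P):
    """Thin lens of optical power P = 1/f (Eq. (3))."""
    return np.array([[1.0, 0.0],
                     [-P,  1.0]])

# Any product of these matrices also has unit determinant, the property
# used to obtain Eq. (8) below.
M = T(100.0) @ R(0.02) @ T(50.0)   # example values: d in mm, P in 1/mm
assert np.isclose(np.linalg.det(M), 1.0)
```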

3.2 Computing optical powers for a desired magnification

A user of our system wears dual ETLs as an eyeglass for each eye such that the eye and the ETLs share the same optical axis. We consider a light ray emitted from or reflected on an object point that travels through the ETLs and the eye and finally hits the retina. Suppose that the distance of the ray on the object from the optical axis and its angle are $x_o$ and $u_o$, respectively. In addition, suppose those on the retina are denoted as $x_r$ and $u_r$, respectively. Then, the RTM of our system is as follows:

$$\begin{bmatrix} x_r\\ u_r \end{bmatrix} =M \begin{bmatrix} x_o\\ u_o \end{bmatrix} = \begin{bmatrix} A & B\\ C & D \end{bmatrix} \begin{bmatrix} x_o\\ u_o \end{bmatrix} ,$$
$$M=T(d_{er})R(P_{e})T(d_{Ee})R(P_{eyeE})T(d_{EE})R(P_{objE})T(d_{oE}),$$
where $d_{er}$, $P_{e}$, $d_{Ee}$, $P_{eyeE}$, $d_{EE}$, $P_{objE}$ and $d_{oE}$ are the distance between the eye lens and the retina, the optical power of the eye lens, the distance between the eyepiece ETL and the eye, the optical power of the eyepiece ETL, the distance between the two ETLs, the optical power of the objective ETL, and the distance between the object and the objective ETL, respectively (Fig. 4).

Fig. 4. Parameters in the RTM analysis of the proposed system.

Expanding Eq. (4) for $x_r$ yields

$$x_r=Ax_o+Bu_o.$$

Because we assume a parfocal zoom lens, the light from the object point converges on the retina. Consequently, $x_r$ is independent of the angle of the ray $u_o$ at the object point, which leads to $B=0$, and thus,

$$x_r=Ax_o.$$

Then, because the determinant of an RTM is equal to one (as noted in Sec. 3.1),

$$AD=1.$$

Expanding Eq. (5) for $A$ and $D$ in Eq. (4) yields

$$\begin{aligned} A=&1-d_{er}P_e-P_{eyeE}(d_{Ee}(1-d_{er}P_e)+d_{er}) \\ &-P_{objE}(d_{EE}(1-d_{er}P_e-P_{eyeE}(d_{Ee}(1-d_{er}P_e)+d_{er})) \\ &+d_{Ee}(1-d_{er}P_e)+d_{er}), \end{aligned}$$
$$\begin{aligned} D=&d_{oE}({-}P_e-P_{eyeE}(1-d_{Ee}P_e) \\ &-P_{objE}(d_{EE}({-}P_e-P_{eyeE}(1-d_{Ee}P_e))+1-d_{Ee}P_e)) \\ &+d_{EE}({-}P_e-P_{eyeE}(1-d_{Ee}P_e))+1-d_{Ee}P_e. \end{aligned}$$

The magnification factor $s$ of the system is the ratio of $x_r$ to $x_{r0}$, where $x_{r0}$ is the distance of the light ray from the optical axis on the retina when the optical powers of the ETLs are both zero (i.e., $P_{eyeE}=0$ and $P_{objE}=0$). Suppose $A_0$ is the special case of $A$ with

$$P_{eyeE}=0\textrm{ and }P_{objE}=0.$$

Substituting Eq. (11) into Eq. (9) gives

$$A_0=1-d_{er}P_e.$$

Then, from Eq. (7) and (12), we obtain the magnification factor $s$ as

$$s = \frac{x_r}{x_{r0}} = \frac{A}{A_0} = \frac{A}{1-d_{er}P_e}.$$

Thus,

$$A=s(1-d_{er}P_e).$$

Substituting Eq. (14) into Eq. (8) yields

$$D=\frac{1}{s(1-d_{er}P_e)}.$$

By solving Eq. (9), (10), (14), and (15) for $P_{eyeE}$ and $P_{objE}$, we obtain

$$P_{eyeE}(s)=\frac{d_{EE}+d_{Ee}+d_{er}-(d_{EE}+d_{Ee})d_{er}P_{e}+sd_{oE}(1-d_{er}P_{e})}{d_{EE}(d_{Ee}+d_{er}-d_{Ee}d_{er}P_{e})},$$
$$P_{objE}(s)=\frac{d_{Ee}+d_{er}-d_{Ee}d_{er}P_{e}}{sd_{oE}d_{EE}(1-d_{er}P_{e})}+\frac{d_{oE}+d_{EE}}{d_{oE}d_{EE}},$$
where $P_{eyeE}(s)$ and $P_{objE}(s)$ are the optical powers of the respective ETLs that achieve a desired magnification factor $s$.
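To make these expressions concrete, the following Python sketch (ours, not from the paper) evaluates Eqs. (16) and (17) for an example magnification factor and verifies numerically, by composing the system matrix of Eq. (5), that the resulting powers satisfy both design conditions, $B=0$ and $A=sA_0$. The distances and eye power are the prototype values reported later in Sec. 4.2, expressed in millimeters, so the optical powers are in mm$^{-1}$.

```python
import numpy as np

def T(d):                                   # free space, Eq. (2)
    return np.array([[1.0, d], [0.0, 1.0]])

def R(P):                                   # thin lens, Eq. (3)
    return np.array([[1.0, 0.0], [-P, 1.0]])

def etl_powers(s, d_er, P_e, d_Ee, d_EE, d_oE):
    """Optical powers of the eyepiece and objective ETLs, Eqs. (16) and (17)."""
    b = d_Ee + d_er - d_Ee * d_er * P_e
    P_eyeE = (d_EE + d_Ee + d_er - (d_EE + d_Ee) * d_er * P_e
              + s * d_oE * (1.0 - d_er * P_e)) / (d_EE * b)
    P_objE = (b / (s * d_oE * d_EE * (1.0 - d_er * P_e))
              + (d_oE + d_EE) / (d_oE * d_EE))
    return P_eyeE, P_objE

# Example parameters in millimetres (prototype values from Sec. 4.2);
# optical powers are therefore in 1/mm.
d_er, P_e, d_Ee, d_EE, d_oE = 25.3, 1.0 / 24.0, 60.0, 14.0, 393.1
s = 1.2                                     # desired magnification factor
P_eyeE, P_objE = etl_powers(s, d_er, P_e, d_Ee, d_EE, d_oE)

# Compose the system matrix of Eq. (5) and check the two design conditions.
M = T(d_er) @ R(P_e) @ T(d_Ee) @ R(P_eyeE) @ T(d_EE) @ R(P_objE) @ T(d_oE)
A, B = M[0, 0], M[0, 1]
A0 = 1.0 - d_er * P_e                       # Eq. (12): both ETL powers zero
assert np.isclose(B, 0.0, atol=1e-6)        # parfocal: image stays focused
assert np.isclose(A / A0, s)                # Eq. (13): desired magnification
```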

3.3 Dual focal sweep design

We develop a dual focal sweep technique that modulates the optical powers of the stacked ETLs independently to switch the magnification factor of the zoom lens periodically between two states, magnification ($s_1>1$) and equal magnification ($s_2=1$), at 60 Hz. According to the principle of Galilean telescopes, the optical power of the eyepiece ETL must be negative and that of the objective ETL must be positive to achieve the magnification, that is,

$$P_{eyeE}(s_1)<0 \textrm{ and } P_{objE}(s_1)>0.$$

The equal magnification is achieved by setting the optical powers of the ETLs as zero, that is,

$$P_{eyeE}(s_2)=P_{objE}(s_2)=0.$$

Our technique uses a 60 Hz sinusoidal wave as the periodic drive signal of each ETL. In each period, the optical power of the ETL becomes maximal and minimal at phases of $\frac {1}{2}\pi +\delta \phi$ and $\frac {3}{2}\pi +\delta \phi$, respectively, where $\delta \phi$ is a phase offset caused by the delay of the ETL response relative to the drive signal. Because the time derivative of the drive signal is zero at the phases of $\frac {1}{2}\pi$ and $\frac {3}{2}\pi$, the optical power of the ETL is most stable at the phases of $\frac {1}{2}\pi +\delta \phi$ and $\frac {3}{2}\pi +\delta \phi$. Considering that a target scene needs to be illuminated for a certain period of time to avoid an overly dark appearance, it is reasonable to illuminate the scene at these phases, where the optical powers do not change significantly during the illumination period. Suppose the phase of the drive signal to the eyepiece ETL is $\phi _{eyeE}$ and that to the objective ETL is $\phi _{objE}$. To show the scene with the magnification of $s_1$, we illuminate the scene when $\phi _{eyeE}=\frac {3}{2}\pi +\delta \phi$ and $\phi _{objE}=\frac {1}{2}\pi +\delta \phi$, where $P_{eyeE}$ and $P_{objE}$ become minimal ($=P_{eyeE}(s_1)$) and maximal ($=P_{objE}(s_1)$) in their respective modulation periods. In the same manner, to show the scene with equal magnification (i.e., $s_2$), we illuminate the scene when $\phi _{eyeE}=\frac {1}{2}\pi +\delta \phi$ and $\phi _{objE}=\frac {3}{2}\pi +\delta \phi$, where $P_{eyeE}$ and $P_{objE}$ become maximal ($=P_{eyeE}(s_2)$) and minimal ($=P_{objE}(s_2)$), respectively. Therefore, we shift the phases of the drive signals to the eyepiece ETL and the objective ETL by $\pi$ from each other. The timing chart of our dual focal sweep technique is shown in Fig. 5.
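The sketch below (a schematic illustration with a hypothetical DAC sampling rate and unit-amplitude placeholder signals, not the actual driver code) generates one period of the two $\pi$-shifted 60 Hz drive signals and locates the stable phases at which the scene is illuminated.

```python
import numpy as np

f = 60.0                                   # sweep frequency [Hz]
fs = 100_000.0                             # hypothetical DAC sample rate [Hz]
t = np.arange(0.0, 1.0 / f, 1.0 / fs)      # one modulation period
phase = 2.0 * np.pi * f * t

# Placeholder unit-amplitude drive signals; the actual offsets and amplitudes
# come from the calibration in Sec. 3.4. The two signals are pi out of phase,
# so one ETL is at its maximum power while the other is at its minimum (Fig. 5).
v_eyepiece  = np.sin(phase)
v_objective = np.sin(phase + np.pi)

# The drive signals are extremal (zero time derivative) at phases pi/2 and
# 3*pi/2; the optical powers follow delta_phi later because of the lens delay.
i_max_eye = int(np.argmax(v_eyepiece))     # phase ~ pi/2
i_min_eye = int(np.argmin(v_eyepiece))     # phase ~ 3*pi/2
assert np.isclose(phase[i_min_eye] - phase[i_max_eye], np.pi,
                  atol=2.0 * np.pi * f / fs)   # within one sample
```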

Fig. 5. Timing chart of our dual focal sweep technique.

3.4 Calibration

As discussed above, our drive signal is a 60 Hz sinusoidal wave. We need to determine the offset and amplitude (i.e., the maximum and minimum values) of drive signals for the ETLs to achieve desired magnifications of $s_1$ and $s_2$ in each modulation period. Specifically, the drive signal for the eyepiece ETL needs to cause optical powers of $P_{eyeE}(s_1)$ and $P_{eyeE}(s_2)$ at the phases of $\frac {3}{2}\pi +\delta \phi$ and $\frac {1}{2}\pi +\delta \phi$, respectively. In the same manner, the drive signal for the objective ETL needs to cause optical powers of $P_{objE}(s_1)$ and $P_{objE}(s_2)$ at the phases of $\frac {1}{2}\pi +\delta \phi$ and $\frac {3}{2}\pi +\delta \phi$, respectively. However, we found in a preliminary study that the relationship between the optical powers at the above-mentioned phases and the maximum and minimum values of the drive signal could not be simply modeled due to the complex physical properties of the liquid lens. Consequently, we decided to use a data-driven rather than an analytical approach to determine the maximum and minimum values of the drive signal to reproduce desired optical powers.

Specifically, we input sinusoidal waves of different combinations of maximum and minimum voltage values, denoted as $V^{M}$ and $V^{m}$, respectively. For each input, we measure the output optical powers at the phases of $\frac {1}{2}\pi +\delta \phi$ and $\frac {3}{2}\pi +\delta \phi$ in a modulation period, which are denoted as $P_E^{M}$ and $P_E^{m}$, respectively. The measured optical powers are stored in relation with $V^{M}$ and $V^{m}$. We then fit a polynomial in two variables of $V^{M}$ and $V^{m}$ to the stored data of $P_E^{M}$. We also fit another polynomial to $P_E^{m}$. We denote the fitted polynomials as $P_E^{M}(V^{M},V^{m})$ and $P_E^{m}(V^{M},V^{m})$, respectively.

Based on the calibration result, we compute the maximum and minimum input voltage values of the drive signals for the ETLs in the following three steps. First, we determine a desired magnification factor $s_1$. Note that $s_2$ is always 1 (i.e., equal magnification). Second, using Eq. (16) and (17), we compute the optical powers of the ETLs (i.e., $P_{eyeE}(s_1),P_{eyeE}(s_2),P_{objE}(s_1),P_{objE}(s_2)$) that achieve the desired magnifications. Finally, the maximum and minimum input voltage values for the eyepiece ETL (denoted as $V^{M}_{eyeE}$ and $V^{m}_{eyeE}$) are computed by solving the following nonlinear simultaneous equations using Newton's method:

$$\begin{aligned}P_{eyeE}(s_1)&=P_E^{m}(V^{M}_{eyeE},V^{m}_{eyeE}), \\ P_{eyeE}(s_2)&=P_E^{M}(V^{M}_{eyeE},V^{m}_{eyeE}). \end{aligned}$$

In the same manner, the maximum and minimum input voltage values for the objective ETL (denoted as $V^{M}_{objE}$ and $V^{m}_{objE}$) are computed by solving the following equations:

$$\begin{aligned}P_{objE}(s_1)&=P_E^{M}(V^{M}_{objE},V^{m}_{objE}), \\ P_{objE}(s_2)&=P_E^{m}(V^{M}_{objE},V^{m}_{objE}). \end{aligned}$$
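A minimal sketch of this calibration procedure is given below. It uses synthetic stand-in measurements (the linear-plus-cross-term functions and the target powers are hypothetical), a low-degree polynomial fit instead of the seventh-degree fit used in our system, and SciPy's fsolve root finder in place of a hand-written Newton iteration.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.optimize import fsolve

# Hypothetical stand-ins for the measured optical powers (diopters) at the
# two stable phases, as smooth functions of the drive-signal extremes (volts).
def measured_P_max(VM, Vm):
    return 120.0 * VM + 20.0 * Vm + 3.0 * VM * Vm
def measured_P_min(VM, Vm):
    return 20.0 * VM + 120.0 * Vm - 3.0 * VM * Vm

VM_grid, Vm_grid = np.meshgrid(np.linspace(-0.07, 0.07, 15),
                               np.linspace(-0.07, 0.07, 15))

def fit_poly2d(VM, Vm, values, deg=3):
    """Least-squares fit of a bivariate polynomial values(VM, Vm)."""
    vander = P.polyvander2d(VM.ravel(), Vm.ravel(), [deg, deg])
    coeffs, *_ = np.linalg.lstsq(vander, values.ravel(), rcond=None)
    return coeffs.reshape(deg + 1, deg + 1)

C_max = fit_poly2d(VM_grid, Vm_grid, measured_P_max(VM_grid, Vm_grid))
C_min = fit_poly2d(VM_grid, Vm_grid, measured_P_min(VM_grid, Vm_grid))

# Target powers of the eyepiece ETL for the two states (example values, D).
P_eyeE_s1, P_eyeE_s2 = -6.0, 0.0            # magnified / unmagnified

def residual(v):                            # Eq. (20)
    VM, Vm = v
    return [P.polyval2d(VM, Vm, C_min) - P_eyeE_s1,
            P.polyval2d(VM, Vm, C_max) - P_eyeE_s2]

VM_eyeE, Vm_eyeE = fsolve(residual, x0=[0.01, -0.01])
print(VM_eyeE, Vm_eyeE)
```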

3.5 Alleviating visible seams between magnified and unmagnified areas

When a magnified area is located closer to the optical center of the ETLs than an unmagnified area, these areas overlap and the overlapped area appears as a bright seam. On the other hand, when the two areas are reversed, a gap occurs between these areas and appears as a dark seam. These seams become salient when the intensity patterns of projected illuminations are binary and consistent with the magnified and unmagnified areas (Fig. 6(a)). To alleviate these undesirable artifacts, we apply a feathering technique to compensate for the bright and dark seams as follows (Fig. 6(b)).

Fig. 6. Alleviating visible seams. (a) Visible seams are caused when binary intensity patterns of projected illuminations are spatially consistent with the magnified and unmagnified areas. (b) The seams can be alleviated by our feathering technique. Note that $o$ represents the optical center of the ETLs.

First, we determine the illumination pattern for a magnification of $s_1$ so that the intensity linearly increases from 0 (black) to 1 (white) at boundaries between the magnified and unmagnified areas. Then, we compute the appearance of this illumination result of an observer wearing the ETLs by magnifying the unmagnified appearance by $s_1$ times. Finally, we determine the illumination pattern for equal magnification (i.e., $s_2$) so that the sum of the apparent intensities of the magnification and equal-magnification illuminations is spatially uniform.
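The following 1D sketch (simplified geometry with hypothetical coordinates, ramp width, and magnification, not our projector code) illustrates the feathering: the magnification-state pattern ramps linearly at the boundary, its apparent (magnified) intensity is simulated by rescaling coordinates about the optical center $o$, and the equal-magnification pattern is its complement so that the perceived sum is spatially uniform.

```python
import numpy as np

o = 0.0                      # optical center of the ETLs (scene coordinate)
s1 = 1.3                     # magnification of the zoomed state
x = np.linspace(-100.0, 100.0, 2001)   # hypothetical scene coordinate [mm]

# Magnification-state pattern: white inside |x| < 30, linear ramp of width 10.
def ramp(x, inner=30.0, width=10.0):
    return np.clip((inner + width - np.abs(x)) / width, 0.0, 1.0)

I_mag = ramp(x)

# Apparent (perceived) intensity of the magnified illumination: the pattern is
# seen scaled by s1 about o, so sample it at the demagnified coordinate.
I_mag_apparent = np.interp((x - o) / s1 + o, x, I_mag)

# Equal-magnification pattern: the complement, so the apparent sum is uniform.
I_unmag = np.clip(1.0 - I_mag_apparent, 0.0, 1.0)
assert np.allclose(I_mag_apparent + I_unmag, 1.0)
```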

4. Experiment

We built a prototype on which we conducted experiments to validate our model, calibrate the system, and verify the spatial zooming capability of our technique.

4.1 Experimental setup

We constructed a prototype system consisting of a pair of stacked ETLs and a synchronized high-speed projector (Fig. 7). As mentioned in Sec. 3, we used polymer-based liquid lenses as the ETLs. Specifically, we stacked a pair of lightweight ETLs (Optotune AG, EL-16-40-TC, aperture: 16 mm, weight: 40 g). The optical power of the ETLs was controlled from $-10$ D to $10$ D by changing the electric current. A digital signal generated by a workstation (CPU: Intel Xeon E3-1225 v5@3.30GHz, RAM: 32 GB) was input to a D/A converter (National Instruments, USB-6343) and converted to an analog voltage. This voltage was then converted to two electric currents, one for the eyepiece ETL and one for the objective ETL, by two custom amplifier circuits using an op-amp (LM675T). Finally, the currents were fed to the ETLs. According to the ETL's data sheet, the input analog voltages in our system were within the range between $-0.07$ V and $0.07$ V. The delay of the ETL response from a drive signal was 7 ms, and thus $\delta \phi =\frac {0.007}{1/60}\cdot 2\pi$ for a 60 Hz drive signal.

Fig. 7. Experimental setup.

We used a consumer-grade high-speed projector (Inrevium, TB-UK-DYNAFLASH, 1024$\times$768 pixels, 330 ANSI lumen) that can project 8-bit grayscale images at 1,000 frames per second. As described in Sec. 3.3 and shown in Fig. 5, the optical power of an ETL stays at a target value for a much shorter period than 1/60 s. Therefore, the scene must be illuminated for as short a period as possible. We chose the 1,000 fps projector because it is the fastest off-the-shelf projector that we can interactively control to project 8-bit images. Projection images were generated by the workstation and sent to the projector via a PCI Express interface. The display timing of each projection image was adjusted by a 5 V trigger signal from the workstation via the D/A converter. To synchronize the ETLs and the high-speed projector, we used a photodiode to measure the delay of the high-speed projector from a trigger signal of the workstation to the actual projection. We found that this delay was 0.46 ms. Denoting the delay as $\delta p$ ($=\frac {0.00046}{1/60}\cdot 2\pi$), we send a trigger signal to the projector at the phases of $\frac {1}{2}\pi +\delta \phi -\delta p$ and $\frac {3}{2}\pi +\delta \phi -\delta p$. The following experiments were conducted in a dark environment.
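As a worked example of the synchronization, the short script below (ours, using the delays measured above) converts the ETL response delay and the projector delay into trigger phases and the corresponding times within one 60 Hz sweep period.

```python
import numpy as np

f = 60.0                                       # sweep frequency [Hz]
period = 1.0 / f                               # about 16.7 ms
delta_phi = (0.007   / period) * 2.0 * np.pi   # 7 ms ETL response delay
delta_p   = (0.00046 / period) * 2.0 * np.pi   # 0.46 ms projector delay

# Trigger phases on the eyepiece drive signal: 3/2*pi for the magnified state
# and 1/2*pi for the unmagnified state, shifted by the two measured delays.
for state, base in (("magnified", 1.5 * np.pi), ("unmagnified", 0.5 * np.pi)):
    trig_phase = (base + delta_phi - delta_p) % (2.0 * np.pi)
    trig_time = trig_phase / (2.0 * np.pi) * period
    print(f"{state:11s}: trigger at {1e3 * trig_time:.2f} ms into the period")
```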

4.2 Model validation

We validated our computational model (i.e., Eq. (16) and (17)) experimentally. We mounted our stacked ETLs on a camera (Sony $\alpha$7S II) to capture the appearance through the lenses (Fig. 8(a)). We used a fixed focal length lens (Sony SEL24F14GM) as a virtual eye lens. The values of $d_{er}$, $P_{e}$, $d_{Ee}$, $d_{EE}$, and $d_{oE}$ were 25.3 mm, $\frac {1}{24}$ mm$^{-1}$ (the power of the 24 mm focal length lens), 60.0 mm, 14.0 mm, and 393.1 mm, respectively. These values were either measured or derived from the specification sheets of the devices.

Fig. 8. Model validation experiment. (a) Experimental setup. (b) Pairs of optical powers achieving parfocal zooming. Those by which the captured stripe patterns were the sharpest are shown as the white circles. Those computed from Eq. (16) and (17) are shown as a red line. (c) Magnification factors with optical powers computed from Eq. (16) and (17). The plus symbols indicate magnification factors identical to the targets. Measured magnification factors are shown as white circles. A line fitted to the measured values is shown as the red line.

First, we verified whether our model can compute an appropriate pair of optical powers for which the resultant appearance does not become blurred, i.e., the stacked ETLs act as a parfocal zoom lens. We captured a printed stripe pattern (black and white) with different pairs of optical powers. Specifically, we prepared 2,601 pairs by combining 51 optical powers of $P_{objE}$ (from 0 D to 10 D at 0.2 D intervals) and 51 of $P_{eyeE}$ (from $-10$ D to 0 D at 0.2 D intervals). For each optical power of $P_{objE}$, we then searched for the optical power of $P_{eyeE}$ for which the captured stripe pattern was sharpest. The sharpness of the image was evaluated using a blur metric [46], which measures the blurriness based on the difference between the original input image and its blurred version; the larger the difference, the lower the blurriness of the original image. Fig. 8(b) shows the sharpest pairs of $P_{eyeE}$ and $P_{objE}$ together with the pairs computed by solving Eq. (16) and (17). From this result, we confirmed that our model can accurately compute a pair of optical powers for which the target scene is in focus.

Next, we verified whether the optical powers computed by our model reproduce a desired magnification factor. We used the same setup and captured a dot pattern using pairs of computed optical powers. We computed the magnification factor by dividing the average distance between adjacent dots by that without magnification. Fig. 8(c) shows the result. We found that the measured magnification factors were not identical to the targets. The difference might be caused by the movement of the principal points of the ETLs. Our model assumes fixed distances between the optical components (i.e., $d_{oE}$, $d_{EE}$, and $d_{Ee}$). However, because a liquid lens-based ETL deforms its shape, its principal point moves along the optical axis when the optical power is changed [47]; consequently, the distance values are no longer fixed. We found that the measured magnification factors $\tilde {s}$ can be predicted by a linear function of the corresponding target magnification factors $s$: linear regression gives $\tilde {s}=1.44s-0.44$ with $R^{2}=0.996$ (Fig. 8(c)). Consequently, once we decide on a desired magnification factor, we invert this relation to compute the input magnification factor for Eq. (16) and (17). Through this additional process, we can accurately compute the optical powers $P_{eyeE}$ and $P_{objE}$ that reproduce the desired magnification factor.
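For example, inverting the fitted relation is a one-line computation (the function below is illustrative, not part of our implementation):

```python
# Invert the fitted relation s_tilde = 1.44 * s - 0.44 to find the
# magnification factor fed to Eqs. (16) and (17) for a desired perceived
# magnification s_tilde.
def input_magnification(s_desired, a=1.44, b=-0.44):
    """Return s such that the measured magnification a*s + b equals s_desired."""
    return (s_desired - b) / a

# A desired perceived magnification of 1.3 corresponds to driving the model
# with s = (1.3 + 0.44) / 1.44, i.e. about 1.21.
print(input_magnification(1.3))
```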

4.3 Calibration

We calibrated the stacked ETLs using the method described in Sec. 3.4. We prepared 71 input voltage values (from $-0.07$ V to 0.07 V at 0.002 V intervals) and used every combination with repetition of these values (2,556 in total) as the maximum and minimum values of the input sinusoidal waves ($V_M$ and $V_m$, respectively). The optical powers were measured using a photodiode and a laser emitter (see our previous paper [48] for more details).

Fig. 9 plots the measured maximum and minimum optical powers at the phases of $\frac {1}{2}\pi +\delta \phi$ and $\frac {3}{2}\pi +\delta \phi$ in a modulation period, respectively, for each pair of $V_M$ and $V_m$ of the corresponding input wave. From this result, we found that the optical powers changed smoothly in both the $V_M$ and $V_m$ directions. We used polynomial least squares to fit the maximum and minimum optical powers and obtain $P^{M}_E(V^{M}, V^{m})$ and $P^{m}_E(V^{M}, V^{m})$ of Sec. 3.4, respectively. Fitting polynomials from the first to the twelfth degree, we found that a seventh-degree polynomial approximated the data with the smallest residuals. The coefficients of the polynomial function are shown in Visualization 1.

Fig. 9. Measured optical powers with input sinusoidal waves of various minimum ($V_m$) and maximum ($V_M$) voltages. The units of the optical powers and the voltages are diopter and volt, respectively. (a) The minimum optical power $P_{*E}^{m}$. (b) The maximum optical power $P_{*E}^{M}$.

4.4 Spatial zoom control

Using two objects, a painting and a newspaper, we verified whether our technique achieves spatial zoom control (see Visualization 1). First, using the painting, we checked whether our system could provide spatially varying zooming with different magnification factors. Specifically, we magnified a human silhouette area while keeping the other areas unmagnified. The magnification factor was set to $\times 1.1$, $\times 1.2$, and $\times 1.3$. The results are shown in Fig. 10; our system could magnify only the human area by the different magnification factors. Note that we did not apply our seam alleviation method in these cases; we found that the visible seam became more significant as the magnification factor increased.

Fig. 10. Experimental results of a painting with various magnification factors. Only a human area was magnified without our seam alleviation technique. (top) Original and (bottom) magnified appearances.

Second, using the newspaper, we checked the efficacy of our technique for alleviating the visible seam. Fig. 11 shows the results. When we magnified a portion of the texts without using our technique, the overlapping area of the magnified and unmagnified areas appeared unnaturally bright (Fig. 11(b)). In contrast, when we applied our technique, the bright seam disappeared successfully (Fig. 11(c)). The magnification factor was $\times 1.3$. From these results, we confirmed that both of our techniques (spatial zoom control and seam alleviation) worked properly.

Fig. 11. Experimental results of a newspaper (the contrast is adjusted for better readability). (a) Original view. (b) Magnification ($\times 1.3$) without seam alleviation. (c) Magnification with the proposed seam alleviation technique.

Due to COVID-19, we could conduct only an informal user study to investigate how a human observer perceives a scene seen through the stacked ETLs. Four participants (22-39 years old), including two of the authors, observed the painting and newspaper objects using the system. All of them reported that (1) a part of each object was certainly magnified, (2) the seam was significantly reduced by the proposed seam alleviation technique, and (3) no flicker was perceived. This result indicates that the proposed technique can provide the desired spatial zooming appearance.

5. Discussion

As shown by the experimental results in Sec. 4.4, the proposed technique enables spatial zoom control in OST-AR systems. We achieved up to 30% magnification. This magnification limit is determined by the optical power range of the ETL under a fast focal sweep. In addition, the angular resolution is bounded by the aperture size according to the Rayleigh criterion; we discuss the aperture issue later in this section. Few attempts have been made to investigate zooming user interfaces in OST-AR because there have thus far been no effective optical solutions. This paper shows the first technical proof of the concept and thus has the potential to increase the applicability of OST-AR significantly. Nevertheless, there are limitations to the current technique.

First, the perceived image quality is degraded in our system: images are slightly blurred and distorted. We see two principal technical problems. The first is that our zoom lens consists of only two lenses. We expect that this could be solved by adding ETLs as aberration compensation lenses; however, this solution might lead to a larger form factor, a significantly narrower FOV, and more complex control than the current prototype. The second problem is that the illumination period provided by the high-speed projector is too long. As discussed in Sec. 3.3, we illuminate the scene when the time derivative of the optical power of each ETL is smallest, expecting the scene appearance to be stable. However, the magnified and unmagnified parfocal states might last for a shorter period than the projected illumination. With a custom projector, we could illuminate the scene for shorter durations than in the current prototype (1 ms) to alleviate the image quality degradation, but this would result in a darker appearance. A drive signal waveform optimization technique [48] might improve the image quality by keeping the optical power at the desired values for a longer period.

Second, the presented method requires turning off environment lighting and illuminating the real scene with controlled illumination to avoid crosstalk (i.e., the user would perceive the scene as if a zoomed-in view were overlaid on the background), which would be confusing and would significantly degrade the user experience. Spatial augmented reality (SAR) or projection mapping systems share the same limitation [49–51]. SAR merges real and virtual worlds seamlessly by projecting computer-generated images onto physical surfaces and has the potential to be used in various fields such as video conferencing [52], education [53], medical training [54], theme parks [55], and industrial design [56]. SAR researchers have alleviated the above-mentioned limitation by illuminating the whole scene using multiple projectors to reproduce the original environment lighting [57]; such a ubiquitous projection approach is one solution to this limitation. Another solution is to apply a spatial mask rather than spatial illumination. Specifically, we would open the parts of the mask corresponding to magnified areas when the zoom lens magnifies the scene and the other parts when it does not. Various spatial light modulators, such as a digital micromirror device (DMD), a liquid crystal display (LCD), or liquid crystal on silicon (LCoS), can be applied as the mask. Spatial masks have been used widely in OST-HMDs for occlusion-capable displays [58,59]. However, most existing methods create an occlusion mask only at a single, fixed depth, typically at infinity. There are varifocal occlusion techniques [60,61], but they would make the form factor of our eyeglasses too large to be worn. A further alternative is, rather than using a fast focal sweep technique, to use two optical paths (one for the scene as it is and one for a magnified view) and optically combine them before presenting them to the user. The disadvantage of this alternative would be the added bulk of the two optical paths.

Third, our method requires a high-speed projector. Although such a projector currently seems to be specialized equipment, there is already an off-the-shelf 1,000 fps projector on the market, which we used in this research [62]. In addition, in the last few years researchers have found that high-speed projectors are essential for realizing immersive dynamic SAR applications [63–66]. Given this technical trend, high-speed projectors will probably soon become standard equipment in SAR applications. Therefore, although the technical limitations described here currently exist, we believe that they do not immediately limit the applicability of the proposed method.

There are a couple of limitations regarding the ETLs applied in our system. The aperture size of the current ETL is still smaller than that of normal eyeglasses. In addition, we stack two ETLs, which leads to a significant reduction of the user's FOV. Recently, researchers developed large deformable optics for near-eye displays [67], and the aperture sizes of ETLs have been increasing thanks to continuing innovation by manufacturers. Therefore, we believe that this limitation will be overcome in the near future. Our liquid-based ETLs might not be durable under long-term usage with a fast focal sweep, even though the lenses were not damaged during our experiments. One solution to this potential limitation would be to replace the ETLs with non-actuated optical devices. We consider a programmable phase modulator such as a phase-only LCoS a promising candidate; one prior study [68] has already demonstrated its use for free-form near-eye optics. Using it as an optical component of our system in place of an ETL would be an interesting future direction.

Finally, the current system works only for a real scene placed approximately 400 mm away from the user. In Sec. 4.2, we measured magnification factors for several combinations of ETL optical powers by which a real scene remained in focus. The current system derives the optical powers that achieve a desired magnification factor by looking up the measured relation shown in Fig. 8(c). This relation is valid only when the distance between the stacked ETLs and a target scene is around 400 mm, because the above measurement was conducted using a planar board at that distance. Once we perform additional measurements at other distances, our system will work for them as well.

6. Conclusion

This paper presented the first proof of concept of spatial zoom control of real-scene appearances in OST-AR. Our technique combines stacked ETLs and a high-speed projector as eyeglasses and a spatial illuminator of the scene, respectively. We described the mathematical model behind our technique, which determines the focal sweep range of each ETL to achieve a desired magnification factor while keeping the real-world appearance in focus. Through experiments with a prototype, we validated the model and confirmed that spatial zoom control was achieved. Specifically, we could magnify a specific part of a scene by up to 30% while successfully alleviating the seam between the magnified and unmagnified areas. A new class of near-eye displays enabled by projectors has recently emerged [39,69], and this work belongs to that category. We will keep investigating the possibilities and applicability of this emerging framework.

Funding

PRESTO, Japan Science and Technology Agency (JPMJPR19J2); Japan Society for the Promotion of Science (JP17H04691).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. K. Perlin and D. Fox, “Pad: an alternative approach to the computer interface,” in Proceedings of the 20th annual conference on Computer graphics and interactive techniques, (1993), pp. 57–64.

2. E. A. Bier, M. C. Stone, K. Pier, W. Buxton, and T. D. DeRose, “Toolglass and magic lenses: The see-through interface,” in Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, (1993), pp. 73–80.

3. B. B. Bederson and J. D. Hollan, “Pad++: A zooming graphical interface for exploring alternate interface physics,” in Proceedings of the 7th Annual ACM Symposium on User Interface Software and Technology, (1994), pp. 17–26.

4. T. Igarashi and K. Hinckley, “Speed-dependent automatic zooming for browsing large documents,” in Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology, (2000), pp. 139–148.

5. S. Oney, C. Harrison, A. Ogan, and J. Wiese, “Zoomboard: A diminutive qwerty soft keyboard using iterative zooming for ultra-small devices,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, (2013), pp. 2799–2802.

6. A. Toet, “Gaze directed displays as an enabling technology for attention aware systems,” Comput. Human Behavior 22(4), 615–647 (2006). [CrossRef]  

7. S. Grogorick, G. Albuquerque, J.-P. Tauscher, and M. Magnor, “Comparison of unobtrusive visual guidance methods in an immersive dome environment,” ACM Trans. Appl. Percept. 15(4), 1–11 (2018). [CrossRef]  

8. S. Kratz, I. Brodien, and M. Rohs, “Semi-automatic zooming for mobile map navigation,” in Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services, (2010), pp. 63–72.

9. D. C. Robbins, E. Cutrell, R. Sarin, and E. Horvitz, “Zonezoom: Map navigation for smartphones with recursive view segmentation,” in Proceedings of the Working Conference on Advanced Visual Interfaces, (2004), pp. 231–234.

10. A. Cockburn, A. Karlson, and B. B. Bederson, “A review of overview+detail, zooming, and focus+context interfaces,” ACM Comput. Surv. 41(1), 1–31 (2009). [CrossRef]  

11. C. Tominski, S. Gladisch, U. Kister, R. Dachselt, and H. Schumann, “Interactive lenses for visualization: An extended survey,” Comput. Graph. Forum 36, 173–200 (2017). [CrossRef]  

12. D. P. Käser, M. Agrawala, and M. Pauly, “Fingerglass: efficient multiscale interaction on multitouch screens,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, (ACM, 2011), pp. 1601–1610.

13. S. Stellmach and R. Dachselt, “Still looking: Investigating seamless gaze-supported selection, positioning, and manipulation of distant targets,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, (2013), p. 285–294.

14. T. N. Hoang and B. H. Thomas, “Augmented viewport: An action at a distance technique for outdoor ar using distant and zoom lens cameras,” in International Symposium on Wearable Computers (ISWC) 2010, (2010), pp. 1–4.

15. B. Avery, C. Sandor, and B. H. Thomas, “Improving spatial perception for augmented reality x-ray vision,” in 2009 IEEE Virtual Reality Conference, (2009), pp. 79–82.

16. A. Mulloni, A. Dünser, and D. Schmalstieg, “Zooming interfaces for augmented reality browsers,” in Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services, (2010), pp. 161–170.

17. P. Sadda, E. Azimi, G. I. Jallo, J. T. Doswell, and P. Kazanzides, “Surgical navigation with a head-mounted tracking system and display,” Stud. Health Technol. Inform. 184, 363–369 (2013).

18. J. Orlosky, T. Toyama, K. Kiyokawa, and D. Sonntag, “Modular: Eye-controlled vision augmentations for head mounted displays,” IEEE Trans. Visual. Comput. Graphics 21(11), 1259–1268 (2015). [CrossRef]  

19. Y. Yano, J. Orlosky, K. Kiyokawa, and H. Takemura, “Dynamic View Expansion for Improving Visual Search in Video See-through AR,” in ICAT-EGVE 2016 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, (2016).

20. T. Narumi, Y. Ban, T. Kajinami, T. Tanikawa, and M. Hirose, “Augmented perception of satiety: Controlling food consumption by changing apparent size of food with augmented reality,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, (2012), pp. 109–118.

21. S. Sakurai, T. Narumi, Y. Ban, T. Tanikawa, and M. Hirose, “Affecting our perception of satiety by changing the size of virtual dishes displayed with a tabletop display,” in Virtual, Augmented and Mixed Reality. Systems and Applications, (Springer, Berlin Heidelberg, 2013), pp. 90–99.

22. E. Suzuki, T. Narumi, S. Sakurai, T. Tanikawa, and M. Hirose, “Illusion cup: Interactive controlling of beverage consumption based on an illusion of volume perception,” in Proceedings of the 5th Augmented Human International Conference, (2014).

23. Y. Kataoka, S. Hashiguchi, F. Shibata, and A. Kimura, “R-V Dynamics Illusion: Psychophysical Phenomenon Caused by the Difference between Dynamics of Real Object and Virtual Object,” in ICAT-EGVE 2015 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, (2015).

24. T. Shibata, J. Kim, D. M. Hoffman, and M. S. Banks, “Visual discomfort with stereo displays: effects of viewing distance and direction of vergence-accommodation conflict,” in Stereoscopic Displays and Applications XXII, vol. 7863 International Society for Optics and Photonics (SPIE, 2011), pp. 222–230.

25. W. Birkfellner, M. Figl, K. Huber, F. Watzinger, F. Wanschitz, J. Hummel, R. Hanel, W. Greimel, P. Homolka, R. Ewers, and H. Bergmann, “A head-mounted operating binocular for augmented reality visualization in medicine - design and initial evaluation,” IEEE Trans. Med. Imaging 21(8), 991–997 (2002). [CrossRef]  

26. M. Figl, C. Ede, J. Hummel, F. Wanschitz, R. Ewers, H. Bergmann, and W. Birkfellner, “A fully automated calibration method for an optical see-through head-mounted operating microscope with variable zoom and focus,” IEEE Trans. Med. Imaging 24(11), 1492–1499 (2005). [CrossRef]  

27. O. Bimber, D. Kloeck, T. Amano, A. Grundhoefer, and D. Kurz, “Closed-loop feedback illumination for optical inverse tone-mapping in light microscopy,” IEEE Trans. Visual. Comput. Graphics 17(6), 857–870 (2011). [CrossRef]  

28. P.-H. C. Chen, K. Gadepalli, R. MacDonald, Y. Liu, S. Kadowaki, K. Nagpal, T. Kohlberger, J. Dean, G. S. Corrado, J. D. Hipp, C. H. Mermel, and M. C. Stumpe, “An augmented reality microscope with real-time artificial intelligence integration for cancer diagnosis,” Nat. Med. 25(9), 1453–1457 (2019). [CrossRef]  

29. A. Lintu and M. Magnor, “An augmented reality system for astronomical observations,” in IEEE Virtual Reality Conference (VR 2006), (2006), pp. 119–126.

30. E. Peli, “Vision multiplexing: an engineering approach to vision rehabilitation device development,” Opt. Vis. Sci. 78(5), 304–315 (2001). [CrossRef]  

31. E. Peli and F. Vargas-Martin, “In-the-spectacle-lens telescopic device for low vision,” in Ophthalmic Technologies XII, vol. 4611, F. Manns, P. G. Soederberg, and A. Ho, eds., International Society for Optics and Photonics (SPIE, 2002), pp. 129–135.

32. APA Dictionary of Psychology. https://dictionary.apa.org/talbot-plateau-law (accessed: May 20, 2020).

33. G. A. Koulieris, K. Akşit, M. Stengel, R. K. Mantiuk, K. Mania, and C. Richardt, “Near-eye display and tracking technologies for virtual and augmented reality,” Comput. Graph. Forum 38, 493–519 (2019). [CrossRef]  

34. K. Rathinavel, H. Wang, A. Blate, and H. Fuchs, “An extended depth-at-field volumetric near-eye augmented reality display,” IEEE Trans. Visual. Comput. Graphics 24(11), 2857–2866 (2018). [CrossRef]  

35. J.-H. R. Chang, B. V. K. V. Kumar, and A. C. Sankaranarayanan, “Towards multifocal displays with dense focal stacks,” ACM Trans. Graph. 37(6), 1–13 (2019). [CrossRef]  

36. S. Liu, D. Cheng, and H. Hua, “An optical see-through head mounted display with addressable focal planes,” in 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, (2008), pp. 33–42.

37. Y. Jo, S. Lee, D. Yoo, S. Choi, D. Kim, and B. Lee, “Tomographic projector: Large scale volumetric display with uniform viewing experiences,” ACM Trans. Graph. 38(6), 1–13 (2019). [CrossRef]  

38. X. Xia, Y. Guan, A. State, P. Chakravarthula, K. Rathinavel, T. J. Cham, and H. Fuchs, “Towards a switchable ar/vr near-eye display with accommodation-vergence and eyeglass prescription support,” IEEE Trans. Visual. Comput. Graphics 25(11), 3114–3124 (2019). [CrossRef]  

39. T. Ueda, D. Iwai, T. Hiraki, and K. Sato, “Illuminated focus: Vision augmentation using spatial defocusing via focal sweep eyeglasses and high-speed projector,” IEEE Trans. Visual. Comput. Graphics 26(5), 2051–2061 (2020). [CrossRef]  

40. Zoom lens (Wikipedia). https://en.wikipedia.org/wiki/Zoom_lens (accessed: May 20, 2020).

41. M. Ye, M. Noguchi, B. Wang, and S. Sato, “Zoom lens system without moving elements realised using liquid crystal lenses,” Electron. Lett. 45(12), 646–648 (2009). [CrossRef]  

42. H. Li, X. Cheng, and Q. Hao, “An electrically tunable zoom system using liquid lenses,” Sensors 16(1), 45 (2015). [CrossRef]  

43. A. Miks and J. Novak, “Analysis of two-element zoom systems based on variable power lenses,” Opt. Express 18(7), 6797–6810 (2010). [CrossRef]  

44. Y.-H. Lin and M.-S. Chen, “A pico projection system with electrically tunable optical zoom ratio adopting two liquid crystal lenses,” J. Display Technol. 8(7), 401–404 (2012). [CrossRef]  

45. P. Chakravarthula, D. Dunn, K. Akşit, and H. Fuchs, “Focusar: Auto-focus augmented reality eyeglasses for both real world and virtual imagery,” IEEE Trans. Visual. Comput. Graphics 24(11), 2906–2916 (2018). [CrossRef]  

46. F. Crete, T. Dolmiere, P. Ladret, and M. Nicolas, “The blur effect: perception and estimation with a new no-reference perceptual blur metric,” in Human vision and electronic imaging XII, vol. 6492 (International Society for Optics and Photonics, 2007), pp. 64920I:1–11.

47. S.-H. Jo and S.-C. Park, “Design and analysis of an 8x four-group zoom system using focus tunable lenses,” Opt. Express 26(10), 13370–13382 (2018). [CrossRef]  

48. D. Iwai, H. Izawa, K. Kashima, T. Ueda, and K. Sato, “Speeded-up focus control of electrically tunable lens by sparse optimization,” Sci. Rep. 9(1), 12365 (2019). [CrossRef]  

49. O. Bimber and R. Raskar, Spatial Augmented Reality: Merging Real and Virtual Worlds (A. K. Peters Ltd., 2005).

50. O. Bimber, D. Iwai, G. Wetzstein, and A. Grundhöfer, “The visual computing of projector-camera systems,” Comput. Graph. Forum 27, 2219–2245 (2008). [CrossRef]  

51. A. Grundhöfer and D. Iwai, “Recent advances in projection mapping algorithms, hardware and applications,” Comput. Graph. Forum 37, 653–675 (2018). [CrossRef]  

52. R. Schubert, G. Welch, P. Lincoln, A. Nagendran, R. Pillat, and H. Fuchs, “Advances in shader lamps avatars for telepresence,” in 2012 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), (2012), pp. 1–4.

53. D. Iwai, R. Matsukage, S. Aoyama, T. Kikukawa, and K. Sato, “Geometrically consistent projection-based tabletop sharing for remote collaboration,” IEEE Access 6, 6293–6302 (2018). [CrossRef]  

54. S. Daher, “Optical see-through vs. spatial augmented reality simulators for medical applications,” in 2017 IEEE Virtual Reality (VR), (2017), pp. 417–418.

55. M. R. Mine, J. van Baar, A. Grundhöfer, D. Rose, and B. Yang, “Projection-based augmented reality in disney theme parks,” IEEE Computer 45(7), 32–40 (2012). [CrossRef]  

56. T. Takezawa, D. Iwai, K. Sato, T. Hara, Y. Takeda, and K. Murase, “Material surface reproduction and perceptual deformation with projection mapping for car interior design,” in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), (2019), pp. 251–258.

57. B. Jones, R. Sodhi, M. Murdock, R. Mehra, H. Benko, A. Wilson, E. Ofek, B. MacIntyre, N. Raghuvanshi, and L. Shapira, “Roomalive: Magical experiences enabled by scalable, adaptive projector-camera units,” in Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, (2014), p. 637–644.

58. K. Kiyokawa, M. Billinghurst, B. Campbell, and E. Woods, “An occlusion capable optical see-through head mount display for supporting co-located collaboration,” in The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings., (2003), pp. 133–141.

59. O. Cakmakci, Y. Ha, and J. P. Rolland, “A compact optical see-through head-worn display with occlusion support,” in Third IEEE and ACM International Symposium on Mixed and Augmented Reality, (2004), pp. 16–25.

60. T. Hamasaki and Y. Itoh, “Varifocal occlusion for optical see-through head-mounted displays using a slide occlusion mask,” IEEE Trans. Visual. Comput. Graphics 25(5), 1961–1969 (2019). [CrossRef]  

61. K. Rathinavel, G. Wetzstein, and H. Fuchs, “Varifocal occlusion-capable optical see-through augmented reality display based on focus-tunable optics,” IEEE Trans. Visual. Comput. Graphics 25(11), 3125–3134 (2019). [CrossRef]  

62. “DynaFlash, Tokyo Electron Device LTD.,” https://solutions.inrevium.com/application/projector/dynaflash.html. Accessed: 2020-11-25.

63. T. Nomoto, W. Li, H.-L. Peng, and Y. Watanabe, “Dynamic projection mapping with networked multi-projectors based on pixel-parallel intensity control,” in SIGGRAPH Asia 2020 Emerging Technologies, (2020).

64. D. Tone, D. Iwai, S. Hiura, and K. Sato, “Fibar: Embedding optical fibers in 3d printed objects for active markers in dynamic projection mapping,” IEEE Trans. Visual. Comput. Graphics 26(5), 2030–2040 (2020). [CrossRef]  

65. K. Fukamizu, L. Miyashita, and M. Ishikawa, “Elamorph projection: Deformation of 3d shape by dynamic projection mapping,” in IEEE International Symposium on Mixed and Augmented Reality, (2020), pp. 220–229.

66. L. Wang, H. Xu, S. Tabata, Y. Hu, Y. Watanabe, and M. Ishikawa, “High-speed focal tracking projection based on liquid lens,” in ACM SIGGRAPH 2020 Emerging Technologies, (2020).

67. K. Akşit, W. Lopes, J. Kim, P. Shirley, and D. Luebke, “Near-eye varifocal augmented reality display using see-through screens,” ACM Trans. Graph. 36(6), 1–13 (2017). [CrossRef]  

68. Y. Itoh, T. Langlotz, S. Zollmann, D. Iwai, K. Kiyokawa, and T. Amano, “Computational phase-modulated eyeglasses,” IEEE Trans. Visual. Comput. Graphics, p. 1 (2019).

69. Y. Itoh, T. Kaminokado, and K. Akşit, “Beaming displays,” IEEE Trans. Visual. Comput. Graphics 27(5), 2659–2668 (2021). [CrossRef]  

Supplementary Material (1)

Visualization 1: A supplementary video of the IlluminatedZoom system, which allows an observer to see a physical scene such that a part of the scene is magnified while the other parts remain unmagnified.




