## Abstract

Integral floating display is a recently proposed three-dimensional (3D) display method which provides a dynamic 3D image in the vicinity of an observer. It has a viewing window, through which alone correct 3D images can be observed. However, the positional difference between the viewing window and the floating image limits the viewing zone of the integral floating system. In this paper, we present the principle and experimental results of adjusting the location of the viewing window of the integral floating display system by modifying the elemental image region for integral imaging. We explain the characteristics of the viewing window and propose how to move it to maximize the viewing zone.

©2007 Optical Society of America

## 1. Introduction

Integral floating display [1–3] is a three-dimensional (3D) display system which combines two 3D display methods: integral imaging and floating display. Integral imaging is a well-known 3D display method first proposed by Lippmann in 1908 [4–10]. It is composed of a lens array and a two-dimensional (2D) display device on which an elemental image set is displayed. The elemental image set is integrated through the lens array and perceived as a 3D image. The conventional floating display uses a large convex lens or a concave mirror to display an image of a real object or a 2D image at a location close to an observer.

In integral floating display, the 3D image constructed by an integral imaging system lies in the object space of a large convex lens called a floating lens. The integral floating display can produce a 3D image with a strong sense of depth by forming a real image of the 3D image integrated by the integral imaging technique. That is, the integral floating display system moves the 3D image (integrated by the integral imaging) to a location close to an observer to provide a better sense of depth.

Although the 3D image source of the integral floating display is provided by the integral imaging system, the 3D image produced by the integral floating display possesses different viewing characteristics from a 3D image constructed by an integral imaging system [11,12]. The most noticeable difference is between the viewing angle in integral imaging and the viewing window in the integral floating system. A 3D image produced by integral imaging has a limited viewing angle because it is an integration of the elemental images through the lens array: there is a finite elemental image region corresponding to each elemental lens. If an observer is outside the viewing angle, the elemental images are not integrated through the proper elemental lenses. As a result, the observer sees 3D images with wrong perspectives, called flipped images. The viewing angle in integral imaging gives rise to the viewing window in integral floating display. The viewing window is the key concept for understanding the different viewing characteristics of the 3D images produced by integral floating display compared with those produced by an integral imaging system. Only through the viewing window can the observer see correct 3D images [1]; wrong images, such as flipped images, are observed outside the viewing window in integral floating display.

As shown in Fig. 1, the location of the viewing window is fixed at the focal plane of the floating lens in our previous integral floating display [1]. The observer in the viewing region observes the 3D image only through the viewing window. When the observer moves out of the viewing region, the 3D image is clipped by the viewing window. But the 3D images constructed by the integral floating display are located farther than the focal length from the floating lens because they are real images. Due to this positional difference between the viewing window and the 3D images, the viewing region in which the observer can see the 3D image through the viewing window is limited.

Our purpose in this paper is to move the location of the viewing window so that we can enlarge the observer's viewing region for the 3D image of the integral floating display. We change the location of the viewing window by changing the elemental image region assigned to each elemental lens. The viewing region is expected to be maximized when the viewing window is placed at the position of the 3D image.

## 2. Location change of the viewing window in the integral floating system

In this section, we propose a method to change the location of the viewing window, which is the main contribution of this paper. The first and second subsections explain the formation and the characteristics of the viewing window in the previous integral floating display. The third subsection explains how to change the location of the viewing window.

#### 2.1. The viewing angle of an integral imaging system

We start the investigation with the viewing angle calculation of integral imaging. The size of the viewing window is closely related to the viewing angle of the integral imaging system. Figure 2 shows the geometries used in the viewing angle calculation of the real mode and the virtual mode integral imaging systems.

Here, *f_{i}* is the focal length of the elemental lens, *g* is the gap between the display device and the lens array, *φ* is the pitch of the elemental lens, and *s* is the elemental image region for one elemental lens. In the conventional design, *s* and *φ* are the same. We want to find the viewing angle *θ* for the middle elemental lens. The viewing angle is defined as the angular range inside which the observer can watch a correct 3D image. For a correct view of the 3D image, each elemental image area should be observed through the corresponding elemental lens. In Fig. 2, the middle area of the display device, indicated by black stripes, should be observed through the middle elemental lens.

Let us imagine a bundle of light rays emitted from the display device which become parallel to one another after refraction at the elemental lenses, as in Fig. 2. Before the refraction, the rays meet at the left focal plane of the elemental lens in the real mode, while in the virtual mode the rays diverge as if they started from the left focal plane of the elemental lens. After the refraction, the rays are parallel, and their angle to the normal of the elemental lens is *θ*. The lowest ray in the real mode and the highest ray in the virtual mode, indicated by thick arrows, start at the boundary of the striped region. If the angle of the parallel rays after refraction is smaller than *θ*, then all the rays start from the black striped region. If the angle of the parallel rays after refraction is larger than *θ*, then some or all of the rays do not start from the black striped region. Therefore, *θ* is the viewing angle. The light rays indicated with thick arrows in Fig. 2 have a special meaning: among the light rays which do not start from inside the black striped region, they have the smallest angle to the normal of the elemental lens after refraction. In fact, the thick arrow lines come from the boundaries between the striped region and its neighboring regions. We will refer to them as 'critical rays' throughout this paper. The viewing angle of the real mode and the virtual mode integral imaging is obtained through a simple geometrical calculation, which yields Eq. (1).
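From the geometry just described, the viewing angle can be written as follows. This is a reconstruction from the critical-ray construction above (the footprint of the parallel bundle on the display must stay inside the elemental region of width *s*), and may differ in arrangement from the published form of Eq. (1):

```latex
% Viewing angle of the middle elemental lens: the parallel bundle's
% footprint on the display must stay inside the elemental region of width s.
\tan\theta \;=\;
\begin{cases}
\dfrac{1}{2g}\!\left(s-\dfrac{\varphi\,(g-f_i)}{f_i}\right), & \text{real mode } (g>f_i),\\[2ex]
\dfrac{1}{2g}\!\left(s-\dfrac{\varphi\,(f_i-g)}{f_i}\right), & \text{virtual mode } (g<f_i).
\end{cases}
```

For the conventional choice *s* = *φ*, the virtual-mode case reduces to tan *θ* = *φ*/(2*f_{i}*), independent of *g*, which matches the *g*-independence noted in the next subsection.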

#### 2.2. The viewing window of a conventional integral floating system

With the viewing angle of an integral imaging system known, we can find the location and the size of the viewing window of an integral floating system. Figure 3 illustrates the principle of the formation of the viewing window. Although the figure shows an integral floating display using virtual mode integral imaging, the discussion throughout this section applies equally to an integral floating display using real mode integral imaging. For simplicity of the discussion, we confine our geometry to two dimensions.

Here, we should recall the optical property of a lens that parallel light rays incident on the lens converge to a point on the focal plane of the lens. Let us imagine sets of parallel rays incident on the floating lens at angles equal to the viewing angle of the integral imaging. In Fig. 3, a set of upward rays is labeled A (red lines), while a set of downward rays is labeled B (blue lines). The parallel rays with label A converge to an upper point and the parallel rays with label B converge to a lower point on the focal plane. The distance between these two points is the height of the viewing window. We draw dotted lines in Fig. 3 through the center of the floating lens to derive an equation describing the height of the viewing window. The height of the viewing window *w_{f}* can be calculated by multiplying the focal length of the floating lens by twice (the full width of) the tangent of the viewing angle given in Eq. (1), which yields Eq. (2).
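Written out, the verbal prescription above reads as follows (a reconstruction of Eq. (2) from the description; the symbols are those already defined):

```latex
% Window height: floating-lens focal length times the full-width tangent
% of the integral-imaging viewing angle.
w_f = 2\,f_f \tan\theta
```

For the virtual mode with *s* = *φ*, assuming tan *θ* = *φ*/(2*f_{i}*), this gives *w_{f}* = *f_{f}φ*/*f_{i}* ≈ 175 × 10/22 ≈ 79.5 mm for the parameters used later, consistent with the roughly 80 mm window reported in Section 3.3.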

In Eq. (2), *f_{f}* is the focal length of the floating lens. It is an interesting fact that *w_{f}* is independent of *g*, the gap between the lens array and the display device, when the virtual mode integral imaging system is used. This is because the viewing angle of integral imaging is also independent of *g*, as in Eq. (1) for the virtual mode. By expanding our discussion to three dimensions, we can find the area of the viewing window.

#### 2.3. Location change of the viewing window

In Fig. 3, the location of the viewing window of the integral floating system is fixed at the focal plane of the floating lens because the critical rays of all the elemental lenses are parallel. Therefore, to move the location of the viewing window, we make the critical rays non-parallel. To achieve this, we change *s*, the size of each elemental image area, to differ from *φ*, the pitch of the elemental lens. Precisely, we choose *s* according to Eq. (3).

This change of the elemental image region creates a viewing window for integral imaging at a distance *d* from the lens array. The role of the viewing window of integral imaging is just like that of the viewing window of integral floating: correct 3D images can only be observed through the viewing window. *d* takes a positive sign when the viewing window is behind the lens array and a negative sign when the viewing window is in front of the lens array. Accordingly, *s* is smaller than *φ* when the viewing window is behind the lens array and bigger than *φ* when it is in front. The relationship among *φ*, *s*, and *d* is illustrated in Fig. 4.
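One geometry consistent with the sign conventions above is a pitch that scales linearly toward the convergence point at distance *d*. This is a sketch only; the exact published form of Eq. (3) may differ:

```latex
% Elemental-region size when the region pitch converges at distance d
% behind the lens array (d > 0) or in front of it (d < 0).
s = \varphi\left(1 - \frac{g}{d}\right) = \varphi\,\frac{d-g}{d}
```

For *d* > *g* > 0 (window behind the array) this gives *s* < *φ*, and for *d* < 0 (window in front) it gives *s* > *φ*, in agreement with the statement above.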

The red straight lines stand for the upward critical rays of all the elemental lenses, while the blue straight lines stand for the downward critical rays. As will be explained in the next section, the viewing window of integral floating is required to be located near the 3D image produced by the integral floating display. This means the viewing window of integral imaging is likewise required to be located near the 3D image produced by the integral imaging. Therefore, real mode integral imaging is assumed when the viewing window is formed in front of the lens array, and virtual mode integral imaging is assumed when the viewing window is formed behind the lens array. When the viewing window is formed in front of the lens array, the critical rays meet at the boundaries of the viewing window. The critical rays do not actually meet when the viewing window is formed behind the lens array; however, the dashed lines diverging from the boundaries of the viewing window in the right figure indicate the virtual light paths perceived by the observer. Therefore, the critical rays are perceived by the observer as if they met at the boundaries of the viewing window behind the lens array. With some algebraic calculation, *w_{i}*, the size of the viewing window of the integral imaging, can be derived as Eq. (4).

Again, *w_{i}* is independent of *g* for the virtual mode.

Now let us consider how the size and the location of the viewing window in integral floating display change when we adopt the proposed design. Figure 5 shows the formation of the viewing window in the proposed system.

Here, *b* is the distance between the floating lens and the viewing window of the integral floating display. We can find *b* by applying the lens law to *a*+*d* and *f_{f}*. With *b* and *w_{i}* known, the size of the viewing window of the integral floating system, *w_{f}*, can be described as Eq. (5).

We notice again that *w_{f}* of the proposed design is independent of *g* for the integral floating display using the virtual mode of integral imaging. The validity of Eq. (5) is verified through an experiment in which the location and the size of the window are measured.
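The two steps just described, finding *b* from the lens law and then imaging the integral-imaging viewing window through the floating lens, can be sketched as follows (thin-lens magnification; a reconstruction, not necessarily the exact published arrangement of Eq. (5)):

```latex
% b from the lens law; w_f as the magnified image of w_i.
\frac{1}{a+d}+\frac{1}{b}=\frac{1}{f_f},
\qquad
w_f=\frac{b}{a+d}\,w_i=\frac{b-f_f}{f_f}\,w_i
```

The last equality follows from the lens law and makes explicit the role of (*b* − *f_{f}*) noted in Section 3.3.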

We used an empty elemental image set which is totally black except for the boundaries of the elemental image regions, which are marked by green lines. Since the critical rays start at the boundaries of the elemental image regions, the viewing window is formed as a rectangle with green boundaries. The experimental setup is as shown in Fig. 5: we used a floating lens with a focal length of 175 mm, and the pitch and the focal length of the elemental lenses are 10 mm and 22 mm, respectively. We tried to create a viewing window 350 mm distant from the floating lens (*b* = 350 mm). Since *b* and *a*+*d* satisfy the lens law
$\left(\frac{1}{a+d}+\frac{1}{b}=\frac{1}{{f}_{f}}\right)$
where *f_{f}* is 175 mm, *a*+*d* should be 350 mm to make *b* 350 mm. The virtual mode of integral imaging is used for the experiment, and *a* is set to 250 mm, 225 mm, and 200 mm. A diffuser is placed 350 mm away from the floating lens, at the desired location of the viewing window of the integral floating system. The size of the viewing window appearing on the diffuser is measured for each case, as shown in Fig. 6.
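The arithmetic of this setup can be checked numerically. The following sketch uses only the numbers stated above (*f_{f}* = 175 mm, *b* = 350 mm, *a* ∈ {250, 225, 200} mm):

```python
# Check the lens-law bookkeeping of the viewing-window experiment.
# 1/(a+d) + 1/b = 1/f_f with f_f = 175 mm and a target b = 350 mm
# forces a + d = 350 mm, so each chosen a fixes d = 350 - a.

f_f = 175.0  # focal length of the floating lens (mm)
b = 350.0    # desired distance from floating lens to viewing window (mm)

# Object distance a + d that satisfies the lens law for this b:
object_distance = 1.0 / (1.0 / f_f - 1.0 / b)
print(f"a + d = {object_distance:.1f} mm")  # 350.0 mm, since b = 2*f_f

for a in (250.0, 225.0, 200.0):
    d = object_distance - a
    print(f"a = {a:.0f} mm  ->  d = {d:.0f} mm")
```

Because *b* = 2*f_{f}* here, the floating lens relays the integral-imaging window at unit magnification; the measured variation of the window size with *a* therefore reflects the change of *d* alone.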

The theoretical values of *w_{f}* calculated using Eq. (5) are in good agreement with the experimental results shown in Fig. 6. As predicted by Eq. (5), *w_{f}* increases linearly as *a* decreases.

## 3. Viewing region maximization of an integral floating display

In this section, we analyze the viewing region of the integral floating display and explain how to maximize it. In the first subsection, we analyze the viewing region. In the second subsection, we explain the viewing region maximization process in the conventional integral floating display. Finally, viewing region maximization with location adjustment of the viewing window is proposed in the last subsection, which shows why the location adjustment of the viewing window is an essential step in the maximization process.

#### 3.1. Viewing region in integral floating display

Let us assume that we are designing a system which displays 3D images whose expanse is confined to a rectangular parallelepiped volume (referred to as the 3D volume throughout the discussion). As in Fig. 7, there are a viewing window and a 3D volume.

The observer is *d_{o}* distant from the viewing window, and the center of the 3D volume is Δ distant from the viewing window. The longitudinal size of the 3D volume is *l* and the transversal size of the 3D volume is *h*. Here, *l* and *h* are constant values in the design process. Our goal is to maximize the size of the viewing region, *u*, for given *d_{o}*, *l*, and *h*. In the discussion of the viewing region maximization process, only the location (with respect to the 3D volume) and the size of the viewing window are considered. Certainly, a too-small integral imaging system with only a few elemental lenses can restrict the viewing region so that the 3D image is unobservable even through the viewing window, but this problem can easily be overcome by expanding the integral imaging system to a proper size. Here we assume that an integral imaging system of sufficient size and resolution is provided and that the only obstacle to enlarging *u* is the adjustment of the location and size of the viewing window relative to the 3D volume. Also, the viewing angle of the integral imaging system need not be considered here because its effect is included in the position and the size of the viewing window of the integral floating display.

In the situation in Fig. 7, the front part of the 3D volume and the viewing window determine the viewing region. With some algebra, *u* can be described as Eq. (6).

In deriving Eq. (6), the viewing window is assumed to be behind the 3D volume from the point of view of the observer. This is indeed the case in the conventional integral floating display, because the viewing window is at the focal plane while the 3D images are formed farther than the focal length from the floating lens. But now the viewing window can change its location by the method explained above. Therefore, we should also consider the situation where the 3D volume is behind the viewing window, as in Fig. 8.

In this case, the rear part of the 3D volume and the viewing window determine *u*. Again, with some algebra, we get Eq. (7).

When the viewing window overlaps with the 3D volume, we choose the smaller of the values obtained from Eqs. (6) and (7).
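The two cases can be combined in one routine. The formulas below are a reconstruction from the similar-triangle geometry of Figs. 7 and 8 (rays from the binding face of the volume must pass through the window of size *w_{f}*); they are a sketch under that reading, not necessarily the exact published Eqs. (6) and (7):

```python
def viewing_region(w_f, d_o, delta, l, h):
    """Size u of the viewing region (reconstructed Eqs. (6)/(7)).

    w_f   -- size of the viewing window
    d_o   -- observer distance from the viewing window
    delta -- signed distance from the window to the 3D-volume center,
             positive when the volume lies toward the observer
    l, h  -- longitudinal and transversal sizes of the 3D volume
    """
    u = float("inf")
    # Only the two extreme faces of the volume can bind (for w_f > h),
    # which covers Fig. 7, Fig. 8, and the overlapping case at once.
    for z in (delta - l / 2.0, delta + l / 2.0):
        if z == 0.0:
            continue  # a face in the window plane adds no constraint
        u = min(u, (w_f * (d_o - z) - h * d_o) / abs(z))
    return u

# Conventional design of Section 3.2: window at the focal plane
# (175 mm from the lens), observer 1 m from the lens (d_o = 825 mm),
# volume center 40 mm toward the observer from the window.
u_conv = viewing_region(w_f=79.5, d_o=825.0, delta=40.0, l=80.0, h=50.0)
print(round(u_conv, 1))  # 224.7 (mm), near the 22 cm quoted in Section 4
```

With the proposed design (window of 77 mm moved to the volume center, *d_{o}* = 785 mm, Δ = 0), the same routine gives about 453 mm, close to the 46 cm reported in Section 4.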

#### 3.2. Viewing region maximization in conventional integral floating display

In the conventional integral floating display, the size of the viewing window is fixed by Eq. (2). Therefore, the only controllable parameter is Δ. As can be predicted from Fig. 7 or Eq. (6), a smaller Δ guarantees a larger *u* when the other conditions are fixed. But Δ cannot be reduced below *l*/2. If Δ were less than *l*/2, some part of the 3D volume would be less than the focal length distant from the floating lens and impossible to display (recall that the 3D image is a real image formed by the floating lens, a convex lens, and real images are located farther than the focal length from a convex lens). In this design, a virtual mode integral imaging system is preferred because we want to put the 3D volume as close as possible to the viewing window, and the image sources should be very far from the floating lens. If we used real mode integral imaging, the 3D image source would be produced between the lens array of the integral imaging system and the floating lens, which usually leads to a very large Δ and a small viewing region. Therefore, we used a virtual mode integral imaging system to construct the integral floating display. Nevertheless, the analysis and the discussion throughout this paper cover both the real mode and the virtual mode integral imaging systems.

As a design example, we assume a floating lens with a focal length of 175 mm and a lens array composed of elemental lenses with a pitch of 10 mm and a focal length of 22 mm. The parameters of the 3D volume, *l* and *h*, are assumed to be 80 mm and 50 mm, respectively. The observer is assumed to be 1 m distant from the floating lens. This set of parameter values is maintained in the rest of the paper and also in the experiment. The calculated size of the viewing region *u* for Δ varying from 40 mm to 120 mm is illustrated in Fig. 9. The viewing region is clearly maximized when Δ is 40 mm, which is the value of *l*/2.
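The trend in Fig. 9 can be reproduced with a quick sweep. This sketch assumes the reconstructed front-face viewing-region formula for a window behind the volume and the reconstructed window size *w_{f}* ≈ *f_{f}φ*/*f_{i}*; both are reconstructions, not the paper's exact expressions:

```python
# Sweep Delta for the design example: f_f = 175 mm floating lens,
# 10 mm / 22 mm elemental lenses, l = 80 mm, h = 50 mm, observer 1 m
# from the floating lens (so d_o = 1000 - 175 = 825 mm from the window).
f_f, phi, f_i = 175.0, 10.0, 22.0
l, h = 80.0, 50.0
d_o = 1000.0 - f_f

w_f = f_f * phi / f_i  # reconstructed conventional window size, ~79.5 mm

def u_of(delta):
    """Viewing region when the window sits behind the 3D volume."""
    face = delta + l / 2.0  # the front face of the volume binds
    return (w_f * (d_o - face) - h * d_o) / face

best = max(range(40, 121, 10), key=u_of)
print(best, round(u_of(best), 1))  # prints: 40 225.1
```

The sweep confirms that *u* decreases monotonically with Δ, so the maximum sits at Δ = *l*/2 = 40 mm, as in Fig. 9.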

#### 3.3. Viewing region maximization through location adjustment of the viewing window

In most cases, the viewing region can be enlarged further by changing the location and the size of the viewing window. The location of the viewing window is moved toward the center of the 3D volume so that Δ approaches 0. However, the size of the viewing window then changes according to Eq. (5), and *w_{f}* becomes smaller as *a* gets bigger because (*b* − *f_{f}*) has a positive value. Therefore, *a* should be as small as possible. But a too-small *a* is bad for delivering the feel of depth; in our case, a proper *a* for good image quality was around 200 mm. With *a* set to 200 mm, *u* can be calculated using Eqs. (6) and (7), and the result is illustrated in Fig. 10.

The calculation predicts that the viewing region is further enhanced in spite of the size reduction of the viewing window (the size of the viewing window is calculated to be 80 mm and 77 mm when it is 175 mm and 215 mm away from the floating lens, respectively). The maximum viewing region appears when the location of the viewing window coincides with the center of the 3D volume.

## 4. Experimental results

We performed experiments to verify that the proposed scheme provides a wider viewing region for the observer. Viewing-region-maximized systems of the conventional design and the proposed design discussed in the previous section were implemented, and their viewing regions were measured. As described in the previous section, *f_{f}* is 175 mm, *f_{i}* is 22 mm, *φ* is 10 mm, *a* is 200 mm, *h* is 50 mm, and *l* is 80 mm. The experimental setup is shown in Fig. 11.

The 3D images of three fish, located 175 mm, 215 mm, and 255 mm away from the floating lens, are displayed using the two viewing-region-maximized integral floating displays. A digital camera 1 m away from the floating lens took pictures of the 3D fish images at 9 different locations (center; 15 cm away from the center in the right, left, up, and down directions; and 23 cm away from the center in the right, left, up, and down directions). The results are shown in Fig. 12.

When the camera moved 15 cm from the central view, the 3D images were already clipped by the viewing window in the conventional case, but the 3D images produced by the proposed method still showed clear results. When the camera moved 23 cm away from the central view, the 3D images produced by the conventional method were severely clipped, while the 3D images produced by the proposed method maintained clear views. The 3D images in the down view remain clean in both cases because the fish are aligned from lower to upper positions as the distance from the observer to the fish increases. Except for the down view, the proposed system provides a wider viewing region in every direction. The viewing regions for an observer 1 m away from the floating lens are predicted to be 22 cm in the conventional system and 46 cm in the proposed system according to the graphs shown in Figs. 9 and 10, and the experimental results are in good agreement with these predictions. Below is a movie of the 3D image reconstructed by the proposed integral floating system, recorded by a camcorder 1 m away from the floating lens.

In the movie, there are some discontinuities in the 3D image while the camcorder moves. These discontinuities are created by the boundaries of the elemental lenses: a point on the 3D image is observed through different elemental lenses as the observing location varies, so different perspectives are perceived. Hence the observer perceives 3D depth through binocular disparity and convergence. From this point of view, the 3D image constructed by integral floating shares some characteristics with multi-view 3D display systems.

## 5. Conclusion

The limited viewing region of the integral floating display originates from the positional difference between the viewing window and the 3D image. We proposed a method to move the location of the viewing window by modifying the elemental image region for each elemental lens. Even though the design of the conventional system had already been optimized to provide a maximum viewing region, the location adjustment of the viewing window could further enhance the viewing region. We provided an analysis and experimental results to explain and support our method. The discussion in this paper provides an essential analysis for the design of the elemental image of the integral floating display system.

## Acknowledgment

This research was supported by a grant (F0004190-2007-23) from the Information Display R&D Center, one of the 21st Century Frontier R&D Programs funded by the Ministry of Commerce, Industry and Energy of the Korean Government.

## References and links

**1. **S.-W. Min, M. Hahn, J. Kim, and B. Lee, “Three-dimensional electro-floating display system using an integral imaging method,” Opt. Express **13**, 4358–4369 (2005). [CrossRef] [PubMed]

**2. **B. Lee, J. Kim, and S.-W. Min, “Integral floating 3D display system: principle and analysis,” Three-Dimensional TV, Video, and Display V, Optics East, Boston, MA, USA, Proc. SPIE **6392**, paper 6392-18, Oct. 2006.

**3. **J. Kim, Y. Kim, S.-W. Cho, S.-W. Min, and B. Lee, “Viewing angle enhancement of an integral imaging system,” Society for Information Display 2006 International Symposium Digest of Technical Papers, San Francisco, CA, USA, vol. **XXXVII**, book I, 186–189, June 2006.

**4. **G. Lippmann, “La photographie intégrale,” C. R. Acad. Sci. **146**, 446–451 (1908).

**5. **F. Okano, J. Arai, H. Hoshino, and I. Yuyama, “Three-dimensional video system based on integral photography,” Opt. Eng. **38**, 1072–1077 (1999). [CrossRef]

**6. **A. Stern and B. Javidi, “Three-Dimensional Image Sensing and Reconstruction with Time-Division Multiplexed Computational Integral Imaging,” Appl. Opt. **42**, 7036–7042 (2003). [CrossRef] [PubMed]

**7. **B. Lee, J.-H. Park, and S.-W. Min, “Three-dimensional display and information processing based on integral imaging,” in Digital Holography and Three-Dimensional Display (edited by
T.-C. Poon), Springer, New York, USA, 2006, Chapter 12.

**8. **J. S. Jang, F. Jin, and B. Javidi, “Three-dimensional integral imaging with large depth of focus using real and virtual image fields,” Opt. Lett. **28**, 1421–1423 (2003). [CrossRef] [PubMed]

**9. **M. Okui, J. Arai, Y. Nojiri, and F. Okano, “Optical screen for direct projection of integral imaging,” Appl. Opt. **45**, 9132–9139 (2006). [CrossRef] [PubMed]

**10. **H. Choi, Y. Kim, J.-H. Park, S. Jung, and B. Lee, “Improved analysis on the viewing angle of integral imaging,” Appl. Opt. **44**, 2311–2317 (2005). [CrossRef] [PubMed]

**11. **J. Hong, J.-H. Park, J. Kim, and B. Lee, “Analysis of image depth in integral imaging and its enhancement by correction to elemental images,” Novel Optical Systems Design and Optimization VII, SPIE Annual Meeting, Proc. SPIE **5524**, Denver, Colorado, USA, 387–395, Aug. 2004. [CrossRef]

**12. **J.-H. Park, S.-W. Min, S. Jung, and B. Lee, “Analysis of viewing parameters for two display methods based on integral photography,” Appl. Opt. **40**, 5217–5232 (2001). [CrossRef]