Abstract

In this paper, we present a polyhedron-shaped floating autostereoscopic display viewable from 360 degrees, based on integral photography (IP) and multiple semitransparent mirrors. Combining IP with a polyhedron formed from semitransparent mirrors yields a floating three-dimensional (3D) autostereoscopic display that can be viewed by several observers from various viewpoints simultaneously. IP is adopted to generate a 3D autostereoscopic image with full parallax. The semitransparent mirrors reflect the corresponding IP images so that the reflections are situated around the center of the polyhedron-shaped device, producing the floating display; together, the reflected IP images reconstruct a floating autostereoscopic image viewable from 360 degrees. We manufactured two prototypes and performed two sets of experiments to evaluate the feasibility of the method. The results show that our approach achieves a floating autostereoscopic display viewable from the surrounding area, and that the proposed method supports continuous viewpoints over the whole 360 degrees without flipping.

© 2015 Optical Society of America

1. Introduction

Floating displays occupy a 3D space, allowing observers to see a display from multiple angles in an intuitive and natural manner [1]. Owing to this realistic and natural display quality, such technology has continuously attracted the attention of researchers over the past several decades, and various display techniques have been proposed. Jones et al. described a volumetric display system comprising a high-speed projector, a spinning mirror, and other auxiliary parts; with their system, multiple people could simultaneously observe transparent objects from around the display [2]. Saito et al. devised another technique for producing a floating 3D display in space by illuminating a set of points through laser-plasma scanning [3]. Another proposal was the Seelinder, a dynamic 3D color display viewable from the full 360 degrees, built from a cylindrical parallax barrier and a one-dimensional light source array [4]. The Transpost system is a 360-degree scanning display that renders 24 images around the outer edge of a video frame and projects them onto a rapidly rotating anisotropic screen surrounded by a circle of mirror facets [5]. The display systems mentioned above achieve the floating display effect to varying degrees. However, they also rely on very complicated mechanisms, and are therefore neither intuitive nor practical solutions to a floating display.

Polyhedron-shaped displays were invented to produce pseudo-3D floating displays. The proposed devices utilize semitransparent mirrors to displace the positions of the presented targets [6]. A specific number of perspectives of the target—usually three or four—are rendered through computer graphics (CG) and then reflected by the partly reflective facets of the polyhedron-shaped device. Vizoo Corporation has created such a device, using multiple semitransparent mirrors to display two-dimensional (2D) images on four facets and obtain a pseudo-3D display. However, the display is not actually 3D, and observers cannot access stereo information over the full 360 degrees [7]. Another device is Showcase, which enhances the display of 2D surfaces and also lets observers view a target from several angles in an intuitive way [8, 9]. MRsionCase integrated a more ergonomic and interactive interface into the device to further improve its applicability and overall usability [10]. Chiu and Yang attempted to capture and display multi-perspective images by way of a polyhedron-shaped or cone-shaped showcase [11]. In short, pseudo-3D display devices have been feasible substitutes for devices producing a true 3D display, and with improvements in design, the polyhedron-shaped versions further improved the user experience. Nonetheless, the images displayed by such devices are not autostereoscopic and often require additional mirrors and tracking devices. Furthermore, image flipping occurs between neighboring semitransparent mirrors, so the display cannot be viewed continuously over 360 degrees, greatly lessening the user experience.

Integral photography (IP) [12] is an ideal way to display 3D autostereoscopic images: spatially formed visible images can be observed from an arbitrary viewpoint without supplementary glasses or tracking devices. IP and related animated images have attracted much attention in a variety of fields interested in 3D imaging [13, 14]. One of the challenging problems in IP is extending the viewing angle [15, 16], and various approaches for enhancing it have been proposed [17–20]. However, the viewing zone is still confined to a limited area directly in front of the display device. Lopez-Gulliver et al. proposed an interesting handheld box-shaped 3D display that uses a wide-viewing-angle IP technique for each surface of the box [21]; the IP images created on the cube facets combine to generate a 3D image as a whole. However, the viewing angle was enhanced at the cost of other aspects of IP performance, such as resolution and depth perception.

By combining IP with other approaches to floating display, new 3D display modalities can be created. Miyazaki et al. proposed a 3D display method based on the combination of IP, a 360 degree scanning mirror, and a concave mirror, aiming at floating full-parallax autostereoscopic images viewable from the surrounding area [22]. Erdenebat et al. displayed full-parallax 3D images with a 360-degree horizontal viewing angle by combining IP with a rotating mirror scheme [23]. These floating 3D display systems, however, need complicated optical structures and mechanisms. As with volumetric display systems, protective measures must be taken to make the devices safe and stable.

In this paper, we report a floating autostereoscopic display viewable from 360 degrees, with full motion parallax and a 360 degree continuous viewing angle without flipping. We employ IP with multiple semitransparent mirrors to aggregate the original viewing angles of IP. We manufactured two prototype polyhedron-shaped display devices and evaluated the feasibility of the approach by producing a floating 3D autostereoscopic image.

2. System configuration and proposed methods

2.1 System configuration

The display device is mainly composed of IP images in the horizontal plane and multiple semitransparent mirrors, as shown in Fig. 1. By splicing the mirrors together at a specific angle, we can form a floating 3D image that can be observed by multiple people simultaneously. A high-resolution liquid crystal display (LCD) is positioned horizontally on a support, so that observers around the polyhedron-shaped display device see the autostereoscopic display from above. A micro lens array (MLA) is placed directly on the LCD, so that 3D images are produced on the screen. The polyhedron is made of multiple semitransparent mirrors; through it, observers view the mirrored IP image in the 3D space at the center of the device. The IP images are generated according to a specific geometrical relationship. After reflection by the mirror facets of the polyhedron, the images are integrated as a whole and the floating 3D display is produced, with full motion parallax and 360 degree continuous viewpoints without flipping.

 

Fig. 1 System configuration.

2.2 Floating autostereoscopic display

When standing in front of a plane mirror, an observer sees the image of a 3D object placed on the horizontal plane, as shown in Fig. 2. Surface information of the object within a certain angular domain is visible.

 

Fig. 2 Concept of floating display using a semitransparent mirror.

According to the symmetry principle of mirror imaging, the distance from an object on the horizontal plane to the central point equals the distance from its image on the vertical plane to the central point. The semitransparent mirror is tilted at 45 degrees with respect to the horizontal plane, so that the image centerline of the 3D object lies exactly in the vertical plane. The farther the object is from the center of the polyhedron-shaped display device, the higher its mirrored image floats above the horizontal plane.
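This geometric relation is easy to check numerically. The sketch below is our own illustration (not part of the original apparatus description): it reflects points on the horizontal plane across a plane mirror through the origin tilted at 45 degrees, with unit normal n = [sin 45°, 0, cos 45°]. A point at distance d from the center lands in the vertical plane at height d, matching the relation described above.

```python
import math

def reflect(p, n):
    """Reflect point p across the plane through the origin with unit normal n:
    p' = p - 2 (p . n) n  (plane-mirror reflection)."""
    dot = sum(pi * ni for pi, ni in zip(p, n))
    return [pi - 2 * dot * ni for pi, ni in zip(p, n)]

# Mirror tilted 45 degrees to the horizontal (XY) plane; normal in the XZ plane.
s = math.sin(math.radians(45))
c = math.cos(math.radians(45))
n = [s, 0.0, c]

# Points on the horizontal plane, at distance d from the device center along -X.
for d in (1.0, 2.0, 3.0):
    image = reflect([-d, 0.0, 0.0], n)
    print(image)  # image lies in the vertical plane (x ~ 0) at height ~ d
```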

We use IP to render the 3D image, and the semitransparent mirror reflects the IP image to generate the floating autostereoscopic display, which retains its 3D characteristics. In this study, the IP image lies on the horizontal plane and the mirror is tilted at 45 degrees to it, so we see the floating 3D display in front of us when standing in front of the display device.

IP is an autostereoscopic display technology with continuous viewpoints within a certain viewing angle. IP with a fly's-eye lens is geometrically accurate and has full motion parallax both horizontally and vertically. IP can therefore be adopted as the 3D image source of the proposed system to obtain a non-flipping, continuous-viewpoint, 360 degree floating autostereoscopic display: the semitransparent mirrors make the IP viewing zones continuous, and the polyhedron-shaped arrangement of mirrors realizes the 360 degree continuous 3D display.

Because the mirrors are semitransparent, objects behind them can be observed simultaneously, which may be exploited to realize mixed reality effects.

2.3 Floating autostereoscopic display with 360 degree view without flipping

We employed IP with multiple semitransparent mirrors to aggregate the original viewing angles of IP and obtain a floating 3D display with a larger viewing angle. Both the number of semitransparent mirrors and the viewing angle of the IP used in this system determine the continuousness of the viewpoints.

The observer sees a point in the mirror, and his or her eyes receive light from one or two plane mirrors. As shown in Fig. 3, the points P1 and P2 in the IP images correspond to the same point of the floating 3D image. The two mirrors reflect these points, respectively, toward the observer's eyes. Because they correspond to the same point, the observer receives consistent information.

 

Fig. 3 Position of points on the IP display and points reflected into 3D space.

The positional relationship between a point on the IP display and the corresponding point reflected into 3D space is demonstrated in Fig. 3. The property of 360 degrees of continuous viewpoints without flipping can be validated by the following analysis.

P1 is a point under mirror 1 in the reference frame, and P2 is the corresponding point obtained from P1 by rotation:

$$P_2 = R_z(\theta)\, R_{r_1}(\theta)\, P_1 \tag{1}$$

where R_{r1}(θ) is a rotation matrix about the vector r1 by angle θ, and Rz(θ) is the Euler rotation matrix about the Z-axis. r1 is an arbitrary unit vector in the XY plane, and θ is the rotation angle between mirror 1 and mirror 2 about the Z-axis.

Because the mirrors are tilted at 45 degrees to the XY plane, we set r1 to coincide with the X-axis in this study. Therefore, r1 = [1, 0, 0]^T, and the mirror normal u1 lies in the XZ plane: u1 = [sin(45°), 0, cos(45°)]^T. Equation (1) then becomes

$$P_2 = R_z(\theta)\, R_x(\theta)\, P_1 \tag{2}$$

where

$$R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{bmatrix} \tag{3}$$

and

$$R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{4}$$

Furthermore, P1' denotes the mirror-symmetric point of P1 in mirror 1, and P2' denotes the mirror-symmetric point of P2 in mirror 2:

$$P_1' = \left(I - 2\, u_1 u_1^T\right) P_1 \tag{5}$$

and

$$P_2' = \left(I - 2\, u_2 u_2^T\right) P_2 \tag{6}$$

Equations (5) and (6) can be derived from the Householder transform, with u1 and u2 as the unit normal vectors of the two mirrors [24], where

$$u_2 = R_z(\theta)\, u_1 \tag{7}$$

From Eqs. (1)–(7),

$$P_2' = \left(I - 2\, R_z(\theta)\, u_1 u_1^T R_z^T(\theta)\right) R_z(\theta)\, R_{r_1}(\theta)\, P_1 \tag{8}$$

and analytically we obtain P1' = P2'.

Equation (1) is adopted to generate the sub-IP images in our IP rendering algorithm. The result P1' = P2' means that two points complying with this predefined relationship meet at the same coordinate in 3D space after reflection by their respective mirrors. This deduction validates our IP-generation method, which is the fundamental rule of our 360 degree viewable 3D display with smooth transitions.
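The identity P1' = P2' can also be checked numerically. The sketch below is our own illustration (the sign conventions for Rx and Rz are chosen to match Eqs. (1)–(8)): it builds the rotation and Householder reflection matrices in plain Python and confirms that a point under mirror 1 and its rotated counterpart under mirror 2 map to the same point in 3D space after reflection.

```python
import math

def mat_vec(M, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def Rx(t):  # rotation about the X-axis, sign convention of Eq. (3)
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, s], [0, -s, c]]

def Rz(t):  # rotation about the Z-axis, Eq. (4)
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def householder(u):
    """I - 2 u u^T: reflection across the plane with unit normal u, Eqs. (5)-(6)."""
    return [[(1 if i == j else 0) - 2 * u[i] * u[j] for j in range(3)]
            for i in range(3)]

theta = math.radians(45)  # angle between adjacent mirrors for an 8-facet polyhedron
u1 = [math.sin(math.radians(45)), 0.0, math.cos(math.radians(45))]  # normal of mirror 1
u2 = mat_vec(Rz(theta), u1)                                         # Eq. (7)

P1 = [0.3, -0.7, 1.2]                               # arbitrary test point under mirror 1
P2 = mat_vec(Rz(theta), mat_vec(Rx(theta), P1))     # Eq. (2)

P1r = mat_vec(householder(u1), P1)                  # Eq. (5)
P2r = mat_vec(householder(u2), P2)                  # Eq. (6)
print(P1r, P2r)  # the two mirrored points coincide: P1' = P2'
```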

By rendering IP images according to this geometrical relationship, the viewing angles of multiple sub-IP images are accumulated to realize a 360 degree viewable autostereoscopic floating display. Therefore, the floating 3D autostereoscopic image with smooth transition in the center of the polyhedron-shaped display device can be observed by several people from different directions simultaneously as shown in Fig. 4.

 

Fig. 4 Floating autostereoscopic display appears in the center of the polyhedron-shaped display device. When the viewpoint moves, different perspectives of the 3D image are observed. (see Media 1)

The relationship between the number of semitransparent mirrors and the viewing angle of IP is shown in Fig. 5. A circle with radius r represents the displayed object. S1B2 and S2B1 define the viewing region boundaries for facet M2; the IP image mirrored by facet M2 can be viewed when the viewpoint lies within this region. The field of view of the eye at X is the angle spanned by B1-X-B2. When the IP viewing angle is larger than the angle φ spanned by S1B2 and S2B1, the view transition between two adjacent facets is seamless, without flipping. The relationship among the IP viewing angle, the displayed object, and the parameters of the polyhedron-shaped display is given as

 

Fig. 5 Requirement of IP viewing angle for 360 degree continuous 3D display without flipping.

$$\varphi = 180^\circ + \frac{360^\circ}{N} - 2\arccos\!\left(\frac{r}{R}\right) \tag{9}$$

where φ is the minimal viewing angle of the IP image required in this system, N is the number of semitransparent mirror facets, r is the maximal radius of the displayed object, and R is the radius of the polyhedron at the height of the bottom of the displayed object. In the extreme case of N = 8 and r = 0 mm, the IP viewing angle must be larger than 45 degrees.
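For concreteness, Eq. (9) can be evaluated directly. The short sketch below is our own illustration; it reproduces the extreme case above and the requirement used in the experiments of Section 3.4 (N = 8, R = 197.5 mm, r = 20 mm).

```python
import math

def min_ip_viewing_angle(N, r, R):
    """Minimum IP viewing angle (degrees) for flip-free 360 degree viewing,
    Eq. (9): phi = 180 + 360/N - 2*arccos(r/R). r and R share the same unit."""
    return 180.0 + 360.0 / N - 2.0 * math.degrees(math.acos(r / R))

print(min_ip_viewing_angle(8, 0.0, 197.5))   # extreme case: 45 degrees
print(min_ip_viewing_angle(8, 20.0, 197.5))  # about 56.6 degrees, cf. the
                                             # 56.7 degree bound in Section 3.4
```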

Table 1 shows specific parameters under different conditions. From this table, we can conclude that the larger the displayed object, the larger the required viewing angle; the larger the polyhedron, the smaller the required viewing angle; and the more facets the polyhedron has, the smaller the required viewing angle. Therefore, the required IP viewing angle can be reduced by using a smaller displayed object, a larger polyhedron, or more polyhedron facets.

Table 1. Parameters of the proposed method

2.4 Sub-IP image rendering

For a polyhedron with eight semitransparent-mirror facets, for example, eight sub-IP images are rendered around the center with the positional and angular relationship shown in Fig. 6. The sub-IP images are generated in such a way that each view is rotated to face outward from the circle. The angle between two adjacent sub-IP images is 45 degrees, and each sub-image is itself rotated around the vertical centerline of the object.

 

Fig. 6 Sub-IP images corresponding to multiple semitransparent mirrors.

The rotation angle constrains adjacent sub-IP images, and the eight IP images are rendered according to this geometrical requirement. For example, the upper front sub-image is the orthogonal view facing the observer. Its left neighbor is a rotated version of the upper front one: it is rotated 45 degrees about the vertical centerline of the object, then 45 degrees about the center of the screen, and finally located at the upper left of the circle, as shown in the figure. Rendering eight views of sub-IP images in this way yields the IP image arrangement shown in Fig. 6. These eight sub-IP images are computationally generated, one for each facet of the polyhedron-shaped display device.
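The arrangement described above can be sketched in code. The snippet below is our own illustration, not the authors' published renderer; the function name, the unit radius, and the starting angle are assumptions. It computes, for each of the N = 8 facets, where the sub-IP image sits on the screen and how far its view is rotated about the object's vertical centerline.

```python
import math

def sub_image_layout(N=8, dist=1.0, start_deg=90.0):
    """Position and view rotation for each sub-IP image: sub-image k is placed
    at angle k*(360/N) around the screen center (at radius `dist`) and its view
    is rotated k*(360/N) degrees about the object's vertical centerline."""
    layout = []
    for k in range(N):
        a = math.radians(start_deg + k * 360.0 / N)
        x, y = dist * math.cos(a), dist * math.sin(a)  # location on the LCD
        yaw = k * 360.0 / N                            # rotation of the rendered view
        layout.append((round(x, 3), round(y, 3), yaw))
    return layout

for entry in sub_image_layout():
    print(entry)  # adjacent sub-images differ by 45 degrees in position and view
```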

After reflection by the 45 degree tilted mirrors, the eight 3D sub-IP images combine at the center of the polyhedron-shaped display device to create an integrated floating 3D display. Each sub-IP image is reflected by its mirror and contributes its respective portion of the floating 3D image. The sub-images are integrated around the central depth plane; because of the overlap between them, parts of the sub-IP images occupy the same space and fuse into the resulting 3D image. The eight sub-IP images are sparsely arranged on the screen, but after reflection their images gather closely into a whole.

3. Experiments and results

3.1 Experiment setup

We manufactured two prototype devices to verify the proposed method. The polyhedron made of semitransparent mirrors was precisely designed; each facet is a triangular semitransparent mirror manufactured with a computer numerical control (CNC) machine. The polyhedron-shaped display device, shown in Fig. 7, is 20 cm in height and 36.6 cm (14.4 inch) in diameter. To guarantee that the facets were accurately tilted at 45 degrees, the top angle of every facet was set at 32.65 degrees, and the bottom side length of every facet is 14.5 cm. Therefore, an object narrower than 14.5 cm can be shown completely by this polyhedron-shaped display device. The object in the floating display appears 15 cm above the screen.

 

Fig. 7 Experimental setup; (a) frontal view; (b) autostereoscopic display from eight surrounding viewpoints.

We designed and performed experiments to evaluate the feasibility and effect of the major properties of the proposed method: the surrounding floating 3D effect and continuous transitions without flipping. The first set demonstrates the 3D parallax and floating 3D effect of the proposed system from multiple directions. The MLA in the first set has a pitch of 1.486 mm and a viewing angle of 38 degrees. The 22.2 inch LCD (IBM, T221) has a resolution of 3840 × 2400 pixels and a pixel density of 204 pixels per inch. The surrounding-perspective effect is shown in Fig. 7; the rear part of the display can be seen from the back of the display device.

We conducted the second set of experiments to verify the feasibility of 360 degree continuous viewpoints without flipping. We do not have a lens array with such a large viewing angle that is also as big as the polyhedron-shaped mirrors; the largest-viewing-angle lens array available to us has a pitch of 2.32 mm and a viewing angle of 57 degrees, but is only 10 inches in diagonal, smaller than the polyhedron. The LCD panel (Apple, iPad 4) has a resolution of 2048 × 1536 pixels and a pixel density of 264 pixels per inch.

The pictures and movies were captured with a digital video camera (SONY, HDR-TD10).

3.2 IP image rendering

We used a computed tomography (CT) scan of a human head to evaluate the motion parallax of the developed IP display device. The image is calculated from a volumetric CT data set (256 × 256 pixels × 94 slices).

The IP image rendered to generate the 3D image is shown in Fig. 8(a); the eight sub-IP images are arranged around the center. For the polyhedron with eight side facets, a 45 degree rotation around the centerline of the object itself and a 45 degree rotation around the center of the LCD are used. Each IP image corresponds to its mirror facet on the polyhedron. Figure 8(b) shows an elemental image in detail. We used a hexagonal lens array to obtain full motion parallax, maximize use of the LCD pixels, and minimize IP flipping.

 

Fig. 8 Rendered IP elemental images. (a) Eight IP elemental images for eight facets of polyhedron; (b) IP elemental image in detail.

3.3 Floating 3D autostereoscopic display with full motion parallax

Our first set of experiments demonstrates the 3D parallax and floating 3D effect of the proposed system from multiple directions. The full-parallax effect and floating capability can be seen in Fig. 9; the pictures were captured from multiple surrounding directions.

 

Fig. 9 Motion parallax of the IP autostereoscopic 3D image taken from various directions. The labels denote the relative position of the observer: (a) upper; (b) left; (c) lower; (d) right; (e) center. (see Media 2)

The 3D display could be observed clearly from different directions by several people simultaneously, and it remains still in 3D space when the viewpoint moves up or down, left or right. Different perspective information about the target can be obtained by moving around the 3D display.

3.4 Floating autostereoscopic display of 360 degree continuous viewpoints without flipping

We implemented the second set of experiments to evaluate the feasibility of 360 degree continuous viewpoints without flipping. The lens array with the maximum viewing angle we could obtain is 10 inches in diagonal with a 57 degree viewing angle. With three such lens arrays and three iPad LCD panels (which have frames around the LCD), we performed the experiment and showed two transitions of the continuous transition effect.

The radius and number of facets of the polyhedron used in our experiment are 197.5 mm and 8, respectively. According to Eq. (9), if the viewing angle of the IP sub-images is larger than 56.7 degrees and r < 20 mm, the floating autostereoscopic image can be viewed from 360 degrees without flipping.

The experimental setup and an overall view of the display are shown in Fig. 10. The pictures were captured from right to left around three adjacent facets. Clearly, flipping does not occur between any two adjacent facets, as shown in Fig. 10(c). Although we only present a continuous 3D image without flipping among three adjacent facets in this experiment, theoretically the proposed display device can show the full 360 degrees of 3D information as the observer moves continuously around the 3D image.

 

Fig. 10 Floating autostereoscopic display without flipping between adjacent viewing zones. (a) Overall view. (b) Picture taken from a distance between adjacent facets. (c) Pictures taken continuously from right to left around three adjacent facets; flipping does not occur between any two adjacent facets. (see Media 3)

4. Discussion and summary

The experimental results show that a floating autostereoscopic display viewable from 360 degrees can be realized by the proposed approach. To the best of our knowledge, although transflective 3D displays have been studied by other researchers, a polyhedron-shaped, 360 degree continuously viewable (without flipping) floating autostereoscopic display device has not been reported before. The combination of IP and multiple semitransparent mirrors is an effective way to achieve a floating autostereoscopic display viewable from 360 degrees, and multiple people can view the floating 3D display from different viewpoints simultaneously. Using three large-viewing-angle lens arrays and three iPad LCD panels, the second set of experiments demonstrated two transitions of the continuous-viewing effect. This technique can be adopted for mixed-reality presentation and advertisement, and it provides a promising platform for presentation in many contexts, such as museums and expositions.

The resolution and smoothness of the floating 3D display are related to the resolution of the sub-IP images, the reflectance/transmittance characteristics of the mirrors, and the assembly accuracy of all components of the apparatus. In this study, we focused on validating the principle of the proposed polyhedron-shaped, flipping-free, 360 degree viewable autostereoscopic display using IP and multiple mirrors. Together with viewing angle, stereo depth and resolution are important factors in the IP research field, and advances in IP may improve the visual effect of the proposed floating autostereoscopic display in the future.

Images from adjacent facets may interfere with each other, and the LCD screen may come into sight if the observer views the device head-on horizontally; the image from an adjacent facet falls outside the current viewing zone when observed from a lower position. Although a smooth 360 degree 3D display can be achieved, an observable line exists between adjacent facets due to the mechanical splice. The user experience would improve with fewer facets: if the IP display in our device had a larger field of view, the desired performance could be achieved with fewer facets on the polyhedron-shaped display device. A combined-lens design can be adopted for the MLA of the IP display to enlarge the viewing angle. An animated version of the proposed method will be developed, realizing an animated floating autostereoscopic display. Using OpenGL and the Cg shading language, graphics processing unit (GPU) parallel processing can be adopted to reduce processing time [25]; rendering the series of sub-IP images and the arrays of IP elemental images in parallel can dramatically reduce the time cost [26, 27]. In addition, by integrating human-computer interaction technologies such as gesture recognition and voice control, a real-time interactive 3D display system with a user-friendly interface can be achieved.

In conclusion, we have developed a polyhedron-shaped floating autostereoscopic display device using the IP technique and multiple semitransparent mirrors. The feasibility study indicated that the aggregation of sub-IP images is sufficient to produce a smooth floating 3D display. The main contribution of this work is the modification of the autostereoscopic technique so that it can be applied to realize a floating display system. With this method, omni-directional display of a 3D object using integral imaging can be achieved without mechanical movement. Combining sub-IP images with mirrors to obtain a complete 360 degree 3D image is an innovative method, and the use of mirrors to extend the viewing angle is promising and may be adopted in related research fields.

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 81271735, 61361160417, and 81427803) and a Grant-in-Aid of Project 985.

References and links

1. B. G. Blundell, A. J. Schwarz, and D. K. Horrell, “Volumetric three-dimensional display systems: their past, present, and future,” Eng. Sci. Educ. J. 2(5), 196–200 (1993). [CrossRef]  

2. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “Rendering for an interactive 360° light field display,” ACM Trans. Graph. 26(3), 40 (2007). [CrossRef]  

3. H. Saito, H. Kimura, S. Shimada, T. Naemura, J. Kayahara, S. Jarusirisawad, V. Nozick, H. Ishikawa, T. Murakami, J. Aoki, A. Asano, T. Kimura, M. Kakehata, F. Sasaki, H. Yashiro, M. Mori, K. Torizuka, and K. Ino, “Laser-plasma scanning 3D display for putting digital contents in free space,” Proc. SPIE 6803, 680309 (2008). [CrossRef]  

4. T. Yendo, T. Fujii, M. Tanimoto, and M. P. Tehrani, “The Seelinder: cylindrical 3D display viewable from 360 degrees,” J. Vis. Commun. Image 21(5-6), 586–594 (2010). [CrossRef]  

5. R. Otsuka, T. Hoshino, and Y. Horry, “Transpost: a novel approach to the display and transmission of 360 degrees-viewable 3D solid images,” IEEE Trans. Vis. Comput. Graph. 12(2), 178–185 (2006). [CrossRef]   [PubMed]  

6. Y. Masutani, M. Iwahara, O. Samuta, Y. Nishi, N. Suzuki, M. Suzuki, T. Dohi, H. Iseki, and K. Takakura, “Development of integral photography-based enhanced reality visualization system for surgical support,” Proc. ISCAS95, 16–17(1995).

7. P. A. Simonson, “Display device for producing quasi-three-dimensional images,” U.S. Patent 2008/0144175 A1 (Jun.19, 2008).

8. O. Bimber, B. Frohlich, D. Schmalstieg, and L. M. Encarnacao, “The virtual showcase,” IEEE Comput. Graph. Appl. 21(6), 48–55 (2001). [CrossRef]  

9. S. Beckhaus, F. Ledermann, and O. Bimber, “Storytelling and content presentation with the virtual showcase in a museum context,” Proc. Int’l Committee for Documentation, Int’l Council of Museums (2003).

10. H. Kim, S. Nagao, S. Maekawa, and T. Naemura, “MRsionCase a glasses-free mixed reality showcase for surrounding multiple viewers,” ACM siggraph Asia 2012, Singapore (2012).

11. Y. T. Chiu and M. T. Yang, “Virtual multiple-perspective display using pyramidal or conical showcase,” Adv. Intell. Syst. Appl. 21, 431–438 (2013). [CrossRef]  

12. M. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. 7(4), 821–825 (1908).

13. H. Liao, T. Inomata, I. Sakuma, and T. Dohi, “3-D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay,” IEEE Trans. Biomed. Eng. 57(6), 1476–1486 (2010). [CrossRef]   [PubMed]  

14. H. Liao, N. Hata, S. Nakajima, M. Iwahara, I. Sakuma, and T. Dohi, “Surgical navigation by autostereoscopic image overlay of integral videography,” IEEE Trans. Inf. Technol. Biomed. 8(2), 114–121 (2004). [CrossRef]   [PubMed]  

15. J.-H. Park, S.-W. Min, S. Jung, and B. Lee, “Analysis of viewing parameters for two display methods based on integral photography,” Appl. Opt. 40(29), 5217–5232 (2001). [CrossRef]   [PubMed]  

16. F. Okano, H. Hoshino, J. Arai, and I. Yuma, “Three-dimensional video system based on integral photography,” Opt. Eng. 38(6), 1072–1077 (1999). [CrossRef]  

17. H. Kim, J. Hahn, and B. Lee, “The use of a negative index planoconcave lens array for wide-viewing angle integral imaging,” Opt. Express 16(26), 21865–21880 (2008). [CrossRef]   [PubMed]  

18. S. Jung, J. Park, and B. Lee, “Viewing-angle-enhanced integral 3-D imaging using double display devices with masks,” Opt. Eng. 41(10), 2389–2390 (2002). [CrossRef]  

19. S. Jung, J.-H. Park, H. Choi, and B. Lee, “Wide-viewing integral three-dimensional imaging by use of orthogonal polarization switching,” Appl. Opt. 42(14), 2513–2520 (2003). [CrossRef]   [PubMed]  

20. S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Study for wide-viewing integral photography using an aspheric Fresnel-lens array,” Opt. Eng. 41(10), 2572–2576 (2002). [CrossRef]  

21. R. Lopez-Gulliver, S. Yoshida, S. Yano, and N. Inoue, “gCubik: real-time integral image rendering for a cubic 3D display,” in Proceedings of ACM SIGGRAPH 2009 Emerging Technologies (2009), pp.11.

22. D. Miyazaki, N. Akasaka, K. Okoda, Y. Maeda, and T. Mukai, “Floating three-dimensional display viewable from 360 degrees,” Proc. SPIE 8288, 82881H (2012). [CrossRef]  

23. M.-U. Erdenebat, G. Baasantseren, N. Kim, K. C. Kwon, J. Byeon, K.-H. Yoo, and J.-H. Park, “Integral-floating display with 360 degree horizontal viewing angle,” J. Opt. Soc. Korea 16(4), 365–371 (2012). [CrossRef]  

24. A. S. Householder, “Unitary triangularization of a nonsymmetric matrix,” J. ACM 5(4), 339–342 (1958). [CrossRef]  

25. J. Wang, H. Suenaga, K. Hoshi, L. Yang, E. Kobayashi, I. Sakuma, and H. Liao, “Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery,” IEEE Trans. Biomed. Eng. 61(4), 1295–1304 (2014). [CrossRef]   [PubMed]  

26. H. Liao, M. Iwahara, N. Hata, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express 12(6), 1067–1076 (2004). [CrossRef]   [PubMed]  

27. H. Liao, M. Iwahara, T. Koike, N. Hata, I. Sakuma, and T. Dohi, “Scalable high-resolution integral videography autostereoscopic display with a seamless multiprojection system,” Appl. Opt. 44(3), 305–315 (2005). [CrossRef]   [PubMed]  

References


  1. B. G. Blundell, A. J. Schwarz, and D. K. Horrell, “Volumetric three-dimensional display systems: their past, present, and future,” Eng. Sci. Educ. J. 2(5), 196–200 (1993). [CrossRef]
  2. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “Rendering for an interactive 360° light field display,” ACM Trans. Graph. 26(3), 40 (2007). [CrossRef]
  3. H. Saito, H. Kimura, S. Shimada, T. Naemura, J. Kayahara, S. Jarusirisawad, V. Nozick, H. Ishikawa, T. Murakami, J. Aoki, A. Asano, T. Kimura, M. Kakehata, F. Sasaki, H. Yashiro, M. Mori, K. Torizuka, and K. Ino, “Laser-plasma scanning 3D display for putting digital contents in free space,” Proc. SPIE 6803, 680309 (2008). [CrossRef]
  4. T. Yendo, T. Fujii, M. Tanimoto, and M. P. Tehrani, “The Seelinder: cylindrical 3D display viewable from 360 degrees,” J. Vis. Commun. Image 21(5-6), 586–594 (2010). [CrossRef]
  5. R. Otsuka, T. Hoshino, and Y. Horry, “Transpost: a novel approach to the display and transmission of 360 degrees-viewable 3D solid images,” IEEE Trans. Vis. Comput. Graph. 12(2), 178–185 (2006). [CrossRef] [PubMed]
  6. Y. Masutani, M. Iwahara, O. Samuta, Y. Nishi, N. Suzuki, M. Suzuki, T. Dohi, H. Iseki, and K. Takakura, “Development of integral photography-based enhanced reality visualization system for surgical support,” Proc. ISCAS95, 16–17 (1995).
  7. P. A. Simonson, “Display device for producing quasi-three-dimensional images,” U.S. Patent 2008/0144175 A1 (Jun. 19, 2008).
  8. O. Bimber, B. Frohlich, D. Schmalstieg, and L. M. Encarnacao, “The virtual showcase,” IEEE Comput. Graph. Appl. 21(6), 48–55 (2001). [CrossRef]
  9. S. Beckhaus, F. Ledermann, and O. Bimber, “Storytelling and content presentation with the virtual showcase in a museum context,” in Proc. Int’l Committee for Documentation, Int’l Council of Museums (2003).
  10. H. Kim, S. Nagao, S. Maekawa, and T. Naemura, “MRsionCase: a glasses-free mixed reality showcase for surrounding multiple viewers,” in ACM SIGGRAPH Asia 2012, Singapore (2012).
  11. Y. T. Chiu and M. T. Yang, “Virtual multiple-perspective display using pyramidal or conical showcase,” Adv. Intell. Syst. Appl. 21, 431–438 (2013). [CrossRef]
  12. M. G. Lippmann, “Épreuves réversibles donnant la sensation du relief,” J. Phys. 7(4), 821–825 (1908).
  13. H. Liao, T. Inomata, I. Sakuma, and T. Dohi, “3-D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay,” IEEE Trans. Biomed. Eng. 57(6), 1476–1486 (2010). [CrossRef] [PubMed]
  14. H. Liao, N. Hata, S. Nakajima, M. Iwahara, I. Sakuma, and T. Dohi, “Surgical navigation by autostereoscopic image overlay of integral videography,” IEEE Trans. Inf. Technol. Biomed. 8(2), 114–121 (2004). [CrossRef] [PubMed]
  15. J.-H. Park, S.-W. Min, S. Jung, and B. Lee, “Analysis of viewing parameters for two display methods based on integral photography,” Appl. Opt. 40(29), 5217–5232 (2001). [CrossRef] [PubMed]
  16. F. Okano, H. Hoshino, J. Arai, and I. Yuma, “Three-dimensional video system based on integral photography,” Opt. Eng. 38(6), 1072–1077 (1999). [CrossRef]
  17. H. Kim, J. Hahn, and B. Lee, “The use of a negative index planoconcave lens array for wide-viewing angle integral imaging,” Opt. Express 16(26), 21865–21880 (2008). [CrossRef] [PubMed]
  18. S. Jung, J. Park, and B. Lee, “Viewing-angle-enhanced integral 3-D imaging using double display devices with masks,” Opt. Eng. 41(10), 2389–2390 (2002). [CrossRef]
  19. S. Jung, J.-H. Park, H. Choi, and B. Lee, “Wide-viewing integral three-dimensional imaging by use of orthogonal polarization switching,” Appl. Opt. 42(14), 2513–2520 (2003). [CrossRef] [PubMed]
  20. S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Study for wide-viewing integral photography using an aspheric Fresnel-lens array,” Opt. Eng. 41(10), 2572–2576 (2002). [CrossRef]
  21. R. Lopez-Gulliver, S. Yoshida, S. Yano, and N. Inoue, “gCubik: real-time integral image rendering for a cubic 3D display,” in Proceedings of ACM SIGGRAPH 2009 Emerging Technologies (2009), p. 11.
  22. D. Miyazaki, N. Akasaka, K. Okoda, Y. Maeda, and T. Mukai, “Floating three-dimensional display viewable from 360 degrees,” Proc. SPIE 8288, 82881H (2012). [CrossRef]
  23. M.-U. Erdenebat, G. Baasantseren, N. Kim, K. C. Kwon, J. Byeon, K.-H. Yoo, and J.-H. Park, “Integral-floating display with 360 degree horizontal viewing angle,” J. Opt. Soc. Korea 16(4), 365–371 (2012). [CrossRef]
  24. A. S. Householder, “Unitary triangularization of a nonsymmetric matrix,” J. ACM 5(4), 339–342 (1958). [CrossRef]
  25. J. Wang, H. Suenaga, K. Hoshi, L. Yang, E. Kobayashi, I. Sakuma, and H. Liao, “Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery,” IEEE Trans. Biomed. Eng. 61(4), 1295–1304 (2014). [CrossRef] [PubMed]
  26. H. Liao, M. Iwahara, N. Hata, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express 12(6), 1067–1076 (2004). [CrossRef] [PubMed]
  27. H. Liao, M. Iwahara, T. Koike, N. Hata, I. Sakuma, and T. Dohi, “Scalable high-resolution integral videography autostereoscopic display with a seamless multiprojection system,” Appl. Opt. 44(3), 305–315 (2005). [CrossRef] [PubMed]

[Crossref]

D. Miyazaki, N. Akasaka, K. Okoda, Y. Maeda, and T. Mukai, “Floating three-dimensional display viewable from 360 degrees,” Proc. SPIE 8288, 82881H (2012).
[Crossref]

Other (5)

Y. Masutani, M. Iwahara, O. Samuta, Y. Nishi, N. Suzuki, M. Suzuki, T. Dohi, H. Iseki, and K. Takakura, “Development of integral photography-based enhanced reality visualization system for surgical support,” Proc. ISCAS95, 16–17(1995).

P. A. Simonson, “Display device for producing quasi-three-dimensional images,” U.S. Patent 2008/0144175 A1 (Jun.19, 2008).

S. Beckhaus, F. Ledermann, and O. Bimber, “Storytelling and content presentation with the virtual showcase in a museum context,” Proc. Int’l Committee for Documentation, Int’l Council of Museums (2003).

H. Kim, S. Nagao, S. Maekawa, and T. Naemura, “MRsionCase a glasses-free mixed reality showcase for surrounding multiple viewers,” ACM siggraph Asia 2012, Singapore (2012).

R. Lopez-Gulliver, S. Yoshida, S. Yano, and N. Inoue, “gCubik: real-time integral image rendering for a cubic 3D display,” in Proceedings of ACM SIGGRAPH 2009 Emerging Technologies (2009), pp.11.

Supplementary Material (3)

» Media 1: MOV (444 KB)     
» Media 2: MOV (2315 KB)     
» Media 3: MOV (2275 KB)     


Figures (10)

Fig. 1
Fig. 1 System configuration.
Fig. 2
Fig. 2 Concept of floating display using a semitransparent mirror.
Fig. 3
Fig. 3 Position of points on the IP display and points reflected into 3D space.
Fig. 4
Fig. 4 Floating autostereoscopic display appears in the center of the polyhedron-shaped display device. When the viewpoint moves, different perspectives of the 3D image are observed. (see Media 1)
Fig. 5
Fig. 5 Requirement of IP viewing angle for 360 degree continuous 3D display without flipping.
Fig. 6
Fig. 6 Sub-IP images corresponding to multiple semitransparent mirrors.
Fig. 7
Fig. 7 Experimental setup: (a) frontal view; (b) autostereoscopic display from eight surrounding viewpoints.
Fig. 8
Fig. 8 Rendered IP elemental images. (a) Eight IP elemental images for eight facets of polyhedron; (b) IP elemental image in detail.
Fig. 9
Fig. 9 Motion parallax of the IP autostereoscopic 3D image taken from various directions. The numbers denote the relative position of the observer: (a) upper; (b) left; (c) lower; (d) right; (e) center. (see Media 2)
Fig. 10
Fig. 10 Floating autostereoscopic display without flipping between adjacent viewing zones. (a) Overall view; (b) picture taken from a viewpoint between adjacent facets; (c) pictures taken continuously from right to left around three adjacent facets; flipping does not occur between any two adjacent facets. (see Media 3)

Tables (1)

Table 1 Parameters of the proposed method

Equations (9)

$P_2 = R_z(\theta)\, R_{r_1}(\theta)\, P_1$    (1)

$P_2 = R_z(\theta)\, R_x(\theta)\, P_1$    (2)

$R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}$    (3)

$R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$    (4)

$P_1' = \left( I - 2 u_1 u_1^T \right) P_1$    (5)

$P_2' = \left( I - 2 u_2 u_2^T \right) P_2$    (6)

$u_2 = R_z(\theta)\, u_1$    (7)

$P_2' = \left( I - 2 R_z(\theta)\, u_1 u_1^T R_z^T(\theta) \right) R_z(\theta)\, R_{r_1}(\theta)\, P_1$    (8)

$\varphi = 180^\circ + 360^\circ / N - 2 \arccos(r/R)$    (9)
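As a numerical sanity check on the reflection relations above (not part of the paper), the following NumPy sketch verifies that reflecting about the rotated mirror normal $u_2 = R_z(\theta) u_1$ is equivalent to conjugating the Householder reflection $I - 2 u_1 u_1^T$ by $R_z(\theta)$, which is the identity used to obtain Eq. (8). The specific values of $\theta$ and $u_1$ are arbitrary examples.

```python
import numpy as np

def R_z(theta):
    """Rotation about the z-axis, Eq. (4)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def householder(u):
    """Reflection about the plane with unit normal u, as in Eqs. (5)-(6)."""
    u = u / np.linalg.norm(u)
    return np.eye(3) - 2.0 * np.outer(u, u)

theta = np.deg2rad(45.0)                      # example rotation angle
u1 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)   # example mirror normal (unit)
u2 = R_z(theta) @ u1                          # rotated normal, Eq. (7)

# Reflecting the rotated point equals the conjugated reflection of Eq. (8):
lhs = householder(u2) @ R_z(theta)
rhs = (np.eye(3) - 2.0 * R_z(theta) @ np.outer(u1, u1) @ R_z(theta).T) @ R_z(theta)
assert np.allclose(lhs, rhs)

# Both paths map an example display point P1 to the same reflected point:
P1 = np.array([1.0, 2.0, 3.0])
assert np.allclose(lhs @ P1, rhs @ P1)
```

Because $R_z(\theta)$ is orthogonal, $u_2 u_2^T = R_z(\theta) u_1 u_1^T R_z^T(\theta)$, so the two expressions agree exactly up to floating-point rounding.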
