Optica Publishing Group

Adaptive viewing distance in super multi-view displays using aperiodic 3-D pixel location and dynamic view indices

Open Access

Abstract

In the 3-D display field, super multi-view (SMV) technology has drawn keen attention owing to its advanced 3-D features: smooth motion parallax, wide depth of focus, and the possibility of alleviating visual discomfort. Nevertheless, its applications are limited by narrow viewing lobes (VLs), which are unavoidable if an excessive decrease of the lateral resolution is to be prevented. To expand VLs, many head-tracked multi-view display technologies have been developed for decades, but a restrictive viewing distance (VD) still remains one of the most critical drawbacks of SMV technology. This paper proposes a novel method that can adjust the optimal VD (OVD) in flat-panel-based SMV displays without mechanical changes or loss of multi-view properties. To this end, it exploits partially aperiodically located sets of subpixels to define 3-D pixels, together with dynamically changing view indices for those 3-D pixels. As a result, the front and rear bounds of the initial VD become adjustable, and the VL becomes dynamically expandable when a target OVD is renewed in real time using head-tracking. In the experiments, the quantitative VL expansion and the qualitative quality of perceptual images are compared, and the feasibility of supporting an adaptive VD in real time is further investigated. In addition, our prototype head-tracked SMV system is introduced as an advanced application towards an omnidirectionally free VL.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Autostereoscopy [1–4] has been under development for more than a century as one of the main streams in the fields of 3-D video and display. This is because it has many practical advantages regarding cost-efficiency, compact optical systems, and intuitive implementation for providing three-dimensional (3-D) scenes without wearable gear, compared to alternatives such as holographic displays [5–7] and integral photography [8–10]. Especially in the case of multi-view autostereoscopy, each viewpoint image of the multi-view set, integrated on a flat panel, is separately and clearly projected to viewers via optical mechanisms such as an array of cylindrical lenses (lenticular) or a mask of parallax barriers (PBs). Thus, it achieves further physical comfort for viewers by solving the glasses-free and motion-parallax issues to a certain extent. In addition, super multi-view (SMV) technology [11–13] has recently drawn keen attention owing to its potential to alleviate the vergence-accommodation conflict (VAC) problem. It is well known that the lack of an accommodation cue elicits the VAC problem, which causes severe visual discomfort in 3-D displays [14]. The SMV technology also has many advantages for ideal 3-D perception, such as smooth motion parallax [15] and a wider depth of focus [16–18]. For these reasons, SMV autostereoscopic displays are currently regarded as one of the most promising glasses-free 3-D visualization approaches that would offer a high 3-D Quality of Experience in future daily lives permeated by 3-D multimedia [19–21].

Nevertheless, its academic and commercial applications are still rarely deployed for the following reasons. First, it is vulnerable to a viewer’s position change, since the optical designs of autostereoscopic displays are based on target viewing lobes (VLs), within which viewers can see clear 3-D scenes without any flipping (or pseudoscopic) images. Second, the optical mechanisms of multi-view autostereoscopy subdivide the lateral resolution of the underlying flat panel into multiple viewpoint images [22]. Thus, for a given flat panel, the larger the number of independent views, the lower the lateral resolution of each view image. Moreover, to satisfy the SMV condition, a viewpoint interval (VPI) smaller than the pupil diameter is required to evoke accommodation responses [11,23,24]. As a result, constructing a narrow VL with a limited number of views is unavoidable, and thus expanding the degree of freedom of the VL is regarded as a key assignment for SMV autostereoscopic displays to succeed in the consumer market.

1.1. Related works

From this point of view, autostereoscopy has been developed according to how the VL is constructed, and it can be mainly categorized as follows:

  • Two-view displays – viewing zones (VZs) for the left and right eyes are formed side by side at the optimal viewing distance (OVD), and a valid VL consisting of the two VZs is discretely repeated in the horizontal direction.
  • Head-tracked two-view displays – an initial VL is constructed in the same manner, but it is movable in the horizontal direction according to a viewer’s head position.
  • Multi-view displays – multiple VZs of the multiple views construct a finitely wide VL in which viewers can perceive a 3-D scene with motion parallax while moving horizontally.
  • Head-tracked multi-view displays – an initially wide VL is constructed in the same manner, but it is horizontally movable or deformable according to a viewer’s position.

The earliest autostereoscopic displays, two-view displays, suffered from discrete dead zones, in which the eyes are reversely located with respect to the VZs of the left and right views, even though a valid VL is horizontally repeated [25]. To deal with this problem, various head-tracked two-view displays were proposed. These displays could move their VLs correctly by switching the left- and right-view indices or by mechanically moving the optical system according to the viewer’s head position [26–28]. However, they were unsuitable for viewers whose interpupillary distance (IPD) differs significantly from the statistical mean IPD of human beings (65 mm) [29], on which most two-view designs were based.

As another alternative, multi-view displays were developed to deal with the same dead-zone problem regardless of IPD variation [30–32]. In these displays, the VZs of multiple views abut at the OVD while constructing a wide VL, and viewers can freely see correct 3-D images with natural motion parallax within the VL. However, this group was not free from the aforementioned reduction of the lateral resolution, the price of using many views.

For this reason, a fused approach that combines head-tracking and multi-view technologies was also proposed. This approach efficiently compensates for the drawbacks of each technology. For instance, Woodgate et al. [33] proposed a three-view head-tracked display system, in which each VZ has a width of two-thirds of the mean IPD, and one view shows the image for one eye while the other two views show the image for the other eye according to the viewer’s eye position. At the expense of only one more view, it could cope with IPD variations from 40 mm to 80 mm for all viewers. In the same manner, Dodgson [34] proposed a six-view head-tracked display. It could additionally remove the inter-view dark zones at the OVD by overlapping adjacent views by 100%. Similarly, Kim et al. [35] additionally achieved a substantial reduction of point crosstalk as well as enhanced uniformity of image brightness by exploiting the eight-view property.

1.2. Current challenges

Owing to the above developments, current multi-view (including SMV) autostereoscopic displays barely suffer from a viewer’s horizontal position change. However, the restrictive range of viewing distance (VD) in the depth direction is still one of the critical factors that make multi-view autostereoscopic displays difficult to substitute for eyewear-assisted 3-D displays in the commercial market, because it severely restrains the valid space for watching the best-quality 3-D images. Therefore, further investigations to provide a VD-free environment are strongly required towards an omnidirectionally free VL for practical use in both academic and commercial applications.

Recently, a few approaches that deal with a viewer’s VD change in multi-view or two-view autostereoscopic displays were proposed. For instance, Yoon et al. [36] proposed a method that manipulates the gap size between the display panel and the PBs of a four-view display. However, they did not suggest a specific scheme for implementing the variable gap, and their system cannot avoid a thick optical screen to accommodate the gap movement. Suzuki et al. [37] suggested a system that controls the pitch size and aperture positions of liquid crystal (LC) barriers for a two-view display using an eye-tracking system. Since their method only manipulates the LC barriers, a compact system is realizable, and it can provide a correct 3-D image without a pseudo-stereoscopic image wherever the viewer is in the xz-plane. However, this approach is not free from the latency and stability concerns elicited by mechanical changes, and it requires their special LC barriers. As another approach, we also previously proposed a subpixel rearrangement method [38,39]. In this method, the test display originally has twelve views, but only left and right images are allocated to two grouped sets of views, where the numbers and positions of the adjacent views for the grouping are determined by the current position of the viewer. This approach achieves the same effect as the method proposed by Suzuki et al. [37] without any mechanical changes. However, it critically loses the multi-view property, so that it can only provide stereoscopic 3-D images despite being a twelve-view display.

1.3. Objective and contributions

In this paper, we propose a novel method that can eliminate the limit of VD in flat-panel-based SMV autostereoscopic displays. Since it is based on a dynamic subpixel rearrangement technology, it requires neither additional hardware components nor mechanical changes of the optical system, unlike the methods of [36,37]. Thus, the proposed method is free from concerns over latency, stability, and system volume. In addition, the proposed method can preserve the initial multi-view properties of a given SMV (or multi-view) display, such as the number of views and the VPI, unlike our previous method [38,39]. The only drawback of our approach is the increased computational complexity and memory required for the dynamic subpixel rearrangement, but this burden is being relieved by the rapid development of computer graphics and GPU-based rendering technologies [40–42]. Thus, we do not discuss these issues in this paper. Instead, we focus on the principles and key algorithms of the proposed method.

In order to achieve this objective, our method promptly constructs the underlying 3-D pixels by aperiodically grouping successive subpixels according to a target VD. Also, it adaptively allocates one of various sets of view indices to each 3-D pixel instead of using a fixed set of view indices. These breakthroughs enable viewers to see a correct 3-D scene at any VD without any flipping images. Basically, our approach can be used to reform the range of valid VD, but it is even more beneficial for realizing a VD-free environment by renewing the target VD in real time. This indicates that an omnidirectionally VL-free system is also achievable in cooperation with the conventional head-tracked multi-view display technique that expands the horizontal freedom of VLs. In addition, our approach can directly make up for the critical concern of SMV displays, namely the unavoidably restrictive VLs imposed by the SMV condition [13,23,24] when only a limited number of views is available for sustaining an acceptable lateral resolution in each view image. Therefore, we ultimately expect that our approach will boost the development of eye-tracked SMV displays towards psychologically and physically comfortable 3-D viewing.

The remaining parts are organized as follows. In Section 2, we review the basics of multi-view displays regarding VL construction and the perceptual images observed at each position in a VL. Based on these basics, the detailed algorithms of the proposed approach are described in Section 3. The experimental setups and the quantitative and qualitative results are then presented in Section 4, in which we also introduce our prototype of an omnidirectionally VL-free SMV display as an advanced application. Eventually, we draw some guidelines and future work for the proposed method in Section 5 and conclude this paper in Section 6.

2. Basics of multi-view displays

In this section, we review configurations of potential VLs according to major parameters of multi-view displays and perceptual images to be seen at sample positions.

2.1. Construction of viewing zones in multi-view autostereoscopic displays

Configurations of potential VLs in multi-view displays can be simulated with three optical design parameters: the OVD (do), the screen width (wl), and the width of the VL at the OVD (wo) [30,43]. The VL configurations can be categorized into three types according to the two parameters, wl and wo, as follows; sample configurations are depicted in Fig. 1.

  • If wl < wo, the width of the VL is convergent in front of the OVD but divergent behind it (Fig. 1(a)).
  • If wl = wo, the width of the VL is convergent in front of the OVD but parallel behind it (Fig. 1(b)).
  • If wl > wo, the width of the VL is convergent both in front of and behind the OVD (Fig. 1(c)).

Also, a precise configuration can be simulated with the line equations as follows:

x = ±(z(w0 − wl)/(2d0) + wl/2), x = ±(z(w0 + wl)/(2d0) − wl/2),
where x and z stand for a horizontal distance and a depth-directional distance from the origin, respectively. Each axis is depicted in Fig. 1(a) as a grey-dotted arrow.
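These boundary lines can be checked numerically. The Python sketch below encodes the two line families above (as reconstructed here) and tests whether a position (x, z) lies inside the valid VL; the numeric values in the usage note are illustrative, not the specifications of any display in this paper.

```python
def vl_half_width(z, d0, wl, w0):
    """Half-width of the valid viewing lobe at depth z.

    The lobe is bounded by the two line families
      x = +/- ( z*(w0 - wl)/(2*d0) + wl/2 )   (screen edge to same-side VL edge)
      x = +/- ( z*(w0 + wl)/(2*d0) - wl/2 )   (screen edge to opposite VL edge)
    and is their intersection, so the half-width is the smaller bound.
    """
    b1 = z * (w0 - wl) / (2 * d0) + wl / 2
    b2 = z * (w0 + wl) / (2 * d0) - wl / 2
    return min(b1, b2)


def inside_vl(x, z, d0, wl, w0):
    """True when the position (x, z) lies inside the valid VL."""
    half = vl_half_width(z, d0, wl, w0)
    return half > 0 and abs(x) <= half
```

For example, with the illustrative values d0 = 600 mm, wl = 350 mm, and w0 = 130 mm (a wl > wo configuration as in Fig. 1(c)), the half-width at the OVD is w0/2 = 65 mm, and positions closer than the front bound fall outside the lobe.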


Fig. 1 Configurations of the viewing lobe (VL) in multi-view autostereoscopic displays. Three different types can be designed according to the optimal viewing distance (OVD) (do), the screen width (wl), and a target width of the VL at the OVD (wo): (a) VL wider than the screen, (b) VL the same width as the screen, and (c) VL narrower than the screen.


The VL configurations can be chosen selectively. If we assume that the same number of views is used under an adequate 3-D image resolution, the type of Fig. 1(a) is suitable for providing a wide VL, with a wide VPI, via a large-screen display. In contrast, the type of Fig. 1(c) is suitable for satisfying the SMV condition [16] for a high quality of depth sensation via an SMV display. However, note that there must be a front bound, zfront = d0·wl/(wl + w0), in every VL regardless of the configuration type, so the closest VD is always restricted. Moreover, there is even a rear bound, zback = d0·wl/(wl − w0), in the case of Fig. 1(c), so the valid VD is restricted between the closest VD and the farthest VD.
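As a quick numeric check of these bounds, the following Python helper computes zfront and zback from the three design parameters; the example numbers in the usage note are illustrative only.

```python
import math


def vd_bounds(d0, wl, w0):
    """Front and rear bounds of the valid viewing distance.

    z_front = d0*wl / (wl + w0) exists for every configuration type,
    while z_back = d0*wl / (wl - w0) exists only when wl > w0
    (the case of Fig. 1(c)); otherwise the lobe has no rear bound.
    """
    z_front = d0 * wl / (wl + w0)
    z_back = d0 * wl / (wl - w0) if wl > w0 else math.inf
    return z_front, z_back
```

With the illustrative values d0 = 600 mm, wl = 350 mm, and w0 = 130 mm, this gives z_front = 437.5 mm and z_back ≈ 954.5 mm, so the valid VD spans roughly half a meter around the OVD.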

2.2. Perceptual images in a valid viewing lobe

If the number of views (N) is additionally given, the perceptual images to be seen at a certain position in a VL can be simulated in detail. For example, a valid VL can be subdivided by the intersections of the VZs of each view as shown in Fig. 2(a), in which a five-view display having the first type of VL configuration (wl < wo) is assumed. It confirms that each division at the OVD has a single number, indicating that the corresponding single view is shown across the entire screen. However, divisions deviating from the OVD have two or more numbers, indicating that the several views corresponding to those numbers are shown via parts of the screen.


Fig. 2 (a) Example viewing zones of a five-view display, (b) conceptual images for the left and right eyes observable at each position in (a), in which the numbers in each image stand for view indices among the five views, and (c) perceptual images of our 63-view SMV display captured at the OVD and the closest VD.


The theoretical images to be seen at the three sample positions in Fig. 2(a) (P1, P2, and P3) are depicted in the first row of Fig. 2(b). At the OVD (P1), a single-view image (the 5th view or the 4th view) is expected to be seen by each of the left and right eyes. However, composite-view images containing information from two viewpoints (the 2nd plus 3rd views or the 1st plus 2nd views) are expected to be seen at P2. Similarly, composite-view images containing information from four viewpoints (the 2nd to 5th views or the 1st to 4th views) are expected to be seen at P3.
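The composite-view compositions at P1–P3 can be reproduced with a simple pinhole-style simulation: extend the ray from the viewer through each 3-D pixel to the OVD plane and bin the intersection into the N abutting VZ slots. The sketch below follows that simplified model with illustrative parameters; it is not the rendering code of any display in this paper.

```python
def view_seen(x_p, x_v, z_v, d0, w0, N):
    """View index (1..N) seen through the 3-D pixel at screen position x_p
    by a viewer at (x_v, z_v), assuming all VZs abut at the OVD plane.
    Returns None when the ray leaves the central lobe (in a real display,
    a repeated lobe or a flipping image would appear there instead)."""
    x_o = x_p + (x_v - x_p) * d0 / z_v   # intersection with the OVD plane
    vpi = w0 / N                          # viewpoint interval
    if not (-w0 / 2 <= x_o < w0 / 2):
        return None
    return int((x_o + w0 / 2) // vpi) + 1
```

At the OVD (z_v = d0), the intersection x_o equals x_v for every pixel, so a single view is seen across the whole screen; off the OVD, x_o varies with x_p and a composite view appears, with different view indices visible through different parts of the screen.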

Note that corresponding points in a pair of left and right images have the same binocular disparity (the difference between corresponding view indices) even though composite views are seen off the OVD. Thus, natural binocular fusion and depth sensation occur as if a pair of single-view left and right images were seen at the OVD. The only concerns we face in composite views are visual artifacts such as the wiping effect (discontinuities of the perceptual image across the screen) and the wobbling effect (discontinuities of perceptual depth across the 3-D scene).

These artifacts are caused by the discrete joints of several views. However, they are naturally alleviated by the optical interpolation of adjacent 3-D pixels in practice. This is because the number of 3-D pixels is as large as the horizontal resolution of the 3-D image, although only three representative 3-D pixels are drawn for simplicity in Fig. 2(a), and the view transition in composite-view images occurs smoothly across all 3-D pixels on the screen. In other words, the intermediate 3-D pixels, between 3-D pixels that have an exact one-view shift, produce smooth viewpoint transitions as shown in the second row of Fig. 2(b). In addition, it is well known that increasing the number of views under a fixed VL width decreases the VPI (VPI = wo/N), and a narrow VPI contributes to reducing the above artifacts since the similarity between adjacent views is increased. From this perspective, Dodgson [43] empirically reported that a six-view display produces an acceptable 3-D effect for viewers near the OVD, and that a sixteen-view display significantly reduces these artifacts on their 10-inch display. Similarly, we investigated perceptual images of our 63-view SMV display, which has a narrow VPI (2.56 mm), captured at the centers of the OVD and the closest VD as shown in Fig. 2(c). These confirm that there are no artifacts across the entire screen. Only more marginal parts of the background image, which has a negative depth, are observed at the closest VD because of the composite view.

3. Proposed approach

3.1. Key principles of the proposed method

We reviewed that a valid VL is a limited area in which the VZs of all 3-D pixels are commonly superimposed, and that composite-view images having natural view transitions allow viewers to move forward and backward from the OVD within the valid VL. Inspired by these observations, the proposed method that can eliminate the limit of VD starts from the two following principles:

  • Principle 1: the VZs of all 3-D pixels should converge at the current VD of the viewer so that they are seen across the entire screen, constructing a valid VL.
  • Principle 2: the view indices included in composite-view images should be straight (consecutive) so that they are seen naturally with smooth view transitions when the viewer deviates from the OVD.

This inspiration can be visualized as shown in Fig. 3. If we suppose N = 5 and wl > wo, the original configuration of the valid VL can be depicted as the red lines in Fig. 3(a). It confirms that the valid VD is bounded by zfront and zback, and viewers cannot see the marginal 3-D pixels at P1 and P2 that are supposed to be seen at the OVD. Instead, viewers may observe wrong views from adjacent 3-D pixels at P1 and P2. Example perceptual images, observable at P1 and P2, are shown in Figs. 3(d) and 3(f), respectively. These images confirm that the marginal parts of the screen show undesirable flipping images elicited by adjacent 3-D pixels.


Fig. 3 Configurations of (a) the initial viewing lobe (red lines) and (b)(c) the proposed dynamic viewing lobes (blue lines). If viewers are outside the valid viewing distance (P1 and P2), (d)(f) perceptual images contain some flipping images because of wrong viewpoint projections from adjacent 3-D pixels. However, the proposed method can construct (e)(g) natural composite-view images at the same VDs (P′1 and P′2) without any noticeable artifacts.


However, we can renew the positions of the 3-D pixels to satisfy Principle 1 on the basis of the VZs of the center 3-D pixel, as shown in Fig. 3(b) (when a viewer is at P1) and Fig. 3(c) (when a viewer is at P2). These figures show that the numbers of views to be added and to be deleted in a new 3-D pixel are always equal, and that the locations of the renewed 3-D pixels are simply slid versions of the originals in the subpixel array. Then, we can satisfy Principle 2 by allocating straight view indices to the newly included subpixels in the renewed 3-D pixels.

For instance, when a viewer is at P1, closer than zfront, as shown in Fig. 3(b), four outward subpixels should be newly included in each of the new marginal 3-D pixels in order to construct VZs converged at that VD. Then, four view indices should be newly allocated to the renewed subpixels so as to be straight with respect to the surviving subpixels: the 1st view and the 5th view, respectively. An example perceptual image, observable at P′1, is shown in Fig. 3(e). In contrast to Fig. 3(d), it confirms that a natural composite-view image is produced. Similarly, when a viewer is at P2, farther than zback, as in Fig. 3(c), three inward subpixels should be newly included in each of the new marginal 3-D pixels. Then, in the same manner, three view indices should be newly allocated to the renewed subpixels so as to be straight with respect to the surviving subpixels: the 4th and 5th views for the left marginal 3-D pixel and the 1st and 2nd views for the right marginal 3-D pixel. An example perceptual image, observable at P′2, is shown in Fig. 3(g). This figure also confirms that a natural composite-view image replaces the flipping-image parts shown in Fig. 3(f).

In this manner, a new valid VL can be constructed at the current VD of the viewer. In addition, we can dynamically expand the initial VL to be as large as the abutting VZs of the center 3-D pixel (blue lines in Figs. 3(b) and 3(c)) if we renew the VL in real time according to the viewer’s head-tracked position. This dynamically expandable viewing lobe (DEVL) can be described by the two line equations as follows:

x = ±(z·w0/(2d0)),
which implies that a viewer can see a correct 3-D scene at any VD in the depth direction.
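The DEVL boundary can be evaluated directly from this equation. The short Python sketch below (with illustrative parameter values) returns the DEVL half-width at a depth z and checks whether a tracked head position stays inside it.

```python
def devl_half_width(z, d0, w0):
    """Half-width of the dynamically expandable viewing lobe at depth z,
    from the boundary lines x = +/- (z/d0) * (w0/2): the lobe widens
    linearly with distance and has no front or rear bound."""
    return (z / d0) * (w0 / 2)


def inside_devl(x, z, d0, w0):
    """True when the tracked position (x, z) lies inside the DEVL."""
    return z > 0 and abs(x) <= devl_half_width(z, d0, w0)
```

With illustrative values d0 = 600 mm and w0 = 130 mm, the half-width is 65 mm at the OVD but only 32.5 mm at z = 300 mm, which is why the DEVL is best combined with horizontal head-tracking for full freedom of movement.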

However, realizing the above principles raises two questions concerning digital image processing: “how can we determine the new 3-D pixel positions and the view indices to be renewed?” and “how can we realize the 3-D pixel position changes and the view index renewals in the subpixel array?” To answer these questions, we developed the following two algorithms [44].

3.2. Algorithm 1: determination of 3-D pixel positions and the view index update

Figure 4 describes Algorithm 1 with the same VL configuration as that shown in Fig. 3, applied to a PB-type display. First, the processing of 3-D pixel renewals starts from the center aperture (A0) of the PB mask. Then, it moves to the next 3-D pixel toward a marginal 3-D pixel of the screen panel; neither direction has priority. After completion of one side, the 3-D pixels on the other side are processed in the same manner.


Fig. 4 Schematic explanation of determining the positions of the view index update, where a five-view configuration with a parallax barrier is applied. The cases with the viewer (a) in front of the OVD and (b) behind the OVD are depicted.


The judgment is made by tracing rays projected back from the current VD (z1) to the subpixels on the screen panel via the PB apertures. If a ray initiating from P0 is linked via a PB aperture with a subpixel that does not have the center-view index, the view index of the linked subpixel is regarded as the new center-view index. Thereafter, the successive five subpixels surrounding the linked subpixel construct a new 3-D pixel for that PB aperture, and the difference between the two indices, the old center-view and the new center-view, is saved as an offset (ΔO) for Algorithm 2, the renewal of view indices in the successive subpixels of a new 3-D pixel.

For example, in Fig. 4(a), the first ray reaches the initial center view (C3, where C stands for the camera) through the center aperture (A0). Thus, the corresponding 3-D pixel does not need a view index update. The second ray shows the same behavior if we process the right side first. However, the third ray reaches C4, which is not the initial center-view index. Thus, C4 becomes the new center-view index, and the corresponding 3-D pixel position is determined to receive a view index update with ΔO (+1). Note that the fourth ray also reaches C4 but does not require a 3-D pixel position change or view index update, because C4 is now the center-view index as a result of the third ray tracing. In the same manner, all rays can be traced. In Fig. 4, rays that require the view index update are depicted as dotted lines.
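Under an assumed geometry (panel at z = 0, PB mask at a gap g in front of it, viewer on the display axis), the back-projection step of Algorithm 1 can be sketched as follows. The aperture pitch, subpixel pitch, and gap below are illustrative parameters, not the specifications of the test displays.

```python
def trace_offsets(z1, z0, g, p_a, p_s, n_apertures):
    """Offset (delta_O, in view units) for each PB aperture.

    For aperture k at x_a = k * p_a (k = 0 is the center aperture A0),
    the ray from a viewer at (0, z) through the aperture center hits the
    panel at x = x_a * z / (z - g). The offset is the subpixel shift of
    the hit point at the current VD z1 relative to the hit point at the
    OVD z0; a nonzero offset marks a 3-D pixel needing a view index update.
    """
    offsets = {}
    for k in range(-(n_apertures // 2), n_apertures // 2 + 1):
        x_a = k * p_a
        x_now = x_a * z1 / (z1 - g)   # panel hit from the current VD
        x_ovd = x_a * z0 / (z0 - g)   # panel hit from the OVD
        offsets[k] = round((x_now - x_ovd) / p_s)
    return offsets
```

The center aperture always yields a zero offset, and the offsets grow in magnitude (with opposite signs on the two sides) toward the screen margins, matching the marginal 3-D pixels being renewed in Fig. 4.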

3.3. Algorithm 2: renewal of view indices in successive subpixels

After the ray tracing via all PB apertures, a few subpixels become invisible (I#) at z1 when z1 is closer than the OVD (z0), or superimposed (S#) when z1 is farther than z0, as shown in Figs. 4(a) and 4(b), respectively. However, in practice a computing system accesses all subpixels in raster-scan order, and after Algorithm 1 it only has the offset values for view index compensation. Therefore, the computing system must renew the view indices of all subpixels correctly, reflecting the exceptional subpixels (I# and S#), using the offset values alone.

Figure 5 describes Algorithm 2, which achieves this efficiently. It mainly consists of two sub-functions: offset compensation and view index rotation. From the condition N = 5, it starts with an array of five subpixels comprising the view indices C1–C5. Then, the corresponding ΔO is added to each subpixel’s view index as compensation. Next, the subpixels are rotated |ΔO| times, where the direction is determined by the condition of z1. If z1 < z0, the outward rotation toward the marginal 3-D pixels is conducted, as shown in Figs. 5(a) and 5(b), and this process reflects the invisible subpixels in Fig. 4(a). In contrast, the inward rotation toward the center 3-D pixel is conducted if z1 > z0, as shown in Figs. 5(c) and 5(d), and this reflects the superimposed subpixels in Fig. 4(b). The third rows show the finally renewed subpixel arrays, where the bold lettering indicates the renewed center-view indices. If a 3-D pixel does not need the view index update, it simply copies the adjacent subpixel array previously processed. In this manner, all arrays of five subpixels can be renewed, and the resulting subpixel raster becomes exactly the raster with the enlarged subpixels shown in Fig. 4.
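The net effect of the offset compensation and the rotations on one 3-D pixel can be sketched compactly: after both steps, the pixel carries a straight run of N view indices starting ΔO views later (cyclically) than before. A minimal Python sketch of that end result, with the cyclic wrap-around taken as an assumption for illustration:

```python
def renew_3d_pixel(start_view, delta_o, N=5):
    """Renewed straight run of N view indices (1..N, cyclic) for one
    3-D pixel: the old starting view is shifted by the offset delta_o,
    and the remaining indices follow consecutively, wrapping modulo N
    as in the rotations of Fig. 5."""
    new_start = (start_view - 1 + delta_o) % N
    return [((new_start + j) % N) + 1 for j in range(N)]
```

For example, renew_3d_pixel(1, 1) yields [2, 3, 4, 5, 1] and renew_3d_pixel(1, -1) yields [5, 1, 2, 3, 4]; a 3-D pixel with delta_o = 0 keeps [1, 2, 3, 4, 5] unchanged, i.e., it simply copies the previously processed array.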


Fig. 5 Schematic explanation of renewing the subpixels constructing each 3-D pixel, using the same five-view configuration.


4. Experiments

4.1. Experimental conditions

The proposed method can be applied to any type of flat-panel-based multi-view display that uses a periodic optical structure such as a PB or a lenticular array. To prove this, we examined our two different multi-view displays: an 81-view high-density multi-view display (Display 1) of the lenticular type and a 63-view SMV display (Display 2) of the PB type. Since the objective of this paper is to show the expansion of VD under the same quality of perceptual images, the four parameters (N, do, wl, and wo) for simulating the initial VLs and the proposed DEVLs are given in Table 1, where the VPIs and OVDs were measured by the system used in [45]; the measurements are shown in Fig. 6.


Fig. 6 Optical characteristics measured by the system used in [45]. (a)(b) luminance distributions of a single view on the xz-plane for the OVDs, and (c)(d) luminance distributions of all views at the OVDs for the VPIs of Display 1 and Display 2, respectively.



Table 1. Specifications of the test displays

The apparatuses of the test displays are shown in Fig. 7. For the real-time mode using head-tracking, we attached Intel RealSense cameras (R200 for Display 1 and SR300 for Display 2) on top of the test displays, as shown in Fig. 7(a). Also, a recent technology reported by Kang et al. [46] was exploited for accurate and stable head-tracking. For the production of test contents, we composed a 3-D scene with a 3-D object model (an airplane) and a background image (numbered tiles from one to ten) using computer graphics, as shown in Fig. 7(b). The 3-D object was placed in front of the screen to show apparent perspective changes, and the background image was placed behind the screen to show the visual artifacts that could be observed at the bounds of the initial VL. Then, we rendered the 3-D scene as viewpoint images using multiple camera setups. Eventually, we converted the viewpoint images into a single multi-view image input corresponding to each test display. For instance, a sample multi-view input for Display 1 is shown in Fig. 7(c), and its perceptual images, captured by a camera moving from side to side, are shown in Fig. 7(d).


Fig. 7 (a) Apparatus of our test displays and RGB+D sensors used for head-tracking, (b) composition of our test 3-D contents based on computer graphics, (c) a multi-view input image for Display 1, and (d) its perceptual images, captured by a camera moving from side to side at the initial OVD (see Visualization 1).


4.2. Experimental results

For the performance evaluation, we examined the two main functions of the proposed method: renewal of a valid VL without head-tracking and real-time operation of the DEVL using head-tracking. The former function would be suitable for multiple viewers of a large display regardless of the computing power available for real-time rendering, and the latter would be suitable for a single viewer of a mobile display when a high-performance PC is available for real-time rendering. In the experiments, an Intel Core i7-7700K processor and an NVIDIA TITAN X GPU were used for the rendering. For precise evaluation, the verification system for eye-tracked autostereoscopic displays [39], shown in Fig. 8, was used. In this system, a test 3-D display renders 3-D contents according to the positions of the viewer’s face model if head-tracking is activated. At the same time, the face model is electronically movable along three rails in the xyz-directions, and it is able to capture perceptual images via its built-in stereo camera while moving.


Fig. 8 The verification system for eye-tracked autostereoscopic display [39]. A test display renders 3-D contents while tracking the viewer’s face model, and the built-in stereo camera captures perceptual images of the test display as moving in the xyz-directions.


First, the experimental results of renewing a valid VL are shown in Fig. 9 and Fig. 10. For the quantitative performance evaluation, we simulated the theoretical VLs of the test displays from the four parameters, N, do, wl, and wo, given in Table 1, and we compared these VLs with the empirical VLs measured by the verification system, where the current VDs were manually given. In both Figs. 9(a) and 9(b), the results confirmed that the empirical VLs are almost the same as the simulated VLs, albeit explored over a limited range owing to the working range of the verification system. The minor differences between the simulated and empirical VLs could come from inaccurate estimation of the four parameters, but they are negligible in our environment. Also, Fig. 9(b) confirmed that the performance gain increases considerably in the case of wl > wo, such as our 15.6-inch SMV display, since the proposed method eliminates not only zfront but also zback in theory. Of course, perceptual front and rear bounds of the valid VD still exist owing to the human visual system.

 figure: Fig. 9

Fig. 9 Quantitative comparison: the simulated and empirical viewing lobes (VLs) of (a) Display 1 and (b) Display 2. The simulations were conducted from the parameters in Table 1. The empirical VLs were measured by the verification system in Fig. 8.


 figure: Fig. 10

Fig. 10 Qualitative comparison of perceptual images captured at the four sample positions in Fig. 9: ①, ②, ③, and ④, close to the initial bounds (see Visualization 2).


For the qualitative performance evaluation, perceptual images captured outside the initial VL bounds, represented by the camera icons ①, ②, ③, and ④ in Fig. 9, were also compared as shown in Fig. 10. From the principles in Section 2.2, perceptual images should contain composite-views since the capture positions deviate from the OVDs, and these should appear natural without visual artifacts such as the wiping effect. This phenomenon is identically observed at the center parts of all perceptual images in Fig. 10. However, some left or right parts of the screens show flipping images in the original method, and these parts disturb binocular fusion by causing diplopia or double images. In contrast, natural composite-view images are observed across the entire screens in the proposed method, similar to the center parts of the screens in the original method. In other words, we cannot find any visual discontinuities in the perceptual images of the proposed method. Note that the blurred marginal parts of the screens in Display 1 are caused by high crosstalk and aberration of the lenticular lens array [47]; they are not related to the proposed method. The perceptual images of Display 2 support this argument again, in which the quality of the marginal parts of the screens is the same as that of the center parts in both the original and the proposed methods.

Second, the experimental results of the real-time operation of DEVL using head-tracking are shown in Fig. 11. In the above experiments, target VDs were manually provided to the rendering system, but the proposed method can be combined with head-tracking to support DEVL in real time. DEVL combined with head-tracking allows a single viewer to move forward and backward freely. To prove the feasibility of DEVL, we recorded a video clip using the right camera of the face model in the verification system while the face model moved forward and backward from the OVD. Note that the verification system also has a restrictive search range because of its rails, so the perceptual view images of Display 1 were representatively explored in the range of 945 to 460 mm. The whole video clip can be seen in our supplementary material (see Visualization 2), and four representative perceptual images, selected based on Display 1’s zfront, are shown in Fig. 11. In the screens, the numbers at the bottom-left corners indicate the content-rendering frame rates, and the letters at the bottom-right corners indicate whether DEVL is on or off (NE: off and T: on). These results confirm that the proposed method can generate natural composite-views at distances even closer than zfront, whereas the original display starts showing flipping images at zfront, seen on the left side of the screen. This is because we used the right camera of the face model, which is aligned at the center in the x-direction.

 figure: Fig. 11

Fig. 11 Feasibility evaluation of operating the dynamically expandable viewing lobe (DEVL) in real time. Perceptual images were captured in the working range of the verification system.


The frame rates ranged between 22 fps and 24 fps in our non-optimized test condition. In fact, the operation speed depends on several factors such as the latency of the head-tracking sensor, the complexity of the 3-D object, the initial viewpoint number of a given display, and so forth. Since system optimization is beyond this paper’s scope, we do not discuss these factors in detail. Note, however, that real-time operation of DEVL remains feasible even in a more complicated system such as the application in the next subsection.

4.3. Application

Although we have emphasized the key principles and algorithms for expanding the freedom of VD, the proposed method can, with minor changes, be used to realize an even more compelling, omnidirectionally VL-free multi-view display as follows. In Section 3.2, we assumed that a viewer is at the center in the horizontal direction. Thus, we used the subpixel positions visible through the center aperture of the PB mask, together with the initial set of view indices, as the references for the proposed algorithms. However, when a viewer deviates from the horizontal center, we need to redefine the subpixel positions for the center 3-D pixel by connecting the current viewer position to the subpixels observable through the center aperture of the PB mask, and the initial view indices also need to be refined by allocating the view indices that should be seen at that viewing angle.
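The off-center redefinition described above can be sketched as tracing a ray from the viewer through the center PB aperture onto the panel and then reassigning view indices cyclically. The function names and parameters below are hypothetical, chosen only to illustrate the geometry:

```python
def visible_subpixel_index(viewer_x, viewer_z, aperture_x, gap, subpixel_pitch):
    """Index of the subpixel seen from the viewer through one PB aperture.

    Extends the ray from the viewer through the aperture by the
    barrier-to-panel gap to find where it lands on the panel.
    All parameters are in consistent units (e.g., mm).
    """
    panel_x = aperture_x + (aperture_x - viewer_x) * gap / viewer_z
    return round(panel_x / subpixel_pitch)

def renewed_view_index(subpixel_index, index_shift, n_views):
    """Cyclically reassign a view index after shifting the initial index set."""
    return (subpixel_index + index_shift) % n_views
```

For a centered viewer the ray lands on the reference subpixel (shift of zero); as the viewer moves horizontally, the landing point shifts and the index set is rotated accordingly, which mirrors the dynamic view-index change of the proposed method.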

As an example system for this modification, we also developed a prototype head-tracked SMV display that supports VL-free, full-parallax 3-D contents. In Fig. 12, its perceptual images are compared with those of the original SMV display (Display 2) when a viewer moves horizontally, vertically, and depth-directionally, respectively. Note that the horizontal position change (Δx) covers the left and right bounds of the initially valid VL (cf. Wo = 158.72 mm in Display 2). These results confirm that the proposed method also provides natural single-view images when a viewer is horizontally outside the initial VL at the OVD, in contrast to the original display, in which entirely flipping images are seen across the screen, as shown in Fig. 12(a). When a viewer moves in the depth direction, the same results as in Fig. 11 are observed with the same performance, as shown in Fig. 12(c). The vertical perspective changes of the 3-D scene in Fig. 12(b) can be simply achieved by accordingly renewing the vertical position of the virtual camera array in the contents renderer. Since the proposed algorithms are related only to the horizontal and depth-directional positions of a viewer, this additional function does not increase the computational complexity.

 figure: Fig. 12

Fig. 12 An application of the proposed method toward viewing-lobe-free and visually comfortable 3-D perception. Our prototype head-tracked SMV display, combined with conventional head-tracking technology, allows a viewer to see natural 3-D contents at any position in the x-, y-, and z-directions, and it works at 22–24 fps including the function of vertical view changes in a practical environment (see Visualization 3, Visualization 4, Visualization 5, and Visualization 6).


This application indicates that advanced multi-view display systems with many futuristic features can be developed simultaneously on the basis of the proposed method, and that their real-time operation is feasible in practice. Although we did not optimize the system’s operating speed, as in the DEVL test, our prototype system worked at 22–24 fps in a constrained condition such as our verification system (see Visualization 3) and at 10–45 fps in a more practical condition with natural movements (see Visualization 4, Visualization 5, and Visualization 6).

5. Discussion

To prove the performance of the proposed method, the experiments were carefully conducted from various aspects, and the experimental results confirmed that the proposed method successfully removes both the front and rear bounds of valid VDs. The VD and VL gains were maximized when a given display’s VL configuration has the diamond shape of Fig. 1(c), the common VL configuration of SMV displays.

However, expandable VDs and VLs are inevitably limited in practice. This is because the sensors that we used for head-tracking have limited working ranges in the depth direction and limited FOVs in viewing angle. As a result, the practically expandable VDs and VLs are determined by the performance of the head-tracking sensors. In this paper, we adopted Intel RealSense R200 [48] and SR300 [49] sensors for our 28-inch HDMV display (OVD = 994 mm) and 15.6-inch SMV display (OVD = 994 mm), respectively, considering their initial OVDs. An example performance of our prototype head-tracked SMV display system is shown in Fig. 13. Although the proposed method (DEVL) could move the OVD from the screen to infinity, the valid VD was empirically restrained from 300 mm to 1500 mm owing to the working range of the Intel SR300 sensor. Also, the expandable VL in viewing angle was empirically restrained up to 73 degrees. Nevertheless, this restriction is acceptable given the high performance of current camera sensors for gesture recognition and human-body tracking [50,51]; the performance expected from these sensors might be enough to cover the visibility of the normal human visual system. Thus, for applications of the proposed method, developers may need to select a proper tracking sensor based on the initial design parameters of a given multi-view autostereoscopic display and the target performance.
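In an implementation, the target OVD fed to DEVL would therefore have to be kept inside the tracking sensor's working range. A minimal sketch of such a guard, using the empirical SR300 range quoted above (the function name and fallback behavior are assumptions, not part of the original system):

```python
def clamp_target_ovd(tracked_z_mm, z_min=300.0, z_max=1500.0):
    """Clamp the tracked viewer distance (mm) to the sensor's valid range.

    The 300-1500 mm defaults reflect the empirical Intel SR300 working
    range; outside this range a real system would likely hold the last
    valid OVD rather than extrapolate.
    """
    return min(max(tracked_z_mm, z_min), z_max)
```

A sensor with a wider working range simply widens z_min/z_max, which is why the practically expandable VD is determined by the tracking sensor rather than by the proposed method itself.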

 figure: Fig. 13

Fig. 13 Valid VD and VL of our prototype VL-free SMV display system using head-tracking. In practice, VD and VL of the proposed method are restrained because of FOVs and working ranges of tracking-sensors.


The proposed method and sample application may be of benefit for both commercial and academic purposes. For commercial use, it can move the initial OVD of a commercial (super) multi-view display elsewhere when a target space cannot guarantee a sufficient VD for the best quality of 3-D images. This scenario would frequently arise when multiple viewers watch a large glasses-free 3-D display. If head-tracking is combined, the direct provision of DEVL is available, but it is limited to single users; thus, it would be more appropriate for the rapidly growing mobile display industry: laptops, tablets, and cellular phones. For academic use, the proposed method would help in developing more advanced glasses-free 3-D displays since it breaks the conventional concept of subpixel arrangement in multi-view autostereoscopy, i.e., periodically located subpixels with a fixed set of view indices. In addition, it does not conflict with other techniques, so it can be combined with other complementary methods.

6. Conclusion

We proposed a novel dynamic subpixel rearrangement method that can eliminate the limit of VD in flat-panel-based SMV (or multi-view) displays. Unlike conventional methods, it requires neither mechanical changes nor a loss of optical properties such as the number of views; therefore, it is free from concerns over system latency, stability, and system volume. The proposed method, inspired by the fact that composite-views observed at off-OVD positions within a valid VL provide natural 3-D scene perception, always constructs natural composite-views that can be seen correctly at any VD via dynamic subpixel rearrangement. To the best of our knowledge, this is the first approach that uses partially aperiodically located sets of subpixels to define the 3-D pixels beneath each lenslet (or PB slit) and dynamically changes the view indices for the redefined 3-D pixels. This breakthrough may enhance not only the commercial competitiveness of multi-view autostereoscopic displays against eyewear-assisted 3-D displays but also their academic usability for developing more advanced SMV autostereoscopic displays such as our prototype VL-free SMV system. Eventually, the proposed method would benefit the flat-panel-based SMV display community in realizing psychologically and physically comfortable 3-D viewing as a low-cost alternative to holography and integral photography.

Funding

Civil-Military Technology Cooperation Program (Project No. 17-CM-DP-29) of ICMTC and the KIST Institutional Program (Project No. 2E28240).

References and links

1. T. Okoshi, Three-Dimensional Imaging Techniques (Academic, 1976).

2. D. Ezra, G. J. Woodgate, B. A. Omar, N. S. Holliman, J. Harrold, and L. S. Shapiro, “New autostereoscopic display system,” Proc. SPIE 2409, 31–41 (1995). [CrossRef]  

3. P. Harman, “Autostereoscopic display system,” Proc. SPIE 2653, 56–64 (1996). [CrossRef]  

4. I. Sexton and P. Surman, “Stereoscopic and Autostereoscopic Display Systems,” IEEE Signal Process. Mag. 16(3), 85–99 (1999). [CrossRef]  

5. J.-H. Park, “Recent progress in computer-generated holography for three-dimensional scenes,” J. Inf. Disp. 18(1), 1–12 (2016). [CrossRef]  

6. P. Surman and X.W. Sun, “Towards the reality of 3D imaging and display,” in Proceedings of IEEE 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) (2014), pp. 1–4.

7. ISO/IEC JTC1/SC29/WG11, “3D FTV/SMV visualization and evaluation lab,” in Proceedings of 112th Moving Picture Experts Group Meeting (Warsaw, Poland, 2015), M36577.

8. H. Liao, T. Dohi, and K. Nomura, “Autostereoscopic 3D Display with Long Visualization Depth using Referential Viewing Area based Integral Photography,” IEEE Trans. Vis. Comput. Graphics 17(11), 1690–1701 (2011). [CrossRef]  

9. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in Three-dimensional Integral Imaging: Sensing, Display, and Applications,” Appl. Opt. 52(4), 546–560 (2013). [CrossRef]   [PubMed]  

10. J. Arai, E. Nakasu, T. Yamashita, H. Hiura, M. Miura, T. Nakamura, and R. Funatsu, “Progress Overview of Capturing Method for Integral 3-D Imaging Displays,” Proc. IEEE 105(5), 837–849 (2017). [CrossRef]  

11. Y. Takaki, “High-Density Directional Display for Generating Natural Three-Dimensional Images,” Proc. IEEE 94(3), 654–663 (2006). [CrossRef]  

12. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18(9), 8824–8835 (2010). [CrossRef]   [PubMed]  

13. Y. Takaki, “Development of super multi-view displays,” ITE Trans. Media Technol. Appl. 2(1), 8–14 (2014). [CrossRef]  

14. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence–accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vision 8(3), 33 (2008). [CrossRef]  

15. Y. Takaki, Y. Urano, and H. Nishio, “Motion-parallax smoothness of short-, medium-, and long-distance 3D image presentation using multi-view displays,” Opt. Express 20(24), 27180–27197 (2012). [CrossRef]   [PubMed]  

16. Y. Takaki, “3D images with enhanced DOF produced by 128-directional display,” in Proceedings of the 13th International Display Workshops (2006), pp. 1909–1912.

17. J. Nakamura, K. Tanaka, and Y. Takaki, “Increase in depth of field of eyes using reduced-view super multi-view displays,” Appl. Phys. Express 6(2), 022501 (2013). [CrossRef]  

18. X. Shen and B. Javidi, “Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens,” Appl. Opt. 57(7), B184–B189 (2018). [CrossRef]   [PubMed]  

19. M. Tanimoto, “FTV standardization in MPEG,” in Proceedings of IEEE 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) (2014), pp. 1–4.

20. ISO/IEC JTC1/SC29/WG11, “Call for evidence on free-viewpoint television: Super-multiview and free navigation,” in Proceedings of 112th Moving Picture Experts Group Meeting (Warsaw, Poland, 2015), N15348.

21. P. Carballeira, J. Gutiérrez, F. Morán, J. Cabrera, F. Jaureguizar, and N. García, “MultiView Perceptual Disparity Model for Super MultiView Video,” IEEE J. Sel. Top. Signal Process. 11(1), 113–124 (2017). [CrossRef]  

22. M. Zwicker, S. Yea, A. Vetro, C. Forlines, W. Matusik, and H. Pfister, “Multi-view Video Compression for 3D Displays,” in Proceedings of Conference Record of the Forty-First Asilomar Conference on Signals, Systems and Computers (2007), pp. 1506–1510. [CrossRef]  

23. T. Honda, Y. Kajiki, K. Susami, T. Hamaguchi, T. Endo, T. Hatada, and T. Fujii, “Three-dimensional display technologies satisfying super multiview condition,” Proc. SPIE 10298, 102980B (2001).

24. ISO/IEC SC29WG11, “Experimental Framework for FTV,” in Proceedings of 110th Moving Picture Experts Group Meeting (Strasbourg, France, Oct. 2014), N15048.

25. N. A. Dodgson, “Autostereoscopic 3D Displays,” Computer 38(8), 31 (2005). [CrossRef]  

26. P. Surman, I. Sexton, K. Hopf, R. Bates, and W. Lee, “Head tracked 3D displays,” Springer LNCS 4105, 769–776 (2006).

27. J. B. Eichenlaub, “An autostereoscopic display with high brightness and power efficiency,” Proc. SPIE 2177, 4–15 (1994). [CrossRef]  

28. M. R. Jewell, G. R. Chamberlin, D. E. Sheat, P. Cochrane, and D. J. McCartney, “3-D imaging systems for video communication applications,” Proc. SPIE 2409, 4–10 (1995). [CrossRef]  

29. N. A. Dodgson, “Variation and extrema of human interpupillary distance,” Proc. SPIE 5291, 36–46 (2004). [CrossRef]  

30. N. A. Dodgson, “Analysis of the viewing zone of the Cambridge autostereoscopic display,” Appl. Opt. 35(10), 1705–1710 (1996). [CrossRef]   [PubMed]  

31. C. Van Berkel and J. A. Clarke, “Characterisation and optimisation of 3D-LCD module design,” Proc. SPIE 3012, 179–186 (1997). [CrossRef]  

32. Y. Takaki, “Multi-view 3-D display employing a flat-panel display with slanted pixel arrangement,” J. Soc. Inf. Disp. 18(7), 476–482 (2010). [CrossRef]  

33. G. J. Woodgate, D. Ezra, J. Harrold, N. S. Holliman, G. R. Jones, and R. R. Moseley, “Observer-tracking autostereoscopic 3D display systems,” Proc. SPIE 3012, 187–199 (1997). [CrossRef]  

34. N. A. Dodgson, “On the number of viewing zones required for head-tracked autostereoscopic display,” Proc. SPIE 6055, 60550Q (2006). [CrossRef]  

35. S.-K. Kim, K.-H. Yoon, S. K. Yoon, and H. Ju, “Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display,” Opt. Express 23(10), 13230–13244 (2015). [CrossRef]   [PubMed]  

36. S. K. Yoon, S. Khym, H. W. Kim, and S.-K. Kim, “Variable parallax barrier spacing in autostereoscopic displays,” Opt. Commun. 370, 319–326 (2016). [CrossRef]  

37. D. Suzuki, S. Hayashi, Y. Hyodo, S. Oka, T. Koito, and H. Sugiyama, “A wide view auto-stereoscopic 3D display with an eye-tracking system for enhanced horizontal viewing position and viewing distance,” J. Soc. Inf. Disp. 24(11), 657–668 (2016). [CrossRef]  

38. K.-H. Yoon and S.-K. Kim, “Expansion method of the three-dimensional viewing freedom of autostereoscopic 3D display with dynamic merged viewing zone (MVZ) under eye tracking,” Proc. SPIE 10219, 1021914 (2017). [CrossRef]  

39. K.-H. Yoon, M.-K. Kang, H. Lee, and S.-K. Kim, “Autostereoscopic 3D display system with dynamic fusion of the viewing zone under eye tracking: principles, setup, and evaluation,” Appl. Opt. 57(1), A101–A117 (2018). [CrossRef]   [PubMed]  

40. K.C. Kwon, C. Park, M.U. Erdenebat, J.S. Jeong, J.H. Choi, N. Kim, J.H. Park, Y.T. Lim, and K.H. Yoo, “High Speed Image Space Parallel Processing for Computer-generated Integral Imaging System,” Opt. Express 20(2), 732–740 (2012). [CrossRef]   [PubMed]  

41. S. Xing, X. Sang, X. Yu, C. Duo, B. Pang, X. Gao, S. Yang, Y. Guan, B. Yan, J. Yuan, and K. Wang, “High-efficient Computer-generated Integral Imaging Based on The Backward Ray-Tracing Technique and Optical Reconstruction,” Opt. Express 25(1), 330–338 (2017). [CrossRef]   [PubMed]  

42. G. Chen, C. Ma, Z. Fan, X. Cui, and H. Liao, “Real-time Lens based Rendering Algorithm for Super-multiview Integral Photography without Image Resampling,” IEEE Trans. Vis. Comput. Graphics (to be published).

43. N. A. Dodgson, “Analysis of the viewing zone of multi-view autostereoscopic displays,” Proc. SPIE 4660, 254–265 (2002). [CrossRef]  

44. S.-K. Kim and K.-H. Yoon, “Method of forming dynamic maximal viewing zone of autostereoscopic display apparatus,” U.S. patent application 15869504 (Jan. 12, 2018).

45. S. K. Yoon and S.-K. Kim, “Measurement method with moving image sensor in autostereoscopic display,” Proc. SPIE 8384, 83840Y (2012). [CrossRef]  

46. D. Kang, J. Kim, and S.-K. Kim, “Affine registration of three-dimensional point sets for improving the accuracy of eye position trackers,” Opt. Eng. 56(4), 043105 (2017). [CrossRef]  

47. V. Ramachandra, K. Hirakawa, M. Zwicker, and T. Nguyen, “Spatioangular Prefiltering for Multiview 3D Displays,” IEEE Trans. Vis. Comput. Graphics 17(5), 642–654 (2011). [CrossRef]  

48. L. Keselman, J. I. Woodfill, A. Grunnet-Jepsen, and A. Bhowmik, “Intel RealSense Stereoscopic Depth Cameras,” in Proceedings of Computer Vision and Pattern Recognition (cs.CV) (2017), arXiv:1705.05548v2.

49. M. Carfagni, R. Furferi, L. Governi, M. Servi, F. Uccheddu, and Y. Volpe, “On the performance of the Intel SR300 depth camera: metrological and critical characterization,” IEEE Sens. J. 17(14), 4508–4519 (2017). [CrossRef]  

50. L. L. Presti and M. La Cascia, “3D skeleton-based human action classification: A survey,” Pattern Recognit. 53, 130–147 (2016). [CrossRef]  

51. F. L. Siena, B. Byrom, P. Watts, and P. Breedon, “Utilising the Intel RealSense Camera for Measuring Health Outcomes in Clinical Research,” J. Med. Syst. 42(3), 53 (2018). [CrossRef]   [PubMed]  

Supplementary Material (6)

Name       Description
Visualization 1       This clip shows the composition of the test contents and its directional images perceptually observed in the test displays when a built-in stereo camera moves horizontally.
Visualization 2       This clip shows the details of experimental setups and real-time performance of the proposed method. Although flipping images start to be shown at the front bound of the original SMV display, they disappear in the SMV display modified by the proposed method.
Visualization 3       This clip explains the concept of an advanced viewing-lobe-free SMV display based on the proposed dynamic subpixel rearrangement method.
Visualization 4       This clip proves the feasibility of the proposed method for a viewing-lobe-free SMV 3D display when a viewer is horizontally moving in a practical environment.
Visualization 5       This clip proves the feasibility of the proposed method for a viewing-lobe-free SMV 3D display when a viewer is vertically moving in a practical environment.
Visualization 6       This clip proves the feasibility of the proposed method for a viewing-lobe-free SMV 3D display when a viewer is depth-directionally moving in a practical environment.



Figures (13)

Fig. 1
Fig. 1 Configurations of viewing lobe (VL) in multi-view autostereoscopic displays. Three different types can be designed according to the optimal viewing distance (OVD) (do), the screen width (wl), and a target width of VL at the OVD (wo): (a) VL width wider than the screen, (b) VL width the same as the screen, and (c) VL width narrower than the screen.
Fig. 2
Fig. 2 (a) Example viewing zones of a five-view display, (b) conceptual images for left and right eyes observable at each position in (a), in which the numbers in each image stand for view indices among the five views, and (c) perceptual images of our 63-view SMV display captured at the OVD and the closest VD.
Fig. 3
Fig. 3 Configurations of (a) the initial viewing lobe (red lines) and (b)(c) the proposed dynamic viewing lobes (blue lines). If viewers are outside the valid viewing distance (P1 and P2), (d)(f) perceptual images contain some flipping images because of wrong viewpoint projections from adjacent 3-D pixels. However, the proposed method can construct (e)(g) natural composite-view images at the same VDs (P′1 and P′2) without any noticeable artifacts.
Fig. 4
Fig. 4 Schematic explanation of determining the position of the view index update, where a 5-view configuration with parallax barrier was applied. The cases with the viewer (a) in front of the OVD and (b) behind the OVD are depicted.
Fig. 5
Fig. 5 Schematic explanation of renewing subpixels constructing each 3D pixel by using the same 5-view configuration.
Fig. 6
Fig. 6 Optical characteristics measured by the system used in [45]. (a)(b) luminance distributions of a single-view on the xz-plane for the OVDs, and (c)(d) luminance distributions of all views at the OVDs for the VPIs of Display 1 and Display 2, respectively.
Fig. 7
Fig. 7 (a) Apparatus of our test displays and RGB+D sensors used for head-tracking, (b) composition of our test 3-D contents based on computer graphics, (c) a multi-view input image for Display 1, and (d) its perceptual images, captured by a camera moving from side to side at the initial OVD (see Visualization 1).

Tables (1)


Table 1 Specifications of the test displays

Equations (2)


$$x = \pm\left(\frac{z}{d_0}\cdot\frac{w_0 - w_l}{2} + \frac{w_l}{2}\right), \qquad x = \pm\left(\frac{z}{d_0}\cdot\frac{w_0 + w_l}{2} - \frac{w_l}{2}\right),$$
$$x = \pm\left(\frac{z}{d_0}\cdot\frac{w_0}{2}\right),$$
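Under one geometric reading of these boundary equations, each line connects a screen edge (x = ±wl/2 at z = 0) to a VL-window edge (x = ±w0/2 at z = d0). A short sketch checking that property numerically, with illustrative values only:

```python
# Boundary lines of the initial viewing lobe under the stated geometric
# reading: screen edge at z = 0, VL-window edge at z = d0 (the OVD).

def boundary_outer(z, d0, wl, w0):
    # line from (+wl/2, 0) to (+w0/2, d0)
    return (z / d0) * (w0 - wl) / 2.0 + wl / 2.0

def boundary_crossing(z, d0, wl, w0):
    # line from (-wl/2, 0) to (+w0/2, d0)
    return (z / d0) * (w0 + wl) / 2.0 - wl / 2.0

d0, wl, w0 = 1000.0, 400.0, 200.0   # illustrative, not Table 1 values
assert boundary_outer(0.0, d0, wl, w0) == wl / 2.0       # screen edge
assert boundary_outer(d0, d0, wl, w0) == w0 / 2.0        # VL-window edge
assert boundary_crossing(0.0, d0, wl, w0) == -wl / 2.0
assert boundary_crossing(d0, d0, wl, w0) == w0 / 2.0
```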