Abstract

By gating the spectrum planes of multiple planar-aligned OLED microdisplays in a time-sequential manner, a super-multiview (SMV) three-dimensional (3D) display based on spatiotemporal multiplexing was developed in our previous paper. However, that system imposed an upper limit on the number of allowable sub-viewing-zones (SVZs) per OLED microdisplay, even if microdisplays with very high frame rates were commercially available. In this manuscript, an improved spatiotemporal-multiplexing SMV display system is developed that removes this limitation through controllable fusing of light beams from adjacent OLED microdisplays. Employing a liquid-crystal panel as the gating-aperture array allows the improved system to accommodate multiple rows of OLED microdisplays for denser SVZs. Experimentally, a prototype system with 24 OLED microdisplays is demonstrated, providing 120 SVZs with an interval as small as 1.07 mm.

© 2015 Optical Society of America

1. Introduction

To overcome the convergence–accommodation conflict in three-dimensional (3D) displays, the super-multiview (SMV) technology [1] was developed, projecting numerous two-dimensional (2D) perspective views of the target object to dense sub-viewing-zones (SVZs). The interval between adjacent SVZs is set smaller than the pupil diameter of the eye, so that at least two different views enter a single eye pupil and the eye can focus on the displayed spatial spots. By replacing the perspective views with orthogonal views, the SMV technology evolved into the high-density directional (HDD) display technology [2]. To obtain a large viewing zone constructed from densely arranged SVZs for both eyes of a viewer, spatial-multiplexing techniques for generating more SVZs developed rapidly over the past ten years, e.g. using a display panel with ultra-high resolution [3,4], adopting more display panels [5,6], or both [7]. For practical applications of the SMV technology, it is necessary to relax the demanding requirements on the resolution or the number of display panels. H. Urey proposed a display system similar to an SMV display using two conventional display panels [8]; light-selective filters were designed and mounted on Super Stereoscopy (SS3D) glasses to help deliver at least two parallax images per eye. Spatiotemporal multiplexing, which combines time- and spatial-multiplexing, is another feasible approach. Y. Takaki realized a spatiotemporal-multiplexing SMV system through time-multiplexing of a small number of high-speed DMDs [9]. Since the DMD is a passive display panel, Takaki's system needs an extra light source and additional optics to provide multiple incident light beams, which leads to a complicated system structure.

Employing active OLED microdisplays as display panels, D. Teng et al. developed a spatiotemporal-multiplexing SMV 3D display system through spatial-spectrum time-multiplexing of 12 planar-aligned OLED microdisplays [10], obtaining 60 SVZs with an interval of 1.6 mm. In that system, a projecting unit was constructed from an OLED microdisplay, a rectangular double-lens Fourier-transformation structure, and an array of equally spaced gating apertures. Multiple projecting units were aligned seamlessly along the horizontal direction, with a vertical baffle inserted between adjacent units to prevent mutual interference of light beams from adjacent OLED microdisplays. Because the baffles extended to the gating-aperture plane, there existed partial-viewing-zones (PVZs) where only part of a microdisplay was visible. To guarantee that a complete perspective view was visible at each SVZ, which was the image of a gating aperture, the equally spaced gating apertures had to have a large enough interval to avoid the PVZs. Therefore, an upper limit on the allowable SVZs per OLED microdisplay existed, even if OLED microdisplays with very high frame rates were commercially available.

In this manuscript, the vertical baffles are shortened to eliminate the PVZs through controllable fusing of light beams from adjacent OLED microdisplays. Arbitrarily small gating-aperture intervals thus become available, and the above-mentioned limitation is removed. A liquid-crystal panel (LCD) serves as the gating-aperture array, allowing the improved system to accommodate multiple rows of microdisplays for denser SVZs and a large vertical viewing zone. The system parameters are also optimized for uniform display resolution along both the horizontal and vertical directions.

The rest of this paper is organized as follows. In section 2, our previous spatiotemporal-multiplexing SMV technology is briefly explained and problems in the system are discussed. Section 3 describes the improved SMV system. Experiments and results are shown in Section 4. Section 5 optimizes the system parameters for uniform horizontal and vertical display resolutions. Section 6 provides a conclusion.

2. Problems in previous spatiotemporal-multiplexing SMV display system

2.1 Brief introduction of the previous display system

Figure 1 shows the optical diagram of the previous SMV system. A group of M symmetrically and equally spaced gating apertures (e.g. Ak1, Ak2, Ak3, Ak4 and Ak5 when M = 5) was placed on the spectrum plane Pspectrum of an OLED microdisplay k along the horizontal x-direction. The aperture group, together with the OLED microdisplay k and the corresponding rectangular double-lens (Lens1 and Lens2) Fourier-transformation structure, constituted a projecting unit. dx and dy denoted the horizontal and vertical sizes of the effective working face of the OLED microdisplay, respectively, and the size of each gating aperture was δx × δy. Ok1 and Ok2 were the optical centers of the two coaxial lenses of the double-lens Fourier-transformation structure. The two lenses had an identical focal length f, and their common optical axis passed perpendicularly through the geometrical center of the corresponding OLED microdisplay. The distance between the two lenses was f(m−1)/(m−2), where m was an integer larger than 2, and the OLED microdisplay was a distance f/m away from Lens1. According to the geometrical relationship, the Pspectrum plane was also f/m away from the double-lens structure. Through the Projection lens (focal length fp) behind the Pspectrum plane, the OLED microdisplay k was imaged onto a PP' zone on the focal plane (Pprojection) of the Projection lens. A two-step imaging process through the Projection lens and a lens (focal length fd, called the Field lens) located on the Pprojection plane imaged the gating apertures onto the observing plane (Pobser) as SVZs (e.g. SVZk1, SVZk2, SVZk3, SVZk4 and SVZk5 when M = 5). Here u1 and v1 represented the object and image distances of the first imaging, and u2 and v2 those of the second imaging. With the apertures gated sequentially, the 2D perspective views converging to the corresponding SVZs were refreshed synchronously by the OLED microdisplay k.
Specifically, at the time point t, the aperture Ak1 was gated and the other four apertures were closed; the 2D perspective view converging to the geometrical center of SVZk1 was displayed by the OLED microdisplay k. At t + Δt/5, Ak1 was closed and Ak2 opened, and the microdisplay was synchronously refreshed with the perspective view converging to the geometrical center of SVZk2. Likewise, at t + 2Δt/5, t + 3Δt/5 and t + 4Δt/5, the apertures Ak3, Ak4 and Ak5 were gated, respectively, with the corresponding 2D perspective views converging to the geometrical centers of SVZk3, SVZk4 and SVZk5 refreshed synchronously by the OLED microdisplay k. By repeating the above procedure cyclically, a multi-view display with M = 5 SVZs was realized by a single OLED microdisplay, provided the time cycle Δt was short enough for persistence of human vision. The light emission of an OLED pixel has a large divergence angle, and after optical Fourier transformation the light-intensity distribution on the spectrum plane Pspectrum is approximately homogeneous in the central region. So, the intensity presented by each 2D perspective view to its gating aperture was approximately equal as long as the gating apertures were not far from the optical axis of the double-lens Fourier-transformation structure. As a result, the observed images showed no obvious intensity fluctuation as an observation point scanned across different SVZs, which were the images of these gating apertures.
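The five-phase gating sequence described above can be sketched as a short loop; the aperture and SVZ labels below are illustrative strings only, standing in for the real LCD-gating and microdisplay-refresh commands, which are not specified in the text:

```python
# Sketch of the sequential gating cycle for one projecting unit (M = 5):
# at time slot j only aperture Ak(j+1) is open, and the microdisplay is
# refreshed with the view converging to SVZk(j+1).
M = 5  # gating apertures per projecting unit
apertures = [f"Ak{i}" for i in range(1, M + 1)]

def schedule(cycles=1):
    """Yield (time-slot, gated aperture, target SVZ) triples."""
    for c in range(cycles):
        for j, ap in enumerate(apertures):
            yield (c * M + j, ap, f"SVZk{j + 1}")

slots = list(schedule())
```

When the cycle period Δt is short enough for persistence of vision, replaying this schedule continuously yields the M-view display described above.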

 

Fig. 1 The optical diagram of our previous SMV system [10].


Multiple (Nx) projecting units were aligned side by side along the horizontal x-direction, as shown in Fig. 1. A group of vertical baffles were inserted between adjacent projecting units, extending to the gating apertures, to prevent mutual interference of light beams from adjacent OLED microdisplays. The horizontal interval between adjacent projecting units was a constant ΔDx, and the gating-aperture interval was ΔDx/M. For simplicity, only two adjacent projecting units, with sequence numbers k and k + 1, are drawn in Fig. 1. The common optical axis of the Projection lens and the Field lens was taken as the optical axis of the display system. Through the shared Projection lens and Field lens, the images of all gating apertures, i.e. the SVZs, lined up along the horizontal x-direction on the Pobser plane. Benefiting from the adopted optical structure, the projected 2D views from different OLED microdisplays overlapped precisely on the Pprojection plane, with the points P and P' as common boundary points. At each time point, a group of ΔDx-spaced gating apertures, i.e. one gating aperture per projecting unit, was gated. Over a time cycle consisting of M time points (e.g. t, t + Δt/5, t + 2Δt/5, t + 3Δt/5 and t + 4Δt/5 when M = 5), a total of MNx SVZs were obtained. When the SVZ interval (βΔDx/M) was smaller than the eye-pupil diameter, an SMV display was realized by the proposed system through time-multiplexing of the multiple OLED microdisplays. Here β = v1v2/(u1u2) was the magnification from a gating aperture to its SVZ.
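The SVZ bookkeeping of the previous system can be checked numerically. Note that β is not quoted for the previous system in this section; the value below is back-solved from the 1.6 mm SVZ interval reported in the introduction and is therefore only an assumption:

```python
# Numeric check of the previous system (Sec. 2.1): M = 5 apertures per
# unit, Nx = 12 planar-aligned microdisplays, unit pitch ΔDx = 17.2 mm.
M, Nx = 5, 12
delta_Dx = 17.2    # mm, horizontal pitch of projecting units
beta = 0.465       # assumed aperture-to-SVZ magnification (back-solved
                   # from the reported 1.6 mm interval, not stated here)

total_svz = M * Nx                    # total SVZs -> 60
svz_interval = beta * delta_Dx / M    # mm, below the eye-pupil diameter
```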

2.2 Problems encountered by the previous SMV system

A detailed optical diagram of a projecting unit k in the previous display system is shown in Fig. 2. The normal emission ray from a marginal pixel (Pk) of the OLED microdisplay intersects Lens1 and Lens2 at Ak and Bk, respectively. Ck is the intersection point of the lines AkBk and Ok1Ok2. MAk and MAk+1 are the horizontal marginal points of Lens2. Fk and Fk+1 are the end points of the two baffles on the Pspectrum. Through Lens1, the OLED microdisplay PkP'k is imaged onto IPkIP'k, which lies on the front focal plane of Lens2. Due to the baffle, part of the light from each OLED pixel is blocked. Among the unblocked light beams from the marginal OLED pixel Pk, the beam MAkEk, parallel to IPkOk2, is the marginal one. For any point in the EkFk zone, only a segment of the OLED microdisplay k is visible; specifically, for a point Gk in the EkFk zone, only the DkP'k part of PkP'k is visible. The boundary point Dk can be determined by the following three steps:

 

Fig. 2 Optical diagram of a projecting unit in the previous system.


  • Step 1: link the points MAk and Gk;
  • Step 2: through the point Ok2, draw a line parallel to MAkGk, which intersects IPkIP'k at IDk;
  • Step 3: on the OLED microdisplay, find the object point of IDk, which is the target point Dk.

Obviously, Dk moves dynamically with the point Gk. Because only part of the microdisplay k is visible from each point of this zone, EkFk is named a PVZ, and its size is geometrically expressed as EkFk = BkOk2 = 0.5dx/(m−1). Symmetrically, there exists another PVZ, E'kFk+1, adjoining the other vertical baffle, as shown in Fig. 2. Between the two PVZs, the EkE'k zone is defined as VZk, where the complete OLED microdisplay k remains visible.
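As a numeric sketch of this expression, using dx = 7.56 mm (the microdisplay width quoted later in Sec. 4):

```python
# PVZ half-width EkFk = 0.5 * dx / (m - 1); the widths on the two sides
# of a baffle combine to dx/(m - 1).
dx = 7.56  # mm, horizontal size of the OLED working face

def pvz_half_width(m):
    # m is the integer (> 2) fixing the double-lens geometry
    return 0.5 * dx / (m - 1)

# m = 4 (used in the experiment of Sec. 4) gives 1.26 mm per side,
# i.e. a combined width of 2.52 mm, matching the dx/(m-1) value there.
```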

When equally and symmetrically spaced gating apertures are placed on the Pspectrum of a microdisplay, the complete microdisplay must remain visible to each gating aperture, so as to guarantee that a complete perspective view projected from the microdisplay is visible at the corresponding SVZ. Therefore, the M gating apertures must be positioned to avoid the PVZs. As shown in Fig. 3, the following geometrical relationship must be satisfied:

 

Fig. 3 Spatial distribution of the gating apertures for avoiding the PVZ zones.


ΔDx/(2M) − δx/2 ≥ EkFk = dx/(2(m−1)), with m > 2
⇒ M ≤ ΔDx/(δx + dx/(m−1)) < ΔDx/δx    (1)

According to Eq. (1), the existence of PVZs leads to an upper limit on M, i.e. a limited number of allowable SVZs per OLED microdisplay. In the previous paper, a value of δx = 2.7 mm was preferred for better display resolution, and ΔDx = 17.2 mm was determined by the mechanical size of the adopted microdisplay. Then the maximum value of M is Mmax = 6 according to Eq. (1), which requires m to be as large as 47 with dx = 7.56 mm. To provide sufficient physical space for Lens2 and the gating apertures, M = 5 was used in the previous work. Although a larger ΔDx can increase Mmax, a smaller ΔDx is preferred in order to accommodate more microdisplays and thus generate more, denser SVZs. So, when microdisplays with very high frame rates become commercially available, the limited number of allowable SVZs per microdisplay will become a bottleneck for the proposed spatiotemporal-multiplexing technology. Another problem is the very limited spatial-multiplexing capacity and the narrow vertical viewing zone. Since the mechanical gating of the spectrum plane in the previous system relies on rotating a plate with an arc-aperture pattern, only one row of gating apertures is allowed, so the OLED microdisplays must be aligned in a single horizontal row. A finite-aperture Projection lens then accommodates only a very limited number of microdisplays, which means a very limited spatial-multiplexing capacity. Moreover, the rotating arc aperture implies a small vertical size of the gating apertures, which means a small vertical viewing zone and poor utilization of the light from the microdisplays.
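The bound of Eq. (1) can be verified numerically with the values quoted above; this is an illustrative check, not part of the system software:

```python
import math

# Eq. (1): M <= ΔDx / (δx + dx/(m-1)); the floor gives the maximum
# integer number of gating apertures per microdisplay.
delta_Dx = 17.2   # mm, horizontal pitch of projecting units
delta_x = 2.7     # mm, horizontal gating-aperture size
dx = 7.56         # mm, horizontal size of the OLED working face

def m_max(m):
    return math.floor(delta_Dx / (delta_x + dx / (m - 1)))

# m = 47 is the smallest m reaching Mmax = 6; m = 46 still gives only 5,
# which is why the text needs m "up to 47" for Mmax = 6.
```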

In the present manuscript, the display system is further developed to convert the PVZs into fusing zones (FZs), allowing an arbitrarily small gating-aperture interval. An LCD is introduced as the gating-aperture array to realize a wide vertical viewing zone and to let the system accommodate multiple rows of OLED microdisplays for denser SVZs.

3. Improved spatiotemporal-multiplexing SMV system

Figure 4 shows the modified projecting unit of the improved system. For simplicity, only two adjacent projecting units, with sequence numbers k−1 and k, are drawn. Compared with the previous projecting unit shown in Fig. 2, the key difference lies in the shortened vertical baffles, which now terminate at the plane of Lens2.

 

Fig. 4 Modified projecting units in the improved system.


For the point Gk in Fig. 4, i.e. the same point discussed in Sec. 2.2 (Fig. 2), which lay in a PVZ in the previous system, the DkP'k segment of the OLED microdisplay k remains visible in the improved system. At the same time, the shortened vertical baffles let some light rays from the adjacent microdisplay (here microdisplay k−1) enter the EkFk zone; that is, part of microdisplay k−1 becomes visible to the EkFk zone. Specifically, for the point Gk, the Pk−1Dk−1 segment of microdisplay k−1 is visible, with the position of Dk−1 determined by the three-step process of Sec. 2.2. Then, passing through the Projection lens of Fig. 1, the DkP'k segment of microdisplay k and the Pk−1Dk−1 segment of microdisplay k−1 are magnified and tiled together seamlessly on the Pprojection, forming a complete image that is visible to the image point of Gk on the Pobser. The two segments are refreshed with the perspective view converging to that image point when the point Gk is gated. For convenience, in the following sections we simply say that the tiled image is visible to the point Gk. So the EkFk zone, which was a PVZ in the previous system, becomes a fusing zone (FZ) where complete perspective views are observable. Symmetrically, the E'k−1Fk zone links with EkFk and forms a fusing zone FZk~k−1 in the same way. At each point of FZk~k−1, a complete perspective view tiled from two segments of the microdisplays k and k−1 is observable. Therefore, for any point on the spectrum plane, a complete perspective view can be obtained, whether it comes from a single microdisplay or is tiled from segments of two adjacent microdisplays. As a result, the gating apertures in the improved system no longer need to avoid any zones.
The maximum number of allowable SVZs per microdisplay, which was restricted by the PVZs in the previous system, is no longer limited in the improved system. With Nx projecting units aligned side by side along the horizontal direction, Nx−1 FZs and Nx VZs link up in sequence, and their images on the Pobser form a continuous horizontal viewing region.

In Fig. 4, the remaining segment PkDk of the microdisplay k, which is invisible to the point Gk, is presented to a ΔDx-spaced point Gk+1 located in an adjacent fusing zone, FZk+1~k. This applies to all microdisplays: for a group of ΔDx-spaced points in the FZs, the corresponding perspective views can be displayed by the Nx microdisplays simultaneously without crosstalk, although each perspective view is composed of two segments from adjacent microdisplays.

However, the gated zone is actually an aperture rather than a point. As shown in Fig. 5, when the point Gk is extended to a gating aperture G'kGk, the IDk2IP'k zone becomes observable through the aperture. Similar to the discussion of Fig. 2, here Ok2IDk2 is parallel to MAkGk. According to the object–image relation, the content displayed by the IDk2IDk part of IDkIPk belongs to the perspective view converging to the ΔDx-spaced point Gk+1. In other words, through a gating aperture G'kGk, part of a non-target perspective view is perceived, resulting in crosstalk. Such crosstalk always occurs when a group of ΔDx-spaced gating apertures located, fully or partially, in the FZs are gated at the same time point. To solve this problem, each such group of ΔDx-spaced apertures is gated over two time points, with non-adjacent apertures gated at each time point. When one such aperture is gated, the two related microdisplays both load the perspective view corresponding to the center of that aperture. For gating apertures located completely in the VZs, each group of ΔDx-spaced apertures is gated at a single time point, as in the previous system. With this arrangement, direct non-target perspective views are avoided at every SVZ. Other noise sources include light reflected or scattered by the baffles, and light leaking through the closed LCD-based gating apertures.

 

Fig. 5 Partial non-target perspective view is presented to a gating aperture as crosstalk, due to a spatial size of the gating aperture.


In the improved system, the sequential gating of apertures is implemented by an electrically controlled transmission-mode liquid-crystal panel placed on the Pspectrum plane. Under such a non-mechanical gating scheme, multiple (Ny) rows of projecting units can be introduced into the display system for more and denser SVZs. Along the vertical y-direction, horizontal baffles are inserted between adjacent rows of microdisplays. Unlike the vertical baffles, the horizontal baffles are longer, terminating at the Pspectrum. As an example, let Nx = 3, Ny = 3 and M = 5; the arrangement of the gating apertures is shown in Fig. 6, where ΔDy denotes the vertical interval between adjacent rows of microdisplays. Different from the aperture arrangement of the previous system shown in Fig. 1, the M gating apertures of one projecting unit are divided into M−1 complete apertures and two boundary half-apertures in the improved system, as shown in Fig. 6. With this arrangement, the number of groups of ΔDx-spaced gating apertures located fully or partially in the FZs is reduced by one relative to the previous arrangement. As discussed above, gating one such group of ΔDx-spaced apertures needs two time points, so the proposed arrangement reduces the number of required time points by one. Taking M = 5 as an example, with the previous arrangement, which places all the gating apertures of a projecting unit symmetrically, two gating apertures may fall into the FZs, as shown in the upper part of Fig. 7. Under this condition, the two groups of ΔDx-spaced gating apertures containing these two apertures need four time points, and the remaining three groups need three time points, for a total of seven. Under the same condition, with the proposed gating-aperture arrangement, four of the five gating apertures of a projecting unit lie in the VZ, as shown in the lower part of Fig. 7, and the number of required time points decreases to 4 × 1 + 1 × 2 = 6. With the frame rate of the OLED microdisplays set to 85 Hz in the following experiment and M = 5 gating apertures per projecting unit, the display frequency of the 3D image is int[85/7] = 12 Hz under the previous gating-aperture arrangement, where int[ ] denotes rounding down. With the new arrangement proposed here, the display frequency increases to int[85/6] = 14 Hz. That is, for the same number of SVZs per microdisplay, the proposed gating-aperture arrangement yields a display system with a higher display frequency.
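The time-point bookkeeping above can be condensed into a small helper; the group counts are those of Fig. 7, and `display_freq` is an illustrative function, not part of the system's control software:

```python
FRAME_RATE = 85  # Hz, frame rate of the OLED microdisplays and the LCD

def display_freq(groups_in_vz, groups_in_fz):
    """A VZ group of ΔDx-spaced apertures needs 1 time point; a group
    touching the FZs needs 2. Returns (time points, 3D display rate)."""
    time_points = groups_in_vz + 2 * groups_in_fz
    return time_points, FRAME_RATE // time_points  # int[85/points]

# Previous symmetric arrangement: 3 VZ + 2 FZ groups -> 7 points, 12 Hz
# Proposed arrangement:           4 VZ + 1 FZ group  -> 6 points, 14 Hz
```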

 

Fig. 6 Distribution of gating apertures on the Pspectrum.


 

Fig. 7 Arrangement of the gating-apertures for a projecting unit.


Along the horizontal direction, adjacent rows of microdisplays are offset successively by ΔDx/(MNy). Through the double-lens Fourier-transformation structures and the Projection lens, the images of all microdisplays overlap precisely on the Pprojection, but the SVZs from projecting units belonging to different rows do not overlap along the vertical direction. To obtain the vertical viewing zone, a vertical diffuser is attached to the Field lens to produce a vertical overlapping zone among all SVZs. Compared with the case of a single row of microdisplays, the quantity and density of SVZs along the horizontal direction are greatly increased in the improved system.

4. Experiments and results

Nx × Ny = 8 × 3 green OLED microdisplays from Yunnan North OLEiD Opto-Electronic Technology Co. of China are used to demonstrate the idea described above. The OLED microdisplays have a display area dx × dy = 7.56 × 10.08 mm², a resolution of 600 × 800, and a frame rate of 85 Hz; their shorter sides are set along the horizontal direction. Due to the integrated control chip, the total size of a microdisplay module is 17 mm × 22 mm. The baffles have a thickness of 0.1 mm, so ΔDx × ΔDy is set as 17.1 mm × 22.1 mm in our experiment. The baffles have low reflectivity for light blocking. Lens1 (f = 100 mm) and Lens2 (f = 100 mm) are cut into rectangular shapes (17 mm × 22 mm). Each projecting unit, with baffles attached, is packaged into a cuboid of 17.1 mm × 22.1 mm × 120 mm (W × H × L), and the close-packed projecting units (Nx × Ny = 8 × 3) are braced by a frame. An LCD with a working area of 204 mm × 108.2 mm and a frame rate of 85 Hz functions as the gating-aperture array. Limited by the frame rates of the microdisplays and the LCD, 6 time points are adopted in the prototype system, leading to a display frequency of 14 Hz. Although slight flicker is observed at this frequency in the experiments, it is sufficient for the sense of motion. According to the lower part of Fig. 7, when only one group of ΔDx-spaced gating apertures enters the FZs, the number of gating apertures per projecting unit reaches a maximum of Mmax = 5. To satisfy this condition, m = 4 is taken in the experiment so that the horizontal size of the FZ (dx/(m−1) = 2.52 mm) is smaller than the gating-aperture interval (ΔDx/M = 3.42 mm). The size of the gating apertures is set as δx × δy = 2.76 mm × 15 mm.

Two Fresnel lenses, with apertures of 279.4 mm × 279.4 mm and 100 mm × 100 mm, function as the Projection lens and the Field lens, respectively. To alleviate the wavefront aberration accompanying the use of Fresnel lenses, anti-distortion through a correction table is performed, as in [11]. A vertical diffuser with a diffusion angle of 8° is attached to the Field lens, resulting in a vertical viewing zone of about 50 mm. Figure 8 shows a photograph of the experimental display system. Other system parameters include fp = 609.6 mm, fd = 300 mm and u1 = 40 mm. Under this condition, β = 0.94, v2 = 572.5 mm, and the available horizontal display size is PP' = 61.4 mm. The display space for the target 3D object is set as 60 mm × 60 mm × 60 mm between the planes P1 and P2 of Fig. 1, whose distances Δz to the Pprojection are 30 mm. The horizontal viewing zone constructed by the 120 sub-viewing-zones is 128.4 mm, twice the average interocular distance (64 mm) of a viewer, and the horizontal interval between adjacent SVZs is as small as 1.07 mm. For an average pupil size of 4 mm under normal room brightness [12], at least three adjacent sub-viewing-zones can be captured completely, so the SMV effect remains active throughout the entire viewing zone along the Pobser. In the experiment, a CCD camera from OPTRONIS of Germany with an objective aperture of 4 mm is used to capture the images displayed by the prototype system. The CCD can move freely in the Pobser plane. For convenience of photographing, the CCD is placed near the Field lens in Fig. 8.
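These SVZ figures follow directly from the geometry; a minimal numeric check using the stated parameters:

```python
# SVZ count and pitch of the prototype. With Ny rows offset horizontally
# by ΔDx/(M*Ny), the SVZ pitch on the observing plane is β*ΔDx/(M*Ny).
M, Nx, Ny = 5, 8, 3
delta_Dx = 17.1   # mm, horizontal pitch of projecting units
beta = 0.94       # aperture-to-SVZ magnification

total_svz = M * Nx * Ny                    # 24 microdisplays x 5 views
svz_interval = beta * delta_Dx / (M * Ny)  # mm, ~1.07 mm
viewing_zone = total_svz * svz_interval    # mm, ~128 mm as reported
```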

 

Fig. 8 Photograph of the experimental display system.


The gating-aperture array reduces the optical efficiency of the prototype system through three mechanisms: a spatial one, given by the horizontal ratio of the apertures to the observing region, M × δx/ΔDx; a temporal one, given by the exposure time of each gating aperture per unit time; and a transmittance one, related to the LCD's transmittance. Experimentally, with the prototype system working, the light intensities at seven points in the viewing zone, spaced 19 mm apart horizontally, were measured with a CS-2000A luminance meter whose circular detecting area was set to a diameter of 4 mm. During the measurement, all microdisplays were refreshed with images at full brightness. To obtain the optical efficiency of the gating-aperture array, the measurements were carried out in two situations: with the gating-aperture array in the prototype system, and with it removed. At each measurement point, the ratio of the two measured values gives the optical efficiency of the gating-aperture array. The obtained values are 6.12%, 6.20%, 6.05%, 6.08%, 6.18%, 6.09%, and 6.21%, respectively.
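The three mechanisms combine multiplicatively. The sketch below back-solves the LCD transmittance implied by the measured efficiency; the 1/6 temporal factor (each aperture open for one of the six time points) and the resulting transmittance are assumptions inferred from the text, not measured quantities:

```python
M = 5                  # gating apertures per projecting unit
delta_x = 2.76         # mm, horizontal gating-aperture size
delta_Dx = 17.1        # mm, horizontal pitch of projecting units
time_points = 6        # time points per display cycle
measured_eff = 0.0613  # mean of the seven measured efficiencies

spatial = M * delta_x / delta_Dx    # ~0.81, horizontal aperture fill
temporal = 1 / time_points          # ~0.17 (assumed aperture duty cycle)
implied_T = measured_eff / (spatial * temporal)  # inferred transmittance
```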

For a 3D image displayed by an SMV system, images captured at two camera positions separated horizontally by less than the pupil diameter should differ. However, for the prototype system at the observing distance v2 = 572.5 mm, an interval smaller than the pupil diameter corresponds to a very small angular interval between the viewing directions of the two captured images, and thus to very little difference between them. Here, two parallel cuboids with a horizontal gap of 0.6 mm are displayed by the proposed system to demonstrate the difference between two images captured by the CCD at two closely spaced positions, as shown in Fig. 9(a). The spatial size of each cuboid is set as Δx × Δy × Δz = 2 mm × 40 mm × 60 mm. At the first CCD position, perpendicular to the end faces of the two cuboids along the z-direction, the gap between the two cuboids is obvious in the captured image, as seen in Fig. 9(b). After shifting the CCD by 2 mm horizontally, the captured gap shrinks due to occlusion by the cuboids, as shown in Fig. 9(c). The observable difference between Figs. 9(b) and 9(c) demonstrates the super-multiview effect of the proposed system.

 

Fig. 9 Captured images with CCD locating at two 2mm-spaced positions on the Pobserv along the horizontal direction.


Following the method in [13], the crosstalk of the proposed system is quantified by Eq. (2):

crosstalk(%) = noise/signal × 100 = (reflection + transmission)/signal × 100    (2)
where "reflection" is the luminance of light reflected or scattered by the baffles, "transmission" is the unexpected luminance transmitted through the closed gating apertures, and "signal" denotes the target message. Owing to the symmetry of the prototype system, the crosstalk at the five SVZs of one projecting unit characterizes the crosstalk of the whole system. The above-mentioned luminance meter is used here. When a SVZ is measured, the corresponding gating aperture is isolated by an extra filtering aperture whose clear aperture is set as 3.42 mm (i.e. ΔDx/M) × 15 mm to cover only that gating aperture. When measuring the "signal" value at a SVZ whose gating aperture lies in a VZ, the corresponding microdisplay is set to full brightness, all vertical baffles are removed from the working prototype, and the other microdisplays are powered off. Similarly, for a FZ-related SVZ, the two corresponding microdisplays are set to full brightness and all other microdisplays are powered off, but the vertical baffle between the two corresponding microdisplays is preserved in this case. The measured results are 10.2, 10.1, 10.5, 10.7, and 10.7 cd/m², respectively. Then all the baffles are reassembled into the system while the extra filtering aperture is retained. With all perspective views at full brightness, the measured results are 10.7, 10.3, 10.8, 10.8, and 11.1 cd/m², respectively; these values include both the "signal" and the "noise". For each SVZ, the difference between the two measurements gives the intensity of the "noise". According to Eq. (2), the maximum crosstalk is less than 5%, a rather small value.
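Eq. (2) can be evaluated directly; the sketch below uses the five per-SVZ luminance pairs reported in the text (signal-only, then signal plus noise):

```python
# Luminances in cd/m^2 at the five SVZs of one projecting unit.
signal = [10.2, 10.1, 10.5, 10.7, 10.7]            # baffle noise excluded
signal_plus_noise = [10.7, 10.3, 10.8, 10.8, 11.1] # baffles reassembled

# Eq. (2): crosstalk(%) = (reflection + transmission) / signal * 100,
# where the numerator is the difference of the paired measurements.
crosstalk = [100 * (t - s) / s for s, t in zip(signal, signal_plus_noise)]
max_crosstalk = max(crosstalk)   # stays below 5 %, as stated in the text
```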

A 3D pyramid is displayed to demonstrate the proposed idea and system. The images captured with the aforementioned CCD at different horizontal positions along the Pobserv are shown in Fig. 10.

 

Fig. 10 Images captured by the CCD located at a series of points on the Pobserv along the horizontal direction with a spatial interval of 19mm when the proposed display system works.


The prototype system is a horizontal-parallax-only SMV display. As the pupil moves within the vertical viewing zone, the displayed spatial spots undergo an undesirable movement, which deforms the displayed 3D object. Here, three spatial rectangular frames, at distances of 0 mm, 15 mm and 30 mm from the Pprojection respectively, are used to demonstrate this movement; the largest frame is the one on the Pprojection. With the camera kept focused on the middle frame, the other two frames blur due to defocus. The images captured by the camera at three positions along the vertical viewing zone, i.e. the ideal position and positions deviating from it by ±23 mm, are shown in Fig. 11. As the camera deviates vertically from the ideal position (the middle of the vertical viewing zone), the two out-of-focus frames shift correspondingly in the vertical direction; the smallest frame, farthest from the Pprojection, shows the larger offset. A vertical pupil-tracking technique can settle this problem and will be studied in our future work. Figure 12 shows how the blur changes when the largest and the smallest rectangular frames are brought into focus separately. These phenomena reveal that the prototype system is able to produce 3D images on which the human eye can focus.

 

Fig. 11 Images captured by the CCD located at three positions along the vertical viewing zone: the ideal position (middle one) and positions deviating from the ideal position by ± 23mm.


 

Fig. 12 Captured images when the largest and smallest rectangular frames are on-focus separately.


5. Optimization of system parameters for uniform display resolutions along two perpendicular directions

In the proposed SMV system, a spatial light spot is formed by superimposing incoherent cone-shaped light beams coming from different 2D perspective views. All displayed spatial spots can be regarded as lying on a series of parallel planes at different distances from the Pprojection plane. As discussed in [9], a cone-shaped beam has its minimum projection size on the Pprojection and keeps expanding as it moves away from the Pprojection plane. On the planes (P1 and P2) farthest from the Pprojection plane, the projection sizes of the cone-shaped beam reach their two extreme values. In the proposed system, to guarantee that a displayed spatial spot stays fixed for a viewer moving along the horizontal direction, the horizontal size of a SVZ must be no larger than the pupil diameter. So, along the horizontal direction, a cone-shaped beam is confined to one SVZ on the Pobserv. Geometrically, the cone-shaped beam can be approximated by the straight-edge beam shown in Fig. 13(a). Specifically, the horizontal extent of the straight-edge beam lies between the polygonal lines S∥1Q∥1M∥1 and S∥2Q∥2M∥2. Here Q∥1 and Q∥2 are the horizontal boundary points of the projected light spot on the Pprojection plane, and M∥1 and M∥2 are the boundary points of the SVZm. S∥1 and S∥2 are the intersection points of the lines M∥2Q∥1 and M∥1Q∥2 with the plane P1; T∥1 and T∥2 are the intersection points of the lines M∥1Q∥1 and M∥2Q∥2 with the plane P2. Therefore, S∥1S∥2 and T∥1T∥2 approximate the displayed light spots on the P1 plane and the P2 plane, respectively. Their lateral sizes, ε1 and ε2, follow geometrically as:

ε1 = [Δz(βδx + εd) + εd·v2] / v2
ε2 = [Δz(βδx − εd) + εd·v2] / v2        (3)
The larger of ε1 and ε2 represents the horizontal resolution limit of the display system. As a diffraction-limited incoherent imaging system, the imaging of the OLED microdisplays onto the Pprojection has the triangle function tri(·) as its optical transfer function, and the smallest discernible light spot on the Pprojection plane is:
εd = λfp/δx        (4)
Here λ = 532nm, the peak wavelength of the green OLED microdisplays used, and all lengths are in mm.
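Plugging in the parameter values quoted later in this section (fp = 595mm, δx = 2.8mm) gives the diffraction-limited spot directly:

```python
# Diffraction-limited spot on the Pprojection plane, Eq. (4): eps_d = lambda*fp/dx.
# Values from the text: lambda = 532 nm (green OLED peak), fp = 595 mm, dx = 2.8 mm.
lam_mm = 532e-6   # 532 nm expressed in mm
fp_mm = 595.0
dx_mm = 2.8
eps_d = lam_mm * fp_mm / dx_mm
print(f"eps_d = {eps_d:.3f} mm")
```

The result, about 0.11 mm, is comfortably below the 0.25 mm resolution limit reported in Section 5, so diffraction is not the dominant contribution to the spot size.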

 

Fig. 13 Geometrical diagram showing the displayed spot sizes of the 2D display planes in the 3D display space.


Along the vertical direction, the vertical size of the spot on the Pprojection is εd = λfp/δy. Thanks to the attached vertical diffuser, a large vertical viewing zone is obtained, which ensures that the cone-shaped beam covers the viewer's pupil completely on the Pobserv. Under this condition, the straight-edge beam that determines the perceived vertical display resolution is decided by δy and the pupil diameter of the viewer, as shown in Fig. 13(b). Similar to Eq. (3), the vertical resolution limit of the display system is the larger of ε1 and ε2:

ε1 = [Δz(Dpupil + εd) + εd·v2] / v2
ε2 = [Δz(Dpupil − εd) + εd·v2] / v2        (5)

In the proposed system, uniform display resolutions along the two orthogonal directions are pursued for a better 3D effect, so the system parameters are optimized to equalize the horizontal and vertical resolution limits. To keep the horizontal interval between adjacent SVZs at about 1mm, β is set to around 0.9 in the experiment, and (m/(m−1))·(fp/f) is set to 60/7.56 to ensure a horizontal display size of 60mm. Under these prerequisites, numerical simulations based on Eqs. (3) and (5) were carried out to study the influence of the system parameters on the display resolution limits. According to the simulations, the values of u1, m, and β have little effect on the lateral display resolution limit; the parameters that play decisive roles are f and the gating-aperture size.
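A single evaluation of the resolution limits illustrates the simulation. The sketch below uses the straight-edge-beam spot sizes as we read Eqs. (3) and (5), with the paper's β, δx, δy and fp; the viewing distance v2, the depth Δz of the extreme planes, and the pupil diameter are illustrative assumptions not stated in the text:

```python
# Resolution limit from the straight-edge-beam model: the larger of the spot
# sizes eps1/eps2 on the extreme planes P1/P2. The beam has width eps_d on
# the Pprojection and width w on the Pobserv (w = beta*dx horizontally,
# the pupil diameter vertically).

def spot_limit(delta_z: float, w: float, eps_d: float, v2: float) -> float:
    eps1 = (delta_z * (w + eps_d) + eps_d * v2) / v2  # crossed-edge pair
    eps2 = (delta_z * (w - eps_d) + eps_d * v2) / v2  # parallel-edge pair
    return max(eps1, eps2)

eps_d_h = 532e-6 * 595.0 / 2.8    # horizontal diffraction spot, Eq. (4)
eps_d_v = 532e-6 * 595.0 / 20.0   # vertical spot, using dy = 20 mm
v2, dz = 600.0, 30.0              # assumed viewing distance and depth (mm)
res_h = spot_limit(dz, 0.9 * 2.8, eps_d_h, v2)   # w = beta * dx
res_v = spot_limit(dz, 5.0, eps_d_v, v2)         # w = assumed 5 mm pupil
print(f"horizontal limit: {res_h:.2f} mm, vertical limit: {res_v:.2f} mm")
```

With these assumptions both limits come out near the 0.25mm value reported for the optimized system; sweeping f (which changes βδx and εd through the imaging geometry) is what produces the two curves of Fig. 14.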

Figure 14 shows the evolution of the horizontal and vertical resolution limits as functions of f. The two curves intersect at f = 99mm, where identical horizontal and vertical display resolution limits of 0.25mm are obtained. With f = 99mm, the corresponding display resolution distributions are drawn in Fig. 15. It can be seen that the gating aperture takes the lateral sizes δx × δy = 2.8mm × 20mm for resolution limits of 0.25mm. In summary, the preferred parameters for the prototype display system are f = 99mm, fp = 595mm, fd = 300mm, and δx × δy = 2.8mm × 20mm.

 

Fig. 14 Evolutions of the lateral resolution limits (including horizontal and vertical resolution limits) as a function of f.


 

Fig. 15 Evolutions of lateral display resolutions on the P1 and P2 plane as a function of the lateral width of the gating aperture size at f = 100mm.


In the experiment, f and fp are set to 100mm and 609.6mm, respectively, values close to the preferred ones and commercially available. Experimentally, the lateral display resolution limit is 0.253mm at δx = 2.76mm. To obtain a uniform lateral display resolution limit, the vertical size of the gating aperture is set to δy = 15mm.

6. Conclusions

In conclusion, an improved spatiotemporal-multiplexing SMV display system has been developed through controllable fusing of light beams from adjacent OLED microdisplays. The system removes the limitation on the allowable SVZs per microdisplay in the previous display system, and paves the way for 3D displays with dense SVZs, especially once high-speed microdisplays become commercially available. The optimization of the system parameters for equal horizontal and vertical display resolution limits has been discussed. Experimentally, employing three rows of OLED microdisplays with a frame rate of 85Hz, 120 sub-viewing-zones with an interval as small as 1.07mm were realized. Owing to the limited frame rate of the OLED microdisplays used, slight flicker is observed in the prototype system. If microdisplays with ultra-high frame rates become commercially available, the flicker can be effectively alleviated and denser SVZs can be expected. For example, if the frame rate of the OLED microdisplays reached 240Hz in the prototype system, M = 7 SVZs per microdisplay could be implemented with only one group of ΔDx-spaced gating apertures located at the FZs. In this case, the display frequency would increase to 30Hz and the interval between adjacent SVZs would decrease to 0.76mm, offering a more comfortable and natural 3D effect.
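The scaling claimed for a faster microdisplay can be checked arithmetically; a short sketch under the assumption (consistent with the text) that the SVZ interval shrinks in proportion to the number of SVZs per microdisplay:

```python
# Scaling check for a 240 Hz microdisplay in the prototype geometry.
# Current system: 24 microdisplays at 85 Hz, M = 5 SVZs each (120 total),
# interval 1.07 mm. Proposed: M = 7 SVZs within a 30 Hz display cycle.
n_micro, m_old, interval_old = 24, 5, 1.07
m_new, frame_rate, display_hz = 7, 240, 30

sub_frame_budget = frame_rate // display_hz   # 8 sub-frames per display cycle
assert m_new <= sub_frame_budget              # 7 SVZs fit in the time budget

interval_new = interval_old * m_old / m_new   # assumed proportional scaling
print(f"total SVZs: {n_micro * m_old}, new interval: {interval_new:.2f} mm")
```

The computed interval of about 0.76 mm reproduces the figure given above, and the sub-frame budget confirms that M = 7 time slots fit within a 30 Hz display cycle at 240 Hz.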

Acknowledgments

The authors gratefully acknowledge support by the Guangzhou Technical Plan Project (No. 201510010280), the National High-tech R&D Program of China (Grant Nos. 2013AA03A106 and 2015AA03A101), the National Natural Science Foundation of China (Grant Nos. U1201254, 11204384, and 61172027), and the Guangdong Natural Science Foundation (Grant No. 2014A030311049).

References and links

1. Y. Kajiki, H. Yoshikawa, and T. Honda, “Hologram-like video images by 45-view stereoscopic display,” Proc. SPIE 3012, 154–166 (1997).

2. Y. Kajiki, H. Yoshikawa, and T. Honda, “Ocular accommodation by super multi-view stereogram and 45-view stereoscopic display,” Proceedings of the Third International Display Workshops (IDW’96), 2, 489–492 (1996).

3. Y. Takaki, “Thin-type natural three-dimensional display with 72 directional images,” Proc. SPIE 5664, 56–63 (2005).

4. Y. Takaki, Y. Urano, S. Kashiwada, H. Ando, and K. Nakamura, “Super multi-view windshield display for long-distance image information presentation,” Opt. Express 19(2), 704–716 (2011).

5. H. Nakanuma, H. Kamei, and Y. Takaki, “Natural 3D display with 128 directional images used for human-engineering evaluation,” Proc. SPIE 5664, 28–35 (2005).

6. J. H. Lee, J. Park, D. Nam, S. Y. Choi, D. S. Park, and C. Y. Kim, “Optimal projector configuration design for 300-Mpixel multi-projection 3D display,” Opt. Express 21(22), 26820–26835 (2013).

7. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18(9), 8824–8835 (2010).

8. K. Akşit, A. H. G. Niaki, E. Ulusoy, and H. Urey, “Super stereoscopy technique for comfortable and realistic 3D displays,” Opt. Lett. 39(24), 6903–6906 (2014).

9. T. Kanebako and Y. Takaki, “Time-multiplexing display module for high-density directional display,” Proc. SPIE 6803, 68030P (2008).

10. D. Teng, L. Liu, and B. Wang, “Super multi-view three-dimensional display through spatial-spectrum time-multiplexing of planar aligned OLED microdisplays,” Opt. Express 22(25), 31448–31457 (2014).

11. Y. Takaki and H. Nakanuma, “Improvement of multiple imaging system used for natural 3D display which generates high-density directional images,” Proc. SPIE 5243, 42–49 (2003).

12. J. Y. Son and B. Javidi, “Three-dimensional imaging methods based on multiview images,” J. Disp. Technol. 1(1), 125–140 (2005).

13. D. Teng, L. Liu, and B. Wang, “Generation of 360° three-dimensional display using circular-aligned OLED microdisplays,” Opt. Express 23(3), 2058–2069 (2015).
