Abstract

Color moiré occurs owing to the subpixel structure of the display panel in the integral three-dimensional (3D) display method, deteriorating the 3D-image quality. To address this, we propose a method for reducing the color moiré and simultaneously improving the 3D-image resolution by combining multiple 3D images. In the prototype system, three 3D display units, each with a lens array closely attached to an 8K-resolution display panel, are optically combined. By controlling the phase of the color moiré of the 3D image generated on each display and by shifting and combining the elemental lenses constituting the lens arrays, the color moiré is sufficiently reduced while the deterioration of the 3D-image quality at positions distant from the lens array in the depth direction is suppressed, along with an approximately two-fold enhancement of the resolution near the lens array.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The integral three-dimensional (3D) display is a spatial-image-reconstruction-type 3D display method for reproducing moving 3D images, based on the principle of integral photography [1]. It provides full parallax both horizontally and vertically, and natural 3D images can be reproduced at continuous positions in the depth direction [2]. It is a representative light-field display method, which reproduces light rays equivalent to those emitted from a real object in 3D space by combining elemental images and the corresponding elemental lenses.

To realize a thin integral 3D display system, it is necessary to display the elemental images on a high-definition flat display panel. However, considerable low-frequency color moiré appears in the 3D image when a flat display panel is used for integral 3D display. This moiré occurs as a beat between the sampling frequency of the lens array and the frequencies of the red (R), green (G), and blue (B) subpixel structures. The same problem occurs in the binocular stereoscopic method, which uses a parallax barrier instead of special glasses, and in the multi-view 3D method, which provides only horizontal parallax using a parallax barrier or a lenticular lens, as in the integral 3D method. Methods have been proposed to solve this problem, including allocating horizontal lines alternately to the images for the left and right eyes [3]; turning the display panel by 90° so that the color filters form a horizontal stripe arrangement; micronizing the lenticular lens and setting the lens interval to two subpixels (2/3 pixels) [4]; and slanting the lenticular lens at an angle such that moving by one subpixel in the horizontal direction corresponds to moving by one pixel in the vertical direction, so that each lens sequentially reads the RGB subpixels in the longitudinal direction. Even in an integral 3D display, by rotating the lens array within the display surface to create an angle with respect to the subpixel arrangement, it is possible to shift the moiré component to the high-frequency region and render the moiré inconspicuous [5–7].

These methods do not reduce the moiré itself but shift it to a higher spatial frequency. In this case, the highest frequency at which the moiré is least conspicuous is 1/3 of the sampling frequency of the lens array, and the moiré is perceived unless it is observed from a distance at which three lenses cannot be distinguished. This also implies that the displayable spatial frequency in the horizontal direction is 1/3 of the elemental-lens sampling frequency. A method that uses a special color-filter structure to display a single color with one lens and expresses colors with a total of three lenses has been reported, along with an application example for integral 3D display [8]. However, it has resolution problems similar to those mentioned above. As methods that reduce the modulation of the moiré itself, the installation of a diffusion plate on the display-panel surface and the use of elemental-lens defocus, obtained by shifting the display-panel surface from the depth position of the elemental-lens focal length, have been proposed [9]. However, both of these methods involve problems with the resolution characteristics of the elemental images and the depth reproducibility of the 3D images [10, 11].

As described above, there has been no technique for reducing the modulation of the moiré in a flat-panel integral 3D display system without reducing the resolution and depth reproducibility. Narrow-pixel-pitch flat-panel color display devices without subpixel structures have been researched, and the above-mentioned problems could be solved if they were applied to elemental-image display [12]; however, there are problems in multipixel formation and mass production, and such devices have not yet been put to practical use. For flat-panel integral 3D displays, the improvement of the resolution and enlargement of the viewing zone [13, 14] and the expansion of the depth reproduction range [15] by double-unit combination have been researched, as has the improvement of the resolution by spatiotemporal multiplexing of a display [16]; however, the occurrence of color moiré and its reduction have not been discussed. In addition, methods have been proposed that use optical lenses with negative focal length to demagnify the 3D image and thereby increase the pixel density and spatial resolution [17]. Further, methods have been proposed to increase the total number of pixels in the elemental images by tiling the display elements and optical systems [18, 19]; however, a similar color moiré problem exists. Although the problem of color moiré does not occur with the projection-type method [20], the size of the device is not suitable for portable applications owing to the projection distance.

In the image acquisition stage, a method of shifting the elemental-lens array together with a pickup lens has been proposed as a computational synthetic aperture integral imaging technique that increases the field of view and viewing angle [21]. A monospectral image encryption method, in which a multispectral color image is acquired using heterogeneous monospectral cameras, has also been proposed [22]. By analogy, it is considered that display quality can be improved by similar devices, such as shifting the lenses and combining multiple displays.

In view of the above, we propose a method for reducing color moiré and further improving the resolution by combining multiple flat-panel integral 3D display units to synthesize 3D images [23, 24]. This makes it possible to improve the overall reproduction quality of the reconstructed 3D image. We demonstrate that a reduction effect can be obtained by adjusting the relative position between the elemental images and elemental lenses so that the phases of the color moiré of the 3D images generated in the individual display units cancel each other. We analyze the moiré reduction effect in detail and theoretically clarify the improvement that can be achieved when double- or triple-unit systems are combined, compared to a single display unit. In addition, we show that the sampling points of the 3D image can be increased at equal intervals by shifting the positions of the display-unit lens arrays away from each other, according to the lens arrangement and the number of combined units; moreover, we analyze the resolution improvement theoretically. We perform a display experiment for the case of a triple-unit combination in the prototype system and confirm that the moiré can be effectively reduced, along with a simultaneous improvement of the resolution of the reconstructed image on the lens array. Furthermore, a conventional moiré reduction method using defocus of the lens array is additionally applied with a weak degree of defocus. As a result, it is confirmed that the moiré can be further reduced and the overall display quality can be improved, while suppressing the quality deterioration in the depth direction of the 3D image.

 

Fig. 1 (a) Occurrence of color moiré, (b) Spatial luminance distribution on a multiple combined display panel (equivalent representations).


2. Principle of occurrence and reduction of color moiré

2.1. Occurrence of color moiré in a flat-panel integral 3D display

A flat-panel display device generally has a subpixel structure for each RGB color. For example, in the case of vertically elongated stripe-shaped subpixels, the RGB subpixels are repeatedly arranged in the horizontal direction of the screen. Therefore, aliasing occurs at low frequencies because of sampling using a lens array. This state is depicted schematically in Fig. 1(a). Focusing on the G subpixels as an example, the pixel pitch, $P_{px}$, and spatial frequency, $f_{px}$, are related as $P_{px} = 1/f_{px}$. The spatial frequency, $f_{moire}$, of the aliasing beat (i.e., the moiré) caused by the pixel and elemental lens is

$$f_{moire} = \min_n \left| f_{px} - n f_{Lx} \right|, \qquad (1)$$
when the elemental lens pitch is $P_{Lx} = 1/f_{Lx}$ and $n$ is an arbitrary integer that minimizes $f_{moire}$. As seen from Eq. (1), moiré in a frequency domain lower than the sampling frequency of the elemental lenses, $f_{Lx}$, appears conspicuously in the 3D image. The same applies to R and B. The moiré patterns corresponding to the individual colors overlap with their phases shifted by $1/(3f_{moire})$ from each other; hence, the observer perceives them as remarkable color moiré. In this paper, we discuss the moiré of G alone, because the occurrence of color moiré follows the same principle as that of monochrome moiré, as described above. Here, we consider the case of displaying a G monochrome image with 100% luminance (i.e., a maximum-range direct-current (DC) source signal is given to the G channel) on the entire display panel of a single display unit. When the subpixels have a vertical-striped structure, they include periodic components only in the horizontal direction and DC components in the vertical direction; hence, only one dimension, along the horizontal x-axis, needs to be considered. Figure 1(b-1) shows only the G monochrome pixels extracted from the display panel of Fig. 1(a). For simplicity, it is assumed that the aperture ratio of the pixels is 100%, the black matrix is not considered, and each RGB subpixel aperture is 1/3 of the pixel aperture, $a_{px} = P_{px}$. The luminance, $c_1(x)$, on the display panel read out by each elemental lens of the lens array, considering the influence of defocus, can be represented by a Fourier series as a superposition of the fundamental and harmonic waves, as in the following formula [9]:
$$c_1(x) = \frac{c_0}{3}\left[ F(0) + 2\sum_{n=1}^{\infty} \mathrm{sinc}\!\left(\frac{n}{3}\right) \cos\!\left(\frac{2\pi n x}{P_{px}}\right) F(2\pi n) \right], \qquad (2)$$
$$F(2\pi n) = \frac{2 J_1(2\pi n \rho)}{2\pi n \rho}. \qquad (3)$$
Here, $n$ is an arbitrary integer, $F(2\pi n)$ is the amplitude factor of the $n$-th harmonic, $J_k(\cdot)$ is the $k$-th order Bessel function of the first kind, and $\rho$ is the radius of the confusion circle when the elemental lens is defocused; the normalized sinc function is defined as $\mathrm{sinc}(n) = \sin(\pi n)/(\pi n)$. If $d$ is the diameter of the elemental lens, $l_f$ is the focal length, and $g$ is the distance between the principal point of the elemental lens and the elemental image, then $\rho$ is
$$\rho = \frac{d}{2}\left[\left(\frac{1}{l_f} - \frac{1}{L}\right) g - 1\right], \qquad (4)$$
where $L$ is the distance from the lens array to the observer.

In this case, the modulation degree, $m(c_1)$, of the moiré is

$$m(c_1) = \frac{\max\{c_1(x)\} - \min\{c_1(x)\}}{\max\{c_1(x)\} + \min\{c_1(x)\}}. \qquad (5)$$
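
As a numerical illustration of Eqs. (2)–(5), the following Python sketch evaluates the single-panel luminance profile $c_1(x)$ and its moiré modulation degree for several defocus radii. It assumes $P_{px} = 1$ and $c_0 = 1$, truncates the Fourier series at a finite number of harmonics, and the function names are ours, not taken from the paper.

import numpy as np
from scipy.special import j1  # first-order Bessel function of the first kind

def amplitude_factor(n, rho):
    # Eq. (3): F(2*pi*n) for a confusion-circle radius rho (in pixels)
    arg = 2.0 * np.pi * n * rho
    safe = np.where(np.abs(arg) < 1e-12, 1.0, arg)
    return np.where(np.abs(arg) < 1e-12, 1.0, 2.0 * j1(safe) / safe)

def c1(x, rho, n_terms=60):
    # Eq. (2): luminance of the G subpixels read out through a defocused elemental lens (P_px = 1, c0 = 1)
    n = np.arange(1, n_terms + 1)
    harmonics = np.sinc(n / 3.0) * amplitude_factor(n, rho)   # np.sinc is the normalized sinc of Eq. (2)
    return (1.0 + 2.0 * np.cos(2.0 * np.pi * np.outer(x, n)) @ harmonics) / 3.0   # F(0) = 1

def modulation(c):
    # Eq. (5): moiré modulation degree
    return (c.max() - c.min()) / (c.max() + c.min())

x = np.linspace(0.0, 1.0, 2001)          # one pixel pitch
for rho in (0.0, 0.6, 1.62):
    print(f"rho = {rho:4.2f} px  ->  m(c1) = {modulation(c1(x, rho)):.4f}")
# note: series truncation causes Gibbs ringing at rho = 0, so m(c1) there overshoots the ideal value of 1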

 

Fig. 2 (a) Composition of a triple-unit combination, (b) Spatial distribution of the color moiré and schematic of the suppression by a triple-unit combination.


 

Fig. 3 Relationship between the radius of the elemental-lens confusion circle and the moiré modulation degree.


2.2. Moiré reduction by multiple display combination

We discuss the moiré reduction technique combining multiple display units. A double-unit combination is realized with a half mirror; Fig. 2(a) shows an example of a triple-unit combination. As shown in Fig. 2(b), in the case of a triple-unit combination, the moiré is suppressed by superimposing the moiré patterns after shifting their phases by 1/3 of a period for each display. Let the luminance of the virtually combined display panels be $c_2(x)$ and $c_3(x)$ for a double- and triple-unit combination, respectively. Considering the elemental images on the display panel, as shown in Fig. 2(a), the positional relationship of the display devices that minimizes the moiré on the lens array is equivalent to the case where the pixels are shifted by 1/2 and 1/3 of the pixel pitch, as shown in Figs. 1(b-2) and 1(b-3), respectively, and can be expressed using Eq. (2) as

$$c_2(x) = c_1(x) + c_1\!\left(x + \frac{P_{px}}{2}\right), \qquad (6)$$
$$c_3(x) = c_1(x) + c_1\!\left(x + \frac{P_{px}}{3}\right) + c_1\!\left(x + \frac{2P_{px}}{3}\right). \qquad (7)$$

The modulation degree of the moiré is equal to the modulation degree calculated from the luminance distribution on these virtually composited display panels. Figure 3 shows the moiré modulation degrees $m(c_1)$, $m(c_2)$, and $m(c_3)$ calculated for each case using Eq. (5). As shown in Eq. (4), the radius of the confusion circle, $\rho$, takes a negative value when the distance, $g$, between the lens array and the display panel is less than the focal length, $l_f$, and a positive value when the distance is longer than the focal length. Compared to the case of a single display, there is a remarkable moiré reduction in the double-unit combination, except when the radius of the confusion circle at the time of defocus is extremely small, i.e., 0.03 pixels or less. For practical use, the modulation degree of the moiré should be kept below 1%, where the value for ρ = 0 in the case of a single display unit is normalized to 100% [9]. For example, the radius of the confusion circle that reduces the modulation degree of the moiré to 1% is 1.62 pixels for a single display but 0.56 pixels for a double-unit combination. As described in detail later, reducing the radius of the confusion circle (i.e., the degree of defocus) contributes to improving the MTF of the reconstructed image at positions distant from the lens-array plane. Moreover, the moiré can theoretically be completely suppressed in the case of a triple-unit combination.
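
The cancellation expressed by Eqs. (6) and (7) can be checked numerically, in the same spirit as Fig. 3. The sketch below superposes phase-shifted copies of $c_1(x)$ (re-stated compactly under the same assumptions, $P_{px} = 1$) and prints the modulation degrees of the single, double, and triple combinations; the specific ρ values are only illustrative.

import numpy as np
from scipy.special import j1

def c1(x, rho, n_terms=60):
    # compact restatement of Eq. (2) with P_px = 1 and c0 = 1
    n = np.arange(1, n_terms + 1)
    arg = 2 * np.pi * n * rho
    safe = np.where(np.abs(arg) < 1e-12, 1.0, arg)
    F = np.where(np.abs(arg) < 1e-12, 1.0, 2 * j1(safe) / safe)      # Eq. (3)
    return (1 + 2 * np.cos(2 * np.pi * np.outer(x, n)) @ (np.sinc(n / 3) * F)) / 3

def m(c):
    return (c.max() - c.min()) / (c.max() + c.min())                  # Eq. (5)

x = np.linspace(0, 1, 2001)
for rho in (0.0, 0.3, 0.56, 0.6):
    single = c1(x, rho)
    double = c1(x, rho) + c1(x + 1 / 2, rho)                          # Eq. (6)
    triple = c1(x, rho) + c1(x + 1 / 3, rho) + c1(x + 2 / 3, rho)     # Eq. (7)
    print(f"rho = {rho:4.2f} px   m(c1) = {m(single):.4f}   m(c2) = {m(double):.4f}   m(c3) = {m(triple):.6f}")
# m(c3) vanishes (up to numerical noise): the 1/3-pitch superposition keeps only harmonics with n a multiple
# of 3, and sinc(n/3) = 0 for exactly those harmonics.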

In the actual composition of an integral 3D display unit, the subpixels of the display panel are sampled by the elemental lenses and then synthesized on the screen. As one cycle of the luminance distribution on the display-panel surface is equal to the pixel interval, $P_{px}$, if an elemental lens is moved by $P_{px}$ on the display panel, the phase of the moiré changes by one cycle accordingly. Although the relative positional relationship between the pixels and the elemental lenses needs to be adjusted on the order of $P_{px}/3$, which is the subpixel interval, the observed phase shift of the moiré is magnified by the sampling of the lens array; therefore, it can be easily adjusted using a fine motion-adjustment mechanism.
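
This magnification of the observed phase shift can be illustrated with Eq. (1). In the hypothetical example below, the elemental-lens pitch is taken as 4.7 pixels (a made-up value, not the Table 1 specification): the moiré period is then more than ten pixels, so a lens displacement of $P_{px}/3$ appears as a shift of one third of that much longer period and is easy to judge while adjusting the stage.

import numpy as np

P_px = 1.0                      # pixel pitch (normalized)
P_Lx = 4.7 * P_px               # hypothetical horizontal elemental-lens pitch
f_px, f_Lx = 1 / P_px, 1 / P_Lx
n = np.arange(1, 50)
f_moire = np.min(np.abs(f_px - n * f_Lx))              # Eq. (1)
P_moire = 1 / f_moire
print(f"moire period              : {P_moire:5.1f} pixels")
print(f"shift for a P_px/3 offset : {P_moire / 3:5.1f} pixels on the moire pattern")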

3. Resolution enhancement by the lens shift method

3.1. Resolution characteristics of the reconstructed image on the lens array

Moiré reduction is achieved by combining multiple integral 3D display units. The relative positions of the elemental lenses that satisfy the moiré reduction condition in the combination are quantized at intervals of $P_{px}$. Within this constraint, the elemental lenses may be superimposed at the same position or shifted from each other; any position can be selected as long as the moiré reduction condition is satisfied.

To improve the resolution characteristics along with reducing the color moiré, methods that shift the lens positions by combining displays [13, 14] or by spatiotemporal multiplexing of a display [16] are effective. We previously used square-arrangement lens arrays and a double-unit combination with the lens shift method [23]. This has the same effect as the display-pixel shifting used for two-dimensional (2D) images with a square-lattice pixel structure [25, 26].

 

Fig. 4 (a) Lens-shift combination with triple-unit and (b) its displayable spatial frequency bandwidth.


As the number of sampling points of the 3D image increases with the number of combined display units, the resolution of the reconstructed image can be improved. From the discussion in the previous section, it is preferable to combine three units to reduce the moiré. Moreover, when the lens array is in a hexagonal arrangement, geometrically equidistant lens shifting cannot be performed with a double-unit combination; however, in the case of a triple-unit combination, the lens shift can be achieved at equal intervals. Figure 4(a) shows the configuration of the elemental lenses for a triple-unit combination. If the horizontal direction is the x-axis and the vertical direction is the y-axis, the sampling points of the lens arrays of the three display units, $h_{31}(x,y)$, $h_{32}(x,y)$, and $h_{33}(x,y)$, are as follows:

$$h_{31}(x,y) = \mathrm{comb}\!\left(\frac{x}{\delta_x}, \frac{y}{\delta_y}\right) + \mathrm{comb}\!\left(\frac{x - \delta_x/2}{\delta_x}, \frac{y - \delta_y/2}{\delta_y}\right), \qquad (8)$$
$$h_{32}(x,y) = h_{31}\!\left(x, y - \frac{\delta_y}{3}\right) = \mathrm{comb}\!\left(\frac{x}{\delta_x}, \frac{y - \delta_y/3}{\delta_y}\right) + \mathrm{comb}\!\left(\frac{x - \delta_x/2}{\delta_x}, \frac{y - 5\delta_y/6}{\delta_y}\right), \qquad (9)$$
$$h_{33}(x,y) = h_{31}\!\left(x, y + \frac{\delta_y}{3}\right) = \mathrm{comb}\!\left(\frac{x}{\delta_x}, \frac{y + \delta_y/3}{\delta_y}\right) + \mathrm{comb}\!\left(\frac{x - \delta_x/2}{\delta_x}, \frac{y - \delta_y/6}{\delta_y}\right). \qquad (10)$$

Here, $\mathrm{comb}(\cdot,\cdot)$ is a 2D comb function (periodic delta function), and $\delta_x = P_L$ and $\delta_y = \sqrt{3}\,P_L$ are the sampling pitches of the lenses in the x- and y-directions, respectively; $P_L$ is the lens pitch along a line in the horizontal or oblique 60° direction, in which the lenses are in contact with each other. From the Fourier transforms of these equations, the sampling carrier frequencies $H_{31}(u,v)$, $H_{32}(u,v)$, and $H_{33}(u,v)$ are as follows:

$$H_{31}(u,v) = |\delta_x \delta_y|\,\mathrm{comb}(\delta_x u, \delta_y v)\left[1 + \exp(-i\pi\delta_x u)\exp(-i\pi\delta_y v)\right], \qquad (11)$$
$$H_{32}(u,v) = |\delta_x \delta_y|\,\mathrm{comb}(\delta_x u, \delta_y v)\left[\exp\!\left(-\frac{2i\pi\delta_y v}{3}\right) + \exp(-i\pi\delta_x u)\exp\!\left(-\frac{5i\pi\delta_y v}{3}\right)\right], \qquad (12)$$
$$H_{33}(u,v) = |\delta_x \delta_y|\,\mathrm{comb}(\delta_x u, \delta_y v)\left[\exp\!\left(\frac{2i\pi\delta_y v}{3}\right) + \exp(-i\pi\delta_x u)\exp\!\left(-\frac{i\pi\delta_y v}{3}\right)\right]. \qquad (13)$$

Note that $u$ and $v$ are the horizontal and vertical spatial frequencies, respectively. The carrier spectrum, $H_3(u,v)$, of the triple-unit combination is the summation of these complex spectra:

$$H_3(u,v) = H_{31}(u,v) + H_{32}(u,v) + H_{33}(u,v). \qquad (14)$$

Figure 4(b) shows the spatial frequencies of the 3D image that can be represented by this sampling. In the case of a single unit (Eq. (11)), the carrier frequencies are indicated by white and black circles and the Nyquist limits by solid lines, whereas in the case of a triple-unit combination (Eq. (14)), they are indicated by black circles alone and broken lines, respectively. The displayable range within the Nyquist limit is tripled in the y-axis direction, i.e., the vertical bandwidth is tripled. However, this considers only the sampling, and the effect of the elemental-lens aperture is not taken into account. To evaluate the real image quality, it is necessary to consider the influence of the spatial frequency response due to the lens aperture.
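
The lattice geometry of Eqs. (8)–(10) can be verified directly by generating the lens-center positions of the three units and checking the vertical sampling pitch of their union; the sketch below assumes $P_L = 1$ and works with discrete lattice points rather than comb functions.

import numpy as np

P_L = 1.0
dx, dy = P_L, np.sqrt(3) * P_L                # delta_x, delta_y

def unit_lattice(y_shift, n=6):
    # Eq. (8): two interleaved rectangular grids forming one hexagonal lens array,
    # shifted vertically by y_shift (0, -dy/3, +dy/3 for units 1-3, Eqs. (9)-(10))
    i, j = np.meshgrid(np.arange(-n, n), np.arange(-n, n))
    a = np.stack([i * dx,          j * dy + y_shift], axis=-1).reshape(-1, 2)
    b = np.stack([i * dx + dx / 2, j * dy + dy / 2 + y_shift], axis=-1).reshape(-1, 2)
    return np.concatenate([a, b])

h31 = unit_lattice(0.0)
h32 = unit_lattice(-dy / 3)
h33 = unit_lattice(+dy / 3)
combined = np.concatenate([h31, h32, h33])

rows_single   = np.unique(np.round(h31[:, 1], 4))
rows_combined = np.unique(np.round(combined[:, 1], 4))
print("vertical sampling pitch, single unit :", np.min(np.diff(rows_single)))    # dy/2
print("vertical sampling pitch, triple unit :", np.min(np.diff(rows_combined)))  # dy/6, i.e., 1/3 of the above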

 

Fig. 5 Resolution characteristics of the lens-shift combination by the triple-unit in the (a) horizontal and (b) vertical directions.


Let $D_a(f)$ be the frequency response of the elemental-lens aperture, where $f$ is the spatial frequency of the 3D image displayed on the lens surface. When the aperture of the elemental lens is circular with diameter $d$, the following relationship is obtained:

$$D_a(f) = \left|\frac{2 J_1(\pi d f)}{\pi d f}\right|. \qquad (15)$$

Figure 5 shows the results of calculating Eq. (15) on the assumption that $P_L$ is equal to the lens diameter, $d$, with the normalization $P_L = d = 1$. The sampling frequencies in the case of a single unit are $f_{Lx} \equiv 1/P_{Lx} = 2/P_L = 2$ in the x-axis direction and $f_{Ly} \equiv 1/P_{Ly} = 2\sqrt{3}/3$ in the y-axis direction; for a triple-unit combination, the vertical sampling frequency becomes $f_{Ly3} \equiv 3/P_{Ly} = 2\sqrt{3}/P_L = 2\sqrt{3}$. One half of each of these is the corresponding Nyquist frequency. For the single- and triple-unit combinations, the overall frequency characteristics below the Nyquist frequency are shown by the dark- and light-grey shaded areas, respectively. As is apparent from Fig. 5(a), there is no change in the x-axis direction between the single- and triple-unit combinations. With respect to the y-axis direction, the frequency bandwidth of a single unit is considerably restricted compared to the x-axis direction, whereas the displayable bandwidth is more than doubled in the triple-unit combination. The bandwidth balance between the horizontal x-axis and vertical y-axis directions is thereby improved.
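
A small numerical check of Eq. (15) and of the sampling limits discussed above (normalizing $P_L = d = 1$, as in Fig. 5):

import numpy as np
from scipy.special import j1

def D_a(f, d=1.0):
    # Eq. (15): frequency response of a circular elemental-lens aperture of diameter d
    x = np.pi * d * f
    return 1.0 if abs(x) < 1e-12 else abs(2 * j1(x) / x)

P_L = 1.0
f_Lx  = 2 / P_L                      # horizontal sampling frequency, single or triple unit
f_Ly  = 2 / (np.sqrt(3) * P_L)       # vertical sampling frequency, single unit (= 2*sqrt(3)/3)
f_Ly3 = 3 * f_Ly                     # vertical sampling frequency, triple-unit combination (= 2*sqrt(3))

for label, fs in (("x (single/triple)", f_Lx), ("y (single)", f_Ly), ("y (triple)", f_Ly3)):
    nyquist = fs / 2
    print(f"{label:18s} Nyquist = {nyquist:.3f} cycles/lens   D_a(Nyquist) = {D_a(nyquist):.3f}")
print("first zero of the aperture response: 1.22 cycles/lens (for d = 1)")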

 

Fig. 6 Relationship between the depth position of the reconstructed image and its frequency limitation. The viewing distance from the lens array is L = 1000 mm.


3.2. Resolution characteristics of the reconstructed image at a distant position from the lens array

3.2.1. Upper-limit resolution

Consider the upper-limit resolution of the reconstructed image at a position away from the lens array, including the influence of the confusion-circle radius of Eq. (4). The angular spatial frequency $\alpha$ [cpr (cycles/rad)], obtained by normalizing the spatial frequency $\nu$ [cycles/mm] of the reconstructed image by the distance, is

$$\alpha = \nu |g|, \qquad (16)$$
and its maximum value is
$$\alpha_{max} = \frac{|g|}{2 P_{px}}. \qquad (17)$$

If the position of the reconstructed image in the depth direction is $z$ and the distance from the lens array to the observer is $L$, the spatial frequency, $\beta(z)$ [cpr], when the reconstructed image is viewed by the observer is

$$\beta(z) = \frac{\alpha (L - z)}{|z|}. \qquad (18)$$

The observer observes the generated reconstructed image sampled with the elemental-lens pitch, $P_L$. The upper-limit frequency (Nyquist frequency) that can be represented by this reconstructed image, $\beta_{nyq}$, is

$$\beta_{nyq} = \frac{L}{2 P_L}. \qquad (19)$$

In the case of the lens-shift triple-unit combination, $\beta_{nyq}$ in the vertical direction is tripled because the lens spacing in that direction is reduced to 1/3. On the other hand, as shown in Fig. 5(b), the response becomes 0 at 1.22 cycles/lens owing to the aperture of the elemental lens, as described by Eq. (15). Let the upper-limit frequency determined by this lens aperture be $\beta_a$; if the elemental-lens aperture is circular,

$$\beta_a = \frac{1.22\, L}{d}. \qquad (20)$$

Considering the case where the width of the elemental-lens aperture is larger than the sampling interval of the lenses, the total upper-limit frequency, $\beta_{lim}$, when the reconstructed image at a distance, $z$, from the lens array is viewed by the observer, is represented as follows:

$$\beta_{lim}(z) = \min\{\beta(z), \beta_{nyq}, \beta_a\}. \qquad (21)$$

The calculation results of Eq. (21) in the vertical direction for a single display and for a lens-shift triple-unit combined display are indicated by the blue (×1) and red (×3) lines in Fig. 6, respectively, for the case where the reconstructed images produced by the elemental lenses in Table 1 are observed from a position L = 1000 mm. $\beta_{lim}(z)$ in the horizontal direction, which does not change between the single- and triple-unit cases, is indicated by the green line (H).
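
The behaviour plotted in Fig. 6 follows directly from Eqs. (16)–(21). The sketch below evaluates the upper-limit frequency for a few depths; the geometry values are placeholders chosen only for illustration, not the Table 1 and 2 specifications.

import numpy as np

P_px, P_L, d = 0.04, 1.0, 1.0        # mm: hypothetical pixel pitch, lens pitch, lens diameter
g, L = 3.0, 1000.0                   # mm: hypothetical lens-panel distance and viewing distance

def beta_lim(z, vertical_shift_factor=1):
    alpha_max = abs(g) / (2 * P_px)                          # Eq. (17)
    beta      = alpha_max * (L - z) / abs(z)                 # Eq. (18) evaluated at alpha_max
    beta_nyq  = vertical_shift_factor * L / (2 * P_L)        # Eq. (19); factor 3 for the vertical lens shift
    beta_a    = 1.22 * L / d                                 # Eq. (20)
    return min(beta, beta_nyq, beta_a)                       # Eq. (21)

for z in (5.0, 20.0, 80.0, 200.0):
    print(f"z = {z:5.0f} mm   single: {beta_lim(z):7.1f} cpr   triple (vertical): {beta_lim(z, 3):7.1f} cpr")
# near the lens array the triple-unit limit is higher; far from it the two cases coincide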

3.2.2. MTF

We next consider the MTF, i.e., the spatial frequency response within the obtained upper-limit resolution. The MTF, $D_{Lz}(\alpha)$, of the reconstructed image at a distance, $z$, from the lens array, which depends on the pupils of the elemental lenses, can be calculated from the autocorrelation of the pupil function when circular lenses are used as the elemental lenses [27]. If the wavelength of light is $\lambda$, the MTF is represented as follows [11]:

$$D_{Lz}(\alpha) = \left| \frac{1}{\pi r^2} \iint_{pupil} \exp\left[ 2 i \pi \alpha x\, \epsilon(z) \right] dx\, dy \right| = \left| \frac{4}{\pi r b} \left( K_1 \cos\frac{b \alpha \lambda}{2} - K_2 \sin\frac{b \alpha \lambda}{2} \right) \right|, \qquad (22)$$
$$K_1 = 2\left[ J_1(rb)\left(\frac{\theta}{2} + \frac{\sin 2\theta}{4}\right) - J_3(rb)\left(\frac{\sin 2\theta}{4} + \frac{\sin 4\theta}{8}\right) + J_5(rb)\left(\frac{\sin 4\theta}{8} + \frac{\sin 6\theta}{12}\right) \right], \qquad (23)$$
$$K_2 = J_0(rb)\sin\theta - 2\left[ J_2(rb)\left(\frac{\sin\theta}{2} + \frac{\sin 3\theta}{6}\right) - J_4(rb)\left(\frac{\sin 3\theta}{6} + \frac{\sin 5\theta}{10}\right) + J_6(rb)\left(\frac{\sin 5\theta}{10} + \frac{\sin 7\theta}{14}\right) \right], \qquad (24)$$
$$\epsilon(z) = \left| \frac{1}{g} + \frac{1}{z} - \frac{1}{l_f} \right|, \qquad (25)$$
where $b = 2\pi\alpha\,\epsilon(z)$, $r = d/2$, and $\theta = \arccos(\alpha\lambda / 2r)$. The MTF, $D_p(\alpha)$, caused by the pixel aperture, $w_{px}$, is
$$D_p(\alpha) = \left| \mathrm{sinc}\!\left( \frac{\alpha\, w_{px}}{|g|} \right) \right|; \qquad (26)$$
hence, the total MTF, $D_z(\alpha)$, of the reconstructed image at a distance, $z$, from the lens array is as follows:
$$D_z(\alpha) = D_{Lz}(\alpha)\, D_p(\alpha). \qquad (27)$$

For a display unit composed of the lens array and display panel specified in Tables 1 and 2, the total MTF, $D_z(\alpha)$, when the reconstructed image is displayed at positions z = ±80 mm and observed from a position L = 1000 mm, is depicted in Fig. 7. The spatial frequency was calculated by converting $\alpha$ into $\beta$ using Eq. (18). $\rho$ is the radius of the confusion circle when the elemental lens is defocused, as given by Eq. (4). It can be seen that the MTF of the reconstructed image at a distant position from the lens array also depends on the size of this confusion circle. The MTFs of the reconstructed images at distant positions from the lens array are the same for a single display and for a lens-shift triple-unit combined display.
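
The curves of Fig. 7 can be reproduced in outline from Eqs. (22)–(27). Instead of the Bessel-series closed form, the sketch below evaluates the pupil integral of Eq. (22) on a grid over the overlap of the two pupils sheared by αλ (the region whose half-angle is θ = arccos(αλ/2r)); all geometry values are placeholders, not the Table 1 and 2 specifications.

import numpy as np

d, lf, g = 1.0, 2.95, 3.0            # mm: hypothetical lens diameter, focal length, lens-panel distance
w_px, lam = 0.03, 550e-6             # mm: hypothetical pixel aperture and wavelength (550 nm)
r = d / 2

def eps(z):
    # Eq. (25): defocus term for a reconstructed point at depth z [mm]
    return abs(1 / g + 1 / z - 1 / lf)

def D_Lz(alpha, z, n=400):
    # Eq. (22): modulus of the defocused-pupil overlap integral, normalized by pi*r^2
    b = 2 * np.pi * alpha * eps(z)
    shear = alpha * lam / 2                       # half-displacement of the two pupils
    x = np.linspace(-r, r, n)
    X, Y = np.meshgrid(x, x)
    overlap = ((X - shear) ** 2 + Y ** 2 <= r ** 2) & ((X + shear) ** 2 + Y ** 2 <= r ** 2)
    cell = (x[1] - x[0]) ** 2
    return abs(np.sum(np.exp(1j * b * X)[overlap]) * cell) / (np.pi * r ** 2)

def D_p(alpha):
    return abs(np.sinc(alpha * w_px / abs(g)))    # Eq. (26)

def D_z(alpha, z):
    return D_Lz(alpha, z) * D_p(alpha)            # Eq. (27)

for z in (80.0, -80.0):
    values = ", ".join(f"{D_z(a, z):.3f}" for a in (5, 10, 20, 40))
    print(f"z = {z:+5.0f} mm   D_z at alpha = 5, 10, 20, 40 cpr: {values}")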

 

Fig. 7 MTFs of the 3D image displayed at position, z = ±80 mm. The viewing distance from the lens array is L = 1000 mm.



Table 1. Specifications of lens array in the experimental setup.


Table 2. Specifications of display panel in the experimental setup.

 

Fig. 8 Prototype triple-unit combination optical systems.


 

Fig. 9 Results for the color moiré when an entirely white image is displayed, enlarging a part of the lens array (20 × 15 lenses). (a) Single display, (b) Double-unit combination, (c) Triple-unit combination; (d) Normalized average luminance profile extracted from the B channel alone; (e) Chromaticity diagram of the color moiré.


4. Experimental results

4.1. Color moiré reduction effect

Experiments were conducted to confirm the moiré reduction effect. Tables 1 and 2 list the specifications of the lens array and display panel used in the experiment.

Display systems with the configuration shown in Figs. 2(a) and 8 were used, and the display was captured with a digital still camera to obtain the image data for measurement. Figure 9(a) shows the captured result when an entirely white image was displayed instead of the elemental images on one display unit, while the remaining two displays were entirely black. Figure 9(b) depicts the result when entirely white images were displayed on two units, while the remaining unit was entirely black; the positions of the lens arrays were adjusted to minimize the moiré. Furthermore, Fig. 9(c) shows the result when entirely white images were displayed on all three units; again, the positions of the lens arrays were adjusted to minimize the moiré. These figures show parts of the display plane with 20(H) × 12(V) elemental lenses. The lens arrays were moved horizontally within the range of one pixel using the fine motion stage, while observing the contrast of the moiré, to obtain the position where the contrast was minimized. The capture gamma was set to γ = 1 so that the image data would be linear with respect to the luminance of the display surface. The contrast of the moiré component of each color was measured from the image data. The contrast was obtained as the intensity at the moiré frequency in the discrete Fourier transform (DFT) of the one-dimensional waveform obtained by averaging the same areas of Figs. 9(a)–9(c) in the vertical direction. For each case, the numerical values were normalized with the DC component of each of the RGB colors set to unity. As an example, the average luminance profile of B in the above-mentioned region is shown in Fig. 9(d). The abscissa is the position, x, in the horizontal direction normalized by the elemental-lens pitch, $P_L$; the ordinate is the luminance normalized by the DC component of the triple-unit combination. The broken line indicates the case of a single display (display panel #1 alone), the one-dot chain line indicates the double-unit combination, and the solid line the triple-unit combination. When the case of the single display unit #1 is normalized to 100%, the moiré contrast is 22% (R: 24%, G: 19%, B: 22%) on average in the double-unit combination and 10% (R: 9%, G: 9%, B: 12%) in the triple-unit combination. Figure 9(e) shows the RGB values of the same areas as in the above-described profile plotted on the CIE 1931 xy chromaticity diagram [28], assuming that the color gamut follows ITU-R Recommendation BT.709 [29]. The chromaticity ranges for the single-, double-, and triple-unit cases are plotted in black, dark grey, and light grey, respectively. As the image used for the measurement is entirely white, the chromaticity values are expected to converge to the point indicated by the circle at the center, i.e., white. The chromaticity values become closer to white as the number of combined units increases; it can thus be observed that the color moiré is reduced.
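
The contrast measurement described above amounts to reading the moiré component out of the DFT of the vertically averaged profile and normalizing it by the DC term. The sketch below shows this on synthetic data; `capture` stands in for one linear (γ = 1) color channel of the camera image, and the factor 2 that converts the one-sided DFT amplitude into a modulation depth is our choice, since the exact normalization is not spelled out above.

import numpy as np

def moire_contrast(capture):
    # capture: 2D array (rows x columns) of one color channel, linear in luminance
    profile = capture.mean(axis=0)                  # average in the vertical direction
    spectrum = np.abs(np.fft.rfft(profile))
    dc = spectrum[0]
    moire_bin = 1 + np.argmax(spectrum[1:])         # strongest non-DC component = moire frequency
    return 2 * spectrum[moire_bin] / dc             # modulation relative to the DC component

# synthetic stand-in for a captured all-white screen with a 5% moire ripple
x = np.arange(2000)
profile = 1.0 + 0.05 * np.cos(2 * np.pi * x / 250.0)
print(f"measured moire contrast ~ {moire_contrast(np.tile(profile, (100, 1))):.3f}")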

4.2. Resolution improvement

Moiré reduction is performed by combining multiple units. As described in section 3, it is possible to simultaneously improve the resolution on the lens-array surface and reduce the moiré by lens-shift synthesis. For the systems in which the integral 3D display units were combined in the configurations depicted in Figs. 2(a) and 8, the resolution was measured by displaying a star chart at depth positions of z = 0 mm (on the lens array) and z = 80 mm (on the front side of the lens array). Table 1 lists the specifications of the lens array used. Figures 10(a) and 10(c) show the results for a single display, whereas Figs. 10(b) and 10(d) show those for the triple-unit combination. Figures 10(a) and 10(b) show the star chart displayed at z = 0 mm, whereas Figs. 10(c) and 10(d) show the same chart at z = 80 mm. In the triple-unit combination, the portion where aliasing distortion occurred in the case of a single display is improved in the high-spatial-frequency region at the center. At z = 0 mm, the displayable spatial frequency in the vertical direction is approximately doubled; in the horizontal direction, there is no expansion of the spatial frequency bandwidth, in accordance with the theory described above, although the horizontal bandwidth was originally wider than the vertical one. Improvement in the oblique directions, depending on the angle, is also recognized, and the bandwidth balance between directions is improved. At z = 80 mm, the visual disturbance is reduced because the aliasing distortion is replaced by the DC component. In addition, the jaggies recognized at the edges of the chart are reduced and smoothed. These results are consistent with the theory.

 

Fig. 10 Experimental results of the resolution-chart display by single- and triple-unit displays. Enlargement of the star-chart part of the lens array (61 × 70 lenses for a single unit). (a), (c) Single unit; (b), (d) Triple-unit combined display. The star charts in (a) and (b) are displayed at z = 0 mm, whereas those in (c) and (d) are displayed at z = 80 mm.


5. Discussion

As described in section 2, complete suppression of the color moiré should be possible in the case of a triple-unit combination under ideal conditions; however, in the experiment the moiré was only reduced to 10%. It is considered that there is a slight positional error between each elemental lens of the lens array and the display panel. Hence, the phases of the three moiré patterns do not coincide perfectly over the entire screen, a complete suppression effect cannot be obtained, and only a partial reduction is achieved. A more complete moiré suppression effect could be obtained by spatiotemporal multiplexing in a single display system instead of combining multiple physically independent display systems; this would also contribute to thinning the display system.

As described in section 2, for practical use the modulation degree of the moiré should be reduced to 1% or less, where the value for ρ = 0 in a single display is normalized to 100%. To reduce the moiré to 1% or less in the case of a single display, according to Eq. (5), the radius of the confusion circle must be set to |ρ| ≥ 1.6 pixels. However, as shown by the cases z = ±80 mm in Fig. 7, the MTF of the reconstructed image at a distant position from the lens array is then considerably reduced compared to the case with no defocus, i.e., ρ = 0. The best balance between the moiré reduction effect and the resolution characteristics affected by the defocus is obtained at |ρ| ≈ 0.6. To realize further moiré reduction, a diffusion screen can be used together with the defocus, instead of increasing |ρ| [9]. The moiré can be reduced to less than 1% in the case of a double-unit combination with |ρ| = 0.6, and it should theoretically always be 0% in a triple-unit combined display, regardless of the value of ρ. However, it was reduced only to 10% in the above experimental results. We therefore consider the concurrent use of a weak defocus state with |ρ| ≈ 0.6 along with the multiple combination of display units. From Fig. 7, the MTF of the reconstructed image located at z = −80 mm is then almost the same as that for ρ = 0, and for z = 80 mm it is improved compared to the case of ρ = 1.62. The modulation degree of the moiré can thus be reduced below 1% without deteriorating the reproducibility in the depth direction.

We attempted to display real integral 3D video images with ρ ≈ −0.6 pixels. If the modulation degree of the moiré is normalized to 100% for ρ = 0 in a single display, the expected modulation degree in a single display in the ideal state is approximately 3% for ρ ≈ −0.6, and in a triple-unit combination it becomes 0%. However, from the above experimental results, the modulation degree in an actual triple-unit combined display becomes approximately 1/10 of that of a single display, so it is expected to be approximately 0.3%. The experimental results are shown in Figs. 11 and 12. The moiré intensity was approximately 2%, which was visible, in the case of a single display. In the triple-unit combined display, it was approximately 0.2%, and the moiré was reduced to a level that could not be visually recognized. Focusing on the overall resolution, improvement effects could be confirmed for the eyes, the pattern on the head, and the outline of the doll subject. By suppressing the degree of defocus through the combination of multiple units, the MTF of the 3D image reconstructed at positions distant from the lens-array plane can be maintained at the necessary level, simultaneously with the resolution improvement near the lens-array plane and the moiré suppression. Thus, the adoption of the proposed method can comprehensively improve the image quality of the reconstructed image.

 

Fig. 11 3D-real-video-image display experiment with ρ ≈ −0.6 pixels. (a) and (d) Experimental results for a single-unit display; (b), (c), and (e) Results for the triple-unit combined display; (d) and (e) are enlarged views of (a) and (b), respectively; (c) is from another viewpoint (left direction). Motion parallax can be confirmed.


 

Fig. 12 Normalized average luminance profile (B-channel alone) of the blue-background part of Figs. 11(a) and 11(b).


Alternatively, the elemental lenses of the combined display units can be superposed at exactly the same positions without lens shifting. In this case, the effect is equivalent to increasing the number of pixels of the individual elemental images instead of increasing the number of elemental images. Therefore, although there is no improvement in the resolution and MTF in the vicinity of the lens-array plane, the upper-limit resolution and MTF of the reconstructed image at positions distant from the lens array are improved. As with the method proposed in this paper, a quantitative analysis of the resolution [30, 31] would be useful to further prove the benefit of this approach.

6. Conclusions

In this paper, we have proposed a method for reducing color moiré and improving the resolution of an integral 3D display that uses direct-view display panels, by combining multiple display systems to display 3D images. Based on the theoretical analysis, in the case of a single display, the confusion-circle radius of the elemental lenses required to reduce the moiré modulation degree to 1% or less is 1.6 pixels or more; for a double-unit combination, an equivalent reduction effect is expected with a radius of 0.6 pixels or more; and in the triple-unit combination, it is possible to completely suppress the moiré regardless of the confusion-circle radius. Furthermore, the frequency response was theoretically analyzed with respect to the resolution improvement effect of the lens shift in the composite display. We demonstrated that in the triple-unit combined display, the vertical displayable bandwidth can be expanded by more than a factor of two. The experiments demonstrated that a color moiré reduction effect was obtained together with a simultaneous improvement in resolution when displaying an entirely white image, a 3D star chart, and real integral 3D video images. In addition, we utilized weak lens-array defocus, which is one of the conventional moiré reduction methods, for the moiré components that remained even after combining a plurality of units. As a result, the quality degradation in the depth direction of the 3D image was suppressed, and the moiré was reduced to 0.2%, a level at which it is invisible; additionally, the resolution at depth positions close to the lens-array plane was improved. A thin integral 3D display system has not yet been realized, because this method combines multiple display devices optically using half mirrors. In the future, we intend to realize both color moiré reduction and resolution improvement in a thin, portable 3D display system by using this method in combination with spatiotemporal multiplexing display technology.

Acknowledgments

The authors thank Semiconductor Energy Laboratory Co., Ltd. for providing the ultra-high-definition (8K) OLED flat-panel display devices [32] used in the experiment.

References

1. G. Lippmann, “Épreuves réversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7, 821–825 (1908). [CrossRef]  

2. F. Okano, J. Arai, H. Hoshino, and I. Yuyama, “Three-dimensional video system based on integral photography,” Opt. Eng. 38, 1072–1078 (1999). [CrossRef]  

3. H. Morishima, H. Nose, N. Taniguchi, K. Inoguchi, and S. Matsumura, “Rear-cross-lenticular 3D display without eyeglasses,” Proc. SPIE 3295, 1–10 (1998).

4. R. Börner, “Four autostereoscopic monitors on the level of industrial prototypes,” Displays 20, 57–64 (1999). [CrossRef]  

5. J.-Y. Son, V. Saveljev, S.-H. Shin, Y.-J. Choi, and S.-S. Kim, “Moire pattern reduction in full-parallax autostereoscopic imaging systems using two crossed lenticular plates as a viewing zone forming optics,” in Proceedings of The International Display Workshops, vol. 10 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2003), pp. 1401–1404.

6. Y. Kim, G. Park, S.-W. Cho, J.-H. Jung, B. Lee, Y. Choi, and M.-G. Lee, “Integral imaging with reduced color moire pattern by using a slanted lens array,” Proc. SPIE 6803, 68031L (2008). [CrossRef]  

7. Y. Kim, G. Park, J.-H. Jung, J. Kim, and B. Lee, “Color moiré pattern simulation and analysis in three-dimensional integral imaging for finding the moiré-reduced tilted angle of a lens array,” Appl. Opt. 48, 2178–2187 (2009). [CrossRef]   [PubMed]  

8. T. Koike, K. Utsugi, and M. Oikawa, “Moiré-reduction methods for integral videography autostereoscopic display with color-filter LCD,” J. Soc. Inf. Disp. 18, 678–685 (2010). [CrossRef]  

9. M. Okui, M. Kobayashi, J. Arai, and F. Okano, “Moiré fringe reduction by optical filters in integral three-dimensional imaging on a color flat-panel display,” Appl. Opt. 44, 4475–4483 (2005). [CrossRef]   [PubMed]  

10. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15, 2059–2065 (1998). [CrossRef]  

11. J. Arai, H. Hoshino, M. Okui, and F. Okano, “Effects of focusing on the resolution characteristics of integral photography,” J. Opt. Soc. Am. A 20, 996–1004 (2003). [CrossRef]  

12. H. S. El-Ghoroury and Z. Y. Alpaslan, “Quantum photonic imager (QPI): a new display technology and its applications,” in Proceedings of The International Display Workshops, vol. 21 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2014), pp. 1292–1295.

13. B. Lee, S.-W. Min, and B. Javidi, “Theoretical analysis for three-dimensional integral imaging systems with double devices,” Appl. Opt. 41, 4856–4865 (2002). [CrossRef]   [PubMed]  

14. Y. Kim, J.-H. Jung, J.-M. Kang, Y. Kim, B. Lee, and B. Javidi, “Resolution-enhanced three-dimensional integral imaging using double display devices,” in LEOS 2007 - IEEE Lasers and Electro-Optics Society Annual Meeting Conference Proceedings, (IEEE, 2007), pp. 356–357.

15. S.-W. Min, B. Javidi, and B. Lee, “Enhanced three-dimensional integral imaging system by use of double display devices,” Appl. Opt. 42, 4186–4195 (2003). [CrossRef]   [PubMed]  

16. H. Liao, T. Dohi, and M. Iwahara, “Improved viewing resolution of integral videography by use of rotated prism sheets,” Opt. Express 15, 4814–4822 (2007). [CrossRef]   [PubMed]  

17. X. Zhang, G. Chen, and H. Liao, “High-quality see-through surgical guidance system using enhanced 3-D autostereoscopic augmented reality,” IEEE Trans. Biomed. Eng. 64, 1815–1825 (2017). [CrossRef]  

18. N. Okaichi, M. Miura, J. Arai, M. Kawakita, and T. Mishina, “Integral 3D display using multiple LCD panels and multi-image combining optical system,” Opt. Express 25, 2805–2817 (2017). [CrossRef]  

19. N. Okaichi, M. Kawakita, H. Sasaki, H. Watanabe, and T. Mishina, “High-quality direct-view display combining multiple integral 3D images,” J. Soc. Inf. Disp. 27, 41–52 (2019). [CrossRef]  

20. H. Liao, M. Iwahara, N. Hata, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express 12, 1067–1076 (2004). [CrossRef]  

21. A. Stern and B. Javidi, “3-D computational synthetic aperture integral imaging (COMPSAII),” Opt. Express 11, 2446–2451 (2003). [CrossRef]   [PubMed]  

22. X. Li, Y. Wang, Q.-H. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved SR reconstruction algorithm,” Opt. Lasers Eng. 112, 162–169 (2019). [CrossRef]  

23. H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Kawakita, and T. Mishina, “Color moiré reduction and resolution enhancement technique for integral three-dimensional display,” in 2017 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON), (IEEE, 2017), pp. 1–4.

24. H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Miura, M. Kawakita, and T. Mishina, “Color moiré and resolution analysis of multiple integral three-dimensional displays,” in Proceedings of The International Display Workshops, vol. 24 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2017), pp. 860–863.

25. J. Arai, M. Kawakita, T. Yamashita, H. Sasaki, M. Miura, H. Hiura, M. Okui, and F. Okano, “Integral three-dimensional television with video system using pixel-offset method,” Opt. Express 21, 3474–3485 (2013). [CrossRef]  

26. M. Kanazawa, K. Hamada, I. Kondoh, F. Okano, Y. Haino, and M. Sato, “An ultrahigh-definition display using the pixel-offset method,” J. Soc. Inf. Disp. 12, 93–103 (2004). [CrossRef]  

27. H. H. Hopkins, “The frequency response of a defocused optical system,” Proc. Royal Soc. Lond. A: Math. Phys. Eng. Sci. 231, 91–103 (1955). [CrossRef]  

28. T. Smith and J. Guild, “The C.I.E. colorimetric standards and their use,” Transactions Opt. Soc. 33, 73 (1931). [CrossRef]  

29. Radiocommunication Sector of International Telecommunication Union, “Parameter values for the HDTV standards for production and international programme exchange,” Recommendation ITU-R BT.709 (2015).

30. Z. Fan, S. Zhang, Y. Weng, G. Chen, and H. Liao, “3D quantitative evaluation system for autostereoscopic display,” J. Disp. Technol. 12, 1185–1196 (2016). [CrossRef]  

31. Z. Fan, G. Chen, Y. Xia, T. Huang, and H. Liao, “Accurate 3D autostereoscopic display using optimized parameters through quantitative calibration,” J. Opt. Soc. Am. A 34, 804–812 (2017). [CrossRef]  

32. S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014). [CrossRef]  


H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Miura, M. Kawakita, and T. Mishina, “Color moiré and resolution analysis of multiple integral three-dimensional displays,” in Proceedings of The International Display Workshops, vol. 24 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2017), pp. 860–863.

H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Kawakita, and T. Mishina, “Color moiré reduction and resolution enhancement technique for integral three-dimensional display,” in 2017 3DTV Conference: The True Vision-Capture, Transmissionand Display of 3D Video (3DTV-CON), (IEEE, 2017), pp. 1–4.

Kawashima, S.

S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014).
[Crossref]

Kim, J.

Kim, S.-S.

J.-Y. Son, V. Saveljev, S.-H. Shin, Y.-J. Choi, and S.-S. Kim, “Moire pattern reduction in full-parallax autostereoscopic imaging systems using two crossed lenticular plates as a viewing zone forming optics,” in Proceedings of The International Display Workshops, vol. 10 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2003), pp. 1401–1404.

Kim, Y.

Y. Kim, G. Park, J.-H. Jung, J. Kim, and B. Lee, “Color moiré pattern simulation and analysis in three-dimensional integral imaging for finding the moiré-reduced tilted angle of a lens array,” Appl. Opt. 48, 2178–2187 (2009).
[Crossref] [PubMed]

Y. Kim, G. Park, S.-W. Cho, J.-H. Jung, B. Lee, Y. Choi, and M.-G. Lee, “Integral imaging with reduced color moire pattern by using a slanted lens array,” Proc. SPIE 6803, 68031L (2008).
[Crossref]

Y. Kim, J.-H. Jung, J.-M. Kang, Y. Kim, B. Lee, and B. Javidi, “Resolution-enhanced three-dimensional integral imaging using double display devices,” in LEOS 2007 - IEEE Lasers and Electro-Optics Society Annual Meeting Conference Proceedings, (IEEE, 2007), pp. 356–357.

Y. Kim, J.-H. Jung, J.-M. Kang, Y. Kim, B. Lee, and B. Javidi, “Resolution-enhanced three-dimensional integral imaging using double display devices,” in LEOS 2007 - IEEE Lasers and Electro-Optics Society Annual Meeting Conference Proceedings, (IEEE, 2007), pp. 356–357.

Kobayashi, M.

Koike, T.

T. Koike, K. Utsugi, and M. Oikawa, “Moiré-reduction methods for integral videography autostereoscopic display with color-filter LCD,” J. Soc. Inf. Disp. 18, 678–685 (2010).
[Crossref]

Kondoh, I.

M. Kanazawa, K. Hamada, I. Kondoh, F. Okano, Y. Haino, and M. Sato, “An ultrahigh-definition display using the pixel-offset method,” J. Soc. Inf. Disp. 12, 93–103 (2004).
[Crossref]

Koyama, J.

S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014).
[Crossref]

Lee, B.

Y. Kim, G. Park, J.-H. Jung, J. Kim, and B. Lee, “Color moiré pattern simulation and analysis in three-dimensional integral imaging for finding the moiré-reduced tilted angle of a lens array,” Appl. Opt. 48, 2178–2187 (2009).
[Crossref] [PubMed]

Y. Kim, G. Park, S.-W. Cho, J.-H. Jung, B. Lee, Y. Choi, and M.-G. Lee, “Integral imaging with reduced color moire pattern by using a slanted lens array,” Proc. SPIE 6803, 68031L (2008).
[Crossref]

S.-W. Min, B. Javidi, and B. Lee, “Enhanced three-dimensional integral imaging system by use of double display devices,” Appl. Opt. 42, 4186–4195 (2003).
[Crossref] [PubMed]

B. Lee, S.-W. Min, and B. Javidi, “Theoretical analysis for three-dimensional integral imaging systems with double devices,” Appl. Opt. 41, 4856–4865 (2002).
[Crossref] [PubMed]

Y. Kim, J.-H. Jung, J.-M. Kang, Y. Kim, B. Lee, and B. Javidi, “Resolution-enhanced three-dimensional integral imaging using double display devices,” in LEOS 2007 - IEEE Lasers and Electro-Optics Society Annual Meeting Conference Proceedings, (IEEE, 2007), pp. 356–357.

Lee, M.-G.

Y. Kim, G. Park, S.-W. Cho, J.-H. Jung, B. Lee, Y. Choi, and M.-G. Lee, “Integral imaging with reduced color moire pattern by using a slanted lens array,” Proc. SPIE 6803, 68031L (2008).
[Crossref]

Li, X.

X. Li, Y. Wang, Q.-H. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved SR reconstruction algorithm,” Opt. Lasers Eng. 112, 162–169 (2019).
[Crossref]

Liao, H.

Lippmann, G.

G. Lippmann, “Épreuves réversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7, 821–825 (1908).
[Crossref]

Liu, Y.

X. Li, Y. Wang, Q.-H. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved SR reconstruction algorithm,” Opt. Lasers Eng. 112, 162–169 (2019).
[Crossref]

Matsumura, S.

H. Morishima, H. Nose, N. Taniguchi, K. Inoguchi, and S. Matsumura, “Rear-cross-lenticular 3D display without eyeglasses,” Proc. SPIE 3295, 1–10 (1998).

Matsuo, T.

S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014).
[Crossref]

Min, S.-W.

Mishina, T.

N. Okaichi, M. Kawakita, H. Sasaki, H. Watanabe, and T. Mishina, “High-quality direct-view display combining multiple integral 3D images,” J. Soc. Inf. Disp. 27, 41–52 (2019).
[Crossref]

N. Okaichi, M. Miura, J. Arai, M. Kawakita, and T. Mishina, “Integral 3D display using multiple LCD panels and multi-image combining optical system,” Opt. Express 25, 2805–2817 (2017).
[Crossref]

H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Kawakita, and T. Mishina, “Color moiré reduction and resolution enhancement technique for integral three-dimensional display,” in 2017 3DTV Conference: The True Vision-Capture, Transmissionand Display of 3D Video (3DTV-CON), (IEEE, 2017), pp. 1–4.

H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Miura, M. Kawakita, and T. Mishina, “Color moiré and resolution analysis of multiple integral three-dimensional displays,” in Proceedings of The International Display Workshops, vol. 24 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2017), pp. 860–863.

Mitsui, S.

S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014).
[Crossref]

Miura, M.

N. Okaichi, M. Miura, J. Arai, M. Kawakita, and T. Mishina, “Integral 3D display using multiple LCD panels and multi-image combining optical system,” Opt. Express 25, 2805–2817 (2017).
[Crossref]

J. Arai, M. Kawakita, T. Yamashita, H. Sasaki, M. Miura, H. Hiura, M. Okui, and F. Okano, “Integral three-dimensional television with video system using pixel-offset method,” Opt. Express 21, 3474–3485 (2013).
[Crossref]

H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Miura, M. Kawakita, and T. Mishina, “Color moiré and resolution analysis of multiple integral three-dimensional displays,” in Proceedings of The International Display Workshops, vol. 24 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2017), pp. 860–863.

Morishima, H.

H. Morishima, H. Nose, N. Taniguchi, K. Inoguchi, and S. Matsumura, “Rear-cross-lenticular 3D display without eyeglasses,” Proc. SPIE 3295, 1–10 (1998).

Nose, H.

H. Morishima, H. Nose, N. Taniguchi, K. Inoguchi, and S. Matsumura, “Rear-cross-lenticular 3D display without eyeglasses,” Proc. SPIE 3295, 1–10 (1998).

Ohta, Y.

S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014).
[Crossref]

Oikawa, M.

T. Koike, K. Utsugi, and M. Oikawa, “Moiré-reduction methods for integral videography autostereoscopic display with color-filter LCD,” J. Soc. Inf. Disp. 18, 678–685 (2010).
[Crossref]

Okaichi, N.

N. Okaichi, M. Kawakita, H. Sasaki, H. Watanabe, and T. Mishina, “High-quality direct-view display combining multiple integral 3D images,” J. Soc. Inf. Disp. 27, 41–52 (2019).
[Crossref]

N. Okaichi, M. Miura, J. Arai, M. Kawakita, and T. Mishina, “Integral 3D display using multiple LCD panels and multi-image combining optical system,” Opt. Express 25, 2805–2817 (2017).
[Crossref]

H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Miura, M. Kawakita, and T. Mishina, “Color moiré and resolution analysis of multiple integral three-dimensional displays,” in Proceedings of The International Display Workshops, vol. 24 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2017), pp. 860–863.

H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Kawakita, and T. Mishina, “Color moiré reduction and resolution enhancement technique for integral three-dimensional display,” in 2017 3DTV Conference: The True Vision-Capture, Transmissionand Display of 3D Video (3DTV-CON), (IEEE, 2017), pp. 1–4.

Okano, F.

Okui, M.

Park, G.

Y. Kim, G. Park, J.-H. Jung, J. Kim, and B. Lee, “Color moiré pattern simulation and analysis in three-dimensional integral imaging for finding the moiré-reduced tilted angle of a lens array,” Appl. Opt. 48, 2178–2187 (2009).
[Crossref] [PubMed]

Y. Kim, G. Park, S.-W. Cho, J.-H. Jung, B. Lee, Y. Choi, and M.-G. Lee, “Integral imaging with reduced color moire pattern by using a slanted lens array,” Proc. SPIE 6803, 68031L (2008).
[Crossref]

Sasaki, H.

N. Okaichi, M. Kawakita, H. Sasaki, H. Watanabe, and T. Mishina, “High-quality direct-view display combining multiple integral 3D images,” J. Soc. Inf. Disp. 27, 41–52 (2019).
[Crossref]

J. Arai, M. Kawakita, T. Yamashita, H. Sasaki, M. Miura, H. Hiura, M. Okui, and F. Okano, “Integral three-dimensional television with video system using pixel-offset method,” Opt. Express 21, 3474–3485 (2013).
[Crossref]

H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Kawakita, and T. Mishina, “Color moiré reduction and resolution enhancement technique for integral three-dimensional display,” in 2017 3DTV Conference: The True Vision-Capture, Transmissionand Display of 3D Video (3DTV-CON), (IEEE, 2017), pp. 1–4.

H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Miura, M. Kawakita, and T. Mishina, “Color moiré and resolution analysis of multiple integral three-dimensional displays,” in Proceedings of The International Display Workshops, vol. 24 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2017), pp. 860–863.

Sato, M.

M. Kanazawa, K. Hamada, I. Kondoh, F. Okano, Y. Haino, and M. Sato, “An ultrahigh-definition display using the pixel-offset method,” J. Soc. Inf. Disp. 12, 93–103 (2004).
[Crossref]

Sato, T.

S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014).
[Crossref]

Saveljev, V.

J.-Y. Son, V. Saveljev, S.-H. Shin, Y.-J. Choi, and S.-S. Kim, “Moire pattern reduction in full-parallax autostereoscopic imaging systems using two crossed lenticular plates as a viewing zone forming optics,” in Proceedings of The International Display Workshops, vol. 10 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2003), pp. 1401–1404.

Shigenobu, T.

S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014).
[Crossref]

Shin, S.-H.

J.-Y. Son, V. Saveljev, S.-H. Shin, Y.-J. Choi, and S.-S. Kim, “Moire pattern reduction in full-parallax autostereoscopic imaging systems using two crossed lenticular plates as a viewing zone forming optics,” in Proceedings of The International Display Workshops, vol. 10 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2003), pp. 1401–1404.

Shiokawa, M.

S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014).
[Crossref]

Smith, T.

T. Smith and J. Guild, “The C.I.E. colorimetric standards and their use,” Transactions Opt. Soc. 33, 73 (1931).
[Crossref]

Son, J.-Y.

J.-Y. Son, V. Saveljev, S.-H. Shin, Y.-J. Choi, and S.-S. Kim, “Moire pattern reduction in full-parallax autostereoscopic imaging systems using two crossed lenticular plates as a viewing zone forming optics,” in Proceedings of The International Display Workshops, vol. 10 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2003), pp. 1401–1404.

Stern, A.

Suzuki, A.

S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014).
[Crossref]

Taniguchi, N.

H. Morishima, H. Nose, N. Taniguchi, K. Inoguchi, and S. Matsumura, “Rear-cross-lenticular 3D display without eyeglasses,” Proc. SPIE 3295, 1–10 (1998).

Ueda, N.

S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014).
[Crossref]

Utsugi, K.

T. Koike, K. Utsugi, and M. Oikawa, “Moiré-reduction methods for integral videography autostereoscopic display with color-filter LCD,” J. Soc. Inf. Disp. 18, 678–685 (2010).
[Crossref]

Wang, Q.-H.

X. Li, Y. Wang, Q.-H. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved SR reconstruction algorithm,” Opt. Lasers Eng. 112, 162–169 (2019).
[Crossref]

Wang, Y.

X. Li, Y. Wang, Q.-H. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved SR reconstruction algorithm,” Opt. Lasers Eng. 112, 162–169 (2019).
[Crossref]

Watanabe, H.

N. Okaichi, M. Kawakita, H. Sasaki, H. Watanabe, and T. Mishina, “High-quality direct-view display combining multiple integral 3D images,” J. Soc. Inf. Disp. 27, 41–52 (2019).
[Crossref]

H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Kawakita, and T. Mishina, “Color moiré reduction and resolution enhancement technique for integral three-dimensional display,” in 2017 3DTV Conference: The True Vision-Capture, Transmissionand Display of 3D Video (3DTV-CON), (IEEE, 2017), pp. 1–4.

H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Miura, M. Kawakita, and T. Mishina, “Color moiré and resolution analysis of multiple integral three-dimensional displays,” in Proceedings of The International Display Workshops, vol. 24 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2017), pp. 860–863.

Weng, Y.

Z. Fan, S. Zhang, Y. Weng, G. Chen, and H. Liao, “3D quantitative evaluation system for autostereoscopic display,” J. Disp. Technol. 12, 1185–1196 (2016).
[Crossref]

Xia, Y.

Yamashita, T.

Yamazaki, S.

S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014).
[Crossref]

Yuyama, I.

F. Okano, J. Arai, H. Hoshino, and I. Yuyama, “Three-dimensional video system based on integral photography,” Opt. Eng. 38, 1072–1078 (1999).
[Crossref]

H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15, 2059–2065 (1998).
[Crossref]

Zhang, S.

Z. Fan, S. Zhang, Y. Weng, G. Chen, and H. Liao, “3D quantitative evaluation system for autostereoscopic display,” J. Disp. Technol. 12, 1185–1196 (2016).
[Crossref]

Zhang, X.

X. Zhang, G. Chen, and H. Liao, “High-quality see-through surgical guidance system using enhanced 3-D autostereoscopic augmented reality,” IEEE Trans. Biomed. Eng. 64, 1815–1825 (2017).
[Crossref]

Zhou, X.

X. Li, Y. Wang, Q.-H. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved SR reconstruction algorithm,” Opt. Lasers Eng. 112, 162–169 (2019).
[Crossref]

Appl. Opt. (4)

Displays (1)

R. Börner, “Four autostereoscopic monitors on the level of industrial prototypes,” Displays 20, 57–64 (1999).
[Crossref]

IEEE Trans. Biomed. Eng. (1)

X. Zhang, G. Chen, and H. Liao, “High-quality see-through surgical guidance system using enhanced 3-D autostereoscopic augmented reality,” IEEE Trans. Biomed. Eng. 64, 1815–1825 (2017).
[Crossref]

J. Disp. Technol. (1)

Z. Fan, S. Zhang, Y. Weng, G. Chen, and H. Liao, “3D quantitative evaluation system for autostereoscopic display,” J. Disp. Technol. 12, 1185–1196 (2016).
[Crossref]

J. Opt. Soc. Am. A (3)

J. Phys. Theor. Appl. (1)

G. Lippmann, “Épreuves réversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7, 821–825 (1908).
[Crossref]

J. Soc. Inf. Disp. (3)

T. Koike, K. Utsugi, and M. Oikawa, “Moiré-reduction methods for integral videography autostereoscopic display with color-filter LCD,” J. Soc. Inf. Disp. 18, 678–685 (2010).
[Crossref]

N. Okaichi, M. Kawakita, H. Sasaki, H. Watanabe, and T. Mishina, “High-quality direct-view display combining multiple integral 3D images,” J. Soc. Inf. Disp. 27, 41–52 (2019).
[Crossref]

M. Kanazawa, K. Hamada, I. Kondoh, F. Okano, Y. Haino, and M. Sato, “An ultrahigh-definition display using the pixel-offset method,” J. Soc. Inf. Disp. 12, 93–103 (2004).
[Crossref]

Opt. Eng. (1)

F. Okano, J. Arai, H. Hoshino, and I. Yuyama, “Three-dimensional video system based on integral photography,” Opt. Eng. 38, 1072–1078 (1999).
[Crossref]

Opt. Express (5)

Opt. Lasers Eng. (1)

X. Li, Y. Wang, Q.-H. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved SR reconstruction algorithm,” Opt. Lasers Eng. 112, 162–169 (2019).
[Crossref]

Proc. Royal Soc. Lond. A: Math. Phys. Eng. Sci. (1)

H. H. Hopkins, “The frequency response of a defocused optical system,” Proc. Royal Soc. Lond. A: Math. Phys. Eng. Sci. 231, 91–103 (1955).
[Crossref]

Proc. SPIE (2)

Y. Kim, G. Park, S.-W. Cho, J.-H. Jung, B. Lee, Y. Choi, and M.-G. Lee, “Integral imaging with reduced color moire pattern by using a slanted lens array,” Proc. SPIE 6803, 68031L (2008).
[Crossref]

H. Morishima, H. Nose, N. Taniguchi, K. Inoguchi, and S. Matsumura, “Rear-cross-lenticular 3D display without eyeglasses,” Proc. SPIE 3295, 1–10 (1998).

SID Symp. Dig. Tech. Pap. (1)

S. Kawashima, S. Inoue, M. Shiokawa, A. Suzuki, S. Eguchi, Y. Hirakata, J. Koyama, S. Yamazaki, T. Sato, T. Shigenobu, Y. Ohta, S. Mitsui, N. Ueda, and T. Matsuo, “44.1: Distinguished paper: 13.3-in. 8K x 4K 664-ppi OLED display using CAAC-OS FETs,” SID Symp. Dig. Tech. Pap. 45, 627–630 (2014).
[Crossref]

Transactions Opt. Soc. (1)

T. Smith and J. Guild, “The C.I.E. colorimetric standards and their use,” Transactions Opt. Soc. 33, 73 (1931).
[Crossref]

Other (6)

Radiocommunication Sector of International Telecommunication Union, “Parameter values for the HDTV standards for production and international programme exchange,” Recomm. ITU-RBT.709 (2015).

H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Kawakita, and T. Mishina, “Color moiré reduction and resolution enhancement technique for integral three-dimensional display,” in 2017 3DTV Conference: The True Vision-Capture, Transmissionand Display of 3D Video (3DTV-CON), (IEEE, 2017), pp. 1–4.

H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Miura, M. Kawakita, and T. Mishina, “Color moiré and resolution analysis of multiple integral three-dimensional displays,” in Proceedings of The International Display Workshops, vol. 24 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2017), pp. 860–863.

J.-Y. Son, V. Saveljev, S.-H. Shin, Y.-J. Choi, and S.-S. Kim, “Moire pattern reduction in full-parallax autostereoscopic imaging systems using two crossed lenticular plates as a viewing zone forming optics,” in Proceedings of The International Display Workshops, vol. 10 (Institute of Image Information and Television Engineers of Japan and Society of Information Display, 2003), pp. 1401–1404.

H. S. El-Ghoroury and Z. Y. Alpaslan, “Quantum photonic imager (QPI): a new display technology and its applications,” in Proceedings of The International Display Workshops, vol. 21 (Institute of Image Information and Television Engineersof Japan and Society of Information Display, 2014), pp. 1292–1295.

Y. Kim, J.-H. Jung, J.-M. Kang, Y. Kim, B. Lee, and B. Javidi, “Resolution-enhanced three-dimensional integral imaging using double display devices,” in LEOS 2007 - IEEE Lasers and Electro-Optics Society Annual Meeting Conference Proceedings, (IEEE, 2007), pp. 356–357.


Figures (12)

Fig. 1 (a) Occurrence of color moiré; (b) spatial luminance distribution on a multiple combined display panel (equivalent representations).
Fig. 2 (a) Composition of a triple-unit combination; (b) spatial distribution of the color moiré and a schematic of its suppression by the triple-unit combination.
Fig. 3 Relationship between the radius of the elemental-lens circle of confusion and the moiré modulation degree.
Fig. 4 (a) Lens-shift combination with a triple unit and (b) its displayable spatial-frequency bandwidth.
Fig. 5 Resolution characteristics of the lens-shift combination by the triple unit in the (a) horizontal and (b) vertical directions.
Fig. 6 Relationship between the depth position of the reconstructed image and its frequency limitation. The viewing distance from the lens array is L = 100 mm.
Fig. 7 MTFs of the 3D image displayed at the positions z = ±80 mm. The viewing distance from the lens array is L = 1000 mm.
Fig. 8 Prototype triple-unit combination optical systems.
Fig. 9 Color moiré observed when an all-white image is displayed, enlarging a part of the lens array (20 × 15 lenses). (a) Single display; (b) double-unit combination; (c) triple-unit combination; (d) normalized average luminance profile extracted from the B channel alone; (e) chromaticity diagram of the color moiré.
Fig. 10 Experimental results of the resolution-chart display by the single- and triple-unit displays, enlarging the star-chart part of the lens array (61 × 70 lenses in a single unit). (a) and (c) Single unit; (b) and (d) triple-unit combined display. The star charts in (a) and (b) are displayed at z = 0 mm, whereas those in (c) and (d) are displayed at z = 80 mm.
Fig. 11 3D real-video-image display experiment with ρ ≈ −0.6 pixels. (a) and (d) Experimental results for the single-unit display; (b), (c), and (e) results for the triple-unit display; (d) and (e) are enlarged views of (a) and (b), respectively; (c) is observed from another viewpoint (to the left). Motion parallax can be confirmed.
Fig. 12 Normalized average luminance profile (B channel alone) of the blue-background part of Figs. 11(a) and 11(b).

Tables (2)

Table 1 Specifications of lens array in the experimental setup.

Table 2 Specifications of display panel in the experimental setup.

Equations (27)

$f_{\mathrm{moire}} = \min \left| f_{px} - n f_{Lx} \right| ,$

$c_1(x) = \frac{c_0}{3} \left[ F(0) + 2 \sum_{n=1}^{\infty} \mathrm{sinc}\!\left(\frac{n}{3}\right) \cos\!\left(\frac{2 \pi n x}{P_{px}}\right) F(2 \pi n) \right] ,$

$F(2 \pi n) = \frac{2 J_1(2 \pi n \rho)}{2 \pi n \rho} .$

$\rho = \frac{d}{2} \left[ \left( \frac{1}{l_f} - \frac{1}{L} \right) g - 1 \right] .$

$m(c_1) = \frac{\max\{c_1(x)\} - \min\{c_1(x)\}}{\max\{c_1(x)\} + \min\{c_1(x)\}} .$

$c_2(x) = c_1(x) + c_1(x + P_{px}/2) ,$

$c_3(x) = c_1(x) + c_1(x + P_{px}/3) + c_1(x + 2 P_{px}/3) .$

$h_{31}(x, y) = \mathrm{comb}\!\left( \frac{x}{\delta_x}, \frac{y}{\delta_y} \right) + \mathrm{comb}\!\left( \frac{x - \delta_x/2}{\delta_x}, \frac{y - \delta_y/2}{\delta_y} \right) ,$

$h_{32}(x, y) = h_{31}\!\left( x, y - \frac{\delta_y}{3} \right) = \mathrm{comb}\!\left( \frac{x}{\delta_x}, \frac{y - \delta_y/3}{\delta_y} \right) + \mathrm{comb}\!\left( \frac{x - \delta_x/2}{\delta_x}, \frac{y - 5\delta_y/6}{\delta_y} \right) ,$

$h_{33}(x, y) = h_{31}\!\left( x, y + \frac{\delta_y}{3} \right) = \mathrm{comb}\!\left( \frac{x}{\delta_x}, \frac{y + \delta_y/3}{\delta_y} \right) + \mathrm{comb}\!\left( \frac{x - \delta_x/2}{\delta_x}, \frac{y - \delta_y/6}{\delta_y} \right) .$

$H_{31}(u, v) = \left| \delta_x u\, \delta_y v \right| \mathrm{comb}(\delta_x u, \delta_y v) \left[ 1 + \exp(-i \pi \delta_x u) \exp(-i \pi \delta_y v) \right] ,$

$H_{32}(u, v) = \left| \delta_x u\, \delta_y v \right| \mathrm{comb}(\delta_x u, \delta_y v) \left[ \exp\!\left( -\frac{2 i \pi \delta_y v}{3} \right) + \exp(-i \pi \delta_x u) \exp\!\left( -\frac{5 i \pi \delta_y v}{3} \right) \right] ,$

$H_{33}(u, v) = \left| \delta_x u\, \delta_y v \right| \mathrm{comb}(\delta_x u, \delta_y v) \left[ \exp\!\left( \frac{2 i \pi \delta_y v}{3} \right) + \exp(-i \pi \delta_x u) \exp\!\left( -\frac{i \pi \delta_y v}{3} \right) \right] .$

$H_3(u, v) = H_{31}(u, v) + H_{32}(u, v) + H_{33}(u, v) .$

$D_a(f) = \left| \frac{2 J_1(\pi d f)}{\pi d f} \right| .$

$\alpha = \nu \lvert g \rvert ,$

$\alpha_{\max} = \frac{\lvert g \rvert}{2 P_p} .$

$\beta(z) = \frac{\alpha (L - z)}{\lvert z \rvert} .$

$\beta_{\mathrm{nyq}} = \frac{L}{2 P_L} .$

$\beta_a = \frac{1.22 L}{d} .$

$\beta_{\lim}(z) = \min \left\{ \beta(z),\ \beta_{\mathrm{nyq}},\ \beta_a \right\} .$

$D_{Lz}(\alpha) = \left| \frac{1}{\pi r^2} \iint_{\mathrm{pupil}} \exp\!\left[ 2 i \pi \alpha x\, \epsilon(z) \right] dx\, dy \right| = \left| \frac{4}{\pi r b} \left( K_1 \cos \frac{b \alpha \lambda}{2} - K_2 \sin \frac{b \alpha \lambda}{2} \right) \right| ,$

$K_1 = 2 \left[ J_1(r b) \left( \frac{\theta}{2} + \frac{\sin 2\theta}{4} \right) - J_3(r b) \left( \frac{\sin 2\theta}{4} + \frac{\sin 4\theta}{8} \right) + J_5(r b) \left( \frac{\sin 4\theta}{8} + \frac{\sin 6\theta}{12} \right) \right] ,$

$K_2 = J_0(r b) \sin \theta - 2 \left[ J_2(r b) \left( \frac{\sin \theta}{2} + \frac{\sin 3\theta}{6} \right) - J_4(r b) \left( \frac{\sin 3\theta}{6} + \frac{\sin 5\theta}{10} \right) + J_6(r b) \left( \frac{\sin 5\theta}{10} + \frac{\sin 7\theta}{14} \right) \right] ,$

$\epsilon(z) = \left| 1/g - 1/z - 1/l_f \right| ,$

$D_p(\alpha) = \left| \mathrm{sinc}\!\left( \alpha w_{px} / \lvert g \rvert \right) \right| ;$

$D_z(\alpha) = D_{Lz}(\alpha)\, D_p(\alpha) .$
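
The moiré-suppression behavior expressed by the formulas above for c1(x), F(2πn), ρ, m, c2(x), and c3(x) can be checked numerically. The following Python sketch is not the authors' code: the parameter values (lens aperture, focal length, gap, viewing distance, number of harmonics) are placeholders rather than the prototype specifications of Tables 1 and 2, the normalized sinc convention of NumPy is assumed to match the paper's, and the double- and triple-unit combinations are modeled as ideal superpositions of shifted copies of c1(x).

import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

# Illustrative parameters only (assumed, not the prototype values in Tables 1 and 2).
P_px = 1.0     # period of the subpixel-induced moire on the panel (normalized)
d    = 1.0     # elemental-lens aperture (mm)
l_f  = 3.0     # elemental-lens focal length (mm)
L    = 1000.0  # viewing distance (mm)
g    = 3.02    # lens-to-panel gap (mm)
c0   = 1.0     # luminance constant
N    = 50      # harmonics retained in the series for c1(x)

# Radius of the elemental-lens circle of confusion: rho = (d/2)[(1/l_f - 1/L)g - 1]
rho = (d / 2.0) * ((1.0 / l_f - 1.0 / L) * g - 1.0)

def F(n, rho):
    """Defocus attenuation of the n-th harmonic: F(2*pi*n) = 2*J1(2*pi*n*rho)/(2*pi*n*rho)."""
    if n == 0 or rho == 0:
        return 1.0
    a = 2.0 * np.pi * n * abs(rho)  # 2*J1(a)/a is even in a, so abs() is harmless
    return 2.0 * j1(a) / a

def c1(x):
    """Single-unit moire luminance profile c1(x); np.sinc is the normalized sinc."""
    s = F(0, rho) * np.ones_like(x)
    for n in range(1, N + 1):
        s += 2.0 * np.sinc(n / 3.0) * np.cos(2.0 * np.pi * n * x / P_px) * F(n, rho)
    return c0 / 3.0 * s

def modulation(c):
    """Modulation degree m = (max - min) / (max + min)."""
    return (c.max() - c.min()) / (c.max() + c.min())

x = np.linspace(0.0, P_px, 2048, endpoint=False)

# Double- and triple-unit combinations: copies of c1 shifted by P_px/2 and P_px/3.
c2 = c1(x) + c1(x + P_px / 2.0)
c3 = c1(x) + c1(x + P_px / 3.0) + c1(x + 2.0 * P_px / 3.0)

print("m(c1) =", modulation(c1(x)))  # single unit: visible moire
print("m(c2) =", modulation(c2))     # double-unit combination: odd harmonics cancel
print("m(c3) =", modulation(c3))     # triple-unit combination: residual moire nearly vanishes

With these placeholder values the triple-unit profile c3(x) becomes essentially flat, so m(c3) falls to nearly zero, while the single- and double-unit profiles retain visible modulation in this toy setting; only the qualitative trend, not the numbers, should be read from the sketch.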
