
Asymmetric integral imaging system for a see-through three-dimensional display with background imaging function

Open Access

Abstract

A see-through three-dimensional display with variable background imaging function is proposed. The proposed display system is based on integral imaging and consists of three lens arrays and a transparent flat-panel display. An asymmetric alignment of the three lens arrays enables variable background imaging. The background scene situated at any distance from the display system can be imaged at an intended distance from the display system. The possible imaging regions are shown. The proposed technique was experimentally verified using two optical systems that consisted of lens arrays with large and small lens pitches.

© 2017 Optical Society of America

1. Introduction

See-through three-dimensional (3D) displays are key devices for augmented reality (AR) systems [1,2] because they can superimpose 3D images on real objects at the same depth as those objects. We have previously proposed an optical see-through 3D display system based on integral imaging [3,4]; the see-through function was realized with an optical system composed of lens arrays, which allows background scenes to be viewed optically, without any time delay or loss of resolution. In this study, this optical system was modified so that a background scene situated at any distance from the display system could be imaged at an intended distance from the system, i.e., a background imaging function is provided.

Several types of optical see-through 3D display systems have been developed. Hong et al. [5] proposed a display system that utilized a holographic optical element and a projector to provide the optical see-through function. Hua and Javidi [6] proposed a display system that combined a small integral imaging display with a freeform optical element to minimize the size of the optical system. Both systems are projection-based and are therefore suitable for use in head-mounted displays. Flat-panel systems are also important because they can be used to provide smartphones and tablets with AR functionality. Maimone and Fuchs [7] proposed a flat-panel system that utilized a multi-layer display technique based on multiple stacked transparent liquid crystal display (LCD) panels. Maimone et al. proposed a “pinlight display” system [8] in which a transparent LCD panel is combined with a transparent plate carrying an array of dots, so that an array of point light sources is produced. However, as these two flat-panel systems do not contain lens imaging systems, they cannot provide the background imaging function.

We have also proposed an optical see-through 3D display with a flat-panel shape [3], consisting of three lens arrays and a transparent flat-panel display (FPD). Integral imaging allows the combination of a single lens array and an FPD to generate 3D images [9,10], whereas the three lens arrays produce the desired optical see-through function. This enables 3D images to be superimposed onto a background scene. By adding another FPD, the system can also achieve background occlusion [4]; that is, background scenes can be occluded by the 3D images, with the occlusion mask patterns displayed on the additional FPD. For the see-through function to be realized, the three-lens-array imaging system must have unit magnification, i.e., the object length and image length must be equal. In this study, we modified the three-lens-array imaging system so that variable imaging could be achieved, i.e., the object length and image length no longer have to be equal, and the positions of the object plane and image plane can be changed. This variable imaging function could be used to assist people with refractive errors: when background objects are imaged within the depth range over which such viewers can focus, they can view the background scene through the display without eyeglasses.

Several display systems intended to assist people with refractive errors have already been proposed. For example, Huang et al. proposed a multi-layer display system [11]. In this system, the eye’s point spread function (PSF) is used to deconvolve a displayed image, and the resultant image is represented by multiple display layers. Because the deconvolved image contains negative values, this system can display only low-contrast images. Meanwhile, Pamplona et al. proposed a light field display system [12], in which the directions of the rays generated by the system are determined based on the eye’s PSF. Because this system is based on integral imaging, the resolution of the output image is low. The techniques proposed by Pamplona et al. and Huang et al. were later combined so that higher-contrast and higher-resolution images are produced [13]; in this combined system, the deconvolved images are displayed by the light field display. Although these systems can display 3D images to people with refractive errors, the technique proposed in this study can provide them with both 3D images and background images.

The flat-panel type see-through 3D display that we developed in an earlier study [3] is explained in Section 2, the mechanism producing the variable background imaging function is explained in Section 3, and the limitations of our proposed technique are explained in Section 4. The experiments are described in Section 5, and a discussion on them follows in Section 6 before we conclude the paper (Section 7).

2. Flat-panel type see-through 3D display

Before describing how the technique proposed in this paper works, we first provide a brief explanation of the flat-panel optical see-through 3D display that we previously proposed [3].

Figure 1(a) is a schematic diagram of this previously proposed system. The system consists of three two-dimensional (2D) lens arrays, a transparent FPD, and a light blocking wall (LBW). For the sake of simplicity, this figure shows a system that does not support the background occlusion function [4]. The focal length of the two outer lens arrays is f, and the focal length of the central lens array is f/2. The distance from the central lens array to each of the two outer arrays is 2f. The FPD is located on the focal plane of the right-hand-side lens array. The system consists of a 2D array of elementary imaging systems. As shown in Fig. 1(b), each elementary imaging system produces an upright image with unit magnification; thus, the whole system provides see-through images. The LBW prevents rays from entering adjacent elementary imaging systems and thereby prevents multiple images from being generated. The right-hand-side lens array and the transparent FPD constitute an integral imaging display that produces 3D images, so 3D images can be superimposed onto the see-through images. The see-through function is achieved by adding the two outer lens arrays, which symmetrize the integral imaging system; this symmetric integral imaging system is therefore capable of providing the AR function.
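As a quick check, this unit-magnification, direction-preserving behavior can be confirmed with a ray-transfer-matrix product: for the symmetric elementary imaging system (object plane at 2f, gaps of 2f, central focal length f/2, image plane at 2f) the product reduces to the identity matrix. The short numerical sketch below is our own illustration of this check and is not part of the original paper; the helper names T and L are ours.

```python
import numpy as np

def T(d):
    """Free-space transfer matrix over a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def L(f):
    """Thin-lens matrix for a lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

f = 38.1  # mm; focal length of the outer lens arrays used later in Section 5

# Symmetric elementary imaging system:
# object at 2f -> lens (f) -> gap 2f -> lens (f/2) -> gap 2f -> lens (f) -> image at 2f
S = T(2 * f) @ L(f) @ T(2 * f) @ L(f / 2) @ T(2 * f) @ L(f) @ T(2 * f)
print(np.round(S, 9))  # [[1. 0.] [0. 1.]]: upright unit magnification, ray direction preserved
```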

Fig. 1 Flat-panel type optical see-through 3D display: (a) symmetric integral imaging system and (b) elementary imaging system.

3. Variable background imaging function

In a symmetric integral imaging system, the object and image planes are fixed to positions outside the imaging system: the object plane is located at a distance of 2f from the left-hand lens array, and the image plane is located at a distance of 2f from the right-hand lens array. In the present study, the lens array imaging system was modified to make it asymmetric and allow the positions of the object and image planes to be changed; i.e., the two distances between the three lens arrays can be varied, and the background scene can be imaged at any distance from the lens array system. As a result, the asymmetric integral imaging system provides the variable background imaging function.

Figure 2(a) illustrates the asymmetric integral imaging system proposed by the present study. The object plane is located at a distance of l from the left-hand lens array, and the image plane is located at a distance of l′ from the right-hand lens array. The distance between the left-hand-side and central lens arrays is denoted by d1, and the distance between the right-hand-side and central lens arrays is denoted by d2.

Fig. 2 Asymmetric integral imaging system: (a) structure of the system and (b) elementary imaging system.

Figure 2(b) shows one of the elementary imaging systems that make up the asymmetric integral imaging system. Each elementary imaging system should generate an upright image with unit magnification, and the images of all the elementary imaging systems should be continuously connected, as shown in Fig. 3.

Fig. 3 Background imaging by the asymmetric integral imaging system.

In this newly proposed system, the two gaps between the three lens arrays, d1 and d2, are derived using a ray transfer matrix analysis [14]. The system matrix of the elementary imaging system is given by

S = \begin{pmatrix} A & B \\ C & D \end{pmatrix} = T_{l'} L_f T_{d_2} L_{f/2} T_{d_1} L_f T_l = \begin{pmatrix} 1 & l' \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -1/f & 1 \end{pmatrix} \begin{pmatrix} 1 & d_2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -2/f & 1 \end{pmatrix} \begin{pmatrix} 1 & d_1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -1/f & 1 \end{pmatrix} \begin{pmatrix} 1 & l \\ 0 & 1 \end{pmatrix}, \quad (1)

where T and L represent the transfer and lens matrices, respectively. The element B needs to be zero to satisfy the image formula, and the element A needs to be one for unit magnification to be obtained. By adhering to the conditions for A and B, d1 and d2 can be derived:

d_1 = \frac{(3l - l')\,f}{2(l - f)}, \quad (2)

d_2 = \frac{(3l' - l)\,f}{2(l' - f)}. \quad (3)
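As a numerical check of Eqs. (1)–(3), the snippet below (our own sketch, not code from the paper; the helper names are ours) evaluates the two gaps for the object and image lengths used in the first experiment of Section 5 (l = 300 mm, l′ = 200 mm, f = 38.1 mm) and multiplies out the system matrix of Eq. (1) to confirm that A = 1 and B = 0.

```python
import numpy as np

def T(d):
    """Free-space transfer matrix over a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def L(f):
    """Thin-lens matrix for a lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def gaps(l, lp, f):
    """Gaps d1 and d2 between the three lens arrays, Eqs. (2) and (3)."""
    d1 = (3 * l - lp) * f / (2 * (l - f))
    d2 = (3 * lp - l) * f / (2 * (lp - f))
    return d1, d2

l, lp, f = 300.0, 200.0, 38.1          # mm; first scaled-up-model experiment in Section 5
d1, d2 = gaps(l, lp, f)
print(round(d1, 1), round(d2, 1))      # 50.9 and 35.3, the values quoted in Section 5

# System matrix of Eq. (1): object plane -> lens 1 (f) -> lens 2 (f/2) -> lens 3 (f) -> image plane
S = T(lp) @ L(f) @ T(d2) @ L(f / 2) @ T(d1) @ L(f) @ T(l)
print(round(S[0, 0], 9), round(S[0, 1], 9))   # A = 1 (unit magnification), B = 0 (imaging condition)
```

Running the same check with the symmetric values l = l′ = 2f returns d1 = d2 = 2f, recovering the configuration of Section 2.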

The resolution of the background images provided by the proposed system is not limited by the number of lenses in the lens arrays, because every elementary imaging system produces an image. As a result, this system does not spatially sample the background scene.

The variable background imaging function allows people with refractive errors to view background scenes. For people with myopia, the background scene can be imaged nearer to the eye, as shown in Fig. 4(a), because their focal range is located closer to them than that of people with normal vision; in this case, the image length l′ should be positive. For people with hyperopia or presbyopia, the background scene can be imaged farther away, as shown in Fig. 4(b), because their focal range is located farther from them than that of people with normal vision; in this case, the image length l′ should be negative. When a background scene is imaged within a viewer’s focal range, the scene can be observed without eyeglasses or contact lenses. The position of the transparent FPD needs to be properly adjusted so that the integral-imaging-based 3D images are generated at the same depth as the background images. When the gap between the FPD and the right lens array is denoted by s and the width of the elementary images is denoted by w, the viewing zone angle of the 3D images is given by 2tan⁻¹(w/(2s)). The viewing zone angle of the 3D images should be equal to or larger than that of the background images, which depends on the asymmetric configuration of the integral imaging system.
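As a minimal worked example of the viewing-zone-angle expression above (our own sketch; the numerical values are illustrative assumptions, taking the 4.0 mm horizontal lens pitch of the scaled-up model in Section 5 as w and its 38.1 mm focal length as s):

```python
import math

def viewing_zone_angle_deg(w, s):
    """Viewing-zone angle 2*atan(w / (2*s)) of the integral-imaging 3D images, in degrees."""
    return math.degrees(2.0 * math.atan(w / (2.0 * s)))

# Illustrative values only: elementary-image width taken as the lens pitch,
# and the FPD assumed to sit at the focal plane of the right lens array.
print(round(viewing_zone_angle_deg(w=4.0, s=38.1), 1))  # ~6.0 degrees
```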

Fig. 4 Background imaging function for people with refractive errors: a background scene is imaged (a) nearer to viewers with myopia and (b) farther from viewers with hyperopia or presbyopia.

4. Limitations of the proposed system

To determine the possible imaging regions, we begin by requiring that the two gaps between the three lens arrays, d1 and d2, be non-negative, because the gaps are to be set by mechanical means. Using the condition d1 ≥ 0, the relationship between l and l′ can be derived from Eq. (2):

l' \le 3l \quad (\text{for } l > f), \qquad l' \ge 3l \quad (\text{for } l < f). \quad (4)
The relationship between l and l′ for d2 ≥ 0 can likewise be derived from Eq. (3):

l' \ge l/3 \quad (\text{for } l' > f), \qquad l' \le l/3 \quad (\text{for } l' < f). \quad (5)

The length between the left-hand-side lens array and the object plane, l, must also be positive.

Figure 5 illustrates the possible imaging regions that are obtained when Eqs. (4) and (5) are used. They are shown as red and blue regions in the figure, and as can be seen, the possible imaging regions are limited. It should also be noted that some combinations of l and l' are not allowed.

Fig. 5 Possible imaging regions of the variable background imaging function. The two dots and the cross indicate the experimental conditions described in Section 5.

In Fig. 4(a), the background imaging condition for eyes affected by myopia is that either l′ > 0, or l + d1 + d2 > −l′ when l′ < 0. By combining Eqs. (2) and (3), the following relationship can be obtained:

l' > 0, \qquad (2l - 3f)\,l'^{2} + 2(l + f)\,l\,l' - 3f\,l^{2} < 0 \quad (\text{for } l' < 0). \quad (6)

The regions for eyes affected by myopia (referred to as the myopia area) given by the above equations are indicated by the blue colored regions in Fig. 5. From Fig. 4(b), it can be seen that the condition for eyes affected by hyperopia and presbyopia is described by l + d1 + d2 < −l′ when l′ < 0, meaning that the following relationship can be obtained:

(2l - 3f)\,l'^{2} + 2(l + f)\,l\,l' - 3f\,l^{2} > 0 \quad (\text{for } l' < 0). \quad (7)

The region for eyes affected by hyperopia and presbyopia (referred to as the hyperopia and presbyopia area) is indicated by the red colored region in Fig. 5. The dots and the cross mark in this figure indicate the experimental conditions described in Section 5.
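The feasibility and region tests of Eqs. (4)–(7) can be collected into a small helper. The sketch below is our own illustration (the function name and structure are ours, not from the paper); it reproduces the classification of the two dots and the cross mark plotted in Fig. 5.

```python
def classify(l, lp, f):
    """Classify an (object length l, image length lp) pair for the asymmetric system.

    Returns None when the pair would require a negative gap (outside the regions of Fig. 5),
    'myopia' when the background image is brought nearer to the viewer (Eq. (6)),
    or 'hyperopia/presbyopia' when it is pushed farther away (Eq. (7)).
    """
    d1 = (3 * l - lp) * f / (2 * (l - f))      # Eq. (2)
    d2 = (3 * lp - l) * f / (2 * (lp - f))     # Eq. (3)
    if l <= 0 or d1 < 0 or d2 < 0:
        return None                            # violates l > 0 or Eqs. (4)-(5)
    if lp > 0:
        return "myopia"                        # real image formed in front of the display
    poly = (2 * l - 3 * f) * lp ** 2 + 2 * (l + f) * l * lp - 3 * f * l ** 2
    return "myopia" if poly < 0 else "hyperopia/presbyopia"

f_large, f_small = 38.1, 3.3                   # mm; lens focal lengths used in Section 5
print(classify(300.0, 200.0, f_large))         # dot in the myopia area
print(classify(153.0, -450.0, f_large))        # dot in the hyperopia and presbyopia area
print(classify(100.0, 90.0, f_small))          # cross mark, in the myopia area
```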

In the previous optical see-through 3D displays [3,4], both the height and the direction of the incident rays are preserved because symmetric integral imaging systems are employed. The asymmetric integral imaging system proposed in this study, however, preserves only the height of the rays. Therefore, the proposed system is suitable for imaging planar (2D) images. Even with this limitation, the proposed technique is effective for assisting people with refractive errors, as they will still be able to obtain information from 2D content such as books, newspapers, and computer monitors.

5. Experimental

To experimentally verify the proposed background imaging function, we first constructed a scaled-up model using commercial lens arrays with a large lens pitch; the experimental system is illustrated in Fig. 6. Four identical plano-convex lens arrays (#63-230, Edmund Optics Inc.) were used. Each lens array consisted of 10 × 13 square lenses aligned with a horizontal pitch of 4.0 mm and a vertical pitch of 3.0 mm. The size of each lens was 4.0 × 3.0 mm², and the focal length was 38.1 mm. The central lens array, which had half the focal length of the two outer arrays, was obtained by attaching two lens arrays with their convex surfaces facing one another. In the experiments, the screen of an LCD monitor, on which alphabetic characters were displayed, was used as the object in the background scene, as shown in Fig. 7.

Fig. 6 Scaled-up model of the proposed system.

Fig. 7 Experimental setup using the scaled-up model.

Figures 8(a) and 8(b) show the experimental results obtained when the background image was produced nearer to a viewer. The object was placed 300 mm behind the experimental system, and the background image was produced 200 mm in front of it (l = 300 mm and l′ = 200 mm); this condition corresponds to the dot in the myopia area of Fig. 5. The gaps were calculated as d1 = 50.9 mm and d2 = 35.3 mm. Figure 8(a) shows the result obtained when the camera was focused on the produced background image; the background image could be observed clearly, whereas the object appeared blurred. Figure 8(b) shows the result obtained when the camera was focused on the object in the background scene; the object could be observed clearly, whereas the produced background image became blurred. In the magnified background image also shown in Fig. 8(a), weak multiple images were observed; this was because an LBW was not used in the scaled-up model. The characters in the background image appeared larger than those on the LCD screen because the background image was closer to the observation position than the LCD screen.

Fig. 8 Experimental results using the scaled-up model: (a), (b) the background image was produced nearer to a viewer, with the camera focused on (a) the background image and (b) the object; (c), (d) the background image was produced farther from a viewer, with the camera focused on (c) the background image and (d) the object.

Figures 8(c) and 8(d) show the experimental results obtained when the background image was produced farther from a viewer. The object was placed 153 mm behind the experimental system, and the background image was produced 237 mm behind it (l = 153 mm and l′ = −450 mm); i.e., the background image was produced behind the object. This condition corresponds to the dot in the hyperopia and presbyopia area of Fig. 5. The gaps were calculated as d1 = 162.6 mm and d2 = 58.2 mm. Figure 8(c) shows the result obtained when the camera was focused on the produced background image; the background image could be observed clearly. Figure 8(d) shows the result obtained when the camera was focused on the object in the background scene; the object could be observed clearly, but the produced background image was less clear than that in Fig. 8(c) because of severe multiple image generation. In the magnified background image shown in Fig. 8(c), multiple images were still observed. The characters in the background image appeared smaller than those on the LCD screen because the background image was farther from the observation position than the LCD screen.

We also verified the background imaging function using the experimental system constructed in our previous study [3]. Commercial plano-convex lens arrays (#630, Fresnel Technologies Inc.) with a small lens pitch (1.0 mm) were used. The number of lenses was 154 × 154, the size of each lens was 1.0 × 1.0 mm², and the focal length was 3.3 mm. The central lens array consisted of two attached lens arrays, as in the scaled-up model. The lens arrays were aligned using a lens array holder, as shown in Fig. 9. The object in this system was also the screen of the LCD monitor.

Fig. 9 Experimental system using the lens arrays that had a small lens pitch.

Figure 10 shows the experimental result when the background image was produced closer to a viewer. The object was placed 100 mm behind the experimental system, and the background image was produced 90 mm in front of it (l = 100 mm and l' = 90 mm). The cross mark in Fig. 5 shows this experimental condition. The gaps were calculated as d1 = 3.6 mm and d2 = 3.2 mm. The background image could be observed through the experimental system. The blur in the background image was caused by multiple image generation because an LBW was not used. The background image could not be produced farther from a viewer because the possible gaps among the lens arrays in the experimental system were limited.

Fig. 10 Experimental result obtained by the experimental system using lens arrays with a small lens pitch.

Finally, the superposition of 3D images onto the background image was experimentally demonstrated. A transparent film, onto which elementary images were printed, was inserted in the scaled-up model. The film was placed on the focal plane of the right lens array in this experiment. As shown in Fig. 11, the characters “3D” were generated as a 3D image which was superimposed on the background image. As the number of lenses in the lens arrays was limited, the resolution of the 3D image was also limited.

Fig. 11 Superposition of a 3D image onto a background image.

6. Discussion

The background image in Fig. 8(c) was not as sharp as that in Fig. 8(a). The gaps between the lens arrays, d1 and d2, were calculated using Eqs. (2) and (3); however, these equations are based on a paraxial approximation. To improve the image quality, we re-determined the gaps using the optical design software ZEMAX (ZEMAX, LLC); the gaps were thereby determined to be d1 = 157.5 mm and d2 = 54.0 mm. Figure 12(b) shows the background image obtained with this optimized system; for comparison, Fig. 12(a) shows the original background image before the optimization (Fig. 8(c)). The optimized image is sharper than that obtained under the original conditions. To further improve the image quality, the image degradation caused by lens aberrations was reduced by placing an aperture array between the left-hand-side and central lens arrays. The aperture array was placed 139.9 mm away from the left-hand-side lens array; this gap was also determined using the optical design software. The apertures were square, with a size of 1 × 1 mm², and were fabricated by printing an aperture pattern on a transparent film with a thermal transfer printer. Figure 12(c) shows the background image obtained using this aperture array; the image is sharper than that shown in Fig. 12(b).

Fig. 12 Improvements in the sharpness of the background images: images were obtained with (a) the original configuration, (b) optimized gaps among the lens arrays, and (c) an aperture array.

The imaging characteristics of the elementary imaging systems are affected by the maximum incident angle of the rays passing through them. For the experimental result shown in Fig. 8(a), in which the background scene was imaged nearer to viewers as in Fig. 4(a), the maximum incident angle of the rays emitted from the LCD screen and passing through an elementary imaging system was 0.38°; in this case, the maximum incident angle was limited by the size of the lenses constituting the left lens array. For the experimental results shown in Figs. 8(c) and 12(b), in which the background scene was imaged farther from viewers as in Fig. 4(b), the maximum incident angles were 0.22° and 0.24°, respectively; in this case, the maximum incident angle was limited by the size of the lenses constituting the right lens array. Because the maximum incident angle in the former case was larger than in the latter case, the resolution in the former case should be higher, and indeed the image shown in Fig. 8(a) appeared sharper than those shown in Figs. 8(c) and 12(b). However, because the maximum incident angles were very small in both cases, the resolution was low.

In the experimental results shown so far, multiple images were observed in the background images because LBWs were not used in the experimental systems. An LBW was fabricated for the previous symmetric integral imaging system [3]; however, fabricating LBWs for the asymmetric imaging systems is difficult because the gaps between the lens arrays differ for different background imaging conditions. Nevertheless, we checked the effectiveness of an LBW by constructing one from strips of black paper. The LBW, corresponding to 3 × 3 lenses, was inserted between the left-hand-side and central lens arrays. The resulting background image is shown in Fig. 13, in which the 3 × 3 lenses are indicated by the yellow lines. This arrangement produced fewer multiple images in the 3 × 3 lens area than in the equivalent area of Fig. 12. Dark regions appeared around the 3 × 3 lenses because of the supporting structure of the paper strips used as the LBW. To prevent multiple images from being generated in a variable background imaging system, a variable-length LBW will need to be developed.

Fig. 13 Prevention of multiple images from being generated through the use of an LBW that corresponds to 3 × 3 lenses.

The background occlusion function, which we previously proposed [4], can also be implemented in the proposed system. Another transparent FPD was added between lens arrays 2 and 3 to display occlusion mask patterns. The new FPD was placed at the plane conjugate to the FPD that displays the elementary images and is located between lens arrays 1 and 3. When 3D images are generated in front of the background image, the occlusion mask patterns are displayed so that the background image is occluded by the 3D images. When the 3D images are generated behind the background image, elementary images of the partial 3D images occluded by real objects are displayed on the FPD between lens arrays 1 and 3 so that the 3D images are occluded by the real objects. In this case, the 3D structures of the real objects should be known.

7. Conclusion

An optical see-through 3D display that used a symmetric integral imaging system was modified so that a background imaging function could be created; this was achieved through the use of an asymmetric integral imaging system that consisted of three lens arrays.

A scaled-up model was constructed using lens arrays consisting of 4.0 × 3.0 mm² lenses; this model was used to experimentally verify the proposed background imaging technique. The object behind the scaled-up model was imaged both in front of and behind the display system. A second experimental system, consisting of dense lens arrays with a lens pitch of 1.0 mm, was also used to demonstrate the background imaging function. The asymmetric integral imaging system was subsequently optimized with optical design software to improve the quality of the background image. An aperture array and an LBW were also found to be effective in improving the overall image quality of the proposed system.

The technique proposed in this study is expected to result in a method that will allow people with refractive errors to view images without the need to wear eyeglasses or contact lenses.

Funding

Hoso Bunka Foundation, Japan; SECOM Science and Technology Foundation, Japan; JSPS KAKENHI Grant Number JP17J02464.

References and links

1. R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, and B. MacIntyre, “Recent advances in augmented reality,” IEEE Comput. Graph. Appl. 21(9), 34–47 (2001).

2. D. W. F. van Krevelen and R. Poelman, “A survey of augmented reality technologies, applications and limitations,” Int. J. Virtual Reality 9(2), 1–19 (2010).

3. Y. Takaki and Y. Yamaguchi, “Flat-panel see-through three-dimensional display based on integral imaging,” Opt. Lett. 40(8), 1873–1876 (2015).

4. Y. Yamaguchi and Y. Takaki, “See-through integral imaging display with background occlusion capability,” Appl. Opt. 55(3), A144–A149 (2016).

5. K. Hong, J. Yeom, C. Jang, J. Hong, and B. Lee, “Full-color lens-array holographic optical element for three-dimensional optical see-through display,” Opt. Lett. 39(1), 127–130 (2014).

6. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014).

7. A. Maimone and H. Fuchs, “Computational augmented reality eyeglasses,” in Proceedings of IEEE International Symposium on Mixed and Augmented Reality (IEEE, 2013), pp. 29–38.

8. A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 89 (2014).

9. B. Javidi and S.-H. Hong, “Three-dimensional holographic image sensing and integral imaging display,” J. Disp. Technol. 1(2), 341–346 (2005).

10. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues,” Appl. Opt. 50(34), H87–H115 (2011).

11. F.-C. Huang, D. Lanman, B. A. Barsky, and R. Raskar, “Correcting for optical aberrations using multilayer displays,” ACM Trans. Graph. 31(6), 185 (2012).

12. V. F. Pamplona, M. M. Oliveira, D. G. Aliaga, and R. Raskar, “Tailored displays to compensate for visual aberrations,” ACM Trans. Graph. 31(4), 81 (2012).

13. F.-C. Huang, G. Wetzstein, B. A. Barsky, and R. Raskar, “Eyeglasses-free display: towards correcting visual aberrations with computational light field display,” ACM Trans. Graph. 33(4), 59 (2014).

14. E. Hecht, Optics (Addison-Wesley, 2002), Chap. 6.
