Abstract

To obtain a large viewing angle in conventional projection-type integral photography frameworks, multiple projectors need to be arranged at particular angles to a lens array; hence, the systems require a large space. This paper proposes a system that achieves a large viewing angle in a space-saving manner by using curved mirrors that face each other. To this end, a projector is placed directly behind a lens array, and curved mirrors are installed to surround the rays from the projector. The incident angle toward the lens array increases through multiple reflections between the mirrors, which widens the viewing angle. In addition, the projector need not be installed at an angle to the lens array, which results in a space-saving system. With the proposed method, a viewing angle of ±60 deg can be achieved.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Integral photography (IP) is a method for generating a 3D image [1–5]. IP controls the direction of the rays and their wavelengths (colors) to generate a light field. One type of IP projects directly onto a lens array by using projectors [5]. In this IP method, the space required by the system grows as the viewing angle is extended, because the range of incident angles into the lens array must be increased for a wide viewing angle. As shown on the left of Fig. 1, several projectors must be installed behind the lens array. Therefore, in this study, we aim to achieve both a wide viewing angle and a space-saving system for projection-type IP.


Fig. 1. Comparison of the conventional and proposed systems.


This paper proposes a method to save space by using curved mirrors that face each other. The right of Fig. 1 shows the proposed space-saving configuration, in which a single projector is placed directly behind the lens array and the curved mirrors are installed facing each other. Even in this configuration, the incident angle toward the lens array can be increased by multiple reflections between the mirrors. We previously proposed a method that uses flat mirrors [6]; however, its viewing angle was narrow because of the flat mirrors, and the system was not optimized. This paper instead uses curved mirrors, which increase the viewing angle and allow us to achieve both a wide viewing angle and a space-saving system. Several parameters must be determined in the system design, so we defined a new evaluation value based on the viewing angle and used it with Bayesian optimization to search for the optimal set of parameters. Using a system built from the optimization results, we achieved a viewing angle of $\pm 60\, \mathrm {deg}$ in a space-saving system.

2. Related works

2.1 Integral photography

Three types of IP exist. The first places a flat surface display just behind the lens array [1,2]; this is referred to as the panel-type in this paper. The other two types install projectors behind the lens array. The second places a diffuser between the lens array and the projectors [3,4]; this is called the diffuser-type in this paper. The third projects directly onto the lens array [5]; this is called the direct-projection-type in this paper.

The number of rays depends on the resolution of the flat surface display in the panel-type. In contrast, the number of rays can be increased by adding projectors in the diffuser-type and the direct-projection-type. The 3D image resolution depends on the lens pitch of the lens array in the panel-type and the diffuser-type, because only one type of light can be emitted from a single lens in the same direction, as shown on the left of Fig. 2. By contrast, in the direct-projection-type, the 3D image resolution is independent of the lens pitch. As shown on the right of Fig. 2, rays enter a single lens from multiple directions, which makes it possible to emit different light in the same direction from a single lens. Here, different light means that each ray can be switched on/off and colored individually. Thus, the direct-projection-type has an advantage over the other methods in terms of the number of rays and the resolution; however, it has one problem.

The problem is the space that the system needs. In the panel-type, only a flat surface display exists behind the lens array; thus, the system saves space. By contrast, to achieve the same viewing angle as the panel-type, the direct-projection-type requires a large space. In the direct-projection-type, as shown in Fig. 3, the larger the incident angle to the lens array, the larger the exit angle. For this reason, the range of the incident angle to the lens array should be increased to widen the viewing angle. In conventional systems, multiple projectors need to be placed at a distance from each other and arranged at an angle to the lens array. The proposed method simultaneously achieves a large viewing angle and a space-saving system.


Fig. 2. Difference in the light source. (Left) Panel-type and diffuser-type methods. (Right) Direct-projection-type method.



Fig. 3. Relationship between the incident angle and the exit angle. The larger the incident angle, the larger the exit angle.


2.2 Wide viewing angle

Realizing wide viewing angles in conventional IP has been studied previously [7–14], and viewing angles of up to $\pm 60\, \mathrm {deg}$ have been reported [11–14]. We set this value as the target angle in the present study. Wide viewing angles have been realized using multiple lenses in a single lens unit [11,12], and we could, in principle, incorporate such a complex arrangement of lenses in our approach. However, reducing the lens pitch is significantly more difficult in such arrangements than with normal lens arrays. Therefore, we adopted a combination of normal lenses and curved mirrors in our system. A previously reported approach that uses a special light-source structure with time multiplexing [13] is difficult to implement in our methodology and is limited in resolution and refresh rate; we selected the direct-projection-type because it does not suffer from these limitations. Finally, a negative-index lens has been used [14]. That study theoretically examined IP with a negative refractive index; however, negative-index media will remain impractical for the foreseeable future because of frequency-band limitations, and verification experiments with an actual setup have not yet been conducted.

2.3 Using mirrors for various purposes

A few studies use multiple mirrors, as ours does, although their objectives differ. The first study measures surface reflection characteristics by using mirrors [15]. A triangular prism was formed by combining flat mirrors, and a light source and a camera were used to observe an object placed in the prism. Multiple mirror images of the light source are formed, so rays enter the target from various directions; likewise, the mirrors form multiple mirror images of the measurement target, so the camera can capture it from various directions. This enables the measurement of a target in a space-saving system. The second study uses flat mirrors to create a 3D image observable from a wide viewing angle [16]. In terms of 3D images and a wide viewing angle, this method has some similarities with our study. However, the formation of the 3D image is different: in their method, a real object or a light field must be placed as the light source. Moreover, curved mirrors are not used, and their method has no space-saving aspect. The third study realizes a 3D image with a single projector by projecting onto multiple mirrors [17]. In this approach, flat mirrors are tiled on an ellipsoidal surface, which has the same effect as placing multiple projectors in the area and thus increases the ray density of the 3D image. Here, the mirror reflections are used to improve the resolution; the purpose is not to improve the viewing angle or to save space.

3. Method

The proposed system is shown in Fig. 4. As shown in the figure, a projector is installed directly behind the lens array, and curved mirrors are installed to surround the rays from the projector. This system replaces the flat mirrors of our previous system [6] with curved mirrors.


Fig. 4. System overview.


3.1 Mirrors

The mirrors face each other, making it possible to save space. In addition, compared with the flat mirrors, the curved mirrors help increase the viewing angle because they can expand the range of the incident angle. The flat mirrors and curved mirrors are compared in Fig. 5. The reflections using the three types of mirrors are shown in Figs. 5(a)-(c), and a comparison of the mirror shapes is shown in Fig. 5(d). As shown in Fig. 5(a), when the flat mirrors are installed in parallel, the incident angle toward the lens array cannot be larger than the viewing angle of the projector. By contrast, as shown in Fig. 5(b), the incident angle can be increased when two flat mirrors are placed at an angle to each other. As shown in Fig. 5(c), the curved mirrors can also make the incident angle into the lens array larger than the viewing angle of the projector.

Along with the incident angle, the focal points are important. Ideally, the focal plane of the projector should coincide with that of the lens array; the luminous fluxes of the emitted light would then be parallel to each other. If the spread of the luminous flux of the emitted light is large, problems such as crosstalk may occur. As shown in Fig. 5(a), when the flat mirrors are installed in parallel, the focal points of the projector are aligned on a plane, so the focal planes of the projector and the lens array can coincide. By contrast, as shown in Fig. 5(b), when two flat mirrors are placed at an angle to each other, the focal points of the projector are not aligned on a plane and deviate significantly from the focal plane of the lens array. Therefore, the emitted light is not parallel and spreads out. When the curved mirrors are installed, the focal points of the projector are not aligned on a single plane either, but they are all close to an approximate plane, as shown in Fig. 5(c). This allows the focal points of the projector to be close to the focal plane of the lens array.

The left image in Fig. 6 depicts a ray diagram of reflection on a flat mirror. Ray_A, Ray_B, and Ray_C represent rays from the same pixel. If there is no mirror, the three rays converge at the focal point at $x = x_2$. If the angle of the flat mirror is $\theta$, the incident angle on the $x$-axis increases by $2\theta$, and the focal point is located $(x_2 - x_1)\sin 2\theta$ from the $x$-axis. Hence, the larger $\theta$ is, the larger the incident angle; and the larger the incident angle, the farther the focal point is from the $x$-axis for a flat mirror. The right image in Fig. 6 depicts a ray diagram of reflection on a curved mirror. As in the case of the flat mirror, the design is such that $2\theta$ is added to the incident angle of Ray_B. In this case, Ray_A and Ray_C behave as if reflected from flat mirrors with installation angles $\theta - \epsilon _2$ and $\theta + \epsilon _1$, respectively. As a result, the incident angle of Ray_A becomes $2\epsilon _2$ smaller, and that of Ray_C becomes $2\epsilon _1$ larger, than for reflection from the flat mirror. For the curved mirror, the distance between the focal point and the $x$-axis is difficult to obtain analytically. However, as the changes in the incident angles of Ray_A and Ray_C show, the focal point can be brought closer to the $x$-axis with a curved mirror than with a flat mirror. Thus, we used curved mirrors in the proposed system.
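The flat-mirror geometry above can be checked numerically. The sketch below is our own illustration (not code from the paper): it reflects a ray direction across a flat mirror tilted by $\theta$ from the optical axis and confirms that the angle to the $x$-axis grows by $2\theta$, with the focal point displaced by $(x_2 - x_1)\sin 2\theta$.

```python
import math

def reflect(direction, mirror_angle):
    """Reflect a 2D direction vector across a flat mirror whose surface is
    tilted by mirror_angle (radians) from the optical axis (the x-axis)."""
    # Unit normal of the tilted mirror surface.
    nx, ny = -math.sin(mirror_angle), math.cos(mirror_angle)
    dx, dy = direction
    dot = dx * nx + dy * ny
    # Standard mirror reflection: d' = d - 2 (d . n) n.
    return (dx - 2 * dot * nx, dy - 2 * dot * ny)

# A ray travelling along the optical axis hits a mirror tilted by theta.
theta = math.radians(10)
rx, ry = reflect((1.0, 0.0), theta)
new_angle = math.degrees(math.atan2(ry, rx))   # grows by 2*theta

# Focal-point displacement from the x-axis (mm), with hypothetical
# positions: mirror hit point x1 and unreflected focal point x2.
x1, x2 = 100.0, 200.0
offset = (x2 - x1) * math.sin(2 * theta)
```

Running this gives `new_angle` equal to $2\theta$, matching the derivation for Fig. 6 (left).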


Fig. 5. Comparison between the reflections using a flat mirror and a curved mirror. The dots represent the focal points of the projector. (a) Flat mirrors installed in parallel. (b) Flat mirrors installed at an angle. (c) Curved mirrors installed facing each other. (d) Overlay of the mirrors.



Fig. 6. Ray diagrams depicting reflection on a flat mirror and a curved mirror. (Left) Reflection on a flat mirror. (Right) Reflection on a curved mirror.


3.2 System design

The design parameters, determined by optimization, are as follows: the offset of the projector (I), the radius of the curved mirror (II), the shift amount (III), and the step height of the mirror (IV). These are shown in Fig. 7. The reference position of the projector is set at the focal length of the projector from the focal plane of the lens array, and the offset of the projector (I) is the distance from this reference position. We adopted a cylindrical surface for the curved mirror; the radius of the curved mirror (II) is that of the cylinder. As shown on the right of Fig. 7, the area cut out from this cylinder is determined by the shift amount (III); in other words, the shift amount (III) determines the installation angle of the curved mirror. If the step height of the mirror (IV) is large, the mirror is a single continuous curved mirror. If the step height of the mirror (IV) is small, the mirror is made by dividing the curved surface into sections of fixed thickness, as shown on the right of Fig. 7. This approach reduces the thickness of the mirrors, like a Fresnel lens. Note that the radius of the curved mirror (II) and the shift amount (III) each have two parameters: one for the left and right mirrors, and one for the upper and lower mirrors.
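To make this parameterization concrete, the sketch below generates a depth profile for the stepped mirror from the radius (II), shift amount (III), and step height (IV). The cylindrical-sag formula and the modulo folding are our assumptions about the geometry sketched in Fig. 7, not the authors' exact construction.

```python
import math

def mirror_profile(radius, shift, step_height, u_values):
    """Depth profile of a stepped cylindrical mirror (a sketch).

    radius      -- radius of the curved mirror (II), mm
    shift       -- shift amount (III) selecting the cut-out region, mm
    step_height -- step height (IV); small values fold the surface into
                   Fresnel-like sections of fixed thickness, mm
    u_values    -- lateral positions along the mirror, mm
                   (requires shift + u < radius)
    """
    profile = []
    for u in u_values:
        # Sag of the cylinder at lateral position shift + u.
        sag = radius - math.sqrt(radius ** 2 - (shift + u) ** 2)
        # Fold the depth back by whole steps, like a Fresnel lens.
        profile.append(sag % step_height)
    return profile

# Hypothetical values near the optimized region of Section 4.1.
depths = mirror_profile(radius=4800.0, shift=100.0, step_height=14.0,
                        u_values=[0.0, 50.0, 100.0])
```

When the step height exceeds the total sag, no fold occurs and the profile is the continuous curved mirror, which is the case the optimization in Section 4.1 ultimately selected.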


Fig. 7. Parameters of this system.


Trade-offs exist between the four design parameters. Suppose that the radius of the curved mirror (II) is extremely large and the shift amount (III) is extremely small. In this case, the mirror surface is parallel to the optical axis of the projector, and the incident angle cannot be increased. By contrast, if the radius of the curved mirror (II) is extremely small and the shift amount (III) is extremely large, the angle between the mirror surface and the optical axis of the projector becomes too large for the rays from the projector to enter the lens array. As mentioned earlier, the focal points of the projector cannot be placed perfectly on the focal plane of the lens array, as shown in Fig. 5(c); therefore, it may be useful to adjust the offset of the projector (I). In addition, the finer the step height of the mirror (IV), the smaller the system becomes. By contrast, rays striking the edge of a step do not form an image in the focal plane, so if the steps are extremely fine, the number of rays that form an image near the focal plane of the lens array is reduced. Based on these trade-offs, the optimal design parameters should be identified.

3.3 Parameter search

When searching for the optimal design parameters, we defined an evaluation value based on the viewing angle; it measures how many viewpoints the light reaches. The evaluation value for each set of design parameters was calculated via Algorithm 1, and we conducted an optimization that maximizes this value. First, thirty-five viewpoints were created, as shown on the left of Fig. 8. In Fig. 8, the lens array was set up parallel to the $X$ and $Y$ axes, and the viewpoints all lie in the $YZ$ plane. We set a viewpoint at $\pm 0\, \mathrm {deg}$ directly in front of the lens array and moved horizontally from there to set viewpoints at $\pm 5\, \mathrm {deg}$, $\pm 10\, \mathrm {deg}$, …, $\pm 85\, \mathrm {deg}$ from the normal of the lens array. All viewpoints are at the same distance from the center of the lens array. Second, we deemed a viewpoint valid if rays reach it from all the lenses in the lens array; otherwise, it is invalid. In Algorithm 1 and Fig. 8, the number of lenses and the distance between the lens array and the viewpoints (15 lenses and $350\, \mathrm {mm}$) match those in the next section. The evaluation value is determined by counting the number of consecutive valid viewpoints starting from the $\pm 0\, \mathrm {deg}$ viewpoint. In addition, fractional values for the first invalid viewpoint and the next two viewpoints are added to the evaluation value. This makes the evaluation value as close as possible to a continuous value, and continuous values are better suited than discrete values for the optimization search. Viewpoints were not set beyond $\pm 85\, \mathrm {deg}$, because we assumed that light could not reach them due to structural constraints: it is difficult to emit light in a direction parallel to the lens array in the proposed system.
Although our target viewing angle is $\pm 60\, \mathrm {deg}$, we set viewpoints up to $\pm 85\, \mathrm {deg}$ to allow for a wider viewing angle. On the other hand, setting viewpoints at wide angles may lead to unstable solutions, in which the number of rays reaching a viewpoint from a lens is extremely small. In this study, however, we prioritized obtaining a wider viewing angle.
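A minimal sketch of the evaluation value is given below. The fractional weighting of the first invalid viewpoint and the next two (here, the mean of their per-viewpoint lens fractions) is our assumption; Algorithm 1 in the paper defines the exact rule.

```python
def evaluation_value(fractions):
    """Sketch of the Section 3.3 evaluation value.

    fractions[i] is the fraction of lenses whose rays reach the i-th
    viewpoint, ordered outward from the +-0 deg viewpoint.  A viewpoint
    is valid when rays from every lens reach it (fraction == 1.0).
    """
    # Count consecutive valid viewpoints from the front.
    n_valid = 0
    for f in fractions:
        if f >= 1.0:
            n_valid += 1
        else:
            break
    # The first invalid viewpoint and the next two contribute fractionally,
    # keeping the objective close to continuous for the optimizer.
    tail = fractions[n_valid:n_valid + 3]
    return n_valid + sum(tail) / 3.0

# Two valid viewpoints, then partially reached ones.
score = evaluation_value([1.0, 1.0, 0.8, 0.6, 0.4, 0.0])
```

With this assumed weighting, `score` is $2 + (0.8 + 0.6 + 0.4)/3 = 2.6$; the paper's worked example on the right of Fig. 8 is analogous.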


Fig. 8. (Left) Thirty-five viewpoints for the evaluation. (Right) Example of calculating an evaluation value.


In the proposed system, spherical aberration occurs due to the use of curved mirrors. In addition, as described in Section 3.1, the focal points of the projector and the lens array are not perfectly aligned. For these reasons, the light flux of the rays that compose a single pixel spreads out slightly after being emitted from the lens array. If the spread is large, it may cause problems such as crosstalk. However, Algorithm 1 does not evaluate this spread. Therefore, we exclude rays with a large spread from the evaluation by switching off the corresponding pixels in the projector. Specifically, we excluded rays spreading by more than half of the mean interpupillary distance ($32\, \mathrm {mm}$) [18] at the viewpoints. In this manner, we avoided the problems caused by spherical aberration and the misaligned focal points.
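The exclusion step can be sketched as a simple per-pixel mask. The per-pixel spread values (in mm at the viewpoints) are hypothetical inputs that the ray-tracing simulation would supply; only the 32 mm threshold comes from the paper.

```python
# Half of the mean interpupillary distance, from [18].
SPREAD_LIMIT_MM = 32.0

def pixel_mask(spread_mm):
    """Return a per-pixel mask: True keeps the pixel on, False switches it
    off so its overly spread ray bundle cannot cause crosstalk.

    spread_mm -- hypothetical per-pixel spread of the ray bundle at the
                 viewpoints, computed by the ray-tracing simulation (mm).
    """
    return [s <= SPREAD_LIMIT_MM for s in spread_mm]

mask = pixel_mask([10.0, 31.9, 32.0, 45.0])  # last pixel is switched off
```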

An example of the calculation of the evaluation values is shown on the right of Fig. 8. In this example, the viewpoints of $\pm 0\, \mathrm {deg}$ and $+5\, \mathrm {deg}$ are valid. Subsequently, for the next three viewpoints, the fraction of lenses whose rays reach each viewpoint contributes to the evaluation value. In this example, the evaluation value is $2.700$.

In the optimization for this system, several local optima exist. In addition, many non-differentiable points exist because of the exclusion of rays with a large light flux spread. Furthermore, the search space is too vast for a grid search to be effective. Therefore, Bayesian optimization [19,20] was adopted for the search.
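The paper uses GPyOpt for this search; the stand-in below uses plain random search only to make the Section 4.1 search ranges and the shift-smaller-than-radius constraint concrete. The objective `evaluate` is a placeholder for the ray-tracing evaluation value, not the authors' simulator.

```python
import random

def evaluate(params):
    """Placeholder objective; the real one runs the ray-tracing simulation
    of Section 3.3 and returns the evaluation value to be maximized."""
    radius, shift, offset, step = params
    # A smooth dummy surface with a peak at hypothetical values.
    return -((radius - 4800) ** 2 * 1e-6 + (shift - 100) ** 2 * 1e-4
             + (offset - 6) ** 2 + (step - 14) ** 2)

def random_search(n_iter, seed=0):
    """Constrained search over the initial Section 4.1 ranges."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(n_iter):
        radius = rng.uniform(0, 5000)   # radius of the curved mirror (mm)
        shift = rng.uniform(0, 2500)    # shift amount (mm)
        if shift >= radius:             # infeasible: cut-out cannot exist
            continue
        offset = rng.uniform(-20, 20)   # offset of the projector (mm)
        step = rng.uniform(0, 20)       # step height of the mirror (mm)
        val = evaluate((radius, shift, offset, step))
        if val > best_val:
            best, best_val = (radius, shift, offset, step), val
    return best, best_val

best_params, best_score = random_search(2000)
```

In the paper's setup, the same box bounds and constraint are handed to GPyOpt, which proposes candidate parameter sets far more sample-efficiently than random search on this non-differentiable objective.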

4. Evaluation and experiments

First, the design parameters were optimized in simulation. Second, we used these design parameters to construct an actual system. Third, we estimated the assembly error. Fourth, the performance of the actual system was verified. Finally, the 3D image was displayed on the system.

4.1 Optimization

The goal of this optimization is to identify the set of design parameters that maximize the viewing angle by using the simulation.

When selecting the equipment used in the simulation, we first decided on the projector and the lens array. Their parameters could be added to the optimization search, but we excluded them because the enlarged search space would make the search more difficult. We looked for a compact, high-resolution projector on the market and adopted the Qumi Q38 (body size: $186*116*35\, \mathrm {mm}$, image size: $1920*1080\, \mathrm {pixel}$); its focal length was set to $200\, \mathrm {mm}$. The lens array should have as large a numerical aperture (NA) per lens as possible. The most common type of lens array is the fly-eye lens; these are often used for uniform illumination, and hence most have a small NA. Therefore, we built our own lens array. From the Edmund Optics products, we chose a lens with a large NA (#69-852, diameter: $10\, \mathrm {mm}$, focal length: $7.5\, \mathrm {mm}$, NA: $0.67$). In our lens array, these lenses were arranged at $1\, \mathrm {mm}$ intervals, for a total of 15 ($3*5$) lenses.

We used Python 3 to implement the ray-tracing simulation and GPyOpt [21] for the Bayesian optimization. The wavelength of light used in the simulation was $589.3\, \mathrm {nm}$ (D-line of sodium). The search ranges were as follows: the radius of the curved mirror, $0\, \mathrm {mm}$ - $5000\, \mathrm {mm}$; the shift amount, $0\, \mathrm {mm}$ - $2500\, \mathrm {mm}$; the offset of the projector, $-20\, \mathrm {mm}$ - $20\, \mathrm {mm}$; and the step height of the mirror, $0\, \mathrm {mm}$ - $20\, \mathrm {mm}$. If the shift amount is larger than the radius of the curved mirror, the system cannot be created, so the optimization was performed under this constraint. Five thousand iterations were scheduled under the above conditions; however, the search stopped at the 2406th iteration because the next set of parameters was too close to the previous one to continue searching. We therefore believed that the result was not yet optimal and conducted 2,500 more iterations of the same optimization around the best set of parameters found so far. The narrowed search ranges were: the radius of the curved mirror, $4500\, \mathrm {mm}$ - $5000\, \mathrm {mm}$; the shift amount, $0\, \mathrm {mm}$ - $250\, \mathrm {mm}$; the offset of the projector, $4\, \mathrm {mm}$ - $8\, \mathrm {mm}$; and the step height of the mirror, $11.5\, \mathrm {mm}$ - $16.6\, \mathrm {mm}$.

The results of the parameter search are summarized in Table 1. With these parameters, we obtained an evaluation value of $13.467$ and achieved a viewing angle of $\pm 60\, \mathrm {deg}$ in the simulation. By contrast, the evaluation value of the system with flat mirrors was $10.988$, with a viewing angle of $\pm 45\, \mathrm {deg}$; using curved mirrors thus expands the viewing angle by $15\, \mathrm {deg}$. The mirrors do not have steps like a Fresnel lens, because the optimized radius of the curved mirror and step height of the mirror are large. We had expected a Fresnel-lens-like stepped mirror to be optimal, but this was not the case.


Table 1. Results of the parameter search.

4.2 Actual system

Based on the simulation results, the curved mirrors were fabricated, and an actual system was assembled with the optimal design parameters obtained from the simulation. The system used in the experiments is shown in Fig. 9. The guide rail is fixed to the outer frame, and the base stand can move along the guide rail. The base stand has rotational and translational degrees of freedom, so the position and orientation of the lens array, curved mirrors, and projector can be adjusted. A mirror-coated acrylic plate was used for the curved mirror, with aluminum vapor deposition for the mirror coating. The curved mirror was attached to a base material whose curved profile was cut using a laser cutter.


Fig. 9. Experimental system. (a) Front view, (b) top view, (c) side view, (d) the lens array and curved mirrors observed from the projector side, and (e) the curved mirrors and base material.


4.3 Calibration

We performed a calibration to estimate the assembly error, which is expressed as the installation parameters. A XIMEA xiQ MQ003MG-CM camera was used for the calibration. First, the camera was placed at an arbitrary position, and its position and orientation were measured: by capturing the lens array with the camera, we estimated the extrinsic parameters by solving the perspective-n-point (PnP) problem. Second, we turned on each pixel on the projector image plane sequentially; the camera captured the images, and we checked whether a light spot appeared in the camera image. Finally, we obtained a set of light point positions $L^c$ on the camera image and the corresponding pixel positions $L^p$ on the projector image plane.

The next step is to estimate the installation parameters; $30$ degrees of freedom were employed in the estimation, since each of the four curved mirrors and the projector has $6$ degrees of freedom ($3$ rotational and $3$ translational). The sum of the re-projection errors was used as the error for this estimation, and the installation parameters were estimated by an optimization that minimizes this error. The error of a set of installation parameters is calculated via Eq. (1).

$$\begin{aligned} error &= \sum_{i} \omega_i |L_i^{c\prime} - L_i^c|^2 \alpha_i + (1 - \omega_i)\beta,\\ \omega_i &= \begin{cases} 1 & \textrm{observed by the camera,} \\ 0 & \textrm{otherwise,}\\ \end{cases}\\ \alpha_i &= \begin{cases} 1 & SpreadTmp < SpreadTH, \\ SpreadTmp/SpreadTH & SpreadTmp \geq SpreadTH,\\ \end{cases}\\ \beta &= 10^{10}. \end{aligned}$$

The squared error between $L_i^c$ and $L_i^{c\prime }$ was used to calculate the re-projection error. $L_{i}^{c}$ is the light point position observed in the camera image plane, and $L_{i}^{c\prime }$ is the position obtained in the simulation, using the candidate installation parameters, when the pixel position $L_{i}^{p}$ on the projector image plane is turned on. If the spread angle $SpreadTmp$ of the $L^p$-derived rays is larger than the threshold angle $SpreadTH$ used in Section 3.3, the re-projection error is multiplied by the weight $\alpha _i = SpreadTmp/SpreadTH$. When no $L^p$-derived rays entered the camera, we added $\beta = 10^{10}$ to the error.
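Eq. (1) can be sketched directly. The per-point record structure (`observed`, `lc`, `lc_sim`, `spread`) is our assumption about how the calibration data would be organized; the weighting rule itself follows the equation.

```python
def calibration_error(points, spread_th, beta=1e10):
    """Error of Eq. (1) for one candidate set of installation parameters.

    Each item in `points` is a dict with (our assumed layout):
      observed -- True if the camera saw the light point (omega_i = 1)
      lc       -- measured light point position on the camera image (x, y)
      lc_sim   -- simulated position under the candidate parameters (x, y)
      spread   -- spread angle SpreadTmp of the rays from this pixel
    """
    error = 0.0
    for p in points:
        if not p["observed"]:
            error += beta          # (1 - omega_i) * beta term
            continue
        dx = p["lc_sim"][0] - p["lc"][0]
        dy = p["lc_sim"][1] - p["lc"][1]
        sq = dx * dx + dy * dy     # |L'_i - L_i|^2
        # alpha_i penalizes pixels whose ray bundle spreads too much.
        alpha = 1.0 if p["spread"] < spread_th else p["spread"] / spread_th
        error += sq * alpha
    return error

err = calibration_error(
    [{"observed": True, "lc": (10.0, 10.0),
      "lc_sim": (11.0, 10.0), "spread": 0.5},
     {"observed": False, "lc": None, "lc_sim": None, "spread": None}],
    spread_th=1.0)
```

The huge $\beta$ makes any parameter set under which an observed light point produces no ray into the camera effectively infeasible for the minimizer.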

In this case, thirty parameters are too many to optimize at once. Therefore, we divided the parameters into five groups. The rays from the pixels in region I, shown in Figs. 10(a)-(c), do not strike any mirror but enter the lens array directly, so they are unaffected by the installation parameters of the mirrors; only the six projector parameters contribute to an optimization that uses these rays. This reduces the number of parameters from thirty to six, so the optimization can be performed in a realistic computation time. Similarly, only the six parameters each of the projector and the left mirror contribute when using the rays from region II (Figs. 10(a) and (b)); the projector and the right mirror for region III (Figs. 10(a) and (b)); the projector and the upper mirror for region IV (Figs. 10(a) and (c)); and the six projector parameters plus the twelve parameters of the upper and lower mirrors for region V (Figs. 10(a) and (c)). Using these constraints, we used the rays from regions I, II, III, IV, and V to estimate the parameters of the projector, left mirror, right mirror, upper mirror, and lower mirror, in that order. The estimated installation parameters are listed in Table 2.


Fig. 10. Classification of the pixel positions in the projection image. Rays from the pixels in regions other than I, II, III, IV, and V are not used in the calibration. The rays from the pixels in regions II, III, and IV are reflected once. The rays from the pixels in region V are reflected once by each of the upper and lower mirrors. (a) Distribution of pixels. (b) Top view. (c) Side view.



Table 2. Results of the calibration.

4.4 Experiment on the real system

We ran the simulation with the estimated installation parameters and evaluated the viewing angle using Algorithm 1. As a result, the viewing angle was $\pm 45\, \mathrm {deg}$, and the evaluation value was $10.999$. This is lower than the $\pm 60\, \mathrm {deg}$ obtained in Section 4.1, because when the actual installation parameters deviate slightly from the ideal ones, some rays no longer reach certain viewpoints. In detail, rays from $14/15$ lenses reach the viewpoint of $\pm 50\, \mathrm {deg}$, rays from $14/15$ lenses reach $\pm 55\, \mathrm {deg}$, and rays from $10/15$ lenses reach $\pm 60\, \mathrm {deg}$. In the simulation, only one wavelength ($589.3\, \mathrm {nm}$) was used; however, rays of multiple wavelengths are present simultaneously in the actual display. In this case, rays of shorter wavelengths are refracted at a smaller angle in a lens and may therefore reach the viewpoints of $\pm 50\, \mathrm {deg}$ - $\pm 60\, \mathrm {deg}$. In addition, pixels that were excluded at $589.3\, \mathrm {nm}$ may not be excluded at longer wavelengths, because the focal length of a lens is longer at longer wavelengths; such rays may also reach the viewpoints of $\pm 50\, \mathrm {deg}$ - $\pm 60\, \mathrm {deg}$. Therefore, rays from all the lenses could potentially reach even these viewpoints, and the performance is sufficiently close to a viewing angle of $\pm 60\, \mathrm {deg}$.

Next, we evaluated the observations from each viewpoint by using the actual system. We created a projector image in which rays of different colors reach each viewpoint; at each viewpoint, red, green, and blue rays arrive in sequence. For example, only red rays arrive at the viewpoint of $\pm 0\, \mathrm {deg}$, only green rays at $\pm 5\, \mathrm {deg}$, and only blue rays at $\pm 10\, \mathrm {deg}$. To create the projector image, we used a simulation that incorporated the installation parameters, with wavelengths of $706.5\, \mathrm {nm}$ for red, $546.1\, \mathrm {nm}$ for green, and $435.8\, \mathrm {nm}$ for blue. The actual and simulated observations from each viewpoint are shown in Fig. 11. Some rays did not have the expected color, and some did not reach the viewpoints; this difference between the simulation and the actual system is caused by the difference between the estimated installation parameters and the real setup. Nevertheless, the experimental results show that the rays reach the viewpoint of $\pm 60\, \mathrm {deg}$. By and large, a viewing angle of $\pm 60\, \mathrm {deg}$ was achieved on the actual system, although the estimation of the installation parameters was not sufficiently accurate.

 figure: Fig. 11.

Fig. 11. Experimental result. We prepared a projector image so that red, green, and blue can be observed at the viewpoints. The left side shows the actual view and the right side shows the simulation view.


Finally, we created a 3D image using the actual system. As shown on the left of Fig. 12, an image of "O" is created in red at $1\, \mathrm {mm}$ in front of the lens array, and an image of "E" is created in green at $50\, \mathrm {mm}$ in front of the lens array. We observed this 3D image from positions $800\, \mathrm {mm}$ in front of the lens array. The results are shown on the right of Fig. 12. We moved the observation position up, down, right, and left while maintaining the distance from the lens array. The vertical and horizontal shifts were approximately $200\, \mathrm {mm}$ from the frontal position, so the covered range of the viewing angle was approximately $\pm 14\, \mathrm {deg}$. As the results show, a 3D image was created using the proposed method.
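The reported $\pm 14\, \mathrm {deg}$ range follows directly from the observation geometry: a lateral shift of about $200\, \mathrm {mm}$ at a viewing distance of $800\, \mathrm {mm}$ subtends $\arctan(200/800) \approx 14\, \mathrm {deg}$. As a quick check:

```python
import math

# Half viewing angle covered by the observation sweep in the experiment:
# lateral shift of ~200 mm at a viewing distance of ~800 mm.
shift_mm = 200.0
distance_mm = 800.0
half_angle_deg = math.degrees(math.atan2(shift_mm, distance_mm))
print(round(half_angle_deg, 1))  # 14.0
```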

 figure: Fig. 12.

Fig. 12. 3D images in the proposed system. (Left) Positional relationship between the 3D images and the lens array. (Right) 3D images from five viewpoints.


5. Discussion

The resolution of the 3D images was low in the experimental results. However, this problem can be addressed. Reducing the lens pitch of the lens array can improve the resolution: as discussed in Section 2.1, the resolution in the direct-projection-type is not limited by the lens pitch, provided the lens pitch is made sufficiently small. Since the distance between adjacent lenses in our experimental system is large ($11\, \mathrm {mm}$), the achieved resolution is limited; technologically, this distance can be made much smaller than $1\, \mathrm {mm}$. In addition, for the direct-projection-type, the number of rays can be increased by adding projectors, which improves the image resolution. In the proposed method, increasing the number of projectors does not enlarge the system. In conventional methods, multiple projectors must be placed at a distance from each other to increase the resolution while maintaining a wide viewing angle, so they require larger spaces. By contrast, in the proposed method the projectors can be arranged densely. Therefore, the space requirement of the system can be reduced while the number of rays is increased, thus improving the resolution.

In our experiments, some rays arrived with the wrong color and some rays did not reach their viewpoints, owing to estimation errors in the installation parameters. Three reasons can potentially explain this. The first is estimation error in the camera parameters. To estimate the extrinsic parameters, we solved the PnP problem by capturing the lens array. Because the lens array was small relative to the camera's angle of view, errors may occur. Placing markers around the lens array is expected to improve this: distributing the markers over a wider area of the camera image than the lens array occupies can improve the accuracy of the extrinsic parameter estimation. The second is the choice of initial values for the installation-parameter search. In this study, we used the results of a grid search as the initial values for the Levenberg–Marquardt method. The grid may have been too coarse to approach the global optimum; more computational resources would be needed to get closer to it. The third is the assembly error itself. Figure 13 shows the relationship between the assembly error and the evaluation value. Each design parameter was set as shown in Table 1, and we calculated the evaluation value when the assembly error was $Error$ $Range$ mm or $Error$ $Range$ deg. The target evaluation value is $13.0$ or higher, which corresponds to a viewing angle of $\pm 60\, \mathrm {deg}$. Even small errors, such as $\pm 10^{-4}\, \mathrm {mm}$ and $\pm 10^{-4}\, \mathrm {deg}$, decrease the evaluation value below $13.0$ and the viewing angle below $\pm 60\, \mathrm {deg}$. This is because only a small number of rays reach the viewpoint at $\pm 60\, \mathrm {deg}$, whereas the number of rays reaching the viewpoints near $0\, \mathrm {deg}$ is sufficient. A small assembly error thus causes rays from some lenses to miss the viewpoint at $\pm 60\, \mathrm {deg}$. Although manual adjustment to this precision is difficult, adjustments on the order of $\pm 10^{-5}\, \mathrm {mm}$ and $\pm 10^{-5}\, \mathrm {deg}$ are possible with the resolution of an optical precision actuator. Using a feedback loop of projection, observation, and adjustment, the assembly error can be made sufficiently small.
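The robustness study of Fig. 13 amounts to a small Monte Carlo loop: for each error range, every installation parameter is perturbed by $Error$ $Range$ $\times$ $randselect(\{-1, 0, +1\})$ and the evaluation value is recomputed, over 100 trials. The loop can be sketched as follows; the `evaluate()` function here is a placeholder standing in for the ray-tracing evaluation of Algorithm 1, which is not reproduced.

```python
import random

def evaluate(params):
    # Placeholder for the ray-tracing evaluation (Algorithm 1);
    # this toy version just penalizes the total perturbation magnitude.
    return 13.0 - sum(abs(p) for p in params) * 1e3

def perturbed_scores(base_params, error_range, trials=100, seed=0):
    """Evaluate `trials` random perturbations of the installation
    parameters, each component offset by error_range * {-1, 0, +1}."""
    rng = random.Random(seed)
    scores = []
    for _ in range(trials):
        params = [p + error_range * rng.choice((-1, 0, +1))
                  for p in base_params]
        scores.append(evaluate(params))
    return scores

scores = perturbed_scores(base_params=[0.0] * 6, error_range=1e-4)
print(len(scores), max(scores) <= 13.0)
```

A boxplot of such score lists, one per error range, yields a figure of the same shape as Fig. 13.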

 figure: Fig. 13.

Fig. 13. Boxplot of the assembly error and evaluation value. The horizontal axis is the assembly error range, and the vertical axis is the evaluation value. The diamond-shaped plots represent the outliers. Each design parameter was set to the values shown in Table 1. For each assembly error range, the evaluation value was calculated with $Error$ $Range$ * $randselect(\{-1, 0, +1\})$ mm or deg as the assembly error. The $randselect(\{-1, 0, +1\})$ means that one of $-1$, $0$, or $+1$ is randomly selected. For each assembly error range, 100 trials were conducted.


6. Conclusion

This paper proposed a space-saving system with a large viewing angle for projection-type IP, in which curved mirrors are placed facing each other. We developed a system based on the proposed method and evaluated its performance via simulations and experiments. In the simulation, rays from all the lenses reached the viewpoint at $\pm 60\, \mathrm {deg}$. In the experiments with the actual system, rays reached the viewpoint at $\pm 60\, \mathrm {deg}$, although not from all the lenses. These results demonstrate that a viewing angle of $\pm 60\, \mathrm {deg}$ can be achieved with the proposed method.

Disclosures

The authors declare no conflicts of interest.

References

1. H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Miura, M. Kawakita, and T. Mishina, “Color moiré reduction and resolution enhancement of flat-panel integral three-dimensional display,” Opt. Express 27(6), 8488–8503 (2019). [CrossRef]  

2. Y. Chen, X. Wang, J. Zhang, S. Yu, Q. Zhang, and B. Guo, “Resolution improvement of integral imaging based on time multiplexing sub-pixel coding method on common display panel,” Opt. Express 22(15), 17897–17907 (2014). [CrossRef]  

3. P. Wang, S. Xie, X. Sang, D. Chen, C. Li, X. Gao, X. Yu, C. Yu, B. Yan, W. Dou, and L. Xiao, “A large depth of field frontal multi-projection three-dimensional display with uniform light field distribution,” Opt. Commun. 354, 321–329 (2015). [CrossRef]  

4. W. Matusik and H. Pfister, “3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes,” ACM Trans. Graph. 23(3), 814–824 (2004). [CrossRef]  

5. H. Sakai, M. Yamasaki, T. Koike, M. Oikawa, and M. Kobayashi, “41.2: Autostereoscopic display based on enhanced integral photography using overlaid multiple projectors,” in SID symposium digest of technical papers, vol. 40, No. 1 (Wiley Online Library, 2009), pp. 611–614.

6. M. Yasui, Y. Watanabe, and M. Ishikawa, “Projection-type integral 3D display using mirrors facing each other for a wide viewing angle with a downsized system,” in Advances in Display Technologies X, vol. 11304 (International Society for Optics and Photonics, 2020), p. 1130406.

7. S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Study for wide-viewing integral photography using an aspheric Fresnel lens array,” Opt. Eng. 41(10), 2572–2577 (2002). [CrossRef]  

8. Y. Kim, J.-H. Park, H. Choi, S. Jung, S.-W. Min, and B. Lee, “Viewing-angle-enhanced integral imaging system using a curved lens array,” Opt. Express 12(3), 421–429 (2004). [CrossRef]  

9. S. Jung, J.-H. Park, H. Choi, and B. Lee, “Viewing-angle-enhanced integral three-dimensional imaging along all directions without mechanical movement,” Opt. Express 11(12), 1346–1356 (2003). [CrossRef]  

10. H. Takahashi, H. Fujinami, and K. Yamada, “Wide-viewing-angle three-dimensional display system using HOE lens array,” in Stereoscopic Displays and Virtual Reality Systems XIII, vol. 6055 (International Society for Optics and Photonics, 2006), p. 60551C.

11. R. Lopez-Gulliver, S. Yoshida, S. Yano, and N. Inoue, “Poster: Toward an interactive box-shaped 3D display: Study of the requirements for wide field of view,” in 2008 IEEE Symposium on 3D User Interfaces, (IEEE, 2008), pp. 157–158.

12. H. Kim, J. Hahn, and H.-J. Choi, “Numerical investigation on the viewing angle of a lenticular three-dimensional display with a triplet lens array,” Appl. Opt. 50(11), 1534–1540 (2011). [CrossRef]  

13. B. Liu, X. Sang, X. Yu, X. Gao, L. Liu, C. Gao, P. Wang, Y. Le, and J. Du, “Time-multiplexed light field display with 120-degree wide viewing angle,” Opt. Express 27(24), 35728–35739 (2019). [CrossRef]  

14. H. Kim, J. Hahn, and B. Lee, “The use of a negative index planoconcave lens array for wide-viewing angle integral imaging,” Opt. Express 16(26), 21865–21880 (2008). [CrossRef]  

15. J. Y. Han and K. Perlin, “Measuring bidirectional texture reflectance with a kaleidoscope,” in ACM SIGGRAPH 2003 Papers, (2003), pp. 741–748.

16. N. Hashiomoto and K. Hamamoto, “Aerial 3D display using a symmetrical mirror structure,” in ACM SIGGRAPH 2018 Posters, (2018), pp. 1–2.

17. B. Chen, L. Ruan, and M.-L. Lam, “Light field display with ellipsoidal mirror array and single projector,” Opt. Express 27(15), 21999–22016 (2019). [CrossRef]  

18. N. A. Dodgson, “Variation and extrema of human interpupillary distance,” in Stereoscopic Displays and Virtual Reality Systems XI, vol. 5291 (International Society for Optics and Photonics, 2004), pp. 36–46.

19. B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. De Freitas, “Taking the human out of the loop: A review of Bayesian optimization,” Proc. IEEE 104(1), 148–175 (2016). [CrossRef]  

20. Y. Chen, A. Huang, Z. Wang, I. Antonoglou, J. Schrittwieser, D. Silver, and N. de Freitas, “Bayesian optimization in AlphaGo,” arXiv preprint arXiv:1812.06855 (2018).

21. The GPyOpt authors, “GPyOpt: A Bayesian optimization framework in Python,” http://github.com/SheffieldML/GPyOpt (2016).

Figures (13)

Fig. 1. Comparison of the conventional and proposed systems.

Fig. 2. Difference in the light source. (Left) Panel-type and diffuser-type methods. (Right) Direct-projection-type method.

Fig. 3. Relationship between the incident angle and the exit angle. The larger the incident angle, the larger the exit angle.

Fig. 4. System overview.

Fig. 5. Comparison between the reflections using a flat mirror and a curved mirror. The dots represent the focal points of the projector. (a) Flat mirrors installed in parallel. (b) Flat mirrors installed at an angle. (c) Curved mirrors installed facing each other. (d) Overlay of the mirrors.

Fig. 6. Ray diagrams depicting reflection on a flat mirror and a curved mirror. (Left) Reflection on a flat mirror. (Right) Reflection on a curved mirror.

Fig. 7. Parameters of this system.

Fig. 8. (Left) Thirty-five viewpoints for the evaluation. (Right) Example of calculating an evaluation value.

Fig. 9. Experimental system. (a) Front view, (b) top view, (c) side view, (d) the lens array and curved mirrors observed from the projector side, and (e) the curved mirrors and base material.

Fig. 10. Classification of the pixel positions in the projection image. Rays from pixels outside regions I, II, III, IV, and V are not used in the calibration. The rays from the pixels in regions II, III, and IV are reflected once. The rays from the pixels in region V are reflected once by each of the upper and lower mirrors. (a) Distribution of pixels. (b) Top view. (c) Side view.

Fig. 11. Experimental result. We prepared a projector image so that red, green, and blue can be observed at the respective viewpoints. The left side shows the actual view and the right side shows the simulation view.

Fig. 12. 3D images in the proposed system. (Left) Positional relationship between the 3D images and the lens array. (Right) 3D images from five viewpoints.

Fig. 13. Boxplot of the assembly error and evaluation value. The horizontal axis is the assembly error range, and the vertical axis is the evaluation value. The diamond-shaped plots represent the outliers. Each design parameter was set to the values shown in Table 1. For each assembly error range, the evaluation value was calculated with $Error$ $Range$ $\times$ $randselect(\{-1, 0, +1\})$ mm or deg added as the assembly error, where $randselect(\{-1, 0, +1\})$ randomly selects one of $-1$, $0$, or $+1$. For each assembly error range, 100 trials were conducted.
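The assembly-error sampling described in the caption of Fig. 13 can be sketched as follows. This is a minimal illustration, assuming each design parameter is perturbed independently; the function and parameter names are illustrative, not from the authors' code.

```python
import random

def randselect(choices=(-1, 0, 1)):
    """Randomly pick one of -1, 0, +1, as in the caption's randselect({-1, 0, +1})."""
    return random.choice(choices)

def perturb_parameters(params, error_range):
    """Add ErrorRange * randselect({-1, 0, +1}) (mm or deg) to each design parameter."""
    return {name: value + error_range * randselect() for name, value in params.items()}

# One trial for a given error range; the paper runs 100 such trials per range
# and evaluates each perturbed parameter set with its viewing-angle metric.
perturbed = perturb_parameters({"mirror_tilt_deg": 10.0, "mirror_gap_mm": 30.0}, 0.5)
```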

Tables (2)

Table 1. Results of the parameter search.

Table 2. Results of the calibration.

Equations (1)

$$error = \sum_{i} \left\{ \omega_i \left| L_i^{c} - L_i^{c\prime} \right|^2 \alpha_i + (1 - \omega_i)\,\beta \right\},$$
$$\omega_i = \begin{cases} 1 & \text{observed by the camera,} \\ 0 & \text{not,} \end{cases} \qquad \alpha_i = \begin{cases} 1 & Spread_{Tmp} < Spread_{TH}, \\ Spread_{Tmp}/Spread_{TH} & Spread_{Tmp} \ge Spread_{TH}, \end{cases} \qquad \beta = 10^{10}.$$
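A minimal numerical sketch of this calibration error, assuming $L_i^{c}$ and $L_i^{c\prime}$ are the observed and target ray positions per ray $i$: the function and variable names are illustrative, and whether $\alpha_i$ multiplies the squared distance follows the reconstruction above.

```python
import numpy as np

BETA = 1e10  # beta: penalty for rays not observed by the camera

def calibration_error(L_obs, L_tgt, observed, spread_tmp, spread_th):
    L_obs = np.asarray(L_obs, dtype=float)      # observed ray positions L_i^c
    L_tgt = np.asarray(L_tgt, dtype=float)      # target ray positions L_i^{c'}
    omega = np.asarray(observed, dtype=float)   # omega_i: 1 if seen by the camera, else 0
    spread_tmp = np.asarray(spread_tmp, dtype=float)
    # alpha_i: 1 below the spread threshold, otherwise Spread_Tmp / Spread_TH
    alpha = np.where(spread_tmp < spread_th, 1.0, spread_tmp / spread_th)
    sq_dist = np.sum((L_obs - L_tgt) ** 2, axis=-1)  # |L_i^c - L_i^{c'}|^2
    return float(np.sum(omega * sq_dist * alpha + (1.0 - omega) * BETA))
```

The large constant $\beta$ keeps unobserved rays from being favored: any candidate calibration that hides a ray from the camera incurs a cost far larger than any achievable geometric error.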
