## Abstract

A computational multi-projection display is proposed by combining a multi-projection system with compressive light field displays. By modulating the intensity of light rays with a spatial light modulator inside a single projector, the proposed system can offer several compact views to the observer. Since the light rays are spread in all directions, the system can provide flexible positioning of viewpoints without stacking projectors in the vertical direction. Also, if the system is constructed properly, it is possible to generate view images separated by the inter-pupillary gap and to satisfy the super multi-view condition. We explain the principle of the proposed system and verify its feasibility with simulations and experimental results.

© 2016 Optical Society of America

## 1. Introduction

Projection-type three-dimensional (3D) displays are one of the most representative ways to generate 3D images, and they are known to provide large-scale realistic images [1]. These displays are categorized according to the system format, such as stereoscopic, volumetric, and multi-view methods [2]. Separating the left and right images using a special viewing aid such as polarization glasses is the generalized method currently used in most 3D theaters. This method generates stereopsis by relying on binocular disparity and the convergence of the eyes. However, using an additional viewing aid makes the observer uncomfortable and consequently motivated the development of autostereoscopic displays [3]. A projection-type volumetric 3D display provides 3D images composed of volumetric pixels (voxels). A mechanically movable screen synchronized with the projected images sweeps the space to generate volumetric 3D images in a desired region [4]. Since sweeping the 3D space is essential in some volumetric 3D display systems, mechanical movement of the projection device or screen is required, but the movement and synchronization increase system complexity [5, 6]. Another method uses a multi-view approach, and there are two typical types of systems: the tiled-projection system and the multi-projection system. In a tiled-projection system, two-dimensional (2D) elemental images are projected as tiles, and the 3D image is presented to the user through relay optics such as a parallax barrier or a lens array. The optical element converts the tiled 2D elemental images into a large 3D image [7]. However, employing such optical elements involves inevitable problems such as brightness loss, aberrations, and decreased spatial resolution. A multi-projection system, which consists of a projector array and a screen, mitigates these issues of the tiled-projection system [8].
The configurations of the tiled-projection system and the multi-projection system are similar in that both employ multiple projectors. However, the projectors in the multi-projection system take over the role of the parallax barrier or lens array in the tiled-projection system. The system can retain the spatial resolution of the images since each projector generates independent views, and by increasing the number of projectors, the system can secure better viewing parameters.

By virtue of their high 3D image quality, many research groups have reported multi-projection systems. The National Institute of Information and Communications Technology (NICT, Japan) demonstrated a 200-inch multi-projection system with an optical screen consisting of a special diffuser film and a large condenser lens [9, 10]. Takaki et al. introduced a super multi-view display using an array of projectors to address the vergence-accommodation mismatch problem, which creates unnatural stereoscopic conditions and causes visual fatigue for observers [11, 12]. Although it is an innovative concept, it still has the limitation that each projector can generate only a single view in order to retain the spatial image resolution. In other words, to generate a sufficient number of views, many calibrated projectors are necessary. There has been an effort to provide multiple images from a single projector using a uniaxial crystal. According to the polarization state of a single projection unit, double refraction occurs in the uniaxial crystal and separates the optical path into two different directions [13]. However, each projector cannot generate more than two views, and the system is complicated and expensive.

To provide a large number of views with a few display devices, systems called compressive light field displays have been proposed [14–18]. They offer 3D images with significantly increased spatial resolution using several stacked liquid crystal display (LCD) panels. Light fields are collections of rays representing a 3D scene from different perspectives. To provide high-resolution light fields, a display usually requires an excessively high resolution. Compressive light field displays, on the other hand, are able to offer high-resolution scenes with computational algorithms such as nonnegative matrix factorization (NMF). In particular, with additional optical components, it is possible to build a system which generates compact views and gives two or more views to each of the observer’s eyes. This induces a monocular accommodation response by satisfying the super multi-view condition [19]. However, there is a limitation in the above systems: adding more layers generally enhances the overall image quality, but it also reduces the brightness of the reconstructed images.

In this paper, a computational multi-projection display is presented by combining a multi-projection system with a compressive light field display. The system consists of several projectors, a light shaping diffuser, a Fresnel lens, and an LCD panel. The computational multi-projection display presents 3D images without considerable resolution loss and preserves the brightness of the system. Also, it can express both horizontal and vertical parallax, since it is capable of providing flexibly positioned viewpoints, which is difficult to realize in previous multi-projection systems. Light rays from the projectors are modulated as they penetrate the rear LCD panel and are focused into the observer’s eyes, satisfying the super multi-view condition. The light fields from the projection system are optimized toward target light fields using nonnegative matrix factorization [14]. Two types of virtual 3D objects are used as target images in the experiment. For both targets, the simulation and experiment show the proper parallax. We also present additional analyses of the system, such as vertical parallax, accommodation response, and peak signal-to-noise ratio (PSNR) under various conditions.

## 2. Computational multi-projection display

The proposed system is composed of two parts, as presented in Fig. 1. One is the conventional multi-projection system, which consists of a projector array, a diffuser, and a Fresnel lens. The other is the rear LCD panel. The conventional multi-projection system provides a number of views to the observer while the LCD panel modulates them to have appropriate directional components. This makes it possible to fill the empty space between neighboring views and gives smoother and more natural motion parallax to the observer. For instance, the emitted light ray *l _{n}* in Fig. 1 is given by the following expression:

$$ {l}_{n}={\eta }_{0}\,{\eta }_{n} \tag{1} $$

where *η _{0}* is the transmittance at the given pixel of the spatial light modulator (SLM) inside the projector and *η _{n}* is the transmittance at the specific pixel of the LCD panel.

#### 2.1 Computational multi-projection system

The computational multi-projection system is designed to overcome the limitations of conventional multi-projection systems. Each light field spreading from the diffuser is altered as it transmits through the LCD panel in front of the Fresnel lens. If the distance between the projectors and the diffuser is long enough, it can be assumed that the SLM inside the projector is floated onto the diffuser plane. Figure 2 shows the 2D equivalent model of the computational multi-projection system for a single projector. To express an arbitrary light field, two parameters *x* and *v* are adopted: *x* is the spatial coordinate on the observing plane and *v* = tan(*θ*) is the point of intersection with a relative plane at unit distance.

The transmittance of the pixel in the floated SLM is expressed as ${p}_{s}({f}_{s}(x,v))$, where ${f}_{s}:\mathbb{R}\times \mathbb{R}\to \mathbb{R}$ is a function which maps each light ray $(x,v)$ on the observer side of the screen to the SLM. Similarly, ${f}_{l}:\mathbb{R}\times \mathbb{R}\to \mathbb{R}$ maps the ray $(x,v)$ to the LCD panel, whose transmittance is ${p}_{l}$. *Δz* is the distance between the floated SLM and the Fresnel lens, *g* is the gap between the Fresnel lens and the rear LCD panel, and *d _{ob}* is the observation distance.

When illuminated by a backlight unit, the emitted light field $\widehat{l}(x,v)$ is expressed as follows:

$$ \widehat{l}(x,v)={p}_{s}({f}_{s}(x,v))\;{p}_{l}({f}_{l}(x,v)) \tag{2} $$

The mapping functions ${f}_{s}$ and ${f}_{l}$ can be derived using ray transfer matrices as below [20]:

$$ {f}_{l}(x,v)=x-{d}_{ob}\,v \tag{3} $$

$$ {f}_{s}(x,v)=\left(1-\frac{\Delta z}{f}\right)\left(x-({d}_{ob}+g)\,v\right)-\Delta z\,v \tag{4} $$

Here, *f* is the focal length of the Fresnel lens, and the incident ray angles *α* and *β* are eliminated in the position-only mappings, so they need little further consideration. To expand the above equations into 3D space, two additional parameters *y* and *u* should be adopted. For the *y-z* plane, the whole calculation process is the same as for the *x-z* plane, with the parameters (*y*, *u*) replacing (*x*, *v*).

#### 2.2 Light field reconstruction

Equation (2), expressing the emitted light field, is discretized as follows:

$$ \widehat{\mathbf{L}}=({\mathbf{F}}_{s}{\mathbf{p}}_{s})\circ ({\mathbf{F}}_{l}{\mathbf{p}}_{l}) \tag{5} $$

${\mathbf{F}}_{s}$ and ${\mathbf{F}}_{l}$ are projection matrices that permute the rows of the discrete SLM and LCD patterns ${\mathbf{p}}_{s}$ and ${\mathbf{p}}_{l}$ according to the mapping functions ${f}_{s}$ and ${f}_{l}$, respectively. The symbol $\circ$ denotes element-wise multiplication of matrices (Hadamard product), and the projection matrices ${\mathbf{F}}_{s}$ and ${\mathbf{F}}_{l}$ are created through the ray tracing equations [Eqs. (3) and (4)]. If a target light field $\mathbf{L}$ is given, an optimization problem can be expressed in a least-squared error sense, where the constraints on ${\mathbf{p}}_{s}$ and ${\mathbf{p}}_{l}$ ensure that the optimized patterns are physically feasible:

$$ \underset{{\mathbf{p}}_{s},{\mathbf{p}}_{l}}{\mathrm{arg\,min}}\;{\left\Vert \mathbf{L}-({\mathbf{F}}_{s}{\mathbf{p}}_{s})\circ ({\mathbf{F}}_{l}{\mathbf{p}}_{l})\right\Vert }^{2},\quad \text{subject to}\;\;0\le {\mathbf{p}}_{s},{\mathbf{p}}_{l}\le 1 \tag{6} $$
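This factorization can be sketched with NMF-style multiplicative updates. The problem sizes, the randomly generated binary projection matrices, and the exact update rule below are illustrative assumptions (a standard heuristic in the spirit of [14]), not the experimental configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: m light rays, a SLM pixels, b LCD pixels.
m, a, b = 400, 50, 60

# Binary projection matrices: each ray hits one SLM and one LCD pixel
# (stand-ins for the ray-traced mappings f_s and f_l).
Fs = np.zeros((m, a)); Fs[np.arange(m), rng.integers(0, a, m)] = 1.0
Fl = np.zeros((m, b)); Fl[np.arange(m), rng.integers(0, b, m)] = 1.0

# Target light field L, generated from known patterns so a good fit exists.
L = (Fs @ rng.uniform(0.2, 1.0, a)) * (Fl @ rng.uniform(0.2, 1.0, b))

ps = rng.uniform(0.5, 1.0, a)   # SLM transmittance pattern p_s
pl = rng.uniform(0.5, 1.0, b)   # LCD transmittance pattern p_l
eps = 1e-9

def recon(ps, pl):
    return (Fs @ ps) * (Fl @ pl)

err0 = np.sum((L - recon(ps, pl)) ** 2)
for _ in range(100):
    r = recon(ps, pl)
    # Multiplicative updates keep the patterns nonnegative; clipping to
    # [0, 1] enforces physically feasible transmittances.
    ps *= (Fs.T @ (L * (Fl @ pl))) / (Fs.T @ (r * (Fl @ pl)) + eps)
    ps = np.clip(ps, 0.0, 1.0)
    r = recon(ps, pl)
    pl *= (Fl.T @ (L * (Fs @ ps))) / (Fl.T @ (r * (Fs @ ps)) + eps)
    pl = np.clip(pl, 0.0, 1.0)

err = np.sum((L - recon(ps, pl)) ** 2)
print(bool(err < err0))   # the squared reconstruction error decreases
```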

#### 2.3 Simulation I: Comparison between target light field and reconstructed light field

To investigate the feasibility of the computational multi-projection system, simulations for several target light fields are performed. Two projectors are used, and each of them generates the desired viewpoints around one eye individually. The gap *g* between the floated SLM and the LCD panel is set to 20 mm and the observation distance is 190 mm. A single projector constructs nine views in the horizontal direction, and the distance between two adjacent views is set to 3 mm.

Figure 3 represents the simulation results. Two image sources (letters and a human figure) are used, and the target images are rendered with Blender [22]. The 1st view image from projector 1 is 44.5 mm away from the central axis of the screen. Symmetrically, the 9th view from projector 2 is 44.5 mm away from the axis on the opposite side. The distance between the 5th view images from the two projectors is 62.5 mm, which is similar to the inter-pupillary distance. The reconstructed simulation images at each position have the proper disparity and offer the desired parallax compared to the target images. A further discussion of the quality of the reconstructed images is given in the next section.

#### 2.4 Simulation II: Reliability of reconstructed images with various gaps

To evaluate the quality of the reproduced images, we use PSNR, which expresses the ratio of the maximum intensity value to the mean square error (MSE) between the target image and the reconstructed image [23]:

$$ \mathrm{PSNR}=10\,{\log }_{10}\!\left(\frac{{I}_{\max }^{2}}{\mathrm{MSE}}\right) \tag{7} $$
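As a concrete sketch, the PSNR between a target image and a reconstruction can be computed as follows; the 8×8 test arrays are hypothetical:

```python
import numpy as np

def psnr(target, recon, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    t = np.asarray(target, dtype=float)
    r = np.asarray(recon, dtype=float)
    mse = np.mean((t - r) ** 2)               # mean square error
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# A tiny hypothetical example: one pixel differs by 0.1 out of 64 pixels.
t = np.zeros((8, 8))
r = t.copy()
r[0, 0] = 0.1
print(round(psnr(t, r), 2))  # → 38.06
```

The example also illustrates the caveat raised below: a large untouched background keeps the MSE small and inflates the PSNR.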

In Figs. 3(a) and 3(b), we calculated the PSNRs for both images, and the values are about 40 dB. However, they are not reliable since the background takes up a large portion of the target images, decreasing the mean square error. Accordingly, we rendered another object which fills up a sufficient part of the target images and calculated the PSNR with various gaps, as shown in Fig. 4(a).

Figure 4(b) represents the change of PSNR as the number of iterations of the multiplicative update rules increases. Figure 4(c) shows the relationship between PSNR and the size of the gap. As presented in Fig. 4(c), the peak PSNR appears at a gap value between 30 mm and 40 mm, which is neither the smallest nor the biggest value. By dividing this interval into 10 equal parts, the gap value which provides the peak PSNR is found to be 32 mm. The reason for this phenomenon is illustrated in Fig. 5.

Figure 5(a) presents the situation when the gap is small, and the dashed line indicates a specific depth. If the system is to express a_{1}, b_{1}, and c_{1}, the rays from o_{1} pass through three different points in the LCD panel with interval *Δx*. If *Δx* is smaller than the pixel pitch of the LCD panel, the three points cannot be differentiated, and this induces resolution degradation. When the gap becomes larger, as in Fig. 5(b), the interval *Δx’* becomes large enough compared to the pixel pitch. However, the interval between points a_{2}, b_{2}, and c_{2} also becomes larger, which means the correlation of the three points decreases. Since o_{2} attempts to contain information from these three less-correlated points, the quality of the image is degraded. Consequently, even if the system presents the same 3D images, their qualities are influenced by the gap, and there is an optimal gap value for a given 3D content which offers the maximum PSNR. A detailed analysis of this phenomenon remains for future research.
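This trade-off can be checked numerically with a similar-triangles sketch. The formula below is an assumed simplification (rays from one floated-SLM pixel toward viewpoints 3 mm apart at the 190 mm observation distance), not the paper's exact derivation:

```python
def delta_x_mm(gap_mm, view_spacing_mm=3.0, d_ob_mm=190.0):
    """Interval at the rear LCD between rays from one floated-SLM pixel
    aimed at two viewpoints view_spacing_mm apart (assumed geometry)."""
    return view_spacing_mm * gap_mm / (d_ob_mm + gap_mm)

pitch = 0.2944  # rear LCD pixel pitch in mm, as quoted in section 3.2
for g in (20, 32, 40):
    print(g, round(delta_x_mm(g), 3), delta_x_mm(g) > pitch)
```

Under this assumed geometry, the 20 mm gap leaves *Δx* just below the 0.2944 mm pixel pitch, while gaps above roughly 30 mm separate the three points, which is consistent with the peak PSNR appearing near 32 mm.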

## 3. Experiment

#### 3.1 Experimental setup

In the experiment, two typical projectors (PB63K, LG, Korea), a light shaping diffuser (#65-885, Edmund Optics, USA), a Fresnel lens, and an LCD panel (OD190E0A-TAS, Odhitec, Korea) are used to implement the proposed system. A 5° × 5° diffuser is employed to achieve the proper viewing area with enough brightness, as the diffusing angle and brightness are in a trade-off relation. Figure 6 represents the experimental setup. As shown in Fig. 6(c), the diffuser and Fresnel lens are placed 20 mm behind the rear LCD panel, with the diffuser attached to the lens. For calibrating the two projectors and the LCD panel, a webcam and checkerboard images are employed. The detailed experimental conditions are shown in Table 1.

#### 3.2 Results

Figure 7 represents the results of the experiment. By comparing the simulation and experimental results, we can confirm the feasibility of the proposed system. The results for both the letters and the human figure show the desired parallax with correct color information. They show that the proposed system can provide several horizontal views without increasing the number of projectors or employing mechanical movement. This characteristic can reduce the complexity of the system and increase its stability. Hence, it is beneficial for the cost issue, which is the major disadvantage of general multi-projection systems.

One advantage of the computational multi-projection display is its capability to provide flexibly positioned viewpoints. General projection-type multi-view systems usually offer only horizontal parallax, since it is difficult to stack projector arrays properly in the vertical direction. However, the proposed system has no such limitation in providing vertical parallax, since the diffuser spreads light rays in all directions, generating an appropriate viewpoint area. We demonstrate the ability to offer vertical parallax with experiments using 9 target images and a single projector.

Figure 8 represents the comparison between the targets, simulation, and experimental results. View images from projector 1 are 32.5 mm away in the horizontal direction from the central axis of the screen, and adjacent views are separated by a 3 mm interval. View images from projector 2 are placed symmetrically to the images from projector 1 on the opposite side of the central axis. The experimental results show that the system offers the disparity with the desired vertical parallax. However, the system shows limitations in expressing the details of the view images. The results for the two letters at different depths are reproduced well, since they have little texture and simple patterns, but when the target images contain complicated structures such as a human figure, it is difficult to present them accurately with the proposed system. Figure 9 compares specific parts of the human figure from the target images, simulation, and experiment.

Figures 9(a) and 9(b) show the human figure’s torso and face, which contain delicate parts. Since the system presents 18 views using two projectors and one rear LCD panel, it is natural that the optimization involves some degradation in image quality, which appeared in the simulation. When it comes to the experiment, some additional factors restrict the precise reconstruction. First, there is a diffraction effect at the pixelated structure of the LCD panel. When a rectangular aperture with width *w* and height *h* is illuminated, the intensity pattern that appears along the *x-y* plane due to diffraction is expressed as the square of a sinc function. If the distance *z* between the aperture and the intensity pattern is sufficiently large compared to *x* and *y*, the intensity pattern can be expressed as follows [24]:

$$ I({\theta }_{x},{\theta }_{y})={I}_{c}\,{\mathrm{sinc}}^{2}\!\left(\frac{w\tan {\theta }_{x}}{\lambda }\right){\mathrm{sinc}}^{2}\!\left(\frac{h\tan {\theta }_{y}}{\lambda }\right) \tag{8} $$

Here, *λ* is the wavelength of the light and *I _{c}* is the intensity of the incident plane wave. The angles *θ _{x}* and *θ _{y}* are tan^{−1}(*x*/*z*) and tan^{−1}(*y*/*z*), respectively. Figure 10(a) demonstrates the diffraction effect in the system. As the wave propagates through the floated SLM, it spreads in a cone shape due to the rectangular aperture of the pixel. Before the diffraction is considered, each ray passes through a single pixel of the rear LCD panel. In the real situation, however, a set of light rays emerges from a pixel of the floated SLM due to the diffraction, and these rays affect neighboring pixels as presented in Fig. 10(a). Figure 10(b) represents the normalized intensity pattern of the proposed system along the *x* and *y* directional angles based on Eq. (8), where *w* and *h* are 0.345 mm and *λ* is 540 nm. The pixel boundary is determined as the angle of the light ray which passes through the boundary of a pixel of the rear LCD panel. Here, the gap between the floated SLM and the rear LCD panel is 20 mm and the pixel pitch of the rear LCD panel is 0.2944 mm. As presented in Fig. 10(b), the size of the intensity pattern from the diffraction effect is much smaller than a pixel pitch of the rear LCD panel in the proposed system. This indicates that the diffraction effect is not significant in the system.
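The comparison in Fig. 10(b) can be reproduced numerically from the quoted parameters: the first zero of the sinc² pattern of Eq. (8) is compared with the angle subtended by half a rear-LCD pixel as seen from the SLM pixel (the half-pixel criterion is an assumption for this sketch):

```python
import math

lam = 540e-9       # wavelength (m)
w = 0.345e-3       # floated-SLM pixel aperture width (m)
pitch = 0.2944e-3  # rear LCD pixel pitch (m)
gap = 20e-3        # floated SLM to rear LCD distance (m)

# First zero of the sinc^2 pattern: w * tan(theta) / lambda = 1.
theta_zero = math.degrees(math.atan(lam / w))
# Angle subtended by half a rear-LCD pixel seen from the SLM pixel.
theta_pixel = math.degrees(math.atan(0.5 * pitch / gap))

print(round(theta_zero, 3), round(theta_pixel, 3))  # → 0.09 0.422
print(theta_zero < theta_pixel)                     # diffraction lobe fits in a pixel
```

The central diffraction lobe (about ±0.09°) is indeed several times narrower than the pixel boundary angle (about ±0.42°), matching the conclusion that diffraction is not significant here.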

To analyze the diffraction effect in a more precise way, the simulation condition is altered. The gap is set to 163.1 mm so that the pixel boundary overlaps the 0.7*I _{c}* line, as represented in Fig. 10(c). In such a case, the eight adjacent pixels are affected by the diffraction while the centers of these pixels overlap the 0.5*I _{c}* line. Then, the projection matrix is modified by calculating the convolution between the original projection matrix and a scale matrix which considers the intensity of the eight adjacent pixels with 50% scale [18]. The previous PSNR value for the human figure is 44.77 dB. After the projection matrix is modified according to the diffraction effect, the PSNR value becomes 42.8 dB. This indicates that the diffraction effect is not significant even when the system parameters are altered into an exaggerated situation.

The second influence comes from the groove structure of the Fresnel lens and the coarse grains of the anisotropic diffuser. Because the Fresnel lens contains a groove structure, the transition between grooves causes a discontinuity in the optical properties of the lens and inevitable scattering [25]. The gap between grooves of the Fresnel lens used in the system is not negligible compared to the delicate parts of the target images. Also, the light shaping diffuser has coarse grains, which bring additional degradation to the images reconstructed by the system. These influences could be alleviated by using a Fresnel lens produced with higher-precision machining and a light shaping diffuser with a smaller grain size.

#### 3.3 Number of projectors

To verify that the proposed system can be regarded as a new type of multi-projection display, a simulation identifying the relationship between the quality of the reconstructed images and the number of projectors is performed. We assume that each projector can generate at most 9 views in parallel with a 3 mm gap, since the light shaping diffuser provides a 24 mm viewing area. The maximum viewing zone is set to 236 mm, which is the same as the width of the active screen, because the contribution of the rear LCD panel decreases if the viewpoint moves out of the active screen area. The same target images as in section 2.4 are used to evaluate the quality of the reconstructed images more precisely. Figure 11 represents the relationship between the PSNR value of the reconstructed images and the number of projectors.

As shown in Fig. 11(a), an increase in the number of projectors expands the viewing angle of the system with a fixed viewpoint interval of 3 mm. When the system employs a single projector providing 9 viewpoints, the viewing angle is about 6°. As the number of employed projectors increases to five and ten, the viewing angle becomes 30° and 60°, respectively. Figure 11(b) indicates that PSNR decreases as the number of projectors increases, because more projectors means more rays passing through the rear LCD panel. In other words, the rear LCD panel becomes involved in many view images as the system employs more projectors. However, the PSNR value is still larger than 26 dB when 10 projectors are employed, providing about 90 viewpoints. This suggests that the proposed system has the potential to become a new type of multi-projection display.

#### 3.4 Accommodation response

The vergence-accommodation conflict brings unnatural stereoscopic conditions to observers and induces visual fatigue. It can be resolved by providing more than two view images to each of the observer’s eyes and inducing a monocular accommodation response [5]. However, it is hard to offer such compact views with conventional projection-type multi-view systems due to the cost issue and the spatial limitation of stacking projection units densely. Meanwhile, the proposed system is designed to give dense views with a single projector: it provides viewpoints with a minimum 3 mm interval, which is small enough compared to the diameter of a pupil.

Based on the assumption that each pixel is a round aperture and the light is focused on the plane of the eye, diffraction causes the light to spread out and form an Airy disk [24]. The central element of the disk is bounded by the first minimum at the following angle:

$$ {\theta }_{d}=\frac{1.22\,\lambda }{p} \tag{9} $$

where *λ* is the wavelength of the light and *p* is the diameter of the pixel aperture. If the diameter of the central element of the Airy disk is less than or equal to the view spacing over the pupil, as in Eq. (10), adjacent views do not overlap due to diffraction and an accommodation response can occur [19]:

$$ 2\,{d}_{ob}\tan {\theta }_{d}\le \frac{r}{n} \tag{10} $$

Here, *d _{ob}* is the observation distance, *r* is the pupil diameter, and *n* is the number of views spaced over the pupil. The angle *θ _{d}* is 0.1094° in the proposed system.

Figure 12 demonstrates the accommodation response in simulation and experiments. The reconstructed images at five different positions are used to provoke the accommodation response. Based on the central image, the other four images are 3 mm away in the horizontal and vertical directions, forming a cross shape as presented in Fig. 12(a), and the axial (depth) gap between the two letters is 40 mm. The camera is placed 190 mm away from the rear LCD panel, which is the same as the observation distance. The configuration satisfies the super multi-view condition when the parameters *d _{ob}*, *θ _{d}*, *r*, and *n* in Eq. (10) are substituted with 190 mm, 0.1094°, 6 mm, and 3, respectively. Figure 12(b) shows the simulation results under the assumption that the observer’s eye perceives these five images at once. The five reconstructed images are generated including the diffraction effect mentioned in section 3.2. As shown in Fig. 12(b), when the virtual lens focuses on letter 3, letter 3 is imaged clearly while letter D is blurred. On the contrary, when the lens focuses on letter D, letter 3 is blurred while letter D shows a clear form. Likewise, in the experiment shown in Fig. 12(c), when the camera simulating the eye focuses on letter 3, letter D is blurred, and when letter D is focused, letter 3 is blurred.
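The substitution of these parameters into Eqs. (9) and (10) can be checked numerically. The pixel aperture diameter `p` is assumed here to equal the 0.345 mm SLM pixel width quoted in section 3.2, which reproduces the stated 0.1094° angle:

```python
import math

lam = 540e-9   # wavelength (m)
p = 0.345e-3   # pixel aperture diameter (m), assumed equal to the SLM pixel width
d_ob = 0.190   # observation distance (m)
r = 6e-3       # pupil diameter (m)
n = 3          # number of views spaced over the pupil

theta_d = 1.22 * lam / p                       # first Airy minimum, Eq. (9), radians
airy_diameter = 2 * d_ob * math.tan(theta_d)   # central-lobe diameter at the pupil
view_spacing = r / n                           # view spacing over the pupil

print(round(math.degrees(theta_d), 4))   # → 0.1094, as quoted in the text
print(airy_diameter <= view_spacing)     # super multi-view condition of Eq. (10)
```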

## 4. Conclusion

A computational multi-projection display is realized by employing a multi-projection system in combination with a compressive light field display. Light rays from the projectors are scattered as they penetrate a light shaping diffuser and are guided to the observer’s eyes by the Fresnel lens. Before reaching the observer, the light rays pass through the rear LCD panel, which modulates their intensity. Therefore, several dense viewpoints are formed from a single projector without loss of spatial resolution. In the experiment, off-the-shelf projectors are used to show the feasibility of the proposed system. Nine views with a 3 mm gap in the horizontal direction are generated from each projector, and two independent target 3D models are used to show the capability of displaying various contents.

The computational multi-projection system is advantageous for providing flexible positioning of viewpoints, since the light shaping diffuser generates an appropriate viewpoint area. To prove this characteristic, nine views with a 3 mm gap in the vertical direction are used in the simulation and the experiment. Both the horizontal and vertical view images show the desired parallax with correct color information. To achieve a viewpoint interval smaller than 3 mm while maintaining the other experimental factors, the pixel pitch of the rear LCD panel should decrease. As the pixel pitch decreases, however, the effect of diffraction is aggravated and makes two adjacent views overlap, as expressed in Eq. (10). Thus, the pixel pitch should be chosen so that two adjacent views are distinguished while the condition in Eq. (10) is still satisfied.

Although multiple layers are preferable for better image quality in compressive light field displays, the brightness loss restricts the practical implementation of multi-layer systems. The proposed system can increase the number of display devices (projectors) without brightness loss or additional arithmetic operations such as time multiplexing. We expect that the proposed method can overcome the barrier of brightness loss for a practical implementation of compressive displays, and that the capability of generating application-specific viewing zones will allow the system to be applied in various environments.

## Acknowledgment

This research was supported by “The Cross-Ministry Giga KOREA Project” of The Ministry of Science, ICT and Future Planning, Korea [GK15D0200, Development of Super Multi-View (SMV) Display Providing Real-Time Interaction]. The 3D human figure used in the experiment was modeled by Alexander Lee, and was used under Creative Commons Attribution, Share Alike 3.0.

## References and links

**1. **B. Lee, “Three-dimensional displays, past and present,” Phys. Today **66**(4), 36–41 (2013). [CrossRef]

**2. **S.-G. Park, J.-Y. Hong, C.-K. Lee, M. Miranda, Y. Kim, and B. Lee, “Depth-expression characteristics of multi-projection 3D display systems [invited],” Appl. Opt. **53**(27), G198–G208 (2014). [CrossRef] [PubMed]

**3. **K. Akşit, A. H. G. Niaki, E. Ulusoy, and H. Urey, “Super stereoscopy technique for comfortable and realistic 3D displays,” Opt. Lett. **39**(24), 6903–6906 (2014). [CrossRef] [PubMed]

**4. **J. Geng, “A volumetric 3D display based on a DLP projection engine,” Displays **34**(1), 39–48 (2013). [CrossRef]

**5. **A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “Rendering for an interactive 360 light field display,” ACM Trans. Graph. **26**(3), 40 (2007). [CrossRef]

**6. **S. Yoon, H. Baek, S. W. Min, S.-G. Park, M. K. Park, S. H. Yoo, H. R. Kim, and B. Lee, “Implementation of active-type Lamina 3D display system,” Opt. Express **23**(12), 15848–15856 (2015). [CrossRef] [PubMed]

**7. **Y. Takaki, M. Tokoro, and K. Hirabayashi, “Tiled large-screen three-dimensional display consisting of frameless multi-view display modules,” Opt. Express **22**(6), 6210–6221 (2014). [CrossRef] [PubMed]

**8. **K. Nagano, A. Jones, J. Liu, J. Busch, X. Yu, M. Bolas, and P. Debevec, “An autostereoscopic projector array optimized for 3D facial display,” http://gl.ict.usc.edu/Research/PicoArray/ (2013).

**9. **M. Kawakita, S. Iwasawa, M. Sakai, Y. Haino, M. Sato, and N. Inoue, “3D image quality of 200-inch glasses-free 3D display system,” Proc. SPIE **8288**, 82880B (2012). [CrossRef]

**10. **S. Iwasawa, M. Kawakita, and N. Inoue, “REI: an automultiscopic projection display,” in Proceedings of Three Dimensional Systems and Applications Conference (Ultra Realistic Communication Forum), Osaka, Japan, 2013, paper 1.

**11. **D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. **8**(3), 33 (2008). [CrossRef] [PubMed]

**12. **Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express **18**(9), 8824–8835 (2010). [CrossRef] [PubMed]

**13. **C.-K. Lee, S.-G. Park, J. Jeong, and B. Lee, “Multi-projection 3D display with dual projection system using uniaxial crystal,” SID Symp. Dig. Tech. Pap. **46**(1), 538–541 (2015).

**14. **D. Lanman, M. Hirsch, Y. Kim, and R. Raskar, “Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization,” ACM Trans. Graph. **29**(6), 163 (2010). [CrossRef]

**15. **G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, “Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays,” ACM Trans. Graph. **30**(4), 95 (2011). [CrossRef]

**16. **D. Lanman, G. Wetzstein, M. Hirsch, W. Heidrich, and R. Raskar, “Polarization fields: dynamic light field display using multi-layer LCDs,” ACM Trans. Graph. **30**(6), 186 (2011). [CrossRef]

**17. **G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, “Tensor display: compressive light field synthesis using multilayer display with directional backlighting,” ACM Trans. Graph. **31**, 1–11 (2012). [CrossRef]

**18. **M. Hirsch, G. Wetzstein, and R. Raskar, “A compressive light field projection system,” ACM Trans. Graph. **33**(4), 58 (2014). [CrossRef]

**19. **A. Maimone, G. Wetzstein, M. Hirsch, D. Lanman, R. Raskar, and H. Fuchs, “Focus 3D: compressive accommodation display,” ACM Trans. Graph. **32**(5), 153 (2013). [CrossRef]

**20. **B. E. A. Saleh and M. C. Teich, *Fundamentals of Photonics* (Wiley-Interscience, 1991).

**21. **N. D. Ho, P. Van Dooren, and V. Blondel, “Weighted nonnegative matrix factorization and face feature extraction,” Image Vis. Comput. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.550.2833 (2007).

**22. ** Blender Org, “Blender 2.76b”, https://www.blender.org/.

**23. **A. V. Oppenheim, A. S. Willsky, and S. H. Nawab, *Signals and Systems* (Prentice Hall, 1996).

**24. **J. W. Goodman, *Introduction to Fourier Optics*, 3rd ed. (Roberts & Company, 2005).

**25. **“Fresnel lens comparison”, https://www.modulatedlight.org/optical_comms/fresnel_lens_comparision.html