## Abstract

A highly efficient computer-generated integral imaging (CGII) method based on the backward ray-tracing technique is presented. In traditional CGII methods, the total rendering time is long because a large number of cameras must be established in the virtual world. With the backward ray-tracing technique, the ray origin and the ray direction are calculated for every pixel in the elemental image array, and the total rendering time can be noticeably reduced. The method is suitable for creating high quality integral images without the pseudoscopic problem. Real-time and non-real-time CGII rendering and optical reconstruction are demonstrated, and the effectiveness is verified with different types of 3D object models. Real-time optical reconstruction with 90 × 90 viewpoints and a frame rate above 40 fps is realized for the CGII 3D display without the pseudoscopic problem.

© 2017 Optical Society of America


## 1. Introduction

Three-dimensional (3D) displays are important tools to achieve highly immersive 3D images. Among various 3D display methods, the integral imaging technique can display full-color and quasi-continuous 3D images based on two-dimensional (2D) images [1,2], and conventional generation, processing and transmission techniques can easily be used. Since integral imaging was first proposed [3] in 1908, many methods to generate the elemental image array (EIA) have been developed. In recent years, computer-generated integral imaging (CGII) techniques [4,5] have attracted much attention. However, there are some problems in existing CGII methods. Depending on the lens array parameters and the object size, it usually takes a long time to render the EIA with existing methods, such as each camera viewpoint independent rendering (ECVIR) [6], multiple viewpoints rendering (MVR) [7], viewpoint vector rendering (VVR) [8], and parallel group rendering (PGR) [9]. If an elemental image contains *M′* × *N′* pixels, the total rendering time for ECVIR will be *M′* × *N′* times the rendering time of a single viewpoint, where *M′* and *N′* are the pixel numbers in the row and the column of the elemental image. MVR, VVR and PGR can be treated as special cases of ECVIR that accelerate the generation of a single viewpoint image. As the spatial information in the elemental image and the object size increase, the rendering time becomes too long to generate animations for the integral imaging display system. The pseudoscopic problem should also be addressed for the integral imaging display. Though most previous CGII methods work well in the virtual mode, the pseudoscopic problem in the real mode is not taken into consideration. For example, high-speed image space parallel processing [10] was used for a real-time integral imaging 3D display system with a hexagonal lens array based on CGII [11]. Incompatibility with different kinds of 3D object models is another problem for CGII. Many previous algorithms are only suitable for one kind of 3D data file. For example, PGR [12] can only process point data, and a real-time pickup method with GPU and octree [13] works well with medical data; however, it cannot deal with general mesh 3D graphics data files, which are widely used in other fields, such as 3DS files for 3ds Max.

Here, a novel CGII technique based on the backward ray-tracing (BRT) technique is presented. The ray tracing technique [14] is a main rendering approach to generate virtual images, and real-time ray tracing [15–18] has developed quickly in recent years. A single-stage integral photography capturing system [19] used a ray tracer to generate the integral image, but the pseudoscopic problem had to be eliminated by performing a 180° rotation of each elemental image around its optical axis. A ray tracking technique was also presented [20]; in fact, it is object-order rendering similar to the rasterization technique in computer graphics. Here, the backward ray-tracing technique in computer graphics is proposed to generate the integral image, which is suitable both for the real mode and for the virtual mode without the pseudoscopic problem. The standard ray tracing method is used to implement BRT CGII. Optical experiments for real-time reconstruction are presented, and the processing times with different 3D objects are analyzed. In addition, real-time rendering of big mesh data with the SBVH acceleration structure is realized.

## 2. Principles of BRT CGII

#### 2.1 The relationship between EIA pixels in integral image and the camera position in the virtual world

A light ray can be represented by an origin and a direction. As shown in Fig. 1, $\overrightarrow{{S}_{in}}$ and $\overrightarrow{{E}_{in}}$ are the start point and the end point of the lens' incident ray $\overrightarrow{{R}_{in}}$, and $\overrightarrow{{D}_{in}}=\overrightarrow{{E}_{in}}-\overrightarrow{{S}_{in}}$ is the direction of $\overrightarrow{{R}_{in}}$. The optical lens in the sheet is arbitrary and can be treated as a black box, and ${P}_{lens}$ denotes the optical property of the lens or lenses. Given that $\overrightarrow{{R}_{in}}$ and ${P}_{lens}$ are known, the exit ray $\overrightarrow{{R}_{ex}}$ can be calculated. $\overrightarrow{{R}_{ex}}$ is a function of $\overrightarrow{{R}_{in}}$ and ${P}_{lens}$, as depicted in Eq. (1).

To address the pseudoscopic problem, the arrangement of the camera array in the matrix format [6] is used. The camera intersected by $\overrightarrow{{R}_{ex}}$ can be found, and the obtained camera position $\overrightarrow{{P}_{cam}}$ can be represented by Eq. (2), where ${P}_{camArray}$ is the property of the camera array. Through the above analysis, we can conclude that if ${P}_{camArray}$ and ${P}_{lens}$ are known, the camera position $\overrightarrow{{P}_{cam}}$ is determined by the pixel position $\overrightarrow{{S}_{in}}$, which is the start point of the incident ray $\overrightarrow{{R}_{in}}$.

#### 2.2 The principle of backward ray-tracing technique

The backward ray-tracing (BRT) technique works by tracing a path from a virtual camera through each pixel in a virtual screen, as shown in Fig. 2, and calculating the color of the object visible through that pixel. A basic backward ray tracer includes three parts [21]. The first part is *ray generation*, which calculates the origin and the direction of each pixel's view ray based on the camera geometry. The second part is *ray intersection*, which finds the closest object intersecting the view ray. The final part is *shading*, which computes the pixel color based on the result of the ray intersection.
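The three parts can be made concrete with a toy backward ray tracer. The sketch below is illustrative only: it uses a single hypothetical pinhole camera and one sphere, not the paper's lens-array setup, and the names (`ray_sphere_intersect`, `render`) are ours.

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Ray intersection part: nearest positive hit distance t, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    t1 = (-b - math.sqrt(disc)) / (2.0 * a)
    t2 = (-b + math.sqrt(disc)) / (2.0 * a)
    hits = [t for t in (t1, t2) if t > 1e-6]
    return min(hits) if hits else None

def render(width, height, cam_z=-5.0):
    """Ray generation + intersection + shading for one pinhole camera at (0, 0, cam_z),
    looking at a unit sphere at the origin. Returns a row-major grayscale image."""
    image = []
    for row in range(height):
        for col in range(width):
            # Ray generation: map the pixel to a point on a virtual screen one unit ahead.
            x = (col + 0.5) / width * 2.0 - 1.0
            y = (row + 0.5) / height * 2.0 - 1.0
            t = ray_sphere_intersect((0.0, 0.0, cam_z), (x, y, 1.0),
                                     (0.0, 0.0, 0.0), 1.0)
            # Shading: flat white on a hit, black background otherwise.
            image.append(255 if t is not None else 0)
    return image
```

In BRT CGII only the ray generation step differs: instead of one camera, the origin and direction are chosen per pixel according to the lens-array geometry.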

The BRT technique has several advantages over rasterization. One is that special camera models are easily supported: the user can associate points on the screen with any desired directions. There are many virtual cameras for an integral image, and every pixel in the virtual screen has its own virtual camera. Only the *ray generation* part needs to be modified to generate the proper view rays for integral images. A backward view ray in BRT CGII should start from one of the virtual cameras, and its direction should point from that start point to the ray's pixel position. The total count of the view rays is the same as for a single 2D picture, so BRT CGII can effectively reduce the rendering time.

Because of the reversibility of the optical path, the direction of the view ray $\overrightarrow{{R}_{d}}$ is $-\overrightarrow{{R}_{ex}}$, and the start point of $\overrightarrow{{R}_{d}}$ is the position of one virtual camera in Fig. 1. The view ray $\overrightarrow{{R}_{d}}$ intersects the 3D object at the closest hit and at the farthest hit. The color of the pixel $\overrightarrow{{S}_{in}}$ is the color of the closest hit rather than that of the farthest hit, because the latter causes the pseudoscopic problem in the real mode. Therefore, it is easy to eliminate the pseudoscopic problem with the BRT technique.
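The hit selection itself is a one-line rule; a minimal sketch (the helper name `visible_hit` is ours) of why only the smallest positive hit distance is kept:

```python
def visible_hit(hit_ts):
    """Return the closest hit (smallest positive t) along a view ray.
    Keeping the farthest hit instead would depth-invert the scene,
    i.e. produce the pseudoscopic problem in the real mode."""
    front = [t for t in hit_ts if t > 0]
    return min(front) if front else None
```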

#### 2.3 Parameter settings for BRT CGII

For simplicity, the optical lenses in Section 2.1 are treated as pinholes. It is assumed that the elemental image on the LCD panel consists of *N* × *N* pixels, as shown in Figs. 3(a) and 3(b), and *N* × *N* virtual cameras are arranged in the matrix format. Given that a point (*x*, *y*, 0) is on the LCD panel, its index (*k*, *m*) can be expressed as Eq. (5). Each pixel has its own index (*k*, *m*) in the whole LCD panel and the index (*i*, *j*) in its elemental image. The index (*i*, *j*) can be deduced by Eq. (3). A virtual camera can be represented as *Cam*(*i*, *j*), and the position of *Cam*(*i*, *j*) can be calculated with Eq. (4). ${P}_{ij}$ is a function of the index (*k*, *m*). *pp* represents the width of an LCD pixel, and *W* and *H* are the horizontal and vertical resolutions, respectively. Therefore, the camera position ${P}_{ij}$ can be obtained by combining Eqs. (3), (4) and (5) under the condition that the pixel position is (*x*, *y*, 0).

Through the above discussion, the origin $\overrightarrow{{R}_{o}}$ and the direction $\overrightarrow{{R}_{d}}$ of the pixel (*k*, *m*)'s view ray can be described in the following equation.
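Since Eqs. (3)–(5) are not reproduced here, the concrete index forms below are our assumptions, chosen only to illustrate the pixel-to-camera mapping: the panel index (*k*, *m*) from the pixel pitch, the elemental-image index (*i*, *j*) as a modulo, and a centered *N* × *N* camera grid.

```python
import math

def pixel_indices(x, y, pp, W, H):
    """Assumed form of Eq. (5): panel index (k, m) of the point (x, y, 0),
    with the panel origin at its centre and pp the pixel pitch."""
    k = int(math.floor(x / pp + W / 2))
    m = int(math.floor(y / pp + H / 2))
    return k, m

def camera_index(k, m, N):
    """Assumed form of Eq. (3): index (i, j) inside the N x N elemental image."""
    return k % N, m % N

def camera_position(i, j, N, d_c, z_cam):
    """Assumed form of Eq. (4): Cam(i, j) lies on an N x N grid with camera
    spacing d_c, centred on the optical axis, on the plane z = z_cam."""
    px = (i - (N - 1) / 2) * d_c
    py = (j - (N - 1) / 2) * d_c
    return px, py, z_cam
```

Chaining the three functions gives the camera position directly from a panel point (*x*, *y*, 0), which is the combination of Eqs. (3), (4) and (5) described above.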

As depicted in Fig. 3(d), the distance ${D}_{CL}$ between the camera array and the lens array can be represented as ${D}_{CL}=g/({L}_{EIA}/{L}_{LEN}-1)$, where ${L}_{LEN}$ is the distance between adjacent lenses, *g* represents the gap between the lens array and the LCD panel, and ${L}_{EIA}$ is the side length of the EIA. The distance ${D}_{C}$ between two adjacent cameras can be calculated with the formulation ${D}_{C}={D}_{CL}\cdot pp/g$.
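The two distance formulas are plain arithmetic; a small numerical check with assumed example values (*g* = 3, *L*_EIA = 10, *L*_LEN = 2, *pp* = 0.4, in arbitrary consistent units):

```python
def camera_array_geometry(g, L_EIA, L_LEN, pp):
    """Distances from Fig. 3(d): camera-array-to-lens-array distance D_CL
    and adjacent-camera spacing D_C."""
    D_CL = g / (L_EIA / L_LEN - 1)  # D_CL = g / (L_EIA / L_LEN - 1)
    D_C = D_CL * pp / g             # D_C = D_CL * pp / g
    return D_CL, D_C

# With the example values: D_CL = 3 / (10/2 - 1) = 0.75 and D_C = 0.75 * 0.4 / 3 = 0.1.
```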

The general process of the BRT CGII technique is shown in Fig. 4(a), and we adopt a parallel computing method to generate the view rays. Every ray has its own thread, and its ray-object intersection and shading happen in the same thread. Sampling techniques let us get away from the sharp-edged, clinically clean look of traditional ray-traced images: a higher sampling rate enhances the image quality, so a pixel may contain many randomly distributed view rays from its virtual camera [23]. In the end, the final integral image is rendered directly, without generating each viewpoint picture as in ECVIR. Therefore, the higher efficiency of BRT CGII over ECVIR is obvious.
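Per-pixel random sampling as in [23] can be sketched as follows; the function name `pixel_color` and the callback interface `shade(u, v)` are our own illustrative choices, not the paper's implementation.

```python
import random

def pixel_color(shade, k, m, samples=16, seed=0):
    """Average several randomly jittered view rays inside pixel (k, m).
    shade(u, v) returns the radiance of the view ray through the
    sub-pixel position (u, v); a fixed seed keeps the sketch repeatable."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        u, v = rng.random(), rng.random()  # jitter inside the pixel footprint
        total += shade(k + u, m + v)
    return total / samples
```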

## 3. Experimental results

The proposed BRT CGII is implemented with non-real-time and real-time backward ray tracing engines. Non-real-time rendering leverages limited processing power to obtain a higher quality EIA, which is suitable for rendering feature films and videos. For games and simulations, where a frame rate of 30 to 120 fps should be achieved, real-time rendering is realized with the real-time backward ray tracing engine.

To illustrate the generality of BRT CGII in dealing with 3D graphics data files, different 3D mesh models are rendered in the experiment. ECVIR implemented with a raster render engine is used for comparison. The main procedure of ECVIR is shown in Fig. 4(b).

Table 1 shows the specifications of the 3D data used in the experiment. The PC hardware is composed of an Intel(R) Core(TM) i7-4790 CPU @ 3.6 GHz with 8 GB RAM and an NVIDIA GeForce GTX 770 (2 GB) graphics card. The integral imaging display configuration can display the 3D light field with a view angle of 40° × 40°. The performance of the proposed method is evaluated by comparing the generation times of integral pictures. Three kinds of 3D object models, including Monkey, Cup and Skull, with different file formats are used. Figure 5 shows the non-real-time 3D mesh models and their integral images. The monkey model shown on the integral imaging display [24] from different perspectives is given in Fig. 6. The real-time BRT CGII rendering result is shown in Fig. 7.

Table 2 shows the processing time comparison between the ECVIR method proposed by Yanaka [6] and our proposed method. In the optical reconstruction, the lens array consists of 43 × 36 lenses, and each elemental image covers 90 × 90 pixels. We can see that the proposed method is much faster than the ECVIR method for different kinds of 3D data files.

The rendering time depends on the number of view rays in serial computing (CPU) mode. If the parallel computing mode with a GPU is used with enough computation units, the total rendering time is nearly equal to the rendering time of a single pixel, and real-time rendering can be realized. The factors that affect the speed of BRT computer-generated integral imaging are similar to those that affect the speed of rendering a common 2D image with the ray tracing technique. The main factor influencing the rendering efficiency is the ray-object intersection efficiency, and the common approach to enhance it is to construct an acceleration structure, as presented in Ref [25]. For instance, SBVH is the best acceleration structure for a static geometry group. In our experiment in Fig. 7(c), the numbers of view rays and polygons are 8 million and 1.5 million respectively, and real-time rendering can be realized with the SBVH acceleration structure.
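Ref [25] covers the SBVH builder itself; what every BVH-style traversal rests on is a cheap ray-box "slab" test that culls whole groups of polygons per node. The sketch below is our own minimal version of that test, not the OptiX implementation.

```python
def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray origin + t * dir (t >= 0) hit the axis-aligned box?
    inv_dir holds 1/d per axis, precomputed once per ray so traversal of many
    BVH nodes costs only multiplications and comparisons."""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        if t1 > t2:
            t1, t2 = t2, t1
        t_near = max(t_near, t1)  # latest entry across all slabs
        t_far = min(t_far, t2)    # earliest exit across all slabs
    return t_near <= t_far
```

If the test fails at a node, every triangle below that node is skipped, which is why the intersection cost grows roughly logarithmically rather than linearly with the polygon count.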

Figure 8 illustrates the processing time with different sizes of the EIA. The processing time with the ECVIR method for three different mesh models is shown in Fig. 8(a). We can see that as the number of EIA pixels becomes larger, the rendering time increases noticeably. Figure 8(b) shows the processing time with different numbers of pixels for the three models with our proposed BRT CGII method, whose rendering time is not observably affected by the number of elemental image pixels. Comparing Figs. 8(a) and 8(b), we can see that BRT CGII is more efficient than ECVIR. Therefore, it is possible to generate elemental images for 3D mesh models with our proposed method even when the number of EIA pixels is large.

## 4. Conclusion

In summary, a highly efficient computer-generated integral imaging (CGII) method based on the backward ray-tracing technique is presented. Integral images for different kinds of 3D object models can be efficiently generated, in both non-real time and real time, and optical reconstruction for different kinds of 3D models is demonstrated. The number of viewpoints in the experimental optical system can reach 90 × 90, and the rendering efficiency of our method is not affected by the number of viewpoints. When the numbers of view rays and polygons in the scene are 8 million and 1.5 million respectively, BRT CGII with the SBVH acceleration structure in GPU mode can realize real-time rendering for an integral imaging display device with a 3840 × 2160 LCD panel.

## Funding

This work is partly supported by the “863” Program (2015AA015902), the National Natural Science Foundation of China (NSFC) (61575025), and the Fund of the State Key Laboratory of Information Photonics and Optical Communications.

## References and links

**1. **J. Y. Son, V. V. Saveljev, B. Javidi, D. S. Kim, and M. C. Park, “Pixel patterns for voxels in a contact-type three-dimensional imaging system for full-parallax image display,” Appl. Opt. **45**(18), 4325–4333 (2006). [CrossRef] [PubMed]

**2. **J. Y. Son, V. V. Saveljev, K. D. Kwack, S. K. Kim, and M. C. Park, “Characteristics of pixel arrangements in various rhombuses for full-parallax three-dimensional image generation,” Appl. Opt. **45**(12), 2689–2696 (2006). [CrossRef] [PubMed]

**3. **H. E. Ives, “Optical properties of a Lippman lenticulated sheet,” J. Opt. Soc. Am. **21**(3), 171 (1931). [CrossRef]

**4. **B. N. R. Lee, Y. Cho, K. S. Park, S. W. Min, J. S. Lim, C. W. Min, and R. P. Kang, “Design and implementation of a fast integral image rendering method,” in *Entertainment Computing - ICEC 2006* (2006), pp. 135–140.

**5. **H. Liao, K. Nomura, and T. Dohi, “Autostereoscopic integral photography imaging using pixel distribution of computer graphics generated image,” in *ACM SIGGRAPH 2005 Posters* (2005), pp. 73.

**6. **K. Yanaka, “Integral photography using hexagonal fly’s eye lens and fractional view,” Proc. SPIE **6803**, 68031K (2008). [CrossRef]

**7. **M. Halle, “Multiple viewpoint rendering,” in Conference on Computer Graphics and Interactive Techniques (2010), pp. 243–254.

**8. **K. S. Park, S. W. Min, and Y. Cho, “Viewpoint vector rendering for efficient elemental image generation,” IEICE Trans. Inf. Syst. **E90-D**(1), 233–241 (2007). [CrossRef]

**9. **S. W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. Lett. **44**(2), L71–L74 (2005). [CrossRef]

**10. **K. C. Kwon, C. Park, M. U. Erdenebat, J. S. Jeong, J. H. Choi, N. Kim, J. H. Park, Y. T. Lim, and K. H. Yoo, “High speed image space parallel processing for computer-generated integral imaging system,” Opt. Express **20**(2), 732–740 (2012). [CrossRef] [PubMed]

**11. **D. H. Kim, M. U. Erdenebat, K. C. Kwon, J. S. Jeong, J. W. Lee, K. A. Kim, N. Kim, and K. H. Yoo, “Real-time 3D display system based on computer-generated integral imaging technique using enhanced ISPP for hexagonal lens array,” Appl. Opt. **52**(34), 8411–8418 (2013). [CrossRef] [PubMed]

**12. **Y. Igarashi, H. Murata, and M. Ueda, “3-D display system using a computer generated integral photograph,” Jpn. J. Appl. Phys. **17**(9), 1683–1684 (1978). [CrossRef]

**13. **Y. H. Jang, P. Chan, J. S. Jung, J. H. Park, N. Kim, J. S. Ha, and K. H. Yoo, “Integral imaging pickup method of bio-medical data using GPU and octree,” Electrophoresis **10**, 924–929 (2010).

**14. **K. Akeley, D. Kirk, L. Seiler, P. Slusallek, and B. Grantham, “When will ray-tracing replace rasterization?” in *ACM SIGGRAPH 2002 Conference Abstracts and Applications* (2002), pp. 206–207.

**15. **J. M. Singh and P. J. Narayanan, “Real-time ray tracing of implicit surfaces on the GPU,” IEEE Trans. Vis. Comput. Graph. **16**(2), 261–272 (2010). [CrossRef] [PubMed]

**16. **Z. Li, T. Wang, and Y. Deng, “Fully parallel kd-tree construction for real-time ray tracing,” in *Meeting of the ACM SIGGRAPH Symposium on Interactive 3d Graphics and Games* (2014), pp. 159. [CrossRef]

**17. **B. C. Budge, J. C. Anderson, C. Garth, and K. I. Joy, “A straightforward CUDA implementation for interactive ray-tracing,” in IEEE Symposium on Interactive Ray Tracing (IEEE, 2008), pp. 178. [CrossRef]

**18. **S. G. Parker, A. Robison, M. Stich, J. Bigler, A. Dietrich, H. Friedrich, J. Hoberock, D. Luebke, D. McAllister, M. McGuire, and K. Morley, “OptiX: a general purpose ray tracing engine,” ACM Trans. Graph. **29**(4), 157–166 (2010). [CrossRef]

**19. **S. S. Athineos and P. G. Papageorgas, “Photorealistic integral photography using a ray-traced model of capturing optics,” J. Electron. Imaging **15**(4), 043007 (2006). [CrossRef]

**20. **G. Milnthorpe, M. Mccormick, and N. Davies, “Computer modeling of lens arrays for integral image rendering,” in Proceedings of the Eurographics UK Conference*,*2002 (2002), pp. 136–141. [CrossRef]

**21. **M. Ashikhmin, *Fundamentals of Computer Graphics* (AK Peters, 2005), pp. 70–79.

**22. **Wikipedia, Ray tracing (graphics), https://en.wikipedia.org/wiki/Ray_tracing_(graphics)

**23. **K. Suffern, *Ray Tracing from the Ground Up* (A. K. Peters, Ltd., 2007), pp. 93–119.

**24. **X. Sang, F. C. Fan, C. C. Jiang, S. Choi, W. Dou, C. Yu, and D. Xu, “Demonstration of a large-size real-time full-color three-dimensional display,” Opt. Lett. **34**(24), 3803–3805 (2009). [CrossRef] [PubMed]

**25. ** NVIDIA, SBVH acceleration structure, http://docs.nvidia.com/gameworks/index.html#gameworkslibrary/optix/optix_host_api.htm?Highlight=sbvh