Optica Publishing Group

Backward ray tracing based high-speed visual simulation for light field display and experimental verification

Open Access

Abstract

Existing simulation methods cannot directly produce the three-dimensional (3D) display result of a light field display (LFD), which is important for design and optimization. Here, a high-speed visual simulation method to calculate the 3D image light field distribution is presented. Based on the backward ray tracing technique (BRT), the geometric and optical models of the LFD are constructed. The display result images are obtained, and the field of view (FOV) and depth of field (DOF) can be estimated, in agreement with both theoretical and experimental results. The simulation time is 1 s when the number of sampling rays is 3840×2160×100, and the computational speed of the method is at least 1000 times faster than that of a traditional physics-based renderer. A prototype was fabricated to evaluate the feasibility of the proposed method. The results show that our simulation method has good potential for predicting the displayed image of the LFD for various positions of the observer's eye with sufficient calculation speed.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Light field display (LFD), a promising three-dimensional (3D) imaging technique, has been extensively investigated in recent years because of its many applications, such as medical science, robotics, manufacturing, and defense [1]. Full-color 3D images with continuous horizontal and vertical motion parallax can be provided [2–5]. However, the design and improvement of LFD systems is a time-consuming process, as it is difficult to predict their 3D display effect without a prototype. Commercial optical software such as Zemax and LightTools focuses on optical characteristics analysis rather than graphic visualization. Therefore, directly predicting the display appearance of an LFD is a pressing requirement.

Several simulation models have been used to describe the optical phenomena of barrier- or lenticular-array-based autostereoscopic displays [6–9]. These reports focus on calculating the angular distribution of the light profile at a local point of the display screen. A 3D simulation model was developed to calculate the optical intensity distribution over the entire screen of an autostereoscopic display at a given eye position [10]. However, this numerical simulation method is not intuitive enough for designers to predict the display effect. Therefore, an optical model for valid simulation of the displayed image over the entire screen is strongly required.

BRT generates an image by tracing the path of light backward from the pixels of an image plane and simulating the effects of its encounters with virtual objects [11]. It is capable of simulating a wide variety of optical effects, such as radiation, scattering, reflection and refraction [12]. High-speed BRT on a graphics processing unit (GPU) [13] has developed rapidly in recent years and has also been demonstrated in experiments [14,15].

Here, a high-speed LFD visual simulation method based on BRT is presented. The models of the LCD, the lens array and the diffuser are constructed in the virtual scene, and a real-time simulation program is implemented based on BRT. Finally, the correctness of the simulation is verified in experiments. The simulated display images are obtained, and the FOV and DOF are estimated, which are consistent with theoretical and experimental results. The improvement in calculation speed with BRT is also discussed by comparing the calculation time of our model with that of a traditional physics-based renderer.

2. Basic modeling

BRT works by tracing a path of light from a virtual camera through each pixel in a virtual screen and calculating the color of the object visible through it, as shown in Fig. 1(a). A basic backward ray tracer includes two parts: ray intersection and shading. The result of ray intersection is determined by the geometric shape of the model, and the result of shading is determined by the model material. The geometric shapes of the lenses in the LFD system are defined by constructive solid geometry (CSG) [16], as shown in Fig. 1(b). Compared with the common boundary-representation (BREP) modeling method [17], shown in Fig. 1(c), CSG-based geometry is more accurate and faster to trace with BRT. The material of each component is defined with the bidirectional scattering distribution function (BSDF), which describes how light is scattered at a particular point on a surface [18]. The complete system comprises an LCD panel, a lens array and a diffuser, as shown in Fig. 1(a). The geometric shapes and materials of the models are given in the following sections.
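The two-part structure described above can be sketched in a few lines of Python; `intersect` and `shade` are hypothetical callbacks standing in for the scene's geometry and material queries, not the paper's actual implementation:

```python
import numpy as np

def trace_pixel(cam_pos, pixel_pos, intersect, shade):
    # One backward-traced ray: camera -> pixel -> scene.
    # `intersect(origin, direction)` returns (point, normal, material)
    # or None; `shade` turns a hit into an RGB color.
    direction = pixel_pos - cam_pos
    direction = direction / np.linalg.norm(direction)
    hit = intersect(cam_pos, direction)
    if hit is None:
        return np.zeros(3)  # ray escaped the scene: background color
    return shade(*hit)
```

Looping this over every pixel of the virtual screen yields the simulated display image.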

Fig. 1. (a) The computation process of BRT. (b) Lens modeled by CSG in proposed method and (c) BREP.

2.1 LCD model

As shown in Fig. 2(a), the geometric model of the LCD is a rectangle. The elemental image array (EIA) shown on the LCD panel is mapped to the panel's geometric model as a texture, and the resolution of the texture is equal to the LCD's resolution. Ideally, the light emitted by the LCD panel is Lambertian, so according to Lambert's cosine law [19], the radiant intensity of the LCD can be expressed as Eq. (1).

$${I_\theta } = {I_0}\cos \theta$$
Here, θ is the angle between the radiated light vector and the unit normal vector at the starting point on the screen surface, and I0 is the intensity when θ equals 0°. Figure 2(b) shows the intensity distribution in different directions.
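As a quick numerical check of Eq. (1) (the helper name `lambert_intensity` is illustrative, not from the paper), the radiant intensity of a Lambertian emitter can be evaluated as:

```python
import numpy as np

def lambert_intensity(i0, ray_dir, normal):
    # Eq. (1): I_theta = I_0 * cos(theta), where theta is the angle
    # between the emitted ray and the surface normal. Rays emitted
    # behind the panel are clamped to zero.
    ray_dir, normal = np.asarray(ray_dir, float), np.asarray(normal, float)
    cos_t = np.dot(ray_dir, normal) / (np.linalg.norm(ray_dir) * np.linalg.norm(normal))
    return i0 * max(cos_t, 0.0)
```

The intensity is maximal along the normal (θ = 0°) and falls to zero at grazing emission (θ = 90°), matching the distribution in Fig. 2(b).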

Fig. 2. (a) The texture of LCD panel model; (b) Intensity distribution in different directions on the LCD panel according to the Lambert's cosine law.

2.2 Lens array model

The lens array consists of many identical lenses, so a single lens can serve as the element. The geometric shape of the lens is shown in Fig. 3(a), and the lenses are arranged squarely as shown in Fig. 3(b). The surface of a lens includes three parts: a disk surface, a partial cylinder surface and a partial sphere surface, which can be expressed as Eq. (2). Here, r is the radius of the cylinder and the bottom surface, h is the height of the cylinder, and R is the radius of the top sphere surface. The position of the lens center is described by Eq. (3).

$$\left\{ \begin{array}{ll} {x^2} + {y^2} \le {r^2} & (z = 0)\\ {x^2} + {y^2} \le {r^2} & (0 < z < h)\\ {x^2} + {y^2} + {z^2} \le {R^2} & (z > h) \end{array} \right.$$
$$Center\_Position = (i + 0.5) \cdot d \cdot {\boldsymbol {u}} + (j + 0.5) \cdot d \cdot {\boldsymbol {v}} + {\boldsymbol {w}}$$
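Eq. (3) amounts to a simple affine placement of each lens in the array; a sketch (with the hypothetical helper name `lens_center`) is:

```python
import numpy as np

def lens_center(i, j, d, u, v, w):
    # Eq. (3): squarely arranged lens array. d is the lens pitch,
    # u and v are unit vectors along the array axes, and w is the
    # position vector of the array's corner.
    u, v, w = (np.asarray(a, float) for a in (u, v, w))
    return (i + 0.5) * d * u + (j + 0.5) * d * v + w
```

The half-pitch offset places index (0, 0) at the center of the first lens rather than at its corner.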

Fig. 3. (a) The shape of the lens and the path of the ray; (b) the square-arranged lens distribution.

To obtain the optical relations between the incident and transmitted light rays on an arbitrary surface interfacing two refractive media, the transmitted wave vector t can be written in terms of the incident wave vector i and the unit normal vector n at the incident point of the refractive surface as

$${\boldsymbol {t}} = \frac{{{n_1}}}{{{n_2}}}{\boldsymbol {i}} - \left( {\frac{{{n_1}}}{{{n_2}}}({\boldsymbol {n}} \cdot {\boldsymbol {i}}) + \sqrt {1 - \frac{{n_1^2}}{{n_2^2}}\left( {1 - {{({\boldsymbol {n}} \cdot {\boldsymbol {i}})}^2}} \right)} } \right){\boldsymbol {n}},$$
where n1 and n2 are the refractive indices of the two media.
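Eq. (4) is the standard vector form of Snell's law. A minimal implementation, assuming the common convention that i points toward the surface and n away from it (the function name `refract` is illustrative):

```python
import numpy as np

def refract(i, n, n1, n2):
    # Eq. (4): transmitted direction at an interface between media
    # with indices n1 (incident side) and n2 (transmitted side).
    # Returns None when the square root becomes imaginary, i.e.
    # total internal reflection.
    i = np.asarray(i, float); i = i / np.linalg.norm(i)
    n = np.asarray(n, float); n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -np.dot(n, i)                 # cosine of incidence angle
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None
    return eta * i + (eta * cos_i - np.sqrt(k)) * n
```

At normal incidence the ray passes straight through; at 45° from air into glass (n2 = 1.5) the transverse component shrinks by the ratio n1/n2, as Snell's law requires.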

The directions of the incident and transmitted wave vectors are shown in Fig. 3(a). The intensity distributions of the refracted and reflected light are based on Schlick's approximation, a formula for approximating the contribution of the Fresnel factor in the specular reflection of light from a non-conducting interface between two media [20].

According to Schlick's model [20], the specular reflection coefficient R can be approximated by

$$R = {R_0} + (1 - {R_0}){(1 - \cos \theta )^5},$$
$${R_0} = {\left( {\frac{{{n_1} - {n_2}}}{{{n_1} + {n_2}}}} \right)^2},$$
where θ is the angle between the direction of the incident light and the normal. If intensity attenuation is ignored, the transmission coefficient is (1 − R).
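Eqs. (5) and (6) translate directly into code; for an air-glass interface (n1 = 1.0, n2 = 1.5) the normal-incidence reflectance R0 is 0.04 (the function name is illustrative):

```python
def schlick_reflectance(cos_theta, n1, n2):
    # Eqs. (5)-(6): Schlick's approximation to the Fresnel factor.
    # cos_theta is the cosine of the angle between the incident ray
    # and the normal; the transmitted fraction is 1 - R when
    # attenuation is ignored.
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5
```

Reflectance rises from R0 at normal incidence to 1 at grazing incidence, which is the qualitative Fresnel behavior the approximation captures.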

2.3 Diffuser model

The geometric model of the diffuser is also a rectangular panel. As shown in Fig. 4, when incident rays hit the diffuser surface, the intensity of the beams is evenly distributed over the range ω in the horizontal and vertical directions [21,22]. The intensity of the emitted light is continuously distributed in space, so Monte Carlo sampling is used: N rays are sampled from the exit beam, and the accuracy increases as N increases. The direction of the nth exit ray tn can be described by:

$${{\boldsymbol {t}}_n} = R({\boldsymbol {u}},rand(\omega ))R({\boldsymbol {v}},rand(\omega )){\boldsymbol {i}}$$
Here, u and v are unit vectors parallel to the coordinate axes, i is the incident wave vector, rand(ω) is a function that generates a random number between 0 and ω, and R(v, ω) is a rotation matrix that rotates a vector by ω degrees around the vector v, as shown in the following:
$$\textrm{R}({{\boldsymbol {v}}\textrm{, }\omega } )= \left[ {\begin{array}{ccc} {v_x^2 + ({1 - v_x^2} )\cos \omega }&{{v_x}{v_y}({1 - \cos \omega } )- {v_z}\sin \omega }&{{v_z}{v_x}({1 - \cos \omega } )+ {v_y}\sin \omega }\\ {{v_x}{v_y}({1 - \cos \omega } )+ {v_z}\sin \omega }&{v_y^2 + ({1 - v_y^2} )\cos \omega }&{{v_y}{v_z}({1 - \cos \omega } )- {v_x}\sin \omega }\\ {{v_z}{v_x}({1 - \cos \omega } )- {v_y}\sin \omega }&{{v_y}{v_z}({1 - \cos \omega } )+ {v_x}\sin \omega }&{v_z^2 + ({1 - v_z^2} )\cos \omega } \end{array}} \right]$$

Fig. 4. Diffuser with scattering effect.

The intensity of the nth exit ray is described as:

$${I_n} = \frac{I}{N},$$
where I is the intensity of the incident ray.
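The diffuser sampling of Eqs. (7)-(9) can be sketched as follows; `rotation_matrix` builds the axis-angle matrix of Eq. (8) in the equivalent Rodrigues form, and both function names are illustrative:

```python
import numpy as np

def rotation_matrix(v, ang):
    # Rodrigues form of Eq. (8): rotation by `ang` radians about
    # the unit axis v.
    v = np.asarray(v, float)
    c, s = np.cos(ang), np.sin(ang)
    vx, vy, vz = v
    K = np.array([[0, -vz, vy], [vz, 0, -vx], [-vy, vx, 0]])  # cross-product matrix
    return c * np.eye(3) + s * K + (1 - c) * np.outer(v, v)

def scatter(i, u, v, omega, n_rays, rng=None):
    # Eq. (7): each exit ray is the incident ray i rotated by random
    # angles in [0, omega] about u and v; by Eq. (9) each sampled
    # ray carries intensity I/N.
    rng = rng or np.random.default_rng()
    i = np.asarray(i, float)
    return np.array([rotation_matrix(u, rng.uniform(0, omega))
                     @ rotation_matrix(v, rng.uniform(0, omega)) @ i
                     for _ in range(n_rays)])
```

With ω = 0 the diffuser degenerates to a transparent panel: every sampled exit ray equals the incident ray.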

3. Simulation and experiment verification

3.1 Configuration of LFD

A prototype was fabricated to evaluate the feasibility of the proposed method. The parameters and values used for the simulation are given in Table 1, and the relative position of each component is shown in Fig. 5. The resolution of the simulation image is 3840×2160 pixels, which is equal to the number of sampling rays of the virtual camera array that has 43×24 lenses with 88×88 viewpoints. The PC hardware comprises an Intel Core i7-4790 CPU @ 3.6 GHz with 8 GB RAM and an NVIDIA GeForce GTX 980 (4 GB) graphics card. The simulation program is implemented with CUDA SDK 8.0 and OpenGL 4.0.

Fig. 5. The position of the models in the virtual world.

Table 1. List of parameters and the values used in the simulation

3.2 Simulation result

The effect of each component is simulated separately based on BRT, as shown in Fig. 6. The simulation result of the reconstructed image is shown in Fig. 6(a), and the texture of the LCD model, including the EIA of the 3D object "Monkey", is shown in Fig. 6(b). The effects of the lens array and the diffuser are tested using a checkerboard panel, as shown in Figs. 6(c) and 6(d). These simulation experiments produced satisfactory results.

Fig. 6. The simulated components of the LFD: (a) the reconstructed image of the LFD; (b) the EIA shown on the LCD; (c) checkerboard test of the lens array; (d) checkerboard test of the diffuser.

Both simulated and experimental reconstructed images are obtained with different 3D contents, as shown in Fig. 7, where it is obvious that the experimental and simulation results are consistent. Because of camera acquisition and assembly errors in the lens array used, the experimental result is blurred. The reconstructed images of "Monkey" are captured by a camera located 2 m away from the LFD in different directions, as shown in Fig. 8. The resolution of the pictures is 3840×2160. The interactive simulation program based on the proposed method provides real-time simulation images at 20 fps.

Fig. 7. The images shown with integral images, including Car, Building and Monkey.

Fig. 8. The reconstructed image in different directions. (see Visualization 1)

3.3 Display performance analysis

The structural similarity (SSIM) index predicts the perceived quality of digital television and cinematic pictures, as well as other kinds of digital images and videos [23], by measuring the similarity between two images. The SSIM index is a full-reference metric: the image quality is measured against an initial uncompressed, distortion-free image. The FOV and DOF are estimated by the SSIM between the simulation image and a photograph of the 3D object captured by a virtual camera. Figure 9(a) shows the SSIM values at different viewing angles. The theoretical FOV of the LFD is [−20°, 20°], according to the size of the lens and the distance between the lens and the LCD panel. The SSIM values for x, y ∈ [−20°, 20°] are higher than those in other directions, which coincides with the theoretical and experimental FOV values. Theoretical and experimental FOV values are obtained based on the method proposed in X. Yu's paper [24].
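For reference, a single-window simplification of the SSIM index can be written as below; the windowed, multiscale variant of [23] that the authors presumably used is more involved, and the function name is illustrative:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    # Single-window SSIM over whole images; C1 and C2 use the
    # customary K1 = 0.01, K2 = 0.03 constants. Identical images
    # score exactly 1.0.
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Comparing the simulated view against the reference photograph at each angle or depth then yields curves like those in Fig. 9.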

Fig. 9. SSIM indices of simulation result (a) in horizontal and vertical directions and (b) with different depth of 3D object used to generate EIA.

The DOF is defined as the distance between the rear and front marginal depth planes, located at the two positions where the spot sizes equal the pixel size on the central depth plane [1,25]. EIAs generated from 3D objects located at different depths are used as the textures of the LCD model. The SSIM values of simulation images at different depths are shown in Fig. 9(b). The theoretical DOF is 15 cm, according to the focal length of the lens and the distance between the lens and the LCD panel. The SSIM values are higher when the 3D object depth z ∈ [7 cm, 22 cm] (DOF = 15 cm), which coincides with the theoretical and experimental DOF values. Theoretical and experimental DOF values are obtained based on the method proposed in S. W. Yang's paper [25]. According to the experimental and simulation results, the ranges of FOV and DOF are determined: when the absolute value of the gradient is > 0.005 (degree−1) or the SSIM index is < 0.85, the image is out of the FOV; when the absolute value of the gradient is > 0.02 (cm−1) or the SSIM index is < 0.85, the image is out of the DOF.

3.4 The consuming time of simulation

As CSG modeling and BRT are used with a GPU, the proposed method has a high calculation speed. For ray tracing, reducing the number of faces and sampling rays in the scene effectively improves computing speed. On the one hand, the lens used in this paper is represented by only 3 faces based on CSG, while at least 3386 faces are required with BREP; modeling with CSG greatly reduces the number of faces and improves computational efficiency. On the other hand, compared with the traditional forward ray tracing method, BRT requires fewer sampling rays, which further improves computational efficiency.

The calculation speed of the proposed method is compared with that of commercial 3D modeling software in Table 2. The calculation speed of our method is at least 1000 times faster than that of the traditional physics-based renderer when the number of sampling rays is 3840×2160×100. The computation time is positively correlated with the resolution of the simulation image and the number of sampling rays of the diffuser. The result of our method is compared with that of the Cycles renderer based on BREP, as shown in Fig. 10. A large number of triangular faces are required with BREP: although 16380 triangles are used, some noise remains. Theoretically, if the lens model consisted of infinitely many triangles, the simulation result based on BREP would be consistent with that based on CSG; when the number of triangles is finite, the quality of our method is better than that of other renderers.

Fig. 10. The results of simulation based on CSG and BREP.

Table 2. List of the computation time of simulation

4. Conclusion

In summary, a high-speed LFD simulation method based on the BRT technique is presented. The display result images are obtained by simulation, and the FOV and DOF of the LFD are estimated, which are consistent with theoretical and experimental results. The computation time of the simulation is 1 s when the number of sampling rays is 3840×2160×100, and the computational speed of the method is at least 1000 times faster than that of the traditional physics-based renderer. The experimental results reveal that our simulation model is dramatically faster than the commercial renderer while producing the same calculation results, and that the simulation method has good potential to predict and optimize various design conditions of the LFD.

Funding

State Key Laboratory of Information Photonics and Optical Communications; Fundamental Research Funds for the Central Universities (2019PTB-018); National Key Research and Development Program of China Stem Cell and Translational Research (2017YFB1002900); National Natural Science Foundation of China (61575025).

References

1. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512 (2018). [CrossRef]  

2. H. E. Ives, “Optical Properties of a Lippmann Lenticulated Sheet,” J. Opt. Soc. Am. 21(3), 171–176 (1931). [CrossRef]  

3. J. H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef]  

4. X. Sang, X. Gao, X. Yu, S. Xing, Y. Li, and Y. Wu, “Interactive floating full-parallax digital three-dimensional light-field display based on wavefront recomposing,” Opt. Express 26(7), 8883–8889 (2018). [CrossRef]  

5. M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, “Three-Dimensional Optical Sensing and Visualization Using Integral Imaging,” Proc. IEEE 99(4), 556–575 (2011). [CrossRef]  

6. M.-C. Park, H.-D. Lee, and J.-Y. Son, “Interactive 3D simulator for autostereoscopic display systems,” in Proceedings of International Display Workshops (2011), pp. 1849–1851.

7. S.-M. Jung, J.-H. Jang, H.-Y. Kang, K.-J. Lee, J.-N. Kang, S.-C. Lee, K.-M. Lim, and S.-D. Yeo, “Optical modeling of a lenticular array for autostereoscopic displays,” Proc. SPIE 8648, 864805 (2013). [CrossRef]  

8. S.-M. Jung, S.-C. Lee, and K.-M. Lim, “Two-dimensional modeling of optical transmission on the surface of a lenticular array for autostereoscopic displays,” Curr. Appl. Phys. 13(7), 1339–1343 (2013). [CrossRef]  

9. S.-M. Jung and I.-B. Kang, “Three-dimensional modeling of light rays on the surface of a slanted lenticular array for autostereoscopic displays,” Appl. Opt. 52(23), 5591–5599 (2013). [CrossRef]  

10. S.-M. Jung and I.-B. Kang, “Numerical simulation of the optical characteristics of autostereoscopic displays that have an aspherical lens array with a slanted angle,” Appl. Opt. 53(5), 868–877 (2014). [CrossRef]  

11. D. C. Anderson and J. Cychosz, “An introduction to ray tracing,” Image Vis. Comput. 8(2), 169 (2003). [CrossRef]  

12. J. Arvo, “Backward ray tracing,” Dev. Ray Tracing (1986).

13. G. Pratx and L. Xing, “GPU computing in medical physics: a review,” Med. Phys. 38(5), 2685–2697 (2011). [CrossRef]  

14. S. Xing, X. Sang, X. Yu, C. Duo, B. Pang, X. Gao, S. Yang, Y. Guan, B. Yan, J. Yuan, and K. Wang, “High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction,” Opt. Express 25(1), 330 (2017). [CrossRef]  

15. B. Pang, X. Sang, S. Xing, X. Yu, D. Chen, B. Yan, K. Wang, C. Yu, B. Liu, C. Cui, Y. Guan, W. Xiang, and L. Ge, “High-efficient rendering of the multi-view image for the three-dimensional display based on the backward ray-tracing technique,” Opt. Commun. 405, 306–311 (2017). [CrossRef]  

16. R. T. Stevens, “Constructive Solid Geometry,” in Object-Oriented Graphics Programming in C++ (2014).

17. D. Terzopoulos, “The Computation of Visible-Surface Representations,” IEEE Trans. Pattern Anal. Mach. Intell. 10(4), 417–438 (1988). [CrossRef]  

18. J. J. Koenderink, A. J. Van Doorn, K. J. Dana, and S. Nayar, “Bidirectional reflection distribution function of thoroughly pitted surfaces,” Int. J. Comput. Vis. 31(2-3), 129–144 (1999). [CrossRef]  

19. M. H. Weik and M. H. Weik, “Lambert’s cosine law,” in Computer Science and Communications Dictionary (2006).

20. C. Schlick, “An Inexpensive BRDF Model for Physically-based Rendering,” Comput. Graph. Forum 13(3), 233–246 (1994). [CrossRef]  

21. X. Sang, F. C. Fan, C. C. Jiang, S. Choi, W. Dou, C. Yu, and D. Xu, “Demonstration of a large-size real-time full-color three-dimensional display,” Opt. Lett. 34(24), 3803 (2009). [CrossRef]  

22. S. Li, H. Li, Z. Zheng, Y. Peng, S. Wang, and X. Liu, “Full-parallax three-dimensional display using new directional diffuser,” Chinese Opt. Lett. 9(8), 081202 (2011). [CrossRef]  

23. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in (2004).

24. X. Yu, X. Sang, X. Gao, Z. Chen, D. Chen, W. Duan, B. Yan, C. Yu, and D. Xu, “Large viewing angle three-dimensional display with smooth motion parallax and accurate depth cues,” Opt. Express 23(20), 25950 (2015). [CrossRef]  

25. S. Yang, X. Sang, X. Gao, X. Yu, B. Yan, J. Yuan, and K. Wang, “Influences of the pickup process on the depth of field of integral imaging display,” Opt. Commun. 386, 22–26 (2017). [CrossRef]  

Supplementary Material (1)

Visualization 1: The reconstructed image in different directions.
