
Perspective clipping and fast rendering of light field images for holographic stereograms using RGBD data

Open Access

Abstract

The production of holographic stereograms (HSs) requires a huge amount of light field data, and efficiently clipping and rendering these image data remains a challenge in the field. This work focuses on a perspective clipping and fast rendering algorithm for light field images that uses RGBD data without explicit 3D reconstruction. The RGBD data are expanded to RGBDθ data by introducing a light cone for each point, which adds a new degree of freedom for light field image rendering. Using the light cones and perspective coherence, the visibility of 3D image points can be clipped programmatically. Optical imaging effects, including mirror imaging and half mirror imaging of 3D images, can also be rendered with the help of the light cones during the light field rendering process. Perspective coherence is also used to accelerate the rendering, which is on average 168% faster than the traditional DIBR algorithm. A homemade holographic printing system was developed to make HSs from the rendered light field images. The vivid 3D effects of the HSs validate the effectiveness of the proposed method, which can also be used in holographic dynamic 3D display, augmented reality, virtual reality, and other fields.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Efficiently recording colorful 3D information and generating realistic 3D images is a long-pursued goal [1–3]. Holographic stereogram (HS) technology combines stereoscopic technology with holographic technology to realize the rapid production of large-format realistic 3D holograms while dramatically reducing the amount of holographic information [4,5]. HSs are widely used in industrial design, medical navigation, film and television entertainment, anti-counterfeiting security, and other fields [6,7]. The fabrication of a HS consists of two parts: light field image acquisition and holographic printing [8,9]. The acquisition of light field images is the prerequisite for high-quality holographic printing [10–12]. High-quality HSs require a large amount of light field image data. For example, horizontal-parallax holograms usually require hundreds of images, while full-parallax holograms require at least the square of that number. Flexibly and efficiently acquiring high-quality light field image data remains a challenge in the field. There are two major types of light field image acquisition techniques: optical acquisition and digital rendering. The digital rendering method uses computer technology to generate light field images and, combined with computer graphics techniques, can render special artistic effects flexibly. Digital rendering falls into two major categories: model-based rendering (MBR) and image-based rendering (IBR).

The MBR method uses models with 3D geometry, texture, and lighting information for rendering, which can produce high-quality light field images, but its rendering efficiency needs to be improved [13–15]. To improve the efficiency, the multi-view rendering (MVR) algorithm uses the perspective coherence between views to accelerate the rendering process [13]. The parallel multi-view polygon rasterization algorithm combines the advantages of parallel rasterization and the MVR algorithm to further improve the rendering speed [16]. However, such algorithms require models with accurate 3D spatial information, which not only need complex data structures but also require large memory space and high modeling costs.

The IBR method renders light field images by interpolating given 2D views of the models, which reduces the rendering complexity, improves the rendering efficiency, and has rich image sources. However, this method degrades the rendering quality of the light field images because the spatial information of the given 2D views is incomplete. The depth image-based rendering (DIBR) method uses both depth and texture information to render the light field images [17,18], which compensates for the missing spatial information to a certain extent and improves the rendering accuracy. However, the spatial information of the rendering target is still incomplete, and images rendered by both single-reference-view and multi-reference-view DIBR techniques suffer from holes, resampling, and overlapping artifacts. Improved DIBR algorithms can alleviate these problems [19,20], but because the geometric topology information of the 3D images is missing, the existing DIBR algorithms still face the following challenges: a) it is difficult to render correct occlusion and lighting; b) the rendering efficiency is limited by redundant calculations between the multi-view images; c) the DIBR algorithm cannot effectively clip the given perspective view information.

To address these problems, a fast rendering algorithm for light field images with a perspective clipping function is proposed for HS applications. The proposed algorithm first introduces a light cone for each image point and uses it to find the correct occlusion of 3D image points. It then uses perspective coherence to accelerate the rendering of the light field images. At the same time, the light cones are used to control the visibility of 3D image points by editing the energy distribution within the cones. Perspective coherence and light cones are combined to clip the perspective information of the light field effectively. A homemade holographic printing system was developed and used to make HSs from the rendered light field images. Vivid 3D images with mirror imaging and half mirror imaging effects were reconstructed in the holograms, which verifies the effectiveness of the algorithm.

2. Principle of virtual viewpoint rendering algorithm

The proposed algorithm consists of three parts. The first part defines the light cones and uses them to find the occlusion relationships of 3D image points. The second part uses the light cones to control and clip the visibility and perspective information of the 3D images. The third part uses perspective coherence to construct a fast rendering method for light field images based on RGBD data. The effectiveness and advantages of the algorithm are demonstrated by comparison with the DIBR algorithm.

2.1 Light cone and the occlusion relationship of 3D points

Figure 1(a) is a schematic diagram of the light field image recording process. The camera array is located in the XY plane, and the 3D images defined by the RGBD data are located at a certain distance from the camera array in the Z direction. The RGBD data of the four corner viewpoint cameras (defined as reference cameras) are given, while the image data of the other cameras in the array need to be solved. The RGBD data of the four reference cameras record the 3D coordinates and texture information of the image points from their respective viewpoints. The data structure is simple, but the occlusion relationships of the image points are lost. This missing information introduces visibility errors for some image points during the rendering of the light field images. To address this issue, a light cone is defined for each image point and used to find the occlusion relationships of the 3D points. The light cones can also be used to define the visible range of 3D points as needed.

Fig. 1. Schematic diagram of the light field image recording process and the light cones.

As shown in Fig. 1(b), the vertex of the light cone is located at the image point, and the apex angle of the light cone is θ. The initial value of θ is defined as the viewing angle of the HS. The image point emits light rays within the range bounded by the light cone, and the projection area formed by these rays on the camera plane is the visible range of the image point. If the light cone of an image point contains other models, the light cone will generate geometric shadows of those models on the camera plane. For example, the blue light cone in Fig. 1(b) contains the shadow of an apple, and the cameras inside this shadow cannot see the image point. Thus, the light cone can be used to define the occlusion relationship for each 3D image point. In practice, the 3D images can be sliced perpendicular to the Z direction according to depth. The light cones of the image points on each slice are shaded by the models on all the slices in front of it, which generates the shadows of those models on the camera plane. Then, we can find the correct projection area.
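
As a geometric illustration of this step, the sketch below computes the footprint of a point's light cone on the camera plane and the camera-plane position shadowed by a nearer occluder point. It is a minimal sketch under the same arrangement as Fig. 1, assuming the camera array lies in the plane z = 0; all function and variable names are illustrative, not from the paper.

```python
import numpy as np

def cone_footprint_mask(point, theta, cam_x, cam_y):
    """Cameras that lie inside the projection of a point's light cone.

    point        : (x, y, z) of the image point; z is its distance to the camera plane
    theta        : apex angle of the light cone, in radians
    cam_x, cam_y : arrays of camera positions in the XY plane
    Returns a boolean mask: True where a camera can see the point (occluders ignored).
    """
    x, y, z = point
    radius = z * np.tan(theta / 2.0)          # radius of the cone's footprint
    return (cam_x - x) ** 2 + (cam_y - y) ** 2 <= radius ** 2

def shadow_position(p, q):
    """Camera-plane point shadowed for image point p by a nearer occluder point q.

    p and q are (x, y, z) with 0 < z_q < z_p; the ray from p through q meets the
    camera plane (z = 0) at the returned (x, y). Cameras around this position,
    within the occluder's footprint, cannot see p.
    """
    px, py, pz = p
    qx, qy, qz = q
    t = pz / (pz - qz)                        # parameter where the ray reaches z = 0
    return px + t * (qx - px), py + t * (qy - py)
```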

In fact, by defining the light cones, the four-dimensional RGBD data is extended into five-dimensional RGBDθ data, which gives a new degree of freedom for the rendering of light field images. By actively adjusting the parameter θ, many special optical effects that cannot be achieved by previous rendering methods can be rendered. For example, the energy distribution in the light cones can be adjusted to actively hide some images or clip the visibility of certain images during the rendering of the light field images. Light cones can also be used to generate special optical effects, such as mirror imaging of 3D images, without 3D modeling.
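
For illustration only, the extended per-point record might be organized as below. This is a hypothetical sketch, not the authors' data structure; the θ field and the optional energy distribution are the handles used for the clipping described in Section 2.2.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class RGBDThetaPoint:
    """One RGBDθ sample: texture, depth, and a per-point light cone."""
    rgb: tuple        # (r, g, b) texture of the image point
    depth: float      # distance z from the camera plane
    theta: float      # apex angle of the light cone, in radians
    # Per-region energy weights inside the cone (1 = visible, 0 = hidden);
    # editing this distribution hides the point in chosen viewing regions.
    energy: np.ndarray = field(default_factory=lambda: np.ones(1))
```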

2.2 Effective control and clipping of the visibility of 3D images using the light cone

It is often necessary to clip the 3D images after the RGBD images are captured. If the 3D image clipping can be realized during the rendering of the light field images without 3D modeling, the rendering efficiency will be improved effectively. The light cones provide an effective clipping tool. By actively controlling the projection area of the light cones on the camera plane, we can programmatically clip the viewing range of each image point according to the design requirements of the artistic effect. As shown in Fig. 2, the visibility of the apple from different viewpoints can be controlled by adjusting the energy distribution of light within the light cones. For example, the apple is invisible in areas 2 and 4 because there is no light projection, and visible in areas 1, 3, and 5 because of the light projection.
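
A minimal sketch of this clipping step follows, assuming (as in Fig. 2) that the observation regions 1–5 are separated along the camera-plane x axis; all names are illustrative.

```python
import numpy as np

def clip_visibility(visible_mask, cam_x, region_edges, visible_regions):
    """Keep a point visible only in selected observation regions of the camera plane.

    visible_mask    : boolean mask from the light cone footprint (True = camera sees point)
    cam_x           : x coordinates of the cameras in the array
    region_edges    : increasing x boundaries splitting the plane into regions 1..N
    visible_regions : region indices (1-based) in which the point keeps its light
    """
    region_idx = np.digitize(cam_x, region_edges) + 1      # region of each camera
    keep = np.isin(region_idx, list(visible_regions))
    return visible_mask & keep

# e.g. keep the apple visible only in regions 1, 3, and 5 of Fig. 2:
# mask = clip_visibility(mask, cam_x, edges, {1, 3, 5})
```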

Fig. 2. Light cone with visibility definition.

Further, optical imaging effects of the 3D images, such as mirror imaging and half mirror imaging, can also be rendered with the help of the light cones. The mirror and half mirror change the perspective of the 3D images. Here, we use the (half) mirror imaging effects as examples to illustrate the perspective clipping function of the proposed rendering method without 3D reconstruction. For HS applications, these imaging effects create special artistic effects. As shown in Fig. 3, a virtual mirror is added at the required position in the 3D models. The mirror redefines the viewing range of the 3D images by reflecting the light cones according to the law of reflection, and the light field images are then rendered within the new viewing range, producing the mirror imaging effect of the 3D images. By controlling the reflectance and transmittance of the mirror, a half mirror imaging effect of the 3D images can also be rendered.

Fig. 3. Rendering process of light field images with mirror imaging effect of 3D models.

For a specific point P of the models in Fig. 3, the yellow lines are the rays in the light cone emitted from point P. The red line on the left side of the mirror represents the light of point P transmitted through the mirror (T-light). The green line on the right side of the mirror represents the light of point P reflected by the mirror (R-light); the reflected light generates a virtual image point P'. The transmitted light intensity is the product of the total light intensity and the transmittance of the mirror, and the visible range of the transmitted light of point P is the same as that without the mirror. The reflected light intensity is the product of the total light intensity and the mirror's reflectivity, and the visible range of the reflected light is the projection area of the light cone reflected by the mirror onto the camera plane. For example, in the area indicated by WN, both the virtual image point P' and the original point P can be seen at the same time. Thus, by using the light cones, the proposed algorithm can render (half) mirror imaging effects of the models for light field images.
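
As a sketch of the geometry and energy bookkeeping described above, the code below reflects a point about a plane mirror and splits the cone energy into R-light and T-light. A lossless mirror with transmittance equal to 1 − reflectivity is assumed, and all names are illustrative.

```python
import numpy as np

def reflect_point(p, mirror_point, mirror_normal):
    """Virtual image P' of point p in a plane mirror given by a point and a normal."""
    p = np.asarray(p, dtype=float)
    m0 = np.asarray(mirror_point, dtype=float)
    n = np.asarray(mirror_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return p - 2.0 * np.dot(p - m0, n) * n

def split_cone_energy(total_intensity, reflectivity):
    """Reflected (R-light) and transmitted (T-light) intensities of a point's cone."""
    reflected = total_intensity * reflectivity             # seen as the virtual image P'
    transmitted = total_intensity * (1.0 - reflectivity)   # seen as the original point P
    return reflected, transmitted

# Example: a half mirror in the plane x = 10 with reflectivity 0.5
P_virtual = reflect_point((0.0, 0.0, 50.0), mirror_point=(10.0, 0.0, 0.0),
                          mirror_normal=(1.0, 0.0, 0.0))
I_reflected, I_transmitted = split_cone_energy(1.0, reflectivity=0.5)
```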

2.3 Fast rendering algorithm for light field images using perspective coherence

To speed up the rendering, we propose a fast light field image rendering algorithm based on perspective coherence. Perspective coherence describes the similarity among images of a static model observed from different viewpoints. This similarity stems from the neat mapping relation between the camera position and the geometry and texture of the image. As shown in Fig. 4(a), the global coordinate system follows the coordinate system of Fig. 1: the camera plane TXTY coincides with the XY plane, and the film plane of the camera lies in the plane Z = f. The local coordinate system (VXO'VY) is set on the film plane of each camera. As shown in Fig. 4(b), the epipolar coordinate system takes the TX axis of the camera plane as the horizontal axis and the VX axis of the camera's film plane as the vertical axis. The light field images of two models (the Little Raccoon and its apple) with mutual occlusion are recorded. For a 3D point P at a distance z from the camera plane, the camera array records a series of image points Vnx, shown as the red points in Fig. 4(a). The position of the image point on each camera's film varies with the position of the camera's viewpoint. This variation law is more concise in the epipolar coordinate system, as shown in Fig. 4(b): all image points of the 3D point recorded by the linear camera array lie on a straight line. The equation of the line is:

$$V = v - T \cdot EPIslope$$
where the slope of the line is $EPIslope = \frac{f}{z(v,t)}$, f is the focal length, and z(v, t) is the depth of the point. The intercept of the line is v, which is determined by the relative coordinates between the rendered 3D point and the camera array.
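
For reference, Eq. (1) can be evaluated directly; the small helper below (with illustrative names) gives the film-plane coordinate V of one image point for any camera position T on the array.

```python
import numpy as np

def epi_line(T, v, z, f):
    """Eq. (1): V = v - T * EPIslope, with EPIslope = f / z.

    T : camera position(s) along the TX axis (scalar or array)
    v : intercept fixed by the point's position relative to the camera array
    z : depth of the 3D point from the camera plane
    f : focal length of the cameras
    """
    return v - np.asarray(T) * (f / z)
```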

Fig. 4. The coordinate transformation of image points recorded by the camera array in different coordinate systems. (a) In global coordinate system, (b) In epipolar coordinate system.

According to Eq. (1), rendering the light field images of a 3D point in epipolar coordinates only requires solving a linear function with slope EPIslope. To solve the linear function accurately, the domain of its variable needs to be defined. This domain corresponds to the visible range of the image point, which is determined by the light cone's projection on the camera plane. In the one-dimensional case shown in Fig. 4(b), the domain is $[t_1, t_x]$. Using the linear function and the reference camera's RGBD data, the coordinates and texture information of the image point captured by an unknown camera can be interpolated; such interpolation is generally faster than vector operations. Based on this principle, a fast rendering algorithm for light field images using perspective coherence is designed, and its flow chart is shown in Fig. 5. First, the parameters of the rendering system are set, including the camera parameters, the number of reference cameras, the number of views, the view resolution, and the gaze center position. Then the algorithm enters the render loop. The RGBD data of one reference camera is read, including the texture image and depth map. The depth map is then sliced in the direction perpendicular to the Z axis. Next, the light cones are used to define the visibility of the image points and to perform perspective clipping as needed. After that, the images are rendered in EPI coordinates according to Eq. (1) and converted to virtual viewpoint images, completing one rendering cycle. Another set of virtual viewpoint images is then rendered using the data from the next reference camera, and this cycle repeats until all virtual viewpoint images of all reference cameras are rendered. Finally, the virtual viewpoint images rendered by the different reference cameras are fused to generate the high-quality light field images, which are output.
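
The sketch below illustrates the EPI rendering step for a single image row: each reference-camera pixel traces one straight line (Eq. (1)) through the epipolar plane, and sampling that line at every camera position fills the corresponding row of all virtual views at once, which is where the saving over per-view warping comes from. It assumes the reference camera sits at T = 0, that camera positions are expressed in pixel-shift units, and uses a simple z-buffer for occlusion; it is an illustrative simplification, not the authors' implementation.

```python
import numpy as np

def render_epi_row(rgb_row, depth_row, t_array, f):
    """Render one image row of all virtual viewpoints from one reference row.

    rgb_row   : (W, 3) texture of the reference row
    depth_row : (W,) depth of each reference pixel
    t_array   : (N,) camera positions (reference camera at T = 0, pixel units)
    f         : focal length
    Returns an (N, W, 3) array: this row in each of the N virtual views.
    """
    w, n = rgb_row.shape[0], t_array.shape[0]
    views = np.zeros((n, w, 3), dtype=rgb_row.dtype)
    zbuf = np.full((n, w), np.inf)
    for x in range(w):                                      # one EPI line per pixel
        z = depth_row[x]
        cols = np.round(x - t_array * f / z).astype(int)    # V = v - T * f/z
        inside = (cols >= 0) & (cols < w)
        ok = inside & (z < zbuf[np.arange(n), np.clip(cols, 0, w - 1)])
        rows = np.where(ok)[0]
        views[rows, cols[ok]] = rgb_row[x]
        zbuf[rows, cols[ok]] = z                            # nearest point wins
    return views
```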

Fig. 5. Flow chart of the proposed rendering algorithm.

3. Rendering experiment

3.1 Image quality assessment

To quantitatively assess the quality of the images rendered by the algorithm, we use the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics [21]. The images rendered by the commercial software Blender were used as the benchmark, and the evaluation was performed by comparing the results rendered by the proposed algorithm with this benchmark. The camera arrays used in the tests have mutually parallel optical axes, and the position relationship between the cameras and the 3D images is shown in Fig. 1. The total number of viewpoints rendered is 1000, and the resolution of a single viewpoint image is 664×664. Figure 6(a) shows the virtual viewpoint image rendered by the proposed algorithm, and Fig. 6(b) is the rendering result of the same model by Blender under the same viewpoint and camera parameters. Comparing the two images, it is difficult for human eyes to distinguish the subtle differences between them. The SSIM of the two images is 0.994, which indicates that the quality of the rendered image is close to that of the commercial software and proves the effectiveness of the algorithm.
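
For reproducibility, the per-view comparison can be done with standard metrics; the sketch below uses scikit-image's single-scale SSIM as a stand-in (the paper cites the multiscale variant [21]) and assumes both images are uint8 RGB arrays of the same size.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def assess_view(rendered, benchmark):
    """PSNR and SSIM of one rendered viewpoint image against the Blender benchmark."""
    psnr = peak_signal_noise_ratio(benchmark, rendered, data_range=255)
    ssim = structural_similarity(benchmark, rendered, channel_axis=-1, data_range=255)
    return psnr, ssim
```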

Fig. 6. The rendered virtual viewpoint image. (a) Rendered by proposed algorithm, (b) Rendered by Blender.

To optimize the image quality, the effects of different light field image fusion methods are analyzed. The distance-first weight fusion (DFWF) algorithm and the luminance-first fusion (LFF) algorithm are both tested [22]. The DFWF algorithm uses the distance between the virtual viewpoint and the reference camera as the weight for image fusion: the closer the distance, the greater the weight. Unlike other algorithms, we quantize the weights into three levels of 1, 0.5, and 0, corresponding to the nearest-neighbor, second-nearest-neighbor, and third-nearest-neighbor distances, respectively. To determine the weights, the camera plane is divided into four quadrants in a Cartesian coordinate system whose origin is the center of symmetry. First, the quadrant where the virtual viewpoint is located is determined. Then, the algorithm checks whether the pixel value of the virtual viewpoint image rendered from the reference camera in that quadrant is 0. If it is not 0, its weight is set to 1 and the weights of the other reference cameras are 0; if it is 0, its weight is 0. Next, the algorithm checks the pixel values of the images rendered from the reference cameras in the two quadrants adjacent to the virtual viewpoint's quadrant. If neither is 0, the weights of these two reference cameras are both set to 0.5 and the weights of the other reference cameras are 0. If one of them is 0, that reference camera has a weight of 0, the other has a weight of 1, and the remaining reference cameras have weights of 0. If both are 0, both weights are 0. If the pixel values of the images rendered from the first three reference cameras are all 0, the weight of the reference camera in the quadrant opposite the virtual viewpoint's quadrant is set to 1; otherwise its weight is 0. The DFWF algorithm exploits the following rule: the closer a virtual viewpoint is to a reference camera, the stronger the correlation between its image and the reference camera's image, and vice versa. With quantized weights, the DFWF algorithm not only reduces computation but also speeds up rendering and obtains high-quality light field images.
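
A sketch of the quantized weight rule for a single pixel is given below. The quadrant numbering and the data layout are assumptions for illustration: `ref_in_quadrant` maps each quadrant to the corner reference camera lying in it, and `pixel_value` holds each camera's warped pixel at the position being fused, with 0 meaning a hole.

```python
import numpy as np

ADJACENT = {1: (2, 4), 2: (1, 3), 3: (2, 4), 4: (1, 3)}
OPPOSITE = {1: 3, 2: 4, 3: 1, 4: 2}

def quadrant(tx, ty):
    """Quadrant (1..4) of a viewpoint position, origin at the array centre."""
    if tx >= 0 and ty >= 0:
        return 1
    if tx < 0 and ty >= 0:
        return 2
    if tx < 0 and ty < 0:
        return 3
    return 4

def dfwf_weights(virt_pos, ref_in_quadrant, pixel_value):
    """Quantized distance-first weights (1 / 0.5 / 0) for one pixel.

    virt_pos        : (tx, ty) of the virtual viewpoint
    ref_in_quadrant : dict quadrant -> reference camera id
    pixel_value     : dict camera id -> that camera's warped pixel here (0 = hole)
    """
    has_data = lambda cam: np.any(np.asarray(pixel_value[cam]) != 0)
    weights = {cam: 0.0 for cam in pixel_value}
    q = quadrant(*virt_pos)

    nearest = ref_in_quadrant[q]
    if has_data(nearest):                          # nearest corner camera covers this pixel
        weights[nearest] = 1.0
        return weights

    a, b = (ref_in_quadrant[k] for k in ADJACENT[q])
    if has_data(a) and has_data(b):                # second-nearest corners share the weight
        weights[a] = weights[b] = 0.5
    elif has_data(a) or has_data(b):
        weights[a if has_data(a) else b] = 1.0
    elif has_data(ref_in_quadrant[OPPOSITE[q]]):   # fall back to the farthest corner
        weights[ref_in_quadrant[OPPOSITE[q]]] = 1.0
    return weights
```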

The LFF algorithm takes the brightest pixel among the virtual viewpoint images rendered from all reference cameras as the pixel value of the fused image [22]. It exploits the rule that the greater the pixel value, the sharper the image; by choosing the greatest value for each pixel, it selects the sharp regions from each input image and produces high-definition images. The advantages of this algorithm are its fast computation and its good image resolution and hole-filling performance. The disadvantage is that the image correlation between the viewpoints is not considered, so some errors and noise may be introduced in the fused images.
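
A sketch of this brightest-pixel rule follows; the per-pixel mean over RGB used as the brightness measure is an assumption for illustration.

```python
import numpy as np

def lff_fuse(views):
    """Luminance-first fusion: per pixel, keep the value from the brightest source.

    views : (K, H, W, 3) array, the same virtual viewpoint rendered from each
            of the K reference cameras.
    """
    brightness = views.mean(axis=-1)        # (K, H, W) brightness proxy per pixel
    best = np.argmax(brightness, axis=0)    # index of the brightest source per pixel
    h, w = best.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    return views[best, rows, cols]          # gather the chosen pixels
```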

For comparison, light field images containing 100 viewpoints, each with a resolution of 664×664, were rendered using the two fusion methods, and the PSNR and SSIM of all the views were evaluated. Figure 7 shows the PSNR and SSIM values of the light field images obtained with the different fusion methods. The results show that the DFWF algorithm outperforms the LFF algorithm on both SSIM and PSNR. Both algorithms share the same trend, with higher scores for virtual viewpoint images close to the reference cameras. Because the parallax effect increases as the virtual viewpoint moves away from the reference camera, the difference between its view and the reference camera's view increases; thus its PSNR and SSIM decrease, and a minimum appears near the central viewpoint.

Fig. 7. PSNR and SSIM of the rendered light field images. (a) PSNR, (b) SSIM.

To verify that the proposed algorithm can control and clip the visibility of 3D image points, light field images of the 3D models shown in Fig. 2 were rendered. Figure 8 shows the perspective images with the visibility clipping effect rendered by the proposed algorithm. Figures 8(a), (c), and (e) are the images observed in regions 1, 3, and 5, respectively, and Figs. 8(b) and (d) are the images observed in regions 2 and 4, respectively. The apple is visible in observation regions 1, 3, and 5 and invisible in regions 2 and 4. The results show that, by using the light cones, the visibility of 3D images can be clipped and controlled during the rendering of light field images without 3D reconstruction.

Fig. 8. The rendered perspective images with visibility clipping effect using the proposed algorithm.

Fig. 9. The rendered images of 3D models with mirror imaging effect using proposed algorithm. (a) In observation area 1. (b) In observation area 2.

Fig. 10. The rendered images of 3D models with half mirror imaging effect using proposed algorithm. (a) In observation area 1. (b) In observation area 2.

To demonstrate that the mirror imaging and half mirror imaging effects can be realized by the proposed algorithm using the light cones, light field images of the 3D models with the mirror shown in Fig. 3 were rendered. Figure 9 shows the perspective images of the 3D models with the mirror imaging effect. Figure 9(a) is the image taken at observation area 1 in Fig. 3, where the light is blocked by the mirror and the image of the models cannot be seen. Figure 9(b) shows the image taken at observation area 2, where both the virtual image formed by the mirror and the original image can be seen simultaneously. Figure 10 shows the perspective images of the 3D models with the half mirror imaging effect, with the other parameters the same as in Fig. 9. Figure 10(a) shows the image taken at observation area 1 in Fig. 3, where some of the light is reflected by the half mirror, so the models appear dimmer. Figure 10(b) shows the image taken at observation area 2, where both the virtual image formed by the half mirror and the original image can be seen simultaneously. The brightness of the virtual image is weaker than that of the original models because some of the light passes through the half mirror.

3.2 Rendering speed assessment

To verify the speed advantage of the rendering algorithm, the proposed algorithm is compared with the traditional DIBR algorithm. The rendering experiments run on a computer equipped with an Intel Core i7-11800H CPU (2.3 GHz) and two 8 GB DDR4-3200 memory modules. The data source used in the experiments is the RGBD data of the four reference cameras, and the models to be rendered are the Little Raccoon and its apple, the same as in Fig. 1. The rendering result is the light field composed of M×N virtual viewpoint images, each with a resolution of X×Y.

First, we fix the image resolution to test how the rendering time of the two algorithms varies with the number of viewpoints. The results are shown in Fig. 11, where the image resolution is 166×166. Thanks to the use of perspective coherence, the proposed algorithm takes less time than the DIBR algorithm for all viewpoint numbers. As the number of viewpoints increases, the time difference between the two algorithms becomes larger; that is, the advantage of the proposed algorithm is more obvious when rendering light field images with dense viewpoints.
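
A timing harness in the spirit of these tests might look as follows; this is a sketch, and `render_fn` is a placeholder for either renderer, not an interface from the paper.

```python
import time

def time_vs_viewpoints(render_fn, viewpoint_counts, resolution=(166, 166)):
    """Measure total rendering time for several viewpoint counts at a fixed resolution."""
    results = []
    for n in viewpoint_counts:
        start = time.perf_counter()
        render_fn(num_views=n, resolution=resolution)   # placeholder renderer call
        elapsed = time.perf_counter() - start
        results.append((n, elapsed, n / elapsed))       # also record views per second
    return results
```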

Fig. 11. The rendering time vs. number of viewpoints of the proposed algorithm and DIBR algorithm.

Then, we fix the number of viewpoints to test how the rendering speed of the two algorithms varies with the image resolution. The rendering speed is defined as the number of viewpoints rendered per second. The results are shown in Fig. 12, where the number of viewpoints is 100. The proposed algorithm renders faster than the DIBR algorithm at all image resolutions. For both algorithms, the lower the resolution, the faster the rendering speed. As the image resolution increases, the required computation increases and the rendering speed of both algorithms decreases, but the proposed algorithm remains faster than the DIBR algorithm.

Fig. 12. The rendering speed vs. image resolution of the proposed algorithm and DIBR algorithm.

4. Holographic printing and result analysis

To demonstrate the correctness of the rendered light field images, the light field images were input into a homemade holographic printing system to print the HSs. The schematic diagram of the optical path of the holographic printing system is shown in Fig. 13(a). The laser emits a beam, which passes through a shutter into a beam expander and collimator to generate a collimated beam. The collimated beam passes through a beam splitter (BS), which splits it into a reference beam and an object beam. The object beam illuminates the spatial light modulator (SLM) after reflection from the second BS, and the rendered images are loaded onto the SLM in turn. The light reflected from the SLM passes back through the BS and is imaged, filtered, and focused onto the hologram by a lens group. The reference beam is reflected by a mirror and passes through a half-wave plate and a polarizer, which modulate its polarization and intensity to match the object light. The reference beam is then reflected by a second mirror, filtered by an aperture, and relayed by a lens pair onto the hologram from the direction opposite to the object beam. The object and reference beams interfere on the hologram to generate the hogels. The hologram is mounted on a 2D translation stage; by synchronizing the refresh of the SLM with the 2D translation of the stage, the hogel array is recorded sequentially to generate a HS. In the printing system, the laser wavelength is 532 nm, the F-number and focal length of the lens group are 0.86 and 16 mm, respectively, and the SLM is a digital micromirror device with a resolution of 1280 × 1024. Figure 13(b) is a photo of the holographic printing system.

Fig. 13. Holographic printing system. (a) Schematic diagram of the holographic printing optical path. (b) Photo of the holographic printing system.

Figure 14 shows the holographic display results of the hologram of the Little Raccoon and its apple. Figures 14(a)-(e) are the horizontal perspective views from 30° left to 30° right viewing angles, respectively, excerpted from Visualization 1. They show that high-quality holograms with a large viewing angle and vivid 3D effects can be made using the proposed method.

Fig. 14. Holographic display results of the HS without clipping.

Figure 15 shows the holographic display results of the HS printed using the light field images shown in Fig. 8. When rendering the light field images, the apple is clipped at certain viewpoints. Figures 15(a)-(e) are horizontal perspective views of the HS at the same viewing angles as Figs. 14(a)-(e), respectively; these images are frames excerpted from Visualization 2. The apple disappears in Figs. 15(b) and (d) and appears in Figs. 15(a), (c), and (e), which shows that the proposed method can effectively clip the visibility of the light field images.

Fig. 15. Holographic display results of the HS with visibility clipping.

Figure 16 shows the holographic display results of the HS printed using the light field images shown in Fig. 9, where a virtual mirror is inserted into the 3D models as shown in Fig. 3. The light field images with mirror imaging effects are rendered and used to make the HS. Figure 16(a) is the image captured on the left side of the mirror, where the light is blocked by the mirror; the images are therefore invisible, as in viewing area 1 of Fig. 3. Figure 16(b) is a frame excerpted from Visualization 3, captured on the right side of the mirror; both the original and virtual images can be seen, as in viewing area 2 of Fig. 3.

Fig. 16. Holographic display results of the HS with mirror imaging effect.

Figure 17 shows the holographic display results of the HS printed using the light field images shown in Fig. 10. Figures 17(a) and (b) are two frames excerpted from Visualization 4. Different from Fig. 16, a virtual half mirror is inserted into the 3D models. Figure 17(a) is the image captured from the same viewing angle as Fig. 16(a); the half mirror reflects some of the light, so the images formed by the transmitted light are somewhat dim, as in viewing area 1 of Fig. 3. Figure 17(b) is the image captured from the same viewing angle as Fig. 16(b); both the original and virtual images can be seen, as in viewing area 2 of Fig. 3.

Fig. 17. Holographic display results of the HS with half mirror imaging effect.

There are more holes in the images with mirror or half mirror imaging effects, as shown in Figs. 16 and 17. This is ascribed to the incomplete geometric topology information of the 3D models: the reference cameras provide only limited information about the models. For example, the texture information of the model surfaces facing the mirror is missing. The new perspective views with mirror imaging effects are calculated from the images captured by the reference cameras, so more holes and missing information appear when these new views are reconstructed. Image fusion from more reference cameras can improve the image quality, but the unrecorded information still produces holes. Advanced information acquisition techniques should be explored to further improve the image quality in future work.

The imaging results in Figs. 16 and 17 indicate that optical imaging effects of 3D images can be rendered with the help of light cones during the light field rendering process, without the use of 3D modeling software. The proposed method provides more degrees of freedom for the rendering of light field images. The optical imaging effects are not limited to mirror or half mirror imaging; lens or prism imaging effects can also be rendered. It is a useful and interesting method.

5. Conclusion

For holographic applications, we propose and demonstrate a fast rendering algorithm for light field images with a perspective clipping function based on RGBD data. A light cone is introduced at each point of the RGBD data to increase the modulation freedom. Correct occlusion, visibility and perspective clipping, and optical imaging effects of 3D images can be rendered with the help of the light cones. Perspective coherence is used to accelerate the rendering of light field images, which is on average 168% faster than the traditional DIBR algorithm. The PSNR and SSIM values show that higher-quality light field images are rendered than with traditional algorithms. Finally, the light field images were holographically encoded using a homemade holographic printing system to make the HSs. Vivid 3D images with controllable visibility and optical imaging effects were reconstructed in the holograms. The experimental results demonstrate the effectiveness and advantages of the proposed method, which injects more flexibility into the rendering process of light field images. It can also be used in holographic dynamic 3D display, augmented reality, virtual reality, and other fields.

Funding

Natural Science Foundation of Zhejiang Province (LTY22F020002, LY19F050018); Science Startup Fund of Zhejiang Sci-Tech University (17062061-Y).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Y. Kim, K. Hong, H. J. Yeom, K. Choi, J. Park, and S. W. Min, “Wide-viewing holographic stereogram based on self-interference incoherent digital holography,” Opt. Express 30(8), 12760–12774 (2022). [CrossRef]  

2. E. Dashdavaa, A. Khuderchuluun, H. Y. Wu, Y. T. Lim, C. W. Shin, H. Kang, S. H. Jeon, and N. Kim, “Efficient hogel-based hologram synthesis method for holographic stereogram printing,” Appl. Sci. 10(22), 8088 (2020). [CrossRef]  

3. A. Khuderchuluun, M. U. Erdenebat, E. Dashdavaa, K. C. Kwon, J. R. Jeong, and N. Kim, “Inverse-directed propagation-based hexagonal hogel sampling for holographic stereogram printing system,” J. Web Eng. 1, 1225–1238 (2022). [CrossRef]  

4. M. Klug and M. Holzbach, Holographic Stereograms and Printing (John Wiley & Sons, 2007), Chap. 20.

5. H. Bjelkhagen and D. Brotherton-Ratcliffe, “Ultrarealistic imaging: the future of display holography,” Opt. Eng. 53(11), 112310 (2014). [CrossRef]  

6. J. Su, X. Yan, Y. Huang, X. Jiang, Y. Chen, and T. Zhang, “Progress in the synthetic holographic stereogram printing technique,” Appl. Sci. 8(6), 851 (2018). [CrossRef]  

7. Y. Kim, E. Stoykova, H. Kang, S. Hong, J. Park, J. Park, and J. Hong, “Seamless full color holographic printing method based on spatial partitioning of SLM,” Opt. Express 23(1), 172–182 (2015). [CrossRef]  

8. M. W. Halle, S. A. Benton, M. A. Klug, and J. S. Underkoffler, “Ultragram: a generalized holographic stereogram,” Proc. SPIE 1461, 142–155 (1991). [CrossRef]  

9. M. Yamaguchi, N. Ohyama, and T. Honda, “Holographic 3-D printer,” in Practical Holography IV, S. A. Benton, ed., Proc. SPIE 1212, 84–92 (1990).

10. H. Choi, H. Kang, and N. Kim, “Analysis of potential distortions corresponding to the hologram printed by a holographic wave-front printer,” Opt. Express 29(16), 24972–24988 (2021). [CrossRef]  

11. X. Yan, C. Wang, Y. Liu, X. Wang, X. Liu, T. Jing, S. Chen, P. Li, and X. Jiang, “Implementation of the real-virtual 3D scene-fused full-parallax holographic stereogram,” Opt. Express 29(16), 25979–26003 (2021). [CrossRef]  

12. S. Fachada, D. Bonatto, and G. Lafruit, “High-quality holographic stereogram generation using four RGBD images,” Appl. Opt. 60(4), A250–A259 (2021). [CrossRef]  

13. M. Halle, “Multiple viewpoint rendering,” SIGGRAPH’98, Proceedings of 25th annual conference on Computer graphics and interactive techniques, 243–254 (1998).

14. J. Su, Q. Yuan, Y. Huang, X. Jiang, and X. Yan, “Method of single-step full parallax synthetic holographic stereogram printing based on effective perspective images’ segmentation and mosaicking,” Opt. Express 25(19), 23523–23544 (2017). [CrossRef]  

15. H. Jeon, S. Lim, Y. Jeon, W. Baek, D. Heo, Y. Kim, H. Kim, and J. Hahn, “Holographic printing for generating large-angle freeform holographic optical elements,” Opt. Lett. 47(2), 257–260 (2022). [CrossRef]  

16. Y. Guan, X. Sang, S. Xing, Y. Li, and B. Yan, “Real-time rendering method of depth-image-based multiple reference views for integral imaging display,” IEEE Access 7, 170545–170552 (2019). [CrossRef]  

17. Y. Gao, H. Chen, W. Gao, and T. Vaudrey, “Virtual view synthesis based on DIBR and image inpainting,” in Pacific-Rim Symposium on Image and Video Technology, 172–183 (2013).

18. Z. Liu, P. An, S. X. Liu, and Z. Y. Zhang, “Arbitrary view generation based on DIBR,” in Proceedings of IEEE International Symposium on Intelligent Signal Processing and Communication Systems (IEEE, 2007), pp. 168–171.

19. A. Q. D. Oliveira, T. L. D. Silveira, M. Walter, and C. R. Jung, “A hierarchical superpixel-based approach for DIBR view synthesis,” in Proceedings of IEEE Trans. Image Process (IEEE, 2021), pp. 6408–6419.

20. S. Li, C. Zhu, and M. T. Sun, “Hole filling with multiple reference views in DIBR view synthesis,” in Proceedings of IEEE Transactions on Multimedia (IEEE, 2018), pp. 1948–1959.

21. Z. Wang, E. P. Simoncelli, and A. Bovik, “Multiscale structural similarity for image quality assessment,” in Proceedings of IEEE Asilomar Conference on Signals, Systems & Computers (IEEE, 2003), pp. 1398–1402.

22. A. Malviya and S. G. Bhirud, “Image fusion of digital images,” International Journal of Recent Trends in Engineering 2(3), 146 (2009).

Supplementary Material (4)

Visualization 1: 3D display results of the hologram without clipping.
Visualization 2: 3D display results of the hologram with visibility clipping.
Visualization 3: 3D display results of the hologram with mirror imaging effect.
Visualization 4: 3D display results of the hologram with half mirror imaging effect.

