Optica Publishing Group

Holographic display method for volume data by volume rendering

Open Access

Abstract

Volume data are widely used in many areas, especially in biomedical science and geology. Visualizing volume data is very important for enabling intuitive understanding of 3D structures, making them easier to analyze. However, current visualization technologies for volume data cannot satisfy all human physiological and perceptual requirements. In this study, we propose a holographic display method for volume data based on volume rendering. With this method, we can generate holograms for transparent objects and multi-layer objects. To increase the calculation speed, we also propose an approximate volume rendering based CGH calculation method with elemental holograms.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Currently, volume data, which represent three-dimensional (3D) objects, are widely used in many areas, such as biomedical science [1] and geology. Volume data are composed of a series of voxels, the fundamental elements of volume data; a voxel represents a value on a regular grid in 3D space. Generally, volume data can be generated by many measuring instruments, including magnetic resonance imaging (MRI), computed tomography (CT), and meteorological instruments.

To make volume data easy to understand, many excellent visualization technologies have been proposed in the computer graphics (CG) area, such as surface rendering [2], maximum intensity projection (MIP) [3], and volume rendering [4, 5]. Depending on the purpose of the volume data, a suitable visualization method can be selected. The visualized images are very clear and easy to understand, but they are projections of CG models onto a 2D plane, not real 3D images.

Holography is an ideal display technology [6] that can satisfy all human physiological and perceptual requirements. Holography can record and reconstruct three-dimensional images by using the diffraction and interference of light. Nowadays, a computer can simulate the interference patterns of holograms; such holograms are called computer-generated holograms (CGHs). Several methods to generate holograms that represent volume data have been proposed. For example, the Voxgram system proposed by S. Hart et al. has been very successful in medical imaging [7, 8]. It proved that holograms can be used in medical science and are better than conventional 3D medical-display methods. The hologram is recorded on a photographic plate as a 3D image created by multiple exposures of each equal-depth layer of volume data. Holoxica [9] proposed a holographic display system for medical images. Y. Sakamoto proposed a volume rendering method [10] that can generate CGHs from volume data. The reconstructed images were 3D, but these methods cannot display holographic animations. By using an electro-holographic display device, holographic animation and viewpoint movement can be realized. We proposed two CGH calculation methods for volume data in our previous research [11]. One is a polygon-based method, and the other is a maximum intensity projection based (MIP-based) method. The polygon-based method consists of two operations: surface extraction and CGH calculation with polygon data. Volume data were transformed into polygonal models by surface extraction, and the CGHs of the volume data were generated using a ray-tracing method [12]. Unlike the polygon-based method, the MIP-based holographic display method is a direct volume rendering method for volume data; we used a modified ray-tracing method to generate holograms of MIP models. Both methods can generate CGHs for volume data well. The polygon-based method can generate colorful holograms, whereas the MIP-based method is much faster, and both methods can generate holographic animations.

In this study, we propose a new holographic display method for volume data that uses the direct volume rendering method and the point light source method to generate holograms. We apply the concept of transparency to voxels. When we calculate the intensity of the reflected light, the diffuse reflection is calculated as Lambertian reflection. Based on this method, we can generate clear holograms for transparent objects and multi-layer objects. To increase the calculation speed, we propose an approximate volume rendering based CGH calculation method with elemental holograms. We reconstructed the holographic images on an eyepiece-type electro-holographic display device.

2. Related work

2.1. Volume data

Volume data values are recorded as voxels on a discrete grid in a 3D space, as shown in Fig. 1.

Fig. 1 Volume data (set of voxels).

A voxel is a volume element representing a value on a regular grid in 3D space. This is analogous to a pixel, which represents two-dimensional (2D) image data in a bitmap. Note that voxels do not have their coordinates explicitly encoded along with their values. Instead, rendering systems infer the coordinates of a voxel on the basis of its position relative to other voxels.
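As a minimal sketch of how such coordinates can be inferred (the row-major layout below, with x varying fastest, is a common but not universal convention and is assumed here for illustration):

```python
def voxel_coords(index, W, H):
    """Recover (x, y, z) of a voxel from its flat row-major index,
    assuming x varies fastest, then y, then z, for a W x H x D grid."""
    x = index % W
    y = (index // W) % H
    z = index // (W * H)
    return x, y, z


def voxel_index(x, y, z, W, H):
    """Inverse mapping: (x, y, z) back to the flat index."""
    return x + W * (y + H * z)
```

The value array and this index convention together are enough to reconstruct each voxel's position without storing coordinates explicitly.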

Visualizing volume data is an important technique for enabling intuitive understanding of the data, making it easier to analyze. Many excellent rendering technologies have been proposed to visualize volume data in the CG area. A volume can be visualized either by direct volume rendering [3–5] or by extracting polygon iso-surfaces [2] that follow the contours of given threshold values. A direct volume renderer requires every voxel value to be mapped to an opacity and a color. Both ray tracing and ray casting can be applied to render the volume data directly, and in some special cases MIP is also a good choice. The marching cubes algorithm [13] is often used for iso-surface extraction. In addition, several visualization tools can display volume data in a variety of ways, such as ImageJ [14], OsiriX [15], and the Visualization Toolkit (VTK) [16].

2.2. Polygon-based holographic display method for volume data

We previously proposed a polygon-based holographic display method for volume data [11]. There are two operations in this method: surface extraction and CGH calculation with polygon data. In the polygon-based method, volume data are transformed into polygonal models; we applied the marching cubes algorithm to generate them. The DICOM viewer OsiriX implements the marching cubes algorithm for volume data, so polygonal models can be generated and exported easily. With this method, we can display a complex scene of polygonal models; we used modeling software, such as 3ds Max [17], to combine multiple models into one scene.

We calculated the hologram data using the ray-tracing CGH calculation method [12], which can generate hologram data correctly and quickly. The intersections between the models and the rays were treated as point light sources. The brightnesses of the intersections in the ray-tracing operation varied according to the reflectance distributions; the intensity of the reflected light was calculated in accordance with the Phong reflection model.

It is possible to generate full-color holographic animations with the polygon-based CGH calculation method, and we have displayed a simple rotation animation on our electro-holographic display device.

2.3. MIP-based holographic display method for volume data

Unlike the polygon-based method, the MIP-based holographic display method is a direct volume rendering method for volume data. An MIP projects onto the visualization plane the voxel with the maximum value along each parallel ray traced from the viewpoint to the plane of projection.

In medical diagnosis, an MIP can show objects with very high voxel values, such as blood vessels and bones. In our proposed MIP-based holographic display method for volume data, we reconstruct the MIP data with depth coordinates, derived from the positions relative to other voxels, to improve the sense of depth. Owing to the characteristics of MIP, only one MIP result can be obtained in each direction, and an MIP result is usually a single-color (monochrome) image.

In our MIP-based holographic display method [11], we applied a modified ray-tracing method to obtain a series of points. For each ray, the point inside the volume data with the maximum voxel value was used. Each point can be treated as a point light source, and we applied the point-light-source-based CGH calculation method to generate the MIP hologram data.
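The MIP step with depth recovery can be sketched in NumPy as follows (the array layout and the choice of axis 0 as the ray direction are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np


def mip_with_depth(volume):
    """Maximum intensity projection along axis 0 (the 'ray' axis).

    Returns the MIP image and, for each pixel, the depth index of the
    voxel that produced the maximum, so each MIP point can be restored
    to its 3D position and treated as a point light source.
    """
    depth = np.argmax(volume, axis=0)                        # index of max voxel per ray
    mip = np.take_along_axis(volume, depth[None, ...], axis=0)[0]
    return mip, depth
```

Keeping the `depth` array is what distinguishes this holographic MIP from an ordinary 2D MIP: the maxima are not flattened onto a plane but kept at their original depths.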

The MIP-based holographic display method was able to generate hologram data in a short time, close to real time.

3. Volume rendering based holographic display method

In the previous section, we introduced our earlier holographic display methods for volume data. In this section, we introduce our volume rendering based holographic display method.

3.1. CGH calculation by volume rendering method

The MIP-based method is the simplest volume rendering method for generating hologram data from volume data. However, it can only generate hologram data for the voxels with the maximum value along each ray.

Our CGH calculation method consists of three steps. First, we initialize the color and opacity information on the basis of the color map, the α map, and each voxel value. There are many different color map and α map schemes, depending on the object and the imaging needs; in this study, we use Fig. 2 as an example. I represents the voxel value, C represents R, G, B, or α, and C_x is calculated with Eq. (1).

Fig. 2 Schematic diagram of color map and α map.

C_x = C_3 - \frac{(C_3 - C_2)(I_3 - I_x)}{I_3 - I_2},    (1)

where C_2, C_3, I_2, and I_3 are the known parameters of the color map and α map, and I_x is the value of voxel_x. Based on the color map and α map, each voxel value is redefined as R, G, B, and α information, as shown in Fig. 3.
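Eq. (1) is a straightforward linear interpolation between the map breakpoints (I_2, C_2) and (I_3, C_3); a minimal sketch (the breakpoint values used below are hypothetical, since real color/α maps are chosen per dataset):

```python
def map_value(I_x, I2, I3, C2, C3):
    """Linearly interpolate one color/opacity channel for voxel value I_x,
    following Eq. (1): C_x = C3 - (C3 - C2) * (I3 - I_x) / (I3 - I2).

    C may stand for R, G, B, or alpha; (I2, C2) and (I3, C3) are the
    known breakpoints of the color map / alpha map segment.
    """
    return C3 - (C3 - C2) * (I3 - I_x) / (I3 - I2)
```

Applying this function once per channel converts each raw voxel value into the (R, G, B, α) tuple used by the later rendering steps.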

Fig. 3 Redefinition of R, G, B, and α based on color map and α map.

Second, we render each voxel with the diffuse reflection model. In the CG area, the volume rendering result is a 2D image: the value of each pixel is the superposition of the rendering results of the voxels along the corresponding ray. Our volume rendering method for CGH calculation differs from the CG volume rendering method because CGHs record the wave information of 3D objects. Therefore, in our volume rendering method for CGH calculation, we must consider both the intensity of the reflected light and the position of each voxel. In this study, we assumed the reflection model to be diffuse reflection.

For a single viewpoint, there are three parts in our volume rendering method, as shown in Fig. 4. The first part is light propagation from the light source to voxel_i; the second is the calculation of the reflected light at voxel_i; the third is light propagation from voxel_i to the viewpoint. For each voxel, we have transformed the voxel value into R, G, B, and α information, so a voxel can be represented as a transparent object. Thus, light propagation is accompanied by attenuation of the light intensity.

Fig. 4 Volume rendering method for single view point.

Here, we take voxel_i as an example. In the first part, a ray propagates from the light source to voxel_i. The attenuation equation is

I_{di} = I_{d0} \prod_{j=0}^{i} (1 - \alpha_j),    (2)

where I_{d0} is the intensity of the light source, α_j is the opacity value of voxel_j, and I_{di} is the incident light for the second part. The principle of diffuse reflection is shown in Fig. 5. We calculated the intensity of the reflected light with the Lambertian model as follows:
I_{ri} = k_{di} I_{di} \max(0, \mathbf{L} \cdot \mathbf{N}),    (3)

where k_{di} is the diffuse (Lambert) reflection coefficient, vector L is the unit vector pointing toward the light source, and vector N is the unit normal vector, whose components are the gradient values of α in the x, y, and z directions. Different N values are obtained with different α maps; therefore, we can obtain various imaging results from the same series of volume data by changing the α map. I_{ri} is the intensity of the reflected light. From Eq. (3) and Fig. 5, it follows that the intensity of the reflected light does not depend on the viewpoint position. After this, the reflected ray propagates from voxel_i to the viewpoint, with the attenuation equation

I_{rLi} = I_{ri} \prod_{j=i+1}^{n} (1 - \alpha_j),    (4)

where I_{rLi} is the final reflected light intensity of voxel_i.
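The three parts can be sketched for the voxels along a single ray as follows (a simplified illustration assuming one directional light source; the k_d values and opacities in the test are hypothetical):

```python
import numpy as np


def render_ray(alphas, normals, kd, L, I_d0=1.0):
    """Diffuse rendering of the voxels along one ray, Eqs. (2)-(4).

    alphas  : opacity of each voxel along the ray (light-source side first)
    normals : unit normal (gradient of alpha) per voxel, shape (n, 3)
    kd      : diffuse (Lambert) coefficient per voxel
    L       : unit vector pointing toward the light source
    Returns I_rL, the final reflected intensity of every voxel.
    """
    n = len(alphas)
    I_rL = np.empty(n)
    for i in range(n):
        # Eq. (2): attenuation from the light source to voxel i
        I_d = I_d0 * np.prod(1.0 - alphas[: i + 1])
        # Eq. (3): Lambertian diffuse reflection at voxel i
        I_r = kd[i] * I_d * max(0.0, float(np.dot(L, normals[i])))
        # Eq. (4): attenuation from voxel i toward the viewpoint
        I_rL[i] = I_r * np.prod(1.0 - alphas[i + 1:])
    return I_rL
```

Because Eq. (3) contains no viewpoint term, only the final product of Eq. (4) has to be recomputed when the viewpoint changes.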

Fig. 5 Diffuse reflection model.

In theory, CGHs are multi-viewpoint images, so for each voxel and each viewpoint we need to calculate the intensity of the diffusely reflected light. As shown above, this intensity does not depend on the viewpoint; therefore, I_{di} and I_{ri} are the same for all viewpoints, and only I_{rLi} needs to be recalculated for each viewpoint, as shown in Fig. 6.

Fig. 6 CGH calculation with volume rendering method.

The last step is calculating the hologram data for the rendered volume data. In this study, we applied the point light source method to generate the hologram data; every voxel can be treated as a point light source. For a point light source i, the complex amplitude distribution μ(ξ, η) is calculated by

\mu_i(\xi, \eta) = \frac{I_{rLin}}{r_i} \exp\left(j(k r_i + \phi_i)\right),    (5)

\mu(\xi, \eta) = \sum_{i=1}^{N} \mu_i(\xi, \eta),    (6)

where φ_i is the random phase of point light source i, and r_i is the distance between point light source i and the pixel on the hologram plane. I_{rLin} is the amplitude of point light source i for viewpoint n, which represents the R, G, and B information and is calculated with Eq. (4).
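A sketch of the point-light-source superposition of Eqs. (5) and (6) for one hologram pixel (the wavelength and the seeded random-phase generator are illustrative assumptions; a real implementation evaluates this for every pixel and every color channel):

```python
import numpy as np


def hologram_pixel(points, amplitudes, pixel_xy, wavelength=532e-9):
    """Complex amplitude at one hologram pixel, Eqs. (5)-(6).

    points     : (N, 3) positions of the point light sources (voxels);
                 the hologram plane is assumed to lie at z = 0
    amplitudes : I_rL of each point for this viewpoint
    pixel_xy   : (xi, eta) position of the pixel on the hologram plane
    """
    k = 2.0 * np.pi / wavelength                         # wavenumber
    rng = np.random.default_rng(0)
    phi = rng.uniform(0.0, 2.0 * np.pi, len(points))     # random phases
    dx = points[:, 0] - pixel_xy[0]
    dy = points[:, 1] - pixel_xy[1]
    r = np.sqrt(dx ** 2 + dy ** 2 + points[:, 2] ** 2)   # distance r_i
    # Eq. (5) per point, summed over all points as in Eq. (6)
    return np.sum(amplitudes / r * np.exp(1j * (k * r + phi)))
```

The 1/r_i falloff and the phase term k r_i + φ_i come directly from Eq. (5); the random phase spreads the object light across the hologram plane.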

3.2. Approximate volume rendering based CGH calculation with elemental holograms

In the last subsection, we introduced the volume rendering based holographic display method. However, the calculation amount is very large. Let M and N be the numbers of voxels and hologram pixels, respectively. The computational complexities of the incident light, reflected light, and final light propagation calculations are O(M²), O(M), and O(M²N), respectively, and the computational complexity of the CGH calculation is O(MN).

To solve the problem of this huge amount of computation, we propose an approximate solution for generating the hologram data. Ichikawa et al. [12] proposed the concept of elemental holograms, and their experiments showed no major effects on the reconstructed images. In our approximate volume rendering based CGH calculation method, we apply elemental holograms to increase the calculation speed: when we calculate the reflected light intensity, we divide the hologram into several elemental holograms, as shown in Fig. 7.

Fig. 7 CGH calculation with elemental holograms.

Unlike in the last subsection, we calculate the final light propagation just once per elemental hologram, using the center pixel of the elemental hologram to represent the entire elemental hologram. With this approximate CGH calculation method, the computational complexities of the incident light, reflected light, and final light propagation are O(M²), O(M), and O(M²N_e), respectively, and the computational complexity of the CGH calculation is O(MN), where M is the number of voxels, N is the number of hologram pixels, and N_e is the number of elemental holograms. In theory, the CGH calculation time of the two methods is the same; if N_e is small enough relative to N, the calculation time of the final light propagation is greatly shortened. Eq. (5) is then rewritten as

\mu_i(\xi, \eta) = \frac{I_{rLiek}}{r_i} \exp\left(j(k r_i + \phi_i)\right),    (7)

where I_{rLiek} is the amplitude of point light source i for elemental hologram k. For each pixel in elemental hologram k, the amplitude of point light source i is the same.
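The approximation can be sketched as follows: the viewpoint-dependent amplitude is evaluated only at the center pixel of each elemental hologram and reused for every pixel in that tile (the tile layout and the stand-in attenuation function below are illustrative assumptions):

```python
def amplitudes_per_element(attenuate, n_tiles_x, n_tiles_y, width, height):
    """Evaluate the viewpoint-dependent amplitude I_rL once per elemental
    hologram, at the tile's center pixel, instead of at every hologram
    pixel. This is what reduces the final-light-propagation cost from
    O(M^2 N) to O(M^2 N_e).

    attenuate : function mapping a pixel (x, y) to the amplitude seen
                from that viewpoint (a stand-in for Eq. (4))
    Returns a dict {(tile_ix, tile_iy): amplitude}.
    """
    tw, th = width // n_tiles_x, height // n_tiles_y
    amps = {}
    for ix in range(n_tiles_x):
        for iy in range(n_tiles_y):
            cx = ix * tw + tw // 2            # center pixel of the tile
            cy = iy * th + th // 2
            amps[(ix, iy)] = attenuate(cx, cy)
    return amps
```

Every pixel inside tile (ix, iy) then uses `amps[(ix, iy)]` as I_{rLiek} in Eq. (7), so only N_e attenuation evaluations are needed instead of N.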

4. Experiments

4.1. Experimental setup

In this study, the experiments were carried out on an electro-holographic device, as shown in Fig. 8. This display system was proposed by Yoneyama et al. [18] in 2013. A white LED was used as the light source, and the reconstructed images were colorized using the time division method [19, 20]. With the white LED as the light source, our images cannot be reconstructed perfectly. Light from the source illuminates the SLM through the lens, and the reconstructed light is reflected by the half mirror and converges around the viewing window, through which viewers can observe the full-color reconstructed images.

The parameters of the reconstruction device and computation device are shown in Tables 1 and 2, respectively. In this study, we used medical images as experimental data. The experimental dataset consists of three parts: the DICOM Image Library [21], real patient data, and self-made data.

Fig. 8 CGH reconstruction device.

Table 1. Parameters of Reconstruction Device

Table 2. Parameters of Computation Device

4.2. Optical reconstructions

In this subsection, we discuss an optical experiment. The geometry for this experiment is shown in Fig. 9. There are two series of volume data; the z coordinates of data1 and data2 are -50 cm and -100 cm, respectively. The reconstructed images are shown in Fig. 10. From this figure, we determined that when the camera was focused on one dataset, the other appeared blurred, indicating that the reconstructed images display depth correctly.

Fig. 9 Geometry for optical experiment.

Fig. 10 Reconstructed images of optical experiment. (a) Focusing on data1. (b) Focusing on data2.

4.3. Approximate CGH calculation experiment

In this subsection, we describe an experiment to verify the validity of our approximate CGH calculation method, which was explained in subsection 3.2. In this study, the volume rendering based CGH calculation method is called the "proposed method" for short, and the approximate volume rendering based CGH calculation method with elemental holograms is called the "proposed method with EH." We applied 10×6 elemental holograms to perform the approximate calculation. In Fig. 11, (a) is the CG result, (b) is the reconstructed image of the proposed method, and (c) is the reconstructed image of the proposed method with EH. From Figs. 11(b) and 11(c), we determined that there were no major differences between the reconstructed images of the two methods, while the calculation time is greatly shortened, as shown in subsection 4.5.

Fig. 11 Reconstructed images of approximate experiment. (a) is CG result. (b) is reconstructed image of proposed method. (c) is reconstructed image of proposed method with EH.

4.4. Reconstructed images

In this subsection, we discuss experiments on four series of volume data. The first series is a gallbladder (data type: MRI). In this experiment, we applied the polygon-based and MIP-based methods as comparative tests. The voxel values of the gallbladder are relatively large; therefore, we can obtain similar results with the three methods, as shown in Fig. 12. From this figure, we determined that all three methods can generate hologram data for volume data, and the detail of Fig. 12(d) is better than that of the other two methods.

Fig. 12 Reconstructed images of different method. (a) is the CG rendering result. (b) is the reconstructed image of polygon-based method. (c) is the reconstructed image of MIP-based method. (d) is the reconstructed image of proposed method with EH.

The second experimental series of volume data is called VIX (data type: CT). In this experiment, we applied our proposed method to generate hologram data for transparent objects; the reconstructed results are shown in Fig. 13. Figures 13(a) and 13(c) are the CG rendering results, Fig. 13(b) is the reconstructed image of Fig. 13(a), and Fig. 13(d) is the reconstructed image of Fig. 13(c). Figure 13(b) shows the reconstructed image of bone, which is an opaque object. Figure 13(d) shows the reconstructed image of the feet, in which the bone can be seen through the transparent skin. Based on this experiment, we determined that hologram data for transparent objects can be generated by the proposed method.

Fig. 13 Reconstructed images of VIX (transparent objects).

The third experimental series of volume data was generated by ourselves. In this experiment, we applied our proposed method to show the effect of α maps on the rendering results: we applied different α maps to generate hologram data with the proposed method. The reconstructed images are shown in Fig. 14. In this experiment, the green ball is an opaque object and the red cube is a transparent object. Figures 14(a)–14(d) are the reconstructed images with different α maps; the transparency changes of the red object are clearly displayed.

Fig. 14 Reconstructed images of transparent object with different α maps. (a), (b), (c), and (d) are reconstructed images with different α maps.

The fourth experimental series of volume data is called Cenovix (data type: CT). In this experiment, we applied our proposed method to generate hologram data for multi-layer objects. In Fig. 15, there are three objects: bone, visceral tissue, and metal. Figure 15(b) is the reconstructed image of this experiment. Owing to the low resolution of the device, the reconstructed image is not very clear, but multiple objects can be seen in Fig. 15(b). Based on this experiment, we determined that hologram data for multi-layer objects can be generated by the proposed method.

Fig. 15 Reconstructed images of multi-layer objects. (a) is CG rendering result of multi-layer objects. (b) is reconstructed image of multi-layer objects.

4.5. Calculation time

As stated above, the calculation amount of the proposed method is very large, and the proposed method with EH was introduced to increase the calculation speed. In this subsection, we discuss the calculation times of the four methods. The polygon-based method, the MIP-based method, and the proposed method all contain two steps: rendering and CGH calculation. Of course, the polygon-based method must also generate the polygon data before these two steps. In this experiment, we used the gallbladder as our experimental data; the number of voxels is 300×300×69. Table 3 shows the calculation times of the different methods. The number of elemental holograms is 10×6 in the proposed method with EH. All calculation times in Table 3 are averages over ten calculations. From this table, we determined that the proposed method and the proposed method with EH take more time for the rendering step than the polygon-based and MIP-based methods. In theory, the CGH calculation time of the proposed method and the proposed method with EH should be the same. Overall, the proposed method with EH is 5 to 6 times faster than the proposed method, and the polygon-based and MIP-based methods are far faster than the other two. The MIP-based method can achieve real-time computing with the latest GPU.

Table 3. Calculation time of different methods

5. Conclusion

In this study, we proposed a holographic display method for volume data by a direct volume rendering method. In this method, we applied the diffuse reflection model to render the volume data and calculated the CGHs for the volume data using a point-light-source-based CGH method. We obtained the reconstructed images using an electro-holographic device. Our method can display transparent objects and multi-layer objects well. Compared with the polygon-based and MIP-based methods, the detail of the holograms calculated by the proposed method is better; however, the calculation time is longer. To increase the calculation speed, an approximate volume rendering based CGH calculation method with elemental holograms was proposed.

By using this holographic display method, various volume data, including those in the medical field, can be displayed in an effective way. Of course, the current methods are not good enough, and we will continue this research in the future. If possible, we will introduce the specular reflection model and accelerated computing in our next work.

Funding

Japan Society for the Promotion of Science (JSPS) (KAKENHI Grant 16H02852).

References

1. J. A. Maintz and M. A. Viergever, "A survey of medical image registration," Medical Image Analysis 2, 1–36 (1998). [CrossRef]

2. M. Levoy, "Display of surfaces from volume data," IEEE Computer Graphics and Applications 8, 29–37 (1988). [CrossRef]

3. S. Napel, M. P. Marks, G. D. Rubin, M. D. Dake, C. H. McDonnell, S. M. Song, D. R. Enzmann, and R. B. Jeffrey, Jr., "CT angiography with spiral CT and maximum intensity projection," Radiology 185, 607–610 (1992). [CrossRef] [PubMed]

4. R. A. Drebin, L. Carpenter, and P. Hanrahan, "Volume rendering," ACM SIGGRAPH Computer Graphics 22, 65–74 (1988). [CrossRef]

5. J. Kruger and R. Westermann, “Acceleration techniques for GPU-based volume rendering,” in Proceedings of the 14th IEEE Visualization 2003 (VIS’03), (IEEE Computer Society, Washington, DC, USA, 2003), VIS ’03, pp. 38–43.

6. E. N. Leith and J. Upatnieks, "Reconstructed wavefronts and communication theory," J. Opt. Soc. Am. 52, 1123–1130 (1962). [CrossRef]

7. A. Wolfe and S. Hart, “Digital volumetric holograms for medical imaging,” http://spie.org/newsroom/digital-volumetric-holograms-for-medical-imaging.

8. C. Honda, S. Takahashi, S. J. Hart, and R. J. Rankin, “The technology in the digital holography system,” Japan Science and Technology Information Aggregator, Electronic 15, 135–145 (1998).

9. J. Khan, “Static digital holograms for medical images,” http://www.holoxica.com/digital-holograms/.

10. Y. Sakamoto, T. Aoyama, and Y. Aoki, “Volume rendering for computer generated holograms,” in Proceedings of 19th International Commission for Optics (ICO-19), (2002), pp. 555–556.

11. Z. Lu and Y. Sakamoto, "Holographic display methods for volume data: polygon-based and MIP-based methods," Appl. Opt. 57, A142–A149 (2018). [CrossRef] [PubMed]

12. T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, “Realistic expression for full-parallax computer-generated holograms with the ray-tracing method,” Applied Optics 52, 201–209 (2013). [CrossRef]  

13. W. E. Lorensen and H. E. Cline, “Marching cubes: A high resolution 3d surface construction algorithm,” ACM SIGGRAPH Computer Graphics 21, 163–169 (1987). [CrossRef]  

14. ImageJ, "Image processing and analysis in Java," https://imagej.net/Welcome.

15. OsiriX, "The world famous DICOM viewer," http://www.osirix-viewer.com/.

16. The Visualization Toolkit, http://www.vtk.org/.

17. Autodesk 3ds Max, http://www.autodesk.com.

18. T. Yoneyama, C. Yang, Y. Sakamoto, and F. Okuyama, “Eyepiece-type full-color electro-holographic binocular display with see-through vision,” in Proceedings of Digital Holography and Three-Dimensional Imaging, (Optical Society of America, 2013), p. DW2A.11. [CrossRef]  

19. T. Shimobaba and T. Ito, “A color holographic reconstruction system by time division multiplexing with reference lights of laser,” Optical Review 10, 339–341 (2003). [CrossRef]  

20. H. Araki, N. Takada, H. Niwase, S. Ikawa, M. Fujiwara, H. Nakayama, T. Kakue, T. Shimobaba, and T. Ito, "Real-time time-division color electroholography using a single GPU and a USB module for synchronizing reference light," Applied Optics 54, 10029–10034 (2015). [CrossRef]

21. DICOM Image Library, http://www.osirix-viewer.com/resources/dicom-image-library/.
