
Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications

Open Access

Abstract

The layer-based method has been proposed as an efficient approach to calculating holograms for holographic image display. This paper further improves its calculation speed and the quality of its depth cues by introducing three techniques: an improved coding scheme, a multilayer depth-fused 3D method and a fraction method. As a result, the total computation time is reduced by more than a factor of four, and holographic images with an accommodation cue are calculated in real time, allowing interaction with the displayed image in a proof-of-concept head-mounted holographic display setting.

© 2015 Optical Society of America

1. Literature review

Many methods for calculating computer-generated holograms (CGHs) have been investigated. One of the major approaches is the point-based method, which calculates the hologram from a cloud of points. Various techniques have been proposed to improve the calculation speed of point-based methods [1–5] and to allow them to support critical depth cues such as occlusion [6,7] and shading [8]. Furthermore, parallel calculation on several graphics cards was used to demonstrate a fast point-based method in a simple form without occlusion and shading cues [9,10], with a calculation time of 50 ms for 3D content of 3000 points, which may not be sufficient for holographic images of large size and wide viewing angle. The wavefront recording plane (WRP) method has also been reported to be fast [11,12], but it supports only a limited depth for the reconstructed object and lacks some depth cues, such as occlusion.

To render an image while handling occlusion and shading, image-based methods [13,14] are convenient because multiple-view rendering makes occlusion and shading independent for each view, so they can be assigned according to the content. Image-based methods have mostly been used in holographic stereograms [15–18], but the accommodation cue was not supported. The Diffraction Specific Coherent Panoramagram (DSCP) method [19] introduced the accommodation cue to image-based methods. This concept is potentially compatible with other multiple-view systems, such as angular tiling systems [20–22] and the coarse integral holography (CIH) system [23].

On the other hand, the polygon-based method [24–26] was developed to compete with the point-based method in calculation speed. In principle, the number of polygons needed in polygon-based methods is much smaller than the number of points needed in point-based methods, and the calculation time for each polygon is not much longer than that for each point, so the total calculation time of the polygon-based method is potentially lower. A few methods have also been proposed to increase the speed of the polygon-based method [27,28], and methods supporting occlusion [29,30] and shading [31,32] have been published as well.

In addition to these methods, the layer-based method was proposed [33,34]. In this method, the calculation of each layer is fast and only a limited number of layers is needed for a limited depth range, which greatly reduces the calculation load. One of its significant features is the trade-off between the resolution of the accommodation cue and the calculation speed; the calculation has been shown to be rapid while the image quality remains high. This paper aims to significantly improve its speed by integrating different computational and image-processing techniques into the calculation, such as the fraction method and the depth-fused 3D (DF3D) method [35,36].

2. Three techniques used to improve computation speed

Three compatible techniques, an improved coding scheme, a multilayer depth-fused 3D (DF3D) method and a fraction method, are introduced to improve the calculation speed and visual quality of the layer-based method. A specific optical setting of a head-mounted display (HMD) is used to project the calculated holographic images for proof-of-concept purposes.

2.1 An improved coding scheme

The layer-based method was first implemented in MATLAB with GPUmat, an open-source toolbox. After the concept was proven, we used the open-source tool Psychtoolbox [37], which supports MATLAB with OpenGL [38] rendering, to implement the graphics rendering [33].

However, GPUmat does not allow the user to optimize the parameters that control the parallel calculation. Therefore, we re-coded the algorithm in C++ with OpenGL, which is essentially based on C, and CUDA, which supports direct communication with the GPU. All operations such as addition, multiplication and data transfer are optimized in kernels, and the thread and block numbers are tuned by testing different parameters. This coding scheme improves the performance by avoiding the latency that occurs in the communication between different interfaces. The improvement is shown in the Results section.
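As a simple illustration of this coding style, the sketch below shows a fused CUDA kernel that attaches a quadratic lens phase to one layer's field and accumulates it into the hologram in a single pass, with the block size left as a tunable parameter. This is a minimal sketch with illustrative names, kernel structure and lens-phase convention; it is not the actual source code of this work.

```cpp
// Minimal sketch (illustrative names and conventions, not the authors' code).
#include <cuComplex.h>
#include <cuda_runtime.h>

// Fuse "attach holographic lens" and "stack layer into hologram" in one
// kernel to avoid extra global-memory round trips between separate calls.
__global__ void attachLensAndAccumulate(const cuFloatComplex* layerField,
                                        cuFloatComplex* hologram,
                                        int width, int height,
                                        float pixelPitch, float focalLength,
                                        float wavelength)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Quadratic thin-lens phase: phi = -pi * (x^2 + y^2) / (lambda * f).
    float xr = (x - width / 2) * pixelPitch;
    float yr = (y - height / 2) * pixelPitch;
    float phi = -3.14159265f * (xr * xr + yr * yr) / (wavelength * focalLength);
    cuFloatComplex lens = make_cuFloatComplex(cosf(phi), sinf(phi));

    int idx = y * width + x;
    cuFloatComplex contribution = cuCmulf(layerField[idx], lens);
    hologram[idx] = cuCaddf(hologram[idx], contribution);   // stack this layer
}

// Host-side launch; the 16 x 16 block size stands in for a value found by
// testing different thread and block numbers on the target GPU.
void launchAttach(const cuFloatComplex* d_layer, cuFloatComplex* d_hologram,
                  int width, int height, float pitch, float focal, float lambda)
{
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    attachLensAndAccumulate<<<grid, block>>>(d_layer, d_hologram, width, height,
                                             pitch, focal, lambda);
}
```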

2.2 Multi-layer DF3D method

The DF3D method compensates the accommodation cue by modifying pixel amplitudes on two transparent layers aligned in the viewer's viewing direction. This technique is only effective within a narrow viewing angle, because the two DF3D layers must be viewed along the direction in which they are aligned. Nevertheless, the DF3D method can benefit from angular tiling and the layer-based method, since all of them use layers. We propose to combine the layer-based method with the DF3D method and call the result the "multilayer DF3D" method.

The multilayer DF3D method is illustrated in Fig. 1. Figure 1(a) shows the general case of a cloud of points to be reconstructed. Figure 1(b) shows the layer-based method, which slices the space into N layers: every source data point, coloured blue, is assigned to its closest layer and generates a corresponding point, coloured red. The actual reconstructed images are the red points, which carry a depth error, rather than the blue source data. Figure 1(c) shows the multilayer DF3D method, which slices the space into M layers: every source data point, coloured blue, has two corresponding data points, coloured red, assigned to its two closest layers with magnitudes decided by the DF3D principle. The actual reconstructed images are those red points, which can be visually perceived as the blue source points without depth error because of the DF3D effect, provided the viewing direction is along the direction in which the layers are aligned.
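This assignment step can be sketched as follows, assuming uniformly spaced layers and the linear luminance weighting commonly used in DF3D; the names and data layout are illustrative only, and the exact mapping between the DF3D luminance weights and the complex amplitudes placed on each layer is an implementation choice rather than something specified here.

```cpp
// Minimal host-side sketch (illustrative names; uniform layer spacing assumed).
#include <algorithm>
#include <cmath>
#include <vector>

struct Point3D { float x, y, z, magnitude; };

// Split each source point between its two nearest layers, with magnitudes
// weighted linearly by distance (the DF3D rule), so that the fused image is
// perceived at the original depth when viewed along the layer-stacking axis.
void assignToDF3DLayers(const std::vector<Point3D>& points,
                        std::vector<std::vector<Point3D>>& layers, // M layers
                        float zFront, float zBack)
{
    const int M = static_cast<int>(layers.size());
    const float spacing = (zBack - zFront) / static_cast<float>(M - 1);

    for (const Point3D& p : points) {
        float t = (p.z - zFront) / spacing;              // position in layer units
        int front = std::clamp(static_cast<int>(std::floor(t)), 0, M - 2);
        int back  = front + 1;
        float wBack  = std::clamp(t - front, 0.0f, 1.0f); // 0 at front, 1 at back
        float wFront = 1.0f - wBack;

        Point3D pf = p; pf.z = zFront + front * spacing; pf.magnitude = p.magnitude * wFront;
        Point3D pb = p; pb.z = zFront + back  * spacing; pb.magnitude = p.magnitude * wBack;
        layers[front].push_back(pf);
        layers[back].push_back(pb);
    }
}
```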

Fig. 1 Illustration of the use of multi-layer DF3D method. (a) Source data to be reconstructed; (b) in normal layer-based method; (c) in multi-layer DF3D method.

In the normal DF3D method, every point has its corresponding points on two fixed layers. In the multilayer DF3D method, by contrast, every point has its corresponding points on two of several layers, which gives two advantages. First, the distance between each pair of DF3D layers can be reduced, which enlarges the valid viewing angle for DF3D, as shown in Fig. 2. This is important because normal DF3D has a very small effective viewing angle. For example, a total depth of 10 cm with a pixel pitch of 200 μm using the normal DF3D method only supports an effective viewing angle of ±0.11°. When the viewer watches the image from outside this angle, a mismatch of content and a depth error are perceived. On the other hand, the multilayer DF3D method with 10 layers extends the effective viewing angle to ±1.15°, which is suitable for a multiple-view system such as our CIH system [23], each view of which covers less than 1°, so the mismatch of content and depth error cannot occur. The second advantage is that, because the multilayer DF3D method compensates the accommodation cue, fewer layers are necessary and the calculation load is reduced. For example, a total depth of 10 cm with 10 normal layers only supports a depth resolution of 1 cm, while multilayer DF3D needs fewer than 10 layers but supports an even better depth resolution.
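The viewing-angle figures above follow from the requirement that the two pixels of a DF3D pair remain visually overlapped, which gives an effective half-angle of roughly tan−1(p/d), where p is the pixel pitch and d is the separation between the two DF3D layers. With p = 200 μm, a single pair spanning d = 10 cm gives tan−1(200 μm / 10 cm) ≈ 0.11°, while slicing the same depth into 10 layers (taking the layer separation as the 10 cm depth divided by the 10 layers, i.e. about 1 cm) gives tan−1(200 μm / 1 cm) ≈ 1.15°. The same overlap condition underlies the 0.52° figure in the example later in this section.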

Fig. 2 The illustration of effective viewing angle in multi-layer DF3D and normal DF3D methods.

The reduction in the number of necessary layers and in the calculation load is shown in the Results section. The corresponding 3D image quality, however, is difficult to analyse quantitatively, so we provide only pictures as qualitative evidence in Section 3.3, which show the image quality and the accommodation cue visually.

Another advantage of multilayer DF3D over normal DF3D is image quality. Research on DF3D has shown that the eye can accommodate on a synthesized pixel whose size depends on the distance between the synthesized point and the physical points on the two DF3D layers. A shorter distance between the two DF3D layers therefore generates better visual image quality, as illustrated in Fig. 3. Figure 3(a) shows the case of a physical ideal dot between the layers with the eye focused on it; the image projected on the retina is an ideal dot. Figure 3(b) shows two dots located on the two layers synthesizing a virtual dot on which the eye focuses; the image projected on the retina is a blurred dot. Figure 3(c) shows the case where the distance between the two layers is shortened and the blurred area is reduced.

Fig. 3 The illustration of blurring improvement by reducing distance between two DF3D layers.

In addition, we compare multilayer DF3D with the previous layer-based method. The multilayer DF3D method compensates the accommodation cue, so fewer layers are needed than in the previous layer-based method and the calculation load is reduced. For example, a total depth of 10 cm with 10 normal layers supports a depth resolution of 1 cm (assuming the space is divided uniformly), while multilayer DF3D needs fewer than 10 layers but can support smooth depth. The minimum number of DF3D layers is limited by the requirement that the paired pixels overlap. Say we divide the same object of 10 cm total depth into 3 sections, which means 4 DF3D layers are used with an interval of 3.33 cm between adjacent layers. This provides a smooth accommodation cue within a 0.52° (≈ tan−1(300 μm / 3.33 cm)) viewing angle for each view. It should be noted that 0.52° is the viewing angle per view that a multi-view system can support [34].

2.3 Fraction method for acceleration

The layer-based method for calculating angular tiling holograms trades unnecessary resolution of the accommodation cue for faster speed. To further improve the performance, we can also trade the resolution of the reconstructed images, which is proportional to the resolution of the sub-hologram and hence to the calculation load, for calculation speed. For example, a resolution of 256 × 192 can support an 8 cm × 6 cm reconstructed holographic image with a pixel pitch of about 300 μm, so XGA or even HD resolution is probably finer than the eye requires.

Therefore, we can reduce the rendering resolution for the targeted object. The discrete Fourier transform (DFT) of each layer, implemented by the fast Fourier transform (FFT), then has a lower resolution, so the calculation load is smaller. In practice, before the holographic lens is attached and the layer is stacked with the others, the DFT result is simply duplicated and tiled up to the original resolution, as shown in Fig. 4. The hologram then carries the information for a lower-resolution reconstructed image at the same hologram resolution. For convenience, we call this the "fraction method".
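A minimal sketch of this step for a 2 × 2 fraction factor is given below, assuming cuFFT for the lower-resolution transform; the function names and data layout are illustrative and not the authors' implementation.

```cpp
// Minimal sketch of the fraction method with a 2 x 2 factor (illustrative names).
#include <cufft.h>

// Duplicate the low-resolution FFT result (w/2 x h/2) over the full hologram
// grid (w x h) before the holographic lens is attached and the layer stacked.
__global__ void tileLowResField(const cufftComplex* lowRes, cufftComplex* fullRes,
                                int fullW, int fullH)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= fullW || y >= fullH) return;

    int lowW = fullW / 2, lowH = fullH / 2;
    fullRes[y * fullW + x] = lowRes[(y % lowH) * lowW + (x % lowW)];
}

// Host side: FFT the half-resolution layer in place, then replicate it 2 x 2.
void fractionLayer(cufftComplex* d_lowResLayer,  // (w/2 x h/2) on the device
                   cufftComplex* d_fullField,    // (w x h) on the device
                   int fullW, int fullH)
{
    cufftHandle plan;
    cufftPlan2d(&plan, fullH / 2, fullW / 2, CUFFT_C2C);   // rows, columns
    cufftExecC2C(plan, d_lowResLayer, d_lowResLayer, CUFFT_FORWARD);
    cufftDestroy(plan);

    dim3 block(16, 16);
    dim3 grid((fullW + block.x - 1) / block.x, (fullH + block.y - 1) / block.y);
    tileLowResField<<<grid, block>>>(d_lowResLayer, d_fullField, fullW, fullH);
}
```

In this sketch the FFT works on only a quarter of the pixels, which is where the saving comes from, while the duplication kernel still touches every pixel of the full grid, consistent with the limits discussed below.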

Fig. 4 Illustration of the tiling method.

The main contribution of this method is the large reduction in the number of DFT calculations. However, the amount of data transferred and the number of layers stacked remain the same. In particular, the call that replicates a given area takes some time, so the finer the fraction, the more time is spent on such calls. The improvement from the fraction method therefore has its limits and becomes small once the time spent on data transfer exceeds the time spent on the FFT, which already happens with a fraction of 2 × 2. We have tried ratios up to 4 × 4, but 2 × 2 gives the fastest speed. Figure 5 illustrates the overall procedure of the fractioned layer-based method. Optimizing the ratio is a complicated procedure related not only to the hologram size but also to the data transmission speed and function calling overhead. This can vary from machine to machine depending on both hardware (such as the processor) and software (such as the programming language); therefore, we do not analyse it in detail in this paper but simply propose the concept and apply it to 3D image hologram calculation.

Fig. 5 Illustration of the fraction method.

A few points should be noted here. Firstly, although a smaller (down-sampled) image and its corresponding hologram provide only a smaller field of view, together with its replicated holograms it makes up the original size, so the overall field of view is not reduced. Secondly, although the same hologram at different locations behind the same lens (as with the replicated holograms in our structure) reconstructs the same image in terms of amplitude, the phase profiles of these reconstructed images may differ and result in unwanted interference patterns. Our optical reconstruction results shown in the Results section do not exhibit such an effect; a detailed analysis is left as future work towards a full understanding of the impact of the fraction method. Thirdly, the use of a holographic lens on an SLM, as illustrated in Fig. 5, can cause aliasing [39,40]. To avoid this problem, the phase profile of the holographic lens (i.e. its power) cannot exceed what the SLM can support. The SLM we use has a pixel pitch of 8 μm, which supports a ±2.26° diffraction angle for light with a wavelength of 633 nm, and a resolution of 1,920 × 1,080. Therefore, the focal length of the applied lens should not be shorter than 195 mm (960·8 μm / tan(2.26°)), otherwise aliasing will occur. This minimum focal length can be further reduced if the hologram is relayed by a de-magnification system, which reduces the equivalent pixel pitch and thereby shortens the minimum focal length the holographic lens can support.
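The 195 mm figure can be reproduced from the maximum diffraction angle of the SLM: with θmax = sin−1(λ/2p) ≈ 2.26° for λ = 633 nm and p = 8 μm, the lens phase stays within what the SLM can sample as long as f ≥ (N/2)·p / tan(θmax) = 960·8 μm / tan(2.26°) ≈ 195 mm, where N = 1,920 is the number of pixels across the hologram.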

2.4 A conceptual holographic head mounted display

To confirm the effects of our methods and present solid images, we apply the idea of a head-mounted holographic display to show our results. The concept of a head-mounted display is to fix the display to the head with a portable and light frame. This feature suits holographic images for the following reasons. Current SLM technology can provide only a limited amount of holographic information on a single SLM, so the viewing angle and image size are both small. Besides, the computation capability of current hardware can only support real-time hologram calculation for either a small viewing angle or a small image size. Furthermore, even if both SLM technology and computation hardware were improved enough to support a wide viewing angle and a large image size, a user can only perceive a certain viewing angle at one time, so the optical reconstruction and hologram computation for all the other viewing angles are wasted and never seen. In comparison, a head-mounted holographic display only projects information to the eyes. Therefore, the head-mounted holographic display concept not only reduces the burden of optical reconstruction and the computational load, but also makes the information usage more efficient.

To the best of our knowledge, the idea of applying holography to HMDs can be traced back to 1970 [41]. Back then, there was no SLM or computer technology good enough to support the requirements. Only recently have a few research works revisited this idea and demonstrated its feasibility [42–44] with up-to-date technology. In this paper, we use a proof-of-concept set-up as a platform to demonstrate the fast hologram calculation speed while realizing a real-time interaction system.

The main idea of a holographic HMD is to image the hologram directly onto the pupil, which is small, so the requirements on the hologram are more manageable than for holographic displays designed to be seen by everyone. A conceptual drawing of the holographic HMD is shown in Fig. 6, in which the hologram is relayed from the SLM by a 4f system to a plane in front of the eye. The actual set-up used in this paper is shown in Fig. 7; the overall setting can be folded into a compact size.

Fig. 6 Conceptual sketch of a head mounted holographic display (single eye model).

Fig. 7 Experimental set-up used for the demonstration of the concept of a HMD holographic display.

It should be noted that removing the zero-order diffraction (ZOD) beam has been a subject of research [45–47]. In our system it is blocked by a physical object placed between the two lenses shown in Fig. 7, which is not shown in the picture for simplicity and clarity. In addition, as shown in the next section, there is an area of lump-shaped light caused by the imperfect ZOD blocking; ideally, it could be blocked by a customized mask. It should also be noted that blocking the ZOD in an HMD is important for eye safety, because the converging ZOD may damage the eyes if they focus at the wrong plane.

3. Results

3.1 Calculation speed

To test the calculation performance, the procedure shown in Fig. 8 is used. An .obj file of 3D data is fed into OpenGL for rendering. An image for a specific viewing angle and its depth map are then rendered to compose a cloud of points before the layer-based calculation. This structure has been implemented on two systems for comparison. One is based on MATLAB with the help of GPUmat and Psychtoolbox, which is our previous version. The other is implemented in C++ communicating with CUDA and OpenGL, which is the improved system in this paper. The hardware and software used are listed in Table 1, and the important parameters for the speed test are given in Table 2.
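A condensed host-side sketch of this procedure is shown below; it assumes an existing OpenGL context, uses the standard glReadPixels call to read back the rendered view and its depth map, and refers to the hypothetical GPU-stage helpers sketched in Section 2, so all names beyond the OpenGL call are illustrative.

```cpp
// Illustrative pipeline sketch; assumes a current OpenGL context and the
// hypothetical helpers (layer assignment, fractionLayer, launchAttach)
// sketched earlier. Not the authors' actual code.
#include <GL/gl.h>
#include <vector>

void computeHologramForView(int width, int height)
{
    // 1. Render the .obj content for this viewing angle with OpenGL
    //    (drawing code omitted), then read back the image and its depth map.
    std::vector<unsigned char> colour(static_cast<size_t>(width) * height * 3);
    std::vector<float> depth(static_cast<size_t>(width) * height);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, colour.data());
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

    // 2. Dissect the colour/depth data into layers (multilayer DF3D
    //    assignment) and upload each layer to the GPU.
    // 3. For each layer: reduced-resolution FFT plus 2 x 2 duplication
    //    (fraction method), attach its holographic lens, accumulate.
    // 4. Encode the accumulated field as a phase-only hologram for the SLM.
}
```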

Fig. 8 An illustration of the system procedure of the algorithm in this paper.


Table 1. Hardware and software used


Table 2. Important parameters for the speed test

To show the improvement due to each individual method, the total computation time at different stages is listed in Table 3. The first column is the result from our previous work [34] based on the MATLAB interface with GPUmat and Psychtoolbox, as a reference point. The second column is the result when coded in C++ with CUDA and OpenGL; the improvement mainly comes from optimized use of the parallel processing units and of the communication between C++ and OpenGL. The third column is the result when the multilayer DF3D method is added; the contribution is the reduction of the necessary number of layers thanks to the compensation of the accommodation cue. The final column is the result when the fraction method is further added with a factor of 2 × 2; the main improvement comes from the reduced resolution used for the FFT and rendering.


Table 3. Results after applying each proposed method for improving calculation performance (unit: ms)

The final calculation time after applying the proposed methods is 17.60 ms per frame, of which less than 0.2 ms is spent on rendering, 3.34 ms on data transfer between the host (CPU and its memory) and the device (GPU and its memory), and 14.12 ms on the improved layer-based method including fractioning, dissecting and hologram calculation. Within the 14.12 ms, less than 3 ms is spent on the FFT calculation; most of it is spent on attaching the holographic lenses, stacking all the layers together and transferring data within the device.

According to this result, a GPU with higher data bandwidth, both internal GPU memory bandwidth and external GPU-CPU bandwidth, will help a lot, while a GPU with higher computational ability will contribute only limited help. For example, the high-end gaming graphics card NVIDIA Titan has a computing ability of 2.2 TFLOPS [49], more than ten times that of the NVIDIA GTX 460 SE used in this work, but the data-transfer bandwidth improvement from the GTX 460 SE to the Titan is only 2.6 times. Therefore, we predict that the calculation time would be limited to about 7 ms (a less than 2.6× speed-up) rather than 1.7 ms (10×) even if the highest-end GPU on the market were used.

The final performance of 17.6 ms for calculating a 1024 × 768 hologram is equal to 4.5 × 10^7 pixels per second (pps), which is about 4.4 times faster than our previous work [34]. To our knowledge, this is the fastest hologram calculation speed reported so far for multiple-level phase-only SLMs with complex 3D content.
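For reference, this figure follows directly from the frame time: 1024 × 768 = 786,432 hologram pixels per frame divided by 17.6 ms gives approximately 4.5 × 10^7 pps.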

3.2 Results of the concept of head mounted holographic display

The specifications of the SLM and optical components are shown in Table 4. The actual images seen through the beam splitter are shown in Fig. 9.


Table 4. Specifications of the hardware used

Fig. 9 The actual images perceived through the HMD holographic display.

To show the accommodation cues clearly, we provide two holographic images located at different depths and one physical ruler for comparison. The result is shown in Fig. 10. Figure 10(a) shows the rendered content for this test. Figure 10(b) was shot with the camera focused on the rabbit, located 10 cm from the camera, and the dragon behind is clearly out of focus. Figure 10(c) was shot with the camera focused on a physical ruler located 30 cm from the camera. The effective image size is 9 cm at a 30 cm viewing distance, which corresponds to a FOV of tan−1(9/30) = 16.7°. This number confirms what the hardware specification predicts: an 8 μm pixel pitch relayed through a 15 cm to 4 cm imaging system becomes a 2.13 μm pitch, which supports about a 16.5° viewing angle at a wavelength of 633 nm. Figure 10(d) was shot with the camera focused on the dragon, located 100 cm from the camera, and both the rabbit and the ruler are out of focus.

Fig. 10 A holographic object as observed through the head-mounted holographic display.

3.3 Comparison between the normal layer-based method, DF3D and multilayer DF3D

To illustrate the differences in optical holographic reconstruction between the normal layer-based method with different numbers of layers and the DF3D and multilayer DF3D methods, letters from A to I located from near to far are used. The camera was focused on the farthest letter, "I". The results are shown in Fig. 11.

Fig. 11 Comparison between normal layer-based method, with DF3D method, and with multilayer DF3D method.

In "DF3D", only two layers are used (the framed layers in the figure, numbered 1 and 9) together with the DF3D technique. The approximation of the accommodation cue is apparent; however, the limited viewing angle causes noticeable view mismatches, producing a shadow-like effect on the letters, especially on the letter "E". In the normal layer-based method, each letter is located on a different layer (all 9 layers are used) and the accommodation cues between them are clear. In comparison, "multilayer DF3D" uses 4 layers (the layers coloured red in the figure, numbered 1, 4, 7 and 9). It shows accommodation cues similar to those of the normal layer-based method, and the mismatch is not noticeable.

3.4 Video results

To further demonstrate the capability of the improved algorithm, we present a real-time interactive holographic display using a head-mounted display setting. The user can manipulate a reconstructed holographic object, and the corresponding holograms are calculated and projected in real time to show the response.

The pure calculation, which applies a fraction factor of 2 × 2 and 10 DF3D layers, takes about 46 ms without considering uploading to the SLM and other hardware latency. This number is equal to 4.5 × 10^7 pps, consistent with the results for the XGA version in Section 3.1. The practical frame rate is 17.8 fps (56 ms per frame) because the data uploading and hardware latency add about 10 ms.

This was verified in real time and recorded on video (Visualization 1), in which the 3D content was rotated, moved and scaled by the user and the corresponding holographic image was projected accordingly. A sketch of the designed 3D object and some extracted video frames are shown in Fig. 12.

Fig. 12 (a) Sketch of the designed 3D object and (b) some of the extracted video frames (Visualization 1).

4. Conclusions

We proposed the use of three compatible techniques to improve the calculation speed of holograms based on a previously proposed layer-based method. It was shown that the layer-based algorithm runs faster when coded in C++, that the multilayer DF3D method reduces the number of layers while maintaining the quality of the accommodation cue, and that the fraction method saves calculation load. As a result, the calculation speed of the layer-based algorithm was improved by more than four times. Furthermore, real-time interaction with holographic images through a head-mounted holographic display was demonstrated to show the potential of fast hologram generation for future real-time interactive holographic displays.

Acknowledgment

JSC and DPC would like to thank the UK Engineering and Physical Sciences Research Council (EPSRC) for the support through the Platform Grant in Liquid Crystal Photonics (EP/F00897X/1).

References and links

1. R. H.-Y. Chen and T. D. Wilkinson, “Computer generated hologram from point cloud using graphics processor,” Appl. Opt. 48(36), 6841 (2009). [CrossRef]   [PubMed]  

2. S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008). [CrossRef]   [PubMed]  

3. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009). [CrossRef]   [PubMed]  

4. S.-C. Kim and E.-S. Kim, “Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods,” Appl. Opt. 48(6), 1030–1041 (2009). [CrossRef]   [PubMed]  

5. J. Jia, Y. Wang, J. Liu, X. Li, Y. Pan, Z. Sun, B. Zhang, Q. Zhao, and W. Jiang, “Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display,” Appl. Opt. 52(7), 1404–1412 (2013). [CrossRef]   [PubMed]  

6. R. H. Chen and T. D. Wilkinson, “Computer generated hologram with geometric occlusion using GPU-accelerated depth buffer rasterization for three-dimensional display,” Appl. Opt. 48(21), 4246–4255 (2009). [CrossRef]   [PubMed]  

7. H. Zhang, N. Collings, J. Chen, B. Crossland, D. Chu, and J. Xie, “Full parallax three-dimensional display with occlusion effect using computer generated hologram,” Opt. Eng. 50(7), 074003 (2011). [CrossRef]  

8. T. Kurihara and Y. Takaki, “Shading of a computer-generated hologram by zone plate modulation,” Opt. Express 20(4), 3529–3540 (2012). [CrossRef]   [PubMed]  

9. J. Song, J. Park, and J.-I. Park, ‘Fast calculation of computer-generated holography using multi-graphic processing units’, in 2012 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (2012), pp. 1–5. [CrossRef]  

10. N. Takada, T. Shimobaba, H. Nakayama, A. Shiraki, N. Okada, M. Oikawa, N. Masuda, and T. Ito, “Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system,” Appl. Opt. 51(30), 7303–7307 (2012). [CrossRef]   [PubMed]  

11. P. Tsang, W.-K. Cheung, T.-C. Poon, and C. Zhou, “Holographic video at 40 frames per second for 4-million object points,” Opt. Express 19(16), 15205–15211 (2011). [CrossRef]   [PubMed]  

12. J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, “Generation of real-time large computer generated hologram using wavefront recording method,” Opt. Express 20(4), 4018–4023 (2012). [CrossRef]   [PubMed]  

13. F. Remondino and S. El-Hakim, “Image-based 3D Modelling: A Review,” Photogramm. Rec. 21(115), 269–291 (2006). [CrossRef]  

14. H. Shum and S. B. Kang, ‘Review of image-based rendering techniques’, Visual Communications and Image Processing 2000. International Society for Optics and Photonics, 2–13 (2000).

15. M. Lucente, ‘Diffraction-Specific Fringe Computation for Electro-Holography’, Doctoral Thesis Dissertation, Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science, Cambridge MA USA (1994).

16. W. Plesniak, M. Halle, J. Bove, J. Barabas, and R. Pappu, “Reconfigurable image projection holograms,” Opt. Eng. 45(11), 115801 (2006). [CrossRef]  

17. H. Kang, T. Yamaguchi, and H. Yoshikawa, “Accurate phase-added stereogram to improve the coherent stereogram,” Appl. Opt. 47(19), D44–D54 (2008). [CrossRef]   [PubMed]  

18. Q. Y. Smithwick, J. Barabas, D. E. Smalley, and V. M. Bove Jr., “Real-time shader rendering of holographic stereograms,” Proc. SPIE 7233, 723302 (2009).

19. J. Barabas, S. Jolly, D. E. Smalley, and V. M. Bove Jr., “Diffraction Specific Coherent Panoramagrams of Real Scenes,” Proc. SPIE 7957, 795702 (2011). [CrossRef]  

20. D. Abookasis and J. Rosen, “Computer-generated holograms of three-dimensional objects synthesized from their multiple angular viewpoints,” J. Opt. Soc. Am. A 20(8), 1537–1545 (2003). [CrossRef]   [PubMed]  

21. D. Abookasis and J. Rosen, “Three types of computer-generated hologram synthesized from multiple angular viewpoints of a three-dimensional scene,” Appl. Opt. 45(25), 6533–6538 (2006). [CrossRef]   [PubMed]  

22. J.-S. Chen, Q. Smithwick, and D. Chu, “Implementation of shading effect for reconstruction of smooth layer-based 3D holographic images,” Proc. SPIE 8648, 86480R (2013). [CrossRef]  

23. Q. Smithwick, J.-S. Chen, and D. Chu, ‘A Coarse Integral Holographic Display’, SID Symposium Digest of Technical Papers 44(1), 310–313 (2013).

24. K. Matsushima, ‘Wave-Field Rendering in Computational Holography: The Polygon-Based Method for Full-Parallax High-Definition CGHs’, IEEE/ACIS 9th International Conference on Computer and Information Science (ICIS), 846–851 (2010). [CrossRef]  

25. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48(34), H54–H63 (2009). [CrossRef]   [PubMed]  

26. D. Im, E. Moon, Y. Park, D. Lee, J. Hahn, and H. Kim, “Phase-regularized polygon computer-generated holograms,” Opt. Lett. 39(12), 3642–3645 (2014). [CrossRef]   [PubMed]  

27. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, “Fast polygon-based method for calculating computer-generated holograms in three-dimensional display,” Appl. Opt. 52(1), A290–A299 (2013). [CrossRef]   [PubMed]  

28. D. Im, J. Cho, J. Hahn, B. Lee, and H. Kim, “Accelerated synthesis algorithm of polygon computer-generated holograms,” Opt. Express 23(3), 2863–2871 (2015). [CrossRef]   [PubMed]  

29. K. Matsushima and A. Kondoh, “A wave-optical algorithm for hidden-surface removal in digitally synthetic full-parallax holograms for three-dimensional objects,” Proc. SPIE 5290, 90–97 (2004). [CrossRef]  

30. A. Kondoh and K. Matsushima, “Hidden surface removal in full‐parallax CGHs by silhouette approximation,” Syst. Comput. Jpn. 38(6), 53–61 (2007). [CrossRef]  

31. K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44(22), 4607–4614 (2005). [CrossRef]   [PubMed]  

32. H. Nishi, K. Matsushima, and S. Nakahara, “Advanced rendering techniques for producing specular smooth surfaces in polygon-based high-definition computer holography,” Proc. SPIE 8281, 828110 (2012). [CrossRef]  

33. J.-S. Chen, Q. Smithwick, and D. Chu, “Implementation of shading effect for reconstruction of smooth layer-based 3D holographic images,” Proc. SPIE 8648, 86480R (2013). [CrossRef]  

34. J.-S. Chen, D. Chu, and Q. Smithwick, “Rapid hologram generation utilizing layer-based approach and graphic rendering for realistic three-dimensional image reconstruction by angular tiling,” J. Electron. Imaging 23(2), 023016 (2014). [CrossRef]  

35. S. Suyama, S. Ohtsuka, H. Takada, K. Uehira, and S. Sakai, “Apparent 3-D image perceived from luminance-modulated two 2-D images displayed at different depths,” Vision Res. 44(8), 785–793 (2004). [CrossRef]   [PubMed]  

36. C. Lee, S. DiVerdi, and T. Höllerer, ‘An immaterial depth-fused 3D display’, in Proceedings of the 2007 ACM symposium on Virtual reality software and technology, New York, NY, USA, 191–198 (2007). [CrossRef]  

37. ‘Psychtoolbox Wiki: Psychtoolbox-3’. [Online]. Available: http://psychtoolbox.org/. [Accessed: 21-May-2015].

38. ‘OpenGL - The Industry Standard for High Performance Graphics’. [Online]. Available: http://www.opengl.org/. [Accessed: 21-May-2015].

39. T. Meeser, C. von Kopylow, and C. Falldorf, ‘Advanced digital lensless fourier holography by means of a spatial light modulator’ in 3DTV-Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON, 2010), pp. 1–4. [CrossRef]  

40. J. Liang and M. F. Becker, “Spatial bandwidth analysis of fast backward Fresnel diffraction for precise computer-generated hologram design,” Appl. Opt. 53(27), G84–G94 (2014). [CrossRef]   [PubMed]  

41. R. A. Farrar and M. Southfield, ‘Helmet-mounted Holographic Aiming Sight’, U.S. Patent No 3,633,988, 11-Jan-1970.

42. H.-E. Kim, N. Kim, H. Song, H.-S. Lee, and J.-H. Park, “Three-dimensional holographic display using active shutter for head mounted display application,” Proc. SPIE 7863, 78631Y (2011). [CrossRef]  

43. T. Yoneyama, C. Yang, Y. Sakamoto, and F. Okuyama, ‘Eyepiece-type Full-color Electro-holographic Binocular Display with See-through Vision’, in Digital Holography and Three-Dimensional Imaging, DW2A.11 (2013). [CrossRef]  

44. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014). [CrossRef]   [PubMed]  

45. D. Palima and V. R. Daria, “Effect of spurious diffraction orders in arbitrary multifoci patterns produced via phase-only holograms,” Appl. Opt. 45(26), 6689–6693 (2006). [CrossRef]   [PubMed]  

46. D. Palima and V. R. Daria, “Holographic projection of arbitrary light patterns with a suppressed zero-order beam,” Appl. Opt. 46(20), 4197–4201 (2007). [CrossRef]   [PubMed]  

47. J. Liang, S. Y. Wu, F. K. Fatemi, and M. F. Becker, “Suppression of the zero-order diffracted beam from a pixelated spatial light modulator by phase compression,” Appl. Opt. 51(16), 3294–3304 (2012). [CrossRef]   [PubMed]  

48. “The Stanford 3D scanning repository,” [Online]. Available: http://graphics.stanford.edu/data/3Dscanrep/.

49. “World leader in visual computing technologies NVIDIA,” [Online]. Available: http://www.nvidia.com/page/home.html.

Supplementary Material (1)

Visualization 1: AVI (10953 KB). Multimedia video file.
