Abstract

In computer-generated hologram (CGH) calculations, a diffraction pattern needs to be calculated from all points of a 3-D object, which incurs a heavy computational cost. In this paper, we propose a novel fast CGH calculation method using the sparse fast Fourier transform. The proposed method consists of two steps. First, the sparse dominant signals of the CGH are identified by calculating a wavefront on a virtual plane placed between the object and the CGH plane. Second, the wavefront on the CGH plane is calculated from the measured sparsity using sparse Fresnel diffraction. Experimental results show that the proposed method is much faster than existing methods while preserving the visual quality.

© 2016 Optical Society of America

1. Introduction

Holographic 3-D display is a promising technology for realistic 3-D visualization, as it can provide fully satisfying 3-D perception without the shutter or polarized glasses required by stereoscopic 3-D [1]. Computer-generated holograms (CGHs) are attracting increasing interest for holographic displays. CGHs are generated by numerically calculating a holographic fringe pattern in computer simulation. They have attractive features, such as not requiring specialized holographic recording materials and allowing the physical manifestation of synthetic objects [2].

There are various CGH calculation methods for 3-D objects, including point light source (PLS) based methods [3–12], polygon-based methods [13, 14], RGB-D image based methods [15], and so on. PLS-based methods treat a 3-D object as an aggregate of PLSs. By using a ray tracing algorithm, they can generate CGHs more flexibly than polygon-based methods [10]. However, the ray tracing process requires heavy computation to accumulate the complex amplitude of every 3-D object point at every pixel on the CGH plane. As a result, the computational complexity is known as one of the challenging problems [10].

To accelerate CGH calculation, several methods have been proposed. In [3], a look-up table (LUT) based approach was proposed, which reduces the computational cost by pre-calculating and storing holographic fringe patterns. Kim et al. proposed novel LUT-based approaches to reduce the memory required to store the pre-calculated values [4–6]. However, a large amount of memory and additional operations (e.g., file reads or array accesses) are still required for high-resolution CGHs [7]. In [8] and [9], approaches based on recurrence relations were proposed; however, approximation errors can accumulate and propagate. Recently, wavefront recording plane (WRP) based approaches were proposed [10–12]. In [10], the computational complexity was reduced by calculating the wavefront (i.e., complex amplitude) on the WRP, followed by a Fresnel diffraction calculation with the fast Fourier transform (FFT). In [11] and [12], multi-WRP and tilted-WRP methods were proposed.

As mentioned above, a holographic fringe pattern can be computed by FFT-based light propagation such as Fresnel diffraction or the angular spectrum method. Most Fourier coefficients of image, video, and audio signals are very small or zero [16, 17]. Likewise, holographic fringe patterns are sparse in the Fourier domain [18]. Based on this sparsity property, compressive holography methods have been proposed that reconstruct the original data from a small set of sampled signals [18, 19]. Just as in compressive holography, the sparsity property can play a key role in CGH calculation by eliminating redundant computations. However, existing CGH calculation methods do not yet exploit the sparsity of CGHs.

In this paper, we propose a novel fast CGH calculation method using the sparsity of holographic fringe patterns. The main contributions of this paper are the following.

  • 1) We observe the relation between the visual quality of the numerical reconstruction and a small (sparse) set of CGH signals. The experiments show that CGHs of 3-D objects are sparse enough that a 3-D object can be reconstructed from a small number of dominant CGH signals. Based on this observation, we propose a fast CGH calculation that exploits the sparsity of the CGHs.
  • 2) The sparsity of the holographic fringe pattern is measured from the wavefront on a virtual plane. In this paper, the wavefront on the virtual plane is obtained efficiently by clustering the 3-D object points; the wavefronts of the clustered points are calculated in parallel. The sparsity measured on the virtual plane is then used by the sparse fast Fourier transform (sFFT) to calculate the wavefront on the CGH plane via Fresnel diffraction. The computational complexity of the sFFT is about O(k log k), which is lower than the O(N log N) of the FFT, where N denotes the total number of signals (e.g., the number of CGH pixels) and k the number of sparse dominant signals.

The remainder of this paper is organized as follows. In Section 2, the sparsity of the CGHs is investigated. In Section 3, the proposed fast CGH calculation method is described. Section 4 presents the experiments and results to evaluate the performance of the proposed method. Finally, conclusions are drawn in Section 5.

2. Sparsity of CGH and the visual quality

In this section, we observe the visual quality of CGH signals. CGH signals are sampled with two different sampling schemes, and the visual quality of the corresponding numerical reconstructions is observed. The experimental parameters (e.g., wavelength, sampling pitch, distance between the object and the CGH plane, CGH resolution, etc.) are the same as the conditions described in Table 2 in Section 4. Figure 1 shows examples of numerically reconstructed results from sampled CGH signals, where 10% of the signals were used. Figures 1(a) and 1(b) show a 3-D object point cloud and its holographic fringe pattern (i.e., CGH) generated by ray tracing, respectively. Figure 1(c) shows the result reconstructed from randomly sampled signals. Figure 1(d) shows the result reconstructed from the top 10% of CGH signals by magnitude. As shown in Figs. 1(c) and 1(d), sampled sparse signals can provide visually acceptable results. In particular, magnitude-based sampling provides better visual quality than random sampling at the same sampling ratio. This observation is consistent with previous work [18]. As a result, we observe that the top-magnitude samples of the CGH signals have an important effect on the visual quality.

Fig. 1 Examples of numerically reconstructed images from sampled CGH signals, where 10% of the signals were used. (a) 3-D object point cloud for Bunny. (b) Holographic fringe pattern (i.e., CGH). (c) Numerical reconstruction from a randomly selected 10% of the CGH signals. (d) Numerical reconstruction from the top 10% of the CGH signals by magnitude.

In addition, we measure the visual quality as a function of the number of samples in the CGHs. Figure 2 shows the average peak signal-to-noise ratio (PSNR) [dB] according to the sampling ratio for four data sets (please refer to Section 4 for details). The PSNR is calculated between the numerical reconstruction from the entire CGH signals (i.e., a numerically reconstructed 2-D image, as in Figs. 1(c) and 1(d)) and that from the sampled CGH signals, with the full-signal reconstruction taken as the reference. To obtain the numerical reconstructions from the CGHs of the 3-D data, direct calculation using ray tracing is used in our experiments [20]. As shown in Fig. 2, in the magnitude-based sampling case, 30 dB PSNR can be achieved with only about 5% of the signals (i.e., the top 5% of CGH signals by magnitude). A PSNR over 30 dB provides viewers with sufficient visual image quality [21].
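The two measurements above, magnitude-based sampling of CGH signals and the PSNR against the full-signal reconstruction, can be sketched in a few lines of NumPy. This is a minimal illustration; the function names and the peak value of 255 are our own assumptions, not from the paper:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def keep_top_magnitude(signals, ratio):
    """Keep only the top `ratio` fraction of signals by magnitude,
    zeroing the rest (magnitude-based sampling)."""
    k = max(1, int(ratio * signals.size))
    threshold = np.partition(np.abs(signals).ravel(), -k)[-k]
    return np.where(np.abs(signals) >= threshold, signals, 0)
```

For example, `keep_top_magnitude(cgh, 0.05)` retains the top 5% of a complex CGH field by magnitude, and `psnr` then compares the reconstructions obtained from the full and sampled fields.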

Fig. 2 The quality of the numerical reconstruction according to the sampling ratio. The visual quality is measured by PSNR.

We observe that the CGH signals are sparse enough that roughly the top 5–10% of dominant signals can provide feasible visual quality in the numerical reconstruction. Based on this observation, we propose a fast CGH calculation that exploits the sparsity property of the holographic fringe pattern in the following section.

3. Proposed fast CGH calculation method

Figure 3 shows an overview of the proposed CGH calculation for 3-D objects based on the sparsity of the holographic fringe pattern. The proposed method consists of two steps. First, the wavefronts of clustered object points are calculated on a virtual plane located between the object and the CGH plane. The sparsity of the holographic fringe pattern is measured as the number of large coefficients (red dots in Fig. 3) on the virtual plane. Second, the CGH is calculated by propagating the wavefront from the virtual plane to the CGH plane using sparse Fresnel diffraction with the measured sparsity. The details of the proposed fast CGH calculation are described in the following subsections.

Fig. 3 Overview of the proposed fast CGH calculation for 3-D objects. In the first step, the wavefront on the virtual plane is calculated using multiple ray tracing for each object point cluster. In the second step, the wavefront on the CGH plane is calculated using sparse Fresnel diffraction with the sFFT. Note that the red dots indicate a small number of dominant signals on the virtual plane. For the sFFT, the number of dominant signals (i.e., the sparsity) is measured based on the magnitude. z1 represents the distance between the object and the virtual plane; z2 represents the distance between the virtual plane and the CGH plane.

3.1 Calculation of the wavefront on the virtual plane using multiple ray tracing

In this section, we present the fast calculation of the complex amplitude (i.e., wavefront) on the virtual plane. Since the virtual plane is placed close to the object, point lights traverse limited areas [10]. As a result, the computational complexity of each point light is reduced by employing the virtual plane. However, a heavy computational cost is still required to calculate the wavefronts of a large number of object points.

In this paper, the computational time of the wavefront is reduced by simultaneously calculating multiple complex amplitudes on the virtual plane. To that end, 3-D object points are divided into S clusters. Let Ct denote the t-th cluster (t = 1, …, S) and Nt denote the number of points belonging to the t-th cluster. Let uVP denote the wavefront on the virtual plane. The wavefront on the virtual plane, uVP, is calculated by integrating multiple complex amplitudes of clustered points. It can be written as

$$u_{VP}(x,y)=\sum_{t=1}^{S}\sum_{i=1}^{N_t}\frac{A_{ti}}{R_{ti}}\exp(jkR_{ti}),$$
where $R_{ti}=\sqrt{(x-x_{ti})^2+(y-y_{ti})^2+z_{ti}^2}$ denotes the distance between pixel (x, y) on the virtual plane and the i-th point (xti, yti, zti) in the t-th cluster (Ct). Ati indicates the intensity of the i-th object point in the t-th cluster (Ct). k represents the wave number, k = 2π/λ, where λ is the wavelength of the reference light.
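A direct (unclustered) implementation of this superposition might look as follows. This is a sketch in NumPy; the grid layout and argument conventions are illustrative assumptions, and note the wave number k = 2π/λ:

```python
import numpy as np

def wavefront_on_plane(points, amplitudes, xs, ys, wavelength):
    """Superpose spherical waves from 3-D object points onto a plane.
    `points` is an (N, 3) array; z is the distance of each point from
    the plane; `xs`, `ys` are the sample coordinates on the plane."""
    k = 2.0 * np.pi / wavelength              # wave number k = 2*pi/lambda
    X, Y = np.meshgrid(xs, ys)
    u = np.zeros_like(X, dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        R = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        u += (a / R) * np.exp(1j * k * R)     # (A/R) * exp(jkR)
    return u
```

For a single point source directly above the origin at distance R, the field magnitude at the origin is A/R, which gives a quick sanity check of the implementation.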

Figure 4 shows the proposed fast calculation of the wavefront on the virtual plane using multiple ray tracing. As shown in Fig. 4, the number of object points within a cluster is much smaller than the total number of 3-D object points, and the areas traversed by the point lights on the virtual plane are small. By calculating the wavefronts of the point clusters in parallel, the computational speed of the ray tracing can be significantly improved. In this paper, the multiple wavefronts on the virtual plane are calculated by CPU multi-threading with OpenMP. The ray tracing could also be parallelized over unclustered 3-D points; however, a 3-D object data set has thousands of points, and spawning that many parallel tasks would significantly increase overheads such as memory allocation. In contrast, cluster-grained parallelism makes effective use of the limited number of CPU threads available in practice (the number of clusters can be much smaller than the number of 3-D points). By assigning each cluster to a CPU thread, the calculation time for the wavefronts of all 3-D points is effectively reduced. The 3-D object points are simply divided into clusters according to their index order.
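The cluster-grained parallelism (OpenMP threads in the paper) can be mimicked with a thread pool. The sketch below restates the per-cluster ray tracing kernel and splits the points by index order; cluster count, thread count, and function names are illustrative assumptions:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def cluster_wavefront(pts, amps, X, Y, k):
    """Ray tracing for one cluster of point sources (Eq. (1) over C_t)."""
    u = np.zeros_like(X, dtype=complex)
    for (px, py, pz), a in zip(pts, amps):
        R = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        u += (a / R) * np.exp(1j * k * R)
    return u

def parallel_wavefront(points, amplitudes, X, Y, wavelength, n_clusters=16):
    """Divide the points into clusters by index order, compute each
    cluster's wavefront on a worker thread, and sum the results."""
    k = 2.0 * np.pi / wavelength
    groups = np.array_split(np.arange(len(points)), n_clusters)
    with ThreadPoolExecutor(max_workers=n_clusters) as pool:
        parts = pool.map(
            lambda g: cluster_wavefront(points[g], amplitudes[g], X, Y, k),
            groups)
        return sum(parts)
```

Since the per-cluster sums are independent, the parallel result matches a serial accumulation up to floating-point reordering.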

Fig. 4 Proposed fast calculation of the wavefront on the virtual plane using multiple ray tracing. The red dots and the green dots represent the object points belonging to the first and second clusters, C1 and C2, respectively. The blue dots represent the object points belonging to the S-th cluster, CS. Notably, the multiple wavefronts of the clusters on the virtual plane are calculated using ray tracing in parallel. Note that the radius of the small area traversed by the i-th point light in Ct is defined as Wti = |zti|tanθ = |zti|tan(sin−1(λ/2p)), as reported in [10]. zti represents the distance between the i-th point in Ct and the virtual plane. p is the sampling pitch, which is 8.5 μm in this paper.

In this paper, the virtual plane plays a key role in capturing the signal characteristics of the holographic fringe pattern as well as in reducing the computational cost of the ray tracing. The wavefront on the virtual plane is then propagated by Fresnel diffraction with the sFFT to calculate the CGH. The details of the proposed Fresnel diffraction with sFFT are described in the next subsection.

3.2 Calculation of the wavefront on CGH plane using sparse Fresnel diffraction with sFFT

To calculate the wavefront on the CGH plane from the virtual plane, Fourier transform based propagation such as Fresnel diffraction has been widely used [3–12]. Fourier transform based propagation reduces the computational time of CGH calculation by using the FFT. However, this alone is not enough to make high-resolution CGH calculation tractable. In this paper, we propose a fast CGH calculation using the sFFT in Fourier based propagation.

Recently, sFFT methods have been proposed to reduce the computational complexity of the FFT [16, 17]. The main idea of the sFFT is to sample a small number of signals (exploiting the sparsity of the signal). By computing the FFT with only the sparse signals, the sFFT achieves a low computational complexity. However, it requires the locations and values of the dominant sparse signals in the frequency domain to avoid data loss or an incorrect FFT result. By performing filtering or a permutation scheme, the sFFT increases the probability of capturing the dominant signals in the Fourier domain and outperforms the FFT [22].

As described in Section 2, sparse signals of CGHs could provide feasible visual quality of numerical reconstruction. A sparse Fresnel diffraction with sFFT can accelerate the calculation of the wavefront on the CGH plane. The main idea of the proposed method is to measure the sparsity of the holographic fringe pattern from the wavefront recorded on the virtual plane. With the measured sparsity, the CGH is rapidly generated using sFFT. The signal characteristics of CGH are similar to those of the wavefront on virtual plane since the diffraction calculation on the virtual plane is equivalent to CGH calculation from the 3-D object [10].

Figure 5 shows the proposed fast CGH calculation using sparse Fresnel diffraction with the sFFT. The sparsity of the holographic fringe pattern is measured by finding the number of dominant signals on the virtual plane [23]. In this paper, dominant signals are defined as the top-magnitude signals, i.e., those whose magnitude is greater than a threshold. The wavefront on the virtual plane and its sparsity (the number of dominant signals) are the inputs of the sFFT. In the sFFT, the input signals are randomly permuted and sampled according to the sparsity of the wavefront on the virtual plane, and the FFT is then performed on the permuted and sampled signals. Finally, the wavefront on the CGH plane is obtained by sparse Fresnel diffraction using the sFFT. Let u(ξ,η) denote the wavefront on the CGH plane. It can be written as

$$u(\xi,\eta)=\frac{\exp(j2\pi z_2/\lambda)}{j\lambda z_2}\iint u_{VP}(x,y)\exp\!\left(\frac{j\pi}{\lambda z_2}\left((\xi-x)^2+(\eta-y)^2\right)\right)dx\,dy=\frac{\exp(j2\pi z_2/\lambda)}{j\lambda z_2}\,\mathcal{F}^{-1}\!\left[\mathcal{F}[u_{VP}(\xi,\eta)]\cdot\mathcal{F}[h(\xi,\eta)]\right],$$
where ℱ[∙] and ℱ−1[∙] represent the sFFT and inverse sFFT operators, (ξ,η) is the pixel position on the CGH plane, z2 represents the perpendicular distance between the virtual plane and the CGH plane, and h(ξ,η) is the impulse response function, $h(\xi,\eta)=\exp\!\left(\frac{j\pi}{\lambda z_2}(\xi^2+\eta^2)\right)$.

Fig. 5 Schematic diagram of the proposed fast CGH calculation using sparse Fresnel diffraction with sFFT. The wavefront on the CGH plane is calculated from the wavefront on a virtual plane using sFFT. To that end, the sparsity of the wavefront is measured by counting the dominant signals based on the magnitude.

In the proposed method, a selection process is first performed in the Fourier domain of CGHs to determine, in advance, the magnitude threshold that extracts about the top 10% of the CGH fringe pattern signals of 3-D point cloud data. In our experiments, the threshold value (Th) is 0.35, with which about the top 10% of the CGH signals of the 3-D point cloud data sets are extracted. By applying this threshold to the magnitudes of the wavefront on the virtual plane, the sparsity of the CGH can be estimated. The sparsity (k) estimated by the threshold (Th = 0.35) depends on the signal distribution of each 3-D point cloud data set. With the wavefront on the virtual plane and the estimated sparsity (k), the k dominant CGH signals are generated by sparse Fresnel diffraction. The threshold parameter and the sparsity of each data set are given in Section 4.2. Figure 6 shows an example of the signal distribution of the CGH fringe pattern for Bunny. As shown in Fig. 6(a), the CGH fringe pattern in the Fourier domain has a few dominant signals and many small or zero signals. Figure 6(b) shows the sparse distribution of the CGH fringe pattern after the selection process.
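The sparsity estimation step reduces to counting Fourier-domain magnitudes above the threshold. In the sketch below, the normalization of the magnitudes before thresholding is our assumption, since the paper does not state the scale on which Th = 0.35 is applied:

```python
import numpy as np

def estimate_sparsity(u_vp, th=0.35):
    """Estimate the sparsity k of the CGH: count the Fourier-domain
    signals of the virtual-plane wavefront whose (normalized) magnitude
    exceeds the threshold th."""
    mag = np.abs(np.fft.fft2(u_vp))
    mag = mag / mag.max()      # normalize so th is relative to the peak
    return int(np.count_nonzero(mag > th))
```

The returned k is then handed to the sFFT as the expected number of dominant coefficients.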

Fig. 6 Examples of the distributions of the CGH fringe pattern for Bunny. (a) Original signal distribution of the CGH fringe pattern in Fourier domain. (b) Sparse distribution of the CGH fringe pattern in Fourier domain after the selection process.

4. Experiments and results

4.1 Data sets

To evaluate the performance of the proposed fast CGH calculation for 3-D objects, four publicly available 3-D point cloud data sets were used. Three of them were collected from the Berkeley instance recognition data set (BigBIRD) [24]: Baby toy, Soft soap, and Syrup. In addition, Bunny was collected from the Stanford 3D scanning repository [25]. Table 1 lists the data sets used in our experiments. For example, Bunny is a synthetic object with 35,947 points.

Table 1. Data sets in our experiments.

4.2 Experimental setup

Our computing environment for calculating the CGHs from the 3-D point cloud data sets was Microsoft Windows 7 Professional Service Pack 1, an Intel Core i7-4770 CPU @ 3.40 GHz with 32 GB of memory, Microsoft Visual Studio 2013, and OpenMP. Table 2 lists the CGH calculation conditions. We used the FFTW 3.2.1 library [26] for the FFT operation and the sFFT algorithm of [17].

Table 2. CGH calculation conditions in our experiments.

In our experiment, the threshold parameter (Th) was 0.35. With this threshold, the k dominant signals were obtained from the N-dimensional CGH signals (N = 1024 × 1024 = 1,048,576). The sparsity (k) measured by the threshold was 118,490 (k/N = 11.3%) for the Baby toy data set, 99,304 (9.5%) for Soft soap, 70,884 (6.8%) for Syrup, and 159,382 (15.2%) for Bunny.

4.3 Performance evaluation results for visual quality and computational speed

Experiments were performed in terms of visual quality and computational time in order to evaluate the performance of the proposed method. To assess the visual quality, we compared the numerically reconstructed results from the CGHs generated by five CGH calculation methods: a ray tracing method, an LUT based method [3], a recurrence based method [8] (with a stride of 2 pixels), a WRP based method [10], and the proposed method. Notably, the results of the ray tracing method were considered the ground truth [7]. In addition, we computed the PSNR between the numerical reconstruction of ray tracing and that of each other method in order to quantitatively evaluate the visual quality of the CGHs. The PSNR increases as the difference between a CGH calculation method and the ground truth decreases.

Table 3 shows the computational time of the CGH calculation for each data set. We measured the average calculation time while each 3-D object was rotated through 360 degrees. As seen in Table 3, the proposed method has a low computational complexity for CGHs of 3-D objects. The computational times of the ray tracing method (i.e., the ground truth) and the LUT based method increased rapidly with the number of object points. The recurrence based method was faster than ray tracing but still required considerable computation time. The WRP based method was faster than the other three methods. The proposed method was at least 15 times faster than the WRP based method while providing visually plausible results (please see Fig. 7 and Table 4). For example, for Baby toy, the computational times of the proposed method for calculating the wavefront on the virtual plane and on the CGH plane were 0.08 s and 0.07 s, respectively.

Table 3. Computational times [seconds (s)] of CGH calculation for each data set.

Fig. 7 Visual results of the numerical reconstruction from the CGHs generated by four existing methods and the proposed method for each data set. (a) Results of the ray tracing, (b) Results of the LUT based method [3], (c) Results of the recurrence based method [8], (d) Results of the WRP based method [10], (e) Results of the proposed method.

Table 4. PSNR [dB] for visual quality of the numerical reconstruction from the generated CGH.

Figure 7 shows the images obtained by numerical reconstruction from each CGH calculation method. In Fig. 7, the first row shows the numerical reconstructions from the CGHs generated by the ray tracing algorithm (i.e., the ground truth). The second and third rows show the reconstructions from the CGHs generated by the LUT based and recurrence based methods, respectively. The fourth and fifth rows show the results from the CGHs calculated by the WRP based method and the proposed method, respectively. As shown in Fig. 7(b), the results of the LUT based method were very similar to the ground truth because pre-calculated values were simply loaded and used. In Fig. 7(c), the result of the recurrence based method for Bunny shows some distortions, while the results for the other data sets are visually plausible. Whereas a small number of object points are sparsely distributed in Baby toy, Soft soap, and Syrup, a large number of points (35,947) lie close together in Bunny; the Bunny case is therefore more susceptible to the approximation errors introduced by the stride (2 pixels = 17 μm). As shown in Fig. 7(e), the proposed method provided visually plausible results for all four data sets regardless of the number of points, similar to Fig. 7(d).

Table 4 lists the PSNR of the numerically reconstructed images, with the ray tracing result used as the reference. As shown in Table 4, the LUT based method achieved higher PSNR values, but at the cost of a larger memory footprint. The PSNR values of the recurrence based method were lower than those of the LUT based method due to the accumulated approximation errors; in particular, its visual quality depended strongly on the density of the point light sources and on the stride. The proposed method generally provided visually acceptable quality with a low computational complexity.

4.4 Discussions

In many previous studies [1, 7, 11, 12] whose experimental conditions are similar to ours, the Fresnel diffraction model with three FFTs has mostly been used. For this reason, we also used the Fresnel diffraction calculation with three FFTs. Other methods that require fewer than three FFTs could be used for the diffraction calculation in our framework. In addition, by using the analytical transfer function of the Fresnel impulse response, the computational time of the proposed method could be further reduced.

There is a trade-off between the WRP approach and Fresnel diffraction with sFFT. In [27], simulation results showed that the compressive sampling ratio decreased as the normalized distance z/z0 increased (z0 = Np2/λ). The compressive sampling ratio, which is related to the coherence, asymptotically approached its lower bound and nearly saturated at z/z0 = 0.3 [27]. In our experimental environment, the parameter z0 was about 0.12 m. We maintained sufficiently low coherence on the virtual plane, in the compressed sensing sense, because the ratio z1/z0 was about 0.4. If the distance between the object and the virtual plane, z1, were smaller than 0.036 m (i.e., z1/z0 < 0.3), the coherence would increase monotonically as the distance decreased. In our experiment, z1 was set to 0.05 m in order to preserve sufficiently low coherence while reducing the computational cost.
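The distance parameters in this discussion follow directly from z0 = Np2/λ. A quick check with assumed values (the 633 nm wavelength is our assumption; it is consistent with the reported z0 ≈ 0.12 m, but the exact value is listed in Table 2):

```python
N = 1024                 # samples per dimension (CGH resolution)
p = 8.5e-6               # sampling pitch [m]
wavelength = 633e-9      # assumed red wavelength [m]

z0 = N * p ** 2 / wavelength   # ~0.117 m, matching "about 0.12 m"
z1 = 0.05                      # object-to-virtual-plane distance [m]
ratio = z1 / z0                # ~0.43, i.e., "about 0.4", above 0.3
```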

A holographic stereogram is a ray based approach that uses a single Fourier coefficient per block for every point in the CGH calculation. For comparison with a ray based CGH algorithm, the phase-added stereogram (PAS) method [28] was employed. The calculation of the Fourier coefficient of each hologram block for every point was parallelized with 16 CPU threads, the same condition as the multiple ray tracing of the proposed method. The PAS segmentation size was set to 64 × 64 [28], and the other experimental parameters (e.g., wavelength, CGH resolution, sampling pitch, etc.) were the same as in Table 2. The computational times of the PAS were 4.6 s for Baby toy, 12.4 s for Soft soap, 24.6 s for Syrup, and 96.2 s for Bunny. The computation times of the proposed method were 0.15 s for Baby toy, 0.22 s for Soft soap, 0.39 s for Syrup, and 1.82 s for Bunny (as seen in Table 3). The proposed method was thus at least 30 times faster than the PAS. The PSNR values of the PAS were 31.06 dB for Baby toy, 31.89 dB for Soft soap, 31.09 dB for Syrup, and 30.10 dB for Bunny, whereas those of the proposed method were 28.76 dB for Baby toy, 30.21 dB for Soft soap, 30.08 dB for Syrup, and 30.05 dB for Bunny (as seen in Table 4). In terms of visual quality, the PAS was therefore slightly better than the proposed method.

Figure 8 plots the visual quality (PSNR) and the computation time of the sFFT as functions of the sparsity k for Bunny. As shown in Fig. 8, the computation time increases with k, and the visual quality gradually approaches that of the WRP based method.

Fig. 8 The quality and the computation time (step 2) as a function of k for Bunny.

Furthermore, to handle specular objects (specular surfaces), polygon-based methods can be used [29, 30], because they can model a 3-D surface with both diffuse and specular reflectance on each individual polygon. The proposed method, on the other hand, is a point-based approach that calculates CGHs from 3-D point cloud data. To handle a specular object with the proposed method, one alternative is to convert the specular object into a point cloud: the object data are first converted into point cloud data, the proposed method is then applied to calculate the CGH, and the specular object can be reconstructed by a rendering algorithm for specular surfaces [31].

5. Conclusions

This paper presented a fast CGH calculation method for 3-D objects. The proposed CGH calculation framework exploits the sparsity property of the holographic fringe pattern. To measure the signal characteristics of the CGH, the wavefront on a virtual plane was calculated. By dividing the object points into clusters and calculating the wavefronts of the clusters in parallel, we improved the computational speed. In particular, based on the wavefront recorded on the virtual plane, we introduced a novel fast CGH calculation using sparse Fresnel diffraction with the sFFT. In our experiments, the proposed method was at least 15 times faster than existing methods while preserving the visual quality. The numerical reconstruction results showed that the proposed CGH calculation method achieves good visual quality, which we also verified with an objective quality metric, PSNR.

Acknowledgements

This work was supported by Samsung Electronics.

References and links

1. J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, “Generation of real-time large computer generated hologram using wavefront recording method,” Opt. Express 20(4), 4018–4023 (2012). [CrossRef]   [PubMed]  

2. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38(8), 46–53 (2005). [CrossRef]  

3. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). [CrossRef]  

4. S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008). [CrossRef]   [PubMed]  

5. S.-C. Kim and E.-S. Kim, “Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods,” Appl. Opt. 48(6), 1030–1041 (2009).

6. S.-C. Kim, J.-M. Kim, and E.-S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20(11), 12021–12034 (2012).

7. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram using run-length encoding based recurrence relation,” Opt. Express 23(8), 9852–9857 (2015).

8. K. Matsushima and M. Takai, “Recurrence formulas for fast creation of synthetic three-dimensional holograms,” Appl. Opt. 39(35), 6587–6594 (2000).

9. H. Yoshikawa, S. Iwase, and T. Oneda, “Fast computation of Fresnel holograms employing difference,” Proc. SPIE 3956, 48–55 (2000).

10. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009).

11. A. Symeonidou, D. Blinder, A. Munteanu, and P. Schelkens, “Computer-generated holograms by multiple wavefront recording plane method with occlusion culling,” Opt. Express 23(17), 22149–22161 (2015).

12. D. Arai, T. Shimobaba, K. Murano, Y. Endo, R. Hirayama, D. Hiyama, T. Kakue, and T. Ito, “Acceleration of computer-generated holograms using tilted wavefront recording plane method,” Opt. Express 23(2), 1740–1747 (2015).

13. T. Tommasi and B. Bianco, “Computer-generated holograms of tilted planes by a spatial frequency approach,” J. Opt. Soc. Am. A 10(2), 299–305 (1993).

14. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20(9), 1755–1762 (2003).

15. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express 21(7), 9192–9197 (2013).

16. H. Hassanieh, P. Indyk, D. Katabi, and E. Price, “Simple and practical algorithm for sparse Fourier transform,” in Proc. 23rd Annu. ACM-SIAM SODA (ACM, 2012), pp. 1183–1194.

17. S. Pawar and K. Ramchandran, “Computing a k-sparse n-length discrete Fourier transform using at most 4k samples and O(k log k) complexity,” in Proc. Int. Symp. Information Theory (IEEE, 2013), pp. 464–468.

18. Y. Rivenson, A. Stern, and B. Javidi, “Compressive Fresnel holography,” J. Disp. Technol. 6(10), 506–509 (2010).

19. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express 17(15), 13040–13049 (2009).

20. T. Nishitsuji, T. Shimobaba, T. Kakue, D. Arai, and T. Ito, “Simple and fast cosine approximation method for computer-generated hologram calculation,” Opt. Express 23(25), 32465–32470 (2015).

21. T. Shimobaba and T. Ito, “Random phase-free computer-generated hologram,” Opt. Express 23(7), 9549–9554 (2015).

22. S.-H. Hsieh, C.-S. Lu, and S.-C. Pei, “Sparse fast Fourier transform by downsampling,” in Proc. Int. Conf. Acoustics, Speech and Signal Processing (IEEE, 2013), pp. 5637–5641.

23. X. Luan, B. Fang, L. Liu, W. Yang, and J. Qian, “Extracting sparse error of robust PCA for face recognition in the presence of varying illumination and occlusion,” Pattern Recognit. 47(2), 495–508 (2014).

24. A. Singh, J. Sha, K. S. Narayan, T. Achim, and P. Abbeel, “BigBIRD: A large-scale 3D database of object instances,” in Proc. Int. Conf. Robotics and Automation (IEEE, 2014), pp. 509–516.

25. The Stanford 3D Scanning Repository: http://graphics.stanford.edu/data/3Dscanrep/

26. FFTW Home Page, www.fftw.org/

27. Y. Rivenson and A. Stern, “Conditions for practicing compressive Fresnel holography,” Opt. Lett. 36(17), 3365–3367 (2011).

28. H. Kang, T. Yamaguchi, and H. Yoshikawa, “Accurate phase-added stereogram to improve the coherent stereogram,” Appl. Opt. 47(19), D44–D54 (2008).

29. K. Yamaguchi and Y. Sakamoto, “Computer generated hologram with characteristics of reflection: reflectance distributions and reflected images,” Appl. Opt. 48(34), H203–H211 (2009).

30. T. Ichikawa, Y. Sakamoto, A. Subagyo, and K. Sueoka, “A method of calculating reflectance distributions for CGH with FDTD using the structure of actual surfaces,” Proc. SPIE 7957, 795707 (2011).

31. H. Nishi, K. Matsushima, and S. Nakahara, “Advanced rendering techniques for producing specular smooth surfaces in polygon-based high-definition computer holography,” Proc. SPIE 8281, 828110 (2012).





Figures (8)

Fig. 1 Examples of numerically reconstructed images from sampled CGH signals, where 10% of the signals were used. (a) 3-D object point cloud for Bunny. (b) Holographic fringe pattern (i.e., the CGH). (c) Numerical reconstruction from a randomly selected 10% of the CGH signals. (d) Numerical reconstruction from the 10% of CGH signals with the largest magnitudes.

Fig. 2 Quality of the numerical reconstruction as a function of the sampling ratio. Visual quality is measured by PSNR.

Fig. 3 Overview of the proposed fast CGH calculation for 3-D objects. In the first step, the wavefront on the virtual plane is calculated using multiple ray tracing for each object-point cluster. In the second step, the wavefront on the CGH plane is calculated using sparse Fresnel diffraction with sFFT. The red dots indicate the small number of dominant signals on the virtual plane; for sFFT, the number of dominant signals (i.e., the sparsity) is measured from their magnitudes. z1 represents the distance between the object and the virtual plane, and z2 the distance between the virtual plane and the CGH plane.

Fig. 4 Proposed fast calculation of the wavefront on the virtual plane using multiple ray tracing. The red and green dots represent the object points belonging to the first and second clusters, C1 and C2, respectively; the blue dots represent the object points belonging to the S-th cluster, CS. Notably, the wavefronts of the clusters on the virtual plane are calculated by ray tracing in parallel. The radius of the small area traversed by the i-th point light source in cluster Ct is defined as $W_t^i = |z_t^i|\tan\theta = |z_t^i|\tan\!\left(\sin^{-1}(\lambda/2p)\right)$, as reported in [10], where $z_t^i$ is the distance between the i-th point in Ct and the virtual plane, and p is the sampling pitch (8.5 μm in this paper).

Fig. 5 Schematic diagram of the proposed fast CGH calculation using sparse Fresnel diffraction with sFFT. The wavefront on the CGH plane is calculated from the wavefront on the virtual plane using sFFT. To that end, the sparsity of the wavefront is measured by counting the dominant signals based on their magnitudes.

Fig. 6 Examples of the distributions of the CGH fringe pattern for Bunny in the Fourier domain. (a) Original signal distribution. (b) Sparse distribution after the selection process.

Fig. 7 Visual results of the numerical reconstruction from the CGHs generated by four existing methods and the proposed method for each data set. (a) Ray tracing. (b) LUT-based method [3]. (c) Recurrence-based method [8]. (d) WRP-based method [10]. (e) Proposed method.

Fig. 8 Quality and computation time (step 2) as a function of k for Bunny.
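The support radius from the Fig. 4 caption can be sketched directly. This is an illustrative snippet, not code from the paper: the sampling pitch p = 8.5 μm follows the paper, while the wavelength and point-to-plane distance are assumed values.

```python
import math

def support_radius(z, wavelength, pitch):
    """Radius of the area on the virtual plane traversed by a point light
    source at distance z (Fig. 4): W = |z| * tan(asin(lambda / (2 * p)))."""
    theta = math.asin(wavelength / (2.0 * pitch))  # maximum diffraction angle
    return abs(z) * math.tan(theta)

# Pitch p = 8.5 um as in the paper; wavelength and distance are assumptions.
radius = support_radius(z=2e-3, wavelength=532e-9, pitch=8.5e-6)
print(radius)  # small fraction of a millimetre for z = 2 mm
```

Because the radius depends only on |z|, points in front of and behind the virtual plane at the same distance trace areas of the same size, which is what makes placing the plane close to the object effective.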

Tables (4)

Table 1 Data sets in our experiments.
Table 2 CGH calculation conditions in our experiments.
Table 3 Computational times [seconds (s)] of CGH calculation for each data set.
Table 4 PSNR [dB] for visual quality of the numerical reconstruction from the generated CGH.

Equations (2)

Equations on this page are rendered with MathJax.

$$ u_{\mathrm{VP}}(x,y) = \sum_{t=1}^{S} \sum_{i=1}^{N_t} \frac{A_t^i}{R_t^i} \exp\!\left( j k R_t^i \right). $$

$$ u(\xi,\eta) = \frac{\exp\!\left( j \frac{2\pi}{\lambda} z_2 \right)}{j \lambda z_2} \iint u_{\mathrm{VP}}(x,y) \exp\!\left( j \frac{\pi}{\lambda z_2} \left( (\xi-x)^2 + (\eta-y)^2 \right) \right) dx\,dy \approx \frac{\exp\!\left( j \frac{2\pi}{\lambda} z_2 \right)}{j \lambda z_2} \, \mathcal{S}^{-1}\!\left[ \mathcal{S}\!\left[ u_{\mathrm{VP}}(\xi,\eta) \right] \mathcal{S}\!\left[ h(\xi,\eta) \right] \right]. $$
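The two equations above can be sketched in NumPy as follows. This is a minimal illustration, not the paper's implementation: the ordinary FFT stands in for the sparse FFT (sFFT) used in the paper, and the grid size, wavelength, and object points are hypothetical.

```python
import numpy as np

def wavefront_on_virtual_plane(points, amplitudes, X, Y, wavelength):
    """Eq. (1): superpose spherical waves from all object points onto the
    virtual plane. points is an (N, 3) array of (x, y, z) coordinates,
    with z the distance from each point to the virtual plane."""
    k = 2.0 * np.pi / wavelength
    u = np.zeros(X.shape, dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        R = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        u += (a / R) * np.exp(1j * k * R)
    return u

def fresnel_propagate(u, z2, wavelength, pitch):
    """Eq. (2): convolution form of Fresnel diffraction over distance z2,
    evaluated with a forward/inverse FFT pair (the paper replaces these
    transforms with the sFFT)."""
    ny, nx = u.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=pitch),
                         np.fft.fftfreq(ny, d=pitch))
    # Transfer function: Fourier transform of the Fresnel impulse response h.
    H = np.exp(1j * 2.0 * np.pi * z2 / wavelength) * \
        np.exp(-1j * np.pi * wavelength * z2 * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

# Hypothetical setup: 128x128 grid, 8.5 um pitch (as in the paper), 532 nm.
pitch, wavelength = 8.5e-6, 532e-9
n = 128
x = (np.arange(n) - n // 2) * pitch
X, Y = np.meshgrid(x, x)
points = np.array([[0.0, 0.0, 2e-3], [1e-4, -1e-4, 2.5e-3]])  # z1 ~ 2 mm
u_vp = wavefront_on_virtual_plane(points, [1.0, 0.8], X, Y, wavelength)
u_cgh = fresnel_propagate(u_vp, z2=5e-3, wavelength=wavelength, pitch=pitch)
```

Since the transfer function H has unit magnitude, the propagation step conserves the total energy of the wavefront, which is a convenient sanity check on any FFT (or sFFT) replacement.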
