Abstract

Computer-generated holograms (CGHs) are calculated on computers; however, a great deal of computational power is required because the calculation amount is proportional to the number of point light sources composing a 3D object. The Wavefront Recording Plane (WRP) method is an algorithm that reduces the amount of calculation required. However, the WRP method also has a drawback: it is not effective for a 3D object with a deep structure. In this study, we propose two improved WRP methods: the “Least Square Tilted WRP method” and the “RANSAC Multi-Tilted WRP method.”

© 2015 Optical Society of America

1. Introduction

Holography is a technique for reconstructing natural 3D images, because it faithfully reproduces the wavefront of light from a real 3D object. A computer-generated hologram (CGH) is generated by simulating the physical process of holography on a computer. A 3D movie can be displayed on a liquid crystal display (LCD) by electronic holography [1]. Various CGH calculation algorithms have been proposed; they deal with 3D objects as an aggregate of point light sources [2–4], as an aggregate of polygons [5, 6], as an RGB-D image [7], or as a multi-view image, e.g., the holographic stereogram or integral photography technique [8, 9]. Electronic holography has great potential, but it also has problems: the replayed 3D image is small, the viewing area is narrow, and the calculation amount is large. In this paper, we focus on accelerating CGH calculation with the point light source method.

In representing the 3D object model as an aggregate of object points, each object point is considered a point light source emitting a spherical wave, so CGHs can be generated by summing spherical waves. We define the complex amplitude on the CGH plane as U_h(x, y); the object is composed of N points, and the complex amplitude on the CGH created by the i-th point at distance r_i is U_i(r_i). The final complex amplitude U_h(x, y) is expressed by

U_h(x, y) = Σ_{i}^{N} U_i(r_i) = Σ_{i}^{N} (A_i / r_i) exp(j k r_i),    (1)

where r_i = √((x − x_i)² + (y − y_i)² + z_i²), N is the number of object points, and k is the wave number. This expression is very simple, but its calculation amount is problematic: because it is proportional to the number of point light sources, it is difficult to obtain a practical 3D image in real time. Algorithms to accelerate this expression have been proposed, e.g., the look-up table [2,3], image hologram [4], recurrence formula [10], and wavefront recording plane (WRP) [11–16] methods.
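A direct NumPy evaluation of Eq. (1) can be sketched as below. The function name and parameter values are illustrative, not from the paper; the loop makes the O(N M²) cost of the point-source method explicit.

```python
import numpy as np

def point_source_cgh(points, amplitudes, pitch, res, wavelength):
    """Direct evaluation of Eq. (1): sum spherical waves from N object
    points onto a CGH plane of res x res pixels (illustrative sketch)."""
    k = 2.0 * np.pi / wavelength                      # wave number
    xs = (np.arange(res) - res / 2) * pitch           # pixel coordinates
    X, Y = np.meshgrid(xs, xs)
    U = np.zeros((res, res), dtype=complex)
    for (xi, yi, zi), Ai in zip(points, amplitudes):  # O(N * res^2)
        r = np.sqrt((X - xi) ** 2 + (Y - yi) ** 2 + zi ** 2)
        U += (Ai / r) * np.exp(1j * k * r)
    return U

# toy example: a single point 50 mm from a 64 x 64-pixel plane
U = point_source_cgh([(0.0, 0.0, 50e-3)], [1.0],
                     pitch=8e-6, res=64, wavelength=633e-9)
```

Since every object point touches every pixel, the cost grows linearly in N, which is exactly the bottleneck the WRP method addresses.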

The WRP method (Fig. 1) sets a WRP, a virtual plane between the 3D objects and the CGH plane, and calculates a CGH in two steps [11]. In the first step, the light wave from the 3D objects is recorded on the WRP. In the second step, the light wave is propagated from the WRP to the CGH by a diffraction calculation based on the fast Fourier transform (FFT), which is fast. If the WRP is set near the 3D objects, the calculation amount of the first step is significantly reduced, so the CGH calculation can be accelerated. Acceleration methods for the first-step calculation [12,13] and a multi-WRP method for deep 3D objects [15,16] have been proposed.

 

Fig. 1 Original WRP method.


In this study, we propose two improved methods to further accelerate the WRP method: “Least Square Tilted WRP method” and “RANSAC Multi-Tilted WRP method.”

2. Proposed method

In the first step of the original WRP method, the light wave on the WRP is calculated from the object points by Eq. (1). The radius of the zone plate R_wi [11] on the WRP from the i-th point light source is defined by R_wi = d_wi × tan{sin⁻¹(λ / (2 p_w))}, where d_wi is the distance between point i and the WRP, λ is the wavelength, and p_w is the sampling interval of the WRP. If the WRP is set near the objects, the calculation area becomes smaller. In the second step, the complex amplitude on the CGH is calculated from the WRP by a diffraction calculation.
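The zone-plate radius above is easy to evaluate numerically; the following small sketch (function name and example values are ours) shows how R_wi shrinks with the WRP distance d_wi.

```python
import math

def zone_radius(d_wi, wavelength, pitch):
    """Radius of the zone plate recorded on the WRP by a point light
    source at distance d_wi: R_wi = d_wi * tan(asin(lambda / (2 * p_w)))."""
    return d_wi * math.tan(math.asin(wavelength / (2.0 * pitch)))

# e.g. a point 1 mm from the WRP, 633 nm light, 8 um sampling interval
r_w = zone_radius(1e-3, 633e-9, 8e-6)   # radius in metres, ~0.04 mm
```

With a WRP only millimetres from the object, the recorded zone is tens of micrometres wide, far smaller than the full hologram, which is where the first-step speed-up comes from.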

Let us consider the calculation amount of the WRP method. The WRP and CGH planes are defined as M × M pixels. The calculation amount of the first step is O_first = O(N R_w²) = O(N d_w²), where the average radius is R_w = (1/N) Σ_{i}^{N} R_wi and the average distance is d_w = (1/N) Σ_{i}^{N} d_wi. The calculation amount of the second step using the FFT is O_second = O(M² log M). The calculation amount of the WRP method is the sum of these two amounts, and it is smaller than the O(N M²) of Eq. (1) because R_wi << M p_w. The calculation amount of the second step does not depend on the number or distribution of the object points, whereas that of the first step depends on d_w, in other words, on the distribution of the 3D objects. In this study, we propose two improved methods that use non-parallel WRPs to mitigate this distribution problem.

2.1. Least square tilted WRP method

In the original WRP method shown in Fig. 1, the WRP is set parallel to the CGH plane. As noted earlier, the calculation amount is determined by the average distance d_w between the objects and the WRP, so the calculation amount is smallest when the WRP is set so that d_w is minimized. We propose the Least Square Tilted WRP (LST-WRP) method, in which the WRP position that minimizes the calculation amount is determined by the least squares method. The determined WRP may be non-parallel to the CGH. The conceptual diagram of the LST-WRP method is shown in Fig. 2. The concrete algorithm is described below.

 

Fig. 2 Optimized WRP method.


  • 1. A plane is determined by the least squares method so that the sum of the squared distances d_wi between the plane and the object points is minimized. We set this plane as the WRP.
  • 2. Calculate the first step of the WRP method.
  • 3. If the WRP is non-parallel to the CGH, owing to the object shape, we calculate the diffraction between non-parallel planes.
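Step 1 above can be sketched with an SVD-based total-least-squares plane fit; this is one standard way to minimize the summed squared point-plane distances, though the paper does not specify the implementation, and the function names are ours.

```python
import numpy as np

def least_square_plane(points):
    """Fit the plane minimizing the sum of squared distances to the
    object points (step 1 of LST-WRP, illustrative sketch).
    Returns the centroid and the unit normal n = [a, b, c]^T with c > 0."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # the plane normal is the right singular vector of the smallest
    # singular value of the centred point cloud
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    if n[2] < 0:                      # enforce c > 0 as in Fig. 3
        n = -n
    return centroid, n

def distances(points, centroid, n):
    """Signed distances d_wi of object points from the fitted plane."""
    return (np.asarray(points, dtype=float) - centroid) @ n

# four coplanar points at z = 1: the fitted normal is (0, 0, 1)
c, n = least_square_plane([(0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1)])
```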

The WRP is discretized with the sampling interval p_w, so object points cannot be recorded if d_wi is too small. Therefore, in practice, we need to move the WRP determined by the least squares method by δd away from the object along the normal vector of the plane. We empirically decided δd in terms of the calculation time and the reconstruction quality. The conceptual diagram of this correction is shown in Fig. 3.

 

Fig. 3 Correction of the WRP decided by least square method.


The normal vector of the least squares plane, n = [a, b, c]^T (c > 0), is defined as shown in Fig. 3, and the farthest and nearest distances between the least squares plane and the object points are d_far and d_near, respectively. When d_far > d_near, we correct the location of the WRP by moving it d_near + δd in the negative direction of the normal vector. Likewise, when d_near > d_far, we correct the location of the WRP by moving it d_far + δd in the positive direction of the normal vector. In this way, the corrected WRP can record all object points.
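The correction above can be sketched as follows. Interpreting d_far and d_near as the largest point-plane distances on the +n and −n sides, respectively, is our reading of Fig. 3; the function name is ours.

```python
import numpy as np

def correct_wrp(points, centroid, n, delta_d):
    """Shift the least-squares plane along its unit normal n (c > 0) so
    that every object point lies at least delta_d from the WRP on one
    side (the correction of Fig. 3). Returns a point on the shifted WRP."""
    pts = np.asarray(points, dtype=float)
    s = (pts - np.asarray(centroid, dtype=float)) @ n   # signed distances
    d_far = s.max()      # farthest point on the +n side (assumption)
    d_near = -s.min()    # farthest point on the -n side (assumption)
    if d_far > d_near:
        shift = -(d_near + delta_d)   # move the plane toward -n
    else:
        shift = d_far + delta_d       # move the plane toward +n
    return np.asarray(centroid, dtype=float) + shift * np.asarray(n)

# points at signed distances -1 and +2 from the fitted plane (d_far > d_near):
# the plane moves d_near + delta_d = 1.1 toward -n
p0 = correct_wrp([(0, 0, -1), (0, 0, 2)], (0, 0, 0),
                 np.array([0.0, 0.0, 1.0]), 0.1)
```

After the shift, all object points sit on the same side of the WRP with at least the margin δd, so all of them can be recorded.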

However, when the angle between the WRP and the CGH is large, the object is distorted, because the reference light of Eq. (1) propagates in the same direction as the normal vector of the WRP (Fig. 4). When the CGH and WRP are parallel, as in Fig. 4(a), the reconstruction position is the same as the recording position because the normal directions of the WRP and CGH coincide. In contrast, when the WRP is tilted relative to the CGH plane, as in Fig. 4(b), the light from the object points propagates in the normal direction of the WRP. Therefore, when the hologram is illuminated by a planar wave parallel to the CGH, the reconstructed points are shifted from the original recording positions. Namely, the object points must be recorded on the WRP so that the propagation direction of the object light coincides with the normal direction of the CGH, as in Fig. 4(c).

 

Fig. 4 The position gap and correction of the point in the case of setting CGH and WRP. (a) Parallel WRP to the CGH. (b) Tilted WRP (the propagation direction of the object light does not coincide with the normal direction of the CGH). (c) Tilted WRP (the propagation direction of the object light coincides with the normal direction of the CGH).


In Fig. 5, we show the images reconstructed by the WRP parallel to the CGH (the case of Fig. 4(a)), by the tilted WRP without the correction (the case of Fig. 4(b)), and by the tilted WRP with the correction (the case of Fig. 4(c)). Without the correction, the reconstructed image is distorted compared with Fig. 5(a), whereas with the correction the reconstructed image is unchanged. The signal-to-noise ratio (SNR) between Figs. 5(a) and 5(c) is 25 dB.

 

Fig. 5 Comparison of the numerical reconstruction images. (a) the case of Fig. 4(a). (b) the case of Fig. 4(b). (c) the case of Fig. 4(c).


2.2. RANSAC multi-tilted WRP method

In the previous section, we stated that the WRP method is efficient when the WRP is set along the distribution of the object points. If the object is divided into small sub-areas and multiple WRPs are properly placed at each sub-area, we can expect to obtain a faster method.

In this section we propose the RANSAC (RANdom SAmple Consensus) Multi-Tilted WRP (RMT-WRP) method. As described in Sec. 2.1, it is difficult to record the light wave of object points on a WRP when they are too close to it. In LST-WRP, the WRP is corrected by a parallel displacement. In RMT-WRP, instead of the parallel displacement, object points that are too close to the WRP are set directly on it; the other object points are calculated by Eq. (1). In addition, RMT-WRP uses the RANSAC algorithm [17], instead of the least squares method, to estimate the optimal locations of the multiple WRPs corresponding to the sub-areas. The RANSAC algorithm, which is widely used in computer vision, is robust to noise (outliers); we use it because it can appropriately estimate WRPs that include many object points.

We show how to estimate the optimal location of a WRP corresponding to a sub-area having N object points by the RANSAC algorithm.

  • 1. Three object points are randomly selected from the N object points.
  • 2. The coefficients a, b, c, d of the plane equation ax + by + cz + d = 0 are calculated from the three selected points.
  • 3. The distances between the plane of Step 2 and the object points other than the three selected points are calculated.
  • 4. The object points whose distances are smaller than the allowable distance Δd are selected.
  • 5. After repeating Steps 1 to 4 k times, the plane parameters with the most selected points are defined as a tentative plane.
  • 6. The points whose distances to the tentative plane are smaller than Δd are extracted.
  • 7. We determine the WRP from the extracted object points by the least squares method.
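Steps 1 to 7 above can be sketched as follows. The function name, the fixed random seed, and the degenerate-sample check are our additions; the final least-squares fit reuses an SVD, one common implementation choice.

```python
import random
import numpy as np

def ransac_plane(points, allow_d=1e-3, k=30, seed=0):
    """Estimate a WRP for one sub-area by RANSAC (Steps 1-7, sketch).
    allow_d is the allowable distance (1 mm in the paper) and k the
    number of iterations (30 in the paper)."""
    pts = np.asarray(points, dtype=float)
    rng = random.Random(seed)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(k):
        i, j, l = rng.sample(range(len(pts)), 3)        # Step 1
        n = np.cross(pts[j] - pts[i], pts[l] - pts[i])  # Step 2
        if np.linalg.norm(n) == 0:
            continue                                    # degenerate sample
        n = n / np.linalg.norm(n)
        dist = np.abs((pts - pts[i]) @ n)               # Step 3
        inliers = dist < allow_d                        # Step 4
        if inliers.sum() > best_inliers.sum():          # Step 5
            best_inliers = inliers                      # tentative plane
    # Steps 6-7: least-squares plane over the extracted inliers
    sub = pts[best_inliers]
    centroid = sub.mean(axis=0)
    _, _, vt = np.linalg.svd(sub - centroid)
    return centroid, vt[-1], best_inliers

# toy sub-area: eight points lying exactly on the plane z = 0
pts2d = [(0, 0), (1, 0), (0, 1), (1, 1),
         (0.3, 0.75), (0.6, 0.2), (0.8, 0.9), (0.2, 0.45)]
centroid, normal, inliers = ransac_plane([(x, y, 0.0) for x, y in pts2d])
```

Because the consensus step counts inliers rather than averaging all points, a few distant outliers cannot drag the estimated WRP away from the dense part of the sub-area.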

In this paper, we empirically set Δd = 1 mm and k = 30 in terms of the calculation time and the reconstruction quality. The entire process of the RMT-WRP method is shown in Fig. 6. The algorithm is described below.

  • A) The object is divided into some small sub-areas.
  • B) The WRPs of the sub-areas are defined by Steps 1 to 7 above.
  • C) The brightness values of object points are directly set on the WRP when their distances satisfy d_wi ≤ δd.
  • D) The complex amplitude on the WRP of the object points except for the object points of Step C is calculated by Eq. (1).
  • E) All the WRPs are first propagated to a parallel WRP, which is parallel to the CGH, by the tilted-plane diffraction calculation [5, 6]. The parallel WRP is set near the 3D object.
  • F) We calculate the propagation from the parallel WRP to the CGH by normal Fresnel diffraction calculation.

3. Results

First, we compared the calculation times of the previous WRP method [11] and the LST-WRP method. The simulation parameters are shown in Table 1. We used eight CPU threads of an Intel Core i7-2600K CPU for this calculation.


Table 1. Simulation parameter (LST-WRP).

The calculation results of the first step are shown in Table 2. We measured the maximum, minimum, and average calculation times while rotating the 3D object through 360 degrees, because the calculation time of the previous WRP method strongly depends on the distribution of the object points. The difference between the maximum and minimum times of the previous WRP method is about 2 seconds. In contrast, the calculation time of LST-WRP is almost constant.


Table 2. Comparison of calculation time by WRP setting (First step).

In terms of average time, LST-WRP was about 3.5 times faster than the previous method. Table 3 shows the total calculation times of the WRP methods, including the second step. Note that we use the average times shown in Table 2 for the first step. The total calculation time of LST-WRP is about 1.5 times shorter than that of the previous method. The reconstructed images of the previous WRP method and LST-WRP are shown in Fig. 7.


Table 3. Comparison of calculation time by WRP setting (Total).

 

Fig. 7 Numerical reconstruction images of previous WRP and LST-WRP methods.


Next, we show the results of the RMT-WRP method. The simulation parameters are shown in Table 4. The number of WRPs was set to two in this simulation. The calculation times and reconstruction results are shown in Table 5 and Fig. 8, respectively. The RMT-WRP method was twice as fast as the previous WRP method.


Table 4. Simulation parameter (RMT-WRP method).


Table 5. Comparison of computation time of existing WRP method and RMT-WRP method.

 

Fig. 8 Numerical reconstruction images of previous WRP and RMT-WRP methods.


4. Conclusion

We proposed two improved WRP methods: LST-WRP and RMT-WRP. With RMT-WRP, we achieved a calculation time twice as fast as that of the previous WRP method.

Acknowledgments

This work was partially supported by JSPS KAKENHI Grant Numbers 25330125 and 25240015, the Kayamori Foundation of Information Science Advancement, and the Yazaki Memorial Foundation for Science and Technology.

References and links

1. T. C. Poon, ed., Digital Holography and Three-Dimensional Display: Principles and Applications (Springer, 2006).

2. N. Takada, T. Shimobaba, H. Nakayama, A. Shiraki, N. Okada, M. Oikawa, N. Masuda, and T. Ito, “Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system,” Appl. Opt. 51(30), 7303–7307 (2012).

3. S. C. Kim and E. S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008).

4. H. Yoshikawa, T. Yamaguchi, and R. Kitayama, “Real-time generation of full color image hologram with compact distance look-up table,” OSA Topical Meeting on Digital Holography and Three-Dimensional Imaging 2009, DWC4 (2009).

5. T. Tommasi and B. Bianco, “Computer-generated holograms of tilted planes by a spatial frequency approach,” J. Opt. Soc. Am. A 10(2), 299–305 (1993).

6. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20(9), 1755–1762 (2003).

7. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express 21(7), 9192–9197 (2013).

8. M. Yamaguchi, H. Hoshino, T. Honda, and N. Ohyama, “Phase-added stereogram: calculation of hologram using computer graphics technique,” Proc. SPIE 1914, 25–31 (1993).

9. Y. Ichihashi, R. Oi, T. Senoh, K. Yamamoto, and T. Kurita, “Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms,” Opt. Express 20(19), 21645–21655 (2012).

10. T. Shimobaba, S. Hishinuma, and T. Ito, “Special-purpose computer for holography HORN-4 with recurrence algorithm,” Comput. Phys. Commun. 148(2), 160–170 (2002).

11. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009).

12. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18(19), 19504–19509 (2010).

13. P. Tsang, W.-K. Cheung, T.-C. Poon, and C. Zhou, “Holographic video at 40 frames per second for 4-million object points,” Opt. Express 19(16), 15205–15211 (2011).

14. J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, “Generation of real-time large computer generated hologram using wavefront recording method,” Opt. Express 20(4), 4018–4023 (2012).

15. T. Shimobaba, T. Kakue, N. Masuda, Y. Ichihashi, K. Yamamoto, and T. Ito, “Computer holography using wavefront recording method,” Digital Holography and Three-Dimensional Imaging (DH) 2013, DTu1A.2 (2013).

16. A.-H. Phan, M. A. Alam, S.-H. Jeon, J.-H. Lee, and N. Kim, “Fast hologram generation of long-depth object using multiple wavefront recording planes,” Proc. SPIE 9006, 900612 (2014).

17. D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach, 2nd ed. (Prentice Hall, 2011).
