Abstract

In this paper, a fast hologram generation method based on the optimal segmentation of sub-computer-generated holograms (sub-CGHs) is proposed. First, the relationship between the pixels on the hologram and the corresponding reconstructed image is calculated. Second, the sub-CGH corresponding to each object point of the recorded object is optimized and divided into an optimized diffraction area and an invalid diffraction area. Then, the optimized diffraction area of the sub-CGH for each object point is pre-calculated and saved. Finally, the final hologram is generated by superimposing all the sub-CGHs. With the proposed method, the calculation time of the final hologram can be reduced significantly without degrading the quality of the reconstructed image. Moreover, the proposed method enlarges the viewing angle compared with the traditional method, and experimental results verify its feasibility.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With the development of computer and material science, the quality of holographic reconstructed images keeps improving [1–3], and holography has attracted increasing attention in recent years [4,5]. Holographic display technology based on computer-generated holograms (CGHs) has also received wide attention owing to its simplicity and flexibility [6], and CGH-based applications have made great progress, such as projection systems [7], encryption systems [8] and near-eye display systems [9,10]. However, the heavy computational load is a huge challenge for CGH generation. In many real-time dynamic holographic display systems, high-speed CGH generation is necessary. To fulfill this requirement, fast generation methods that simplify the calculation process and accelerate the calculation speed need to be developed [11,12].

In the CGH generation process, the point light source (PLS) model is commonly used to represent the 3D object. The 3D object is discretized into individual object points, which are assumed to be independent ideal sources [13,14]. A sub-CGH is calculated for each object point, and the entire CGH is obtained by overlapping all the sub-CGHs [15–17]. To reduce the calculation time, many methods have been developed, such as the look-up-table (LUT) method [18], the wavefront recording plane (WRP) method [19] and other derivative methods [20–22]. The LUT method is widely used to accelerate the calculation by replacing the complicated computations with pre-calculated data [23]. The complex amplitude distribution of each PLS is pre-calculated and stored, and the CGH can then be easily obtained by calling the pre-calculated data. However, this method occupies a large amount of memory and places high demands on the computer hardware [24]. To address this problem, some improved methods have been proposed, such as the novel-LUT (NLUT) [25], the compressed NLUT (C-NLUT) [26] and the color-tunable NLUT (CT-NLUT) [27] methods. Although the LUT method and its derivatives can accelerate the calculation, the huge resolution of the sub-CGH limits further improvement of the calculation speed. Apart from the LUT-based methods, the wavefront recording plane method is one of the most efficient algorithms; it accelerates the calculation by eliminating spatial redundancy [28,29]. Other methods, such as the cosine function approximation method [30] and the video compression method [31], have also been reported. Besides, some researchers optimize the hardware to improve the calculation speed, and corresponding fast calculation methods based on computer memory allocation and parallel processing have also been reported [32,33].

In this paper, we propose a sub-CGH optimal segmentation method to realize fast hologram generation. The key of the proposed method is the calculation of the effective viewing area and the optimal segmentation of the sub-CGHs according to it. The effective viewing area is determined by the size of the 3D object, the position of each point, the parameters of the SLM and the reconstruction distance. Based on the effective viewing area, the size of the optimized diffraction area (ODA) on the sub-CGH for each object point is calculated. Only the ODAs of all points are pre-saved and used to calculate the final CGH, while the rest of each sub-CGH is eliminated. In this way, the proposed method improves the calculation speed while maintaining the integrity of the viewing area.

2. Principle of the method

The process of the proposed method is shown in Fig. 1. First, the intensity value, the depth information and the wavelength of the 3D object are extracted, and each sub-CGH corresponding to a 3D object point is generated according to the Fresnel diffraction theory. When the CGH is loaded onto the spatial light modulator (SLM) to reconstruct the 3D object, only a part of each sub-CGH, named the ODA, contributes to the reconstructed image; nevertheless, the coherent superposition of the full sub-CGHs consumes about 99% of the hologram generation time. Second, each sub-CGH is optimally segmented by calculating the effective diffraction area, so as to obtain the ODA and eliminate the unserviceable pixel area. The unserviceable pixel area accounts for a large proportion of the calculation but contributes nothing to the correctly reconstructed image. Third, the final CGH is obtained by overlapping the ODA, instead of the full sub-CGH, of each individual object point. Finally, the final CGH is loaded onto the SLM by the computer. When light illuminates the SLM, the holographic reconstructed image can be displayed.

Fig. 1. Process of the proposed method.

In the first step, the sub-CGHs are calculated by the Fresnel diffraction theory, given by Eq. (1):

$$U(x,y) = \frac{\exp(\mathrm{j}kz)}{\mathrm{j}\lambda z}\int\!\!\!\int_{-\infty}^{\infty} A_0 e^{\mathrm{j}\varphi_0}\exp\left[\mathrm{j}k\frac{(x - x_0)^2 + (y - y_0)^2}{2z}\right]\mathrm{d}x_0\,\mathrm{d}y_0,$$
where U(x, y) is the optical field distribution on the sub-CGH, x0 and y0 are the position coordinates of the object point, A0e^{jφ0} is the complex amplitude of the object point, z is the reconstruction distance, λ is the wavelength of light and k is the wave number. Thus, each sub-CGH can be obtained by evaluating the integral over the initial light field.
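For a single object point (the PLS model), the integral of Eq. (1) collapses to one quadratic-phase term. The sketch below, which is our own illustration rather than the authors' code, evaluates that term over the SLM grid; the default parameters (1920×1080 pixels, 6.4 µm pitch, 532 nm) are taken from the experimental section.

```python
import numpy as np

def sub_cgh(x0, y0, z, amp=1.0, phi0=0.0, nx=1920, ny=1080,
            p=6.4e-6, lam=532e-9):
    """Complex field of one object point on the hologram plane,
    i.e. Eq. (1) with the integral reduced to a single point source."""
    k = 2 * np.pi / lam
    # Pixel-centre coordinates of the hologram plane
    x = (np.arange(nx) - nx / 2) * p
    y = (np.arange(ny) - ny / 2) * p
    X, Y = np.meshgrid(x, y)
    phase = k * ((X - x0) ** 2 + (Y - y0) ** 2) / (2 * z)
    return (amp * np.exp(1j * (phi0 + phase))
            * np.exp(1j * k * z) / (1j * lam * z))
```

Since only the phase varies across the plane, the magnitude of every pixel is amp/(λz), which is a quick sanity check on an implementation.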

In the second step, each sub-CGH is optimally segmented by calculating the effective diffraction areas. To describe the segmentation process easily, a cross section of the hologram reconstruction is selected, as shown in Fig. 2. It is assumed that the 3D object size is W and the SLM is located at x = 0. The size of the CGH is H, the same as that of the SLM, and the SLM pixel pitch is p. The distance between the SLM and the reconstructed image is L, and the viewing distance is R. P and Q are the endpoints of the intercepted 3D object, corresponding to P′ and Q′ on the reconstructed image; P and Q have the same distance from the x axis. Because of the diffraction limit of light, the 3D object size and the reconstruction distance are limited by the SLM pixel pitch and size. The diffraction angle θ satisfies Eq. (2):

$$\theta \le \sin^{-1}\!\left(\frac{\lambda}{2p}\right),$$
where p is the pixel pitch of the SLM. The diffraction space of P′ is AC, within which the endpoint P′ can be seen. Similarly, the diffraction space of Q′ is BD, within which the endpoint Q′ can be seen. In order to calculate the diffraction angle, T is recorded as the intersection between PQ′ and the vertical line M1T. In the traditional method, the maximum diffraction angle α can be calculated from the geometric analysis:
$$\alpha = \tan^{-1}\!\left(\frac{W/2 + H/2}{L}\right) = \tan^{-1}\!\left(\frac{W + H}{2L}\right).$$

Fig. 2. Sub-CGH optimal segmentation.

However, only in the overlapping region of AC and BD (the effective diffraction area BC) can the complete reconstructed image be observed. In the areas AB and CD, only a part of the image near the P′ or Q′ endpoint can be observed. The ODAs of the sub-CGHs are the regions that contribute to the effective diffraction area BC; the rest contributes nothing to BC. On the basis of the effective diffraction area BC, the segmentation of each object point's sub-CGH is calculated and optimized. Different from our previous work, the ODAs are not centrally symmetric, and their positions on the CGH vary nonlinearly. Apparently, the ODA of the endpoint P′ is the area N1N2. Therefore, the maximum diffraction angle of the proposed method is β, given by Eq. (4).

$$\beta = \tan^{-1}\!\left(\frac{H - W}{2L}\right).$$
The maximum diffraction angles α and β should be less than or equal to the diffraction angle θ. The resulting restrictions for the proposed method and the traditional method are given by Eqs. (5) and (6), respectively.
$$L \ge (H - W)\frac{p}{\lambda},$$
$$L \ge (H + W)\frac{p}{\lambda}.$$
When the object size W is fixed, the reconstruction distance L can take a smaller value in the proposed method according to Eq. (5). As can be seen from Fig. 2, when L gets smaller, the viewing angle gets larger.
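The bounds of Eqs. (3)–(6) are easy to evaluate numerically. The helper below is our own illustrative sketch (the function names are not from the paper); the default pitch and wavelength are the experimental values.

```python
import math

def max_angles(H, W, L):
    """Maximum diffraction angles: Eq. (3) for the traditional method
    and Eq. (4) for the proposed method (H, W, L in metres)."""
    alpha = math.atan((H + W) / (2 * L))  # Eq. (3), traditional
    beta = math.atan((H - W) / (2 * L))   # Eq. (4), proposed
    return alpha, beta

def min_distances(H, W, p=6.4e-6, lam=532e-9):
    """Minimum reconstruction distances from Eq. (5) (proposed)
    and Eq. (6) (traditional)."""
    return (H - W) * p / lam, (H + W) * p / lam
```

For any positive object size W, the proposed bound (H − W)p/λ is smaller than the traditional bound (H + W)p/λ, which is exactly why the reconstruction distance can be shortened and the viewing angle enlarged.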

Then, the detailed optimal segmentation for an arbitrary point m′(x0, y0) on the reconstructed image is shown in Fig. 3. Wherever the point m′ is, its ODA corresponds to the effective diffraction area BC, so the ODA diffraction boundaries W(0, y1) and Z(0, y2) can be calculated from the geometric relationship, as given by Eqs. (7) and (8).

$$y_1 = -\frac{H}{2} + \frac{R(2y_0 + W)}{2(R - L)},$$
$$y_2 = \frac{H}{2} + \frac{R(2y_0 - W)}{2(R - L)}.$$

Fig. 3. Sub-CGH optimal segmentation of any point m′.

The ODA size S is the distance between W(0, y1) and Z(0, y2), which can be calculated from Eqs. (7) and (8), as given by Eq. (9).

$$S = H - W\frac{R}{R - L}.$$

If the preset parameters are determined, the ODA size S is a fixed value for every point. That means the same matrix size can be used to segment all the sub-CGHs during hologram generation. The exact position of the ODA in the sub-CGH is then given by Eq. (10):

$$\tau = y_2 - y_0 = \frac{H}{2} + \frac{2Ly_0 - WR}{2(R - L)},$$
where τ is the distance from the upper boundary of the ODA to the sub-CGH center, which is related to the point position. Each sub-CGH is then segmented according to the position and size of its ODA.
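The segmentation quantities of Eqs. (9) and (10) can be sketched as two small helpers (our own naming, not the paper's). H, W and y0 must share one length unit (e.g. SLM pixels); R and L enter only through the dimensionless ratios R/(R − L) and L/(R − L), so any consistent unit works for them.

```python
def oda_size(H, W, R, L):
    """ODA size S of Eq. (9)."""
    return H - W * R / (R - L)

def oda_offset(H, W, R, L, y0):
    """Distance tau from the ODA upper boundary to the sub-CGH
    centre for an object point at height y0 (Eq. (10))."""
    return H / 2 + (2 * L * y0 - W * R) / (2 * (R - L))
```

For example, with R = 30 cm and L = 15 cm the factor R/(R − L) equals 2, so S = H − 2W, and τ grows linearly with the point height y0, as stated above.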

In the third step, the CGH is generated by the coherent superposition of the ODAs. The positional relationship between the ODAs of two adjacent points can be calculated by Eq. (11).

$$k = (y_2' - y_2)p = \frac{R}{R - L}p,$$
where k is the distance between two adjacent ODAs and y2′ is the ordinate of the adjacent point.

In the final step, the CGH is loaded onto the SLM by the computer. When a laser illuminates the SLM, the modulated reflected light produces the reconstructed image of the 3D object.

3. Experiments and discussion

In the experiment, it is assumed that the distance between the object points and the SLM is 20 cm, and the viewing distance is 30 cm. The wavelength of the laser is 532 nm. The resolution and the pixel pitch of the SLM are 1920×1080 and 6.4 µm, respectively. The method is implemented in MATLAB R2017b on an Intel i5-8600K processor (3.6 GHz) with 8 GB of memory. To demonstrate the advantage of the proposed method, a comparative experiment based on the NLUT algorithm is carried out.

3.1 Calculation of the hologram

To evaluate the calculation speed of the proposed method in general situations, a series of images, each assumed to consist of N×N random points located at the reconstruction distance, is used to generate CGHs by the proposed method and the NLUT method for comparison. In the NLUT method, the CGH is generated by directly superimposing the sub-CGHs corresponding to each object point. In the proposed method, each sub-CGH is first optimized and divided into two diffraction areas, and only the ODAs are used to generate the CGH. We take the image size N=600 and the reconstruction distance L=15 cm as an example to process the optimal segmentation and superposition. The ODA size S can be calculated by Eq. (9); the horizontal and vertical results are 1020 and 180 pixels, respectively. The location of the ODA on the sub-CGH is calculated by Eq. (10): if the object point corresponding to the sub-CGH is located at (x, y), the horizontal and vertical distances from the ODA edge to the sub-CGH center τ are 360+x and y−60, respectively. Each ODA corresponding to each object point can then be segmented using the parameters S and τ. Finally, the CGH is generated by superimposing all the ODAs. The calculation times are summarized in Table 1.
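The superposition step described above can be sketched as follows: pre-computed ODAs are accumulated onto the hologram at their calculated positions. This is a schematic reimplementation under our own naming, not the authors' MATLAB code; the offsets are assumed to be precomputed (row, column) corner indices derived from Eqs. (9) and (10).

```python
import numpy as np

def assemble_cgh(odas, offsets, nx=1920, ny=1080):
    """Superimpose pre-computed ODAs onto the final CGH (step three).
    `odas` is a list of complex arrays (the optimized diffraction
    areas); `offsets` gives the (row, col) upper-left corner of each
    ODA on the CGH plane."""
    cgh = np.zeros((ny, nx), dtype=complex)
    for oda, (r0, c0) in zip(odas, offsets):
        h, w = oda.shape
        # Clip the ODA to the hologram boundary before accumulating
        r1, c1 = max(r0, 0), max(c0, 0)
        r2, c2 = min(r0 + h, ny), min(c0 + w, nx)
        cgh[r1:r2, c1:c2] += oda[r1 - r0:r2 - r0, c1 - c0:c2 - c0]
    return cgh
```

Because each ODA is much smaller than the full SLM frame, the cost of the accumulation loop drops in proportion to the ODA pixel count, which is where the reported speed-up comes from.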

Table 1. Calculation time of the proposed method and the NLUT method.

It is clear that the proposed method can significantly improve the calculation speed. When the resolutions of the objects are 400×400, 500×500 and 600×600, the calculation speed improvement ratios of the proposed method are 32.78%, 41.47% and 51.50%, respectively. Figure 4 shows the total calculation time of the proposed method and the NLUT method.

Fig. 4. Total calculation time variations of the proposed method and the NLUT method.

Figure 5 shows the average calculation time of the proposed method and the NLUT method as the object point quantity changes. When the CGH is generated via MATLAB or other software, the coherent superposition of the sub-CGHs takes up most of the time, so the resolution of the sub-CGH plays a decisive role in the average calculation time per object point. Whatever the object point quantity is, the sub-CGH resolution in the traditional method is constant and equal to that of the SLM, so the average calculation time per object point of the NLUT method is almost unchanged; as the number of object points increases, its total calculation time grows roughly quadratically. In the proposed method, the resolution of the ODAs is negatively correlated with the object size. As shown in Fig. 5, the average calculation time per object point of the proposed method decreases as the number of object points increases, so the merits of the proposed method become more obvious for larger objects. Compared with our previous work, the calculation speed is also faster: when the quantity of object points is 160000, the calculation speed improvement ratio of the proposed method is 11.2%.

Fig. 5. Average calculation time of the proposed method and the NLUT method.

3.2 Holographic reconstruction

To illustrate the ability of the proposed method to improve the reconstructed image quality, a holographic display system is built, as shown in Fig. 6. The parameters of the experiment are given in the first paragraph of Section 3. The CGH loaded on the SLM is illuminated by a green laser source through a beam-broadening collimation system and a beam splitter (BS). The size of the BS is 25.4 mm×25.4 mm×25.4 mm. The beam-broadening collimation system consists of a beam expander (BE) and lens 1. The light reflected from the SLM is transmitted through lens 2, the filter and lens 3. The parameters of lens 2 and lens 3 are exactly the same, forming a 4f system; the focal lengths of the three lenses are all 30 cm. The filter, produced by the Daheng Corporation (GCM-5701M), is used to suppress stray light. Finally, a camera is used to capture the reconstructed image.

Fig. 6. Holographic display system.

The numerically reconstructed results of the camel, with a resolution of 670×450, are shown in Fig. 7. Besides, the experimentally reconstructed images of the camel obtained by the proposed method and the NLUT method are shown in Fig. 8. It can be seen that all the details of the camel are completely reconstructed by the proposed method, and the quality of the reconstructed image is similar to that of the NLUT method. Thus, it can be confirmed that the cut-out part of the sub-CGHs makes no contribution to the reconstructed image.

Fig. 7. Numerical reconstruction results for the camel. (a) Proposed method and (b) NLUT method.

Fig. 8. Reconstructed images of the camel. (a) Proposed method and (b) NLUT method.

Figure 9 shows the experimentally reconstructed images of the plane, with a resolution of 670×425; the aerofoil of the plane in the red dotted frame is selected to analyze the local light intensity distribution for the proposed method and the NLUT method, respectively. As shown in Fig. 9, the uniformity of the proposed method is improved. In the proposed method, as the invalid part of the sub-CGH has been removed, there is less interference in the reconstructed image, and the disturbance caused by the invalid part is reduced in the effective viewing area. The speckle contrast of the reconstructed image in the red box is calculated to quantitatively evaluate the image quality: it is 0.6234 for the proposed method and 0.7567 for the NLUT method. The image quality of the proposed method is therefore slightly better, although to the naked eye there is almost no difference.

Fig. 9. Reconstructed images of the plane by using the (a) proposed method and (b) NLUT method and the local light intensity distribution of the plane by using the (c) proposed method and (d) NLUT method.

As shown in Table 1, the ODA is smaller than the original CGH, so the hologram resolution of the recorded object is smaller than that of the original hologram. Taking an object with a resolution of 400×400 as an example, the ODA resolution of the proposed method is 1320×480, while the hologram resolution of the traditional method is 1920×1080. In the holographic reconstruction, the overall display resolution of the proposed method is thus smaller than that of the traditional method. However, only in the effective diffraction area can the complete reproduction of the object be seen; in the other viewing area, also named the invalid viewing area, only part of the image can be seen. Thus, the part of the CGH that only contributes to the invalid viewing area is called the invalid part, and removing it does not affect the quality of the reconstructed image. Although the resolution of the ODA is smaller than that of the traditional hologram, the information reproduced in the effective viewing area has been completely recorded by the proposed method. That is, the resolution of the object reproduced in the effective viewing area is not affected, so the display resolution in the effective viewing area of the proposed method is equal to that of the traditional method.

Then, a 3D scene with a resolution of 640×455 is selected to generate holograms by the proposed method and the NLUT method, respectively. The 3D scene consists of two characters, “3” and “D”, placed at 0.1 m and 0.2 m from the hologram plane, respectively. The results are shown in Fig. 10. It can be observed that the reconstructed images generated by the proposed method focus correctly at different depths. The maximum reachable size of the 3D object can be calculated according to Eq. (5), and the size of the reconstructed image is equal to that of the 3D object in our holographic display system. Taking the horizontal dimension as an example, the maximum reachable size is 4.352 mm. The sizes of the 3D objects used in the experiment are 4.288 mm, 4.288 mm and 4.096 mm, corresponding to the reconstructed images in Figs. 8, 9 and 10, so the objects are basically close to their maximum sizes.

Fig. 10. Reconstructed results for the 3D scene generated by (a)-(b) the proposed method and (c)-(d) the NLUT method.

There is a trade-off between the object size and the viewing angle. Actually, CGH-based holographic display technology is always limited by the space-bandwidth product. As the 3D object size increases, the effective viewing area decreases correspondingly, and the corresponding ODA on the sub-CGH shrinks accordingly. Thus, from the perspective of the effective viewing area, for both the proposed method and the NLUT method, the quality of the reconstructed image is limited as the number of object points increases. However, under the same experimental parameters, the quality of the reconstructed image obtained by the proposed method is not degraded compared with the NLUT method.

To verify the feasibility of the proposed method, we use the NLUT algorithm for experimental verification and comparison. Actually, the proposed method can be used to optimize most PLS model-based algorithms. For example, the WRP algorithm is a fast algorithm based on the PLS model. The traditional WRP method reduces the calculation complexity of the CGH by a two-step calculation: in the first step, the complex amplitude on the WRP is recorded by ray tracing between the 3D object and the WRP, which takes extensive calculation; in the second step, the complex amplitude on the CGH is calculated by the diffraction calculation between the WRP and the CGH. The improved WRP method, inspired by Ref. [29], reduces the calculation time by replacing the ray tracing with the LUT method in the first step. The effective viewing area calculation of the proposed method is also applicable to the WRP algorithm. Here, we design a preliminary optimization: the LUT used in the first step can be optimized by the optimal segmentation of the sub-CGH in our method, and the calculation speed of the WRP method will then be increased obviously. In the future, we will continue this study to obtain even faster calculation speeds.

3.3 Holographic viewing angle expansion

The proposed method has the advantage of holographic viewing angle expansion. To illustrate that the proposed method loosens the restrictive relation among the object size, the reconstruction distance and the viewing angle, three objects with the same pattern are created for the numerical simulation of viewing-angle extension. The pattern and the numerically reconstructed result of the image are shown in Fig. 11.

Fig. 11. (a) Pattern and (b) numerically reconstructed result of the image.

The resolutions of the three objects are 200×200, 250×250 and 300×300, respectively. Because the horizontal and vertical directions have the same constraints, the horizontal direction is selected in the simulation to illustrate the viewing-angle extension. When the object size W and the SLM size H are fixed, the minimum reconstruction distance L of the proposed method and the traditional method can be calculated by Eqs. (5) and (6), respectively. Apparently, the minimum reconstruction distance of the proposed method is much smaller than that of the traditional method; and when the reconstruction distance L gets smaller, the viewing angle Ω gets bigger, as shown in Fig. 12. The maximum viewing angles of the NLUT method and the proposed method can then be calculated by Eqs. (3) and (4), respectively. The calculation results are summarized in Table 2.

Fig. 12. Principle of holographic viewing angle expansion.

Table 2. Theoretical calculation results of the proposed method and the NLUT method.

It can be observed that the minimum reconstruction distance of the proposed method is lower than that of the NLUT method for a fixed object resolution. The maximum viewing angle of the proposed method is expanded by 22.5%, 29.3% and 38.5% for the 200×200, 250×250 and 300×300 resolution objects, respectively. The viewing angle behavior is also analyzed by numerically simulating the light distribution of the leftmost and rightmost image points at a viewing distance of R=200 mm, as shown in Fig. 13.

Fig. 13. Light distribution of the leftmost and the rightmost image points with the resolution of the object is 200×200, 250×250 and 300×300 respectively, by (a)-(c) the NLUT method and (d)-(f) the proposed method.

Compared with the NLUT method, the viewing area of the proposed method at the critical reconstruction distance is greatly increased, and the expansion becomes more significant as the object resolution increases. When the number of SLMs increases, the expansion of the viewing area of the proposed method will be even more remarkable compared with the NLUT method. The relative value of the viewing angle K is given by Eq. (12):

$$K = \frac{\tan(\Omega'/2)}{\tan(\Omega/2)} = \frac{H + W}{H - W}.$$
It can be seen from Eq. (12) that the relative value of the viewing angle K is determined by the ratio of the SLM size H to the object size W.
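Equation (12) gives the ratio of the two methods' half-angle tangents directly from the hologram and object sizes. A one-line helper (ours, illustrative) makes the dependence explicit; H and W must share one unit, e.g. pixels or metres.

```python
def viewing_angle_gain(H, W):
    """Relative viewing-angle value K of Eq. (12): the ratio
    tan(Omega'/2) / tan(Omega/2) = (H + W) / (H - W)."""
    return (H + W) / (H - W)
```

For example, for an SLM of H = 1920 pixels and an object of W = 300 pixels, K = 2220/1620 ≈ 1.37, i.e. the proposed method's half-angle tangent is about 37% larger at the respective minimum reconstruction distances.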

The calculation acceleration and viewing angle expansion are limited by the performance of the SLM. Owing to the limited SLM resolution, the translation and superposition of the ODAs require approximate treatment, which slightly affects the reconstructed image. To mitigate this, the reconstruction distance and the viewing distance need to be set appropriately; with a higher SLM resolution, the reconstructed image will be better and the parameter settings more flexible. Currently, the viewing angle can also be expanded by using multiple SLMs. When the proposed method is used in a system with two spliced SLMs, the viewing angle can be expanded by 16.9%, under the condition that the wavelength is 532 nm, the pixel pitch is 6.4 µm, the SLM resolution is 1920×1080 and the object resolution is 300×300. Therefore, the proposed method has great advantages and potential in viewing angle expansion.

The key of the proposed method is the calculation of the effective viewing area and the optimal segmentation of the sub-CGHs according to it. The effective viewing area is determined by the size of the 3D object, the position of each point, the parameters of the SLM and the reconstruction distance. Based on the effective viewing area, the size of the ODA on the sub-CGH for each object point is calculated. Only the ODAs of the sub-CGHs are pre-saved and used to calculate the final CGH, while the rest of each sub-CGH is ignored. In this way, the holographic reconstructed image can be calculated rapidly: the proposed method improves the calculation speed while maintaining the integrity of the viewing area. Compared with existing methods, the proposed method has the following advantages:

  • 1) The traditional methods usually use the whole sub-CGHs for hologram generation, which contain a large number of invalid diffraction areas, so the calculation speed is relatively slow. The proposed method calculates only the ODA of each sub-CGH, which distinctly reduces the calculation complexity and accelerates the speed.
  • 2) Compared with our previous work, the sub-CGH of each object point is further optimized in this method: the ODA of each object point is smaller than that of the previous work, so the calculation speed is greatly improved.
  • 3) Compared with methods that optimize the holograms based on eye tracking [21,22], the effective viewing area of the proposed method is determined by the size of the 3D object, the position of the point, the parameters of the SLM and the reconstruction distance. The size and position of the ODA corresponding to each object point are calculated from the effective viewing area, not by tracking the viewpoint. The effective viewing area of the proposed method is the maximum achievable effective viewing area, so the proposed method is expected to be usable for multi-viewer display, whereas the method of the SeeReal Corporation has advantages in near-eye display. In addition, since the optimal segmentation of the sub-CGH is determined by the maximum effective viewing area rather than by tracking the viewpoint, the proposed method can also be used to optimize other PLS model-based algorithms, such as the WRP method.
  • 4) The feasibility of the proposed method has been verified experimentally, and its potential advantage in viewing angle expansion has also been verified. As the object resolution increases, the expansion of the viewing area of the proposed method becomes more obvious. We will focus more on related holographic technologies in the future, and hope the calculation speed can truly meet the requirement of real-time display.

4. Conclusion

In this paper, a fast hologram generation method based on the optimal segmentation of sub-CGHs is proposed. The sub-CGH of each object point of the recorded object is optimized and divided into two parts, and only the ODAs are pre-saved and used to calculate the hologram. The calculation results prove that the CGH calculation speed is improved significantly, and the merits of the proposed method become more obvious as the number of object points increases. Moreover, the proposed method can be used to optimize most PLS model-based holographic algorithms.

Funding

National Natural Science Foundation of China (61927809, 62020106010).

Acknowledgment

The authors would like to thank Wei Duan, who graduated from Nanjing University, for her technical assistance with MATLAB R2017b.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. Y. Tsuchiyama and K. Matsushima, “Full-color large-scaled computer-generated holograms using RGB color filters,” Opt. Express 25(3), 2016–2030 (2017).

2. X. Xiao, K. Wakunami, X. Chen, X. Shen, B. Javidi, J. Kim, and J. Nam, “Three-dimensional holographic display using dense ray sampling and integral imaging capture,” J. Disp. Technol. 10(8), 688–694 (2014).

3. T. Zhan, J. G. Xiong, J. Y. Zou, and S. T. Wu, “Multifocal displays: review and prospect,” PhotoniX 1(1), 10 (2020).

4. L. J. Su, K. Y. Kwang, L. M. Young, and W. Y. Hyub, “Enhanced see-through near-eye display using time-division multiplexing of a Maxwellian-view and holographic display,” Opt. Express 27(2), 689–701 (2019).

5. W. Zhao, B. Liu, H. Jiang, J. Song, Y. Pei, and Y. Jiang, “Full-color hologram using spatial multiplexing of dielectric metasurface,” Opt. Lett. 41(1), 147–151 (2016).

6. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graphics 36(4), 1–16 (2017).

7. D. Wang, C. Liu, C. Shen, Y. Xing, and Q. H. Wang, “Holographic capture and projection system of real object based on tunable zoom lens,” PhotoniX 1(1), 6 (2020).

8. S. Xi, X. Wang, L. Song, Z. Zhu, B. Zhu, S. Huang, N. Yu, and H. Wang, “Experimental study on optical image encryption with asymmetric double random phase and computer-generated hologram,” Opt. Express 25(7), 8212–8222 (2017).

9. P. Sun, S. Chang, S. Liu, X. Tao, C. Wang, and Z. Zheng, “Holographic near-eye display system based on double-convergence light Gerchberg-Saxton algorithm,” Opt. Express 26(8), 10140–10151 (2018).

10. C. Jang, K. Bang, G. Li, and B. Lee, “Holographic near-eye display with expanded eye-box,” ACM Trans. Graph. 37(6), 1–14 (2019).

11. J. Jia, Y. Wang, J. Liu, X. Li, Y. Pan, Z. Sun, B. Zhang, Q. Zhao, and W. Jiang, “Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display,” Appl. Opt. 52(7), 1404–1412 (2013).

12. S. Jiao, Z. Zhuang, and W. Zou, “Fast computer generated hologram calculation with a mini look-up table incorporated with radial symmetric interpolation,” Opt. Express 25(1), 112–123 (2017).

13. H. K. Cao and E. S. Kim, “Full-scale one-dimensional NLUT method for accelerated generation of holographic videos with the least memory capacity,” Opt. Express 27(9), 12673–12691 (2019).

14. T. Zhan, J. G. Xiong, J. Y. Zou, and S. T. Wu, “Multifocal displays: review and prospect,” PhotoniX 1(1), 10 (2020).

15. Y. G. Ju and J. H. Park, “Foveated computer-generated hologram and its progressive update using triangular mesh scene model for near-eye displays,” Opt. Express 27(17), 23725–23738 (2019).

16. D. Blinder and P. Schelkens, “Phase added sub-stereograms for accelerating computer generated holography,” Opt. Express 28(11), 16924–16934 (2020).

17. D. Wang, N. N. Li, C. Liu, and Q. H. Wang, “Holographic display method to suppress speckle noise based on effective utilization of two spatial light modulators,” Opt. Express 27(8), 11617–11625 (2019). [CrossRef]  

18. D. Pi, J. Liu, Y. Han, A. U. R. Khalid, and S. Yu, “Simple and effective calculation method for computer-generated hologram based on non-uniform sampling using look-up-table,” Opt. Express 27(26), 37337–37348 (2019). [CrossRef]  

19. A. Symeonidou, D. Blinder, A. Munteanu, and P. Schelkens, “Computer-generated holograms by multiple wavefront recording plane method with occlusion culling,” Opt. Express 23(17), 22149–22161 (2015). [CrossRef]  

20. B. J. Jackin and T. Yatagai, “Fast calculation of spherical computer generated hologram using spherical wave spectrum method,” Opt. Express 21(1), 935–948 (2013). [CrossRef]  

21. www.seereal.com/technology/.

22. E. Zschau, A. Schwerdtner, and B. Kroll, “Method for generating video holograms in real time by means of subholograms,” United States Patent 20100073744 (March 25, 2010).

23. Q. A. Pham, D. L. Bueno, T. Wang, G. Montoro, and P. L. Gilabert, “Partial least squares identification of multi look-up table digital predistorters for concurrent dual-band envelope tracking power amplifiers,” IEEE Trans. Microwave Theory Techn. 66(12), 5143–5150 (2018). [CrossRef]  

24. S. Jiao, Z. Zhuang, and W. Zou, “Fast computer generated hologram calculation with a mini look-up table incorporated with radial symmetric interpolation,” Opt. Express 25(1), 112–123 (2017). [CrossRef]  

25. M. W. Kwon, S. C. Kim, and E. S. Kim, “Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes,” Appl. Opt. 55(3), A22–A31 (2016). [CrossRef]  

26. S. C. Kim, J. M. Kim, and E. S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20(11), 12021–12034 (2012). [CrossRef]  

27. S. C. Kim, X. B. Dong, and E. S. Kim, “Accelerated one-step generation of full-color holographic videos using a color-tunable novel-look-up table method for holographic three-dimensional television broadcasting,” Sci. Rep. 5(1), 14056 (2015). [CrossRef]  

28. J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, “Generation of real-time large computer generated hologram using wavefront recording method,” Opt. Express 20(4), 4018–4023 (2012). [CrossRef]  

29. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18(19), 19504–19509 (2010). [CrossRef]  

30. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram using run-length encoding based recurrence relation,” Opt. Express 23(8), 9852–9857 (2015). [CrossRef]  

31. X. Dong, S. C. Kim, and E. S. Kim, “Three-directional motion compensation-based novel-look-up-table for video hologram generation of three-dimensional objects freely maneuvering in space,” Opt. Express 22(14), 16925–16944 (2014). [CrossRef]  

32. T. Shimobaba, T. Ito, N. Masuda, Y. Ichihashi, and N. Takada, “Fast calculation of computer-generated-hologram on AMD HD5000 series GPU and OpenCL,” Opt. Express 18(10), 9955–9960 (2010). [CrossRef]  

33. Y. Seo, Y. Lee, J. Yoo, and D. Kim, “Hardware architecture of high-performance digital hologram generator on the basis of a pixel-by-pixel calculation scheme,” Appl. Opt. 51(18), 4003–4012 (2012). [CrossRef]  


Figures (13)

Fig. 1. Process of the proposed method.
Fig. 2. Sub-CGH optimal segmentation.
Fig. 3. Sub-CGH optimal segmentation of any point m′.
Fig. 4. Total calculation time variations of the proposed method and the NLUT method.
Fig. 5. Average calculation time of the proposed method and the NLUT method.
Fig. 6. Holographic display system.
Fig. 7. Numerical reconstruction results for the camel. (a) Proposed method and (b) NLUT method.
Fig. 8. Reconstructed images of the camel. (a) Proposed method and (b) NLUT method.
Fig. 9. Reconstructed images of the plane by using the (a) proposed method and (b) NLUT method, and the local light intensity distribution of the plane by using the (c) proposed method and (d) NLUT method.
Fig. 10. Reconstructed results for the 3D scene generated by (a)-(b) the proposed method and (c)-(d) the NLUT method.
Fig. 11. (a) Pattern and (b) numerically reconstructed result of the image.
Fig. 12. Principle of holographic viewing angle expansion.
Fig. 13. Light distribution of the leftmost and the rightmost image points when the resolution of the object is 200×200, 250×250 and 300×300, respectively, by (a)-(c) the NLUT method and (d)-(f) the proposed method.

Tables (2)

Table 1. Calculation time of the proposed method and the NLUT method.

Table 2. Theoretical calculation results of the proposed method and the NLUT method.

Equations (12)


$$U(x,y)=\frac{\exp(jkz)}{j\lambda z}\iint A_0 e^{j\varphi_0}\exp\!\left[\frac{jk\left[(x-x_0)^2+(y-y_0)^2\right]}{2z}\right]\mathrm{d}x_0\,\mathrm{d}y_0, \tag{1}$$

$$\theta=\sin^{-1}\!\left(\frac{\lambda}{2p}\right), \tag{2}$$

$$\alpha=\tan^{-1}\!\left(\frac{W/2+H/2}{L}\right)=\tan^{-1}\!\left(\frac{W+H}{2L}\right). \tag{3}$$

$$\beta=\tan^{-1}\!\left(\frac{H-W}{2L}\right). \tag{4}$$

$$L\geq\frac{(H-W)p}{\lambda}, \tag{5}$$

$$L\geq\frac{(H+W)p}{\lambda}. \tag{6}$$

$$y_1=-\frac{H}{2}+\frac{R(2y_0+W)}{2(R-L)}, \tag{7}$$

$$y_2=\frac{H}{2}+\frac{R(2y_0-W)}{2(R-L)}. \tag{8}$$

$$S=H-\frac{WR}{R-L}. \tag{9}$$

$$\tau=y_2-y_0=\frac{H}{2}+\frac{2Ly_0-WR}{2(R-L)}, \tag{10}$$

$$k=y_2'-y_2=\frac{R}{R-L}p, \tag{11}$$

$$K=\frac{\tan(\Omega_1/2)}{\tan(\Omega_2/2)}=\frac{H+W}{H-W}. \tag{12}$$
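As a quick numerical sanity check of the diffraction-angle, minimum-distance and viewing-angle-ratio relations above, the following sketch plugs in representative values. The wavelength, pixel pitch and aperture sizes are assumed for illustration only and are not the paper's experimental parameters.

```python
import math

# Assumed illustrative values: 532 nm laser, 8 um SLM pixel pitch,
# hologram height H = 15.36 mm, object width W = 3.84 mm.
lam = 532e-9   # wavelength (m)
p = 8e-6       # SLM pixel pitch (m)
H = 15.36e-3   # hologram height (m)
W = 3.84e-3    # object width (m)

theta = math.asin(lam / (2 * p))   # maximum diffraction half-angle of the SLM
L_min = (H + W) * p / lam          # minimum recording distance, L >= (H+W)p/lambda
K = (H + W) / (H - W)              # ratio of the two viewing-angle tangents

print(f"theta = {math.degrees(theta):.2f} deg")   # ~1.91 deg
print(f"L_min = {L_min * 1e3:.1f} mm")            # ~288.7 mm
print(f"K = {K:.3f}")                             # ~1.667
```

With these assumed values the recording distance must exceed roughly 0.29 m for the whole aperture to stay inside the SLM's diffraction cone.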
