
Multi-depth hologram generation using stochastic gradient descent algorithm with complex loss function

Open Access

Abstract

The stochastic gradient descent (SGD) method is useful in the phase-only hologram optimization process and can achieve a high-quality holographic display. However, for the current SGD solution in multi-depth hologram generation, the optimization time increases dramatically as the number of depth layers of the object increases, which makes the SGD method nearly impractical for hologram generation of complicated three-dimensional objects. In this paper, the proposed method uses a complex loss function instead of an amplitude-only loss function in the SGD optimization process. This substitution ensures that the total loss function can be obtained through only one calculation, so the optimization time is greatly reduced. Moreover, since both the amplitude and phase parts of the object are optimized, the proposed method can obtain a relatively accurate complex amplitude distribution. The defocus blur effect therefore matches the result of the complex amplitude reconstruction. Numerical simulations and optical experiments have validated the effectiveness of the proposed method.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Holography is regarded as a promising technology for three-dimensional (3D) display since it can reconstruct the whole optical wave field of a 3D scene and provide all of the 3D information [1–5]. In computer-generated holography, the devices used to display holograms are usually of the phase-only or amplitude-only type. Thus, before the holographic reconstruction, the object’s amplitude and phase information need to be encoded into a phase-only or amplitude-only hologram [6,7].

To generate a phase-only hologram, directly discarding the amplitude of the complex diffraction field and keeping only the phase is a simple method, but the quality of the reconstructed image is heavily degraded [8]. The random phase method is an effective way to record the object into a phase-only hologram and to simulate the diffuse reflection of the object [9], but the results are contaminated by speckle noise because strong interference occurs in the reconstructed field [10,11]. The classic approach to the speckle noise problem is iteration, such as the Gerchberg-Saxton (GS) method [12] and its improved forms [13–16]. These methods are effective in an ideal optical reconstruction system. However, they mainly focus on amplitude optimization, and it is hard to obtain an optimized amplitude and phase distribution simultaneously. Moreover, optical reconstruction errors exist in the phase term [17], which heavily affect the phase reconstructed by the random-phase-based iteration methods. Thus, the speckle noise appears again in the reconstruction result. The error diffusion method [18] and the double phase method [19–21] are effective complex amplitude encoding methods that can encode the diffraction field into a phase-only hologram. Since random phase is usually not applied in the encoding process, an imperfect optical system has a limited impact on the reconstructed phase term, and the speckle noise can be suppressed well in the reconstruction result.

Besides the traditional iteration and direct encoding methods for generating phase-only holograms, the stochastic gradient descent (SGD) method [22,23] and related methods have recently been proposed to optimize the phase-only hologram [24–26]. By defining a loss function for the optimization process, the gradient of the loss function can be used to update the initial phase. The initial phase is then propagated to the target plane, and the loss value is evaluated again. Using the SGD method together with powerful optimizers such as RMSProp [27] and Adam [28], the optimization process is able to approach the global minimum of the loss function and achieve a high-quality result. Besides, the SGD method can be integrated with the camera-in-the-loop iteration process [17], which accounts for imperfect optical reconstruction conditions (such as dust in the system or scratches in the lens), and it can achieve a near-ideal optical reconstruction result. Although SGD has proven powerful for phase-only hologram generation, previous studies have mainly focused on 2D objects rather than 3D objects. In fact, hologram generation for a multi-depth 3D object remains a challenge for the SGD method. This is because current methods mostly compare the target amplitude with the reconstructed amplitude to calculate the loss function. Therefore, for a multi-depth object, these methods need to calculate the loss function for each depth layer and sum the per-layer losses to obtain the total loss. However, this process is time-consuming, has a high memory footprint, and becomes impractical when the number of depth layers of the 3D object grows large. Additionally, since there are no phase constraints in the SGD method with an amplitude-only loss, the reconstructed results fail to express the correct defocus blur in the multi-depth holographic reconstruction.

In this paper, an SGD method based on a complex loss function is proposed to reduce the optimization time of multi-depth hologram generation. Firstly, the target complex amplitude of the 3D object is generated using the angular spectrum layer-based method [11]. Then, the complex loss function is obtained by comparing the complex amplitude of the reconstructed diffraction result with the target complex amplitude. Finally, the phase-only hologram is optimized through the stochastic gradient descent process. Since the complex loss function of the whole 3D object is calculated with only one comparison per optimization step, the optimization time is similar to that of a single-depth object and much shorter than that of the amplitude-loss strategy. Moreover, because the complex loss function optimizes the amplitude and phase information of the object simultaneously, the reconstructed results behave similarly to the complex amplitude reconstruction. Therefore, the proposed method is effective and practical for high-quality multi-depth hologram generation.

2. SGD Method for multi-depth object

The effectiveness of the SGD method for single-depth holographic display has been demonstrated in previous research [22,23]. However, in multi-depth hologram generation, the optimization time and the quality of the reconstructed image remain challenges. A typical solution that extends the SGD method to the multi-depth case compares the loss between the reconstructed amplitude and the target amplitude at each depth layer and then adds the per-layer losses together as a total loss, which is used to update the initial phase. The diagram of this solution is shown in Fig. 1.

As shown in Fig. 1, to obtain the total loss function of a multi-depth object, the diffraction propagation is implemented many times in this process. Therefore, the optimization time increases dramatically with the number of depth layers and the number of optimization steps.

Fig. 1. Conventional SGD solution for multi-depth object with amplitude loss function.

The critical problem in multi-depth SGD optimization is how to obtain and calculate the loss function. In previous research, the loss function is usually obtained by comparing the amplitude of the reconstructed image with that of the target at different depth planes, and the optimization time becomes a challenge. In order to solve this problem, we propose using a complex loss function instead of the amplitude-only loss for SGD optimization. Before the calculation of the total loss function, the complex amplitude field of the multi-depth object is first calculated using the angular spectrum layer-based method, as shown in Fig. 2:

Fig. 2. Schematic diagram of the proposed method.

In Fig. 2, the symbol $z_0$ is the distance from the hologram plane to the target plane, $z_t$ is the distance from the target plane to the back of the object, and $l_0$ is the length (depth extent) of the object. Figure 2 shows the whole process of the proposed method. Firstly, the layer-based method is used to obtain the complex amplitude of the 3D object, and the angular spectrum method (ASM) is used as the diffraction calculation method. The expression of the ASM is given by Eq. (1):

$$U_z(x_1,y_1)= IFFT{\bigg\{}FFT{\bigg\{}U_0(x,y){\bigg\}}\cdot H_f(f_x,f_y){\bigg\}},$$
where $U_0(x,y)$ and $U_z(x_1,y_1)$ represent the complex amplitude distributions of the input and output fields, respectively. $U_0(x,y)=A_0\cdot \textrm{exp}(j\varphi_{0}(x, y))$, where $A_0$ and $\varphi_{0}$ are the amplitude and phase of each depth layer, respectively. $FFT$ and $IFFT$ represent the fast Fourier transform (FFT) and the inverse FFT. $H_f(f_x,f_y)$ is the transfer function of the ASM, which is expressed by the following equation:
$$H_f(f_x,f_y)=\textrm{exp}\Big( ikz\sqrt{1-(\lambda f_x)^2-(\lambda f_y)^2}\Big),$$
where $k$ represents the wavenumber and is given as $2\pi /\lambda$, $\lambda$ is the wavelength, $z$ is the diffraction distance, and $f_x$, $f_y$ are the spatial frequencies.
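As a concrete illustration, the following PyTorch sketch implements the propagation of Eqs. (1) and (2). It is not the authors' code; the function name asm_propagate and the default wavelength and pixel pitch (taken from the parameters in Section 3) are illustrative assumptions. The helper is reused in the later sketches.

```python
# A minimal PyTorch sketch of Eqs. (1)-(2): angular spectrum propagation of a
# complex field u0 over a distance z. Names and defaults are illustrative.
import math
import torch

def asm_propagate(u0: torch.Tensor, z: float,
                  wavelength: float = 532e-9, pitch: float = 6.4e-6) -> torch.Tensor:
    """Return U_z = IFFT{ FFT{U_0} . H_f } for a complex field u0 of shape (M, N)."""
    m, n = u0.shape[-2], u0.shape[-1]
    fx = torch.fft.fftfreq(n, d=pitch, device=u0.device)   # spatial frequencies f_x
    fy = torch.fft.fftfreq(m, d=pitch, device=u0.device)   # spatial frequencies f_y
    fyy, fxx = torch.meshgrid(fy, fx, indexing="ij")
    k = 2 * math.pi / wavelength                            # wavenumber k = 2*pi/lambda
    # Clamping avoids NaNs from the square root in the evanescent region.
    arg = torch.clamp(1 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2, min=0.0)
    h_f = torch.exp(1j * k * z * torch.sqrt(arg))           # transfer function H_f
    return torch.fft.ifft2(torch.fft.fft2(u0) * h_f)
```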

In the target complex amplitude calculation process, a compensation phase is multiplied with the object at each depth plane. This eliminates the phase shift among different planes when they are propagated to the destination plane [29]; otherwise, interference appears among different planes in the edge area, since the phase distributions of the different depth planes on the hologram plane differ. The compensation phase also gives a smoother amplitude transition in the reconstruction result [30]. The phase is expressed by:

$$\varphi_0(x,y)=\textrm{exp}({-}jkz_{ot}),$$
where $z_{ot}$ is the distance from the depth layer of the object to the target plane. If the number of depth layers is set to $N$, then $z_{ot}=z_t+n_i\cdot l_0/N$, where $n_i$ is the index of the layer in the range $[0, N-1]$. Therefore, $U_0=A_0\cdot \varphi _0(x,y)=A_0\cdot \textrm{exp}(-jkz_{ot})$. This compensation phase is used in the target complex amplitude generation process.
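A possible implementation of the target complex amplitude generation with the compensation phase of Eq. (3) is sketched below. It reuses the asm_propagate helper from the previous sketch; the layer ordering and the propagation sign convention are assumptions rather than the authors' exact implementation.

```python
# Sketch of the target complex amplitude generation: each depth layer A_0 is
# multiplied by the compensation phase of Eq. (3) and propagated to the target
# plane with the asm_propagate helper defined above.
import math
import torch

def target_complex_amplitude(layer_amps, z_t, l_0,
                             wavelength=532e-9, pitch=6.4e-6):
    """layer_amps: list of N amplitude tensors A_0, indexed n_i = 0 ... N-1."""
    n_layers = len(layer_amps)
    k = 2 * math.pi / wavelength
    u_target = torch.zeros_like(layer_amps[0], dtype=torch.complex64)
    for n_i, a_0 in enumerate(layer_amps):
        z_ot = z_t + n_i * l_0 / n_layers                      # layer-to-target distance
        phi_0 = torch.exp(torch.tensor(-1j * k * z_ot))         # compensation phase, Eq. (3)
        u_0 = a_0 * phi_0                                       # U_0 = A_0 * exp(-j k z_ot)
        # Propagate this layer to the target plane and accumulate the field there.
        u_target = u_target + asm_propagate(u_0, z_ot, wavelength, pitch)
    return u_target
```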

After the target complex amplitude calculation, the complex amplitude is divided into a signal area and a background area by using a mask, and the losses of the signal and the background are calculated separately. The mask is generated by converting the normalized amplitude $A$ of the target complex amplitude into a binary value through an amplitude threshold. The expression of the mask is given by:

$$Mask(m,n)=\begin{cases} 1 \quad(A(m,n)>threshold) \\ 0 \quad(else), \end{cases}$$
where $m$ and $n$ index the sampling points of the object in the $x$ and $y$ dimensions, respectively. By setting different thresholds, the sizes of the signal and background areas can be controlled. The criterion for choosing a proper threshold is that the mask area should cover most of the diffraction information of the signal. To balance the optimization direction of the SGD method and obtain a more flexible form of the loss function, the parameter $\beta$ is applied to adjust the ratio between the signal loss and the background loss. A large value of $\beta$ means the optimization tends to optimize the background, and a small value means it tends to optimize the signal, so a balanced value should be selected. Next, the complex loss function can be obtained by comparing the reconstructed complex amplitude with the target complex amplitude. In the proposed method, the combination of the real and imaginary losses is used as the complex loss function (an alternative option is to use the combination of the phase and amplitude losses).
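A minimal sketch of the mask of Eq. (4) is given below; the function name is an assumption, and the default threshold follows the value of 0.05 used in Section 3.1.

```python
# Sketch of the binary mask of Eq. (4): the normalized amplitude of the target
# complex amplitude is thresholded to separate the signal and background areas.
import torch

def signal_mask(u_target: torch.Tensor, threshold: float = 0.05) -> torch.Tensor:
    a = u_target.abs()
    a = a / a.max()                 # normalized amplitude A in [0, 1]
    return (a > threshold).float()  # 1 in the signal area, 0 in the background
```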

Then, the total loss function is given by:

$$Loss_{sum} = Loss_{real}(R_{r}, R_t) + Loss_{imag}(I_{r}, I_t) + \beta \cdot Loss_{bg}(A_{rb}, A_{tb}),$$
where $R_{r}$ and $I_r$ are the real part and imaginary part of the reconstructed complex amplitude in the signal area, $R_t$ and $I_t$ are the real part and the imaginary part of the target complex amplitude in the signal area, and $A_{rb}$ and $A_{tb}$ represent the amplitude of the reconstructed complex amplitude and target complex amplitude in the background area. The MSE loss function is used to calculate the $Loss_{real}$, $Loss_{imag}$ and $Loss_{bg}$ in Eq. (5), and it can be expressed as:
$$MSELoss = \frac{1}{mn}\sum_{m,n}^{}[C_r(m,n) - C_t(m,n)]^2,$$
where $C_r$ and $C_t$ represent the reconstructed result and the target result, respectively.
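Written in code, the loss of Eqs. (5) and (6) might look like the following sketch. It assumes the mask and target field from the previous sketches; for simplicity, the masked MSE terms average over the full grid rather than only over the signal or background pixels.

```python
# Sketch of the complex loss of Eqs. (5)-(6): MSE on the real and imaginary
# parts in the signal area plus a beta-weighted amplitude MSE in the background.
import torch

mse = torch.nn.MSELoss()

def complex_loss(u_rec, u_target, mask, beta=0.1):
    bg = 1.0 - mask
    loss_real = mse(u_rec.real * mask, u_target.real * mask)   # Loss_real (signal area)
    loss_imag = mse(u_rec.imag * mask, u_target.imag * mask)   # Loss_imag (signal area)
    loss_bg = mse(u_rec.abs() * bg, u_target.abs() * bg)       # Loss_bg (background area)
    return loss_real + loss_imag + beta * loss_bg               # Eq. (5)
```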

Finally, based on the total loss function $Loss_{sum}$, SGD can be applied to optimize the initial phase-only hologram, and the Adam optimizer can be used as the update rule in the optimization process. Adam is an adaptive learning rate method that only requires first-order gradients. It uses estimates of the first and second moments of the gradients to adapt the learning rate; the first moment is the mean, and the second moment is the uncentered variance [31]. Based on this update rule, the optimization process is able to approach the global minimum. After several iterations, the optimized multi-depth hologram can be obtained.
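Putting the pieces together, a hedged end-to-end sketch of the proposed optimization loop is shown below: one propagation and one complex-loss evaluation per Adam step, regardless of the number of depth layers. It assumes the helper functions from the sketches above; the learning rate is an illustrative choice, while the other defaults follow the dragon example in Section 3.

```python
# End-to-end sketch of the proposed optimization: a single propagation and a
# single complex-loss evaluation per Adam step, independent of the layer count.
import math
import torch

def optimize_hologram(layer_amps, z_0=0.25, z_t=0.0, l_0=0.10,
                      steps=100, lr=0.01, beta=0.1,
                      wavelength=532e-9, pitch=6.4e-6):
    u_target = target_complex_amplitude(layer_amps, z_t, l_0, wavelength, pitch)
    mask = signal_mask(u_target, threshold=0.05)
    # Random initial phase, optimized directly as the phase-only hologram.
    phase = (2 * math.pi * torch.rand_like(u_target.real)).requires_grad_(True)
    optimizer = torch.optim.Adam([phase], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        u_holo = torch.complex(torch.cos(phase), torch.sin(phase))    # exp(j*phase)
        u_rec = asm_propagate(u_holo, z_0, wavelength, pitch)         # hologram -> target plane
        loss = complex_loss(u_rec, u_target, mask, beta)
        loss.backward()
        optimizer.step()
    return phase.detach()
```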

The proposed method reduces the optimization time of the SGD process by evaluating the complex loss instead of the per-layer amplitude loss, so the optimization time is close to that of single-depth optimization. This is a benefit for hologram generation of multi-depth objects. Moreover, since the optimization acts on the complex amplitude of the reconstructed field, the reconstructed results match those of the complex amplitude reconstruction, which guarantees the quality of the reconstruction.

3. Results

3.1 Simulation results

In this part, simulation experiments are carried out to compare the direct amplitude discard method, the double phase method, the SGD method with the amplitude loss function, the proposed method, and the complex amplitude reconstruction.

The 3D object used in the experiments is a combination of the letters “A”, “B”, “C”, and “D”, which lie at four different depths, and the resolution of each letter is $1920\times 1080$, as shown in Fig. 3. The distance $z_t$ is set to 0.00 m, and the interval $\Delta z$ between the letters is set to 0.04 m. The resolution and pixel pitch of the hologram are $1920\times 1080$ and 6.4 µm, respectively. The distance $z_0$ between the hologram plane and the target plane is set to 0.25 m. The wavelength of the light source in the simulation is set to 532.0 nm. The number of optimization steps for the SGD amplitude loss method and the proposed method is set to 100. The parameter $\beta$ in Eq. (5) can be chosen from a large range, usually from 0.1 to 1. In this case, it is set to 0.1 empirically, which gives a good optimization result for both the signal and the background. PyTorch 1.7.0 and Python 3.6.9 are used to implement the SGD optimization, and the Adam optimizer is used in the optimization process. The CPU and GPU used in the experiments are an Intel Xeon CPU @ 2.20 GHz and a Tesla T4 16 GB with CUDA version 10.1, respectively.

Fig. 3. Position relationship among the hologram plane, reconstruction plane, target plane, and the 3D object.

Before the optimization process, the target complex amplitude is first obtained, and a mask is used to separate it into the background and signal areas. Here the threshold for the mask generation is set to 0.05. With this value, the generated mask covers most of the details of the diffraction field, which is important for preserving most of the object’s information in the phase-only hologram. Based on the above conditions, the amplitude of the target complex amplitude is shown in Fig. 4(a), and the generated mask is shown in Fig. 4(b). Figures 4(c) and 4(d) are the real and imaginary parts of the target complex amplitude, respectively.

Fig. 4. (a) Amplitude of the generated target complex amplitude, (b) generated mask, (c) and (d) real and imaginary parts of the target complex amplitude.

To evaluate the quality of the reconstructed results, the peak signal-to-noise ratio (PSNR) is used. The PSNR for an 8-bit gray-level image is defined as:

$$PSNR = 10\, \textrm{log}\Bigg ( \frac{255^2}{\frac{1}{m_{1}n_{1}}\sum_{m_{1},n_{1}}[Y_0(m_{1},n_{1}) - Y_r(m_{1},n_{1})]^2}\Bigg ),$$
where $Y_0$ and $Y_r$ are the original image and the reconstructed image, respectively, and $m_1$ and $n_1$ are the numbers of pixels in the horizontal and vertical directions.
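For reference, a one-function sketch of Eq. (7) is given below, assuming images already scaled to 8-bit gray levels (the function name is illustrative).

```python
# Sketch of the PSNR metric of Eq. (7) for 8-bit gray-level images.
import torch

def psnr_8bit(y0: torch.Tensor, yr: torch.Tensor) -> torch.Tensor:
    """y0, yr: original and reconstructed images with gray levels in [0, 255]."""
    mse = torch.mean((y0 - yr) ** 2)
    return 10 * torch.log10(255.0 ** 2 / mse)
```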

A comparison among the direct amplitude discard method, double phase method, SGD with amplitude loss method, proposed method, and the complex amplitude reconstruction is given in Fig. 5. The distance $z$ in Fig. 5 is the reconstruction distance. The columns (a)-(e) in Fig. 5 represent the reconstruction results of these five different methods at different depth planes, and column (f) represents the reconstructed phase of these methods at $z=0.37$ m.

Fig. 5. Simulation results with direct amplitude discard method, double phase method, SGD amplitude loss method, the proposed method, and the complex amplitude reconstruction. Column (a) is the reconstructed result when $z=0.25$ m. Columns (b)-(e) are the detailed reconstruction results with different methods when $z=0.25$ m, 0.29 m, 0.33 m and 0.37 m. Column (f) is the phase distribution of the letter “D” with different methods when $z=0.37$ m.

As can be seen from the first row in Fig. 5, the reconstructed results contain only some edge information of the object since the amplitude information is lost in the direct amplitude discard method. The PSNR values in the red box region show that the image quality of this method is quite low. The second row shows the reconstruction results of the double phase method: the amplitude information is preserved well and the phase distribution is nearly uniform, but some structural noise appears in the results because of the down-sampling operation. The third row shows the results of the SGD amplitude loss method. Since amplitude constraints are applied for each depth layer, the quality of the reconstructed results is higher than that of the double phase method. In contrast, the phase distributions in the reconstructed results are nonuniform since there are no phase constraints in the optimization process. The fourth row shows the results of the proposed method. The results are of high quality, and the phase distribution is nearly uniform. The fifth row shows the complex amplitude reconstruction, which has the highest PSNR values. It can be seen from the fourth and fifth rows of Fig. 5 that the reconstruction results of the proposed method are similar to those of the complex amplitude reconstruction: they have a similar defocus effect and amplitude distribution. This is because the complex loss function optimizes the phase and amplitude distributions simultaneously.

Figure 6 shows the details of the reconstruction results of the double phase method and the proposed method when the letter “B” is in focus. As shown in the figure, the results of the proposed method have a smoother amplitude distribution than those of the double phase method. Figures 7(a)–7(c) and 7(d)–7(f) show the reconstructed amplitude and phase of the SGD amplitude loss method, the proposed method, and the complex amplitude reconstruction, respectively. In Figs. 7(a) and 7(b), the PSNR values are obtained by comparing the reconstructed results of the SGD amplitude loss method and the proposed method with the complex amplitude reconstruction result; a higher PSNR means that the reconstruction is closer to the complex amplitude reconstruction. From Fig. 7, it can be seen that the defocus blur effect of the proposed method is better than that of the SGD amplitude loss method, its PSNR is much higher, and both the amplitude and phase distributions of the proposed method match the complex amplitude reconstruction well.

Fig. 6. Details of the reconstruction results from the double phase method and the proposed method when the letter “B” is focused.

Fig. 7. Comparison among the SGD amplitude loss method, proposed method, and the complex amplitude reconstruction, when the letter “B” is defocused ($z=0.25$ m). (a)-(c) Amplitude distributions of the SGD amplitude loss method, proposed method, and the complex amplitude reconstruction. (d)-(f) Phase distributions of the SGD amplitude loss method, proposed method, and the complex amplitude reconstruction.

In order to demonstrate the effectiveness of the proposed method for a complex 3D object, a 3D dragon with a depth map is used as the input object, as shown in Figs. 8(a) and 8(b). The distance between the hologram plane and the target plane is set to $z_0=0.25$ m, $z_t$ is set to 0.00 m, and the length of the object $l_0$ is set to 0.10 m. The object is then sliced into 256 depth layers.

Fig. 8. (a) Intensity and (b) depth channel of the 3D dragon.

Figure 9 shows the reconstruction results of the proposed method when the input object is the 3D dragon. Figure 9(a) shows the result when the reconstruction distance $z=z_0+z_t+l_0/N\cdot 235$ focuses on the head of the dragon. Figure 9(b) shows the result when the distance $z=z_0+z_t+l_0/N\cdot 115$ focuses on the tail of the dragon. As can be seen from Fig. 9, the depth information of the 3D object is clear, and the details of the dragon are preserved well in the reconstructed results. These reconstruction results prove the effectiveness of the proposed method for complicated objects.
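For reference, with $z_0=0.25$ m, $z_t=0.00$ m, $l_0=0.10$ m and $N=256$, these two focus distances evaluate to approximately

$$z_{head}=0.25+0.00+\frac{0.10}{256}\times 235\approx 0.342\ \textrm{m},\qquad z_{tail}=0.25+0.00+\frac{0.10}{256}\times 115\approx 0.295\ \textrm{m}.$$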

Fig. 9. Simulation results of the proposed method for the 3D dragon when (a) the “head” is focused and (b) the “tail” is focused.

Figure 10 shows the optimization time of the hologram optimization process and the calculation time of the target complex amplitude generation process. As can be seen from the yellow line in Fig. 10, since there is no need to compare the amplitude at each depth layer in the proposed method, the SGD optimization time is nearly unchanged for objects with different numbers of depth layers: the optimization times for single-depth and multi-depth objects are both around 6.9 seconds. The blue line in Fig. 10 shows the calculation time of the target complex amplitude, which depends linearly on the number of depth layers.

Fig. 10. Calculation time of the target complex amplitude generation and the SGD optimization process.

A comparison of the optimization time between the proposed method and the SGD amplitude loss method for 100 steps with different numbers of depth layers is given in Table 1. From Table 1, we can see that the optimization time of the proposed method is much shorter than that of the SGD amplitude loss method when the number of depth layers increases. In the SGD amplitude loss method, the optimization time increases dramatically with the number of depth layers. The ratio in Table 1 is defined as the optimization time of the SGD amplitude loss method divided by that of the proposed method.

Table 1. Comparison of the optimization time for 100 steps

3.2 Optical results

The optical reconstruction system is shown in Fig. 11. The phase-only spatial light modulator (SLM) is provided by the Xi’an Institute of Optics and Precision Mechanics. The pixel pitch and resolution of the SLM are 6.4 µm and $1920\times 1080$, respectively. The frame rate and the phase modulation range of the SLM are 60 Hz and [0, 2$\pi$], respectively. The wavelength of the input light source is 532.0 nm. A $4-f$ filter system is applied in the holographic reconstruction process, and the focal lengths of the two Fourier lenses (lens 1 and lens 2) are both 300.0 mm. We display the pre-calculated holograms on the SLM and use a complementary metal-oxide-semiconductor (CMOS) camera (Canon EOS 77D without lens) to capture the reconstruction results.

Fig. 11. Optical reconstruction system.

The optical reconstruction results of the letters “A”, “B”, “C”, and “D” are shown in Fig. 12. The first row shows the result of the direct amplitude discard method. Similar to the simulation results, the optical results contain only the edge information since the amplitude part of the object is lost, and the PSNR values in the red boxes show that the reconstruction quality is very low. The double phase method can suppress the speckle noise well in the reconstruction results, as shown in the second row of Fig. 12, but another kind of noise appears in the results because the down-sampling operation is used in the hologram generation process. The reconstructed results of the SGD amplitude loss method are shown in the third row. Different from the simulation results, some noise appears in the optical reconstruction results. This is because the optical reconstruction system is not ideal (there are, for example, dust in the system and scratches in the lens), so the optical reconstruction is not as accurate as the simulation. This non-ideal system worsens the effect of the nonuniform phase distribution in the SGD amplitude loss method, which leads to speckle noise in the results. However, for the results of the proposed method shown in the fourth row, the phase distribution in the reconstructed results is nearly uniform, so even in a non-ideal optical reconstruction system the influence of the errors is smaller than with a nonuniform phase distribution. It can be clearly seen that less speckle noise appears in the results of the proposed method than in those of the SGD amplitude loss method, and the proposed method has the highest PSNR value among these methods. Besides, Fig. 13 shows that the reconstructed results of the proposed method have a better defocus blur effect, which matches the simulation results well.

Fig. 12. Optical reconstruction results of simple letters with the direct amplitude discard method, double phase method, SGD amplitude loss method, the proposed method, and the complex amplitude reconstruction. Column (a) is the reconstructed result when $z=0.25$ m. Columns (b)-(e) are the detailed optical reconstruction results with different methods when $z=0.25$ m, 0.29 m, 0.33 m and 0.37 m.

Fig. 13. (a) Defocus results of the SGD amplitude loss method, and (b) the proposed method when $z=0.25$ m.

The optical reconstruction results for the complex 3D object are shown in Fig. 14. Figures 14(a) and 14(b) show the results when the camera focuses on the head and the tail of the dragon, respectively. It can be seen that the results of the proposed method match the simulation results well. The reconstructed dragon has clean texture details and contains continuous depth information. Since the complex amplitude is reconstructed from the phase-only hologram, the results have a clean defocus blur effect: when the head of the dragon is in focus, the tail is blurred, and when the tail is in focus, the head is blurred.

Fig. 14. Optical reconstruction results of the proposed method for the 3D dragon when (a) the “head” is focused and (b) the “tail” is focused.

4. Discussion

4.1 Influence of propagation distance on the SGD method

The SGD method is a useful optimization tool in hologram generation and can achieve high-quality holographic reconstruction results. However, there is a limitation on its use, namely the diffraction distance. If the object is set very close to the hologram plane, the effectiveness of the SGD method decreases. The loss values of the proposed method for different propagation distances in the hologram optimization of the 3D dragon are given in Fig. 15.

Fig. 15. Loss values with different propagation distances.

It can be seen from the above curves that when the diffraction distance is set to a small value, the SGD optimization converges to a local minimum. This is because the transformation from a phase-only hologram to the target complex amplitude requires a sufficiently long propagation distance, since the diffraction result of the ASM is nearly unchanged when the distance is close to zero. Thus, a proper propagation distance needs to be selected to avoid the convergence problem. The distance limitation can be explained based on the maximum spreading angle of the SLM, which is given by the following equation:

$$z_0 \ge \frac{max(m,n)\cdot p}{2tan(arcsin(\lambda /2p))},$$
where $m$ and $n$ are the numbers of sampling points in the $x$ and $y$ dimensions, $p$ is the pixel pitch of the SLM, and $\lambda$ is the wavelength. According to Eq. (8), the propagation distance $z_0$ should be larger than 0.147 m in this case. Moreover, the diffraction distance cannot be set too long either, since some high-frequency components of the object will be lost during long-distance propagation [32]. The simulation results of the reconstructed dragon’s head with different propagation distances are shown in Fig. 16.
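As a quick check, evaluating Eq. (8) with the parameters used in this paper ($1920\times 1080$ pixels, 6.4 µm pixel pitch, 532 nm wavelength) reproduces the quoted bound; a minimal sketch (the function name is illustrative):

```python
# Sketch of the minimum propagation distance of Eq. (8), evaluated for the
# parameters used in this paper (1920 x 1080 pixels, 6.4 um pitch, 532 nm).
import math

def min_propagation_distance(m: int, n: int, pitch: float, wavelength: float) -> float:
    return max(m, n) * pitch / (2 * math.tan(math.asin(wavelength / (2 * pitch))))

print(min_propagation_distance(1920, 1080, 6.4e-6, 532e-9))  # ~0.147 m
```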

Fig. 16. Reconstruction results of the dragon’s head with different $z_0$. (a) $z=0.00$ m, (b) $z=0.05$ m, (c) $z=0.15$ m, (d) $z=0.25$ m, (e) $z=0.40$ m and (f) $z=0.60$ m.

From Fig. 16, we can see that when the diffraction distance is set to a small value, the reconstruction quality is relatively low, as shown in Figs. 16(a) and 16(b), while the quality improves as the propagation distance increases, as shown in Figs. 16(c) and 16(d). When the distance continues to increase, the quality of the reconstruction result decreases again, as shown in Figs. 16(e) and 16(f). Therefore, the propagation distance must be selected carefully. As shown in Figs. 15 and 16, if the propagation distance is selected properly, the optimization converges in around 40 steps. If the distance is not chosen properly, the optimization may be hard to converge, and the quality of the reconstruction result degrades.

5. Conclusion

In this work, we propose an effective multi-depth hologram generation method based on the SGD algorithm with a complex loss function. The optimization time is greatly reduced compared with the SGD amplitude loss method. Since the proposed method optimizes the amplitude and phase simultaneously, the phase distribution and defocus blur effect match the complex amplitude reconstruction results well, and the proposed method achieves the highest image quality in the optical reconstruction among the compared methods. Moreover, the diffraction method in the target complex amplitude generation process can be replaced by a high-speed point-based diffraction method, so the whole hologram generation time can be accelerated further. Additionally, the proposed method can also be applied in other fields such as augmented reality displays, optical tweezers, and beam shaping, since it can reconstruct a relatively accurate complex amplitude through a phase-only hologram.

Funding

National Research Foundation of Korea (2020K2A9A2A06038623); National Natural Science Foundation of China (62011540406, 62020106010).

Acknowledgments

This work was supported under the framework of international cooperation program managed by National Research Foundation of Korea (2020K2A9A2A06038623) and the National Natural Science Foundation of China (62011540406, 62020106010).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. F. Yaraş, H. Kang, and L. Onural, “State of the art in holographic displays: A survey,” J. Display Technol. 6(10), 443–454 (2010). [CrossRef]  

2. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011). [CrossRef]  

3. C. Jang, K. Bang, G. Li, and B. Lee, “Holographic near-eye display with expanded eye-box,” ACM Trans. Graph. 37(6), 195 (2018). [CrossRef]  

4. N. Chen, C. Zuo, E. Y. Lam, and B. Lee, “3D imaging based on depth measurement technologies,” Sensors 18(11), 3711 (2018). [CrossRef]  

5. R. Zhao, L. Huang, and Y. Wang, “Recent advances in multi-dimensional metasurfaces holographic technologies,” PhotoniX 1(1), 20–24 (2020). [CrossRef]  

6. L. B. Lesem, P. M. Hirsch, and J. A. Jordan, “The kinoform: A new wavefront reconstruction device,” IBM J. Res. Dev. 13(2), 150–155 (1969). [CrossRef]  

7. H. Dammann and K. Görtler, “High-efficiency in-line multiple imaging by means of multiple phase holograms,” Opt. Commun. 3(5), 312–315 (1971). [CrossRef]  

8. T. Shimobaba, T. Kakue, Y. Endo, R. Hirayama, D. Hiyama, S. Hasegawa, Y. Nagahama, M. Sano, M. Oikawa, T. Sugie, and T. Ito, “Random phase-free kinoform for large objects,” Opt. Express 23(13), 17269–17274 (2015). [CrossRef]  

9. P. Tsang, Y. Chow, and T.-C. Poon, “Generation of patterned-phase-only holograms (PPOHs),” Opt. Express 25(8), 9088–9093 (2017). [CrossRef]  

10. B. Lee, D. Yoo, J. Jeong, S. Lee, D. Lee, and B. Lee, “Wide-angle speckleless DMD holographic display using structured illumination with temporal multiplexing,” Opt. Lett. 45(8), 2148–2151 (2020). [CrossRef]  

11. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015). [CrossRef]  

12. R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

13. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

14. H. Akahori, “Spectrum leveling by an iterative algorithm with a dummy area for synthesizing the kinoform,” Appl. Opt. 25(5), 802–811 (1986). [CrossRef]  

15. L. Chen, H. Zhang, Z. He, X. Wang, L. Cao, and G. Jin, “Weighted constraint iterative algorithm for phase hologram generation,” Appl. Sci. 10(10), 3652 (2020). [CrossRef]  

16. D. Wang, C. Liu, C. Shen, Y. Xing, and Q. Wang, “Holographic capture and projection system of real object based on tunable zoom lens,” PhotoniX 1(1), 6–15 (2020). [CrossRef]  

17. P. Chakravarthula, E. Tseng, T. Srivastava, H. Fuchs, and F. Heide, “Learned hardware-in-the-loop phase retrieval for holographic near-eye displays,” ACM Trans. Graph. 39(6), 186 (2020). [CrossRef]  

18. P. W. M. Tsang and T.-C. Poon, “Novel method for converting digital fresnel hologram to phase-only hologram based on bidirectional error diffusion,” Opt. Express 21(20), 23680–23686 (2013). [CrossRef]  

19. Y. Qi, C. Chang, and J. Xia, “Speckleless holographic display by complex modulation based on double-phase method,” Opt. Express 24(26), 30368–30378 (2016). [CrossRef]  

20. Y. K. Kim, J. S. Lee, and Y. H. Won, “Low-noise high-efficiency double-phase hologram by multiplying a weight factor,” Opt. Lett. 44(15), 3649–3652 (2019). [CrossRef]  

21. O. Mendoza-Yero, G. Mínguez-Vega, and J. Lancis, “Encoding complex fields by using a phase-only optical element,” Opt. Lett. 39(7), 1740–1743 (2014). [CrossRef]  

22. Y. Peng, S. Choi, N. Padmanaban, and G. Wetzstein, “Neural holography with camera-in-the-loop training,” ACM Trans. Graph. 39(6), 185 (2020). [CrossRef]  

23. S. Choi, J. Kim, Y. Peng, and G. Wetzstein, “Optimizing image quality for holographic near-eye displays with michelson holography,” Optica 8(2), 143–146 (2021). [CrossRef]  

24. J. Zhang, N. Pégard, J. Zhong, H. Adesnik, and L. Waller, “3D computer-generated holography by non-convex optimization,” Optica 4(10), 1306–1313 (2017). [CrossRef]  

25. P. Chakravarthula, Y. Peng, J. Kollin, H. Fuchs, and F. Heide, “Wirtinger holography for near-eye displays,” ACM Trans. Graph. 38(6), 213 (2019). [CrossRef]  

26. G. Kuo, L. Waller, R. Ng, and A. Maimone, “High resolution étendue expansion for holographic displays,” ACM Trans. Graph. 39(4), 66 (2020). [CrossRef]  

27. Y. N. Dauphin, H. De Vries, and Y. Bengio, “Equilibrated adaptive learning rates for non-convex optimization,” arXiv preprint arXiv:1502.04390 (2015).

28. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

29. H. Pang, J. Wang, A. Cao, M. Zhang, L. Shi, and Q. Deng, “Accurate hologram generation using layer-based method and iterative fourier transform algorithm,” IEEE Photonics J. 9(1), 2200108 (2017). [CrossRef]  

30. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017). [CrossRef]  

31. S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv preprint arXiv:1609.04747 (2016).

32. J. Lee, J. Jeong, J. Cho, D. Yoo, B. Lee, and B. Lee, “Deep neural network for multi-depth hologram generation and its training strategy,” Opt. Express 28(18), 27137–27154 (2020). [CrossRef]  
