Abstract

In this Letter, we introduce a computer-generated hologram (CGH) optimization technique that can control the randomness of the reconstructed phase. Phase randomness significantly affects the eyebox size and depth of field in holographic near-eye displays. We propose synthesizing the CGH as the sum of two terms computed from the target scene with a random phase. We set the weighting pattern for the summation as the optimization variable, which enables the CGH to reflect the random phase during optimization. We evaluate the proposed algorithm on single-depth and multi-depth content, and the performance is validated via simulations and experiments.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Holographic near-eye displays can support focus cues, making them promising solutions for augmented and virtual reality applications [1]. They aim to reproduce optical wave fields from virtual objects by complex amplitude modulation via spatial light modulators (SLMs). Compared to other three-dimensional display techniques such as multifocal displays [2] and light field displays [3,4], they have advantages of compact form factors, aberration compensation, and high resolution [5,6].

Unfortunately, commercialized SLMs so far cannot simultaneously modulate amplitude and phase. Therefore, computer-generated holograms (CGHs) displayed by SLMs should be either amplitude-only [7] or phase-only. In this Letter, we focus on the synthesis of phase-only CGH. The stochastic gradient descent (SGD) approach [8], double phase-amplitude coding (DPAC) method [5], and Gerchberg–Saxton (GS) algorithm [9] are known as representative techniques for phase-only CGH generation.

Peng et al. [8] and Choi et al. [10] demonstrated that SGD-based optimization methods achieve state-of-the-art image quality. However, their methods do not consider the phase randomness of the reconstructed complex amplitude. Phase randomness is important in holographic near-eye displays because it is associated with the eyebox size and depth of field [11]. The DPAC method, on the other hand, allows the amplitude and phase to be manipulated independently. Previous studies have shown DPAC to be effective when a uniform or sparse phase distribution is assumed. However, when DPAC attempts to reproduce a complex amplitude with a random phase distribution, the image quality is severely degraded. Similarly, the GS algorithm can support random phase, but speckle patterns damage the reconstructed image.

Here, we propose a CGH optimization technique that improves image quality while controlling phase randomness. Our algorithm is inspired by DPAC, which represents a complex amplitude by linearly combining two phase-only terms according to a fixed pixel weighting pattern (e.g., a checkerboard pattern [5]). In our algorithm, by contrast, the weighting pattern can be freely manipulated: we set the pattern, rather than the CGH itself, as the optimization variable, so the pattern adapts to the target scene.

Figure 1 illustrates how the CGH is synthesized in our algorithm. First, the complex amplitude ${u_s} \in {\mathbb{C}^{{N_x} \times {N_y}}}$ on the SLM is calculated from the complex amplitude ${u_t} \in {\mathbb{C}^{{N_x} \times {N_y}}}$ in the reconstruction target plane using the angular spectrum method [12] as

$${u_s} = {\mathcal{F}^{- 1}}(\mathcal{F}({u_t}) \circ H),$$
where $\mathcal{F}(\cdot)$ and ${\mathcal{F}^{- 1}}(\cdot)$ represent the Fourier transform and inverse Fourier transform, respectively, $\circ$ denotes element-wise multiplication, and ${N_x} \times {N_y}$ is the SLM resolution. $H$ is the transfer function defined as
$$H = \left\{{\begin{array}{*{20}{l}}{{e^{- i\frac{{2\pi}}{\lambda}z\sqrt {1 - {{(\lambda {f_x})}^2} - {{(\lambda {f_y})}^2}}}},}&{{\rm if}\;\sqrt {{f_x}^2 + {f_y}^2} \lt \frac{1}{\lambda}}\\0&{{\rm otherwise}}\end{array}} \right.,$$
where $z$ is the distance between the SLM and the reconstruction plane, $\lambda$ is the wavelength, and ${f_x}$, ${f_y}$ are the spatial frequencies. The phase of ${u_t}$ is initialized to a normally distributed random variable with zero mean, truncated to the range $[- \pi ,\pi)$. Second, two reference terms ${h_{p1,2}}$ for the synthesis of the CGH are derived from ${u_s}$ as
$$\begin{split}{}{{h_{p1}} = \angle {u_s} - {{\cos}^{- 1}}|{{\hat u}_s}|},\\{{h_{p2}} = \angle {u_s} + {{\cos}^{- 1}}|{{\hat u}_s}|,}\end{split}$$
where $\angle$ is the angle extraction function [5]. ${u_s}$ is normalized so that the amplitude of the normalized field ${\hat u_s}$ lies in [0, 1].
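The back-propagation of Eqs. (1)–(2) and the double-phase decomposition of Eq. (3) can be sketched as follows. This is a minimal NumPy illustration, not the authors' TensorFlow implementation; the function names and the per-field amplitude normalization are our assumptions.

```python
import numpy as np

def asm_propagate(u, z, wavelength, pitch):
    """Angular spectrum propagation over distance z (Eqs. 1-2).
    Hypothetical helper: evanescent frequencies, where
    sqrt(fx^2 + fy^2) >= 1/wavelength, are zeroed out."""
    nx, ny = u.shape
    fx = np.fft.fftfreq(nx, d=pitch)[:, None]  # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)[None, :]
    arg = 1.0 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
    H = np.where(arg > 0,
                 np.exp(-1j * (2 * np.pi / wavelength) * z
                        * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(u) * H)

def double_phase_terms(u_s):
    """Two phase-only reference terms of Eq. 3, assuming u_s is
    normalized so that its amplitude lies in [0, 1]."""
    u_hat = u_s / np.max(np.abs(u_s))
    offset = np.arccos(np.abs(u_hat))
    return np.angle(u_s) - offset, np.angle(u_s) + offset
```

Note that averaging the two phasors recovers the normalized field, $({e^{i{h_{p1}}}} + {e^{i{h_{p2}}}})/2 = |{\hat u_s}|{e^{i\angle {u_s}}}$, which is the identity DPAC exploits.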

Fig. 1. Synthesis of a phase-only CGH in the proposed algorithm.



Fig. 2. Comparisons between CGH algorithms for a single-depth scene with random phase: double phase-amplitude coding (DPAC) [5], Gerchberg–Saxton (GS) algorithm [9], stochastic gradient descent (SGD) approach [8], and the proposed algorithm. (a) Examples of numerically reconstructed intensities and phases. (b) Histograms of reconstructed phase distributions in (a). (c) Image quality and phase randomness evaluation by peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index, and standard deviation ${\sigma _r}$ of reconstructed phase distribution.


After computing ${h_{p1,2}}$, the weighting pattern $M \in {\mathbb{R}^{{N_x} \times {N_y}}}$ is initialized. We calculate the phase-only CGH ${h_p}$ on the SLM from ${h_{p1,2}}$ and $M$ as follows:

$${h_p} = {h_{p1}} \circ S(M) + {h_{p2}} \circ (1 - S(M)),$$
where $S(\cdot):{\mathbb{R}^{{N_x} \times {N_y}}} \to {\mathbb{R}^{{N_x} \times {N_y}}}$ represents a function that maps each element of $M$ into [0, 1]. We use a hard sigmoid function [13] for $S(\cdot)$, and it is defined as
$$S(\omega) = \left\{{\begin{array}{*{20}{l}}{0,}&{\omega \lt - 2.5}\\{0.2\omega + 0.5,}&{- 2.5 \le \omega \le 2.5}\\{1,}&{\omega \gt 2.5}\end{array}} \right.,$$
where $\omega$ denotes an arbitrary variable. Other mapping functions can also be used without significant differences in performance.
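Equations (4)–(5) combine to a per-pixel convex combination of the two reference terms. A minimal NumPy sketch (function names are ours, not the authors'):

```python
import numpy as np

def hard_sigmoid(w):
    """Hard sigmoid of Eq. 5: linear ramp 0.2*w + 0.5 clipped to [0, 1]."""
    return np.clip(0.2 * w + 0.5, 0.0, 1.0)

def synthesize_cgh(h_p1, h_p2, M):
    """Phase-only CGH of Eq. 4: per-pixel weighted sum of the two
    reference phase terms, with weights S(M) in [0, 1]."""
    s = hard_sigmoid(M)
    return h_p1 * s + h_p2 * (1.0 - s)
```

Because $S(M)$ is continuous in $M$, the CGH remains differentiable with respect to the weighting pattern, which is what makes $M$ usable as a gradient-descent variable.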

The complex amplitude ${u_r}$ at distance $z$ is represented as

$${u_r} = {\mathcal{F}^{- 1}}(\mathcal{F}({e^{i{h_p}}}) \circ \bar H),$$
where $\bar H$ is the conjugate of the transfer function $H$. Overall, we find the optimal weighting pattern by solving the following problem:
$$\mathop {{\rm minimize}}\limits_M \mathcal{L}(s \cdot |{u_r}|,|{u_t}|),$$
where $\mathcal{L}$ denotes the loss function; we adopt the mean squared error criterion in this Letter. $s = \sqrt {\sum |{u_t}{|^2}/\sum |{u_r}{|^2}}$ is a scale parameter that adjusts the energy of the reconstructed amplitude.
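The objective of Eq. (7), with the energy-matching scale $s$, can be written compactly as below. This is a sketch of the loss only, assuming a NumPy implementation; the actual optimization over $M$ used TensorFlow with Adam, as noted later in the text.

```python
import numpy as np

def scaled_mse_loss(u_r, u_t):
    """Loss of Eq. 7: mean squared error between the scaled
    reconstructed amplitude and the target amplitude, where the
    scale s matches the total energy of |u_r| to that of |u_t|."""
    a_r, a_t = np.abs(u_r), np.abs(u_t)
    s = np.sqrt(np.sum(a_t ** 2) / np.sum(a_r ** 2))
    return np.mean((s * a_r - a_t) ** 2)
```

The scale makes the loss invariant to a global gain of the reconstruction, so the optimizer is penalized only for shape discrepancies, not overall brightness.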

To assess the proposed algorithm, we investigate the reconstructed image quality by measuring the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index. The standard deviation ${\sigma _r}$ of the reconstructed phase distribution is also calculated to examine phase randomness. The results are provided in Fig. 2, comparing the proposed algorithm with the DPAC, GS, and SGD approaches. During the simulations, the target phase is assumed to be random within the range $[- \pi ,\pi)$.
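Two of the three metrics are straightforward to reproduce; a sketch under the assumption of images normalized to [0, 1] (SSIM is omitted here since it is typically computed with a library such as scikit-image):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, peak]."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def phase_std(u):
    """Standard deviation sigma_r of a reconstructed phase map,
    used here as a proxy for phase randomness."""
    return np.std(np.angle(u))
```

A fully random phase uniform on $[-\pi, \pi)$ has a standard deviation of $\pi/\sqrt{3} \approx 1.814$, which is a useful reference value when reading ${\sigma _r}$.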

For the simulation in Fig. 2, we assume light sources with wavelengths of 638, 520, and 450 nm, and a virtual SLM with $1920 \times 1080$ pixels. We set the center $1480 \times 740$ pixels as the region of interest. The pixel pitch is 7.2 μm. We implement the algorithms with the TensorFlow library and adopt the Adam optimizer for variable updates. All results for the iterative algorithms are obtained after enough iterations to ensure convergence and are computed on a PC with an NVIDIA Tesla P40 GPU. DPAC, a non-iterative method, takes about 0.025 s for CGH generation. The GS algorithm, the SGD approach, and the proposed method take about 0.038, 0.040, and 0.041 s per iteration, respectively. The computation times are averaged over 100 trials.

As shown in Fig. 2(a), the SGD approach achieves the best image quality among the techniques. However, the phase reconstructed by the SGD approach varies relatively smoothly compared to the other techniques. Histograms of the reconstructed phase distributions are shown in Fig. 2(b). The smooth phase variation implies a limited angular spectrum range of the reconstructed field, as shown in Fig. 3, which visualizes the normalized amplitude in the Fourier domain. The limited bandwidth can reduce the effective eyebox size unless additional optical or mechanical elements are utilized for eyebox expansion [11,14].

The proposed algorithm, on the other hand, shows moderate image quality while supporting random phase, which allows it to fully utilize the angular spectrum bandwidth supported by the SLM. Although the DPAC and GS algorithms can also support random phase, their reconstructed intensities are severely damaged by granular patterns.

We further investigate the CGH rendering algorithms by measuring image quality and phase randomness for 100 images in the DIV2K test dataset [15]. In Fig. 2(c), the metrics are averaged over the 100 images, and error bars denote the standard error. The results from the 100 images do not deviate significantly from those in Figs. 2(a) and 2(b).

In Fig. 4, we provide simulation results showing how different multi-depth CGH rendering algorithms behave for a diffusive scene. In the simulation, two diffusive planes are assumed to float at different depths as shown in Fig. 4(a). Figure 4(b) visualizes reconstructed images at each depth. In particular, we compare our algorithm to a recently proposed method for multi-depth CGH optimization [16].


Fig. 3. Normalized amplitude in the Fourier domain of the fields reconstructed by the SGD approach and the proposed algorithm.



Fig. 4. Comparisons of CGH optimization algorithms for a diffusive multi-depth scene. (a) Schematic of the three-dimensional target scene. (b) Reconstructed intensities using stochastic gradient descent (SGD) approach, complex SGD, and the proposed method.


The complex SGD method [16] aims to minimize the discrepancy between the target and the reconstruction in terms of complex amplitude, and can therefore apply phase constraints during optimization. However, when approximating a complex amplitude with random phase, its image quality deteriorates due to speckle patterns. Our algorithm, on the other hand, can alleviate the speckle, especially in uniform areas of in-focus images, because the loss function directly penalizes image discrepancies over the entire depth range, as in the SGD approach [8,10]. Furthermore, the out-of-focus results of the proposed algorithm appear diffusive in the figure, indicating that the proposed algorithm can support random phase for a multi-depth scene.

Our algorithm can also adjust the degree of phase randomness. We found that the standard deviation ${\sigma _r}$ of the reconstructed phase distribution is proportional to the standard deviation ${\sigma _i}$ of the initial phase distribution. To verify this, we compute ${\sigma _r}$ for various ${\sigma _i}$ values and show the results in Fig. 5(a). Each point in the plot is the average ${\sigma _r}$ over the first 20 images of the DIV2K validation dataset for a given ${\sigma _i}$. We present exemplary results for different values of ${\sigma _r}$ in Fig. 5(b), and the corresponding phase distribution for each ${\sigma _r}$ in Fig. 5(c). In this example, we use the same multi-depth content as in Fig. 4, and the result for ${\sigma _r} = 1.815$ is the one shown in Fig. 4. A larger ${\sigma _r}$ induces much more blurring for the same depth discrepancy (5 mm), and thus a smaller depth of field.


Fig. 5. Demonstration of phase randomness modulation. (a) Standard deviation ${\sigma _r}$ of reconstructed phase distribution as a function of standard deviation ${\sigma _i}$ of initial phase distribution. (b) Results by the proposed method for different values of ${\sigma _r}$. Error maps for the ground truth image are visualized as pixel-wise SSIM. (c) Corresponding histograms of reconstructed phases.


The result with larger ${\sigma _r}$ shows more noise in the in-focus image, as verified via pixel-wise SSIM. Compared to a uniform phase, a random phase spreads the information from the target scene widely over the SLM. This diffusion can cause information to be recorded outside the SLM and lost, leading to artifacts [17]. In Fig. 5(c), the artifacts are relatively concentrated at the edge of the viewing window, which supports this explanation.

For experimental verification, we implement a holographic display prototype consisting of a phase-only SLM (3.6 μm pixel pitch, $3840 \times 2160$ pixels) and a 520 nm green laser, as shown in Fig. 6(a). To reduce the computation time for CGH optimization, we first render the CGH at $1920 \times 1080$ resolution with a 7.2 μm pixel pitch and then resize it to 4K via nearest-neighbor interpolation.
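Since the SLM pixel pitch (3.6 μm) is exactly half the rendering pitch (7.2 μm), the nearest-neighbor resize amounts to repeating each CGH pixel in a $2 \times 2$ block. A one-line NumPy sketch (the function name is ours):

```python
import numpy as np

def upscale_nearest(cgh, factor=2):
    """Nearest-neighbor upscaling (e.g., 1920x1080 -> 3840x2160)
    by repeating each CGH pixel along both axes."""
    return np.repeat(np.repeat(cgh, factor, axis=0), factor, axis=1)
```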


Fig. 6. Experimental results. (a) Photograph of the experimental setup. Captured (b) in-focus and (c) out-of-focus images for the CGH rendering algorithms. The brightness of the DPAC results is increased by 30% for visibility.


Captured in-focus images are shown in Fig. 6(b). Although the proposed algorithm does not achieve the best image quality and contrast, it outperforms the GS algorithm and DPAC with random phase. Image quality can be further improved by adopting a camera-in-the-loop (CITL) approach to mitigate the gap between simulation and experiment, as proposed by Peng et al. [8]. Note that the out-of-focus images in Fig. 6(c) validate that our algorithm supports random phase, whereas the SGD approach does not.

In conclusion, we proposed a CGH rendering algorithm that optimizes image quality while maintaining the phase randomness of a diffusive scene. Our algorithm showed better image quality than the DPAC method and the GS algorithm, which was quantitatively validated using DIV2K test images. Furthermore, the phase reconstructed by our algorithm followed a random distribution, while the phase from the SGD method did not. Another notable advantage of our algorithm is its ability to control the degree of randomness. We implemented a display prototype and conducted experiments, in which the CITL technique [8] was applied to our algorithm and further improved its image quality. Moreover, the degree of blurring of the out-of-focus images demonstrated that the phase reconstructed by our algorithm maintained randomness in the experiments. We believe that the proposed algorithm can help extend the degrees of freedom in designing holographic near-eye displays.

Funding

Institute of Information and Communications Technology Planning and Evaluation (IITP) Grant funded by the Korea Government (MSIT) (2020-0-00548).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. C. Chang, K. Bang, G. Wetzstein, B. Lee, and L. Gao, Optica 7, 1563 (2020). [CrossRef]  

2. D. Yoo, S. Lee, Y. Jo, J. Cho, S. Choi, and B. Lee, IEEE Trans. Vis. Comput. Graphics. (2020). [CrossRef]  

3. T. Zhan, Y.-H. Lee, and S.-T. Wu, Opt. Express 26, 4863 (2018). [CrossRef]  

4. Y. Jo, K. Bang, D. Yoo, B. Lee, and B. Lee, Opt. Lett. 46, 4212 (2021). [CrossRef]  

5. A. Maimone, A. Georgiou, and J. S. Kollin, ACM Trans. Graph. 36, 85 (2017). [CrossRef]  

6. S.-W. Nam, S. Moon, B. Lee, D. Kim, S. Lee, C.-K. Lee, and B. Lee, Opt. Express 28, 30836 (2020). [CrossRef]  

7. S. Jiao, D. Zhang, C. Zhang, Y. Gao, T. Lei, and X. Yuan, IEEE J. Sel. Top. Quantum Electron. 26, 2800108 (2020). [CrossRef]  

8. Y. Peng, S. Choi, N. Padmanaban, and G. Wetzstein, ACM Trans. Graph. 39, 185 (2020). [CrossRef]  

9. R. W. Gerchberg, Optik 35, 237 (1972).

10. S. Choi, J. Kim, Y. Peng, and G. Wetzstein, Optica 8, 143 (2021). [CrossRef]  

11. J.-H. Park and S.-B. Kim, Opt. Express 26, 27076 (2018). [CrossRef]  

12. K. Matsushima and T. Shimobaba, Opt. Express 17, 19662 (2009). [CrossRef]  

13. C. Nwankpa, W. Ijomah, A. Gachagan, and S. Marshall, arXiv preprint arXiv:1811.03378 (2018).

14. C. Jang, K. Bang, G. Li, and B. Lee, ACM Trans. Graph. 37, 6 (2018). [CrossRef]  

15. E. Agustsson and R. Timofte, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017), pp. 126–135.

16. C. Chen, B. Lee, N.-N. Li, M. Chae, D. Wang, Q.-H. Wang, and B. Lee, Opt. Express 29, 15089 (2021). [CrossRef]  

17. Z. He, X. Sui, H. Zhang, G. Jin, and L. Cao, Appl. Opt. 60, A145 (2021). [CrossRef]  
