Abstract
It is well recognized that realizing high-fidelity and high-robustness ghost transmission through complex media in free space using a coherent light source is challenging. In this paper, we report a new method to realize high-fidelity and high-robustness ghost transmission through complex media by generating random amplitude-only patterns as 2D information carriers using a physics-driven untrained neural network (UNN). The random patterns are generated to encode analog signals (i.e., ghosts) without any training datasets or labeled data, and are used as information carriers in a free-space optical channel. A coherent light source modulated by the random patterns propagates through complex media, and a single-pixel detector collects light intensities at the receiving end. A series of optical experiments has been conducted to verify the proposed approach. Experimental results demonstrate that the proposed method can realize high-fidelity and high-robustness analog-signal (ghost) transmission in complex environments, e.g., around a corner or in dynamic and turbid water. The proposed approach using the designed physics-driven UNN could open an avenue for high-fidelity free-space ghost transmission through complex media.
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
Ghost imaging (GI) [1–4] was first proposed in the quantum domain, based on the spontaneous entanglement phenomenon in parametric downconversion [1,2]. It was subsequently demonstrated that a classical light source can mimic the entangled photon pairs [5,6], and GI can be conducted with a pseudo-thermal source. With a single-pixel detector [7], GI has emerged as promising in various applications, e.g., with X-ray [8] or terahertz [9–13] light sources, or in scattering media [14–16].
In ghost diffraction, it is always a challenge to realize high-fidelity analog-signal (ghost) transmission in complex environments. When an optical wave propagates through complex media, it can suffer from reflection, absorption, and scattering; several approaches have been investigated accordingly, e.g., wavefront shaping [17–19] and the transmission matrix [20–22]. Wavefront shaping [17–19] can manipulate an optical wave, e.g., in scattering environments. However, it requires a complex iterative process to optimize the parameters. The transmission matrix [20–22] is usually applied to optical imaging or transmission through a fixed scattering medium, and its flexibility could be limited. Much effort is needed to suppress noise or to conduct wavefront compensation. Therefore, it is desirable to develop novel strategies for generating information carriers that realize high-fidelity and high-robustness ghost transmission through complex media.
In recent years, deep learning [23,24] has attracted much attention for solving inverse problems in optics [25–29], e.g., phase retrieval [26], digital holography [27], optical vortices [28], and computational imaging [29]. However, most work relies on a large labeled dataset, and it can be time-consuming to obtain training datasets in optics. One solution is to train the network on simulation data. However, this can fail due to the mismatch between experimental and simulated data. Recent work [30] broke these limits by using an untrained deep convolutional neural network (UNN) to effectively conduct image denoising and solve tasks independently of datasets. It was found that such a network can capture the information needed to reconstruct natural images, given a physical model and a degraded image. Therefore, UNN models can be used to eliminate the time required to collect datasets and to solve the model-mismatch problem. The UNN has since been extended to several fields [31–36], such as diffraction tomography [31], 3D imaging [32], and phase imaging [33,34]. It could also be desirable to apply a UNN model to encode data into random amplitude-only patterns in order to realize high-fidelity and high-robustness ghost transmission through complex media.
In this paper, a new method is proposed to generate a series of random amplitude-only patterns as information carriers with a UNN model, realizing high-fidelity ghost transmission through complex media in free space with a coherent light source. A physical model is developed and integrated into the designed UNN, and the UNN can capture prior information about each pixel of the transmitted analog signal (ghost) to be encoded into a 2D random pattern, without any training datasets or labeled data. The generated random patterns are sequentially embedded into an amplitude-only spatial light modulator (SLM) and illuminated by a coherent light source. A single-pixel detector collects light intensities at the receiving end, and high-fidelity analog signals (i.e., ghosts) can be retrieved with a differential protocol. A series of optical experiments is conducted in complex environments, e.g., dynamic and turbid water. Experimental results demonstrate that the designed UNN is feasible and effective for generating 2D random patterns as information carriers for high-fidelity and high-robustness ghost transmission through complex media in free space.
2. Principle
2.1 Recording and retrieval
A UNN model is designed to generate a series of random amplitude-only patterns as information carriers to encode a transmitted analog signal (i.e., a ghost). Each pixel of the analog signal is first encoded into a random pattern $I(x,y)$. In ghost transmission, an optical wave sequentially illuminates the generated random patterns embedded into an SLM, and a single-pixel detector collects light intensities at the receiving end. Wave propagation [37–39] in complex media can be described by
2.2 Physics-driven UNN for pattern generation
Equations (2)–(4) also describe a physical model $\mathrm{\Psi}$ of the developed ghost transmission through complex media within the designed UNN used to generate the patterns $I(x,y)$. The designed UNN model encodes a magnified analog signal (e.g., with a magnification factor of 130000) into a series of 2D random amplitude-only patterns $I(x,y)$. In the developed ghost-transmission system, each pattern $I(x,y)$ generated by the designed UNN is fed into the physical model $\mathrm{\Psi}$. Then, a corresponding intensity value ${B_{gen}}$ can be calculated, which approaches a pixel value of the original analog signal $S_o$ via the UNN optimization.
A physics-driven neural network model is designed and applied for pattern generation, as shown in Fig. 1(a). The designed model consists of a contracting path, an expansive path, and skip connections [40]. The contracting and expansive paths are each stacked from four blocks. In the contracting path, each block consists of two $3 \times 3$ convolutional layers and one $2 \times 2$ max-pooling layer for downsampling, and each convolutional layer is followed by batch normalization (BN) and a rectified linear unit (ReLU). The first block in the contracting path generates a 16-channel feature map, and the number of feature channels in the subsequent three blocks is sequentially doubled. The expansive path mirrors the contracting path. In each block of the expansive path, two $3 \times 3$ convolutional layers are used, each followed by BN and ReLU. A transposed convolutional layer with a stride of 2 upscales the multi-channel feature map. The number of feature channels is sequentially halved for the first three blocks of the expansive path, and the last block has a single feature channel. Skip connections link feature maps in the contracting and expansive paths, creating short paths; this enables fast convergence and tackles the vanishing-gradient problem of deep networks. In the output layer, a $1 \times 1$ convolution is applied, and a sigmoid is used as the activation function.
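As a concrete illustration, the encoder-decoder described above can be sketched in PyTorch roughly as follows. This is a minimal, hypothetical reconstruction from the textual description (channel counts, pooling, skip connections, sigmoid output), not the authors' code; details such as padding and the exact block arrangement are assumptions, and a small $64 \times 64$ input is used for brevity.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions, each followed by batch normalization and ReLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class UNN(nn.Module):
    """U-Net-style untrained network: contracting path (16, 32, 64, 128 channels),
    mirrored expansive path with skip connections, 1x1 convolution + sigmoid."""
    def __init__(self):
        super().__init__()
        chans = [16, 32, 64, 128]
        self.down = nn.ModuleList()
        c_prev = 1
        for c in chans:
            self.down.append(conv_block(c_prev, c))
            c_prev = c
        self.pool = nn.MaxPool2d(2)                # 2x2 max pooling for downsampling
        self.up_tr = nn.ModuleList()               # stride-2 transposed convolutions
        self.up = nn.ModuleList()
        for c in reversed(chans[:-1]):             # 64, 32, 16: channels halved
            self.up_tr.append(nn.ConvTranspose2d(c_prev, c, 2, stride=2))
            self.up.append(conv_block(2 * c, c))   # input doubled by skip concat
            c_prev = c
        self.out = nn.Sequential(nn.Conv2d(c_prev, 1, 1), nn.Sigmoid())

    def forward(self, z):
        skips = []
        for i, block in enumerate(self.down):
            z = block(z)
            if i < len(self.down) - 1:
                skips.append(z)                    # keep feature map for skip path
                z = self.pool(z)
        for tr, block, s in zip(self.up_tr, self.up, reversed(skips)):
            z = block(torch.cat([tr(z), s], dim=1))
        return self.out(z)

torch.manual_seed(0)
z = torch.rand(1, 1, 64, 64) * 0.1                # random input in [0, 0.1]
pattern = UNN()(z)                                # one candidate amplitude-only pattern
```

The sigmoid output keeps every pixel of the generated pattern in (0, 1), matching an amplitude-only SLM.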
The designed UNN does not require any training datasets or labeled data. An input z is first generated with random noise ranging from 0 to 0.1; other random distributions, e.g., Gaussian, can also be applied as the input. The parameters p of the network, i.e., weights and biases, are optimized to map the input z to an output pattern. With randomly initialized parameters p and the input z, an output ${I_{gen}}$ of the pattern-generation process is derived with the designed neural network $U({\boldsymbol z};p)$.
Here, the size of the pattern ${I_{gen}}$ is $512 \times 512$ pixels. To keep dimensional consistency in the designed neural network model, the size of the input z is also set to $512 \times 512$ pixels. Then, the output of the designed neural network model is fed into the physical model $\mathrm{\Psi}$, as shown in Fig. 1(b), to calculate an intensity value ${B_{gen}}$ described by
In the forward model shown in Fig. 1(b), the pattern ${I_{gen}}$ is divided into two separate patterns $(h + {I_{gen}})/2$ and $(h - {I_{gen}})/2$, where h denotes a constant set to one in the pattern-generation process. A fast Fourier transform (FFT) transforms each of the two patterns to the far field, and their zero frequencies are extracted and subtracted, yielding the value ${B_{gen}}$. To scale the zero frequency of the Fourier spectrum of the pattern ${I_{gen}}$ to a pixel value of the analog signal, the loss function L in Fig. 1(c) is the mean squared error (MSE) between a pixel $S_{oi}$ of the analog signal and the calculated value ${B_{gen}}$.
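A minimal numerical sketch of this differential forward model is given below, assuming the detector reading is proportional to the real zero-frequency (DC) component of each half-pattern's far field; the pattern values and the random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
I_gen = rng.random((512, 512))           # a generated amplitude-only pattern
h = 1.0                                  # constant set to one during generation

# Split the pattern into its two complementary half-patterns.
p_plus, p_minus = (h + I_gen) / 2.0, (h - I_gen) / 2.0

# Propagate each half-pattern to the far field with an FFT and extract the
# zero-frequency (DC) component, i.e., what a single-pixel detector on the
# optical axis would sample; subtracting implements the differential protocol.
B_gen = np.fft.fft2(p_plus)[0, 0].real - np.fft.fft2(p_minus)[0, 0].real

# For a real pattern the DC term is the pixel sum, so the differential
# value reduces to the total transmittance of I_gen.
assert np.isclose(B_gen, I_gen.sum())
```

The identity in the final line shows why the loss can drive ${B_{gen}}$ toward an arbitrary target pixel value: the network only needs to shape the overall transmittance of the pattern.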
The optimized parameters $p^\ast$ are obtained using the Adam optimizer [41] with a learning rate of 0.01. The whole process, as shown in Fig. 1(c), can be described by
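In code, this optimization loop can be sketched as follows. The tiny stand-in generator, the $32 \times 32$ input size, the target pixel value, and the step count are illustrative assumptions made for brevity; the paper's actual UNN is the U-Net-style network of Fig. 1(a) operating on $512 \times 512$ inputs.

```python
import torch

torch.manual_seed(0)

# Tiny stand-in generator (the paper's UNN is a deeper U-Net-style network).
net = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 1), torch.nn.Sigmoid(),
)

z = torch.rand(1, 1, 32, 32) * 0.1     # fixed random input in [0, 0.1]
S_oi = 130.0                           # one (magnified) pixel of the analog signal

opt = torch.optim.Adam(net.parameters(), lr=0.01)  # Adam, lr = 0.01 as in the paper
for _ in range(500):
    opt.zero_grad()
    I_gen = net(z)
    # Differential zero-frequency value: for a real pattern, the DC term of
    # (1 + I)/2 minus the DC term of (1 - I)/2 equals the pixel sum of I.
    B_gen = I_gen.sum()
    loss = (B_gen - S_oi) ** 2         # MSE against the target pixel value
    loss.backward()
    opt.step()
```

After convergence, `net(z)` is the random amplitude-only pattern whose differential single-pixel reading encodes the target pixel; the loop is re-run (with a fresh target) for each pixel of the analog signal.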
The designed UNN maps a transmitted analog signal into random amplitude-only patterns $I(x,y)$. The process is repeated until all pixels of the analog signal are encoded into corresponding random patterns $I(x,y)$. The pattern-generation process is independent of any datasets and does not require labeled data. The designed UNN model is implemented in PyTorch on an Nvidia GeForce GTX 1080 Ti GPU.
Using the designed UNN, a series of random amplitude-only patterns is generated, as shown in Fig. 1(d). Other pattern sizes (e.g., $128 \times 128$ or $256 \times 256$ pixels) are also applicable in optical experiments. The number of random amplitude-only patterns is determined by the length of the analog signal to be transmitted. A coherent light source illuminates the generated random patterns placed in the optical channel, and light intensities are sequentially detected by a single-pixel detector after propagation through complex media in free space. A flow chart of the proposed method is shown in Fig. 2. The proposed method can realize high-fidelity and high-robustness ghost transmission through complex media in free space, e.g., around a corner or in dynamic and turbid water.
3. Experimental results and discussion
A schematic of the optical experimental setup is shown in Fig. 3. A green laser with a wavelength of 532.0 nm and a maximum output power of 50.0 mW is used as the light source; it is expanded by an objective lens and collimated by a lens with a focal length of 100.0 mm. The collimated beam illuminates the series of generated random patterns sequentially embedded into an amplitude-only SLM (Holoeye, LC-R720) with a pixel pitch of 20.0 µm. The optical wave then propagates through the complex media. Finally, a series of light intensities is recorded by a single-pixel detector (Newport, 918D-UV-OD3R) at the receiving end. Here, a series of optical experiments is conducted to verify the validity of the proposed method in complex media, i.e., around a corner or in dynamic and turbid water. Experimental results are obtained by testing a series of 1D irregular analog signals and 2D images with the proposed free-space ghost-transmission system.
3.1 Ghost transmission around a corner
In Fig. 4, the propagating wave undergoes reflection and scattering when a diffuser and opaque media (i.e., walls) are placed in the optical channel, and significant attenuation occurs. To verify the effectiveness and robustness of the proposed method, optical experiments are conducted with two perpendicular screens forming a non-line-of-sight geometry in free space. A protective screen is placed between the SLM and the single-pixel detector to block the propagating wave, and another screen reflects the optical wave. In our experiments, a black paper and a white paper respectively emulate the protective and scattering screens. The separation distance d between the two screens affects the light intensity collected at the receiving end and the quality of the retrieved signals. In this study, MSE and peak signal-to-noise ratio (PSNR) are calculated to evaluate the quality of the retrieved signals, respectively described by
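Both metrics follow their standard definitions and can be sketched in a few lines; the example signals and the peak value of 1.0 are illustrative assumptions.

```python
import numpy as np

def mse(x, y):
    # Mean squared error between retrieved signal x and original signal y.
    return np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)

def psnr(x, y, peak=1.0):
    # Peak signal-to-noise ratio in dB for signals with maximum value `peak`.
    return 10.0 * np.log10(peak ** 2 / mse(x, y))

original  = np.array([0.20, 0.50, 0.90, 0.40])
retrieved = np.array([0.21, 0.49, 0.91, 0.40])
quality = psnr(retrieved, original)      # higher is better; > 30 dB is high fidelity
```

A retrieved signal that deviates from the original by 0.01 at three of four samples already scores above 40 dB, which matches the scale of the PSNR values reported below.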
Figures 5(a)–5(d) show typical experimental results when different separation distances around a corner are used and the diffuser in Fig. 4 is not placed. In Figs. 5(a) and 5(b), the retrieved signals are obtained at the receiving end with a separation distance of 3.0 cm. The retrieved analog signals are of high quality, i.e., they overlap with the original signals. The high PSNR values and low MSE values in Figs. 5(a) and 5(b) demonstrate that the proposed ghost transmission using the designed UNN can be realized with high fidelity, even when the optical wave is disturbed around a corner. When the separation distance is small (e.g., 0.2 cm), there is significant intensity attenuation and the retrieved signals are of low quality, as shown in Figs. 5(c) and 5(d). The variation of the PSNR of the retrieved signals is further shown in Fig. 6 for separation distances d from 0.2 cm to 4.0 cm. The quality of the retrieved signals declines at smaller separation distances, since more of the wavefront is blocked. When the separation distance d is larger than 0.3 cm, effective information can be received and the PSNR remains stable at a high level, showing the superiority and high robustness of the proposed method. PSNR values of 40.83 dB and 40.49 dB are obtained for the two retrieved signals when the optical wave propagates around a corner with a separation distance of 4.0 cm. As shown in Figs. 5 and 6, the proposed method can realize high-fidelity and high-robustness ghost transmission around a corner.
A more complex environment is established to verify the proposed method: a diffuser (Thorlabs, DG10-1500) with a thickness of 2.0 mm is further placed before the corner, as shown in Fig. 4. Figures 7(a)–7(d) show the experimentally retrieved analog signals. When the separation distance is 3.0 cm, the retrieved signals in Figs. 7(a) and 7(b) are still of high fidelity and overlap closely with the original analog signals. The high PSNR values and low MSE values verify the effectiveness of the proposed ghost transmission. When the separation distance between the two walls is small (e.g., 0.2 cm), the experimentally retrieved analog signals deviate from the originals due to the dramatically attenuated light intensity collected at the receiving end, as shown in Figs. 7(c) and 7(d).
To analyze the influence of the separation distance around a corner, the variation of the PSNR of the experimentally retrieved analog signals with different separation distances is shown in Fig. 8, with the diffuser also placed before the corner in the optical channel. The two curves show a similar trend: PSNR values are higher at longer separation distances d. When the separation distance d is larger than 0.5 cm, the PSNR of the retrieved analog signals remains steady above 35.50 dB. The experimental results in Figs. 6 and 8 demonstrate that a diffuser placed before the corner does not dramatically affect the quality of the experimentally retrieved analog signal at the receiving end, and high-fidelity and high-robustness free-space ghost transmission can still be realized in complex media. It is experimentally verified that the proposed method, using random patterns generated by the designed UNN as information carriers, is feasible for high-fidelity optical data (ghost) transmission in complex environments. It is straightforward to apply the proposed method to high-fidelity ghost transmission in free space without scattering media.
3.2 Ghost transmission in dynamic and turbid water
Realizing high-fidelity ghost transmission through dynamic and complex scattering media in free space (e.g., water) remains an open question. Since dynamic and turbid water exhibits high absorption and scattering, it is challenging to optically transmit data with high fidelity in free space [42,43]. When the scattering environment in the optical transmission channel is dynamic, as shown in Fig. 9, the light intensities collected by the single-pixel detector contain limited effective information. In addition, scaling factors are usually assumed constant over the whole recording process, which could make the retrieval of effective signals difficult or impossible at the receiving end.
Here, a temporal correction is developed to overcome the aforementioned challenge by introducing a temporal carrier to correct a series of physically existing dynamic scaling factors in the optical transmission channel. A fixed temporal carrier $T(x,y)$ with random values is numerically pre-generated and is not influenced by the scattering environment. Before each generated amplitude-only pattern [$(m + {I_i})/2$ or $(m - {I_i})/2$], the fixed temporal carrier $T(x,y)$ is embedded into the SLM, and its corresponding measurements can be described by
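One plausible reading of this temporal correction can be sketched numerically as follows, under the assumption that each interleaved carrier measurement shares the unknown, dynamic channel scaling factor of its adjacent pattern measurement, so that a simple ratio cancels the factor. All values here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
true_vals = rng.random(8)        # differential values encoded in the patterns
carrier_val = 0.7                # fixed, known response of the temporal carrier T(x, y)

# Unknown dynamic scaling factors of the turbid channel, one per time slot;
# the interleaved carrier is assumed to experience the same factor as the
# pattern measured right after it.
s = 0.5 + rng.random(8)
B_pattern = s * true_vals        # pattern measurements, distorted by the channel
B_carrier = s * carrier_val      # adjacent carrier measurements

# Dividing each pattern measurement by its adjacent carrier measurement
# cancels the dynamic factor up to the known carrier response.
corrected = B_pattern / B_carrier * carrier_val
assert np.allclose(corrected, true_vals)
```

The correction holds only while the channel changes slowly compared with the carrier-pattern interleaving rate, which is consistent with the PSNR drop observed at the highest rotation speed below.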
An experimental setup is shown in Fig. 9. The fixed temporal carrier T and each generated random pattern are alternately and sequentially embedded into the SLM. The modulated wave then passes through a transparent water tank (polymethyl methacrylate) with dimensions of 10.0 cm (L) × 15.0 cm (W) × 30.0 cm (H). A rotator stirs 3000.0 ml of water in the tank to create a dynamic environment, and 10.0 ml of skimmed milk diluted with 500.0 ml of clean water is continuously dropped into the tank during the optical experiments for free-space ghost transmission. The axial distance between the front face of the water tank and the single-pixel detector is 20.0 cm.
Figures 10(a)–10(d) show the experimentally retrieved analog signals when the rotator operates at 1000.0 revolutions per minute (rpm). In Figs. 10(a) and 10(c), the retrieved signals are obtained without temporal carriers, and the retrieved pixels deviate from those of the original signals. In contrast, the retrieved signals overlap with the originals, as shown in Figs. 10(b) and 10(d), when the designed temporal carrier is further used. The retrieved signals in Figs. 10(b) and 10(d) have high PSNR values, showing that the proposed method with temporal correction can effectively suppress noise in dynamic and turbid water.
The variation of the PSNR of the retrieved analog signals for rotation speeds from 500.0 rpm to 2000.0 rpm is shown in Fig. 11. Without temporal correction, the PSNR of the experimentally retrieved signals is always low, and the analog signals cannot be correctly retrieved. With temporal correction, the PSNR of the retrieved signals remains at a high level. As shown in Fig. 11, when the rotation speed is lower than 1800.0 rpm, the PSNR of the retrieved analog signals exceeds 30.0 dB. The proposed method with temporal correction thus shows high robustness against dynamic scattering. When the rotation speed increases to 2000.0 rpm, there is a dramatic drop in PSNR. In this case, the propagating wave is significantly dissipated by the vortices generated by the rotator.
3.3 2D ghost transmission
The proposed method is also tested with 2D grayscale images. Two 8-bit images (64 × 64 pixels) from DIV2K [44] are encoded using the designed UNN model and then transmitted with the optical setup in Fig. 3. Figures 12(a) and 12(d) show the experimentally retrieved images at the receiving end when ghost transmission around a corner with a separation distance of 3.0 cm is conducted. Figures 12(b) and 12(e) show the retrieved images when the diffuser is further placed before the corner. In a dynamic and turbid water environment with the rotator at 1000.0 rpm, two retrieved images are shown in Figs. 12(c) and 12(f). The original images are shown in Figs. 12(g) and 12(h).
PSNR and the structural similarity index measure (SSIM) [45] are calculated to evaluate the quality of the experimentally retrieved images at the receiving end. Figures 12(a)–12(f) illustrate that high-fidelity images can be retrieved. To further show the quality of the retrieved images, pixels along the 30th column of Figs. 12(a)–12(c) are compared with those of the original images. Figures 13(a)–13(c) demonstrate that the experimentally retrieved data overlap with the original data. The MSE values for Figs. 13(a)–13(c) are 2.93 × 10−4, 2.47 × 10−4, and 2.04 × 10−4, and the PSNR values are 35.33 dB, 36.07 dB, and 36.90 dB, respectively.
4. Conclusion
We have proposed high-fidelity and high-robustness ghost transmission through complex media by generating a series of random amplitude-only patterns as 2D information carriers with a designed physics-driven UNN architecture. A physical model is developed and integrated into the designed UNN to encode each pixel of the analog signals, which enables the designed neural network to learn a prior. The 2D pattern generation does not require any training datasets or labeled data. A series of optical experiments demonstrates that the proposed method is feasible and effective, and high-fidelity ghost transmission in complex media using a coherent light source is realized. It is expected that this work could open an avenue to realizing high-fidelity and high-robustness ghost transmission in complex media.
Funding
Hong Kong Research Grants Council (C5011-19G, 15224921, 15223522); Guangdong Basic and Applied Basic Research Foundation (2022A1515011858); The Hong Kong Polytechnic University (G-R006, 1-W19E, 1-BD4Q).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
References
1. D. N. Klyshko, “Effect of focusing on photon correlation in parametric light scattering,” Sov. Phys. JETP 67(6), 1131–1135 (1988).
2. A. V. Belinskii and D. N. Klyshko, “Two-photon optics: diffraction, holography, and transformation of two-dimensional signals,” Sov. Phys. JETP 78(3), 259–262 (1994).
3. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]
4. D. V. Strekalov, A. V. Sergienko, D. N. Klyshko, and Y. H. Shih, “Observation of two-photon “ghost” interference and diffraction,” Phys. Rev. Lett. 74(18), 3600–3603 (1995). [CrossRef]
5. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: comparing entanglement and classical correlation,” Phys. Rev. Lett. 93(9), 093602 (2004). [CrossRef]
6. A. Valencia, G. Scarcelli, M. D’Angelo, and Y. H. Shih, “Two-photon imaging with thermal light,” Phys. Rev. Lett. 94(6), 063601 (2005). [CrossRef]
7. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]
8. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental X-ray ghost imaging,” Phys. Rev. Lett. 117(11), 113902 (2016). [CrossRef]
9. L. Olivieri, J. S. T. Gongora, L. Peters, V. Cecconi, A. Cutrona, J. Tunesi, R. Tucker, A. Pasquazi, and M. Peccianti, “Hyperspectral terahertz microscopy via nonlinear ghost imaging,” Optica 7(2), 186–191 (2020). [CrossRef]
10. L. Olivieri, L. Peters, V. Cecconi, A. Cutrona, M. Rowley, J. S. T. Gongora, A. Pasquazi, and M. Peccianti, “Terahertz nonlinear ghost imaging via plane decomposition: toward near-field micro-volumetry,” ACS Photonics 10(6), 1726–1734 (2023). [CrossRef]
11. S. Chen, Z. Feng, J. Li, W. Tan, L. Du, J. Cai, Y. Ma, K. He, H. Ding, Z. Zhai, Z. Li, C. Qiu, X. Zhang, and L. Zhu, “Ghost spintronic THz-emitter-array microscope,” Light: Sci. Appl. 9(1), 99 (2020). [CrossRef]
12. L. E. Barr, P. Karlsen, S. M. Hornett, I. R. Hooper, M. Mrnka, C. R. Lawrence, D. B. Phillips, and E. Hendry, “Super-resolution imaging for sub-IR frequencies based on total internal reflection,” Optica 8(1), 88–94 (2021). [CrossRef]
13. J. S. Totero Gongora, L. Olivieri, L. Peters, J. Tunesi, V. Cecconi, A. Cutrona, R. Tucker, V. Kumar, A. Pasquazi, and M. Peccianti, “Route to intelligent imaging reconstruction via terahertz nonlinear ghost imaging,” Micromachines 11(5), 521 (2020). [CrossRef]
14. V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, “Compressive imaging in scattering media,” Opt. Express 23(11), 14424–14433 (2015). [CrossRef]
15. A. Ismagilov, A. Lappo-Danilevskaya, Y. Grachev, B. Nasedkin, V. Zalipaev, N. V. Petrov, and A. Tcypkin, “Ghost imaging via spectral multiplexing in the broadband terahertz range,” J. Opt. Soc. Am. B 39(9), 2335–2340 (2022). [CrossRef]
16. V. Cecconi, V. Kumar, A. Pasquazi, J. S. Totero Gongora, and M. Peccianti, “Nonlinear field-control of terahertz waves in random media for spatiotemporal focusing,” Open Res. Eur. 2, 32 (2023). [CrossRef]
17. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007). [CrossRef]
18. Z. Yaqoob, D. Psaltis, M. S. Feld, and C. H. Yang, “Optical phase conjugation for turbidity suppression in biological sample,” Nat. Photonics 2(2), 110–115 (2008). [CrossRef]
19. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6(5), 283–292 (2012). [CrossRef]
20. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104(10), 100601 (2010). [CrossRef]
21. M. Kim, W. Choi, Y. Choi, C. Yoon, and W. Choi, “Transmission matrix of a scattering medium and its applications in biophotonics,” Opt. Express 23(10), 12648–12668 (2015). [CrossRef]
22. M. Mounaix, D. Andreoli, H. Defienne, G. Volpe, O. Katz, S. Grésillon, and S. Gigan, “Spatiotemporal coherent control of light through a multiple scattering medium with the multispectral transmission matrix,” Phys. Rev. Lett. 116(25), 253901 (2016). [CrossRef]
23. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]
24. G. Ongie, A. Jalal, C. A. Metzler, R. G. Baraniuk, A. G. Dimakis, and R. Willett, “Deep learning techniques for inverse problems in imaging,” IEEE J. Sel. Areas Inf. Theory 1(1), 39–56 (2020). [CrossRef]
25. F. Wang, H. Wang, H. C. Wang, G. W. Li, and G. H. Situ, “Learning from simulation: an end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27(18), 25560–25572 (2019). [CrossRef]
26. A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” Phys. Rev. Lett. 121(24), 243902 (2018). [CrossRef]
27. Y. Rivenson, Y. C. Wu, and A. Ozcan, “Deep learning in holography and coherent imaging,” Light: Sci. Appl. 8(1), 85 (2019). [CrossRef]
28. Z. W. Liu, S. Yan, H. G. Liu, and X. F. Chen, “Superhigh-resolution recognition of optical vortex modes assisted by a deep-learning method,” Phys. Rev. Lett. 123(18), 183902 (2019). [CrossRef]
29. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017). [CrossRef]
30. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in Conference on Computer Vision and Pattern Recognition, 9446–9454 (IEEE, 2018).
31. K. C. Zhou and R. Horstmeyer, “Diffraction tomography with a deep image prior,” Opt. Express 28(9), 12872–12896 (2020). [CrossRef]
32. H. T. Yu, B. W. Han, L. F. Bai, D. L. Zheng, and J. Han, “Untrained deep learning-based fringe projection profilometry,” APL Photonics 7(1), 016102 (2022). [CrossRef]
33. F. Wang, Y. M. Bian, H. C. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. H. Situ, “Phase imaging with an untrained neural network,” Light: Sci. Appl. 9(1), 77 (2020). [CrossRef]
34. X. Y. Zhang, F. Wang, and G. H. Situ, “BlindNet: an untrained learning approach toward computational imaging with model uncertainty,” J. Phys. D: Appl. Phys. 55(3), 034001 (2022). [CrossRef]
35. S. P. Liu, X. F. Meng, Y. K. Yin, H. Z. Wu, and W. J. Jiang, “Computational ghost imaging based on an untrained neural network,” Opt. Lasers Eng. 147, 106744 (2021). [CrossRef]
36. F. Wang, C. L. Wang, M. L. Chen, W. L. Gong, Y. Zhang, S. S. Han, and G. H. Situ, “Far-field super-resolution ghost imaging with a deep neural network constraint,” Light: Sci. Appl. 11(1), 1 (2022). [CrossRef]
37. Y. Xiao, L. N. Zhou, and W. Chen, “Wavefront control through multi-layer scattering media using single-pixel detector for high-PSNR optical transmission,” Opt. Lasers Eng. 139, 106453 (2021). [CrossRef]
38. B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. H. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. 11(8), 684–689 (2015). [CrossRef]
39. E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through dynamic scattering media by single-pixel photodetection,” Opt. Express 22(14), 16945–16955 (2014). [CrossRef]
40. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241 (2015).
41. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in 3rd International Conference on Learning Representations (2015).
42. K. Nakamura, I. Mizukoshi, and M. Hanawa, “Optical wireless transmission of 405 nm, 1.45 Gbit/s optical IM/DD-OFDM signals through a 4.8 m underwater channel,” Opt. Express 23(2), 1558–1566 (2015). [CrossRef]
43. Y. Xiao, L. N. Zhou, and W. Chen, “High-resolution ghost imaging through complex scattering media via a temporal correction,” Opt. Lett. 47(15), 3692–3695 (2022). [CrossRef]
44. E. Agustsson and R. Timofte, “Ntire 2017 challenge on single image super-resolution: dataset and study,” in Conference on Computer Vision and Pattern Recognition Workshops, 126–135 (IEEE, 2017).
45. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]