## Abstract

Prior-free imaging beyond the memory effect (ME) is critical to seeing through scattering media. However, existing methods that exceed the ME range rely on the availability of prior information about the imaging targets. Here, we propose blind target position detection for large field-of-view scattering imaging. Exploiting only two multi-target near-field speckles captured at different imaging distances, the unknown number and locations of the isolated imaging targets are blindly recovered via the proposed scaling-vector-based detection. Autocorrelations are then calculated for the speckle regions centered at the derived positions via a low-cross-talk region allocation strategy. Working with a modified phase retrieval algorithm, the complete scene of the multiple targets exceeding the ME range can be reconstructed without any prior information. The effectiveness of the proposed algorithm is verified on a real scattering imaging system.

© 2020 Chinese Laser Press

## 1. INTRODUCTION

Scattering media are ubiquitous in daily life and possess non-uniform reflectance and refractive-index distributions. They disturb the light rays coming from imaging targets, which hinders direct analysis of the object information behind them using traditional optical systems. To overcome this challenge, novel methods based on the memory effect (ME) [1–4] achieve non-invasive scattering imaging via speckle correlation [5–7]. Compared with other scattering imaging methods, such as ballistic-light-based approaches [8–13], wavefront shaping [14,15], and transmission matrix measurement [16–21], the speckle correlation technique realizes prior-free imaging with only conventional instruments and enables quick imaging in otherwise inaccessible scenarios. However, the field of view (FOV) of this method is limited by the ME range. Prior-free imaging beyond the ME range is therefore critical to seeing through scattering media.

To exceed the ME range, several techniques have been proposed that introduce prior information about the isolated imaging targets. Li *et al.* introduced the position prior of each imaging target during point spread function (PSF) calibration [22]; Sahoo *et al.* used the wavelength prior of each imaging target during PSF calibration [23]; and Guo *et al.* exploited a shape prior, or a PSF measured close to the imaging target, to exceed the ME range [24]. Wang *et al.* proposed a dual-target non-invasive scattering imaging method via Fourier spectrum guessing and iterative energy-constrained compensation [25]. However, this method can only separate two targets, and the number of targets serves as known prior information before reconstruction. Boniface *et al.* achieved non-invasive target localization beyond the ME by analyzing the speckle envelope of each target [26], but this localization method fails when an unknown number of targets are illuminated simultaneously.

Here, we put forward a multi-target large FOV scattering imaging method based on blind target position detection. It blindly detects the unknown number and positions of isolated targets using only two multi-target near-field speckles captured at different imaging distances. The theoretical scaling relationship between the two speckles is derived and demonstrated, and the scaling centers correspond to the positions of the imaging targets. Based on this derivation, scaling-vector-based target position detection is proposed, which recognizes the target positions from the length and direction information of the scaling vectors. After that, autocorrelations are calculated for the speckle regions centered at the derived positions via a low-cross-talk region allocation strategy. Working with a modified phase retrieval algorithm [27] that selects the optimal recovery with no prior information, especially for autocorrelations with interference, the complete scene of multiple isolated targets exceeding the ME range is reconstructed. Experiments on a real imaging system demonstrate the effectiveness of the proposed algorithm for multi-target blind reconstruction through scattering media. Visually distinguishable reconstructions are experimentally achieved for whole scenes of multiple isolated targets exceeding the ME range. In principle, the factor by which the FOV exceeds the ME range can be increased as long as the acquisition equipment can fully collect the scattered speckle over a larger FOV. Finally, we verify the accuracy of multi-target positioning and the capability of multi-target blind reconstruction, and experimentally analyze the limitations and constraints of the proposed method.

## 2. PRINCIPLE

The principle of the method is depicted in Fig. 1. We excite multiple isolated targets, ${O}_{1},{O}_{2},\dots ,{O}_{n}$, hidden behind a scattering medium, with spatially incoherent illumination. Each imaging target falls within the ME range, while the spacing between any two targets extends beyond it. The large FOV speckle image at Distance 1, ${I}_{{d}_{1}}$, which consists of the speckles produced by each target, ${I}_{{d}_{1}}^{1},{I}_{{d}_{1}}^{2},\dots ,{I}_{{d}_{1}}^{n}$, is captured by a 2D camera array. The proposed imaging system addresses the situation in which the number and locations of the imaging targets cannot be read directly from the captured speckle; this position information is hidden in ${I}_{{d}_{1}}$. To extract it, a second speckle ${I}_{{d}_{2}}$ is captured at another imaging distance, Distance 2. The ME range, $\mathrm{\Delta}\theta $, corresponds to the angular FOV of the diffuser, within which points on the object plane produce random speckles with high correlation. For a spatially incoherent imaging system via speckle correlation, the ME range is constrained by $\mathrm{\Delta}\theta \le \lambda /(\pi L)$ [5], where $\lambda $ denotes the wavelength of the spatially incoherent light source and $L$ is the effective thickness of the scattering medium. Since the speckle patterns generated within the ME range are translationally invariant, the large FOV speckle of the multiple targets recorded by the camera array at Distance 1, ${I}_{{d}_{1}}$ in Fig. 1, can be formulated as (and similarly at ${d}_{2}$ for ${I}_{{d}_{2}}$)

From Eq. (3), the number, shapes, and locations of the imaging targets are coupled together in the captured speckle. Reconstructing all the target shapes from ${I}_{{d}_{1}}$ alone is therefore a problem with infinitely many solutions, since Eq. (3) contains too many unknown variables. One effective way to simplify this problem and reduce the solution space is to detect the total number of objects $n$ and the location of each object $({u}_{k},{v}_{k}),k=1,2,\dots ,n$, without any prior information. Thus, the blind target position detection algorithm is proposed and introduced in the following.

We exploit a second near-field speckle ${I}_{{d}_{2}}$, captured at another imaging distance, for position detection. The theoretical scaling relationship between the two near-field speckles derived below indicates that certain areas (called scaling centers) in the near-field speckles do not scale with the imaging distance, and these scaling-invariant areas correspond to the locations of the imaging targets. To the best of our knowledge, no previous work has investigated the relationship between two near-field speckles at different imaging distances. Inspired by the existing statistical analysis of far-field speckles in 3D space [18,28,29] and the technique of using speckle correlation to improve axial sectioning [30], without loss of generality, we first analyze the scaling relationship between two near-field on-axis PSFs. With an ideal pinhole on the optical axis of the object plane as the corresponding point where one imaging target (denoted ${O}_{k}$) is located, using the Fresnel diffraction formula, the field on the front surface of the scattering medium, ${U}_{s}$, is expressed by

To describe the scaling relationship, we introduce the correlation function between on-axis ${\mathrm{PSF}}_{{d}_{1}}^{k}$ and ${\mathrm{PSF}}_{{d}_{2}}^{k}$ as

PSF simulations are conducted as shown in Fig. 2, where the scattering process is created from a random phase mask based on the projection model [31,32]. A point light source was set at the on-axis position in the object plane as the corresponding point where ${O}_{k}$ is located. The simulated PSFs (normalized by removing the envelope) at different imaging distances through the modeled scattering layer are shown in Figs. 2(a) and 2(b). Visually, the intensity distributions of the two PSFs are strongly correlated, as in the green rectangles. Inspired by the wavefront slopes used in adaptive optics [33,34], we introduce the scaling vector into the algorithm to explore the detailed relationship between the PSFs at the two imaging distances, which is estimated by discrete block matching. We traverse ${\mathrm{PSF}}_{{d}_{2}}^{k}$ to search for the area most relevant to a selected block in ${\mathrm{PSF}}_{{d}_{1}}^{k}$; the translation from the selected block in ${\mathrm{PSF}}_{{d}_{1}}^{k}$ to the optimal block in ${\mathrm{PSF}}_{{d}_{2}}^{k}$ defines the scaling vector at the center point of the selected block. The block matching strategy can be expressed as
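As a concrete illustration, the discrete block matching step can be sketched as follows. This is a minimal sketch rather than the authors' implementation: the function name `estimate_scaling_vector`, the block size, and the exhaustive integer-shift search are our own assumptions.

```python
import numpy as np

def estimate_scaling_vector(psf_d1, psf_d2, center, block=32, search=12):
    """Estimate the scaling vector at `center` (row, col) of psf_d1 by finding
    the best-matching block in psf_d2 via normalized cross-correlation."""
    r, c = center
    h = block // 2
    ref = psf_d1[r - h:r + h, c - h:c + h].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best, best_score = (0, 0), -np.inf
    # exhaustive search over integer shifts within +/- `search` pixels
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = psf_d2[r + dr - h:r + dr + h,
                          c + dc - h:c + dc + h].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            score = np.mean(ref * cand)  # normalized cross-correlation
            if score > best_score:
                best_score, best = score, (dr, dc)
    return np.array(best)  # translation = scaling vector at `center`
```

A denser or sparser grid of centers can be traversed to trade localization accuracy against runtime, which is the main cost noted in Section 4.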

Then we calculate the $m$ value of each estimated scaling vector in Fig. 2(c), which equals the distance between $({x}_{{d}_{2}},{y}_{{d}_{2}})$ and the scaling center divided by the distance between $({x}_{{d}_{1}},{y}_{{d}_{1}})$ and the scaling center, as shown in Eq. (7). Theoretically, all the scaling vectors estimated by block matching with maximum cross correlation share the same $m$ value, and the statistical analysis of Fig. 2(c) bears this out. The histogram of $m$ values in Fig. 2(d) demonstrates that $m$ stabilizes around the constant value 1.015, which matches the description of Eq. (7). Although the analytical solution of Eq. (7) is hard to derive, the scaling relationship between near-field PSFs does exist, as demonstrated by the above statistical analysis. We therefore conclude that the on-axis $(u=0,v=0)$ near-field scaling relationship between ${\mathrm{PSF}}_{{d}_{1}}^{k}$ at the Distance 1 plane and ${\mathrm{PSF}}_{{d}_{2}}^{k}$ at the Distance 2 plane can be expressed as
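The ratio definition above implies that each scaling vector satisfies $v=(m-1)(p-c)$, where $p$ is the vector's position in ${I}_{{d}_{1}}$ and $c$ is the scaling center; both $m$ and $c$ can then be recovered from a set of estimated vectors by linear least squares. The following is our own hedged sketch of that inversion; the function name and fitting formulation are not from the paper.

```python
import numpy as np

def fit_scaling_center(points, vectors):
    """Fit v = (m - 1) * (p - c) by linear least squares.
    points, vectors: (N, 2) arrays of positions and scaling vectors.
    Unknowns: a = m - 1, bx = a*cx, by = a*cy, so that v = a*p - b."""
    N = points.shape[0]
    A = np.zeros((2 * N, 3))
    A[:N, 0] = points[:, 0]; A[:N, 1] = -1.0   # rows for the x-components
    A[N:, 0] = points[:, 1]; A[N:, 2] = -1.0   # rows for the y-components
    rhs = np.concatenate([vectors[:, 0], vectors[:, 1]])
    (a, bx, by), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    m = 1.0 + a
    center = np.array([bx, by]) / a
    return m, center
```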

where $\to $ denotes that ${\mathrm{PSF}}_{{d}_{1}}^{k}$ is most relevant to the coordinate-scaled ${\mathrm{PSF}}_{{d}_{2}}^{k}$ with the constant coordinate-scaling value $m$. The on-axis scaling relationship in Eq. (9) can also be generalized to the off-axis case as

Under spatially incoherent illumination, the speckle generated by one imaging target (denoted ${O}_{k}$) is the linear superposition of many highly correlated PSFs. Each PSF possesses a different scaling center, corresponding to one point of the imaging target. Considering the schematic of the proposed scattering imaging system in Fig. 1, the size of each imaging target falls within the ME range while the spacing between any two targets extends beyond it. Hence, all the PSFs that form a single-target speckle share approximately the same scaling center, located around the imaging target. Equation (11) expresses the scaling relationship between the single-target near-field speckles as

After that, the scaling-vector-based detection algorithm, as shown in Fig. 3, is proposed based on Eq. (11) for the multiple targets in Fig. 1 under the spatially incoherent light source. First, the scaling vectors are estimated by block matching, as in Eq. (8), from the multi-target speckles at the two imaging distances: we traverse ${I}_{{d}_{2}}$ to search for the area most relevant to a selected block in ${I}_{{d}_{1}}$. The density of the estimated scaling vectors, vertically and horizontally, can be adjusted according to the speckle resolution. Multiple targets imply that multiple scaling centers exist among the estimated scaling vectors. Since the number of scaling centers is unknown, we use the length of each estimated scaling vector to determine regions where targets may be located: any position whose scaling-vector length is below a certain threshold is listed as a candidate region. Next, connected component analysis (8-connected) [35] clusters these regions, and the number of connected components equals the number of imaging targets. The algorithm chooses the position with the minimum scaling-vector length in each connected component as the rough location of that target. Finally, the direction information of the scaling vectors is used to refine the row and column of each rough location, because the line defined by each scaling vector belonging to one connected component theoretically passes through the target position in that component. Assisted by block matching and connected component analysis, the proposed scaling-vector-based detection algorithm thus achieves blind target position detection, exploiting only two multi-target near-field speckles captured at different imaging distances. The detected position information comprises the number, $n$, and the locations of the imaging targets in the object plane, $({u}_{k},{v}_{k}),k=1,2,\dots ,n$, like the blue points shown in Section 3.
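The thresholding and clustering steps above can be sketched as follows. This is a minimal illustration under our own assumptions (a regular grid of precomputed scaling-vector lengths, a hand-picked threshold, and no direction-based refinement); the flood fill stands in for the 8-connected component analysis of [35].

```python
import numpy as np

def detect_targets(vec_len, threshold):
    """Blind detection sketch: threshold the scaling-vector length map,
    group low-length cells by 8-connected flood fill, and return the
    minimum-length cell of each component as a rough target position."""
    mask = vec_len < threshold
    seen = np.zeros(mask.shape, dtype=bool)
    H, W = mask.shape
    centers = []
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                stack, comp = [(i, j)], []
                seen[i, j] = True
                while stack:  # 8-connected flood fill of one component
                    r, c = stack.pop()
                    comp.append((r, c))
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if (0 <= rr < H and 0 <= cc < W
                                    and mask[rr, cc] and not seen[rr, cc]):
                                seen[rr, cc] = True
                                stack.append((rr, cc))
                # rough location: cell with the minimum scaling-vector length
                centers.append(min(comp, key=lambda rc: vec_len[rc]))
    return len(centers), centers
```

The component count plays the role of the detected target number $n$, and each returned cell is the rough location that the direction-based step would then refine.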

After target position detection, the low-cross-talk region allocation strategy is proposed to extract the autocorrelation of each target, in order to simplify the infinite-solution problem of Eq. (3). We select a small square region of the captured speckle, $\epsilon $, in ${I}_{{d}_{1}}$ for autocorrelation calculation, centered at one detected target location (${O}_{1}$ as an example) with side length $\beta $ as

Considering the envelope properties of speckle, the selected region affects the weight of each imaging target in ${I}_{{d}_{1}}$ by

This weight difference is doubled when ${I}_{{d}_{1}}$ is transferred into the autocorrelation domain [25] as

However, the extracted autocorrelation of each imaging target inevitably carries some interference from the autocorrelations of the other targets, which leads to an unstable output from the traditional phase retrieval algorithm initialized with a random phase. To improve the stability of reconstruction, a modified phase retrieval algorithm is applied, designed especially for autocorrelation signals with interference. First, we blindly reconstruct a number of object images by the “hybrid input-output” and “error-reduction” algorithms with different random initial phases [27]. Then, for each object image, the part whose intensity is less than 20% of the maximum intensity of that image is regarded as unstable noise and set to zero. Finally, the object image with the least change in the autocorrelation domain between the unprocessed and processed versions is taken as the final optimal output.
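The selection step of this modified phase retrieval can be sketched as below, given candidate reconstructions already produced by hybrid input-output/error-reduction runs. The helper names and the use of a Frobenius norm to measure the autocorrelation change are our own assumptions, not the paper's exact criterion.

```python
import numpy as np

def autocorr(img):
    """Autocorrelation via the Wiener-Khinchin theorem (FFT of the power
    spectrum), with the zero lag shifted to the array center."""
    spec = np.abs(np.fft.fft2(img)) ** 2
    return np.fft.fftshift(np.fft.ifft2(spec).real)

def select_stable_recovery(candidates):
    """Zero out pixels below 20% of each candidate's peak intensity, then
    keep the candidate whose autocorrelation changes least under this
    cleaning (i.e., the most stable phase retrieval output)."""
    best, best_err = None, np.inf
    for img in candidates:
        cleaned = np.where(img >= 0.2 * img.max(), img, 0.0)
        err = np.linalg.norm(autocorr(cleaned) - autocorr(img))
        if err < best_err:
            best_err, best = err, cleaned
    return best
```

A reconstruction whose low-intensity residue is pure noise barely changes its autocorrelation when cleaned, so it wins the selection.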

Working with the modified phase retrieval algorithm, each target can be reconstructed with the help of the detected number and locations of the imaging targets. These targets are then placed at their detected positions to form a complete scene of the multiple targets exceeding the ME range, without any prior information. Real experiments verify that the approximations made in this section are acceptable and that the extracted position information is sufficient for a visually distinguishable reconstruction. We discuss the limitations of the proposed algorithm in the next section.
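For completeness, the region allocation and autocorrelation extraction described earlier in this section can be sketched with a standard FFT-based estimator; the function name, the mean subtraction, and the circular boundary handling are our own simplifications rather than the paper's exact Eqs. (12)–(14).

```python
import numpy as np

def region_autocorrelation(speckle, center, beta):
    """Crop a beta x beta region of the speckle centered at a detected
    target location and compute its autocorrelation via the FFT."""
    r, c = center
    h = beta // 2
    region = speckle[r - h:r + h, c - h:c + h].astype(float)
    region = region - region.mean()          # suppress the DC peak
    spec = np.abs(np.fft.fft2(region)) ** 2  # power spectrum
    ac = np.fft.ifft2(spec).real
    return np.fft.fftshift(ac)               # zero lag at the array center
```

Each region's autocorrelation is dominated by its own target, per the weight argument of Eq. (13), and is then fed to the modified phase retrieval.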

## 3. EXPERIMENTAL DEMONSTRATION

In the following section, we first describe the optical setup of the scattering imaging system, then detail the tests on a real scattering imaging system, and finally analyze the limitations of the proposed algorithm.

#### A. Experimental Setup

The setup of the multi-target large FOV scattering imaging system via blind target position detection is shown in Fig. 4, which is extended from the single-shot scattering system proposed by Katz *et al.* [5]. A narrow-bandwidth 532 nm single-frequency CW laser (Cobolt Samba™ 100) serves as the light source, whose coherence is attenuated via a rotating ground-glass diffuser. A Thorlabs 220-grit diffuser, whose effective ME range is 16.6 mrad, is placed between the multiple targets and the sensor plane [25]. We used a single CMOS camera (FLIR, $\text{pixel size}=4.8\text{\hspace{0.17em}}\mathrm{\mu m}$, $1280\times 1024$ pixels) on a 3D moving platform (DHC, minimum $\text{scale}=10\text{\hspace{0.17em}}\mathrm{\mu m}$) to capture the large FOV speckles at different imaging distances.
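As a quick sanity check on these parameters (our own arithmetic, using values reported in this section and the next), the object-plane extent of the 16.6 mrad ME range at the object distance ${d}_{0}=120\text{\hspace{0.17em}}\mathrm{mm}$ follows from the small-angle approximation:

```python
# Object-plane extent of the memory effect for the reported setup.
d0_mm = 120.0            # object distance
me_range_rad = 16.6e-3   # effective ME range of the 220-grit diffuser
fov_mm = d0_mm * me_range_rad  # small-angle approximation
print(round(fov_mm, 2))  # ~1.99 mm: each 0.5 mm target fits inside,
                         # while the 3.5 mm target spacing exceeds it
```

This matches the roughly 2 mm effective ME range in the object plane quoted in the limitation analysis.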

#### B. Tests on a Real Scattering Imaging System

First, various multiple targets were tested to demonstrate the effectiveness of the proposed algorithm. The left column of Fig. 5 shows the test on the mask “2FL” and the right column shows the test on a larger and more complex scene, “01234.” Figures 5(a) and 5(f) describe the detailed parameters of the multiple targets. The minimum distance between any two targets is 3.5 mm, and each target size is within 0.5 mm. In this part, the object distance (${d}_{0}$) equals 120 mm. The total scene size is much larger than the ME range $(120\text{\hspace{0.17em}}\mathrm{mm}\times 16.6\text{\hspace{0.17em}}\mathrm{mrad})$, but each target size is smaller than it. The real multi-target large FOV near-field speckles through the scattering medium at different imaging distances are captured via the 3D moving camera: Figs. 5(c) and 5(h) are captured with ${d}_{2}=17.0\text{\hspace{0.17em}}\mathrm{mm}$ and Figs. 5(d) and 5(i) with ${d}_{1}=16.5\text{\hspace{0.17em}}\mathrm{mm}$. Visually, no prior information (including the number and locations of the imaging targets) can be seen directly from these multi-target superimposed speckles. The red arrows in Fig. 5(e) show all the scaling vectors estimated by the block matching method from the near-field speckles for mask “2FL,” which describe the scaling relationship between the speckles at the two imaging distances. Clearly, three scaling centers exist among these scaling vectors. Meanwhile, Fig. 5(e) shows the extracted connected components for mask “2FL” at the bottom right, and the blue points denote the final detected multi-target locations. The number of extracted connected components matches the number of imaging targets, and this multi-target scattering imaging problem is thus specified as the process of reconstructing three objects. After that, the autocorrelation of each imaging target, as shown in Fig. 5(d), can be reconstructed via the operations of Eq. (14) in the selected speckle region centered at the detected position information in Fig. 5(e). Working with the modified phase retrieval algorithm, each imaging target can be reconstructed successfully and then placed at its location to form a complete scene, as shown in Fig. 5(b), which is visually close to the original one. The process of Figs. 5(h)–5(j) for target “01234” is the same as that of Figs. 5(c)–5(e). The objective similarity evaluations between the reconstructions and the original targets by peak signal-to-noise ratio (PSNR) are shown in Table 1, and the detected target locations are shown in Figs. 5(b) and 5(g). Compared with the original scenes in Figs. 5(a) and 5(f), these reconstructed results demonstrate that multiple targets can be accurately positioned by the scaling-vector-based position detection and that the proposed method achieves multi-target large FOV blind reconstruction through scattering media in a real imaging system.

Second, to test the applicability of the proposed method for large FOV fluorescent biological observation through scattering media, neuron-shaped scattering imaging experiments were conducted. The neuron-shaped mask, scaled from real dendrites of hippocampal neurons [36], features complex target shapes and irregularly distributed target locations, which increases the difficulty of reconstruction but matches the requirements of practical scattering scenes. The mask was set as the imaging target in the proposed scattering imaging system, with the other experimental conditions consistent with Fig. 5. Figure 6(a) shows the neuron-shaped mask for reconstruction and the detailed parameters. The whole scene reconstructed via the proposed method is shown in Fig. 6(b), with the main features faithfully recovered. As before, no prior information can be seen directly from the captured large FOV speckle shown in Fig. 6(c), which was recorded with ${d}_{1}=16.5\text{\hspace{0.17em}}\mathrm{mm}$ as the Distance 1 plane. In principle, the presented millimeter-scale experiments can be scaled to micrometer- or meter-scale scenarios for scattering imaging exceeding the ME range.

#### C. Analyzing the Limitations

In the case that multiple targets are separated from each other as shown in Fig. 1, the proposed method blindly achieves multi-target localization and reconstruction exceeding the ME range. However, what is the effective spacing between any two targets beyond which the scaling-vector-based detection and scattering imaging algorithm works for multi-target speckles? To evaluate this limitation, we adjust the spacing between any two targets of mask “2FL” in Fig. 7(a) from 3.25 mm down to 1.5 mm. Apart from the spacing, all other parameters remain the same as in Fig. 5(a), with the effective ME range kept at 2 mm in the object plane and the size of each imaging target within 0.5 mm. In this experimental environment, the reconstructed targets and locations are visually distinguishable when the spacing ranges from 3.25 mm to 2.25 mm, as shown in Fig. 7(b). The estimated scaling vectors and detected locations at a spacing of 2.75 mm are shown in Fig. 7(d) as an example of a good-quality reconstruction. When the spacing between any two targets drops below 2.25 mm, the reconstructions become visually distorted and the quality degrades. Meanwhile, the proposed blind position detection algorithm estimates the wrong number and locations of the imaging targets from the near-field speckles, as shown in Fig. 7(e) at a spacing of 1.75 mm, which misleads the subsequent reconstruction process. Objectively, the three-target averaged PSNR curve between the reconstructions and the original targets as a function of decreasing spacing is provided in Fig. 7(c). Theoretically, the autocorrelation of a selected target, as in Eq. (14), is increasingly contaminated by the autocorrelations of the other targets as the spacing decreases, which worsens the final reconstruction after phase retrieval. Meanwhile, according to Eq. (13), the captured multi-target speckle region centered at one detected target location (${O}_{1}$ as an example) is mainly composed of the speckle generated by ${O}_{1}$, while the speckles generated by the other imaging targets contribute only weakly to this region. Therefore, the scaling vectors in this region regard the location of ${O}_{1}$ as the scaling center, and multiple distinct scaling centers exist between the speckles at the two imaging distances, corresponding to the locations of the different imaging targets. This is why the proposed blind position detection algorithm can extract multi-target positions from two large FOV speckles at different imaging distances, and Figs. 5(e) and 5(j) show the obvious boundaries of the speckle regions serving different imaging targets. However, with decreasing spacing, the weight differences in Eq. (13) are reduced and the scaling vectors in the speckle region centered at the location of ${O}_{1}$ are increasingly perturbed by the other imaging targets. This results in low-accuracy target localization and even a wrong number of identified imaging targets.

As for the number of times the proposed algorithm can exceed the ME range, it is not restricted in theory, as long as the camera array can capture the full speckles as the number of imaging targets increases.

## 4. CONCLUSION AND DISCUSSION

To summarize, we developed a multi-target large FOV scattering imaging method based on blind target position detection. This technique exploits only two multi-target near-field speckles captured at different imaging distances, from which the target positions cannot be seen directly. A major advantage of our approach is that the target position information, including the number and locations of the imaging targets, can be blindly recovered via the scaling-vector-based detection algorithm. After that, autocorrelations can be calculated for the speckle regions centered at the derived positions via the low-cross-talk region allocation strategy. Working with the modified phase retrieval algorithm, the whole scene of the multiple isolated targets exceeding the ME range can be successfully recovered. Unlike other methods of exceeding the ME range, no prior information about the target is required, which makes our technique more applicable to prior-information-free large FOV imaging; other scattering imaging techniques can also cooperate with our method to further improve performance [25,26]. The real scattering imaging experiments demonstrate the effectiveness of the proposed method.

Admittedly, the detected target position information contains some errors compared with the true multi-target positions. These errors come from three main sources: (1) the discrete sampling during camera acquisition and image processing; (2) the approximations in the theoretical derivation from Eq. (9) to Eq. (11), made to simplify the scattering model; and (3) the finite divergence angle of the experimental illumination, which slightly shifts the target positions from the object plane to the sensor plane. In practice, these errors did not affect the effectiveness of the proposed algorithm for target localization and reconstruction. On the other hand, the minimal effective spacing between any two targets is limited not only by the target size and the properties of the scattering medium, but also by the distance between the scattering medium and the camera sensor, and even by the shape of the imaging target. Deeper research will focus on this problem in the future.

Additionally, the main time consumption of the proposed algorithm lies in estimating the scaling vectors from the two imaging-distance speckles, and this cost grows with the FOV of the captured speckles. When the resolution of the captured speckles in Figs. 5(c) and 5(d) is $1600\times 2500$ pixels and the side length of the selected square region ($\beta $) is set to 400 pixels, the total time consumption in MATLAB 2018b is about 6.1 h. In the image processing part, the raw captured multi-target speckle with its slowly varying envelope is normalized, before calculating autocorrelations, by dividing the raw speckle by a low-pass-filtered version of itself [5]. In the process of estimating scaling vectors, the normalized speckle is further smoothed by a low-pass filter to remove the camera noise, which is spatially invariant across imaging distances, and to improve the localization accuracy.
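The envelope normalization mentioned above can be sketched as follows. This is a minimal FFT-based version under our own assumptions (a Gaussian low-pass kernel of hypothetical width `sigma`; the paper does not specify the filter).

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Gaussian blur applied in the frequency domain (no SciPy needed):
    the spectrum is multiplied by exp(-2 * (pi * sigma * f)^2), the
    Fourier transform of a spatial Gaussian of width `sigma` pixels."""
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    kernel = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    return np.fft.ifft2(np.fft.fft2(img) * kernel).real

def normalize_speckle(raw, sigma=50.0):
    """Remove the slowly varying envelope by dividing the raw speckle
    by a heavily low-pass-filtered copy of itself, as in [5]."""
    envelope = gaussian_lowpass(raw.astype(float), sigma)
    return raw / np.maximum(envelope, 1e-12)
```

After this division, the slowly varying envelope is flattened to about unity while the fine speckle grain, which carries the correlation information, is preserved.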

Finally, the imaging distances, ${d}_{1}$ and ${d}_{2}$, are not fixed in the proposed method, nor is the gap between the two near-field speckles, ${d}_{2}-{d}_{1}$; moreover, these distance parameters were not used as known information for reconstruction in this paper. In practice, the imaging distance can be adjusted to the requirements of different scenes, as long as the near-field scaling relationship between the two imaging-distance speckles exists, and light field techniques [37] may be applied to the proposed system to speed up the acquisition process. Meanwhile, the relationship between the parameter $m$ and multiple variables (including ${d}_{0}$, ${d}_{1}$, ${d}_{2}$, and the TM) via statistical analysis will be part of our future research, aiming to extract more useful information from two near-field speckles at different imaging distances. Furthermore, the proposed method should still work, in theory, when the imaging targets are sandwiched between two scattering layers or viewed around corners [5], which will be tested experimentally in future work. The raw data used to generate the results presented in this manuscript are available at https://cloud.tsinghua.edu.cn/d/296c066dbcc243839f52/.

## Funding

National Natural Science Foundation of China (61827804, 61771275); Guangdong Special Support Plan for the scientific and technological innovation young talents (2016TQ03X998).

## Acknowledgment

We would like to thank Xiangsheng Xie for helpful discussions.

## Disclosures

The authors declare no conflicts of interest.

## REFERENCES

**1. **J. W. Goodman, *Speckle Phenomena in Optics: Theory and Applications* (Roberts, 2007).

**2. **S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. **61**, 834–837 (1988). [CrossRef]

**3. **I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. **61**, 2328–2331 (1988). [CrossRef]

**4. **I. Freund, “Looking through walls and around corners,” Phys. A **168**, 49–65 (1990). [CrossRef]

**5. **O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics **8**, 784–790 (2014). [CrossRef]

**6. **J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature **491**, 232–234 (2012). [CrossRef]

**7. **X. Yang, Y. Pu, and D. Psaltis, “Imaging blood cells through scattering biological tissue using speckle scanning microscopy,” Opt. Express **22**, 3405–3413 (2014). [CrossRef]

**8. **L. Wang, P. P. Ho, C. Liu, G. Zhang, and R. R. Alfano, “Ballistic 2-D imaging through scattering walls using an ultrafast optical Kerr gate,” Science **253**, 769–771 (1991). [CrossRef]

**9. **D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science **254**, 1178–1181 (1991). [CrossRef]

**10. **W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science **248**, 73–76 (1990). [CrossRef]

**11. **G. Satat, M. Tancik, and R. Raskar, “Towards photography through realistic fog,” in *IEEE International Conference on Computational Photography* (2018), pp. 1–10.

**12. **S. Kang, S. Jeong, W. Choi, H. Ko, T. D. Yang, J. H. Joo, J. S. Lee, Y. S. Lim, Q. H. Park, and W. Choi, “Imaging deep within a scattering medium using collective accumulation of single-scattered waves,” Nat. Photonics **9**, 253–258 (2015). [CrossRef]

**13. **V. Ntziachristos, “Going deeper than microscopy: the optical imaging frontier in biology,” Nat. Methods **7**, 603–614 (2010). [CrossRef]

**14. **I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. **32**, 2309–2311 (2007). [CrossRef]

**15. **A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics **6**, 283–292 (2012). [CrossRef]

**16. **E. Edrei and G. Scarcelli, “Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media,” Sci. Rep. **6**, 33558 (2016). [CrossRef]

**17. **H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. **6**, 32696 (2016). [CrossRef]

**18. **X. Xie, H. Zhuang, H. He, X. Xu, H. Liang, Y. Liu, and J. Zhou, “Extended depth-resolved imaging through a thin scattering medium with PSF manipulation,” Sci. Rep. **8**, 4585 (2018). [CrossRef]

**19. **S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. **1**, 81 (2010). [CrossRef]

**20. **M. Mounaix, H. B. Aguiar, and S. Gigan, “Temporal recompression through a scattering medium via a broadband transmission matrix,” Optica **4**, 1289–1292 (2017). [CrossRef]

**21. **G. Kim and R. Menon, “Computational imaging enables a see-through lens-less camera,” Opt. Express **26**, 22826–22836 (2018). [CrossRef]

**22. **L. Li, Q. Li, S. Sun, H. Z. Lin, W. T. Liu, and P. X. Chen, “Imaging through scattering layers exceeding memory effect range with spatial-correlation-achieved point-spread-function,” Opt. Lett. **43**, 1670–1673 (2018). [CrossRef]

**23. **S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica **4**, 1209–1213 (2017). [CrossRef]

**24. **C. Guo, J. Liu, W. Li, T. Wu, L. Zhu, J. Wang, G. Wang, and X. Shao, “Imaging through scattering layers exceeding memory effect range by exploiting prior information,” Opt. Commun. **434**, 203–208 (2019). [CrossRef]

**25. **X. Wang, X. Jin, J. Li, X. Lian, X. Ji, and Q. Dai, “Prior-information-free single-shot scattering imaging beyond the memory effect,” Opt. Lett. **44**, 1423–1426 (2019). [CrossRef]

**26. **A. Boniface, B. Blochet, J. Dong, and S. Gigan, “Noninvasive light focusing in scattering media using speckle variance optimization,” Optica **6**, 1381–1385 (2019). [CrossRef]

**27. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**, 2758–2769 (1982). [CrossRef]

**28. **X. Jin, Z. Wang, X. Wang, and Q. Dai, “Depth of field extended scattering imaging by light field estimation,” Opt. Lett. **43**, 4871–4874 (2018). [CrossRef]

**29. **P. Jain and S. E. Sarma, “Measuring light transport using speckle patterns as structured illumination,” Sci. Rep. **9**, 11157 (2019). [CrossRef]

**30. **Y. Choi, P. Hosseini, W. Choi, R. R. Dasari, P. T. C. So, and Z. Yaqoob, “Dynamic speckle illumination wide-field reflection phase microscopy,” Opt. Lett. **39**, 6062–6065 (2014). [CrossRef]

**31. **S. Schott, J. Bertolotti, J. F. Leger, L. Bourdieu, and S. Gigan, “Characterization of the angular memory effect of scattered light in biological tissues,” Opt. Express **23**, 13505–13516 (2015). [CrossRef]

**32. **X. Jin, D. M. S. Wei, and Q. Dai, “Point spread function for diffuser cameras based on wave propagation and projection model,” Opt. Express **27**, 12748–12761 (2019). [CrossRef]

**33. **M. A. Van Dam and R. G. Lane, “Wave-front slope estimation,” J. Opt. Soc. Am. **17**, 1319–1324 (2000). [CrossRef]

**34. **J. Ko and C. C. Davis, “Comparison of the plenoptic sensor and the Shack-Hartmann sensor,” Appl. Opt. **56**, 3689–3698 (2017). [CrossRef]

**35. **R. M. Haralick and L. G. Shapiro, *Computer and Robot Vision* (Addison Wesley, 1992).

**36. **D. Brandner and G. Withers, “Multipolar neuron, Rattus from CIL:2907,” https://doi.org/10.7295/W9CIL2907 (2010).

**37. **G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: an overview,” IEEE J. Sel. Top. Signal Process. **11**, 926–954 (2017). [CrossRef]