## Abstract

Optical microscopy in complex, inhomogeneous media is challenging due to the presence of multiply scattered light that limits the depths at which diffraction-limited resolution can be achieved. One way to circumvent the degradation in resolution is to use speckle-correlation-based imaging (SCI) techniques, which permit imaging of objects inside scattering media at diffraction-limited resolution. However, SCI methods are currently limited to imaging sparsely tagged objects in a dark-field scenario. In this work, we demonstrate the ability to image hidden, moving objects in a bright-field scenario. By using a deterministic phase modulator to generate a spatially incoherent light source, the background contribution can be kept constant between acquisitions and subtracted out. In this way, the signal arising from the object can be isolated, and the object can be reconstructed with high fidelity. With the ability to effectively isolate the object signal, our work is not limited to imaging bright objects in the dark-field case, but also works in bright-field scenarios, with non-emitting objects.

© 2017 Optical Society of America

## 1. Introduction

Optical imaging is challenging in turbid media, where multiple scattering of light causes a degradation of resolution and limits the depths at which we can reliably image (< 1 mm in biological tissue) without having to resort to destructive optical clearing or sectioning techniques [1]. Many approaches currently exist to filter out the multiply scattered light and detect only the unscattered (ballistic) or minimally scattered photons. These include methods such as time and coherence gating, which separate the ballistic photons from the scattered photons based on their transit time to the detector [2, 3]; methods that rely on preserving the initial angular momentum or polarization modulation [4–7]; and methods that rely on spatial confinement, such as confocal and multi-photon microscopy [1, 8]. An issue with methods that rely on detecting only the minimally scattered photons is the maximum achievable depth of penetration, since the chance of detecting quasi-ballistic photons decreases exponentially with increasing depth.

Instead of rejecting the scattered photons, other approaches have aimed to take advantage of the information inherent within the detected speckle field that arises from multiply scattered light. Wavefront shaping (WFS) techniques exploit the principles of time-reversal to undo the effect of scattering and enable focusing of light in thick, scattering media [9–12]. However, WFS usually requires long acquisition times to measure the transmission matrix, and/or the presence of a guide star. On the other hand, speckle-correlation-based imaging (SCI) approaches exploit the angular correlations inherent within the scattering process to reconstruct the hidden object and do not need long acquisition times or a guide star [13,14]. However, SCI methods are limited to working in dark-field scenarios, with sparsely-tagged objects [14], since the detected light must consist solely of light arising from the object.

In this work, we demonstrate imaging of hidden moving objects in a bright-field scenario by leveraging the temporal correlations inherent in the scattering process to separate and remove the dominating contribution from the background [15, 16]. To create a spatially incoherent light source, a spatial light modulator (SLM) was used to apply the same set of random phase patterns during different acquisitions. The use of a deterministic phase modulator ensured that the background contribution remained constant across the detected images. By removing the background component, the speckle pattern from the object was isolated, and the object was reconstructed with high fidelity. Using this technique, we experimentally demonstrate successful recovery of moving objects that would otherwise be obscured by scattering media.

## 2. Principle

Figure 1 presents an overview of our system. A moving object, hidden at a distance *u* behind a scattering medium, is illuminated using a spatially incoherent, narrow-band light source. The scattered light is detected by a high-resolution camera that is placed at a distance *v* from the scattering medium.

In the absence of any correlations in the scattering pattern, the detected image is merely a speckle intensity field. However, by exploiting the deterministic nature of scattering, the hidden object can be recovered [Fig. 1(c)]. Let us first consider the case where light is confined to emit solely within an isoplanatic range, as defined by the angular memory effect (ME). In this case, the detected light can be mathematically represented as

$$I = S * O, \tag{1}$$

where *S* is the point spread function (PSF) of the light scattering process, or equivalently the speckle intensity distribution at the camera arising from a single point source at the object plane, and *O* is the object, defined as the collection of points through which light can be transmitted [14]. For this paper, we use the operator * to denote convolution. The memory effect range can be approximated as $\delta x=\frac{u\lambda}{\pi L}$, where *L* is the thickness of the scattering medium, *λ* is the wavelength of light, and *u* is the distance between the scattering medium and the object.

If we now consider the case of an absorptive object in a bright-field scenario, then the majority of the detected light arises from the background. Using superposition, the detected intensity image *I* can be mathematically described as

$$I = B - S * O, \tag{2}$$

where *B* is the speckle intensity image arising from the scattered light transmitted through the medium, and $S * O$ is the portion that the object would have contributed if it were transmitting, as opposed to blocking, light [Fig. 1(c,i)]. Due to the dominating contribution from the background *B*, we cannot retrieve *O* from *I* alone. However, by acquiring multiple intensity images in which the background, but not the object, remains constant between acquisitions, we can remove the background signal and thereby retrieve the object.

One strategy to achieve this is to use a moving object. If the object dimensions fall within the ME region, the contribution of the object in each image can be represented as the convolution of the object pattern with an acquisition-dependent PSF. As long as the rest of the sample is static, the speckle field arising from the background will remain unchanged and can be subtracted out by taking the difference between captures. That is,

$$\Delta I_n = I_n - I_{n+1} = S_{n+1} * O - S_n * O, \tag{3}$$

where $I_n$ denotes the $n^{th}$ captured image. Since the scattering PSF is a delta-correlated process ($S_n(\mathbf{x}) \star S_n(\mathbf{x}) \approx \delta(\mathbf{x})$), taking the autocorrelation (AC) of the image $\Delta I_n$ yields the object autocorrelation (OAC), plus additional noise terms [Fig. 1(c,iii)]. That is,

$$\Delta I_n \star \Delta I_n = (S_{n+1} * O) \star (S_{n+1} * O) + (S_n * O) \star (S_n * O) - (S_{n+1} * O) \star (S_n * O) - (S_n * O) \star (S_{n+1} * O) \tag{4}$$

$$\approx 2\,(O \star O) + \text{noise terms}. \tag{5}$$

We denote $\Delta I_n \star \Delta I_n$ as the speckle autocorrelation (SAC). The object can be recovered from the SAC by using phase retrieval techniques, such as the Fienup iterative phase retrieval methods, to recover the Fourier phase [Fig. 1(c,iv)] [17]. The resultant object will have an image size dictated by the magnification of the system, $M=-\frac{v}{u}$.
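The background-subtraction and autocorrelation steps above can be checked numerically. The following is a minimal numpy sketch rather than our experimental code: it assumes circular (FFT) convolution, a perfectly correlated shifted PSF (*C*(Δ**x**) ≈ 1), and arbitrary values for the grid size, travel distance, and background strength.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256

def speckle_psf(n, rng):
    """Fully developed speckle intensity: random pupil phase -> |field|^2."""
    field = np.fft.fft2(np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n))))
    s = np.abs(field) ** 2
    return s / s.mean()

def conv(a, b):
    """Circular convolution a * b via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def xcorr(a, b):
    """Circular cross-correlation of mean-subtracted images, centered output."""
    A = np.fft.fft2(a - a.mean())
    B = np.fft.fft2(b - b.mean())
    return np.fft.fftshift(np.real(np.fft.ifft2(A * np.conj(B))))

# Absorptive object: a small transmissive patch on an opaque support
O = np.zeros((N, N))
O[120:136, 124:132] = 1.0

shift = 40                           # object travel distance (camera pixels)
S1 = speckle_psf(N, rng)
S2 = np.roll(S1, shift, axis=1)      # memory effect: shifted copy of the PSF

B = 2000.0 * speckle_psf(N, rng)     # static background, far brighter than the
I1 = B - conv(S1, O)                 # object term (intensities idealized here)
I2 = B - conv(S2, O)

dI = I1 - I2                         # background cancels exactly in this model
sac = xcorr(dI, dI)                  # speckle autocorrelation
c = N // 2
assert sac[c, c] > 0                 # positive OAC copy at the center
assert sac[c, c - shift] < 0         # negative copies at +/- the shift
assert sac[c, c + shift] < 0
```

The difference image cancels the background exactly in this idealized model, and the SAC exhibits the structure described in the text: a positive peak at the center and negative peaks displaced by ± the travel distance.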

### 2.1. Effect of travel distance

Depending on the distance traveled by the object, the PSFs $S_n$, $n = 1, 2, \ldots$ may or may not be correlated. Figure 2 illustrates the effect of travel distance, relative to the ME range, on the SAC. The speckle intensity images $I_1$, $I_2$ were determined using simulation. For comparison, the autocorrelation of the object/target, $A = O \star O$, has also been provided [Fig. 2(A, "Object AC")]. For simplicity, only the case of two image captures ($n = 1, 2$) has been considered.

For a moving object, the associated PSFs *S*_{1}, *S*_{2} will have a degree of correlation *C*(Δ**x**) based on the object travel distance Δ**x**. For scattering media with thicknesses *L* greater than the mean free path, the degree of correlation can be approximated using the angular correlation function

$$C(\Delta\mathbf{x}) \approx \left[\frac{k \Theta L}{\sinh(k \Theta L)}\right]^{2},$$

where $k = 2\pi/\lambda$ is the wavenumber, *L* is the thickness of the scattering medium, and $\Theta \approx \frac{\Delta\mathbf{x}}{u}$ [18–20]. When *C*(Δ**x**) > 0.5, the object is considered to have traveled within the ME field of view. The following sections describe three possible cases in more detail: *C*(Δ**x**) ≈ 1, *C*(Δ**x**) > 0.5, and *C*(Δ**x**) → 0.
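To make these regimes concrete, the correlation function can be evaluated numerically. A short sketch with the sinh-type form of *C*; the values of λ, *u*, and *L* below are illustrative stand-ins, not our experimental parameters:

```python
import numpy as np

def me_correlation(dx, u, L, lam):
    """Angular ME correlation C(dx) ~ [k*theta*L / sinh(k*theta*L)]^2,
    with theta ~ dx/u and k = 2*pi/lam."""
    arg = (2 * np.pi / lam) * (dx / u) * L
    return 1.0 if arg == 0 else (arg / np.sinh(arg)) ** 2

lam = 532e-9    # wavelength (m)
u = 0.25        # medium-to-object distance (m), illustrative
L = 50e-6       # scattering-layer thickness (m), illustrative

# C falls from 1 toward 0 as the travel distance grows
for dx in [0.0, 0.5e-3, 1.0e-3, 3.0e-3]:
    print(f"dx = {dx * 1e3:3.1f} mm  ->  C = {me_correlation(dx, u, L, lam):.3f}")
```

With these numbers, *C* drops from roughly 0.64 at 0.5 mm of travel to roughly 0.2 at 1 mm and to nearly zero at 3 mm, spanning the three cases discussed next.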

#### Case 1: Object travels a distance where *C*(Δ**x**) ≈ 1

In the case where the object travels a small distance (such that *C*(Δ**x**) ≈ 1), we have

$$S_2(\mathbf{x}_i) = S_1(\mathbf{x}_i - \Delta\mathbf{x}_i),$$

where $\mathbf{x} = (x, y)$ and $\mathbf{x}_i = (x_i, y_i)$ are coordinates in the object plane and image plane, respectively; $\Delta\mathbf{x}$ is the distance the object traveled in the object plane; and $\Delta\mathbf{x}_i = M\,\Delta\mathbf{x}$. We can equivalently consider the PSF to be the same in both captures and have the object travel between captures. That is,

$$\Delta I \star \Delta I \approx 2A - A(\mathbf{x}_i - \Delta\mathbf{x}_i) - A(\mathbf{x}_i + \Delta\mathbf{x}_i),$$

where $A = O \star O$ is the object autocorrelation (OAC). The SAC contains three copies of the OAC: a positive copy centered at $\mathbf{x}_i = (0, 0)$, and two negative copies shifted by an amount commensurate with the object travel distance [Fig. 2(B, "Speckle AC")].

Since *C*(Δ**x**) ≈ 1 when Δ**x** ≈ 0, the object may travel a distance shorter than the extent of its autocorrelation. In this case, the SAC will yield positive and negative copies of the OAC that overlap [Fig. 2(i)]. The OAC can be recovered using deconvolution [Fig. 2(i, "Deconv. SAC")]. Using thresholding to remove the negative portions will adversely impact the positive copy and result in an incomplete estimation of the OAC [Fig. 2(i, "SAC>0")]. For the results presented in Fig. 2, the objects were reconstructed by applying an iterative phase retrieval algorithm to the deconvolved SAC [13,14,17].

#### Case 2: Object travels a distance where *C*(Δ**x**) > 0.5

In the regime where the object travels within the angular ME range (*C*(Δ**x**) > 0.5), *S*_{1} and *S*_{2} are correlated. To highlight the impact of the degree of correlation *C*(Δ**x**) on the SAC, we can mathematically represent *S*_{2} as:

$$S_2(\mathbf{x}_i) = C(\Delta\mathbf{x})\, S_1(\mathbf{x}_i - \Delta\mathbf{x}_i) + \sqrt{1 - C(\Delta\mathbf{x})^2}\, S_u(\mathbf{x}_i), \tag{11}$$

where $S_u$ is a speckle intensity pattern that is uncorrelated with $S_1$. The scatter PSFs in the equation above are mean-subtracted speckle intensities. Representing $S_2$ in the form above allows us to preserve the speckle intensity statistics (that is, the speckle intensity variance and mean satisfy $\mathbb{V}[S_1] = \mathbb{V}[S_2]$ and $\mathbb{E}[S_1] = \mathbb{E}[S_2]$, respectively).

Using Eq. (11), Eqs. (4) and (5) become

$$\Delta I \star \Delta I = (S_2 * O) \star (S_2 * O) + (S_1 * O) \star (S_1 * O) - (S_2 * O) \star (S_1 * O) - (S_1 * O) \star (S_2 * O) \tag{12}$$

$$\approx 2A - C(\Delta\mathbf{x})\left[A(\mathbf{x}_i - \Delta\mathbf{x}_i) + A(\mathbf{x}_i + \Delta\mathbf{x}_i)\right] + \text{noise terms}. \tag{13}$$

The SAC still contains three copies of the OAC. However, the ratio of the intensity of the positive and negative OAC copies is determined by the ME correlation function *C*(Δ**x**). Moreover, since *S*_{2}≠ *S*_{1}, there is an additional noise term that increases with decreasing *C*(Δ**x**). Since there is no overlap between the positive and negative OAC copies, the OAC can be retrieved by either thresholding out the portions of the SAC that are smaller than the background value [Fig. 2(ii, “SAC>0”)], or by deconvolving the image [Fig. 2(ii, “Deconv. SAC.”)]. Appendix 1 provides more details on the deconvolution algorithm.

#### Case 3: Object travels a distance where *C*(Δ**x**) ≈ 0

In the case where the object travels outside the memory effect region between captures, *S*_{1} and *S*_{2} are uncorrelated, and Eq. (13) can be simplified as Eq. (5). Comparing the SAC in Fig. 2(iii) with those in Fig. 2(i–ii), we see that the SAC in the case where the object travels farther than the ME region exhibits more noise. This is expected due to the additional noise term caused by *S*_{1} ★ *S*_{2} that is not present in Case 1.

From the above, in all cases (for *C*(Δ**x**) ∈ [0, 1]), we can successfully retrieve the object autocorrelation from the acquired speckle images, *I*_{1}, *I*_{2}. From the estimated OAC, phase retrieval techniques can then be applied to reconstruct the object at diffraction-limited resolution.

## 3. Results

For the experimental demonstration, a laser light beam (CrystaLaser CS532-150-S; *λ* = 532 nm) was expanded (1/*e*^{2} diameter of 20 cm) and reflected off a phase-only spatial light modulator (SLM; Holoeye PLUTO-VIS) to generate a spatially incoherent light source (Fig. 3).

An SLM was used in place of a rotating diffuser in order to generate a deterministic, temporally variant set of 50 to 100 random phase patterns. This set of patterns was used for all the acquisitions to ensure that the background light captured remained constant. The object and camera (pco.edge 5.5, PCO-Tech, USA) were placed at a distance *u* = 20 − 30 cm and *v* = 10 − 15 cm from the scattering media (DG10-120 diffuser; Thorlabs, USA) respectively (Fig. 3).
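A sketch of this idea in code (illustrative only: `slm_pattern_sequence` is our own helper, not a device API, and the seed, pattern count, and resolution are arbitrary). Replaying the same seeded sequence for every acquisition reproduces the identical illumination, and hence an identical background speckle:

```python
import numpy as np

def slm_pattern_sequence(n_patterns, shape, seed=42):
    """Deterministic sequence of random phase patterns: the same seed
    reproduces the identical sequence for every acquisition."""
    rng = np.random.default_rng(seed)
    # 8-bit phase levels mapped onto [0, 2*pi), as a phase-only SLM would display
    return (rng.integers(0, 256, size=(n_patterns,) + shape) / 256) * 2 * np.pi

# The same sequence is replayed while capturing I_1 and I_2, so the
# time-averaged background speckle is identical between captures.
seq_a = slm_pattern_sequence(50, (64, 64))
seq_b = slm_pattern_sequence(50, (64, 64))
assert np.array_equal(seq_a, seq_b)
```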

To ensure that only the object moved between successive image captures, a transmissive SLM (tSLM; Holoeye LC2002) coupled with a polarizer (Thorlabs LPVISE200-A) was used for amplitude modulation and served as the object (Fig. 4). For each object, a set of n = 4 images, *I*_{1}, ..., *I*_{4}, was acquired, with the object moving 1.5 mm between each acquisition. The raw camera images [Fig. 4(b)] display a seemingly random light pattern that is similar for different objects, owing to the dominant contribution of the background.

From each successive pair of acquired images, the OAC [Fig. 4(d)] was estimated by deconvolving the SAC. The deconvolved SAC images were then averaged to reduce noise and yield a better estimate of the OAC. A Fienup-type iterative phase retrieval method was applied to reconstruct the hidden object with high fidelity [Fig. 4(e)] [13,14,17]. One modification that was made to the algorithm was to add an object support to the object constraints; this object support was determined from the OAC support [21,22]. In all cases, the obscured object was successfully reconstructed [Fig. 4(e)].
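The reconstruction step can be sketched with the basic error-reduction member of the Fienup family; the experimental processing additionally used hybrid input-output updates and an object support derived from the OAC support [17,21,22]. The object, grid size, and iteration count below are arbitrary, and the only object-domain constraint applied here is non-negativity:

```python
import numpy as np

def error_reduction(mag, n_iter=200, seed=1):
    """Fienup error-reduction: alternately impose the measured Fourier
    magnitude and non-negativity in the object domain."""
    rng = np.random.default_rng(seed)
    g = rng.random(mag.shape)
    errs = []
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        errs.append(np.linalg.norm(np.abs(G) - mag))    # Fourier residual
        gp = np.real(np.fft.ifft2(mag * np.exp(1j * np.angle(G))))
        g = np.clip(gp, 0, None)                        # object-domain constraint
    return g, errs

# Synthetic OAC of a simple binary object; F(OAC) = |F(O)|^2 gives the magnitude
N = 64
O = np.zeros((N, N))
O[28:36, 30:34] = 1.0
oac = np.real(np.fft.ifft2(np.abs(np.fft.fft2(O)) ** 2))
mag = np.sqrt(np.clip(np.real(np.fft.fft2(oac)), 0, None))

recon, errs = error_reduction(mag)
assert errs[-1] < errs[0]     # the Fourier residual shrinks over the iterations
```

Error reduction can stagnate; the hybrid input-output variant and the support constraint mitigate this, and any reconstruction is determined only up to a lateral shift and a 180° flip.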

To experimentally demonstrate the effect of object travel distance, we moved an object a distance of 0.5, 1, and 3 mm between image acquisitions, and looked at the corresponding SAC and reconstructed object (Fig. 5). As expected, the SAC contained three copies of the OAC. We also compared the effect of processing the SAC using deconvolution [Fig. 5(b)] vs. thresholding [Fig. 5(c)].

For Case i, the object traveled a distance Δ**x** < *δ***x**, and both the object and SAC overlapped in space between successive acquisitions. In the case of object overlap, only the non-overlapping portion of the object can be retrieved [Fig. 5(i)]. Comparing the results of deconvolution and thresholding, the reconstructed image from the deconvolved SAC more closely resembles the original object [Fig. 5(i,b)]. However, in both cases, what we are left with is an incomplete OAC and reconstructed object.

For Case ii, the object traveled a distance *δ***x** < Δ**x** ≤ 2*δ***x**. Since the OAC support is approximately twice the object support, the positive and negative copies of the OAC overlapped [Fig. 5(ii)] [21]. Due to the overlap, thresholding resulted in an imperfect object reconstruction [Fig. 5(ii,c)]. In contrast, by deconvolving, the signal from the negative copies can be used to gain a better estimate of the OAC, from which the object can be reconstructed [Fig. 5(ii,b)].

For Case iii, the object traveled a distance Δ**x** >> 2*δ***x**, and there was no overlap in the SAC. Due to the large Δ**x**, *C*(Δ**x**) decreased, and correspondingly, the noise increased. Since the signal-to-noise ratio (SNR) of the negative copies decreased, the entire OAC cannot be seen in the negative copies [Fig. 5(iii,a)]. Performing a deconvolution therefore results in a noisy, imperfect OAC [Fig. 5(iii,b)], and it is more advisable to use thresholding to retain only the positive portion of the SAC [Fig. 5(iii,c)]. Comparing the reconstructed objects from the two approaches, the object from the thresholded result more closely resembles the original object.

### 3.1. Imaging moving objects hidden between scattering media

To further demonstrate our imaging technique, we placed a moving object between two diffusers (Newport 10° Light Shaping Diffuser, Thorlabs DG10-220-MD) [Fig. 6(A)]. A moving object (a bent black wire) was flipped in and out of the light path between image captures, such that *I*_{2} = *B*. We blocked the partially-developed speckle field (from the propagation of the SLM phase pattern) and used only the fully-developed speckle pattern [23]. This fully-developed speckle pattern was transmitted through both scattering media and the moving object, and the scattered light exiting the second diffuser was detected by a camera.

The background halo from each detected speckle intensity image was estimated and removed by performing Gaussian filtering (500×500 kernel, *σ* = 100), and then dividing each image by the background halo [14]. The SAC was then computed to estimate the OAC, from which phase retrieval was applied to reconstruct the hidden object. Although the object is fully obscured from both sides by scattering media and cannot be resolved from the camera image alone, using our technique, we were able to successfully reconstruct the hidden object with high fidelity [Fig. 6(B)].
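The halo-removal step can be sketched as follows (a numpy-only sketch: the Gaussian blur is applied in the Fourier domain rather than with the 500×500 spatial kernel, and the test image, envelope, and blur width are illustrative):

```python
import numpy as np

def remove_halo(img, sigma):
    """Estimate the low-frequency background envelope with a Gaussian blur of
    width sigma pixels (applied in the Fourier domain) and divide it out,
    leaving the fine speckle modulation."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    halo = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
    return img / np.maximum(halo, 1e-12)

rng = np.random.default_rng(3)
N = 256
yy, xx = np.mgrid[:N, :N]
envelope = np.exp(-((xx - N / 2) ** 2 + (yy - N / 2) ** 2) / (2 * 60.0 ** 2))
speckle = rng.exponential(1.0, (N, N))    # fully developed speckle intensity
img = envelope * speckle                  # speckle modulated by a bright halo
flat = remove_halo(img, sigma=20)         # halo divided out, speckle retained
```

After division, the low-frequency envelope of the image is approximately flat, so the subsequent autocorrelation is not dominated by the halo.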

## 4. Discussion and conclusion

In this paper, we demonstrated successful reconstruction of moving targets that were hidden behind an optically turbid medium. Although the angular memory effect has already been used to demonstrate imaging of hidden targets, to the best of our knowledge, these prior systems were limited to imaging dark-field, sparsely-tagged objects [13,14,24]. We extended this work to imaging in the bright-field scenario by exploiting the temporal correlations inherent in the scattering process to remove the dominating contribution from the background and isolate the signal arising from the object [15,16]. Although we demonstrated our results on non-emitting objects in the bright-field scenario, our technique works equally well with transmissive or reflective objects. A cursory examination reveals that, when $I_n = B + S_n * O$ and $\Delta I = I_n - I_{n+1}$, the speckle autocorrelation is still given by Eq. (5), similar to imaging absorptive objects in the bright-field scenario. In the remainder of this section, we discuss some of the factors that impact system performance.

Firstly, our method depends on the angular correlations inherent in the scattering process. Thus, the object dimension should fall within the angular memory effect field of view (FOV), approximated using the full-width-half-maximum (FWHM) of the correlation function,
$\frac{u\lambda}{\pi L}$. The axial extent of the object, *δz*, should also fall within the axial decorrelation length
$\frac{2\lambda}{\pi}{\left(\frac{u}{D}\right)}^{2}$ [25]. Since the ME FOV is inversely proportional to *L*, our technique works best with thin scattering media, or through more anisotropically scattering media, since anisotropy enhances the angular memory effect range [20]. Strongly anisotropic media, such as biological tissue, also exhibit the translational memory effect, which may be exploited to further improve the fidelity of imaging through scattering layers [26].
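For a rough sense of scale, both constraints can be evaluated numerically; the values of *u*, *L*, and *D* below are illustrative stand-ins, not our experimental parameters:

```python
import numpy as np

lam = 532e-9    # wavelength (m)
u = 0.25        # medium-to-object distance (m), illustrative
L = 50e-6       # scattering-layer thickness (m), illustrative
D = 5e-3        # illuminated diameter on the medium (m), illustrative

fov = u * lam / (np.pi * L)              # lateral ME field of view
dz = (2 * lam / np.pi) * (u / D) ** 2    # axial decorrelation length [25]

print(f"lateral ME FOV ~ {fov * 1e3:.2f} mm")
print(f"axial range    ~ {dz * 1e3:.2f} mm")
```

With these numbers both extents come out just under a millimeter, and halving *L* doubles the lateral FOV, reflecting the inverse proportionality noted above.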

Secondly, to maximize SNR and minimize overlap, the object travel distance should be such that *δx* < Δ*x* and *C*(Δ**x**) ≥ 0.5, since smaller values of *C*(Δ**x**) result in higher levels of noise. However, if the object moves such a large distance that it no longer falls within the laser beam, then *I*_{2} = *B* and Δ*I* = *S*_{1} * *O*, and we can again retrieve the object with high fidelity. In all these cases, successful retrieval of the object depends on the background light pattern remaining constant between successive image captures. Thus, the illuminated portion of the tissue should remain constant between image captures, and the time between image captures should fall well within the temporal decorrelation time of the scattering sample. For biological samples, the temporal decorrelation time is related to the motion of the scatterers embedded within [27].

Imaging through biological samples can be achieved using a faster system. The imaging speed in our current design was limited by the refresh rate of the SLM (≈ 8 Hz) and by the exposure time required to capture an image (50–200 ms). With a more powerful laser, or a faster deterministic random phase modulator, it would be possible to shorten our imaging time and extend our work to imaging within non-static samples, such as biological tissue.

A third factor in the fidelity of the reconstruction is the complexity of the object and the size of the background relative to the object. The dynamic range of the camera should be large enough to resolve the equivalent speckle signal from the object. Since the signal contrast is inversely related to the object complexity [14], the dynamic range of the camera limits the maximum object complexity. To maximize the SNR, the camera exposure and laser power should be adjusted such that the full well depth of the camera is utilized. A camera with a larger well depth and dynamic range would provide higher SNR and the capability to image more complex objects. The diameter of the aperture in the system can be adjusted to fine-tune the image resolution and control the object complexity.

Lastly, each speckle grain at the camera should satisfy the Nyquist sampling criterion and be easily resolvable. At the same time, the number of speckle grains that are captured in each image should also be maximized in order to maximize SNR. Although the scattering PSFs are ideally a delta-correlated process, in practice, we are only sampling a finite extent of the PSF. Thus, the PSF autocorrelation yields a delta function plus some background noise which can be minimized by increasing the number of captured speckle grains [14]. Due to Nyquist requirements, the maximum number of speckle grains is a function of the camera resolution; thus, a high resolution camera would provide lower noise. Another method to reduce this speckle noise is to take multiple acquisitions and compute the average of the speckle autocorrelation images.

In conclusion, we demonstrated successful imaging of hidden moving targets through scattering samples. The temporal and angular correlations inherent in the scattered light pattern allowed us to reconstruct the hidden object in cases where multiply scattered light dominates over ballistic light. This paper presented a first proof of concept. Although we demonstrated imaging of binary-amplitude targets, our system can also be extended to imaging gray-scale targets [28]. Since our imaging technique utilizes the angular memory effect, it is scalable. Moreover, our method does not require access inside the scattering media, and can therefore be used as a black-box imaging system. With appropriate optimization, this opens up the potential for use in applications involving the tracking of moving objects in turbid environments, such as fog or underwater.

## Appendix 1 - Deconvolving the speckle autocorrelation

To deconvolve the speckle autocorrelation (SAC), Δ*I* ★ Δ*I*, Wiener deconvolution was applied to reduce the deconvolution noise. We briefly describe the process here. We can rewrite Eq. (13) as

$$\Delta I \star \Delta I = h * A + n,$$

where $h(\mathbf{x}_i) = 2\delta(\mathbf{x}_i) - C(\Delta\mathbf{x})\,\delta(\mathbf{x}_i \pm \Delta\mathbf{x}_i)$, $A = O \star O$, and *n* is the noise term. In this case, Wiener deconvolution estimates *A* by applying

$$\hat{A} = \mathcal{F}^{-1}\left[\frac{\mathcal{F}(h)^{*}\,\mathcal{F}(\Delta I \star \Delta I)}{|\mathcal{F}(h)|^{2} + k}\right],$$

where $\mathcal{F}$ is the Fourier transform operator, and $k=\frac{\mathcal{F}(n)}{\mathcal{F}(g)}\approx \frac{1}{\mathit{SNR}}$ estimates the inverse SNR of the signal [29]. Since all object ACs have a peak value of $A({\mathbf{x}}_{i}=(0,0))={\sum}_{\mathbf{x}}{O}^{2}$, to determine *h* from the SAC, we estimated the value of *C*(Δ**x**) from the ratio of the negative and positive peak values in the SAC. The locations of the negative peaks, with respect to the centered, positive peak, provided the value of the shift Δ**x**_{i}.
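A compact numerical sketch of this procedure (numpy-only and noiseless, with an arbitrary toy OAC, shift, and correlation value; the kernel *h* is built exactly as defined above):

```python
import numpy as np

def wiener_deconv(sac, h, k=1e-3):
    """Wiener-deconvolve the SAC by the three-peak kernel h:
    A_hat = F^-1[ conj(F(h)) F(sac) / (|F(h)|^2 + k) ]."""
    H = np.fft.fft2(np.fft.ifftshift(h))
    SAC = np.fft.fft2(np.fft.ifftshift(sac))
    out = np.fft.ifft2(np.conj(H) * SAC / (np.abs(H) ** 2 + k))
    return np.fft.fftshift(np.real(out))

def conv2(a, b):
    """Circular convolution of two centred arrays."""
    return np.fft.fftshift(np.real(np.fft.ifft2(
        np.fft.fft2(np.fft.ifftshift(a)) * np.fft.fft2(np.fft.ifftshift(b)))))

N, c, s, C = 128, 64, 20, 0.8   # grid, centre index, shift, C(dx) (toy values)

# Kernel h(x) = 2*delta(x) - C*delta(x - dx) - C*delta(x + dx), centred
h = np.zeros((N, N))
h[c, c] = 2.0
h[c, c - s] = h[c, c + s] = -C

# Toy OAC: a centred pyramid (object autocorrelations peak at the centre)
yy, xx = np.mgrid[:N, :N]
A = np.clip(8.0 - np.abs(xx - c) - np.abs(yy - c), 0, None)

sac = conv2(h, A)               # noiseless SAC = h * A
A_hat = wiener_deconv(sac, h)   # recovered OAC
assert np.linalg.norm(A_hat - A) < 0.05 * np.linalg.norm(A)
```

In this noiseless sketch the positive SAC peak equals twice the OAC peak, the negative peaks sit at ± the shift with weight *C*(Δ**x**), and the Wiener filter recovers the OAC to within a few percent.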

## Funding

National Institutes of Health (NIH 1U01NS090577); GIST-Caltech Collaborative Research (CG2016); Natural Sciences and Engineering Research Council of Canada (NSERC PGSD3).

## Acknowledgments

The authors would like to thank Joshua Brake for helpful feedback on the manuscript.

## References and links

**1. **V. Ntziachristos, “Going deeper than microscopy: the optical imaging frontier in biology,” Nat. Methods **7**, 603–614 (2010). [CrossRef] [PubMed]

**2. **D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science **254**, 1178 (1991). [CrossRef] [PubMed]

**3. **S. Andersson-Engels, O. Jarlman, R. Berg, and S. Svanberg, “Time-resolved transillumination for medical diagnostics,” Opt. Lett. **15**, 1179–1181 (1990). [CrossRef] [PubMed]

**4. **G. H. Chapman, M. Trinh, N. Pfeiffer, G. Chu, and D. Lee, “Angular domain imaging of objects within highly scattering media using silicon micromachined collimating arrays,” IEEE J. Quantum Electron. **9**, 257–266 (2003). [CrossRef]

**5. **S. Kang, S. Jeong, W. Choi, H. Ko, T. D. Yang, J. H. Joo, J.-S. Lee, Y.-S. Lim, Q.-H. Park, and W. Choi, “Imaging deep within a scattering medium using collective accumulation of single-scattered waves,” Nat. Photon. **9**, 253–258 (2015).

**6. **H. Ramachandran and A. Narayanan, “Two-dimensional imaging through turbid media using a continuous wave light source,” Opt. Commun. **154**, 255–260 (1998). [CrossRef]

**7. **S. Sudarsanam, J. Mathew, S. Panigrahi, J. Fade, M. Alouini, and H. Ramachandran, “Real-time imaging through strongly scattering media: seeing through turbid media, instantly,” Sci. Rep. **6**, 25033 (2016). [CrossRef] [PubMed]

**8. **F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods **2**, 932–940 (2005). [CrossRef] [PubMed]

**9. **A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photon. **6**, 283–292 (2012). [CrossRef]

**10. **I. M. Vellekoop and A. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. **32**, 2309–2311 (2007). [CrossRef] [PubMed]

**11. **X. Xu, H. Liu, and L. V. Wang, “Time-reversed ultrasonically encoded optical focusing into scattering media,” Nat. Photon. **5**, 154–157 (2011). [CrossRef]

**12. **Y. M. Wang, B. Judkewitz, C. A. DiMarzio, and C. Yang, “Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light,” Nat. Commun. **3**, 928 (2012). [CrossRef] [PubMed]

**13. **J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature **491**, 232–234 (2012). [CrossRef] [PubMed]

**14. **O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photon. **8**, 784–790 (2014). [CrossRef]

**15. **E. H. Zhou, H. Ruan, C. Yang, and B. Judkewitz, “Focusing on moving targets through scattering samples,” Optica **1**, 227–232 (2014). [CrossRef]

**16. **C. Ma, X. Xu, Y. Liu, and L. V. Wang, “Time-reversed adapted-perturbation (trap) optical focusing onto dynamic objects inside scattering media,” Nat. Photon. **8**, 931–936 (2014). [CrossRef]

**17. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**, 2758–2769 (1982). [CrossRef] [PubMed]

**18. **S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. **61**, 834 (1988). [CrossRef] [PubMed]

**19. **R. Berkovits, M. Kaveh, and S. Feng, “Memory effect of waves in disordered systems: a real-space approach,” Phys. Rev. B **40**, 737 (1989). [CrossRef]

**20. **S. Schott, J. Bertolotti, J.-F. Léger, L. Bourdieu, and S. Gigan, “Characterization of the angular memory effect of scattered light in biological tissues,” Opt. Express **23**, 13505–13516 (2015). [CrossRef] [PubMed]

**21. **J. R. Fienup, T. Crimmins, and W. Holsztynski, “Reconstruction of the support of an object from the support of its autocorrelation,” J. Opt. Soc. Am. **72**, 610–624 (1982). [CrossRef]

**22. **J. Fienup and C. Wackerman, “Phase-retrieval stagnation problems and solutions,” J. Opt. Soc. Am. A **3**, 1897–1907 (1986). [CrossRef]

**23. **B. Ruffing and J. Fleischer, “Spectral correlation of partially or fully developed speckle patterns generated by rough surfaces,” J. Opt. Soc. Am. A **2**, 1637–1643 (1985). [CrossRef]

**24. **O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photon. **6**, 549–553 (2012). [CrossRef]

**25. **I. Freund, “Looking through walls and around corners,” Phys. A **168**, 49–65 (1990). [CrossRef]

**26. **B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. **11**, 684–689 (2015). [CrossRef]

**27. **J. Brake, M. Jang, and C. Yang, “Analyzing the relationship between decorrelation time and tissue thickness in acute rat brain slices using multispeckle diffusing wave spectroscopy,” J. Opt. Soc. Am. A **33**, 270–275 (2016). [CrossRef]

**28. **H. Li, T. Wu, J. Liu, C. Gong, and X. Shao, “Simulation and experimental verification for imaging of gray-scale objects through scattering layers,” Appl. Opt. **55**, 9731–9737 (2016). [CrossRef] [PubMed]

**29. **R. C. Gonzalez and R. E. Woods, *Digital Image Processing* (3rd Edition) (Prentice-Hall, Inc., 2006).