Tracking trajectories of objects is conventionally achieved by direct beam probing or by sequential imaging of the target during its evolution. However, these strategies fail quickly when the direct line of sight is inhibited. Here, we propose and experimentally demonstrate real-time tracking of objects that are completely surrounded by scattering media which practically conceal them. We show that full 3D motion can be effectively encoded in the statistical properties of spatially diffused but temporally coherent radiation. The method relies on measurements of integrated scattered intensity performed anywhere outside the disturbance region, which affords flexibility for different sensing scenarios as well as low-light capabilities.
© 2017 Optical Society of America
Aside from imaging, tracking the position of a scattering object is of paramount importance for biomedical [1,2] and remote sensing applications [3–5]. Tracking scattering objects is commonly accomplished by RADAR and LADAR technologies that rely on directional probe beams that can be scanned spatially or angularly [3,5]. Although very powerful and widespread, these approaches are less effective in perturbed environments or when operating on low-visibility targets obscured by other scattering media [6–10].
Of course, one way to track a target in the presence of an obscurant is to image it repeatedly over time. While remarkable advances have been made in imaging through obscurants, this approach requires complex optical instruments and extensive data processing, which may be impractical for tracking fast-moving objects [11–15].
However, to track an object, one doesn’t necessarily need to “see” it! Capturing successive images of a target for further processing is not critical for tracking. For instance, the movement of an object hidden from direct line of sight can be followed using a pulsed beam and time-gating the light scattered from the target, even though the light reaches the detector only through indirect paths. Of course, this method also fails when the environmental scattering increases or when the object is completely surrounded by scattering obscurants.
Here, we present a conceptually different approach for tracking an object in conditions where neither controlling the directionality of the beam nor scanning its direction are possible. We address the situation where the target is completely surrounded by heavily scattering media which completely conceal it. Conceptually, the target is placed inside a scattering enclosure that renders direct imaging impossible, as shown schematically in Fig. 1. In this scenario, a primary source of temporally coherent light is directed onto one of the scattering walls, which creates an effective secondary source for the radiation inside the scattering enclosure. The light scattered from the target is further randomized when passing through the scattering walls and is then collected outside the box by an integrating detector.
The tracking problem illustrated in Fig. 1 is solved by taking advantage of the fundamental properties of partially coherent light. We will prove that the temporal and the spatial characteristics of the field created inside the scattering box can be used to encode the full 3D trajectory of an object that is effectively invisible from the outside. Even though the object is completely surrounded by multiple scattering media, its motion can be tracked in real time through a statistical analysis of the integrated light. We will show that, when the dynamics of the diffused light inside the enclosure can be controlled at will, the variance $\sigma^2$ and the decorrelation time $\tau_c$ of the integrated intensity provide sufficient information about the motion of the target. Moreover, we will demonstrate both analytically and experimentally that $\sigma^2$ and $\tau_c$ depend linearly and independently on the target displacement along the axial and transversal directions, respectively.
2. ENCODING MOTION IN SPECKLE STATISTICS
Coherent scattering generates optical fields that can vary both in time and space [16,17]. In this section, we will show that the trajectory of a scattering object can be recovered using the spatial and temporal statistics of a partially coherent field that illuminates the object. Let us consider the fluctuating scattered field that results from the coherent interaction between an illumination field $U_i(\mathbf{r})$ and a generic, spatially locally homogeneous scattering potential $F(\mathbf{r}) = k^2[n^2(\mathbf{r})-1]/4\pi$, where $n(\mathbf{r})$ and $k = 2\pi/\lambda$ are the refractive index distribution and the wavenumber at wavelength $\lambda$ [18]. This scattering potential is characterized by its degree of spatial correlation $\mu_F$ and its average strength $\langle |F|^2 \rangle$, where $\langle \cdot \rangle$ represents the average taken over different realizations of the scattering potential $F$. The scattered field $U_s$, considered to be statistically stationary at least in the wide sense, is fully characterized by its cross-correlation function $W_s(\mathbf{r}_1,\mathbf{r}_2) = \langle U_s^*(\mathbf{r}_1) U_s(\mathbf{r}_2) \rangle$, where $\langle \cdot \rangle$ denotes the average taken over different realizations of the interaction [19–21]. In practice, one usually measures the scattered field intensity $I_s(\mathbf{r}) = W_s(\mathbf{r},\mathbf{r})$, which, within the accuracy of the first Born approximation, is given by

$$I_s(\mathbf{r}) = \iint W_i(\mathbf{r}_1,\mathbf{r}_2)\, F^*(\mathbf{r}_1)\, F(\mathbf{r}_2)\, \frac{e^{ik(|\mathbf{r}-\mathbf{r}_2| - |\mathbf{r}-\mathbf{r}_1|)}}{|\mathbf{r}-\mathbf{r}_1|\,|\mathbf{r}-\mathbf{r}_2|}\, d\mathbf{r}_1\, d\mathbf{r}_2, \tag{1}$$

where $W_i$ is the cross-correlation function of the illumination field. It follows from Eq. (1) that the intensity integrated over the entire volume of interaction varies as

$$P = \int_{\Omega} I_s(\mathbf{r})\, d\mathbf{r} \approx \beta \int I_i(\mathbf{r})\, |F(\mathbf{r})|^2\, d\mathbf{r}. \tag{2}$$

Equation (2) describes the intensity outcome of the coherent process of interaction and depends on both the degree of spatial correlation of the scattering potential and the degree of spatial coherence of the illumination field. This dependence can be generically included in the pre-factor $\beta$, which depends on $\mu_F$, evaluated over the extent of the target, and on the angular domain $\Omega$ supported by the detector over which the scattered intensity is integrated. A detailed derivation of Eq. (2) is presented in Supplement 1. Of course, in practice, one cannot effectively collect the entire scattered intensity. It is worth noting, however, that in the scenario of interest here, further scrambling of the scattered field occurs during propagation through the second diffusive layer.
This directional homogenization together with the large-area integration makes the detected intensity well approximated by Eq. (2).
As is apparent from Eq. (2), the integrated intensity can vary by changing either the realization of the illumination intensity or the realization of the scattering potential. For rigid objects, the latter is equivalent to changing the center of mass of the potential distribution, which can also be interpreted as the evolution of the object along a given trajectory. Consequently, it can be envisioned that, if the statistical properties of the illumination intensity can be controlled, one can use the temporal fluctuations of the measured intensity to acquire information about the target motion. In other words, in a scattering experiment, information about the scattering potential can be retrieved by controlling the stochastic properties of the illumination field [22–26]. In the following, we will demonstrate that although the integrated intensity in Eq. (2) cannot provide spatially resolved information, its temporal fluctuations relate directly to the motion of the scattering target. We will show that the decorrelation time $\tau_c$ and the normalized variance $\sigma^2$ of the time-varying signal $P(t)$ can be used to track the position of the scattering target.
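The encoding expressed by Eq. (2) is easy to verify numerically. The following minimal sketch (all function names, grid sizes, and parameters are our own illustrative choices, not those of the experiment) builds independent speckle realizations, overlaps each with a fixed scattering strength $|F|^2$, and shows that the integrated intensity fluctuates from realization to realization:

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(n, grain, rng):
    """Synthetic speckle: complex Gaussian field low-pass filtered to a grain size."""
    field = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    fx = np.fft.fftfreq(n)
    mask = np.hypot(*np.meshgrid(fx, fx)) < 1.0 / (2.0 * grain)
    return np.abs(np.fft.ifft2(np.fft.fft2(field) * mask)) ** 2

# Target "scattering strength" |F|^2: a small off-center square
n = 128
F2 = np.zeros((n, n))
F2[40:56, 60:76] = 1.0

# Integrated intensity P for independent illumination realizations, as in Eq. (2)
P = np.array([np.sum(speckle_intensity(n, grain=6, rng=rng) * F2)
              for _ in range(200)])

print(P.mean(), P.std())  # P fluctuates from realization to realization
```

Changing either the speckle realization or the position of the target mask changes $P$; this is the degree of freedom exploited in the remainder of this section.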
Let us start by examining the possibility of following the transversal motion. Of course, when different realizations of the illuminating field are completely uncorrelated, the transversal motion of the target will have no effect on the statistics of the integrated intensity detected outside the “box.” However, if a certain degree of correlation exists between different illumination patterns, the target is exposed to more or less similar fields, depending on its transversal motion between successive realizations of the illumination field. Consequently, it is expected that the dynamics of $P(t)$ will depend on the transversal velocity of the scattering object. One can then exploit the decorrelation time $\tau_c$ of the temporal autocorrelation function $C(\tau)$ to characterize this transversal velocity component.
A certain degree of spatial correlation between successive speckle realizations can be created, for instance, by translating the illumination field inside the box along a transversal direction $\mathbf{v}$. It follows that when the target moves along $\mathbf{v}$ or $-\mathbf{v}$, the integrated signal $P(t)$ will decorrelate in time slower or faster, respectively. If the translation of the illumination speckle field can be imposed along any two non-parallel transversal vectors $\mathbf{v}_1$ and $\mathbf{v}_2$, the dynamic signal can provide information about the 2D transversal motion of the target.
Of course, without feedback from inside the scattering “box,” one cannot deterministically modify the illumination speckle. One can, however, take advantage of the so-called memory effect to effectively translate a speckle pattern over short distances [27,28]. In spite of the multiple scattering, when the primary source of illumination is tilted outside the box, the speckles inside follow over a small angular range $\Delta\theta \sim \lambda/(2\pi L)$, where $L$ is the thickness of the wall, as illustrated in Fig. 2(a). When the target moves, the memory effect corresponding to the intensity scattered from the target is affected: its angular range increases or decreases if the target moves along the same or the opposite direction. As a result, the fluctuations of the scattered intensity decorrelate at different rates depending on the target displacement.
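As a rough numerical check, taking the commonly quoted memory-effect range $\Delta\theta \sim \lambda/(2\pi L)$ with the He–Ne wavelength and the wall thickness quoted in Section 3 gives:

```python
import math

lam = 632.8e-9   # He-Ne wavelength, m
L = 650e-6       # scattering-wall thickness from the experiment, m

dtheta = lam / (2 * math.pi * L)  # memory-effect angular range, rad
print(dtheta)
```

The range is on the order of $10^{-4}$ rad, which is why only small tilts of the primary beam are needed to translate the internal speckle.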
Let us analyze this process in a more quantitative manner. When, during the measurement time $T$, the speckles are translated with constant velocity $\mathbf{v}_s$ along a given transversal direction while the target moves with velocity $\mathbf{v}_t$, the integrated intensity in Eq. (2) varies in time as $P(t)$. The corresponding autocorrelation function of these intensity fluctuations becomes

$$C(\tau) = \langle \delta P(t)\, \delta P(t+\tau) \rangle \propto \big(\Gamma_i \ast \Gamma_F\big)\!\left[(\mathbf{v}_s - \mathbf{v}_t)\,\tau\right], \tag{3}$$

where $\Gamma_i$ and $\Gamma_F$ represent the spatial autocorrelation functions of the illumination speckle field intensity and of the strength of the targeted scattering potential, respectively, and $\ast$ denotes the convolution operator. It has been shown in [27–29] that as a speckle field decorrelates in time due to the memory effect, the corresponding intensity autocorrelation function varies as $C(\tau) \propto \exp(-\tau/\tau_s)$, where $\tau_s$ is the characteristic decorrelation time of the speckle dynamics. It follows from Eq. (3) that the autocorrelation function of the measured intensity decays in time as

$$C(\tau) \propto \exp\!\left(-\frac{|\mathbf{v}_s - \mathbf{v}_t|\,\tau}{\delta}\right), \tag{4}$$

where $\delta$ is the transversal correlation length of the illumination speckle intensity.
As can be seen, the decorrelation time depends on the difference between the transversal velocity of the speckle field and the target velocity. In addition, when the characteristic time of the speckle dynamics is smaller than the time scale associated with the target motion, $\tau_s \ll \tau_t$, the target velocity along each axis can be approximated as constant over a short measurement time (similar to the frozen-flow model). Consequently, it can be shown that the integrated intensity decorrelates after a specific delay time,

$$\tau_c = \frac{\delta}{|\mathbf{v}_s - \mathbf{v}_t|} \approx \tau_0 \left(1 + \frac{\Delta x_t}{\Delta x_s}\right), \tag{5}$$

where $\tau_0 = \delta/v_s$, and $\Delta x_t$ and $\Delta x_s$ are the displacements of the target and of the speckle pattern, projected on the direction of the speckle motion, accumulated during the measurement time $T$; the linear approximation holds for $v_t \ll v_s$. The decorrelation time thus depends linearly on the target displacement. A detailed derivation of Eqs. (3)–(5) is presented in Supplement 1.
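The sensitivity of the decorrelation time to the relative speckle-target velocity can be reproduced in a minimal simulation (our own construction, not the analysis code of the experiment): a synthetic speckle pattern is translated one pixel per step while the target mask moves in the opposite direction, and the 1/e decorrelation lag of the integrated signal is extracted.

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_intensity(n, grain, rng):
    """Synthetic speckle: low-pass filtered complex Gaussian field."""
    field = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    fx = np.fft.fftfreq(n)
    mask = np.hypot(*np.meshgrid(fx, fx)) < 1.0 / (2.0 * grain)
    return np.abs(np.fft.ifft2(np.fft.fft2(field) * mask)) ** 2

def decorrelation_lag(v_target, n=256, grain=8, steps=120, trials=10):
    """1/e decorrelation lag of P(t): the speckle moves 1 px/step while the
    target mask moves v_target px/step in the opposite direction."""
    F2 = np.zeros((n, n))
    F2[n//2 - 10:n//2 + 10, n//2 - 10:n//2 + 10] = 1.0  # 20 px square target
    Cs = []
    for _ in range(trials):  # average C(tau) over speckle realizations
        I = speckle_intensity(n, grain, rng)
        P = np.array([np.sum(np.roll(I, t, axis=1) *
                             np.roll(F2, -v_target * t, axis=1))
                      for t in range(steps)])
        dP = P - P.mean()
        C = np.correlate(dP, dP, "full")[steps - 1:]
        Cs.append(C / C[0])
    C = np.mean(Cs, axis=0)
    return int(np.argmax(C < 1.0 / np.e))  # first lag below 1/e

lags = [decorrelation_lag(v) for v in (0, 1, 2, 3)]
print(lags)
```

As in the measurement of Fig. 3, moving the target against the speckle translation increases the relative speed and shortens the decorrelation lag.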
Having established means to encode the transversal motion of the target, we will now discuss the possibility of tracking its axial movement. As is well known, although the speckle field is statistically homogeneous in the transversal plane, its transversal correlation length, i.e., the extent of $\Gamma_i$, increases with the distance from the secondary source, i.e., the wall through which the radiation enters the “box” [19,20]. This increase in speckle size provides adequate means for encoding the axial position of the target, because it affects the level of fluctuations of the detected intensity. From Eq. (2), it follows that the variance of the integrated intensity,

$$\sigma^2 = \langle P^2 \rangle - \langle P \rangle^2 \propto \int \Gamma_i(\Delta\mathbf{r})\, \Gamma_F(\Delta\mathbf{r})\, d\Delta\mathbf{r}, \tag{6}$$

is determined by the overlap between the correlation functions of the illumination intensity and of the scattering potential [23,26,30,31]. According to the Cauchy–Schwarz inequality, the variance attains its maximum when the correlation length of the illumination intensity is of the order of the characteristic length of the targeted scattering potential. In practice, the extent of $\Gamma_i$, i.e., the speckle size in the illumination field, increases in propagation and can, therefore, be used to gauge the axial position of the target. On the other hand, as illustrated in Fig. 2(b), the correlation length of $\Gamma_i$ is also inversely proportional to the size $D$ of the secondary source, which can be used to change the speckle size at a specific longitudinal position $z$. Varying $D$ leads to a stochastic resonance in the variance spectrum $\sigma^2(D)$, which can be measured in a fraction of a second. The position of this resonance effectively encodes the axial location of the target. This property will be demonstrated in the next section.
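The dependence of the fluctuation level on speckle size can be illustrated numerically. In this simplified sketch (a solid square target; all parameters are illustrative assumptions), the normalized variance of the integrated intensity grows as the speckle grain approaches the target size; for structured targets, the same overlap mechanism produces the resonance exploited in Fig. 4:

```python
import numpy as np

rng = np.random.default_rng(2)

def speckle_intensity(n, grain, rng):
    """Synthetic speckle: low-pass filtered complex Gaussian field."""
    field = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    fx = np.fft.fftfreq(n)
    mask = np.hypot(*np.meshgrid(fx, fx)) < 1.0 / (2.0 * grain)
    return np.abs(np.fft.ifft2(np.fft.fft2(field) * mask)) ** 2

n = 128
F2 = np.zeros((n, n))
F2[44:84, 44:84] = 1.0  # 40 px square target

def normalized_variance(grain, trials=150):
    """Normalized variance of P over independent speckle realizations."""
    P = np.array([np.sum(speckle_intensity(n, grain, rng) * F2)
                  for _ in range(trials)])
    return P.var() / P.mean() ** 2

grains = [2, 4, 8, 16]
sigma2 = [normalized_variance(g) for g in grains]
print(dict(zip(grains, np.round(sigma2, 3))))
```

When many small speckle grains cover the target, their contributions average out and the fluctuations of $P$ are weak; fewer, larger grains produce proportionally stronger fluctuations, which is the knob the $\sigma^2(D)$ scan turns.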
In summary, the target motion can be encoded in the fluctuations of the detected intensity as long as the spatio-temporal properties of an ensemble of realizations of the illumination field can be controlled at will. This ensemble can be created in different ways. For instance, a practically simple and convenient procedure is to simultaneously adjust the size, the tilt, and the position of the illumination beam across the input face of the “scattering enclosure.” This process leads to fluctuations of the detected intensity $P(t)$, which are characterized by three independent parameters: the variance $\sigma^2$ of the integrated intensity, and the decorrelation times $\tau_x$ and $\tau_y$ associated with the speckle translation along the $x$- and $y$-axes. As we have shown here, these measurable quantities encode changes in the $x$, $y$, and $z$ coordinates of the target in a linear fashion.
3. EXPERIMENTAL DEMONSTRATION
A. Statistical Properties of Integrated Intensity: Experimental Validations
We will now demonstrate experimentally that the decorrelation time depends linearly on the component of the target velocity parallel to the direction of the speckle motion. For this purpose, we place a scattering target near the center of a box made of 5 mm thick Plexiglass covered with scattering layers of synthetic acrylics having a thickness of about 650 μm and a scattering mean free path of 70 μm. The overall size of the diffusive enclosure was . Speckle fields are generated by illuminating the box from the outside with an approximately 0.1 mW He–Ne laser beam (wavelength of 632.8 nm) that can be tilted and translated laterally to create different realizations of the random field inside the box. The beam can also be mildly focused by an adjustable lens to control the size of the secondary source of the diffuse radiation. In this arrangement, the ballistic light that passes through the box is attenuated by more than eight orders of magnitude. Figure 3(a) illustrates the light scattered at the front and back walls of the scattering box. A large portion of the scattered field, containing more than 1,000 speckles, is collected by a lens and detected with a photomultiplier tube. We note that this collection system can be placed anywhere outside the scattering box. The details of the experimental setup, including the mechanical displacement of the target, the dynamic field generation, the detection of the scattered light, and the signal processing, are all included in Section S2 of Supplement 1.
The target is a Pegasus sign printed on a transparent sheet, as shown in the inset of Fig. 3(b). First, we will demonstrate the linear relation between the target transversal motion and the decorrelation time in Eq. (5). Following the procedure described in the preceding section, a focusing lens was used to fix the size of the secondary source, and then a controlled translation and tilt of the illumination beam were introduced by adjusting the position and the inclination of the lens. Consequently, a transversal shift of the speckle field was created inside the scattering enclosure. The target was displaced with constant velocity along the same axis but in the opposite direction, while the integrated intensity was recorded for 1 s. The amplitude of the corresponding autocorrelation function $C(\tau)$ as defined in Eq. (3) is plotted in Fig. 3(b) for different values of the time delay and for different transversal displacements of the target. For clarity, we have plotted $C(\tau)$ over a limited range of time delays over which the first zero crossing is observed. As can be seen, the dark band corresponding to the first zero crossing clearly demonstrates the linear dependence between the decorrelation time and the target displacement, as indicated in Eq. (5). Because, in this example, the target and the speckle field moved in opposite directions, the decorrelation time decreases when the target speed increases.
In the next step, we validate the non-monotonic variation of the integrated intensity variance as a function of the speckle size, which is suggested in Eq. (6). When the target’s axial location is fixed, the size of the speckles that illuminate it changes only by varying the size $D$ of the secondary light source. In this condition, one can examine, as a function of $D$, the stochastic resonance that occurs in the variance spectrum of the integrated intensity. The different realizations of the speckle field corresponding to a specific value of $D$ are generated by tilting and translating the primary beam across the front face of the scattering enclosure, as described earlier.
Figure 4(a) illustrates the phenomenon of stochastic resonance when the secondary source size $D$ was changed by varying the size of the illumination beam from 450 to 600 μm. The maximum in the variance spectrum is evident. Depending on the target structure, scanning over a larger range may reveal multiple resonances but, for our present purpose, a coarse scan over a short range is sufficient.
The variance spectrum, such as the one in Fig. 4(a), can now be used to identify an optimal size of the secondary source such that, locally, the spectrum changes linearly as a function of the speckle size. In the example illustrated in Fig. 4(a), this value is indicated by the green dot. In practice, there could be additional considerations for identifying this optimum, such as (i) the overall range over which the linearity approximation is valid and (ii) the value of the local gradient, which defines the sensitivity to changes in the speckle size.
Having identified and fixed an optimum size of the secondary source, any further changes in the size of the interacting speckle can only be due to changes in the axial location of the target. Consequently, the linear variation of $\sigma^2$ with the average speckle size can be used to track the axial location of the scattering target. To demonstrate this experimentally, the size of the beam was kept constant, and the target axial location was varied. The measured variance of the integrated intensity is shown in Fig. 4(b), where the linear dependence between the displacement of the target and the variance of the integrated intensity is evident.
B. 3D Trajectory Recovery
In the following, we present a proof-of-concept demonstration of tracking the 3D trajectory of an object completely surrounded by a “scattering box,” as illustrated in Fig. 1. The variance and the decorrelation time of the recorded signal are evaluated to reconstruct the target trajectory inside the box. The procedure is as follows. In the first step, the optimum size of the secondary source is identified and kept fixed, as discussed in the preceding section and illustrated in Fig. 4. Once the optimal range is found, the illumination beam is tilted and translated along the $x$- and $y$-axes, while the decorrelation times $\tau_x$ and $\tau_y$ are measured successively to determine the displacements along the $x$ and $y$ directions according to Eq. (5), as demonstrated in Fig. 3. At the same time, the variance of the fluctuations of the integrated intensity is recorded to provide the information about the motion along the $z$ direction. In this way, the entire 3D target trajectory can be recovered in real time. The motion of the target is approximated by a discrete, piecewise continuous trajectory in which each step is associated with one measurement. The duration of each measurement was 1 s.
Any increase or decrease in any of the measured parameters corresponds to movement along the corresponding direction. If the target moves with constant speed, the trajectory can easily be evaluated based on known time intervals between measurements and the fact that the three measurable quantities depend linearly on the target incremental displacements between successive measurements. Consequently, the motion along each direction can be represented as

$$\Delta x_j = \frac{m_j - b}{a}, \tag{7}$$

where $m_j$ denotes the statistical parameter ($\tau_x$, $\tau_y$, or $\sigma^2$) measured at step $j$, and $a$ and $b$ are the constants defined by Eqs. (5) and (6) for each axis, independently of the time and target motion. Note that these constants are not actually needed if only the relative incremental motion is of interest. In this case, a scaled version of the displacement,

$$\Delta \tilde{x}_j = m_j - m_1 \propto \Delta x_j - \Delta x_1, \tag{8}$$

can be used, where $j$ is the index of the piecewise continuous step of the motion, and $m_1$ represents the first motion step. Equations (7) and (8) are discussed in detail in Supplement 1. According to Eq. (8), a scaled trajectory can be easily recovered without any a priori calibration. A typical example of such a recovery is illustrated in Fig. 5, and it is also dynamically presented in Visualization 1.
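The recovery logic of Eqs. (7) and (8) amounts to simple linear algebra on the per-step measurements. The sketch below uses synthetic numbers (the constants $a$ and $b$ and the displacements are hypothetical) to contrast the calibration-free scaled recovery with the calibrated absolute one:

```python
import numpy as np

# True incremental displacements along one axis (arbitrary units)
dx_true = np.array([0.5, 1.0, -0.5, 0.0, 1.5, -1.0])

# Hypothetical linear encoding m_j = a*dx_j + b (a, b unknown to the tracker)
a, b = 2.3, 0.7
m = a * dx_true + b

# Scaled recovery, Eq. (8): differencing against the first step removes b,
# leaving the trajectory up to the unknown factor a
dx_scaled = m - m[0]
traj_scaled = np.cumsum(dx_scaled)

# Calibrated recovery, Eq. (7): two prescribed displacements (+1 and -1) fix a, b
m1, m2 = a * 1.0 + b, a * (-1.0) + b          # calibration measurements
a_est = (m1 - m2) / 2.0
b_est = m1 - a_est
traj = np.cumsum((m - b_est) / a_est)

print(traj)  # matches np.cumsum(dx_true)
```

The scaled recovery needs no calibration but returns the trajectory only up to the factor $a$; the calibrated recovery returns absolute displacements at the cost of two prescribed reference moves.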
Nevertheless, knowing the constants in Eq. (7) allows recovering the magnification/demagnification factor involved in the scaled trajectory, as we describe in the following.
To obtain the absolute trajectory of the target, two pieces of information are required. First, one needs to know the location of the target at the beginning of the tracking procedure; this permits placing the 3D trajectory at the correct position in the coordinate system of the measurement. Second, one needs to know the coefficients in Eq. (7). As these coefficients are constant throughout the tracking procedure, one has to find two unknown constants for each axis. For this purpose, an a priori calibration based on Eq. (7) can be used, in which two sets of statistical parameters are measured for two prescribed displacements of the target. A typical result is illustrated in Fig. 6, where the target is moved over a 3D trajectory roughly contained in a volume of 20 mm³. The experimental conditions of illumination and detection are the same as in the example shown in Fig. 5.
Although recovering the target trajectory with or without a priori information follows the same rule, i.e., Eq. (7), there is an important difference between the two methods in terms of accuracy.
C. Errors in Trajectory Reconstruction
Let us now discuss the possible errors that can be encountered in this tracking procedure. Of course, due to the limited size of the ensemble of field realizations, one can anticipate deviations from one measurement to another. It is important to realize that while the errors in recovering the incremental motion steps do not depend on the starting point, they may accumulate over time. In practice, the experiment can be affected by possible fluctuations in the laser power, by noisy photon registration at the detector, by a non-uniform velocity of the moving speckles, etc. As a consequence, the trajectory will be reconstructed with a precision that varies from point to point. The recovery error can be defined as the difference between the exact and the estimated location of the target, relative to the average step size. The evolution of this error in recovering a scaled trajectory is illustrated in Fig. 7. As apparent from this figure, the errors in recovering the incremental target motion according to Eq. (8) do not accumulate, which can be advantageous for certain applications.
When using a priori calibrations, as described before, increasing the range of available data will improve the precision of evaluating the coefficients in Eq. (7). However, as opposed to the reconstruction of the scaled trajectory, in this case, the average error depends on both constants in Eq. (7). As a result, the average reconstruction error accumulates along the trajectory, even though the error in measuring the statistical parameters may either increase or decrease from one step to another. This evolution is illustrated in Fig. 8 for one hundred target trajectories developed over the same volume as in Fig. 6. Of course, because of this error accumulation, which is specific to any sequential measurement without continuous feedback or reference, one may need, at some point, to go through a recalibration process. However, if knowledge about a scaled trajectory suffices, then the tracking precision is constant along the entire duration of the measurement, as is the case in Fig. 7.
4. FURTHER DISCUSSION
Although the present experiments involved a static scattering enclosure, this is not an absolute restriction. The tracking method works as long as the characteristic time $\tau_s$ of the controlled variation of the speckle field, the characteristic time $\tau_t$ of the target dynamics, and the characteristic time $\tau_w$ associated with changes in the properties of the scattering walls satisfy $\tau_s \ll \tau_t \ll \tau_w$. Practically, the scattering enclosure can change as long as its dynamics is slower than that of the target. In this context, we also note that the speckle field generated inside the enclosure is quite sensitive to changes in the primary beam size, structure, and angle of incidence, which permits manipulating the speckle dynamics over large ranges.
Besides precision, repeatability is another important characteristic of a measurement. Of course, the repeatability is primarily affected by dynamic perturbations, while fluctuations originating in statistically independent sources of noise have a lesser effect. Visualization 2, Visualization 3, and Visualization 4 demonstrate the repeatability in detecting the target motion along the transversal and axial directions.
We would like to emphasize that the tracking task can be achieved based on intensity measurements performed on any side of the scattering box. Even though, on average, the integrated intensity could differ on different sides of the box, our statistical approach is capable of extracting the same dynamic information, as illustrated in Visualization 5 and Visualization 6. In this case, the magnitude of the fluctuations may increase or decrease depending on the angular distribution of the light scattering from the object, as also discussed in the context of Eq. (2).
In deriving Eqs. (1)–(6), we considered that the evolution of the scattering potential is described by a single velocity vector. However, the same concept can be applied when the scattering potential is approximated by a collection of discrete objects with independent velocity vectors. In other words, the proposed tracking method can be generalized for tracking more than one object. To do so, one can use the independent component analysis [32,33] of the integrated intensity fluctuations to separate independent sources of fluctuations associated with each independent motion. This, of course, will come at the cost of increasing the measurement time and decreasing the level of detectable fluctuations, which, in turn, could affect the tracking errors.
Practically, the magnitude of the intensity fluctuations can be affected in different ways. First, when approximating the potential with a collection of discrete objects, the variation associated with each independent component reduces roughly by a factor of $N$, where $N$ is the number of discrete objects. Second, the fluctuations are strongly affected by the target dimension $D_t$, the target feature size $d$, the size of the field of view $D_{FOV}$, and, of course, by the speckle size $\delta$. Although the size of the target features does not provide sufficient information by itself, the ratio $d/\delta$ is an important factor. Choosing an optimum primary beam size helps keep this ratio close to unity, which, according to Eq. (6), enhances the fluctuations. This is also demonstrated in Fig. 4. In addition, the ratio $D_t/D_{FOV}$ between the target size and the size of the field of view indicates how efficiently the target motion can modify the integrated intensity. As this factor grows, the contribution of the target motion to the dynamics of the integrated intensity increases, and the accuracy of the tracking procedure improves.
Measurements are always affected by noise. Technically, the finite length of the time series of the recorded intensities introduces deviations from the ideal statistical parameters. Such deviations can, of course, be reduced by increasing the measurement time. Roughly speaking, the variance of the unwanted fluctuations decreases by a factor proportional to the measurement time $T$ in each step.
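This scaling is the usual averaging of independent samples. A quick numerical check with speckle-like (exponentially distributed) intensities, using illustrative sample counts:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_estimate_variance(T, repeats=2000):
    """Variance of the mean of T independent intensity samples."""
    samples = rng.exponential(scale=1.0, size=(repeats, T))  # speckle-like stats
    return samples.mean(axis=1).var()

v10, v100 = mean_estimate_variance(10), mean_estimate_variance(100)
print(v10 / v100)  # close to 10: variance shrinks roughly linearly with T
```

Increasing the per-step measurement time tenfold reduces the variance of the estimated parameters by roughly the same factor.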
We have demonstrated the ability to track the motion of a target completely surrounded and obscured by multiple scattering media. The concept of encoding the position of the target using statistical properties of diffused radiation is rather general, as there are no restrictions on the scattering properties of the target. The statistical analysis of the recorded signal yields robust information that is largely immune to inherent experimental perturbations. Furthermore, the method requires only measurements of the intensity integrated over large areas, which can be performed at any location outside the scattering enclosure. This feature, together with the experimental simplicity and versatility, is especially appealing for low-signal applications. In addition, because the movement along each direction is extracted independently, the approach is quite efficient in sensing scenarios involving different degrees of freedom.
We have shown that the motion and relative trajectory of an enclosed target can be detected without any feedback from inside the obscured region. Moreover, with access to limited a priori knowledge, the procedure can also provide quantitative measurements of the trajectory.
Finally, in our experiments, we addressed the intriguing situation of detecting and tracking motion inside an obscuring box. However, the concept of using statistical properties of radiation to encode the position of scattering objects can be applied to other obscurant geometries, not necessarily flat, and also to different detection scenarios. Moreover, as this method follows the motion of the target’s center of mass, the rotation and tilt of the object will not affect the tracking accuracy. These characteristics should be of interest for a range of applications, including biomedical and remote sensing. Even though we presented optical experiments, this tracking procedure can also be implemented in other domains, such as acoustics and microwaves.
See Supplement 1 for supporting content.
1. I. Roy, T. Y. Ohulchanskyy, D. J. Bharali, H. E. Pudavar, R. A. Mistretta, N. Kaur, and P. N. Prasad, “Optical tracking of organically modified silica nanoparticles as DNA carriers: a nonviral, nanomedicine approach for gene delivery,” Proc. Natl. Acad. Sci. USA 102, 279–284 (2005). [CrossRef]
2. S. T. Acton and N. Ray, “Biomedical image analysis: tracking,” Synth. Lect. Image Video Multimedia Process. 2, 1–152 (2006). [CrossRef]
3. F. Daum and R. Fitzgerald, “Decoupled Kalman filters for phased array radar tracking,” IEEE Trans. Autom. Control 28, 269–283 (1983). [CrossRef]
4. S. R. Cloude and E. Pottier, “A review of target decomposition theorems in radar polarimetry,” IEEE Trans. Geosci. Remote Sens. 34, 498–518 (1996). [CrossRef]
5. U. Wandinger, “Introduction to lidar,” in Lidar (Springer, 2005), pp. 1–18.
6. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012). [CrossRef]
7. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014). [CrossRef]
8. M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015). [CrossRef]
9. G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-fight imaging,” Nat. Commun. 6, 6021 (2015). [CrossRef]
10. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2015). [CrossRef]
11. I. Vellekoop and A. Mosk, “Universal optimal transmission of light through disordered materials,” Phys. Rev. Lett. 101, 120601 (2008). [CrossRef]
12. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012). [CrossRef]
13. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012). [CrossRef]
14. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012). [CrossRef]
15. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014). [CrossRef]
16. J. W. Goodman, “Statistical properties of laser speckle patterns,” in Laser Speckle and Related Phenomena (Springer, 1975), pp. 9–75.
17. A. Ishimaru, Wave Propagation and Scattering in Random Media (Academic, 1978).
18. E. Wolf, “Coherence effects in scattering,” in Introduction to the Theory of Coherence and Polarization of Light (Cambridge University, 2007).
19. A. T. Friberg and R. J. Sudol, “The spatial coherence properties of Gaussian Schell-model beams,” J. Mod. Opt. 30, 1075–1097 (1983).
20. F. Gori, M. Santarsiero, and A. Sona, “The change of width for a partially coherent beam on paraxial propagation,” Opt. Commun. 82, 197–203 (1991). [CrossRef]
21. J. W. Goodman, Statistical Optics (Wiley, 2015).
22. H. Bayley and P. S. Cremer, “Stochastic sensors inspired by biology,” Nature 413, 226–230 (2001). [CrossRef]
23. E. Baleine and A. Dogariu, “Variable coherence scattering microscopy,” Phys. Rev. Lett. 95, 193904 (2005). [CrossRef]
24. D. Haefner, S. Sukhov, and A. Dogariu, “Stochastic scattering polarimetry,” Phys. Rev. Lett. 100, 043901 (2008). [CrossRef]
25. T. W. Kohlgraf-Owens and A. Dogariu, “Transmission matrices of random media: means for spectral polarimetric measurements,” Opt. Lett. 35, 2236–2238 (2010). [CrossRef]
26. M. I. Akhlaghi and A. Dogariu, “Stochastic optical sensing,” Optica 3, 58–63 (2016). [CrossRef]
27. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988). [CrossRef]
28. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988). [CrossRef]
29. R. Berkovits, M. Kaveh, and S. Feng, “Memory effect of waves in disordered systems: a real-space approach,” Phys. Rev. B 40, 737–740 (1989). [CrossRef]
30. E. Baleine and A. Dogariu, “Variable coherence tomography,” Opt. Lett. 29, 1233–1235 (2004). [CrossRef]
31. J. Fleischer, “Imaging: making sensing of incoherence,” Nat. Photonics 10, 211–213 (2016). [CrossRef]
32. P. Comon, C. Jutten, and J. Herault, “Blind separation of sources, part II: problems statement,” Signal Process. 24, 11–20 (1991). [CrossRef]
33. C. Jutten and J. Herault, “Blind separation of sources, part I: an adaptive algorithm based on neuromimetic architecture,” Signal Process. 24, 1–10 (1991). [CrossRef]