Abstract

Tracking trajectories of objects is conventionally achieved by direct beam probing or by sequential imaging of the target during its evolution. However, these strategies fail quickly when the direct line of sight is obstructed. Here, we propose and experimentally demonstrate real-time tracking of objects that are completely surrounded by scattering media which practically conceal them. We show that full 3D motion can be effectively encoded in the statistical properties of spatially diffused but temporally coherent radiation. The method relies on measurements of integrated scattered intensity performed anywhere outside the disturbance region, which affords flexibility for different sensing scenarios as well as low-light capability.

© 2017 Optical Society of America

1. INTRODUCTION

Aside from imaging, tracking the position of a scattering object is of paramount importance for biomedical [1,2] and remote sensing applications [3–5]. Tracking scattering objects is commonly accomplished by RADAR and LADAR technologies that rely on directional probe beams that can be scanned spatially or angularly [3,5]. Although very powerful and widespread, these approaches are less effective in perturbed environments or when operating on low-visibility targets obscured by other scattering media [6–10].

Of course, one way to track a target in the presence of an obscurant is to image it repeatedly over time. While remarkable advances have been made in defeating imaging obscurants, this approach requires complex optical instruments and excessive data processing, which may be impractical for tracking fast-moving objects [11–15].

However, to track an object, one doesn’t necessarily need to “see” it! Capturing successive images of a target for further processing is not critical for tracking. For instance, the movement of an object hidden from direct line of sight can be followed using a pulsed beam and time-gating the light scattered from the target, even though the light reaches the detector only through indirect paths [10]. Of course, this method will also fail when the environmental scattering increases or the object is completely surrounded by scattering obscurants.

Here, we present a conceptually different approach for tracking an object in conditions where neither controlling the directionality of the beam nor scanning its direction is possible. We address the situation where the target is completely surrounded by heavily scattering media that conceal it entirely. Conceptually, the target is placed inside a scattering enclosure that renders direct imaging impossible, as shown schematically in Fig. 1. In this scenario, a primary source of temporally coherent light is directed onto one of the scattering walls, which creates an effective secondary source for the radiation inside the scattering enclosure. The light scattered from the target is further randomized when passing through the scattering walls and is then collected outside the box by an integrating detector.

 

Fig. 1. Tracking a hidden target enclosed in a “scattering box” that impedes direct imaging. A coherent source of radiation generates a spatially and temporally varying field that illuminates the target. Fluctuations of the integrated intensity are detected outside the enclosure and are used to track the target position.


The tracking problem illustrated in Fig. 1 is solved by taking advantage of the fundamental properties of partially coherent light. We will prove that the temporal and the spatial characteristics of the field created inside the scattering box can be used to encode the full 3D trajectory of an object that is effectively invisible from the outside. Even though the object is completely surrounded by multiple scattering media, its motion can be tracked in real time through a statistical analysis of the integrated light. We will show that, when the dynamics of the diffused light inside the enclosure can be controlled at will, the variance σi2 and the decorrelation time τi of the integrated intensity provide sufficient information about the motion of the target. Moreover, we will demonstrate both analytically and experimentally that σi2 and τi depend linearly and independently on the target displacement along the axial and transversal directions, respectively.

2. ENCODING MOTION IN SPECKLE STATISTICS

Coherent scattering generates optical fields that can vary both in time and space [16,17]. In this section, we will show that the trajectory $\mathbf{r}(t)$ of a scattering object can be recovered using the spatial and temporal statistics of a partially coherent field $E(\mathbf{r},t)$ that illuminates the object. Let us consider the fluctuating scattered field $E_s(\mathbf{r},t)$ that results from the coherent interaction between an illumination field $E(\mathbf{r},t)$ and a generic, spatially locally homogeneous scattering potential $P(\mathbf{r},t)=k^2\left(n^2(\mathbf{r})-1\right)/4\pi$, where $n(\mathbf{r})$ and $k$ are the refractive index distribution and the wavenumber at wavelength $\lambda$. This scattering potential is characterized by its degree of spatial correlation $\gamma(\delta\mathbf{r})$ and its average strength $T(\mathbf{r},t)=\langle P^*(\mathbf{r},t)P(\mathbf{r},t)\rangle_\beta$, where $\langle\cdot\rangle_\beta$ denotes the average taken over different realizations of the scattering potential [18]. The scattered field, considered to be statistically stationary at least in the wide sense, is fully characterized by its cross-correlation function $C(\mathbf{r}_1,\mathbf{r}_2,t,\tau)=\langle E_s^*(\mathbf{r}_1,t)E_s(\mathbf{r}_2,t+\tau)\rangle_\alpha$, where $\langle\cdot\rangle_\alpha$ denotes the average taken over different realizations of the interaction [19–21]. In practice, one usually measures the scattered field intensity $I_s(\mathbf{r},t)=C(\mathbf{r},\mathbf{r},t,0)$, which, within the accuracy of the first Born approximation, is given by [18]

$I_s(\mathbf{r},t)=\Big\langle \iint P^*(\mathbf{r}',t)\,P(\mathbf{r}'',t)\,E^*(\mathbf{r}',t)\,E(\mathbf{r}'',t)\,G^*(\mathbf{r},\mathbf{r}')\,G(\mathbf{r},\mathbf{r}'')\,d\mathbf{r}'\,d\mathbf{r}''\Big\rangle_\alpha,\qquad(1)$
where $G(\mathbf{r},\mathbf{r}')$ is the Green's function associated with the scattering problem. For reasons such as experimental simplicity, high signal level, etc., it is sometimes advantageous to detect the scattered intensity $i(t)=\int_A I_s(\mathbf{r},t)\,d\mathbf{r}$ integrated over areas $A$ larger than the correlation length of the scattered field. It follows from Eq. (1) that the intensity integrated over the entire volume of interaction varies as
$i(t)\propto M\int_V T(\mathbf{r},t)\,I(\mathbf{r},t)\,d\mathbf{r},\qquad(2)$
where the illumination field intensity is $I(\mathbf{r},t)=\langle E^*(\mathbf{r},t)E(\mathbf{r},t)\rangle_\eta$, with $\langle\cdot\rangle_\eta$ denoting the average taken over different realizations of the illumination field. Equation (2) describes the intensity outcome of the coherent interaction process and depends on both the degree of spatial correlation $\gamma(\delta\mathbf{r})$ of the scattering potential and the degree of spatial coherence $\mu(\delta\mathbf{r})$ of the illumination field. This dependence can be generically included in the pre-factor $M\propto\iint\gamma(\delta\mathbf{r})\,\mu(\delta\mathbf{r})\,e^{i\mathbf{k}\cdot\delta\mathbf{r}}\,d\delta\mathbf{r}\,d\mathbf{k}$, where $\delta\mathbf{r}=\mathbf{r}'-\mathbf{r}''$; the spatial integral is evaluated over the extent of the target and the $\mathbf{k}$ integral over the angular domain supported by the detector that collects the integrated scattered intensity. A detailed derivation of Eq. (2) is presented in Supplement 1. Of course, in practice, one cannot effectively collect the entire scattered intensity. It is worth noting, however, that in the scenario of interest here, the scattered field $E_s(\mathbf{r},t)$ is further scrambled by propagation through the second diffusive layer. This directional homogenization, together with the large-area integration, makes the detected intensity well approximated by Eq. (2).
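
To make the role of Eq. (2) concrete, the following minimal numerical sketch (our own Python illustration; the grid size, the synthetic speckle generator, and the square target mask are illustrative assumptions, not the experimental parameters) evaluates the discrete overlap of a target mask with a simulated speckle intensity and shows how the integrated value changes as the target shifts across the speckle grains.

    # Minimal sketch of Eq. (2): the integrated intensity as the overlap of the
    # target strength T(r) with a speckle intensity I(r). Illustrative values only.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 256                                     # grid points per axis

    def speckle_intensity(corr_px):
        """Synthetic speckle: low-pass filtered circular Gaussian field, |E|^2."""
        field = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        fx = np.fft.fftfreq(N)
        keep = (fx[:, None]**2 + fx[None, :]**2) < (1.0 / (2 * corr_px))**2
        field = np.fft.ifft2(np.fft.fft2(field) * keep)
        return np.abs(field)**2

    def target_mask(shift=(0, 0), size=20):
        """Square target of given size (pixels), displaced by 'shift' pixels."""
        T = np.zeros((N, N))
        c = N // 2
        T[c + shift[0]:c + shift[0] + size, c + shift[1]:c + shift[1] + size] = 1.0
        return T

    I_speckle = speckle_intensity(corr_px=6)
    for dx in (0, 2, 4):                        # transversal target displacements (px)
        i_value = np.sum(target_mask(shift=(0, dx)) * I_speckle)   # discrete Eq. (2)
        print(dx, i_value)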

As is apparent from Eq. (2), the integrated intensity i(t) can vary by changing either the realization of the illumination intensity or the realization of the scattering potential. For rigid objects, the latter is equivalent to changing the center of mass of the potential distribution, which can also be interpreted as the evolution of the object along a given trajectory. Consequently, it can be envisioned that, if the statistical properties of the illumination intensity can be controlled, one can use the temporal fluctuations of the measured intensity to acquire information about the target motion. In other words, in a scattering experiment, information about the scattering potential can be retrieved by controlling the stochastic properties of the illumination field [22–26]. In the following, we will demonstrate that although the integrated intensity i(t) in Eq. (2) cannot provide spatially resolved information, its temporal fluctuations relate directly to the motion of the scattering target. We will show that the extent of the temporal correlations and the normalized variance of the time-varying signal i(t) can be used to track the position of the scattering target.

Let us start by examining the possibility of following the transversal motion. Of course, when different realizations of the illuminating field are completely uncorrelated, the transversal motion of the target will have no effect on the statistics of the integrated intensity detected outside the “box.” However, if a certain degree of correlation exists between different illumination patterns, the target is exposed to more or less similar fields, depending on its transversal motion between successive realizations of the illumination field. Consequently, it is expected that the dynamics of i(t) will depend on the transversal velocity $\mathbf{v}_T$ of the scattering object. One can then exploit the decorrelation time τi of the temporal autocorrelation function $C_i(\tau)=\langle i(t)\,i(t+\tau)\rangle_t$ to characterize this transversal velocity component.
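
As a concrete illustration of this step (a hedged sketch of our own; the autocovariance estimator and the first-zero-crossing criterion are choices consistent with the processing shown later in Fig. 3(b), not necessarily the exact routine of Supplement 1), the decorrelation time can be estimated from a recorded intensity trace as follows.

    # Estimate the decorrelation time tau_i of a recorded integrated intensity i(t)
    # from its temporal autocorrelation C_i(tau); illustrative processing choices.
    import numpy as np

    def autocorrelation(i_t):
        x = i_t - i_t.mean()
        c = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0 .. N-1
        return c / c[0]                                    # normalized, C_i(0) = 1

    def decorrelation_time(i_t, dt):
        c = autocorrelation(i_t)
        crossings = np.flatnonzero(c <= 0.0)               # first zero crossing
        return crossings[0] * dt if crossings.size else len(c) * dt

    # usage on a synthetic, exponentially correlated trace (1 ms samples, 1 s record)
    rng = np.random.default_rng(1)
    kernel = np.exp(-np.arange(200) * 1e-3 / 0.02)
    trace = np.convolve(rng.normal(size=1000), kernel, mode="same")
    tau_i = decorrelation_time(trace, dt=1e-3)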

A certain degree of spatial correlation between successive speckle realizations can be created, for instance, by translating the illumination field inside the box along a transversal direction $+\boldsymbol{\rho}$. It follows that when the target moves along $+\boldsymbol{\rho}$ or $-\boldsymbol{\rho}$, the integrated signal i(t) will decorrelate in time more slowly or more quickly, respectively. If the translation of the illumination speckle field can be imposed along any two non-parallel transversal vectors $\boldsymbol{\rho}_1$ and $\boldsymbol{\rho}_2$, the dynamic signal i(t) can provide information about the 2D transversal motion of the target.

Of course, without feedback from inside the scattering “box,” one cannot deterministically modify the illumination speckle. One can, however, take advantage of the so-called memory effect to effectively translate a speckle pattern over short distances [27,28]. In spite of the multiple scattering, when the primary source of illumination is tilted outside the box, the speckles inside follow over a small angular range $\Delta\theta\sim 1/L$, where L is the thickness of the wall, as illustrated in Fig. 2(a). When the target moves, the memory effect corresponding to the intensity scattered from the target is affected: its angular range increases or decreases when the target moves along the same or the opposite direction, respectively. As a result, the fluctuations of the scattered intensity decorrelate at different rates depending on the target displacement.

 

Fig. 2. Schematic illustration of (a) the memory effect associated with light propagating through the scattering wall and (b) the increase of the illumination speckle size with propagation distance, which are used to encode the transversal and axial motions of the target, respectively.


Let us analyze this process in a more quantitative manner. When, during the measurement time $t_M$, the speckles are translated with constant velocity $\mathbf{v}_I$ along a given transversal direction while the target moves with velocity $\mathbf{v}_T$, the integrated intensity in Eq. (2) varies in time as $i(t)\propto\int T(\mathbf{r}-\mathbf{v}_T t)\,I(\mathbf{r}-\mathbf{v}_I t)\,d\mathbf{r}$. The corresponding autocorrelation function of these intensity fluctuations becomes

$C_i(\tau)=t_M^{-1}\int_0^{t_M}C_I\big(\Delta\mathbf{r}(t)-\tau\,\Delta\mathbf{v}\big)\,C_T\big(\Delta\mathbf{r}(t)\big)\,dt,\qquad(3)$
where $\Delta\mathbf{v}=\mathbf{v}_I-\mathbf{v}_T$ and $\Delta\mathbf{r}(t)=\Delta\mathbf{v}\,t$. In Eq. (3), $C_I=I(\mathbf{r})\ast I(\mathbf{r})$ and $C_T=T(\mathbf{r})\ast T(\mathbf{r})$ represent the spatial autocorrelation functions of the illumination speckle intensity and of the strength of the targeted scattering potential, respectively, and $\ast$ denotes the convolution operator. It has been shown in [27–29] that, as a speckle field decorrelates in time due to the memory effect, the corresponding intensity autocorrelation function varies as $C_I\big(\Delta\mathbf{r}(t)-\tau\,\Delta\mathbf{v}\big)\approx(\xi/\sinh\xi)^2\,C_I\big(\Delta\mathbf{r}(t)\big)$, where $\xi\propto|\Delta\mathbf{v}|\,\tau$. It follows from Eq. (3) that the autocorrelation function of the measured intensity decays in time as
$C_i(\tau)\propto(\xi/\sinh\xi)^2\int_0^{t_M}C_I\big(\Delta\mathbf{r}(t)\big)\,C_T\big(\Delta\mathbf{r}(t)\big)\,dt.\qquad(4)$

As can be seen, the decorrelation time depends on the difference Δv between the transversal velocity of the speckle field and the target velocity. In addition, when the characteristic time τsp of the speckle dynamics is smaller than the time scale τt associated with the target motion, τsp<τt, the target velocity along each axis can be approximated to be constant over a short measurement time (similar to the “frozen model” in [18]). Consequently, it can be shown that the integrated intensity i(t) decorrelates after a specific delay time,

$\tau_i\propto|\mathbf{v}_I-\mathbf{v}_T|^{-1}\simeq v_I^{-1}\big(1+v_{T,\parallel}/v_I\big),\qquad(5)$
where $v=|\mathbf{v}|$ and $v_{T,\parallel}$ is the component of the target velocity along the direction of the speckle motion. Thus, if the speckle velocity is kept constant along a given direction, the decorrelation time of the integrated intensity will depend linearly on the component of the target velocity oriented along that same direction. Equivalently, during the short measurement time, the decorrelation time τi of the detected intensity will depend linearly on the transversal motion of the target. A detailed derivation of Eqs. (3)–(5) is presented in Supplement 1.
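
As a simple illustration of how the linearized form of Eq. (5) can be inverted in practice (our reading, not a procedure prescribed in the text), a reference decorrelation time $\tau_0$ measured with the target at rest satisfies $\tau_0\propto v_I^{-1}$, so that

    $\dfrac{\tau_i}{\tau_0}\approx 1+\dfrac{v_{T,\parallel}}{v_I}\quad\Longrightarrow\quad v_{T,\parallel}\approx v_I\left(\dfrac{\tau_i}{\tau_0}-1\right),$

which converts a measured change of the decorrelation time directly into the transversal velocity component along the speckle motion, provided $|v_{T,\parallel}|\ll v_I$.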

Having established means to encode the transversal motion of the target, we will now discuss the possibility of tracking its axial movement. As is well known, although the speckle field is statistically homogeneous in the transversal plane, its transversal correlation length, i.e., the extent of $C_I(\Delta\mathbf{r})$, increases with the distance from the secondary source, i.e., the wall through which the radiation enters the “box” [19,20]. This increase in speckle size provides adequate means for encoding the axial position of the target, because it affects the level of fluctuations of the detected intensity. From Eq. (2), it follows that the variance,

$\sigma_i^2=C_i(0)\propto\int_0^{t_M}C_I\big(\Delta\mathbf{r}(t)\big)\,C_T\big(\Delta\mathbf{r}(t)\big)\,dt,\qquad(6)$
of the integrated intensity fluctuations i(t) is determined by the characteristic length scales associated with both the target and the speckle that illuminates it [23,26,30,31]. According to the Cauchy–Schwarz inequality, the variance σi2 attains its maximum when the correlation length of the illumination intensity is of the order of the characteristic length of the targeted scattering potential. In practice, the extent of CI, i.e., the speckle size in the illumination field, increases upon propagation and can, therefore, be used to gauge the axial position of the target. On the other hand, as illustrated in Fig. 2(b), the correlation length of CI is also inversely proportional to the size d of the secondary source, which can be used to change the speckle size at a specific longitudinal position z. Varying d leads to a stochastic resonance in the variance spectrum σi2(d), which can be measured in a fraction of a second [26]. The position of this resonance effectively encodes the axial location of the target. This property will be demonstrated in the next section.
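
The following sketch (our own illustration; the data layout, the normalization, and the criterion for picking the operating point d0 are assumptions consistent with the discussion of Fig. 4(a), not a prescribed procedure) shows how such a variance spectrum can be estimated from recorded intensity traces.

    # Estimate the variance spectrum sigma_i^2(d) of Eq. (6) from recorded traces
    # and pick an operating point d0 in the steep, locally linear region.
    import numpy as np

    def variance_spectrum(records):
        """records: dict {d (um): 1-D array of integrated-intensity samples}."""
        d_vals = np.array(sorted(records))
        sigma2 = np.array([np.var(records[d]) / np.mean(records[d])**2 for d in d_vals])
        return d_vals, sigma2 / sigma2.max()      # normalized as in Fig. 4

    def pick_operating_point(d_vals, sigma2):
        slope = np.gradient(sigma2, d_vals)       # local sensitivity d(sigma^2)/dd
        return d_vals[np.argmax(np.abs(slope))]   # steepest region of the spectrum

    # usage with placeholder, synthetic records spanning d = 450-600 um
    rng = np.random.default_rng(2)
    records = {d: 1.0 + 0.1 * rng.normal(size=1000) for d in range(450, 601, 10)}
    d_vals, sigma2 = variance_spectrum(records)
    d0 = pick_operating_point(d_vals, sigma2)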

In summary, the target motion can be encoded in the fluctuations of the detected intensity as long as the spatio-temporal properties of an ensemble of realizations of the illumination field can be controlled at will. This ensemble can be created in different ways. For instance, a practically simple and convenient procedure is to simultaneously adjust the size, the tilt, and the position of the illumination beam across the input face of the “scattering enclosure.” This process leads to fluctuations of the detected intensity i(t), which are characterized by three independent parameters: the variance σi2 of the integrated intensity, and the decorrelation times τi,x and τi,y associated with the speckle translation along the x- and y-axes. As we have shown here, these measurable quantities encode changes in the X, Y, and Z coordinates of the target in a linear fashion.

3. EXPERIMENTAL DEMONSTRATION

A. Statistical Properties of Integrated Intensity: Experimental Validations

We will now demonstrate experimentally that the decorrelation time τi depends linearly on the component of the target velocity parallel to the direction of the speckle motion. For this purpose, we place a scattering target near the center of a box made of 5 mm thick Plexiglas covered with scattering layers of synthetic acrylics having a thickness of about 650 μm and a scattering mean free path of 70 μm. The overall size of the diffusive enclosure was 20 cm × 20 cm × 20 cm. Speckle fields are generated by illuminating it from the outside with an approximately 0.1 mW He–Ne laser beam (wavelength λ=632 nm) that can be tilted and translated laterally to create different realizations of the random field inside the box. The beam can also be mildly focused by an adjustable lens to control the size d of the secondary source of the diffuse radiation. In this arrangement, the ballistic light that passes through the box is attenuated by more than eight orders of magnitude. Figure 3(a) illustrates the light scattered at the front and back walls of the scattering box. A large portion of the scattered field, containing more than 1,000 speckles, is collected by a lens and detected with a photomultiplier tube. We note that this collection system can be placed anywhere outside the scattering box. The details of the experimental setup, including the mechanical displacement of the target, the dynamic field generation, the detection of the scattered light, and the signal processing, are all included in Section S2 of Supplement 1.
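
As a rough consistency check (our own Beer–Lambert estimate for the ballistic component, neglecting absorption and the Plexiglas substrates), two crossings of a 650 μm layer with a 70 μm scattering mean free path give $\exp\!\left(-\,2\times 650/70\right)\approx e^{-18.6}\approx 10^{-8.1}$, in line with the quoted attenuation of more than eight orders of magnitude.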

 

Fig. 3. (a) Image of a laser beam with a beam waist of d≈520 μm scattered at the front and back walls of the box. The scale bar is 2.5 cm. (b) Amplitude of the autocorrelation function |Ci(τ)| of the recorded intensity corresponding to different target transversal displacements Δx. The decorrelation time (black band) depends linearly on the target transversal motion, as expected from Eq. (5). The lower left inset illustrates the linear relation between the decorrelation time and the target transversal motion. The upper right inset shows the approximately 5 mm × 5 mm object under uniform illumination.


The target is a Pegasus sign printed on a transparent sheet, as shown in the inset of Fig. 3(b). First, we will demonstrate the linear relation between the target transversal motion and the decorrelation time expressed in Eq. (5). Following the procedure described in the preceding section, a focusing lens was used to fix the size of the secondary source at d=500 μm, and then a controlled translation (±1 mm) and tilt (±5°) of the illumination beam were introduced by adjusting the position and the inclination of the lens. Consequently, a transversal shift of the speckle field was created inside the scattering enclosure. The target was displaced with constant velocity along the same axis but in the opposite direction, while the integrated intensity was recorded for 1 s. The amplitude of the corresponding autocorrelation function |Ci(τ)|, as defined in Eq. (3), is plotted in Fig. 3(b) for different values of the time delay τ and for different transversal displacements Δx of the target. For clarity, we have plotted |Ci(τ)| over a limited range of time delays over which the first zero crossing is observed. As can be seen, the dark band corresponding to Ci(τ)=0 clearly demonstrates the linear dependence between the decorrelation time τi and the target displacement Δx, as indicated in Eq. (5). Because, in this example, the target and the speckle field moved in opposite directions, the decorrelation time decreases when the target speed increases.
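
The linear relation shown in the lower left inset of Fig. 3(b) can be quantified with an ordinary least-squares fit; the sketch below uses placeholder numbers, not the measured data.

    # Fit the linear model tau_i = a * dx + b underlying the inset of Fig. 3(b).
    import numpy as np

    dx = np.array([0.0, 0.5, 1.0, 1.5, 2.0])          # imposed displacements (mm), placeholders
    tau_i = np.array([0.30, 0.26, 0.22, 0.18, 0.14])  # measured decorrelation times (s), placeholders
    a, b = np.polyfit(dx, tau_i, 1)                   # slope and intercept of the linear relation
    residuals = tau_i - (a * dx + b)                  # quick check of the linearity assumption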

In the next step, we validate the non-monotonic variation of the integrated intensity variance σi2 as a function of the speckle size, as suggested by Eq. (6). When the target's axial location is fixed, the size of the speckles that illuminate it changes only by varying the size d of the secondary light source. Under this condition, one can examine, as a function of d, the stochastic resonance that occurs in the variance spectrum σi2(d) of the integrated intensity. The different realizations of the speckle field corresponding to a specific value of d are generated by tilting and translating the primary beam across the front face of the scattering enclosure, as described earlier.

Figure 4(a) illustrates the phenomenon of stochastic resonance when the secondary source size, d, was changed by varying the size of the illumination beam from 450 to 600 μm. The maximum in the variance spectrum is evident. Depending on the target structure, scanning over a larger range may result in multiple resonances, but, for our present purpose, a coarse scan over a short range is sufficient.

 

Fig. 4. (a) Integrated intensity variance spectrum for varying secondary source size d. The dotted red curve indicates the shift in the variance spectrum as a function of the axial motion of a target over ±2 mm. The green dot shows the optimum secondary source size d0≈520 μm. (b) Linear dependence of the integrated intensity variance as a function of the target axial displacement for d=d0. For clarity, all measured variances are normalized by the value of the maximum variance.


A variance spectrum such as the one in Fig. 4(a) can now be used to identify an optimal size d0 of the secondary source such that, locally, the spectrum changes linearly as a function of the speckle size. For instance, in the example illustrated in Fig. 4(a), this value is d0=520 μm, as indicated by the green dot. In practice, there could be additional considerations for identifying d0, such as (i) the overall range over which the linearity approximation is valid and (ii) the value of the local gradient, which defines the sensitivity to changes in the speckle size.

Having identified and fixed an optimum size of the secondary source d0, any further changes in the size of the interacting speckle can only be due to changes in the axial location of the target. Consequently, the linear variation of σi2 with the average speckle size can be used to track the axial location of the scattering target. To demonstrate this experimentally, the size d0 of the beam was kept constant, and the target axial location Z was varied. The measured variance of the integrated intensity is shown in Fig. 4(b), where the linear dependence between the displacement of the target and the variance of the integrated intensity is evident.

B. 3D Trajectory Recovery

In the following, we present a proof-of-concept demonstration of tracking the 3D trajectory of an object completely surrounded by a “scattering box,” as illustrated in Fig. 1. The variance and the decorrelation time of the recorded signal are evaluated to reconstruct the target trajectory inside the box. The procedure is as follows. In the first step, the optimum size d0 of the secondary source is identified and kept fixed, as discussed in the last section and illustrated in Fig. 4. Once the optimal range is found, the illumination beam is tilted and translated along the x- and y-axes, while the decorrelation times τi,x and τi,y are measured successively to determine the displacements along the X and Y directions according to Eq. (5), as demonstrated in Fig. 3. At the same time, the variance of the fluctuations of the integrated intensity σi2 is recorded to provide the information about the motion along the Z direction. In this way, the entire 3D target trajectory can be recovered in real time. The motion of the target is approximated by a discrete, piecewise continuous trajectory in which each step is associated with one measurement. The duration of each measurement was 1 s.

Any increase or decrease in any of the measured parameters corresponds to movement along the corresponding direction. If the target moves with constant speed, the trajectory can easily be evaluated based on known time intervals between measurements and the fact that the three measurable quantities (τi,x,τi,y,σi2) depend linearly on the target incremental displacements (Δx,Δy,Δz) between successive measurements. Consequently, the motion along each direction can be represented as

$\rho=a_\rho\,\xi+b_\rho,\qquad(7)$
where $\rho\in\{\Delta x,\Delta y,\Delta z\}$ indicates the incremental motion along each axis, and $\xi\in\{\tau_{i,x},\tau_{i,y},\sigma_i^2\}$ represents the measured parameter associated with that axis. The constants $a_\rho$ and $b_\rho$ are defined by the linear relations established in Eqs. (5) and (6) for each axis, independently of time and of the target motion. Note that these constants are not actually needed if only the relative incremental motion,
$\Delta\rho_m/\Delta\rho_0=\Delta\xi_m/\Delta\xi_0,\qquad(8)$
is of interest. In Eq. (8), m is the index of the piecewise continuous step of the motion, and m=1 represents the first motion step. Equations (7) and (8) are discussed in detail in Supplement 1. According to Eq. (8), a scaled trajectory can be easily recovered without any a priori calibration. A typical example of such a recovery is illustrated in Fig. 5, and it is also dynamically presented in Visualization 1.
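
A minimal sketch of this scaled recovery (our own illustration; the array layout is an assumption, and the first increment is assumed to be nonzero along each axis) is given below.

    # Scaled trajectory recovery following Eq. (8): successive parameter increments
    # are expressed relative to the first increment along each axis.
    import numpy as np

    def scaled_trajectory(xi):
        """xi: (M, 3) array of successive measurements (tau_x, tau_y, sigma_i^2)."""
        d_xi = np.diff(xi, axis=0)                    # increments between measurements
        steps = d_xi / d_xi[0]                        # Eq. (8): relative to the first step
        return np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

    # usage: one row of measured parameters per 1 s measurement interval (placeholders)
    measurements = np.array([[0.30, 0.28, 0.55],
                             [0.28, 0.29, 0.60],
                             [0.25, 0.31, 0.63],
                             [0.24, 0.33, 0.61]])
    path = scaled_trajectory(measurements)            # scaled X, Y, Z positions, step by step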

 

Fig. 5. (a) Experimental demonstration of 3D tracking: the blue line represents the imposed target displacement, while the red dashed line indicates the reconstructed trajectory. (b) One-dimensional representations of the imposed and recovered trajectories shown in (a), where tm denotes one measurement duration. The solid blue line denotes the exact trajectory, while the dashed red line indicates the reconstructed trajectory. Also, see Visualization 1.


Nevertheless, knowing the constants in Eq. (7) allows recovering the magnification/demagnification factor involved in the scaled trajectory, as we describe in the following.

To obtain the absolute trajectory of the target, two pieces of information are required. First, one would need to know the location of the target at the beginning of the tracking procedure. This would permit placing the 3D trajectory at the right position in the coordinate system of the measurement. Second, one needs to know the coefficients {aρ,bρ} in Eq. (7). As these coefficients are constant throughout the tracking procedure, one has to find two unknown constants for each axis. For this purpose, an a priori calibration based on Eq. (7) can be used, in which two sets of {τi,x,τi,y,σi2} are measured for two prescribed displacements of the target. A typical result is illustrated in Fig. 6, where the target is moved over a 3D trajectory roughly contained within a volume of 20 cubic millimeters. The experimental conditions of illumination and detection are the same as those for the example shown in Fig. 5.
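
A sketch of this two-point calibration (our own illustration with placeholder numbers; a least-squares form is used so that more than two calibration points can also be accommodated) is shown below for one axis.

    # Determine a_rho and b_rho of Eq. (7) for one axis from prescribed displacements.
    import numpy as np

    def calibrate_axis(xi, rho):
        """Solve rho = a * xi + b from two (or more) calibration points."""
        A = np.column_stack([xi, np.ones_like(xi)])
        (a, b), *_ = np.linalg.lstsq(A, rho, rcond=None)
        return a, b

    xi_cal = np.array([0.30, 0.22])      # measured parameters for the two calibration steps
    rho_cal = np.array([0.0, 1.0])       # prescribed displacements (mm)
    a_x, b_x = calibrate_axis(xi_cal, rho_cal)
    dx_new = a_x * 0.26 + b_x            # absolute displacement for a subsequent measurement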

 

Fig. 6. Experimental demonstration of 3D tracking using a priori calibrations to extract the constants in Eq. (7). The blue line represents the imposed target displacement, while the red dashed line indicates the reconstructed trajectory.


Although recovering the target trajectory with or without a priori information follows the same rule, i.e., Eq. (7), there is an important difference between the two methods in terms of accuracy.

C. Errors in Trajectory Reconstruction

Let us now discuss the possible errors that can be encountered in this tracking procedure. Of course, due to the limited size of the ensemble of field realizations, one can anticipate deviations from one measurement to another. It is important to realize that while the errors in recovering the incremental motion steps do not depend on the starting point, they may accumulate over time. In practice, the experiment can be affected by possible fluctuations in the laser power, by noisy photon registrations at the detector, by a non-uniform velocity of the moving speckles, etc. As a consequence, the trajectory will be reconstructed with a precision that varies from point to point. The recovery error ε can be defined as the difference between the exact and the estimated location of the target, relative to the average step size Δr. The evolution of this error in recovering a scaled trajectory is illustrated in Fig. 7. As apparent from this figure, the errors in recovering the incremental target motion according to Eq. (8) do not accumulate, which can be advantageous for certain applications.
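
For completeness, the error metric of Fig. 7 can be computed as sketched below (our own formulation of the definition given in the caption; the averaging over one hundred trajectories is omitted).

    # Per-step reconstruction error relative to the average imposed step size.
    import numpy as np

    def relative_error(true_path, recovered_path):
        """Both inputs: (M, 3) arrays of target positions."""
        eps = np.linalg.norm(recovered_path - true_path, axis=1)          # eps(t)
        mean_step = np.mean(np.linalg.norm(np.diff(true_path, axis=0), axis=1))
        return eps / mean_step                                            # eps / <Delta r>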

 

Fig. 7. Evolution of the relative error ε/Δr during the tracking procedure. The error in reconstructing the target location is evaluated as $\varepsilon^2(t)=\varepsilon_x^2(t)+\varepsilon_y^2(t)+\varepsilon_z^2(t)$, and Δr denotes the average step size in moving the target. The solid line and the shaded area indicate the average and the standard deviation of the error over one hundred trajectories, respectively.


When using a priori calibrations, as described before, increasing the range of available data will improve the precision of evaluating the coefficients in Eq. (7). However, as opposed to the reconstruction of the scaled trajectory, in this case the average error depends on both constants in Eq. (7). As a result, the average reconstruction error accumulates along the trajectory, even though the error in measuring the statistical parameters {τi,x,τi,y,σi2} may either increase or decrease from one step to another. This evolution is illustrated in Fig. 8 for one hundred target trajectories developed over the same volume as in Fig. 6. Of course, because of this error accumulation, which is specific to any sequential measurement without continuous feedback or reference, one may need, at some point, to go through a recalibration process. However, if knowledge of a scaled trajectory suffices, then the tracking precision is constant over the entire duration of the measurement, as is the case in Fig. 7.

 

Fig. 8. Error evolution in recovering an absolute trajectory. The time variable is normalized by the measurement time tm. The error in reconstructing the target location is evaluated as $\varepsilon^2(t)=\varepsilon_x^2(t)+\varepsilon_y^2(t)+\varepsilon_z^2(t)$. The solid line and the shaded area show the average and the standard deviation of the errors over one hundred trajectories, respectively.


4. FURTHER DISCUSSION

Although the present experiments involved a static scattering enclosure, this is not an absolute restriction. The tracking method works as long as the characteristic time of the controlled variation of the speckle field, τsp, the characteristic time of the target dynamics, τt, and the characteristic time associated with changes in the properties of the scattering wall, τw, satisfy τsp<τt<τw. In practice, the scattering enclosure can change as long as its dynamics is slower than that of the target. In this context, we also note that the speckle field generated inside the enclosure is quite sensitive to changes in the size, structure, and angle of incidence of the primary beam, which permits manipulating the speckle dynamics over large ranges.

Besides precision, repeatability is another important characteristic of a measurement. Of course, the repeatability is primarily affected by dynamic perturbations, while fluctuations originating in statistically independent sources of noise have a lesser effect. Visualization 2, Visualization 3, and Visualization 4 demonstrate the repeatability in detecting the target motion along the transversal and axial directions.

We would like to emphasize that the tracking task can be achieved based on intensity measurements performed on any side of the scattering box. Even though, on average, the integrated intensity could differ from one side of the box to another, our statistical approach is capable of extracting the same dynamic information, as illustrated in Visualization 5 and Visualization 6. In this case, the magnitude of the fluctuations may increase or decrease depending on the angular distribution of the light scattered from the object, as also discussed in the context of Eq. (2).

In deriving Eqs. (1)–(6), we considered that the evolution of the scattering potential is described by a single velocity vector. However, the same concept can be applied when the scattering potential is approximated by a collection of discrete objects with independent velocity vectors. In other words, the proposed tracking method can be generalized to tracking more than one object. To do so, one can use independent component analysis [32,33] of the integrated intensity fluctuations to separate the independent sources of fluctuations associated with each independent motion. This, of course, comes at the cost of increasing the measurement time and decreasing the level of detectable fluctuations, which, in turn, could affect the tracking errors.
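
As a hedged illustration only (the text does not specify an implementation, and the assumption that several detectors placed around the enclosure supply the mixed channels is ours), a standard blind source separation routine such as FastICA could be used to isolate the fluctuation component associated with each independently moving object before applying the single-object analysis to each recovered component.

    # Separate independent fluctuation sources with scikit-learn's FastICA.
    import numpy as np
    from sklearn.decomposition import FastICA

    def separate_components(X, n_objects):
        """X: (n_samples, n_detectors) array of simultaneously recorded intensities."""
        ica = FastICA(n_components=n_objects, random_state=0)
        return ica.fit_transform(X)      # columns: estimated independent fluctuation sources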

In practice, the magnitude of the intensity fluctuations can be affected in different ways. First, when approximating the potential with a collection of discrete objects, the variation associated with each independent component reduces roughly by a factor of 1/no, where no is the number of discrete objects. Second, the fluctuations are strongly affected by the target dimension D, the target feature size lt, the size of the field of view L, and, of course, by the speckle size l. Although the size of the target features does not provide sufficient information by itself, the ratio lt/l is an important factor. Choosing an optimum primary beam size d0 helps keep this ratio close to unity, which, according to Eq. (6), enhances the fluctuations. This is also demonstrated in Fig. 4. In addition, the ratio D/L between the target size and the size of the field of view indicates how efficiently the target motion can modify the integrated intensity. As this factor grows, the contribution of the target motion to the dynamics of the integrated intensity increases, and the accuracy of the tracking procedure improves.

Measurements are always affected by noise. Technically, the finite length of the recorded intensity time series introduces deviations from the ideal statistical parameters. Such deviations can, of course, be reduced by increasing the measurement time. Roughly speaking, the variance of these unwanted fluctuations decreases as 1/tm, where tm is the measurement time of each step.

5. CONCLUSIONS

We have demonstrated the ability to track the motion of a target completely surrounded and obscured by multiple scattering media. The concept of encoding the position of the target using statistical properties of diffused radiation is rather general, as there are no restrictions on the scattering properties of the target. The statistical analysis of the recorded signal yields robust information that is largely insensitive to inherent experimental perturbations. Furthermore, the method requires only measurements of the intensity integrated over large areas, which can be performed at any location outside the scattering enclosure. This feature, together with the experimental simplicity and versatility, is especially appealing for low-signal applications. In addition, because the movement along each direction is extracted independently, the approach is quite efficient in sensing scenarios involving different degrees of freedom.

We have shown that the motion and the relative trajectory of an enclosed target can be detected without any feedback from inside the obscured region. Furthermore, with limited a priori knowledge, the procedure can also provide quantitative, absolute measurements of the trajectory.

Finally, in our experiments, we addressed the intriguing situation of detecting and tracking motion inside an obscuring box. However, the concept of using statistical properties of radiation to encode the position of scattering objects can be applied to other obscurant geometries, not necessarily flat, and also to different detection scenarios. Moreover, as this method follows the motion of the target’s center of mass, the rotation and tilt of the object will not affect the tracking accuracy. These characteristics should be of interest for a range of applications, including biomedical and remote sensing. Even though we presented optical experiments, this tracking procedure can also be implemented in other domains, such as acoustics and microwaves.

 

See Supplement 1 for supporting content.

REFERENCES

1. I. Roy, T. Y. Ohulchanskyy, D. J. Bharali, H. E. Pudavar, R. A. Mistretta, N. Kaur, and P. N. Prasad, “Optical tracking of organically modified silica nanoparticles as DNA carriers: a nonviral, nanomedicine approach for gene delivery,” Proc. Natl. Acad. Sci. USA 102, 279–284 (2005).

2. S. T. Acton and N. Ray, “Biomedical image analysis: tracking,” Synth. Lect. Image Video Multimedia Process. 2, 1–152 (2006).

3. F. Daum and R. Fitzgerald, “Decoupled Kalman filters for phased array radar tracking,” IEEE Trans. Autom. Control 28, 269–283 (1983).

4. S. R. Cloude and E. Pottier, “A review of target decomposition theorems in radar polarimetry,” IEEE Trans. Geosci. Remote Sens. 34, 498–518 (1996).

5. U. Wandinger, “Introduction to lidar,” in Lidar (Springer, 2005), pp. 1–18.

6. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).

7. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).

8. M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015).

9. G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-fight imaging,” Nat. Commun. 6, 6021 (2015).

10. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2015).

11. I. Vellekoop and A. Mosk, “Universal optimal transmission of light through disordered materials,” Phys. Rev. Lett. 101, 120601 (2008).

12. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).

13. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).

14. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).

15. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).

16. J. W. Goodman, “Statistical properties of laser speckle patterns,” in Laser Speckle and Related Phenomena (Springer, 1975), pp. 9–75.

17. A. Ishimaru, Wave Propagation and Scattering in Random Media (Academic, 1978).

18. E. Wolf, “Coherence effects in scattering,” in Introduction to the Theory of Coherence and Polarization of Light (Cambridge University, 2007).

19. A. T. Friberg and R. J. Sudol, “The spatial coherence properties of Gaussian Schell-model beams,” J. Mod. Opt. 30, 1075–1097 (1983).

20. F. Gori, M. Santarsiero, and A. Sona, “The change of width for a partially coherent beam on paraxial propagation,” Opt. Commun. 82, 197–203 (1991).

21. J. W. Goodman, Statistical Optics (Wiley, 2015).

22. H. Bayley and P. S. Cremer, “Stochastic sensors inspired by biology,” Nature 413, 226–230 (2001).

23. E. Baleine and A. Dogariu, “Variable coherence scattering microscopy,” Phys. Rev. Lett. 95, 193904 (2005).

24. D. Haefner, S. Sukhov, and A. Dogariu, “Stochastic scattering polarimetry,” Phys. Rev. Lett. 100, 043901 (2008).

25. T. W. Kohlgraf-Owens and A. Dogariu, “Transmission matrices of random media: means for spectral polarimetric measurements,” Opt. Lett. 35, 2236–2238 (2010).

26. M. I. Akhlaghi and A. Dogariu, “Stochastic optical sensing,” Optica 3, 58–63 (2016).

27. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988).

28. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).

29. R. Berkovits, M. Kaveh, and S. Feng, “Memory effect of waves in disordered systems: a real-space approach,” Phys. Rev. B 40, 737–740 (1989).

30. E. Baleine and A. Dogariu, “Variable coherence tomography,” Opt. Lett. 29, 1233–1235 (2004).

31. J. Fleischer, “Imaging: making sensing of incoherence,” Nat. Photonics 10, 211–213 (2016).

32. P. Comon, C. Jutten, and J. Herault, “Blind separation of sources, part II: problems statement,” Signal Process. 24, 11–20 (1991).

33. C. Jutten and J. Herault, “Blind separation of sources, part I: an adaptive algorithm based on neuromimetic architecture,” Signal Process. 24, 1–10 (1991).

[Crossref]

Lerosey, G.

A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).
[Crossref]

Li, C.

G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-fight imaging,” Nat. Commun. 6, 6021 (2015).
[Crossref]

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref]

Liang, J.

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref]

Mistretta, R. A.

I. Roy, T. Y. Ohulchanskyy, D. J. Bharali, H. E. Pudavar, R. A. Mistretta, N. Kaur, and P. N. Prasad, “Optical tracking of organically modified silica nanoparticles as DNA carriers: a nonviral, nanomedicine approach for gene delivery,” Proc. Natl. Acad. Sci. USA 102, 279–284 (2005).
[Crossref]

Mosk, A.

I. Vellekoop and A. Mosk, “Universal optimal transmission of light through disordered materials,” Phys. Rev. Lett. 101, 120601 (2008).
[Crossref]

Mosk, A. P.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).
[Crossref]

Ohulchanskyy, T. Y.

I. Roy, T. Y. Ohulchanskyy, D. J. Bharali, H. E. Pudavar, R. A. Mistretta, N. Kaur, and P. N. Prasad, “Optical tracking of organically modified silica nanoparticles as DNA carriers: a nonviral, nanomedicine approach for gene delivery,” Proc. Natl. Acad. Sci. USA 102, 279–284 (2005).
[Crossref]

Pottier, E.

S. R. Cloude and E. Pottier, “A review of target decomposition theorems in radar polarimetry,” IEEE Trans. Geosci. Remote Sens. 34, 498–518 (1996).
[Crossref]

Prasad, P. N.

I. Roy, T. Y. Ohulchanskyy, D. J. Bharali, H. E. Pudavar, R. A. Mistretta, N. Kaur, and P. N. Prasad, “Optical tracking of organically modified silica nanoparticles as DNA carriers: a nonviral, nanomedicine approach for gene delivery,” Proc. Natl. Acad. Sci. USA 102, 279–284 (2005).
[Crossref]

Pudavar, H. E.

I. Roy, T. Y. Ohulchanskyy, D. J. Bharali, H. E. Pudavar, R. A. Mistretta, N. Kaur, and P. N. Prasad, “Optical tracking of organically modified silica nanoparticles as DNA carriers: a nonviral, nanomedicine approach for gene delivery,” Proc. Natl. Acad. Sci. USA 102, 279–284 (2005).
[Crossref]

Raskar, R.

G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-fight imaging,” Nat. Commun. 6, 6021 (2015).
[Crossref]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

Ray, N.

S. T. Acton and N. Ray, “Biomedical image analysis: tracking,” Synth. Lect. Image Video Multimedia Process. 2, 1–152 (2006).
[Crossref]

Rosenbluh, M.

I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).
[Crossref]

Roy, I.

I. Roy, T. Y. Ohulchanskyy, D. J. Bharali, H. E. Pudavar, R. A. Mistretta, N. Kaur, and P. N. Prasad, “Optical tracking of organically modified silica nanoparticles as DNA carriers: a nonviral, nanomedicine approach for gene delivery,” Proc. Natl. Acad. Sci. USA 102, 279–284 (2005).
[Crossref]

Santarsiero, M.

F. Gori, M. Santarsiero, and A. Sona, “The change of width for a partially coherent beam on paraxial propagation,” Opt. Commun. 82, 197–203 (1991).
[Crossref]

Silberberg, Y.

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

Small, E.

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

Sona, A.

F. Gori, M. Santarsiero, and A. Sona, “The change of width for a partially coherent beam on paraxial propagation,” Opt. Commun. 82, 197–203 (1991).
[Crossref]

Stone, A. D.

S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988).
[Crossref]

Sudol, R. J.

A. T. Friberg and R. J. Sudol, “The spatial coherence properties of Gaussian Schell-model beams,” J. Mod. Opt. 30, 1075–1097 (1983).

Sukhov, S.

D. Haefner, S. Sukhov, and A. Dogariu, “Stochastic scattering polarimetry,” Phys. Rev. Lett. 100, 043901 (2008).
[Crossref]

Thomson, R. R.

G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-fight imaging,” Nat. Commun. 6, 6021 (2015).
[Crossref]

Tonolini, F.

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2015).
[Crossref]

Tosi, A.

van Putten, E. G.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Veeraraghavan, A.

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

Vellekoop, I.

I. Vellekoop and A. Mosk, “Universal optimal transmission of light through disordered materials,” Phys. Rev. Lett. 101, 120601 (2008).
[Crossref]

Velten, A.

M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015).
[Crossref]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

Vos, W. L.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Wandinger, U.

U. Wandinger, “Introduction to lidar,” in Lidar (Springer, 2005), pp. 1–18.

Wang, L. V.

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref]

Willwacher, T.

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

Wolf, E.

E. Wolf, “Coherence effects in scattering,” in Introduction to the Theory of Coherence and Polarization of Light (Cambridge University, 2007).

Zeman, J.

IEEE Trans. Autom. Control (1)

F. Daum and R. Fitzgerald, “Decoupled Kalman filters for phased array radar tracking,” IEEE Trans. Autom. Control 28, 269–283 (1983).
[Crossref]

IEEE Trans. Geosci. Remote Sens. (1)

S. R. Cloude and E. Pottier, “A review of target decomposition theorems in radar polarimetry,” IEEE Trans. Geosci. Remote Sens. 34, 498–518 (1996).
[Crossref]

J. Mod. Opt. (1)

A. T. Friberg and R. J. Sudol, “The spatial coherence properties of Gaussian Schell-model beams,” J. Mod. Opt. 30, 1075–1097 (1983).

Nat. Commun. (2)

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-fight imaging,” Nat. Commun. 6, 6021 (2015).
[Crossref]

Nat. Photonics (5)

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2015).
[Crossref]

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).
[Crossref]

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
[Crossref]

J. Fleischer, “Imaging: making sensing of incoherence,” Nat. Photonics 10, 211–213 (2016).
[Crossref]

Nature (3)

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

H. Bayley and P. S. Cremer, “Stochastic sensors inspired by biology,” Nature 413, 226–230 (2001).
[Crossref]

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref]

Opt. Commun. (1)

F. Gori, M. Santarsiero, and A. Sona, “The change of width for a partially coherent beam on paraxial propagation,” Opt. Commun. 82, 197–203 (1991).
[Crossref]

Opt. Express (1)

Opt. Lett. (2)

Optica (1)

Phys. Rev. B (1)

R. Berkovits, M. Kaveh, and S. Feng, “Memory effect of waves in disordered systems: a real-space approach,” Phys. Rev. B 40, 737–740 (1989).
[Crossref]

Phys. Rev. Lett. (5)

S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988).
[Crossref]

I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).
[Crossref]

I. Vellekoop and A. Mosk, “Universal optimal transmission of light through disordered materials,” Phys. Rev. Lett. 101, 120601 (2008).
[Crossref]

E. Baleine and A. Dogariu, “Variable coherence scattering microscopy,” Phys. Rev. Lett. 95, 193904 (2005).
[Crossref]

D. Haefner, S. Sukhov, and A. Dogariu, “Stochastic scattering polarimetry,” Phys. Rev. Lett. 100, 043901 (2008).
[Crossref]

Proc. Natl. Acad. Sci. USA (1)

I. Roy, T. Y. Ohulchanskyy, D. J. Bharali, H. E. Pudavar, R. A. Mistretta, N. Kaur, and P. N. Prasad, “Optical tracking of organically modified silica nanoparticles as DNA carriers: a nonviral, nanomedicine approach for gene delivery,” Proc. Natl. Acad. Sci. USA 102, 279–284 (2005).
[Crossref]

Signal Process. (2)

P. Comon, C. Jutten, and J. Herault, “Blind separation of sources, part II: problems statement,” Signal Process. 24, 11–20 (1991).
[Crossref]

C. Jutten and J. Herault, “Blind separation of sources, part I: an adaptive algorithm based on neuromimetic architecture,” Signal Process. 24, 1–10 (1991).
[Crossref]

Synth. Lect. Image Video Multimedia Process. (1)

S. T. Acton and N. Ray, “Biomedical image analysis: tracking,” Synth. Lect. Image Video Multimedia Process. 2, 1–152 (2006).
[Crossref]

Other (5)

U. Wandinger, “Introduction to lidar,” in Lidar (Springer, 2005), pp. 1–18.

J. W. Goodman, Statistical Optics (Wiley, 2015).

J. W. Goodman, “Statistical properties of laser speckle patterns,” in Laser Speckle and Related Phenomena (Springer, 1975), pp. 9–75.

A. Ishimaru, Wave Propagation and Scattering in Random Media (Academic, 1978).

E. Wolf, “Coherence effects in scattering,” in Introduction to the Theory of Coherence and Polarization of Light (Cambridge University, 2007).

Supplementary Material (7)

Supplement 1 (PDF, 1584 KB): supplementary materials
Visualization 1 (MP4, 1325 KB): video 1
Visualization 2 (MP4, 3235 KB): video 2
Visualization 3 (MP4, 6457 KB): video 3
Visualization 4 (MP4, 18041 KB): video 4
Visualization 5 (MP4, 3190 KB): video 5
Visualization 6 (MP4, 10280 KB): video 6
