
Functional imaging through scattering medium via fluorescence speckle demixing and localization

Open Access

Abstract

Recently, fluorescence-based optical techniques have emerged as a powerful tool to probe information in the mammalian brain. However, tissue heterogeneities prevent clear imaging of deep neuron bodies due to light scattering. While several state-of-the-art approaches based on ballistic light can retrieve information at shallow depths inside the brain, non-invasive localization and functional imaging at depth remain a challenge. It was recently shown that functional signals from time-varying fluorescent emitters located behind scattering samples could be retrieved by using a matrix factorization algorithm. Here we show that the seemingly information-less, low-contrast fluorescent speckle patterns recovered by the algorithm can be used to locate each individual emitter, even in the presence of background fluorescence. We test our approach by imaging the temporal activity of large groups of fluorescent sources behind different scattering phantoms mimicking biological tissues, and through a brain slice with a thickness of ∼200 µm.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

For the past decade, several light-based technologies have revolutionized the field of neuroscience [1–3]. In particular, genetically encoded calcium indicators (GECIs) have emerged as a powerful tool to monitor information processing in the brain in multiple animal models with high spatial resolution, contrast, and specificity [4–7]. While these approaches make it possible to record neuronal activity at high frame rates and even with subcellular resolution [8–10], several challenges remain in achieving large-scale, deep, and possibly whole-brain recording of neuronal activity in complex animal models such as mice. In particular, tissue heterogeneities perturb the light wavefront as it travels through any sample, which limits the capability of any optical system to obtain sharp images (if the aberrations introduced by the medium are weak) or any spatial information at all (if the signal coming from the sample is fully scattered) at depth. In practice, this means that conventional microscopy is limited to imaging at depths of just a few mean free paths, which represents, at most, a few hundred microns in the brain [11]. Multi-photon fluorescence microscopy penetrates deeper in a noninvasive manner, but its depth is ultimately limited by out-of-focus fluorescence [6]. Alternatively, several micro-endoscopy approaches have emerged as a way to overcome this depth limit [12,13], but they have the drawback of being invasive.

Recently, wavefront shaping (WS) approaches have been proven to allow image retrieval through highly scattering media. The main idea of these techniques is to use a spatial light modulator (SLM) to introduce controlled changes on the wavefront, thus compensating for the scattering events with either optimization procedures [14], digital optical phase conjugation [15], or by measuring the transmission matrix (TM) of the system [16]. By doing so, it has been shown that it is possible to focus light through or inside scattering samples [17,18] and, using the so-called memory effect (ME), scan this focus in order to image small hidden objects [19–21]. In fact, the use of the ME has also enabled the design of systems that do not even need WS techniques to obtain useful information [22,23]. More recently, computation-based approaches, merging ideas from both the optimization and the TM fields, have been shown to provide non-invasive fluorescence imaging inside scattering media even beyond the limitations established by the ME range [24,25]. These approaches take advantage of the fact that the use of fluorescence contrast implies the incoherent addition of light signals coming from each individual emitter at the detector. When using time-varying excitation, these signals can be efficiently unmixed by applying different matrix factorization approaches, such as non-negative matrix factorization (NMF). Taking advantage of this new paradigm, it has also been shown that NMF can be used to read out functional signals through the skull in mice from the incoherent, seemingly information-less fluorescent speckle patterns recorded from the sample [26].

Here, we tackle the problem of obtaining both the location and the temporal activity (i.e., functional recording) of fluorescent functional signals through scattering media. From a set of fluctuating sources (mimicking a set of neurons), we show how it is possible to retrieve not only their temporal activity but also their spatial localization from the fluorescent signal reaching the detector. To do so, we rely on the fact that the signal coming from each source, after propagating through the scattering medium, generates a unique speckle pattern, which we refer to as its fingerprint, at the sensor. All of these individual fingerprints add incoherently at the image plane, where the sensor is placed. This generates a low-contrast image that fluctuates over time according to the combined temporal activity of the emitters. From this fluctuating video, NMF can be used to decompose the recorded dataset into two matrices, one containing the unmixed individual speckle fingerprints, and another containing their temporal activities. Crucially, studying the correlations between the different fingerprints makes it possible to retrieve the individual position of each emitter, thanks to the ME. This is carried out using a simple post-processing algorithm based on deconvolving each fingerprint from the overlapping fingerprints of the other sources. We demonstrate the validity of the approach by retrieving the temporal activity and localizing large groups of fluorescent sources behind scattering phantoms mimicking biological tissue and through a brain slice with a thickness of $\sim 200$ $\mu m$. We also study the robustness of the process to background fluorescence.

2. Methods

2.1 Experimental setup

The experimental system can be seen in Fig. 1(a). A 473 nm blue laser (LSR-0473-PFM-00100-01, Laserglow Technologies) was used to illuminate a digital micromirror device (DMD), the surface of which was imaged onto the sample plane by a tube lens (LA1708-A, Thorlabs) and the lower objective (Plan-NEOFLUAR $\times 20$ 0.5 NA, Zeiss). We used the DMD (DLP LightCrafter 6500, Texas Instruments) to generate well-defined dynamic excitation patterns on a set of fluorescent sources, matching the dynamics of publicly available neuronal activity recordings [4]. In our experiments, the samples consisted of groups of randomly distributed beads (FluoSpheres F8836, Thermofisher Scientific) with a diameter of 10 $\mu m$, similar in size to common neuron cell bodies [27]. The samples extended over a field of view of about $160\times 160$ $\mu m^2$, with bead densities ranging between $\sim 47000$ beads/$mm^3$ and $\sim 78000$ beads/$mm^3$, similar to the densities found in two-photon calcium imaging experiments in the mouse brain [28]. The emission spectrum of the beads is close to that of common green fluorescent activity indicators. After excitation, the fluorescent signal propagated through a scattering medium and was imaged onto a scientific complementary metal–oxide–semiconductor (sCMOS) camera (Iris 15 sCMOS, Teledyne Photometrics) by a microscope objective (RMS10X PLAN ACHROMAT 0.25NA, Olympus) and a tube lens. A bandpass filter (MF530-43, Thorlabs) was used to block any excitation light from reaching the sensor. Additionally, we incorporated a control imaging system that directly imaged the sample plane in reflection, without going through the scattering medium. To this end, a dichroic beam splitter (DM, FF496-SDi01, Semrock) was used to collect the backpropagating fluorescent signal, and the sample plane was imaged onto a CMOS camera (ACE2014-55um, Basler) by means of another tube lens.
This allowed us to obtain the ground-truth spatial positions of all the emitters; this information was only used a posteriori to verify the quality of our reconstruction.


Fig. 1. Experimental setup and principle of recovering the temporal activity and location of fluorescent emitters through scattering media. a, A DMD is illuminated with a light source (blue laser) to excite a set of fluorescent beads with different spatio-temporal activations (orange dashed box). The signals are collected by a microscope objective and a tube lens (L), and add incoherently on the camera (sCMOS). A band-pass filter (BF) removes any residual excitation light. A second imaging system records the ground truth spatial and temporal information (for control and comparison purposes). To do so, a dichroic mirror (DM) and a lens (L) are placed below the sample. b, Detailed view of the sample excitation. The fluorescent beads are excited with the aid of the DMD, with temporal activities that match the dynamics of neuronal activity recordings. The fluorescent signal propagates through the scattering sample, generating a set of speckle patterns that vary over time.


After propagating through the scattering medium, the combination of the signals emerging from each emitter generated a time-varying, low-contrast speckle pattern that was recorded with the sCMOS camera. We performed background envelope removal by using high-pass filtering on each frame of the video in order to enhance contrast. After this, we fed the processed video to the NMF algorithm. This process generated two different matrices, one containing the individual spatial fingerprints from each emitter, and another containing the independent temporal activities. From the spatial fingerprints, we recovered the positions of each individual emitter by using a deconvolution approach [25]. We detail both procedures (unmixing and localization) in what follows.
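As an illustration, the per-frame high-pass step can be sketched as the subtraction of a Gaussian-blurred background envelope; the kernel width `sigma` below is a hypothetical parameter, not a value specified in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_video(video, sigma=25.0):
    """Remove the low-frequency background envelope from each frame
    of a (T, H, W) video to enhance speckle contrast.

    sigma is the Gaussian kernel width in pixels (assumed value)."""
    video = video.astype(float)
    out = np.empty_like(video)
    for k, frame in enumerate(video):
        # Subtracting the blurred frame keeps only the fine speckle grain.
        out[k] = frame - gaussian_filter(frame, sigma)
    return out
```

Any smooth-background estimator (e.g., a median filter or a Fourier-domain high-pass) could play the same role; the key point is that only the fine speckle structure carries per-source information.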

2.2 NMF-based unmixing

Any recorded dataset can be expressed as a three-dimensional spatio-temporal object, $I(x,y,t)$. Given the fact that fingerprints from each emitter add incoherently onto the sensor, it is possible to write the $k^{th}$ frame of the recorded video as:

$$I_k(x,y) =\sum_{s=1}^{N}w_s(x,y) \cdot h_s(t_k),$$
where $k$ is the index of the frame, $s$ enumerates the individual sources, and $h_s(t_k)$ and $w_s(x,y)$ correspond to the emission level at time $t_k$ and the individual speckle pattern generated by the $s^{th}$ source, respectively. In other words, any frame of the video can be expressed as a linear combination of a reduced number of fingerprints (corresponding to the $N$ sources in the sample), with weights determined by their temporal activity. Then, it is possible to express the full dataset in matrix form as:
$$I = W \cdot H,$$
where each column of $I$ contains a reshaped frame of the video in vector form, the columns of $W$ contain all the individual fingerprints (reshaped in vector form), and the rows of $H$ encode the temporal activities of each source. The goal is to estimate both $W$ and $H$ from the observations, $I$. Expressing the system in matrix form makes it clear that the whole retrieval procedure is a matrix factorization problem. Moreover, while the size of $I$ can be quite large (tens of thousands of pixels and hundreds of frames), the rank of the matrix is much smaller (equal to the number of sources, neglecting noise). This means that both $W$ and $H$ are much smaller than $I$. Due to the physical characteristics of the system (fluorescent signals, intensity measurements), both matrices can only have non-negative elements. This makes it possible to take advantage of established low-rank non-negative matrix factorization frameworks, which tackle the inverse problem of retrieving both $W$ and $H$, given $I$, by solving the minimization problem:
$$\min_{W,H>0} \Arrowvert I -W\cdot H\Arrowvert^{2}_F.$$

To solve Eq. (3), it is necessary to know the rank of $I$ (i.e., the number of sources). The rank can be estimated in a non-invasive manner from the recorded dataset by comparing the outputs of different NMF runs with different ranks and minimizing the residual (see Supplement 1). In practice, it is also possible to add regularization terms to the minimization problem that stabilize solutions and incorporate different priors. In the present case, it makes sense to add a sparsity constraint in the temporal domain (GECI signals tend to have brief spikes, followed by longer decay times and periods of very low activity). Moreover, forcing some sparsity on $W$ helps retrieve higher-contrast fingerprints, which makes localization easier. These regularization terms can be tuned depending on the experimental conditions (noise level, amount of a priori knowledge about the system, etc.) to improve the quality of the results and to reduce the post-processing time. In practice, we use the NMF solver contained in the scikit-learn package [29] (see Supplement 1).

2.3 Spatial localization

Once the NMF procedure has been carried out, $W$ and $H$ provide the individual fingerprints and the temporal activities of each independent source, respectively. In previous work, the information contained in $H$ was shown to contain the functional signals from the sources [26]. However, no information about the spatial positions of the emitters was retrieved. Recently, it has also been shown in the context of structural imaging that it is possible to locate different emitters from their speckle fingerprints (even beyond the ME range) by using the information encoded in $W$ [25]. The key idea is that neighboring emitters generate laterally shifted speckle patterns, and evaluating these lateral shifts reveals the $(x,y)$ positions of the sources. Moreover, even if the span of the source distribution goes beyond the ME range, the full location map can be retrieved under the condition that the sample is dense enough (i.e., if the maximum distance between any two neighboring sources is no longer than the ME range). In that case, it is possible to advance the position analysis from one emitter to its nearest neighbor by studying the correlation between their fingerprints, and build a full location map.

Assuming perfect memory effect, the relationship between two fingerprints $w_i$ and $w_j$ can be written as the convolution of one of the fingerprints and a delta function:

$$w_i = w_j \ast \delta(x-x_0^{i,j},y-y_0^{i,j}),$$
where $x_0^{i,j}$ and $y_0^{i,j}$ account for the lateral shift between the two fingerprints ($i,j$). Due to the ME, this shift is directly proportional to the relative position between the sources. Then, the lateral shift ($x_0^{i,j},y_0^{i,j}$) between any pair of fingerprints $w_i, w_j$ can be experimentally retrieved via a deconvolution (which we refer to as the operator $D$) between the two fingerprints:
$$\delta(x-x_0^{i,j},y-y_0^{i,j}) = D(w_i, w_j).$$

The result of this deconvolution is a spike akin to a delta function, offset from the center by a distance that corresponds to the lateral shift between the two emitters (see Fig. 2(a)). This holds under the condition that the two fingerprints are indeed laterally shifted versions of each other (i.e., their respective sources lie within the ME range). In practice, due to the finite ME range, the deconvolved peak, which represents a correlation function, decreases in amplitude as the distance increases. For very distant emitters, the two fingerprints will no longer be correlated, and the result of the deconvolution, rather than having that particular structure, will be a low-amplitude noise-like pattern. Carrying out the deconvolution between all the possible pairs of fingerprints, and tracking the positions of the correlation peaks, makes it possible to build a relative location map of all the emitters. For a particular emitter, $s$, it is possible to retrieve the partial location map ($M_s$) in its vicinity by adding the results of all the deconvolutions related to that emitter:

$$M_s = \sum_{i=1}^{N}D(w_s, w_i).$$


Fig. 2. Retrieving the spatial position of different fluorescent emitters by studying the correlations between their speckle fingerprints. a, Deconvolution between fingerprints coming from nearby (first and second) or far-away (second and third) emitters. In the first case, the two fingerprints are highly correlated, laterally shifted images, and the deconvolution yields a delta-like spike, whose position relative to the center of the image provides the lateral shift between the two. In the second case, the two fingerprints come from emitters separated by a distance longer than the ME range, and thus they are not correlated. In this scenario, the deconvolution provides a low-amplitude noisy image with no useful information. b, For each emitter, deconvolution between its fingerprint and the fingerprints of all emitters provides a location map of the emitters in its vicinity. c, Stitching all the partial location maps provides the localization of all the emitters in the sample.


This partial position map represents the relative positions, centered around the emitter $s$, of all the sources that lie at a distance from $s$ smaller than the ME range (Fig. 2(b), lowest row of images). Note that this partial location map is retrieved by adding the results of all the deconvolutions (including those involving fingerprints that are not correlated). The correlated fingerprints will generate high-amplitude spikes at the positions of the sources, and the uncorrelated fingerprints will yield a low-level noise background. By adding all the partial position maps (after shifting all of them with respect to the same source), it is possible to obtain the full location map as:

$$M = \sum_{i=1}^{N}M_i(x-x_0^{1,i},y-y_0^{1,i}).$$

In our experiments, we performed this deconvolution by using a Wiener-Hunt approach from the scikit-image python module (skimage.restoration.wiener) [30]. A full step-by-step analysis of the procedure can be found in Supplement 1, and the code at [31].
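A self-contained sketch of the localization step is given below. Instead of `skimage.restoration.wiener`, it uses a bare Fourier-domain Wiener filter (an assumption made to keep the example dependency-free); the regularization constant `eps` is a placeholder:

```python
import numpy as np

def wiener_deconv(w_i, w_j, eps=1e-3):
    """Fourier-domain Wiener deconvolution of fingerprint w_i by w_j.

    For memory-effect-related fingerprints, the result is a delta-like
    peak located at their relative lateral shift (Eq. (5))."""
    Fi, Fj = np.fft.fft2(w_i), np.fft.fft2(w_j)
    d = np.fft.ifft2(Fi * np.conj(Fj) / (np.abs(Fj) ** 2 + eps)).real
    return np.fft.fftshift(d)  # put zero shift at the image center

def shift_between(w_i, w_j):
    """(dy, dx) lateral shift of w_i relative to w_j, read off from
    the position of the deconvolution peak."""
    d = wiener_deconv(w_i, w_j)
    peak = np.array(np.unravel_index(np.argmax(d), d.shape))
    return peak - np.array(d.shape) // 2

def partial_map(w_s, fingerprints):
    """Partial location map M_s of Eq. (6): sum of the deconvolutions
    of w_s against every unmixed fingerprint."""
    return sum(wiener_deconv(w_s, w_i) for w_i in fingerprints)
```

For two fingerprints that are circularly shifted copies of each other, `shift_between` recovers the shift exactly; in real data, the finite ME range and noise broaden and attenuate the peak, as described above.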

2.4 Results

We tested our approach under different experimental conditions. First, we retrieved both the temporal activity and the spatial positions of a group of beads through a scattering medium consisting of a $\sim 210$ $\mu m$ thick parafilm layer ($l_s \sim 170$ $\mu m$, $g \sim 0.8$ [32]). We designed our system with the possibility to vary the distance between the bead plane and the scattering material. This allowed us to slightly change the range of the ME while maintaining a condition of non-ballistic image information, and thus test the system’s capability to retrieve extended objects. In a first experiment, the distance between the bead plane and the scattering phantom was set to 1.7 mm, corresponding to a very favorable situation with a large ME range and signal-to-noise ratio (SNR). The source distribution consisted of 19 randomly distributed beads, which were excited with the DMD. A total recording of 500 frames with an integration time of 500 ms was acquired. An example of the images obtained under these conditions can be seen in the inset of Fig. 1(a). Fig. 3(a) shows the fingerprints of a subset of six sources, superimposed in a false-color image, demonstrating the unmixing capability of the approach. It can be seen that some of these fingerprints (corresponding to close-by sources) are similar, i.e., exhibit high spatial correlation, but are shifted laterally. For example, sources three and four (red and yellow) are related by a small horizontal shift. A close inspection of the ground-truth temporal activities together with the ones retrieved by NMF (Fig. 3(b)) shows that the NMF algorithm is able to unmix the traces of each emitter, showing good agreement with the expected dynamics of neuronal activity recordings. There are, however, several factors that limit the accuracy of the retrieval of the smallest signal spikes.
The presence of noise in the recorded images, in combination with the reduction in contrast resulting from the superposition of multiple speckle patterns, hinders the performance of the NMF procedure. As a result, the smaller activity spikes, which generate very low intensity fluctuations on the camera, get lost in the reconstruction. Increasing camera integration times and/or using brighter sources would help recover the lost information and avoid crosstalk (i.e., spikes from one emitter appearing in the trace of a different emitter). By using the unmixed speckles and the spatial localization procedure described above, it is possible to obtain an image of the relative locations of the full group of nineteen sources, as shown in Fig. 3(c).


Fig. 3. Location of fluorescent emitters through scattering media. From the captured video, an unmixing algorithm allows to retrieve all the individual speckle patterns generated by each source (a) and their temporal activities (b). c, Studying the correlations between speckle patterns allows to retrieve the location of each individual emitter.


As a way to study more realistic scenarios for biological samples, we then studied the capability of the method to unmix and locate sources when the datasets are corrupted by background fluorescence, and with thicker volumetric scattering media. Under realistic conditions, the SNR of GECI signals is often very low due to scattering and attenuation in the sample. In order to mimic this condition, we built a sample consisting of two different depth planes, and increased the thickness of the scattering phantom. We placed beads on one side of a microscope coverslip and a thin uniform layer of fluorescent paint on the other. With such a sample, our illumination system allows us to excite both the beads (mimicking neurons, with their characteristic dynamics) and the paint layer (to mimic out-of-focus background fluorescence). Thus, we could excite a region of arbitrary size during the full dataset recording, which reduced the contrast of the recorded dataset in a controlled manner. We show a schematic of this system in Fig. 4(a). Under brightfield illumination, the beads appear as bright disks on a diffuse background (Fig. 4(b)). In our experiments, we tested different background levels to determine the highest amount of background that the system could tolerate before failing to unmix and locate all the sources. To generate the background, we excited a region of the sample containing only fluorescent paint and no beads with a constant signal during the whole 500-frame acquisition. An example of the spatial mask generated on the DMD to illuminate the sample can be seen in Fig. 4(c). Here, the green disks correspond to the regions of the DMD that are imaged onto the beads, and the magenta disks generate the constant background signal from the out-of-focus plane. Fig. 4(d) shows the locations of the sources present in the sample.
In this case, the scattering sample consisted of several layers of parafilm with a total thickness of 0.75 mm, placed at a distance of 0.65 mm from the beads. To visualize the loss in contrast due to the background, Fig. 4(e-g) shows different examples of the speckle patterns recorded by the camera when only the background is excited (e), when a single source is excited (f), or when both the background and a single source are excited (g). It is also possible to estimate the signal-to-background ratio (SBR) of our measurements by calculating the total energy of the speckle patterns associated with the background or with each of the individual sources. By doing so, we found that the system was able to unmix and locate all the sources down to an SBR of approximately $1.6$. The unmixed temporal activities can be found in Supplement 1 (Fig. S3).
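One simple way to compute such an SBR estimate is as the ratio of the mean fingerprint energy to the background energy; this particular definition is our assumption, since the exact formula is not reproduced here:

```python
import numpy as np

def estimate_sbr(source_fingerprints, background_pattern):
    """Energy-ratio estimate of the signal-to-background ratio.

    source_fingerprints: sequence of 2D speckle patterns, one per
    source; background_pattern: 2D pattern recorded with only the
    background excited. (Hypothetical definition, see lead-in.)"""
    signal = np.mean([np.sum(w) for w in source_fingerprints])
    background = np.sum(background_pattern)
    return signal / background
```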


Fig. 4. Retrieving emitter locations in the presence of background signal. a, Experimental configuration for the generation of out-of-focus fluorescent light. A thin layer of fluorescent paint is placed on the bottom side of the coverslip. As the excitation light passes through the sample, this layer generates fluorescent signal coming from a different axial plane from that of the sources. During the experiments, the DMD can be used to generate constant illumination of this layer of paint, thus generating a constant out-of-focus background that mimics auto-fluorescence signal commonly found in biological samples. b, Image of the sample plane when using wide-field illumination. c, Spatial mask generated on the DMD to excite both the sources (beads) and the background (paint layer). In this case, the paint layer is excited at three different spots (marked magenta) in the center of the field of view. d, Retrieved localization of the sources after the unmixing procedure. e, Speckle pattern captured by the detector when exciting only the fluorescent paint layer at the three spots shown in (c). f, Speckle pattern generated onto the detector when exciting a single bead. g, Single frame from the full recorded video, showing the low contrast speckle pattern that results from the excitation of both the fluorescent paint layer and the emitters during the experiments.


Last, we tested our system with biological tissue as the scattering medium. In this case, we placed a 200 $\mu m$-thick fixed brain slice at a distance of $\sim 680$ $\mu m$ from the beads, and recorded a full dataset (500 frames with an integration time of 2 s). Fig. 5(a) shows the ground-truth bead distribution, with a total extent of about 130 $\mu m$. Under bright-field illumination, the image retrieved by the system was a low-contrast speckle pattern, as shown in Fig. 5(b), which does not contain any obvious spatial information about the source locations. After the NMF procedure, we were able to retrieve the spatial positions of all the sources (Fig. 5(c)). In Fig. 5(d) we show a comparison between the retrieved temporal traces and the ground-truth activities that we used to excite each source. To quantify this, we calculated the correlation coefficient between all pairs of traces, finding very good agreement. The range of the ME was estimated to be about 75 $\mu m$, so the full source distribution extended approximately 1.7 times the ME range. The unmixed temporal activities for the emitters in the experiment can be found in Supplement 1 (Fig. S3).
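The trace comparison can be sketched as a matrix of Pearson correlation coefficients between every retrieved trace and every ground-truth excitation (a generic implementation, not the paper's exact script):

```python
import numpy as np

def trace_correlations(H_retrieved, H_truth):
    """Pearson correlation between each retrieved temporal trace and
    each ground-truth trace.

    Both inputs have shape (n_sources, n_frames); returns an
    (n_retrieved, n_truth) correlation matrix, used to match traces
    and quantify agreement."""
    A = H_retrieved - H_retrieved.mean(axis=1, keepdims=True)
    B = H_truth - H_truth.mean(axis=1, keepdims=True)
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T
```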


Fig. 5. Localization and temporal activity retrieval through biological tissue. a, Ground truth source distribution. b, Bright-field image through a 200 $\mu m$ brain slice. c, Localization retrieval of the sources in the field of view. d, Temporal activity comparison between the retrieved traces and the ground truth excitations on the sources.


3. Discussion

In this work, we have shown that it is possible to retrieve not only the individual temporal traces, but also the locations of fluorescent, temporally modulated, extended sources through scattering media. We have demonstrated the principle using both phantoms and fixed biological tissue as scattering samples, with a number of sources on the order of a few tens, randomly distributed in a 2D plane, and with temporal activities extracted from publicly available neuronal recordings. Even though the tissue strongly scatters the light distribution emanating from the sources, an unmixing algorithm based on NMF retrieves the individual speckle fingerprints coming from each emitter without requiring any ballistic information. Studying the correlations between different fingerprints provides the location of each individual source, even beyond the ME range.

While the proposed technique is able to provide useful information in this proof-of-principle but realistic scenario, some challenges still hinder its application in vivo. First, there are many applications where it is of interest to retrieve the temporal activities from emitters placed at multiple depth planes. Although the NMF algorithm is not affected by this [26], sources located at different axial positions will generate fingerprints that are not related by just a lateral shift. In order to retrieve the location in these cases, more sophisticated analyses, based on lateral shifts and scale changes [33], or combining light-field [34] and optimization approaches [35], could be explored. Also, while the method can retrieve the location of sources even if they span over large distances, close neighboring emitters are still required to lie within the ME range. Even though this is a strong requirement, neural networks present in the brain show intricate and highly packed neuron distributions, which should help fulfill this prerequisite. Second, precise localization depends on the NMF algorithm successfully unmixing the fingerprints. For this to happen, the recorded frames need to present high enough contrast. While multiple parameters reduce the contrast during the experiments, the most relevant are the number and size of sources present in the sample and out-of-focus fluorescence. Given the nature of the emitters, fingerprints are added incoherently at the sensor, generating a speckle pattern whose contrast decreases with the number of sources as $1/\sqrt {N}$. In order to extend the number of sources from tens to hundreds, we will probably need to add more priors to the NMF algorithm to compensate for the drop in performance due to the contrast loss.
On top of that, out-of-focus fluorescence will further decrease this value by introducing a spurious signal that does not contain any information about the temporal activity of the sources. In in vivo experiments, the background also tends to evolve over time due to hemodynamics and brain movement in the specimen, which would introduce additional challenges for the NMF-based unmixing. Different approaches, such as motion correction, could be explored to minimize these effects. Last, using different contrast mechanisms, such as multiphoton fluorescence, could also help reduce out-of-focus signal, increasing the SBR to levels where successful unmixing can be achieved.
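The $1/\sqrt{N}$ contrast scaling mentioned above can be checked with a quick simulation of fully developed speckle, modeled here as the incoherent sum of squared-modulus circular Gaussian fields (a toy model, not the experimental data):

```python
import numpy as np

def speckle_contrast(n_sources, shape=(128, 128), seed=0):
    """Contrast (std/mean) of the incoherent sum of n_sources
    independent fully developed speckle patterns.

    Each pattern is |g|^2 with g a complex circular Gaussian field,
    so a single pattern has contrast ~1 and the sum scales as
    1/sqrt(n_sources)."""
    rng = np.random.default_rng(seed)
    total = np.zeros(shape)
    for _ in range(n_sources):
        field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
        total += np.abs(field) ** 2  # incoherent (intensity) addition
    return total.std() / total.mean()
```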

Funding

Kavli Foundation; National Institute of Neurological Disorders and Stroke (1RF1NS113251).

Acknowledgments

Research reported in this publication was supported by the National Institute of Neurological Disorders and Stroke of the National Institutes of Health under award number 1RF1NS113251 (A.V.) and the Kavli Foundation through the Kavli Neural System Institute (A.V.), in particular through a Kavli Neural Systems Institute postdoctoral fellowship (T.N.). We thank L. Bourdieu, W. Akermann, and F. Xia for providing biological samples.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in [31].

Supplemental document

See Supplement 1 for supporting content.

References

1. E. S. Boyden, F. Zhang, E. Bamberg, G. Nagel, and K. Deisseroth, “Millisecond-timescale, genetically targeted optical control of neural activity,” Nat. Neurosci. 8(9), 1263–1268 (2005). [CrossRef]  

2. K. Deisseroth, “Optogenetics,” Nat. Methods 8(1), 26–29 (2011). [CrossRef]  

3. F. Helmchen and A. Konnerth, Imaging in Neuroscience: A Laboratory Manual (Cold Spring Harbor Laboratory, 2011).

4. T.-W. Chen, T. J. Wardill, Y. Sun, S. R. Pulver, S. L. Renninger, A. Baohan, E. R. Schreiter, R. A. Kerr, M. B. Orger, V. Jayaraman, L. L. Looger, K. Svoboda, and D. S. Kim, “Ultrasensitive fluorescent proteins for imaging neuronal activity,” Nature 499(7458), 295–300 (2013). [CrossRef]  

5. F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2(12), 932–940 (2005). [CrossRef]  

6. S. Weisenburger and A. Vaziri, “A Guide to Emerging Technologies for Large-Scale and Whole-Brain Optical Imaging of Neuronal Activity,” Annu. Rev. Neurosci. 41(1), 431–452 (2018). [CrossRef]  

7. J. Demas, J. Manley, F. Tejera, K. Barber, H. Kim, F. M. Traub, B. Chen, and A. Vaziri, “High-speed, cortex-wide volumetric recording of neuroactivity at cellular resolution using light beads microscopy,” Nat. Methods 18(9), 1103–1111 (2021). [CrossRef]  

8. V. Iyer, T. M. Hoogland, and P. Saggau, “Fast Functional Imaging of Single Neurons Using Random-Access Multiphoton (RAMP) Microscopy,” J. Neurophysiol. 95(1), 535–545 (2006). [CrossRef]  

9. G. Katona, G. Szalay, P. Maák, A. Kaszás, M. Veress, D. Hillier, B. Chiovini, E. S. Vizi, B. Roska, and B. Rózsa, “Fast two-photon in vivo imaging with three-dimensional random-access scanning in large tissue volumes,” Nat. Methods 9(2), 201–208 (2012). [CrossRef]  

10. R. Prevedel, A. J. Verhoef, A. J. Pernía-Andrade, S. Weisenburger, B. S. Huang, T. Nöbauer, A. Fernández, J. E. Delcour, P. Golshani, A. Baltuska, and A. Vaziri, “Fast volumetric calcium imaging across multiple cortical layers using sculpted light,” Nat. Methods 13(12), 1021–1028 (2016). [CrossRef]  

11. V. Ntziachristos, “Going deeper than microscopy: the optical imaging frontier in biology,” Nat. Methods 7(8), 603–614 (2010). [CrossRef]  

12. S. Ohayon, A. Caravaca-Aguirre, R. Piestun, and J. J. DiCarlo, “Minimally invasive multimode optical fiber microendoscope for deep brain fluorescence imaging,” Biomed. Opt. Express 9(4), 1492 (2018). [CrossRef]  

13. S. A. Vasquez-Lopez, R. Turcotte, V. Koren, M. Plöschner, Z. Padamsey, M. J. Booth, T. Čižmár, and N. J. Emptage, “Subcellular spatial resolution achieved for deep-brain imaging in vivo using a minimally invasive multimode fiber,” Light: Sci. Appl. 7(1), 110 (2018). [CrossRef]  

14. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309 (2007). [CrossRef]  

15. M. Cui and C. Yang, “Implementation of a digital optical phase conjugation system and its application to study the robustness of turbidity suppression by phase conjugation,” Opt. Express 18(4), 3444–3455 (2010). [CrossRef]  

16. S. Rotter and S. Gigan, “Light fields in complex media: Mesoscopic scattering meets wave control,” Rev. Mod. Phys. 89(1), 015005 (2017). [CrossRef]  

17. M. Kim, W. Choi, Y. Choi, C. Yoon, and W. Choi, “Transmission matrix of a scattering medium and its applications in biophotonics,” Opt. Express 23(10), 12648 (2015). [CrossRef]  

18. R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9(9), 563–571 (2015). [CrossRef]  

19. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988). [CrossRef]  

20. C.-L. Hsieh, Y. Pu, R. Grange, G. Laporte, and D. Psaltis, “Imaging through turbid layers by scanning the phase conjugated second harmonic radiation from a nanoparticle,” Opt. Express 18(20), 20723 (2010). [CrossRef]  

21. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012). [CrossRef]  

22. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

23. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]  

24. A. Boniface, J. Dong, and S. Gigan, “Non-invasive focusing and imaging in scattering media with a fluorescence-based transmission matrix,” Nat. Commun. 11(1), 6154 (2020). [CrossRef]  

25. L. Zhu, F. Soldevila, C. Moretti, A. d’Arco, A. Boniface, X. Shao, H. B. de Aguiar, and S. Gigan, “Large field-of-view non-invasive imaging through scattering layers using fluctuating random illumination,” Nat. Commun. 13(1), 1447 (2022). [CrossRef]  

26. C. Moretti and S. Gigan, “Readout of fluorescence functional signals through highly scattering tissue,” Nat. Photonics 14(6), 361–364 (2020). [CrossRef]  

27. S. Bovetti, C. Moretti, S. Zucca, M. Dal Maschio, P. Bonifazi, and T. Fellin, “Simultaneous high-speed imaging and optogenetic inhibition in the intact mouse brain,” Sci. Rep. 7(1), 40041 (2017). [CrossRef]  

28. A. Giovannucci, J. Friedrich, P. Gunn, J. Kalfon, B. L. Brown, S. A. Koay, J. Taxidis, F. Najafi, J. L. Gauthier, P. Zhou, B. S. Khakh, D. W. Tank, D. B. Chklovskii, and E. A. Pnevmatikakis, “CaImAn an open source tool for scalable calcium imaging data analysis,” eLife 8, e38173 (2019). [CrossRef]  

29. “sklearn.decomposition.NMF,” https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html.

30. “skimage.restoration,” https://scikit-image.org/docs/stable/api/skimage.restoration.html.

31. F. Soldevila, “Functional imaging through scattering medium via fluorescence speckle demixing and localization,” GitHub (2023), https://github.com/labogigan/specklelocate.

32. A. Boniface, B. Blochet, J. Dong, and S. Gigan, “Noninvasive light focusing in scattering media using speckle variance optimization,” Optica 6(11), 1381 (2019). [CrossRef]  

33. Y. Okamoto, R. Horisaki, and J. Tanida, “Noninvasive three-dimensional imaging through scattering media by three-dimensional speckle correlation,” Opt. Lett. 44(10), 2526 (2019). [CrossRef]  

34. N. C. Pégard, H.-Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3(5), 517 (2016). [CrossRef]  

35. H.-Y. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, and L. Waller, “3D imaging in volumetric scattering media using phase-space measurements,” Opt. Express 23(11), 14461–14471 (2015). [CrossRef]  

Figures (5)

Fig. 1.
Fig. 1. Experimental setup and principle of recovering the temporal activity and location of fluorescent emitters through scattering media. a, A DMD is illuminated with a light source (blue laser) to excite a set of fluorescent beads with different spatio-temporal activations (orange dashed box). The signals are collected by a microscope objective and a tube lens (L), and add incoherently on the camera (sCMOS). A band-pass filter (BF) removes any residual excitation light. A second imaging system records the ground truth spatial and temporal information (for control and comparison purposes). To do so, a dichroic mirror (DM) and a lens (L) are placed below the sample. b, Detailed view of the sample excitation. The fluorescent beads are excited with the aid of the DMD, with temporal activities that match the dynamics of neuronal activity recordings. The fluorescent signal propagates through the scattering sample, generating a set of speckle patterns that vary over time.
Fig. 2.
Fig. 2. Retrieving the spatial position of different fluorescent emitters by studying the correlations between their speckle fingerprints. a, Deconvolution between fingerprints from nearby (first and second) or distant (second and third) emitters. In the first case, the two fingerprints are highly correlated but laterally shifted images, and the deconvolution yields a delta-like spike whose position relative to the center of the image gives the lateral shift between the two. In the second case, the two fingerprints come from emitters separated by a distance larger than the ME range, and thus they are not correlated. In this scenario, the deconvolution provides a low-amplitude noisy image with no useful information. b, For each emitter, deconvolution between its fingerprint and the fingerprints of all other emitters provides a location map of the emitters in its vicinity. c, Stitching all the partial location maps together provides the localization of all the emitters in the sample.
Fig. 3.
Fig. 3. Localization of fluorescent emitters through scattering media. From the captured video, an unmixing algorithm retrieves the individual speckle patterns generated by each source (a) and their temporal activities (b). c, Studying the correlations between the speckle patterns allows the location of each individual emitter to be retrieved.
Fig. 4.
Fig. 4. Retrieving emitter locations in the presence of background signal. a, Experimental configuration for the generation of out-of-focus fluorescent light. A thin layer of fluorescent paint is placed on the bottom side of the coverslip. As the excitation light passes through the sample, this layer generates fluorescence originating from an axial plane different from that of the sources. During the experiments, the DMD can be used to illuminate this paint layer continuously, generating a constant out-of-focus background that mimics the auto-fluorescence commonly found in biological samples. b, Image of the sample plane under wide-field illumination. c, Spatial mask generated on the DMD to excite both the sources (beads) and the background (paint layer). In this case, the paint layer is excited at three spots (marked magenta) in the center of the field of view. d, Retrieved localization of the sources after the unmixing procedure. e, Speckle pattern captured by the detector when exciting only the fluorescent paint layer at the three spots shown in (c). f, Speckle pattern generated on the detector when exciting a single bead. g, Single frame from the full recorded video, showing the low-contrast speckle pattern that results from exciting both the fluorescent paint layer and the emitters during the experiments.
Fig. 5.
Fig. 5. Localization and temporal activity retrieval through biological tissue. a, Ground truth source distribution. b, Bright-field image through a 200 µm-thick brain slice. c, Retrieved localization of the sources in the field of view. d, Comparison between the retrieved temporal traces and the ground truth excitations of the sources.

Equations (7)

$$I_k(x, y) = \sum_{s=1}^{N} w_s(x, y)\, h_s(t_k),$$

$$I = WH,$$

$$\min_{W, H > 0} \lVert I - WH \rVert_F^2.$$

$$w_i = w_j \ast \delta(x - x_0^{i,j},\, y - y_0^{i,j}),$$

$$\delta(x - x_0^{i,j},\, y - y_0^{i,j}) = D(w_i, w_j).$$

$$M_s = \sum_{i=1}^{N} D(w_s, w_i).$$

$$M = \sum_{i=1}^{N} M_i(x - x_0^{1,i},\, y - y_0^{1,i}).$$
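The shift-recovery step behind the deconvolution equations can be sketched numerically: deconvolving one fingerprint by a shifted copy of itself yields a delta-like peak whose position encodes the relative shift. The snippet below is a minimal sketch with synthetic data; the regularized Fourier-domain division is a Wiener-style stand-in for the deconvolution operator D, and the array size and shift values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two "fingerprints": w_j is a speckle-like random image, and w_i is the
# same image circularly shifted by (dy, dx) -- within the memory-effect
# range, fingerprints of nearby emitters are shifted copies of each other.
size, (dy, dx) = 128, (7, -12)
w_j = rng.random((size, size))
w_i = np.roll(w_j, shift=(dy, dx), axis=(0, 1))

# Deconvolution of w_i by w_j in Fourier space yields a delta whose
# position encodes the relative shift; a small epsilon regularizes the
# division, in the spirit of a Wiener filter.
Fi, Fj = np.fft.fft2(w_i), np.fft.fft2(w_j)
eps = 1e-3
delta = np.fft.ifft2(Fi * np.conj(Fj) / (np.abs(Fj) ** 2 + eps)).real

# The peak location gives (dy, dx), modulo the image size.
peak = np.unravel_index(np.argmax(delta), delta.shape)
shift = [int((p + size // 2) % size - size // 2) for p in peak]
print("recovered shift:", shift)
```

For uncorrelated fingerprints (emitters farther apart than the memory-effect range) the same operation produces only a low-amplitude noisy image, which is how the method distinguishes neighbors from distant emitters.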