Understanding the mechanisms of perception, cognition, and behavior requires instruments that are capable of recording and controlling the electrical activity of many neurons simultaneously and at high speeds. All-optical approaches are particularly promising since they are minimally invasive and potentially scalable to experiments interrogating thousands or millions of neurons. Conventional light-field microscopy provides a single-shot 3D fluorescence capture method with good light efficiency and high speed, but suffers from low spatial resolution and significant image degradation due to scattering in deep layers of brain tissue. Here, we propose a new compressive light-field microscopy method that addresses both problems, offering a path toward measurement of individual neuron activity across large volumes of tissue. The technique relies on the spatial and temporal sparsity of fluorescence signals, allowing one to identify and localize each neuron in a 3D volume, with scattering and aberration effects naturally included and without ever reconstructing a volume image. Experimental results on live zebrafish track the activity of an estimated 800+ neural structures at a 100 Hz sampling rate.
© 2016 Optical Society of America
Brain tissue is a dense network of neurons that exchange information by means of electrical signals called action potentials. Understanding the mechanisms by which the brain processes information requires the ability to detect action potentials from many individual neurons simultaneously across large volumes of tissue. Engineered calcium-sensitive proteins [1] and voltage-sensitive dyes [2] enable optical detection of action potentials without disturbing the neuron’s physiology. However, in deep layers of brain tissue, optical aberrations [3] and scattering are generally too strong to resolve individual neurons with conventional fluorescence microscopy, so more advanced methods are required. The most popular of these, two-photon microscopy [4], uses a nonlinear effect to restrict fluorescence excitation to a small spot or plane [5] which scans or hops through the volume point by point [6,7]. Light-sheet microscopy [8] achieves faster 3D acquisition by scanning in only one dimension, and confocal light-sheet microscopy [9] gives improved performance in strongly scattering tissue at the expense of photon efficiency and speed. Another variant implements light-sheet imaging with a single objective, giving practical benefits at the expense of spatial resolution [10]. All of these methods involve scanning, so frame rates are limited for large-volume imaging.
Light-field imaging [11–13] captures full-volume 3D information in a single shot. A light-field measurement includes both the position (x, y) and angle of incidence (u, v) of light rays reaching the sensor. In contrast, a traditional 2D sensor captures only the position of the rays. With 4D light-field information, it is possible to later adjust focus, change perspective, or retrieve 3D images in post-processing. Conveniently, any microscope can be converted into a light-field imager by making a simple and inexpensive hardware modification: a microlens array placed in front of the sensor. Traditional light-field imaging makes a ray-optics assumption that breaks down for microscopy, but this can be corrected by wave-optics models [14,15]. The main advantages of light-field microscopy for 3D imaging are its fast capture speed (limited only by the camera’s frame rate) and its photon efficiency, since all the photons that reach the image plane are captured. Unfortunately, these benefits come at the cost of a severe loss of spatial resolution, since the limited number of pixels on the sensor must be spread across four dimensions instead of two. Various attempts have been made to improve resolution through deconvolution [16] or additional measurements [17–19].
Light-field microscopy has already provided promising results for functional brain imaging [20], with 3D volume image reconstructions used to quantify the fluorescence levels of individual neurons. However, the number of pixels on the sensor limits the number of voxels that can be reconstructed with fidelity, and thus the number of neurons that can be monitored. Here, we incorporate sparsity-based algorithms that enable large volumes to be captured with high spatial resolution, provided that only a sparse set of neurons is active at once. We skip the step of explicitly reconstructing a 3D image and instead attempt to simply distinguish and localize each neural structure in 3D.
The prime advantage of our method over previous light-field microscopy work is that the data collection requirements scale not with the number of voxels to be reconstructed, but rather with the number of active neurons at a particular time. Hence, it may be possible in the future to use a conventional sensor for recording the activity of thousands or millions of neurons in real time. Since brain activity is not always sparse, we add a processing step that implements an independent component analysis on the raw video data to separate out temporally correlated neural activity. This results in spatially sparse components that satisfy our model, even for densely active neural experiments.
A major impediment for neural activity tracking is optical scattering. Digitally undoing the effects of multiple scattering in 3D is an ill-posed nonlinear problem that is difficult or impossible to solve [21]. Conventional light-field microscopy ignores scattering effects when reconstructing volume images, so deeper sources blur. Recently, we showed that phase-space (e.g., light-field) measurements can be robust to scattering when an appropriate wave-optical multislice forward model is used together with compressive methods for samples that are sparse in 3D [19]. Here, we extend these ideas to the problem of localizing neurons and quantifying 3D fluorescence in brain tissue.
We further exploit the fact that functional imaging need not reconstruct a visual rendering of the 3D shape of neurons; it only needs to distinguish and localize them. Our algorithm estimates the light-field signature of each neuron and maps its 3D location without ever producing a traditional image. In reality, because different structures of one neuron may be spatially distributed, our method may distinguish any active structures (e.g., neurons, axons, dendrites, astrocytes, glial cells, membranes, synaptic terminals). Aberrations and scattering effects are incorporated into the light-field signatures, and so do not degrade discrimination ability, though they may impact 3D localization accuracy. Video data can then be decomposed directly, skipping the error-prone 3D image reconstruction step and directly reaching the final goal: a quantitative measurement of fluorescence in each individual neural structure. The result is a task-based approach that is well-suited for 3D in vivo functional brain monitoring. We demonstrate our method experimentally for zebrafish neural activity tracking with 800+ neural structures at 100 fps.
A. Experimental Setup
The experimental setup (shown in Fig. 1) is a fluorescence microscope that has been modified by introducing a microlens array at the imaging plane, with the sensor placed at the back focal plane of the array. By the principles of light-field imaging, both the position (x, y) and direction of propagation (u, v) of light rays can be reconstructed from the 2D intensity captured by the sensor. This is because each microlens’ sub-image corresponds to the local pupil plane (angular distribution of rays). Capturing a 4D light field, L(x, y, u, v), using a 2D sensor necessitates a tradeoff between spatial and angular sampling. Our microlens array design choice is controlled by two parameters. The pitch (microlens diameter) controls spatial resolution in the (x, y) plane; here, we use a 150 μm pitch. The focal length of each microlens, f, controls the range of angles measured via the numerical aperture (NA), which is chosen to match that of the microscope output port for the case of a water immersion objective. Angular range and sampling determine the axial resolution of the reconstruction.
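The pitch/NA tradeoff above follows the usual paraxial lenslet relation NA ≈ d/(2f). A minimal sketch of this design calculation (the 150 μm pitch comes from the text; the image-side NA value is an assumed placeholder, since the actual value is not stated in this excerpt):

```python
# Paraxial design relation for a microlens-array light-field camera:
# each lenslet of pitch d and focal length f accepts a cone of
# half-angle ~ d/(2f), i.e. NA_lenslet ≈ d / (2 f).
# The 150 um pitch is from the setup description; the target NA is
# an assumed example value, not the paper's actual number.

def lenslet_focal_length(pitch_m: float, target_na: float) -> float:
    """Focal length needed so the lenslet NA matches the relayed pupil NA."""
    return pitch_m / (2.0 * target_na)

pitch = 150e-6          # 150 um microlens pitch (from the text)
na_image_side = 0.025   # assumed image-side NA at the microscope output port
f = lenslet_focal_length(pitch, na_image_side)
print(f"required lenslet focal length: {f * 1e3:.1f} mm")
```

Matching the lenslet NA to the relayed pupil NA fills each sub-image exactly with the delivered cone of rays, so neighboring sub-images neither overlap nor waste pixels.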
We call the 2D intensity measurement at the sensor plane a light-field measurement, since it maps to the 4D light-field information at each time t: I(x_s, y_s, t) ↔ L(x, y, u, v, t). The sampling of the 4D light field on a 2D plane of pixels is given by x = d⌊x_s/d⌋ and u = x_s − d⌊x_s/d⌋ (and similarly for y and v), where (x_s, y_s) are the lateral coordinates at the sensor, d is the pitch of the microlens array (square lattice), and ⌊·⌋ is the floor function. We use a relay to record the signal on the sensor with a square sub-image under each microlens. This achieves good angular (and hence, axial) sampling by sacrificing lateral resolution, which is improved in post-processing. The resulting field of view (at the sample) is a 200 μm square, recorded on an sCMOS sensor (Andor Zyla 4.2). Each acquired frame contains full-volume fluorescence data, with temporal resolution equal to the camera’s frame rate.
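The spatial/angular sampling just described amounts to a reshape of the raw sensor frame. A minimal sketch, assuming an idealized axis-aligned array with an integer number p of sensor pixels under each lenslet (the actual p and lenslet count are not stated in this excerpt):

```python
import numpy as np

# Reshape a raw 2D light-field sensor frame into its 4D (y, x, v, u) samples.
# Spatial index  (y, x) = floor(pixel / p)  -> which microlens the ray hit.
# Angular index  (v, u) = pixel mod p       -> position within the sub-image.
# Assumes an idealized, axis-aligned array with exactly p pixels per lenslet.

def sensor_to_lightfield(frame: np.ndarray, p: int) -> np.ndarray:
    """frame: (Ny*p, Nx*p) raw intensity -> (Ny, Nx, p, p) light field."""
    ny, nx = frame.shape[0] // p, frame.shape[1] // p
    return frame[: ny * p, : nx * p].reshape(ny, p, nx, p).transpose(0, 2, 1, 3)

# Tiny synthetic example: 2x2 lenslets, 3x3 pixels under each.
frame = np.arange(36).reshape(6, 6)
lf = sensor_to_lightfield(frame, 3)
print(lf.shape)          # (2, 2, 3, 3)
print(lf[1, 0, 0, 0])    # top-left pixel of the sub-image under lenslet (1, 0)
```

Refocusing and perspective shifts are then shear-and-sum operations over the last two (angular) axes of this 4D array.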
For functional brain activity monitoring, calcium ions entering the cell through specialized channels change the conformational state of the genetically encoded calcium indicator GCaMP6, so that the fluorescence of each neuron correlates with its action potential firing rate [1]. The timescale of calcium diffusion is faster than the response time of GCaMP6. The light-field measurement at a given time, t, can then be written as a linear superposition of the individual contributions of the neurons. We decompose the measurement into a set of independent spatial components that change over time:

I(x_s, y_s, t) = Σ_i q_i(t) L_i(x_s, y_s),

where L_i is the (time-independent) light-field signature of neuron i and q_i(t) ≥ 0 is its fluorescence level at time t.
Our goal is to build up a dictionary of light-field signatures for each neuron, so that subsequent data frames can be decomposed into their constituent neural signatures. Of course, it is not feasible to sequentially activate each neuron and directly measure the dictionary; hence, we must calibrate the system using only uncontrolled data. We do this using a training video of light-field measurements in which all of the neurons of interest activate at least once during the video’s capture time. This may be done either before the experiment of interest or as part of the actual experiment. From this video, we are able to extract the light-field signatures of individual neurons and their 3D positions, which together make up our dictionary. Since our extraction method requires spatial sparsity, an optional preprocessing step may be employed to exploit temporal correlations for generating sparse components from nonsparse video data.
B. Light-Field Signature Identification
Our training routine aims to extract the light-field signature for each neuron from video data of many neurons firing at random. We cannot assume that only one neuron is active in each frame, but we will assume, for now, that the active neurons in any one frame are sparse in 3D (spatial sparsity condition). We can then use a compressed sensing approach to separate and localize individual neural structures in 3D from scattered and aberrated light-field measurements having multiple neurons active at once.
Every point source traces out a 2D hyperplane in the 4D light-field space (see Fig. 2). The position where this plane crosses the axes defines the source’s lateral position and the tilt of the plane defines its depth. Scattering repeatedly spreads information along the angle dimensions as light propagates, causing deeper sources to both tilt and spread. Neurons have fairly compact cell bodies, so calcium fluorescence in the cytoplasm is mainly confined to a 5 μm region around the nucleus, which behaves similarly to a point source. Using this forward model and searching for a sparse solution, it is possible to robustly estimate the 3D position of each active neuron.
We use an accelerated proximal gradient algorithm to solve the following ℓ1-regularized optimization problem [19]:

minimize over g ≥ 0:  (1/2) ‖I − Ag‖_2^2 + μ ‖g‖_1,   (2)

where g contains the intensities of candidate sources on a 3D grid, A is the wave-optical forward model that maps each candidate source position to its light-field measurement [19], and μ controls the sparsity of the solution.
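A minimal, non-accelerated sketch of this kind of problem (the paper uses an accelerated proximal gradient method and a wave-optical multislice forward model; here a random matrix stands in for A, and all sizes and the value of μ are illustrative assumptions):

```python
import numpy as np

# Plain proximal-gradient (ISTA-style) loop for
#   minimize_{g >= 0}  0.5 * ||I - A g||^2 + mu * ||g||_1.
# A is a random stand-in for the wave-optical forward model mapping
# candidate 3D source positions to light-field measurements.

def nonneg_ista(A, I, mu, n_iter=1000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz const of grad
    g = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ g - I)
        g = np.maximum(g - step * (grad + mu), 0.0)   # non-negative soft-threshold
    return g

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 400))             # 200 "pixels", 400 candidate voxels
g_true = np.zeros(400)
g_true[[7, 123, 301]] = [1.0, 2.0, 0.5]         # three "active neurons"
I = A @ g_true                                  # noiseless measurement
g_hat = nonneg_ista(A, I, mu=0.1)
print(sorted(np.argsort(g_hat)[-3:].tolist()))  # largest recovered sources
```

With far fewer measurements than voxels, the ℓ1 penalty plus non-negativity is what makes the recovery well posed for sparse activity.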
We solve the optimization problem in Eq. (2) for g at each time frame and add the resulting light-field signatures and 3D positions to our dictionary. Each solution provides the sparse set of neural structures active in that frame, along with their 3D positions. Each light-field signature corresponding to neuron i is a normalized, time-independent quantity. Figure 2 shows an example sparse frame from our zebrafish experiments (Fig. 4), together with the extracted light-field signatures and 3D positions of the detected neurons. Neurons near the surface have fairly compact light-field signatures, whereas deeper neurons spread and scatter to larger areas of the sensor, as expected.
To demonstrate 3D detection and localization capabilities, we show experimental results for a simple test object with and without scattering. Our nonscattering sample is a static suspension of 1 μm fluorescent beads sparsely distributed in a 200 μm slice of agarose gel. As a proxy for ground-truth knowledge of the bead positions, we use two-photon microscopy to scan the imaging volume. We then record a single-shot light-field frame, shown in Fig. 3(a) along with several space-angle slices of the 4D light field. Each bead traces out a tilted line in the space-angle plot, as expected. After estimating the 3D bead positions using Eq. (2), we compare the results to our two-photon data [Figs. 3(d) and 3(e)]. Both methods detect the same set of beads, with a median difference in position of 1.3 μm in the (x, y) plane and 12.8 μm along the z axis. Assuming that the two-photon result is accurate, the error in our scheme is small enough to distinguish individual neurons and localize them.
Next, we test our method with scattering tissue by repeating the same experiment after placing a 100 μm slice of mouse brain tissue on top of the sample. This emulates conditions that would normally prevent good depth reconstructions. To give a sense of the amount of scattering, we show 2D intensity images in Fig. 3(c). The scattered image is degraded, yet structure remains in the 4D light-field measurement. Despite scattering, the median difference in detected positions between our algorithm and the two-photon data is 1.8 μm in the (x, y) plane and 15.5 μm along the z axis, only slightly worse than the nonscattering case. For comparison, we show that traditional light-field refocusing (see Visualization 1) and threshold-based detection (see Visualization 2) fail. Our method’s detection and localization capabilities, by contrast, are not significantly affected by the presence of optical scattering in this case.
C. Independent Component Analysis
The detection and localization of neurons described thus far requires spatial sparsity in each video frame. Very large dictionaries of light-field signatures can be mapped out by videos with many frames, but only a sparse set may be active in each frame. More sparsity also leads to better localization (see Supplement 1). Unfortunately, our raw data is not always spatially sparse, in particular when the brain experiences periods of intense activity such as responding to a strong stimulus.
To address this issue, we implement a preprocessing step on the video data: independent component analysis (ICA). The purpose of this optional first step is to take advantage of the temporal diversity of action potentials across many frames in order to computationally isolate time-correlated sets of neurons that can be spatially distinguished. With a large enough training dataset, ICA provides spatially sparse light-field components.
We represent this space–time separation as a factorization of the measured video into non-negative spatial and temporal components [Eqs. (4)–(8); the relation to a decomposition into singular values is discussed in Supplement 1]. Equation (8) seeks to reduce the dimensionality of the dataset by finding a locally optimal choice of non-negative factors whose product approximates the data, and is known as non-negative matrix factorization (NMF) [22]. NMF is NP-hard [23], but there exist polynomial-time local-search heuristics that are guaranteed to converge [22]. Traditional NMF is known to yield a sparse decomposition, yet it does not exploit some of the characteristic properties of functional brain imaging. Here, in order to facilitate further processing of the independent components, we slightly modify the standard NMF objective function: we add a sparsity constraint known as “lasso” regularization [24,25] on the temporal components and solve the resulting problem with alternating non-negativity-constrained least squares [26]. Since temporal correlations between neurons’ activity are very common, the independent component extraction does not guarantee identification of individual neurons, but rather of sparse spatial components (see Visualization 3).
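Sparsity-regularized NMF of this kind can be sketched with multiplicative updates, where an L1 penalty on the temporal factor simply adds a constant to the denominator of its update. The paper solves the problem with alternating non-negativity-constrained least squares [26]; this simpler update rule and all sizes below are illustrative assumptions:

```python
import numpy as np

# NMF with a lasso (L1) penalty on the temporal components, minimizing
#   ||V - W H||_F^2 + lam * sum(H)   with W, H >= 0,
# via Lee & Seung-style multiplicative updates (the L1 term appears as
# `lam` in the denominator of the H update).

def sparse_nmf(V, k, lam=0.01, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, k)), rng.random((k, n))
    eps = 1e-12                                   # guards against 0/0
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic "video": 2 spatial components with temporally sparse activity.
rng = np.random.default_rng(1)
W_true = rng.random((50, 2))                      # 2 spatial footprints
H_true = np.zeros((2, 40))                        # 40 time frames
H_true[0, 5] = H_true[1, 20] = 3.0                # each fires once
V = W_true @ H_true
W, H = sparse_nmf(V, k=2)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative residual
```

Because the penalty drives whole temporal coefficients to zero, each recovered component tends to be active in only a few frames, which is exactly the spatial-sparsity condition the localization step needs.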
To summarize, our method for building the dictionary of light-field signatures has two steps. First, the training video is acquired and decomposed into spatially sparse components with ICA. Second, each sparse component is decomposed into light-field signatures that represent the footprints of single neurons (Fig. 2). By acquiring enough data frames, it becomes possible to identify every active neuron in the volume of interest, as long as no two neurons are fully correlated in both space and time. Should the ICA step make the same neuron appear in multiple components, we identify and merge the signatures by evaluating the mutual distances between identified neurons and detecting overlaps within the typical size of one neuron (here we use a 4 μm mutual distance threshold). The method is naturally robust to optical scattering and aberrations, whose effects are included in the light-field signature. In fact, scattering may help to make each neuron’s signature more distinguishable (Fig. 2).
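The merge step can be sketched as a greedy pass over detections using the 4 μm mutual-distance threshold mentioned above (only the threshold comes from the text; the grouping strategy and the averaging of signatures are assumptions made for illustration):

```python
import numpy as np

# Merge duplicate detections: estimated 3D positions closer than the
# mutual-distance threshold (4 um in the text) are treated as the same
# neuron, and their signatures are averaged and re-normalized.
# Greedy grouping and mean-averaging are illustrative assumptions.

def merge_detections(positions, signatures, thresh=4.0):
    """(N,3) positions in um, (N,P) signatures -> merged arrays."""
    positions = np.asarray(positions, float)
    signatures = np.asarray(signatures, float)
    merged_pos, merged_sig = [], []
    used = np.zeros(len(positions), bool)
    for i in range(len(positions)):
        if used[i]:
            continue
        group = np.linalg.norm(positions - positions[i], axis=1) <= thresh
        group &= ~used
        used |= group
        merged_pos.append(positions[group].mean(axis=0))
        sig = signatures[group].mean(axis=0)
        merged_sig.append(sig / np.linalg.norm(sig))  # keep signatures normalized
    return np.array(merged_pos), np.array(merged_sig)

pos = [[0, 0, 0], [1, 1, 1], [30, 0, 5]]   # first two are the same neuron
sig = np.abs(np.random.default_rng(2).random((3, 8)))
mpos, msig = merge_detections(pos, sig)
print(len(mpos))   # 2 distinct neurons remain
```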
We build the dictionary from the set of all extracted light-field signatures, each of which also comes with an estimated 3D position of the neuron. The dictionary of signatures for the identified neurons can equivalently be written, as in Eq. (9), as a matrix whose columns are the individual light-field signatures. Although the 3D position accuracy degrades with depth and scattering (see Supplement 1), the ability to distinguish neurons remains intact deep into the scattering tissue.
D. Neural Activity Reconstruction from Experimental Data
After the completion of the training step, the dictionary of light-field signatures can be used to efficiently decompose any single-shot measurement acquired by the light-field microscope (including the training data) into a linear positive combination of elements of the dictionary (see Supplement 1). The number of active neurons in one frame should be smaller than the number of sensor pixels. Even though a raw data frame may not necessarily be sparse, nor represent a sparse set of active neurons, the amount of fluorescence in each neuron in each frame is the solution of a non-negative least-squares optimization problem [Eq. (12)]. Our experimental demonstration is shown in Fig. 4. A five-day-old Tg(NeuroD:GCaMP6f) zebrafish expressing GCaMP6 in the telencephalon is placed in the microscope. The generation of the transgenic zebrafish line will be the subject of a future publication [27]. The fish is live, awake, and immobilized in 2% low-melting-temperature agarose. In the calibration step, 40 independent components are extracted from 500 diverse frames. Each independent component is then separated into single-neuron signatures. The final dictionary contains a set of 802 light-field signatures, as well as an estimated 3D position for each corresponding calcium source [see Fig. 4(b)]. We then record 10 s of spontaneous brain activity at 100 fps. The solution of Eq. (12) provides a quantitative measurement of fluorescence for all neurons in the field of view for which a light-field signature has been identified. Figure 4(a) shows color-coded lines representing the normalized change of fluorescence, ΔF/F, as a function of time, and we show a video reconstruction of the 3D activity in Visualization 4.
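The per-frame decomposition is a standard non-negative least-squares fit of the frame onto the dictionary columns. A minimal sketch, with a synthetic random dictionary standing in for the calibrated light-field signatures:

```python
import numpy as np
from scipy.optimize import nnls

# Decompose one raw frame as a non-negative combination of dictionary
# columns (one normalized light-field signature per neuron).
# Dictionary and frame below are synthetic stand-ins.

rng = np.random.default_rng(3)
n_pixels, n_neurons = 500, 20
D = np.abs(rng.standard_normal((n_pixels, n_neurons)))  # signature dictionary
D /= np.linalg.norm(D, axis=0)                          # normalized signatures

q_true = np.zeros(n_neurons)
q_true[[2, 11]] = [1.5, 0.7]       # fluorescence of two active neurons
frame = D @ q_true                 # noiseless single-shot measurement

q_hat, resid = nnls(D, frame)      # min ||frame - D q||_2  s.t.  q >= 0
print(np.round(q_hat, 3))
```

Because the fit is per frame and the dictionary is fixed, this step is fast enough to keep up with the 100 fps acquisition described in the text.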
Scattering is minimal in zebrafish, but motion is a limiting problem. In Fig. 4, artifacts appear at three time points when the zebrafish attempted to move, making the results inaccurate for the duration of muscular activity (dark blue lines). Once the zebrafish returns to rest, the dictionary becomes valid again (the residual error drops; see Visualization 5). In this experiment, every neuron expresses GCaMP, and the resulting 802 detected active sources are most likely neurons, but may also be, for example, dendrites whose size is comparable to the spatial resolution. In future applications, this issue can be solved by limiting the expression of functional markers of neural activity to a localized volume, for instance near the nucleus [28], or by capturing a two-photon structural scan before the experiment to disambiguate. Future work will explore ways to correct for motion by periodically recalibrating the dictionary or by implementing motion-correction algorithms in light-field space. Further improvements may come from taking into account the specific temporal dynamics of calcium fluorescence [29] and accounting for inhomogeneous scattering.
A. Spatial Resolution
In traditional light-field microscopy, where volume image reconstruction is the goal, resolution can be quantified by measuring the size of the point-spread function or the spatial bandwidth. In brain tissue, resolution is further complicated by a dependence on scattering and on the density of neurons. Here, our method operates without ever reconstructing a volume image. In order to experimentally demonstrate our ability to resolve closely spaced neurons in scattering tissue, the experimental setup is slightly modified [see Fig. 5(a)]. We place a slice of mouse brain tissue (which is not fluorescently labeled) above an artificial “neuron”. The artificial neuron is implemented by a second objective focusing light (in the emission spectral range) to a spot the size of a typical neuron cell body. This source is controllably moved through 3D space in order to collect measurements through the tissue for each known source position. Two active neurons can be mimicked by adding the measurements from two positions of the source. These data are then input to our algorithm in order to compare and quantify localization performance. By removing the microlens array, we can also compare to conventional 2D fluorescence microscopy. The thickness of the slice is varied from 100 μm to 400 μm so as to mimic scattering from various depths.
We define the spatial resolution along a given axis (x, y, or z) as the minimum separation distance (δx, δy, or δz) between two neurons for identification as two separate sources. This provides an upper limit on the number of neurons, N, that can be simultaneously observed in a volume, V, of brain tissue [Eq. (14)]. In Figs. 5(c) and 5(d), we display the light-field measurements from two source positions (40 μm separation) simultaneously on a two-color scale, with one in green and the other in red. The two sources yield distinct light-field measurements, as expected, but also distinct 2D fluorescence images. Since our algorithm never reconstructs a 3D image, it can potentially be applied directly to 2D images without ever using the microlens array.
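Reading Eq. (14) as N = V/(δx·δy·δz), consistent with the tenfold-per-axis, 1000-fold-overall comparison made later in the text, a quick sanity check (all resolution numbers below are assumed for illustration, not measured values from the paper):

```python
# Eq. (14)-style bound: the resolvable neuron count in a volume V is
#   N = V / (dx * dy * dz),
# with dx, dy, dz the minimum separations along each axis.
# All numbers are illustrative assumptions.

def max_neurons(volume_um3, dx, dy, dz):
    return volume_um3 / (dx * dy * dz)

V = 200 * 200 * 200                      # a (200 um)^3 volume
n_2d = max_neurons(V, 100, 100, 400)     # assumed 2D-fluorescence resolution
n_lf = max_neurons(V, 10, 10, 40)        # tenfold better along each axis
print(n_lf / n_2d)                       # -> 1000.0
```

A tenfold resolution gain per axis compounds to a 1000-fold gain in resolvable neuron density, which is the scaling quoted in the comparison with mouse cortex below.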
To compare these two situations, consider two sources, 1 and 2, and the corresponding measurements, m1 and m2. We compute a metric for distinguishability, d, based on the normalized difference between the two measurements.
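The exact definition of the metric is the paper's; purely for illustration, one plausible choice is the norm of the difference between the two measurements after normalizing each (an assumed form, not the authors' formula):

```python
import numpy as np

# One plausible distinguishability metric between two measurements m1, m2:
# normalize each to unit energy, then take the norm of the difference.
# Identical measurements give exactly 0. This definition is an assumption
# made for illustration, not the paper's exact formula.

def distinguishability(m1: np.ndarray, m2: np.ndarray) -> float:
    m1 = m1 / np.linalg.norm(m1)
    m2 = m2 / np.linalg.norm(m2)
    return float(np.linalg.norm(m1 - m2))

a = np.ones(100)
b = np.ones(100)
b[0] += 1.0                            # a single differing pixel
print(distinguishability(a, a))        # identical measurements -> 0.0
print(distinguishability(a, b) > 0)    # even one pixel makes d nonzero
```

Any metric of this type captures the key property used in the text: two neurons are separable whenever their measurements differ by more than the noise, even by a single pixel.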
Light-field measurements provide better distinguishability than 2D fluorescence images, particularly in the axial dimension. Figures 5(d) and 5(e) plot the experimentally measured distinguishability as the separation distance between two sources is increased, for both the lateral and axial dimensions and with both light-field and 2D fluorescence data. To get a sense of how distinguishability relates to localization error in our algorithm, we recover 3D positions as the source is moved through 300 μm of scattering brain tissue, using the light-field data [see Fig. 5(f)].
In the absence of noise, accurate decomposition of a training dataset into light-field signatures is possible as soon as the light-field measurements associated with any two different neurons are not strictly identical; in theory, a single-pixel difference would be sufficient. In practice, at full frame rate and in low-light conditions, a conservative condition for identification [30] is to compare the distinguishability, d, to the signal-to-noise ratio (SNR) in the light-field measurements; results are shown in Figs. 6(a) and 6(b). Overall, the light-field data provide better localization resolution in all dimensions than 2D fluorescence images parsed by the same algorithm. Light-field deconvolution methods are expected to have performance somewhere in between these two.
To understand how our resolution metric relates to functional imaging capabilities, we use Eq. (14) to deduce the maximal density of neurons that can be resolved with light-field data versus 2D fluorescence data, and compare both to the density of neurons typically observed in layers I to IV of mouse primary somatosensory cortex [31] [see Fig. 6(c)]. This plot confirms experimental observations: 2D fluorescence microscopy is unable to identify neurons located below layer I in the barrel cortex. Light-field data, however, enable a 1000-fold improvement (tenfold along each axis) in the resolvable neuron density compared to 2D fluorescence, making this a promising avenue toward neural activity tracking in all layers.
We have demonstrated compressive light-field microscopy as a path toward directly addressing the needs of neuroscience for accurate, quantitative measurement of fluorescence activity in the living brain. Our method enables single-shot capture of volumetric brain activity with neuron-scale resolution. We exploit both spatial and temporal sparsity in order to distinguish and 3D localize individual neural structures. Because the light-field signatures are calibrated in situ, the strategy is robust to optical scattering and allows for real-time readout of brain activity without ever reconstructing a 3D image. Conveniently, it does not require careful alignment or calibration and can be implemented with inexpensive lenslet arrays. Since the data requirements scale with the number of active neurons in a single frame, not the number of voxels reconstructed, we believe that this method can scale to extremely large networks of neurons and be amenable to use with patterned stimulation, enabling functional activity mapping of the entire mouse brain cortex.
David and Lucille Packard Foundation; New York Stem Cell Foundation (NYSCF); Arnold and Mabel Beckman Foundation.
The authors thank Andrew Prendergast and Claire Wyart for sharing the Tg(NeuroD:GCaMP6f) zebrafish transgenic line, as well as Benjamin Recht, Ehud Isacoff, Claire Oldfield, Elizabeth Carroll, Alan Mardinly, Evan Lyall, Ian Oldenburg, and Eric Jonas. L. W. acknowledges a fellowship from the David and Lucille Packard Foundation. H. A. is a New York Stem Cell Foundation Robertson Investigator and acknowledges support from the Arnold and Mabel Beckman Foundation.
See Supplement 1 for supporting content.
1. T.-W. Chen, T. J. Wardill, Y. Sun, S. R. Pulver, S. L. Renninger, A. Baohan, E. R. Schreiter, R. A. Kerr, M. B. Orger, V. Jayaraman, L. L. Looger, K. Svoboda, and D. S. Kim, “Ultrasensitive fluorescent proteins for imaging neuronal activity,” Nature 499, 295–300 (2013).
2. C. Petersen, A. Grinvald, and B. Sakmann, “Spatiotemporal dynamics of sensory responses in layer 2/3 of rat barrel cortex measured in vivo by voltage-sensitive dye imaging combined with whole-cell voltage recordings and neuron reconstructions,” J. Neurosci. 23, 1298–1309 (2003).
3. A. Bègue, E. Papagiakoumou, B. Leshem, R. Conti, L. Enke, D. Oron, and V. Emiliani, “Two-photon excitation in scattering media by spatiotemporally shaped beams and their application in optogenetic stimulation,” Biomed. Opt. Express 4, 2869–2879 (2013).
4. W. Denk, J. Strickler, and W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990).
5. T. Schrödel, R. Prevedel, K. Aumayr, M. Zimmer, and A. Vaziri, “Brain-wide 3D imaging of neuronal activity in Caenorhabditis elegans with sculpted light,” Nat. Methods 10, 1013–1020 (2013).
6. G. Katona, G. Szalay, P. Maak, A. Kaszas, M. Veress, D. Hillier, B. Chiovini, E. S. Vizi, B. Roska, and B. Rozsa, “Fast two-photon in vivo imaging with three-dimensional random-access scanning in large tissue volumes,” Nat. Methods 9, 201–208 (2012).
7. S. J. Yang, W. E. Allen, I. Kauvar, A. S. Andalman, N. P. Young, C. K. Kim, J. H. Marshel, G. Wetzstein, and K. Deisseroth, “Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing,” Opt. Express 23, 32573–32581 (2015).
8. P. Keller, A. Schmidt, J. Wittbrodt, and E. Stelzer, “Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy,” Science 322, 1065–1069 (2008).
9. E. Baumgart and U. Kubitscheck, “Scanned light sheet microscopy with confocal slit detection,” Opt. Express 20, 21805–21814 (2012).
10. M. B. Bouchard, V. Voleti, C. S. Mendes, C. Lacefield, W. B. Grueber, R. S. Mann, R. M. Bruno, and E. M. Hillman, “Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms,” Nat. Photonics 9, 113–119 (2015).
11. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New York (ACM, 1996), pp. 31–42.
12. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” in ACM SIGGRAPH 2006 Papers, Boston, Massachusetts (ACM, 2006), pp. 924–934.
13. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25, 924–934 (2006).
14. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013).
15. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015).
16. N. Cohen, S. Yang, A. Andalman, M. Broxton, L. Grosenick, K. Deisseroth, M. Horowitz, and M. Levoy, “Enhancing the performance of the light field microscope using wavefront coding,” Opt. Express 22, 24817–24839 (2014).
17. C.-H. Lu, S. Muenzel, and J. Fleischer, “High-resolution light-field microscopy,” in Computational Optical Sensing and Imaging, OSA Technical Digest (online) (Optical Society of America, 2013), paper CTh3B.2.
18. L. Waller, G. Situ, and J. W. Fleischer, “Phase-space measurement and coherence synthesis of optical beams,” Nat. Photonics 6, 474–479 (2012).
19. H.-Y. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, and L. Waller, “3D imaging in volumetric scattering media using phase-space measurements,” Opt. Express 23, 14461–14471 (2015).
20. R. Prevedel, Y.-G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11, 727–730 (2014).
21. M. Lax, “Multiple scattering of waves,” Rev. Mod. Phys. 23, 287–310 (1951).
22. D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in Advances in Neural Information Processing Systems 13, T. Leen, T. Dietterich, and V. Tresp, eds. (MIT, 2001), pp. 556–562.
23. S. A. Vavasis, “On the complexity of nonnegative matrix factorization,” SIAM J. Optim. 20, 1364–1377 (2009).
24. R. Tibshirani, “Regression shrinkage and selection via the lasso,” J. R. Stat. Soc. Ser. B 58, 267–288 (1994).
25. A. Y. Ng, “Feature selection, L1 vs. L2 regularization, and rotational invariance,” in Twenty-first International Conference on Machine Learning, Banff, Alberta (ACM, 2004), p. 78.
26. H. Kim, “Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis,” Bioinformatics 23, 1495–1502 (2007).
27. C. S. Oldfield, A. R. Huth, M. Chavez, E. Carroll, A. Prendergast, T. Qu, A. Hoagland, C. Wyart, and E. Y. Isacoff, University of California, Berkeley, Berkeley, CA 94720 and CNRS-UMR-7225, 75005 Paris, France, are preparing a paper to be called “Experience shapes hunting behavior by increasing the impact of information transfer from visual to motor areas.”
28. C. K. Kim, A. Miri, L. C. Leung, A. Berndt, P. Mourrain, D. W. Tank, and R. D. Burdine, “Prolonged, brain-wide expression of nuclear-localized GCaMP3 for functional circuit mapping,” Front. Neural Circuits 8, 00138 (2014).
29. E. Pnevmatikakis, D. Soudry, Y. Gao, T. A. Machado, J. Merel, D. Pfau, T. Reardon, Y. Mu, C. Lacefield, W. Yang, M. Ahrens, R. Bruno, T. M. Jessell, D. Peterka, R. Yuste, and L. Paninski, “Simultaneous denoising, deconvolution, and demixing of calcium imaging data,” Neuron 89, 285–299 (2016).
30. M. E. Lopes, “Estimating unknown sparsity in compressed sensing,” arXiv:1204.4227 (2012).
31. H. Meyer, V. Wimmer, M. Oberlaender, C. De Kock, B. Sakmann, and M. Helmstaedter, “Number and laminar distribution of neurons in a thalamocortical projection column of rat vibrissal cortex,” Cereb. Cortex 20, 2277–2286 (2010).