Imaging increasingly large neuronal populations at high rates has pushed multi-photon microscopy into the photon-deprived regime. We present PySight, an add-on hardware and software solution tailored for photon-deprived imaging conditions. PySight more than triples the measured median amplitude of neuronal calcium transients in awake mice and facilitates single-trial intravital voltage imaging in fruit flies. Its unique data streaming architecture allowed us to image a fruit fly’s olfactory brain response at 73 volumes per second, while retaining over 200 times lower data rates than those of a conventional data acquisition system with comparable voxel sizes. PySight requires no electronics expertise or custom synchronization boards, and its open-source software is extensible to any imaging method based on single-pixel (bucket) detectors. PySight offers an optimal data acquisition scheme for the ever increasing imaging volumes of turbid living tissue.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Multi-photon laser scanning microscopy (MPLSM) provides a glimpse into the functioning mammalian brain with subcellular resolution [1–3]. Recent improvements in optical microscope design, laser sources, and fluorophores [4,5] have extended the use of MPLSM to challenging applications, such as imaging of very large neuronal populations [6–8] and fast volumetric imaging. These applications face a common inherent limitation: a given rate of photon detection is spread across a rapidly increasing number of voxels sampled per second. In the resulting photon-deprived regime, several photodetector-induced noise sources reduce the correlation between the total electrical charge acquired from the photodetector and the actual number of photons it has detected [10–13]. Photon counting arrives at a more accurate estimate of the number of detected photons following each laser pulse by thresholding electrical current fluctuations into uniform photon detection events [10–14]. This improvement is particularly useful in neuronal calcium and voltage imaging, where a small improvement in imaging conditions has a large impact on spike detectability [15,16]. Once photon detection events are discretized, their absolute arrival time can be registered, rather than the number of photons detected in each time bin. This data acquisition modality, known as “time-stamping” or “time-tagging”, is agnostic to the number of voxels sampled per second, which continues to grow in modern MPLSM imaging techniques. In light of the inherent sparsity of neuronal dynamics and the effort to identify fast transients correctly in time, conventional acquisition of mostly empty voxels is sub-optimal compared with time-stamping acquisition.
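The thresholding step described above can be sketched in a few lines of Python. This is an illustrative toy, with a hypothetical digitized trace, threshold, and sampling interval, not the hardware discriminator used in practice:

```python
import numpy as np

def threshold_photons(trace, dt_ns, threshold):
    """Discretize a digitized PMT trace (mV) into uniform photon detection
    events by thresholding, keeping one event per contiguous
    supra-threshold excursion (illustrative values throughout)."""
    above = trace < threshold  # PMT pulses are negative-going
    # an event starts where the trace first crosses the threshold
    starts = np.flatnonzero(above & ~np.roll(above, 1))
    return starts * dt_ns      # photon arrival times, in ns

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 10_000)   # baseline current noise, mV
trace[[1200, 1201, 5400]] -= 40.0      # three injected "photon" samples
times = threshold_photons(trace, dt_ns=0.1, threshold=-20.0)
print(times)  # two events: the adjacent supra-threshold samples merge
```

Thresholding collapses the wide pulse-height distribution of the PMT into uniform events, which is exactly what decouples the count from multiplicative gain noise.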
Time-correlated single-photon counting (TCSPC) modules have previously been incorporated into MPLSM, offering fluorescence lifetime and phosphorescence lifetime imaging [19–23]. While some TCSPC modules are capable of a time-stamping mode of acquisition, they are mostly ill-suited for tracking neuronal activity across a volume of living brain tissue, due to their additive dead time following photon detection, their limited sustained count rate, and their insufficient memory depth (see Table S2 in Supplement 1). Alternative methods for rapid fluorescence lifetime estimation are incapable of time-stamping, and their applicability to photon-deprived imaging conditions through turbid media, such as living brain, has not been demonstrated [24–28]. A few groups have devised different photon-counting-apparatus-based approaches, such as home-built or customized discriminators, complex programmable logic devices, field programmable gate arrays, and oscilloscopes [8,10,11,14,29–37]. For example, Vučinić and Sejnowski developed an elegant MPLSM that uses a digital oscilloscope for photon counting, volumetric beam steering, and fluorescence lifetime imaging, at a fraction of the cost of a conventional MPLSM. However, their freely available application framework, neurospy, is limited by the memory depth of currently available oscilloscopes (e.g., for a 10 s long, single-channel recording at the detection probability required for incoming photodetector pulses). Other solutions have obtained a sustainable acquisition duration and better imaging conditions than those reached with analog integration [8,10,29,35,36], with a temporal resolution as good as 2.2 ns [36,37]. Nevertheless, none of these setups support a time-stamping acquisition mode, which would have eased their steep demands for bandwidth and storage space.
Finally, some photodetector manufacturers now offer bundled photon counting modules [38,39], where the preamplifier, the discriminator [38,39], and even the subsequent event counter are integrated with the photodetector. While these products are easy to incorporate into an existing MPLSM, their current performance is sub-optimal in terms of pulse-pair resolution (as long as 20 ns) or sustained count rate (as low as 2 MHz).
We introduce here PySight, an add-on solution that seamlessly embeds time-stamped photon counting into most existing multi-photon imaging systems. It combines commercial, off-the-shelf hardware with open-source software, tailored for rapid planar and volumetric imaging. PySight uniquely time stamps each photon detection event with 100 ps accuracy, resulting in a modest data throughput, while exceeding the spatio-temporal resolution of existing volumetric imaging setups [9,33,40,41].
2. METHODS AND MATERIALS
A. Animal Preparations
All imaging experiments and surgical procedures were approved by the Tel Aviv University ethics committee for animal use and welfare and followed pertinent Institutional Animal Care and Use Committee (IACUC) and local guidelines. A detailed description of all animal preparations and experimental protocols is provided in Section 6 of Supplement 1.
B. Data Acquisition
1. Imaging Systems
Data for the planar and volumetric calcium imaging experiments was acquired with a movable objective two-photon microscope (MOM) by Sutter Instrument Company, modified to house an 8 kHz resonant-galvanometric scanning unit. Additional modifications for fast volumetric imaging are described in the following sections. The laser source used was an 80 MHz 140 fs Ti:sapphire laser (Chameleon Ultra II, Coherent, Inc.) tuned to 940 nm. Data for the planar voltage imaging was acquired with Sutter’s DF-Scope at 910 nm. Complete details can be found in Section 6 of Supplement 1.
2. Planar Data Acquisition
The collected light was directed at a fast GaAsP photomultiplier tube (PMT, H10770PA-40SEL, Hamamatsu Photonics K.K.) through two dichroic mirrors (BrightLine FF735-Di01-25x36, Semrock and 565dcxr, Chroma Technology Corporation) and a bandpass filter (525/70-2P, Chroma Technology Corporation). For PySight acquisition, the output cable of the PMT was connected directly to a high-bandwidth preamplifier (TA1000B-100-50, Fast ComTec GmbH). The amplified signal was then conveyed to a fast analog input of an ultrafast multiscaler (MCS6A-2T8, Fast ComTec GmbH). Another analog input channel of the multiscaler was dedicated for attenuated transistor–transistor logic (TTL) pulses from the scanning software (ScanImage, Vidrio Technologies, LLC.), which were configured to be logically high when the galvanometric mirror scanned through the field of view (FOV).
For analog acquisition, the output cable of the PMT was connected directly to a high-speed current amplifier (DHCPA-100, FEMTO Messtechnik GmbH). The preamplifier was DC-coupled and set to a bandwidth of 80 MHz on its high gain setting, and the output’s full bandwidth was used. The amplified signal was then conveyed to a National Instruments FlexRIO (PXIe-1073) digitizer with the NI 5734 adapter module set to a sampling frequency of 120 MHz.
During planar calcium imaging, the gain of the PMT was adjusted to produce the highest available signal-to-noise ratio (SNR) for its respective acquisition scheme (analog or PySight), calculated in real time. Immediately afterwards, the acquisition scheme was altered by rerouting the PMT’s output. Once connected, the same FOV was re-imaged for the same period of time using the same parameters. The only difference between the two schemes was the PMT’s gain, which was set to a higher value when imaging with the multiscaler, owing to its built-in discriminators. These discriminators allow the experimenter to filter out much of the multiplicative PMT noise while retaining high photosensitivity.
During planar voltage imaging, the line signal from the resonant mirrors of the DF-Scope (Sutter Instrument Company) was connected to an analog input of the multiscaler. This does not affect the performance of the imaging system in any way. The output cable of the PMT was first routed either to a fast amplifier (TA1000B-100-50, Fast ComTec) connected to the multiscaler (for PySight-based acquisition) or to a slower preamplifier (C7319, Hamamatsu) connected to a National Instruments FlexRIO digitizer. After 5 to 10 imaging sessions, each 30 s long, conducted using a given acquisition scheme, the output cable of the PMT was rerouted to the other data acquisition scheme, and the experiment was repeated with the same overall parameters, as elaborated in Section 6 of Supplement 1. The peak count rate of the multiscaler is 10 GHz (see Table S2), as is its nominal sampling rate, which is derived from an internal 10 MHz oscillator. An optional frequency divider with an on-board comparator (PRL-260BNT-220, Pulse Research Lab) was used to convert the readings from the internal photodiode of the Ti:sapphire laser (Chameleon Ultra II, Coherent, Inc.) into a 10 MHz clock signal, which was fed into the multiscaler’s 10 MHz reference clock input [see Fig. 1(a)]. Such synchronization between the laser and the multiscaler is necessary for fluorescence lifetime imaging and other applications detailed in the discussion section below, but not for ordinary two-photon imaging, be it planar or volumetric. A frequency divider with an on-board comparator was chosen because the internal photodiode of the Chameleon Ultra II laser outputs a double-humped, wavelength-dependent signal. Other femtosecond lasers, such as the Mira900 (Coherent, Inc.) and the orange HP10 (Menlo Systems GmbH), output a better-behaved synchronization signal that might be compatible with standard TTL frequency dividers.
If the photodiode has to be thresholded, and prime timing performances are paramount, consider using a dedicated fast timing discriminator with a lower temporal jitter (e.g., TD2000, Fast ComTec GmbH).
3. Volumetric Data Acquisition
For volumetric imaging, a tunable acoustic gradient index of refraction lens (TAG Lens 2.5, TAG Optics, Inc.) was inserted into the beam path upstream of the resonant-galvo system, as elaborated in Section 6 of Supplement 1. The TAG lens driver (DrvKit 3.3, TAG Optics Inc.) was configured to output 44 ns long TTL synchronization pulses once per focal oscillation. These were attenuated through two 20 dB RF attenuators (27-9300-20, Cinch Connectivity Solutions) before being conveyed to a fast analog input of the ultrafast multiscaler.
C. Data Analysis
1. Planar Intravital Calcium Imaging
Data acquired through the MCS6A multiscaler was parsed by PySight (Code 1) into HDF5 files, which were then converted to standard tagged image file (TIF) format. Data acquired through ScanImage was saved online in TIF format. TIF files from both acquisition modalities were analyzed with the CaImAn framework, bundling methods for motion correction, source extraction (utilizing a constrained non-negative matrix factorization approach), and denoising. The analysis of both acquisition types (Fig. 2) was done using identical parameters. To compare the mean values, a two-sample, two-sided t-test with Welch’s correction was used. A step-by-step demonstration of the analysis pipeline, from a list of photon arrival times to neuronal calcium traces, is presented as a Jupyter notebook in Code 2.
2. Planar Intravital Voltage Imaging
Data acquired using both the multiscaler and MScan was converted to a TIF format and processed using identical custom Python scripts available upon request. The analysis first required the user to mark the region of interest (ROI) containing the anatomical structure in question. The output consisted of the mean fluorescence trace inside the ROI across all 5 to 10 repetitions of each fly, as well as the individual fluorescence traces per repetition per animal.
3. Volumetric Intravital Calcium Imaging
Each trial was parsed by PySight (Code 1) into a four-dimensional (4D) volume. Its intensity profile along the axial dimension was normalized by the axial intensity profile acquired in a dilute fluorescein solution, as elaborated above. A cuboid volumetric ROI was manually selected for each olfactory structure, and the normalized brightness of all of its voxels was evenly summed. The median brightness in the first 4.7 s (346 volumes) of each trial was considered to be the baseline brightness. Fluorescence variations were then calculated by subtracting the baseline brightness from the instantaneous brightness and dividing the result by the baseline brightness.
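The baseline-and-divide computation above amounts to a two-line routine. The sketch below assumes a hypothetical summed-ROI brightness trace and reuses the 346-volume baseline window from the text:

```python
import numpy as np

def dff(roi_trace, n_baseline):
    """Fluorescence variation (dF/F): the median brightness of the first
    n_baseline volumes serves as the baseline, which is subtracted
    from and then divides the instantaneous brightness."""
    f0 = np.median(roi_trace[:n_baseline])
    return (roi_trace - f0) / f0

trace = np.full(1000, 10.0)   # hypothetical summed-ROI brightness per volume
trace[400:450] = 30.0         # a transient tripling the brightness
print(dff(trace, 346).max())  # -> 2.0, i.e., a peak dF/F of 200%
```

Using the median rather than the mean of the baseline window makes the estimate robust to occasional bright volumes during the pre-stimulus period.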
The size of the list files for the twelve trials ranged between 154–157 and 162–163 megabytes for isoamyl acetate and 2-Pentanone trials, respectively. The mean data throughput was calculated by dividing the size of the largest list file (163 megabytes) by the total acquisition length (33.54 s), neglecting the list file header length.
Similarly, for volumetric imaging of the antennal lobes, each trial was parsed by PySight into a 4D volume. Its intensity profile along the axial dimension was normalized by the axial intensity profile acquired in a dilute fluorescein solution, as elaborated above. The median brightness in the first 5 s (335 volumes) of each trial was considered to be the baseline brightness. Fluorescence variations were then calculated by subtracting the baseline brightness from the instantaneous brightness and dividing the result by the baseline brightness. An ellipsoid volumetric ROI was manually selected for two glomeruli (A and B), and the normalized brightness of all of their voxels was evenly summed. Conversely, to identify glomeruli C, which were more responsive to isoamyl acetate than to 2-Pentanone, we first binned the 4D volume from each of the 12 trials to reduce the computational load. We then summed the intensity-corrected brightness along the 5.5 s following the first odor puff onset, and applied a three-dimensional (3D) Gaussian filter to each individual trial. Voxels within the antennal lobes were considered to have a preferential response to isoamyl acetate if their minimal brightness across the isoamyl acetate odor puff trials was higher than their maximal brightness across the 2-Pentanone odor puff trials. The binary mask of odor-preferential voxels was applied to the trial-summed (unsmoothed, full-sized) 4D volume. Its time-collapsed 3D volume was rendered in AMIRA 6.4 (Thermo Fisher Scientific), yielding Fig. 3(d), and its respective temporal fluorescence variation traces [Figs. 3(e)–3(f)] were calculated using Python scripts available from the code repository and the documentation therein.
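The voxel-selection criterion for glomeruli C can be sketched as follows. The trial arrays, volume shape, and smoothing width below are synthetic and illustrative, not the values used in the experiment:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def odor_preferential_mask(trials_a, trials_b, sigma):
    """Voxels responding preferentially to odor A: the minimal smoothed
    brightness across all odor-A trials must exceed the maximal smoothed
    brightness across all odor-B trials (trials: n_trials x Z x Y x X)."""
    sm_a = np.stack([gaussian_filter(t, sigma) for t in trials_a])
    sm_b = np.stack([gaussian_filter(t, sigma) for t in trials_b])
    return sm_a.min(axis=0) > sm_b.max(axis=0)

rng = np.random.default_rng(1)
a = rng.random((6, 8, 16, 16))   # six synthetic odor-A trials
b = rng.random((6, 8, 16, 16))   # six synthetic odor-B trials
a[:, 2:4, 4:8, 4:8] += 5.0       # a region driven only by odor A
mask = odor_preferential_mask(a, b, sigma=1.5)
print(mask.sum())                # only the odor-A-driven neighborhood survives
```

The min-versus-max comparison is deliberately conservative: a voxel qualifies only if its weakest odor-A response still beats its strongest odor-B response, which suppresses voxels that respond strongly but inconsistently.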
A sequential montage of antennal lobe slices in different axial planes was prepared by binning the trial-summed 4D volume and then displaying every sixth slice along the axial dimension, resulting in 6.6 μm thick axial slices evenly spaced 39.6 μm apart. Color intensity was normalized for each color mask separately, thereby artificially highlighting the glomeruli of interest.
D. PySight Software Suite

PySight is an open-source software package written in Python, capable of parsing photon lists into multi-dimensional renderable volumes. Usually PySight is given a recording from the MCS6A multiscaler in conjunction with several parameters of the imaging system, such as its scanning frequency. PySight bins the precise timing of photon detection events into a multi-dimensional histogram (Fig. S1, Supplement 1), which can be visualized using either native Python tools or designated rendering software, such as ImageJ. PySight can be run using a cross-platform graphical user interface (Fig. S2, Supplement 1), as well as through simple Python commands, on single recordings or on multiple recordings simultaneously. PySight can also parse time-stamped data files generated by other photon-counting hardware (see Section 5 in Supplement 1), as long as their format follows the guidelines listed in the package’s documentation.
PySight can be installed via the Python pip application: pip install pysight. The full source code can be found at https://github.com/PBLab/python-pysight, published under the Creative Commons Attribution License. Code 2 further demonstrates how to use PySight, and Section 2 of Supplement 1 discusses its underlying algorithm and demonstrates its user interface.
3. RESULTS

A. System Architecture
The anatomy of conventional multi-photon systems involves, among others, a pulsed laser source, beam steering elements and their auxiliary optics, a collection arm with one or more PMTs, pre-amplifiers, and an analog-to-digital acquisition board. Figure 1(a) depicts such a system with the PySight photon-counting add-on. Electrical pulses following photon detections in each PMT are optionally amplified with a high-bandwidth preamplifier (TA1000B-100-50, Fast ComTec). The amplified pulses are then conveyed to an ultrafast multiscaler (MCS6A, Fast ComTec), where a software-controlled discriminator threshold determines the voltage amplitude that will be accepted as an event. The arrival time of each event is registered at a temporal resolution of 100 ps with no dead time between events. This basic setup, along with the PySight software package (Code 1), is sufficient for multi-dimensional imaging.
Converting the detected photon arrival times into a multi-dimensional time series is a matter of interpolating the corresponding instantaneous position of the laser beam focal point within the sample. PySight computes the difference between photon arrival times and the respective synchronization signals from the laser beam steering elements (Section 2 of Supplement 1). Using a few key inputs from the user, such as the scanning mirror’s frequency, it then builds a multi-dimensional histogram and populates each voxel with the respective number of photons that were collected when the laser beam focused on it. The histogram can either be rendered and viewed directly or be processed further by registering it to the moving brain’s frame of reference and computing quantitative metrics about its content (neuronal activity, blood flow, etc.) [4,5]. As rendering takes place off-line, experimental monitoring is done by routing one of the multiscaler outputs [SYNC, Fig. 1(a)] to the analog-to-digital card of the existing system. The output of this channel is similar to that of a high-end discriminator, which already reduces PMT-dependent noise. Detailed instructions on system setup and use are provided in Section 1 of Supplement 1, while the full outline of our optical system is found in Fig. S6.
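The core of this interpolation step can be illustrated with a minimal sketch. This is not PySight's implementation; the line period, pixel count, and timestamps are hypothetical, and only the planar (line-sync) case is shown:

```python
import numpy as np

def bin_photons(photon_times, line_sync_times, pixels_per_line, line_period):
    """Histogram time-stamped photons into an image: each photon's pixel
    is its offset from the most recent line-sync event, scaled to the
    line period (all times in microseconds)."""
    lines = np.searchsorted(line_sync_times, photon_times, side="right") - 1
    offsets = photon_times - line_sync_times[lines]
    pixels = (offsets / line_period * pixels_per_line).astype(np.intp)
    img = np.zeros((len(line_sync_times), pixels_per_line), dtype=np.uint16)
    # discard photons arriving before the first sync or during flyback
    valid = (lines >= 0) & (offsets >= 0) & (offsets < line_period)
    np.add.at(img, (lines[valid], pixels[valid]), 1)  # unbuffered accumulate
    return img

syncs = np.arange(4) * 63.0             # hypothetical 63 us line period
photons = np.array([10.0, 10.2, 70.0])  # three time-stamped photons
img = bin_photons(photons, syncs, 512, 63.0)
print(img.sum(axis=1))                  # -> [2 1 0 0]
```

Because the photons are binned against the recorded sync events rather than an assumed scan rate, the same logic tolerates drift between the scanner and the acquisition clock.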
Figure 1(b) shows summed time lapses of the same FOV at different times using PySight and conventional analog data acquisition. The summed time lapses and frames were normalized, in accordance with prior work, by subtracting the offset of the analog image and comparing its mean to that of PySight. This linear normalization highlights the fact that the background level is higher in the analog image than in the PySight-generated image, whereas the peak brightness values (during increased neuronal activity, as reflected by increased calcium signal intensity; see Fig. 2) are lower than those in the PySight-generated counterpart. Figure 1(c) shows single frames acquired with a dwell time of 44 ns, corresponding to 3 or 4 laser pulses per pixel, whereas Fig. 1(d) shows their respective pixel intensity distributions. The photon flux regime encountered here (average count rates ranging between 0.007 and 0.2 photons per pixel) is well within the range where photon counting outperforms analog integration [11,12]. Evidently, less than 3% of the pixels contained any photon, and only a handful of pixels out of a million were found to have four photons. Analog integration dithers these five discrete values of photon counts per pixel into some subset of its 16 bit sampling range, but this dithering does not signify a larger dynamic range. On the contrary, it only increases the probability that noisy current fluctuations in the absence of a photon detection event will amount to pixel values similar to those of a rarely occurring genuine single-photon detection event, thereby corrupting the resolvable contrast, as observed in Fig. 1(b). Actual photon detection events are also smeared in the analog integration mode, which fails to distinguish the broad peak height distribution of a single photon and its after-pulsing artifacts from the extremely rare genuine instances in which 2–4 photons per pixel were detected.
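These occupancy figures are consistent with Poisson photon statistics. For instance, at a mean rate of 0.03 photons per pixel, fewer than 3% of pixels are expected to contain any photon at all, and pixels with four photons are vanishingly rare over a megapixel frame:

```python
import numpy as np

# expected occupancy of a photon-counting frame under Poisson statistics
rate = 0.03                          # mean photons per pixel (illustrative)
p_nonzero = 1.0 - np.exp(-rate)      # P(at least one photon in a pixel)
print(round(p_nonzero, 4))           # 0.0296, i.e., fewer than 3% of pixels

rng = np.random.default_rng(0)
frame = rng.poisson(rate, size=(1024, 1024))   # one simulated megapixel frame
print((frame > 0).mean(), frame.max())         # occupancy; rare multi-photon pixels
```

The handful of 3-to-4-photon pixels in such a simulation mirrors the measured distribution in Fig. 1(d), underscoring why a 16 bit analog range adds no usable dynamic range here.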
By correctly nullifying empty pixels and discretizing photon detection events regardless of their excessive peak height, PySight generates cleaner images [Figs. 1(b) and 1(c)]. It further rejects periodic ripple noise artifacts (see Fig. S4 in Supplement 1) characteristic of some sensitive photodetectors and timing pre-amplifiers , which is particularly beneficial during rapid imaging under photon-deprived conditions.
B. PySight Improves Calcium Imaging in Awake Mice
We used PySight to image neurons expressing GCaMP6f under the Thy-1 promoter in awake mice within a normal photon flux regime, and compared its performance to analog integration within the same FOV, under the same imaging conditions, and during the same imaging session. We analyzed both analog and PySight-generated movies (two mice, four FOVs per acquisition type) using CaImAn, a calcium analysis framework. PySight’s noise suppression allowed us to use about five times higher PMT gains (control voltage of 850 mV versus 650 mV in analog imaging), which gave rise to improved calcium imaging and analysis under standard imaging conditions (Fig. 2). Following peak detection filtering, which resulted in a mean firing rate of about 0.2 Hz for both acquisition types, we found that calcium imaging with PySight produces considerably higher calcium transients when comparing spike-like events from the entire FOV [Fig. 2(e)] (analog: median ΔF/F of 16% for 311 cells; PySight: median ΔF/F of 57% for 324 cells; Mann–Whitney test). Accordingly, the mean calcium transient reached a ΔF/F of 247.1% with PySight versus 26.1% with analog imaging (Welch’s two-sided t-test, same cell numbers). The improved imaging conditions with PySight were retained when using the same high PMT gain for both imaging modalities (see the addendum of Code 2).
Having direct access to time-stamped photon counts allowed us to estimate the relationship between ΔF/F and the number of detected photons (Section 3 of Supplement 1): on average, a ΔF/F of 100% corresponds to 5.28 photons per second. Additionally, it reduced the mean data throughput by a factor of 7.5–11.5 compared to the same number of 16 bit pixels during analog acquisition.
C. PySight Enables Rapid Intravital Volumetric Imaging
Next, we utilized the exquisite temporal precision (100 ps) of PySight’s hardware for ultrafast volumetric imaging. We implemented the fastest continuous axial scanning scheme available today, based on an ultrasonic variofocal lens (TAG lens), in a setup and fashion (Fig. S6 in Supplement 1) similar to Kong et al. Figure 3 demonstrates volumetric calcium imaging of olfactory brain areas in live Drosophila using a TAG lens. The TAG lens modulates the effective focal depth of the excited volume sinusoidally at a rate of 189 kHz, with no synchronization to the scanning mirrors that steer the beam laterally. Despite the asynchronous scanning, PySight successfully resolves the volumetric origin of each collected photon (see Fig. 3 and Section 4 of Supplement 1) based on synchronization TTL pulses delivered by the TAG lens driver and the planar scanning software. This simple solution obviates the phase-locked loops and photodiode-based synchronization apparatus devised in earlier studies for analogous data acquisition systems [9,48].
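The axial assignment reduces to evaluating the lens's sinusoidal focal trajectory at each photon's arrival time, phased to the most recent TAG sync pulse. The sketch below uses the 189 kHz rate from the text but is otherwise illustrative (unit amplitude, hypothetical timestamps), not PySight's internals:

```python
import numpy as np

def axial_position(photon_times, tag_sync_times, f_tag=189e3, amplitude=1.0):
    """Relative focal depth of each photon: evaluate the sinusoidal
    focal oscillation at the photon's arrival time, phased to the
    most recent TAG sync pulse (all times in seconds)."""
    idx = np.searchsorted(tag_sync_times, photon_times, side="right") - 1
    phase = 2 * np.pi * f_tag * (photon_times - tag_sync_times[idx])
    return amplitude * np.sin(phase)

period = 1 / 189e3                      # ~5.29 us per focal oscillation
syncs = np.arange(10) * period          # one TTL pulse per oscillation
photons = syncs[2] + period * np.array([0.0, 0.25, 0.5])
print(axial_position(photons, syncs))   # ~[0, 1, 0]: a quarter period is the focal peak
```

Because each photon carries a 100 ps timestamp, its phase within the 5.29 µs focal cycle, and hence its axial bin, is known to a small fraction of a percent of the cycle, which is what makes the asynchronous axial scan resolvable.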
We imaged the antennal lobes, mushroom bodies, and lateral horns of a GCaMP6f-expressing Drosophila, while the fly was repeatedly exposed to two different odors: 2-Pentanone and isoamyl acetate. The imaged volume was large enough to cover all six non-coplanar olfactory regions simultaneously (see Fig. 3 and Section 5 of Supplement 1). At an imaging rate of 73.4 volumes per second, the resulting mean data throughput of 5.1 MB/s was 221 times smaller than the data throughput incurred by 8 bit analog imaging at the same voxelization. While the lateral horn and antennal lobe responded similarly to the 2-Pentanone puffs with prolonged onset responses, their responses consistently diverged for isoamyl acetate puffs [see Figs. 3(b) and 3(c) and Section 4 of Supplement 1].
Zooming in on the antennal lobes, distinct glomeruli were identified based on their morphology, response dynamics, and odor preference [see Figs. 3(d)–3(f)]. Glomerulus A exhibits a graded, weakly adapting response to 2-Pentanone, contrasted by a weak response to isoamyl acetate, whereas glomerulus B exhibits a sustained response to isoamyl acetate that peaks well after the odor puff has ceased. While glomeruli A and B were identified according to their morphological structure, glomeruli C were identified according to their preferential response to isoamyl acetate, which was consistently stronger than their response to 2-Pentanone. Glomeruli C were identified in the anterior and medial edges of both antennal lobes [labeled in green in Figs. 3(d)–3(f)]. All these response dynamics are absent from an empty volumetric ROI selected right on top of the right antennal lobe (see Section 4 of Supplement 1). Hence, PySight is capable of resolving distinct laminar dynamics simultaneously sampled by the variofocal lens, rather than merely extending the effective depth of field using Bessel beams, as demonstrated earlier.
D. Planar Intravital Voltage Imaging
To demonstrate the portability and ease of use of our add-on hardware, we performed voltage imaging in a different laboratory across campus using a commercial multi-photon system (DF-Scope, Sutter Instrument Company). Single-trial detection of genetically encoded voltage indicator (GEVI) responses is an exceptionally challenging application due to their small fractional changes, fast photobleaching, and low replacement rate in the cell membrane. Under two-photon imaging, detection of single spikes has required the use of photon counting, either through direct measurement of fluorescence changes or using fluorescence lifetime imaging. Recent advancements in the development of GEVIs have allowed for recording responses to visual stimuli in the fly brain, albeit requiring trial averaging. Here, PySight resolved single-trial odor responses in vivo in neurites expressing ASAP2f in the fly antennal lobe (Fig. 4).
4. DISCUSSION

The neuroscience community is steadily striding towards high-throughput, rapid volumetric imaging of multiple brain regions [3–5]. To achieve these feats, data acquisition hardware has to maintain single-photon sensitivity for tracking dim features, while still handling bursts of photons emitted from sparse bright features over lengthy imaging sessions. Analog data acquisition systems lump together the electrical charge sampled per voxel, thereby degrading SNR and imposing a trade-off between spatio-temporal resolution and data throughput. Conversely, the data throughput of the time-stamping photon counter used here scales only with the number of detected photons, alleviating the need to sacrifice resolution in order to converge on realistic write speeds or storage space.
We have shown here how a suite of commercial off-the-shelf hardware and our open-source software dramatically boosts neuronal calcium imaging in awake mice and voltage imaging in the fly brain. PySight more than tripled the measured median amplitude of neuronal calcium transients in calcium imaging of cortical layers 2–3 of awake mice (Fig. 2), owing in part to the thresholding step that eliminates most of the variance in PMT current fluctuations in the presence and absence of detected photons [10–12]. Its added value is expected to grow with imaging depth and speed [17,51], especially given the advent of multi-dimensional imaging systems measuring multi-spectral excitability [20,52] and hyper-spectral emission [53,54], as well as fluorescence/phosphorescence lifetime [11,14,19–28,33]. Moreover, PySight facilitated rapid continuous volumetric imaging of the fly’s olfactory system with unprecedented spatio-temporal resolution, dissecting the odor response dynamics of individual glomeruli in 4D [Figs. 3(d)–3(f)].
When tracking rapid dynamics using ultrafast lasers with repetition rates on the order of 100 MHz or less and voxel dwell times of tens of nanoseconds at most, no more than a few photon counts per voxel are observable in each individual frame [see Fig. 1(d) and Fig. S3 in Supplement 1]. Accordingly, we observed a mean count rate of 0.03 photons per pixel in individual frames [see Fig. 1(d)], and even the brightest pixel of the brightest neuronal cell body exhibited a peak count rate of 0.35 photons per pixel, corresponding to 0.1 photons per pulse (see Fig. S3 in Supplement 1). Photon counting is clearly advantageous over analog integration in this regime, as found in previous studies [10,11,29,36]. The precise cross-over point beyond which analog integration is preferable over photon counting depends on the particular parameters of each imaging system [11,12], but usually resides well beyond an average count rate of 0.5 photons per laser pulse, and can be extended to higher photon fluxes using statistical inference methods [11,55–58].
We chose not to assess the improvement in imaging conditions using common measures, such as SNR or image-wide mean-squared error, for several reasons. First, commonly used photodetectors suffer from a long-tailed distribution of noise spikes, which is underestimated by the nominal SNR. Moreover, due to the low density of dynamic features in individual frames [less than 3% non-zero pixels in Fig. 1(d)], a naive computation of the mean-squared error across the entire image would be poorly correlated with their actual detectability. Finally, these two metrics are suitable for characterizing an imaging system, whereas we were more interested in measuring how photon counting improves neuronal calcium transient contrast. We therefore opted for the calculation of ΔF/F, which is a well-established metric for estimating intracellular calcium concentration transients and is positively correlated with the sensitivity index of the metric [15,16].
Users may also synchronize the photon counter with their laser pulses through its reference clock input (see Section 2.B), enabling several applications, such as fluorescence lifetime imaging [11,14,19,20,22–28,33], image restoration, temporal demultiplexing of interleaved beamlets [33,40], and post-hoc gated noise reduction. Moreover, as PySight provides direct access to the photon count in each voxel, it allows implementing photon misdetection correction algorithms that can further increase the dynamic range of the rendered images [11,55–58]. Finally, although PySight has been built around specific hardware, the open-source code can handle any list of photon arrival times through a well-documented application interface (see Section 2 of Supplement 1). A non-exhaustive table of compatible hardware is provided in Section 5 of Supplement 1. As PySight is extensible to any imaging method based on single-pixel (bucket) detectors, including fast compressive bioimaging [59,60], its versatility allows multiple experimenters to use the same software independently of their experimental setups. Together with its superior noise suppression and high spatio-temporal precision, PySight offers an optimal data acquisition scheme for the ever increasing imaging volumes of turbid living tissue.
H2020 European Research Council (ERC) (639416, 676844); United States–Israel Binational Science Foundation (BSF) (2014509); Israel Science Foundation (ISF) (1019/15, 1994/15); Fondation Leducq (15CVD02).
L. G. and P. B. thank Jonathan Driscoll for helpful discussions throughout the hardware selection process. The authors thank the reviewers of the manuscript for their constructive and insightful comments.
H. H., L. G., O. C., and P. B. designed PySight. H. H. and L. G. wrote and implemented PySight. H. H., L. G., D. K., and S. I. performed experiments. M. P. designed and provided the experimental setup and all materials for the fly experiments. H. H., L. G., and P. B. prepared the manuscript.
Please see Supplement 1 for supporting content.
1. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990). [CrossRef]
2. W. R. Zipfel, R. M. Williams, and W. W. Webb, “Nonlinear magic: multiphoton microscopy in the biosciences,” Nat. Biotechnol. 21, 1369–1377 (2003). [CrossRef]
3. W. Yang and R. Yuste, “In vivo imaging of neural activity,” Nat. Methods 14, 349–359 (2017). [CrossRef]
4. A. Urban, L. Golgher, C. Brunner, A. Gdalyahu, H. Har-Gil, D. Kain, G. Montaldo, L. Sironi, and P. Blinder, “Understanding the neurovascular unit at multiple scales: advantages and limitations of multi-photon and functional ultrasound imaging,” Adv. Drug Delivery Rev. 119, 73–100 (2017). [CrossRef]
5. R. Blau, M. Neeman, and R. Satchi-Fainaro, “Emerging nanomedical solutions for angiogenesis regulation,” Adv. Drug Delivery Rev. 119, 1–2 (2017). [CrossRef]
6. P. S. Tsai, C. Mateo, J. J. Field, C. B. Schaffer, M. E. Anderson, and D. Kleinfeld, “Ultra-large field-of-view two-photon microscopy,” Opt. Express 23, 13833–13847 (2015). [CrossRef]
7. N. J. Sofroniew, D. Flickinger, J. King, and K. Svoboda, “A large field of view two-photon mesoscope with subcellular resolution for in vivo imaging,” eLife 5, e14472 (2016). [CrossRef]
8. J. N. Stirman, I. T. Smith, M. W. Kudenov, and S. L. Smith, “Wide field-of-view, multi-region, two-photon imaging of neuronal activity in the mammalian brain,” Nat. Biotechnol. 34, 857–862 (2016). [CrossRef]
9. L. Kong, J. Tang, J. P. Little, Y. Yu, T. Lämmermann, C. P. Lin, R. N. Germain, and M. Cui, “Continuous volumetric imaging via an optical phase-locked ultrasound lens,” Nat. Methods 12, 759–762 (2015). [CrossRef]
10. S. Moon and D. Y. Kim, “Analog single-photon counter for high-speed scanning microscopy,” Opt. Express 16, 13990–14003 (2008). [CrossRef]
11. J. D. Driscoll, A. Y. Shih, S. Iyengar, J. J. Field, G. A. White, J. A. Squier, G. Cauwenberghs, and D. Kleinfeld, “Photon counting, censor corrections, and lifetime imaging for improved detection in two-photon microscopy,” J. Neurophysiol. 105, 3106–3113 (2011). [CrossRef]
12. R. D. Muir, D. J. Kissick, and G. J. Simpson, “Statistical connection of binomial photon counting and photon averaging in high dynamic range beam-scanning microscopy,” Opt. Express 20, 10406–10415 (2012). [CrossRef]
13. W. Becker, The bh TCSPC Handbook, 6th ed. (Becker & Hickl, 2014).
14. D. Vučinić and T. J. Sejnowski, “A compact multiphoton 3D imaging system for recording fast neuronal activity,” PLoS One 2, e699 (2007). [CrossRef]
15. B. A. Wilt, J. E. Fitzgerald, and M. J. Schnitzer, “Photon shot noise limits on optical detection of neuronal spikes and estimation of spike timing,” Biophys. J. 104, 51–62 (2013). [CrossRef]
16. E. J. O. Hamel, B. F. Grewe, J. G. Parker, and M. J. Schnitzer, “Cellular level brain imaging in behaving mammals: an engineering approach,” Neuron 86, 140–159 (2015). [CrossRef]
17. A. Kazemipour, O. Novak, D. Flickinger, J. S. Marvin, J. King, P. Borden, S. Druckmann, K. Svoboda, L. L. Looger, and K. Podgorski, “Kilohertz frame-rate two-photon tomography,” bioRxiv 357269 (2018).
18. G. Buzsáki and K. Mizuseki, “The log-dynamic brain: how skewed distributions affect network operations,” Nat. Rev. Neurosci. 15, 264–278 (2014). [CrossRef]
19. W. Becker, A. Bergmann, M. A. Hink, K. König, K. Benndorf, and C. Biskup, “Fluorescence lifetime imaging by time-correlated single-photon counting,” Microsc. Res. Tech. 63, 58–66 (2004). [CrossRef]
20. W. Becker, Advanced Time-Correlated Single Photon Counting Applications (Springer, 2015), Vol. 111.
21. D. Brinks, A. J. Klein, and A. E. Cohen, “Two-photon lifetime imaging of voltage indicating proteins as a probe of absolute membrane voltage,” Biophys. J. 109, 914–921 (2015). [CrossRef]
22. D. B. Papkovsky and R. I. Dmitriev, “Imaging of oxygen and hypoxia in cell and tissue samples,” Cell Mol. Life Sci. 75, 2963–2980 (2018). [CrossRef]
23. W. Becker, S. Frere, and I. Slutsky, “Recording Ca++ transients in neurons by TCSPC FLIM,” in Advanced Optical Methods for Brain Imaging (Springer, 2019), pp. 103–110.
24. X. Y. Dow, S. Z. Sullivan, R. D. Muir, and G. J. Simpson, “Video-rate two-photon excited fluorescence lifetime imaging system with interleaved digitization,” Opt. Lett. 40, 3296–3299 (2015). [CrossRef]
25. M. G. Giacomelli, Y. Sheikine, H. Vardeh, J. L. Connolly, and J. G. Fujimoto, “Rapid imaging of surgical breast excisions using direct temporal sampling two photon fluorescent lifetime imaging,” Biomed. Opt. Express 6, 4317–4325 (2015). [CrossRef]
26. M. Eibl, S. Karpf, D. Weng, H. Hakert, T. Pfeiffer, J. P. Kolb, and R. Huber, “Single pulse two photon fluorescence lifetime imaging (SP-FLIM) with MHz pixel rate,” Biomed. Opt. Express 8, 3132–3142 (2017). [CrossRef]
27. S. Karpf and B. Jalali, “High speed two-photon lifetime imaging,” arXiv:1709.00512 (2017).
28. J. Ryu, U. Kang, J. Kim, H. Kim, J. H. Kang, H. Kim, D. K. Sohn, J.-H. Jeong, H. Yoo, and B. Gweon, “Real-time visualization of two-photon fluorescence lifetime imaging microscopy using a wavelength-tunable femtosecond pulsed laser,” Biomed. Opt. Express 9, 3449–3463 (2018). [CrossRef]
29. W. Amir, R. Carriles, E. E. Hoover, T. A. Planchon, C. G. Durfee, and J. A. Squier, “Simultaneous imaging of multiple focal planes using a two-photon scanning microscope,” Opt. Lett. 32, 1731–1733 (2007). [CrossRef]
30. R. Carriles, K. E. Sheetz, E. E. Hoover, J. A. Squier, and V. Barzda, “Simultaneous multifocal, multiphoton, photon counting microscopy,” Opt. Express 16, 10364–10371 (2008). [CrossRef]
31. K. E. Sheetz, E. E. Hoover, R. Carriles, D. Kleinfeld, and J. A. Squier, “Advancing multifocal nonlinear microscopy: development and application of a novel multibeam Yb:KGd(WO4)2 oscillator,” Opt. Express 16, 17574–17584 (2008). [CrossRef]
32. E. Chandler, E. Hoover, J. Field, K. Sheetz, W. Amir, R. Carriles, S.-Y. Ding, and J. Squier, “High-resolution mosaic imaging with multifocal, multiphoton photon-counting microscopy,” Appl. Opt. 48, 2067–2077 (2009). [CrossRef]
33. E. E. Hoover, J. J. Field, D. G. Winters, M. D. Young, E. V. Chandler, J. C. Speirs, J. T. Lapenna, S. M. Kim, S.-Y. Ding, R. A. Bartels, J. W. Wang, and J. A. Squier, “Eliminating the scattering ambiguity in multifocal, multimodal, multiphoton imaging systems,” J. Biophoton. 5, 425–436 (2012). [CrossRef]
34. E. J. Botcherby, C. W. Smith, M. M. Kohl, D. Débarre, M. J. Booth, R. Juškaitis, O. Paulsen, and T. Wilson, “Aberration-free three-dimensional multiphoton imaging of neuronal activity at kHz rates,” Proc. Natl. Acad. Sci. USA 109, 2919–2924 (2012). [CrossRef]
35. M. Samim, “Nonlinear polarimetric microscopy for biomedical imaging,” Ph.D. thesis (University of Toronto, 2015).
36. X. Wu, L. Toro, E. Stefani, and Y. Wu, “Ultrafast photon counting applied to resonant scanning STED microscopy,” J. Microsc. 257, 31–38 (2015). [CrossRef]
37. Y. Wu, X. Wu, L. Toro, and E. Stefani, “Resonant-scanning dual-color STED microscopy with ultrafast photon counting: a concise guide,” Methods 88, 48–56 (2015). [CrossRef]
38. Hamamatsu, “Photon counting head PMT modules,” https://www.hamamatsu.com/jp/en/product/optical-sensors/pmt/pmt-module/photon-counting-head/index.html.
39. PerkinElmer, “Gigahertz photon detection module,” 2009, http://www.perkinelmer.com/pdfs/downloads/dts_gigahertzphotoncountingmodule.pdf.
40. A. Cheng, J. T. Gonçalves, P. Golshani, K. Arisaka, and C. Portera-Cailliau, “Simultaneous two-photon calcium imaging at different depths with spatiotemporal multiplexing,” Nat. Methods 8, 139–142 (2011). [CrossRef]
41. R. Prevedel, A. J. Verhoef, A. J. Pernía-Andrade, S. Weisenburger, B. S. Huang, T. Nöbauer, A. Fernández, J. E. Delcour, P. Golshani, A. Baltuska, and A. Vaziri, “Fast volumetric calcium imaging across multiple cortical layers using sculpted light,” Nat. Methods 13, 1021–1028 (2016). [CrossRef]
42. FAST ComTec, MCS6A Multiscaler (2017).
43. H. Har-Gil, “PySight,” 2017, https://github.com/pblab/python-pysight.
44. E. Pnevmatikakis, D. Soudry, Y. Gao, T. A. Machado, J. Merel, D. Pfau, T. Reardon, Y. Mu, C. Lacefield, W. Yang, M. Ahrens, R. Bruno, T. M. Jessell, D. Peterka, R. Yuste, and L. Paninski, “Simultaneous denoising, deconvolution, and demixing of calcium imaging data,” Neuron 89, 285–299 (2016). [CrossRef]
45. J. Schindelin, C. T. Rueden, M. C. Hiner, and K. W. Eliceiri, “The ImageJ ecosystem: an open platform for biomedical image analysis,” Mol. Reprod. Dev. 82, 518–529 (2015). [CrossRef]
46. H. Har-Gil, “PySight documentation,” 2018, https://python-pysight.readthedocs.io/en/latest/.
47. H. Har-Gil, “PySight demonstration,” 2018, https://github.com/pblab/python-pysight/blob/master/examples/pysightdemo.ipynb.
48. S. Piazza, P. Bianchini, C. Sheppard, A. Diaspro, and M. Duocastella, “Enhanced volumetric imaging in 2-photon microscopy via acoustic lens beam shaping,” J. Biophoton. 11, e201700050 (2017). [CrossRef]
49. R. Lu, W. Sun, Y. Liang, A. Kerlin, J. Bierfeld, J. D. Seelig, D. E. Wilson, B. Scholl, B. Mohar, M. Tanimoto, M. Koyama, D. Fitzpatrick, M. B. Orger, and N. Ji, “Video-rate volumetric functional imaging of the brain at synaptic resolution,” Nat. Neurosci. 20, 620–628 (2017). [CrossRef]
50. H. H. Yang, F. St-Pierre, X. Sun, X. Ding, M. Z. Lin, and T. R. Clandinin, “Subcellular imaging of voltage and calcium signals reveals neural processing in vivo,” Cell 166, 245–257 (2016). [CrossRef]
51. A. C. Geiger, J. A. Newman, S. Sreehari, S. Z. Sullivan, C. A. Bouman, and G. J. Simpson, “Sparse sampling image reconstruction in Lissajous trajectory beam-scanning multiphoton microscopy,” Proc. SPIE 10076, 1007606 (2017). [CrossRef]
52. M. Eibl, S. Karpf, H. Hakert, T. Blömker, J. P. Kolb, C. Jirauschek, and R. Huber, “Pulse-to-pulse wavelength switching of a nanosecond fiber laser by four-wave mixing seeded stimulated Raman amplification,” Opt. Lett. 42, 4406–4409 (2017). [CrossRef]
53. A. J. Radosevich, M. B. Bouchard, S. A. Burgess, B. R. Chen, and E. M. Hillman, “Hyperspectral in vivo two-photon microscopy of intrinsic contrast,” Opt. Lett. 33, 2164–2166 (2008). [CrossRef]
54. A. J. Bares, C. B. Schaffer, and S. Tilley, “Hyperspectral multiphoton microscope for biomedical applications,” U.S. patent application US20180196246A1 (July 12, 2018).
55. D. J. Kissick, R. D. Muir, and G. J. Simpson, “Statistical treatment of photon/electron counting: extending the linear dynamic range from the dark count rate to saturation,” Anal. Chem. 82, 10129–10134 (2010). [CrossRef]
56. S. Isbaner, N. Karedla, D. Ruhlandt, S. C. Stein, A. Chizhik, I. Gregor, and J. Enderlein, “Dead-time correction of fluorescence lifetime measurements and fluorescence lifetime imaging,” Opt. Express 24, 9429–9445 (2016). [CrossRef]
57. B. Simsek and S. Iyengar, “On the distribution of photon counts with censoring in two-photon laser scanning microscopy,” J. Math. Imaging Vis. 58, 47–56 (2017). [CrossRef]
58. M. Patting, P. Reisch, M. Sackrow, R. Dowler, M. Koenig, and M. Wahl, “Fluorescence decay data analysis correcting for detector pulse pile-up at very high count rates,” Opt. Eng. 57, 031305 (2018). [CrossRef]
59. Z. Li, J. Suo, X. Hu, C. Deng, J. Fan, and Q. Dai, “Efficient single-pixel multispectral imaging via non-mechanical spatio-spectral modulation,” Sci. Rep. 7, 41435 (2017). [CrossRef]
60. Q. Guo, H. Chen, Y. Wang, M. Chen, S. Yang, and S. Xie, “High-speed real-time image compression based on all-optical discrete cosine transformation,” Proc. SPIE 10076, 100760E (2017). [CrossRef]