Optica Publishing Group

Decoding cortical brain states from widefield calcium imaging data using visibility graph

Open Access

Abstract

Widefield optical imaging of neuronal populations over large portions of the cerebral cortex in awake behaving animals provides a unique opportunity for investigating the relationship between brain function and behavior. In this paper, we demonstrate that the temporal characteristics of calcium dynamics obtained through widefield imaging can be utilized to infer the corresponding behavior. Cortical activity in transgenic calcium reporter mice (n=6) expressing GCaMP6f in neocortical pyramidal neurons is recorded during active whisking (AW) and no whisking (NW). To extract features related to the temporal characteristics of calcium recordings, a method based on the visibility graph (VG) is introduced. An extensive study considering different choices of features and classifiers is conducted to find the best model capable of predicting AW and NW from calcium recordings. Our experimental results show that the temporal characteristics of calcium recordings identified by the proposed method carry discriminatory information that is powerful enough for decoding behavior.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

One of the major goals in neuroscience is to understand the relationship between brain function and behavior [1–10]. Towards this goal, imaging techniques capable of recording large numbers of spatially distributed neurons, with high temporal resolution, are critical for understanding how neuronal populations contribute to changes in brain states and behavior. Widefield fluorescence imaging of genetically encoded calcium indicators (GECIs) is one such technique [11]. Newly developed GECIs such as GCaMP6 have improved sensitivity and brightness [12,13] that, when expressed in transgenic reporter mice, enable imaging of neuronal activity of genetically defined neuronal populations over large portions of the cerebral cortex [12,14–17]. Although widefield imaging lacks the micrometer-scale spatial resolution of non-linear optical methods such as two-photon laser-scanning microscopy [18,19], the use of epifluorescence optical imaging allows for easier implementation, higher temporal resolution, and much larger fields of view [20,21]. Two-photon calcium imaging can be used to track individual neurons over time as animals learn [22,23], but it is difficult to study neurons in spatially segregated cortical areas. Furthermore, long-term widefield imaging can be performed through either cranial windows or a minimally invasive intact skull preparation in living subjects over multiple weeks [9,24]. These developments in widefield imaging have opened new possibilities for studying large-scale dynamics of brain activity in relation to behavior [9,10], for example, during locomotion and active whisker movements in mice [6,14,25–29].

Inferring behavior, intent, or the engagement of a particular cognitive process from neuroimaging data finds applications in several domains, including brain machine interfaces (BMIs) [30–32]. Depending on the type of physiological activity that is monitored, various computational techniques have been suggested to infer or decode the intent or the cognitive state of the subject from recorded brain activities. Methods based on functional specificity [33,34], brain connectivity patterns [2,3,35], and power spectral density [36], to name a few, have been suggested. However, the estimation power of such methods has been limited to distinguishing very distinct classes of motor activities or cognitive processes [37]. As such, the community has been searching for alternative methods to improve the power of inference.

Given the time-varying nature of brain function, in this work, we focus on the time domain information. We hypothesize that there exist “characteristics” in the time course of cortical activities that are specific to the corresponding behavior. The key challenge is to develop methods that can reliably identify such discriminatory characteristics in cortical recordings. To test the hypothesis, we use transgenic calcium reporter mice expressing GCaMP6f specifically in neocortical pyramidal neurons to image neural activity in nearly the entire left hemisphere and medial portions of the right hemisphere in head-fixed mice, including sensory and motor areas of the neocortex. For behavior, we focus on active whisking (AW) and no whisking (NW). Quiet wakefulness, in the absence of locomotion or whisking, is associated with low frequency synchronized cortical activity, while locomotion and whisking are associated with higher frequency desynchronized activity in primary sensory areas of the cortex [6,38–41]. Recent studies indicate that active, arousal-related behaviors such as locomotion and whisking are associated with widespread modulation of cortical activation [14,29]. Therefore, prior evidence exists for differences in the time courses of activities related to changes in behavioral states.

To identify features in calcium imaging data that would be unique to behavior (here AW or NW), we propose to use the visibility graph (VG) [42]. As will be discussed, VG provides a means to “quantify” various properties of a given time series, enabling a path to extract temporal features that are unique to the characteristics of the time series. We construct the VG representation of the recordings for each region of interest (ROI), extract the graph measures, and build features based on the graph measures for all ROIs. We conduct an extensive study to identify the best model capable of inferring AW and NW for each subject from cortical recordings. Fig. 1 provides a summary of the procedure.


Fig. 1 Summary of the proposed analysis procedure.


The novelty of our work is the introduction of the visibility graph for extracting features related to the temporal characteristics of recorded calcium time series. It is shown that the temporal features of calcium recordings extracted through VG carry discriminatory information for inferring the corresponding behavior. While in this study we consider cortical signals from the entire left hemisphere and medial part of the right hemisphere, and focus on whisking condition, given the data-driven nature of the proposed approach, we expect that it would also be applicable to recorded activity from other areas of the brain, such as the thalamus and deep layers of motor cortex, for inferring other forms of behavior or cognitive states.

2. Materials and methods

Before discussing details of data collection and the analysis procedure, we provide clarification about some terminology used throughout the paper. Note that in this study we use the terms “decode” and “infer” interchangeably.

The imaged area here refers to the optically accessible cortical area. The imaged area in this study covers the entire left hemisphere, and medial part of the right hemisphere of the cortex.

Behavior in this study is related to whisking condition. Two classes of behavior, active whisking (AW) and no whisking (NW), are considered here. We use the terms “brain state” and “behavior” interchangeably.

Features are measures extracted from cortical recordings. To examine how well the proposed features from recorded calcium transients can discriminate the two classes of AW and NW, classification experiments are performed. In these experiments, a classifier refers to the algorithm that is used to perform classification.

A predictive model refers to a trained classifier. The ability of the model to correctly infer (or predict) the whisking condition (AW or NW) from features extracted from cortical recordings, is tested using k-fold cross validation.

We now discuss the widefield imaging experiments, and the methods used in the analysis.

2.1. Animals and surgery

Six mice expressing GCaMP6f in cortical excitatory neurons were used for widefield transcranial imaging [14,44]. All procedures were carried out with the approval of the Rutgers University Institutional Animal Care and Use Committee. Triple transgenic mice expressed Cre recombinase in Emx1-positive excitatory pyramidal neurons (The Jackson Laboratory; 005628), tTA under the control of the Camk2a promoter (The Jackson Laboratory; 007004) or ZtTA (3/6 mice) under the control of the CAG promoter inserted into the ROSA26 locus (The Jackson Laboratory; 012266), and TITL-GCaMP6f (The Jackson Laboratory; Ai93; 024103). At 7 to 11 weeks of age, mice were outfitted with a transparent skull and an attached fixation post using methods similar to those described previously [9,14,45]. Mice were anesthetized with isoflurane (3% induction and 1.5% maintenance) in 100% oxygen, and placed in a stereotaxic frame (Stoelting) with temperature maintained at 36 °C with a thermostatically controlled heating blanket (FHC). The scalp was sterilized with betadine scrub and infiltrated with bupivacaine (0.25%) prior to incision. The skull was lightly scraped to detach muscle and periosteum and irrigated with sterile 0.9% saline. The skull was made transparent using a light-curable bonding agent (iBond Total Etch, Heraeus Kulzer International) followed by a transparent dental composite (Tetric Evoflow, Ivoclar Vivadent). A custom aluminum headpost was affixed to the right side of the skull and the transparent window was surrounded by a raised border constructed using another dental composite (Charisma, Heraeus Kulzer International). Carprofen (5 mg/kg) was administered postoperatively. Following a recovery period of one to two weeks, mice were acclimated to handling and head fixation for an additional week prior to imaging. Mice were housed on a reversed light cycle and all handling and imaging took place during the dark phase of the cycle.

2.2. Widefield imaging of cortical activity and whisker movement recording

Imaging of GCaMP6f was carried out in head-fixed mice with the transparent skull covered with glycerol and a glass coverslip. A schematic of the imaging system is shown in Fig. 2(a). A custom macroscope [24] allowed for simultaneous visualization of nearly the entire left hemisphere and medial portions of the right hemisphere (as seen in Fig. 2(a)). The cortex was illuminated with a 460 nm LED (Aculed VHL) powered by a Prizmatix current controller (BLCC-2). Excitation light was filtered (479/40; Semrock FF01-479/40-25) and reflected by a dichroic mirror (Linos DC-Blue G38 1323 036) through the objective lens (Navitar 25 mm / f0.95 lens, inverted). GCaMP6f fluorescence was filtered (535/40; Chroma D535/40m emission filter) and acquired using a MiCam Ultima CMOS camera (Brain Vision) fitted with a 50 mm / f0.95 lens (Navitar). Images were captured on a 100 × 100 pixel sensor. Spontaneous cortical activity was acquired in 20.47 s blocks at 100 frames per second with 20 s between blocks (Fig. 3). Sixteen blocks were acquired in each session and mice were imaged in two sessions in a day. A sample frame obtained during a block is shown in Fig. 2(b). For the corresponding 20 s movie see Visualization 1.


Fig. 2 a) Left: Illustration of the experimental setup used for widefield imaging of cortical activity of mice expressing GCaMP6f and simultaneous recording of whisker movement. Right, top: raw image of neocortical surface through transparent skull preparation. M1, S1, and V1 are schematically labeled. Asterisk indicates position of Bregma. Right, bottom: ROIs are superimposed on a map based on the Allen Institute common coordinate framework v3 of mouse cortex (brain-map.org; adapted from [43]). ROI: 1, Retrosplenial area, lateral agranular part (RSPagl); 2, Retrosplenial area, dorsal (RSPd); 3, 4, 9, Secondary motor area (MOs); 5, 7, 8, 10, Primary motor area (MOp); 6, Primary somatosensory area, mouth (SSp-m) / upper limb (SSp-ul); 11, 16, Primary somatosensory area, lower limb (SSp-ll); 12, SS-ul; 13, Primary somatosensory area, nose (SSp-n); 14, 20, Primary somatosensory area, barrel field (SSp-bfd); 15, SSp-bfd / Primary somatosensory area, unassigned (SSp-un); 17, Retrosplenial area, lateral agranular part (RSPagl); 18, Anterior visual area (VISa) / Primary somatosensory area, trunk (SSp-tr); 19, VISa / SSp-tr / SSp-bfd; 21, Supplementary somatosensory area (SSs); 22, Auditory area (AUD); 23, Temporal association areas (TEa); 24, SSp-bfd / Rostrolateral visual area (VISrl); 25, 29, 30, Primary visual area (VISp); 26, Anteromedial visual area (VISam); 27, RSPagl / RSPd; 28, Posteromedial visual area (VISpm). b) A sample 20 s movie obtained during a block. Frames corresponding to “AW” are identified by “W”, shown on the top left of frames (see Visualization 1).



Fig. 3 Experimental protocol that was followed for each subject. Each subject participated in two sessions per day. In each session, spontaneous activity was acquired for sixteen 20.47 s blocks, with 20 s of rest between blocks.


In addition, all whiskers contralateral to the imaged cortical hemisphere were monitored with high-speed video at 500 frames/s using a Photonfocus DR1 camera triggered by a Master-9 pulse generator (AMPI) and Streampix (Norpix) software. Whiskers were illuminated from below with 850 nm infrared light. The mean whisker position was tracked and measured as the change in angle (in degrees) using a well-established, automated whisker-tracking algorithm, freely available in MATLAB [46], that computes the frame-by-frame center of mass of all whiskers in the camera’s field of view. The angle of the center of mass of all whiskers is similar to the average angle of all whiskers tracked individually, because the whiskers do not move independently.

2.2.1. Preprocessing of calcium signals

Changes in GCaMP6f relative fluorescence (ΔF/F0) for each frame within a recording were calculated by subtracting and then dividing by the baseline. The baseline was defined as the average intensity of the first 49 frames. Two blocks (one from subject #2 and one from subject #3) were excluded from further analysis due to loss of whisker movement data. Block length was shortened from 20.47 s to 20 s for the remaining parts of the analysis.

Thirty 5 × 5 pixel regions of interest (ROIs) distributed over the cortex (see Fig. 2(a)) in each frame were defined based on location relative to the bregma point on the skull. In 5/6 mice, whisker stimulation by a piezo bending element was used to map the location of S1 barrel cortex. The 30 ROIs were positioned to cover and fill space between areas including somatosensory, visual, and motor areas of the cortex (S1, V1, M1) (see Fig. 2(a)). Each pixel has a side length of 65 μm, so each 5 × 5 pixel ROI covers 325 × 325 μm. This ROI size approximates the dimensions of a cortical column in sensory cortex and is consistent with standard practices in the field [15,47]. These studies, which examined sensory mapping, spontaneous activity, and task-related activation, have shown that widefield calcium signals do not display resolution better than these dimensions; therefore, smaller ROIs are not beneficial. The choice of ROI size is therefore suitable and standard for comparison across different existing datasets. ROI locations were kept the same across subjects. Time series associated with each ROI were obtained by averaging the pixel intensities within the corresponding ROI.
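As a concrete sketch of this preprocessing, the snippet below (function and parameter names are ours, not the paper's code) computes ΔF/F0 against the 49-frame baseline and averages over 5 × 5 pixel ROIs; the ROI coordinates passed in are placeholders rather than the bregma-referenced positions used in the study.

```python
import numpy as np

def preprocess_block(frames, rois, baseline_frames=49, roi_size=5):
    """Convert a raw image stack to per-ROI dF/F0 traces.

    frames : (n_frames, 100, 100) array from the camera.
    rois   : list of (row, col) top-left corners of roi_size x roi_size
             ROIs (placeholder coordinates; the paper positions 30 ROIs
             relative to bregma).
    F0 is the mean of the first `baseline_frames` frames, as in the text.
    """
    frames = np.asarray(frames, dtype=float)
    f0 = frames[:baseline_frames].mean(axis=0)          # per-pixel baseline
    dff = (frames - f0) / f0                            # (F - F0) / F0
    traces = [dff[:, r:r + roi_size, c:c + roi_size].mean(axis=(1, 2))
              for (r, c) in rois]
    return np.array(traces)                             # (n_rois, n_frames)
```

A 20 s block at 100 frames/s yields a (30, 2000) array of ROI traces for the analysis that follows.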

2.2.2. Labeling data related to active whisking and no whisking conditions

In order to investigate the relationship between behavior and cortical activity, it is necessary to identify the durations in the recordings that correspond to the “active whisking” (AW) and “no whisking” (NW) conditions. Here we developed a method to automatically label the durations related to each condition, according to the whisker movement recordings.

The whisker movement time series was segmented using a sliding window. For a given segment i, the standard deviation (SD) of the signal, σwi, is computed as \sigma_{w_i} = \sqrt{\frac{1}{N} \sum_{j=1}^{N} (x_j - \mu_i)^2}, where μi represents the mean and N denotes the number of samples within the segment. This procedure generates a new time series of σwi values, representing the extent to which the whiskers are in motion over the course of observation. A threshold was then set to identify whether the recordings correspond to the active whisking (above the threshold) or no whisking (below the threshold) condition. After testing different threshold values and visually inspecting the raw whisker movement signals, a threshold value of 10 was used.
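A minimal sketch of this labeling step, assuming the 500 frames/s whisker video rate; the threshold of 10 comes from the text, while the window length and step here are illustrative choices, not values stated in the paper:

```python
import numpy as np

def label_whisking(angle, fs=500, win_s=0.5, step_s=0.5, threshold=10.0):
    """Label windows of a whisker-angle trace as AW (True) or NW (False).

    The per-window standard deviation is thresholded at 10 degrees, as in
    the text; `win_s` and `step_s` are illustrative assumptions.
    """
    win, step = int(win_s * fs), int(step_s * fs)
    sds, labels = [], []
    for start in range(0, len(angle) - win + 1, step):
        seg = np.asarray(angle[start:start + win], dtype=float)
        sd = seg.std()                  # sqrt(mean((x_j - mu_i)^2))
        sds.append(sd)
        labels.append(sd > threshold)   # above threshold -> active whisking
    return np.array(sds), np.array(labels)
```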

As an example, sample images and time series corresponding to two ROIs (6 and 27), along with the whisker movement signal, recorded in block #1 from subject #1, are shown in Fig. 4. The top row illustrates a series of baseline-corrected images. Shown also are the averaged image for the duration of (6.01 − 6.20) s (labeled in red in Fig. 4(c)), where no clear calcium transients are present, and the averaged image for the duration of (13.21 − 13.40) s (labeled in blue in Fig. 4(c)), where calcium transients are present.


Fig. 4 Sample images and time series recorded from block #1 of subject #1. (a)–(b) baseline-corrected images, (c) time series corresponding to ROI-6 and ROI-27, (d) measured angle corresponding to the whisker movement signal recorded from the same block, and (e) standard deviation-based time series of the signal in (d), where the threshold level used for labeling AW and NW conditions is shown as a red line.


The measured angle corresponding to the whisker movement recordings of the same block is shown in Fig. 4(d), and in Fig. 4(e) the time series obtained from the sliding-window standard deviation calculation discussed above is plotted. The threshold level for determining AW and NW conditions over time is visualized by a red horizontal line.

2.3. Visibility graph

Here, we first describe the procedure used to construct the visibility graph for a given time series, and then describe how graph measures are extracted from it.

2.3.1. VG construction

The visibility graph is an effective tool that can be employed to reveal the temporal structure of a time series at different time scales [42,48–50]. Recently, VG has been receiving increased attention in various studies related to human brain function, such as those involving sleep [51], epilepsy [52,53], Alzheimer’s disease [54], and differentiating resting-state and task-execution states [55]. In these studies, VG has been applied to time series obtained from various imaging modalities such as electroencephalography (EEG) [51–54,56], functional near-infrared spectroscopy (fNIRS) [55], and functional magnetic resonance imaging (fMRI) [57].

VG maps a time series to a graph, thereby, providing a tool to “visually” investigate different properties of the time series [42, 50]. The VG associated with a given time series x = [x(1), · · · , x(N)] of N points is constructed as follows. Each point in x is considered as a node in the graph (i.e. for an N-point time series, the graph will have N nodes). The link between node pairs is formed only if the nodes are considered to be naturally visible. That is, in the graph, there will be an undirected and unweighted link between nodes i and j, if and only if, for any point p (i < p < j) in the time series, the following condition holds

x(p) < x(j) + \left[ x(i) - x(j) \right] \frac{t(j) - t(p)}{t(j) - t(i)},
where t(j), t(p), and t(i) are the times corresponding to points j, p, and i [42]. That is, two nodes i and j are connected if the straight line connecting the data points (t(i), x(i)) and (t(j), x(j)) does not intersect the height of any data point (t(p), x(p)) that lies between them. Accordingly, in the adjacency matrix Ax = {ai,j} (i, j = 1, · · · , N), the element ai,j is set to 1 if nodes i and j are connected under the definition above, and 0 otherwise.
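The construction above can be sketched in a few lines of Python. This is a brute-force implementation of the natural visibility condition for uniformly sampled data (t(k) = k), adequate for the short segments used later; the paper does not publish its implementation, so this is an illustrative version:

```python
import numpy as np

def visibility_graph(x):
    """Adjacency matrix of the natural visibility graph of time series x.

    Nodes i and j are linked iff every intermediate point p lies strictly
    below the straight line joining (i, x[i]) and (j, x[j]). Brute force,
    fine for the ~100-300 point segments considered here.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    a = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            visible = True
            for p in range(i + 1, j):
                # visibility condition with t(k) = k (uniform sampling)
                if x[p] >= x[j] + (x[i] - x[j]) * (j - p) / (j - i):
                    visible = False
                    break
            if visible:
                a[i, j] = a[j, i] = 1
    return a
```

Note that consecutive points are always mutually visible, so the resulting graph is connected.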

2.3.2. Graph measures as features

Once the time series x of N points is mapped to a graph with adjacency matrix Ax = {ai,j} (i, j = 1, · · · , N) via VG, the topological measures of the graph can be utilized to investigate different properties of the time series. Here, we consider three such measures: Edge Density (D), Averaged Clustering Coefficient (C), and Characteristic Pathlength (L), as defined below.

  • Edge Density (D) measures the fraction of existing edges in the graph with respect to the maximum possible number of edges [58]. The edge density is obtained as
    D = \frac{1}{N(N-1)} \sum_{i,j} a_{i,j}.
    It can be shown that for a globally convex time series, the value of D would be 1, and for a time series with a large number of fluctuations, the value of D would be small. Therefore, the edge density can be considered as a measure of the irregularity of fluctuations in the time series [59].
  • Averaged Clustering Coefficient (C) is obtained as the average of the local clustering coefficients of all nodes in the graph. The local clustering coefficient of node i (Ci) is defined as the fraction of connections that exist among its neighboring nodes relative to the maximum possible number of connections among them [58]. The averaged clustering coefficient is computed as
    C = \frac{1}{N} \sum_{i=1}^{N} C_i = \frac{1}{N} \sum_{i} \frac{\sum_{j,l} a_{ij} a_{il} a_{jl}}{K_i (K_i - 1)},
    where Ki represents the degree of node i (the number of edges connected to node i). A large value of C indicates dominant convexity of the time series [59].
  • Characteristic Pathlength (L) is found as the average of the shortest pathlengths between all node pairs in the graph. The characteristic pathlength is obtained as
    L = \frac{1}{N(N-1)} \sum_{i \neq j} l_{ij},
    where lij denotes the shortest pathlength between nodes i and j.
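Under the definitions above, the three measures can be computed directly from the 0/1 adjacency matrix. The sketch below (helper names are ours) uses breadth-first search for the shortest paths, which is valid because the graph is unweighted:

```python
import numpy as np

def vg_measures(a):
    """Edge density D, averaged clustering coefficient C, and
    characteristic pathlength L from a 0/1 adjacency matrix `a`."""
    a = np.asarray(a)
    n = len(a)
    # D: realized fraction of the N(N-1) ordered node pairs.
    d = a.sum() / (n * (n - 1))
    # C: average of local clustering coefficients.
    c_local = np.zeros(n)
    for i in range(n):
        nb = np.flatnonzero(a[i])                   # neighbors of node i
        if len(nb) > 1:
            links = a[np.ix_(nb, nb)].sum() / 2     # edges among neighbors
            c_local[i] = 2 * links / (len(nb) * (len(nb) - 1))
    c = c_local.mean()
    # L: mean shortest pathlength over all ordered node pairs
    # (BFS from each source node).
    total = 0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        queue = [s]
        while queue:
            u = queue.pop(0)
            for v in np.flatnonzero(a[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += dist[dist > 0].sum()
    l = total / (n * (n - 1))
    return d, c, l
```

For a complete graph all three measures equal 1, consistent with the globally convex case noted for D above.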

2.4. Classification

To learn models for inferring behavior (as measured by AW and NW) from recordings obtained via widefield calcium imaging of cortical activity, classification experiments are performed. Specifically, we wish to learn classifiers of the following form:

f: \text{VG Measures}(t_0, t_0 + w) \rightarrow \{AW, NW\},
where VG Measures(t0, t0 + w) represents the graph measures extracted from the VGs associated with calcium signals within the segment [t0, t0 + w], and w denotes the window length used for segmentation (see Section 3.1).

Here, we briefly describe the feature extraction process, the classifiers, and the measures used to evaluate the classification performance. Classification experiments were executed using GraphLab [60].

2.4.1. Feature extraction

Three graph measures were extracted from the VG associated with each segment (identified by the sliding window) of recordings obtained from individual ROIs. To extensively investigate which measures result in a better model, seven types of feature vectors were formed: D, C, L, D + C, D + L, C + L, and D + C + L. In all cases, feature vectors were constructed using measures from all the ROIs. For example, when considering D as the feature, a 30 × 1 feature vector is constructed for each segment (where 30 is the number of ROIs).

Five different sliding window durations (1, 1.5, 2, 2.5, and 3 s) were considered for segmentation. As such, the number of segments per recording block varies with the sliding window duration (39 for the 1 s window, 38 for 1.5 s, 37 for 2 s, 36 for 2.5 s, and 35 for 3 s). There are 32 blocks for subjects #1, #4, #5, and #6, and 31 blocks for subjects #2 and #3. Table 1 summarizes the number of blocks and the number of AW/NW segments for each subject when a window duration of 2 s and a window step of 0.5 s are used.
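The windowing arithmetic above (e.g., 37 segments for a 2 s window with a 0.5 s step over a 20 s block) follows from a simple sliding-window loop. The sketch below keeps the VG computation abstract behind a caller-supplied `segment_features` function, so it is a scaffold under our own naming conventions rather than the paper's exact code:

```python
import numpy as np

def extract_features(roi_signals, segment_features, win=200, step=50):
    """Build one feature vector per sliding window.

    roi_signals      : (n_rois, n_samples) array, e.g. 30 x 2000 for a
                       20 s block at 100 frames/s.
    segment_features : callable mapping a 1-D segment to a tuple of VG
                       measures (e.g. (D, C) from its visibility graph).
    Returns (n_windows, n_rois * n_measures): measures for all ROIs are
    concatenated, matching the 30 x 1 vectors described in the text when
    a single measure is used.
    """
    n_rois, n_samples = roi_signals.shape
    rows = []
    for start in range(0, n_samples - win + 1, step):
        feats = []
        for r in range(n_rois):
            feats.extend(segment_features(roi_signals[r, start:start + win]))
        rows.append(feats)
    return np.array(rows)
```

With `n_samples=2000`, `win=200`, and `step=50`, the loop produces the 37 segments per block reported in the text.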


Table 1. Number of blocks and number of AW/NW segments for each subject, when the window length of 2 s with a step size of 0.5 s is used.

2.4.2. Classifiers and evaluation measures

Three commonly-used classifiers were used to perform classification: 1) k-nearest neighbor (kNN), 2) regularized logistic regression (LR), and 3) random forest (RF). These classifiers have been shown to offer good performance with neuroimaging data in several studies [61–69]. Here, for kNN, k in the range of 1 to 10 is used; for LR, 2-norm regularization is used, with the regularization weight set between 10^{-2} and 10^{1.5}; and for RF, the subsampling ratio is selected to be 40%, 70%, or 100%.

To evaluate the classification performance, three measures, accuracy (AC), sensitivity (SE), and specificity (SP), were used [70].

First, separate classifiers were trained for each subject. Ten-fold cross-validation was used to test the performance of the models. For each subject, the data were randomly partitioned into ten subsamples. Classification experiments were repeated ten times; during each, one subsample was assigned as the testing dataset and the remaining subsamples were assigned as the training dataset. For every subject, the classification performance was evaluated using the measures described above, and the results were averaged across the ten repetitions.
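A sketch of this evaluation using scikit-learn (the study itself used GraphLab). The hyperparameter values shown are single illustrative points from the ranges searched in the paper, the RF subsampling ratio is not reproduced here, and only accuracy is scored for brevity:

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

def evaluate_classifiers(X, y, k_folds=10, seed=0):
    """10-fold CV accuracy for the three classifier families used in the
    paper; hyperparameters are illustrative, not the searched grids."""
    models = {
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "LR": LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
        "RF": RandomForestClassifier(n_estimators=100, random_state=seed),
    }
    results = {}
    for name, model in models.items():
        cv = cross_validate(model, X, y, cv=k_folds, scoring="accuracy")
        results[name] = cv["test_score"].mean()
    return results
```

Sensitivity and specificity could be added via the `recall` scorer applied to each class in turn.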

3. Results

3.1. VG construction from calcium signals

The preprocessed calcium signals were segmented using sliding windows with a fixed step of 50 time points (0.5 s). Five different window lengths were used: 100, 150, 200, 250, and 300 time points (corresponding to 1, 1.5, 2, 2.5, and 3 s, respectively). The VG was constructed for each segment of the time series obtained from each ROI. For each VG, three graph measures, D, C, and L, were extracted. As a result, for a given sliding window length, recordings from each ROI of each recording block result in three time series, for D, C, and L. Our objective is to use this information to develop models that predict the behavior of active whisking and no whisking from recorded calcium signals.

Representative preprocessed calcium signals from four ROIs (6, 8, 19, and 30) of recording block #1 from subject #1 are shown in Fig. 5. For signals from each ROI, two segments of 2 s duration, corresponding to AW and NW, are also shown. For each of these segments, the VG is constructed and the corresponding adjacency matrices are presented. As the segments have the same duration (2 s, or 200 time points), the number of nodes in all graphs is the same. In these matrices, the dark color represents no connection, and the light color represents the existence of an edge. For each ROI, the distinctions between the patterns of the matrices related to AW and NW can be revealed via the three graph measures D, C, and L. The values of these measures are compared for AW and NW for each ROI in Fig. 5(u).


Fig. 5 Preprocessed calcium signals of recording block #1 from subject #1 from ROI-6 (a), ROI-8 (f), ROI-30 (k) and ROI-19 (p). For each case, 2 s segments of signals corresponding to the AW (shown in red in (b), (g), (l) and (q)) and NW (shown in blue in (c), (h), (m) and (r)) conditions, as determined from whisker movement recordings, are shown. For each ROI, the adjacency matrices for 2 s AW are shown in (d), (i), (n), and (s), and for 2 s NW are shown in (e), (j), (o), and (t). Measures extracted from the VG of the 2 s AW time series (shown in red) and from the VG of the 2 s NW time series (shown in blue) are also shown in (u) for each ROI.


As can be seen, distinct patterns (e.g., in terms of amplitude and width of calcium transients) for the same whisking condition (AW or NW) are observed in signals obtained from different ROIs distributed over the cortex, suggesting that different cortical regions have potentially different relationships with behavior. For example, for ROIs in or close to M1 (ROI-6 and ROI-8), the measure D is larger during NW compared to AW, indicating more edges in the VG representation of recordings from this region for NW as compared to AW. For ROIs close to S1 (ROI-19), the measure L appears to be smaller during NW compared to AW, indicating that, on average, node pairs are separated by shorter paths in the VG representation of recordings from this region for NW as compared to AW. In V1 (e.g., ROI-30), the measure C is larger during NW as compared to AW, indicating the presence of smaller clusters in the VG representation of recordings from this region during AW as compared to NW. These results suggest that different regions of the brain follow different temporal dynamics during behavior, and such differences can be revealed and quantitatively described via the VG measures D, C, and L.

The graph measures shown in Fig. 5(u) correspond to two segments of the time series for each of the four ROIs. Using a sliding window of length 2 s, VGs can be constructed for each segment of the time series, and from each VG, the three mentioned graph measures can be extracted. Figs. 6(a) to (c) show the results of such analysis for all ROIs, illustrating the temporal evolution of D, C, and L, respectively. The simultaneously obtained whisker movement recording is also shown in Fig. 6(d). It can be clearly seen that different patterns are observed in the VG measures for durations corresponding to AW and NW across all ROIs.


Fig. 6 Color-coded graph measures for all ROIs as a function of time during a recording block. (a) Edge density (D), (b) Averaged clustering coefficient (C), and (c) Characteristic pathlength (L). (d) Whisker movement recording obtained simultaneously in the same block.


3.2. Classification results

For each subject, we performed a comprehensive investigation of how the selection of various parameters (e.g., the window size used for extracting VG measures and the choice of feature types for classification) impacts the classification results. For each choice of window size, features were constructed from individual measures or combinations of measures from the corresponding VG. Figs. 7, 8, and 9 illustrate the evaluation measures obtained for each subject when kNN, LR, and RF were used as the classifier, respectively.


Fig. 7 Classification results when using kNN as classifier.



Fig. 8 Classification results when using regularized logistic regression (LR) as classifier.



Fig. 9 Classification results when using random forest (RF) as classifier.


It was found that while the performance is subject dependent (due to individual variability as well as variability in whisking behavior across subjects; see Table 1), with a proper choice of features and window length, all classifiers result in high levels of accuracy and specificity for all subjects. The sensitivity remains relatively modest; however, considering the imbalance between AW and NW samples (e.g., only 23% of the samples belonged to the AW condition for the 2 s window duration), the obtained accuracy is significantly better than that of a naive classifier (in which all testing samples are assigned the label of the majority class in the training set), demonstrating the effectiveness of the VG measures in providing features that carry discriminatory information for AW and NW. In the majority of scenarios, classification based on either C or L alone did not result in good performance, while classification based on the features D + C or D led to the best sensitivity results for the majority of subjects.

For each classifier, the choice of window length (w), features, and parameters that resulted in the best sensitivity among all explored options is summarized in Table 2. Consistent with the observations from Figs. 7, 8, and 9, it can be seen that, in all cases, the graph measure D, either individually or jointly with others, has been identified as the optimum feature. For the kNN and LR classifiers, the feature D + C resulted in the best sensitivity for most subjects, while for the RF classifier, the measure D by itself worked as the optimum feature. In terms of the duration of segments for constructing VGs, window durations of 2 s or longer resulted in the optimum performance. In addition, for most cases, the sensitivity dropped when the window size used for extracting features went below 150 points.


Table 2. Classification results for the best sensitivity obtained for each subject when using kNN, regularized logistic regression (LR), and random forest (RF) as classifier. Features, window lengths (w), and related parameters from which the optimum results were obtained are also listed (SS is short for subsample). Note that “+” in the “Feature” rows represents using a multiparametric approach for performing the classification.

Overall, kNN and LR almost always deliver slightly better performance than RF, but with the right features all classifiers are able to successfully differentiate the whisking conditions, demonstrating that features based on the visibility graph carry discriminatory information. To summarize classifier performance, we repeated the classification across subjects using the unified parameters that led to the best classification performance for the majority of subjects in Table 2: D + C as the feature and 2 s as the window length. The results are presented in Table 3; on average, an accuracy greater than 86% is achieved across all subjects.


Table 3. Classification performance using unified parameters across subjects and classifiers. D + C is used as the feature, and w = 200 points is used as the window length for extracting features in all cases.
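The windowing used throughout (w samples per segment, with w = 200 points corresponding to the 2 s windows) can be sketched as follows; the implied sampling rate of roughly 100 samples/s is an inference from the text, and `segment` is an illustrative helper rather than the study's code:

```python
import numpy as np

def segment(signal, w):
    """Split a 1-D recording into consecutive non-overlapping windows of w samples."""
    signal = np.asarray(signal)
    n = len(signal) // w            # number of complete windows; remainder dropped
    return signal[:n * w].reshape(n, w)

# A 2 s window at ~100 samples/s gives w = 200 points;
# a 1000-sample trace yields 5 such segments.
x = np.arange(1000)
print(segment(x, 200).shape)  # (5, 200)
```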

4. Discussion and conclusions

Measuring brain states over wide areas of cortex is of central importance for understanding sensory processing and sensorimotor integration. Changes in brain states influence the processing of incoming sensory information. For example, data from several sensory modalities including somatosensation, vision, and audition, indicate that the cortical representations of stimuli vary depending on the neocortical state when the stimulus arrives [71–74]. In mice, natural spontaneous behaviors such as locomotion and self-generated whisker movements influence brain states through increased behavioral arousal and activation of ascending neuromodulator systems [40,75]. Studies in mice using widefield imaging of voltage and calcium sensors during whisking or locomotion have provided important information on the spatiotemporal modulations of brain states [14,29], and relating these dynamic optical signals to behavior is an area of great interest. This line of research will be advanced by the development of several new transgenic calcium reporter mice [76,77] and cranial window methods [16].


The VGs constructed here corresponded to segments of the recordings identified by a moving window of length w. We performed a comprehensive study (five window lengths, seven feature sets per window length, and three classifiers) to find a model that can infer the behavior (AW or NW) from calcium imaging data. All classifiers delivered high accuracy and specificity and moderate sensitivity, with kNN and LR offering better performance than RF. Given the imbalance between AW and NW (e.g., only 23% of the samples belonged to the AW condition for the 2 s window length), the significantly better-than-naive performance demonstrates the effectiveness of the VG measures in providing features that carry discriminatory information for AW and NW. Other techniques for learning from imbalanced data, such as [78], could also be incorporated to achieve even better performance. Regardless, as shown, the obtained performance was comparable to the scenario in which the number of spikes inferred from the calcium signals is used as the feature.

Additionally, among the three considered visibility graph measures (D, C, and L), the measure D was identified as the feature providing the best sensitivity, for all subjects and all choices of classifier, either individually or jointly with other measures (e.g., D + C). This observation indicates that D carries the strongest discriminatory information of the three VG measures. Given that D reflects the number of edges in the graph, which are associated with fluctuations in the time series, this result shows that variations in the patterns, and in the relative timing of the fluctuations with respect to one another, play key roles in differentiating the two states. Furthermore, the proposed method was shown to provide features common across subjects that result in successful classification performance.
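The connection between D and the fluctuations can be made concrete with a small sketch of the natural visibility criterion of Lacasa et al. [42]. This brute-force construction assumes uniformly sampled data and is illustrative, not the study's actual implementation:

```python
import numpy as np

def visibility_degrees(y):
    """Node degrees of the natural visibility graph of a 1-D series.

    Samples (i, y[i]) and (j, y[j]) are connected if every sample between
    them lies strictly below the straight line joining them (Lacasa et al.).
    Brute-force O(n^2); adequate for short windows such as w <= 200.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    degree = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            # Height of the chord from i to j at the intermediate times k.
            chord = y[j] + (y[i] - y[j]) * (j - k) / (j - i)
            if k.size == 0 or np.all(y[k] < chord):
                degree[i] += 1
                degree[j] += 1
    return degree

# Strictly convex series: every pair is mutually visible -> complete graph.
d = visibility_degrees([0.0, 1.0, 4.0, 9.0])
print(d.mean())  # 3.0
```

A flat ramp, by contrast, yields only nearest-neighbor edges (collinear points are not strictly visible), so D directly tracks how strongly the segment fluctuates.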

It is worth noting that the three classifiers were implemented independently to demonstrate the robustness of the VG measures as features. The logistic regression classifier is robust to noise and can avoid overfitting through regularization. The random forest classifier can handle nonlinear and very high-dimensional features. The kNN classifier is computationally expensive but simple to implement and supports incremental learning on data streams. As presented, all classifiers successfully differentiated the whisking conditions, demonstrating the robustness of the VG metrics in capturing the temporal characteristics of optical imaging data.
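Assuming a scikit-learn-style workflow (the study's actual implementation and hyperparameters may differ), the three classifiers can be compared side by side on a toy feature matrix; all data below are synthetic stand-ins for the per-window VG features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Toy stand-in for the VG feature matrix: one row per window,
# columns are per-ROI graph measures (e.g., D and C from 30 ROIs).
X = rng.normal(size=(200, 60))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # 1 = AW, 0 = NW

classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "LR": LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}
scores = {name: cross_val_score(clf, X, y, cv=10).mean()
          for name, clf in classifiers.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

Because only the first column carries signal here, LR separates well while kNN suffers from the noisy extra dimensions, illustrating why the classifiers were evaluated independently.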

4.1. Comparison with spike rate inference-based feature extraction approach

The proposed approach was applied directly to the recorded calcium signals, without using methods such as template matching [79,80], deconvolution [81,82], Bayesian inference [83,84], supervised learning [85], or independent component analysis [86]. Here, we compare the classification performance of the proposed approach with a scenario in which the number of spikes is used as the feature for each condition.

To infer spiking events from the calcium recordings, we used the FluoroSNNAP toolbox [80] in MATLAB, which implements a commonly used template-matching algorithm. The same window sizes considered in the VG-based analysis were also considered for the spike-based analysis. For each segment, feature vectors were constructed by concatenating the number of detected spikes from all ROIs. Regularized logistic regression was used as the classifier, with the same ℓ2 penalty weights as set before. As with the VG-based feature extraction, performance was evaluated using the same cross-validation procedure described earlier.
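The per-window spike-count feature construction can be sketched as follows. FluoroSNNAP's template matching is a MATLAB toolbox, so the threshold-crossing detector below is a deliberately crude stand-in, and all names and parameters are illustrative:

```python
import numpy as np

def spike_count_features(signals, window, threshold=2.0):
    """Per-window spike-count feature vectors, one entry per ROI.

    `signals` is (n_rois, n_samples). A "spike" here is an upward crossing
    of `threshold` standard deviations above the mean -- a crude stand-in
    for FluoroSNNAP's template matching.
    """
    signals = np.asarray(signals, dtype=float)
    z = (signals - signals.mean(axis=1, keepdims=True)) \
        / signals.std(axis=1, keepdims=True)
    above = z > threshold
    # Upward crossings: above threshold now, not above one sample earlier.
    onsets = above[:, 1:] & ~above[:, :-1]
    n_windows = onsets.shape[1] // window
    feats = [onsets[:, k * window:(k + 1) * window].sum(axis=1)
             for k in range(n_windows)]
    return np.stack(feats)            # (n_windows, n_rois)

rng = np.random.default_rng(1)
sig = rng.normal(size=(30, 1000))     # 30 ROIs, synthetic traces
F = spike_count_features(sig, window=200)
print(F.shape)  # (4, 30)
```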

Results for the area under the ROC curve (AUC) are presented in Table 4 for each window size. The VG-based approach provides better performance, further confirming the ability of VG-based measures to identify discriminatory features related to different behaviors from calcium recordings.


Table 4. Performance comparison of classification experiments based on i) VG-based feature extraction from all ROIs, ii) spike-based feature extraction from all ROIs, iii) variance-based feature extraction from all ROIs, iv) VG-based feature extraction only from ROI-20, and v) VG-based feature extraction from ROIs 25–30.

4.2. Comparison with signal variance-based feature extraction approach

We carried out another analysis to compare the classification performance of the proposed approach with a scenario in which the variance of the signal is used as the feature, for all candidate window sizes. For each segment, feature vectors were constructed by concatenating the variance from all ROIs. The same classifier and regularization optimization process as in the VG-based approach was used, and AUC values based on 10-fold cross-validation were used to compare classification performance. The results, summarized in Table 4 for each window size, show that the VG-based method provides better performance regardless of the window size.
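This variance-based pipeline can be sketched as follows; the data are synthetic, and `variance_features` and the AUC scoring setup are illustrative assumptions rather than the study's exact code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def variance_features(signals, window):
    """Per-window feature vector: variance of each ROI's segment.

    `signals` is (n_rois, n_samples); returns (n_windows, n_rois).
    """
    signals = np.asarray(signals, dtype=float)
    n_windows = signals.shape[1] // window
    return np.stack([signals[:, k * window:(k + 1) * window].var(axis=1)
                     for k in range(n_windows)])

rng = np.random.default_rng(2)
# Toy data: 'AW' windows get a higher-variance first ROI.
quiet = rng.normal(0.0, 1.0, size=(30, 2000))
active = rng.normal(0.0, 1.0, size=(30, 2000))
active[0] *= 3.0
X = np.vstack([variance_features(quiet, 200), variance_features(active, 200)])
y = np.array([0] * 10 + [1] * 10)     # 0 = NW, 1 = AW

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc").mean()
print(round(auc, 2))  # close to 1.0 for this cleanly separable toy example
```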

4.3. Comparison with VG-based features from the somatosensory cortex

We carried out an additional analysis to examine whether the classification results differ when only signals recorded from ROIs located in the primary somatosensory cortex are considered, since layer 4 “barrels” in primary somatosensory cortex receive sensory input from the whiskers. Among the ROI locations, ROI-20 was in close proximity to the primary somatosensory cortex, according to the location of bregma and functional mapping experiments in a subset of mice. Using the same parameter settings as earlier, classification was performed based on VG measures extracted from the ROI-20 signals. AUC results are shown in Table 4. When features from all ROIs (covering a large area of the cortex) are used, the classification performance is significantly better. This result is consistent with previous work [14,26,29,87] suggesting that brain state modulation is widespread across many cortical regions.

In a related analysis, we further used VG-based features extracted from ROIs 25–30, which did not show the epileptiform-like events during NW seen in signals obtained from ROI 6. Results are summarized in Table 4, suggesting that VG is capable of decoding behavior from ROIs with various dynamic properties. It should be noted that the VG analysis in this paper uses a relatively fast time scale (2 s) compared to the blood-flow-related signals that can reduce fluorescent calcium signals. Such contamination is particularly strong for sensory-evoked signals [11,77], but it is less of a concern here for signals related to spontaneous behavioral state transitions.

4.4. Concluding remarks

To the best of our knowledge, this is the first study demonstrating that it is possible to infer behavior from the temporal characteristics of calcium recordings extracted through the visibility graph. As such, the proposed method could have applications in BMIs involving humans [30], or rodents and primates [31,32], where the subject’s intention must be inferred from brain recordings. Due to differences in the nature of the recorded signals and experimental conditions, a direct and fair comparison between those studies and the results shown here cannot be made, but the classification accuracies presented here are comparable to those reported in [30,88,89]. Additionally, the proposed methodology, in combination with widefield optical imaging of ensembles of neurons in awake behaving animals, can open up several new opportunities to study various aspects of brain function and its relationship to behavior. It could also be employed to develop quantitative biomarkers based on VG measures. While we considered three VG measures (D, L, and C) here, a wide range of other graph measures [90] could also be used to possibly improve classification performance. We conclude that VG is very effective in providing “quantitative” measures that can reveal differences in recorded calcium time series.

Future work will include 1) exploring the inclusion of other graph measures as features, 2) expanding VG to multilayer VG [49], where information about the dependencies among time series will also be incorporated into the models, 3) employing deep learning to develop predictive models, and 4) applying the methods to experiments involving learned behaviors and diverse cortical cell types.

Funding

Siemens Healthineers; National Science Foundation (NSF 1605646); New Jersey Commission on Brain Injury Research (CBIR16IRG032); National Institutes of Health (R01NS094450).

Acknowledgement

The authors thank Aseem Utrankar for help with whisker tracking and Dr. Yelena Bibineyshvili for assistance with data acquisition.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References and links

1. C. Mehring, J. Rickert, E. Vaadia, S. C. de Oliveira, A. Aertsen, and S. Rotter, “Inference of hand movements from local field potentials in monkey motor cortex,” Nat. Neurosci. 6, 1253–1254 (2003). [CrossRef]   [PubMed]  

2. J. Richiardi, H. Eryilmaz, S. Schwartz, P. Vuilleumier, and D. Van De Ville, “Decoding brain states from fMRI connectivity graphs,” NeuroImage 56, 616–626 (2011). [CrossRef]  

3. W. Shirer, S. Ryali, E. Rykhlevskaia, V. Menon, and M. Greicius, “Decoding subject-driven cognitive states with whole-brain connectivity patterns,” Cereb. Cortex. 22, 158–165 (2012). [CrossRef]  

4. D. Ringuette, M. A. Jeffrey, S. Dufour, P. L. Carlen, and O. Levi, “Continuous multi-modality brain imaging reveals modified neurovascular seizure response after intervention,” Biomed. Opt. Express 8, 873–889 (2017). [CrossRef]   [PubMed]  

5. D. A. McCormick, M. J. McGinley, and D. B. Salkoff, “Brain state dependent activity in the cortex and thalamus,” Curr. Opin. Neurobiol. 31, 133–140 (2015). [CrossRef]  

6. M. J. McGinley, M. Vinck, J. Reimer, R. Batista-Brito, E. Zagha, C. R. Cadwell, A. S. Tolias, J. A. Cardin, and D. A. McCormick, “Waking state: rapid variations modulate neural and behavioral responses,” Neuron 87, 1143–1161 (2015). [CrossRef]   [PubMed]  

7. E. Nurse, B. S. Mashford, A. J. Yepes, I. Kiral-Kornek, S. Harrer, and D. R. Freestone, “Decoding EEG and LFP signals using deep learning: heading TrueNorth,” in Proc. of the ACM Int. Conf. on Comp. Front. (2016), pp. 259–266.

8. C. Koch, M. Massimini, M. Boly, and G. Tononi, “Neural correlates of consciousness: progress and problems,” Nat. Rev. Neurosci. 17, 307–321 (2016). [CrossRef]   [PubMed]  

9. G. Silasi, D. Xiao, M. P. Vanni, A. C. Chen, and T. H. Murphy, “Intact skull chronic windows for mesoscopic wide-field imaging in awake mice,” J. Neurosci. Methods 267, 141–149 (2016). [CrossRef]   [PubMed]  

10. T. Murakami, T. Yoshida, T. Matsui, and K. Ohki, “Wide-field Ca2+ imaging reveals visually evoked activity in the retrosplenial area,” Front. Mol. Neurosci. 8, 20 (2015). [CrossRef]  

11. Y. Ma, M. A. Shaik, S. H. Kim, M. G. Kozberg, D. N. Thibodeaux, H. T. Zhao, H. Yu, and E. M. Hillman, “Wide-field optical mapping of neural activity and brain haemodynamics: considerations and novel approaches,” Phil. Trans. R. Soc. B 371, 20150360 (2016). [CrossRef]   [PubMed]  

12. L. Madisen, A. R. Garner, D. Shimaoka, A. S. Chuong, N. C. Klapoetke, L. Li, A. van der Bourg, Y. Niino, L. Egolf, C. Monetti, H. Gu, M. Mills, A. Cheng, B. Tasic, T. N. Nguyen, S. M. Sunkin, A. Benucci, A. Nagy, A. Miyawaki, F. Helmchen, R. M. Empson, T. Knopfel, E. S. Boyden, R. C. Reid, M. Carandini, and H. Zeng, “Transgenic mice for intersectional targeting of neural sensors and effectors with high specificity and performance,” Neuron 85, 942–958 (2015). [CrossRef]  

13. T. W. Chen, T. J. Wardill, Y. Sun, S. R. Pulver, S. L. Renninger, A. Baohan, E. R. Schreiter, R. A. Kerr, M. B. Orger, V. Jayaraman, L. L. Looger, K. Svoboda, and D. S. Kim, “Ultrasensitive fluorescent proteins for imaging neuronal activity,” Nature 499, 295–300 (2013). [CrossRef]   [PubMed]  

14. N. A. Steinmetz, C. Buetfering, J. Lecoq, C. Lee, A. Peters, E. Jacobs, P. Coen, D. Ollerenshaw, M. Valley, S. de Vries, M. Garrett, J. Zhuang, P. A. Groblewski, S. Manavi, J. Miles, C. White, E. Lee, F. Griffin, J. Larkin, K. Roll, S. Cross, T. V. Nguyen, R. Larsen, J. Pendergraft, T. Daigle, B. Tasic, C. L. Thompson, J. Waters, S. Olsen, D. Margolis, H. Zeng, M. Hausser, M. Carandini, and K. Harris, “Aberrant cortical activity in multiple GCaMP6-expressing transgenic mouse lines,” eNeuro4(5), ENEURO.0207–17 (2017). [CrossRef]   [PubMed]  

15. M. P. Vanni and T. H. Murphy, “Mesoscale transcranial spontaneous activity mapping in GCaMP3 transgenic mice reveals extensive reciprocal connections between areas of somatomotor cortex,” J. Neurosci. 34, 15931–15946 (2014). [CrossRef]  

16. T. H. Kim, Y. Zhang, J. Lecoq, J. C. Jung, J. Li, H. Zeng, C. M. Niell, and M. J. Schnitzer, “Long-term optical access to an estimated One million neurons in the live mouse cortex,” Cell Rep. 17, 3385–3394 (2016). [CrossRef]   [PubMed]  

17. D. Xiao, M. P. Vanni, C. C. Mitelut, A. W. Chan, J. M. LeDue, Y. Xie, A. C. Chen, N. V. Swindale, and T. H. Murphy, “Mapping cortical mesoscopic networks of single spiking cortical or sub-cortical neurons,” Elife 6, e19976 (2017). [CrossRef]   [PubMed]  

18. W. Denk and K. Svoboda, “Photon upmanship: why multiphoton imaging is more than a gimmick,” Neuron 18, 351–357 (1997). [CrossRef]   [PubMed]  

19. F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2, 932–940 (2005). [CrossRef]   [PubMed]  

20. A. Grinvald, D. Omer, S. Naaman, and D. Sharon, “Imaging the dynamics of mammalian neocortical population activity in-vivo,” in Membrane Potential Imaging in the Nervous System and Heart (Springer, 2015), pp. 243–271. [CrossRef]  

21. B. A. Wilt, L. D. Burns, E. T. Wei Ho, K. K. Ghosh, E. A. Mukamel, and M. J. Schnitzer, “Advances in light microscopy for neuroscience,” Annu. Rev. Neurosci. 32, 435–506 (2009). [CrossRef]   [PubMed]  

22. M. L. Andermann, A. M. Kerlin, and R. Reid, “Chronic cellular imaging of mouse visual cortex during operant behavior and passive viewing,” Front. Cell Neurosci. 4, 3 (2010). [PubMed]  

23. J. L. Chen, D. J. Margolis, A. Stankov, L. T. Sumanovski, B. L. Schneider, and F. Helmchen, “Pathway-specific reorganization of projection neurons in somatosensory cortex during learning,” Nat. Neurosci. 18, 1101–1108 (2015). [CrossRef]   [PubMed]  

24. M. Minderer, W. R. Liu, L. T. Sumanovski, S. Kugler, F. Helmchen, and D. J. Margolis, “Chronic imaging of cortical sensory map dynamics using a genetically encoded calcium indicator,” J. Physiol.-London 590, 99–107 (2012). [CrossRef]  

25. J. Reimer, E. Froudarakis, C. R. Cadwell, D. Yatsenko, G. H. Denfield, and A. S. Tolias, “Pupil fluctuations track fast switching of cortical states during quiet wakefulness,” Neuron 84, 355–362 (2014). [CrossRef]   [PubMed]  

26. M. J. McGinley, S. V. David, and D. A. McCormick, “Cortical membrane potential signature of optimal states for sensory signal detection,” Neuron 87, 179–192 (2015). [CrossRef]   [PubMed]  

27. M. Vinck, R. Batista-Brito, U. Knoblich, and J. A. Cardin, “Arousal and locomotion make distinct contributions to cortical activity patterns and visual encoding,” Neuron 86, 740–754 (2015). [CrossRef]   [PubMed]  

28. L. Zhu, C. R. Lee, D. J. Margolis, and L. Najafizadeh, “Probing the dynamics of spontaneous cortical activities via widefield Ca+2 imaging in GCaMP6 transgenic mice,” in Wavelets and Sparsity XVII (SPIE, 2017), p. 103940C1.

29. D. Shimaoka, K. D. Harris, and M. Carandini, “Effects of Arousal on Mouse Sensory Cortex Depend on Modality,” Cell Rep. 22, 3160–3167 (2018). [CrossRef]   [PubMed]  

30. N. Naseer and K.-S. Hong, “Classification of functional near-infrared spectroscopy signals corresponding to the right-and left-wrist motor imagery for development of a brain–computer interface,” Neurosci. Lett. 553, 84–89 (2013). [CrossRef]   [PubMed]  

31. S. J. Bensmaia and L. E. Miller, “Restoring sensorimotor function through intracortical interfaces: progress and looming challenges,” Nat. Rev. Neurosci. 15, 313–325 (2014). [CrossRef]   [PubMed]  

32. D. J. O’shea, E. Trautmann, C. Chandrasekaran, S. Stavisky, J. C. Kao, M. Sahani, S. Ryu, K. Deisseroth, and K. V. Shenoy, “The need for calcium imaging in nonhuman primates: New motor neuroscience and brain-machine interfaces,” Exp. Neurol. 287, 437–451 (2017). [CrossRef]  

33. N. Kanwisher, “Functional specificity in the human brain: a window into the functional architecture of the mind,” Proc. Natl. Acad. Sci. 107, 11163–11170 (2010). [CrossRef]   [PubMed]  

34. F. Hutzler, “Reverse inference is not a fallacy per se: Cognitive processes can be inferred from functional imaging data,” NeuroImage 84, 1061–1069 (2014). [CrossRef]  

35. S. Salsabilian, C. R. Lee, D. J. Margolis, and L. Najafizadeh, “Using connectivity to infer behavior from cortical activity recorded through widefield transcranial imaging,” in Biophotonics Congress: Biomedical Optics Congress 2018 (Microscopy/Translational/Brain/OTS), OSA Technical Digest (Optical Society of America, 2018), paper BTu2C.4.

36. B. Blankertz, C. Sannelli, S. Halder, E. M. Hammer, A. Kübler, K.-R. Müller, G. Curio, and T. Dickhaus, “Neurophysiological predictor of SMR-based BCI performance,” NeuroImage 51, 1303–1309 (2010). [CrossRef]   [PubMed]  

37. R. A. Poldrack, “Can cognitive processes be inferred from neuroimaging data?” Trends Cogn. Sci. 10, 59–63 (2006). [CrossRef]   [PubMed]  

38. J. F. Poulet and C. C. Petersen, “Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice,” Nature 454, 881–885 (2008). [CrossRef]   [PubMed]  

39. E. Eggermann, Y. Kremer, S. Crochet, and C. C. Petersen, “Cholinergic signals in mouse barrel cortex during active whisker sensing,” Cell Rep. 9, 1654–1660 (2014). [CrossRef]   [PubMed]  

40. K. D. Harris and A. Thiele, “Cortical state and attention,” Nat. Rev. Neurosci. 12, 509–523 (2011). [CrossRef]   [PubMed]  

41. I. Ferezou, S. Bolea, and C. C. Petersen, “Visualizing the cortical representation of whisker touch: voltage-sensitive dye imaging in freely moving mice,” Neuron 50, 617–629 (2006). [CrossRef]   [PubMed]  

42. L. Lacasa, B. Luque, F. Ballesteros, J. Luque, and J. C. Nuno, “From time series to complex networks: the visibility graph,” Proc. Natl. Acad. Sci. 105, 4972–4975 (2008). [CrossRef]   [PubMed]  

43. S. Musall, M. T. Kaufman, S. Gluf, and A. Churchland, “Movement-related activity dominates cortex during sensory-guided decision making,” bioRxiv p. 308288 (2018).

44. L. Zhu, C. R. Lee, D. J. Margolis, and L. Najafizadeh, “Predicting behavior from cortical activity recorded through widefield transcranial imaging,” in Conference on Lasers and Electro-Optics, OSA Technical Digest (online) (Optical Society of America, 2017), paper ATu3B.1.

45. C. R. Lee and D. J. Margolis, “Pupil dynamics reflect behavioral choice and learning in a go/nogo tactile decision-making task in mice,” Front. Behav. Neurosci. 10, 200 (2016). [CrossRef]   [PubMed]  

46. P. M. Knutsen, D. Derdikman, and E. Ahissar, “Tracking whisker and head movements in unrestrained behaving rodents,” J. Physiol. 93, 2294–2301 (2005).

47. W. E. Allen, I. V. Kauvar, M. Z. Chen, E. B. Richman, S. J. Yang, K. Chan, V. Gradinaru, B. E. Deverman, L. Luo, and K. Deisseroth, “Global Representations of Goal-Directed Behavior in Distinct Cell Types of Mouse Neocortex,” Neuron 94, 891–907 (2017). [CrossRef]   [PubMed]  

48. B. Luque, L. Lacasa, F. J. Ballesteros, and A. Robledo, “Analytical properties of horizontal visibility graphs in the Feigenbaum scenario,” Chaos 22, 013109 (2012). [CrossRef]   [PubMed]  

49. L. Lacasa, V. Nicosia, and V. Latora, “Network structure of multivariate time series,” Sci. Rep. 5, 15508 (2015). [CrossRef]   [PubMed]  

50. M. Stephen, C. Gu, and H. Yang, “Visibility graph based time series analysis,” PloS one 10, e0143015 (2015). [CrossRef]   [PubMed]  

51. G. Zhu, Y. Li, and P. Wen, “Analysing epileptic EEGs with a visibility graph algorithm,” in IEEE Int. Conf. on Biomed. Eng. and Inform. (BMEI) (IEEE, 2012), pp. 432–436.

52. C. Hao, Z. Chen, and Z. Zhao, “Analysis and prediction of epilepsy based on visibility graph,” in IEEE Int. Conf. on Inform. Sci. and Cont. Eng. (ICISCE) (IEEE, 2016), pp. 1271–1274.

53. Z. K. Gao, Q. Cai, Y. X. Yang, W. D. Dang, and S. S. Zhang, “Multiscale limited penetrable horizontal visibility graph for analyzing nonlinear time series,” Sci. Rep. 6, 35622 (2016). [CrossRef]   [PubMed]  

54. J. Wang, C. Yang, R. Wang, H. Yu, Y. Cao, and J. Liu, “Functional brain networks in Alzheimer’s disease: EEG analysis based on limited penetrable visibility graph and phase space method,” Physica A: Statistical Mechanics and its Applications 460, 174–187 (2016). [CrossRef]  

55. L. Zhu and L. Najafizadeh, “Temporal dynamics of fNIRS-recorded signals revealed via visibility graph,” in OSA Technical Digest (Optical Society of America, 2016), paper JW3A.53.

56. G. Zhu, Y. Li, P. P. Wen, and S. Wang, “Analysis of alcoholic EEG signals based on horizontal visibility graph entropy,” Brain Inform 1, 19–25 (2014). [CrossRef]   [PubMed]  

57. L. Lacasa, S. Sannino, S. Stramaglia, and D. Marinazzo, “Visibility graphs for fMRI data: multiplex temporal graphs and their modulations across resting state networks,” Network Neuroscience 3, 208–221 (2017).

58. M. Rubinov and O. Sporns, “Complex network measures of brain connectivity: uses and interpretations,” NeuroImage 52, 1059–1069 (2010). [CrossRef]  

59. R. V. Donner and J. F. Donges, “Visibility graph analysis of geophysical time series: Potentials and possible pitfalls,” Acta Geophysica 60, 589–623 (2012). [CrossRef]  

60. Y. Low, J. E. Gonzalez, A. Kyrola, D. Bickson, C. E. Guestrin, and J. Hellerstein, “GraphLab: A new framework for parallel machine learning,” arXiv preprint:1408.2041 (2014).

61. A. Subasi and E. Ercelebi, “Classification of eeg signals using neural network and logistic regression,” Computer Methods and Programs in Biomedicine 78, 87–99 (2005). [CrossRef]   [PubMed]  

62. N. D. Schiff, J. T. Giacino, K. Kalmar, J. D. Victor, K. Baker, M. Gerber, B. Fritz, B. Eisenberg, T. Biondi, J. O’Connor, E. J. Kobylarz, S. Farris, A. Machado, C. McCagg, F. Plum, J. J. Fins, and A. R. Rezai, “Behavioural improvements with thalamic stimulation after severe traumatic brain injury,” Nature 448, 600–603 (2007). [CrossRef]   [PubMed]  

63. K. R. Gray, P. Aljabar, R. A. Heckemann, A. Hammers, D. Rueckert, and I. Alzheimer’s Disease Neuroimaging, “Random forest-based similarity measures for multi-modal classification of Alzheimer’s disease,” NeuroImage 65, 167–175 (2013). [CrossRef]  

64. C. Donos, M. Dümpelmann, and A. Schulze-Bonhage, “Early seizure detection algorithm based on intracranial EEG and random forest classification,” Int. J. of Neur. Syst. 25, 1550023 (2015). [CrossRef]  

65. W. Chen, Y. Wang, G. Cao, G. Chen, and Q. Gu, “A random forest model based classification scheme for neonatal amplitude-integrated EEG,” Biomed. Eng. Online 13(Suppl 2), S4 (2014). [CrossRef]  

66. A. Page, C. Sagedy, E. Smith, N. Attaran, T. Oates, and T. Mohsenin, “A flexible multichannel EEG feature extractor and classifier for seizure detection,” IEEE Trans. on Cir. and Syst. II: Express Briefs 62, 109–113 (2015).

67. H. U. Amin, A. S. Malik, N. Kamel, and M. Hussain, “A novel approach based on data redundancy for feature extraction of EEG signals,” Brain Topogr. 29, 207–217 (2016). [CrossRef]  

68. W. A. Chaovalitwongse and R. C. Sachdeo, “On the time series K-nearest neighbor classification of abnormal brain activity,” IEEE Trans. on Syst. Man and Cyber. Part a-Systems and Humans 37, 1005–1016 (2007). [CrossRef]  

69. F. Pereira, T. Mitchell, and M. Botvinick, “Machine learning classifiers and fMRI: a tutorial overview,” NeuroImage 45, 199–209 (2009). [CrossRef]  

70. T. Fawcett, “An introduction to ROC analysis,” Pattern Recognit. Lett. 27, 861–874 (2006). [CrossRef]  

71. S. Sachidhanandam, V. Sreenivasan, A. Kyriakatos, Y. Kremer, and C. C. Petersen, “Membrane potential correlates of sensory perception in mouse barrel cortex,” Nat. Neurosci. 16, 1671–1677 (2013). [CrossRef]   [PubMed]  

72. S. J. Kayser, S. W. McNair, and C. Kayser, “Prestimulus influences on auditory perception from sensory representations and decision processes,” Proc. Natl. Acad. Sci. 113, 4842–4847 (2016). [CrossRef]   [PubMed]  

73. I. Carcea, M. N. Insanally, and R. C. Froemke, “Dynamics of auditory cortical activity during behavioural engagement and auditory perception,” Nat. Commun. 8, 14412 (2017). [CrossRef]   [PubMed]  

74. A. Kyriakatos, V. Sadashivaiah, Y. Zhang, A. Motta, M. Auffret, and C. C. Petersen, “Voltage-sensitive dye imaging of mouse neocortex during a whisker detection task,” Neurophotonics 4, 031204 (2017). [CrossRef]  

75. J. Reimer, M. J. McGinley, Y. Liu, C. Rodenkirch, Q. Wang, D. A. McCormick, and A. S. Tolias, “Pupil fluctuations track rapid changes in adrenergic and cholinergic activity in cortex,” Nat. Commun. 7, 13289 (2016). [CrossRef]   [PubMed]  

76. H. Dana, T.-W. Chen, A. Hu, B. C. Shields, C. Guo, L. L. Looger, D. S. Kim, and K. Svoboda, “Thy1-GCaMP6 transgenic mice for neuronal population imaging in vivo,” PloS one 9, e108697 (2014). [CrossRef]   [PubMed]  

77. J. B. Wekselblatt, E. D. Flister, D. M. Piscopo, and C. M. Niell, “Large-scale imaging of cortical dynamics during sensory perception and behavior,” J. Physiol. 115, 2852–2866 (2016).

78. H. B. He and E. A. Garcia, “Learning from imbalanced data,” IEEE Trans. on Knowledge and Data Eng. 21, 1263–1284 (2009). [CrossRef]  

79. J. Oñativia, S. R. Schultz, and P. L. Dragotti, “A finite rate of innovation algorithm for fast and accurate spike detection from two-photon calcium imaging,” J. Neural. Eng. 10, 046017 (2013). [CrossRef]   [PubMed]  

80. T. P. Patel, K. Man, B. L. Firestein, and D. F. Meaney, “Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging,” J. Neurosci. Methods 243, 26–38 (2015). [CrossRef]   [PubMed]  

81. J. Friedrich, P. Zhou, and L. Paninski, “Fast online deconvolution of calcium imaging data,” PLoS Comput Biol 13, e1005423 (2017). [CrossRef]   [PubMed]  

82. I. J. Park, Y. V. Bobkov, B. W. Ache, and J. C. Principe, “Quantifying bursting neuron activity from calcium signals using blind deconvolution,” J. Neurosci. Methods 218, 196–205 (2013). [CrossRef]  

83. J. T. Vogelstein, A. M. Packer, T. A. Machado, T. Sippy, B. Babadi, R. Yuste, and L. Paninski, “Fast nonnegative deconvolution for spike train inference from population calcium imaging,” J. Physiol. 104, 3691–3704 (2010).

84. E. A. Pnevmatikakis, D. Soudry, Y. Gao, T. A. Machado, J. Merel, D. Pfau, T. Reardon, Y. Mu, C. Lacefield, W. Yang, M. Ahrens, R. Bruno, T. Jessell, D. Peterka, R. Yuste, and L. Paninsk, “Simultaneous denoising, deconvolution, and demixing of calcium imaging data,” Neuron 89, 285–299 (2016). [CrossRef]   [PubMed]  

85. L. Theis, P. Berens, E. Froudarakis, J. Reimer, M. R. Rosón, T. Baden, T. Euler, A. S. Tolias, and M. Bethge, “Benchmarking spike rate inference in population calcium imaging,” Neuron 90, 471–482 (2016). [CrossRef]   [PubMed]  

86. E. A. Mukamel, A. Nimmerjahn, and M. J. Schnitzer, “Automated analysis of cellular signals from large-scale calcium imaging data,” Neuron 63, 747–760 (2009). [CrossRef]   [PubMed]  

87. V. Sreenivasan, V. Esmaeili, T. Kiritani, K. Galan, S. Crochet, and C. C. Petersen, “Movement initiation signals in mouse whisker motor cortex,” Neuron 92, 1368–1382 (2016). [CrossRef]   [PubMed]  

88. X.-W. Wang, D. Nie, and B.-L. Lu, “Emotional state classification from eeg data using machine learning approach,” Neurocomputing 129, 94–106 (2014). [CrossRef]  

89. J. Zhang, X. Li, S. T. Foldes, W. Wang, J. L. Collinger, D. J. Weber, and A. Bagić, “Decoding brain states based on magnetoencephalography from prespecified cortical regions,” IEEE Trans. on Biomed. Eng. 63, 30–42 (2016). [CrossRef]  

90. E. Bullmore and O. Sporns, “Complex brain networks: graph theoretical analysis of structural and functional systems,” Nat. Rev. Neurosci. 10, 186–198 (2009). [CrossRef]   [PubMed]  

Supplementary Material (2)

Visualization 1: The video shows cortical activity recorded through widefield calcium imaging. Frames corresponding to “AW” are identified by “W”, shown at the top left of frames.



Figures (9)

Fig. 1 Summary of the proposed analysis procedure.
Fig. 2
Fig. 2 a) Left: Illustration of the experimental setup used for widefield imaging of cortical activity of mice expressing GCaMP6f and simultaneous recording of whisker movement. Right, top: raw image of neocortical surface through transparent skull preparation. M1, S1, and V1 are schematically labeled. Asterisk indicates position of Bregma. Right, bottom: ROIs are superimposed on a map based on the Allen Institute common coordinate framework v3 of mouse cortex (brain-map.org; adapted from [43]). ROI: 1, Retrosplenial area, lateral agranular part (RSPagl); 2, Retrosplenial area, dorsal (RSPd); 3, 4, 9, Secondary motor area (MOs); 5, 7, 8, 10, Primary motor area (MOp); 6, Primary somatosensory area, mouth (SSp-m) / upper limb (SSp-ul); 11, 16, Primary somatosensory area, lower limb (SSp-ll); 12, SSp-ul; 13, Primary somatosensory area, nose (SSp-n); 14, 20, Primary somatosensory area, barrel field (SSp-bfd); 15, SSp-bfd / Primary somatosensory area, unassigned (SSp-un); 17, Retrosplenial area, lateral agranular part (RSPagl); 18, Anterior visual area (VISa) / Primary somatosensory area, trunk (SSp-tr); 19, VISa / SSp-tr / SSp-bfd; 21, Supplementary somatosensory area (SSs); 22, Auditory area (AUD); 23, Temporal association areas (TEa); 24, SSp-bfd / Rostrolateral visual area (VISrl); 25, 29, 30, Primary visual area (VISp); 26, Anteromedial visual area (VISam); 27, RSPagl / RSPd; 28, Posteromedial visual area (VISpm). b) A sample 20 s movie obtained during a block. Frames corresponding to “AW” are identified by “W”, shown on the top left of frames (see Visualization 1).
Fig. 3
Fig. 3 Experimental protocol that was followed for each subject. Each subject participated in two sessions per day. In each session, spontaneous activity was acquired for sixteen 20.47 s blocks, with 20 s of rest between blocks.
Fig. 4
Fig. 4 Sample images and time series recorded from block #1 of subject #1. (a)–(b) baseline-corrected images, (c) time series corresponding to ROI-6 and ROI-27, (d) measured whisker angle recorded from the same block, and (e) standard deviation-based time series of the signal in (d), where the threshold level used for labeling AW and NW conditions is shown as a red line.
Fig. 5
Fig. 5 Preprocessed calcium signals of recording block #1 from subject #1 for ROI-6 (a), ROI-8 (f), ROI-30 (k), and ROI-19 (p). For each ROI, 2 s segments corresponding to AW (shown in red in (b), (g), (l), and (q)) and NW (shown in blue in (c), (h), (m), and (r)) conditions, as determined from whisker movement recordings, are shown. The adjacency matrices of the 2 s AW segments are shown in (d), (i), (n), and (s), and those of the 2 s NW segments in (e), (j), (o), and (t). Measures extracted from the VG of the 2 s AW time series (red) and of the 2 s NW time series (blue) are shown in (u) for each ROI.
Fig. 6
Fig. 6 Color-coded graph measures for all ROIs as a function of time during a recording block. (a) Edge density (D), (b) Averaged clustering coefficient (C), and (c) Characteristic pathlength (L). (d) Whisker movement recording obtained simultaneously in the same block.
Fig. 7
Fig. 7 Classification results when using kNN as classifier.
Fig. 8
Fig. 8 Classification results when using regularized logistic regression (LR) as classifier.
Fig. 9
Fig. 9 Classification results when using random forest (RF) as classifier.

Tables (4)

Tables Icon

Table 1 Number of blocks and number of AW/NW segments for each subject, when the window length of 2 s with a step size of 0.5 s is used.

Tables Icon

Table 2 Classification results for best sensitivity obtained for each subject when using kNN, regularized logistic regression (LR), and random forest (RF) as classifier. Features, window lengths (w), and related parameters from which the optimum results have been obtained are also listed (SS is short for subsample). Note that “+” in the “Feature” rows represent using multiparametric approach for performing the classification.

Tables Icon

Table 3 Classification performance using unified parameters across subjects and classifiers. D + C is used as the feature, and w = 200 points is used as the window length for extracting features in all cases.

Tables Icon

Table 4 Performance comparison of classification experiments based on i) VG-based feature extraction from all ROIs, ii) spike-based feature extraction from all ROIs, iii) variance-based feature extraction from all ROIs, iv) VG-based feature extraction only from ROI-20, and v) VG-based feature extraction from ROIs 25–30.

Equations (5)


$$x(p) < x(j) + \left[ x(i) - x(j) \right] \frac{t(j) - t(p)}{t(j) - t(i)},$$
$$D = \frac{1}{N(N-1)} \sum_{i,j} a_{i,j}.$$
$$C = \frac{1}{N} \sum_{i=1}^{N} C_i = \frac{1}{N} \sum_{i,j,l} \frac{a_{ij}\, a_{il}\, a_{jl}}{K_i (K_i - 1)},$$
$$L = \frac{1}{N(N-1)} \sum_{i,j} l_{ij},$$
$$f : \text{VG measures}(t_0, t_0 + w) \rightarrow \{\text{AW}, \text{NW}\},$$
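As a concrete illustration of these definitions (not the authors' code), the sketch below builds the natural visibility graph of a uniformly sampled series, where sample indices stand in for the times t(i), and computes the three graph measures used in the paper: edge density D, averaged clustering coefficient C, and characteristic pathlength L. It is a minimal stdlib-only Python implementation on a toy series, not a reproduction of the paper's processing pipeline.

```python
from collections import deque

def visibility_graph(x):
    """Natural visibility graph of a uniformly sampled time series.

    Samples i < j are linked iff every intermediate sample p satisfies
    x[p] < x[j] + (x[i] - x[j]) * (j - p) / (j - i),
    i.e. the straight line between (i, x[i]) and (j, x[j]) is unobstructed.
    """
    n = len(x)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if all(x[p] < x[j] + (x[i] - x[j]) * (j - p) / (j - i)
                   for p in range(i + 1, j)):
                adj[i].add(j)
                adj[j].add(i)
    return adj

def edge_density(adj):
    # D: number of ordered linked pairs over N(N-1)
    n = len(adj)
    return sum(len(nb) for nb in adj) / (n * (n - 1))

def avg_clustering(adj):
    # C_i: ordered neighbor pairs (j, l) that are themselves linked,
    # normalized by K_i(K_i - 1); nodes with degree < 2 contribute 0.
    n = len(adj)
    total = 0.0
    for i in range(n):
        k = len(adj[i])
        if k < 2:
            continue
        links = sum(1 for j in adj[i] for l in adj[i]
                    if j != l and l in adj[j])
        total += links / (k * (k - 1))
    return total / n

def char_pathlength(adj):
    # L: mean shortest-path length over ordered node pairs (BFS per node);
    # a VG is always connected, since temporally adjacent samples are linked.
    n = len(adj)
    total = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

# Toy 5-sample series; in the paper these measures are computed per ROI
# on sliding windows of the calcium signal.
g = visibility_graph([1, 2, 1, 3, 1])
print(edge_density(g), avg_clustering(g), char_pathlength(g))
```

Sliding such a window along each ROI's signal yields the feature time series (D, C, L) that are color-coded in Fig. 6 and fed to the classifiers.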