
Passive BCI based on drowsiness detection: an fNIRS study

Open Access

Abstract

We use functional near-infrared spectroscopy (fNIRS) to discriminate between the alert and drowsy states for a passive brain-computer interface (BCI). The passive brain signals for the drowsy state are acquired from the prefrontal and dorsolateral prefrontal cortices. The experiment is performed on 13 healthy subjects using a driving simulator, and their brain activity is recorded using a continuous-wave fNIRS system. Linear discriminant analysis (LDA) is employed for training and testing, using the data from the prefrontal and the left and right dorsolateral prefrontal regions. For classification, eight features are tested: mean oxyhemoglobin, mean deoxyhemoglobin, skewness, kurtosis, signal slope, number of peaks, sum of peaks, and signal peak, in 0~5, 0~10, and 0~15 second time windows, respectively. The results show that the best classification performance is achieved using mean oxyhemoglobin, the signal peak, and the sum of peaks as features. The average accuracies in the right dorsolateral prefrontal cortex (83.1, 83.4 and 84.9% in the 0~5, 0~10 and 0~15 second time windows, respectively) show that the proposed method is effective for drowsiness detection in a passive BCI.

© 2015 Optical Society of America

1. Introduction

Non-invasive brain-computer interface (BCI) methods measure brain activities either by detecting the electrophysiological signals [1–3] or by determining the hemodynamic responses [4–10]. The electrophysiological phenomena are generated by neuronal firing as a result of brain tasks [1]. The hemodynamic response arises because blood delivers glucose to active neurons at a greater rate than to inactive neurons [9, 10]. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) are the leading non-invasive BCI modalities in terms of cost and portability [1, 7]. Recently, researchers have combined these two modalities for improved BCI performance and better control command signal generation [11, 12].

The body of BCI research can be categorized into active, reactive and passive brain tasks [13]. In an active BCI, the brain signal is made directly and intentionally by the user, which is independent of external events: Motion intention, motor imageries, and mental tasks fall into this category [11, 14–21]. In a reactive BCI, the brain signal is generated in reaction to external stimulation: All audio, video and pain stimuli generate a reactive signal in the brain that can be used for reactive BCI [3, 22–26]. In a passive BCI, an arbitrary activity generated without any objective control is used: Fatigue estimation [27], for example, falls under the passive BCI domain. Whereas the active and reactive types of BCI are based on the users’ will for generation of commands that are used in control applications [3, 7, 11], the passive type of BCI is an effective tool for monitoring brain activity changes during normal tasks [13].

Drowsiness (a passive brain activity caused by fatigue or sleep deprivation) is a significant contributor to traffic accidents [28]. The drowsiness activity has been found to occur in the prefrontal cortex (PFC) while driving a vehicle [29]. Also, increased brain activity has been reported during the sleep transition between wakefulness and the rapid eye movement state [30]. Thus, developing an effective system based on the drowsy state of the driver is an essential prerequisite for real-life safe driving. Previous studies have used eye-blinking [31] and head-nodding-related tasks [32] to detect drowsiness. However, in those studies, drowsiness was sometimes incorrectly detected due to false detection of visual attributes [33]. In the last decade, brain activity has been studied using several functional brain imaging modalities for steering, vigilance and drowsy states [34–44]. These studies showed the possibility of detection of vigilance [35], fatigue [42, 43] and drowsiness [41, 44, 45]. Mostly, they considered the neural correlates necessary for detection of drowsiness [35, 40–46]. However, to avoid false alerts, the characteristics of the hemodynamic response due to drowsiness, along with the neuronal response, should also be studied.

In this study, we investigate the feasibility of detecting the drowsy state via the hemodynamic brain activity for a passive BCI. The brain signals are measured from the prefrontal and dorsolateral prefrontal cortex regions. fNIRS is used to detect the drowsiness activity of the subjects in three different time windows for classification using eight different feature sets. Mean oxyhemoglobin, signal peak and sum of peaks are found to be the three features that allow the best classification performance. To the best of our knowledge, this is the first fNIRS investigation that has targeted the right dorsolateral prefrontal cortex to discriminate the unintentional drowsy state from the active / alert states using three features (i.e., signal mean, signal peak, and sum of peaks) for a passive BCI, which provide good classification accuracies.

2. Methods

2.1. Experimental procedure

A total of 13 healthy adults were recruited (all male; mean age: 28.5 ± 4.8 years). Two of the participants were left-handed. None of the subjects had participated in any drowsy-state detection experiment previously, and none reported a history of psychiatric, neurological or visual disorders. All had normal or corrected-to-normal vision, and all provided verbal consent after having been informed in detail about the experimental procedure. The experiment was conducted in accordance with the latest Declaration of Helsinki.

Prior to the experiment, the subjects were sleep-deprived for almost ten hours: they stayed awake through the night, and the experiment was conducted in the morning. Each subject was seated in a comfortable chair and asked to relax, with a screen placed in front of him at a distance of about 70 cm. The subjects were asked to drive a car in a virtual environment for an hour using a driving simulator (City Car Driving) while keeping the speed of the car within 40~60 km/hr. To avoid motion artifacts, the subjects were asked to keep their head and body movement to a minimum. A five-minute pretrial was performed to familiarize the subjects with the simulator and to adjust the baseline of their brain signals. Each subject drove the car for 30 ± 5 minutes while being visually inspected. The car was driven on a given track with medium traffic density and light pedestrian movement.

Three different criteria were used to determine the drowsy state in the data. The first criterion was a marker recorded from the driving-simulator data, in which the subjects were asked to maintain the car at a steady speed: if the subject committed a traffic violation or lost control of the car, a marker was placed at the time of the mistake. The second criterion was visual inspection of the subject during the driving session; a subject typically blinks rapidly in the drowsy state (see [30]), so the times at which the subject closed his eyes for more than 10 seconds or blinked rapidly were noted. The time intervals from the first and second criteria were compared to set the drowsiness labels. The third criterion was a questionnaire: at the end of the experiment, the subjects were asked about their mental condition during the driving session, which was used to assess their state of mind when the mistakes were made. The three criteria together determined the drowsy-state labels in the data. The experiment was stopped if the subject was unable to drive, or requested a break, due to excessive fatigue; one experiment was halted at the subject's request, and that subject's data were discarded. Figure 1 depicts a block diagram of the experimental protocol.

Fig. 1 The experimental scheme used for drowsiness detection.

2.2. Sensor configuration

Figure 2 shows the optode configuration used in this work. The fNIRS signals were acquired using 7 sources and 16 detectors forming 28 source-detector channels placed over the PFC and dorsolateral prefrontal cortex (DPFC) according to the International 10-20 System [14, 30]. The data from the right DPFC were recorded through channels 1~8, and the data from the left DPFC through channels 21~28. Channels 1~8 are labeled as region A, channels 9~20 as region B, and channels 21~28 as region C.

Fig. 2 The placement of optodes over the prefrontal and dorsolateral prefrontal cortex regions.

2.3. Signal acquisition and processing

The brain signals were recorded using a continuous-wave imaging system (DYNOT, NIRx Medical Technologies, USA) at wavelengths of 760 and 830 nm. The data were obtained at a sampling rate of 1.81 Hz. Gaussian filtering was used to remove the respiratory, heartbeat and other motion artifacts from the data [47–51]. The modified Beer-Lambert law [52] was used to convert the raw intensity values to the oxygenated and deoxygenated hemoglobin concentration changes (i.e., ∆HbO and ∆HbR). The modified Beer-Lambert law is given as

$$A(t;\lambda)=\ln\frac{I_{in}(\lambda)}{I_{out}(t;\lambda)}=\alpha(\lambda)\times c(\lambda)\times l\times d(\lambda)+\eta, \tag{1}$$

$$\begin{bmatrix}\Delta c_{HbO}(t)\\ \Delta c_{HbR}(t)\end{bmatrix}=\begin{bmatrix}\alpha_{HbO}(\lambda_1) & \alpha_{HbR}(\lambda_1)\\ \alpha_{HbO}(\lambda_2) & \alpha_{HbR}(\lambda_2)\end{bmatrix}^{-1}\begin{bmatrix}\Delta A(t;\lambda_1)\\ \Delta A(t;\lambda_2)\end{bmatrix}\frac{1}{l\times d(\lambda)}, \tag{2}$$

where A is the absorbance of light (optical density), I_in is the incident intensity of light, I_out is the detected intensity of light, α is the specific extinction coefficient in μM⁻¹·cm⁻¹, c is the absorber concentration in μM, l is the distance between the source and detector in cm, d is the differential path-length factor, and η is the loss of light due to scattering.
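As a sketch of how the modified Beer-Lambert law is applied in practice, the following converts two-wavelength intensity data to ∆HbO/∆HbR. The extinction coefficients, source-detector distance, and DPF values are illustrative placeholders, not the values used in the study:

```python
import numpy as np

def mbll_concentrations(I_in, I_out, eps, l=3.0, dpf=(6.0, 6.0)):
    """Convert detected intensities at two wavelengths to (dHbO, dHbR).

    I_in  : incident intensities per wavelength, shape (2,)
    I_out : detected intensities, shape (T, 2) -- time x wavelength
    eps   : 2x2 extinction-coefficient matrix [uM^-1 cm^-1],
            rows = wavelengths, columns = (HbO, HbR)
    l     : source-detector distance [cm]      (illustrative default)
    dpf   : differential path-length factor per wavelength (illustrative)
    """
    # Optical density change: dA(t; lambda) = ln(I_in / I_out)
    dA = np.log(np.asarray(I_in) / np.asarray(I_out))      # shape (T, 2)
    # Divide each wavelength by its effective path length l * d(lambda)
    dA_scaled = dA / (l * np.asarray(dpf))                 # shape (T, 2)
    # Invert the extinction matrix: [dHbO, dHbR]^T = eps^-1 @ dA_scaled^T
    return dA_scaled @ np.linalg.inv(eps).T                # (T, 2) = (dHbO, dHbR)
```

Forward-modeling known concentration changes and inverting them recovers the original values, which is a quick sanity check on the matrix inversion.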

2.4. Feature extraction and classification

One important aspect of feature extraction is determining the proper window length, i.e., the amount of data in time most appropriate for extracting the features of the drowsy and alert states. In this work, three different time windows (i.e., 0~5, 0~10, and 0~15 seconds) are investigated. For each time window, eight different features (i.e., mean ∆HbO, mean ∆HbR, skewness, kurtosis, slope, number of peaks, sum of peaks, and signal peak) are computed using the averaged signal in each region. The signal means of ∆HbO and ∆HbR are calculated as follows.

$$M_k=\frac{1}{N}\sum_{j=1}^{N}X_j, \tag{3}$$

where M is the mean value, the subscript k denotes the window size (i.e., 0~5, 0~10, or 0~15 sec), N is the number of observations in the window, and X_j represents the ∆HbO or ∆HbR data in the given window. The skewness is computed as follows.
$$\mathrm{skew}_k=\frac{E\left[(X-M_k)^3\right]}{\sigma^3}, \tag{4}$$

where skew is the skewness, σ is the standard deviation of X in a single time window, and E denotes the expected value of X. The kurtosis is computed as follows.

$$\mathrm{kurt}_k=\frac{E\left[(X-M_k)^4\right]}{\sigma^4}, \tag{5}$$
where kurt is the kurtosis. Once a window size is selected, the signal slope (SS) is calculated using the polyfit function in MATLAB 8.1.0 (MathWorks, USA), the number of peaks (NoP) is gauged by counting the total number of local maxima of the averaged ∆HbO signals, the sum of peaks (SoP) is computed by taking the sum of the maxima values, the signal peak (P) is estimated using the MATLAB max function, and the frequency of peaks (f) is determined by dividing NoP by the window size. It is remarked that NoP and f give the same information. All these features have been calculated for regions A, B and C separately.
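Under the definitions above, the per-window features can be computed as in the following sketch. The study used MATLAB's polyfit and max, so the NumPy equivalents and the simple local-maxima rule here are assumptions:

```python
import numpy as np

def window_features(x, fs=1.81):
    """Features of one averaged dHbO (or dHbR) window x (1-D array)."""
    x = np.asarray(x, dtype=float)
    m = x.mean()                                   # signal mean
    s = x.std()
    skew = np.mean((x - m) ** 3) / s ** 3          # third standardized moment
    kurt = np.mean((x - m) ** 4) / s ** 4          # fourth standardized moment
    t = np.arange(len(x)) / fs                     # time axis at 1.81 Hz
    slope = np.polyfit(t, x, 1)[0]                 # signal slope (linear fit)
    # Local maxima: samples strictly greater than both neighbors
    idx = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    nop = len(idx)                                 # number of peaks
    sop = float(x[idx].sum()) if idx else 0.0      # sum of peaks
    peak = x.max()                                 # signal peak
    return {"mean": m, "skew": skew, "kurt": kurt, "slope": slope,
            "nop": nop, "sop": sop, "peak": peak}
```

The frequency of peaks would be `nop` divided by the window length, which is why it adds no information beyond `nop`.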

After the eight features discussed above are calculated for each time window, their values are then normalized and rescaled between 0 and 1 by the following equation.

$$a'=\frac{a-\min a}{\max a-\min a}, \tag{6}$$

where a ∈ Rⁿ represents the feature values, a' is the rescaled value between 0 and 1, max a denotes the largest value, and min a indicates the smallest value. These normalized feature values are used in a two-class classifier for training and testing the data by using linear discriminant analysis (LDA) to find the optimal separation between the drowsy and alert states [53].
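The min-max rescaling above amounts to a one-liner:

```python
import numpy as np

def rescale(a):
    """Rescale a feature vector a to [0, 1] via (a - min a)/(max a - min a)."""
    a = np.asarray(a, dtype=float)
    return (a - a.min()) / (a.max() - a.min())
```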

Let x_l ∈ R² denote the samples, μ_i the sample mean of class i (where i denotes the classification class: drowsy or alert), and μ the total mean over all the samples. That is,

$$\mu_i=\frac{1}{n_i}\sum_{x_l\in\,\mathrm{class}\,i}x_l,\qquad \mu=\frac{1}{n}\sum_{l}x_l, \tag{7}$$

where n_i is the number of samples of class i, and n is the total number of samples. The optimal projection matrix V for the LDA maximizes the following Fisher's criterion:

$$J(V)=\frac{\det(V^{T}S_{B}V)}{\det(V^{T}S_{W}V)}, \tag{8}$$

where S_B and S_W are the between-class scatter matrix and the within-class scatter matrix, respectively, given by

$$S_{B}=\sum_{i=1}^{m}n_i(\mu_i-\mu)(\mu_i-\mu)^{T}, \tag{9}$$

$$S_{W}=\sum_{i=1}^{m}\sum_{x_l\in\,\mathrm{class}\,i}(x_l-\mu_i)(x_l-\mu_i)^{T}, \tag{10}$$

where m is the total number of classes. Equation (8) was treated as an eigenvalue problem to obtain the optimal vector V corresponding to the largest eigenvalue. For training and testing of the data, and to estimate the performance of the classifier, the 10-fold cross-validation method of [54] was used.
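A compact sketch of the two-class Fisher LDA described above follows: the scatter matrices, the generalized eigenvalue problem, and a nearest-projected-mean decision rule. The decision rule and the toy data in the test are assumptions, since the paper does not spell out the decision stage:

```python
import numpy as np

def fisher_lda(X, y):
    """Fisher LDA direction for a two-class problem.

    X : (n, d) feature matrix, y : (n,) labels in {0, 1}.
    Returns the unit vector maximizing the between/within scatter ratio.
    """
    mu = X.mean(axis=0)                         # total mean
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)                    # class mean
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                # within-class scatter
    # Generalized eigenvalue problem: Sb v = lambda Sw v
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / np.linalg.norm(v)

def classify(X_train, y_train, X_test):
    """Project onto the LDA direction; assign the nearest projected class mean."""
    v = fisher_lda(X_train, y_train)
    m0 = (X_train[y_train == 0] @ v).mean()
    m1 = (X_train[y_train == 1] @ v).mean()
    z = X_test @ v
    return (np.abs(z - m1) < np.abs(z - m0)).astype(int)
```

For a real evaluation, this classifier would be wrapped in the 10-fold cross-validation loop mentioned above rather than tested on its own training data.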

3. Results

Example signals of ∆HbO and ∆HbR in regions A, B and C are shown in Fig. 3. The data have been labelled into drowsy and alert state using the experimental procedure criteria (see Section 2.1). The first task is to determine the most active region among the three regions during the drowsy state since it is beneficial to use a small region for BCI for drowsiness detection (instead of the entire PFC).

Fig. 3 ∆HbO and ∆HbR changes in the prefrontal and dorsolateral prefrontal brain regions (Subject 3): Region A consists of channels 1~8, region B channels 9~20, and region C channels 21~28.

The data are segmented into windows, and each window is labelled as a trial. For a 30 min data set cut into 0~5 sec windows, 360 values (i.e., 30 × 60 / 5) can be computed for each feature; similarly, 180 and 120 feature values result if the 0~10 and 0~15 sec window sizes are chosen. To label the drowsy status of each channel (i.e., to identify the most observable features of the drowsiness activity), the feature values during the drowsy period are averaged; the same procedure is carried out for the alert period to label the alert status. Tables 1, 2, and 3 compare the eight features in the drowsy and alert states in region A for the three time windows, respectively.
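The windowing arithmetic above (a 30-minute session divided into non-overlapping windows) can be sketched as follows; the function name is illustrative:

```python
def n_windows(duration_min, win_s):
    """Number of non-overlapping feature windows (trials) in a session."""
    return duration_min * 60 // win_s

# A 30-minute data set yields 360, 180, and 120 trials
# for the 5, 10, and 15 sec windows, respectively.
for win_s in (5, 10, 15):
    print(win_s, n_windows(30, win_s))
```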

Table 1. Eight features in region A (before normalization): 0~5 sec window

Table 2. Eight features in region A (before normalization): 0~10 sec window

Table 3. Eight features in region A (before normalization): 0~15 sec window

It is observed from Tables 1-3 that the ∆HbO mean, peak value, sum of peaks, and number of peaks are the most distinguishable of the eight features in each channel of region A. For regions B and C, instead of showing the entire data as above (due to space limitations), the feature value in the alert state is subtracted from that in the drowsy state to show the difference between the two states. Tables 4 and 5 show the differences of the feature values between the drowsy and alert states in regions B and C, respectively. As shown in Tables 4 and 5, the difference between the drowsy and alert states in regions B and C is not significant compared to region A. Thus, it is concluded that region A is more active during the drowsy state than regions B and C.

Table 4. Difference of the feature values between the drowsy and alert states (region B, 5 sec window)

Table 5. Difference of the feature values between the drowsy and alert states (region C, 5 sec window)

Now, the spatial averaging of the signals in each region (i.e., A, B and C) is carried out: Eight channels from A, twelve channels from B, and another eight channels from C have been averaged to get Fig. 4. The spatial average also shows the fact that drowsiness is more prominent in region A. Although there are some variations in regions B and C corresponding to the drowsy state, the most significant changes are observed in region A. Figure 4 shows the hemodynamic changes in the three regions of Subject 3.

Fig. 4 Comparison of the regional averages in A, B, and C showing the drowsy and alert states (Subject 3).

In order to verify our observation that the drowsiness activity appears more significantly in region A, we compared the brain signal data of four subjects (Subjects 1, 2, 4 and 6), as shown in Fig. 5. For each subject, the spatial average of channels 1~8 was taken to obtain the results. The "Drowsy state" markers in Fig. 5 are placed using the experimental procedure criteria (see Section 2.1). It can be seen that the drowsy period may vary depending on the subject's physiological condition.

Fig. 5 HbX corresponding to the drowsy state (Subs. 1, 2, 4 and 6; see Fig. 4 for Sub. 3, Sub. 5 is omitted)

Figure 6 depicts 28 2-D feature spaces using combinations of two features (Subject 2). The 360 data points in Fig. 6 were generated by cutting the drowsy and alert periods into 5 sec windows (i.e., the number of data points depends on the window size). It is clear that the mean ∆HbO and the signal peaks provide the best data separation (see Tables 6 and 7).

Fig. 6 28 2-class feature spaces combining the mean ∆HbO, mean ∆HbR, skewness, kurtosis, slope, number of peaks, sum of peaks and signal peak for separating the drowsy and non-drowsy states: The red triangle represents the non-drowsy (alert) state and the blue circle represents the drowsy state (Subject 2, region A, 0~5 sec time window).

Table 6. Accuracies obtained by a combination of two features (0~15 sec window, Subject 2)

Table 7. Averaged accuracies over all the subjects (0~15 sec window)

Table 8 tabulates the classification accuracies of the alert and drowsy states using only the best-performing features (i.e., the mean ∆HbO, signal peak, and sum of peaks). Figure 7(a) plots the average classification accuracy for each time window in the three segmented brain regions. It shows that the average accuracy in region A is higher than in the others. Whereas the average accuracy in the 0~5 sec time window is lower than that in the 0~15 sec time window, the difference is small, and the average accuracy is higher than 70%; thus the 0~5 sec time window can be considered the most suitable of the three candidates for BCI. Table 9 compares the %-accuracy and the computation time of two classification methods: LDA and support vector machines (SVM). It is noted that the SVM classifier improves the classification accuracy by 1.9% over the LDA classifier; however, its computation time is almost 1 sec.

Table 8. Classification accuracies (%) in three different brain regions

Fig. 7 (a) The overall average of the classification accuracies over 13 subjects in different time windows; (b) the individual average accuracies and standard deviations of 13 subjects in regions A, B, and C; (c) the channel-wise number of occurrence of the drowsy state over 13 subjects (i.e., four subjects showed the drowsy state in Ch. 1).

Table 9. Performance comparison of two classical classifiers (for region A)

Figure 7(b) shows the mean and standard deviation for each subject in the three segmented regions. The results were obtained by taking the average of the three time windows for each subject in regions A, B and C, respectively. Overall, region A can be deemed to be better suited for drowsiness activity detection. Figure 7(c) plots the data variations in each channel (over all the subjects); those channels with higher signal variation are found in region A.

4. Discussion

According to a previous study on vigilance using NIRS [55], a significant peak occurs in the event-related hemodynamic response between 5 and 8 seconds of reaction time. In the present study, the 0~5 second window could detect the vigilance activity, thus reducing the time necessary for generating a warning signal command. Also, it was observed that the brain activity of a drowsy person shows a ∆HbO increase in the PFC region. The opposite (i.e., a low mean ∆HbO in the drowsy state) was reported in previous work [56] that analyzed the reduction in brain activity during non-rapid eye movement (NREM) sleep compared with wakefulness. A point to be noted here is that none of the participants went into NREM sleep during the experiment; instead, they focused on steering the car. This escalation in ∆HbO is the result of the increased attention level devoted to driving. Moreover, the subjects reported that they had to focus more when making turns. Thus, it can be deduced that a drowsy subject requires increased activity to focus on and monitor the surroundings; furthermore, the increase in the ∆HbO level suggests that the results are consistent with other literature [30, 55, 57].

The most common features used for fNIRS are the means of ∆HbO/∆HbR and the signal slope [7, 58, 59]. These features are used for active-type BCI tasks, where the stimuli are given for a fixed time interval followed by a resting period. The present study focused on detecting a passive activity using fNIRS. Since the passive activity of drowsiness is an arbitrary activity generated without any subjective control [13], the drowsy data showed a significant increase in ∆HbO as the subject focused more on driving in the drowsy condition. There were no consistent trends in the drowsy state (see [57]); therefore, the signal slope, kurtosis and skewness are not well differentiated. The mean ∆HbO has a higher value in the drowsy state than in the alert state. Also, the observed peak values of ∆HbO are higher during the drowsy state, and the sum of peaks in each time window is higher than in the alert state (see Tables 1, 2, 3, 4, and 5). Therefore, the best features for the passive activity are the mean of ∆HbO and the peak values; the sum of peaks can also assist in increasing the accuracy, as it is also discriminated during the drowsy state. We did not use the frequency of peaks as a feature for classification: since it is calculated by dividing the number of peaks by the window size, the information is redundant for the classifier.

Although there were variations in the hemodynamic responses of the subjects due to trial-to-trial variability [60], the average classification results for the right and left PFCs definitively established that the right region is more active. A similar result was reported in an fMRI study [61], where an increased cerebral blood flow was observed in the right frontal lobe when sleep preceded self-awakening. It should also be noted that the participants recruited for the present experiment were not professional drivers, and had only limited experience in driving a vehicle. The results might differ when using the brain activity of professional drivers.

This study showed that classification of the drowsy and alert states over the right DPFC yields higher classification results than over the PFC and left DPFC regions. The average accuracy across the PFC varied between 84.9% (right, 0~15 second window) and 64.4% (left, 0~5 second window). The significance of the obtained accuracies was computed by t-test: the accuracy obtained in region A was compared with those obtained in regions B and C, respectively. The p-values for region A vs. region B and region C in the 0~5 second window were 0.0001; for the 0~10 second window, the p-values also were 0.0001. This showed that the results obtained are significant, and also that channels 1~8 are more active in the drowsy state than the other channels. Additionally, there were no significant variations in the average accuracies among the three time windows (see Fig. 7(a)); thus the minimum time window, 0~5 seconds, can be used for drowsiness detection using fNIRS.
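The paired t-test comparing region-wise accuracies can be sketched as below; the per-subject accuracy values in the usage example are hypothetical, for illustration only:

```python
import math

def paired_t(acc_a, acc_b):
    """t statistic of a paired t-test on per-subject accuracies (%)."""
    d = [a - b for a, b in zip(acc_a, acc_b)]          # per-subject differences
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)    # sample variance
    return mean / math.sqrt(var / n)                   # compare with t_{n-1}
```

The returned statistic is compared against the t distribution with n − 1 degrees of freedom to obtain the p-value.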

Previous EEG studies have shown the feasibility of detecting driver vigilance and fatigue using neural correlates [35, 42–44]. In an EEG study [41], 19 features were used to detect the drowsy state; seven parameters were trained using a neural network classifier, achieving 83.6% drowsiness-detection accuracy. In the current work, the SVM classifier was able to achieve 87.3% accuracy using only 3 features. Another study [44] using a vision-based method (i.e., eyelid closure degree, ECD) combined with EEG demonstrated 87.5% and 70% accuracies for males and females, respectively. Also, in [45], EEG was combined with electrooculography (EOG), achieving 89% detection accuracy. Though the accuracies using ECD and EOG were higher than that of the current method, the results of the current study can be further improved by combining it with EEG, EOG, and/or an eye-tracking system. Moreover, to the best of our knowledge, this is the first passive fNIRS-BCI study to classify the alert and drowsy states using the hemodynamic response, and the first work to use spatial filtering by segmenting the prefrontal brain region to identify the region of interest for drowsiness detection. The results using the eight features are significant for a passive BCI, and the features providing the best performance for fNIRS signals in the drowsy state have been selected. However, the command is generated in a minimum 5 second window; this delay can be further reduced to within 2 seconds using the initial dip as a feature [7].

5. Conclusions

This study investigated the feasibility of detecting the drowsy state using functional near-infrared spectroscopy (fNIRS) for a passive brain-computer interface (BCI). Drowsiness was detected by monitoring the brain activity during a driving task using eight features and three time windows. The mean of oxyhemoglobin, the signal peak, and the sum of peaks were found to be best suited for classification of the drowsy state. The classification results showed that the right dorsolateral prefrontal region is more active during the drowsy condition while driving.

Acknowledgment

This work was supported by the National Research Foundation of Korea under the Ministry of Science, ICT and Future Planning, Korea (grant no. NRF-2014-R1A2A1A10049727).

References and links

1. L. F. Nicolas-Alonso and J. Gomez-Gil, “Brain computer interfaces, a review,” Sensors (Basel) 12(2), 1211–1279 (2012). [CrossRef]   [PubMed]  

2. S. Coyle, T. Ward, and C. Markham, “Brain-computer interfaces: A review,” Interdiscip. Sci. Rev. 28(2), 112–118 (2003). [CrossRef]  

3. A. Turnip, K.-S. Hong, and M.-Y. Jeong, “Real-time feature extraction of P300 component using adaptive nonlinear principal component analysis,” Biomed. Eng. Online 10(1), 83 (2011). [CrossRef]   [PubMed]  

4. N. K. Logothetis, “What we can do and what we cannot do with fMRI,” Nature 453(7197), 869–878 (2008). [CrossRef]   [PubMed]  

5. M. P. van den Heuvel and H. E. Hulshoff Pol, “Exploring the brain network: A review on resting-state fMRI functional connectivity,” Eur. Neuropsychopharmacol. 20(8), 519–534 (2010). [CrossRef]   [PubMed]  

6. M. Welvaert and Y. Rosseel, “A review of fMRI simulation studies,” PLoS One 9(7), e101953 (2014). [CrossRef]   [PubMed]  

7. N. Naseer and K.-S. Hong, “fNIRS-based brain-computer interfaces: A review,” Front. Hum. Neurosci. 9, 3 (2015). [PubMed]  

8. M. Strait and M. Scheutz, “What we can and cannot (yet) do with functional near infrared spectroscopy,” Front. Neurosci. 8, 117 (2014). [CrossRef]   [PubMed]  

9. D. A. Boas, C. E. Elwell, M. Ferrari, and G. Taga, “Twenty years of functional near-infrared spectroscopy: Introduction for the special issue,” Neuroimage 85(Pt 1), 1–5 (2014). [CrossRef]   [PubMed]  

10. M. Ferrari and V. Quaresima, “A brief review on the history of human functional near-infrared spectroscopy (fNIRS) development and fields of application,” Neuroimage 63(2), 921–935 (2012). [CrossRef]   [PubMed]  

11. M. J. Khan, M. J. Hong, and K.-S. Hong, “Decoding of four movement directions using hybrid NIRS-EEG brain-computer interface,” Front. Hum. Neurosci. 8, 244 (2014). [CrossRef]   [PubMed]  

12. S. Fazli, J. Mehnert, J. Steinbrink, G. Curio, A. Villringer, K.-R. Müller, and B. Blankertz, “Enhanced performance by a hybrid NIRS-EEG brain computer interface,” Neuroimage 59(1), 519–529 (2012). [CrossRef]   [PubMed]  

13. T. O. Zander and C. Kothe, “Towards passive brain-computer interfaces: Applying brain-computer interface technology to human-machine systems in general,” J. Neural Eng. 8(2), 025005 (2011). [CrossRef]   [PubMed]  

14. N. Naseer, M. J. Hong, and K.-S. Hong, “Online binary decision decoding using functional near-infrared spectroscopy for the development of brain-computer interface,” Exp. Brain Res. 232(2), 555–564 (2014). [CrossRef]   [PubMed]  

15. N. Naseer and K.-S. Hong, “Classification of functional near-infrared spectroscopy signals corresponding to the right- and left-wrist motor imagery for development of a brain-computer interface,” Neurosci. Lett. 553, 84–89 (2013). [CrossRef]   [PubMed]  

16. K.-S. Hong, N. Naseer, and Y.-H. Kim, “Classification of prefrontal and motor cortex signals for three-class fNIRS-BCI,” Neurosci. Lett. 587, 87–92 (2015). [CrossRef]   [PubMed]  

17. R. Sitaram, H. Zhang, C. Guan, M. Thulasidas, Y. Hoshi, A. Ishikawa, K. Shimizu, and N. Birbaumer, “Temporal classification of multichannel near-infrared spectroscopy signals of motor imagery for developing a brain-computer interface,” Neuroimage 34(4), 1416–1427 (2007). [CrossRef]   [PubMed]  

18. S. M. Coyle, T. E. Ward, and C. M. Markham, “Brain-computer interface using a simplified functional near-infrared spectroscopy system,” J. Neural Eng. 4(3), 219–226 (2007). [CrossRef]   [PubMed]  

19. L. C. Schudlo and T. Chau, "Dynamic topographical pattern classification of multichannel prefrontal NIRS signals: II. Online differentiation of mental arithmetic and rest," J. Neural Eng. 11(1), 016003 (2014).

20. S. D. Power, A. Kushki, and T. Chau, "Intersession consistency of single-trial classification of the prefrontal response to mental arithmetic and the no-control state by NIRS," PLoS One 7(7), e37791 (2012).

21. S. D. Power, A. Kushki, and T. Chau, "Towards a system-paced near-infrared spectroscopy brain-computer interface: Differentiating prefrontal activity due to mental arithmetic and mental singing from the no-control state," J. Neural Eng. 8(6), 066004 (2011).

22. H.-J. Hwang, D. Hwan Kim, C.-H. Han, and C.-H. Im, "A new dual-frequency stimulation method to increase the number of visual stimuli for multi-class SSVEP-based brain-computer interface (BCI)," Brain Res. 1515, 66–77 (2013).

23. Y. Tomita, F.-B. Vialatte, G. Dreyfus, Y. Mitsukura, H. Bakardjian, and A. Cichocki, "Bimodal BCI using simultaneously NIRS and EEG," IEEE Trans. Biomed. Eng. 61(4), 1274–1284 (2014).

24. H. Santosa, M. J. Hong, and K.-S. Hong, "Lateralization of music processing auditory cortex: An fNIRS study," Front. Behav. Neurosci. 8, 418 (2014).

25. K.-S. Hong and H.-D. Nguyen, "State-space models of impulse hemodynamic responses over motor, somatosensory, and visual cortices," Biomed. Opt. Express 5(6), 1778–1798 (2014).

26. X.-S. Hu, K.-S. Hong, and S. S. Ge, "fNIRS-based online deception decoding," J. Neural Eng. 9(2), 026012 (2012).

27. B. T. Jap, S. Lal, P. Fischer, and E. Bekiaris, "Using EEG spectral components to assess algorithms for detecting fatigue," Expert Syst. Appl. 36(2), 2352–2359 (2009).

28. W. Vanlaar, H. Simpson, D. Mayhew, and R. Robertson, "Fatigued and drowsy driving: A survey of attitudes, opinions and behaviors," J. Safety Res. 39(3), 303–309 (2008).

29. T. Liu, "Positive correlation between drowsiness and prefrontal activation during a simulated speed-control driving task," Neuroreport 25(16), 1316–1319 (2014).

30. Y. Kubota, N. N. Takasu, S. Horita, M. Kondo, M. Shimizu, T. Okada, T. Wakamura, and M. Toichi, "Dorsolateral prefrontal cortical oxygenation during REM sleep in humans," Brain Res. 1389, 83–92 (2011).

31. P. P. Caffier, U. Erdmann, and P. Ullsperger, "Experimental evaluation of eye-blink parameters as a drowsiness measure," Eur. J. Appl. Physiol. 89(3), 319–325 (2003).

32. Q. Ji, Z. W. Zhu, and P. L. Lan, "Real-time nonintrusive monitoring and prediction of driver fatigue," IEEE Trans. Vehicular Technol. 53(4), 1052–1068 (2004).

33. J. Horne and L. Reyner, "Vehicle accidents related to sleep: A review," Occup. Environ. Med. 56(5), 289–294 (1999).

34. K. Yoshino, N. Oka, K. Yamamoto, H. Takahashi, and T. Kato, "Functional brain imaging using near-infrared spectroscopy during actual driving on an expressway," Front. Hum. Neurosci. 7, 882 (2013).

35. C. T. Lin, C. H. Chuang, C. S. Huang, S. F. Tsai, S. W. Lu, Y. H. Chen, and L. W. Ko, "Wireless and wearable EEG system for evaluating driver vigilance," IEEE Trans. Biomed. Circuits Syst. 8(2), 165–176 (2014).

36. F. X. Graydon, R. Young, M. D. Benton, R. J. Genik, S. Posse, L. Hsieh, and C. Green, "Visual event detection during simulated driving: Identifying the neural correlates with functional neuroimaging," Transp. Res. Pt. F-Traffic Psychol. Behav. 7, 271–286 (2004).

37. E. Horikawa, N. Okamura, M. Tashiro, Y. Sakurada, M. Maruyama, H. Arai, K. Yamaguchi, H. Sasaki, K. Yanai, and M. Itoh, "The neural correlates of driving performance identified using positron emission tomography," Brain Cogn. 58(2), 166–171 (2005).

38. Y. Uchiyama, H. Toyoda, H. Sakai, D. Shin, K. Ebe, and N. Sadato, "Suppression of brain activity related to a car-following task with an auditory task: An fMRI study," Transp. Res. Pt. F-Traffic Psychol. Behav. 15, 25–37 (2012).

39. T. A. Schweizer, K. Kan, Y. Hung, F. Tam, G. Naglie, and S. J. Graham, "Brain activity during driving with distraction: An immersive fMRI study," Front. Hum. Neurosci. 7, 53 (2013).

40. B.-G. Lee, B.-L. Lee, and W.-Y. Chung, "Mobile healthcare for automatic driving sleep-onset detection using wavelet-based EEG and respiration signals," Sensors (Basel) 14(10), 17915–17936 (2014).

41. A. Garcés Correa, L. Orosco, and E. Laciar, "Automatic detection of drowsiness in EEG records based on multimodal analysis," Med. Eng. Phys. 36(2), 244–249 (2014).

42. J. Wang, Y. Y. Wu, H. Qu, and G. H. Xu, "EEG-based fatigue driving detection using correlation dimension," J. Vibroeng. 16, 407–413 (2014).

43. S. Hu, G. Zheng, and B. Peters, "Driver fatigue detection from electroencephalogram spectrum after electrooculography artefact removal," IET Intell. Transp. Syst. 7(1), 105–113 (2013).

44. G. Li and W.-Y. Chung, "Estimation of eye closure degree using EEG sensors and its application in driver drowsiness detection," Sensors (Basel) 14(9), 17491–17515 (2014).

45. R. N. Roy, S. Charbonnier, and S. Bonnet, "Eye blink characterization from frontal EEG electrodes using source separation and pattern recognition algorithms," Biomed. Signal Process. Control 14, 256–264 (2014).

46. A. Picot, S. Charbonnier, and A. Caplier, "On-line detection of drowsiness using brain and visual information," IEEE Trans. Syst. Man Cybern. A Syst. Hum. 42(3), 764–775 (2012).

47. H. Santosa, M. J. Hong, S.-P. Kim, and K.-S. Hong, "Noise reduction in functional near-infrared spectroscopy signals by independent component analysis," Rev. Sci. Instrum. 84(7), 073106 (2013).

48. M. R. Bhutta, K.-S. Hong, B.-M. Kim, M. J. Hong, Y.-H. Kim, and S.-H. Lee, "Note: Three wavelengths near-infrared spectroscopy system for compensating the light absorbance by water," Rev. Sci. Instrum. 85(2), 026111 (2014).

49. M. A. Kamran and K.-S. Hong, "Linear parameter-varying model and adaptive filtering technique for detecting neuronal activities: An fNIRS study," J. Neural Eng. 10(5), 056002 (2013).

50. J. W. Barker, A. Aarabi, and T. J. Huppert, "Autoregressive model based algorithm for correcting motion and serially correlated errors in fNIRS," Biomed. Opt. Express 4(8), 1366–1379 (2013).

51. J. Li and L. Qiu, "Temporal correlation of spontaneous hemodynamic activity in language areas measured with functional near-infrared spectroscopy," Biomed. Opt. Express 5(2), 587–595 (2014).

52. W. B. Baker, A. B. Parthasarathy, D. R. Busch, R. C. Mesquita, J. H. Greenberg, and A. G. Yodh, "Modified Beer-Lambert law for blood flow," Biomed. Opt. Express 5(11), 4053–4075 (2014).

53. S. Lemm, B. Blankertz, T. Dickhaus, and K. R. Müller, "Introduction to machine learning for brain imaging," Neuroimage 56(2), 387–399 (2011).

54. F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, "A review of classification algorithms for EEG-based brain-computer interfaces," J. Neural Eng. 4(2), R1–R13 (2007).

55. C. Bogler, J. Mehnert, J. Steinbrink, and J. D. Haynes, "Decoding vigilance with NIRS," PLoS One 9(7), e101729 (2014).

56. C. Kaufmann, R. Wehrle, T. C. Wetter, F. Holsboer, D. P. Auer, T. Pollmächer, and M. Czisch, "Brain activation and hypothalamic functional connectivity during human non-rapid eye movement sleep: an EEG/fMRI study," Brain 129(3), 655–667 (2006).

57. G. R. Poudel, C. R. H. Innes, and R. D. Jones, "Cerebral perfusion differences between drowsy and nondrowsy individuals after acute sleep restriction," Sleep 35(8), 1085–1096 (2012).

58. M. R. Bhutta, M. J. Hong, Y.-H. Kim, and K.-S. Hong, "Single-trial lie detection using a combined fNIRS-polygraph system," Front. Psychol. 6, 709 (2015).

59. N. Naseer and K.-S. Hong, "Decoding answers to four-choice questions using functional near-infrared spectroscopy," J. Near Infrared Spectrosc. 23(1), 23–31 (2015).

60. X.-S. Hu, K.-S. Hong, and S. S. Ge, "Reduction of trial-to-trial variability in functional near-infrared spectroscopy signals by accounting for resting-state functional connectivity," J. Biomed. Opt. 18(1), 017003 (2013).

61. S. Aritake, S. Higuchi, H. Suzuki, K. Kuriyama, M. Enomoto, T. Soshi, S. Kitamura, A. Hida, and K. Mishima, "Increased cerebral blood flow in the right frontal lobe area during sleep precedes self-awakening in humans," BMC Neurosci. 13(1), 153 (2012).

62. S. Fazli, J. Mehnert, J. Steinbrink, and B. Blankertz, "Using NIRS as a predictor for EEG-based BCI performance," in Proceedings of IEEE Conference of Engineering in Medicine and Biology Society (IEEE EMBC, 2012), pp. 4911–4914.

Figures (7)

Fig. 1 The experimental scheme used for drowsiness detection.
Fig. 2 The placement of optodes over the prefrontal and dorsolateral prefrontal cortex regions.
Fig. 3 ∆HbO and ∆HbR changes in the prefrontal and dorsolateral prefrontal brain regions (Subject 3): Region A consists of channels 1~8, region B channels 9~20, and region C channels 21~28.
Fig. 4 Comparison of the regional averages in A, B, and C showing the drowsy and alert states (Subject 3).
Fig. 5 HbX corresponding to the drowsy state (Subs. 1, 2, 4 and 6; see Fig. 4 for Sub. 3; Sub. 5 is omitted).
Fig. 6 28 2-class feature spaces combining the mean ∆HbO, mean ∆HbR, skewness, kurtosis, slope, number of peaks, sum of peaks and signal peak for separating the drowsy and non-drowsy states: The red triangle represents the non-drowsy (alert) state and the blue circle represents the drowsy state (Subject 2, region A, 0~5 sec time window).
Fig. 7 (a) The overall average of the classification accuracies over 13 subjects in different time windows; (b) the individual average accuracies and standard deviations of 13 subjects in regions A, B, and C; (c) the channel-wise number of occurrences of the drowsy state over 13 subjects (i.e., four subjects showed the drowsy state in Ch. 1).

Tables (9)

Table 1 Eight features in region A (before normalization): 0~5 sec window
Table 2 Eight features in region A (before normalization): 0~10 sec window
Table 3 Eight features in region A (before normalization): 0~15 sec window
Table 4 Difference of the feature values between the drowsy and alert states (region B, 5 sec window)
Table 5 Difference of the feature values between the drowsy and alert states (region C, 5 sec window)
Table 6 Accuracies obtained by a combination of two features (0~15 sec window, Subject 2)
Table 7 Averaged accuracies over all the subjects (0~15 sec window)
Table 8 Classification accuracies (%) in three different brain regions
Table 9 Performance comparison of two classical classifiers (for region A)

Equations (10)

$$A(t;\lambda)=\ln\frac{I_{\mathrm{in}}(\lambda)}{I_{\mathrm{out}}(t;\lambda)}=\alpha(\lambda)\times c(\lambda)\times l\times d(\lambda)+\eta,$$

$$\begin{bmatrix}\Delta c_{\mathrm{HbO}}(t)\\ \Delta c_{\mathrm{HbR}}(t)\end{bmatrix}=\begin{bmatrix}\alpha_{\mathrm{HbO}}(\lambda_{1}) & \alpha_{\mathrm{HbR}}(\lambda_{1})\\ \alpha_{\mathrm{HbO}}(\lambda_{2}) & \alpha_{\mathrm{HbR}}(\lambda_{2})\end{bmatrix}^{-1}\begin{bmatrix}\Delta A(t;\lambda_{1})\\ \Delta A(t;\lambda_{2})\end{bmatrix}\frac{1}{l\times d(\lambda)},$$

$$M_{k}=\frac{1}{N}\sum_{j\in\mathrm{window}_{k}}X_{j},$$

$$\mathrm{skew}_{k}=\frac{E\big[(X-M_{k})^{3}\big]}{\sigma^{3}},$$

$$\mathrm{kurt}_{k}=\frac{E\big[(X-M_{k})^{4}\big]}{\sigma^{4}},$$

$$a'=\frac{a-\min a}{\max a-\min a},$$

$$\mu_{i}=\frac{1}{n_{i}}\sum_{x\in\mathrm{class}\,i}x,\qquad \mu=\frac{1}{n}\sum_{l}x_{l},$$

$$J(V)=\frac{\det(V^{T}S_{B}V)}{\det(V^{T}S_{W}V)},$$

$$S_{B}=\sum_{i=1}^{m}n_{i}(\mu_{i}-\mu)(\mu_{i}-\mu)^{T},$$

$$S_{W}=\sum_{i=1}^{m}\sum_{x_{l}\in\mathrm{class}\,i}(x_{l}-\mu_{i})(x_{l}-\mu_{i})^{T}.$$
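The windowed statistics and the Fisher LDA criterion above can be sketched in code. This is a minimal illustration, not the authors' implementation: the sampling rate, the simple local-maximum peak definition, and the function names (`window_features`, `minmax`, `fisher_lda`) are assumptions introduced for the example.

```python
import numpy as np

def window_features(x, fs=10.0):
    """Per-window features for one fNIRS channel (e.g. a 0~5 s segment of
    delta-HbO), mirroring the paper's feature set: mean, skewness, kurtosis,
    signal slope, number of peaks, sum of peaks, and signal peak.
    fs is an assumed sampling rate (Hz); x must be non-constant."""
    m = x.mean()
    s = x.std()
    skew = np.mean((x - m) ** 3) / s ** 3          # E[(X - M_k)^3] / sigma^3
    kurt = np.mean((x - m) ** 4) / s ** 4          # E[(X - M_k)^4] / sigma^4
    t = np.arange(len(x)) / fs
    slope = np.polyfit(t, x, 1)[0]                 # least-squares signal slope
    # crude peak definition: interior samples larger than both neighbors
    inner = x[1:-1]
    peaks = inner[(inner > x[:-2]) & (inner > x[2:])]
    sum_peaks = peaks.sum() if len(peaks) else 0.0
    return np.array([m, skew, kurt, slope, len(peaks), sum_peaks, x.max()])

def minmax(a):
    """Rescale features to [0, 1]: a' = (a - min a) / (max a - min a).
    Assumes a is not constant (otherwise the denominator is zero)."""
    return (a - a.min()) / (a.max() - a.min())

def fisher_lda(X0, X1):
    """Two-class Fisher LDA direction maximizing
    J(v) = (v^T S_B v) / (v^T S_W v); for two classes the optimum has the
    closed form v = S_W^{-1} (mu_1 - mu_0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter: sum over classes of sum (x - mu_i)(x - mu_i)^T
    Sw = (np.cov(X0.T, bias=True) * len(X0)
          + np.cov(X1.T, bias=True) * len(X1))
    v = np.linalg.solve(Sw, mu1 - mu0)
    return v / np.linalg.norm(v)
```

A new window would then be classified by projecting its normalized feature vector onto `v` and thresholding, e.g. at the midpoint of the projected class means.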