Youngeun Lee
Postdoctoral Scholar, Psychiatry
All Publications
-
Predicting Task Activation Maps from Resting-State Functional Connectivity using Deep Learning.
bioRxiv : the preprint server for biology
2024
Abstract
Recent work has shown that deep learning is a powerful tool for predicting brain activation patterns evoked by various tasks from resting-state features. We replicate and improve upon this recent work to introduce two models, BrainSERF and BrainSurfGCN, that perform at least as well as the state of the art while greatly reducing memory and computational footprints. Our performance analysis showed that low predictability was associated with a possible lack of task engagement, as inferred from behavioral performance. Furthermore, a deficiency in model performance was also observed for closely matched task contrasts, likely due to high individual variability confirmed by low test-retest reliability. Overall, we successfully replicate a recently developed deep learning architecture and provide scalable models for further research.
DOI: 10.1101/2024.09.10.612309
PubMedID: 39314460
PubMedCentralID: PMC11419026
-
Learning to Estimate Palpation Forces in Robotic Surgery From Visual-Inertial Data
IEEE Transactions on Medical Robotics and Bionics
2023; 5 (3): 496-506
DOI: 10.1109/TMRB.2023.3295008
Web of Science ID: 001047342600005
-
Diff-E: Diffusion-based Learning for Decoding Imagined Speech EEG
Proc. Interspeech 2023: 1159-1163
DOI: 10.21437/Interspeech.2023-1381
Web of Science ID: 001186650301063
-
2020 International brain-computer interface competition: A review.
Frontiers in human neuroscience
2022; 16: 898300
Abstract
The brain-computer interface (BCI) has been investigated as a communication tool between the brain and external devices, and BCIs have been extended beyond communication and control over the years. The 2020 International BCI Competition aimed to provide high-quality, openly accessible neuroscientific data that could be used to evaluate the current degree of technical advances in BCI. Although a variety of challenges remain for future BCI advances, we discuss some of the more recent application directions: (i) few-shot EEG learning, (ii) micro-sleep detection, (iii) imagined speech decoding, (iv) cross-session classification, and (v) EEG (+ear-EEG) detection in an ambulatory environment. Not only did scientists from the BCI field compete, but scholars with a broad variety of backgrounds and nationalities participated in the competition to address these challenges. Each dataset was prepared and separated into three parts, released to the competitors as training and validation sets followed by a test set. Remarkable BCI advances were identified through the 2020 competition, indicating several trends of interest to BCI researchers.
DOI: 10.3389/fnhum.2022.898300
PubMedID: 35937679
PubMedCentralID: PMC9354666
-
EEG-Transformer: Self-attention from Transformer Architecture for Decoding EEG of Imagined Speech
IEEE. 2022
DOI: 10.1109/BCI53720.2022.9735124
Web of Science ID: 000814683300048
-
Toward Imagined Speech based Smart Communication System: Potential Applications on Metaverse Conditions
IEEE. 2022
DOI: 10.1109/BCI53720.2022.9734827
Web of Science ID: 000814683300009
-
Mobile BCI dataset of scalp- and ear-EEGs with ERP and SSVEP paradigms while standing, walking, and running.
Scientific data
2021; 8 (1): 315
Abstract
We present a mobile dataset of electroencephalography (EEG) recorded from the scalp and around the ear, together with locomotion-sensor data, from 24 participants moving at four different speeds while performing two brain-computer interface (BCI) tasks. The data were collected from 32-channel scalp-EEG, 14-channel ear-EEG, 4-channel electrooculography, and 9-channel inertial measurement units placed at the forehead, left ankle, and right ankle. The recording conditions were standing, slow walking, fast walking, and light running at speeds of 0, 0.8, 1.6, and 2.0 m/s, respectively. For each speed, two different BCI paradigms, event-related potential and steady-state visual evoked potential, were recorded. To evaluate signal quality, scalp- and ear-EEG data were qualitatively and quantitatively validated at each speed. We believe that this dataset will facilitate BCIs in diverse mobile environments, enabling analysis of brain activity and quantitative performance evaluation to expand the use of practical BCIs.
DOI: 10.1038/s41597-021-01094-4
PubMedID: 34930915
PubMedCentralID: PMC8688416
-
A Real-Time Movement Artifact Removal Method for Ambulatory Brain-Computer Interfaces.
IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society
2020; 28 (12): 2660-2670
Abstract
Recently, practical brain-computer interfaces (BCIs) have been widely investigated for detecting human intentions in the real world. However, performance differences still exist between laboratory and real-world environments. One of the main reasons for these differences is the user's unstable physical state (e.g., human movements are not strictly controlled), which produces unexpected signal artifacts. Hence, to minimize the performance degradation of electroencephalography (EEG)-based BCIs, we present a novel artifact removal method named constrained independent component analysis with online learning (cIOL). The cIOL method finds and rejects noise-like components related to human body movements (i.e., movement artifacts) in the EEG signals. To obtain movement information, isolated electrodes made of high-resistance materials are used to block electrical signals from the brain. We estimate artifacts from the EEG signals using constrained independent component analysis together with this movement information and then extract artifact-free signals using online learning on each sample. In addition, cIOL is evaluated under 16 different experimental conditions (two types of EEG devices × two BCI paradigms × four walking speeds). The experimental results show that cIOL achieves the highest accuracy in both scalp- and ear-EEG, and the highest signal-to-noise ratio in scalp-EEG, among the state-of-the-art methods, except in the case of steady-state visual evoked potential at 2.0 m/s with a superposition problem.
DOI: 10.1109/TNSRE.2020.3040264
PubMedID: 33232242
-
EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy
GigaScience
2019; 8 (5)
Abstract
Electroencephalography (EEG)-based brain-computer interface (BCI) systems are mainly divided into three major paradigms: motor imagery (MI), event-related potential (ERP), and steady-state visually evoked potential (SSVEP). Here, we present a BCI dataset that includes the three major BCI paradigms with a large number of subjects over multiple sessions. In addition, information about the psychological and physiological conditions of BCI users was obtained using a questionnaire, and task-unrelated parameters such as resting state, artifacts, and electromyography of both arms were also recorded. We evaluated the decoding accuracies for the individual paradigms and determined performance variations across both subjects and sessions. Furthermore, we looked for more general, severe cases of BCI illiteracy than have been previously reported in the literature.

Average decoding accuracies across all subjects and sessions were 71.1% (± 0.15), 96.7% (± 0.05), and 95.1% (± 0.09), and rates of BCI illiteracy were 53.7%, 11.1%, and 10.2% for MI, ERP, and SSVEP, respectively. Compared with the ERP and SSVEP paradigms, the MI paradigm exhibited large performance variations between both subjects and sessions. Furthermore, we found that 27.8% (15 out of 54) of users were universally BCI literate, i.e., they were able to proficiently perform all three paradigms. Interestingly, we found no universally illiterate BCI user, i.e., all participants were able to control at least one type of BCI system.

Our EEG dataset can be utilized for a wide range of BCI-related research questions. All methods for the data analysis in this study are supported by fully open-source scripts that can aid in every step of BCI research. Furthermore, our results support previous but disjointed findings on the phenomenon of BCI illiteracy.
DOI: 10.1093/gigascience/giz002
Web of Science ID: 000474856100010
PubMedID: 30698704
PubMedCentralID: PMC6501944
-
Mental fatigue in central-field and peripheral-field steady-state visually evoked potential and its effects on event-related potential responses.
Neuroreport
2018; 29 (15): 1301-1308
Abstract
The steady-state visually evoked potential (SSVEP) is a natural response of the brain to visual stimulation at specific frequencies and is widely used in electroencephalography-based brain-computer interface (BCI) systems. Although the SSVEP offers high decoding accuracy, visual fatigue from the repetitive visual flickering is an unavoidable problem. In addition, hybrid BCI systems that combine the SSVEP with the event-related potential (ERP) have recently been proposed. These hybrid systems can improve decoding accuracy; however, a competing effect caused by the simultaneous presentation of visual stimuli could degrade the signals in the hybrid system. Nevertheless, previous studies have not sufficiently addressed these problems of visual fatigue with SSVEP stimuli or the competing effect in the SSVEP+ERP system. In this study, two experiments were designed to explore these issues. The first experiment evaluated the visual fatigue level and decoding accuracy for two types of SSVEP stimuli: the peripheral-field SSVEP (pSSVEP) and the central-field SSVEP (cSSVEP). We report that the pSSVEP can reduce the visual fatigue level by avoiding direct exposure of the retina to the flickering visual stimulus while delivering a decoding accuracy comparable to that of the cSSVEP. The second experiment examined the competing effect of the SSVEP stimuli on ERP performance and vice versa. To do this, the ERP and SSVEP visual stimuli were presented simultaneously as part of a BCI speller layout. We found a clear competing effect wherein the evoked brain potentials were influenced by the SSVEP stimulus and the band power at the target frequencies was significantly decreased by the ERP stimuli. Nevertheless, these competing effects did not lead to a significant loss in decoding accuracy; the features preserved sufficient information for discriminating a target class.
Our work is the first to evaluate visual fatigue and the competing effect together, both of which should be considered when designing BCI applications. Furthermore, our findings suggest that the pSSVEP is a viable substitute for the cSSVEP because of its ability to reduce visual fatigue while incurring only a minimal loss of decoding accuracy.
DOI: 10.1097/WNR.0000000000001111
PubMedID: 30102642
PubMedCentralID: PMC6143225
ORCID: https://orcid.org/0000-0003-2610-7028