Laura Gwilliams
Assistant Professor of Psychology and, by courtesy, of Linguistics
Bio
Laura Gwilliams is jointly appointed between Stanford Psychology, the Wu Tsai Neurosciences Institute, and Stanford Data Science. Her work focuses on understanding the neural representations and operations that give rise to speech comprehension in the human brain. To do so, she brings together insights from neuroscience, linguistics, and machine learning, and takes advantage of recording techniques that operate at distinct spatial scales (MEG, ECoG, and Neuropixels).
Academic Appointments
- Assistant Professor, Psychology
- Assistant Professor (by courtesy), Linguistics
- Member, Bio-X
- Member, Stanford Data Science
- Member, Wu Tsai Neurosciences Institute
2023-24 Courses
- Unravelling the Inner-Workings of the Brain: Data Science for Neuroscience Capstone - DATASCI 125 (Spr)
- Independent Studies (3):
  - Directed Study - BIOE 391 (Spr)
  - Graduate Research - PSYCH 275 (Aut, Win, Spr)
  - Special Laboratory Projects - PSYCH 195 (Spr)
Stanford Advisees
- Postdoctoral Faculty Sponsor: Jill Kries
- Doctoral Dissertation Advisor (AC): Irmak Ergin
All Publications
- Introducing MEG-MASC, a high-quality magneto-encephalography dataset for evaluating natural speech processing.
Scientific Data. 2023; 10 (1): 862
Abstract
The "MEG-MASC" dataset provides a curated set of raw magnetoencephalography (MEG) recordings of 27 English speakers who listened to two hours of naturalistic stories. Each participant performed two identical sessions, involving listening to four fictional stories from the Manually Annotated Sub-Corpus (MASC) intermixed with random word lists and comprehension questions. We time-stamp the onset and offset of each word and phoneme in the metadata of the recording, and organize the dataset according to the Brain Imaging Data Structure (BIDS). This data collection provides a suitable benchmark for large-scale encoding and decoding analyses of temporally resolved brain responses to speech. We provide the Python code to replicate several validation analyses of the MEG evoked responses, such as the temporal decoding of phonetic features and word frequency. All code and MEG, audio, and text data are publicly available, in keeping with best practices in transparent and reproducible research.
DOI: 10.1038/s41597-023-02752-5
PubMedID: 38049487
PubMedCentralID: 7513462