Bio


Laura Gwilliams is jointly appointed between Stanford Psychology, the Wu Tsai Neurosciences Institute and Stanford Data Science. Her work focuses on understanding the neural representations and operations that give rise to speech comprehension in the human brain. To do so, she brings together insights from neuroscience, linguistics and machine learning, and takes advantage of recording techniques that operate at distinct spatial scales (MEG, ECoG and Neuropixels).

Academic Appointments


  • Jointly appointed: Stanford Psychology, Wu Tsai Neurosciences Institute, and Stanford Data Science

Administrative Appointments


  • Co-director, SDS Center for Neural Data Science (2025 - Present)
  • Faculty Director, Koret Human Neurosciences Community Laboratory (2024 - Present)

Honors & Awards


  • Neuroscience Fellowship Award, Klingenstein Philanthropies (2025)
  • Early Career Award, Whitehall Foundation (2024)
  • Glushko Dissertation Prize, Cognitive Science Society (2021)

All Publications


  • The spatio-temporal dynamics of phoneme encoding in aging and aphasia. The Journal of Neuroscience. Kries, J., Vandermosten, M., Gwilliams, L. 2025

    Abstract

    During successful language comprehension, speech sounds (phonemes) are encoded within a series of neural patterns that evolve over time. Here we tested whether these neural dynamics of speech encoding are altered for individuals with a language disorder. We recorded EEG responses from 39 individuals with post-stroke aphasia (13♀/26♂) and 24 healthy age-matched controls (i.e., older adults; 8♀/16♂) during 25 minutes of natural story listening. We estimated the duration of phonetic feature encoding, the speed of evolution across neural populations, and the spatial location of encoding over EEG sensors. First, we establish that phonetic features are robustly encoded in the EEG responses of healthy older adults. Second, when comparing individuals with aphasia to healthy controls, we find significantly decreased phonetic encoding in the aphasic group after a shared initial processing pattern (0.08-0.25s after phoneme onset). Phonetic features were less strongly encoded over left-lateralized electrodes in the aphasia group compared to controls, with no difference in the speed of neural pattern evolution. Finally, we observed that healthy controls, but not individuals with aphasia, encode phonetic features longer when uncertainty about word identity is high, indicating that this mechanism - encoding phonetic information until word identity is resolved - is crucial for successful comprehension. Together, our results suggest that aphasia may entail a failure to maintain lower-order information long enough to recognize lexical items.

    Significance Statement: This study reveals robust decoding of speech sound properties, so-called phonetic features, from EEG recordings in older adults, as well as decreased phonetic processing in individuals with a language disorder (aphasia) compared to healthy controls. This was most prominent over left-hemispheric electrodes. Additionally, we observed that healthy controls, but not individuals with aphasia, encode phonetic features longer when uncertainty about word identity is high, indicating that this mechanism - encoding phonetic information until word identity is resolved - is crucial for successful language processing. These insights deepen our understanding of disrupted mechanisms in a language disorder, and show how the integration between language processing levels works in the healthy aging, neurotypical brain.

    DOI: 10.1523/JNEUROSCI.1001-25.2025 · PubMedID: 41461535
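
    As a rough illustration of the time-resolved decoding used in this line of work, the sketch below fits one classifier per time sample on simulated EEG epochs. The data shapes, the binary voicing label, and the classifier choice are all assumptions for illustration, not the authors' pipeline.

      # Minimal sketch: time-resolved decoding of one phonetic feature
      # (e.g., voicing) from epoched EEG. All data here are simulated.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      n_epochs, n_channels, n_times = 200, 64, 120   # trials x sensors x samples
      X = rng.standard_normal((n_epochs, n_channels, n_times))
      y = rng.integers(0, 2, n_epochs)               # hypothetical voiced/voiceless label

      clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

      # One classifier per time sample; the stretch of above-chance scores
      # estimates how long the phonetic feature stays linearly decodable.
      scores = [
          cross_val_score(clf, X[:, :, t], y, cv=5, scoring="roc_auc").mean()
          for t in range(n_times)
      ]
      print(f"peak AUC {max(scores):.2f} at sample {int(np.argmax(scores))}")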

  • Shared and language-specific phonological processing in the human temporal lobe. Nature. Bhaya-Grossman, I., Leonard, M. K., Zhang, Y., Gwilliams, L., Johnson, K., Lu, J., Chang, E. F. 2025

    Abstract

    All spoken languages are produced by the human vocal tract, which defines the limited set of possible speech sounds. Despite this constraint, however, there exists incredible diversity in the world's 7,000 spoken languages, each of which is learned through extensive experience hearing speech in language-specific contexts [1]. It remains unknown which elements of speech processing in the brain depend on daily language experience and which do not. In this study, we recorded high-density cortical activity from adult participants with diverse language backgrounds as they listened to speech in their native language and an unfamiliar foreign language. We found that, regardless of language experience, both native and foreign languages elicited similar cortical responses in the superior temporal gyrus (STG), associated with shared acoustic-phonetic processing of foundational speech sound features [2,3], such as vowels and consonants. However, only during native language listening did we observe enhanced neural encoding in the STG for word boundaries, word frequency and language-specific sound sequence statistics. In a separate cohort of bilingual participants, this encoding of word- and sequence-level information appeared for both familiar languages in the same individual and in the same STG neural populations. These results indicate that experience-dependent language processing involves dynamic integration of both shared acoustic-phonetic and language-specific sequence- and word-level information in the STG.

    DOI: 10.1038/s41586-025-09748-8 · Web of Science ID: 001618064000001 · PubMedID: 41261133 · PubMedCentralID: 4350233
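
    The native-versus-foreign comparison above can be pictured as a nested encoding model: do word-level predictors (boundaries, frequency, sequence statistics) explain electrode activity beyond acoustic-phonetic features alone? The sketch below mirrors that logic on simulated data; the feature counts, ridge penalty, and simulated electrode are all assumptions, not the study's actual models.

      # Minimal sketch of a nested ridge encoding comparison on one
      # simulated electrode.
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 5000
      phonetic = rng.standard_normal((n, 14))    # hypothetical acoustic-phonetic features
      word_level = rng.standard_normal((n, 2))   # hypothetical boundary / frequency regressors
      # Simulate activity that reflects both feature families, plus noise
      hg = phonetic @ rng.standard_normal(14) + word_level @ rng.standard_normal(2)
      hg += 2.0 * rng.standard_normal(n)

      def cv_r2(X, y):
          # Cross-validated variance explained by a ridge encoding model
          return cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()

      r2_acoustic = cv_r2(phonetic, hg)
      r2_full = cv_r2(np.hstack([phonetic, word_level]), hg)
      # Language-specific word-level encoding would appear as a reliable
      # gain of the full model over the acoustic-phonetic model.
      print(f"acoustic R2 = {r2_acoustic:.3f}; + word-level R2 = {r2_full:.3f}")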

  • Human cortical dynamics of auditory word form encoding. Neuron. Zhang, Y., Leonard, M. K., Bhaya-Grossman, I., Gwilliams, L., Chang, E. F. 2025

    Abstract

    We perceive continuous speech as a series of discrete words, despite the lack of clear acoustic boundaries. The superior temporal gyrus (STG) encodes phonetic elements like consonants and vowels, but it is unclear how whole words are encoded. Using high-density cortical recordings and spoken narratives, we investigated how the human brain represents auditory word forms. STG activity exhibits a distinctive reset at word boundaries, marked by a sharp drop in cortical activity. Between resets, STG encodes acoustic-phonetic, prosodic, and lexical features, supporting integration of phonological features into coherent word forms. This process tracks the relative elapsed time within words, independent of absolute duration, providing a flexible encoding of variable word lengths. Similar dynamics were found in deeper layers of a self-supervised artificial speech network. Finally, a bistable word perception task revealed trial-by-trial STG responses to perceived word boundaries. Together, these findings support a new dynamical model of auditory word forms.

    DOI: 10.1016/j.neuron.2025.10.011 · PubMedID: 41205609
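
    The claim that activity tracks relative rather than absolute elapsed time can be made concrete with a small model comparison. In this hypothetical sketch, simulated activity ramps with the fraction of the word elapsed, so the relative-time regressor should out-predict the absolute-time one; all quantities are illustrative.

      # Sketch: relative vs. absolute elapsed-time encoding within words.
      # All data are simulated.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 4000
      duration = rng.uniform(0.2, 0.8, n)        # word durations (s)
      t_abs = rng.uniform(0, 1, n) * duration    # time since word onset
      t_rel = t_abs / duration                   # fraction of word elapsed
      activity = t_rel + 0.3 * rng.standard_normal(n)   # ramps with relative time

      for name, pred in [("absolute time", t_abs), ("relative time", t_rel)]:
          r2 = cross_val_score(LinearRegression(), pred.reshape(-1, 1),
                               activity, cv=5, scoring="r2").mean()
          print(f"{name}: cross-validated R2 = {r2:.2f}")
      # Flexible encoding of variable word lengths predicts that the
      # relative-time model wins this comparison.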

  • Hierarchical dynamic coding coordinates speech comprehension in the human brain. Proceedings of the National Academy of Sciences. Gwilliams, L., Marantz, A., Poeppel, D., King, J. R. 2025; 122 (42): e2422097122

    Abstract

    Speech comprehension involves transforming an acoustic waveform into meaning. To do so, the human brain generates a hierarchy of features that converts the sensory input into increasingly abstract language properties. However, little is known about how rapid incoming sequences of hierarchical features are continuously coordinated. Here, we propose that each language feature is supported by a dynamic neural code, which represents the sequence history of hierarchical features in parallel. To test this "hierarchical dynamic coding" (HDC) hypothesis, we use time-resolved decoding of brain activity to track the construction, maintenance, and update of a comprehensive hierarchy of language features spanning phonetic, word form, lexical-syntactic, syntactic, and semantic representations. For this, we recorded 21 native English-speaking participants with magnetoencephalography (MEG) while they listened to two hours of short stories in English. Our analyses reveal three main findings. First, the brain represents and simultaneously maintains a sequence of hierarchical features. Second, the duration of these representations depends on their level in the language hierarchy. Third, each representation is maintained by a dynamic neural code, which evolves at a speed commensurate with its corresponding linguistic level. This HDC preserves information over time while limiting destructive interference between successive features. Overall, HDC reveals how the human brain maintains and updates the continuously unfolding language hierarchy during natural speech comprehension, thereby anchoring linguistic theories to their biological implementations.

    DOI: 10.1073/pnas.2422097122 · PubMedID: 41105708
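
    The "dynamic neural code" claim rests on temporal generalization: train a decoder at each time point and test it at every other one, so a code that moves across neural patterns shows a narrow diagonal rather than a square of high scores. Below is a minimal sketch with simulated data, assuming MNE-Python's decoding module and illustrative shapes.

      # Sketch: temporal generalization matrix for one binary feature.
      # Simulated data stand in for MEG epochs.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from mne.decoding import GeneralizingEstimator, cross_val_multiscore

      rng = np.random.default_rng(0)
      X = rng.standard_normal((150, 208, 100))  # epochs x sensors x samples
      y = rng.integers(0, 2, 150)               # e.g., a binary word-form feature

      est = GeneralizingEstimator(
          make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
          scoring="roc_auc",
      )
      # scores[i, j]: decoder trained at time i, evaluated at time j
      scores = cross_val_multiscore(est, X, y, cv=5).mean(axis=0)
      print(scores.shape)  # (100, 100) train-time x test-time matrix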

  • Dynamics of pitch perception in the auditory cortex. The Journal of Neuroscience. Abrams, E. B., Marantz, A., Krementsov, I., Gwilliams, L. 2025

    Abstract

    The ability to perceive pitch allows human listeners to experience music, recognize the identity and emotion conveyed by conversational partners, and make sense of their auditory environment. A pitch percept is formed by weighting different acoustic cues (e.g., signal fundamental frequency and inter-harmonic spacing) and contextual cues (expectation). How and when such cues are neurally encoded and integrated remains debated. In this study, twenty-eight participants (16 female) listened to tone sequences with different acoustic cues (pure tones, complex missing-fundamental tones, and tones with an ambiguous mixture), placed in predictable and less predictable sequences, while magnetoencephalography was recorded. Decoding analyses revealed that pitch was encoded in neural responses to all three tone types, in the low-to-mid auditory cortex and sensorimotor cortex bilaterally, with right-hemisphere dominance. The pattern of activity generalized across cue types, offset in time: pitch was neurally encoded earlier for harmonic tones (∼85ms) than pure tones (∼95ms). For ambiguous tones, pitch emerged significantly earlier in predictable contexts than in unpredictable ones. The results suggest that a unified neural representation of pitch emerges by integrating independent pitch cues, and that context alters the dynamics of pitch generation when acoustic cues are ambiguous.

    Significance Statement: Pitch enables humans to enjoy music, understand the emotional intent of a conversational partner, distinguish lexical items in tonal languages, and make sense of the acoustic environment. The study of pitch has lasted over a century, with conflicting accounts of how and when the brain integrates spectrotemporal information to map different sound sources onto a single and stable pitch percept. Our results answer crucial questions about the emergence of perceptual pitch in the brain: namely, that place and temporal cues to pitch seem to be accounted for by early auditory cortex, that a common representation of perceptual pitch emerges early in the right hemisphere, and that the temporal dynamics of pitch representations are modulated by expectation.

    DOI: 10.1523/JNEUROSCI.1111-24.2025 · PubMedID: 39909567
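
    The integration claim here implies cross-cue generalization: a decoder trained on responses to pure tones should transfer to missing-fundamental tones if a shared pitch code exists. The sketch below simulates such a shared code; trial counts, sensor counts, and noise levels are arbitrary assumptions.

      # Sketch: train a pitch decoder on pure tones, test on complex
      # (missing-fundamental) tones. Simulated data only.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n_trials, n_sensors = 300, 208
      pitch = rng.integers(0, 4, n_trials)       # four pitch categories
      code = rng.standard_normal((4, n_sensors)) # shared pitch code across cues
      X_pure = code[pitch] + 0.5 * rng.standard_normal((n_trials, n_sensors))
      X_complex = code[pitch] + 0.5 * rng.standard_normal((n_trials, n_sensors))

      clf = LogisticRegression(max_iter=1000).fit(X_pure, pitch)
      acc = clf.score(X_complex, pitch)          # generalization across cue types
      print(f"pure -> complex decoding accuracy: {acc:.2f} (chance = 0.25)")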

  • Computational Architecture of Speech Comprehension in the Human Brain. Annual Review of Linguistics. Gwilliams, L., Bhaya-Grossman, I., Zhang, Y., Scott, T., Harper, S., Levy, D. 2025; 11: 209-226
  • What we mean when we say semantic: Toward a multidisciplinary semantic glossary. Psychonomic Bulletin & Review. Reilly, J., Shain, C., Borghesani, V., Kuhnke, P., Vigliocco, G., Peelle, J. E., Mahon, B. Z., Buxbaum, L. J., Majid, A., Brysbaert, M., Borghi, A. M., De Deyne, S., Dove, G., Papeo, L., Pexman, P. M., Poeppel, D., Lupyan, G., Boggio, P., Hickok, G., Gwilliams, L., Fernandino, L., Mirman, D., Chrysikou, E. G., Sandberg, C. W., Crutch, S. J., Pylkkänen, L., Yee, E., Jackson, R. L., Rodd, J. M., Bedny, M., Connell, L., Kiefer, M., Kemmerer, D., de Zubicaray, G., Jefferies, E., Lynott, D., Siew, C. S., Desai, R. H., McRae, K., Diaz, M. T., Bolognesi, M., Fedorenko, E., Kiran, S., Montefinese, M., Binder, J. R., Yap, M. J., Hartwigsen, G., Cantlon, J., Bi, Y., Hoffman, P., Garcea, F. E., Vinson, D. 2024

    Abstract

    Tulving characterized semantic memory as a vast repository of meaning that underlies language and many other cognitive processes. This perspective on lexical and conceptual knowledge galvanized a new era of research undertaken by numerous fields, each with their own idiosyncratic methods and terminology. For example, "concept" has different meanings in philosophy, linguistics, and psychology. As such, many fundamental constructs used to delineate semantic theories remain underspecified and/or opaque. Weak construct specificity is among the leading causes of the replication crisis now facing psychology and related fields. Term ambiguity hinders cross-disciplinary communication, falsifiability, and incremental theory-building. Numerous cognitive subdisciplines (e.g., vision, affective neuroscience) have recently addressed these limitations via the development of consensus-based guidelines and definitions. The project to follow represents our effort to produce a multidisciplinary semantic glossary consisting of succinct definitions, background, principled dissenting views, ratings of agreement, and subjective confidence for 17 target constructs (e.g., abstractness, abstraction, concreteness, concept, embodied cognition, event semantics, lexical-semantic, modality, representation, semantic control, semantic feature, simulation, semantic distance, semantic dimension). We discuss potential benefits and pitfalls (e.g., implicit bias, prescriptiveness) of these efforts to specify a common nomenclature that other researchers might index in specifying their own theoretical perspectives (e.g., They said X, but I mean Y).

    DOI: 10.3758/s13423-024-02556-7 · PubMedID: 39231896 · PubMedCentralID: 4215955

  • Speech prosody enhances the neural processing of syntax. Communications Biology. Degano, G., Donhauser, P. W., Gwilliams, L., Merlo, P., Golestani, N. 2024; 7 (1): 748

    Abstract

    Human language relies on the correct processing of syntactic information, as it is essential for successful communication between speakers. As an abstract level of language, syntax has often been studied separately from the physical form of the speech signal, thus often masking the interactions that can promote better syntactic processing in the human brain. However, behavioral and neural evidence from adults supports the idea that prosody and syntax interact, and studies in infants support the notion that prosody assists language learning. Here we analyze an MEG dataset to investigate how acoustic cues, specifically prosody, interact with syntactic representations in the brains of native English speakers. More specifically, to examine whether prosody enhances the cortical encoding of syntactic representations, we decode syntactic phrase boundaries directly from brain activity, and evaluate possible modulations of this decoding by the prosodic boundaries. Our findings demonstrate that the presence of prosodic boundaries improves the neural representation of phrase boundaries, indicating the facilitative role of prosodic cues in processing abstract linguistic features. This work has implications for interactive models of how the brain processes different linguistic features. Future research is needed to establish the neural underpinnings of prosody-syntax interactions in languages with different typological characteristics.

    DOI: 10.1038/s42003-024-06444-7 · PubMedID: 38902370 · PubMedCentralID: 3216045
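
    The core comparison, decoding phrase boundaries with and without a co-occurring prosodic boundary, can be sketched as a condition-split decoding analysis. Everything below is simulated and illustrates the logic only.

      # Sketch: decode syntactic phrase boundaries, split by whether a
      # prosodic boundary cue is present. Simulated data only.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n, d = 600, 208
      X = rng.standard_normal((n, d))
      is_phrase_boundary = rng.integers(0, 2, n)
      has_prosodic_cue = rng.integers(0, 2, n).astype(bool)

      for label, mask in [("with prosodic cue", has_prosodic_cue),
                          ("without prosodic cue", ~has_prosodic_cue)]:
          auc = cross_val_score(LogisticRegression(max_iter=1000),
                                X[mask], is_phrase_boundary[mask],
                                cv=5, scoring="roc_auc").mean()
          print(f"{label}: AUC = {auc:.2f}")
      # The study reports better boundary decoding when prosodic cues
      # are present; here that would appear as a higher AUC in that split.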

  • Negation mitigates rather than inverts the neural representations of adjectives. PLoS Biology. Zuanazzi, A., Ripollés, P., Lin, W. M., Gwilliams, L., King, J. R., Poeppel, D. 2024; 22 (5): e3002622

    Abstract

    Combinatoric linguistic operations underpin human language processes, but how meaning is composed and refined in the mind of the reader is not well understood. We address this puzzle by exploiting the ubiquitous function of negation. We track the online effects of negation ("not") and intensifiers ("really") on the representation of scalar adjectives (e.g., "good") in parametrically designed behavioral and neurophysiological (MEG) experiments. The behavioral data show that participants first interpret negated adjectives as affirmative and later modify their interpretation towards, but never exactly as, the opposite meaning. Decoding analyses of neural activity further reveal significant above-chance decoding accuracy for negated adjectives within 600 ms from adjective onset, suggesting that negation does not invert the representation of adjectives (i.e., "not bad" represented as "good"); furthermore, decoding accuracy for negated adjectives is found to be significantly lower than that for affirmative adjectives. Overall, these results suggest that negation mitigates rather than inverts the neural representations of adjectives. This putative suppression mechanism of negation is supported by increased synchronization of beta-band neural activity in sensorimotor areas. The analysis of negation provides a stepping stone toward understanding how the human brain represents changes of meaning over time.

    DOI: 10.1371/journal.pbio.3002622 · PubMedID: 38814982 · PubMedCentralID: PMC11139306
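
    The mitigation-versus-inversion question has a natural decoding test: train a decoder of scalar adjective meaning on affirmative trials, apply it to negated trials, and check whether predictions flip sign (inversion) or shrink toward neutral (mitigation). A hypothetical sketch on simulated data, where the 0.4 shrinkage factor builds in the mitigation scenario by assumption:

      # Sketch: mitigation vs. inversion of a scalar adjective code.
      # Simulated data only.
      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(0)
      n, d = 500, 208
      value = rng.uniform(-1, 1, n)        # scalar meaning: bad (-1) .. good (+1)
      axis = rng.standard_normal(d)        # neural axis coding the scalar
      X_affirm = np.outer(value, axis) + rng.standard_normal((n, d))
      X_negate = np.outer(0.4 * value, axis) + rng.standard_normal((n, d))

      dec = Ridge(alpha=10.0).fit(X_affirm, value)
      pred = dec.predict(X_negate)
      slope = np.polyfit(value, pred, 1)[0]
      print(f"slope on negated trials: {slope:.2f} "
            "(inversion < 0 < mitigation < 1)")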

  • Hierarchical dynamic coding coordinates speech comprehension in the brain. bioRxiv. Gwilliams, L., Marantz, A., Poeppel, D., King, J. R. 2024

    Abstract

    Speech comprehension requires the human brain to transform an acoustic waveform into meaning. To do so, the brain generates a hierarchy of features that converts the sensory input into increasingly abstract language properties. However, little is known about how these hierarchical features are generated and continuously coordinated. Here, we propose that each linguistic feature is dynamically represented in the brain, allowing successive events to be represented simultaneously. To test this 'Hierarchical Dynamic Coding' (HDC) hypothesis, we use time-resolved decoding of brain activity to track the construction, maintenance, and integration of a comprehensive hierarchy of language features spanning acoustic, phonetic, sub-lexical, lexical, syntactic and semantic representations. For this, we recorded 21 participants with magnetoencephalography (MEG) while they listened to two hours of short stories. Our analyses reveal three main findings. First, the brain incrementally represents and simultaneously maintains successive features. Second, the duration of these representations depends on their level in the language hierarchy. Third, each representation is maintained by a dynamic neural code, which evolves at a speed commensurate with its corresponding linguistic level. This HDC preserves information over time while limiting interference between successive features. Overall, HDC reveals how the human brain continuously builds and maintains a language hierarchy during natural speech comprehension, thereby anchoring linguistic theories to their biological implementations.

    DOI: 10.1101/2024.04.19.590280 · PubMedID: 38659750 · PubMedCentralID: PMC11042271

  • Introducing MEG-MASC, a high-quality magnetoencephalography dataset for evaluating natural speech processing. Scientific Data. Gwilliams, L., Flick, G., Marantz, A., Pylkkänen, L., Poeppel, D., King, J. R. 2023; 10 (1): 862

    Abstract

    The "MEG-MASC" dataset provides a curated set of raw magnetoencephalography (MEG) recordings of 27 English speakers who listened to two hours of naturalistic stories. Each participant performed two identical sessions, involving listening to four fictional stories from the Manually Annotated Sub-Corpus (MASC) intermixed with random word lists and comprehension questions. We time-stamp the onset and offset of each word and phoneme in the metadata of the recording, and organize the dataset according to the 'Brain Imaging Data Structure' (BIDS). This data collection provides a suitable benchmark to large-scale encoding and decoding analyses of temporally-resolved brain responses to speech. We provide the Python code to replicate several validations analyses of the MEG evoked responses such as the temporal decoding of phonetic features and word frequency. All code and MEG, audio and text data are publicly available to keep with best practices in transparent and reproducible research.

    DOI: 10.1038/s41597-023-02752-5 · PubMedID: 38049487 · PubMedCentralID: 7513462
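
    Since the dataset follows BIDS, a recording can be loaded with standard tooling. Below is a minimal sketch using MNE-BIDS; the root path and entity values (subject/session/task) are placeholders to be checked against the dataset's own metadata files.

      # Sketch: load one MEG-MASC recording from a local BIDS copy.
      # Entity values are illustrative placeholders.
      from mne_bids import BIDSPath, read_raw_bids

      bids_path = BIDSPath(
          root="path/to/MEG-MASC",  # local copy of the dataset
          subject="01",
          session="0",
          task="0",
          datatype="meg",
      )
      raw = read_raw_bids(bids_path)
      # Word and phoneme onsets/offsets are time-stamped in the
      # annotations, ready for epoching and temporal decoding.
      print(raw.annotations[:5])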

  • Top-down information shapes lexical processing when listening to continuous speech. Language, Cognition and Neuroscience. Gwilliams, L., Marantz, A., Poeppel, D., King, J. 2024; 39 (8): 1045-1058
  • Neural dynamics of phoneme sequences reveal position-invariant code for content and order. Nature Communications. Gwilliams, L., King, J. R., Marantz, A., Poeppel, D. 2022; 13 (1): 6606

    Abstract

    Speech consists of a continuously varying acoustic signal. Yet human listeners experience it as sequences of discrete speech sounds, which are used to recognise discrete words. To examine how the human brain appropriately sequences the speech signal, we recorded two-hour magnetoencephalograms from 21 participants listening to short narratives. Our analyses show that the brain continuously encodes the three most recently heard speech sounds in parallel, and maintains this information long past its dissipation from the sensory input. Each speech sound representation evolves over time, jointly encoding both its phonetic features and the amount of time elapsed since onset. As a result, this dynamic neural pattern encodes both the relative order and phonetic content of the speech sequence. These representations are active earlier when phonemes are more predictable, and are sustained longer when lexical identity is uncertain. Our results show how phonetic sequences in natural speech are represented at the level of populations of neurons, providing insight into what intermediary representations exist between the sensory input and sub-lexical units. The flexibility in the dynamics of these representations paves the way for further understanding of how such sequences may be used to interface with higher-order structure such as lexical identity.

    DOI: 10.1038/s41467-022-34326-1 · PubMedID: 36329058 · PubMedCentralID: PMC9633780
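
    The parallel-encoding claim, that the three most recent phonemes are simultaneously decodable, can be sketched by decoding the phonetic feature at lags 0, 1, and 2 from the same response window. The data below are simulated; sensor counts and labels are assumptions.

      # Sketch: decode the phonetic feature of the current phoneme and of
      # the two preceding ones from one response window. Simulated data.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_epochs, n_sensors = 400, 208
      X = rng.standard_normal((n_epochs, n_sensors))
      labels = {lag: rng.integers(0, 2, n_epochs) for lag in (0, 1, 2)}

      for lag, y in labels.items():
          auc = cross_val_score(LogisticRegression(max_iter=1000),
                                X, y, cv=5, scoring="roc_auc").mean()
          print(f"phoneme at lag -{lag}: AUC = {auc:.2f}")
      # Above-chance AUC at all three lags would indicate that content
      # and relative order are represented in parallel.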