Dr. Abrams is a Clinical Associate Professor in the Department of Psychiatry and Behavioral Sciences at Stanford University, where he conducts research investigating the brain bases of social communication impairments in children with autism spectrum disorders (ASD). Dr. Abrams's research focuses on understanding why children with ASD often "tune out" from the social world around them and how this impacts social and brain development. His research employs a combination of psychophysical, cognitive, and brain imaging techniques, with the goal of identifying key neural features underlying social deficits in children with ASD.
Dr. Abrams received his undergraduate degree from the University of Arizona, followed by a period in industry as an acoustical engineer in the San Francisco Bay Area. He subsequently completed his graduate degree at Northwestern University and joined the Stanford University community as a postdoctoral researcher in 2008. Dr. Abrams joined the Stanford faculty in 2014 and was promoted to Clinical Assistant Professor in 2018 and Clinical Associate Professor in 2021.
Dr. Abrams’s research program has been supported by multiple funding sources, including the NIH, the Brain and Behavior Research Foundation (NARSAD), and the National Organization for Hearing Research Foundation.
Dr. Abrams lives in the Bay Area with his wife, children, and gifted Labrador retriever, Meatball.
Clinical Associate Professor, Psychiatry and Behavioral Sciences
Member, Wu Tsai Neurosciences Institute
Honors & Awards
Sex differences in voice processing systems in autism, Brain and Behavior Research Foundation (NARSAD) (2019–2021)
Connectivity of voice processing brain networks in female children with autism, Stanford Women and Sex Differences in Medicine (2017–2018)
CHRI Pilot Early Career Award, Lucile Packard Foundation for Children’s Health (2017)
K01 Research Scientist Development Award, NIH/NIMH (2014–2017)
Postdoctoral National Research Service Award, NIH/NIDCD (2010–2012)
Independence Blue Cross Grant in Auditory Science Award, National Organization for Hearing Research Foundation (2006)
Research Training in Neuroscience, NIH/NIDCD (2002–2003)
Graduate Fellowship, Northwestern University (2000–2001)
Education
Ph.D., Northwestern University, Auditory Cognitive Neuroscience (2008)
B.F.A., University of Arizona (1994)
Current Research and Scholarly Interests
Autism spectrum disorders (ASD) are among the most pervasive neurodevelopmental disorders and are characterized by significant deficits in social communication. A common observation in children with ASD is that affected individuals often “tune out” from social interactions, which likely impacts the development of social, communication, and language skills. My primary research goals are to understand why children with ASD often tune out from the social world and how this impacts social skill and brain development, and to identify remediation strategies that motivate children with ASD to engage in social interactions. The theoretical framework that guides my work is that social impairments in ASD stem from a primary deficit in identifying social stimuli, such as human voices and faces, as rewarding and salient, which precludes children with ASD from engaging with these stimuli.
My program of research has provided important information regarding the brain circuits underlying social deficits in ASD. Importantly, these findings have consistently implicated key structures of the brain’s reward and salience processing systems, and support the hypothesis that impaired reward attribution to social stimuli is a critical aspect of social difficulties in ASD. The first study produced by this program of research was published in the Proceedings of the National Academy of Sciences and showed that children with ASD have weak brain connectivity between voice processing regions of cortex and the distributed reward circuit and amygdala. Moreover, the strength of these speech-reward brain connections predicted social communication abilities in these children. A second study, which was recently published in eLife, examined neural processing of mother’s voice, a biologically salient and implicitly rewarding sound that is associated with cognitive and social development, in children with ASD. Results from this study identified a relationship between social communication abilities in children with ASD and brain activation in reward and salience processing regions during mother’s voice processing. A third study, published in Proceedings of the National Academy of Sciences, showed that mother’s voice activates an extended voice processing network, including reward and salience processing regions, in typically developing children. Moreover, the strength of brain connectivity between voice-selective and reward and salience processing regions predicted social communication abilities in these neurotypical children. Together, these results provide novel support for the hypothesis that deficits in representing the reward value of social stimuli, including the human voice, impede children with ASD from actively engaging with these stimuli and consequently impair social skill development.
My future research will leverage these findings by examining several important questions related to social information processing in children with ASD. First, we aim to study longitudinal development of social brain circuitry in minimally verbal children with ASD, a severely affected subpopulation that has been vastly underrepresented in the ASD literature. Second, we aim to examine the efficacy of naturalistic developmental behavioral interventions, such as Pivotal Response Treatment, for children with ASD and their relation to changes in social brain and reward circuitry. Third, we aim to examine distinct neural profiles in female children with ASD who, on average, have better social communication abilities compared to their male counterparts.
A neurodevelopmental shift in reward circuitry from mother's to nonfamilial voices in adolescence.
The Journal of Neuroscience
The social world of young children primarily revolves around parents and caregivers, who play a key role in guiding children's social and cognitive development. However, a hallmark of adolescence is a shift in orientation towards nonfamilial social targets, an adaptive process that prepares adolescents for their independence. Little is known regarding the neurobiological signatures underlying changes in adolescents' social orientation. Using functional brain imaging of human voice processing in children and adolescents (ages 7-16), we demonstrate distinct neural signatures for mother's voice and nonfamilial voices across child and adolescent development in reward and social valuation systems, instantiated in the nucleus accumbens and ventromedial prefrontal cortex. While younger children showed increased activity in these brain systems for mother's voice compared to nonfamilial voices, older adolescents showed the opposite effect, with increased activity for nonfamilial voices compared to mother's voice. Findings uncover a critical role for reward and social valuative brain systems in the pronounced changes in adolescents' orientation towards nonfamilial social targets. Our approach provides a template for examining developmental shifts in social reward and motivation in individuals with pronounced social impairments, including adolescents with autism.
Significance Statement: Children's social worlds undergo a transformation during adolescence. While socialization in young children revolves around parents and caregivers, adolescence is characterized by a shift in social orientation towards nonfamilial social partners. Here we show that this shift is reflected in neural activity measured from reward processing regions in response to brief vocal samples. When younger children hear their mother's voice, reward processing regions show greater activity compared to when they hear nonfamilial, unfamiliar voices. Strikingly, older adolescents show the opposite effect, with increased activity for nonfamilial voices compared to mother's voice. Findings identify the brain basis of adolescents' switch in social orientation towards nonfamilial social partners and provide a template for understanding neurodevelopment in clinical populations with social and communication difficulties.
View details for DOI 10.1523/JNEUROSCI.2018-21.2022
View details for PubMedID 35483917
Impaired voice processing in reward and salience circuits predicts social communication in children with autism.
eLife
Engaging with vocal sounds is critical for children's social-emotional learning, and children with autism spectrum disorder (ASD) often 'tune out' voices in their environment. Little is known regarding the neurobiological basis of voice processing and its link to social impairments in ASD. Here, we perform the first comprehensive brain network analysis of voice processing in children with ASD. We examined neural responses elicited by unfamiliar voices and mother's voice, a biologically salient voice for social learning, and identified a striking relationship between social communication abilities in children with ASD and activation in key structures of reward and salience processing regions. Functional connectivity between voice-selective and reward regions during voice processing predicted social communication in children with ASD and distinguished them from typically developing children. Results support the Social Motivation Theory of ASD by showing reward system deficits associated with the processing of a critical social stimulus, mother's voice, in children with ASD.
View details for PubMedID 30806350
Neural circuits underlying mother's voice perception predict social communication abilities in children
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
2016; 113 (22): 6295-6300
The human voice is a critical social cue, and listeners are extremely sensitive to the voices in their environment. One of the most salient voices in a child's life is mother's voice: Infants discriminate their mother's voice from the first days of life, and this stimulus is associated with guiding emotional and social function during development. Little is known regarding the functional circuits that are selectively engaged in children by biologically salient voices such as mother's voice or whether this brain activity is related to children's social communication abilities. We used functional MRI to measure brain activity in 24 healthy children (mean age, 10.2 y) while they attended to brief (<1 s) nonsense words produced by their biological mother and two female control voices and explored relationships between speech-evoked neural activity and social function. Compared to female control voices, mother's voice elicited greater activity in primary auditory regions in the midbrain and cortex; voice-selective superior temporal sulcus (STS); the amygdala, which is crucial for processing of affect; nucleus accumbens and orbitofrontal cortex of the reward circuit; anterior insula and cingulate of the salience network; and a subregion of fusiform gyrus associated with face perception. The strength of brain connectivity between voice-selective STS and reward, affective, salience, memory, and face-processing regions during mother's voice perception predicted social communication skills. Our findings provide a novel neurobiological template for investigation of typical social development as well as clinical disorders, such as autism, in which perception of biologically and socially salient voices may be impaired.
View details for DOI 10.1073/pnas.1602948113
View details for Web of Science ID 000376784600059
View details for PubMedID 27185915
Underconnectivity between voice-selective cortex and reward circuitry in children with autism
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
2013; 110 (29): 12060-12065
Individuals with autism spectrum disorders (ASDs) often show insensitivity to the human voice, a deficit that is thought to play a key role in communication deficits in this population. The social motivation theory of ASD predicts that impaired function of reward and emotional systems impedes children with ASD from actively engaging with speech. Here we explore this theory by investigating distributed brain systems underlying human voice perception in children with ASD. Using resting-state functional MRI data acquired from 20 children with ASD and 19 age- and intelligence quotient-matched typically developing children, we examined intrinsic functional connectivity of voice-selective bilateral posterior superior temporal sulcus (pSTS). Children with ASD showed a striking pattern of underconnectivity between left-hemisphere pSTS and distributed nodes of the dopaminergic reward pathway, including bilateral ventral tegmental areas and nucleus accumbens, left-hemisphere insula, orbitofrontal cortex, and ventromedial prefrontal cortex. Children with ASD also showed underconnectivity between right-hemisphere pSTS, a region known for processing speech prosody, and the orbitofrontal cortex and amygdala, brain regions critical for emotion-related associative learning. The degree of underconnectivity between voice-selective cortex and reward pathways predicted symptom severity for communication deficits in children with ASD. Our results suggest that weak connectivity of voice-selective cortex and brain structures involved in reward and emotion may impair the ability of children with ASD to experience speech as a pleasurable stimulus, thereby impacting language and social skill development in this population. Our study provides support for the social motivation theory of ASD.
View details for DOI 10.1073/pnas.1302982110
View details for Web of Science ID 000322086100085
View details for PubMedID 23776244
Aberrant Emotional Prosody Circuitry Predicts Social Communication Impairments in Children With Autism.
Biological psychiatry. Cognitive neuroscience and neuroimaging
Emotional prosody provides acoustical cues that reflect a communication partner's emotional state and is crucial for successful social interactions. Many children with autism have deficits in recognizing emotions from voices; however, the neural basis for these impairments is unknown. We examined brain circuit features underlying emotional prosody processing deficits and their relationship to clinical symptoms of autism. We used an event-related functional magnetic resonance imaging task to measure neural activity and connectivity during processing of sad and happy emotional prosody and neutral speech in 22 children with autism and 21 matched control children (7-12 years old). We employed functional connectivity analyses to test competing theoretical accounts that attribute emotional prosody impairments to either sensory processing deficits in auditory cortex or theory of mind deficits instantiated in the temporoparietal junction (TPJ). Children with autism showed specific behavioral impairments for recognizing emotions from voices. They also showed aberrant functional connectivity between voice-sensitive auditory cortex and the bilateral TPJ during emotional prosody processing. Neural activity in the bilateral TPJ during processing of both sad and happy emotional prosody stimuli was associated with social communication impairments in children with autism. In contrast, activity and decoding of emotional prosody in auditory cortex was comparable between autism and control groups and did not predict social communication impairments. Our findings support a social-cognitive deficit model of autism by identifying a role for TPJ dysfunction during emotional prosody processing. Our study underscores the importance of tuning in to vocal-emotional cues for building social connections in children with autism.
View details for DOI 10.1016/j.bpsc.2022.09.016
View details for PubMedID 36635147
Neural decoding of emotional prosody in voice-sensitive auditory cortex predicts social communication abilities in children.
Cerebral Cortex
During social interactions, speakers signal information about their emotional state through their voice, which is known as emotional prosody. Little is known regarding the precise brain systems underlying emotional prosody decoding in children and whether accurate neural decoding of these vocal cues is linked to social skills. Here, we address critical gaps in the developmental literature by investigating neural representations of prosody and their links to behavior in children. Multivariate pattern analysis revealed that representations in the bilateral middle and posterior superior temporal sulcus (STS) divisions of voice-sensitive auditory cortex decode emotional prosody information in children. Crucially, emotional prosody decoding in middle STS was correlated with standardized measures of social communication abilities; more accurate decoding of prosody stimuli in the STS was predictive of greater social communication abilities in children. Moreover, social communication abilities were specifically related to decoding sadness, highlighting the importance of tuning in to negative emotional vocal cues for strengthening social responsiveness and functioning. Findings bridge an important theoretical gap by showing that the ability of the voice-sensitive cortex to detect emotional cues in speech is predictive of a child's social skills, including the ability to relate and interact with others.
View details for DOI 10.1093/cercor/bhac095
View details for PubMedID 35296892
Epidural labour analgesia and autism spectrum disorder: is the current evidence sufficient to dismiss an association?
BRITISH JOURNAL OF ANAESTHESIA
2022; 128 (3): 393-398
Mothers adapt their voice during children's adolescent development.
Scientific Reports
2022; 12 (1): 951
Mothers alter their speech in a stereotypical manner when addressing infants, using high pitch, a wide pitch range, and distinct timbral features. Mothers reduce their vocal pitch after early childhood; however, it is not known whether mother's voice changes through adolescence as children become increasingly independent from their parents. Here we investigate the vocal acoustics of 50 mothers of older children (ages 7-16) to determine: (1) whether pitch changes associated with child-directed speech decrease with age; (2) whether other acoustical features associated with child-directed speech change with age; and (3) the relative contribution of acoustical features in predicting child's age. Results reveal that mothers of older children used lower pitched voices than mothers of younger children, and mother's voice pitch height predicted their child's age. Crucially, these effects were present after controlling for mother's age, accounting for aging-related pitch reductions. Brightness, a timbral feature correlated with pitch height, also showed an inverse relation with child's age but did not improve prediction of child's age beyond that accounted for by pitch height. Other acoustic features did not predict child age. Findings suggest that mother's voice adapts to match her child's developmental progression into adolescence, and that this adaptation is independent of mother's age.
View details for DOI 10.1038/s41598-022-04863-2
View details for PubMedID 35046478
Intrinsic functional architecture of the human speech processing network.
Cortex
2020; 129: 41–56
Speech engages distributed temporo-fronto-parietal brain regions; however, a comprehensive understanding of its intrinsic functional network architecture is lacking. Here we investigate the human speech processing network using the largest sample to date, high temporal resolution resting-state fMRI data, network stability analysis, and theoretically informed models. Network consensus analysis revealed three stable functional modules encompassing: (1) superior temporal plane (STP) and Area Spt, (2) superior temporal sulcus (STS) + ventral frontoparietal cortex, and (3) dorsal frontoparietal cortex. The STS + ventral frontoparietal cortex module showed the highest participation coefficient, and a hub-like organization linking STP with frontoparietal cortical nodes. Node-wise analysis revealed key connectivity features underlying this modular architecture, including a leftward asymmetric connectivity profile, and differential connectivity of STS and STP with frontoparietal cortex. Our findings, replicated across cohorts, reveal a tripartite functional network architecture supporting speech processing and provide a novel template for future studies.
View details for DOI 10.1016/j.cortex.2020.03.013
View details for PubMedID 32428761
A Pivotal Response Treatment Package for Children With Autism Spectrum Disorder: An RCT.
OBJECTIVES: Our aim was to conduct a randomized controlled trial to evaluate a pivotal response treatment package (PRT-P) consisting of parent training and clinician-delivered in-home intervention on the communication skills of children with autism spectrum disorder. METHODS: Forty-eight children with autism spectrum disorder and significant language delay between 2 and 5 years old were randomly assigned to PRT-P (n = 24) or the delayed treatment group (n = 24) for 24 weeks. The effect of treatment on child communication skills was assessed via behavioral coding of parent-child interactions, standardized parent-report measures, and blinded clinician ratings. RESULTS: Analysis of child utterances during the structured laboratory observation revealed that, compared with the delayed treatment group, children in PRT-P demonstrated greater improvement in frequency of functional utterances (F1,41 = 6.07; P = .026; d = 0.61). The majority of parents in the PRT-P group (91%) were able to implement pivotal response treatment (PRT) with fidelity within 24 weeks. Children receiving PRT-P also demonstrated greater improvement on the Brief Observation of Social Communication Change, on the Clinical Global Impressions Improvement subscale, and in number of words used on a parent-report questionnaire. CONCLUSIONS: This is the first 24-week randomized controlled trial in which community treatment is compared with the combination of parent training and clinician-delivered PRT. PRT-P was effective for improving child social communication skills and for teaching parents to implement PRT. Additional research will be needed to understand the optimal combination of treatment settings, intensity, and duration, and to identify child and parent characteristics associated with treatment response.
View details for DOI 10.1542/peds.2019-0178
View details for PubMedID 31387868
Quantitative Analysis of Heterogeneity in Academic Achievement of Children With Autism
CLINICAL PSYCHOLOGICAL SCIENCE
2019; 7 (2): 362–80
The visual word form area (VWFA) is part of both language and attention circuitry.
NATURE COMMUNICATIONS
2019; 10 (1): 5601
While predominant models of visual word form area (VWFA) function argue for its specific role in decoding written language, other accounts propose a more general role of VWFA in complex visual processing. However, a comprehensive examination of structural and functional VWFA circuits and their relationship to behavior has been missing. Here, using high-resolution multimodal imaging data from a large Human Connectome Project cohort (N = 313), we demonstrate robust patterns of VWFA connectivity with both canonical language and attentional networks. Brain-behavior relationships revealed a striking pattern of double dissociation: structural connectivity of VWFA with lateral temporal language network predicted language, but not visuo-spatial attention abilities, while VWFA connectivity with dorsal fronto-parietal attention network predicted visuo-spatial attention, but not language abilities. Our findings support a multiplex model of VWFA function characterized by distinct circuits for integrating language and attention, and point to connectivity-constrained cognition as a key principle of human brain organization.
View details for DOI 10.1038/s41467-019-13634-z
View details for PubMedID 31811149
Neural signatures of co-occurring reading and mathematical difficulties.
Impaired abilities in multiple domains are common in children with learning difficulties. Co-occurrence of low reading and mathematical abilities (LRLM) appears in almost every second child with learning difficulties. However, little is known regarding the neural bases of this combination. Leveraging a unique and tightly controlled sample including children with LRLM, isolated low reading ability (LR), and isolated low mathematical ability (LM), we uncover a distinct neural signature in children with co-occurring low reading and mathematical abilities differentiable from LR and LM. Specifically, we show that LRLM is neuroanatomically distinct from both LR and LM based on reduced cortical folding of the right parahippocampal gyrus, a medial temporal lobe region implicated in visual associative learning. LRLM children were further distinguished from LR and LM by patterns of intrinsic functional connectivity between parahippocampal gyrus and brain circuitry underlying reading and numerical quantity processing. Our results critically inform cognitive and neural models of LRLM by implicating aberrations in both domain-specific and domain-general brain regions involved in reading and mathematics. More generally, our results provide the first evidence for distinct multimodal neural signatures associated with LRLM, and suggest that this population displays an independent phenotype of learning difficulty that cannot be explained simply as a combination of isolated low reading and mathematical abilities.
View details for PubMedID 29920856
Individual Differences in Human Auditory Processing: Insights From Single-Trial Auditory Midbrain Activity in an Animal Model.
Cerebral Cortex
2017; 27 (11): 5095-5115
Auditory-evoked potentials are classically defined as the summations of synchronous firing along the auditory neuraxis. Converging evidence supports a model whereby timing jitter in neural coding compromises listening and causes variable scalp-recorded potentials. Yet the intrinsic noise of human scalp recordings precludes a full understanding of the biological origins of individual differences in listening skills. To delineate the mechanisms contributing to these phenomena, in vivo extracellular activity was recorded from inferior colliculus in guinea pigs to speech in quiet and noise. Here we show that trial-by-trial timing jitter is a mechanism contributing to auditory response variability. Identical variability patterns were observed in scalp recordings in human children, implicating jittered timing as a factor underlying reduced coding of dynamic speech features and speech in noise. Moreover, intertrial variability in human listeners is tied to language development. Together, these findings suggest that variable timing in inferior colliculus blurs the neural coding of speech in noise, and propose a consequence of this timing jitter for human behavior. These results hint both at the mechanisms underlying speech processing in general, and at what may go awry in individuals with listening difficulties.
View details for DOI 10.1093/cercor/bhw293
View details for PubMedID 28334187
View details for PubMedCentralID PMC6410521
Population responses in primary auditory cortex simultaneously represent the temporal envelope and periodicity features in natural speech
HEARING RESEARCH
2017; 348: 31-43
Speech perception relies on a listener's ability to simultaneously resolve multiple temporal features in the speech signal. Little is known regarding neural mechanisms that enable the simultaneous coding of concurrent temporal features in speech. Here we show that two categories of temporal features in speech, the low-frequency speech envelope and periodicity cues, are processed by distinct neural mechanisms within the same population of cortical neurons. We measured population activity in primary auditory cortex of anesthetized guinea pig in response to three variants of a naturally produced sentence. Results show that the envelope of population responses closely tracks the speech envelope, and this cortical activity more closely reflects wider bandwidths of the speech envelope compared to narrow bands. Additionally, neuronal populations represent the fundamental frequency of speech robustly with phase-locked responses. Importantly, these two temporal features of speech are simultaneously observed within neuronal ensembles in auditory cortex in response to clear, conversation, and compressed speech exemplars. Results show that auditory cortical neurons are adept at simultaneously resolving multiple temporal features in extended speech sentences using discrete coding mechanisms.
View details for DOI 10.1016/j.heares.2017.02.010
View details for Web of Science ID 000401204000003
View details for PubMedID 28216125
Brain State Differentiation and Behavioral Inflexibility in Autism.
CEREBRAL CORTEX
2015; 25 (12): 4740-4747
Autism spectrum disorders (ASDs) are characterized by social impairments alongside cognitive and behavioral inflexibility. While social deficits in ASDs have extensively been characterized, the neurobiological basis of inflexibility and its relation to core clinical symptoms of the disorder are unknown. We acquired functional neuroimaging data from 2 cohorts, each consisting of 17 children with ASDs and 17 age- and IQ-matched typically developing (TD) children, during stimulus-evoked brain states involving performance of social attention and numerical problem solving tasks, as well as during intrinsic, resting brain states. Effective connectivity between key nodes of the salience network, default mode network, and central executive network was used to obtain indices of functional organization across evoked and intrinsic brain states. In both cohorts examined, a machine learning algorithm was able to discriminate intrinsic (resting) and evoked (task) functional brain network configurations more accurately in TD children than in children with ASD. Brain state discriminability was related to severity of restricted and repetitive behaviors, indicating that weak modulation of brain states may contribute to behavioral inflexibility in ASD. These findings provide novel evidence for a potential link between neurophysiological inflexibility and core symptoms of this complex neurodevelopmental disorder.
View details for DOI 10.1093/cercor/bhu161
View details for PubMedID 25073720
View details for PubMedCentralID PMC4635916
Neurobiological Underpinnings of Math and Reading Learning Disabilities
JOURNAL OF LEARNING DISABILITIES
2013; 46 (6): 549-569
The primary goal of this review is to highlight current research and theories describing the neurobiological basis of math (MD), reading (RD), and comorbid math and reading disability (MD+RD). We first describe the unique brain and cognitive processes involved in acquisition of math and reading skills, emphasizing similarities and differences in each domain. Next we review functional imaging studies of MD and RD in children, integrating relevant theories from experimental psychology and cognitive neuroscience to characterize the functional neuroanatomy of cognitive dysfunction in MD and RD. We then review recent research on the anatomical correlates of MD and RD. Converging evidence from morphometry and tractography studies are presented to highlight distinct patterns of white matter pathways which are disrupted in MD and RD. Finally, we examine how the intersection of MD and RD provides a unique opportunity to clarify the unique and shared brain systems which adversely impact learning and skill acquisition in MD and RD, and point out important areas for future work on comorbid learning disabilities.
View details for DOI 10.1177/0022219413483174
View details for Web of Science ID 000325479000006
View details for PubMedID 23572008
Reply to Brock: Renewed focus on the voice and social reward in children with autism.
Proceedings of the National Academy of Sciences of the United States of America
2013; 110 (42): E3974-?
View details for PubMedID 24278966
Multivariate Activation and Connectivity Patterns Discriminate Speech Intelligibility in Wernicke's, Broca's, and Geschwind's Areas
2013; 23 (7): 1703-1714
The brain network underlying speech comprehension is usually described as encompassing fronto-temporal-parietal regions, while neuroimaging studies of speech intelligibility have focused on a more spatially restricted network dominated by the superior temporal cortex. Here we use functional magnetic resonance imaging with a novel whole-brain multivariate pattern analysis (MVPA) to more fully characterize neural responses and connectivity to intelligible speech. Consistent with previous univariate findings, intelligible speech elicited greater activity in bilateral superior temporal cortex relative to unintelligible speech. However, MVPA identified a more extensive network that discriminated between intelligible and unintelligible speech, including left-hemisphere middle temporal gyrus, angular gyrus, inferior temporal cortex, and inferior frontal gyrus pars triangularis. These fronto-temporal-parietal areas also showed greater functional connectivity during intelligible, compared with unintelligible, speech. Our results suggest that speech intelligibility is encoded by distinct fine-grained spatial representations and within-task connectivity, rather than differential engagement or disengagement of brain regions, and they provide a more complete view of the brain network serving speech comprehension. Our findings bridge a divide between neural models of speech comprehension and the neuroimaging literature on speech intelligibility, and suggest that speech intelligibility relies on differential multivariate response and connectivity patterns in Wernicke's, Broca's, and Geschwind's areas.
View details for DOI 10.1093/cercor/bhs165
View details for Web of Science ID 000321163700020
View details for PubMedID 22693339
View details for PubMedCentralID PMC3673181
Inter-subject synchronization of brain responses during natural music listening.
European Journal of Neuroscience
2013; 37 (9): 1458-1469
Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic 'real-world' music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudomusical stimuli in which the temporal and spectral structure of the Natural Music condition was disrupted, to non-musician participants undergoing functional brain imaging and analyzing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudomusical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences.
View details for DOI 10.1111/ejn.12173
View details for PubMedID 23578016
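Inter-subject synchronization analyses of the kind described above typically correlate each listener's regional time course with the average of the remaining listeners' time courses. A minimal leave-one-out sketch in numpy — illustrative only; the function name and the toy signal are assumptions, not the paper's implementation:

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out inter-subject correlation (ISC).

    data: array of shape (n_subjects, n_timepoints) holding one
    voxel's (or region's) BOLD time course for each listener.
    Returns the mean correlation between each subject's time course
    and the average of all remaining subjects' time courses."""
    n_subj = data.shape[0]
    iscs = []
    for s in range(n_subj):
        left_out = data[s]
        others = np.delete(data, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(left_out, others)[0, 1])
    return float(np.mean(iscs))

# Toy example: a shared stimulus-driven signal plus subject-specific noise
rng = np.random.default_rng(0)
shared = np.sin(np.linspace(0, 8 * np.pi, 200))
subjects = np.array([shared + 0.5 * rng.standard_normal(200)
                     for _ in range(10)])
isc = intersubject_correlation(subjects)
```

Stimulus-driven regions yield high ISC because the shared component survives averaging, while idiosyncratic noise cancels; regions with no stimulus-locked activity yield ISC near zero.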
Inferior colliculus contributions to phase encoding of stop consonants in an animal model
2011; 282 (1-2): 108-118
The human auditory brainstem is known to be exquisitely sensitive to fine-grained spectro-temporal differences between speech sound contrasts, and the ability of the brainstem to discriminate between these contrasts is important for speech perception. Recent work has described a novel method for translating brainstem timing differences in response to speech contrasts into frequency-specific phase differentials. Results from this method have shown that the human brainstem response is surprisingly sensitive to phase differences inherent to the stimuli across a wide extent of the spectrum. Here we use an animal model of the auditory brainstem to examine whether the stimulus-specific phase signatures measured in human brainstem responses represent an epiphenomenon associated with far-field (i.e., scalp-recorded) measurement of neural activity, or alternatively whether these specific activity patterns are also evident in auditory nuclei that contribute to the scalp-recorded response, thereby representing a more fundamental temporal processing phenomenon. Responses in anaesthetized guinea pigs to three minimally-contrasting consonant-vowel stimuli were collected simultaneously from the cortical surface vertex and directly from central nucleus of the inferior colliculus (ICc), measuring volume conducted neural activity and multiunit, near-field activity, respectively. Guinea pig surface responses were similar to human scalp-recorded responses to identical stimuli in gross morphology as well as phase characteristics. Moreover, surface-recorded potentials shared many phase characteristics with near-field ICc activity. Response phase differences were prominent during formant transition periods, reflecting spectro-temporal differences between syllables, and showed more subtle differences during the identical steady state periods. ICc encoded stimulus distinctions over a broader frequency range, with differences apparent in the highest frequency ranges analyzed, up to 3000 Hz. Based on the similarity of phase encoding across sites, and the consistency and sensitivity of response phase measured within ICc, results suggest that a general property of the auditory system is a high degree of sensitivity to fine-grained phase information inherent to complex acoustical stimuli. Furthermore, results suggest that temporal encoding in ICc contributes to temporal features measured in speech-evoked scalp-recorded responses.
View details for DOI 10.1016/j.heares.2011.09.001
View details for Web of Science ID 000298724300012
View details for PubMedID 21945200
Decoding Temporal Structure in Music and Speech Relies on Shared Brain Resources but Elicits Different Fine-Scale Spatial Patterns
2011; 21 (7): 1507-1518
Music and speech are complex sound streams with hierarchical rules of temporal organization that become elaborated over time. Here, we use functional magnetic resonance imaging to measure brain activity patterns in 20 right-handed nonmusicians as they listened to natural and temporally reordered musical and speech stimuli matched for familiarity, emotion, and valence. Heart rate variability and mean respiration rates were simultaneously measured and were found not to differ between musical and speech stimuli. Although the same manipulation of temporal structure elicited brain activation level differences of similar magnitude for both music and speech stimuli, multivariate classification analysis revealed distinct spatial patterns of brain responses in the 2 domains. Distributed neuronal populations that included the inferior frontal cortex, the posterior and anterior superior and middle temporal gyri, and the auditory brainstem classified temporal structure manipulations in music and speech with significant levels of accuracy. While agreeing with previous findings that music and speech processing share neural substrates, this work shows that temporal structure in the 2 domains is encoded differently, highlighting a fundamental dissimilarity in how the same neural resources are deployed.
View details for DOI 10.1093/cercor/bhq198
View details for Web of Science ID 000291750400005
View details for PubMedID 21071617
View details for PubMedCentralID PMC3116734
A possible role for a paralemniscal auditory pathway in the coding of slow temporal information
2011; 272 (1-2): 125-134
Low-frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons indicated sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code low-frequency temporal information present in acoustic signals. These data suggest that somatosensory and auditory modalities have parallel sub-cortical pathways that separately process slow rates and the spatial representation of the sensory periphery.
View details for DOI 10.1016/j.heares.2010.10.009
View details for Web of Science ID 000288418100014
View details for PubMedID 21094680
Sparse logistic regression for whole-brain classification of fMRI data
2010; 51 (2): 752-764
Multivariate pattern recognition methods are increasingly being used to identify multiregional brain activity patterns that collectively discriminate one cognitive condition or experimental group from another, using fMRI data. The performance of these methods is often limited because the number of regions considered in the analysis of fMRI data is large compared to the number of observations (trials or participants). Existing methods that aim to tackle this dimensionality problem are less than optimal because they either over-fit the data or are computationally intractable. Here, we describe a novel method based on logistic regression using a combination of L1 and L2 norm regularization that more accurately estimates discriminative brain regions across multiple conditions or groups. The L1 norm, computed using a fast estimation procedure, ensures a fast, sparse and generalizable solution; the L2 norm ensures that correlated brain regions are included in the resulting solution, a critical aspect of fMRI data analysis often overlooked by existing methods. We first evaluate the performance of our method on simulated data and then examine its effectiveness in discriminating between well-matched music and speech stimuli. We also compare our method with others that use either L1-norm regularization alone or support vector machine-based feature elimination. On simulated data, our method performed significantly better than existing methods across a wide range of contrast-to-noise ratios and feature prevalence rates. On experimental fMRI data, our method was more effective in selectively isolating a distributed fronto-temporal network that distinguished between brain regions known to be involved in speech and music processing. These findings suggest that our method is not only computationally efficient but also achieves the twin objectives of identifying relevant discriminative brain regions and accurately classifying fMRI data.
View details for DOI 10.1016/j.neuroimage.2010.02.040
View details for Web of Science ID 000277141200026
View details for PubMedID 20188193
View details for PubMedCentralID PMC2856747
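The combined L1 + L2 (elastic-net) logistic regression described above can be sketched with proximal gradient descent, where the smooth logistic-loss-plus-L2 part is handled by an ordinary gradient step and the L1 part by soft-thresholding. This is a simplified illustration of the general technique, not the authors' fast estimation procedure; the function name, hyperparameters, and toy data are assumptions:

```python
import numpy as np

def elastic_net_logistic(X, y, l1=0.1, l2=0.1, lr=0.1, n_iter=500):
    """Logistic regression with combined L1 + L2 (elastic-net) penalties,
    fit by proximal gradient descent.
    X: (n_samples, n_features); y: labels in {0, 1}.
    Returns the weight vector; exact zeros are features pruned by L1."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted probabilities
        grad = X.T @ (p - y) / n + l2 * w          # smooth part: loss + L2
        w = w - lr * grad
        # proximal (soft-thresholding) step for the L1 penalty
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)
    return w

# Toy problem: 20 "voxels", only the first 2 carry class information
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
true_w = np.zeros(20)
true_w[:2] = [2.0, -2.0]
y = (X @ true_w + 0.5 * rng.standard_normal(200) > 0).astype(float)
w = elastic_net_logistic(X, y)
```

The L1 term drives the weights of uninformative features to exactly zero (sparse feature selection), while the L2 term keeps the solution stable when informative features are correlated, which is the combination the abstract argues matters for fMRI data.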
Rapid acoustic processing in the auditory brainstem is not related to cortical asymmetry for the syllable rate of speech
Clinical Neurophysiology
2010; 121: 1343-1350
Abnormal Cortical Processing of the Syllable Rate of Speech in Poor Readers
JOURNAL OF NEUROSCIENCE
2009; 29 (24): 7686-7693
Children with reading impairments have long been known to show impaired perception of rapidly presented acoustic stimuli, and deficits for slower acoustic features have recently been demonstrated as well. It is not known whether impairments for low-frequency acoustic features negatively impact processing of speech in reading-impaired individuals. Here we provide neurophysiological evidence that poor readers have impaired representation of the speech envelope, the acoustical cue that provides syllable pattern information in speech. We measured cortical-evoked potentials in response to sentence stimuli and found that good readers showed consistent right-hemisphere dominance in auditory cortex for all measures of speech envelope representation, including the precision, timing, and magnitude of cortical responses. Poor readers showed abnormal patterns of cerebral asymmetry for all measures of speech envelope representation. Moreover, cortical measures of speech envelope representation predicted up to 41% of the variability in standardized reading scores and 50% in measures of phonological processing across a wide range of abilities. Our findings strongly support a relationship between acoustic-level processing and higher-level language abilities, and are the first to link reading ability with cortical processing of low-frequency acoustic features in the speech signal. Our results also support the hypothesis that asymmetric routing between cerebral hemispheres represents an important mechanism for temporal encoding in the human auditory system, and the need for an expansion of the temporal processing hypothesis for reading disabilities to encompass impairments for a wider range of speech features than previously acknowledged.
View details for DOI 10.1523/JNEUROSCI.5242-08.2009
View details for Web of Science ID 000267131000008
View details for PubMedID 19535580
Relating Structure to Function: Heschl's Gyrus and Acoustic Processing
JOURNAL OF NEUROSCIENCE
2009; 29 (1): 61-69
The way in which normal variations in human neuroanatomy relate to brain function remains largely uninvestigated. This study addresses the question by relating anatomical measurements of Heschl's gyrus (HG), the structure containing human primary auditory cortex, to how this region processes temporal and spectral acoustic information. In this study, subjects' right and left HG were identified and manually delineated on anatomical magnetic resonance imaging scans. Volumes of gray matter, white matter, and total gyrus were recorded, and asymmetry indices were calculated. Additionally, cortical auditory activity in response to noise stimuli varying orthogonally in temporal and spectral dimensions was assessed and related to the volumetric measurements. A high degree of anatomical variability was seen, consistent with other reports in the literature. The auditory cortical responses showed the expected leftward lateralization to varying rates of stimulus change and rightward lateralization of increasing spectral information. An explicit link between auditory structure and function is then established, in which anatomical variability of auditory cortex is shown to relate to individual differences in the way that cortex processes acoustic information. Specifically, larger volumes of left HG were associated with larger extents of rate-related cortex on the left, and larger volumes of right HG were associated with larger extents of spectral-related cortex on the right. This finding is discussed in relation to known microanatomical asymmetries of HG, including increased myelination of its fibers, and implications for language learning are considered.
View details for DOI 10.1523/JNEUROSCI.3489-08.2009
View details for Web of Science ID 000262298200008
View details for PubMedID 19129385
Right-hemisphere auditory cortex is dominant for coding syllable patterns in speech
JOURNAL OF NEUROSCIENCE
2008; 28 (15): 3958-3965
Cortical analysis of speech has long been considered the domain of left-hemisphere auditory areas. A recent hypothesis posits that cortical processing of acoustic signals, including speech, is mediated bilaterally based on the component rates inherent to the speech signal. In support of this hypothesis, previous studies have shown that slow temporal features (3-5 Hz) in nonspeech acoustic signals lateralize to right-hemisphere auditory areas, whereas rapid temporal features (20-50 Hz) lateralize to the left hemisphere. These results were obtained using nonspeech stimuli, and it is not known whether right-hemisphere auditory cortex is dominant for coding the slow temporal features in speech known as the speech envelope. Here we show strong right-hemisphere dominance for coding the speech envelope, which represents syllable patterns and is critical for normal speech perception. Right-hemisphere auditory cortex was 100% more accurate in following contours of the speech envelope and had a 33% larger response magnitude while following the envelope compared with the left hemisphere. Asymmetries were evident regardless of the ear of stimulation despite dominance of contralateral connections in ascending auditory pathways. Results provide evidence that the right hemisphere plays a specific and important role in speech processing and support the hypothesis that acoustic processing of speech involves the decomposition of the signal into constituent temporal features by rate-specialized neurons in right- and left-hemisphere auditory cortex.
View details for DOI 10.1523/JNEUROSCI.0187-08.2008
View details for Web of Science ID 000255012400015
View details for PubMedID 18400895
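The speech envelope discussed in this abstract is the slow amplitude contour of the waveform that carries syllable patterns. A common family of extraction methods rectifies the signal and then low-pass filters it; the numpy-only sketch below substitutes a moving-average window for a proper low-pass filter (published work often uses a Hilbert transform instead), so it is illustrative rather than the method used in the paper:

```python
import numpy as np

def speech_envelope(signal, fs, cutoff_hz=10.0):
    """Crude speech-envelope extraction: full-wave rectify, then
    smooth with a moving-average window roughly one cutoff period
    long. A stand-in for the rectify-and-low-pass-filter approach;
    real pipelines use a proper filter or the Hilbert transform."""
    rectified = np.abs(signal)
    win = int(fs / cutoff_hz)            # window length in samples
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# Toy "syllables": a 4 Hz amplitude modulation of a 200 Hz carrier
fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)
modulation = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # slow envelope
carrier = np.sin(2 * np.pi * 200 * t)                # fast fine structure
env = speech_envelope(modulation * carrier, fs)
```

The recovered `env` tracks the slow 4 Hz modulation while discarding the 200 Hz fine structure, which is exactly the separation between envelope and fine structure that the lateralization argument above relies on.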
Sensory-based learning disability: Insights from brainstem processing of speech sounds
INTERNATIONAL JOURNAL OF AUDIOLOGY
2007; 46 (9): 524-532
Speech-evoked auditory brainstem responses (speech-ABR) provide a reliable marker of learning disability in a substantial subgroup of individuals with language-based learning problems (LDs). Here we review work describing the properties of the speech-ABR in typically developing children and in children with LD. We also review studies on the relationships between speech-ABR and the commonly used click-ABR, and between speech-ABR and auditory processing at the level of the cortex. In a critical examination of previously published data, we conclude that as many as 40% of LDs have abnormal speech-ABRs and that these individuals are also likely to exhibit abnormal cortical processing. Yet the profile of learning problems these individuals exhibit is nonspecific. Leaving open the question of causality, these data suggest that speech-ABR can be used to identify a large sub-population of LDs, those with abnormal auditory physiological function. Further studies are required to determine the functional relationships among abnormal speech-ABR, speech perception, and the pattern of literacy-related and cognitive deficits in LD.
View details for DOI 10.1080/14992020701383035
View details for Web of Science ID 000250278200007
View details for PubMedID 17828668
Auditory brainstem timing predicts cerebral asymmetry for speech
JOURNAL OF NEUROSCIENCE
2006; 26 (43): 11131-11137
The left hemisphere of the human cerebral cortex is dominant for processing rapid acoustic stimuli, including speech, and this specialized activity is preceded by processing in the auditory brainstem. It is not known to what extent the integrity of brainstem encoding of speech impacts patterns of asymmetry at cortex. Here, we demonstrate that the precision of temporal encoding of speech in auditory brainstem predicts cerebral asymmetry for speech sounds measured in a group of children spanning a range of language skills. Results provide strong evidence that timing deficits measured at the auditory brainstem negatively impact rapid acoustic processing by specialized structures of cortex, and demonstrate a delicate relationship between cortical activation patterns and the temporal integrity of cortical input.
View details for DOI 10.1523/JNEUROSCI.2744-06.2006
View details for Web of Science ID 000241553900020
View details for PubMedID 17065453