Speech is a critical communication signal for the development of social skills and language function. Autism spectrum disorders affect 1 in 88 school-age children and are characterized by deficits in social communication and language skills; many of these individuals also experience speech perception difficulties. My primary research goals are to understand the brain bases of social communication and language impairments in children with ASD, and to describe neural changes associated with remediation of these behavioral deficits. The theoretical framework that motivates my work is that impaired perception and neural decoding of speech impact social skill and language development in many children with ASD. Moreover, I believe that a grasp of these relationships is central to understanding the etiology of these disorders and will provide insight into their remediation.
I have initiated a program of research to further our understanding of auditory brain function serving key elements of speech perception in children with ASD. The first study produced by this program of research was recently published in the Proceedings of the National Academy of Sciences and shows that children with ASD have weak brain connectivity between voice-selective regions of cortex and the distributed reward circuit and amygdala. Moreover, the strength of these speech-reward brain connections predicts social communication abilities in these children. These results provide novel support for the hypothesis that deficits in representing the reward value of social stimuli, including speech, impede children with ASD from actively engaging with these stimuli and consequently impair social skill development.

My future research will leverage this finding by probing this aberrant brain circuit in detailed explorations of speech perception in children with ASD. An important component of my future research is to explore neural plasticity associated with training programs designed to ameliorate social communication deficits in children with ASD, with a focus on the speech-reward brain circuit identified in my recent publication.

In addition to my interest in studying social communication and language impairments in children with ASD, my research program also includes investigating the relationship between speech perception impairments and phonological and reading difficulties in children with reading disorders (RD). This work is a continuation of my dissertation work, which examined neural decoding of temporal features in speech in children with RD.
Clinical Assistant Professor, Psychiatry and Behavioral Sciences
Honors & Awards
CHRI Pilot Early Career Award, Lucile Packard Foundation for Children’s Health (2017)
K01 Research Scientist Development Award, NIH/NIMH (2014-2017)
Postdoctoral National Research Service Award, NIH/NIDCD (2010-2012)
Independence Blue Cross Grant in Auditory Science Award, National Organization for Hearing Research Foundation (2006)
Research Training in Neuroscience, NIH/NIDCD (2002-2003)
Graduate Fellowship, Northwestern University (2000-2001)
Ph.D., Northwestern University, Auditory Cognitive Neuroscience (2008)
B.F.A., University of Arizona (1994)
Current Research and Scholarly Interests
Language impairments affect up to 19% of school-age children, and these deficits are predictive of long-term problems affecting learning, academic achievement, and behavior. My primary research goal is to understand the neurobiological foundations of language impairments. Specifically, I am interested in how the perception and neural coding of speech impact language and other behavioral deficits in children, with a focus on children with reading disabilities (RD) and autism spectrum disorders (ASD). The theoretical framework that motivates my work is that impaired perception and neural decoding of speech are causally related to language deficits in many affected children. Moreover, I believe that a grasp of these relationships is central to understanding the etiology of these disorders and will provide insight into their remediation.
Temporal Features of Speech in the Reading-Impaired Auditory System: Speech contains a number of temporal features that are important for perception. Some of these temporal features are relatively slow, such as the patterns of syllables in speech, and some are much faster and enable us to discriminate the word gab from dab. My dissertation work was the first to show that cortical processing of syllable patterns is related to phonological and reading impairments in children with reading disorders. Additionally, this work identified a relationship between brainstem timing and leftward cortical asymmetry for processing rapid elements of speech in both normal and reading-impaired children. A fundamental question raised by my dissertation work is why the auditory system preferentially routes slow speech features to right-hemisphere auditory cortex and rapid features to the left hemisphere. A plausible explanation is that lateralized brain structures beyond auditory cortex may facilitate the discrimination of these temporal features. As part of my postdoctoral research at Stanford, I am examining this question with research funded by a postdoctoral NRSA from NIH/NIDCD. I am collecting fMRI data while unimpaired adults attend to both slow and rapid speech features as a means of identifying brain structures that accurately discriminate temporal information within these two time ranges. Results will provide new information regarding extended brain networks that facilitate the discrimination of slow and fast temporal features of speech, and may provide clues as to why it is advantageous for the auditory system to route this temporal information in an asymmetric manner. Importantly, I plan to use this novel experimental design to examine individuals with RD to gain a deeper understanding of the brain networks underlying impaired temporal processing in the RD auditory system.
The Neural Basis of Phonological Processing and Speaker Identity in the Autistic Brain: Impaired phonological processing and abnormal perception of human voice are two critical, yet understudied, aspects of language and social impairments in children with ASD. Despite the prevalence and adverse impact of these deficits, the brain mechanisms underlying these phenomena have received surprisingly little experimental investigation, particularly in children with ASD. I have initiated a study to further our understanding of basic auditory function underlying decoding of phonological content (what is being said) and speaker identity (who is saying it) in children with ASD, compared to typically developing children matched on age and language ability. This study is funded with an R21 from NIH/NIDCD that I co-wrote with my postdoctoral mentor, Dr. Vinod Menon.
Neural circuits underlying mother's voice perception predict social communication abilities in children
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
2016; 113 (22): 6295-6300
The human voice is a critical social cue, and listeners are extremely sensitive to the voices in their environment. One of the most salient voices in a child's life is mother's voice: Infants discriminate their mother's voice from the first days of life, and this stimulus is associated with guiding emotional and social function during development. Little is known regarding the functional circuits that are selectively engaged in children by biologically salient voices such as mother's voice or whether this brain activity is related to children's social communication abilities. We used functional MRI to measure brain activity in 24 healthy children (mean age, 10.2 y) while they attended to brief (<1 s) nonsense words produced by their biological mother and two female control voices and explored relationships between speech-evoked neural activity and social function. Compared to female control voices, mother's voice elicited greater activity in primary auditory regions in the midbrain and cortex; voice-selective superior temporal sulcus (STS); the amygdala, which is crucial for processing of affect; nucleus accumbens and orbitofrontal cortex of the reward circuit; anterior insula and cingulate of the salience network; and a subregion of fusiform gyrus associated with face perception. The strength of brain connectivity between voice-selective STS and reward, affective, salience, memory, and face-processing regions during mother's voice perception predicted social communication skills. Our findings provide a novel neurobiological template for investigation of typical social development as well as clinical disorders, such as autism, in which perception of biologically and socially salient voices may be impaired.
DOI: 10.1073/pnas.1602948113
Web of Science ID: 000376784600059
PubMedID: 27185915
Underconnectivity between voice-selective cortex and reward circuitry in children with autism
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
2013; 110 (29): 12060-12065
Individuals with autism spectrum disorders (ASDs) often show insensitivity to the human voice, a deficit that is thought to play a key role in communication deficits in this population. The social motivation theory of ASD predicts that impaired function of reward and emotional systems impedes children with ASD from actively engaging with speech. Here we explore this theory by investigating distributed brain systems underlying human voice perception in children with ASD. Using resting-state functional MRI data acquired from 20 children with ASD and 19 age- and intelligence quotient-matched typically developing children, we examined intrinsic functional connectivity of voice-selective bilateral posterior superior temporal sulcus (pSTS). Children with ASD showed a striking pattern of underconnectivity between left-hemisphere pSTS and distributed nodes of the dopaminergic reward pathway, including bilateral ventral tegmental areas and nucleus accumbens, left-hemisphere insula, orbitofrontal cortex, and ventromedial prefrontal cortex. Children with ASD also showed underconnectivity between right-hemisphere pSTS, a region known for processing speech prosody, and the orbitofrontal cortex and amygdala, brain regions critical for emotion-related associative learning. The degree of underconnectivity between voice-selective cortex and reward pathways predicted symptom severity for communication deficits in children with ASD. Our results suggest that weak connectivity of voice-selective cortex and brain structures involved in reward and emotion may impair the ability of children with ASD to experience speech as a pleasurable stimulus, thereby impacting language and social skill development in this population. Our study provides support for the social motivation theory of ASD.
DOI: 10.1073/pnas.1302982110
Web of Science ID: 000322086100085
PubMedID: 23776244
Brain State Differentiation and Behavioral Inflexibility in Autism
CEREBRAL CORTEX
2015; 25 (12): 4740-4747
Autism spectrum disorders (ASDs) are characterized by social impairments alongside cognitive and behavioral inflexibility. While social deficits in ASDs have extensively been characterized, the neurobiological basis of inflexibility and its relation to core clinical symptoms of the disorder are unknown. We acquired functional neuroimaging data from 2 cohorts, each consisting of 17 children with ASDs and 17 age- and IQ-matched typically developing (TD) children, during stimulus-evoked brain states involving performance of social attention and numerical problem solving tasks, as well as during intrinsic, resting brain states. Effective connectivity between key nodes of the salience network, default mode network, and central executive network was used to obtain indices of functional organization across evoked and intrinsic brain states. In both cohorts examined, a machine learning algorithm was able to discriminate intrinsic (resting) and evoked (task) functional brain network configurations more accurately in TD children than in children with ASD. Brain state discriminability was related to severity of restricted and repetitive behaviors, indicating that weak modulation of brain states may contribute to behavioral inflexibility in ASD. These findings provide novel evidence for a potential link between neurophysiological inflexibility and core symptoms of this complex neurodevelopmental disorder.
DOI: 10.1093/cercor/bhu161
PubMedID: 25073720
PubMedCentralID: PMC4635916
Neurobiological Underpinnings of Math and Reading Learning Disabilities
JOURNAL OF LEARNING DISABILITIES
2013; 46 (6): 549-569
The primary goal of this review is to highlight current research and theories describing the neurobiological basis of math (MD), reading (RD), and comorbid math and reading disability (MD+RD). We first describe the unique brain and cognitive processes involved in acquisition of math and reading skills, emphasizing similarities and differences in each domain. Next we review functional imaging studies of MD and RD in children, integrating relevant theories from experimental psychology and cognitive neuroscience to characterize the functional neuroanatomy of cognitive dysfunction in MD and RD. We then review recent research on the anatomical correlates of MD and RD. Converging evidence from morphometry and tractography studies are presented to highlight distinct patterns of white matter pathways which are disrupted in MD and RD. Finally, we examine how the intersection of MD and RD provides a unique opportunity to clarify the unique and shared brain systems which adversely impact learning and skill acquisition in MD and RD, and point out important areas for future work on comorbid learning disabilities.
DOI: 10.1177/0022219413483174
Web of Science ID: 000325479000006
PubMedID: 23572008
Reply to Brock: Renewed focus on the voice and social reward in children with autism.
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
2013; 110 (42): E3974-?
PubMedID: 24278966
Multivariate Activation and Connectivity Patterns Discriminate Speech Intelligibility in Wernicke's, Broca's, and Geschwind's Areas
CEREBRAL CORTEX
2013; 23 (7): 1703-1714
The brain network underlying speech comprehension is usually described as encompassing fronto-temporal-parietal regions, while neuroimaging studies of speech intelligibility have focused on a more spatially restricted network dominated by the superior temporal cortex. Here we use functional magnetic resonance imaging with a novel whole-brain multivariate pattern analysis (MVPA) to more fully characterize neural responses and connectivity to intelligible speech. Consistent with previous univariate findings, intelligible speech elicited greater activity in bilateral superior temporal cortex relative to unintelligible speech. However, MVPA identified a more extensive network that discriminated between intelligible and unintelligible speech, including left-hemisphere middle temporal gyrus, angular gyrus, inferior temporal cortex, and inferior frontal gyrus pars triangularis. These fronto-temporal-parietal areas also showed greater functional connectivity during intelligible, compared with unintelligible, speech. Our results suggest that speech intelligibility is encoded by distinct fine-grained spatial representations and within-task connectivity, rather than differential engagement or disengagement of brain regions, and they provide a more complete view of the brain network serving speech comprehension. Our findings bridge a divide between neural models of speech comprehension and the neuroimaging literature on speech intelligibility, and suggest that speech intelligibility relies on differential multivariate response and connectivity patterns in Wernicke's, Broca's, and Geschwind's areas.
DOI: 10.1093/cercor/bhs165
Web of Science ID: 000321163700020
PubMedID: 22693339
Inter-subject synchronization of brain responses during natural music listening.
EUROPEAN JOURNAL OF NEUROSCIENCE
2013; 37 (9): 1458-1469
Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic 'real-world' music stimuli. We examined this question by presenting extended excerpts of symphonic music and two pseudo-musical stimuli in which the temporal and spectral structure of the Natural Music condition was disrupted, to non-musician participants undergoing functional brain imaging, and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences.
DOI: 10.1111/ejn.12173
PubMedID: 23578016
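The inter-subject synchronization analysis described in this abstract can be illustrated with a short NumPy sketch. This is a toy on simulated data, not the study's actual pipeline: it uses a leave-one-out averaging scheme, one common variant of inter-subject correlation, and all signal and noise parameters here are invented for illustration.

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out inter-subject correlation (ISC) for one region.
    data: array of shape (n_subjects, n_timepoints). Each subject's
    time course is correlated with the average of all other subjects'."""
    n = data.shape[0]
    iscs = []
    for s in range(n):
        others = np.delete(data, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(data[s], others)[0, 1])
    return np.array(iscs)

# Toy data: a shared stimulus-driven signal plus subject-specific noise,
# mimicking listeners tracking the same extended musical excerpt.
rng = np.random.default_rng(1)
shared = rng.standard_normal(200)
listeners = shared + 0.5 * rng.standard_normal((10, 200))  # shared structure
scrambled = rng.standard_normal((10, 200))                 # no shared signal

print(intersubject_correlation(listeners).mean())  # substantially positive
print(intersubject_correlation(scrambled).mean())  # near zero
```

Only time courses that track the common stimulus yield high ISC values, which is why disrupting the stimulus's temporal structure (as in the pseudo-musical conditions) reduces synchronization.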
Inferior colliculus contributions to phase encoding of stop consonants in an animal model
HEARING RESEARCH
2011; 282 (1-2): 108-118
The human auditory brainstem is known to be exquisitely sensitive to fine-grained spectro-temporal differences between speech sound contrasts, and the ability of the brainstem to discriminate between these contrasts is important for speech perception. Recent work has described a novel method for translating brainstem timing differences in response to speech contrasts into frequency-specific phase differentials. Results from this method have shown that the human brainstem response is surprisingly sensitive to phase differences inherent to the stimuli across a wide extent of the spectrum. Here we use an animal model of the auditory brainstem to examine whether the stimulus-specific phase signatures measured in human brainstem responses represent an epiphenomenon associated with far-field (i.e., scalp-recorded) measurement of neural activity, or alternatively whether these specific activity patterns are also evident in auditory nuclei that contribute to the scalp-recorded response, thereby representing a more fundamental temporal processing phenomenon. Responses in anaesthetized guinea pigs to three minimally-contrasting consonant-vowel stimuli were collected simultaneously from the cortical surface vertex and directly from central nucleus of the inferior colliculus (ICc), measuring volume conducted neural activity and multiunit, near-field activity, respectively. Guinea pig surface responses were similar to human scalp-recorded responses to identical stimuli in gross morphology as well as phase characteristics. Moreover, surface-recorded potentials shared many phase characteristics with near-field ICc activity. Response phase differences were prominent during formant transition periods, reflecting spectro-temporal differences between syllables, and showed more subtle differences during the identical steady state periods. ICc encoded stimulus distinctions over a broader frequency range, with differences apparent in the highest frequency ranges analyzed, up to 3000 Hz. 
Based on the similarity of phase encoding across sites, and the consistency and sensitivity of response phase measured within ICc, results suggest that a general property of the auditory system is a high degree of sensitivity to fine-grained phase information inherent to complex acoustical stimuli. Furthermore, results suggest that temporal encoding in ICc contributes to temporal features measured in speech-evoked scalp-recorded responses.
DOI: 10.1016/j.heares.2011.09.001
Web of Science ID: 000298724300012
PubMedID: 21945200
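The frequency-specific phase differentials referenced in this abstract can be illustrated with a cross-spectrum computation: a fixed timing difference between two responses appears as a phase lag that grows linearly with frequency. The following is a minimal NumPy sketch on synthetic tones; the sampling rate, tone frequency, and 1 ms delay are invented for illustration and are not the study's stimulus parameters.

```python
import numpy as np

def phase_differential(x, y, fs):
    """Frequency-specific phase difference between two responses,
    taken from the phase angle of the cross-spectrum.
    Returns frequencies (Hz) and phase lag (radians) per bin."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    phase = np.angle(X * np.conj(Y))  # positive when x leads y
    return freqs, phase

# Toy check: a 100 Hz tone and a copy delayed by 1 ms should show a
# phase lag of 2*pi*100*0.001 (about 0.63 rad) at the 100 Hz bin.
fs = 10000
t = np.arange(0, 1, 1 / fs)
a = np.sin(2 * np.pi * 100 * t)
b = np.sin(2 * np.pi * 100 * (t - 0.001))  # delayed copy
freqs, phase = phase_differential(a, b, fs)
idx = np.argmin(np.abs(freqs - 100))
print(phase[idx])
```

The same logic, applied band by band to evoked responses to contrasting syllables, is how timing differences can be re-expressed as phase signatures across the spectrum.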
Decoding Temporal Structure in Music and Speech Relies on Shared Brain Resources but Elicits Different Fine-Scale Spatial Patterns
CEREBRAL CORTEX
2011; 21 (7): 1507-1518
Music and speech are complex sound streams with hierarchical rules of temporal organization that become elaborated over time. Here, we use functional magnetic resonance imaging to measure brain activity patterns in 20 right-handed nonmusicians as they listened to natural and temporally reordered musical and speech stimuli matched for familiarity, emotion, and valence. Heart rate variability and mean respiration rates were simultaneously measured and were found not to differ between musical and speech stimuli. Although the same manipulation of temporal structure elicited brain activation level differences of similar magnitude for both music and speech stimuli, multivariate classification analysis revealed distinct spatial patterns of brain responses in the 2 domains. Distributed neuronal populations that included the inferior frontal cortex, the posterior and anterior superior and middle temporal gyri, and the auditory brainstem classified temporal structure manipulations in music and speech with significant levels of accuracy. While agreeing with previous findings that music and speech processing share neural substrates, this work shows that temporal structure in the 2 domains is encoded differently, highlighting a fundamental dissimilarity in how the same neural resources are deployed.
DOI: 10.1093/cercor/bhq198
Web of Science ID: 000291750400005
PubMedID: 21071617
A possible role for a paralemniscal auditory pathway in the coding of slow temporal information
HEARING RESEARCH
2011; 272 (1-2): 125-134
Low-frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons indicated sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code low-frequency temporal information present in acoustic signals. These data suggest that somatosensory and auditory modalities have parallel sub-cortical pathways that separately process slow rates and the spatial representation of the sensory periphery.
DOI: 10.1016/j.heares.2010.10.009
Web of Science ID: 000288418100014
PubMedID: 21094680
Sparse logistic regression for whole-brain classification of fMRI data
NEUROIMAGE
2010; 51 (2): 752-764
Multivariate pattern recognition methods are increasingly being used to identify multiregional brain activity patterns that collectively discriminate one cognitive condition or experimental group from another, using fMRI data. The performance of these methods is often limited because the number of regions considered in the analysis of fMRI data is large compared to the number of observations (trials or participants). Existing methods that aim to tackle this dimensionality problem are less than optimal because they either over-fit the data or are computationally intractable. Here, we describe a novel method based on logistic regression using a combination of L1 and L2 norm regularization that more accurately estimates discriminative brain regions across multiple conditions or groups. The L1 norm, computed using a fast estimation procedure, ensures a fast, sparse and generalizable solution; the L2 norm ensures that correlated brain regions are included in the resulting solution, a critical aspect of fMRI data analysis often overlooked by existing methods. We first evaluate the performance of our method on simulated data and then examine its effectiveness in discriminating between well-matched music and speech stimuli. We also compare our procedure with other methods that use either L1-norm regularization alone or support vector machine-based feature elimination. On simulated data, our methods performed significantly better than existing methods across a wide range of contrast-to-noise ratios and feature prevalence rates. On experimental fMRI data, our methods were more effective in selectively isolating a distributed fronto-temporal network that distinguished between brain regions known to be involved in speech and music processing. These findings suggest that our method is not only computationally efficient but also achieves the twin objectives of identifying relevant discriminative brain regions and accurately classifying fMRI data.
DOI: 10.1016/j.neuroimage.2010.02.040
Web of Science ID: 000277141200026
PubMedID: 20188193
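The combined L1/L2 (elastic-net) logistic regression described in this abstract can be sketched with a simple proximal-gradient loop in NumPy. This is an illustrative toy under invented data and penalty settings, not the authors' fast estimation procedure: the L1 penalty enters through a soft-thresholding step, which produces sparsity, while the L2 penalty is folded into the smooth gradient.

```python
import numpy as np

def elastic_net_logistic(X, y, l1=0.1, l2=0.1, lr=0.1, n_iter=500):
    """Proximal-gradient (ISTA-style) fit of logistic regression with
    combined penalties: mean logistic loss + l1*||w||_1 + (l2/2)*||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        grad = X.T @ (p - y) / n + l2 * w          # gradient of smooth part
        w = w - lr * grad
        # soft-thresholding: proximal operator of the L1 norm
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)
    return w

# Toy "many features, few observations" problem: only 3 of 50 features
# carry signal, mimicking a small set of discriminative brain regions.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 50))
true_w = np.zeros(50)
true_w[:3] = 2.0
y = (X @ true_w > 0).astype(float)

w = elastic_net_logistic(X, y)
print(np.count_nonzero(w), "of", w.size, "weights survive")
```

The L1 term zeroes out most uninformative features, while the L2 term spreads weight across correlated informative features instead of arbitrarily picking one, which is the property the abstract highlights for fMRI data.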
Rapid acoustic processing in the auditory brainstem is not related to cortical asymmetry for the syllable rate of speech
CLINICAL NEUROPHYSIOLOGY
2010; 121: 1343-1350
Abnormal Cortical Processing of the Syllable Rate of Speech in Poor Readers
JOURNAL OF NEUROSCIENCE
2009; 29 (24): 7686-7693
Children with reading impairments have long been known to show impaired perception for rapidly presented acoustic stimuli, and have more recently been shown to have deficits for slower acoustic features as well. It is not known whether impairments for low-frequency acoustic features negatively impact processing of speech in reading-impaired individuals. Here we provide neurophysiological evidence that poor readers have impaired representation of the speech envelope, the acoustical cue that provides syllable pattern information in speech. We measured cortical-evoked potentials in response to sentence stimuli and found that good readers indicated consistent right-hemisphere dominance in auditory cortex for all measures of speech envelope representation, including the precision, timing, and magnitude of cortical responses. Poor readers showed abnormal patterns of cerebral asymmetry for all measures of speech envelope representation. Moreover, cortical measures of speech envelope representation predicted up to 41% of the variability in standardized reading scores and 50% in measures of phonological processing across a wide range of abilities. Our findings strongly support a relationship between acoustic-level processing and higher-level language abilities, and are the first to link reading ability with cortical processing of low-frequency acoustic features in the speech signal. Our results also support the hypothesis that asymmetric routing between cerebral hemispheres represents an important mechanism for temporal encoding in the human auditory system, and the need for an expansion of the temporal processing hypothesis for reading disabilities to encompass impairments for a wider range of speech features than previously acknowledged.
DOI: 10.1523/JNEUROSCI.5242-08.2009
Web of Science ID: 000267131000008
PubMedID: 19535580
Relating Structure to Function: Heschl's Gyrus and Acoustic Processing
JOURNAL OF NEUROSCIENCE
2009; 29 (1): 61-69
The way in which normal variations in human neuroanatomy relate to brain function remains largely uninvestigated. This study addresses the question by relating anatomical measurements of Heschl's gyrus (HG), the structure containing human primary auditory cortex, to how this region processes temporal and spectral acoustic information. In this study, subjects' right and left HG were identified and manually indicated on anatomical magnetic resonance imaging scans. Volumes of gray matter, white matter, and total gyrus were recorded, and asymmetry indices were calculated. Additionally, cortical auditory activity in response to noise stimuli varying orthogonally in temporal and spectral dimensions was assessed and related to the volumetric measurements. A high degree of anatomical variability was seen, consistent with other reports in the literature. The auditory cortical responses showed the expected leftward lateralization to varying rates of stimulus change and rightward lateralization of increasing spectral information. An explicit link between auditory structure and function is then established, in which anatomical variability of auditory cortex is shown to relate to individual differences in the way that cortex processes acoustic information. Specifically, larger volumes of left HG were associated with larger extents of rate-related cortex on the left, and larger volumes of right HG related to larger extents of spectral-related cortex on the right. This finding is discussed in relation to known microanatomical asymmetries of HG, including increased myelination of its fibers, and implications for language learning are considered.
DOI: 10.1523/JNEUROSCI.3489-08.2009
Web of Science ID: 000262298200008
PubMedID: 19129385
Right-hemisphere auditory cortex is dominant for coding syllable patterns in speech
JOURNAL OF NEUROSCIENCE
2008; 28 (15): 3958-3965
Cortical analysis of speech has long been considered the domain of left-hemisphere auditory areas. A recent hypothesis poses that cortical processing of acoustic signals, including speech, is mediated bilaterally based on the component rates inherent to the speech signal. In support of this hypothesis, previous studies have shown that slow temporal features (3-5 Hz) in nonspeech acoustic signals lateralize to right-hemisphere auditory areas, whereas rapid temporal features (20-50 Hz) lateralize to the left hemisphere. These results were obtained using nonspeech stimuli, and it is not known whether right-hemisphere auditory cortex is dominant for coding the slow temporal features in speech known as the speech envelope. Here we show strong right-hemisphere dominance for coding the speech envelope, which represents syllable patterns and is critical for normal speech perception. Right-hemisphere auditory cortex was 100% more accurate in following contours of the speech envelope and had a 33% larger response magnitude while following the envelope compared with the left hemisphere. Asymmetries were evident regardless of the ear of stimulation despite dominance of contralateral connections in ascending auditory pathways. Results provide evidence that the right hemisphere plays a specific and important role in speech processing and support the hypothesis that acoustic processing of speech involves the decomposition of the signal into constituent temporal features by rate-specialized neurons in right- and left-hemisphere auditory cortex.
DOI: 10.1523/JNEUROSCI.0187-08.2008
Web of Science ID: 000255012400015
PubMedID: 18400895
Sensory-based learning disability: Insights from brainstem processing of speech sounds
INTERNATIONAL JOURNAL OF AUDIOLOGY
2007; 46 (9): 524-532
Speech-evoked auditory brainstem responses (speech-ABR) provide a reliable marker of learning disability in a substantial subgroup of individuals with language-based learning problems (LDs). Here we review work describing the properties of the speech-ABR in typically developing children and in children with LD. We also review studies on the relationships between speech-ABR and the commonly used click-ABR, and between speech-ABR and auditory processing at the level of the cortex. In a critical examination of previously published data, we conclude that as many as 40% of LDs have abnormal speech-ABRs and that these individuals are also likely to exhibit abnormal cortical processing. Yet, the profile of learning problems these individuals exhibit is unspecific. Leaving open the question of causality, these data suggest that speech-ABR can be used to identify a large sub-population of LDs, those with abnormal auditory physiological function. Further studies are required to determine the functional relationships among abnormal speech-ABR, speech perception, and the pattern of literacy-related and cognitive deficits in LD.
DOI: 10.1080/14992020701383035
Web of Science ID: 000250278200007
PubMedID: 17828668
Auditory brainstem timing predicts cerebral asymmetry for speech
JOURNAL OF NEUROSCIENCE
2006; 26 (43): 11131-11137
The left hemisphere of the human cerebral cortex is dominant for processing rapid acoustic stimuli, including speech, and this specialized activity is preceded by processing in the auditory brainstem. It is not known to what extent the integrity of brainstem encoding of speech impacts patterns of asymmetry at cortex. Here, we demonstrate that the precision of temporal encoding of speech in auditory brainstem predicts cerebral asymmetry for speech sounds measured in a group of children spanning a range of language skills. Results provide strong evidence that timing deficits measured at the auditory brainstem negatively impact rapid acoustic processing by specialized structures of cortex, and demonstrate a delicate relationship between cortical activation patterns and the temporal integrity of cortical input.
DOI: 10.1523/JNEUROSCI.2744-06.2006
Web of Science ID: 000241553900020
PubMedID: 17065453