I am a neuroscientist focused on auditory-vocal function in human social communication. My expertise spans psychological, neurobiological, and acoustic perspectives on speech and music: how they convey affect and social significance, and their origins in mammalian vocal behavior. I combine psychophysiological, psychoacoustic, neuroendocrine, and pharmacological methods to study perception and behavior in human subjects. I graduated from the University of California San Diego in 2006 with summa cum laude honors in Biological Psychology (BS) and Neurophilosophy (BA). I hold a graduate certificate in Cognitive Neuroscience from Duke University (2009) and a PhD in Neurobiology from Duke University School of Medicine (2012). My postdoc at the University of Vienna (2012–18) focused on bioacoustics and auditory-motor synchrony. My work has been recognized with awards including a young investigator award from the University of Vienna and an innovation award from the Social and Affective Neuroscience Society. At Stanford, I am working to develop an objective assessment of auditory-vocal affect perception for clinical research on autism, in collaboration with scientists, engineers, and artists from the departments of Psychiatry, Psychology, and Music. My work is funded by NIMH and the Wu Tsai Neurosciences Institute. Along the way, I have been fortunate to receive mentorship from Drs. Patricia Churchland, Dale Purves, Tecumseh Fitch, and Karen Parker.
Instructor, Psychiatry and Behavioral Sciences
Pupillometry of Groove: Evidence for Noradrenergic Arousal in the Link Between Music and Movement
FRONTIERS IN NEUROSCIENCE
2019; 12
Temporal modulation in speech, music, and animal vocal communication: evidence of conserved function.
Annals of the New York Academy of Sciences
Speech is a distinctive feature of our species. It is the default channel for language and constitutes our primary mode of social communication. Determining the evolutionary origins of speech is a challenging prospect, in large part because it appears to be unique in the animal kingdom. However, direct comparisons between speech and other forms of acoustic communication, both in humans (music) and animals (vocalization), suggest that important components of speech are shared across domains and species. In this review, we focus on a single aspect of speech, temporal patterning, and examine similarities and differences across speech, music, and animal vocalization. Additional structure is provided by focusing on three specific functions of temporal patterning across domains: (1) emotional expression, (2) social interaction, and (3) unit identification. We hypothesize an evolutionary trajectory wherein the ability to identify units within a continuous stream of vocal sounds derives from social vocal interaction, which, in turn, derives from vocal emotional communication. This hypothesis implies that unit identification has parallels in music and precursors in animal vocal communication. Accordingly, we demonstrate the potential of comparisons between fundamental domains of biological acoustic communication to provide insight into the evolution of language.
View details for DOI 10.1111/nyas.14228
View details for PubMedID 31482571
Comparing Chalk With Cheese – The EGG Contact Quotient Is Only a Limited Surrogate of the Closed Quotient
JOURNAL OF VOICE
2017; 31 (4): 401–9
The electroglottographic (EGG) contact quotient (CQegg), an estimate of the relative duration of vocal fold contact per vibratory cycle, is the most commonly used quantitative analysis parameter in EGG. The purpose of this study is to quantify the CQegg's relation to the closed quotient, a measure more directly related to glottal width changes during vocal fold vibration and the respective sound generation events. Thirteen singers (six females) phonated in four extreme phonation types while independently varying the degree of breathiness and vocal register. EGG recordings were complemented by simultaneous videokymographic (VKG) endoscopy, which allows for calculation of the VKG closed quotient (CQvkg). The CQegg was computed with five different algorithms, all used in previous research. All CQegg algorithms produced CQegg values that clearly differed from the respective CQvkg, with standard deviations around 20% of cycle duration. The difference between CQvkg and CQegg was generally greater for phonations with lower CQvkg. The largest differences were found for low-quality EGG signals with a signal-to-noise ratio below 10 dB, typically stemming from phonations with incomplete glottal closure. Disregarding those low-quality signals, we found the best match between CQegg and CQvkg for a CQegg algorithm operating on the first derivative of the EGG signal. These results show that the terms "closed quotient" and "contact quotient" should not be used interchangeably; they relate to different physiological phenomena. Phonations with incomplete glottal closure and an EGG signal-to-noise ratio below 10 dB are not suited for CQegg analysis.
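The abstract compares five CQegg algorithms without specifying them here. As a minimal sketch of the general idea, the two functions below estimate a contact quotient for a single vibratory cycle using two generic criteria found in the EGG literature: a fixed amplitude threshold and a first-derivative (DEGG) criterion. The 50% criterion level, the synthetic waveform, and the function names are illustrative assumptions, not the specific algorithms evaluated in the study; note that even on an idealized cycle the two criteria can disagree, consistent with the study's finding that different CQegg algorithms yield different values.

```python
import numpy as np

def contact_quotient_threshold(egg, level=0.5):
    """CQegg by amplitude thresholding: the fraction of the cycle during
    which the normalized EGG amplitude exceeds `level` of its peak-to-peak
    range. The 50% level is an illustrative choice, not the study's."""
    egg = np.asarray(egg, dtype=float)
    norm = (egg - egg.min()) / (egg.max() - egg.min())
    return float(np.mean(norm >= level))

def contact_quotient_derivative(egg):
    """CQegg from the first derivative (DEGG): take the steepest rise of
    the EGG waveform as the contacting instant and the steepest fall as
    the de-contacting instant; assumes contacting precedes de-contacting
    within the analyzed cycle."""
    egg = np.asarray(egg, dtype=float)
    degg = np.diff(egg)
    contact = int(np.argmax(degg))    # strongest rise: folds contacting
    decontact = int(np.argmin(degg))  # strongest fall: folds de-contacting
    return (decontact - contact) / len(egg)

# One idealized EGG-like cycle: a half-cosine "contact bump" in the
# middle of the cycle (zero contact elsewhere).
t = np.linspace(0, 1, 1000, endpoint=False)
egg = np.clip(-np.cos(2 * np.pi * t), 0, None)

print(round(contact_quotient_threshold(egg), 2))  # ~0.33
print(round(contact_quotient_derivative(egg), 2))  # ~0.5
```

On this idealized cycle the threshold criterion reports roughly a third of the cycle as "contacted" while the derivative criterion reports half, illustrating why the choice of algorithm matters when interpreting CQegg values.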
View details for DOI 10.1016/j.jvoice.2016.11.007
View details for Web of Science ID 000406147000002
View details for PubMedID 28017461