Chris Chafe is a composer, improviser, and cellist, developing much of his music alongside computer-based research. He is Director of Stanford University's Center for Computer Research in Music and Acoustics (CCRMA). In 2019, he was International Visiting Research Scholar at the Peter Wall Institute for Advanced Studies at the University of British Columbia, Visiting Professor at the Politecnico di Torino, and Edgard Varèse Guest Professor at the Technical University of Berlin. At IRCAM (Paris) and The Banff Centre (Alberta), he has pursued methods for digital synthesis, music performance, and real-time internet collaboration. CCRMA's JackTrip project involves live concertizing with musicians the world over, and its online collaboration software and research into latency factors continue to evolve. An active performer either on the net or physically present, his music reaches audiences in sometimes novel venues. An early network project was a simultaneous five-country concert hosted at the United Nations in 2009. Chafe's works include gallery and museum music installations, now into their second decade, with "musifications" resulting from collaborations with artists, scientists, and MDs. Recent work includes the Earth Symphony, the Brain Stethoscope project (Gnosisong), PolarTide for the 2013 Venice Biennale, Tomato Quintet for the transLife:media Festival at the National Art Museum of China, and Sun Shot, played by the horns of large ships in the port of St. John's, Newfoundland.

Administrative Appointments

  • Director, Center for Computer Research in Music and Acoustics (1996 - Present)

Honors & Awards

  • Center for Digital Health Award, Stanford University (2023)
  • High-Impact Technology Grant, Stanford University (2023)
  • Coastal Futures Ecoacoustic Music Prize, Coastal Futures Conservatory (2022)
  • Edgard Varèse Guest Professorship, Technical University of Berlin (2019)
  • International Visiting Research Scholar, Peter Wall Institute for Advanced Studies, University of British Columbia (2019)
  • Residency Scholarship, Banff Centre for the Arts (2019)
  • Visiting Professor, Politecnico di Torino (2019)
  • Bio-X Interdisciplinary Initiatives Seed Grant, Stanford University (2018)
  • Synthetic Aesthetics Residency, AHRC / NSF (2010)
  • Research Award, NSF (2008)
  • iCore Professorship, Banff Centre for the Arts (2008)
  • Media X Award, Stanford University (2003)
  • OTL Birdseed Award, Stanford University (2003)
  • Net Challenge Prize, IEEE / ACM SC2000 (2000)

Program Affiliations

  • Symbolic Systems Program


Patents

  • Chris Chafe, Josef Parvizi. "United States Patent 11471088 Handheld or Wearable Device for Recording or Sonifying Brain Signals", Leland Stanford Junior University, Oct 18, 2022
  • Alexander Grant, Chris Chafe, Josef Parvizi, Jianchun Yi, Raymond Woo. "United States Patent 10849553 Systems and methods for processing sonified brain signals", CeriBell, Inc., Mar 27, 2019
  • Chris Chafe, Josef Parvizi. "United States Patent 11045150 Method of Sonifying Brain Electrical Activity", Leland Stanford Junior University, Nov 27, 2018
  • Chris Chafe, Josef Parvizi. "United States Patent 9,888,884 Method of Sonifying Signals Obtained from a Living Subject", Leland Stanford Junior University, Feb 13, 2018
  • Chris Chafe. "United States Patent 9,354,335 Determining Location Information of Microseismic Events During Hydraulic Fracturing", Leland Stanford Junior University, May 31, 2016
  • Chris Chafe. "United States Patent 14/301,270 Glitch-Free Frequency Modulation Synthesis of Sounds", Leland Stanford Junior University, Oct 23, 2014
  • Chris Chafe. "United States Patent 7,522,734 Distributed Acoustical Reverberation for Audio Collaboration", Leland Stanford Junior University, May 21, 2009
  • Chris Chafe. "United States Patent 6,801,939 Method for Evaluating Quality of Service of a Digital Network Connection", Leland Stanford Junior University, May 14, 2004
  • Chris Chafe. "United States Patent 5,508,473 Music Synthesizer and Method for Simulating Period Synchronous Noise Associated with Air Flows in Wind Instruments", Leland Stanford Junior University, Apr 16, 1996
  • Chris Chafe. "United States Patent 5,157,216 Musical Synthesizer System and Method Using Pulsed Noise for Simulating the Noise Component of Musical Tones", Leland Stanford Junior University, Oct 20, 1992

All Publications

  • A Content Adaptive Learnable Time-Frequency Representation for Audio Signal Processing IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) Verma, P., Chafe, C. 2023
  • Web-Based Networked Music Performances via WebRTC: A Low-Latency PCM Audio Solution JOURNAL OF THE AUDIO ENGINEERING SOCIETY Sacchetto, M., Gastaldi, P., Chafe, C., Rottondi, C., Servetti, A. 2022; 70 (11): 926-937
  • Experiencing Remote Classical Music Performance Over Long Distance: A JackTrip Concert Between Two Continents During the Pandemic JOURNAL OF THE AUDIO ENGINEERING SOCIETY Bosi, M., Servetti, A., Chafe, C., Rottondi, C. 2021; 69 (12): 934-945
  • Temporal Coordination in Piano Duet Networked Music Performance (NMP): Interactions Between Acoustic Transmission Latency and Musical Role Asymmetries. Frontiers in psychology Washburn, A., Wright, M. J., Chafe, C., Fujioka, T. 2021; 12: 707090


    Today's audio, visual, and internet technologies allow people to interact despite physical distances, for casual conversation, group workouts, or musical performance. Musical ensemble performance is unique because interaction integrity critically depends on the timing between each performer's actions and when their acoustic outcomes arrive. Acoustic transmission latency (ATL) between players is substantially longer for networked music performance (NMP) compared to traditional in-person spaces where musicians can easily adapt. Previous work has shown that longer ATLs slow the average tempo in ensemble performance, and that asymmetric co-actor roles and empathy-related traits affect coordination patterns in joint action. Thus, we are interested in how musicians collectively adapt to a given latency and how such adaptation patterns vary with their task-related and person-related asymmetries. Here, we examined how two pianists performed duets while hearing each other's auditory outcomes with an ATL of 10, 20, or 40 ms. To test the hypotheses regarding task-related asymmetries, we designed duets such that pianists had: (1) a starting or joining role and (2) a similar or dissimilar musical part compared to their co-performer, with respect to pitch range and melodic contour. Results replicated previous clapping-duet findings showing that longer ATLs are associated with greater temporal asynchrony between partners and increased average tempo slowing. While co-performer asynchronies were not affected by performer role or part similarity, at the longer ATLs starting performers displayed slower tempos and smaller tempo variability than joining performers. This asymmetry of stability vs. flexibility between starters and joiners may sustain coordination, consistent with recent joint action findings. Our data also suggest that relative independence in musical parts may mitigate ATL-related challenges. Additionally, there may be a relationship between co-performer differences in empathy-related personality traits such as locus of control and coordination during performance under the influence of ATL. Incorporating the emergent coordinative dynamics between performers could help further innovation of music technologies and composition techniques for NMP.

    View details for DOI 10.3389/fpsyg.2021.707090

    View details for PubMedID 34630213
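The two dependent measures in the abstract above, partner asynchrony and tempo slowing, can be estimated directly from raw note-onset times. A minimal illustrative sketch (hypothetical helper names and toy data, not the study's analysis code):

```python
# Illustrative sketch: given note-onset times (in seconds) from two
# performers, estimate mean pairwise asynchrony and a local tempo curve,
# the two quantities the study relates to acoustic transmission latency.

def mean_asynchrony(onsets_a, onsets_b):
    """Mean absolute time difference between matched onset pairs."""
    n = min(len(onsets_a), len(onsets_b))
    return sum(abs(a - b) for a, b in zip(onsets_a[:n], onsets_b[:n])) / n

def tempo_curve(onsets, beats_per_onset=1.0):
    """Instantaneous tempo (BPM) from successive inter-onset intervals."""
    return [60.0 * beats_per_onset / (t2 - t1)
            for t1, t2 in zip(onsets, onsets[1:])]

# Toy data: performer B lags A by a growing amount, and both slow down.
a = [0.0, 0.5, 1.0, 1.52, 2.06, 2.62]
b = [0.02, 0.53, 1.04, 1.57, 2.12, 2.69]
print(round(mean_asynchrony(a, b) * 1000, 1), "ms mean asynchrony")
print([round(t) for t in tempo_curve(a)])
```

With this toy data the sketch reports the mean lag in milliseconds and a decelerating BPM curve, the same qualitative pattern the study associates with longer ATLs.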

  • Improved Real-Time Monophonic Pitch Tracking with the Extended Complex Kalman Filter JOURNAL OF THE AUDIO ENGINEERING SOCIETY Das, O., Smith, J. O., Chafe, C. 2020; 68 (1-2): 78–86
  • A Deep Learning Approach for Low-Latency Packet Loss Concealment of Audio Signals in Networked Music Performance Applications Verma, P., Mezza, A., Chafe, C., Rottondi, C., Balandin, S. IEEE. 2020: 268–75
  • Delayed feedback embedded in perception-action coordination cycles results in anticipation behavior during synchronized rhythmic action: A dynamical systems approach. PLoS computational biology Roman, I. R., Washburn, A., Large, E. W., Chafe, C., Fujioka, T. 2019; 15 (10): e1007371


    Dancing and playing music require people to coordinate actions with auditory rhythms. In laboratory perception-action coordination tasks, people are asked to synchronize taps with a metronome. When synchronizing with a metronome, people tend to anticipate stimulus onsets, tapping slightly before the stimulus. The anticipation tendency increases with longer stimulus periods of up to 3500ms, but is less pronounced in trained individuals like musicians compared to non-musicians. Furthermore, external factors influence the timing of tapping. These factors include the presence of auditory feedback from one's own taps, the presence of a partner performing coordinated joint tapping, and transmission latencies (TLs) between coordinating partners. Phenomena like the anticipation tendency can be explained by delay-coupled systems, which may be inherent to the sensorimotor system during perception-action coordination. Here we tested whether a dynamical systems model based on this hypothesis reproduces observed patterns of human synchronization. We simulated behavior with a model consisting of an oscillator receiving its own delayed activity as input. Three simulation experiments were conducted using previously-published behavioral data from 1) simple tapping, 2) two-person alternating beat-tapping, and 3) two-person alternating rhythm-clapping in the presence of a range of constant auditory TLs. In Experiment 1, our model replicated the larger anticipation observed for longer stimulus intervals and adjusting the amplitude of the delayed feedback reproduced the difference between musicians and non-musicians. In Experiment 2, by connecting two models we replicated the smaller anticipation observed in human joint tapping with bi-directional auditory feedback compared to joint tapping without feedback. In Experiment 3, we varied TLs between two models alternately receiving signals from one another. Results showed reciprocal lags at points of alternation, consistent with behavioral patterns. Overall, our model explains various anticipatory behaviors, and has potential to inform theories of adaptive human synchronization.

    View details for DOI 10.1371/journal.pcbi.1007371

    View details for PubMedID 31671096
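The model described above couples an oscillator to its own delayed output. A heavily simplified discrete-time caricature (an illustrative assumption, not the paper's continuous-time oscillator model) shows how such a feedback loop by itself produces anticipation:

```python
# Caricature of the delay-coupled idea: a tapper corrects each inter-tap
# interval against a metronome using its OWN feedback, which arrives after
# an internal delay d. The loop settles on taps that LEAD the metronome,
# i.e. the negative mean asynchrony observed in human tapping.

def simulate_taps(period=0.5, d=0.03, alpha=0.4, n=60):
    """Tap times for a tapper phase-correcting on its delayed feedback."""
    taps = [0.0]
    for k in range(1, n):
        heard = taps[-1] + d            # own last tap, heard after delay d
        stimulus = (k - 1) * period     # metronome onset it was aimed at
        taps.append(taps[-1] + period - alpha * (heard - stimulus))
    return taps

taps = simulate_taps()
asynchronies = [t - k * 0.5 for k, t in enumerate(taps)]
print(f"steady-state asynchrony: {asynchronies[-1] * 1000:.1f} ms")
```

In this toy loop the asynchrony converges to -d, so a longer internal feedback delay yields proportionally earlier taps, which is the qualitative pattern the delay-coupling hypothesis is meant to explain.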

  • Detecting silent seizures by their sound Epilepsia Parvizi, J., Gururangan, K., Razavi, B., Chafe, C. 2018; 59 (4): 877-884


    The traditional approach to interpreting electroencephalograms (EEGs) requires physicians with formal training to visually assess the waveforms. This approach can be less practical in critical settings where a trained EEG specialist is not readily available to review the EEG and diagnose ongoing subclinical seizures, such as nonconvulsive status epilepticus. We have developed a novel method by which EEG data are converted to sound in real time by letting the underlying electrophysiological signal modulate a voice tone that is in the audible range. Here, we explored whether individuals without any prior EEG training could listen to 15-second sonified EEG and determine whether the EEG represents seizures or nonseizure conditions. We selected 84 EEG samples to represent seizures (n = 7), seizure-like activity (n = 25), or nonperiodic, nonrhythmic activity (normal or focal/generalized slowing, n = 52). EEGs from single channels in the left and right hemispheres were then converted to sound files. After a 4-minute training video, medical students (n = 34) and nurses (n = 30) were asked to designate each audio sample as "seizure" or "nonseizure." We then compared their performance with that of EEG-trained neurologists (n = 12) and medical students (n = 29) who also diagnosed the same EEGs on visual display. Nonexperts listening to single-channel sonified EEGs detected seizures with remarkable sensitivity (students, 98% ± 5%; nurses, 95% ± 14%) compared to experts or nonexperts reviewing the same EEGs on visual display (neurologists, 88% ± 11%; students, 76% ± 19%). If the EEGs contained seizures or seizure-like activity, nonexperts listening to sonified EEGs rated them as seizures with high specificity (students, 85% ± 9%; nurses, 82% ± 12%) compared to experts or nonexperts viewing the EEGs visually (neurologists, 90% ± 7%; students, 65% ± 20%). Our study confirms that individuals without EEG training can detect ongoing seizures or seizure-like rhythmic periodic patterns by listening to sonified EEG. Although sonification of EEG cannot replace the traditional approaches to EEG interpretation, it provides a meaningful triage tool for fast assessment of patients with suspected subclinical seizures.

    View details for DOI 10.1111/epi.14043
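The core sonification idea above, letting the EEG modulate a voice-range tone, can be sketched as simple frequency modulation. The mapping, parameter values, and function names below are illustrative assumptions, not the published or commercial implementation:

```python
# Hypothetical sketch: a slow EEG-band signal frequency-modulates a
# carrier held in the range of the human voice, so rhythmic or periodic
# discharges become an audible pitch wobble.
import math

def sonify(eeg, sr_eeg=200, sr_audio=8000, f_base=220.0, depth=80.0):
    """Map an EEG trace (values roughly in [-1, 1]) to audio samples."""
    audio, phase = [], 0.0
    n_out = len(eeg) * sr_audio // sr_eeg        # audio samples to emit
    for i in range(n_out):
        x = eeg[i * sr_eeg // sr_audio]          # hold nearest EEG value
        f = f_base + depth * x                   # instantaneous frequency (Hz)
        phase += 2.0 * math.pi * f / sr_audio
        audio.append(math.sin(phase))
    return audio

# Toy "EEG": a 3 Hz rhythmic discharge, one second long at 200 Hz.
eeg = [math.sin(2 * math.pi * 3 * n / 200) for n in range(200)]
audio = sonify(eeg)   # one second at 8 kHz, pitch wobbling at 3 Hz
```

A rhythmic discharge turns into a pitch wobble at the discharge rate, which is the kind of audible cue that lets untrained listeners flag seizure-like activity.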

  • Op 1254: Music for Neutrons, Networks and Solenoids using a Restored Organ in a Nuclear Reactor Handberg, L., Elblaus, L., Chafe, C., Canfield-Dafilou, E. ACM. 2018: 537–41
  • Don't Be Alarmed: Sonifying Autonomous Vehicle Perception to Increase Situation Awareness Gang, N., Sibi, S., Michon, R., Mok, B., Chafe, C., Ju, W. ACM. 2018: 237–46
  • Mobile Music, Sensors, Physical Modeling, and Digital Fabrication: Articulating the Augmented Mobile Instrument APPLIED SCIENCES-BASEL Michon, R., Smith, J., Wright, M., Chafe, C., Granzow, J., Wang, G. 2017; 7 (12)

    View details for DOI 10.3390/app7121311

    View details for Web of Science ID 000419175800107

  • An Overview on Networked Music Performance Technologies IEEE ACCESS Rottondi, C., Chafe, C., Allocchio, C., Sarti, A. 2016; 4: 8823-8843
  • Synthetic Sound from Synthetic Biology SYNTHETIC AESTHETICS: INVESTIGATING SYNTHETIC BIOLOGY'S DESIGNS ON NATURE Chafe, C., Leguia, M., Ginsberg, A., Calvert, J., Schyfter, P., Elfick, A., Endy, D. 2014: 219–30
  • Sound synthesis for a brain stethoscope. Journal of the Acoustical Society of America Chafe, C., Caceres, J., Iorga, M. 2013; 134 (5): 4053-?


    Exploratory auscultation of brain signals has been prototyped in a project involving neurologists, real-time EEG, and techniques for computer-based sound synthesis. In a manner similar to using a stethoscope, the listener can manipulate the location being listened to. The sounds heard are sonifications of electrode signals. We present a method for exploring sounds from arrays of sensors that are useful for distinguishing brain states. The approach maps brain wave signals to modulations characteristic of human voice. Computer-synthesized voices "sing" the dynamics of wakefulness, sleep, seizures, and other states. The goal of the project is to create a recognizable inventory of such vocal "performances" and allow the user to probe source locations in the sensor array in real time.

    View details for DOI 10.1121/1.4830793

    View details for PubMedID 24181199

  • Internet rooms from internet audio. Journal of the Acoustical Society of America Chafe, C., Granzow, J. 2013; 133 (5): 3347-?


    Music rehearsal and concert performance at a distance over long-haul optical fiber is a reality because of expanding network capacity to support low-latency, uncompressed audio streaming. Multichannel sound exchanged across the globe in real time creates "rooms" for synchronous performance. Nearby connections work well and musicians feel like they are playing together in the same room. Larger, continental-size, distances remain a challenge because of transmission delay and seemingly subtle but perceptually important cues which are in conflict with qualities expected of natural rooms. Establishing plausible, room-like reverberation between the endpoints helps mitigate these difficulties and expand the distance across which remotely located musicians perform together comfortably. The paper presents a working implementation for distributed reverberation and qualitative evaluations of reverberated versus non-reverberated conditions over the same long-haul connection.

    View details for DOI 10.1121/1.4805672

    View details for PubMedID 23655010
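The distributed-reverberation idea above can be caricatured with a single feedback comb filter whose delay line is the connection's round trip, so the latency itself contributes room-like reflections. A toy sketch under assumed numbers (48 kHz audio, 40 ms round trip), not the paper's working implementation:

```python
# Single-comb toy: treat the network round trip as the delay line of a
# feedback comb filter, so each pass through the connection adds a
# decaying "reflection" to the shared virtual room.

def comb_reverb(dry, delay_samples, gain=0.6):
    """y[n] = x[n] + gain * y[n - delay]; delay = network round trip."""
    out = list(dry)
    for n in range(delay_samples, len(out)):
        out[n] += gain * out[n - delay_samples]
    return out

sr = 48000
rtt_ms = 40                                   # assumed round-trip time
impulse = [1.0] + [0.0] * (sr // 2)           # half-second impulse input
wet = comb_reverb(impulse, delay_samples=sr * rtt_ms // 1000)
```

The impulse response is a train of echoes spaced one round trip apart with geometrically decaying amplitude; a real implementation would combine several such paths to build plausible echo density, but the single comb shows how delay becomes reverberation rather than pure disruption.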

  • JackTrip/SoundWIRE Meets Server Farm COMPUTER MUSIC JOURNAL Caceres, J., Chafe, C. 2010; 34 (3): 29-34
  • Effect of temporal separation on synchronization in rhythmic performance PERCEPTION Chafe, C., Caceres, J., Gurevich, M. 2010; 39 (7): 982-992


    A variety of short time delays inserted between pairs of subjects were found to affect their ability to synchronize a musical task. The subjects performed a clapping rhythm together from separate sound-isolated rooms via headphones and without visual contact. One-way time delays between pairs were manipulated electronically in the range of 3 to 78 ms. We are interested in quantifying the envelope of time delay within which two individuals produce synchronous performances. The results indicate that there are distinct regimes of mutually coupled behavior, and that 'natural time delay'--delay within the narrow range associated with travel times across spatial arrangements of groups and ensembles--supports the most stable performance. Conditions outside of this envelope, with time delays both below and above it, create characteristic interaction dynamics in the mutually coupled actions of the duo. Trials at extremely short delays (corresponding to unnaturally close proximity) had a tendency to accelerate from anticipation. Synchronization lagged at longer delays (larger than usual physical distances) and produced an increasingly severe deceleration and then deterioration of performed rhythms. The study has implications for music collaboration over the Internet and suggests that stable rhythmic performance can be achieved by 'wired ensembles' across distances of thousands of kilometers.

    View details for DOI 10.1068/p6465

    View details for Web of Science ID 000281270900011

    View details for PubMedID 20842974

  • JackTrip: Under the Hood of an Engine for Network Audio JOURNAL OF NEW MUSIC RESEARCH Caceres, J., Chafe, C. 2010; 39 (3): 183-187
  • Tapping into the Internet as an Acoustical/Musical Medium CONTEMPORARY MUSIC REVIEW Chafe, C. 2009; 28 (4-5): 413-420
  • Analysis of Flute Control Parameters: A Comparison Between a Novice and an Experienced Flautist ACTA ACUSTICA UNITED WITH ACUSTICA de la Cuadra, P., Fabre, B., Montgermont, N., Chafe, C. 2008; 94 (5): 740-749

    View details for DOI 10.3813/AAA.918091

    View details for Web of Science ID 000260966500012

  • Neural dynamics of event segmentation in music: Converging evidence for dissociable ventral and dorsal networks NEURON Sridharan, D., Levitin, D. J., Chafe, C. H., Berger, J., Menon, V. 2007; 55 (3): 521-532


    The real world presents our sensory systems with a continuous stream of undifferentiated information. Segmentation of this stream at event boundaries is necessary for object identification and feature extraction. Here, we investigate the neural dynamics of event segmentation in entire musical symphonies under natural listening conditions. We isolated time-dependent sequences of brain responses in a 10 s window surrounding transitions between movements of symphonic works. A strikingly right-lateralized network of brain regions showed peak response during the movement transitions when, paradoxically, there was no physical stimulus. Model-dependent and model-free analysis techniques provided converging evidence for activity in two distinct functional networks at the movement transition: a ventral fronto-temporal network associated with detecting salient events, followed in time by a dorsal fronto-parietal network associated with maintaining attention and updating working memory. Our study provides direct experimental evidence for dissociable and causally linked ventral and dorsal networks during event segmentation of ecologically valid auditory stimuli.

    View details for DOI 10.1016/j.neuron.2007.07.003

    View details for Web of Science ID 000248711000017

    View details for PubMedID 17678862

  • Cyberinstruments via physical modeling synthesis: Compositional applications LEONARDO MUSIC JOURNAL Kojs, J., Serafin, S., Chafe, C. 2007; 17: 61-66
  • Oxygen flute: A computer music instrument that grows JOURNAL OF NEW MUSIC RESEARCH Chafe, C. 2005; 34 (3): 219-226
  • Physical model synthesis with application to Internet acoustics IEEE International Conference on Acoustics, Speech, and Signal Processing Chafe, C., Wilson, S., Walling, D. IEEE. 2002: 4056–4059