Chris Chafe is a composer, improviser, and cellist who develops much of his music alongside computer-based research. He is Director of Stanford University's Center for Computer Research in Music and Acoustics (CCRMA). At IRCAM (Paris) and The Banff Centre (Alberta), he pursued methods for digital synthesis, music performance, and real-time internet collaboration. CCRMA's SoundWIRE project involves live concertizing with musicians the world over; its online collaboration software, including JackTrip, and its research into latency factors continue to evolve. An active performer, on the net or physically present, he reaches audiences in dozens of countries, sometimes at novel venues: a simultaneous five-country concert was hosted at the United Nations in 2009. Chafe's works are available from Centaur Records and various online media. His gallery and museum music installations are in their second decade, with "musifications" resulting from collaborations with artists, scientists, and MDs. Recent works include Tomato Quintet for the transLife:media Festival at the National Art Museum of China, Phasor for contrabass, and Sun Shot, played by the horns of large ships in the port of St. John's, Newfoundland. Chafe premiered DiPietro's concerto Finale, for electric cello and orchestra, in 2012.

Administrative Appointments

  • Director, Center for Computer Research in Music and Acoustics (1996 - Present)

Program Affiliations

  • Symbolic Systems Program

All Publications

  • Detecting silent seizures by their sound Epilepsia Parvizi, J., Gururangan, K., Razavi, B., Chafe, C. 2018; 59 (4): 877-884


    The traditional approach to interpreting electroencephalograms (EEGs) requires physicians with formal training to visually assess the waveforms. This approach can be less practical in critical settings where a trained EEG specialist is not readily available to review the EEG and diagnose ongoing subclinical seizures, such as nonconvulsive status epilepticus. We have developed a novel method by which EEG data are converted to sound in real time by letting the underlying electrophysiological signal modulate a voice tone that is in the audible range. Here, we explored whether individuals without any prior EEG training could listen to 15-second sonified EEG and determine whether the EEG represents seizures or nonseizure conditions. We selected 84 EEG samples to represent seizures (n = 7), seizure-like activity (n = 25), or nonperiodic, nonrhythmic activity (normal or focal/generalized slowing, n = 52). EEGs from single channels in the left and right hemispheres were then converted to sound files. After a 4-minute training video, medical students (n = 34) and nurses (n = 30) were asked to designate each audio sample as "seizure" or "nonseizure." We then compared their performance with that of EEG-trained neurologists (n = 12) and medical students (n = 29) who also diagnosed the same EEGs on visual display. Nonexperts listening to single-channel sonified EEGs detected seizures with remarkable sensitivity (students, 98% ± 5%; nurses, 95% ± 14%) compared to experts or nonexperts reviewing the same EEGs on visual display (neurologists, 88% ± 11%; students, 76% ± 19%).
    If the EEGs did not contain seizures or seizure-like activity, nonexperts listening to sonified EEGs rated them as nonseizure with high specificity (students, 85% ± 9%; nurses, 82% ± 12%) compared to experts or nonexperts viewing the EEGs visually (neurologists, 90% ± 7%; students, 65% ± 20%). Our study confirms that individuals without EEG training can detect ongoing seizures or seizure-like rhythmic periodic patterns by listening to sonified EEG. Although sonification of EEG cannot replace the traditional approaches to EEG interpretation, it provides a meaningful triage tool for fast assessment of patients with suspected subclinical seizures.

    View details for DOI 10.1111/epi.14043
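    The core idea, letting the EEG signal modulate an audible tone in real time, can be sketched in a few lines. The snippet below is a minimal illustration under assumed parameters (a 220 Hz base pitch, pitch-only modulation), not the authors' actual voice-synthesis method:

```python
import numpy as np

def sonify_eeg(eeg, eeg_rate=256, audio_rate=8000,
               base_freq=220.0, freq_span=220.0):
    """Map a single-channel EEG trace to an audible tone whose pitch
    follows the signal. A minimal sonification sketch; parameter
    values are illustrative assumptions."""
    eeg = np.asarray(eeg, dtype=float)
    # Normalize the EEG to [0, 1] so it can modulate frequency.
    lo, hi = eeg.min(), eeg.max()
    norm = (eeg - lo) / (hi - lo) if hi > lo else np.zeros_like(eeg)
    # Resample the slow control signal up to the audio rate.
    n_audio = int(len(eeg) * audio_rate / eeg_rate)
    control = np.interp(np.linspace(0, len(eeg) - 1, n_audio),
                        np.arange(len(eeg)), norm)
    # Instantaneous frequency tracks the EEG; integrate for phase.
    freq = base_freq + freq_span * control
    phase = 2 * np.pi * np.cumsum(freq) / audio_rate
    return np.sin(phase)  # audio samples in [-1, 1]
```

    Rhythmic, high-amplitude seizure activity would sweep the pitch up and down in an audibly periodic way, which is what lets untrained listeners distinguish it from background activity.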

  • An Overview on Networked Music Performance Technologies IEEE ACCESS Rottondi, C., Chafe, C., Allocchio, C., Sarti, A. 2016; 4: 8823-8843
  • Sound synthesis for a brain stethoscope. Journal of the Acoustical Society of America Chafe, C., Caceres, J., Iorga, M. 2013; 134 (5): 4053-?


    Exploratory auscultation of brain signals has been prototyped in a project involving neurologists, real-time EEG, and techniques for computer-based sound synthesis. In a manner similar to using a stethoscope, the listener can manipulate the location being listened to. The sounds heard are sonifications of electrode signals. We present a method for exploring sounds from arrays of sensors that are useful for distinguishing brain states. The approach maps brain wave signals to modulations characteristic of the human voice. Computer-synthesized voices "sing" the dynamics of wakefulness, sleep, seizures, and other states. The goal of the project is to create a recognizable inventory of such vocal "performances" and to allow the user to probe source locations in the sensor array in real time.

    View details for DOI 10.1121/1.4830793

    View details for PubMedID 24181199

  • Internet rooms from internet audio. Journal of the Acoustical Society of America Chafe, C., Granzow, J. 2013; 133 (5): 3347-?


    Music rehearsal and concert performance at a distance over long-haul optical fiber is a reality because of expanding network capacity to support low-latency, uncompressed audio streaming. Multichannel sound exchanged across the globe in real time creates "rooms" for synchronous performance. Nearby connections work well, and musicians feel like they are playing together in the same room. Larger, continental-scale distances remain a challenge because of transmission delay and seemingly subtle but perceptually important cues that conflict with qualities expected of natural rooms. Establishing plausible, room-like reverberation between the endpoints helps mitigate these difficulties and expands the distance across which remotely located musicians can perform together comfortably. The paper presents a working implementation for distributed reverberation and qualitative evaluations of reverberated versus non-reverberated conditions over the same long-haul connection.

    View details for DOI 10.1121/1.4805672

    View details for PubMedID 23655010
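    The reverberation idea can be illustrated with a feedback comb filter, a classic building block of artificial room reverberation. This is a generic sketch of the technique (delay length, feedback, and mix values are arbitrary assumptions), not the paper's distributed-reverb implementation:

```python
import numpy as np

def comb_reverb(dry, delay_samples=1723, feedback=0.7, mix=0.3):
    """Apply one feedback comb filter to a mono signal: each sample is
    recirculated through a delay line, producing a decaying train of
    echoes. Parameter values here are illustrative only."""
    dry = np.asarray(dry, dtype=float)
    buf = np.zeros(delay_samples)  # circular delay line
    idx = 0
    wet = np.empty_like(dry)
    for i, x in enumerate(dry):
        y = buf[idx]                 # delayed, recirculated sample
        buf[idx] = x + feedback * y  # feed input plus echo back in
        idx = (idx + 1) % delay_samples
        wet[i] = y
    return (1 - mix) * dry + mix * wet
```

    In the networked setting the paper describes, each endpoint contributes reverberation so that both players hear a shared, plausible room, which helps mask the perceptual oddities of the transmission delay.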

  • JackTrip/SoundWIRE Meets Server Farm COMPUTER MUSIC JOURNAL Caceres, J., Chafe, C. 2010; 34 (3): 29-34
  • Effect of temporal separation on synchronization in rhythmic performance PERCEPTION Chafe, C., Caceres, J., Gurevich, M. 2010; 39 (7): 982-992


    A variety of short time delays inserted between pairs of subjects were found to affect their ability to synchronize a musical task. The subjects performed a clapping rhythm together from separate sound-isolated rooms via headphones and without visual contact. One-way time delays between pairs were manipulated electronically in the range of 3 to 78 ms. We are interested in quantifying the envelope of time delay within which two individuals produce synchronous performances. The results indicate that there are distinct regimes of mutually coupled behavior, and that 'natural time delay'--delay within the narrow range associated with travel times across spatial arrangements of groups and ensembles--supports the most stable performance. Conditions outside of this envelope, with time delays both below and above it, create characteristic interaction dynamics in the mutually coupled actions of the duo. Trials at extremely short delays (corresponding to unnaturally close proximity) had a tendency to accelerate from anticipation. Synchronization lagged at longer delays (larger than usual physical distances) and produced an increasingly severe deceleration and then deterioration of performed rhythms. The study has implications for music collaboration over the Internet and suggests that stable rhythmic performance can be achieved by 'wired ensembles' across distances of thousands of kilometers.

    View details for DOI 10.1068/p6465

    View details for Web of Science ID 000281270900011

    View details for PubMedID 20842974
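    The mutually coupled behavior in this experiment can be caricatured with a toy model in which each player adjusts the next inter-onset interval toward the delayed sound of the partner. The model and its parameters are hypothetical; it reproduces only the deceleration at long delays, not the anticipatory acceleration at very short ones:

```python
def simulate_duo(delay_ms, n_claps=60, period_ms=600.0, coupling=0.3):
    """Toy model of two performers clapping together over a one-way
    delay, each shifting the next inter-onset interval by a fraction
    of the perceived asynchrony. Hypothetical illustration, not the
    study's analysis code. Returns the mean final interval in ms."""
    t_a, t_b = 0.0, 0.0          # most recent clap times (ms)
    ioi_a = ioi_b = period_ms    # current inter-onset intervals
    for _ in range(n_claps):
        # Each player hears the partner delay_ms late.
        async_a = (t_b + delay_ms) - t_a
        async_b = (t_a + delay_ms) - t_b
        ioi_a += coupling * async_a
        ioi_b += coupling * async_b
        t_a += ioi_a
        t_b += ioi_b
    return (ioi_a + ioi_b) / 2
```

    With zero delay the duo holds the 600 ms period; as the delay grows, each player waits for a partner who always sounds late, and the interval lengthens clap by clap, mirroring the deceleration regime reported above.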

  • JackTrip: Under the Hood of an Engine for Network Audio JOURNAL OF NEW MUSIC RESEARCH Caceres, J., Chafe, C. 2010; 39 (3): 183-187
  • Tapping into the Internet as an Acoustical/Musical Medium CONTEMPORARY MUSIC REVIEW Chafe, C. 2009; 28 (4-5): 413-420
  • Analysis of Flute Control Parameters: A Comparison Between a Novice and an Experienced Flautist ACTA ACUSTICA UNITED WITH ACUSTICA de la Cuadra, P., Fabre, B., Montgermont, N., Chafe, C. 2008; 94 (5): 740-749

    View details for DOI 10.3813/AAA.918091

    View details for Web of Science ID 000260966500012

  • Neural dynamics of event segmentation in music: Converging evidence for dissociable ventral and dorsal networks NEURON Sridharan, D., Levitin, D. J., Chafe, C. H., Berger, J., Menon, V. 2007; 55 (3): 521-532


    The real world presents our sensory systems with a continuous stream of undifferentiated information. Segmentation of this stream at event boundaries is necessary for object identification and feature extraction. Here, we investigate the neural dynamics of event segmentation in entire musical symphonies under natural listening conditions. We isolated time-dependent sequences of brain responses in a 10 s window surrounding transitions between movements of symphonic works. A strikingly right-lateralized network of brain regions showed peak response during the movement transitions when, paradoxically, there was no physical stimulus. Model-dependent and model-free analysis techniques provided converging evidence for activity in two distinct functional networks at the movement transition: a ventral fronto-temporal network associated with detecting salient events, followed in time by a dorsal fronto-parietal network associated with maintaining attention and updating working memory. Our study provides direct experimental evidence for dissociable and causally linked ventral and dorsal networks during event segmentation of ecologically valid auditory stimuli.

    View details for DOI 10.1016/j.neuron.2007.07.003

    View details for Web of Science ID 000248711000017

    View details for PubMedID 17678862

  • Cyberinstruments via physical modeling synthesis: Compositional applications LEONARDO MUSIC JOURNAL Kojs, J., Serafin, S., Chafe, C. 2007; 17: 61-66
  • Oxygen flute: A computer music instrument that grows JOURNAL OF NEW MUSIC RESEARCH Chafe, C. 2005; 34 (3): 219-226
  • Physical model synthesis with application to Internet acoustics IEEE International Conference on Acoustics, Speech, and Signal Processing Chafe, C., Wilson, S., Walling, D. IEEE. 2002: 4056–4059