Bio
Chris Chafe is a composer, improvisor, and cellist, developing much of his music alongside computer-based research. He is Director of Stanford University's Center for Computer Research in Music and Acoustics (CCRMA). In 2019, he was International Visiting Research Scholar at the Peter Wall Institute for Advanced Studies at the University of British Columbia, Visiting Professor at the Politecnico di Torino, and Edgard-Varèse Guest Professor at the Technical University of Berlin. At IRCAM (Paris) and The Banff Centre (Alberta), he has pursued methods for digital synthesis, music performance, and real-time internet collaboration. CCRMA's JackTrip project involves live concertizing with musicians the world over, and its online collaboration software and research into latency factors continue to evolve. An active performer either on the net or physically present, he reaches audiences in sometimes novel venues. An early network project, a simultaneous five-country concert, was hosted at the United Nations in 2009. Chafe's works include gallery and museum music installations, now in their second decade, with "musifications" resulting from collaborations with artists, scientists, and MDs. Recent work includes the Earth Symphony, the Brain Stethoscope project (Gnosisong), PolarTide for the 2013 Venice Biennale, Tomato Quintet for the transLife:media Festival at the National Art Museum of China, and Sun Shot, played by the horns of large ships in the port of St. John's, Newfoundland.
Administrative Appointments
- Director, Center for Computer Research in Music and Acoustics (1996 - Present)
Honors & Awards
- Center for Digital Health Award, Stanford University (2023)
- High-Impact Technology Grant, Stanford University (2023)
- Coastal Futures Ecoacoustic Music Prize, Coastal Futures Conservatory (2022)
- Edgard Varèse Guest Professorship, Technical University of Berlin (2019)
- International Visiting Research Scholar, Peter Wall Institute for Advanced Studies, University of British Columbia (2019)
- Residency Scholarship, Banff Centre for the Arts (2019)
- Visiting Professor, Politecnico di Torino (2019)
- Bio-X Interdisciplinary Initiatives Seed Grant, Stanford University (2018)
- Synthetic Aesthetics Residency, AHRC / NSF (2010)
- Research Award, NSF (2008)
- iCore Professorship, Banff Centre for the Arts (2008)
- Media X Award, Stanford University (2003)
- OTL Birdseed Award, Stanford University (2003)
- Net Challenge Prize, IEEE / ACM SC2000 (2000)
Program Affiliations
- Symbolic Systems Program
Patents
- Chris Chafe, Josef Parvizi. "United States Patent 11471088 Handheld or Wearable Device for Recording or Sonifying Brain Signals", Leland Stanford Junior University, Oct 18, 2022
- Alexander Grant, Chris Chafe, Josef Parvizi, Jianchun Yi, Raymond Woo. "United States Patent 10849553 Systems and Methods for Processing Sonified Brain Signals", CeriBell, Inc., Mar 27, 2019
- Chris Chafe, Josef Parvizi. "United States Patent 11045150 Method of Sonifying Brain Electrical Activity", Leland Stanford Junior University, Nov 27, 2018
- Chris Chafe, Josef Parvizi. "United States Patent 9,888,884 Method of Sonifying Signals Obtained from a Living Subject", Leland Stanford Junior University, Feb 13, 2018
- Chris Chafe. "United States Patent 9,354,335 Determining Location Information of Microseismic Events During Hydraulic Fracturing", Leland Stanford Junior University, May 31, 2016
- Chris Chafe. "United States Patent 14/301,270 Glitch-Free Frequency Modulation Synthesis of Sounds", Leland Stanford Junior University, Oct 23, 2014
- Chris Chafe. "United States Patent 7,522,734 Distributed Acoustical Reverberation for Audio Collaboration", Leland Stanford Junior University, May 21, 2009
- Chris Chafe. "United States Patent 6,801,939 Method for Evaluating Quality of Service of a Digital Network Connection", Leland Stanford Junior University, May 14, 2004
- Chris Chafe. "United States Patent 5,508,473 Music Synthesizer and Method for Simulating Period Synchronous Noise Associated with Air Flows in Wind Instruments", Leland Stanford Junior University, Apr 16, 1996
- Chris Chafe. "United States Patent 5,157,216 Musical Synthesizer System and Method Using Pulsed Noise for Simulating the Noise Component of Musical Tones", Leland Stanford Junior University, Oct 20, 1992
2024-25 Courses
- Ensemble Sonification of Temporal Data, MUSIC 153DZ (Win)
- Fundamentals of Computer-Generated Sound, MUSIC 220A (Aut)
- Research Seminar in Computer-Generated Music, MUSIC 220C (Spr)
Independent Studies (12)
- Concentrations Project, MUSIC 198 (Aut, Win, Spr)
- First Individual Undergraduate Projects in Composition I, MUSIC 125 (Aut, Win, Spr, Sum)
- Independent Study, MUSIC 199 (Aut, Win, Spr, Sum)
- Independent Study, MUSIC 299 (Aut, Win, Spr, Sum)
- Independent Study, SYMSYS 196 (Aut, Win, Spr, Sum)
- Individual Graduate Projects in Composition, MUSIC 325 (Aut, Win, Spr, Sum)
- MA/MST Capstone Project, MUSIC 298 (Aut, Win, Spr, Sum)
- PhD Dissertation Proposal, MUSIC 398 (Aut, Win, Spr, Sum)
- Practicum Internship, MUSIC 390 (Aut, Win, Spr, Sum)
- Readings in Music Theory, MUSIC 321 (Aut, Win, Spr, Sum)
- Research in Computer-Generated Music, MUSIC 220D (Aut, Win, Spr, Sum)
- Senior Honors Tutorial, SYMSYS 190 (Aut, Win, Spr, Sum)
Prior Year Courses
2023-24 Courses
- Ensemble Sonification of Temporal Data, COMM 153D, MUSIC 153D (Win)
- Ensemble Sonification of Temporal Data, MUSIC 153DZ (Win)
- Fundamentals of Computer-Generated Sound, MUSIC 220A (Aut)
- Research Seminar in Computer-Generated Music, MUSIC 220C (Spr)
2022-23 Courses
- Fundamentals of Computer-Generated Sound, MUSIC 220A (Aut)
- Network Music Performance, MUSIC 153AZ (Aut)
- Network Performance Practice, ARTSINST 141, MUSIC 153A (Aut)
- Research Seminar in Computer-Generated Music, MUSIC 220C (Spr)
2021-22 Courses
- Fundamentals of Computer-Generated Sound, MUSIC 220A (Aut)
- Research Seminar in Computer-Generated Music, MUSIC 220C (Spr)
- Ensemble Sonification of Temporal Data
Stanford Advisees
- Doctoral Dissertation Reader (AC): Kunwoo Kim, Lloyd May, Michael Mulshine, Barbara Nerness, Marise van Zyl
- Postdoctoral Faculty Sponsor: Hassan Estakhrian
- Master's Program Advisor: Logan Kibler, Calvin McCormack, Nathan Sariowan, Chengyi Xing, Ningxin Zhang
- Doctoral (Program): Celeste Betancur, Soohyun Kim, Kimia Koochakzadeh-Yazdi, Walker Smith
All Publications
-
A Content Adaptive Learnable Time-Frequency Representation for Audio Signal Processing
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
2023
View details for DOI 10.1109/ICASSP49357.2023.10095171
-
Web-Based Networked Music Performances via WebRTC: A Low-Latency PCM Audio Solution
JOURNAL OF THE AUDIO ENGINEERING SOCIETY
2022; 70 (11): 926-937
View details for DOI 10.17743/jaes.2022.0021
View details for Web of Science ID 001068184800003
-
Experiencing Remote Classical Music Performance Over Long Distance: A JackTrip Concert Between Two Continents During the Pandemic
JOURNAL OF THE AUDIO ENGINEERING SOCIETY
2021; 69 (12): 934-945
View details for DOI 10.17743/jaes.2021.0056
View details for Web of Science ID 000729765900005
-
Temporal Coordination in Piano Duet Networked Music Performance (NMP): Interactions Between Acoustic Transmission Latency and Musical Role Asymmetries.
Frontiers in psychology
2021; 12: 707090
Abstract
Today's audio, visual, and internet technologies allow people to interact despite physical distances, for casual conversation, group workouts, or musical performance. Musical ensemble performance is unique because interaction integrity critically depends on the timing between each performer's actions and when their acoustic outcomes arrive. Acoustic transmission latency (ATL) between players is substantially longer for networked music performance (NMP) compared to traditional in-person spaces where musicians can easily adapt. Previous work has shown that longer ATLs slow the average tempo in ensemble performance, and that asymmetric co-actor roles and empathy-related traits affect coordination patterns in joint action. Thus, we are interested in how musicians collectively adapt to a given latency and how such adaptation patterns vary with their task-related and person-related asymmetries. Here, we examined how two pianists performed duets while hearing each other's auditory outcomes with an ATL of 10, 20, or 40 ms. To test the hypotheses regarding task-related asymmetries, we designed duets such that pianists had: (1) a starting or joining role and (2) a similar or dissimilar musical part compared to their co-performer, with respect to pitch range and melodic contour. Results replicated previous clapping-duet findings showing that longer ATLs are associated with greater temporal asynchrony between partners and increased average tempo slowing. While co-performer asynchronies were not affected by performer role or part similarity, at the longer ATLs starting performers displayed slower tempos and smaller tempo variability than joining performers. This asymmetry of stability vs. flexibility between starters and joiners may sustain coordination, consistent with recent joint action findings. Our data also suggest that relative independence in musical parts may mitigate ATL-related challenges. Additionally, there may be a relationship between co-performer differences in empathy-related personality traits such as locus of control and coordination during performance under the influence of ATL. Incorporating the emergent coordinative dynamics between performers could help further innovation of music technologies and composition techniques for NMP.
View details for DOI 10.3389/fpsyg.2021.707090
View details for PubMedID 34630213
-
A Generative Model for Raw Audio Using Transformer Architectures
IEEE. 2021: 230-237
View details for Web of Science ID 000835744900030
-
A Deep Learning Approach for Low-Latency Packet Loss Concealment of Audio Signals in Networked Music Performance Applications
IEEE. 2020: 268–75
View details for Web of Science ID 000628527300034
-
Improved Real-Time Monophonic Pitch Tracking with the Extended Complex Kalman Filter
JOURNAL OF THE AUDIO ENGINEERING SOCIETY
2020; 68 (1-2): 78–86
View details for DOI 10.17743/jaes.2019.0053
View details for Web of Science ID 000607787600008
-
Delayed feedback embedded in perception-action coordination cycles results in anticipation behavior during synchronized rhythmic action: A dynamical systems approach.
PLoS computational biology
2019; 15 (10): e1007371
Abstract
Dancing and playing music require people to coordinate actions with auditory rhythms. In laboratory perception-action coordination tasks, people are asked to synchronize taps with a metronome. When synchronizing with a metronome, people tend to anticipate stimulus onsets, tapping slightly before the stimulus. The anticipation tendency increases with longer stimulus periods of up to 3500ms, but is less pronounced in trained individuals like musicians compared to non-musicians. Furthermore, external factors influence the timing of tapping. These factors include the presence of auditory feedback from one's own taps, the presence of a partner performing coordinated joint tapping, and transmission latencies (TLs) between coordinating partners. Phenomena like the anticipation tendency can be explained by delay-coupled systems, which may be inherent to the sensorimotor system during perception-action coordination. Here we tested whether a dynamical systems model based on this hypothesis reproduces observed patterns of human synchronization. We simulated behavior with a model consisting of an oscillator receiving its own delayed activity as input. Three simulation experiments were conducted using previously-published behavioral data from 1) simple tapping, 2) two-person alternating beat-tapping, and 3) two-person alternating rhythm-clapping in the presence of a range of constant auditory TLs. In Experiment 1, our model replicated the larger anticipation observed for longer stimulus intervals and adjusting the amplitude of the delayed feedback reproduced the difference between musicians and non-musicians. In Experiment 2, by connecting two models we replicated the smaller anticipation observed in human joint tapping with bi-directional auditory feedback compared to joint tapping without feedback. In Experiment 3, we varied TLs between two models alternately receiving signals from one another. Results showed reciprocal lags at points of alternation, consistent with behavioral patterns. Overall, our model explains various anticipatory behaviors, and has potential to inform theories of adaptive human synchronization.
View details for DOI 10.1371/journal.pcbi.1007371
View details for PubMedID 31671096
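The model described in the abstract above is built around an oscillator that receives its own delayed activity as input. The toy sketch below (my own construction, not the paper's published equations) shows the basic ingredients in Python: a phase oscillator driven by a periodic stimulus and by its own phase fed back after a delay tau, with the resulting locked phase offset reported at the end. All parameter values are illustrative.

```python
import numpy as np

# Toy delay-coupled oscillator (an illustration of the idea only, not the
# model published in the paper): a phase oscillator is driven by a periodic
# stimulus and also receives its own phase back after a feedback delay tau.
fs = 1000                          # simulation rate (Hz)
dt = 1.0 / fs
n = int(20.0 * fs)                 # 20 s of simulated coordination

period = 1.0                       # stimulus inter-onset interval (s)
omega = 2.0 * np.pi / period       # intrinsic frequency matches the stimulus
tau = 0.080                        # self-feedback delay (s), illustrative
k_self = 1.5                       # gain on the delayed self-input
k_stim = 0.8                       # coupling strength to the stimulus

d = int(tau * fs)                  # feedback delay in samples
theta = np.zeros(n)                # oscillator phase
stim = omega * dt * np.arange(n)   # stimulus phase (constant tempo)

for i in range(1, n):
    delayed = theta[i - 1 - d] if i - 1 >= d else theta[0]
    dtheta = (omega
              + k_self * np.sin(delayed - theta[i - 1])      # delayed self-input
              + k_stim * np.sin(stim[i - 1] - theta[i - 1]))  # stimulus coupling
    theta[i] = theta[i - 1] + dtheta * dt

# Phase offset at which the oscillator locks relative to the stimulus,
# expressed as an equivalent timing offset in milliseconds.
offset = np.angle(np.exp(1j * (theta[-1] - stim[-1])))
print("locked timing offset: %.1f ms" % (1000.0 * offset / omega))
```

Sweeping tau and the two gains shows how a delayed self-input shifts the phase at which the oscillator settles against the stimulus; the published model uses richer oscillator dynamics and fits its parameters to the behavioral tapping and clapping data.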
-
Detecting silent seizures by their sound
Epilepsia
2018; 59 (4): 877-884
Abstract
The traditional approach to interpreting electroencephalograms (EEGs) requires physicians with formal training to visually assess the waveforms. This approach can be less practical in critical settings where a trained EEG specialist is not readily available to review the EEG and diagnose ongoing subclinical seizures, such as nonconvulsive status epilepticus. We have developed a novel method by which EEG data are converted to sound in real time by letting the underlying electrophysiological signal modulate a voice tone that is in the audible range. Here, we explored whether individuals without any prior EEG training could listen to 15-second sonified EEG and determine whether the EEG represents seizures or nonseizure conditions. We selected 84 EEG samples to represent seizures (n = 7), seizure-like activity (n = 25), or nonperiodic, nonrhythmic activity (normal or focal/generalized slowing, n = 52). EEGs from single channels in the left and right hemispheres were then converted to sound files. After a 4-minute training video, medical students (n = 34) and nurses (n = 30) were asked to designate each audio sample as "seizure" or "nonseizure." We then compared their performance with that of EEG-trained neurologists (n = 12) and medical students (n = 29) who also diagnosed the same EEGs on visual display. Nonexperts listening to single-channel sonified EEGs detected seizures with remarkable sensitivity (students, 98% ± 5%; nurses, 95% ± 14%) compared to experts or nonexperts reviewing the same EEGs on visual display (neurologists, 88% ± 11%; students, 76% ± 19%). If the EEGs contained seizures or seizure-like activity, nonexperts listening to sonified EEGs rated them as seizures with high specificity (students, 85% ± 9%; nurses, 82% ± 12%) compared to experts or nonexperts viewing the EEGs visually (neurologists, 90% ± 7%; students, 65% ± 20%). Our study confirms that individuals without EEG training can detect ongoing seizures or seizure-like rhythmic periodic patterns by listening to sonified EEG. Although sonification of EEG cannot replace the traditional approaches to EEG interpretation, it provides a meaningful triage tool for fast assessment of patients with suspected subclinical seizures.
View details for DOI 10.1111/epi.14043
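The sonification idea summarized above, an EEG signal modulating a tone in the vocal range, can be sketched in a few lines of Python. The mapping below (carrier frequency, modulation depth, and the synthetic stand-in for an EEG trace) is my own illustration of the signal flow, not the published or patented method.

```python
import numpy as np
from scipy.io import wavfile

# Illustrative sketch: sonify a slowly varying EEG-like control signal by
# letting it modulate the pitch of a tone in the vocal range. Real use
# would replace `eeg` with a recorded EEG channel sampled at `ctrl_rate`.
audio_rate = 44100          # output sample rate (Hz)
ctrl_rate = 256             # EEG control rate (Hz), a typical EEG sampling rate
duration = 15.0             # seconds, matching the 15 s listening samples

# Synthetic stand-in for an EEG trace: slow drift plus a later rhythmic burst.
t_ctrl = np.arange(int(duration * ctrl_rate)) / ctrl_rate
eeg = (0.5 * np.sin(2 * np.pi * 1.0 * t_ctrl)
       + 0.5 * np.sin(2 * np.pi * 6.0 * t_ctrl) * (t_ctrl > 7))

# Normalize and upsample the control signal to audio rate.
eeg = (eeg - eeg.mean()) / (np.abs(eeg).max() + 1e-12)
t_audio = np.arange(int(duration * audio_rate)) / audio_rate
ctrl = np.interp(t_audio, t_ctrl, eeg)

# Map the EEG to pitch deviation around a voice-range carrier.
f0 = 220.0                  # carrier in the vocal range (Hz), assumed
depth = 60.0                # pitch deviation per unit of EEG (Hz), assumed
inst_freq = f0 + depth * ctrl
phase = 2 * np.pi * np.cumsum(inst_freq) / audio_rate
tone = 0.3 * np.sin(phase)

wavfile.write("sonified_eeg.wav", audio_rate, (tone * 32767).astype(np.int16))
```

A real deployment would stream an EEG channel into `ctrl` in real time and use a richer, voice-like carrier; the point here is only the path from slow electrophysiological data to an audible pitch contour.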
-
Op 1254: Music for Neutrons, Networks and Solenoids using a Restored Organ in a Nuclear Reactor
ASSOC COMPUTING MACHINERY. 2018: 537–41
View details for DOI 10.1145/3173225.3173304
View details for Web of Science ID 000476944600070
-
Don't Be Alarmed: Sonifying Autonomous Vehicle Perception to Increase Situation Awareness
ASSOC COMPUTING MACHINERY. 2018: 237–46
View details for DOI 10.1145/3239060.3265636
View details for Web of Science ID 000455217200024
-
Mobile Music, Sensors, Physical Modeling, and Digital Fabrication: Articulating the Augmented Mobile Instrument
APPLIED SCIENCES-BASEL
2017; 7 (12)
View details for DOI 10.3390/app7121311
View details for Web of Science ID 000419175800107
-
An Overview on Networked Music Performance Technologies
IEEE ACCESS
2016; 4: 8823-8843
View details for DOI 10.1109/ACCESS.2016.2628440
View details for Web of Science ID 000395542100034
-
Synthetic Sound from Synthetic Biology
SYNTHETIC AESTHETICS: INVESTIGATING SYNTHETIC BIOLOGY'S DESIGNS ON NATURE
2014: 219–30
View details for Web of Science ID 000337605500014
-
Sound synthesis for a brain stethoscope.
Journal of the Acoustical Society of America
2013; 134 (5): 4053
Abstract
Exploratory auscultation of brain signals has been prototyped in a project involving neurologists, real-time EEG, and techniques for computer-based sound synthesis. In a manner similar to using a stethoscope, the listener can manipulate the location being listened to. The sounds heard are sonifications of electrode signals. We present a method for rendering signals from arrays of sensors as sounds useful for distinguishing brain states. The approach maps brain wave signals to modulations characteristic of the human voice. Computer-synthesized voices "sing" the dynamics of wakefulness, sleep, seizures, and other states. The goal of the project is to create a recognizable inventory of such vocal "performances" and allow the user to probe source locations in the sensor array in real time.
View details for DOI 10.1121/1.4830793
View details for PubMedID 24181199
-
Internet rooms from internet audio.
Journal of the Acoustical Society of America
2013; 133 (5): 3347
Abstract
Music rehearsal and concert performance at a distance over long-haul optical fiber is a reality because of expanding network capacity to support low-latency, uncompressed audio streaming. Multichannel sound exchanged across the globe in real time creates "rooms" for synchronous performance. Nearby connections work well and musicians feel like they are playing together in the same room. Larger, continental-size, distances remain a challenge because of transmission delay and seemingly subtle but perceptually important cues which are in conflict with qualities expected of natural rooms. Establishing plausible, room-like reverberation between the endpoints helps mitigate these difficulties and expand the distance across which remotely located musicians perform together comfortably. The paper presents a working implementation for distributed reverberation and qualitative evaluations of reverberated versus non-reverberated conditions over the same long-haul connection.
View details for DOI 10.1121/1.4805672
View details for PubMedID 23655010
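To make the idea of distributed, room-like reverberation concrete, here is a minimal Schroeder-style reverberator in Python that could be applied to the received remote stream at an endpoint so that both players hear a shared, plausible room. The filter topology, delay lengths, and gains are generic textbook choices, not the implementation evaluated in the paper.

```python
import numpy as np

# Generic Schroeder reverberator (parallel combs followed by allpasses) as a
# stand-in for giving a network audio connection a shared room-like acoustic.
# Delay lengths and gains below are illustrative textbook values.

def comb(x, delay, feedback):
    """Feedback comb filter: y[n] = x[n-D] + feedback * y[n-D]."""
    y = np.zeros_like(x)
    buf = np.zeros(delay)
    for i in range(len(x)):
        out = buf[i % delay]
        y[i] = out
        buf[i % delay] = x[i] + feedback * out
    return y

def allpass(x, delay, gain):
    """Schroeder allpass: diffuses the comb output without coloring it."""
    y = np.zeros_like(x)
    buf = np.zeros(delay)
    for i in range(len(x)):
        buffered = buf[i % delay]
        v = x[i] + gain * buffered
        y[i] = buffered - gain * v
        buf[i % delay] = v
    return y

def reverberate(dry, fs=48000, wet_mix=0.3):
    """Apply a small Schroeder reverb to one channel of received remote audio."""
    comb_delays = [int(fs * d) for d in (0.0297, 0.0371, 0.0411, 0.0437)]
    wet = sum(comb(dry, d, 0.77) for d in comb_delays) / 4.0
    for d, g in ((int(fs * 0.005), 0.7), (int(fs * 0.0017), 0.7)):
        wet = allpass(wet, d, g)
    return (1 - wet_mix) * dry + wet_mix * wet

# Example: reverberate one second of a sparse click train at 48 kHz.
fs = 48000
dry = np.zeros(fs)
dry[::12000] = 1.0
out = reverberate(dry, fs)
print("peak dry / wet+dry:", dry.max(), round(float(np.abs(out).max()), 3))
```

In the scenario the paper describes, processing of this kind runs for the streams exchanged between endpoints so that both locations share a plausible common room, which is what helps mitigate the perceptual conflicts noted in the abstract.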
-
JackTrip/SoundWIRE Meets Server Farm
COMPUTER MUSIC JOURNAL
2010; 34 (3): 29-34
View details for Web of Science ID 000281629100006
-
Effect of temporal separation on synchronization in rhythmic performance
PERCEPTION
2010; 39 (7): 982-992
Abstract
A variety of short time delays inserted between pairs of subjects were found to affect their ability to synchronize a musical task. The subjects performed a clapping rhythm together from separate sound-isolated rooms via headphones and without visual contact. One-way time delays between pairs were manipulated electronically in the range of 3 to 78 ms. We are interested in quantifying the envelope of time delay within which two individuals produce synchronous performances. The results indicate that there are distinct regimes of mutually coupled behavior, and that 'natural time delay'--delay within the narrow range associated with travel times across spatial arrangements of groups and ensembles--supports the most stable performance. Conditions outside of this envelope, with time delays both below and above it, create characteristic interaction dynamics in the mutually coupled actions of the duo. Trials at extremely short delays (corresponding to unnaturally close proximity) had a tendency to accelerate from anticipation. Synchronization lagged at longer delays (larger than usual physical distances) and produced an increasingly severe deceleration and then deterioration of performed rhythms. The study has implications for music collaboration over the Internet and suggests that stable rhythmic performance can be achieved by 'wired ensembles' across distances of thousands of kilometers.
View details for DOI 10.1068/p6465
View details for Web of Science ID 000281270900011
View details for PubMedID 20842974
-
JackTrip: Under the Hood of an Engine for Network Audio
JOURNAL OF NEW MUSIC RESEARCH
2010; 39 (3): 183-187
View details for DOI 10.1080/09298215.2010.481361
View details for Web of Science ID 000284114500001
-
Tapping into the Internet as an Acoustical/Musical Medium
CONTEMPORARY MUSIC REVIEW
2009; 28 (4-5): 413-420
View details for DOI 10.1080/07494460903422362
View details for Web of Science ID 000275376300008
-
Analysis of Flute Control Parameters: A Comparison Between a Novice and an Experienced Flautist
ACTA ACUSTICA UNITED WITH ACUSTICA
2008; 94 (5): 740-749
View details for DOI 10.3813/AAA.918091
View details for Web of Science ID 000260966500012
-
Neural dynamics of event segmentation in music: Converging evidence for dissociable ventral and dorsal networks
NEURON
2007; 55 (3): 521-532
Abstract
The real world presents our sensory systems with a continuous stream of undifferentiated information. Segmentation of this stream at event boundaries is necessary for object identification and feature extraction. Here, we investigate the neural dynamics of event segmentation in entire musical symphonies under natural listening conditions. We isolated time-dependent sequences of brain responses in a 10 s window surrounding transitions between movements of symphonic works. A strikingly right-lateralized network of brain regions showed peak response during the movement transitions when, paradoxically, there was no physical stimulus. Model-dependent and model-free analysis techniques provided converging evidence for activity in two distinct functional networks at the movement transition: a ventral fronto-temporal network associated with detecting salient events, followed in time by a dorsal fronto-parietal network associated with maintaining attention and updating working memory. Our study provides direct experimental evidence for dissociable and causally linked ventral and dorsal networks during event segmentation of ecologically valid auditory stimuli.
View details for DOI 10.1016/j.neuron.2007.07.003
View details for Web of Science ID 000248711000017
View details for PubMedID 17678862
-
Cyberinstruments via physical modeling synthesis: Compositional applications
LEONARDO MUSIC JOURNAL
2007; 17: 61-66
View details for Web of Science ID 000251596800030
-
Oxygen flute: A computer music instrument that grows
JOURNAL OF NEW MUSIC RESEARCH
2005; 34 (3): 219-226
View details for DOI 10.1080/09298210500280687
View details for Web of Science ID 000233304900001
-
Physical model synthesis with application to Internet acoustics
IEEE International Conference on Acoustics, Speech, and Signal Processing
IEEE. 2002: 4056–4059
View details for Web of Science ID 000177510401015
-
Dream Machine 1990
COMPUTER MUSIC JOURNAL
1991; 15 (4): 62-64
View details for Web of Science ID A1991GW25600018
-
Toward an Intelligent Editor of Digital Audio: Recognition of Musical Constructs
COMPUTER MUSIC JOURNAL
1982; 6 (1): 30-41
View details for Web of Science ID A1982NQ21400003