Jonathan Berger is the Denning Family Provostial Professor in Music at Stanford University, where he teaches composition, music theory, and cognition at the Center for Computer Research in Music and Acoustics (CCRMA).
Jonathan is a 2017 Guggenheim Fellow and a 2016 winner of the Rome Prize.
He was the founding co-director of the Stanford Institute for Creativity and the Arts (SICA, now the Stanford Arts Institute) and founding director of Yale University’s Center for Studies in Music Technology.
Described as “gripping” by both the New York Times and the Chicago Tribune, “poignant” and “richly evocative” (San Francisco Chronicle), and “taut and hauntingly beautiful” (New York Times), Jonathan Berger’s recent works deal with both consciousness and conscience. The Kronos Quartet toured his recent monodrama, My Lai, internationally. Thrice commissioned by the National Endowment for the Arts, Berger has also received commissions from the Mellon and Rockefeller Foundations, the Chamber Music Society of Lincoln Center, and Chamber Music America. Upcoming commissions include an oratorio entitled The Ritual of Breath and Leonardo, for baritone and chamber orchestra.
In addition to composition, Berger is an active researcher with over 80 publications in a wide range of fields relating to music, science, and technology, and has held research grants from DARPA, the Wallenberg Foundation, the National Academy of Sciences, the Keck Foundation, and others.
Honors & Awards
The Elliott Carter Fellow (Rome Prize), American Academy in Rome (2016-2017)
Guggenheim Fellowship, Guggenheim Foundation (2018)
Symbolic Systems Program
Musical Archeoacoustics, Stanford University, The National Academy of Sciences/Keck Foundation, Museo Nazionale Romano, Palazzo Altemps (1/1/2017 - Present)
Investigating the interactions between architecture, musical style, and performance.
- Jonathan Abel, Consulting Professor, Stanford University
- Timothy Weaver, Professor, University of Denver
- Talya Berger, Senior Lecturer, Stanford University
Music Engagement Research Initiative, Stanford University
We seek to increase our understanding of how and why humans engage with music. Our research integrates industrial datasets detailing musical performance, audition, and discovery; imaging studies aimed at determining the neural correlates of music engagement; and performance studies investigating the impact of musical schemas on amateur and expert interpretations of written scores.
- Blair Kaneshiro, Research Scientist, Stanford University
- Doctoral Seminar in Composition
MUSIC 323 (Aut)
OSPFLOR 96 (Win)
- Research Seminar in Computer-Generated Music
MUSIC 220C (Spr)
Independent Studies (8)
- Concentrations Project
MUSIC 198 (Aut, Win, Spr)
- Independent Study
MUSIC 199 (Aut, Win, Spr, Sum)
- Individual Graduate Projects in Composition
MUSIC 325 (Aut, Win, Spr, Sum)
- Individual Undergraduate Projects in Composition
MUSIC 125 (Aut, Win, Spr)
- Practicum Internship
MUSIC 390 (Aut, Win, Spr, Sum)
- Readings in Music Theory
MUSIC 321 (Aut, Win, Spr, Sum)
- Research in Computer-Generated Music
MUSIC 220D (Aut, Win, Spr, Sum)
- Senior Honors Tutorial
SYMSYS 190 (Aut, Win, Spr)
Prior Year Courses
- Computational Music Theory & Analysis
MUSIC 258A (Aut)
- Immersion in the Arts: Living in Culture, Challenging
ITALIC 93 (Spr)
- Immersion in the Arts: Living in Culture, Creating
ITALIC 91 (Aut)
- Immersion in the Arts: Living in Culture, Interpreting
ITALIC 92 (Win)
- Music, Mind, and Human Behavior
MUSIC 1A (Win)
- The Aesthetics of Data
MUSIC 15N (Aut)
Doctoral Dissertation Reader (AC)
Alex Chechile, Charlie Sdraulig
Doctoral Dissertation Advisor (AC)
Elliot Canfield-Dafilou, Zhengshan Shi
Doctoral Dissertation Co-Advisor (AC)
Utku Asuroglu, Alex Chechile
Master's Program Advisor
Elliot Canfield-Dafilou, Noah Fram, Camille Noufi, Zhengshan Shi
Natural music evokes correlated EEG responses reflecting temporal structure and beat
NEUROIMAGE
The brain activity of multiple subjects has been shown to synchronize during salient moments of natural stimuli, suggesting that correlation of neural responses indexes a brain state operationally termed 'engagement'. While past electroencephalography (EEG) studies have considered both auditory and visual stimuli, the extent to which these results generalize to music, a temporally structured stimulus for which the brain has evolved specialized circuitry, is less understood. Here we investigated neural correlation during natural music listening by recording EEG responses from N=48 adult listeners as they heard real-world musical works, some of which were temporally disrupted through shuffling of short-term segments (measures), reversal, or randomization of phase spectra. We measured correlation between multiple neural responses (inter-subject correlation) and between neural responses and stimulus envelope fluctuations (stimulus-response correlation) in the time and frequency domains. Stimuli retaining basic musical features, such as rhythm and melody, elicited significantly higher behavioral ratings and neural correlation than did phase-scrambled controls. However, while unedited songs were self-reported as most pleasant, time-domain correlations were highest during measure-shuffled versions. Frequency-domain measures of correlation (coherence) peaked at frequencies related to the musical beat, although the magnitudes of these spectral peaks did not explain the observed temporal correlations. Our findings show that natural music evokes significant inter-subject and stimulus-response correlations, and suggest that the neural correlates of musical 'engagement' may be distinct from those of enjoyment.
View details for DOI 10.1016/j.neuroimage.2020.116559
View details for PubMedID 31978543
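The two correlation measures named in this abstract can be illustrated concretely. Below is a minimal Python sketch, not the study's analysis code: it assumes preprocessed, single-component EEG stored as a (subjects x samples) array and a stimulus envelope of matching length, with all variable names and surrogate data invented for illustration.

```python
import numpy as np

def inter_subject_correlation(eeg):
    """Mean pairwise Pearson correlation across subjects' responses.

    eeg: array of shape (n_subjects, n_samples), one channel/component.
    """
    corr = np.corrcoef(eeg)                  # subjects x subjects matrix
    iu = np.triu_indices(eeg.shape[0], k=1)  # unique subject pairs
    return corr[iu].mean()

def stimulus_response_correlation(eeg, envelope):
    """Mean Pearson correlation between each response and the envelope."""
    env = (envelope - envelope.mean()) / envelope.std()
    z = (eeg - eeg.mean(axis=1, keepdims=True)) / eeg.std(axis=1, keepdims=True)
    return float((z @ env / env.size).mean())

# Surrogate data: 48 'subjects', 30 s at 128 Hz, weakly envelope-driven
rng = np.random.default_rng(0)
envelope = rng.standard_normal(30 * 128)
eeg = 0.1 * envelope + rng.standard_normal((48, 30 * 128))
print(inter_subject_correlation(eeg))
print(stimulus_response_correlation(eeg, envelope))
```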
Characterizing Listener Engagement with Popular Songs Using Large-Scale Music Discovery Data
FRONTIERS IN PSYCHOLOGY
Music discovery in everyday situations has been facilitated in recent years by audio content recognition services such as Shazam. The widespread use of such services has produced a wealth of user data, specifying where and when a global audience takes action to learn more about music playing around them. Here, we analyze a large collection of Shazam queries of popular songs to study the relationship between the timing of queries and corresponding musical content. Our results reveal that the distribution of queries varies over the course of a song, and that salient musical events drive an increase in queries during a song. Furthermore, we find that the distribution of queries at the time of a song's release differs from the distribution following a song's peak and subsequent decline in popularity, possibly reflecting an evolution of user intent over the "life cycle" of a song. Finally, we derive insights into the data size needed to achieve consistent query distributions for individual songs. The combined findings of this study suggest that music discovery behavior, and other facets of the human experience of music, can be studied quantitatively using large-scale industrial data.
View details for DOI 10.3389/fpsyg.2017.00416
View details for Web of Science ID 000397317600001
View details for PubMedID 28386241
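As a hedged illustration of the query-timing analysis described above (the actual Shazam dataset and pipeline are proprietary; the function and data here are assumptions), one can histogram query offsets over normalized song position:

```python
import numpy as np

def query_distribution(query_offsets_s, song_duration_s, n_bins=50):
    """Normalized histogram of query times over normalized song position."""
    positions = np.asarray(query_offsets_s) / song_duration_s
    hist, edges = np.histogram(positions, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum(), edges

# Surrogate data: uniform queries plus a burst after a salient event at ~85 s
rng = np.random.default_rng(1)
offsets = np.concatenate([rng.uniform(0, 200, 500),
                          rng.normal(85, 5, 300)])
density, edges = query_distribution(offsets, song_duration_s=200)
print(edges[density.argmax()], "-> song position with most queries")
```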
The impact of audiovisual biofeedback on 4D functional and anatomic imaging: Results of a lung cancer pilot study.
Radiotherapy and oncology
2016; 120 (2): 267-272
The impact of audiovisual (AV) biofeedback on four-dimensional (4D) positron emission tomography (PET) and 4D computed tomography (CT) image quality was investigated in a prospective clinical trial (NCT01172041). 4D-PET and 4D-CT images of ten lung cancer patients were acquired with AV biofeedback (AV) and under free breathing (FB). The 4D-PET images were analyzed for motion artifacts by comparing 4D to 3D PET for gross tumor volumes (GTVPET) and maximum standardized uptake values (SUVmax). The 4D-CT images were analyzed for artifacts by comparing normalized cross-correlation-based scores (NCCS) and quantifying a visual assessment score (VAS). A Wilcoxon signed-ranks test was used for statistical testing. The impact of AV biofeedback varied widely. Overall, the 3D-to-4D decrease of GTVPET was 1.2 ± 1.3 cm³ with AV and 0.6 ± 1.8 cm³ for FB. The 4D-PET increase of SUVmax was 1.3 ± 0.9 with AV and 1.3 ± 0.8 for FB. The 4D-CT NCCS were 0.65 ± 0.27 with AV and 0.60 ± 0.32 for FB (p = 0.08). The 4D-CT VAS was 0.0 ± 2.7. This study demonstrated a high patient dependence on the use of AV biofeedback to reduce motion artifacts in 4D imaging. None of the hypotheses tested were statistically significant. Future development of AV biofeedback will focus on optimizing the human-computer interface and including patient training sessions for improved comprehension and compliance.
View details for DOI 10.1016/j.radonc.2016.05.016
View details for PubMedID 27256597
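The abstract scores 4D-CT artifacts using normalized cross-correlation-based scores (NCCS). The exact scoring protocol is not given there; the sketch below shows only the generic zero-mean normalized cross-correlation such a score builds on, with all names and data illustrative:

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Zero-mean NCC between two equally sized 2D arrays; result in [-1, 1]."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Example: neighboring axial slices; an abrupt drop in NCC between neighbors
# can flag a breathing-induced discontinuity in the reconstruction.
rng = np.random.default_rng(2)
slice_a = rng.random((256, 256))
slice_b = slice_a + 0.05 * rng.standard_normal((256, 256))
print(normalized_cross_correlation(slice_a, slice_b))
```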
In Search of a Perceptual Metric for Timbre: Dissimilarity Judgments among Synthetic Sounds with MFCC-Derived Spectral Envelopes
JOURNAL OF THE AUDIO ENGINEERING SOCIETY
2012; 60 (9): 674-685
View details for Web of Science ID 000310187100002
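As the title indicates, the synthetic stimuli use MFCC-derived spectral envelopes. A minimal sketch of the underlying idea (mel warping omitted for brevity; names and data are illustrative, not the paper's method): take the DCT of a log spectrum, keep the first few coefficients, and invert to obtain a smooth envelope.

```python
import numpy as np
from scipy.fft import dct, idct

def cepstral_envelope(log_spectrum, n_coeffs=13):
    """Smooth spectral envelope from a truncated cepstral representation."""
    ceps = dct(log_spectrum, norm='ortho')
    ceps[n_coeffs:] = 0.0               # discard fine spectral detail
    return idct(ceps, norm='ortho')     # smooth log-envelope

# Example: a jagged synthetic log spectrum and its smoothed envelope
rng = np.random.default_rng(3)
bins = np.linspace(0.0, 1.0, 512)
log_spec = -3.0 * bins + 0.5 * rng.standard_normal(512)  # spectral tilt + noise
print(cepstral_envelope(log_spec)[:5])
```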
Commissioning and quality assurance for a respiratory training system based on audiovisual biofeedback.
Journal of applied clinical medical physics
2010; 11 (4): 3262
A respiratory training system based on audiovisual biofeedback has been implemented at our institution. It is intended to improve patients' respiratory regularity during four-dimensional (4D) computed tomography (CT) image acquisition. The purpose is to help eliminate the artifacts in 4D-CT images caused by irregular breathing, as well as to improve delivery efficiency during treatment, where respiratory irregularity is a concern. This article describes the commissioning and quality assurance (QA) procedures developed for this peripheral respiratory training system, the Stanford Respiratory Training (START) system. Using the Varian real-time position management system for the respiratory signal input, the START software was commissioned and able to acquire sample respiratory traces, create a patient-specific guiding waveform, and generate audiovisual signals for improving respiratory regularity. Routine QA tests, which include hardware maintenance, visual guiding-waveform creation, auditory sound synchronization, and feedback assessment, have been developed for the START system. The QA procedures developed here for the START system could be easily adapted to other respiratory training systems based on audiovisual biofeedback.
View details for PubMedID 21081883
View details for Web of Science ID 000284215700006
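One step the abstract mentions, creating a patient-specific guiding waveform from sample respiratory traces, can be sketched as follows. This is an assumption-laden illustration, not the START system's published algorithm: cycles are cut at inhale peaks, resampled to a common length, and averaged.

```python
import numpy as np
from scipy.signal import find_peaks

def guiding_waveform(trace, fs, n_points=100):
    """Average respiratory cycle from a recorded trace (1D array, fs in Hz)."""
    peaks, _ = find_peaks(trace, distance=int(2 * fs))  # >= 2 s between breaths
    cycles = []
    for start, stop in zip(peaks[:-1], peaks[1:]):
        cycle = trace[start:stop]
        x = np.linspace(0.0, 1.0, cycle.size)
        cycles.append(np.interp(np.linspace(0.0, 1.0, n_points), x, cycle))
    return np.mean(cycles, axis=0)

# Surrogate breathing: ~0.25 Hz sinusoid with noise, sampled at 25 Hz
fs = 25
t = np.arange(0, 60, 1 / fs)
trace = np.sin(2 * np.pi * 0.25 * t)
trace += 0.05 * np.random.default_rng(4).standard_normal(t.size)
print(guiding_waveform(trace, fs).shape)
```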
Analysis of Pitch Perception of Inharmonicity in Pipa Strings Using Response Surface Methodology
JOURNAL OF NEW MUSIC RESEARCH
2010; 39 (1): 63-73
Neural dynamics of event segmentation in music: Converging evidence for dissociable ventral and dorsal networks
NEURON
2007; 55 (3): 521-532
The real world presents our sensory systems with a continuous stream of undifferentiated information. Segmentation of this stream at event boundaries is necessary for object identification and feature extraction. Here, we investigate the neural dynamics of event segmentation in entire musical symphonies under natural listening conditions. We isolated time-dependent sequences of brain responses in a 10 s window surrounding transitions between movements of symphonic works. A strikingly right-lateralized network of brain regions showed peak response during the movement transitions when, paradoxically, there was no physical stimulus. Model-dependent and model-free analysis techniques provided converging evidence for activity in two distinct functional networks at the movement transition: a ventral fronto-temporal network associated with detecting salient events, followed in time by a dorsal fronto-parietal network associated with maintaining attention and updating working memory. Our study provides direct experimental evidence for dissociable and causally linked ventral and dorsal networks during event segmentation of ecologically valid auditory stimuli.
View details for DOI 10.1016/j.neuron.2007.07.003
View details for Web of Science ID 000248711000017
View details for PubMedID 17678862
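The 10 s analysis window described in this abstract amounts to event-locked averaging around movement transitions. Below is a minimal, assumption-laden sketch on a 1D surrogate time series; the study's actual fMRI analysis is far richer than this illustration.

```python
import numpy as np

def event_locked_average(signal, fs, event_times_s, half_window_s=5.0):
    """Mean response in a +/- half_window_s window around each event time."""
    half = int(half_window_s * fs)
    epochs = [signal[int(t * fs) - half:int(t * fs) + half]
              for t in event_times_s
              if half <= int(t * fs) <= len(signal) - half]
    return np.mean(epochs, axis=0)  # shape: (2 * half,)

# Surrogate: a unit impulse at each 'movement transition'
fs = 2.0                       # samples per second
sig = np.zeros(1200)
transitions = [100.0, 250.0, 400.0]
for t in transitions:
    sig[int(t * fs)] = 1.0
print(event_locked_average(sig, fs, transitions).argmax())  # center of window
```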
Melody extraction and musical onset detection via probabilistic models of framewise STFT peak data
IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING
2007; 15 (4): 1257-1272
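The cited work builds probabilistic models over framewise STFT peak data. As a hedged illustration of the front end only (peak extraction; the probabilistic model itself is not sketched, and all names here are invented):

```python
import numpy as np
from scipy.signal import stft, find_peaks

def framewise_stft_peaks(x, fs, n_fft=2048, max_peaks=5):
    """For each STFT frame, return (time_s, strongest peak frequencies in Hz)."""
    f, t, Z = stft(x, fs=fs, nperseg=n_fft)
    mag = np.abs(Z)
    frames = []
    for i in range(mag.shape[1]):
        peaks, props = find_peaks(mag[:, i], height=0.0)
        top = peaks[np.argsort(props['peak_heights'])[::-1][:max_peaks]]
        frames.append((t[i], np.sort(f[top])))
    return frames

# Example: a 440 Hz tone yields a peak near 440 Hz in every interior frame
fs = 16000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
print(framewise_stft_peaks(x, fs)[1])
```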
SICIB: An interactive music composition system using body movements
COMPUTER MUSIC JOURNAL
2001; 25 (2): 25-36
View details for Web of Science ID 000169754300003