Bio


Jonathan Berger is the Denning Family Provostial Professor in Music at Stanford University, where he teaches composition, music theory, and cognition at the Center for Computer Research in Music and Acoustics (CCRMA).
Berger is a 2017 Guggenheim Fellow and a 2016 winner of the Rome Prize.
He was the founding co-director of the Stanford Institute for Creativity and the Arts (SICA, now the Stanford Arts Institute) and the founding director of Yale University's Center for Studies in Music Technology.
Described as “gripping” by both the New York Times and the Chicago Tribune, “poignant” and “richly evocative” (San Francisco Chronicle), and “taut and hauntingly beautiful” (New York Times), Jonathan Berger’s recent works deal with both consciousness and conscience. His monodrama, My Lai, toured internationally, and the Kronos Quartet's recording of the work was released by Smithsonian/Folkways. His opera, The Ritual of Breath is the Rite to Resist, will be performed at Lincoln Center in July 2024.
Thrice commissioned by the National Endowment for the Arts, Berger has also received recent commissions from the Mellon and Rockefeller Foundations, the Chamber Music Society of Lincoln Center, and Chamber Music America.
Upcoming commissions include a new work for the Kronos Quartet.
In addition to composing, Berger is an active researcher with over 80 publications in a wide range of fields relating to music, science, and technology. He has held research grants from DARPA, the Wallenberg Foundation, the National Academy of Sciences, the Keck Foundation, and others.
Berger is the PI of a major grant from the Templeton Religion Trust to study how music and architecture interact to create a sense of awe.

Honors & Awards


  • Elliott Carter Fellowship (Rome Prize), American Academy in Rome (2016-2017)
  • Guggenheim Fellowship, Guggenheim Foundation (2018)

Program Affiliations


  • Symbolic Systems Program

Professional Education


  • DMA, Stanford University, Music Composition (1983)

Projects


  • Musical Archeoacoustics, Stanford University, The National Academy of Sciences/Keck Foundation, Museo Nazionale Romano, Palazzo Altemps (1/1/2017 - Present)

    Investigating the interactions between architecture, musical style, and performance

    Location

    Italy

    Collaborators

    • Jonathan Abel, Consulting Professor, Stanford University
    • Timothy Weaver, Professor, University of Denver
    • Talya Berger, Senior Lecturer, Stanford University
  • Music Engagement Research Initiative, Stanford University

    We seek to increase our understanding of how and why humans engage with music. Our research integrates industrial datasets detailing musical performance, audition, and discovery; imaging studies aimed at determining the neural correlates of music engagement; and performance studies investigating the impact of musical schemas on amateur and expert interpretations of written scores.

    Location

    Stanford, CA

    Collaborators

    • Blair Kaneshiro, Research Scientist, Stanford University

All Publications


  • Syncopation as Probabilistic Expectation: Conceptual, Computational, and Experimental Evidence. Cognitive Science. Fram, N. R., Berger, J. 2023; 47 (12): e13390

    Abstract

    Definitions of syncopation share two characteristics: the presence of a meter or analogous hierarchical rhythmic structure and a displacement or contradiction of that structure. These attributes are translated in terms of a Bayesian theory of syncopation, where the syncopation of a rhythm is inferred based on a hierarchical structure that is, in turn, learned from the ongoing musical stimulus. Several experiments tested its simplest possible implementation, with equally weighted priors associated with different meters and independence of auditory events, which can be decomposed into two terms representing note density and deviation from a metric hierarchy. A computational simulation demonstrated that extant measures of syncopation fall into two distinct factors analogous to the terms in the simple Bayesian model. Next, a series of behavioral experiments found that perceived syncopation is significantly related to both terms, offering support for the general Bayesian construction of syncopation. However, we also found that the prior expectations associated with different metric structures are not equal across meters and that there is an interaction between density and hierarchical deviation, implying that auditory events are not independent from each other. Together, these findings provide evidence that syncopation is a manifestation of a form of temporal expectation that can be directly represented in Bayesian terms and offer a complementary, feature-driven approach to recent Bayesian models of temporal prediction.

    View details for DOI 10.1111/cogs.13390

    View details for PubMedID 38043104
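
    To make the two-term decomposition described in the abstract concrete, the toy Python sketch below scores a rhythm as a note-density term plus a deviation-from-metric-hierarchy term. The metric weights, the scoring rule, and the equal weighting of the two terms are illustrative assumptions, not the paper's actual Bayesian model.

      # Toy two-term syncopation score: note density plus deviation from a
      # metric hierarchy. Weights and combination rule are illustrative only.

      # Metric weights for one 4/4 bar at sixteenth-note resolution
      # (higher = metrically stronger position).
      METRIC_WEIGHTS = [4, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]

      def syncopation_score(pattern):
          """pattern: list of 0/1 onsets, one per sixteenth-note position."""
          n_onsets = sum(pattern)
          if n_onsets == 0:
              return 0.0
          density = n_onsets / len(pattern)
          # Mean "weakness" of the metric positions that carry onsets.
          max_w = max(METRIC_WEIGHTS)
          deviation = sum(max_w - w
                          for w, x in zip(METRIC_WEIGHTS, pattern)
                          if x) / (n_onsets * max_w)
          return density + deviation  # equal weighting, purely illustrative

      # On-beat quarter notes vs. off-beat eighths: the off-beat pattern,
      # which contradicts the hierarchy, scores higher at equal density.
      on_beats  = [1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0]
      off_beats = [0,0,1,0, 0,0,1,0, 0,0,1,0, 0,0,1,0]
      print(syncopation_score(on_beats), syncopation_score(off_beats))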

  • Inducing and disrupting flow during music performance. Frontiers in Psychology. Zielke, J., Anglada-Tort, M., Berger, J. 2023; 14: 1187153

    Abstract

    Flow is defined as a state of total absorption in an activity, involving focused attention, deep engagement, loss of self-conscious awareness, and self-perceived temporal distortion. Musical flow has been associated with enhanced performance, but the bulk of previous research has investigated flow mechanisms using self-report methodology. Thus, little is known about the precise musical features that may induce or disrupt flow. This work aims to consider the experience of flow from a music performance perspective in order to investigate these features and introduces a method of measuring flow in real time. In Study 1, musicians reviewed a self-selected video of themselves performing, noting first, where in the performance they recalled "losing themselves" in the music, and second, where their focused state was interrupted. Thematic analysis of participant flow experiences suggests temporal, dynamic, pitch and timbral dimensions associated with the induction and disruption of flow. In Study 2, musicians were brought into the lab and recorded while performing a self-selected musical composition. Next, participants were asked to estimate the duration of their performance, and to rewatch their recordings to mark those places in which they recalled "losing themselves in the moment." We found that the proportion of performance time spent in flow significantly correlated with self-reported flow intensity, providing an intrinsic measure of flow and confirming the validity of our method to capture flow states in music performance. We then analyzed the music scores and participants' performed melodies. The results showed that stepwise motion, repeated sequence, and a lack of disjunct motion are common to flow state entry points, whereas disjunct motion and syncopation are common to flow state exit points. Overall, such initial findings suggest directions that warrant future study and, altogether, they have implications regarding utilizing flow in music performance contexts.

    View details for DOI 10.3389/fpsyg.2023.1187153

    View details for PubMedID 37333611

    View details for PubMedCentralID PMC10272888

  • Acoustically-Driven Phoneme Removal That Preserves Vocal Affect Cues. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Noufi, C., Berger, J., Frank, M., Parker, K., Bowling, D. L. 2023

    Abstract

    In this paper, we propose a method for removing linguistic information from speech for the purpose of isolating paralinguistic indicators of affect. The immediate utility of this method lies in clinical tests of sensitivity to vocal affect that are not confounded by language, which is impaired in a variety of clinical populations. The method is based on simultaneous recordings of speech audio and electroglottographic (EGG) signals. The speech audio signal is used to estimate the average vocal tract filter response and amplitude envelope. The EGG signal supplies a direct correlate of voice source activity that is mostly independent of phonetic articulation. These signals are used to create a third signal designed to capture as much paralinguistic information from the vocal production system as possible, maximizing the retention of bioacoustic cues to affect while eliminating phonetic cues to verbal meaning. To evaluate the success of this method, we studied the perception of corresponding speech audio and transformed EGG signals in an affect rating experiment with online listeners. The results show a high degree of similarity in the perceived affect of matched signals, indicating that our method is effective.

    View details for DOI 10.1109/icassp49357.2023.10095942

    View details for PubMedID 37701064

    View details for PubMedCentralID PMC10495117
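
    The Python sketch below illustrates the signal path described in the abstract, though it is not the authors' implementation: the EGG (voice-source) signal is filtered through a time-averaged spectral envelope estimated from the speech, and the speech amplitude envelope is then imposed on the result. The STFT averaging and Hilbert-envelope choices here are assumptions.

      import numpy as np
      from scipy.signal import stft, istft, hilbert

      def affect_preserving_signal(speech, egg, fs, nperseg=1024):
          # Time-averaged speech magnitude spectrum as a crude stand-in for
          # the average vocal tract filter response.
          _, _, S = stft(speech, fs=fs, nperseg=nperseg)
          avg_filter = np.mean(np.abs(S), axis=1, keepdims=True)

          # Shape the EGG signal with that average response, per frequency bin.
          _, _, E = stft(egg, fs=fs, nperseg=nperseg)
          _, shaped = istft(E * avg_filter, fs=fs, nperseg=nperseg)

          # Impose the speech amplitude envelope (Hilbert magnitude).
          n = min(len(shaped), len(speech))
          env = np.abs(hilbert(speech[:n]))
          shaped = shaped[:n]
          return shaped * (env / (np.abs(hilbert(shaped)) + 1e-9))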

  • Enhancing Non-Speech Information Communicated in Closed Captioning Through Critical Design. May, L., Park, S., Berger, J. Association for Computing Machinery (ACM). 2023
  • Characterizing the Relationship Between the COVID-19 Pandemic and U.S. Classical Musicians' Wellbeing. Frontiers in Sociology. Wang, G., Fram, N. R., Carstensen, L. L., Berger, J. 2022; 7: 848098

    Abstract

    The COVID-19 pandemic has devastated the economic and social wellbeing of communities worldwide. Certain groups, such as classical musicians, have been disproportionately impacted by the strain of the pandemic. The COVID-19 pandemic has greatly harmed the classical music industry, silencing the world's concert halls and theaters. In an industry characterized by instability, a shock as great as COVID-19 may bring negative effects that far outlast the pandemic itself. This study investigates the wellbeing of classical musicians during the COVID-19 pandemic. Sixty-eight professional classical musicians completed a questionnaire composed of validated measures of future time horizons, emotional experience, social relationships, and life satisfaction. Findings show that feelings of loneliness had a significant negative association with other measures of wellbeing and were significantly mediated by increased social integration and perceived social support from colleagues, friends, and family. These findings help to characterize the present psychological, emotional, and social wellness of classical musicians in the United States, the first step toward mitigating the hazardous impacts of COVID-19 on this vulnerable group's mental health and wellness.

    View details for DOI 10.3389/fsoc.2022.848098

    View details for PubMedID 35399192

  • Hitting Pause: How User Perceptions of Collaborative Playlists Evolved in the United States During the COVID-19 Pandemic. Park, S., Redmond, E., Berger, J., Kaneshiro, B. Association for Computing Machinery (ACM). 2022
  • How Music Can Literally Heal the Heart. Chew, E. Scientific American (September 18, 2021)
  • Collaborating in Isolation: Assessing the Effects of the Covid-19 Pandemic on Patterns of Collaborative Behavior Among Working Musicians. Frontiers in Psychology. Fram, N. R., Goudarzi, V., Terasawa, H., Berger, J. 2021; 12: 674246

    Abstract

    The Covid-19 pandemic severely limited collaboration among musicians in rehearsal and ensemble performance, and demanded radical shifts in collaborative practices. Understanding the nature of these changes in music creators' patterns of collaboration, as well as how musicians shifted prioritizations and adapted their use of the available technologies, can offer invaluable insights into the resilience and importance of different aspects of musical collaboration. In addition, assessing changes in the collaboration networks among music creators can improve the current understanding of genre and style formation and evolution. We used an internet survey distributed to music creators, including performers, composers, producers, and engineers, all active before and during the pandemic, to assess their perceptions of how their music, collaborative practice, and use of technology were impacted by shelter-in-place orders associated with Covid-19, as well as how they adapted over the course of the pandemic. This survey was followed by Zoom interviews with a subset of participants. Along with confirming previous results showing increased reliance on nostalgia for musical inspiration, we found that participants' collaborative behaviors were surprisingly resilient to pandemic-related changes. In addition, participant responses appeared to be driven by a relatively small number of underlying factors, representing approaches to musical collaboration such as musical extroversion or musical introversion, inspiration clusters such as activist musicking, and style or genre clusters.

    View details for DOI 10.3389/fpsyg.2021.674246

    View details for PubMedID 34349700

  • Inter-subject Correlation While Listening to Minimalist Music: A Study of Electrophysiological and Behavioral Responses to Steve Reich's Piano Phase. Frontiers in Neuroscience. Dauer, T., Nguyen, D. T., Gang, N., Dmochowski, J. P., Berger, J., Kaneshiro, B. 2021; 15: 702067

    Abstract

    Musical minimalism utilizes the temporal manipulation of restricted collections of rhythmic, melodic, and/or harmonic materials. One example, Steve Reich's Piano Phase, offers listeners readily audible formal structure with unpredictable events at the local level. For example, pattern recurrences may generate strong expectations which are violated by small temporal and pitch deviations. A hyper-detailed listening strategy prompted by these minute deviations stands in contrast to the type of listening engagement typically cultivated around functional tonal Western music. Recent research has suggested that the inter-subject correlation (ISC) of electroencephalographic (EEG) responses to natural audio-visual stimuli objectively indexes a state of "engagement," demonstrating the potential of this approach for analyzing music listening. But can ISCs capture engagement with minimalist music, which features less obvious expectation formation and has historically received a wide range of reactions? To approach this question, we collected EEG and continuous behavioral (CB) data while 30 adults listened to an excerpt from Steve Reich's Piano Phase, as well as three controlled manipulations and a popular-music remix of the work. Our analyses reveal that EEG and CB ISC are highest for the remix stimulus and lowest for our most repetitive manipulation, no statistical differences in overall EEG ISC between our most musically meaningful manipulations and Reich's original piece, and evidence that compositional features drove engagement in time-resolved ISC analyses. We also found that aesthetic evaluations corresponded well with overall EEG ISC. Finally, we highlight co-occurrences between stimulus events and time-resolved EEG and CB ISC. We offer the CB paradigm as a useful analysis measure and note the value of minimalist compositions as a limit case for the neuroscientific study of music listening. Overall, our participants' neural, continuous behavioral, and question responses showed strong similarities that may help refine our understanding of the type of engagement indexed by ISC for musical stimuli.

    View details for DOI 10.3389/fnins.2021.702067

    View details for PubMedID 34955706
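
    As a simplified illustration of inter-subject correlation, the sketch below computes the mean pairwise Pearson correlation across subjects' response time series. The published analyses use correlated components analysis of multichannel EEG; this single-channel pairwise version only conveys the basic idea.

      import numpy as np
      from itertools import combinations

      def pairwise_isc(responses):
          """responses: array of shape (n_subjects, n_samples)."""
          rs = [np.corrcoef(responses[i], responses[j])[0, 1]
                for i, j in combinations(range(len(responses)), 2)]
          return float(np.mean(rs))

      # Toy data: a shared stimulus-driven component plus subject noise.
      rng = np.random.default_rng(0)
      shared = rng.standard_normal(1000)
      subjects = shared + 0.8 * rng.standard_normal((30, 1000))
      print(pairwise_isc(subjects))  # well above 0, reflecting the shared drive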

  • Natural music evokes correlated EEG responses reflecting temporal structure and beat. NeuroImage. Kaneshiro, B., Nguyen, D. T., Norcia, A. M., Dmochowski, J. P., Berger, J. 2020: 116559

    Abstract

    The brain activity of multiple subjects has been shown to synchronize during salient moments of natural stimuli, suggesting that correlation of neural responses indexes a brain state operationally termed 'engagement'. While past electroencephalography (EEG) studies have considered both auditory and visual stimuli, the extent to which these results generalize to music, a temporally structured stimulus for which the brain has evolved specialized circuitry, is less understood. Here we investigated neural correlation during natural music listening by recording EEG responses from N=48 adult listeners as they heard real-world musical works, some of which were temporally disrupted through shuffling of short-term segments (measures), reversal, or randomization of phase spectra. We measured correlation between multiple neural responses (inter-subject correlation) and between neural responses and stimulus envelope fluctuations (stimulus-response correlation) in the time and frequency domains. Stimuli retaining basic musical features, such as rhythm and melody, elicited significantly higher behavioral ratings and neural correlation than did phase-scrambled controls. However, while unedited songs were self-reported as most pleasant, time-domain correlations were highest during measure-shuffled versions. Frequency-domain measures of correlation (coherence) peaked at frequencies related to the musical beat, although the magnitudes of these spectral peaks did not explain the observed temporal correlations. Our findings show that natural music evokes significant inter-subject and stimulus-response correlations, and suggest that the neural correlates of musical 'engagement' may be distinct from those of enjoyment.

    View details for DOI 10.1016/j.neuroimage.2020.116559

    View details for PubMedID 31978543
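
    The sketch below illustrates the two correlation measures named above on toy data: a time-domain stimulus-response correlation between a response and a stimulus envelope, and frequency-domain coherence, whose peak falls at a synthetic 2 Hz "beat" rate. The signals and parameters are assumptions for illustration, not the study's data or pipeline.

      import numpy as np
      from scipy.signal import coherence

      fs = 250.0                                         # EEG-like sample rate, Hz
      t = np.arange(0, 60, 1 / fs)
      envelope = 1 + 0.5 * np.sin(2 * np.pi * 2.0 * t)   # 2 Hz beat modulation
      response = envelope + np.random.default_rng(1).standard_normal(t.size)

      # Time-domain stimulus-response correlation.
      src = np.corrcoef(envelope, response)[0, 1]

      # Frequency-domain coherence; expect a peak near the 2 Hz beat rate.
      f, Cxy = coherence(envelope, response, fs=fs, nperseg=2048)
      beat_bin = np.argmin(np.abs(f - 2.0))
      print(f"SRC = {src:.2f}, coherence at 2 Hz = {Cxy[beat_bin]:.2f}")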

  • A Method for Studying Interactions between Music Performance and Rooms with Real-Time Virtual Acoustics. Canfield-Dafilou, E. K., Callery, E. F., Abel, J. S., Berger, J. J., Loughran, R., Angus-Whiteoak, J. Audio Engineering Society. 2019
  • Characterizing Listener Engagement with Popular Songs Using Large-Scale Music Discovery Data. Frontiers in Psychology. Kaneshiro, B., Ruan, F., Baker, C. W., Berger, J. 2017; 8

    Abstract

    Music discovery in everyday situations has been facilitated in recent years by audio content recognition services such as Shazam. The widespread use of such services has produced a wealth of user data, specifying where and when a global audience takes action to learn more about music playing around them. Here, we analyze a large collection of Shazam queries of popular songs to study the relationship between the timing of queries and corresponding musical content. Our results reveal that the distribution of queries varies over the course of a song, and that salient musical events drive an increase in queries during a song. Furthermore, we find that the distribution of queries at the time of a song's release differs from the distribution following a song's peak and subsequent decline in popularity, possibly reflecting an evolution of user intent over the "life cycle" of a song. Finally, we derive insights into the data size needed to achieve consistent query distributions for individual songs. The combined findings of this study suggest that music discovery behavior, and other facets of the human experience of music, can be studied quantitatively using large-scale industrial data.

    View details for DOI 10.3389/fpsyg.2017.00416

    View details for Web of Science ID 000397317600001

    View details for PubMedID 28386241
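
    A hypothetical sketch of the core analysis: bin query offsets (seconds into the song at which users invoked recognition) into a normalized within-song distribution whose spikes can be compared against musical events. The data schema, a flat list of offsets per song, is assumed for illustration.

      import numpy as np

      def query_distribution(offsets_sec, song_len_sec, n_bins=100):
          """Normalized histogram of query offsets across a song's timeline."""
          hist, edges = np.histogram(offsets_sec, bins=n_bins,
                                     range=(0.0, song_len_sec))
          return hist / hist.sum(), edges

      # Toy example: uniform background queries plus a cluster around a
      # salient event at ~60 s into a 210 s song.
      rng = np.random.default_rng(2)
      offsets = np.concatenate([rng.uniform(0, 210, 500),
                                rng.normal(60, 5, 300)])
      dist, edges = query_distribution(offsets, song_len_sec=210)
      print(edges[np.argmax(dist)])  # left edge of the peak bin, near 60 s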

  • The impact of audiovisual biofeedback on 4D functional and anatomic imaging: Results of a lung cancer pilot study. Radiotherapy and Oncology. Yang, J., Yamamoto, T., Pollock, S., Berger, J., Diehn, M., Graves, E. E., Loo, B. W., Keall, P. J. 2016; 120 (2): 267-272

    Abstract

    The impact of audiovisual (AV) biofeedback on four-dimensional (4D) positron emission tomography (PET) and 4D computed tomography (CT) image quality was investigated in a prospective clinical trial (NCT01172041). 4D-PET and 4D-CT images of ten lung cancer patients were acquired with AV biofeedback (AV) and free breathing (FB). The 4D-PET images were analyzed for motion artifacts by comparing 4D to 3D PET for gross tumor volumes (GTVPET) and maximum standardized uptake values (SUVmax). The 4D-CT images were analyzed for artifacts by comparing normalized cross correlation-based scores (NCCS) and quantifying a visual assessment score (VAS). A Wilcoxon signed-ranks test was used for statistical testing. The impact of AV biofeedback varied widely. Overall, the 3D to 4D decrease of GTVPET was 1.2 ± 1.3 cm³ with AV and 0.6 ± 1.8 cm³ for FB. The 4D-PET increase of SUVmax was 1.3 ± 0.9 with AV and 1.3 ± 0.8 for FB. The 4D-CT NCCS were 0.65 ± 0.27 with AV and 0.60 ± 0.32 for FB (p = 0.08). The 4D-CT VAS was 0.0 ± 2.7. This study demonstrated a high patient dependence on the use of AV biofeedback to reduce motion artifacts in 4D imaging. None of the hypotheses tested were statistically significant. Future development of AV biofeedback will focus on optimizing the human-computer interface and including patient training sessions for improved comprehension and compliance.

    View details for DOI 10.1016/j.radonc.2016.05.016

    View details for PubMedID 27256597
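
    The paired AV vs. free-breathing comparisons above rely on a Wilcoxon signed-ranks test; below is a minimal sketch with hypothetical paired per-patient scores, not the study's data.

      from scipy.stats import wilcoxon

      # Hypothetical per-patient NCCS scores under the two conditions.
      nccs_av = [0.65, 0.70, 0.40, 0.90, 0.55, 0.80, 0.60, 0.75, 0.30, 0.95]
      nccs_fb = [0.60, 0.65, 0.35, 0.85, 0.50, 0.85, 0.55, 0.70, 0.25, 0.90]
      stat, p = wilcoxon(nccs_av, nccs_fb)
      print(f"W = {stat}, p = {p:.3f}")  # paired, non-parametric comparison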

  • In Search of a Perceptual Metric for Timbre: Dissimilarity Judgments among Synthetic Sounds with MFCC-Derived Spectral Envelopes. Journal of the Audio Engineering Society. Terasawa, H., Berger, J., Makino, S. 2012; 60 (9): 674-685
  • Commissioning and quality assurance for a respiratory training system based on audiovisual biofeedback. Journal of Applied Clinical Medical Physics. Cui, G., Gopalan, S., Yamamoto, T., Berger, J., Maxim, P. G., Keall, P. J. 2010; 11 (4): 3262

    Abstract

    A respiratory training system based on audiovisual biofeedback has been implemented at our institution. It is intended to improve patients' respiratory regularity during four-dimensional (4D) computed tomography (CT) image acquisition. The purpose is to help eliminate the artifacts in 4D-CT images caused by irregular breathing, as well as improve delivery efficiency during treatment, where respiratory irregularity is a concern. This article describes the commissioning and quality assurance (QA) procedures developed for this peripheral respiratory training system, the Stanford Respiratory Training (START) system. Using the Varian real-time position management system for the respiratory signal input, the START software was commissioned and able to acquire sample respiratory traces, create a patient-specific guiding waveform, and generate audiovisual signals for improving respiratory regularity. Routine QA tests that include hardware maintenance, visual guiding-waveform creation, auditory sounds synchronization, and feedback assessment, have been developed for the START system. The QA procedures developed here for the START system could be easily adapted to other respiratory training systems based on audiovisual biofeedback.

    View details for PubMedID 21081883

  • Analysis of Pitch Perception of Inharmonicity in Pipa Strings Using Response Surface Methodology. Journal of New Music Research. Chin, S. H., Berger, J. 2010; 39 (1): 63-73
  • Neural dynamics of event segmentation in music: Converging evidence for dissociable ventral and dorsal networks. Neuron. Sridharan, D., Levitin, D. J., Chafe, C. H., Berger, J., Menon, V. 2007; 55 (3): 521-532

    Abstract

    The real world presents our sensory systems with a continuous stream of undifferentiated information. Segmentation of this stream at event boundaries is necessary for object identification and feature extraction. Here, we investigate the neural dynamics of event segmentation in entire musical symphonies under natural listening conditions. We isolated time-dependent sequences of brain responses in a 10 s window surrounding transitions between movements of symphonic works. A strikingly right-lateralized network of brain regions showed peak response during the movement transitions when, paradoxically, there was no physical stimulus. Model-dependent and model-free analysis techniques provided converging evidence for activity in two distinct functional networks at the movement transition: a ventral fronto-temporal network associated with detecting salient events, followed in time by a dorsal fronto-parietal network associated with maintaining attention and updating working memory. Our study provides direct experimental evidence for dissociable and causally linked ventral and dorsal networks during event segmentation of ecologically valid auditory stimuli.

    View details for DOI 10.1016/j.neuron.2007.07.003

    View details for Web of Science ID 000248711000017

    View details for PubMedID 17678862
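
    A schematic illustration of the windowing step described above: extract a 10 s response window centered on each movement transition. The array shapes, sampling-rate handling, and variable names are assumptions for illustration, not the study's pipeline.

      import numpy as np

      def transition_windows(bold, fs, transition_times, half_width=5.0):
          """bold: (n_samples, n_regions) time series; returns an array of
          shape (n_events, window_samples, n_regions)."""
          w = int(2 * half_width * fs)
          windows = []
          for t0 in transition_times:
              start = int((t0 - half_width) * fs)
              if start >= 0 and start + w <= len(bold):
                  windows.append(bold[start:start + w])
          return np.stack(windows)

      # Toy usage: 600 s of 2-region data sampled at 0.5 Hz (TR = 2 s).
      fs = 0.5
      bold = np.random.default_rng(3).standard_normal((300, 2))
      print(transition_windows(bold, fs, [120.0, 300.0, 480.0]).shape)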

  • Melody extraction and musical onset detection via probabilistic models of framewise STFT peak data. IEEE Transactions on Audio, Speech, and Language Processing. Thornburg, H., Leistikow, R. J., Berger, J. 2007; 15 (4): 1257-1272
  • SICIB: An interactive music composition system using body movements. Computer Music Journal. Morales-Manzanares, R., Morales, E. F., Dannenberg, R., Berger, J. 2001; 25 (2): 25-36