Bio


Her research topics include neural oscillations in auditory perception, auditory-motor coupling, brain plasticity in development and aging, and recovery from stroke through music-supported therapy.

Her post-doctoral and research-associate work at the Rotman Research Institute in Toronto was supported by awards from the Canadian Institutes of Health Research. Her research continues to explore the biological nature of human musical ability by examining brain activity with non-invasive neurophysiological measures such as magnetoencephalography (MEG) and electroencephalography (EEG).

Academic Appointments


  • Associate Professor, Center for Computer Research in Music and Acoustics, Department of Music, Stanford University (2019 - Present)
  • Assistant Professor, Center for Computer Research in Music and Acoustics, Department of Music, Stanford University (2012 - 2019)
  • Scientific Associate, Rotman Research Institute (2010 - 2012)
  • Instructor, Developmental Neurophysiology, Department of Psychology, University of Toronto Mississauga (2008)
  • Visiting Academic Fellow, MEG Lab, Rotman Research Institute (2001 - 2003)
  • Instructor, National Institute for Physiological Sciences (2001)
  • Research Associate, Communication & Information Laboratory, Dai Nippon Printing Company (1993 - 2000)
  • Teaching Assistant, Department of Electrical Engineering, Waseda University (1992 - 1993)

Honors & Awards


  • Piano Performance Certificate, Grade 10, First Class with Distinction, Royal Conservatory of Music (2016)
  • Research Fellow supported by the Centre for Stroke Recovery at Baycrest, Rotman Research Institute at Baycrest (2008 - 2009)
  • Research Fellow, McMaster University (2007 - 2009)
  • One of the top 10 articles on early childhood development, Centre of Excellence for Early Childhood Development (CEECD) (2007)
  • One of the top 50 discoveries of 2006, Natural Sciences and Engineering Research Council of Canada (2006)
  • Post-doctoral Fellowship Award, Canadian Institutes of Health Research (2004 - 2007)
  • Post-doctoral Research Fellow, University of Toronto (2003 - 2004)
  • PhD Scholarship Award, The Japan Scholarship Foundation (2000 - 2003)
  • Finalist, PTNA Piano Competition, National Piano Teachers' Association of Japan (1980)

Boards, Advisory Committees, Professional Organizations


  • Reviewer, Psychomusicology (2012 - Present)
  • External Reviewer, Canadian Institutes of Health Research (CIHR) grant competitions (2012 - Present)
  • External Reviewer, Social Sciences and Humanities Research Council (SSHRC) grant competitions (2014 - Present)
  • External Reviewer, National Science Foundation (NSF) grant competitions (2013 - Present)
  • Member, Internal review committee for grant proposals, Stanford University (2010 - 2012)
  • Session chair, Society for Music Perception and Cognition meeting (2013)
  • Consultant for stroke rehabilitation research project and data-sharing agreement transaction, Stanford, Rotman Research Institute, Sunnybrook Hospital (2012 - 2013)
  • Reviewer, PLoS ONE (2012 - 2013)
  • Reviewer, Psychophysiology (2012 - 2013)
  • Reviewer, Frontiers in Human Neuroscience (2012 - 2013)
  • Reviewer, Frontiers in Systems Neuroscience (2012 - 2013)
  • Reviewer, Brain Research (2012 - 2013)
  • Reviewer, Cortex (2012 - 2013)
  • Reviewer, Psychomusicology: Music, Mind, and Brain (2012 - 2013)
  • Reviewer, Journal of Neuroscience (HighWire Press, Society for Neuroscience)
  • Reviewer, Cerebral Cortex (Oxford Journals)
  • Reviewer, PLoS ONE (The Public Library of Science)
  • Reviewer, Journal of Cognitive Neuroscience (MIT Press)
  • Reviewer, European Journal of Neuroscience (Springer)
  • Reviewer, The Journal of the Acoustical Society of America (Acoustical Society of America)
  • Reviewer, Progress in Neurobiology
  • Reviewer, Neuroscience
  • Reviewer, Neuroscience Letters
  • Reviewer, Clinical Neurophysiology
  • Reviewer, Neuroscience Research
  • Reviewer, Brain and Cognition (Elsevier)
  • Reviewer, Ear and Hearing (Lippincott Williams & Wilkins)
  • Reviewer, Music Perception (University of California Press)
  • Reviewer, Psychology of Music
  • Reviewer, Review of Educational Research (Sage)
  • Reviewer, BMC Neuroscience (Biomed Central)
  • Member, Society for Music Perception and Cognition (2013 - Present)
  • Member, Society for Neuroscience
  • Member, Cognitive Neuroscience Society

Professional Education


  • Ph.D., Physiology, Department of Physiological Science, School of Life Science, Graduate University for Advanced Studies; Department of Integrative Physiology, National Institute for Physiological Sciences (2003)
  • M.Sc., Information System Engineering, Department of Electrical Engineering, Graduate School of Science and Engineering, Waseda University (1993)
  • B.Eng., System Engineering, Department of Electrical Engineering, School of Science and Engineering, Waseda University (1990)

Patents


  • Takako Fujioka. "United States Patent 2000-003360 Document analysis systems", Dai Nippon Printing Company
  • Takako Fujioka. "United States Patent 2000-003361 Document analysis systems", Dai Nippon Printing Company
  • Takako Fujioka. "United States Patent 2000-003362 Document analysis systems", Dai Nippon Printing Company
  • Takako Fujioka. "United States Patent 2000-259670 Document analysis systems", Dai Nippon Printing Company
  • Takako Fujioka. "United States Patent 2000-259671 Information formation system, information retrieval system, and record medium", Dai Nippon Printing Company

All Publications


  • Replicability of neural responses to speech accent is driven by study design and analytical parameters. Scientific Reports Strauber, C. B., Ali, L. R., Fujioka, T., Thille, C., McCandliss, B. D. 2021; 11 (1): 4777

    Abstract

    Recent studies have reported evidence that listeners' brains process meaning differently in speech with an in-group as compared to an out-group accent. However, among studies that have used electroencephalography (EEG) to examine neural correlates of semantic processing of speech in different accents, the details of findings are often in conflict, potentially reflecting critical variations in experimental design and/or data analysis parameters. To determine which of these factors might be driving inconsistencies in results across studies, we systematically investigate how analysis parameter sets from several of these studies impact results obtained from our own EEG data set. Data were collected from forty-nine monolingual North American English listeners in an event-related potential (ERP) paradigm as they listened to semantically congruent and incongruent sentences spoken in an American accent and an Indian accent. Several key effects of in-group as compared to out-group accent were robust across the range of parameters found in the literature, including more negative scalp-wide responses to incongruence in the N400 range, more positive posterior responses to congruence in the N400 range, and more positive posterior responses to incongruence in the P600 range. These findings, however, are not fully consistent with the reported observations of the studies whose parameters we used, indicating variation in experimental design may be at play. Other reported effects only emerged under a subset of the analytical parameters tested, suggesting that analytical parameters also drive differences. We hope this spurs discussion of analytical parameters and investigation of the contributions of individual study design variables in this growing field.

    View details for DOI 10.1038/s41598-021-82782-4

    View details for PubMedID 33637784
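
    The "analytical parameters" at issue here are concrete choices such as the measurement window and the electrode cluster. As a minimal sketch (synthetic data; the 300-500 ms window and the centro-parietal cluster are illustrative assumptions, not the study's actual pipeline), an N400 effect is commonly quantified as the mean amplitude difference between incongruent and congruent trials:

        import numpy as np

        fs = 250                                    # sampling rate (Hz); assumption
        t = np.arange(-0.2, 1.0, 1 / fs)            # epoch time axis (s)
        rng = np.random.default_rng(0)
        # stand-in ERP epochs: trials x channels x time
        congruent = rng.standard_normal((100, 32, t.size))
        incongruent = rng.standard_normal((100, 32, t.size))

        # two parameters that vary across studies: time window and channel cluster
        window = (t >= 0.300) & (t <= 0.500)        # hypothetical N400 window
        cluster = [10, 11, 12]                      # hypothetical centro-parietal channels

        n400 = (incongruent[:, cluster][:, :, window].mean()
                - congruent[:, cluster][:, :, window].mean())
        print(f"N400 effect (incongruent - congruent mean amplitude): {n400:.3f}")

    Re-running such a computation over a grid of windows and clusters is one way to probe how robust a reported effect is to these choices.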

  • Temporal Coordination in Piano Duet Networked Music Performance (NMP): Interactions Between Acoustic Transmission Latency and Musical Role Asymmetries. Frontiers in Psychology Washburn, A., Wright, M. J., Chafe, C., Fujioka, T. 2021; 12: 707090

    Abstract

    Today's audio, visual, and internet technologies allow people to interact despite physical distances, for casual conversation, group workouts, or musical performance. Musical ensemble performance is unique because interaction integrity critically depends on the timing between each performer's actions and when their acoustic outcomes arrive. Acoustic transmission latency (ATL) between players is substantially longer for networked music performance (NMP) compared to traditional in-person spaces where musicians can easily adapt. Previous work has shown that longer ATLs slow the average tempo in ensemble performance, and that asymmetric co-actor roles and empathy-related traits affect coordination patterns in joint action. Thus, we are interested in how musicians collectively adapt to a given latency and how such adaptation patterns vary with their task-related and person-related asymmetries. Here, we examined how two pianists performed duets while hearing each other's auditory outcomes with an ATL of 10, 20, or 40 ms. To test the hypotheses regarding task-related asymmetries, we designed duets such that pianists had: (1) a starting or joining role and (2) a similar or dissimilar musical part compared to their co-performer, with respect to pitch range and melodic contour. Results replicated previous clapping-duet findings showing that longer ATLs are associated with greater temporal asynchrony between partners and increased average tempo slowing. While co-performer asynchronies were not affected by performer role or part similarity, at the longer ATLs starting performers displayed slower tempos and smaller tempo variability than joining performers. This asymmetry of stability vs. flexibility between starters and joiners may sustain coordination, consistent with recent joint action findings. Our data also suggest that relative independence in musical parts may mitigate ATL-related challenges. Additionally, there may be a relationship between co-performer differences in empathy-related personality traits such as locus of control and coordination during performance under the influence of ATL. Incorporating the emergent coordinative dynamics between performers could help further innovation of music technologies and composition techniques for NMP.

    View details for DOI 10.3389/fpsyg.2021.707090

    View details for PubMedID 34630213

  • Expressing melodic grouping discontinuities: Evidence from violinists' rubato and motion. Musicae Scientiae Huberth, M., Davis, S., Fujioka, T. 2020; 24 (4): 494–514
  • Predictability of higher-order temporal structure of musical stimuli is associated with auditory evoked response. International Journal of Psychophysiology Dauer, T., Nerness, B., Fujioka, T. 2020

    Abstract

    Sound predictability resulting from repetitive patterns can be implicitly learned and often neither requires nor captures our conscious attention. Recently, predictive coding theory has been used as a framework to explain how predictable or expected stimuli evoke and gradually attenuate obligatory neural responses over time compared to those elicited by unpredictable events. However, these results were obtained using the repetition of simple auditory objects such as pairs of tones or phonemes. Here we examined whether the same principle would hold for more abstract temporal structures of sounds. If this is the case, we hypothesized that a regular repetition schedule of a set of musical patterns would reduce neural processing over the course of listening compared to stimuli with an irregular repetition schedule (and the same set of musical patterns). Electroencephalography (EEG) was recorded while participants passively listened to 6-8 min sequences in which five different four-tone patterns with temporally regular or irregular repetition were presented successively in a randomized order. N1 amplitudes in response to the first tone of each musical pattern were significantly less negative at the end of the regular sequence compared to the beginning, while such reduction was absent in the irregular sequence. These results extend previous findings by showing that N1 reflects automatic learning of the predictable higher-order structure of sound sequences, while continuous engagement of preattentive auditory processing is necessary for the unpredictable structure.

    View details for DOI 10.1016/j.ijpsycho.2020.04.002

    View details for PubMedID 32325078

  • Induced Beta Power Modulations during Isochronous Auditory Beats Reflect Intentional Anticipation before Gradual Tempo Changes. Scientific Reports Graber, E., Fujioka, T. 2020; 10 (1): 4207

    Abstract

    Induced beta-band power modulations in auditory and motor-related brain areas have been associated with automatic temporal processing of isochronous beats and explicit, temporally-oriented attention. Here, we investigated how explicit top-down anticipation before upcoming tempo changes, a sustained process commonly required during music performance, changed beta power modulations during listening to isochronous beats. Musicians' electroencephalograms were recorded during the task of anticipating accelerating, decelerating, or steady beats after direction-specific visual cues. In separate behavioural testing for tempo-change onset detection, such cues were found to facilitate faster responses, thus effectively inducing high-level anticipation. In the electroencephalograms, periodic beta power reductions in a frontocentral topographic component with seed-based source contributions from auditory and sensorimotor cortices were apparent after isochronous beats with anticipation in all conditions, generally replicating patterns found previously during passive listening to isochronous beats. With anticipation before accelerations, the magnitude of the power reduction was significantly weaker than in the steady condition. Between the accelerating and decelerating conditions, no differences were found, suggesting that the observed beta patterns may represent an aspect of high-level anticipation common before both tempo changes, like increased attention. Overall, these results indicate that top-down anticipation influences ongoing auditory beat processing in beta-band networks.

    View details for DOI 10.1038/s41598-020-61044-9

    View details for PubMedID 32144306
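
    Several of the studies listed here rely on "induced" (non-phase-locked) power, conventionally isolated by subtracting the trial-averaged evoked response from every trial before time-frequency analysis. A minimal sketch with synthetic data (sampling rate and filter settings are placeholders, not the authors' pipeline):

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 500                                          # sampling rate (Hz); assumption
        n_trials, n_samples = 60, 2 * fs                  # 60 epochs of 2 s
        rng = np.random.default_rng(0)
        eeg = rng.standard_normal((n_trials, n_samples))  # stand-in for epoched EEG

        evoked = eeg.mean(axis=0)                         # phase-locked (evoked) part
        induced = eeg - evoked                            # remove it from every trial

        b, a = butter(4, [13, 30], btype="bandpass", fs=fs)
        beta = filtfilt(b, a, induced, axis=1)            # beta-band single trials
        power = np.abs(hilbert(beta, axis=1)) ** 2        # instantaneous beta power
        beta_power = power.mean(axis=0)                   # trial-averaged induced power
        print(beta_power.shape)                           # one induced-power time course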

  • Central auditory processing in adults with chronic stroke without hearing loss: a magnetoencephalography study. Clinical Neurophysiology Fujioka, T., Freigang, C., Honjo, K., Chen, J. J., Chen, J. L., Black, S. E., Stuss, D. T., Dawson, D. R., Ross, B. 2020
  • Performance monitoring of self and other in a turn-taking piano duet: A dual-EEG study. Social Neuroscience Huberth, M., Dauer, T., Nanou, C., Roman, I., Gang, N., Reid, W., Wright, M., Fujioka, T. 2019; 14 (4): 449–61
  • Auditory rhyme processing in expert freestyle rap lyricists and novices: An ERP study. Neuropsychologia Cross, K., Fujioka, T. 2019; 129: 223–35
  • Musical Role Asymmetries in Piano Duet Performance Influence Alpha-Band Neural Oscillation and Behavioral Synchronization. Frontiers in Neuroscience Washburn, A., Roman, I., Huberth, M., Gang, N., Dauer, T., Reid, W., Nanou, C., Wright, M., Fujioka, T. 2019; 13: 1088

    Abstract

    Recent work in interpersonal coordination has revealed that neural oscillations, occurring spontaneously in the human brain, are modulated during the sensory, motor, and cognitive processes involved in interpersonal interactions. In particular, alpha-band (8-12 Hz) activity, linked to attention in general, is related to coordination dynamics and empathy traits. Researchers have also identified an association between each individual's attentiveness to their co-actor and the relative similarity in the co-actors' roles, influencing their behavioral synchronization patterns. We employed music ensemble performance to evaluate patterns of behavioral and neural activity when roles between co-performers are systematically varied with complete counterbalancing. Specifically, we designed a piano duet task, with three types of co-actor dissimilarity, or asymmetry: (1) musical role (starting vs. joining), (2) musical task similarity (similar vs. dissimilar melodic parts), and (3) performer animacy (human-to-human vs. human-to-non-adaptive computer). We examined how the experience of these asymmetries in four initial musical phrases, alternatingly played by the co-performers, influenced the pianists' performance of a subsequent unison phrase. Electroencephalography was recorded simultaneously from both performers while playing keyboards. We evaluated note-onset timing and alpha modulation around the unison phrase. We also investigated whether each individual's self-reported empathy was related to behavioral and neural activity. Our findings revealed closer behavioral synchronization when pianists played with a human vs. computer partner, likely because the computer was non-adaptive. When performers played with a human partner, or a joining performer played with a computer partner, having a similar vs. dissimilar musical part did not have a significant effect on their alpha modulation immediately prior to unison. However, when starting performers played with a computer partner with a dissimilar vs. similar part there was significantly greater alpha synchronization. In other words, starting players attended less to the computer partner playing a similar accompaniment, operating in a solo-like mode. Moreover, this alpha difference based on melodic similarity was related to a difference in note-onset adaptivity, which was in turn correlated with performer trait empathy. Collectively our results extend previous findings by showing that musical ensemble performance gives rise to a socialized context whose lasting effects encompass attentiveness, perceptual-motor coordination, and empathy.

    View details for DOI 10.3389/fnins.2019.01088

    View details for PubMedID 31680824

  • Effects of extramusical information and human presence on perceived emotion intensity in electronic music Psychomusicology: Music, Mind, and Brain Grace, V., Huberth, M., Fujioka, T. 2019; 29 (2-3): 117–127

    View details for DOI 10.1037/pmu0000223

  • Endogenous Expectations for Sequence Continuation after Auditory Beat Accelerations and Decelerations Revealed by P3a and Induced Beta-Band Responses. Neuroscience Graber, E., Fujioka, T. 2019

    Abstract

    People commonly synchronize taps to rhythmic sounds and can continue tapping after the sounds stop, indicating that time intervals between sounds can be internalized. Here, we investigate what happens in the brain after simply listening to auditory beats in order to understand more about the automatic internalization of temporal intervals without tapping. Electroencephalograms were recorded while musicians attended to accelerating, decelerating, or steady click sequences. Evoked responses and induced beta power modulations (13-30 Hz) were examined for one beat following the last physical beat of each sequence (termed the silent beat) and compared to responses obtained during physical beats near the sequence endings. In response to the silent beat, P3a was observed with the largest amplitude occurring after accelerations and the smallest after decelerations. Late beta power modulations were also found after the silent beat, and the magnitude of the beta-power suppressions were significantly correlated with the concurrent P3a amplitudes. In contrast, physical beats elicited P2 response and early beta suppression, likely reflecting a combination of stimulus-related processing and temporal prediction. These results suggest that the activities observed after the silent beat were not produced via sustained entrainment after the physical beats, but via automatically-formed expectation for an additional beat. Therefore, beta modulations may be generated endogenously by expectation violation, while P3a amplitudes may relate to strength of expectation, with acceleration endings causing the strongest expectations for sequence continuation.

    View details for DOI 10.1016/j.neuroscience.2019.06.010

    View details for PubMedID 31220540

  • Delayed feedback embedded in perception-action coordination cycles results in anticipation behavior during synchronized rhythmic action: A dynamical systems approach. PLoS Computational Biology Roman, I. R., Washburn, A., Large, E. W., Chafe, C., Fujioka, T. 2019; 15 (10): e1007371

    Abstract

    Dancing and playing music require people to coordinate actions with auditory rhythms. In laboratory perception-action coordination tasks, people are asked to synchronize taps with a metronome. When doing so, people tend to anticipate stimulus onsets, tapping slightly before the stimulus. The anticipation tendency increases with longer stimulus periods of up to 3500 ms, but is less pronounced in trained individuals like musicians compared to non-musicians. Furthermore, external factors influence the timing of tapping. These factors include the presence of auditory feedback from one's own taps, the presence of a partner performing coordinated joint tapping, and transmission latencies (TLs) between coordinating partners. Phenomena like the anticipation tendency can be explained by delay-coupled systems, which may be inherent to the sensorimotor system during perception-action coordination. Here we tested whether a dynamical systems model based on this hypothesis reproduces observed patterns of human synchronization. We simulated behavior with a model consisting of an oscillator receiving its own delayed activity as input. Three simulation experiments were conducted using previously-published behavioral data from 1) simple tapping, 2) two-person alternating beat-tapping, and 3) two-person alternating rhythm-clapping in the presence of a range of constant auditory TLs. In Experiment 1, our model replicated the larger anticipation observed for longer stimulus intervals and adjusting the amplitude of the delayed feedback reproduced the difference between musicians and non-musicians. In Experiment 2, by connecting two models we replicated the smaller anticipation observed in human joint tapping with bi-directional auditory feedback compared to joint tapping without feedback. In Experiment 3, we varied TLs between two models alternately receiving signals from one another. Results showed reciprocal lags at points of alternation, consistent with behavioral patterns. Overall, our model explains various anticipatory behaviors, and has potential to inform theories of adaptive human synchronization.

    View details for DOI 10.1371/journal.pcbi.1007371

    View details for PubMedID 31671096
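
    The core idea of the model, that delayed feedback inside the perception-action loop yields anticipation, can be illustrated with a classic anticipating-synchronization toy: a phase oscillator coupled to a stimulus through its own delayed state converges to leading the stimulus by the feedback delay. This is a minimal sketch under arbitrary parameters, not the published model:

        import numpy as np

        dt = 0.001                  # Euler step (s)
        period = 0.6                # stimulus inter-onset interval (s)
        omega = 2 * np.pi / period  # natural frequency (rad/s)
        K = 8.0                     # coupling gain (arbitrary; stable while K*tau < pi/2)
        tau = 0.05                  # feedback delay (s); predicts ~50 ms anticipation
        steps = int(30 / dt)
        d = int(tau / dt)

        x = omega * dt * np.arange(steps)  # master: ideal isochronous stimulus phase
        y = np.zeros(steps)                # slave: delay-coupled "tapper"
        for n in range(1, steps):
            y_delayed = y[n - 1 - d] if n - 1 - d >= 0 else 0.0
            y[n] = y[n - 1] + dt * (omega + K * np.sin(x[n - 1] - y_delayed))

        # At steady state y(t) ~ x(t + tau): the oscillator leads (anticipates) by tau.
        lead = np.angle(np.exp(1j * (y - x)))[steps // 2:].mean() / omega
        print(f"mean anticipation: {lead * 1000:.0f} ms (expected ~ {tau * 1000:.0f} ms)")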

  • Effects of Visual Predictive Information and Sequential Context on Neural Processing of Musical Syntax. Frontiers in Psychology Shin, H., Fujioka, T. 2018; 9: 2528

    Abstract

    The early right anterior negativity (ERAN) in event-related potentials (ERPs) is typically elicited by syntactically unexpected events in Western tonal music. We examined how visual predictive information influences syntactic processing, how musical or non-musical cues have different effects, and how they interact with sequential effects between trials, which could modulate with the strength of the sense of established tonality. The EEG was recorded from musicians who listened to chord sequences paired with one of four types of visual stimuli; two provided predictive information about the syntactic validity of the last chord through either musical notation of the whole sequence, or the word "regular" or "irregular," while the other two, empty musical staves or a blank screen, provided no information. Half of the sequences ended with the syntactically invalid Neapolitan sixth chord, while the other half ended with the Tonic chord. Clear ERAN was observed in frontocentral electrodes in all conditions. A principal component analysis (PCA) was performed on the grand average response in the audio-only condition, to separate spatio-temporal dynamics of different scalp areas as principal components (PCs) and use them to extract auditory-related neural activities in the other visual-cue conditions. The first principal component (PC1) showed a symmetrical frontocentral topography, while the second (PC2) showed a right-lateralized frontal concentration. A source analysis confirmed the relative contribution of temporal sources to the former and a right frontal source to the latter. Cue predictability affected only the ERAN projected onto PC1, especially when the previous trial ended with the Tonic chord. The ERAN in PC2 was reduced in the trials following Neapolitan endings in general. However, the extent of this reduction differed between cue-styles, whereby it was nearly absent when musical notation was used, regardless of whether the staves were filled with notes or empty. The results suggest that the right frontal areas carry out the primary role in musical syntactic analysis and integration of the ongoing context, which produce schematic expectations that, together with the veridical expectation incorporated by the temporal areas, inform musical syntactic processing in musicians.

    View details for DOI 10.3389/fpsyg.2018.02528

    View details for PubMedID 30618951

    View details for PubMedCentralID PMC6300505
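
    The projection step described in the abstract, deriving components from one condition's grand average and reusing them to extract activity in other conditions, can be sketched as a spatial PCA via the SVD (synthetic stand-in data; the study's exact decomposition settings are not reproduced here):

        import numpy as np

        rng = np.random.default_rng(1)
        ga = rng.standard_normal((64, 400))          # grand-average ERP: channels x time
        ga_centered = ga - ga.mean(axis=1, keepdims=True)

        U, S, Vt = np.linalg.svd(ga_centered, full_matrices=False)
        pc1_topography = U[:, 0]                     # spatial weights of PC1

        other_condition = rng.standard_normal((64, 400))
        pc1_timecourse = pc1_topography @ other_condition  # PC1 activity in new condition
        print(pc1_timecourse.shape)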

  • Variability in stroke motor outcome is explained by structural and functional integrity of the motor system. Scientific Reports Lam, T. K., Binns, M. A., Honjo, K., Dawson, D. R., Ross, B., Stuss, D. T., Black, S. E., Chen, J., Fujioka, T., Chen, J. L. 2018; 8: 9480

    Abstract

    Biomarkers that represent the structural and functional integrity of the motor system enable us to better assess motor outcome post-stroke. The degree of overlap between the stroke lesion and corticospinal tract (CST Injury) is a measure of the structural integrity of the motor system, whereas the left-to-right motor cortex resting state connectivity (LM1-RM1 rs-connectivity) is a measure of its functional integrity. CST Injury and LM1-RM1 rs-connectivity each individually correlate with motor outcome post-stroke, but less is understood about the relationship between these biomarkers. Thus, this study investigates the relationship between CST Injury and LM1-RM1 rs-connectivity, individually and together, with motor outcome. Twenty-seven participants with upper limb motor deficits post-stroke completed motor assessments and underwent MRI at one time point. CST Injury and LM1-RM1 rs-connectivity were derived from T1-weighted and resting state functional MRI scans, respectively. We performed hierarchical multiple regression analyses to determine the contribution of each biomarker in explaining motor outcome. The interaction between CST Injury and LM1-RM1 rs-connectivity does not significantly contribute to the variability in motor outcome. However, inclusion of both CST Injury and LM1-RM1 rs-connectivity explains more variability in motor outcome, than either alone. We suggest both biomarkers provide distinct information about an individual's motor outcome.

    View details for PubMedID 29930399

  • The effects of music-supported therapy on motor, cognitive, and psychosocial functions in chronic stroke. Annals of the New York Academy of Sciences Fujioka, T., Dawson, D. R., Wright, R., Honjo, K., Chen, J. L., Chen, J. J., Black, S. E., Stuss, D. T., Ross, B. 2018

    Abstract

    Neuroplasticity accompanying learning is a key mediator of stroke rehabilitation. Training in playing music in healthy populations and patients with movement disorders requires resources within motor, sensory, cognitive, and affective systems, and coordination among these systems. We investigated effects of music-supported therapy (MST) in chronic stroke on motor, cognitive, and psychosocial functions compared to conventional physical training (GRASP). Twenty-eight adults with unilateral arm and hand impairment were randomly assigned to MST (n=14) and GRASP (n=14) and received 30 h of training over a 10-week period. The assessment was conducted at four time points: before intervention, after 5 weeks, after 10 weeks, and 3 months after training completion. As for two of our three primary outcome measures concerning motor function, all patients slightly improved in Chedoke-McMaster Stroke Assessment hand score, while the time to complete the Action Research Arm Test became shorter in the MST group. The third primary outcome measure for well-being, the Stroke Impact Scale, was improved for emotion and social communication earlier in MST and coincided with the improved executive function for task switching and music rhythm perception. The results confirmed previous findings and expanded the potential usage of MST for enhancing quality of life in community-dwelling chronic-stage survivors.

    View details for PubMedID 29797585

  • Performers' motions reflect the intention to express short or long melodic groupings. Music Perception Huberth, M., Fujioka, T. 2018; 35 (4): 437–53
  • Neural coupling between contralesional motor and frontoparietal networks correlates with motor ability in individuals with chronic stroke. Journal of the Neurological Sciences Lam, T. K., Dawson, D. R., Honjo, K., Ross, B., Binns, M. A., Stuss, D. T., Black, S. E., Chen, J. J., Levine, B. T., Fujioka, T., Chen, J. L. 2018; 384: 21–29

    Abstract

    Movement is traditionally viewed as a process that involves motor brain regions. However, movement also implicates non-motor regions such as prefrontal and parietal cortex, regions whose integrity may thus be important for motor recovery after stroke. Importantly, focal brain damage can affect neural functioning within and between distinct brain networks implicated in the damage. The aim of this study is to investigate how resting state connectivity (rs-connectivity) within and between motor and frontoparietal networks are affected post-stroke in correlation with motor outcome. Twenty-seven participants with chronic stroke with unilateral upper limb deficits underwent motor assessments and magnetic resonance imaging. Participants completed the Chedoke-McMaster Stroke Assessment as a measure of arm (CMSA-Arm) and hand (CMSA-Hand) impairment and the Action Research Arm Test (ARAT) as a measure of motor function. We used a seed-based rs-connectivity approach defining the motor (seed=contralesional primary motor cortex (M1)) and frontoparietal (seed=contralesional dorsolateral prefrontal cortex (DLPFC)) networks. We analyzed the rs-connectivity within each network (intra-network connectivity) and between both networks (inter-network connectivity), and performed correlations between: a) intra-network connectivity and motor assessment scores; b) inter-network connectivity and motor assessment scores. We found: a) Participants with high rs-connectivity within the motor network (between M1 and supplementary motor area) have higher CMSA-Hand stage (z=3.62, p=0.003) and higher ARAT score (z=3.41, p=0.02). Rs-connectivity within the motor network was not significantly correlated with CMSA-Arm stage (z=1.83, p>0.05); b) Participants with high rs-connectivity within the frontoparietal network (between DLPFC and mid-ventrolateral prefrontal cortex) have higher CMSA-Hand stage (z=3.64, p=0.01). Rs-connectivity within the frontoparietal network was not significantly correlated with CMSA-Arm stage (z=0.93, p=0.03) or ARAT score (z=2.53, p=0.05); and c) Participants with high rs-connectivity between motor and frontoparietal networks have higher CMSA-Hand stage (rs=0.54, p=0.01) and higher ARAT score (rs=0.54, p=0.009). Rs-connectivity between the motor and frontoparietal networks was not significantly correlated with CMSA-Arm stage (rs=0.34, p=0.13). Taken together, the connectivity within and between the motor and frontoparietal networks correlate with motor outcome post-stroke. The integrity of these regions may be important for an individual's motor outcome. Motor-frontoparietal connectivity may be a potential biomarker of motor recovery post-stroke.

    View details for PubMedID 29249372

  • Beta-band oscillations during passive listening to metronome sounds reflect improved timing representation after short-term musical training in healthy older adults. European Journal of Neuroscience Fujioka, T., Ross, B. 2017; 46 (8): 2339-2354

    Abstract

    Sub-second time intervals in musical rhythms provide predictive cues about future events to performers and listeners through an internalized representation of timing. While the acuity of automatic, sub-second timing as well as cognitively controlled, supra-second timing declines with ageing, musical experts are less affected. This study investigated the influence of piano training on temporal processing abilities in older adults using behavioural and neuronal correlates. We hypothesized that neuroplastic changes in beta networks, caused by training in sensorimotor coordination with timing processing, can be assessed even in the absence of movement. Behavioural performance of internal timing stability was assessed with synchronization-continuation finger-tapping paradigms. Magnetoencephalography (MEG) was recorded from older adults before and after one month of one-on-one training. For neural measures of automatic timing processing, we focused on beta oscillations (13-30 Hz) during passive listening to metronome beats. Periodic beta-band modulations in older adults before training were similar to previous findings in young listeners at a beat interval of 800 ms. After training, behavioural performance for continuation tapping was improved and accompanied by an increased range of beat-induced beta modulation, compared to participants who did not receive training. Beta changes were observed in the caudate, auditory, sensorimotor and premotor cortices, parietal lobe, cerebellum and medial prefrontal cortex, suggesting that increased resources are involved in timing processing and goal-oriented monitoring as well as reward-based sensorimotor learning.

    View details for DOI 10.1111/ejn.13693

    View details for PubMedID 28887898

  • Sound-making actions lead to immediate plastic changes of neuromagnetic evoked responses and induced beta-band oscillations during perception. Journal of Neuroscience Ross, B., Barat, M., Fujioka, T. 2017

    Abstract

    Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press. We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning. SIGNIFICANCE STATEMENT: While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association modifies subsequent perception. Our study demonstrated the immediate effects of sound-making experience on perception using magnetoencephalographic recordings, as reflected in the increased auditory evoked P2 wave, increased responsiveness of β oscillations, and enhanced connectivity between auditory and sensorimotor cortices. The importance of motor learning was underscored as the changes were much smaller in a control group using a key press to generate the sounds instead of learning to play the musical instrument. The results support the rapid integration of a feedforward model during perception and provide a neurophysiological basis for the application of music making in motor rehabilitation training.

    View details for DOI 10.1523/JNEUROSCI.3613-16.2017

    View details for PubMedID 28539421

  • Neural representation of a melodic motif: Effects of polyphonic contexts. Brain and Cognition Huberth, M., Fujioka, T. 2017; 111: 144-155

    Abstract

    In music, a melodic motif is often played repeatedly in different pitch ranges and at different times. Event-related potential (ERP) studies have shown that the mismatch negativity (MMN) reflects memory trace processing that encodes two separate melodic lines ("voices") with different motifs. Here we investigated whether a single motif presented in two voices is encoded as a single entity or two separate entities, and whether motifs overlapping in time impede or enhance encoding strength. Electroencephalogram (EEG) from 11 musically-trained participants was recorded while they passively listened to sequences of 5-note motifs where the 5th note either descended (standard) or ascended (deviant) relative to the previous note (20% deviant rate). Motifs were presented either in one pitch range, or alternated between two pitch ranges, creating an "upper" and a "lower" voice. Further, motifs were either temporally isolated (silence in between), or temporally concurrent with two tones overlapping. When motifs were temporally isolated, MMN amplitude in the one-pitch-range condition was similar to that in the two-pitch-range upper voice. In contrast, no MMN, but P3a, was observed in the two-pitch-range lower voice. When motifs were temporally concurrent and presented in two pitch ranges, MMN exhibited a more posterior distribution in the upper voice, but again, was absent in the lower voice. These results suggest that motifs presented in two separate voices are not encoded entirely independently, but hierarchically, causing asymmetry between the upper and lower voice encoding even when no simultaneous pitches are presented.

    View details for DOI 10.1016/j.bandc.2016.11.003

    View details for Web of Science ID 000392366300016

    View details for PubMedID 27940303

  • 40-Hz oscillations underlying perceptual binding in young and older adults. Psychophysiology Ross, B., Fujioka, T. 2016; 53 (7): 974-990

    Abstract

    Auditory object perception requires binding of elementary features of complex stimuli. Synchronization of high-frequency oscillation in neural networks has been proposed as an effective alternative to binding via hard-wired connections because binding in an oscillatory network can be dynamically adjusted to the ever-changing sensory environment. Previously, we demonstrated in young adults that gamma oscillations are critical for sensory integration and found that they were affected by concurrent noise. Here, we aimed to support the hypothesis that stimulus evoked auditory 40-Hz responses are a component of thalamocortical gamma oscillations and examined whether this oscillatory system may become less effective in aging. In young and older adults, we recorded neuromagnetic 40-Hz oscillations, elicited by monaural amplitude-modulated sound. Comparing responses in quiet and under contralateral masking with multitalker babble noise revealed two functionally distinct components of auditory 40-Hz responses. The first component followed changes in the auditory input with high fidelity and was of similar amplitude in young and older adults. The second, significantly smaller in older adults, showed a 200-ms interval of amplitude and phase rebound and was strongly attenuated by contralateral noise. The amplitude of the second component was correlated with behavioral speech-in-noise performance. Concurrent noise also reduced the P2 wave of auditory evoked responses at 200-ms latency, but not the earlier N1 wave. P2 modulation was reduced in older adults. The results support the model of sensory binding through thalamocortical gamma oscillations. Limitation of neural resources for this process in older adults may contribute to their speech-in-noise understanding deficits.

    View details for DOI 10.1111/psyp.12654

    View details for PubMedID 27080577

  • Beta-Band Oscillations Represent Auditory Beat and Its Metrical Hierarchy in Perception and Imagery. The Journal of Neuroscience Fujioka, T., Ross, B., Trainor, L. J. 2015; 35 (45): 15187-98

    Abstract

    Dancing to music involves synchronized movements, which can be at the basic beat level or higher hierarchical metrical levels, as in a march (groups of two basic beats, one-two-one-two …) or waltz (groups of three basic beats, one-two-three-one-two-three …). Our previous human magnetoencephalography studies revealed that the subjective sense of meter influences auditory evoked responses phase locked to the stimulus. Moreover, the timing of metronome clicks was represented in periodic modulation of induced (non-phase locked) β-band (13-30 Hz) oscillation in bilateral auditory and sensorimotor cortices. Here, we further examine whether acoustically accented and subjectively imagined metric processing in march and waltz contexts during listening to isochronous beats were reflected in neuromagnetic β-band activity recorded from young adult musicians. First, we replicated previous findings of beat-related β-power decrease at 200 ms after the beat followed by a predictive increase toward the onset of the next beat. Second, we showed that the β decrease was significantly influenced by the metrical structure, as reflected by differences across beat type for both perception and imagery conditions. Specifically, the β-power decrease associated with imagined downbeats (the count "one") was larger than that for both the upbeat (preceding the count "one") in the march, and for the middle beat in the waltz. Moreover, beamformer source analysis for the whole brain volume revealed that the metric contrasts involved auditory and sensorimotor cortices; frontal, parietal, and inferior temporal lobes; and cerebellum. We suggest that the observed β-band activities reflect a translation of timing information to auditory-motor coordination. With magnetoencephalography, we examined β-band oscillatory activities around 20 Hz while participants listened to metronome beats and imagined musical meters such as a march and waltz. We demonstrated that β-band event-related desynchronization in the auditory cortex differentiates between beat positions, specifically between downbeats and the following beat. This is the first demonstration of β-band oscillations related to hierarchical and internalized timing information. Moreover, the meter representation in the β oscillations was widespread across the brain, including sensorimotor and premotor cortices, parietal lobe, and cerebellum. The results extend current understanding of the role of β oscillations in neural processing of predictive timing.

    View details for DOI 10.1523/JNEUROSCI.2397-15.2015

    View details for PubMedID 26558788

  • Neural correlates of intentional switching from ternary to binary meter in a musical hemiola pattern. Frontiers in Psychology Fujioka, T., Fidali, B. C., Ross, B. 2014; 5: 1257

    Abstract

    Musical rhythms are often perceived and interpreted within a metrical framework that integrates timing information hierarchically based on interval ratios. Endogenous timing processes facilitate this metrical integration and allow us to use the sensory context to predict when an expected sensory event will happen ("predictive timing"). Previously, we showed that listening to metronomes and subjectively imagining the two different meters of march and waltz modulated the resulting auditory evoked responses in the temporal lobe and motor-related brain areas such as the motor cortex, basal ganglia, and cerebellum. Here we further explored the intentional transitions between the two metrical contexts, known as the hemiola in Western classical music, dating back to the sixteenth century. We examined MEG from 12 musicians while they repeatedly listened to a sequence of 12 unaccented clicks with an interval of 390 ms, and tapped to them with the right hand according to a 3 + 3 + 2 + 2 + 2 hemiola accent pattern. While participants listened to the same metronome sequence and imagined the accents, their pattern of brain responses significantly changed just before the "pivot" point of metric transition from ternary to binary meter. Until 100 ms before the pivot point, brain activities were more similar to those in the simple ternary meter than those in the simple binary meter, but the pattern was reversed afterwards. A similar transition was also observed at the downbeat after the pivot. Brain areas related to the metric transition were identified from source reconstruction of the MEG using a beamformer and included auditory cortices, sensorimotor and premotor cortices, cerebellum, inferior/middle frontal gyrus, parahippocampal gyrus, inferior parietal lobule, cingulate cortex, and precuneus. The results strongly support that predictive timing processes related to auditory-motor, fronto-parietal, and medial limbic systems underlie metrical representation and its transitions.

    View details for DOI 10.3389/fpsyg.2014.01257

    View details for PubMedID 25429274

    View details for PubMedCentralID PMC4228837

  • Human cortical responses to slow and fast binaural beats reveal multiple mechanisms of binaural hearing. Journal of Neurophysiology Ross, B., Miyazaki, T., Thompson, J., Jamali, S., Fujioka, T. 2014; 112 (8): 1871-84

    Abstract

    When two tones with slightly different frequencies are presented to both ears, they interact in the central auditory system and induce the sensation of a beating sound. At low difference frequencies, we perceive a single sound, which is moving across the head between the left and right ears. The percept changes to loudness fluctuation, roughness, and pitch with increasing beat rate. To examine the neural representations underlying these different perceptions, we recorded neuromagnetic cortical responses while participants listened to binaural beats at a continuously varying rate between 3 Hz and 60 Hz. Binaural beat responses were analyzed as neuromagnetic oscillations following the trajectory of the stimulus rate. Responses were largest in the 40-Hz gamma range and at low frequencies. Binaural beat responses at 3 Hz showed opposite polarity in the left and right auditory cortices. We suggest that this difference in polarity reflects the opponent neural population code for representing sound location. Binaural beats at any rate induced gamma oscillations. However, the responses were largest at 40-Hz stimulation. We propose that the neuromagnetic gamma oscillations reflect postsynaptic modulation that allows for precise timing of cortical neural firing. Systematic phase differences between bilateral responses suggest that separate sound representations of a sound object exist in the left and right auditory cortices. We conclude that binaural processing at the cortical level occurs with the same temporal acuity as monaural processing whereas the identification of sound location requires further interpretation and is limited by the rate of object representations.

    View details for DOI 10.1152/jn.00224.2014

    View details for PubMedID 25008412
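
    The stimulus construction itself is simple: a pure tone in each ear whose frequencies differ by the desired beat rate, so the beat arises only after binaural interaction in the central auditory system. A minimal sketch with arbitrary parameters (a fixed 40-Hz rate rather than the study's continuous 3-60 Hz sweep; the 500-Hz carrier is an assumption):

        import numpy as np
        from scipy.io import wavfile

        fs = 44100                        # audio sampling rate (Hz)
        dur = 2.0                         # duration (s)
        carrier = 500.0                   # center frequency (Hz); assumption
        beat = 40.0                       # beat rate (Hz) = interaural frequency difference

        t = np.arange(int(fs * dur)) / fs
        left = np.sin(2 * np.pi * (carrier - beat / 2) * t)
        right = np.sin(2 * np.pi * (carrier + beat / 2) * t)
        stereo = np.stack([left, right], axis=1)   # each ear gets a steady pure tone

        wavfile.write("binaural_beat_40hz.wav", fs, (0.5 * stereo * 32767).astype(np.int16))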

  • Beat-induced fluctuations in auditory cortical beta-band activity: using EEG to measure age-related changes. Frontiers in Psychology Cirelli, L. K., Bosnyak, D., Manning, F. C., Spinelli, C., Marie, C., Fujioka, T., Ghahremani, A., Trainor, L. J. 2014; 5: 742

    Abstract

    People readily extract regularity in rhythmic auditory patterns, enabling prediction of the onset of the next beat. Recent magnetoencephalography (MEG) research suggests that such prediction is reflected by the entrainment of oscillatory networks in the brain to the tempo of the sequence. In particular, induced beta-band oscillatory activity from auditory cortex decreases after each beat onset and rebounds prior to the onset of the next beat across tempi in a predictive manner. The objective of the present study was to examine the development of such oscillatory activity by comparing electroencephalography (EEG) measures of beta-band fluctuations in 7-year-old children to adults. EEG was recorded while participants listened passively to isochronous tone sequences at three tempi (390, 585, and 780 ms for onset-to-onset interval). In adults, induced power in the high beta-band (20-25 Hz) decreased after each tone onset and rebounded prior to the onset of the next tone across tempo conditions, consistent with MEG findings. In children, a similar pattern was measured in the two slower tempo conditions, but was weaker in the fastest condition. The results indicate that the beta-band timing network works similarly in children, although there are age-related changes in consistency and the tempo range over which it operates.

    View details for DOI 10.3389/fpsyg.2014.00742

    View details for Web of Science ID 000339007800001

    View details for PubMedID 25071691

    View details for PubMedCentralID PMC4093753

  • Neuromagnetic beta and gamma oscillations in the somatosensory cortex after music training in healthy older adults and a chronic stroke patient. Clinical Neurophysiology Jamali, S., Fujioka, T., Ross, B. 2014; 125 (6): 1213-1222

    Abstract

    Extensive rehabilitation training can lead to functional improvement even years after a stroke. Although neuronal plasticity is considered as a main origin of such ameliorations, specific subtending mechanisms need further investigation. Our aim was to obtain objective neuromagnetic measures sensitive to brain reorganizations induced by a music-supported training. We applied 20-Hz vibrotactile stimuli to the index finger and the ring finger, recorded somatosensory steady-state responses with magnetoencephalography, and analyzed the cortical sources displaying oscillations synchronized with the external stimuli in two groups of healthy older adults before and after musical training or without training. In addition, we applied the same analysis for an anecdotal report of a single chronic stroke patient with hemiparetic arm and hand problems, who received music-supported therapy (MST). Healthy older adults showed significant finger separation within the primary somatotopic map. Beta dipole sources were more anteriorly located compared to gamma sources. An anterior shift of sources and increases in synchrony between the stimuli and beta and gamma oscillations were observed selectively after music training. In the stroke patient a normalization of somatotopic organization was observed after MST, with digit separation recovered after training and stimulus-induced gamma synchrony increased. The proposed stimulation paradigm captures the integrity of primary somatosensory hand representation. Source position and synchronization between the stimuli and gamma activity are indices sensitive to music-supported training. Responsiveness was also observed in a chronic stroke patient, encouraging for music-supported therapy. Notably, changes in somatosensory responses were observed, even though the therapy did not involve specific sensory discrimination training. The proposed protocol can be used for monitoring changes in neuronal organization during training and will improve the understanding of the brain mechanisms underlying rehabilitation.

    View details for DOI 10.1016/j.clinph.2013.10.045

    View details for PubMedID 24290848

  • Synchronization of beta and gamma oscillations in the somatosensory evoked neuromagnetic steady-state response. Experimental Neurology Ross, B., Jamali, S., Miyazaki, T., Fujioka, T. 2013; 245: 40-51

    Abstract

    The sensory evoked neuromagnetic response consists of superimposition of an immediately stimulus-driven component and induced changes in the autonomous brain activity, each having distinct functional relevance. Commonly, the strength of phase locking in neural activities has been used to differentiate the different responses. The steady-state response is a strong oscillatory neural activity, which is evoked with rhythmic stimulation, and provides an effective tool to investigate oscillatory brain networks. In this case, both the sensory response and intrinsic activity, representing higher order processes, are highly synchronized to the stimulus. In this study we hypothesized that temporal dynamics of oscillatory activities would characterize the differences between the two types of activities and that beta and gamma oscillations are differently involved in this distinction. We used magnetoencephalography (MEG) for studying how ongoing steady-state responses elicited by a 20-Hz vibro-tactile stimulus to the right index finger were affected by a concurrent isolated touch stimulus to the ring finger of the same hand. SI source activity showed oscillations at multiples of 20 Hz with characteristic differences in the beta band and the gamma band. The response amplitudes were largest at 20 Hz (beta) and significantly reduced at 40 Hz and 60 Hz (gamma), although synchronization strength, indicated by inter-trial coherence (ITC), did not substantially differ between 20 Hz and 40 Hz. Moreover, the beta oscillations showed a fast onset, whereas the amplitude of gamma oscillations increased slowly and reached the steady state 400 ms after onset of the vibration stimulus. Most importantly, the pulse stimuli interacted only with gamma oscillations in such a way that gamma oscillations decreased immediately after the concurrent stimulus onset and recovered slowly, resembling the initial slope. Such time course of gamma oscillations is similar to our previous observations in the auditory system. The time constant is in line with the time required for conscious perception of the sensory stimulus. Based on the observed different spectro-temporal dynamics, we propose that while beta activities likely relate to independent representation of the sensory input, gamma oscillation likely relates to binding of sensory information for higher order processing.

    View details for DOI 10.1016/j.expneurol.2012.08.019

    View details for Web of Science ID 000320350300005

    View details for PubMedID 22955055
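
    The inter-trial coherence (ITC) mentioned above quantifies how consistently oscillatory phase repeats across trials (0 = random phase, 1 = perfect phase locking). Below is a minimal Python sketch of one standard way to compute it at a single frequency; the narrow-band filter and input shapes are illustrative assumptions, not the study's exact analysis.

        # Minimal sketch: inter-trial coherence at one frequency.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def itc(epochs, fs, freq, half_bw=2.0):
            # Narrow-band filter around the frequency of interest (e.g., 20 or 40 Hz).
            lo, hi = (freq - half_bw) / (fs / 2), (freq + half_bw) / (fs / 2)
            b, a = butter(4, [lo, hi], btype="band")
            analytic = hilbert(filtfilt(b, a, epochs, axis=1), axis=1)
            # ITC = mean length of the unit phase vectors across trials.
            return np.abs((analytic / np.abs(analytic)).mean(axis=0))

        itc_20 = itc(np.random.randn(200, 1000), fs=500, freq=20.0)
        itc_40 = itc(np.random.randn(200, 1000), fs=500, freq=40.0)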

  • Sound envelope encoding in the auditory cortex revealed by neuromagnetic responses in the theta to gamma frequency bands BRAIN RESEARCH Miyazaki, T., Thompson, J., Fujioka, T., Ross, B. 2013; 1506: 64-75

    Abstract

    Amplitude fluctuations of natural sounds carry multiple types of information represented at different time scales, such as syllables and voice pitch in speech. However, it is not well understood how such amplitude fluctuations at different time scales are processed in the brain. In the present study we investigated the effect of the stimulus rate on cortical evoked responses using magnetoencephalography (MEG). We used a two-tone complex sound whose envelope fluctuated at the difference frequency and induced an acoustic beat sensation. When the beat rate was continuously swept between 3 Hz and 60 Hz, the auditory evoked response showed distinct transient waves at slow rates, while at fast rates continuous sinusoidal oscillations similar to the auditory steady-state response (ASSR) were observed. We further derived temporal modulation transfer functions (TMTF) from the amplitudes of the transient responses and from the ASSR. The results identified two critical rates, 12.5 Hz and 25 Hz, at which consecutive transient responses overlapped with each other. These stimulus rates roughly correspond to the rates at which the perceptual quality of the sound envelope is known to change: low rates (< 10 Hz) are perceived as loudness fluctuation, medium rates as acoustical flutter, and rates above 25 Hz as roughness. We conclude that these results reflect cortical processes that integrate successive acoustic events at different time scales to extract complex features of natural sound.

    View details for DOI 10.1016/j.brainres.2013.01.047

    View details for Web of Science ID 000318208800007

    View details for PubMedID 23399682
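
    One point of a temporal modulation transfer function like the TMTF described above can be estimated as the steady-state response amplitude at the stimulation rate. The following minimal Python sketch reads that amplitude from the spectrum of an averaged response; the waveform, rates, and spectral resolution are illustrative assumptions.

        # Minimal sketch: steady-state amplitude at the modulation rate (one TMTF point).
        import numpy as np

        def assr_amplitude(avg_response, fs, rate):
            spectrum = np.fft.rfft(avg_response)
            freqs = np.fft.rfftfreq(avg_response.size, d=1.0 / fs)
            k = np.argmin(np.abs(freqs - rate))        # nearest frequency bin
            return 2.0 * np.abs(spectrum[k]) / avg_response.size

        fs, response = 1000, np.random.randn(5000)     # 5-s averaged waveform (assumed)
        tmtf = {r: assr_amplitude(response, fs, r) for r in (3, 12.5, 25, 40, 60)}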

  • Introduction to The neurosciences and music IV: learning and memory. Annals of the New York Academy of Sciences Altenmüller, E., Demorest, S. M., Fujioka, T., Halpern, A. R., Hannon, E. E., Loui, P., Majno, M., Oechslin, M. S., Osborne, N., Overy, K., Palmer, C., Peretz, I., Pfordresher, P. Q., Särkämö, T., Wan, C. Y., Zatorre, R. J. 2012; 1252: 1-16

    Abstract

    The conference entitled "The Neurosciences and Music-IV: Learning and Memory" was held at the University of Edinburgh from June 9-12, 2011, jointly hosted by the Mariani Foundation and the Institute for Music in Human and Social Development, and involved nearly 500 international delegates. Two opening workshops, three large and vibrant poster sessions, and nine invited symposia introduced a diverse range of recent research findings and discussed current research directions. Here, the proceedings are introduced by the workshop and symposia leaders on topics including working with children, rhythm perception, language processing, cultural learning, memory, musical imagery, neural plasticity, stroke rehabilitation, autism, and amusia. The rich diversity of the interdisciplinary research presented suggests that the future of music neuroscience looks both exciting and promising, and that important implications for music rehabilitation and therapy are being discovered.

    View details for DOI 10.1111/j.1749-6632.2012.06474.x

    View details for PubMedID 22524334

  • Internalized Timing of Isochronous Sounds Is Represented in Neuromagnetic Beta Oscillations JOURNAL OF NEUROSCIENCE Fujioka, T., Trainor, L. J., Large, E. W., Ross, B. 2012; 32 (5): 1791-1802

    Abstract

    Moving in synchrony with an auditory rhythm requires predictive action based on a neurodynamic representation of temporal information. Although it is known that a regular auditory rhythm can facilitate rhythmic movement, the neural mechanisms underlying this phenomenon remain poorly understood. In this experiment using human magnetoencephalography, 12 young healthy adults listened passively to an isochronous auditory rhythm without producing rhythmic movement. We hypothesized that the dynamics of neuromagnetic beta-band oscillations (~20 Hz), which are known to reflect changes in the active status of sensorimotor functions, would show modulations in both power and phase coherence related to the rate of the auditory rhythm across both the auditory and motor systems. Despite the absence of an intention to move, modulation of beta amplitude as well as changes in cortico-cortical coherence followed the tempo of sound stimulation in auditory cortices and motor-related areas, including the sensorimotor cortex, inferior frontal gyrus, supplementary motor area, and the cerebellum. The time course of the beta decrease after stimulus onset was consistent regardless of the rate or regularity of the stimulus, but the time course of the following beta rebound depended on the stimulus rate only in the regular stimulus conditions, such that the beta amplitude reached its maximum just before the occurrence of the next sound. Our results suggest that the time course of beta modulation provides a mechanism for maintaining predictive timing, that beta oscillations reflect functional coordination between the auditory and motor systems, and that coherence in beta oscillations dynamically configures the sensorimotor networks for auditory-motor coupling.

    View details for DOI 10.1523/JNEUROSCI.4107-11.2012

    View details for Web of Science ID 000299977200025

    View details for PubMedID 22302818
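
    The cortico-cortical coupling reported above is commonly quantified as spectral coherence between two source time courses. Below is a minimal Python sketch using two synthetic signals that share a 20-Hz component; the signals, window length, and band edges are illustrative assumptions, not the study's source-space analysis.

        # Minimal sketch: beta-band coherence between two source time courses.
        import numpy as np
        from scipy.signal import coherence

        fs = 500
        t = np.arange(fs * 60) / fs                       # 60 s of data
        shared = np.sin(2 * np.pi * 20.0 * t)             # common 20-Hz drive
        auditory = shared + 0.5 * np.random.randn(t.size)
        motor = shared + 0.5 * np.random.randn(t.size)
        freqs, coh = coherence(auditory, motor, fs=fs, nperseg=1024)
        beta = coh[(freqs >= 15) & (freqs <= 30)].mean()  # mean beta-band coherence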

  • Changes in neuromagnetic beta-band oscillation after music-supported stroke rehabilitation Conference on Neurosciences and Music-IV - Learning and Memory Fujioka, T., Ween, J. E., Jamali, S., Stuss, D. T., Ross, B. BLACKWELL SCIENCE PUBL. 2012: 294–304

    Abstract

    Precise timing of sound is crucial in music for both performing and listening. Indeed, listening to rhythmic sound sequences activates not only the auditory system but also the sensorimotor system. Previously, we showed the significance of neural beta-band oscillations (15-30 Hz) for timing processing that involves such auditory-motor coordination. We therefore hypothesized that motor rehabilitation training incorporating music playing would stimulate and enhance auditory-motor interaction in stroke patients. We examined three chronic patients who received Music-Supported Therapy following the protocols practiced by Schneider. Neuromagnetic beta-band activity was remarkably similar during passive listening to a metronome and during finger tapping, with or without the metronome, for either the paretic or the nonparetic hand, suggesting a shared mechanism underlying the beta modulation. In the listening task, the magnitude of the beta decrease after tone onset was more pronounced at the posttraining time point and was accompanied by improved arm and hand skills. The present case data give insight into the neural underpinnings of rehabilitation with music making and rhythmic auditory stimulation.

    View details for DOI 10.1111/j.1749-6632.2011.06436.x

    View details for Web of Science ID 000305518900038

    View details for PubMedID 22524371

  • Interference in dichotic listening: the effect of contralateral noise on oscillatory brain networks EUROPEAN JOURNAL OF NEUROSCIENCE Ross, B., Miyazaki, T., Fujioka, T. 2012; 35 (1): 106-118

    Abstract

    Coupling of thalamocortical networks through synchronous oscillations at gamma frequencies (30-80 Hz) has been suggested as a mechanism for binding auditory sensory information into an object representation, which then becomes accessible for perception and cognition. This study investigated whether contralateral noise interferes with this step of central auditory processing. Neuromagnetic 40-Hz oscillations were examined in young healthy participants while they listened to an amplitude-modulated sound in one ear and a multi-talker masking noise in the contralateral ear. Participants were engaged in a gap-detection task, for which their behavioural performance declined under masking. The amplitude modulation of the stimulus elicited steady 40-Hz oscillations with sources in bilateral auditory cortices. Analysis of the temporal dynamics of phase synchrony between the source activity and the stimulus revealed two oscillatory components: the first was indicated by an instant onset of phase synchrony with the stimulus, while the second showed a gradual increase in phase synchrony with a 200-ms time constant after phase resetting by the gap. Masking abolished only the second component. This coincided with a masking-related decrease of the P2 wave of the transient auditory-evoked responses, whereas the N1 wave, reflecting early sensory processing, was unaffected. Given that the P2 response has been associated with object representation, we propose that the first 40-Hz component is related to the representation of low-level sensory input, whereas the second is related to internal auditory processing in thalamocortical networks. The observed modulation of oscillatory activity is discussed as reflecting a neural mechanism critical for speech understanding in noise.

    View details for DOI 10.1111/j.1460-9568.2011.07935.x

    View details for Web of Science ID 000298737500012

    View details for PubMedID 22171970

  • The effects of stimulus rate and tapping rate on tapping performance MUSIC PERCEPTION Zendel, B. R., Ross, B., Fujioka, T. 2011; 29 (1): 65-78
  • Development of auditory-specific brain rhythm in infants EUROPEAN JOURNAL OF NEUROSCIENCE Fujioka, T., Mourad, N., Trainor, L. J. 2011; 33 (3): 521-529

    Abstract

    Human infants rapidly develop their auditory perceptual abilities and acquire culture-specific knowledge of speech and music in the second 6 months of life. In the adult brain, neural rhythm around 10 Hz in the temporal lobes is thought to reflect sound analysis and subsequent cognitive processes such as memory and attention. To study when and how such rhythm emerges in infancy, we examined electroencephalogram (EEG) recordings in infants 4 and 12 months of age during sound stimulation and silence. In the 4-month-olds, the amplitude of a narrowly tuned 4-Hz brain rhythm, recorded from bilateral temporal electrodes, was modulated by sound stimuli. In the 12-month-olds, the sound-induced modulation occurred at a faster 6-Hz rhythm at temporofrontal locations. The brain rhythms in the older infants consisted of more complex components, as was evident even in individual data. These findings suggest that auditory-specific rhythmic neural activity, which is already established before 6 months of age, comes to involve more speed-efficient long-range neural networks by the age of 12 months, when long-term memory for native phoneme representations and for musical rhythmic features is formed. We suggest that maturation of distinct rhythmic components occurs in parallel, and that sensory-specific functions bound to particular thalamo-cortical networks are transferred step by step to newly developed higher-order networks until adult hierarchical neural oscillatory mechanisms are achieved across the whole brain.

    View details for DOI 10.1111/j.1460-9568.2010.07544.x

    View details for Web of Science ID 000286769800014

    View details for PubMedID 21226773

  • Comparison of artifact correction methods for infant EEG applied to extraction of event-related potential signals CLINICAL NEUROPHYSIOLOGY Fujioka, T., Mourad, N., He, C., Trainor, L. J. 2011; 122 (1): 43-51

    Abstract

    EEG recording is useful for neurological and cognitive assessment, but acquiring reliable data in infants and special populations presents the challenges of limited recording time, high-amplitude background activity, and movement-related artifacts. This study objectively evaluated our previously proposed ERP analysis techniques. We compared three artifact removal techniques: Conventional Trial Rejection (CTR), Independent Channel Rejection (ICR; He et al., 2007), and Artifact Blocking (AB; Mourad et al., 2007). We embedded a synthesized auditory ERP signal into real EEG activity recorded from 4-month-old infants and then compared the ability of the three techniques to extract that signal from the noise. Examination of correlation coefficients, variance in the gain across sensors, and residual power revealed that ICR and AB were significantly more successful than CTR at accurately extracting the signal. Overall performance of ICR and AB was comparable, although the AB algorithm introduced less spatial distortion than ICR. ICR and AB are improvements over CTR in cases where the signal-to-noise ratio is low. Both ICR and AB are improvements over standard techniques, and AB can be applied to both continuous and epoched EEG.

    View details for DOI 10.1016/j.clinph.2010.04.036

    View details for Web of Science ID 000285406600010

    View details for PubMedID 20580601
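
    Of the three techniques compared above, Conventional Trial Rejection is the simplest to sketch: discard trials whose peak-to-peak amplitude exceeds a threshold, then average. The minimal Python sketch below also mimics the paper's evaluation strategy of embedding a known synthetic ERP and correlating the recovered average with it. The threshold, waveform, and noise level are illustrative assumptions; ICR and AB follow the cited algorithms and are not reproduced here.

        # Minimal sketch: Conventional Trial Rejection (CTR) plus signal-recovery check.
        import numpy as np

        def ctr_average(epochs, threshold):
            # Keep trials whose peak-to-peak amplitude stays under the threshold.
            ptp = epochs.max(axis=1) - epochs.min(axis=1)
            kept = epochs[ptp < threshold]
            return kept.mean(axis=0), kept.shape[0]

        rng = np.random.default_rng(0)
        template = np.sin(np.linspace(0, np.pi, 500))       # synthetic ERP (assumed shape)
        epochs = template + 15.0 * rng.standard_normal((300, 500))
        erp, n_kept = ctr_average(epochs, threshold=120.0)
        recovery = np.corrcoef(erp, template)[0, 1]         # correlation with embedded signal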

  • Endogenous Neuromagnetic Activity for Mental Hierarchy of Timing JOURNAL OF NEUROSCIENCE Fujioka, T., Zendel, B. R., Ross, B. 2010; 30 (9): 3458-3466

    Abstract

    The frontal-striatal circuits, the cerebellum, and motor cortices play crucial roles in processing timing information on scales of seconds to milliseconds. However, little is known about the physiological mechanism underlying humans' preference to robustly encode a sequence of time intervals into a mental hierarchy of temporal units called meter. This is especially salient in music: temporal patterns are typically interpreted as integer multiples of a basic unit (i.e., the beat) and accommodated into a global context such as a march or waltz. With magnetoencephalography and spatial-filtering source analysis, we demonstrated that the time courses of neural activity index a subjectively induced meter context. Auditory evoked responses from the hippocampus, basal ganglia, and auditory and association cortices showed a significant contrast between the march and waltz metric conditions during listening to identical click stimuli. Specifically, the right hippocampus was activated differentially at 80 ms after the march downbeat (the count of one) and at approximately 250 ms after the waltz downbeat. In contrast, the basal ganglia showed a larger 80-ms peak for the march downbeat than for the waltz downbeat. The metric contrast was also expressed in long-latency responses in the right temporal lobe. These findings suggest that anticipatory processes in the hippocampal memory system and temporal computation mechanisms in the basal ganglia circuits facilitate endogenous activity in auditory and association cortices through feedback loops. The close interaction of the auditory, motor, and limbic systems suggests a distributed network for metric organization in temporal processing and its relevance for musical behavior.

    View details for DOI 10.1523/JNEUROSCI.3086-09.2010

    View details for Web of Science ID 000275191000031

    View details for PubMedID 20203205

  • Beta and Gamma Rhythms in Human Auditory Cortex during Musical Beat Processing Conference on the Neurosciences and Music III Fujioka, T., Trainor, L. J., Large, E. W., Ross, B. WILEY-BLACKWELL. 2009: 89–92

    Abstract

    We examined beta- (approximately 20 Hz) and gamma-band (approximately 40 Hz) activity in auditory cortices by means of magnetoencephalography (MEG) during passive listening to a regular musical beat with occasional omissions of single tones. The beta activity decreased after each tone and was followed by an increase, forming a periodic modulation synchronized with the stimulus. The beta decrease was absent after omissions. In contrast, gamma-band activity showed a peak after both tones and omissions, suggesting underlying endogenous anticipatory processes. We propose that auditory beta and gamma oscillations have different roles in musical beat encoding and auditory-motor interaction.

    View details for DOI 10.1111/j.1749-6632.2009.04779.x

    View details for Web of Science ID 000269652300010

    View details for PubMedID 19673759

  • Neural Representation of Transposed Melody in Infants at 6 Months of Age Conference on the Neurosciences and Music III Tew, S., Fujioka, T., He, C., Trainor, L. WILEY-BLACKWELL. 2009: 287–290

    Abstract

    We examined adults' and 6-month-old infants' event-related potentials in response to occasional changes (deviants) in a 4-note melody presented at different pitch levels from trial to trial. In both groups, responses to standard and deviant stimuli differed significantly; however, adults produced a typical mismatch negativity (MMN), whereas 6-month-old infants exhibited a slow positive wave. We conclude that 6-month-old infants, like adults, encode melodic information in terms of relative pitch distances, but that the underlying cortical activity differs significantly from that of adults.

    View details for DOI 10.1111/j.1749-6632.2009.04845.x

    View details for Web of Science ID 000269652300043

    View details for PubMedID 19673795

  • Auditory processing indexed by stimulus-induced alpha desynchronization in children INTERNATIONAL JOURNAL OF PSYCHOPHYSIOLOGY Fujioka, T., Ross, B. 2008; 68 (2): 130-140

    Abstract

    By means of magnetoencephalography (MEG), we investigated event-related synchronization and desynchronization (ERS/ERD) in auditory cortex activity, recorded from twelve children aged four to six years while they passively listened to a violin tone and a noise-burst stimulus. Time-frequency analysis using the wavelet transform was applied to single trials of source waveforms observed from the left and right auditory cortices. Stimulus-induced changes in non-phase-locked activity were evident. ERS in the beta range (13-30 Hz) lasted only 100 ms after stimulus onset. This was followed by prominent alpha ERD, which showed a clear dissociation between the upper (12 Hz) and lower (8 Hz) alpha ranges in both left and right auditory cortices for both stimuli. The time course of the alpha ERD (onset around 300 ms, peak at 500 ms, offset after 1500 ms) was similar to those previously found for older children and adults in auditory memory-related tasks. For the violin tone only, the ERD lasted longer in the upper than in the lower alpha band. The findings suggest that induced alpha ERD indexes auditory stimulus processing in children without specific cognitive task requirements. The left auditory cortex showed a larger and longer-lasting upper alpha ERD than the right auditory cortex, likely reflecting hemispheric differences in the maturational stages of neural oscillatory mechanisms.

    View details for DOI 10.1016/j.ijpsycho.2007.12.004

    View details for Web of Science ID 000256205900007

    View details for PubMedID 18331761
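
    ERD/ERS values like those above are conventionally expressed as the percentage change in band power relative to a pre-stimulus baseline, so that negative values indicate desynchronization. Below is a minimal Python sketch under assumed epoch shapes, band edges, and baseline length; it is not the study's wavelet-based pipeline.

        # Minimal sketch: ERD/ERS as percent power change from baseline (upper alpha band).
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def erd_ers(epochs, fs, onset, band=(10.0, 14.0)):
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            power = np.abs(hilbert(filtfilt(b, a, epochs, axis=1), axis=1)) ** 2
            mean_power = power.mean(axis=0)                    # average over trials
            baseline = mean_power[:onset].mean()               # pre-stimulus reference
            return 100.0 * (mean_power - baseline) / baseline  # negative = ERD

        curve = erd_ers(np.random.randn(80, 1500), fs=500, onset=250)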

  • Simultaneous pitches are encoded separately in auditory cortex: an MMNm study NEUROREPORT Fujioka, T., Trainor, L. J., Ross, B. 2008; 19 (3): 361-366

    Abstract

    This study examined whether two simultaneous pitches have separate memory representations or an integrated representation in preattentive auditory memory. Mismatch negativity fields (MMNm) were examined when a pitch change occurred in either the higher-pitched or the lower-pitched tone at 25% probability each, thus making the total deviation rate of the two-tone dyad 50%. Clear MMNm was obtained for deviants in both tones, confirming separate memory traces for concurrent tones. At the same time, deviants in the lower-pitched, but not the higher-pitched, tone within the two-tone dyad elicited a reduced MMNm compared to when each tone was presented alone, indicating that the representations of the two pitches are not completely independent.

    View details for Web of Science ID 000253301400020

    View details for PubMedID 18303582

  • Dynamics of thalamocortical circuits for sound processing revealed by magnetoencephalography The Journal of the Acoustical Society of America Fujioka, T. 2008; 124 (4): 2448
  • Time courses of cortical beta and gamma-band activity during listening to metronome sounds in different tempi The Journal of the Acoustical Society of America Fujioka, T. 2008; 124 (4): 2432
  • Aging in binaural hearing begins in mid-life: Evidence from cortical auditory-evoked responses to changes in interaural phase JOURNAL OF NEUROSCIENCE Ross, B., Fujioka, T., Tremblay, K. L., Picton, T. W. 2007; 27 (42): 11172-11178

    Abstract

    Older adults often have difficulty understanding speech in a noisy environment or with multiple speakers. In such situations, binaural hearing improves the signal-to-noise ratio. How does this binaural advantage change with increasing age? Using magnetoencephalography, we recorded cortical activity evoked by changes in interaural phase differences of amplitude-modulated tones. These responses occurred for frequencies up to 1225 Hz in young subjects but only up to 940 Hz in middle-aged and 760 Hz in older adults. Behavioral thresholds also decreased with increasing age but were more variable, likely because some older adults make effective use of compensatory mechanisms. The reduced frequency range for binaural hearing became significant in middle age, before decline in hearing sensation and the morphology of cortical responses, which became apparent only in the older subjects. This study provides evidence from human physiological data for the early onset of biological aging in binaural hearing.

    View details for DOI 10.1523/JNEUROSCI.1813-07.2007

    View details for Web of Science ID 000250223700004

    View details for PubMedID 17942712

  • Magnetoencephalographic study: Musical cognition in auditory cortex Shinkei naika Fujioka, T. 2007; 66 (6)
  • One year of musical training affects development of auditory cortical-evoked fields in young children BRAIN Fujioka, T., Ross, B., Kakigi, R., Pantev, C., Trainor, L. J. 2006; 129: 2593-2608

    Abstract

    Auditory evoked responses to a violin tone and a noise-burst stimulus were recorded from 4- to 6-year-old children in four repeated measurements over a 1-year period using magnetoencephalography (MEG). Half of the subjects participated in musical lessons throughout the year; the other half had no music lessons. Auditory evoked magnetic fields showed prominent bilateral P100m, N250m, P320m and N450m peaks. Significant changes in the peak latencies of all components except P100m were observed over time. Larger P100m and N450m amplitudes, as well as more rapid changes in N250m amplitude and latency, were associated with the violin rather than the noise stimuli. Larger P100m and P320m peak amplitudes in the left hemisphere than in the right are consistent with left-lateralized cortical development in this age group. A clear musical training effect was expressed in a larger and earlier N250m peak in the left hemisphere in response to the violin sound in musically trained children compared with untrained children. This difference coincided with a pronounced morphological change in a time window between 100 and 400 ms, which was observed in musically trained children in response to violin stimuli only, whereas in untrained children a similar change was present regardless of stimulus type. This transition could be related to the establishment of a neural network associated with sound categorization and/or involuntary attention, which can be altered by music learning experience.

    View details for DOI 10.1093/brain/awl247

    View details for Web of Science ID 000240925500008

    View details for PubMedID 16959812

  • Cortical oscillations related to processing congruent and incongruent grapheme-phoneme pairs NEUROSCIENCE LETTERS Herdman, A. T., Fujioka, T., Chau, W., Ross, B., Pantev, C., Picton, T. W. 2006; 399 (1-2): 61-66

    Abstract

    In this study, we investigated changes in cortical oscillations following congruent and incongruent grapheme-phoneme stimuli. Hiragana graphemes and phonemes were simultaneously presented as congruent or incongruent audiovisual stimuli to native Japanese-speaking participants. The discriminative reaction time was 57 ms shorter for congruent than incongruent stimuli. Analysis of MEG responses using synthetic aperture magnetometry (SAM) revealed that congruent stimuli evoked larger 2-10 Hz activity in the left auditory cortex within the first 250 ms after stimulus onset, and smaller 2-16 Hz activity in bilateral visual cortices between 250 and 500 ms. These results indicate that congruent visual input can modify cortical activity in the left auditory cortex.

    View details for DOI 10.1016/j.neulet.2006.01.069

    View details for Web of Science ID 000237526700012

    View details for PubMedID 16507333

  • Music cognition in the auditory cortex by magnetoencephalography Clinical Neuroscience Fujioka, T., Kakigi, R. 2006; 24 (10)
  • Sound discrimination in the auditory cortex by magnetoencephalography Clinical Neuroscience Fujioka, T., Kakigi, R. 2006; 24 (8)
  • Automatic encoding of polyphonic melodies in musicians and nonmusicians JOURNAL OF COGNITIVE NEUROSCIENCE Fujioka, T., Trainor, L. J., Ross, B., Kakigi, R., Pantev, C. 2005; 17 (10): 1578-1592

    Abstract

    In music, multiple musical objects often overlap in time. Western polyphonic music contains multiple simultaneous melodic lines (referred to as "voices") of equal importance. Previous electrophysiological studies have shown that pitch changes in a single melody are automatically encoded in memory traces, as indexed by mismatch negativity (MMN) and its magnetic counterpart (MMNm), and that this encoding process is enhanced by musical experience. In the present study, we examined whether two simultaneous melodies in polyphonic music are represented as separate entities in the auditory memory trace. Musicians and untrained controls were tested in both magnetoencephalogram and behavioral sessions. Polyphonic stimuli were created by combining two melodies (A and B), each consisting of the same five notes but in a different order. Melody A was in the high voice and Melody B in the low voice in one condition, and this was reversed in the other condition. On 50% of trials, a deviant final (5th) note was played either in the high or in the low voice, and it either went outside the key of the melody or remained within the key. These four deviations occurred with equal probability of 12.5% each. Clear MMNm was obtained for most changes in both groups, despite the 50% deviance level, with a larger amplitude in musicians than in controls. The response pattern was consistent across groups, with larger MMNm for deviants in the high voice than in the low voice, and larger MMNm for in-key than out-of-key changes, despite better behavioral performance for out-of-key changes. The results suggest that melodic information in each voice in polyphonic music is encoded in the sensory memory trace, that the higher voice is more salient than the lower, and that tonality may be processed primarily at cognitive stages subsequent to MMN generation.

    View details for Web of Science ID 000232884500008

    View details for PubMedID 16269098

  • Music recognition starts in the auditory cortex Rinsho-noha Fujioka, T., Kakigi, R. 2005
  • Musical training enhances automatic encoding of melodic contour and interval structure JOURNAL OF COGNITIVE NEUROSCIENCE Fujioka, T., Trainor, L. J., Ross, B., Kakigi, R., Pantev, C. 2004; 16 (6): 1010-1021

    Abstract

    In music, melodic information is thought to be encoded in two forms, a contour code (up/down pattern of pitch changes) and an interval code (pitch distances between successive notes). A recent study recording the mismatch negativity (MMN) evoked by pitch contour and interval deviations in simple melodies demonstrated that people with no formal music education process both contour and interval information in the auditory cortex automatically. However, it is still unclear whether musical experience enhances both strategies of melodic encoding. We designed stimuli to examine contour and interval information separately. In the contour condition there were eight different standard melodies (presented on 80% of trials), each consisting of five notes all ascending in pitch, and the corresponding deviant melodies (20%) were altered to descending on their final note. The interval condition used one five-note standard melody transposed to eight keys from trial to trial, and on deviant trials the last note was raised by one whole tone without changing the pitch contour. There was also a control condition, in which a standard tone (990.7 Hz) and a deviant tone (1111.0 Hz) were presented. The magnetic counterpart of the MMN (MMNm) from musicians and nonmusicians was obtained as the difference between the dipole moment in response to the standard and deviant trials recorded by magnetoencephalography. Significantly larger MMNm was present in musicians in both contour and interval conditions than in nonmusicians, whereas MMNm in the control condition was similar for both groups. The interval MMNm was larger than the contour MMNm in musicians. No hemispheric difference was found in either group. The results suggest that musical training enhances the ability to automatically register abstract changes in the relative pitch structure of melodies.

    View details for Web of Science ID 000222984200011

    View details for PubMedID 15298788
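
    The MMNm measure used above is, at its core, a difference wave: the averaged deviant response minus the averaged standard response, quantified at its peak in a post-deviance window. A minimal Python sketch with illustrative array shapes and window; the actual study derived MMNm from source-space dipole moments.

        # Minimal sketch: mismatch difference wave and its peak amplitude.
        import numpy as np

        def mismatch_wave(standards, deviants):
            return deviants.mean(axis=0) - standards.mean(axis=0)

        def peak_in_window(wave, fs, t0, t1):
            # MMN(m) is typically the largest deflection ~100-250 ms after deviance onset.
            i0, i1 = int(t0 * fs), int(t1 * fs)
            seg = wave[i0:i1]
            return seg[np.argmax(np.abs(seg))]

        mmn = mismatch_wave(np.random.randn(400, 300), np.random.randn(100, 300))
        amp = peak_in_window(mmn, fs=500, t0=0.10, t1=0.25)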

  • Auditory memory trace encodes polyphonic melody in musicians Proceedings in the 8th International Conference on Music Perception and Cognition (CD-ROM) Fujioka, T., Trainor, L. J., Ross, B., Kakigi, R., Pantev, C. 2004
  • Cortical oscillations modulated by congruent and incongruent audiovisual stimuli. Neurology & clinical neurophysiology : NCN Herdman, A. T., Fujioka, T., Chau, W., Ross, B., Pantev, C., Picton, T. W. 2004; 2004: 15-?

    Abstract

    Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native Japanese-speaking participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster, by 57 ms, than reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in the multi-sensory processing of audiovisual objects.

    View details for PubMedID 16012678

  • Cortical oscillations modulated by congruent and incongruent audiovisual stimuli Proceedings in the 14th international conference on biomagnetism Herdman, A. T., Fujioka, T., Chau, W., Ross, B., Pantev, C., Picton, T. W. 2004: 284–285
  • Auditory Memory Trace Encodes Polyphonic Melody Proceedings in the 14th international conference on biomagnetism Fujioka, T., Trainor, L. J., Ross, B., Kakigi, R., Pantev, C. 2004: 501–502
  • Static and dynamic representation of complex sounds: from tonotopy to musical notes Proceedings in the 14th international conference on biomagnetism Fujioka, T., Trainor, L. J., Ross, B., Kakigi, R., Pantev, C. 2004: 26
  • Tonotopic representation of missing fundamental complex sounds in the human auditory cortex EUROPEAN JOURNAL OF NEUROSCIENCE Fujioka, T., Ross, B., Okamoto, H., Takeshima, Y., Kakigi, R., Pantev, C. 2003; 18 (2): 432-440

    Abstract

    The N1m component of the auditory evoked magnetic field in response to tones and complex sounds was examined in order to clarify whether the tonotopic representation in the human secondary auditory cortex is based on perceived pitch or on the physical frequency spectrum of the sound. The investigated stimulus parameters were the fundamental frequencies (F0 = 250, 500 and 1000 Hz), the spectral composition of the higher harmonics of the missing fundamental sounds (2nd to 5th, 6th to 9th and 10th to 13th harmonics) and the frequencies of pure tones corresponding to F0 and to the lowest component of each complex sound. Tonotopic gradients showed that high frequencies were represented more medially than low frequencies for the pure tones and for the centre frequency of the complex tones. Furthermore, in the superior-inferior direction, the tonotopic gradients differed between pure tones and complex sounds. The results were interpreted as reflecting different processing in the auditory cortex for pure tones and complex sounds. This hypothesis was supported by the finding that evoked responses to complex sounds had longer latencies. A more pronounced tonotopic representation in the right hemisphere gave evidence for right-hemispheric dominance in spectral processing.

    View details for DOI 10.1046/j.1460-9568.2003.02769.x

    View details for Web of Science ID 000184316400022

    View details for PubMedID 12887425

  • Music and learning-induced cortical plasticity Conference on Neurosciences and Music: Mutual Interactions and Implications on Developmental Functions Pantev, C., Ross, B., Fujioka, T., Trainor, L. J., Schulte, M., Schulz, M. NEW YORK ACAD SCIENCES. 2003: 438–450

    Abstract

    Auditory stimuli are encoded by frequency-tuned neurons in the auditory cortex. There are a number of tonotopic maps, indicating that there are multiple representations, as in a mosaic. However, the cortical organization is not fixed due to the brain's capacity to adapt to current requirements of the environment. Several experiments on cerebral cortical organization in musicians demonstrate an astonishing plasticity. We used the MEG technique in a number of studies to investigate the changes that occur in the human auditory cortex when a skill is acquired, such as when learning to play a musical instrument. We found enlarged cortical representation of tones of the musical scale as compared to pure tones in skilled musicians. Enlargement was correlated with the age at which musicians began to practice. We also investigated cortical representations for notes of different timbre (violin and trumpet) and found that they are enhanced in violinists and trumpeters, preferentially for the timbre of the instrument on which the musician was trained. In recent studies we extended these findings in three ways. First, we show that we can use MEG to measure the effects of relatively short-term laboratory training involving learning to perceive virtual instead of spectral pitch and that the switch to perceiving virtual pitch is manifested in the gamma band frequency. Second, we show that there is cross-modal plasticity in that when the lips of trumpet players are stimulated (trumpet players assess their auditory performance by monitoring the position and pressure of their lips touching the mouthpiece of their instrument) at the same time as a trumpet tone, activation in the somatosensory cortex is increased more than it is during the sum of the separate lip and trumpet tone stimulation. Third, we show that musicians' automatic encoding and discrimination of pitch contour and interval information in melodies are specifically enhanced compared to those in nonmusicians in that musicians show larger functional mismatch negativity (MMNm) responses to occasional changes in melodic contour or interval, but that the two groups show similar MMNm responses to changes in the frequency of a pure tone.

    View details for DOI 10.1196/annals.1284.054

    View details for Web of Science ID 000188893400057

    View details for PubMedID 14681168

  • The auditory evoked magnetic fields to very high frequency tones NEUROSCIENCE Fujioka, T., Kakigi, R., Gunji, A., Takeshima, Y. 2002; 112 (2): 367-381

    Abstract

    We studied the auditory evoked magnetic fields (AEFs) in response to pure tones, especially at very high frequencies (from 4000 Hz to 40,000 Hz). This is the first systematic study of AEFs using tones above 5000 Hz, in the upper audible range of humans, as well as ultrasound. We performed two experiments. In the first, AEFs were recorded in 12 subjects from both hemispheres under binaural listening conditions. Six types of auditory stimulus (pure tones of five different frequencies: 4000 Hz, 8000 Hz, 10,000 Hz, 12,000 Hz, and 14,000 Hz, plus a click sound as the target stimulus) were used. In the second experiment, we used 1000 Hz, 15,000 Hz, and two ultrasounds with frequencies of 20,000 Hz and 40,000 Hz. The subjects could detect all stimuli in the first experiment but not the ultrasounds in the second experiment. We analyzed the N1m, the main response with approximately 100 ms peak latency, and made the following findings. (1) N1m responses to tones up to 12,000 Hz were clearly recorded from at least one hemisphere in all 12 subjects. The N1m for 14,000 Hz was identified in at least one hemisphere in 10 subjects, and in both hemispheres in six subjects. No significant response could be identified to the ultrasounds (20,000 and 40,000 Hz). (2) The amplitude of the N1m to tones above 8000 Hz was significantly smaller than that to 4000 Hz in both hemispheres. There was a tendency for the peak latency of the N1m to be longer for tones of higher frequencies, but no significant change was found. (3) The equivalent current dipole (ECD) of the N1m was located in the auditory cortex. There was a tendency for the ECDs of higher-frequency tones to lie in more medial and posterior areas, but no significant change was found. (4) As for interhemispheric differences, the N1m amplitude for tones of all frequencies was significantly larger, and the ECDs were estimated to be located more anteriorly and medially, in the right hemisphere than in the left. The dominance of the right hemisphere, that is, its larger amplitude, for very high frequency tones was thus confirmed. (5) The orientation of the ECD in the left hemisphere became significantly more vertical with higher tone frequencies. This result is consistent with previous studies that revealed a sensitivity to frequency differences in the left hemisphere. From these findings we suggest that tonotopy in the auditory cortex extends up to the upper limit of the audible range within a small area, and that directly air-conducted ultrasounds are not represented there.

    View details for Web of Science ID 000176525700012

    View details for PubMedID 12044454

  • Cortical representation of pitch, contour, and interval changes of melodies Proceedings in the 13th international conference on biomagnetism Fujioka, T., Trainor, L. J., Ross, B., Kakigi, R., Pantev, C. 2002
  • Cortical representation of pitch and timbre of the missing fundamental of complex sounds Proceedings in the 13th international conference on biomagnetism Fujioka, T., Okamoto, H., Takeshima, Y., Kakigi, R. edited by Nowak, H., Haueisen, J., Giessler, F., Hounker, R. 2002: 62–64
  • The auditory evoked magnetic fields to very high frequency tones Proceedings in the 16th Japanese Biomagnetism conference Fujioka, T., Kakigi, R., Gunji, A., Takeshima, Y. edited by Keiji, I., Ikehata, M. 2001: 204–5
  • SGML Document Structurization Based on Categorical Relations of Semantic Values of Words The 5th Annual Meeting of The Association for Natural Language Processing Fujioka, T. 1999: 397–398
  • Document Structurization by Indices Belonging to Multiple Categories The 12th Annual Conference of the Japanese Society for Artificial Intelligence Fujioka, T. 1998
  • Parallelization Technique for the Quasi-Destructive Graph Unification Algorithm Special Interest Group on Natural Language Processing, Information Processing Society of Japan Fujioka, T., Tomabechi, H., Furuse, O., Iida, H. 1990