Bio


Juan S. Gómez-Cañón is a researcher, engineer, and musician from Colombia. He holds a Ph.D. in Information and Communication Technologies from the Universitat Pompeu Fabra (Barcelona, Spain). During his Ph.D., Juan researched human-centered and trustworthy machine learning methods for predicting emotions in music. His research focuses on deep learning, human-centered ML, personalization, dataset curation, and digital signal processing. Juan also holds an M.Sc. in Media Technology (Technische Universität Ilmenau, Germany), a B.Sc. in Electronics Engineering, and a B.A. in Music (Universidad de los Andes, Colombia).

All Publications


  • Personalisation and profiling using algorithms and not-so-popular Colombian music: goal-directed mechanisms in music emotion recognition. EPJ Data Science. Gómez-Cañón, J., Lennie, T., Eerola, T., Aragon, P., Cano, E., Herrera, P., Gómez, E. 2025; 14 (1): 80

    Abstract

    This work investigates how personalised Music Emotion Recognition (MER) systems may lead to sensitive profiling when applied to musically induced emotions in politically charged contexts. We focus on traditional Colombian music with explicit political content, including (1) vallenatos and social songs aligned with the left-wing guerrilla Fuerzas Armadas Revolucionarias de Colombia (FARC), and (2) corridos linked to sympathisers of the right-wing paramilitary group Autodefensas Unidas de Colombia (AUC). Using data from 49 participants with diverse political leanings, we train personalised machine learning models to predict induced emotional responses - particularly negative emotions. Our findings reveal that political identity plays a significant role in shaping emotional experiences of music with explicit political content, and that emotion recognition models can capture this variation to a certain extent. These results raise critical concerns about the potential misuse of emotion recognition technologies. What is often framed as a tool for wellbeing and emotional regulation could, in politically sensitive contexts, be repurposed for user profiling. This work highlights the ethical risks of deploying AI-driven emotion analysis without safeguards, particularly among populations that are politically or socially vulnerable. We argue that subjective emotional responses may constitute sensitive personal data, and that failing to account for their sociopolitical context could amplify harm and exclusion.

    DOI: 10.1140/epjds/s13688-025-00595-1
    Web of Science ID: 001613812700001
    PubMedID: 41244640
    PubMedCentralID: PMC12615516

  • Beyond a Western Center of Music Information Retrieval: A Bibliometric Analysis of the First 25 Years of ISMIR Authorship. Transactions of the International Society for Music Information Retrieval. Gómez-Cañón, J., Siavichay, E., Cavdir, D., Kaneshiro, B., Porcaro, L. 2025; 8 (1): 372-387

    DOI: 10.5334/tismir.265
    Web of Science ID: 001620706400001

  • Participant and Musical Diversity in Music Psychology Research. Music & Science. Jakubowski, K., Ahmad, N., Armitage, J., Barrett, L., Edwards, A., Galbo, E., Gómez-Cañón, J. S., Graves, T. A., Jadzgevičiūtė, A., Kirts, C., Lahdelma, I., Lennie, T. M., Ramatally, A., Schlichting, J. L., Steliou, C., Vishwanath, K., Eerola, T. 2025; 8
  • Globally, songs and instrumental melodies are slower and higher and use more stable pitches than speech: A Registered Report. Science Advances. Ozaki, Y., Tierney, A., Pfordresher, P. Q., McBride, J. M., Benetos, E., Proutskova, P., Chiba, G., Liu, F., Jacoby, N., Purdy, S. C., Opondo, P., Fitch, W. T., Hegde, S., Rocamora, M., Thorne, R., Nweke, F., Sadaphal, D. P., Sadaphal, P. M., Hadavi, S., Fujii, S., Choo, S., Naruse, M., Ehara, U., Sy, L., Parselelo, M. L., Anglada-Tort, M., Hansen, N. C., Haiduk, F., Færøvik, U., Magalhães, V., Krzyżanowski, W., Shcherbakova, O., Hereld, D., Barbosa, B. S., Varella, M. A., van Tongeren, M., Dessiatnitchenko, P., Zar, S. Z., El Kahla, I., Muslu, O., Troy, J., Lomsadze, T., Kurdova, D., Tsope, C., Fredriksson, D., Arabadjiev, A., Sarbah, J. P., Arhine, A., Meachair, T. Ó., Silva-Zurita, J., Soto-Silva, I., Millalonco, N. E., Ambrazevičius, R., Loui, P., Ravignani, A., Jadoul, Y., Larrouy-Maestri, P., Bruder, C., Teyxokawa, T. P., Kuikuro, U., Natsitsabui, R., Sagarzazu, N. B., Raviv, L., Zeng, M., Varnosfaderani, S. D., Gómez-Cañón, J. S., Kolff, K., der Nederlanden, C. V., Chhatwal, M., David, R. M., Setiawan, I. P., Lekakul, G., Borsan, V. N., Nguqu, N., Savage, P. E. 2024; 10 (20): eadm9797

    Abstract

    Both music and language are found in all known human societies, yet no studies have compared similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two global datasets: (i) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors speaking 55 languages; and (ii) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of our six preregistered predictions, five were strongly supported: Relative to speech, songs use (i) higher pitch, (ii) slower temporal rate, and (iii) more stable pitches, while both songs and speech use similar (iv) pitch interval size and (v) timbral brightness. Exploratory analyses suggest that features vary along a "musi-linguistic" continuum when including instrumental melodies and recited lyrics. Our study provides strong empirical evidence of cross-cultural regularities in music and speech.

    DOI: 10.1126/sciadv.adm9797
    PubMedID: 38748798
    PubMedCentralID: PMC11095461

  • Bridging cognitive neuroscience and education: Insights from EEG recording during mathematical proof evaluation. Trends in Neuroscience and Education. Gashaj, V., Trninic, D., Formaz, C., Tobler, S., Gómez-Cañón, J., Poikonen, H., Kapur, M. 2024; 35: 100226

    Abstract

    Much of modern mathematics education prioritizes symbolic formalism, even at the expense of non-symbolic intuition; we contextualize our study within the ongoing debate on the balance between symbolic and non-symbolic reasoning. We explore the dissociation of oscillatory dynamics between algebraic (symbolic) and geometric (non-symbolic) processing in advanced mathematical reasoning using a naturalistic design. Employing mobile EEG technology, we investigated students' beta and gamma wave patterns over frontal and parietal regions while they engaged with mathematical demonstrations in symbolic and non-symbolic formats within a tutor-student framework. We used extended, naturalistic stimuli to approximate an authentic educational setting. Our findings reveal nuanced distinctions in neural processing, particularly in terms of gamma waves and activity in parietal regions. Furthermore, no clear overall format preference emerged from the neuroscientific perspective, despite students rating symbolic demonstrations higher for understanding and familiarity.

    DOI: 10.1016/j.tine.2024.100226
    Web of Science ID: 001241428400001
    PubMedID: 38879197

  • TROMPA-MER: an open dataset for personalized music emotion recognition. Journal of Intelligent Information Systems. Gómez-Cañón, J., Gutierrez-Paez, N., Porcaro, L., Porter, A., Cano, E., Herrera-Boyer, P., Gkiokas, A., Santos, P., Hernandez-Leo, D., Karreman, C., Gómez, E. 2023; 60 (2): 549-570
  • Music Emotion Recognition: Toward new, robust standards in personalized and context-sensitive applications. IEEE Signal Processing Magazine. Gómez-Cañón, J., Cano, E., Eerola, T., Herrera, P., Hu, X., Yang, Y., Gómez, E. 2021; 38 (6): 106-114
  • Let's agree to disagree: Consensus Entropy Active Learning for Personalized Music Emotion Recognition. Proceedings of the 22nd International Society for Music Information Retrieval Conference (ISMIR). Gómez-Cañón, J. S., Cano, E., Yang, Y., Herrera, P., Gómez, E. 2021: 237-245

    DOI: 10.5281/zenodo.5624399

  • Emotion Annotation of Music: A Citizen Science Approach. Gutierrez-Paez, N., Gómez-Cañón, J. S., Porcaro, L., Santos, P., Hernandez-Leo, D., Gómez, E. Edited by Hernandez-Leo, D., Hishiyama, R., Zurita, G., Weyers, B., Nolte, A., Ogata, H. Springer International Publishing. 2021: 51-66
  • Language-Sensitive Music Emotion Recognition Models: Are We Really There Yet? Gómez-Cañón, J. S., Cano, E., Pandrea, A. G., Herrera, P., Gómez, E. IEEE. 2021: 576-580
  • Transfer learning from speech to music: towards language-sensitive emotion recognition models. Gómez-Cañón, J., Cano, E., Herrera, P., Gómez, E. IEEE. 2021: 136-140
  • Cross-Dataset Music Emotion Recognition: an End-to-End Approach. Late-Breaking/Demo of the 21st International Society for Music Information Retrieval Conference (ISMIR). Pandrea, A. G., Gómez-Cañón, J. S., Herrera, P. 2020

    DOI: 10.5281/zenodo.4076771

  • Joyful for you and tender for us: the influence of individual characteristics and language on emotion labeling and classification. Proceedings of the 21st International Society for Music Information Retrieval Conference (ISMIR). Gómez-Cañón, J. S., Cano, E., Herrera, P., Gómez, E. 2020: 853-860

    DOI: 10.5281/zenodo.4245567

  • Jazz Solo Instrument Classification with Convolutional Neural Networks, Source Separation, and Transfer Learning. Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR). Gómez-Cañón, J. S., Abeßer, J., Cano, E. 2018: 577-584

    DOI: 10.5281/zenodo.1492480