Bio


Juan S. Gómez-Cañón is a researcher, engineer, and musician from Colombia. He holds a Ph.D. in Information and Communication Technologies from Universitat Pompeu Fabra (Barcelona, Spain). During his Ph.D., Juan researched human-centered and trustworthy machine learning methods to predict emotions in music. He has authored several conference and journal papers on deep learning, human-centered design, personalization, dataset curation, and digital signal processing. Juan also holds an M.Sc. in Media Technology (Technische Universität Ilmenau, Germany), as well as a B.Sc. in Electronics Engineering and a B.A. in Music (Universidad de los Andes, Colombia).

All Publications


  • Globally, songs and instrumental melodies are slower and higher and use more stable pitches than speech: A Registered Report. Science Advances Ozaki, Y., Tierney, A., Pfordresher, P. Q., McBride, J. M., Benetos, E., Proutskova, P., Chiba, G., Liu, F., Jacoby, N., Purdy, S. C., Opondo, P., Fitch, W. T., Hegde, S., Rocamora, M., Thorne, R., Nweke, F., Sadaphal, D. P., Sadaphal, P. M., Hadavi, S., Fujii, S., Choo, S., Naruse, M., Ehara, U., Sy, L., Parselelo, M. L., Anglada-Tort, M., Hansen, N. C., Haiduk, F., Færøvik, U., Magalhães, V., Krzyżanowski, W., Shcherbakova, O., Hereld, D., Barbosa, B. S., Varella, M. A., van Tongeren, M., Dessiatnitchenko, P., Zar, S. Z., El Kahla, I., Muslu, O., Troy, J., Lomsadze, T., Kurdova, D., Tsope, C., Fredriksson, D., Arabadjiev, A., Sarbah, J. P., Arhine, A., Meachair, T. Ó., Silva-Zurita, J., Soto-Silva, I., Millalonco, N. E., Ambrazevičius, R., Loui, P., Ravignani, A., Jadoul, Y., Larrouy-Maestri, P., Bruder, C., Teyxokawa, T. P., Kuikuro, U., Natsitsabui, R., Sagarzazu, N. B., Raviv, L., Zeng, M., Varnosfaderani, S. D., Gómez-Cañón, J. S., Kolff, K., der Nederlanden, C. V., Chhatwal, M., David, R. M., Setiawan, I. P., Lekakul, G., Borsan, V. N., Nguqu, N., Savage, P. E. 2024; 10 (20): eadm9797

    Abstract

    Both music and language are found in all known human societies, yet no studies have compared similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two global datasets: (i) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors speaking 55 languages; and (ii) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of our six preregistered predictions, five were strongly supported: Relative to speech, songs use (i) higher pitch, (ii) slower temporal rate, and (iii) more stable pitches, while both songs and speech used similar (iv) pitch interval size and (v) timbral brightness. Exploratory analyses suggest that features vary along a "musi-linguistic" continuum when including instrumental melodies and recited lyrics. Our study provides strong empirical evidence of cross-cultural regularities in music and speech.

    View details for DOI 10.1126/sciadv.adm9797

    View details for PubMedID 38748798

    View details for PubMedCentralID PMC11095461

  • Music Emotion Recognition: Toward new, robust standards in personalized and context-sensitive applications. IEEE Signal Processing Magazine Gómez-Cañón, J. S., Cano, E., Eerola, T., Herrera, P., Hu, X., Yang, Y., Gómez, E. 2021; 38 (6): 106-114
  • Bridging cognitive neuroscience and education: Insights from EEG recording during mathematical proof evaluation. Trends in Neuroscience and Education Gashaj, V., Trninic, D., Formaz, C., Tobler, S., Gómez-Cañón, J. S., Poikonen, H., Kapur, M. 2024; 35: 100226

    Abstract

    Much of modern mathematics education prioritizes symbolic formalism, even at the expense of non-symbolic intuition; we therefore contextualize our study in the ongoing debates on the balance between symbolic and non-symbolic reasoning. We explore the dissociation of oscillatory dynamics between algebraic (symbolic) and geometric (non-symbolic) processing in advanced mathematical reasoning within a naturalistic design. Employing mobile EEG technology, we investigated students' beta and gamma wave patterns over frontal and parietal regions while they engaged with mathematical demonstrations in symbolic and non-symbolic formats within a tutor-student framework. We used extended, naturalistic stimuli to approximate an authentic educational setting. Our findings reveal nuanced distinctions in neural processing, particularly in terms of gamma waves and activity in parietal regions. Furthermore, no clear overall format preference emerged from the neuroscientific perspective, despite students rating symbolic demonstrations higher for understanding and familiarity.

    View details for DOI 10.1016/j.tine.2024.100226

    View details for Web of Science ID 001241428400001

    View details for PubMedID 38879197

  • TROMPA-MER: an open dataset for personalized music emotion recognition. Journal of Intelligent Information Systems Gómez-Cañón, J. S., Gutiérrez-Páez, N., Porcaro, L., Porter, A., Cano, E., Herrera-Boyer, P., Gkiokas, A., Santos, P., Hernández-Leo, D., Karreman, C., Gómez, E. 2023; 60 (2): 549-570
  • Let's agree to disagree: Consensus Entropy Active Learning for Personalized Music Emotion Recognition Proceedings of the 22nd International Society for Music Information Retrieval Conference (ISMIR) Gómez-Cañón, J. S., Cano, E., Yang, Y., Herrera, P., Gómez, E. 2021: 237-245

    View details for DOI 10.5281/ZENODO.5624399

  • Emotion Annotation of Music: A Citizen Science Approach. Gutiérrez-Páez, N., Gómez-Cañón, J. S., Porcaro, L., Santos, P., Hernández-Leo, D., Gómez, E. (Eds.: Hernández-Leo, D., Hishiyama, R., Zurita, G., Weyers, B., Nolte, A., Ogata, H.) Springer International Publishing. 2021: 51-66
  • Language-sensitive music emotion recognition models: are we really there yet? Gómez-Cañón, J. S., Cano, E., Pandrea, A. G., Herrera, P., Gómez, E. IEEE. 2021: 576-580
  • Transfer learning from speech to music: towards language-sensitive emotion recognition models. Gómez-Cañón, J. S., Cano, E., Herrera, P., Gómez, E. IEEE. 2021: 136-140
  • Cross-Dataset Music Emotion Recognition: an End-to-End Approach. Late-Breaking/Demo of the 21st International Society for Music Information Retrieval Conference (ISMIR) Pandrea, A. G., Gómez-Cañón, J. S., Herrera, P. 2020

    View details for DOI 10.5281/zenodo.4076771

  • Joyful for you and tender for us: the influence of individual characteristics and language on emotion labeling and classification. Proceedings of the 21st International Society for Music Information Retrieval Conference (ISMIR) Gómez-Cañón, J. S., Cano, E., Herrera, P., Gómez, E. 2020: 853-860

    View details for DOI 10.5281/zenodo.4245567

  • Jazz Solo Instrument Classification with Convolutional Neural Networks, Source Separation, and Transfer Learning. Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR) Gómez-Cañón, J. S., Abeßer, J., Cano, E. 2018: 577-584

    View details for DOI 10.5281/zenodo.1492480