All Publications


  • Factors influencing classification of frequency following responses to speech and music stimuli. Hearing Research Losorelli, S., Kaneshiro, B., Musacchia, G. A., Blevins, N. H., Fitzgerald, M. B. 2020; 398: 108101

    Abstract

    Successful mapping of meaningful labels to sound input requires accurate representation of that sound's acoustic variances in time and spectrum. For some individuals, such as children or those with hearing loss, having an objective measure of the integrity of this representation could be useful. Classification is a promising machine learning approach that can be used to objectively predict a stimulus label from the brain response. This approach has been previously used with auditory evoked potentials (AEP) such as the frequency following response (FFR), but a number of key issues remain unresolved before classification can be translated into clinical practice. Specifically, past efforts at FFR classification have used data from a given subject for both training and testing the classifier. It is also unclear which components of the FFR elicit optimal classification accuracy. To address these issues, we recorded FFRs from 13 adults with normal hearing in response to speech and music stimuli. We compared labeling accuracy of two cross-validation classification approaches using FFR data: (1) a more traditional method combining subject data in both the training and testing set, and (2) a "leave-one-out" approach, in which subject data is classified based on a model built exclusively from the data of other individuals. We also examined classification accuracy on decomposed and time-segmented FFRs. Our results indicate that the accuracy of leave-one-subject-out cross-validation approaches that obtained with the more conventional cross-validation classifications, while allowing a subject's results to be analyzed with respect to normative data pooled from a separate population. In addition, we demonstrate that classification accuracy is highest when the entire FFR is used to train the classifier. Taken together, these efforts contribute key steps toward translation of classification-based machine learning approaches into clinical practice.
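
    The "leave-one-out" scheme described above maps directly onto grouped cross-validation, where each test fold holds out every trial from one subject. Below is a minimal sketch using scikit-learn's LeaveOneGroupOut; the data shapes, simulated features, and SVM classifier are illustrative assumptions, not the paper's actual pipeline.

      import numpy as np
      from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Simulated stand-in for FFR features: one row per trial (shapes assumed)
      rng = np.random.default_rng(0)
      n_subjects, trials_per_subject, n_features, n_stimuli = 13, 20, 500, 4
      X = rng.standard_normal((n_subjects * trials_per_subject, n_features))
      y = rng.integers(0, n_stimuli, size=len(X))                    # stimulus labels
      groups = np.repeat(np.arange(n_subjects), trials_per_subject)  # subject IDs

      # Each test fold contains all trials from one held-out subject, so the
      # model is trained exclusively on data from other individuals.
      clf = make_pipeline(StandardScaler(), SVC())
      scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
      print(f"Mean per-subject accuracy: {scores.mean():.2f}")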

    View details for DOI 10.1016/j.heares.2020.108101

    View details for PubMedID 33142106

  • Early experience with a patient-specific virtual surgical simulation for rehearsal of endoscopic skull-base surgery. INTERNATIONAL FORUM OF ALLERGY & RHINOLOGY Won, T., Hwang, P., Lim, J., Cho, S., Paek, S., Losorelli, S., Vaisbuch, Y., Chan, S., Salisbury, K., Blevins, N. H. 2018; 8 (1): 54–63

    Abstract

    With the help of contemporary computer technology it is possible to create a virtual surgical environment (VSE) for training. This article describes a patient-specific virtual rhinologic surgical simulation platform that supports rehearsal of endoscopic skull-base surgery. We also share our early experience with select cases. A rhinologic VSE was developed, featuring a highly efficient direct 3-dimensional (3D) volume renderer with simultaneous stereoscopic feedback during surgical manipulation of the virtual anatomy, as well as high-fidelity haptic feedback. We conducted a retrospective analysis on 10 patients who underwent various forms of sinus and ventral skull-base surgery to assess the ability of the rhinologic VSE to replicate actual intraoperative findings. In all 10 cases, the simulation experience was realistic enough to perform dissections in a similar manner as in the actual surgery. Excellent correlation was found in terms of surgical exposure, anatomical features, and the locations of pathology. The current rhinologic VSE shows sufficient realism to allow patient-specific surgical rehearsal of the sinus and ventral skull base. Further validation studies are needed to assess the benefits of performing patient-specific rehearsal.
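
    One common formulation of the direct volume rendering the abstract mentions is ray marching with front-to-back opacity compositing through the CT volume. The sketch below illustrates only that core idea on the CPU; the toy transfer function, step size, and random volume are assumptions, and the paper's GPU stereoscopic renderer with haptic feedback is far more involved.

      import numpy as np

      def render_ray(volume, origin, direction, step=0.5, n_steps=400):
          """Front-to-back compositing of one ray through a scalar volume."""
          color, alpha = 0.0, 0.0
          pos = origin.astype(float)
          for _ in range(n_steps):
              idx = np.round(pos).astype(int)
              if np.any(idx < 0) or np.any(idx >= volume.shape):
                  break  # ray has left the volume
              density = float(volume[tuple(idx)])
              a = min(1.0, density / 255.0) * 0.05  # toy transfer function
              color += (1.0 - alpha) * a * density
              alpha += (1.0 - alpha) * a
              if alpha > 0.99:  # early ray termination
                  break
              pos = pos + step * np.asarray(direction, dtype=float)
          return color

      # Usage: cast one ray through a random 64^3 volume (illustrative only)
      vol = np.random.randint(0, 256, size=(64, 64, 64))
      print(render_ray(vol, np.array([0, 32, 32]), np.array([1.0, 0.0, 0.0])))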

    View details for PubMedID 29105367

  • Identification and characterization of mouse otic sensory lineage genes. FRONTIERS IN CELLULAR NEUROSCIENCE Hartman, B. H., Durruthy-Durruthy, R., Laske, R. D., Losorelli, S., Heller, S. 2015; 9

    Abstract

    Vertebrate embryogenesis gives rise to all cell types of an organism through the development of many unique lineages derived from the three primordial germ layers. The otic sensory lineage arises from the otic vesicle, a structure formed through invagination of placodal non-neural ectoderm. This developmental lineage possesses unique differentiation potential, giving rise to otic sensory cell populations including hair cells, supporting cells, and ganglion neurons of the auditory and vestibular organs. Here we present a systematic approach to identify transcriptional features that distinguish the otic sensory lineage (from early otic progenitors to otic sensory populations) from other major lineages of vertebrate development. We used a microarray approach to analyze otic sensory lineage populations including microdissected otic vesicles (embryonic day 10.5) as well as isolated neonatal cochlear hair cells and supporting cells at postnatal day 3. Non-otic tissue samples including periotic tissues and whole embryos with otic regions removed were used as reference populations to evaluate otic specificity. Otic populations shared transcriptome-wide correlations in expression profiles that distinguish members of this lineage from non-otic populations. We further analyzed the microarray data using comparative and dimension reduction methods to identify individual genes that are specifically expressed in the otic sensory lineage. This analysis identified and ranked top otic sensory lineage-specific transcripts including Fbxo2, Col9a2, and Oc90, and additional novel otic lineage markers. To validate these results we performed expression analysis on select genes using immunohistochemistry and in situ hybridization. Fbxo2 showed the most striking pattern of specificity to the otic sensory lineage, including robust expression in the early otic vesicle and sustained expression in prosensory progenitors and auditory and vestibular hair cells and supporting cells.
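
    As a toy illustration of ranking lineage-specific transcripts, the sketch below scores genes by the difference in mean expression between otic and reference samples on simulated data. The scoring rule, sample counts, and all values apart from the three marker names taken from the study are assumptions; the paper's actual analysis applied comparative and dimension reduction methods to real microarray profiles.

      import numpy as np

      # Simulated expression matrix: rows = samples, columns = genes
      rng = np.random.default_rng(1)
      genes = np.array(["Fbxo2", "Col9a2", "Oc90"] + [f"gene_{i}" for i in range(97)])
      otic = rng.normal(5.0, 1.0, size=(6, 100))       # otic sensory lineage samples
      otic[:, :3] += 4.0                               # simulate marker enrichment
      reference = rng.normal(5.0, 1.0, size=(6, 100))  # non-otic reference samples

      # Simple specificity score: difference in mean log-expression per gene
      specificity = otic.mean(axis=0) - reference.mean(axis=0)
      for i in np.argsort(specificity)[::-1][:5]:
          print(f"{genes[i]}: {specificity[i]:+.2f}")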

    View details for DOI 10.3389/fncel.2015.00079

    View details for Web of Science ID 000352432300001

    View details for PubMedID 25852475