Matthew Fitzgerald, PhD
Assistant Professor of Otolaryngology - Head & Neck Surgery (OHNS)
Otolaryngology (Head and Neck Surgery)
Web page: http://web.stanford.edu/people/fitzmb
Bio
I received my undergraduate degree in Communication Sciences and Disorders from The Wichita State University. I then traveled to Vanderbilt University to complete an M.S. in Audiology and Hearing Sciences, before completing a clinical fellowship at Henry Ford Hospital in Detroit, MI, with Dr. Gary Jacobson. I subsequently completed a Ph.D. at Northwestern University in Communication Sciences and Disorders with Dr. Beverly Wright, exploring patterns of perceptual learning in individuals with normal hearing. Upon completion of my doctorate, I moved to the New York University School of Medicine for a post-doctoral fellowship in the Department of Otolaryngology. There, I worked with Dr. Mario Svirsky to identify recipients of cochlear implants who had not yet fully adapted to their device, and to provide tools that audiologists could use to modify the map to help these patients. I later joined the faculty at NYU, and also at Montclair State University. In 2015, I became the Chief of Audiology at Stanford, where I oversee the Audiology departments of both Stanford Hospital and the Lucile Packard Children's Hospital.
Academic Appointments
- Assistant Professor - University Medical Line, Otolaryngology (Head and Neck Surgery)
- Member, Wu Tsai Neurosciences Institute
Boards, Advisory Committees, Professional Organizations
- Member, American Speech-Language-Hearing Association (1995 - Present)
- Member, Association for Research in Otolaryngology (2005 - Present)
- Member, American Auditory Society (2005 - Present)
- Member, American Academy of Audiology (2015 - Present)
Professional Education
- B.A., The Wichita State University, Communication Sciences and Disorders
- M.S., Vanderbilt University, Audiology and Hearing Sciences
- Ph.D., Northwestern University, Communication Sciences and Disorders
Current Research and Scholarly Interests
My research encompasses several translational projects. One focus is to modify the routine audiologic test battery so that it places equal weight on hearing acuity and hearing function. This work includes measures of speech in noise and electrophysiologic responses such as the frequency-following response (FFR). I also explore tools to better assess and maximize performance in users of hearing aids and cochlear implants. Finally, I am investigating the benefits of telemedicine and new treatments for tinnitus.
2024-25 Courses
- Seminar in Music Perception and Cognition I
MUSIC 351A (Aut)
Independent Studies (4)
- Directed Reading in Otolaryngology
OTOHNS 299 (Aut, Win, Spr, Sum)
- Graduate Research
OTOHNS 399 (Aut, Win, Spr, Sum)
- Medical Scholars Research
OTOHNS 370 (Aut, Win, Spr, Sum)
- Undergraduate Research
OTOHNS 199 (Aut, Win, Spr, Sum)
Prior Year Courses
2023-24 Courses
- Seminar in Music Perception and Cognition I
MUSIC 351A (Aut)
2022-23 Courses
- Seminar in Music Perception and Cognition I
MUSIC 351A (Aut)
2021-22 Courses
- Seminar in Music Perception and Cognition I
MUSIC 351A (Aut)
All Publications
- Safety and Early Outcomes of Cochlear Implantation of Nucleus Devices in Infants: A Multi-Centre Study.
Trends in hearing
2024; 28: 23312165241261480
Abstract
This multi-center study examined the safety and effectiveness of cochlear implantation in children between 9 and 11 months of age. The intended impact was to support practice regarding candidacy assessment and prognostic counseling of pediatric cochlear implant candidates. Data from the clinical charts of children implanted at 9-11 months of age with Cochlear Ltd devices at five cochlear implant centers in the United States and Canada were included in the analyses. The study included data from two cohorts implanted with one or two Nucleus devices during the periods of January 1, 2012-December 31, 2017 (Cohort 1, n = 83) or between January 1, 2018 and May 15, 2020 (Cohort 2, n = 50). Major adverse events (requiring another procedure or hospitalization) and minor adverse events (managed with medication alone, or following an expected course of treatment that did not require surgery or hospitalization) were monitored out to 2 years post-implant, and outcomes were measured by audiometric thresholds and parent reports on the IT-MAIS and LittlEARS questionnaires. Results revealed 60 adverse events in 41 children across the 227 implanted ears (26%), of which 14 major events occurred in 11 children; all were transitory and resolved. Improved hearing with cochlear implant use was shown on all outcome measures. The findings reveal that the procedure is safe for infants and that they show clear benefits of cochlear implantation, including increased audibility and hearing development.
View details for DOI 10.1177/23312165241261480
View details for PubMedID 38887094
View details for PubMedCentralID PMC11185016
- Immigrant Status Disparities in Hearing Health Care Use in the United States.
Otolaryngology--head and neck surgery : official journal of American Academy of Otolaryngology-Head and Neck Surgery
2024
Abstract
To determine whether immigrant status is associated with likelihood of audiogram and hearing aid use among US adults with hearing loss. Cross-sectional study. Nationally representative data from the 2009 to 2010, 2011 to 2012, 2015 to 2016, and 2017 to 2020 National Health and Nutrition Examination Survey (NHANES) cycles. This cross-sectional study of 4 merged cycles of NHANES included 12,455 adults with subjective (self-reported) or objective (audiometric) hearing loss. Sequentially adjusted logistic regressions were used to assess the association of immigration status with likelihood of having undergone an audiogram among those with objective and self-reported hearing loss, and with likelihood of hearing aid use among candidates with objective hearing loss. Immigrants were less likely to have received an audiogram among subjects with subjective (odds ratio [OR]: 0.81, 95% confidence interval [CI]: 0.75-0.87) and objective (OR: 0.76, 95% CI: 0.72-0.81) hearing loss, compared to nonimmigrants. The association persisted for those with subjective (OR: 0.88, 95% CI: 0.81-0.96) and objective (OR: 0.87, 95% CI: 0.80-0.96) hearing loss after adjusting for sociodemographic factors, comorbidities, insurance, and hearing quality, but disappeared in both groups after adjusting for English proficiency. Immigrants were less likely to use hearing aids (OR: 0.90, 95% CI: 0.87-0.93). However, this association disappeared (OR: 0.98, 95% CI: 0.93-1.04) in the adjusted model. Immigrant status is a significant barrier to hearing health care and is associated with lower rates of audiometric testing and hearing aid use among individuals with hearing loss.
View details for DOI 10.1002/ohn.859
View details for PubMedID 38881377
- Speech-in-Noise Assessment in the Routine Audiologic Test Battery: Relationship to Perceived Auditory Disability.
Ear and hearing
2024
Abstract
Self-assessment of perceived communication difficulty has been used in clinical and research practice for decades. Such questionnaires routinely assess the perceived ability of an individual to understand speech, particularly in background noise. Despite this emphasis on perceived performance in noise, speech recognition in routine audiologic practice is measured by word recognition in quiet (WRQ). Moreover, surprisingly little data exist comparing speech-understanding-in-noise (SIN) abilities to perceived communication difficulty. Here, we address these issues by examining audiometric thresholds, WRQ scores, QuickSIN signal to noise ratio (SNR) loss, and perceived auditory disability as measured by the five questions on the Speech Spatial Questionnaire-12 (SSQ12) devoted to speech understanding (SSQ12-Speech5). We examined data from 1633 patients who underwent audiometric assessment at the Stanford Ear Institute. All individuals completed the SSQ12 questionnaire, pure-tone audiometry, and speech assessment consisting of ear-specific WRQ and ear-specific QuickSIN. Only individuals with hearing threshold asymmetries ≤10 dB HL in their high-frequency pure-tone average (HFPTA) were included. Our primary objectives were to (1) examine the relationship between audiometric variables and the SSQ12-Speech5 scores, (2) determine the amount of variance in the SSQ12-Speech5 scores that could be predicted from audiometric variables, and (3) predict which patients were likely to report greater perceived auditory disability according to the SSQ12-Speech5. Performance on the SSQ12-Speech5 indicated greater perceived auditory disability with more severe degrees of hearing loss and greater QuickSIN SNR loss. Degree of hearing loss and QuickSIN SNR loss were found to account for modest but significant variance in SSQ12-Speech5 scores after accounting for age. In contrast, WRQ scores did not significantly contribute to the predictive power of the model. Degree of hearing loss and QuickSIN SNR loss were also found to have moderate diagnostic accuracy for determining which patients were likely to report SSQ12-Speech5 scores indicating greater perceived auditory disability. Taken together, these data indicate that audiometric factors including degree of hearing loss (i.e., HFPTA) and QuickSIN SNR loss are predictive of SSQ12-Speech5 scores, though notable variance remains unaccounted for after considering these factors. HFPTA and QuickSIN SNR loss, but not WRQ scores, accounted for a significant amount of variance in SSQ12-Speech5 scores and were largely effective at predicting which patients are likely to report greater perceived auditory disability on the SSQ12-Speech5. This provides further evidence for the notion that speech-in-noise measures have greater clinical utility than WRQ in most instances, as they relate more closely to measures of perceived auditory disability.
View details for DOI 10.1097/AUD.0000000000001472
View details for PubMedID 38414136
- A Large-Scale Study of the Relationship Between Degree and Type of Hearing Loss and Recognition of Speech in Quiet and Noise.
Ear and hearing
2024
Abstract
OBJECTIVES: Understanding speech in noise (SIN) is the dominant complaint of individuals with hearing loss. For decades, the default test of speech perception in routine audiologic assessment has been monosyllabic word recognition in quiet (WRQ), which does not directly address patient concerns, leading some to advocate that measures of SIN should be integrated into routine practice. However, very little is known with regard to how SIN abilities are affected by different types of hearing loss. Here, we examine performance on clinical measures of WRQ and SIN in a large patient base consisting of a variety of hearing loss types, including conductive (CHL), mixed (MHL), and sensorineural (SNHL) losses. DESIGN: In a retrospective study, we examined data from 5593 patients (51% female) who underwent audiometric assessment at the Stanford Ear Institute. All individuals completed pure-tone audiometry and speech perception testing of monaural WRQ and monaural QuickSIN. Patient ages ranged from 18 to 104 years (average = 57). The average age in years for the different classifications of hearing loss was 51.1 (NH), 48.5 (CHL), 64.2 (MHL), and 68.5 (SNHL), respectively. Generalized linear mixed-effect models and quartile regression were used to determine the relationship between hearing loss type and severity for the different speech-recognition outcome measures. RESULTS: Patients with CHL had similar performance to patients with normal hearing on both WRQ and QuickSIN, regardless of the hearing loss severity. In patients with MHL or SNHL, WRQ scores remained largely excellent with increasing hearing loss until the loss was moderately severe or worse. In contrast, QuickSIN signal to noise ratio (SNR) losses showed an orderly, systematic decrease as the degree of hearing loss became more severe. This effect scaled with the data, with threshold-QuickSIN relationships absent for CHL, becoming stronger for MHL, and strongest in patients with SNHL. However, the variability in these data suggests that only 57% of the variance in WRQ scores, and 50% of the variance in QuickSIN SNR losses, could be accounted for by the audiometric thresholds. Patients who would not be differentiated by WRQ scores are shown to be potentially differentiable by SIN scores. CONCLUSIONS: In this data set, conductive hearing loss had little effect on WRQ scores or QuickSIN SNR losses. However, for patients with MHL or SNHL, speech perception abilities decreased as the severity of the hearing loss increased. In these data, QuickSIN SNR losses showed deficits in performance at degrees of hearing loss that yielded largely excellent WRQ scores. However, the considerable variability in the data suggests that even after classifying patients according to their type of hearing loss, hearing thresholds only account for a portion of the variance in speech perception abilities, particularly in noise. These results are consistent with the idea that variables such as cochlear health and aging add explanatory power over audibility alone.
View details for DOI 10.1097/AUD.0000000000001484
View details for PubMedID 38389129
- Evaluation of Asymmetries in Speech-in-Noise Abilities in Audiologic Screening for Vestibular Schwannoma.
Ear and hearing
2023
Abstract
Measures of speech-in-noise, such as the QuickSIN, are increasingly common tests of speech perception in audiologic practice. However, the effect of vestibular schwannoma (VS) on speech-in-noise abilities is unclear. Here, we compare the predictive ability of interaural QuickSIN asymmetry for detecting VS against other measures of audiologic asymmetry. A retrospective review of patients at our institution who received QuickSIN testing in addition to a regular audiologic battery between September 2015 and February 2019 was conducted. Records for patients with radiographically confirmed, unilateral, pretreatment VSs were identified. The remaining records, excluding conductive pathologies, were used as controls. The predictive abilities of various measures of audiologic asymmetry to detect VS were statistically compared. Our search yielded 73 unique VS patients and 2423 controls. Receiver operating characteristic curve analysis showed that QuickSIN asymmetry was more sensitive and specific than pure-tone average asymmetry and word-recognition-in-quiet asymmetry for detecting VS. Multiple logistic regression analysis revealed that QuickSIN asymmetry was more predictive of VS (odds ratio [OR] = 1.23, 95% confidence interval [CI] [1.10, 1.38], p < 0.001) than pure-tone average asymmetry (OR = 1.04, 95% CI [1.00, 1.07], p = 0.025) and word-recognition-in-quiet asymmetry (OR = 1.03, 95% CI [0.99, 1.06], p = 0.064). Between-ear asymmetries in the QuickSIN appear to be more efficient than traditional measures of audiologic asymmetry for identifying patients with VS. These results suggest that speech-in-noise testing could be integrated into clinical practice without hindering the ability to identify retrocochlear pathology.
View details for DOI 10.1097/AUD.0000000000001397
View details for PubMedID 37707393
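The ROC comparison in the abstract above can be illustrated with the rank-based (Mann-Whitney) formulation of AUC: the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch, using invented interaural-asymmetry values rather than data from the study:

```python
# Rank-based (Mann-Whitney) AUC: P(case score > control score), ties count 0.5.
# All values below are hypothetical, purely for illustration.

def auc(case_scores, control_scores):
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Interaural asymmetry = |left-ear SNR loss - right-ear SNR loss|, in dB.
vs_asym = [8.0, 11.5, 6.0, 14.0, 9.5]          # hypothetical VS ears
control_asym = [0.5, 1.0, 2.5, 6.5, 3.0, 1.5]  # hypothetical control ears

print(f"QuickSIN-asymmetry AUC: {auc(vs_asym, control_asym):.2f}")  # → 0.97
```

An AUC near 1.0 means the asymmetry measure separates the two groups almost perfectly, while 0.5 would be chance; this is the sense in which QuickSIN asymmetry outperformed pure-tone and word-recognition asymmetry in the study.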
- Preliminary Guidelines for Replacing Word-Recognition in Quiet With Speech in Noise Assessment in the Routine Audiologic Test Battery.
Ear and hearing
2023
Abstract
For decades, monosyllabic word-recognition in quiet (WRQ) has been the default test of speech recognition in routine audiologic assessment. The continued use of WRQ scores is noteworthy in part because difficulty understanding speech in noise (SIN) is perhaps the most common complaint of individuals with hearing loss. The easiest way to integrate SIN measures into routine clinical practice would be for SIN to replace WRQ assessment as the primary test of speech perception. To facilitate this goal, we predicted classifications of WRQ scores from the QuickSIN signal to noise ratio (SNR) loss and hearing thresholds. We examined data from 5808 patients who underwent audiometric assessment at the Stanford Ear Institute. All individuals completed pure-tone audiometry and speech assessment consisting of monaural WRQ and monaural QuickSIN. We then performed multiple logistic regression to determine whether classification of WRQ scores could be predicted from pure-tone thresholds and QuickSIN SNR losses. Many patients displayed significant challenges on the QuickSIN despite having excellent WRQ scores. Performance on both measures decreased with hearing loss; however, decrements in performance were observed with less hearing loss for the QuickSIN than for WRQ. Most important, we demonstrate that classification of good or excellent word-recognition scores in quiet can be predicted with high accuracy by the high-frequency pure-tone average and the QuickSIN SNR loss. Taken together, these data suggest that SIN measures provide more information than WRQ. More important, the predictive power of our model suggests that SIN can replace WRQ in most instances, by providing guidelines as to when performance in quiet is likely to be excellent and does not need to be measured. Making this subtle but profound shift to clinical practice would enable routine audiometric testing to be more sensitive to patient concerns and may benefit both clinicians and researchers.
View details for DOI 10.1097/AUD.0000000000001409
View details for PubMedID 37703127
- Prevalence of Cochlear Nerve Deficiency and Hearing Device Use in Children With Single-Sided Deafness
OTOLARYNGOLOGY-HEAD AND NECK SURGERY
2023
View details for DOI 10.1002/ohn.255
View details for Web of Science ID 000928877200001
- Prevalence of Cochlear Nerve Deficiency and Hearing Device Use in Children With Single-Sided Deafness.
Otolaryngology--head and neck surgery : official journal of American Academy of Otolaryngology-Head and Neck Surgery
2023
Abstract
This study aimed to assess the prevalence of cochlear nerve deficiency (CND) in a cohort of pediatric patients with single-sided deafness (SSD). A secondary objective was to investigate trends in intervention and hearing device use in these children. Case series with chart review. Pediatric tertiary care center. Children ages 0 to 21 years with SSD (N = 190) who underwent computerized tomography (CT) and/or magnetic resonance imaging (MRI) were included. Diagnostic criteria for SSD included unilateral severe-to-profound sensorineural hearing loss with normal hearing sensitivity in the contralateral ear. Diagnostic criteria for CND included neuroradiologist report of an "aplastic or hypoplastic nerve" on MRI or a "stenotic cochlear aperture" on CT. The prevalence of CND was 42% for children with CT only, 76% for children with MRI only, and 63% for children with both MRI and CT. Of the children with MRI and CT, there was a 90% concordance across imaging modalities. About 36% of children with SSD had hearing devices that routed sound to the normal hearing ear (ie, bone conduction hearing device/contralateral routing of signal), while only 3% received a cochlear implant. Approximately 40% did not have a hearing device. Hearing device wear time averaged 2.9 hours per day and did not differ based on cochlear nerve status. There is a high prevalence of CND in children with SSD. Cochlear nerve status should be confirmed via MRI in children with SSD. The limited implementation and use of hearing devices observed for children with SSD reinforce the need for increased support for early and continuous intervention.
View details for DOI 10.1002/ohn.255
View details for PubMedID 36939463
- Identifying Listeners Whose Speech Intelligibility Depends on a Quiet Extra Moment After a Sentence.
Journal of speech, language, and hearing research : JSLHR
2022: 1-14
Abstract
PURPOSE: An extra moment after a sentence is spoken may be important for listeners with hearing loss to mentally repair misperceptions during listening. The current audiologic test battery cannot distinguish between a listener who repaired a misperception versus a listener who heard the speech accurately with no need for repair. This study aims to develop a behavioral method to identify individuals who are at risk for relying on a quiet moment after a sentence. METHOD: Forty-three individuals with hearing loss (32 cochlear implant users, 11 hearing aid users) heard sentences that were followed by either 2 s of silence or 2 s of babble noise. Both high- and low-context sentences were used in the task. RESULTS: Some individuals showed notable benefit in accuracy scores (particularly for high-context sentences) when given an extra moment of silent time following the sentence. This benefit was highly variable across individuals and sometimes absent altogether. However, the group-level patterns of results were mainly explained by the use of context and successful perception of the words preceding sentence-final words. CONCLUSIONS: These results suggest that some but not all individuals improve their speech recognition score by relying on a quiet moment after a sentence, and that this fragility of speech recognition cannot be assessed using one isolated utterance at a time. Reliance on a quiet moment to repair perceptions would potentially impede the perception of an upcoming utterance, making continuous communication in real-world scenarios difficult, especially for individuals with hearing loss. The methods used in this study, along with some simple modifications if necessary, could potentially identify patients with hearing loss who retroactively repair mistakes by using clinically feasible methods that can ultimately lead to better patient-centered hearing health care. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21644801.
View details for DOI 10.1044/2022_JSLHR-21-00622
View details for PubMedID 36472938
- Remote Intraoperative Neural Response Telemetry: Technique and Results in Cochlear Implant Surgery.
Otology & neurotology : official publication of the American Otological Society, American Neurotology Society [and] European Academy of Otology and Neurotology
2022; 43 (6): 638-642
Abstract
Present results with remote intraoperative neural response telemetry (NRT) during cochlear implantation (CI) and its usefulness in overcoming the inefficiency of in-person NRT. Case series. Tertiary academic otology practice. All patients undergoing primary or revision CI, both adult and pediatric, were enrolled. Remote intraoperative NRT was performed by audiologists using a desktop computer to control a laptop in the operating room. Testing was performed over the hospital network using commercially available software. A single system was used to test all three FDA-approved manufacturers' devices. Main outcome measures were the success rate and time savings of remote NRT. Out of 254 procedures, 252 (99.2%) underwent successful remote NRT. In two procedures (0.7%), remote testing was unsuccessful and required in-person testing to address technical issues. Both failed attempts were due to hardware failure (OR laptop or headpiece problems). There was no relation between the success of the procedure and patient/surgical factors such as difficult anatomy or the approach used for inner ear access. The audiologist time saved using this approach was considerable when compared with in-person testing. Remote intraoperative NRT testing during cochlear implantation can be performed effectively using standard hardware and remote-control software. Especially important during the COVID-19 pandemic, such a procedure can reduce in-person contacts and limit the number of individuals in the operating room. Remote testing can provide additional flexibility and efficiency in audiologist schedules.
View details for DOI 10.1097/MAO.0000000000003537
View details for PubMedID 35761455
- Outcomes in Patients Meeting Cochlear Implant Criteria in Noise but Not in Quiet.
Otology & neurotology : official publication of the American Otological Society, American Neurotology Society [and] European Academy of Otology and Neurotology
2022; 43 (1): 56-63
Abstract
OBJECTIVE: Evaluate outcomes in cochlear implant (CI) recipients qualifying in AzBio noise but not quiet, and identify factors associated with postimplantation improvement. STUDY DESIGN: Retrospective cohort study. SETTING: Tertiary otology/neurotology clinic. PATIENTS: This study included 212 implanted ears. The noise group comprised 23 ears with preoperative AzBio more than or equal to 40% in quiet and less than or equal to 40% in +10 signal-to-noise ratio (SNR). The quiet group included 189 ears with preoperative AzBio less than 40% in quiet. The two groups displayed similar demographics and device characteristics. INTERVENTIONS: Cochlear implantation. MAIN OUTCOME MEASURES: AzBio in quiet and noise. RESULTS: Mean AzBio quiet scores improved in both the quiet group (pre-implant: 12.7%, postimplant: 67.2%, p < 0.001) and noise group (pre-implant: 61.6%, postimplant: 73.8%, p = 0.04). Mean AzBio +10 SNR also improved in the quiet group (pre-implant: 15.8%, postimplant: 59.3%, p = 0.001) and noise group (pre-implant: 30.5%, postimplant: 49.1%, p = 0.01). However, compared with the quiet group, fewer ears in the noise group achieved within-subject improvement in AzBio quiet (≥15% improvement; quiet group: 90.3%, noise group: 43.8%, p < 0.001) and AzBio +10 SNR (quiet group: 100.0%, noise group: 45.5%, p < 0.001). Baseline AzBio quiet (p < 0.001) and Consonant-Nucleus-Consonant (CNC) scores (p = 0.004) were associated with within-subject improvement in AzBio quiet and displayed a higher area under the curve than either aided or unaided pure-tone average (PTA) (both p = 0.01). CONCLUSIONS: CI patients qualifying in noise display significant mean benefit in speech recognition scores but are less likely to benefit compared with those qualifying in quiet. Patients with lower baseline AzBio quiet scores are more likely to display postimplant improvement.
View details for DOI 10.1097/MAO.0000000000003351
View details for PubMedID 34889839
- Valid Acoustic Models of Cochlear Implants: One Size Does Not Fit All.
Otology & neurotology : official publication of the American Otological Society, American Neurotology Society [and] European Academy of Otology and Neurotology
2021; 42 (10S): S2-S10
Abstract
HYPOTHESIS: This study tests the hypothesis that it is possible to find tone or noise vocoders that sound similar and result in similar speech perception scores to a cochlear implant (CI). This would validate the use of such vocoders as acoustic models of CIs. We further hypothesize that those valid acoustic models will require a personalized amount of frequency mismatch between input filters and output tones or noise bands. BACKGROUND: Noise or tone vocoders have been used as acoustic models of CIs in hundreds of publications but have never been convincingly validated. METHODS: Acoustic models were evaluated by single-sided deaf CI users who compared what they heard with the CI in one ear to what they heard with the acoustic model in the other ear. We evaluated frequency-matched models (both all-channel and 6-channel models, both tone and noise vocoders) as well as self-selected models that included an individualized level of frequency mismatch. RESULTS: Self-selected acoustic models resulted in similar levels of speech perception and similar perceptual quality as the CI. These models also matched the CI in terms of perceived intelligibility, harshness, and pleasantness. CONCLUSION: Valid acoustic models of CIs exist, but they are different from the models most widely used in the literature. Individual amounts of frequency mismatch may be required to optimize the validity of the model. This may be related to the basalward frequency mismatch experienced by postlingually deaf patients after cochlear implantation.
View details for DOI 10.1097/MAO.0000000000003373
View details for PubMedID 34766938
- Treatment Tone Spacing and Acute Effects of Acoustic Coordinated Reset Stimulation in Tinnitus Patients.
Frontiers in network physiology
2021; 1: 734344
Abstract
Acoustic coordinated reset (aCR) therapy for tinnitus aims to desynchronize neuronal populations in the auditory cortex that exhibit pathologically increased coincident firing. The original therapeutic paradigm involves fixed spacing of four low-intensity tones centered around the frequency of a tone matching the tinnitus pitch, f_T, but it is unknown whether these tones are optimally spaced for induction of desynchronization. Computational and animal studies suggest that stimulus amplitude, and relatedly, the spatial stimulation profiles of coordinated reset pulses, can have a major impact on the degree of desynchronization achievable. In this study, we transform the tone spacing of aCR into a scale that takes into account the frequency selectivity of the auditory system at each therapeutic tone's center frequency, via a measure called the gap index. Higher gap indices are indicative of more loosely spaced aCR tones. The gap index was found to be a significant predictor of symptomatic improvement, with larger gap indices, i.e., more loosely spaced aCR tones, resulting in reduction of tinnitus loudness and annoyance scores in the acute stimulation setting. A notable limitation of this study is the intimate relationship of hearing impairment with the gap index. In particular, the shape of the audiogram in the vicinity of the tinnitus frequency can have a major impact on tone spacing. However, based on our findings we suggest hypothesis-based experimental protocols that may help to disentangle the impact of hearing loss and tone spacing on clinical outcome, to assess the electrophysiologic correlates of clinical improvement, and to elucidate the effects following chronic rather than acute stimulation.
View details for DOI 10.3389/fnetp.2021.734344
View details for PubMedID 36925569
View details for PubMedCentralID PMC10012992
- Influence of electrode to cochlear duct length ratio on post-operative speech understanding outcomes.
Cochlear implants international
2021: 1-11
Abstract
OBJECTIVE: To assess whether the pre-operative electrode to cochlear duct length ratio (ECDLR) is associated with post-operative speech recognition outcomes. STUDY DESIGN: A retrospective chart review study. SETTING: Tertiary referral center. PATIENTS: The study included sixty-one adult CI recipients with a pre-operative computed tomography scan and a speech recognition test 12 months after implantation. INTERVENTIONS: The average of two raters' cochlear duct length (CDL) measurements and the length of the recipient's cochlear implant electrode array formed the basis for the ECDLR. Speech recognition tests were compared as a function of ECDLR and of electrode array length itself. MAIN OUTCOME MEASURES: The relationship between ECDLR and percent correct on speech recognition tests. RESULTS: A second-order polynomial regression relating ECDLR to percent correct on the CNC words speech recognition test was statistically significant, as was a fourth-order polynomial regression for the AzBio Quiet test. In contrast, there was no statistically significant relationship between speech recognition scores and electrode array length. CONCLUSIONS: ECDLR values are statistically associated with speech-recognition outcomes. However, ECDLR cannot be predicted from electrode length alone and must include a measure of CDL.
View details for DOI 10.1080/14670100.2021.1979289
View details for PubMedID 34590531
- Assessment of Inter- and Intra-Rater Reliability of Tablet-Based Software to Measure Cochlear Duct Length.
Otology & neurotology : official publication of the American Otological Society, American Neurotology Society [and] European Academy of Otology and Neurotology
2021
Abstract
OBJECTIVE: The objective of this study is to build upon previous work validating tablet-based software to measure cochlear duct length (CDL). Here, we do so by greatly expanding the number of cochleae analyzed (n = 166) and by examining whether computed tomography (CT) slice thickness influences the reliability of CDL measurements. STUDY DESIGN: Retrospective chart review study. SETTING: Tertiary referral center. PATIENTS: Eighty-three adult cochlear implant recipients were included in the study. Both cochleae were measured for each patient (n = 166). INTERVENTIONS: Three raters analyzed the scans of 166 cochleae at 2 different time points. Each rater individually identified anatomical landmarks that delineated the basal turn diameter and width. These coordinates were applied to the elliptic approximation method (ECA) to estimate CDL. The effect of CT scan slice thickness on the measurements was explored. MAIN OUTCOME MEASURES: The primary outcome measure is the strength of the inter- and intra-rater reliability. RESULTS: The mean CDL measured was 32.84 ± 2.03 mm, with a range of 29.03 to 38.07 mm. We observed no significant relationship between slice thickness and CDL measurement (F1,164 = 3.04; p = 0.08). The mean absolute difference in CDL estimations between raters was 1.76 ± 1.24 mm, and within raters it was 0.263 ± 0.200 mm. The intra-class correlation coefficient (ICC) between raters was 0.54 and ranged from 0.63 to 0.83 within raters. CONCLUSIONS: This software produces reliable measurements of CDL between and within raters, regardless of CT scan thickness.
View details for DOI 10.1097/MAO.0000000000003015
View details for PubMedID 33492059
-
Ambient Pressure Tympanometry in the Workup of Patulous Eustachian Tube and Neurotologic Disorders.
Clinical otolaryngology : official journal of ENT-UK ; official journal of Netherlands Society for Oto-Rhino-Laryngology & Cervico-Facial Surgery
2020
Abstract
In contrast to previous reports, respiration-synchronous APT wave patterns display low sensitivity (53.3%) in our retrospective cohort of 15 PET ears, as diagnosed by characteristic symptoms and otoscopy. In 327 non-PET ears, the largest cohort of non-PET ears evaluated to date, respiration-synchronous APT wave patterns demonstrate high specificity for PET (93.9%), consistent with previous literature. APT performed solely at rest and with ipsilateral nostril respiration displays similar sensitivity for PET as the full battery of respiratory maneuvers. Pulse-synchronous wave patterns at rest may suggest an alternative neurotologic diagnosis requiring further workup, such as superior semicircular canal dehiscence. Ambient pressure tympanometry is a rapid, simple and widely available tool that can be integrated into general otolaryngology clinics and warrants further study in the evaluation of PET and neurotologic disorders.
View details for DOI 10.1111/coa.13686
View details for PubMedID 33289958
-
Factors influencing classification of frequency following responses to speech and music stimuli.
Hearing research
2020; 398: 108101
Abstract
Successful mapping of meaningful labels to sound input requires accurate representation of that sound's acoustic variances in time and spectrum. For some individuals, such as children or those with hearing loss, having an objective measure of the integrity of this representation could be useful. Classification is a promising machine learning approach which can be used to objectively predict a stimulus label from the brain response. This approach has been previously used with auditory evoked potentials (AEP) such as the frequency following response (FFR), but a number of key issues remain unresolved before classification can be translated into clinical practice. Specifically, past efforts at FFR classification have used data from a given subject for both training and testing the classifier. It is also unclear which components of the FFR elicit optimal classification accuracy. To address these issues, we recorded FFRs from 13 adults with normal hearing in response to speech and music stimuli. We compared labeling accuracy of two cross-validation classification approaches using FFR data: (1) a more traditional method combining subject data in both the training and testing set, and (2) a "leave-one-out" approach, in which subject data is classified based on a model built exclusively from the data of other individuals. We also examined classification accuracy on decomposed and time-segmented FFRs. Our results indicate that the accuracy of leave-one-subject-out cross-validation approaches that obtained with the more conventional cross-validation classifications, while allowing a subject's results to be analysed with respect to normative data pooled from a separate population. In addition, we demonstrate that classification accuracy is highest when the entire FFR is used to train the classifier. Taken together, these efforts contribute key steps toward translation of classification-based machine learning approaches into clinical practice.
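The leave-one-subject-out scheme described above can be sketched with scikit-learn's `LeaveOneGroupOut`; the "FFR features" here are synthetic stand-ins, and the linear SVM is an assumed classifier for illustration, not the study's model.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for FFR feature vectors: 13 "subjects" with trials from
# two stimulus classes, separated by a shift in one feature dimension.
n_subjects, trials_per_class, n_feat = 13, 4, 8
X, y, groups = [], [], []
for subj in range(n_subjects):
    for label in (0, 1):
        for _ in range(trials_per_class):
            feat = rng.normal(0, 1, n_feat)
            feat[0] += 3.0 * label          # class-dependent feature shift
            X.append(feat)
            y.append(label)
            groups.append(subj)
X, y, groups = np.array(X), np.array(y), np.array(groups)

# Leave-one-subject-out: each fold trains on 12 subjects and tests on the
# held-out one, so the tested subject never contributes to the training set.
logo = LeaveOneGroupOut()
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=logo, groups=groups)
mean_accuracy = scores.mean()
```

The key property for clinical translation is exactly what the grouping enforces: a new patient's response can be labeled against a model built entirely from other people's data.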
View details for DOI 10.1016/j.heares.2020.108101
View details for PubMedID 33142106
-
Health Literacy and Hearing Healthcare Use.
The Laryngoscope
2020
Abstract
To assess whether health literacy is associated with: 1) degree of hearing loss at initial presentation for audiogram and 2) hearing aid adoption for hearing aid candidates. We identified 1376 patients who underwent audiometric testing and completed a brief health literacy questionnaire at our institution. The association between health literacy and degree of hearing loss at initial presentation was examined using linear regression, adjusted for age, gender, marital status, education level, race, language, employment status, and insurance coverage. The association between health literacy and hearing aid adoption was examined in the subset of patients identified as hearing aid candidates using logistic regression, adjusted for demographic factors and insurance coverage. Patients with inadequate health literacy were more likely to present with more severe hearing loss (adjusted mean pure-tone average [PTA] difference, 5.38 dB, 95% confidence interval [CI] 2.75 to 8.01). For hearing aid candidates (n = 472 [41.6%]), health literacy was not associated with hearing aid adoption rate (odds ratio [OR] 0.85, 95% CI 0.40 to 1.76). Hearing aid coverage through Medicaid (OR 2.22, 95% CI 1.13 to 4.37), and moderate (OR 2.70, 95% CI 1.58 to 4.69) or moderate-severe (OR 2.23, 95% CI 1.19 to 4.16) hearing loss were associated with hearing aid adoption. In our population, patients with low health literacy are more likely to present with higher degrees of hearing loss, but no less likely to obtain hearing aids compared with patients with adequate health literacy. Hearing loss severity and hearing aid coverage by insurance appear to be the main drivers of hearing aid adoption. Level of Evidence: 3. Laryngoscope, 2020.
View details for DOI 10.1002/lary.29313
View details for PubMedID 33305829
-
Ocular Vestibular-Evoked Myogenic Potential Amplitudes Elicited at 4 kHz Optimize Detection of Superior Semicircular Canal Dehiscence.
Frontiers in neurology
2020; 11: 879
Abstract
Introduction: High-resolution temporal bone computed tomography (CT) is considered the gold standard for diagnosing superior semicircular canal dehiscence (SCD). However, CT has been shown to over-detect SCD and provide results that may not align with patient-reported symptoms. Ocular vestibular-evoked myogenic potentials (oVEMPs)-most commonly conducted at 500 Hz stimulation-are increasingly used to support the diagnosis and management of SCD. Previous research reported that stimulation at higher frequencies such as 4 kHz can have near-perfect sensitivity and specificity in detecting radiographic SCD. With a larger cohort, we seek to understand the sensitivity and specificity of 4 kHz oVEMPs for detecting clinically significant SCD, as well as subgroups of radiographic, symptomatic, and surgical SCD. We also investigate whether assessing the 4 kHz oVEMP n10-p15 amplitude rather than the binary n10 response alone would optimize the detection of SCD. Methods: We conducted a cross-sectional study of patients who have undergone oVEMP testing at 4 kHz. Using the diagnostic criteria proposed by Ward et al., patients were determined to have SCD if dehiscence was confirmed on temporal bone CT by two reviewers, if the patient reported characteristic symptoms, and if they had at least one positive vestibular or audiometric test suggestive of SCD. Receiver operating characteristic (ROC) analysis was conducted to identify the optimal 4 kHz oVEMP amplitude cut-off. Comparison of 4 kHz oVEMP amplitude across radiographic, symptomatic, and surgical SCD subgroups was conducted using the Mann-Whitney U test. Results: Nine hundred two patients (n, ears = 1,804) underwent 4 kHz oVEMP testing. After evaluating 150 temporal bone CTs, we identified 49 patients (n, ears = 61) who had radiographic SCD. Of those, 33 patients (n, ears = 37) were determined to have clinically significant SCD. For this study cohort, 4 kHz oVEMP responses had a sensitivity of 86.5% and a specificity of 87.8%. ROC analysis demonstrated that accounting for the inter-amplitude of 4 kHz oVEMP was more accurate in detecting SCD than the presence of n10 response alone (AUC 91 vs. 87%). Additionally, using an amplitude cut-off of 15 µV reduces false-positive results and improves specificity to 96.8%. Assessing 4 kHz oVEMP response across SCD subgroups demonstrated that surgical and symptomatic SCD cases had significantly higher amplitudes, while radiographic SCD cases without characteristic symptoms had similar amplitudes compared to cases without evidence of SCD. Conclusion: Our results suggest that accounting for 4 kHz oVEMP amplitude can improve detection of SCD compared to the binary presence of n10 response. The 4 kHz oVEMP amplitude cut-off that maximizes sensitivity and specificity for our cohort is 15 µV. Our results also suggest that 4 kHz oVEMP amplitudes align better with symptomatic SCD cases compared to cases in which there is radiographic SCD but no characteristic symptoms.
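Choosing an amplitude cutoff from a ROC curve, as described above, can be sketched as follows; the amplitude distributions are invented for illustration, and the use of Youden's J to pick the cutoff is a common convention assumed here, not necessarily the study's exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical oVEMP n10-p15 amplitudes (µV): SCD ears tend to produce larger
# responses than non-SCD ears. Values are illustrative, not the study's data.
scd = rng.normal(25, 8, 40)        # clinically significant SCD ears
non_scd = rng.normal(8, 4, 200)    # ears without SCD

amplitudes = np.concatenate([scd, non_scd])
labels = np.concatenate([np.ones(40), np.zeros(200)])

auc = roc_auc_score(labels, amplitudes)
fpr, tpr, thresholds = roc_curve(labels, amplitudes)

# Youden's J picks the cutoff maximizing sensitivity + specificity - 1,
# one standard way to read a single criterion off a ROC curve.
j = tpr - fpr
best_cutoff = thresholds[np.argmax(j)]
```

Treating the continuous amplitude as the classifier score (rather than the binary presence of n10) is what lets the ROC analysis trade sensitivity against specificity at all.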
View details for DOI 10.3389/fneur.2020.00879
View details for PubMedID 32982915
View details for PubMedCentralID PMC7477389
-
Rhythmic Wave Patterns on Ambient Pressure Tympanometry in Patients With Objective Tinnitus-associated Pathologies.
Otology & neurotology : official publication of the American Otological Society, American Neurotology Society [and] European Academy of Otology and Neurotology
2019
Abstract
OBJECTIVE: To introduce the concept of ambient pressure tympanometry (APT) and its association with pathologies that may present with objective tinnitus. STUDY DESIGN: Retrospective case series. SETTING: Tertiary referral center. SUBJECTS AND METHODS: Audiologists performed APT on adult patients as part of routine audiological testing. Ears with myoclonus and patulous Eustachian tube (PET) were identified via review of patient history and physical examination. All other conditions were verified via computed tomography (CT) temporal bone imaging. Ears with conditions that could impair tympanic membrane compliance, such as otosclerosis or tympanic membrane perforation, were excluded. APT findings were analyzed via a novel algorithm. RESULTS: A radiographic finding associated with objective tinnitus was confirmed in 67 ears that underwent CT imaging; 45 (67%) of these ears displayed rhythmic APT wave patterns. These included 28 ears with superior semicircular canal dehiscence, 4 ears with sigmoid sinus dehiscence, 6 ears with internal carotid artery dehiscence, 4 ears with glomus tumor, and 3 ears with encephalocele. In addition, we identified three ears with myoclonus and one ear with PET. In a subset of 30 ears with objective tinnitus symptoms that underwent CT imaging, 22 displayed rhythmic waves; of these 22 ears, 20 (91%) had a radiographic finding associated with objective tinnitus. CONCLUSIONS: Rhythmic APT wave patterns are common and may be associated with numerous temporal bone pathologies that may present with objective tinnitus. APT is a simple, rapid, and widely available tool that warrants further study to determine its value in screening of these otologic conditions.
View details for DOI 10.1097/MAO.0000000000002526
View details for PubMedID 31868782
-
Frequency-following response among neonates with progressive moderate hyperbilirubinemia.
Journal of perinatology : official journal of the California Perinatal Association
2019
Abstract
OBJECTIVE: To evaluate the feasibility of auditory monitoring of neurophysiological status using frequency-following response (FFR) in neonates with progressive moderate hyperbilirubinemia, measured by transcutaneous bilirubin (TcB) levels. STUDY DESIGN: ABR and FFR measures were compared and correlated with TcB levels across three groups. Group I was a healthy cohort (n=13). Group II (n=28) consisted of neonates with progressive, moderate hyperbilirubinemia and Group III consisted of the same neonates, post physician-ordered phototherapy. RESULT: FFR amplitudes in Group I controls (TcB=83.1±32.5 µmol/L; 4.9±1.9 mg/dL) were greater than Group II (TcB=209.3±48.0 µmol/L; 12.1±2.8 mg/dL). After TcB was lowered by phototherapy, FFR amplitudes in Group III were similar to controls. Lower TcB levels correlated with larger FFR amplitudes (r=-0.291, p=0.015), but not with ABR wave amplitude or latencies. CONCLUSION: The FFR is a promising measure of the dynamic neurophysiological status in neonates, and may be useful in tracking neurotoxicity in infants with hyperbilirubinemia.
View details for DOI 10.1038/s41372-019-0421-y
View details for PubMedID 31263204
-
Occupational Noise Exposure and Risk for Noise-Induced Hearing Loss Due to Temporal Bone Drilling.
Otology & neurotology : official publication of the American Otological Society, American Neurotology Society [and] European Academy of Otology and Neurotology
2018; 39 (6): 693–99
Abstract
BACKGROUND: Noise-induced hearing loss is one of the most common occupational hazards in the United States. Several studies have described noise-induced hearing loss in patients following mastoidectomy. Although otolaryngologists care for patients with noise-induced hearing loss, few studies in the English literature have examined surgeons' occupational risk. METHODS: Noise dosimeters and sound level meters with octave band analyzers were used to assess noise exposure during drilling of temporal bones intraoperatively and in a lab setting. Frequency-specific sound intensities were recorded. Sound produced using burrs of varying size and type was compared. Differences while drilling varying anatomic structures were assessed using drills from two manufacturers. Pure-tone audiometry was performed on 7 to 10 otolaryngology residents before and after a temporal bone practicum to assess for threshold shifts. RESULTS: Noise exposure during otologic drilling can exceed 100 dB for short periods of time, and is especially loud with large-diameter burrs (>4 mm), with cutting as compared with diamond burrs, and while drilling denser bone such as the cortex. Intensity peaks were found at 2.5, 5, and 6.3 kHz. Drilling on the tegmen and sigmoid sinus revealed peaks at 10 and 12.5 kHz. No temporary threshold shifts were found at 3 to 6 kHz; shifts were found at 8 to 16 kHz, though these did not reach statistical significance. CONCLUSION: This article examines noise exposure and threshold shifts during temporal bone drilling. We were unable to find previous descriptions in the literature of measurements made while multiple people drill simultaneously, of measurements made during translabyrinthine surgery, or of a specific frequency characterization of the change in pitch that appears while drilling on the tegmen. Hearing protection should be considered that would still allow the surgeon to appreciate pitch changes associated with drilling on sensitive structures and to communicate with surgical team members. As professionals who specialize in promoting the restoration and preservation of hearing for others, otologic surgeons should not neglect hearing protection for themselves.
View details for PubMedID 29889779
-
Assessment of Hearing During the Early Years of the American Otological Society.
Otology & neurotology : official publication of the American Otological Society, American Neurotology Society [and] European Academy of Otology and Neurotology
2018; 39 (4S Suppl 1): S30–S42
Abstract
To describe the manner in which hearing was evaluated in American otological practice during the late 19th and early 20th centuries, before introduction of the electric audiometer. Primary sources were the Transactions of the American Otological Society and American textbooks, especially those authored by Presidents of the Society. In the era before electric audiometry, multiple methods were used for evaluating thresholds at different frequencies. Tuning forks were important for lower frequencies; whisper and speech for mid-frequencies; and Galton's whistle and Konig's rod evaluated high frequencies. Hearing threshold was often recorded in terms of the duration of a sound, or the distance from the source, rather than intensity. Hearing ability was often recorded as a fraction, for example, the distance at which a watch tick could be heard over the distance at which a normal-hearing individual could hear it. A variety of devices, such as Politzer's Acoumeter, attempted to deliver sound in a calibrated manner, thus enhancing the accuracy and reproducibility of test results. The early years of the American Otological Society were marked by a number of ingenious efforts to standardize hearing assessment despite the technical limitations. These efforts facilitated the development of the audiometer, and continue to influence clinical practice even today.
View details for PubMedID 29533374
-
Detection of tones of unexpected frequency in amplitude-modulated noise.
The Journal of the Acoustical Society of America
2017; 142 (4): 2043
Abstract
Detection of a tonal signal in amplitude-modulated noise can improve with increases in noise bandwidth if the pattern of amplitude fluctuations is uniform across frequency, a phenomenon termed comodulation masking release (CMR). Most explanations for CMR rely on an assumption that listeners monitor frequency channels both at and remote from the signal frequency in conditions that yield the effect. To test this assumption, detectability was assessed for signals presented at expected and unexpected frequencies in wideband amplitude-modulated noise. Detection performance was high even for signals of unexpected frequency, suggesting that listeners were monitoring multiple frequency channels, as has been assumed.
View details for DOI 10.1121/1.5007718
View details for PubMedID 29092596
-
Self-Selection of Frequency Tables with Bilateral Mismatches in an Acoustic Simulation of a Cochlear Implant
JOURNAL OF THE AMERICAN ACADEMY OF AUDIOLOGY
2017; 28 (5): 385-394
Abstract
Many recipients of bilateral cochlear implants (CIs) may have differences in electrode insertion depth. Previous reports indicate that when a bilateral mismatch is imposed, performance on tests of speech understanding or sound localization becomes worse. If recipients of bilateral CIs cannot adjust to a difference in insertion depth, adjustments to the frequency table may be necessary to maximize bilateral performance. The purpose of this study was to examine the feasibility of using real-time manipulations of the frequency table to offset any decrements in performance resulting from a bilateral mismatch. A simulation of a CI was used because it allows for explicit control of the size of a bilateral mismatch. Such control is not available with users of CIs. A total of 31 normal-hearing young adults participated in this study. Using a CI simulation, four bilateral mismatch conditions (0, 0.75, 1.5, and 3 mm) were created. In the left ear, the analysis filters and noise bands of the CI simulation were the same. In the right ear, the noise bands were shifted higher in frequency to simulate a bilateral mismatch. Then, listeners selected a frequency table in the right ear that was perceived as maximizing bilateral speech intelligibility. Word-recognition scores were then assessed for each bilateral mismatch condition. Listeners were tested either with a standard frequency table, which preserved a bilateral mismatch, or with their self-selected frequency table. Consistent with previous reports, bilateral mismatches of 1.5 and 3 mm yielded decrements in word recognition when the standard table was used in both ears. However, when listeners used the self-selected frequency table, performance was the same regardless of the size of the bilateral mismatch. Self-selection of a frequency table appears to be a feasible method for ameliorating the negative effects of a bilateral mismatch.
These data may have implications for recipients of bilateral CIs who cannot adapt to a bilateral mismatch, because they suggest that (1) such individuals may benefit from modification of the frequency table in one ear and (2) self-selection of a "most intelligible" frequency table may be a useful tool for determining how the frequency table should be altered to optimize speech recognition.
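One way to see why millimetre-scale insertion differences matter: cochlear place-to-frequency maps such as Greenwood's function imply sizable frequency shifts for small basalward displacements. The sketch below uses Greenwood's published human parameters; the study itself does not specify this particular mapping, so treat it as an illustrative assumption.

```python
def greenwood_hz(x_mm, length_mm=35.0):
    """Greenwood frequency-place map: characteristic frequency (Hz) at a
    point x_mm from the cochlear apex, using the human parameters
    A = 165.4, a = 2.1 (x as proportion of cochlear length), k = 0.88."""
    x = x_mm / length_mm                     # proportion of cochlear length
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Frequency shift implied by a 3 mm basalward mismatch at the 20 mm place
# (the largest mismatch condition simulated in the study above):
f_matched = greenwood_hz(20.0)
f_shifted = greenwood_hz(23.0)
ratio = f_shifted / f_matched
```

Under these parameters a 3 mm displacement raises the characteristic frequency by roughly half an octave, which is consistent with such mismatches being large enough to degrade word recognition.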
View details for DOI 10.3766/jaaa.15077
View details for Web of Science ID 000401497100003
View details for PubMedID 28534729
-
Bilateral Loudness Balancing and Distorted Spatial Perception in Recipients of Bilateral Cochlear Implants
EAR AND HEARING
2015; 36 (5): E225-E236
Abstract
To determine whether bilateral loudness balancing during mapping of bilateral cochlear implants (CIs) produces fused, punctate, and centered auditory images that facilitate lateralization with stimulation on single-electrode pairs. Adopting procedures similar to those that are practiced clinically, direct stimulation was used to obtain most-comfortable levels (C levels) in recipients of bilateral CIs. Three pairs of electrodes, located in the base, middle, and apex of the electrode array, were tested. These electrode pairs were loudness-balanced by playing right-left electrode pairs sequentially. In experiment 1, the authors measured the location, number, and compactness of auditory images in 11 participants in a subjective fusion experiment. In experiment 2, the authors measured the location and number of the auditory images while imposing a range of interaural level differences (ILDs) in 13 participants in a lateralization experiment. Six of these participants repeated the mapping process and lateralization experiment over three separate days to determine the variability in the procedure. In approximately 80% of instances, bilateral loudness balancing was achieved from relatively small adjustments to the C levels (≤3 clinical current units). More important, however, was the observation that in 4 of 11 participants, simultaneous bilateral stimulation regularly elicited percepts that were not fused into a single auditory object. Across all participants, approximately 23% of percepts were not perceived as fused; this contrasts with the 1 to 2% incidence of diplacusis observed in normal-hearing individuals. In addition to the unfused images, the perceived location was often offset from the physical ILD. On the whole, only 45% of percepts presented with an ILD of 0 clinical current units were perceived as fused and heard in the center of the head. Taken together, these results suggest that distortions to the spatial map remain common in bilateral CI recipients even after careful bilateral loudness balancing. The primary conclusion from these experiments is that, even after bilateral loudness balancing, bilateral CI recipients still regularly perceive stimuli that are unfused, offset from the assumed zero ILD, or both. Thus, while current clinical mapping procedures for bilateral CIs are sufficient to enable many of the benefits of bilateral hearing, they may not elicit percepts that are thought to be optimal for sound-source localization. As a result, in the absence of new developments in signal processing for CIs, new mapping procedures may need to be developed for bilateral CI recipients to maximize the benefits of bilateral hearing.
View details for DOI 10.1097/AUD.0000000000000174
View details for Web of Science ID 000360630800003
View details for PubMedID 25985017
-
Bilateral cochlear implants with large asymmetries in electrode insertion depth: implications for the study of auditory plasticity
ACTA OTO-LARYNGOLOGICA
2015; 135 (4): 354-363
Abstract
The human frequency-to-place map may be modified by experience, even in adult listeners. However, such plasticity has limitations. Knowledge of the extent and the limitations of human auditory plasticity can help optimize parameter settings in users of auditory prostheses. To what extent can adults adapt to sharply different frequency-to-place maps across ears? This question was investigated in two bilateral cochlear implant users who had a full electrode insertion in one ear, a much shallower insertion in the other ear, and standard frequency-to-electrode maps in both ears. Three methods were used to assess adaptation to the frequency-to-electrode maps in each ear: (1) pitch matching of electrodes in opposite ears, (2) listener-driven selection of the most intelligible frequency-to-electrode map, and (3) speech perception tests. Based on these measurements, one subject was fitted with an alternative frequency-to-electrode map, which sought to compensate for her incomplete adaptation to the standard frequency-to-electrode map. Both listeners showed remarkable ability to adapt, but such adaptation remained incomplete for the ear with the shallower electrode insertion, even after extended experience. The alternative frequency-to-electrode map that was tested resulted in substantial increases in speech perception for one subject in the short insertion ear.
View details for DOI 10.3109/00016489.2014.1002052
View details for Web of Science ID 000351365500007
View details for PubMedID 25719506
-
Feasibility of Real-Time Selection of Frequency Tables in an Acoustic Simulation of a Cochlear Implant
EAR AND HEARING
2013; 34 (6): 763-772
Abstract
Perception of spectrally degraded speech is particularly difficult when the signal is also distorted along the frequency axis. This might be particularly important for post-lingually deafened recipients of cochlear implants (CIs), who must adapt to a signal where there may be a mismatch between the frequencies of an input signal and the characteristic frequencies of the neurons stimulated by the CI. However, there is a lack of tools that can be used to identify whether an individual has adapted fully to a mismatch in the frequency-to-place relationship and, if so, to find a frequency table that ameliorates any negative effects of an unadapted mismatch. The goal of the proposed investigation is to test whether real-time selection of frequency tables can be used to identify cases in which listeners have not fully adapted to a frequency mismatch. The assumption underlying this approach is that listeners who have not adapted to a frequency mismatch will select a frequency table that minimizes any such mismatches, even at the expense of reducing the information provided by this frequency table. Thirty-four normal-hearing adults listened to a noise-vocoded acoustic simulation of a CI and adjusted the frequency table in real time until they obtained a frequency table that sounded "most intelligible" to them. The use of an acoustic simulation was essential to this study because it allowed the authors to explicitly control the degree of frequency mismatch present in the simulation. None of the listeners had any previous experience with vocoded speech, in order to test the hypothesis that the real-time selection procedure could be used to identify cases in which a listener has not adapted to a frequency mismatch. After obtaining a self-selected table, the authors measured consonant-nucleus-consonant word-recognition scores with that self-selected table and two other frequency tables: a "frequency-matched" table that matched the analysis filters with the noise bands of the noise-vocoder simulation, and a "right-information" table that is similar to that used in most CI speech processors, but in this simulation results in a frequency shift equivalent to 6.5 mm of cochlear space. Listeners tended to select a table that was very close to, but shifted slightly lower in frequency from, the frequency-matched table. The real-time selection process took on average 2 to 3 min for each trial, and the between-trial variability was comparable with that previously observed with closely related procedures. The word-recognition scores with the self-selected table were clearly higher than with the right-information table and slightly higher than with the frequency-matched table. Real-time self-selection of frequency tables may be a viable tool for identifying listeners who have not adapted to a mismatch in the frequency-to-place relationship, and for finding a frequency table that is more appropriate for them. Moreover, the small but significant improvements in word-recognition ability observed with the self-selected table suggest that these listeners based their selections on intelligibility rather than some other factor. The within-subject variability in the real-time selection procedure was comparable with that of a genetic algorithm, and the speed of the real-time procedure appeared to be faster than either a genetic algorithm or a simplex procedure.
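The noise-vocoder simulation described above amounts to extracting the envelope in each analysis band and using it to modulate band-limited noise. The sketch below implements that idea; the filter order, band edges, and test signal are illustrative choices, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, band_edges):
    """Minimal noise-vocoder sketch: per analysis band, extract the Hilbert
    envelope and use it to modulate noise filtered into the same band.
    Analysis and synthesis bands coincide here (a 'frequency-matched'
    table); shifting the synthesis bands upward would simulate the
    frequency mismatch studied above."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    for lo, hi in band_edges:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        envelope = np.abs(hilbert(band))          # slowly varying amplitude
        carrier = sosfilt(sos, rng.normal(0, 1, len(signal)))
        out += envelope * carrier
    return out

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
# A crude speech-like test signal: a 300 Hz tone with 4 Hz amplitude modulation.
speech_like = np.sin(2 * np.pi * 300 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
bands = [(100, 400), (400, 1000), (1000, 2400), (2400, 6000)]
vocoded = noise_vocode(speech_like, fs, bands)
```

Because only the band envelopes survive, the vocoded output preserves the temporal information a CI transmits while discarding fine spectral structure.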
View details for Web of Science ID 000330361200011
View details for PubMedID 23807089
-
Factors influencing consistent device use in pediatric recipients of bilateral cochlear implants.
Cochlear implants international
2013; 14 (5): 257-265
Abstract
To determine which demographic or performance variables are associated with inconsistent use of a second implant in pediatric recipients of sequential bilateral cochlear implants (CIs). A retrospective chart review was conducted on pediatric recipients of sequential bilateral CIs. Children were divided into two age groups, 5-9 and 10-17 years of age. For each group, we examined whether inconsistent use of the second implant (CI-2) was associated with a variety of demographic variables, or with speech-perception scores. In children aged 5-9 years, inconsistent use of CI-2 was not significantly associated with any demographic variable, but was related to both the word-recognition score with CI-2, and the difference in word-recognition scores between the first implant (CI-1) and CI-2. In children aged 10-17 years, these relationships were not significant due to the smaller number of subjects. Finally, CI-2 word-recognition scores across all children were significantly correlated with the age of implantation for both CI-1 and CI-2, and the time between CI-1 and CI-2 surgeries. Speech-recognition scores obtained with CI-2, and the extent to which they differ from CI-1, are most closely related to inconsistent use of CI-2 in pediatric sequential implantees. These results are consistent with similar data previously reported by other investigators. While children implanted with CI-2 at a later age generally perform more poorly, most children still use both implants, and benefit from CI-2 even when receiving the implant as an adolescent. In pediatric recipients of sequential bilateral CIs, inconsistent use of CI-2 is related to the speech-recognition scores with CI-2, and the difference in speech-recognition scores between CI-1 and CI-2. In addition, speech-recognition scores with CI-2 are related to the amount of time between CI-1 and CI-2 surgeries, and the age of implantation for both CI-1 and CI-2.
View details for DOI 10.1179/1754762812Y.0000000026
View details for PubMedID 23510638
-
Perceptual learning and generalization resulting from training on an auditory amplitude-modulation detection task
JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA
2011; 129 (2): 898-906
Abstract
Fluctuations in sound amplitude provide important cues to the identity of many sounds including speech. Of interest here was whether the ability to detect these fluctuations can be improved with practice, and if so whether this learning generalizes to untrained cases. To address these issues, normal-hearing adults (n = 9) were trained to detect sinusoidal amplitude modulation (SAM; 80-Hz rate, 3-4 kHz bandpass carrier) 720 trials/day for 6-7 days and were tested before and after training on related SAM-detection and SAM-rate-discrimination conditions. Controls (n = 9) only participated in the pre- and post-tests. The trained listeners improved more than the controls on the trained condition between the pre- and post-tests, but different subgroups of trained listeners required different amounts of practice to reach asymptotic performance, ranging from 1 (n = 6) to 4-6 (n = 3) sessions. This training-induced learning did not generalize to detection with two untrained carrier spectra (5 kHz low-pass and 0.5-1.5 kHz bandpass) or to rate discrimination with the trained rate and carrier spectrum, but there was some indication that it generalized to detection with two untrained rates (30 and 150 Hz). Thus, practice improved the ability to detect amplitude modulation, but the generalization of this learning to untrained cases was somewhat limited.
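The trained stimulus (80-Hz sinusoidal amplitude modulation of a 3-4 kHz bandpass noise carrier) can be generated as follows; the FFT-domain bandpass is an implementation convenience for self-containment, not necessarily the study's exact synthesis method.

```python
import numpy as np

def sam_noise(fs=44100, dur=0.5, rate=80.0, depth=1.0,
              lo=3000.0, hi=4000.0, seed=0):
    """Sinusoidally amplitude-modulated (SAM) bandpass noise:
    (1 + m*sin(2*pi*fm*t)) applied to a bandpass noise carrier, the
    stimulus class used in SAM-detection tasks like the one above."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    noise = rng.normal(0, 1, n)
    # Crude bandpass: zero out spectral components outside [lo, hi].
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0      # 3-4 kHz carrier band
    carrier = np.fft.irfft(spectrum, n)
    t = np.arange(n) / fs
    modulator = 1.0 + depth * np.sin(2 * np.pi * rate * t)
    return modulator * carrier

stim = sam_noise()   # 80-Hz SAM, 3-4 kHz carrier, as in the trained condition
```

Detection thresholds in such tasks are typically expressed as the smallest modulation depth `m` a listener can distinguish from the unmodulated carrier (`depth=0`).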
View details for DOI 10.1121/1.3531841
View details for Web of Science ID 000287709700038
View details for PubMedID 21361447
-
A New Software Tool to Optimize Frequency Table Selection for Cochlear Implants
OTOLOGY & NEUROTOLOGY
2010; 31 (8): 1242-1247
Abstract
When cochlear implant (CI) users are allowed to self-select the "most intelligible" frequency-to-electrode table, some of them choose one that differs from the default frequency table that is normally used in clinical practice.

CIs reproduce the tonotopicity of normal cochleas using frequency-to-electrode tables that assign stimulation of more basal electrodes to higher frequencies and more apical electrodes to lower-frequency sounds. Current audiologic practice uses a default frequency-to-electrode table for most patients. However, individual differences in cochlear size, neural survival, and electrode positioning may result in different tables sounding most intelligible to different patients. No clinical tools currently exist to facilitate this fitting.

A software tool was designed that enables CI users to self-select a most intelligible frequency table. Users explore a 2-dimensional space that represents a range of different frequency tables. Unlike existing tools, this software enables users to interactively audition speech processed by different frequency tables and quickly identify a preferred one. Pilot testing was performed in 11 long-term, postlingually deaf CI users.

The software tool was designed, developed, tested, and debugged. Patients successfully used the tool to sample frequency tables and to self-select tables deemed most intelligible, which for approximately half of the users differed from the clinical default.

A software tool allowing CI users to self-select frequency-to-electrode tables may help in fitting postlingually deaf users. This novel approach may transform current methods of CI fitting.
View details for DOI 10.1097/MAO.0b013e3181f2063e
View details for Web of Science ID 000282306900013
View details for PubMedID 20729774
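The tool described above maps points in a 2-dimensional space to candidate frequency-to-electrode tables. As a rough illustration of what one such table looks like, the sketch below spaces 22 analysis bands logarithmically between a chosen lower and upper edge. The parameterization, the 188-7938 Hz example range, and the electrode count are assumptions for illustration, not the published tool's actual design.

```python
import numpy as np

def frequency_table(low_hz, high_hz, n_electrodes=22):
    """Return per-electrode analysis bands for one candidate frequency
    table, parameterized here simply by the lower and upper edges of the
    overall analysis range (a simplifying assumption).

    Band edges are spaced logarithmically. The list is ordered from the
    most basal electrode (highest band) to the most apical (lowest band),
    mirroring cochlear tonotopy.
    """
    edges = np.geomspace(low_hz, high_hz, n_electrodes + 1)
    # Reverse so entry 0 carries the highest-frequency band.
    return list(zip(edges[:-1], edges[1:]))[::-1]
```

Sliding a point through the 2-D space would then amount to recomputing the table from new `(low_hz, high_hz)` coordinates and re-processing the speech sample so the listener can audition the result.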
-
Enhancing Perceptual Learning by Combining Practice with Periods of Additional Sensory Stimulation
JOURNAL OF NEUROSCIENCE
2010; 30 (38): 12868-12877
Abstract
Perceptual skills can be improved even in adulthood, but this learning seldom occurs by stimulus exposure alone. Instead, it requires considerable practice performing a perceptual task with relevant stimuli. It is thought that task performance permits the stimuli to drive learning. A corresponding assumption is that the same stimuli do not contribute to improvement when encountered separately from relevant task performance because of the absence of this permissive signal. However, these ideas are based on only two types of studies, in which the task was either always performed or not performed at all. Here we demonstrate enhanced perceptual learning on an auditory frequency-discrimination task in human listeners when practice on that target task was combined with additional stimulation. Learning was enhanced regardless of whether the periods of additional stimulation were interleaved with or provided exclusively before or after target-task performance, and even though that stimulation occurred during the performance of an irrelevant (auditory or written) task. The additional exposures were only beneficial when they shared the same frequency with, though they did not need to be identical to, those used during target-task performance. Their effectiveness also was diminished when they were presented 15 min after practice on the target task and was eliminated when that separation was increased to 4 h. These data show that exposure to an acoustic stimulus can facilitate learning when encountered outside of the time of practice on a perceptual task. By properly using additional stimulation one may markedly improve the efficiency of perceptual training regimens.
View details for DOI 10.1523/JNEUROSCI.0487-10.2010
View details for Web of Science ID 000282097600030
View details for PubMedID 20861390
-
Reimplantation of hybrid cochlear implant users with a full-length electrode after loss of residual hearing
OTOLOGY & NEUROTOLOGY
2008; 29 (2): 168-173
Abstract
To assess word recognition and pitch-scaling abilities of cochlear implant users first implanted with a Nucleus 10-mm Hybrid electrode array and then reimplanted with a full-length Nucleus Freedom array after loss of residual hearing.

Although electroacoustic stimulation is a promising treatment for patients with residual low-frequency hearing, a small subset of them lose that residual hearing. It is not clear whether these patients would be better served by leaving in the 10-mm array and providing electric stimulation through it, or by replacing it with a standard full-length array.

Word recognition and pitch-scaling abilities were measured in 2 users of hybrid cochlear implants who lost their residual hearing in the implanted ear after a few months. Tests were repeated over several months, first with the 10-mm array, and again after these patients were reimplanted with a full-length array. The word-recognition task consisted of two 50-word consonant-nucleus-consonant (CNC) lists. In the pitch-scaling task, 6 electrodes were stimulated in pseudorandom order, and patients assigned a pitch value to the sensation elicited by each electrode.

Shortly after reimplantation with the full electrode array, speech understanding was much better than with the 10-mm array. Patients improved their ability to perform the pitch-scaling task over time with the full array, although their performance on that task was variable, and the improvements were often small.

1) Short electrode arrays may help preserve residual hearing but may also provide less benefit than traditional cochlear implants for some patients. 2) Pitch percepts in response to electric stimulation may be modified by experience.
View details for Web of Science ID 000252840700012
View details for PubMedID 18165793
-
What matched comparisons can and cannot tell us: The case of cochlear implants
EAR AND HEARING
2007; 28 (4): 571-579
Abstract
To examine the conclusions and possible misinterpretations that may or may not be drawn from the "outcome-matching method," a study design recently used in the cochlear implant literature. In this method, subject groups are matched not only on potentially confounding variables but also on an outcome measure that is closely related to the outcome measure under analysis. For example, subjects may be matched according to their speech perception scores in quiet, and their speech perception in noise is compared.

The present study includes two components, a simulation study and a questionnaire. In the simulation study, the outcome-matching method was applied to pseudo-randomly generated data. Simulated speech perception scores in quiet and in noise were generated for two comparison groups, in two imaginary worlds. In both worlds, comparison group A performed only slightly worse in noise than in quiet, whereas comparison group B performed significantly worse in noise than in quiet. In Imaginary World 1, comparison group A had better speech perception scores than comparison group B. In Imaginary World 2, comparison group B had better speech perception scores than comparison group A. The outcome-matching method was applied to these data twice in each imaginary world: 1) matching scores in quiet and comparing in noise, and 2) matching scores in noise and comparing in quiet. This procedure was repeated 10,000 times. The second part of the study was conducted to address the level of misinterpretation that could arise from the outcome-matching method. A questionnaire was administered to 54 students in a senior-level course on speech and hearing to assess their opinions about speech perception with two different models of cochlear implant devices. The students were instructed to fill out the questionnaire before and after reading a paper that used the outcome-matching method to examine speech perception in noise and in quiet with those two cochlear implant devices.

When pseudorandom scores were matched in quiet, comparison group A's scores in noise were significantly better than comparison group B's scores. Results were different when scores were matched in noise: in this case, comparison group B's scores in quiet were significantly better than comparison group A's scores. Thus, the choice of outcome measure used for matching determined the result of the comparison. Additionally, results of the comparisons were identical regardless of whether they were conducted using data from Imaginary World 1 (where comparison group A is better) or from Imaginary World 2 (where comparison group B is better). After reading the paper that used the outcome-matching method, students' opinions about the two cochlear implants underwent a significant change even though, according to the simulation study, this opinion change was not warranted by the data.

The outcome-matching method can provide important information about differences within a comparison group, but it cannot be used to determine whether a given device or clinical intervention is better than another one. Care must be used when interpreting the results of a study using the outcome-matching method.
View details for Web of Science ID 000247829000012
View details for PubMedID 17609617
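The artifact the simulation study demonstrates is easy to reproduce numerically. The sketch below uses illustrative score distributions and a hypothetical matching tolerance (not the paper's actual parameters): group A loses only a few points in noise, group B loses many. Matching subjects on quiet scores makes group A look better in noise, while matching on noise scores makes group B look better in quiet, so the matched outcome dictates the "winner."

```python
import numpy as np

def outcome_matching_demo(n=2000, tol=1.0, seed=1):
    """Monte Carlo sketch of the outcome-matching artifact.

    Returns (mean A-minus-B difference in noise after matching on quiet,
             mean B-minus-A difference in quiet after matching on noise).
    Both come out positive, i.e., each matching choice favors the other
    group, even though the data-generating process never changes.
    """
    rng = np.random.default_rng(seed)
    quiet_a = rng.normal(70, 10, n)            # scores in quiet, group A
    quiet_b = rng.normal(70, 10, n)
    noise_a = quiet_a - rng.normal(5, 2, n)    # small quiet-to-noise gap
    noise_b = quiet_b - rng.normal(25, 2, n)   # large quiet-to-noise gap
    diffs_noise, diffs_quiet = [], []
    # Match on quiet (within `tol` points), compare in noise.
    for qa, na in zip(quiet_a, noise_a):
        close = np.abs(quiet_b - qa) < tol
        if close.any():
            diffs_noise.append(na - noise_b[close].mean())
    # Match on noise instead, compare in quiet.
    for nb, qb in zip(noise_b, quiet_b):
        close = np.abs(noise_a - nb) < tol
        if close.any():
            diffs_quiet.append(qb - quiet_a[close].mean())
    return float(np.mean(diffs_noise)), float(np.mean(diffs_quiet))
```

With these illustrative distributions, both mean differences land near +20 points, mirroring the paper's point that the comparison flips with the choice of matching variable.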
-
The effect of perimodiolar placement on speech perception and frequency discrimination by cochlear implant users
ACTA OTO-LARYNGOLOGICA
2007; 127 (4): 378-383
Abstract
Neither speech understanding nor frequency discrimination ability was better in Nucleus Contour users than in Nucleus 24 straight-electrode users. Furthermore, perimodiolar electrode placement does not result in better frequency discrimination.

We addressed three questions related to perimodiolar electrode placement. First, do patients implanted with the Contour electrode understand speech better than with an otherwise identical device that has a straight electrode? Second, do these groups have different frequency discrimination abilities? Third, is the distance of the electrode from the modiolus related to frequency discrimination ability?

Contour and straight-electrode users were matched on four important variables. We then tested these listeners on CNC word and HINT sentence identification tasks, and on a formant frequency discrimination task. We also examined X-rays and measured the distance of the electrodes from the modiolus to determine whether there is a relationship between this factor and frequency discrimination ability.

Both speech understanding and frequency discrimination abilities were similar for listeners implanted with the Contour vs. a straight electrode. Furthermore, there was no linear relationship between electrode-modiolus distance and frequency discrimination ability. However, we did note a second-order relationship between these variables, suggesting that frequency discrimination is worse when the electrodes are either too close to or too far away from the modiolus.
View details for DOI 10.1080/00016480701258671
View details for Web of Science ID 000246298700007
View details for PubMedID 17453457
-
Perceptual-learning evidence for separate processing of asynchrony and order tasks
JOURNAL OF NEUROSCIENCE
2006; 26 (49): 12708-12716
Abstract
Normal perception depends, in part, on accurate judgments of the temporal relationships between sensory events. Two such relative-timing skills are the ability to detect stimulus asynchrony and to discriminate stimulus order. Here we investigated the neural processes contributing to the performance of auditory asynchrony and order tasks in humans, using a perceptual-learning paradigm. In each of two parallel experiments, we tested listeners on a pretest and a posttest consisting of auditory relative-timing conditions. Between these two tests, we trained a subset of listeners approximately 1 h/d for 6-8 d on a single relative-timing condition. The trained listeners practiced asynchrony detection in one experiment and order discrimination in the other. Both groups were trained at sound onset with tones at 0.25 and 4.0 kHz. The remaining listeners in each experiment, who served as controls, did not receive multihour training during the 8-10 d between the pretest and posttest. These controls improved even without intervening training, adding to evidence that a single session of exposure to perceptual tasks can yield learning. Most importantly, each of the two groups of trained listeners learned more on their respective trained conditions than controls, but this learning occurred only on the two trained conditions. Neither group of trained listeners generalized their learning to the other task (order or asynchrony), an untrained temporal position (sound offset), or untrained frequency pairs. Thus, it appears that multihour training on relative-timing skills affects task-specific neural circuits that are tuned to a given temporal position and combination of stimulus components.
View details for DOI 10.1523/JNEUROSCI.2254-06.2006
View details for Web of Science ID 000242626100011
View details for PubMedID 17151274
-
Customized selection of frequency maps in an acoustic simulation of a cochlear implant.
Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS)
2006; 1: 3596-3599
Abstract
Cochlear implants can restore hearing to deaf individuals by electrically stimulating the auditory nerve. They do so by assigning different frequencies to different stimulating electrodes via a frequency map. We have developed a device that enables us to change the frequency map in real time. Here, in normal-hearing adults listening to an acoustic simulation of a cochlear implant, we investigate what frequency maps are initially preferred, and how the ability to understand speech with that preferred map compares with two other maps. We show that naive listeners prefer a map that balances the need for low-frequency information with the desire for a naturally-sounding stimulus, and that initial performance with this listener-selected map is better than that with a map that distorts the signal to provide low-frequency information.
View details for PubMedID 17946188
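Acoustic simulations of cochlear implants are typically built as noise vocoders: the signal is split into analysis bands, each band's envelope modulates a band of noise, and the modulated bands are summed. The sketch below follows that general recipe under simplifying assumptions (FFT brick-wall filters, a crude moving-average envelope smoother); it is not the authors' implementation. Swapping in different `analysis_edges` while holding `output_edges` fixed corresponds to the kind of frequency-map change the device described here makes in real time.

```python
import numpy as np

def vocode(signal, fs, analysis_edges, output_edges):
    """Minimal noise-vocoder sketch of a CI acoustic simulation.

    `analysis_edges` and `output_edges` are band-edge sequences of equal
    length; the "frequency map" is the pairing of analysis band i with
    output (noise-carrier) band i.
    """
    rng = np.random.default_rng(0)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.fft.rfft(signal)
    a_bands = list(zip(analysis_edges[:-1], analysis_edges[1:]))
    o_bands = list(zip(output_edges[:-1], output_edges[1:]))
    out = np.zeros(n)
    for (alo, ahi), (olo, ohi) in zip(a_bands, o_bands):
        # Envelope of the analysis band: rectify, then crude low-pass.
        band = np.fft.irfft(np.where((freqs >= alo) & (freqs < ahi), spec, 0), n)
        env = np.abs(band)
        k = max(1, int(fs / 400))          # ~400-Hz moving-average smoother
        env = np.convolve(env, np.ones(k) / k, mode="same")
        # Band-limited noise carrier for the paired output band.
        nspec = np.fft.rfft(rng.standard_normal(n))
        carrier = np.fft.irfft(np.where((freqs >= olo) & (freqs < ohi), nspec, 0), n)
        out += env * carrier
    return out
```

Shifting the analysis edges upward relative to the output edges, for example, simulates a map that delivers more low-frequency information at the cost of a spectrally distorted, less natural-sounding output, which is the trade-off the listeners in this study were navigating.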
-
A perceptual learning investigation of the pitch elicited by amplitude-modulated noise
JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA
2005; 118 (6): 3794-3803
Abstract
Noise that is amplitude modulated at rates ranging from 40 to 850 Hz can elicit a sensation of pitch. Here, the processing of this temporally based pitch was investigated using a perceptual-learning paradigm. Nine listeners were trained (1 hour per day for 6-8 days) to discriminate a standard rate of sinusoidal amplitude modulation (SAM) from a faster rate in a single condition (150 Hz SAM rate, 5 kHz low-pass carrier). All trained listeners improved significantly on that condition. These trained listeners subsequently showed no more improvement than nine untrained controls on pure-tone and rippled-noise discrimination with the same pitch, and on SAM-rate discrimination with a 30 Hz rate, although they did show some improvement with a 300 Hz rate. In addition, most trained, but not control, listeners were worse at detecting SAM at 150 Hz after, compared to before training. These results indicate that listeners can learn to improve their ability to discriminate SAM rate with multiple-hour training and that the mechanism that is modified by learning encodes (1) the pitch of SAM noise but not that of pure tones and rippled noise, (2) different SAM rates separately, and (3) differences in SAM rate more effectively than cues for SAM detection.
View details for DOI 10.1121/1.2074687
View details for Web of Science ID 000234101000042
View details for PubMedID 16419824
-
The time course of attention in a simple auditory detection task
PERCEPTION & PSYCHOPHYSICS
2004; 66 (3): 508-516
Abstract
What is the time course of human attention in a simple auditory detection task? To investigate this question, we determined the detectability of a 20-msec, 1000-Hz tone presented at expected and unexpected times. Twelve listeners who expected the tone to occur at a specific time after a 300-msec narrowband noise rarely detected signals presented 150-375 msec before or 100-200 msec after that expected time. The shape of this temporal-attention window depended on the expected presentation time of the tone and the temporal markers available in the trials. Further, though expecting the signal to occur in silence, listeners often detected signals presented at unexpected times during the noise. Combined with previous data, these results further clarify the listening strategy humans use when trying to detect an expected sound: Humans seem to listen specifically for that sound, while ignoring the background in which it is presented, around the time when the sound is expected to occur.
View details for Web of Science ID 000222163300013
View details for PubMedID 15283074
-
Different patterns of human discrimination learning for two interaural cues to sound-source location
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
2001; 98 (21): 12307-12312
Abstract
Two of the primary cues used to localize the sources of sounds are interaural level differences (ILDs) and interaural time differences (ITDs). We conducted two experiments to explore how practice affects the human discrimination of values of ILDs and ongoing ITDs presented over headphones. We measured discrimination thresholds of 13 to 32 naive listeners in a variety of conditions during a pretest and again, 2 weeks later, during a posttest. Between those two tests, we trained a subset of listeners 1 h per day for 9 days on a single ILD or ITD condition. Listeners improved on both ILD and ITD discrimination. Improvement was initially rapid for both cue types and appeared to generalize broadly across conditions, indicating conceptual or procedural learning. A subsequent slower-improvement stage, which occurred solely for the ILD cue, only affected conditions with the trained stimulus frequency, suggesting that stimulus processing had fundamentally changed. These different learning patterns indicate that practice affects the attention to, or low-level encoding of, ILDs and ITDs at sites at which the two cue types are processed separately. Thus, these data reveal differences in the effect of practice on ILD and ITD discrimination, and provide insight into the encoding of these two cues to sound-source location in humans.
View details for Web of Science ID 000171558900087
View details for PubMedID 11593048
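Discrimination thresholds like the ILD and ITD thresholds measured here are commonly estimated with adaptive tracking. The sketch below implements a generic 3-down/1-up staircase, which converges near 79% correct; it is offered only as an illustration of this class of procedure, and the study's actual psychophysical method may have differed.

```python
def staircase_3down1up(respond, start, step, n_reversals=9):
    """Generic 3-down/1-up adaptive track for threshold estimation.

    `respond(level)` runs one trial at `level` (e.g., an ILD in dB or an
    ITD in microseconds) and returns True if the listener was correct.
    Three consecutive correct responses make the task harder (smaller
    level); any error makes it easier. The threshold estimate is the
    mean level over the final reversals.
    """
    level, correct_run, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 3:              # 3 correct in a row -> harder
                correct_run = 0
                if direction == +1:           # track was moving up: reversal
                    reversals.append(level)
                direction = -1
                level = max(level - step, step)
        else:                                 # any error -> easier
            correct_run = 0
            if direction == -1:               # track was moving down: reversal
                reversals.append(level)
            direction = +1
            level += step
    last = reversals[-6:]
    return sum(last) / len(last)
```

With a deterministic listener who is always correct at or above some level, the track settles into oscillating around that level, and the reversal average lands between the two bracketing step values.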