Thursday, June 27, 2019

Ear and Hearing

Prediction Model for Audiological Outcomes in Patients With GJB2 Mutations
Objectives: Recessive mutations in GJB2 are the most common genetic cause of sensorineural hearing impairment (SNHI) in humans. SNHI related to GJB2 mutations demonstrates a wide variation in audiological features, and there has been no reliable prediction model for hearing outcomes until now. The objectives of this study were to clarify the predominant factors determining hearing outcome and to establish a predictive model for SNHI in patients with GJB2 mutations. Design: A total of 434 patients confirmed to have biallelic GJB2 mutations were enrolled and divided into three groups according to their GJB2 genotypes. Audiological data, including hearing levels and audiogram configurations, were compared between patients with different genotypes. Univariate and multivariate generalized estimating equation (GEE) analyses were performed to analyze longitudinal data of patients with multiple audiological records. Results: Of the 434 patients, 346 (79.7%) were homozygous for the GJB2 p.V37I mutation, 55 (12.7%) were compound heterozygous for p.V37I and another GJB2 mutation, and 33 (7.6%) had biallelic GJB2 mutations other than p.V37I. There was a significant difference in hearing level and the distribution of audiogram configurations between the three groups. Multivariate GEE analyses on 707 audiological records of 227 patients revealed that the baseline hearing level and the duration of follow-up were the predominant predictors of hearing outcome, and that hearing levels in patients with GJB2 mutations could be estimated based on these two parameters: (Predicted Hearing Level [dBHL]) = 3.78 + 0.96 × (Baseline Hearing Level [dBHL]) + 0.55 × (Duration of Follow-Up [y]). Conclusion: The baseline hearing level and the duration of follow-up are the main prognostic factors for outcome of GJB2-related SNHI. These findings may have important clinical implications in guiding follow-up protocols and designing treatment plans in patients with GJB2 mutations. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal's Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: We thank all the patients and their parents for participating in the study. P.-Y. C. and C.-C. W. ascertained clinical data, analyzed data, and wrote the article. Y.-H.L. and Y.-H.L. performed genetic examinations and analyses. L.-H.T. collected and analyzed audiological data. T.-H.Y. collected clinical data. P.-L.C. supervised the genetic examinations and analyses. T.-C.L. and C.-J.H. supervised the whole study and provided critical revision. This work was supported by research grants from the National Health Research Institute (NHRI-EX106-10414PC to C.-C.W.), the Ministry of Science and Technology of Taiwan (MOST 103-2628-B-002-009-MY4 to C.-C.W.), and National Taiwan University Hospital Yunlin Branch (NTUHYL106.N005 to P.-Y.C.). The authors have no conflicts of interest to disclose. Received May 23, 2018; accepted March 8, 2019. Address for correspondence: Chen-Chi Wu, Department of Otolaryngology, National Taiwan University Hospital, 7, Chung-Shan South Road, Taipei, Taiwan. E-mail: chenchiwu@ntuh.gov.tw Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
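
To make the reported model concrete, the published regression can be applied directly. Below is a minimal Python sketch of the equation from the abstract; the function name and the example patient values are illustrative, not from the article.

def predict_hearing_level(baseline_db_hl, follow_up_years):
    # Multivariate GEE model reported in the abstract:
    # predicted = 3.78 + 0.96 x baseline (dB HL) + 0.55 x follow-up (years)
    return 3.78 + 0.96 * baseline_db_hl + 0.55 * follow_up_years

# Hypothetical example: 45 dB HL at baseline, followed for 10 years
print(predict_hearing_level(45.0, 10.0))  # 52.48 dB HL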

Human Frequency Following Responses to Vocoded Speech: Amplitude Modulation Versus Amplitude Plus Frequency Modulation
Objectives: The most commonly employed speech processing strategies in cochlear implants (CIs) only extract and encode amplitude modulation (AM) in a limited number of frequency channels. Zeng et al. (2005) proposed a novel speech processing strategy that encodes both frequency modulation (FM) and AM to improve CI performance. Using behavioral tests, they reported better speech, speaker, and tone recognition with this novel strategy than with the AM-alone strategy. Here, we used the scalp-recorded human frequency following responses (FFRs) to examine the differences in the neural representation of vocoded speech sounds with AM alone and AM + FM as the spectral and temporal cues were varied. Specifically, we were interested in determining whether the addition of FM to AM improved the neural representation of envelope periodicity (FFRENV) and temporal fine structure (FFRTFS), as reflected in the temporal pattern of the phase-locked neural activity generating the FFR. Design: FFRs were recorded from 13 normal-hearing, adult listeners in response to the original unprocessed stimulus (a synthetic diphthong /au/ with a 110-Hz fundamental frequency or F0 and a 250-msec duration) and the 2-, 4-, 8-, and 16-channel sine-vocoded versions of /au/ with AM alone and AM + FM. Temporal waveforms, autocorrelation analyses, fast Fourier transforms, and stimulus-response spectral correlations were used to analyze both the strength and fidelity of the neural representation of envelope periodicity (F0) and TFS (formant structure). Results: The periodicity strength in the FFRENV decreased more for the AM stimuli than for the relatively resilient AM + FM stimuli as the number of channels was increased. Regardless of the number of channels, a clear spectral peak of FFRENV was consistently observed at the stimulus F0 for all the AM + FM stimuli but not for the AM stimuli. Neural representation as revealed by the spectral correlation of FFRTFS was better for the AM + FM stimuli when compared to the AM stimuli. Neural representation of the time-varying formant-related harmonics as revealed by the spectral correlation was also better for the AM + FM stimuli as compared to the AM stimuli. Conclusions: These results are consistent with previously reported behavioral results and suggest that the AM + FM processing strategy elicited brainstem neural activity that better preserved periodicity, temporal fine structure, and time-varying spectral information than the AM processing strategy. The relatively more robust neural representation of AM + FM stimuli observed here likely contributes to the superior performance on speech, speaker, and tone recognition with the AM + FM processing strategy. Taken together, these results suggest that neural information preserved in the FFR may be used to evaluate signal processing strategies considered for CIs. ACKNOWLEDGMENTS: The authors thank Dr. Jackson Gandour for his assistance with statistical analysis. This work was supported by the National Institutes of Health (NIH), R01 DC008549 (A. K.) and the Department of Speech, Language, and Hearing Sciences, Purdue University. The authors have no conflicts of interest to disclose. Received January 24, 2019; accepted May 15, 2019. Address for correspondence: Ananthanarayan Krishnan, PhD, Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Dr, Rm 3060, West Lafayette, IN 47907, USA. E-mail: rkrish@purdue.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
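
As an illustration of the spectral measures described above, the sketch below computes the FFRENV peak at the stimulus F0 and a stimulus-response spectral correlation. It is a minimal Python/NumPy sketch under our own assumptions (single averaged FFR and stimulus waveforms at a common sampling rate), not the authors' analysis code.

import numpy as np

def magnitude_spectrum(x, fs):
    # Normalized magnitude spectrum and its frequency axis
    mag = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, mag

def f0_peak(ffr, fs, f0=110.0):
    # Spectral amplitude of FFRENV at the stimulus F0: one simple
    # index of how well envelope periodicity is preserved
    freqs, mag = magnitude_spectrum(ffr, fs)
    return mag[np.argmin(np.abs(freqs - f0))]

def spectral_correlation(stimulus, response, fs):
    # Pearson correlation between stimulus and response magnitude
    # spectra, analogous to the fidelity measure in the abstract
    _, stim_mag = magnitude_spectrum(stimulus, fs)
    _, resp_mag = magnitude_spectrum(response, fs)
    n = min(len(stim_mag), len(resp_mag))
    return float(np.corrcoef(stim_mag[:n], resp_mag[:n])[0, 1])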

The Ototoxic Potential of Cobalt From Metal-on-Metal Hip Implants: Objective Auditory and Vestibular Outcome
Objectives: During the past decade, the initial popularity of metal-on-metal (MoM) hip implants has shown a progressive decline due to increasingly reported implant failure and revision surgeries. Local as well as systemic toxic side effects have been associated with excessive metal ion release from implants, in which cobalt (Co) plays an important role. The rare condition of systemic cobaltism seems to manifest as a clinical syndrome with cardiac, endocrine, and neurological symptoms, including hearing loss, tinnitus, and imbalance. In most cases described in the literature, revision surgery and the subsequent drop in blood Co level led to (partial) alleviation of the symptoms, suggesting a causal relationship with Co exposure. Moreover, the ototoxic potential of Co has recently been demonstrated in animal experiments. Since its ototoxic potential in humans is merely based on anecdotal case reports, the current study aimed to prospectively and objectively examine the auditory and vestibular function in patients implanted with a MoM hip prosthesis. Design: Twenty patients (15 males and 5 females, aged between 33 and 65 years) implanted with a primary MoM hip prosthesis were matched for age, gender, and noise exposure to 20 non-implanted control subjects. Each participant was subjected to an extensive auditory (conventional and high-frequency pure tone audiometry, transient evoked and distortion product otoacoustic emissions [TEOAEs and DPOAEs], auditory brainstem responses [ABR]) and vestibular test battery (cervical and ocular vestibular evoked myogenic potentials [cVEMPs and oVEMPs], rotatory test, caloric test, video head impulse test [vHIT]), supplemented with a blood sample collection to determine the plasma Co concentration. Results: The median [interquartile range] plasma Co concentration was 1.40 [0.70, 6.30] µg/L in the MoM patient group and 0.19 [0.09, 0.34] µg/L in the control group. Within the auditory test battery, a clear trend was observed toward higher audiometric thresholds (11.2 to 16 kHz), lower DPOAE (between 4 and 8 kHz), and total TEOAE (1 to 4 kHz) amplitudes, and a higher interaural latency difference for wave V of the ABR in the patient versus control group (0.01 ≤ p < 0.05). Within the vestibular test battery, considerably longer cVEMP P1 latencies, higher oVEMP amplitudes (0.01 ≤ p < 0.05), and lower asymmetry ratio of the vHIT gain (p < 0.01) were found in the MoM patients. In the patient group, no suggestive association was observed between the plasma Co level and the auditory or vestibular outcome parameters. Conclusions: The auditory results seem to reflect signs of Co-induced damage to the hearing function in the high frequencies. This corresponds to previous findings on drug-induced ototoxicity and the recent animal experiments with Co, which identified the basal cochlear outer hair cells as primary targets and indicated that the cellular mechanisms underlying the toxicity might be similar. The vestibular outcomes of the current study are inconclusive and require further elaboration, especially with respect to animal studies. The lack of a clear dose–response relationship may question the clinical relevance of our results, but recent findings in MoM hip implant patients have confirmed that this relationship can be complicated by many patient-specific factors. Supplemental digital content is available for this article. 
Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal's Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: L.L. has received funding from the Special Research Fund of Ghent University (BOF) (grant no. 01D33015) and is currently receiving funding from the Research Foundation Flanders (FWO) (grant no. 1170718N), as a predoctoral research fellow. L.L. performed the experiments, analyzed the data, and wrote the article. L.M., B.V., S.D.G., and R.V. assisted with the data analysis and critically reviewed the article during the complete writing process. C.V.D.S. and K.D.S. were responsible for the recruitment of patients with a metal-on-metal implant and reviewed the article during the complete writing process. I.D., H.K., R.L., and F.L.W. critically reviewed the article during the complete writing process. The authors have no conflict of interest to disclose. Received December 5, 2018; accepted March 31, 2019. Address for correspondence: Laura Leyssens, MSc, Department of Rehabilitation Sciences, University of Ghent, Ghent University Hospital, Corneel Heymanslaan 10, 9000 Ghent, Belgium. E-mail: Laura.Leyssens@UGent.be Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.

Test-Retest Variability in the Characteristics of Envelope Following Responses Evoked by Speech Stimuli
Objectives: The objective of the present study was to evaluate the between-session test-retest variability in the characteristics of envelope following responses (EFRs) evoked by modified natural speech stimuli in young normal-hearing adults. Design: EFRs from 22 adults were recorded in two sessions, 1 to 12 days apart. EFRs were evoked by the token /susaʃi/ (2.05 sec) presented at 65 dB SPL and recorded from the vertex referenced to the neck. The token /susaʃi/, spoken by a male with an average fundamental frequency [f0] of 98.53 Hz, was of interest because of its potential utility as an objective hearing aid outcome measure. Each vowel was modified to elicit two EFRs simultaneously by lowering the f0 in the first formant while maintaining the original f0 in the higher formants. Fricatives were amplitude-modulated at 93.02 Hz and elicited one EFR each. EFRs evoked by vowels and fricatives were estimated using a Fourier analyzer and a discrete Fourier transform, respectively. Detection of EFRs was determined by an F-test. Test-retest variability in EFR amplitude and phase coherence was quantified using correlation, repeated-measures analysis of variance, and the repeatability coefficient. The repeatability coefficient, computed as twice the standard deviation (SD) of test-retest differences, represents the ±95% limits of test-retest variation around the mean difference. Test-retest variability of EFR amplitude and phase coherence was compared using the coefficient of variation, a normalized metric, which represents the ratio of the SD of repeat measurements to their mean. Consistency in EFR detection outcomes was assessed using the test of proportions. Results: EFR amplitude and phase coherence did not vary significantly between sessions, and were significantly correlated across repeat measurements. The repeatability coefficient for EFR amplitude ranged from 38.5 nV to 45.6 nV for all stimuli, except for /ʃ/ (71.6 nV). For any given stimulus, the test-retest differences in EFR amplitude of individual participants were not correlated with their test-retest differences in noise amplitude. However, across stimuli, higher repeatability coefficients of EFR amplitude tended to occur when the group mean noise amplitude and the repeatability coefficient of noise amplitude were higher. The test-retest variability of phase coherence was comparable to that of EFR amplitude in terms of the coefficient of variation, and the repeatability coefficient varied from 0.1 to 0.2, with the highest value of 0.2 for /ʃ/. Mismatches in EFR detection outcomes occurred in 11 of 176 measurements. For each stimulus, the tests of proportions revealed a significantly higher proportion of matched detection outcomes compared to mismatches. Conclusions: Speech-evoked EFRs demonstrated reasonable repeatability across sessions. Of the eight stimuli, the shortest stimulus /ʃ/ demonstrated the largest variability in EFR amplitude and phase coherence. The test-retest variability in EFR amplitude could not be explained by test-retest differences in noise amplitude for any of the stimuli. This lack of explanation argues for other sources of variability, one possibility being the modulation of cortical contributions imposed on brainstem-generated EFRs. ACKNOWLEDGMENTS: This study was funded by a Collaborative Health Research Project grant from the Canadian Institutes of Health Research and the Natural Sciences and Engineering Research Council of Canada (grant no. 493836-2016). V.E.
designed the study, performed the experiment, analyzed data, and wrote the article. D.P. discussed study design, helped with response analysis, and edited the article. S.S. and S.A. discussed results and edited the article. The authors have no conflicts of interest to disclose. Received October 20, 2018; accepted March 8, 2019. Address for correspondence: Vijayalakshmi Easwar, 541 Waisman Center, University of Wisconsin-Madison, 1500 Highland Ave, Madison, WI 53705, USA. E-mail: veaswar@wisc.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
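
The two variability metrics defined in the abstract are simple to compute. Here is a minimal Python sketch; the amplitude values are hypothetical, for illustration only.

import numpy as np

def repeatability_coefficient(test, retest):
    # Twice the SD of test-retest differences: the +/-95% limits of
    # test-retest variation around the mean difference
    return 2.0 * np.std(np.asarray(test) - np.asarray(retest), ddof=1)

def coefficient_of_variation(values):
    # Ratio of the SD of repeat measurements to their mean
    values = np.asarray(values)
    return np.std(values, ddof=1) / np.mean(values)

# Hypothetical EFR amplitudes (nV) for one stimulus across participants
test = [120.0, 95.0, 140.0, 110.0]
retest = [118.0, 101.0, 135.0, 102.0]
print(repeatability_coefficient(test, retest))  # about 12.0 nV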

The Revised Hearing Handicap Inventory and Screening Tool Based on Psychometric Reevaluation of the Hearing Handicap Inventories for the Elderly and Adults
Objectives: The present study evaluates the items of the Hearing Handicap Inventory for the Elderly and Hearing Handicap Inventory for Adults (HHIE/A) using Mokken scale analysis (MSA), a type of nonparametric item response theory, and develops updated tools with optimal psychometric properties. Design: In a longitudinal study of age-related hearing loss, 1447 adults completed the HHIE/A and audiometric testing at baseline. Discriminant validity of the emotional consequences and social/situational effects subscales of the HHIE/A was assessed, and nonparametric item response theory was used to explore dimensionality of the items of the HHIE/A and to refine the scales. Results: The HHIE/A items form strong unidimensional scales measuring self-perceived hearing handicap, but with a lack of discriminant validity of the two distinct subscales. Two revised scales, the 18-item Revised Hearing Handicap Inventory and the 10-item Revised Hearing Handicap Inventory—Screening, were developed from the common items of the original HHIE/A that met the assumptions of MSA. The items on both of the revised scales can be ordered in terms of increasing difficulty. Conclusions: The results of the present study suggest that the newly developed Revised Hearing Handicap Inventory and Revised Hearing Handicap Inventory—Screening are strong unidimensional, clinically informative measures of self-perceived hearing handicap that can be used for adults of all ages. The real-data example also demonstrates that MSA is a valuable alternative to classical psychometric analysis. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal's Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank Jayne Ahlstrom for editorial assistance. The authors thank the subjects who participated in this study. This work was supported (in part) by research grant P50 DC000422 from NIH/NIDCD and by the South Carolina Clinical and Translational Research (SCTR) Institute, with an academic home at the Medical University of South Carolina, NIH/NCATS Grant number UL1 TR001450. This investigation was conducted in a facility constructed with support from Research Facilities Improvement Program Grant Number C06 RR14516 from the NIH/NCRR. Portions of this article were presented at the Hearing Across the Lifespan 2018 conference, Cernobbio, Lake Como, Italy, June 7, 2018. The authors have no conflicts of interest to declare. Received December 31, 2018; accepted March 25, 2019. Address for correspondence: Christy Cassarly, Department of Public Health Sciences, Medical University of South Carolina, 135 Cannon St., Ste 303, MSC 835, Charleston, SC 29425, USA. E-mail: cassarly@musc.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
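
For readers unfamiliar with MSA, its core quantity is Loevinger's scalability coefficient H, which compares observed Guttman errors against those expected if the items were independent. The sketch below computes the pairwise coefficient for dichotomous (0/1) items; the HHIE/A items are polytomous, so this is a simplified illustration of the underlying idea, not the full analysis used in the study.

import numpy as np

def loevinger_h_pair(a, b):
    # Pairwise Loevinger H for two dichotomous item score vectors:
    # H = 1 - (observed Guttman errors) / (errors expected under independence)
    a, b = np.asarray(a), np.asarray(b)
    easy, hard = (a, b) if a.mean() >= b.mean() else (b, a)
    observed = np.sum((hard == 1) & (easy == 0))   # Guttman error pattern
    expected = len(a) * hard.mean() * (1.0 - easy.mean())
    return 1.0 - observed / expected

# Hypothetical responses from six respondents to two handicap items;
# no Guttman errors here, so H = 1 (a perfect scale for this pair)
print(loevinger_h_pair([1, 1, 1, 0, 1, 0], [1, 0, 1, 0, 0, 0]))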

The Effects of GJB2 or SLC26A4 Gene Mutations on Neural Response of the Electrically Stimulated Auditory Nerve in Children
Objectives: This study aimed to (1) investigate the effect of GJB2 and SLC26A4 gene mutations on auditory nerve function in pediatric cochlear implant users and (2) compare their results with those measured in implanted children with idiopathic hearing loss. Design: Participants included 20 children with biallelic GJB2 mutations, 16 children with biallelic SLC26A4 mutations, and 19 children with idiopathic hearing loss. In the SLC26A4 group, all subjects except two had concurrent Mondini malformation and enlarged vestibular aqueduct. All subjects used Cochlear Nucleus devices in their test ears. For each subject, electrophysiological measures of the electrically evoked compound action potential (eCAP) were recorded using both anodic- and cathodic-leading biphasic pulses. Dependent variables (DVs) of interest included slope of eCAP input/output (I/O) function, the eCAP threshold, and eCAP amplitude measured at the maximum comfortable level (C level) of the anodic-leading stimulus (i.e., the anodic C level). Slopes of eCAP I/O functions were estimated using statistical modeling with a linear regression function. These DVs were measured at three electrode locations across the electrode array. Generalized linear mixed-effects models were used to evaluate the effects of study group, stimulus polarity, and electrode location on each DV. Results: Steeper slopes of eCAP I/O function, lower eCAP thresholds, and larger eCAP amplitude at the anodic C level were measured for the anodic-leading stimulus compared with the cathodic-leading stimulus in all subject groups. Children with GJB2 mutations showed steeper slopes of eCAP I/O function and larger eCAP amplitudes at the anodic C level than children with SLC26A4 mutations and children with idiopathic hearing loss for both the anodic- and cathodic-leading stimuli. In addition, children with GJB2 mutations showed a smaller increase in eCAP amplitude when the stimulus changed from the cathodic-leading pulse to the anodic-leading pulse (i.e., smaller polarity effect) than children with idiopathic hearing loss. There was no statistically significant difference between children with SLC26A4 mutations and children with idiopathic hearing loss in the slope of the eCAP I/O function, the eCAP amplitude at the anodic C level, or the size of the polarity effect on any of the three DVs. These results suggested that better auditory nerve function was associated with GJB2 but not with SLC26A4 mutations when compared with idiopathic hearing loss. In addition, significant effects of electrode location were observed for slope of eCAP I/O function and the eCAP threshold. Conclusions: GJB2 and SLC26A4 gene mutations did not alter polarity sensitivity of auditory nerve fibers to electrical stimulation. The anodic-leading stimulus was generally more effective in activating auditory nerve fibers than the cathodic-leading stimulus, despite the presence of GJB2 or SLC26A4 mutations. Patients with GJB2 mutations appeared to have better functional status of the auditory nerve than patients with SLC26A4 mutations who had concurrent Mondini malformation and enlarged vestibular aqueduct and patients with idiopathic hearing loss. ACKNOWLEDGMENTS: We gratefully thank all subjects and their parents for participating in this study. J.L. participated in data collection and patient testing, prepared the initial draft of this article, provided critical comments, and approved the final version of this article. L.X., X.C., and R.W.
participated in the data collection and patient testing, provided critical comments, and approved the final version of this article. X.B. conducted genetic tests in all study participants, provided critical comments, and approved the final version of this article. A.P. and Z.F. provided critical comments and approved the final version of this article. H.W. participated in designing this study, provided critical comments, and approved the final version of this article. S.H. designed the study, participated in data collection and patient testing, and drafted and approved the final version of this article. The authors have no conflicts of interest to declare. Received September 18, 2018; accepted March 17, 2019. Address for correspondence: Haibo Wang, Department of Otolaryngology—Head and Neck Surgery, Shandong Provincial Hospital Affiliated to Shandong University, Duanxing West Road, Huaiyin, Jinan 250022, Shandong, People's Republic of China. E-mail: Whboto11@163.com or Shuman He, Eye and Ear Institute, Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, OH 43212, USA. E-mail: shuman.he@osumc.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
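
As a concrete illustration of one dependent variable, the slope of the eCAP I/O function is the coefficient from a linear fit of eCAP amplitude against stimulus level, as stated in the abstract. A minimal Python sketch with hypothetical measurements (not data from the study):

import numpy as np

def ecap_io_slope(levels, amplitudes):
    # Slope of the eCAP input/output function: linear regression of
    # eCAP amplitude (microvolts) on stimulus level
    slope, _intercept = np.polyfit(levels, amplitudes, 1)
    return slope

# Hypothetical I/O measurements for one electrode and polarity
levels = [170.0, 175.0, 180.0, 185.0, 190.0]   # current level units
amps = [50.0, 95.0, 160.0, 210.0, 255.0]       # microvolts
print(ecap_io_slope(levels, amps))             # 10.5 uV per unit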

Use of Commercial Virtual Reality Technology to Assess Verticality Perception in Static and Dynamic Visual Backgrounds
Objectives: The Subjective Visual Vertical (SVV) test and the closely related Rod and Disk Test (RDT) are measures of perceived verticality measured in static and dynamic visual backgrounds. However, the equipment used for these tests is variable across clinics and is often too expensive or too primitive to be appropriate for widespread use. Commercial virtual reality technology, which is now widely available, may provide a more suitable alternative for collecting these measures in clinical populations. This study was designed to investigate verticality perception in symptomatic patients using a modified RDT paradigm administered through a head-mounted display (HMD). Design: A group of adult patients referred by a physician for vestibular testing based on the presence of dizziness symptoms and a group of healthy adults without dizziness symptoms were included. We investigated degree of visual dependence in both groups by measuring SVV as a function of kinematic changes to the visual background. Results: When a dynamic background was introduced into the HMD to simulate the RDT, significantly greater shifts in SVV were found for the patient population than for the control population. In patients referred for vestibular testing, the SVV measured with the HMD was significantly correlated with traditional measures of SVV collected in a rotary chair when accounting for head tilt. Conclusions: This study provides initial proof-of-concept evidence that reliable SVV measures in static and dynamic visual backgrounds can be obtained using a low-cost commercial HMD system. This initial evidence also suggests that this tool can distinguish individuals with dizziness symptomatology based on SVV performance in dynamic visual backgrounds. Acknowledgment: The work was supported by Defense Health Affairs in support of the Army Hearing Program. The views expressed in this article are those of the author and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government. The identification of specific products or scientific instrumentation does not constitute endorsement or implied endorsement on the part of the author, DoD, or any component agency. The authors have no conflicts of interest to disclose. Received March 27, 2018; accepted March 2, 2019. Address for correspondence: Ashley Zaleski-King, Walter Reed National Military Medical Center (WRNMMC), 8901 Rockville Pike, Bethesda, MD 20889, USA. E-mail: ashley.c.king8.civ@mail.mil Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.

Predicting Speech-in-Noise Deficits from the Audiogram
Objectives: In occupations that involve hearing-critical tasks, individuals need to undergo periodic hearing screenings to ensure that they have not developed hearing losses that could impair their ability to safely and effectively perform their jobs. Most periodic hearing screenings are limited to pure-tone audiograms, but in many cases, the ability to understand speech in noisy environments may be more important to functional job performance than the ability to detect quiet sounds. The ability to use audiometric threshold data to identify individuals with poor speech-in-noise performance is of particular interest to the U.S. military, which has an ongoing responsibility to ensure that its service members (SMs) have the hearing abilities they require to accomplish their mission. This work investigates the development of optimal strategies for identifying individuals with poor speech-in-noise performance from the audiogram. Design: Data from 5487 individuals were used to evaluate a range of classifiers, based exclusively on the pure-tone audiogram, for identifying individuals who have deficits in understanding speech in noise. The classifiers evaluated were based on generalized linear models (GLMs), the speech intelligibility index (SII), binary threshold criteria, and current standards used by the U.S. military. The classifiers were evaluated in a detection-theoretic framework where the sensitivity and specificity of the classifiers were quantified. In addition to the performance of these classifiers for identifying individuals with deficits in understanding speech in noise, data from 500,733 U.S. Army SMs were used to understand how the classifiers would affect the number of SMs being referred for additional testing. Results: A classifier based on binary threshold criteria that was identified through an iterative search procedure outperformed a classifier based on the SII and ones based on GLMs with large numbers of fitted parameters. This suggests that the saturating nature of the SII is important, but that the weights of frequency channels are not optimal for identifying individuals with deficits in understanding speech in noise. It is possible that a highly complicated model with many free parameters could outperform the classifiers considered here, but there was only a modest difference between the performance of a classifier based on a GLM with 26 fitted parameters and one based on a simple all-frequency pure-tone average. This suggests that the details of the audiogram are a relatively insensitive predictor of performance in speech-in-noise tasks. Conclusions: The best classifier identified in this study, which was a binary threshold classifier derived from an iterative search process, does appear to reliably outperform the current threshold criteria used by the U.S. military to identify individuals with abnormally poor speech-in-noise performance, both in terms of fewer false alarms and a greater hit rate. Substantial improvements in the ability to detect SMs with impaired speech-in-noise performance can likely only be obtained by adding some form of speech-in-noise testing to the hearing monitoring program. While the improvements were modest, the overall benefit of adopting the proposed classifier is likely substantial given the number of SMs enrolled in U.S. military hearing conservation and readiness programs. ACKNOWLEDGMENTS: The authors thank Dr. Gary Kidd for sharing his TDT data and Dr. Ken Grant for sharing his SPRINT data.
The authors also thank Kari Buchanan and the Hearing Center of Excellence for sharing the DOEHRS-HC data. All authors contributed equally to this work. All authors were involved in the data analysis and discussed the results and implications and commented on the manuscript at all stages. The views expressed in this article are those of the author and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government. The authors have no conflicts of interest to disclose. Received December 3, 2017; accepted March 14, 2019. Address for correspondence: Daniel E. Shub, National Military Audiology and Speech Center, Walter Reed National Military Medical Center, 4954 North Palmer Road, Bethesda, MD 20889, USA. E-mail: daniel.e.shub.civ@mail.mil Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
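
The binary threshold approach described above is simple to express in code. Below is a minimal Python sketch of such a classifier and its detection-theoretic evaluation; the criterion values are hypothetical placeholders, not the fitted criteria from the study.

import numpy as np

def flag_at_risk(audiogram_db, criteria_db):
    # Binary threshold classifier: refer the listener if any audiometric
    # threshold exceeds its per-frequency criterion
    return np.any(np.asarray(audiogram_db) > np.asarray(criteria_db))

def sensitivity_specificity(flags, impaired):
    # Hit rate among listeners with true speech-in-noise deficits and
    # correct-rejection rate among those without
    flags, impaired = np.asarray(flags), np.asarray(impaired)
    sensitivity = np.mean(flags[impaired])
    specificity = np.mean(~flags[~impaired])
    return float(sensitivity), float(specificity)

# Hypothetical criteria (dB HL) at 0.5, 1, 2, 3, 4, and 6 kHz
criteria = [25, 25, 30, 35, 40, 45]
print(flag_at_risk([10, 15, 25, 40, 50, 55], criteria))  # True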

Children With Normal Hearing Are Efficient Users of Fundamental Frequency and Vocal Tract Length Cues for Voice Discrimination
Background: The ability to discriminate between talkers assists listeners in understanding speech in a multitalker environment. This ability has been shown to be influenced by sensory processing of vocal acoustic cues, such as fundamental frequency (F0) and formant frequencies that reflect the talker's vocal tract length (VTL), and by cognitive processes, such as attention and memory. It is, therefore, suggested that children who exhibit immature sensory and/or cognitive processing will demonstrate poor voice discrimination (VD) compared with young adults. Moreover, greater difficulties in VD may be associated with spectral degradation as in children with cochlear implants. Objectives: The aims of this study were as follows: (1) to assess the use of F0 cues, VTL cues, and the combination of both cues for VD in normal-hearing (NH) school-age children and to compare their performance with that of NH adults; (2) to assess the influence of spectral degradation by means of vocoded speech on the use of F0 and VTL cues for VD in NH children; and (3) to assess the contribution of attention, working memory, and nonverbal reasoning to performance. Design: Forty-one children, 8 to 11 years of age, were tested with nonvocoded stimuli. Twenty-one of them were also tested with eight-channel, noise-vocoded stimuli. Twenty-one young adults (18 to 35 years) were tested for comparison. A three-interval, three-alternative forced-choice paradigm with an adaptive tracking procedure was used to estimate the difference limens (DLs) for VD when F0, VTL, and F0 + VTL were manipulated separately. Auditory memory, visual attention, and nonverbal reasoning were assessed for all participants. Results: (a) Children's F0 and VTL discrimination abilities were comparable to those of adults, suggesting that most school-age children utilize both cues effectively for VD. (b) Children's VD was associated with trail making test scores that assessed visual attention abilities and speed of processing, possibly reflecting their need to recruit cognitive resources for the task. (c) Best DLs were achieved for the combined (F0 + VTL) manipulation for both children and adults, suggesting that children at this age are already capable of integrating spectral and temporal cues. (d) Both children and adults found the VTL manipulations more beneficial for VD compared with the F0 manipulations, suggesting that formant frequencies are more reliable for identifying a specific speaker than F0. (e) Poorer DLs were achieved with the vocoded stimuli, though the children maintained thresholds and patterns of performance across manipulations similar to those of the adults. Conclusions: The present study is the first to assess the contribution of F0, VTL, and the combined F0 + VTL to the discrimination of speakers in school-age children. The findings support the notion that many NH school-age children have effective spectral and temporal coding mechanisms that allow sufficient VD, even in the presence of spectrally degraded information. These results may challenge the notion that immature sensory processing underlies poor listening abilities in children, further implying that other processing mechanisms contribute to their difficulties in understanding speech in a multitalker environment. These outcomes may also provide insight into VD processes of children under listening conditions that are similar to cochlear implant users.
ACKNOWLEDGMENTS: The authors wish to acknowledge the contribution of the following undergraduate students from the Department of Communication Disorders at Tel Aviv University for assisting in data collection: Feigi Raiter, Feigi Grinvald, Shani Rabia, Adi Amsalem, Miri Rotem, Lea Pantiat, Daniel Lex Rabinovitch, and Orpaz Shariki. The authors wish to thank the Steyer Grant (School of Health Professions, Tel Aviv University) for financial support. The authors especially thank all the adults and children who participated in the present study. All authors contributed to this work to a significant extent. All authors have read the article and agreed to submit it for publication after discussing the results and implications and commented on the article at all stages. All authors are, therefore, responsible for the reported research and have approved the final article as submitted. The authors have no conflicts of interest to declare. Received September 30, 2018; accepted March 17, 2019. Address for correspondence: Yael Zaltz, Department of Communication Disorders, The Stanley Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel. E-mail: yaelzaltz@gmail.com Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
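
The abstract does not give the exact adaptive rule, so the sketch below uses a common 2-down/1-up staircase (converging near 70.7% correct) to show how a difference limen can be estimated in a three-alternative task; the step size, starting value, and simulated listener are all our own assumptions, not the authors' procedure.

import random

def staircase_dl(respond, start=12.0, step=2.0, reversals_needed=8):
    # 2-down/1-up adaptive track: two correct responses shrink the cue
    # difference, one error enlarges it; the DL estimate is the mean of
    # the final reversal points
    delta, run, direction, reversals = start, 0, None, []
    while len(reversals) < reversals_needed:
        if respond(delta):                    # correct 3AFC response
            run += 1
            if run == 2:
                run = 0
                if direction == "up":
                    reversals.append(delta)
                direction = "down"
                delta = max(delta - step, 0.1)
        else:                                 # incorrect response
            run = 0
            if direction == "down":
                reversals.append(delta)
            direction = "up"
            delta += step
    points = reversals[-6:]
    return sum(points) / len(points)

# Simulated listener: reliable above a true DL of 5 units, near chance below
respond = lambda d: random.random() < (0.95 if d > 5.0 else 1.0 / 3.0)
print(staircase_dl(respond))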

Switching Streams Across Ears to Evaluate Informational Masking of Speech-on-Speech
Objectives: This study aimed to evaluate the informational component of speech-on-speech masking. Speech perception in the presence of a competing talker involves not only informational masking (IM) but also a number of masking processes involving interaction of masker and target energy in the auditory periphery. Such peripherally generated masking can be eliminated by presenting the target and masker in opposite ears (dichotically). However, this also reduces IM by providing listeners with lateralization cues that support spatial release from masking (SRM). In tonal sequences, IM can be isolated by rapidly switching the lateralization of dichotic target and masker streams across the ears, presumably producing ambiguous spatial percepts that interfere with SRM. However, it is not clear whether this technique works with speech materials. Design: Speech reception thresholds (SRTs) were measured in 17 young normal-hearing adults for sentences produced by a female talker in the presence of a competing male talker under three different conditions: diotic (target and masker in both ears), dichotic, and dichotic but switching the target and masker streams across the ears. Because switching rate and signal coherence were expected to influence the amount of IM observed, these two factors varied across conditions. When switches occurred, they were either at word boundaries or periodically (every 116 msec) and either with or without a brief gap (84 msec) at every switch point. In addition, SRTs were measured in a quiet condition to rule out audibility as a limiting factor. Results: SRTs were poorer for the four switching dichotic conditions than for the nonswitching dichotic condition, but better than for the diotic condition. Periodic switches without gaps resulted in the worst SRTs compared to the other switch conditions, thus maximizing IM. Conclusions: These findings suggest that periodically switching the target and masker streams across the ears (without gaps) was the most efficient in disrupting SRM. Thus, this approach can be used in experiments that seek a relatively pure measure of IM, and could be readily extended to translational research. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal's Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank Rachel Ellinger and Andrea Cunningham for their help with data collection. This work was supported by NIH R01 DC 60014 grant awarded to P. S., and an iCARE ITN (FP7-607139) European fellowship to A. C. The authors have no conflict of interest to disclose. Received June 4, 2018; accepted March 17, 2019. Address for correspondence: Axelle Calcus, Ecole Normale Supérieure, 29 rue d'Ulm, 75005 Paris, France. E-mail: axelle.calcus@ens.fr Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
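
To make the switching manipulation concrete, here is a minimal Python/NumPy sketch that builds the periodic no-gap condition by swapping the target and masker across ears every 116 msec. Signal names and sampling-rate handling are our own assumptions; the gap conditions would additionally insert 84 msec of silence at each switch point.

import numpy as np

def switch_streams(target, masker, fs, period_ms=116.0):
    # Stereo (2 x N) signal in which target and masker trade ears at a
    # fixed period, as in the periodic, no-gap switching condition
    n = min(len(target), len(masker))
    t = np.asarray(target[:n], dtype=float)
    m = np.asarray(masker[:n], dtype=float)
    left, right = t.copy(), m.copy()
    period = int(round(fs * period_ms / 1000.0))
    for start in range(period, n, 2 * period):   # swap every other segment
        seg = slice(start, min(start + period, n))
        left[seg], right[seg] = m[seg], t[seg]
    return np.vstack([left, right])

# Example with 1 sec of noise placeholders at 44.1 kHz
fs = 44100
stereo = switch_streams(np.random.randn(fs), np.random.randn(fs), fs)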
