Monday, May 27, 2019

Ear and Hearing

Test-Retest Variability in the Characteristics of Envelope Following Responses Evoked by Speech Stimuli
Objectives: The objective of the present study was to evaluate the between-session test-retest variability in the characteristics of envelope following responses (EFRs) evoked by modified natural speech stimuli in young normal-hearing adults. Design: EFRs from 22 adults were recorded in two sessions, 1 to 12 days apart. EFRs were evoked by the token /susaʃi/ (2.05 sec) presented at 65 dB SPL and recorded from the vertex referenced to the neck. The token /susaʃi/, spoken by a male with an average fundamental frequency [f0] of 98.53 Hz, was of interest because of its potential utility as an objective hearing aid outcome measure. Each vowel was modified to elicit two EFRs simultaneously by lowering the f0 in the first formant while maintaining the original f0 in the higher formants. Fricatives were amplitude-modulated at 93.02 Hz and elicited one EFR each. EFRs evoked by vowels and fricatives were estimated using a Fourier analyzer and a discrete Fourier transform, respectively. Detection of EFRs was determined by an F-test. Test-retest variability in EFR amplitude and phase coherence was quantified using correlation, repeated-measures analysis of variance, and the repeatability coefficient. The repeatability coefficient, computed as twice the standard deviation (SD) of test-retest differences, represents the ±95% limits of test-retest variation around the mean difference. Test-retest variability of EFR amplitude and phase coherence was compared using the coefficient of variation, a normalized metric that represents the ratio of the SD of repeat measurements to their mean. Consistency in EFR detection outcomes was assessed using the test of proportions. Results: EFR amplitude and phase coherence did not vary significantly between sessions and were significantly correlated across repeat measurements. The repeatability coefficient for EFR amplitude ranged from 38.5 nV to 45.6 nV for all stimuli except /ʃ/ (71.6 nV). For any given stimulus, the test-retest differences in EFR amplitude of individual participants were not correlated with their test-retest differences in noise amplitude. However, across stimuli, higher repeatability coefficients of EFR amplitude tended to occur when the group mean noise amplitude and the repeatability coefficient of noise amplitude were higher. The test-retest variability of phase coherence was comparable to that of EFR amplitude in terms of the coefficient of variation, and the repeatability coefficient varied from 0.1 to 0.2, with the highest value of 0.2 for /ʃ/. Mismatches in EFR detection outcomes occurred in 11 of 176 measurements. For each stimulus, the test of proportions revealed a significantly higher proportion of matched detection outcomes compared to mismatches. Conclusions: Speech-evoked EFRs demonstrated reasonable repeatability across sessions. Of the eight stimuli, the shortest stimulus /ʃ/ demonstrated the largest variability in EFR amplitude and phase coherence. The test-retest variability in EFR amplitude could not be explained by test-retest differences in noise amplitude for any of the stimuli. This argues for other sources of variability, one possibility being the modulation of cortical contributions imposed on brainstem-generated EFRs. ACKNOWLEDGMENTS: This study was funded by a Collaborative Health Research Project grant from the Canadian Institutes of Health Research and the Natural Sciences and Engineering Research Council of Canada (grant no. 493836-2016). V.E. 
designed the study, performed the experiment, analyzed data, and wrote the article. D.P. discussed study design, helped with response analysis, and edited the article. S.S. and S.A. discussed results and edited the article. The authors have no conflicts of interest to disclose. Received October 20, 2018; accepted March 8, 2019. Address for correspondence: Vijayalakshmi Easwar, 541 Waisman Centre, The University of Wisconsin-Madison, 1500 Highland Ave, Madison, WI 53705, USA. E-mail: veaswar@wisc.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
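
The two variability metrics above lend themselves to a short worked example. The following Python sketch uses hypothetical amplitude values (not data from the study) to show how the repeatability coefficient (twice the SD of the test-retest differences) and the coefficient of variation (SD of the repeat measurements divided by their mean) are computed.

```python
import numpy as np

# Hypothetical EFR amplitudes (nV) for the same participants in two sessions.
session1 = np.array([120.0, 95.0, 140.0, 110.0, 88.0])
session2 = np.array([128.0, 90.0, 133.0, 118.0, 92.0])

# Repeatability coefficient: twice the SD of the test-retest differences,
# i.e., the +/-95% limits of variation around the mean difference.
diffs = session2 - session1
repeatability_coefficient = 2 * np.std(diffs, ddof=1)

# Coefficient of variation: SD of the repeat measurements divided by their mean,
# computed per participant and then averaged across participants.
pairs = np.stack([session1, session2], axis=1)
cv_per_subject = pairs.std(axis=1, ddof=1) / pairs.mean(axis=1)
mean_cv = cv_per_subject.mean()

print(f"Repeatability coefficient: {repeatability_coefficient:.1f} nV")
print(f"Mean coefficient of variation: {mean_cv:.3f}")
```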

The Revised Hearing Handicap Inventory and Screening Tool Based on Psychometric Reevaluation of the Hearing Handicap Inventories for the Elderly and Adults
Objectives: The present study evaluates the items of the Hearing Handicap Inventory for the Elderly and Hearing Handicap Inventory for Adults (HHIE/A) using Mokken scale analysis (MSA), a type of nonparametric item response theory, and develops updated tools with optimal psychometric properties. Design: In a longitudinal study of age-related hearing loss, 1447 adults completed the HHIE/A and audiometric testing at baseline. Discriminant validity of the emotional consequences and social/situational effects subscales of the HHIE/A was assessed, and nonparametric item response theory was used to explore dimensionality of the items of the HHIE/A and to refine the scales. Results: The HHIE/A items form strong unidimensional scales measuring self-perceived hearing handicap, but with a lack of discriminant validity of the two distinct subscales. Two revised scales, the 18-item Revised Hearing Handicap Inventory and the 10-item Revised Hearing Handicap Inventory—Screening, were developed from the common items of the original HHIE/A that met the assumptions of MSA. The items on both of the revised scales can be ordered in terms of increasing difficulty. Conclusions: The results of the present study suggest that the newly developed Revised Hearing Handicap Inventory and Revised Hearing Handicap Inventory—Screening are strong unidimensional, clinically informative measures of self-perceived hearing handicap that can be used for adults of all ages. The real-data example also demonstrates that MSA is a valuable alternative to classical psychometric analysis. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal's Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank Jayne Ahlstrom for editorial assistance. The authors thank the subjects who participated in this study. This work was supported (in part) by research grant P50 DC000422 from NIH/NIDCD and by the South Carolina Clinical and Translational Research (SCTR) Institute, with an academic home at the Medical University of South Carolina, NIH/NCATS Grant number UL1 TR001450. This investigation was conducted in a facility constructed with support from Research Facilities Improvement Program Grant Number C06 RR14516 from the NIH/NCRR. Portions of this article were presented at the Hearing Across the Lifespan 2018 conference, Cernobbio, Lake Como, Italy, June 7, 2018. The authors have no conflicts of interest to declare. Received December 31, 2018; accepted March 25, 2019. Address for correspondence: Christy Cassarly, Department of Public Health Sciences, Medical University of South Carolina, 135 Cannon St., Ste 303, MSC 835, Charleston, SC 29425, USA. E-mail: cassarly@musc.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
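
For readers unfamiliar with Mokken scale analysis, its core quantity is Loevinger's scalability coefficient H; by convention, H of at least 0.3 is usually required for items to form a Mokken scale. The sketch below computes pairwise H for simulated dichotomous items. This is only a simplified illustration: the HHIE/A items are polytomous and the study used a full MSA workflow, so the data and the dichotomous simplification here are assumptions.

```python
import numpy as np

def loevinger_h_pair(x_i, x_j):
    """Pairwise Loevinger scalability coefficient for two dichotomous items.

    H_ij = cov(X_i, X_j) / cov_max(X_i, X_j), where cov_max is the largest
    covariance attainable given the item marginals.
    """
    p_i, p_j = x_i.mean(), x_j.mean()
    cov = np.cov(x_i, x_j, ddof=0)[0, 1]
    cov_max = min(p_i, p_j) - p_i * p_j
    return cov / cov_max

# Hypothetical 0/1 responses (rows = respondents, columns = items) generated
# from a single latent trait, so the items should scale together.
rng = np.random.default_rng(0)
ability = rng.normal(size=200)
difficulty = np.array([-1.0, 0.0, 1.0])
responses = (ability[:, None] + rng.normal(size=(200, 3)) > difficulty).astype(int)

for a in range(3):
    for b in range(a + 1, 3):
        h = loevinger_h_pair(responses[:, a], responses[:, b])
        print(f"H({a},{b}) = {h:.2f}")
```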

The Effects of GJB2 or SLC26A4 Gene Mutations on Neural Response of the Electrically Stimulated Auditory Nerve in Children
Objectives: This study aimed to (1) investigate the effect of GJB2 and SLC26A4 gene mutations on auditory nerve function in pediatric cochlear implant users and (2) compare their results with those measured in implanted children with idiopathic hearing loss. Design: Participants included 20 children with biallelic GJB2 mutations, 16 children with biallelic SLC26A4 mutations, and 19 children with idiopathic hearing loss. All but two of the subjects in the SLC26A4 group had concurrent Mondini malformation and enlarged vestibular aqueduct. All subjects used Cochlear Nucleus devices in their test ears. For each subject, electrophysiological measures of the electrically evoked compound action potential (eCAP) were recorded using both anodic- and cathodic-leading biphasic pulses. Dependent variables (DVs) of interest included slope of eCAP input/output (I/O) function, the eCAP threshold, and eCAP amplitude measured at the maximum comfortable level (C level) of the anodic-leading stimulus (i.e., the anodic C level). Slopes of eCAP I/O functions were estimated using statistical modeling with a linear regression function. These DVs were measured at three electrode locations across the electrode array. Generalized linear mixed-effects models were used to evaluate the effects of study group, stimulus polarity, and electrode location on each DV. Results: Steeper slopes of eCAP I/O function, lower eCAP thresholds, and larger eCAP amplitude at the anodic C level were measured for the anodic-leading stimulus compared with the cathodic-leading stimulus in all subject groups. Children with GJB2 mutations showed steeper slopes of eCAP I/O function and larger eCAP amplitudes at the anodic C level than children with SLC26A4 mutations and children with idiopathic hearing loss for both the anodic- and cathodic-leading stimuli. In addition, children with GJB2 mutations showed a smaller increase in eCAP amplitude when the stimulus changed from the cathodic-leading pulse to the anodic-leading pulse (i.e., a smaller polarity effect) than children with idiopathic hearing loss. There was no statistically significant difference in slope of eCAP I/O function, eCAP amplitude at the anodic C level, or the size of the polarity effect on all three DVs between children with SLC26A4 mutations and children with idiopathic hearing loss. These results suggested that better auditory nerve function was associated with GJB2 but not with SLC26A4 mutations when compared with idiopathic hearing loss. In addition, significant effects of electrode location were observed for slope of eCAP I/O function and the eCAP threshold. Conclusions: GJB2 and SLC26A4 gene mutations did not alter polarity sensitivity of auditory nerve fibers to electrical stimulation. The anodic-leading stimulus was generally more effective in activating auditory nerve fibers than the cathodic-leading stimulus, despite the presence of GJB2 or SLC26A4 mutations. Patients with GJB2 mutations appeared to have better functional status of the auditory nerve than patients with SLC26A4 mutations who had concurrent Mondini malformation and enlarged vestibular aqueduct and patients with idiopathic hearing loss. ACKNOWLEDGMENTS: We gratefully thank all subjects and their parents for participating in this study. J.L. participated in data collection and patient testing, prepared the initial draft of this article, provided critical comments, and approved the final version of this article. L.X., X.C., and R.W. 
participated in the data collection and patient testing, provided critical comments, and approved the final version of this article. X.B. conducted genetic tests in all study participants, provided critical comments, and approved the final version of this article. A.P. and Z.F. provided critical comments and approved the final version of this article. H.W. participated in designing this study, provided critical comments, and approved the final version of this article. S.H. designed the study, participated in data collection and patient testing, and drafted and approved the final version of this article. The authors have no conflicts of interest to declare. Received September 18, 2018; accepted March 17, 2019. Address for correspondence: Haibo Wang, Department of Otolaryngology—Head and Neck Surgery, Shandong Provincial Hospital Affiliated to Shandong University, Duanxing West Road, Huaiyin, Jinan 250022, Shandong, People's Republic of China. E-mail: Whboto11@163.Com or Shuman He, Eye and Ear Institute, Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, OH 43212, USA. E-mail: shuman.he@osumc.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
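
As a simple illustration of how the slope of an eCAP input/output function can be estimated with a linear regression, the sketch below fits a line to hypothetical amplitude-versus-level data; the levels, amplitudes, and units are invented for the example and are not values from the study.

```python
import numpy as np

# Hypothetical eCAP input/output function: stimulation levels (current-level
# steps) and the corresponding eCAP amplitudes (microvolts) for one electrode.
levels = np.array([170, 180, 190, 200, 210], dtype=float)
amplitudes = np.array([15.0, 55.0, 110.0, 160.0, 220.0])  # uV

# Slope of the eCAP I/O function estimated with a simple linear regression,
# analogous to the linear-function fit described in the abstract.
slope, intercept = np.polyfit(levels, amplitudes, deg=1)
print(f"eCAP I/O slope: {slope:.2f} uV per current-level step")
```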

Use of Commercial Virtual Reality Technology to Assess Verticality Perception in Static and Dynamic Visual Backgrounds
Objectives: The Subjective Visual Vertical (SVV) test and the closely related Rod and Disk Test (RDT) are measures of perceived verticality obtained in static and dynamic visual backgrounds. However, the equipment used for these tests is variable across clinics and is often too expensive or too primitive to be appropriate for widespread use. Commercial virtual reality technology, which is now widely available, may provide a more suitable alternative for collecting these measures in clinical populations. This study was designed to investigate verticality perception in symptomatic patients using a modified RDT paradigm administered through a head-mounted display (HMD). Design: A group of adult patients referred by a physician for vestibular testing based on the presence of dizziness symptoms and a group of healthy adults without dizziness symptoms were included. We investigated the degree of visual dependence in both groups by measuring SVV as a function of kinematic changes to the visual background. Results: When a dynamic background was introduced into the HMD to simulate the RDT, significantly greater shifts in SVV were found for the patient population than for the control population. In patients referred for vestibular testing, the SVV measured with the HMD was significantly correlated with traditional measures of SVV collected in a rotary chair when accounting for head tilt. Conclusions: This study provides initial proof-of-concept evidence that reliable SVV measures in static and dynamic visual backgrounds can be obtained using a low-cost commercial HMD system. This initial evidence also suggests that this tool can distinguish individuals with dizziness symptomatology based on SVV performance in dynamic visual backgrounds. ACKNOWLEDGMENTS: The work was supported by Defense Health Affairs in support of the Army Hearing Program. The views expressed in this article are those of the author and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government. The identification of specific products or scientific instrumentation does not constitute endorsement or implied endorsement on the part of the author, DoD, or any component agency. The authors have no conflicts of interest to disclose. Received March 27, 2018; accepted March 2, 2019. Address for correspondence: Ashley Zaleski-King, Walter Reed National Military Medical Center (WRNMMC), 8901 Rockville Pike, Bethesda, MD 20889, USA. E-mail: ashley.c.king8.civ@mail.mil Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
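
A minimal sketch of how visual dependence can be summarized from rod-setting data of the kind collected here: the SVV shift is taken as the difference between the mean rod settings under the dynamic and static backgrounds. The trial values below are hypothetical, and the study's exact scoring may differ.

```python
import numpy as np

# Hypothetical rod settings (degrees from true vertical; positive = clockwise)
# collected with the head-mounted display under static and rotating backgrounds.
static_trials = np.array([0.5, -1.0, 1.2, 0.3, -0.4])
dynamic_cw_trials = np.array([6.5, 7.8, 5.9, 8.2, 7.1])  # background rotating clockwise

svv_static = static_trials.mean()
svv_dynamic = dynamic_cw_trials.mean()

# Visual dependence summarized as the SVV shift induced by the moving
# background relative to the static baseline.
svv_shift = svv_dynamic - svv_static
print(f"Static SVV: {svv_static:.1f} deg, dynamic SVV: {svv_dynamic:.1f} deg, "
      f"shift: {svv_shift:.1f} deg")
```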

Predicting Speech-in-Noise Deficits from the Audiogram
Objectives: In occupations that involve hearing-critical tasks, individuals need to undergo periodic hearing screenings to ensure that they have not developed hearing losses that could impair their ability to safely and effectively perform their jobs. Most periodic hearing screenings are limited to pure-tone audiograms, but in many cases, the ability to understand speech in noisy environments may be more important to functional job performance than the ability to detect quiet sounds. The ability to use audiometric threshold data to identify individuals with poor speech-in-noise performance is of particular interest to the U.S. military, which has an ongoing responsibility to ensure that its service members (SMs) have the hearing abilities they require to accomplish their mission. This work investigates the development of optimal strategies for identifying individuals with poor speech-in-noise performance from the audiogram. Design: Data from 5487 individuals were used to evaluate a range of classifiers, based exclusively on the pure-tone audiogram, for identifying individuals who have deficits in understanding speech in noise. The classifiers evaluated were based on generalized linear models (GLMs), the speech intelligibility index (SII), binary threshold criteria, and current standards used by the U.S. military. The classifiers were evaluated in a detection-theoretic framework where the sensitivity and specificity of the classifiers were quantified. In addition to the performance of these classifiers for identifying individuals with deficits in understanding speech in noise, data from 500,733 U.S. Army SMs were used to understand how the classifiers would affect the number of SMs being referred for additional testing. Results: A classifier based on binary threshold criteria that was identified through an iterative search procedure outperformed a classifier based on the SII and ones based on GLMs with large numbers of fitted parameters. This suggests that the saturating nature of the SII is important, but that the weights of frequency channels are not optimal for identifying individuals with deficits in understanding speech in noise. It is possible that a highly complicated model with many free parameters could outperform the classifiers considered here, but there was only a modest difference between the performance of a classifier based on a GLM with 26 fitted parameters and one based on a simple all-frequency pure-tone average. This suggests that the details of the audiogram are a relatively insensitive predictor of performance in speech-in-noise tasks. Conclusions: The best classifier identified in this study, which was a binary threshold classifier derived from an iterative search process, does appear to reliably outperform the current threshold criteria used by the U.S. military to identify individuals with abnormally poor speech-in-noise performance, both in terms of fewer false alarms and a greater hit rate. Substantial improvements in the ability to detect SMs with impaired speech-in-noise performance can likely only be obtained by adding some form of speech-in-noise testing to the hearing monitoring program. While the improvements were modest, the overall benefit of adopting the proposed classifier is likely substantial given the number of SMs enrolled in U.S. military hearing conservation and readiness programs. ACKNOWLEDGMENTS: The authors thank Dr. Gary Kidd for sharing his TDT data and Dr. Ken Grant for sharing his SPRINT data. 
The authors also thank Kari Buchanan and the Hearing Center of Excellence for sharing the DOEHRS-HC data. All authors contributed equally to this work. All authors were involved in the data analysis, discussed the results and implications, and commented on the manuscript at all stages. The views expressed in this article are those of the author and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government. The authors have no conflicts of interest to disclose. Received December 3, 2017; accepted March 14, 2019. Address for correspondence: Daniel E. Shub, National Military Audiology and Speech Center, Walter Reed National Military Medical Center, 4954 North Palmer Road, Bethesda, MD 20889, USA. E-mail: daniel.e.shub.civ@mail.mil Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
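
To make the detection-theoretic framing concrete, the sketch below applies a simple binary-threshold classifier to simulated audiograms and computes its hit and false-alarm rates. The criterion value, the simulated data, and the label definition are illustrative assumptions, not the classifiers or datasets evaluated in the study.

```python
import numpy as np

# Hypothetical data: audiometric thresholds (dB HL) at a few frequencies for
# each listener, and a boolean label marking poor speech-in-noise performance.
rng = np.random.default_rng(1)
thresholds = rng.normal(loc=25, scale=15, size=(1000, 4))     # e.g., 0.5, 1, 2, 4 kHz
poor_sin = (thresholds.mean(axis=1) + rng.normal(0, 10, 1000)) > 40

# A binary-threshold classifier of the kind evaluated in the abstract:
# flag a listener if the threshold at any tested frequency exceeds a criterion.
criterion_db = 35.0
flagged = (thresholds > criterion_db).any(axis=1)

# Detection-theoretic summary: hit rate and false-alarm rate.
hit_rate = np.sum(flagged & poor_sin) / np.sum(poor_sin)
false_alarm_rate = np.sum(flagged & ~poor_sin) / np.sum(~poor_sin)
print(f"Hit rate: {hit_rate:.2f}, false-alarm rate: {false_alarm_rate:.2f}")
```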

Children With Normal Hearing Are Efficient Users of Fundamental Frequency and Vocal Tract Length Cues for Voice Discrimination
Background: The ability to discriminate between talkers assists listeners in understanding speech in a multitalker environment. This ability has been shown to be influenced by sensory processing of vocal acoustic cues, such as fundamental frequency (F0) and formant frequencies that reflect the speaker's vocal tract length (VTL), and by cognitive processes, such as attention and memory. It is, therefore, suggested that children who exhibit immature sensory and/or cognitive processing will demonstrate poor voice discrimination (VD) compared with young adults. Moreover, greater difficulties in VD may be associated with spectral degradation as in children with cochlear implants. Objectives: The aims of this study were as follows: (1) to assess the use of F0 cues, VTL cues, and the combination of both cues for VD in normal-hearing (NH) school-age children and to compare their performance with that of NH adults; (2) to assess the influence of spectral degradation by means of vocoded speech on the use of F0 and VTL cues for VD in NH children; and (3) to assess the contribution of attention, working memory, and nonverbal reasoning to performance. Design: Forty-one children, 8 to 11 years of age, were tested with nonvocoded stimuli. Twenty-one of them were also tested with eight-channel, noise-vocoded stimuli. Twenty-one young adults (18 to 35 years) were tested for comparison. A three-interval, three-alternative forced-choice paradigm with an adaptive tracking procedure was used to estimate the difference limens (DLs) for VD when F0, VTL, and F0 + VTL were manipulated separately. Auditory memory, visual attention, and nonverbal reasoning were assessed for all participants. Results: (a) Children's F0 and VTL discrimination abilities were comparable to those of adults, suggesting that most school-age children utilize both cues effectively for VD. (b) Children's VD was associated with Trail Making Test scores that assessed visual attention abilities and speed of processing, possibly reflecting their need to recruit cognitive resources for the task. (c) Best DLs were achieved for the combined (F0 + VTL) manipulation for both children and adults, suggesting that children at this age are already capable of integrating spectral and temporal cues. (d) Both children and adults found the VTL manipulations more beneficial for VD compared with the F0 manipulations, suggesting that formant frequencies are more reliable for identifying a specific speaker than F0. (e) Poorer DLs were achieved with the vocoded stimuli, though the children maintained thresholds and a pattern of performance across manipulations similar to those of the adults. Conclusions: The present study is the first to assess the contribution of F0, VTL, and the combined F0 + VTL to the discrimination of speakers in school-age children. The findings support the notion that many NH school-age children have effective spectral and temporal coding mechanisms that allow sufficient VD, even in the presence of spectrally degraded information. These results may challenge the notion that immature sensory processing underlies poor listening abilities in children, further implying that other processing mechanisms contribute to their difficulties in understanding speech in a multitalker environment. These outcomes may also provide insight into VD processes of children under listening conditions similar to those of cochlear implant users. 
ACKNOWLEDGMENTS: The authors wish to acknowledge the contribution of the following undergraduate students from the Department of Communication Disorders at Tel Aviv University for assisting in data collection: Feigi Raiter, Feigi Grinvald, Shani Rabia, Adi Amsalem, Miri Rotem, Lea Pantiat, Daniel Lex Rabinovitch, and Orpaz Shariki. The authors wish to thank the Steyer grant (School of Health Professions, Tel Aviv University) for financial support. The authors especially thank all the adults and children who participated in the present study. All authors contributed to this work to a significant extent. All authors read the article, discussed the results and implications, commented on it at all stages, and agreed to submit it for publication. All authors are, therefore, responsible for the reported research and have approved the final article as submitted. The authors have no conflicts of interest to declare. Received September 30, 2018; accepted March 17, 2019. Address for correspondence: Yael Zaltz, Department of Communication Disorders, The Stanley Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel. E-mail: yaelzaltz@gmail.com Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
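
The difference limens were estimated with an adaptive tracking procedure; the specific rule is not stated in the abstract, so the sketch below assumes a common 2-down, 1-up staircase with a simulated three-alternative listener purely for illustration. The psychometric function, step size, and stopping rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated 3AFC adaptive track for a voice-discrimination difference limen.
true_dl = 4.0          # cue difference (e.g., semitones) where the simulated listener is reliable
delta = 12.0           # starting cue difference
step = 2.0
correct_in_a_row = 0
reversals, last_direction = [], None

while len(reversals) < 8:
    # Probability correct grows with the cue difference; 1/3 is the guessing
    # rate for a three-alternative task.
    p_correct = 1/3 + (2/3) / (1 + np.exp(-(delta - true_dl)))
    if rng.random() < p_correct:
        correct_in_a_row += 1
        if correct_in_a_row < 2:
            continue                      # wait for the second correct response
        correct_in_a_row = 0
        direction = "down"
        delta = max(delta - step, 0.25)   # 2-down: make the task harder
    else:
        correct_in_a_row = 0
        direction = "up"
        delta += step                     # 1-up: make the task easier
    if last_direction and direction != last_direction:
        reversals.append(delta)           # record the cue difference at each reversal
    last_direction = direction

# Difference limen estimated as the mean of the last few reversal points.
print(f"Estimated DL: {np.mean(reversals[-6:]):.2f}")
```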

Switching Streams Across Ears to Evaluate Informational Masking of Speech-on-Speech
Objectives: This study aimed to evaluate the informational component of speech-on-speech masking. Speech perception in the presence of a competing talker involves not only informational masking (IM) but also a number of masking processes involving interaction of masker and target energy in the auditory periphery. Such peripherally generated masking can be eliminated by presenting the target and masker in opposite ears (dichotically). However, this also reduces IM by providing listeners with lateralization cues that support spatial release from masking (SRM). In tonal sequences, IM can be isolated by rapidly switching the lateralization of dichotic target and masker streams across the ears, presumably producing ambiguous spatial percepts that interfere with SRM. However, it is not clear whether this technique works with speech materials. Design: Speech reception thresholds (SRTs) were measured in 17 young normal-hearing adults for sentences produced by a female talker in the presence of a competing male talker under three different conditions: diotic (target and masker in both ears), dichotic, and dichotic but switching the target and masker streams across the ears. Because switching rate and signal coherence were expected to influence the amount of IM observed, these two factors varied across conditions. When switches occurred, they were either at word boundaries or periodically (every 116 msec) and either with or without a brief gap (84 msec) at every switch point. In addition, SRTs were measured in a quiet condition to rule out audibility as a limiting factor. Results: SRTs were poorer for the four switching dichotic conditions than for the nonswitching dichotic condition, but better than for the diotic condition. Periodic switches without gaps resulted in the worst SRTs compared to the other switch conditions, thus maximizing IM. Conclusions: These findings suggest that periodically switching the target and masker streams across the ears (without gaps) was the most effective in disrupting SRM. Thus, this approach can be used in experiments that seek a relatively pure measure of IM, and could be readily extended to translational research. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal's Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank Rachel Ellinger and Andrea Cunningham for their help with data collection. This work was supported by NIH grant R01 DC 60014 awarded to P.S., and an iCARE ITN (FP7-607139) European fellowship to A.C. The authors have no conflicts of interest to disclose. Received June 4, 2018; accepted March 17, 2019. Address for correspondence: Axelle Calcus, Ecole Normale Supérieure, 29 rue d'Ulm, 75005 Paris, France. E-mail: axelle.calcus@ens.fr Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
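
A rough sketch of the switching manipulation described above: the target and masker are swapped between the two channels at a fixed period, optionally with a silent gap at each switch point. How the gap is inserted, the lack of onset/offset ramping, and the noise stand-ins for the two talkers are assumptions for illustration only, not the study's actual stimulus generation.

```python
import numpy as np

def switched_dichotic(target, masker, fs, switch_ms=116.0, gap_ms=0.0):
    """Build a two-channel signal in which the target and masker swap ears at a
    fixed period, optionally leaving a brief silent gap before each switch."""
    n = min(len(target), len(masker))
    left, right = np.zeros(n), np.zeros(n)
    period = int(round(switch_ms * fs / 1000))
    gap = int(round(gap_ms * fs / 1000))
    for start in range(0, n, period):
        stop = min(start + period, n)
        segment = slice(start, max(start, stop - gap))  # silent gap, if requested
        if (start // period) % 2 == 0:
            left[segment], right[segment] = target[segment], masker[segment]
        else:
            left[segment], right[segment] = masker[segment], target[segment]
    return np.stack([left, right], axis=1)

# Example with 1 s of noise standing in for the target and masker talkers.
fs = 44100
rng = np.random.default_rng(3)
stereo = switched_dichotic(rng.normal(size=fs), rng.normal(size=fs), fs,
                           switch_ms=116.0, gap_ms=84.0)
print(stereo.shape)  # (44100, 2)
```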

Genetic Inheritance of Late-Onset, Down-Sloping Hearing Loss and Its Implications for Auditory Rehabilitation
Objectives: Late-onset, down-sloping sensorineural hearing loss has many genetic and nongenetic etiologies, but the proportion of this commonly encountered type of hearing loss attributable to genetic causes is not well known. In this study, the authors performed genetic analysis using next-generation sequencing techniques in patients showing late-onset, down-sloping sensorineural hearing loss with preserved low-frequency hearing, and investigated the clinical implications of the variants identified. Design: From a cohort of patients with hearing loss at a tertiary referral hospital, 18 unrelated probands with down-sloping sensorineural hearing loss of late onset were included in this study. Down-sloping hearing loss was defined as a mean low-frequency threshold at 250 Hz and 500 Hz less than or equal to 40 dB HL and a mean high-frequency threshold at 1, 2, and 4 kHz greater than 40 dB HL. The authors performed whole-exome sequencing and segregation analysis to identify the genetic causes and evaluated the outcomes of auditory rehabilitation in the patients. Results: Nine simplex and nine multiplex families were included, and causative variants were found in six of the 18 probands, demonstrating a detection rate of 33.3%. Various types of variants, including five novel and three known variants, were detected in the MYH14, MYH9, USH2A, COL11A2, and TMPRSS3 genes. The outcome of cochlear and middle ear implants in patients identified with pathogenic variants was satisfactory. There was no statistically significant difference between pathogenic variant-positive and pathogenic variant-negative groups in terms of onset age, family history of hearing loss, pure-tone threshold, or speech discrimination scores. Conclusions: The proportion of patients with late-onset, down-sloping hearing loss identified with potentially causative variants was unexpectedly high. Identification of the causative variants will offer insights into hearing loss progression and prognosis regarding various modes of auditory rehabilitation, as well as possible concomitant syndromic features. ACKNOWLEDGMENTS: This study was provided with bioresources from the National Biobank of Korea, Centers for Disease Control and Prevention, Republic of Korea (4845-301, 4851-302 and -307). This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (2015R1A1A1A05001472 to S.M.H., 2017M3A9E8029714 to J.J.S., 2014M3A9D5A01073865 to C.J.Y., 2018R1A5A2025079 to H.Y.G.). M.H.S., J.J., H.Y.G., and J.Y.C. conceived and designed the study. J.J., J.H.R., H.J.C., and J.S.L. performed the experiment. M.H.S., J.J., H.J.L., and B.N. analyzed and interpreted the data. M.H.S., J.J., H.J.L., B.N., H.Y.G., and J.H.R. wrote the article. The authors have no conflicts of interest to disclose. Received July 25, 2018; accepted March 2, 2019. Address for correspondence: Jae Young Choi, Department of Otorhinolaryngology, Yonsei University College of Medicine, 50–1 Yonsei-ro, Seodaemun-gu, Seoul 120–752, Republic of Korea. E-mail: jychoi@yuhs.ac Address for correspondence: Heon Yung Gee, Department of Pharmacology and Brain Korea 21 Project for Medical Sciences, Yonsei University College of Medicine, 50–1 Yonsei-ro, Seodaemun-gu, Seoul 120–752, Republic of Korea. E-mail: hygee@yuhs.ac Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
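
The audiometric inclusion criterion is concrete enough to express directly. A small sketch applying the definition given in the abstract (250/500 Hz average of 40 dB HL or better, 1/2/4 kHz average worse than 40 dB HL); the example audiogram is made up:

```python
def is_down_sloping(thresholds_db_hl):
    """Apply the study's audiometric definition of down-sloping hearing loss.

    `thresholds_db_hl` maps frequency in Hz to the pure-tone threshold in dB HL.
    The audiogram qualifies when the 250/500 Hz average is <= 40 dB HL and the
    1/2/4 kHz average exceeds 40 dB HL.
    """
    low = (thresholds_db_hl[250] + thresholds_db_hl[500]) / 2
    high = (thresholds_db_hl[1000] + thresholds_db_hl[2000] + thresholds_db_hl[4000]) / 3
    return low <= 40 and high > 40

# Example audiogram (dB HL): preserved low frequencies, sloping high-frequency loss.
print(is_down_sloping({250: 25, 500: 35, 1000: 50, 2000: 60, 4000: 70}))  # True
```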

Improving Clinical Outcomes in Cochlear Implantation Using Glucocorticoid Therapy: A Review
Cochlear implant surgery is a successful procedure for auditory rehabilitation of patients with severe to profound hearing loss. However, cochlear implantation may lead to damage to the inner ear, which decreases residual hearing and alters vestibular function. It is now of increasing interest to preserve residual hearing during this surgery because this is related to better speech perception, music perception, and hearing in complex listening environments. Thus, different approaches have been tried to reduce cochlear implantation-related injury, including periprocedural glucocorticoids, which are used for their anti-inflammatory properties. Different routes of administration have been explored to deliver glucocorticoids. However, several drawbacks still remain, including their systemic side effects, unknown pharmacokinetic profiles, and complex delivery methods. In the present review, we discuss the role of periprocedural glucocorticoid therapy in decreasing cochlear implantation-related injury, thus preserving inner ear function after surgery. Moreover, we highlight the pharmacokinetic evidence and clinical outcomes that would support further interventions. ACKNOWLEDGMENTS: The authors have no conflicts of interest to disclose. Received October 8, 2018; accepted March 14, 2019. Address for correspondence: Cecilia Engmér Berglin, Department of Otorhinolaryngology, B53, Karolinska University Hospital, 141 86 Stockholm, Sweden. E-mail: cecilia.engmer-berglin@sll.se Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.

The Effect of Hearing-Protection Devices on Auditory Situational Awareness and Listening Effort
Objectives: Hearing-protection devices (HPDs) are made available, and often are required, for industrial use as well as military training exercises and operational duties. However, these devices often are disliked, and consequently not worn, in part because they compromise situational awareness through reduced sound detection and localization performance as well as degraded speech intelligibility. In this study, we carried out a series of tests, involving normal-hearing subjects and multiple background-noise conditions, designed to evaluate the performance of four HPDs in terms of their modifications of auditory-detection thresholds, sound-localization accuracy, and speech intelligibility. In addition, we assessed their impact on listening effort to understand how the additional effort required to perceive and process auditory signals while wearing an HPD reduces available cognitive resources for other tasks. Design: Thirteen normal-hearing subjects participated in a protocol that included auditory tasks designed to measure detection and localization performance, speech intelligibility, and cognitive load. Each participant repeated the battery of tests with unoccluded ears and four hearing protectors, two active (electronic) and two passive. The tasks were performed both in quiet and in background noise. Results: Our findings indicate that, to varying degrees, all of the tested HPDs induce performance degradation on most of the conducted tasks as compared to the open ear. Of particular note in this study is the finding of increased cognitive load or listening effort, as measured by visual reaction time, for some hearing protectors during a dual task that added working-memory demands to the speech-intelligibility task. Conclusions: These results indicate that situational awareness can vary greatly across the spectrum of HPDs, and that listening effort is another aspect of performance that should be considered in future studies. The increased listening effort induced by hearing protectors may lead to earlier cognitive fatigue in noisy environments. Further study is required to characterize how auditory performance is limited by the combination of hearing impairment and the use of HPDs, and how the effects of such limitations can be linked to safe and effective use of hearing protection to maximize job performance. ACKNOWLEDGMENTS: This work is sponsored by the US Army Natick Soldier Research, Development, and Engineering Center under Air Force Contract FA8721-05-C-0002 and/or FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of the Army. Distribution Statement A: Approved for public release. Distribution is unlimited. C.J.S. designed and performed experiments, analyzed data, provided statistical analysis and wrote the article; P.T.C. provided data analysis and wrote the article; A.P.D., J.P.P., T.P., and J.B. collected and analyzed data; T.F.Q. and M.M. provided contributions to conception of the work and critical editing; P.P.C. provided editing and final approval of the version to be published. The authors have no conflicts of interest to disclose. Received June 4, 2018; accepted February 21, 2019. Address for correspondence: Bioengineering Systems and Technologies Group, MIT Lincoln Laboratory, 244 Wood St., Lexington, MA 02421, USA. E-mail: christopher.smalt@ll.mit.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
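
The listening-effort measure rests on visual reaction times from the secondary task. Below is a minimal sketch of one way to express the dual-task cost of wearing an HPD, using hypothetical reaction-time values; the study's actual scoring and statistics may differ.

```python
import numpy as np

# Hypothetical visual reaction times (ms) from the secondary task, collected
# while listeners performed the speech-intelligibility task with open ears
# versus with a hearing-protection device.
rt_open_ear = np.array([412, 389, 455, 430, 401, 420], dtype=float)
rt_with_hpd = np.array([468, 452, 510, 489, 460, 478], dtype=float)

# A simple listening-effort index: the increase in median reaction time when
# the HPD is worn, i.e., the dual-task cost attributed to the device.
effort_cost_ms = np.median(rt_with_hpd) - np.median(rt_open_ear)
print(f"Reaction-time cost with HPD: {effort_cost_ms:.0f} ms")
```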

