
      Bottom-up and top-down neural signatures of disordered multi-talker speech perception in adults with normal hearing


          Abstract

In social settings, speech waveforms from nearby speakers mix together in our ear canals. Normally, the brain unmixes the attended speech stream from the chorus of background speakers using a combination of fast temporal processing and cognitive active listening mechanisms. Of >100,000 patient records, ~10% of adults visited our clinic because of reduced hearing, only to learn that their hearing was clinically normal and should not cause communication difficulties. We found that multi-talker speech intelligibility thresholds varied widely in normal hearing adults, but could be predicted from neural phase-locking to frequency modulation (FM) cues measured with ear canal EEG recordings. Combining neural temporal fine structure processing, pupil-indexed listening effort, and behavioral FM thresholds accounted for 78% of the variability in multi-talker speech intelligibility. The disordered bottom-up and top-down markers of poor multi-talker speech perception identified here could inform the design of next-generation clinical tests for hidden hearing disorders.
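To make the predictive claim above concrete (three measures jointly accounting for 78% of the variance in multi-talker speech intelligibility), here is a minimal sketch of a multiple linear regression of the kind described; the variable names, sample size, and synthetic data are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Hypothetical sketch: multiple linear regression combining three predictors
# (neural FM phase-locking, pupil-indexed listening effort, behavioral FM
# thresholds) to predict multi-talker speech intelligibility thresholds.
# All variable names and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_subjects = 25  # placeholder sample size

# Simulated per-subject predictors (arbitrary units)
neural_fm_phase_locking = rng.normal(size=n_subjects)
pupil_listening_effort = rng.normal(size=n_subjects)
behavioral_fm_threshold = rng.normal(size=n_subjects)

X = np.column_stack([neural_fm_phase_locking,
                     pupil_listening_effort,
                     behavioral_fm_threshold])

# Simulated outcome: multi-talker speech intelligibility threshold
speech_threshold = (0.5 * neural_fm_phase_locking
                    - 0.4 * pupil_listening_effort
                    + 0.3 * behavioral_fm_threshold
                    + rng.normal(scale=0.5, size=n_subjects))

model = LinearRegression().fit(X, speech_threshold)
r_squared = model.score(X, speech_threshold)  # proportion of variance explained
print(f"R^2 = {r_squared:.2f}")
```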

          eLife digest

Our ears were not designed for the society our brains created. The World Health Organization estimates that a billion young adults are at risk for hearing problems due to prolonged exposure to high levels of noise. For many people, the first symptom of hearing loss is an inability to follow a single speaker in crowded places such as restaurants.

          However, when Parthasarathy et al. examined over 100,000 records from the Massachusetts Eye and Ear audiology database, they found that around 10% of patients who complained about hearing difficulties were sent home with a clean bill of hearing health. This is because existing tests do not detect common problems related to understanding speech in complex, real-world environments: new tests are needed to spot these hidden hearing disorders. Parthasarathy et al. therefore focused on identifying biological measures that would reflect these issues.

Normally, the brain can ‘unmix’ different speakers and focus on one person, but even among people with normal hearing, some are better at this than others. Parthasarathy et al. pinpointed several behavioral and biological markers which, when combined, could predict most of this variability. This involved, for example, measuring the diameter of the pupil while people listened to speech in the presence of several distracting voices (which mirrors how intensively they have to focus on the task), or measuring the participants’ ability to detect subtle changes in frequency (which reflects how fast-changing sound elements are encoded early in the hearing system). The findings show that an over-reliance on high-level cognitive processes, such as increased listening effort, coupled with problems in the early processing of certain sound features, is associated with difficulty following a speaker in a busy environment.

The biological and behavioral markers highlighted by Parthasarathy et al. do not require specialized equipment or marathon recording sessions. In theory, these tests could be implemented in most hospital hearing clinics to give patients and health providers objective data with which to understand, treat and monitor these hearing difficulties.


Most cited references (89)


          Development of the Tinnitus Handicap Inventory

          To develop a self-report tinnitus handicap measure that is brief, easy to administer and interpret, broad in scope, and psychometrically robust.

            Noise-induced cochlear neuropathy is selective for fibers with low spontaneous rates.

            Acoustic overexposure can cause a permanent loss of auditory nerve fibers without destroying cochlear sensory cells, despite complete recovery of cochlear thresholds (Kujawa and Liberman 2009), as measured by gross neural potentials such as the auditory brainstem response (ABR). To address this nominal paradox, we recorded responses from single auditory nerve fibers in guinea pigs exposed to this type of neuropathic noise (4- to 8-kHz octave band at 106 dB SPL for 2 h). Two weeks postexposure, ABR thresholds had recovered to normal, while suprathreshold ABR amplitudes were reduced. Both thresholds and amplitudes of distortion-product otoacoustic emissions fully recovered, suggesting recovery of hair cell function. Loss of up to 30% of auditory-nerve synapses on inner hair cells was confirmed by confocal analysis of the cochlear sensory epithelium immunostained for pre- and postsynaptic markers. In single fiber recordings, at 2 wk postexposure, frequency tuning, dynamic range, postonset adaptation, first-spike latency and its variance, and other basic properties of auditory nerve response were all completely normal in the remaining fibers. The only physiological abnormality was a change in population statistics suggesting a selective loss of fibers with low- and medium-spontaneous rates. Selective loss of these high-threshold fibers would explain how ABR thresholds can recover despite such significant noise-induced neuropathy. A selective loss of high-threshold fibers may contribute to the problems of hearing in noisy environments that characterize the aging auditory system.

              Toward a Differential Diagnosis of Hidden Hearing Loss in Humans

Recent work suggests that hair cells are not the most vulnerable elements in the inner ear; rather, it is the synapses between hair cells and cochlear nerve terminals that degenerate first in the aging or noise-exposed ear. This primary neural degeneration does not affect hearing thresholds, but likely contributes to problems understanding speech in difficult listening environments, and may be important in the generation of tinnitus and/or hyperacusis. To look for signs of cochlear synaptopathy in humans, we recruited college students and divided them into low-risk and high-risk groups based on self-report of noise exposure and use of hearing protection. Cochlear function was assessed by otoacoustic emissions and click-evoked electrocochleography; hearing was assessed by behavioral audiometry and word recognition with or without noise or time compression and reverberation. Both groups had normal thresholds at standard audiometric frequencies; however, the high-risk group showed significant threshold elevation at high frequencies (10–16 kHz), consistent with early stages of noise damage. Electrocochleography showed a significant difference in the ratio between the waveform peaks generated by hair cells (Summating Potential; SP) vs. cochlear neurons (Action Potential; AP), i.e., the SP/AP ratio, consistent with selective neural loss. The high-risk group also showed significantly poorer performance on word recognition in noise or with time compression and reverberation, and reported heightened reactions to sound consistent with hyperacusis. These results suggest that the SP/AP ratio may be useful in the diagnosis of “hidden hearing loss” and that, as suggested by animal models, the noise-induced loss of cochlear nerve synapses leads to deficits in hearing abilities in difficult listening situations, despite the presence of normal thresholds at standard audiometric frequencies.
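For readers unfamiliar with the SP/AP ratio mentioned above, the sketch below shows one plausible way to compute it from a click-evoked electrocochleography waveform by picking peaks within assumed latency windows; the window boundaries, sampling rate, and synthetic waveform are illustrative assumptions, not the study's actual procedure.

```python
# Hypothetical sketch: computing an SP/AP ratio from a click-evoked
# electrocochleography (ECochG) waveform. Latency windows and the synthetic
# waveform are illustrative assumptions only.
import numpy as np

fs = 20000                       # sampling rate (Hz), assumed
t = np.arange(0, 0.010, 1 / fs)  # 10 ms post-click epoch

# Synthetic averaged waveform: a small SP "shoulder" followed by a larger AP peak
sp_component = 0.2 * np.exp(-((t - 0.0009) ** 2) / (2 * 0.0003 ** 2))
ap_component = 1.0 * np.exp(-((t - 0.0015) ** 2) / (2 * 0.0002 ** 2))
waveform_uv = sp_component + ap_component

# Assumed latency windows for peak picking (placeholder values)
sp_window = (t >= 0.0005) & (t <= 0.0011)
ap_window = (t >= 0.0012) & (t <= 0.0020)

sp_amplitude = waveform_uv[sp_window].max()
ap_amplitude = waveform_uv[ap_window].max()
sp_ap_ratio = sp_amplitude / ap_amplitude

print(f"SP = {sp_amplitude:.2f} uV, AP = {ap_amplitude:.2f} uV, "
      f"SP/AP = {sp_ap_ratio:.2f}")
```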

                Author and article information

                Contributors
                Role: Senior Editor
                Role: Reviewing Editor
Journal
eLife, eLife Sciences Publications, Ltd
ISSN: 2050-084X
Published: 21 January 2020
Volume 9: e51419
Affiliations
[1] Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
[2] Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, United States
[3] Bennett Statistical Consulting Inc, Ballston, United States
[4] Department of Biostatistics, Harvard TH Chan School of Public Health, Boston, United States
Carnegie Mellon University, United States
Peking University, China
                Author information
                https://orcid.org/0000-0002-4573-8004
                https://orcid.org/0000-0002-5120-2409
Article
Article number: 51419
DOI: 10.7554/eLife.51419
PMCID: PMC6974362
PMID: 31961322
                © 2020, Parthasarathy et al

                This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

                History
Received: 28 August 2019
Accepted: 15 December 2019
                Funding
Funded by: National Institutes of Health (FundRef: http://dx.doi.org/10.13039/100000002)
Award ID: P50-DC015857
                The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
                Categories
                Research Article
                Human Biology and Medicine
                Neuroscience
                Custom metadata
                Ear canal EEG and pupillometry reveal disordered temporal processing in adults with normal hearing who struggle to understand conversations in noisy backgrounds.

                Life sciences
Keywords: fine structure, hidden hearing loss, cochlear synaptopathy, FFR, pupillometry, effortful listening, human
