
      Auditory, Cognitive, and Linguistic Factors Predict Speech Recognition in Adverse Listening Conditions for Children With Hearing Loss

      research-article


          Abstract

          Objectives: Children with hearing loss listen and learn in environments with noise and reverberation, but perform more poorly in noise and reverberation than children with normal hearing. Even with amplification, individual differences in speech recognition are observed among children with hearing loss. Few studies have examined the factors that support speech understanding in noise and reverberation for this population. This study applied the theoretical framework of the Ease of Language Understanding (ELU) model to examine the influence of auditory, cognitive, and linguistic factors on speech recognition in noise and reverberation for children with hearing loss.

          Design: Fifty-six children with hearing loss and 50 age-matched children with normal hearing who were 7 to 10 years old participated in this study. Aided sentence recognition was measured using an adaptive procedure to determine the signal-to-noise ratio for 50% correct (SNR50) recognition in steady-state speech-shaped noise. SNR50 was also measured with noise plus a simulation of a 600 ms reverberation time. Receptive vocabulary, auditory attention, and visuospatial working memory were measured. Aided speech audibility, indexed by the Speech Intelligibility Index, was measured through the hearing aids of children with hearing loss.
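The adaptive SNR50 procedure described above is typically a staircase: the SNR is lowered after a correct response and raised after an incorrect one, so the track converges on the level yielding 50% correct. The sketch below simulates such a 1-down/1-up track against a hypothetical logistic listener; the slope, step size, and starting SNR are illustrative assumptions, not the study's actual parameters.

```python
import math
import random

def simulate_listener(snr_db, snr50=0.0, slope_db=2.0):
    """Hypothetical logistic psychometric function: returns True for a
    correct sentence response at the given SNR (parameters assumed)."""
    p = 1.0 / (1.0 + math.exp(-(snr_db - snr50) / slope_db))
    return random.random() < p

def track_snr50(n_trials=40, start_snr=10.0, step_db=2.0):
    """1-down/1-up adaptive staircase: SNR decreases after a correct
    trial and increases after an incorrect one, converging on ~50%
    correct. The SNR50 estimate is the mean SNR at recent reversals."""
    snr = start_snr
    reversals = []
    last_direction = None
    for _ in range(n_trials):
        correct = simulate_listener(snr)
        direction = -1 if correct else +1  # harder after a correct trial
        if last_direction is not None and direction != last_direction:
            reversals.append(snr)  # track changed direction: a reversal
        last_direction = direction
        snr += direction * step_db
    recent = reversals[-6:] if reversals else [snr]
    return sum(recent) / len(recent)
```

Averaging over the last several reversals, rather than all trials, discards the initial descent from the starting SNR and yields a more stable estimate.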

          Results: Children with hearing loss had poorer aided speech recognition in noise and reverberation than children with typical hearing. Children with higher receptive vocabulary and working memory skills had better speech recognition in noise and noise plus reverberation than peers with poorer skills in these domains. Children with hearing loss with higher aided audibility had better speech recognition in noise and reverberation than peers with poorer audibility. Better audibility was also associated with stronger language skills.

          Conclusions: Children with hearing loss are at considerable risk for poor speech understanding in noise and in conditions with noise and reverberation. Consistent with the predictions of the ELU model, children with stronger vocabulary and working memory abilities performed better than peers with poorer skills in these domains. Better aided speech audibility was associated with better recognition in noise and noise plus reverberation conditions for children with hearing loss. Speech audibility had direct effects on speech recognition in noise and reverberation and cumulative effects on speech recognition in noise through a positive association with language development over time.


          Most cited references (40)


          Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults.

          This paper summarizes twenty studies, published since 1989, that have experimentally measured the relationship between speech recognition in noise and some aspect of cognition, using statistical techniques such as correlation or factor analysis. The results demonstrate that there is a link, but it is secondary to the predictive effects of hearing loss, and it is somewhat mixed across studies. No one cognitive test always gave a significant result, but measures of working memory (especially reading span) were mostly effective, whereas measures of general ability, such as IQ, were mostly ineffective. Some of the studies included aided listening, and two reported the benefits of aided listening: again, mixed results were found, and in some circumstances cognition was a useful predictor of hearing-aid benefit.

            Cognition counts: a working memory system for ease of language understanding (ELU).

            A general working memory system for ease of language understanding (ELU, Rönnberg, 2003a) is presented. The purpose of the system is to describe and predict the dynamic interplay between explicit and implicit cognitive functions, especially in conditions of poorly perceived or poorly specified linguistic signals. In relation to speech understanding, the system is based on (1) the quality and precision of phonological representations in long-term memory, (2) phonologically mediated lexical access speed, and (3) explicit storage and processing resources. If there is a mismatch between phonological information extracted from the speech signal and the phonological information represented in long-term memory, the system is assumed to produce a mismatch signal that invokes explicit processing resources. In the present paper, we focus on four aspects of the model which have led to the current, updated version: the language generality assumption; the mismatch assumption; chronological age; and the episodic buffer function of rapid, automatic multimodal binding of phonology (RAMBPHO). We evaluate the language generality assumption in relation to sign language and speech, and the mismatch assumption in relation to signal processing in hearing aids. Further, we discuss the effects of chronological age and the implications of RAMBPHO.

              Informational and energetic masking effects in the perception of multiple simultaneous talkers.

              Although many researchers have examined the role that binaural cues play in the perception of spatially separated speech signals, relatively little is known about the cues that listeners use to segregate competing speech messages in a monaural or diotic stimulus. This series of experiments examined how variations in the relative levels and voice characteristics of the target and masking talkers influence a listener's ability to extract information from a target phrase in a 3-talker or 4-talker diotic stimulus. Performance in this speech perception task decreased systematically when the level of the target talker was reduced relative to the masking talkers. Performance also generally decreased when the target and masking talkers had similar voice characteristics: the target phrase was most intelligible when the target and masking phrases were spoken by different-sex talkers, and least intelligible when the target and masking phrases were spoken by the same talker. However, when the target-to-masker ratio was less than 3 dB, overall performance was usually lower with one different-sex masker than with all same-sex maskers. In most of the conditions tested, the listeners performed better when they were exposed to the characteristics of the target voice prior to the presentation of the stimulus. The results of these experiments demonstrate how monaural factors may play an important role in the segregation of speech signals in multitalker environments.

                Author and article information

                Journal
                Frontiers in Neuroscience (Front. Neurosci.)
                Frontiers Media S.A.
                ISSN: 1662-4548; eISSN: 1662-453X
                15 October 2019
                Volume 13, Article 1093
                Affiliations
                [1] The Audibility Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
                [2] Pediatric Audiology Laboratory, Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, United States
                [3] Amplification and Perception Laboratory, Department of Special Education and Communication Disorders, University of Nebraska, Lincoln, NE, United States
                Author notes

                Edited by: Mary Rudner, Linköping University, Sweden

                Reviewed by: Christian Füllgrabe, Loughborough University, United Kingdom; Tone Stokkereit Mattsson, Ålesund Hospital, Norway

                *Correspondence: Ryan W. McCreery ryan.mccreery@boystown.org

                This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Neuroscience

                Article
                DOI: 10.3389/fnins.2019.01093
                PMCID: PMC6803493
                PMID: 31680828
                Copyright © 2019 McCreery, Walker, Spratford, Lewis and Brennan.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 31 May 2019
                Accepted: 30 September 2019
                Page count
                Figures: 4, Tables: 5, Equations: 0, References: 54, Pages: 11, Words: 8923
                Funding
                Funded by: National Institute on Deafness and Other Communication Disorders 10.13039/100000055
                Categories
                Neuroscience
                Original Research

                Neurosciences
                children, hearing loss, noise, reverberation, speech recognition, hearing aids
