      Musicians Show Improved Speech Segregation in Competitive, Multi-Talker Cocktail Party Scenarios


          Abstract

          Studies suggest that long-term music experience enhances the brain’s ability to segregate speech from noise. Musicians’ “speech-in-noise (SIN) benefit” is based largely on perception from simple figure-ground tasks rather than competitive, multi-talker scenarios that offer realistic spatial cues for segregation and engage binaural processing. We aimed to investigate whether musicians show perceptual advantages in cocktail party speech segregation in a competitive, multi-talker environment. We used the coordinate response measure (CRM) paradigm to measure speech recognition and localization performance in musicians vs. non-musicians in a simulated 3D cocktail party environment conducted in an anechoic chamber. Speech was delivered through a 16-channel speaker array distributed around the horizontal soundfield surrounding the listener. Participants recalled the color, number, and perceived location of target callsign sentences. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (0–1–2–3–4–6–8 multi-talkers). Musicians obtained faster and better speech recognition amidst up to around eight simultaneous talkers and showed less noise-related decline in performance with increasing interferers than their non-musician peers. Correlations revealed associations between listeners’ years of musical training and CRM recognition and working memory. However, better working memory correlated with better speech streaming. Basic (QuickSIN) but not more complex (speech streaming) SIN processing was still predicted by music training after controlling for working memory. Our findings confirm a relationship between musicianship and naturalistic cocktail party speech streaming but also suggest that cognitive factors at least partially drive musicians’ SIN advantage.
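To make the CRM scoring concrete, here is a hypothetical sketch (not the authors' analysis code) of how recognition accuracy could be tabulated per masker count, treating a trial as correct when both the target color and number are recalled, as the task described above requires:

```python
# Hypothetical sketch of scoring CRM (coordinate response measure) trials.
# A trial counts as correct when both the target color and number are recalled.
# Masker counts follow the conditions in the abstract (0 to 8 competing talkers).
from collections import defaultdict

def score_crm(trials):
    """trials: list of dicts with keys 'maskers', 'color_ok', 'number_ok'.
    Returns percent-correct recognition per masker count."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t in trials:
        totals[t["maskers"]] += 1
        if t["color_ok"] and t["number_ok"]:
            hits[t["maskers"]] += 1
    return {m: 100.0 * hits[m] / totals[m] for m in sorted(totals)}

# Illustrative data only: two trials each at 0 and 4 maskers.
trials = [
    {"maskers": 0, "color_ok": True, "number_ok": True},
    {"maskers": 0, "color_ok": True, "number_ok": False},
    {"maskers": 4, "color_ok": True, "number_ok": True},
    {"maskers": 4, "color_ok": False, "number_ok": False},
]
print(score_crm(trials))  # {0: 50.0, 4: 50.0}
```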


Most cited references (74)


          Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise.

          A large set of sentence materials, chosen for their uniformity in length and representation of natural speech, has been developed for the measurement of sentence speech reception thresholds (sSRTs). The mean-squared level of each digitally recorded sentence was adjusted to equate intelligibility when presented in spectrally matched noise to normal-hearing listeners. These materials were cast into 25 phonemically balanced lists of ten sentences for adaptive measurement of sentence sSRTs. The 95% confidence interval for these measurements is +/- 2.98 dB for sSRTs in quiet and +/- 2.41 dB for sSRTs in noise, as defined by the variability of repeated measures with different lists. Average sSRTs in quiet were 23.91 dB(A). Average sSRTs in 72 dB(A) noise were 69.08 dB(A), or -2.92 dB signal/noise ratio. Low-pass filtering increased sSRTs slightly in quiet and noise as the 4- and 8-kHz octave bands were eliminated. Much larger increases in SRT occurred when the 2-kHz octave band was eliminated, and bandwidth dropped below 2.5 kHz. Reliability was not degraded substantially until bandwidth dropped below 2.5 kHz. The statistical reliability and efficiency of the test suit it to practical applications in which measures of speech intelligibility are required.
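The dB(A) figures above illustrate a simple relationship: the signal-to-noise ratio in dB is the speech level minus the noise level, so a 69.08 dB(A) threshold in 72 dB(A) noise corresponds to the quoted -2.92 dB SNR. A one-line sketch:

```python
# SNR in dB is the difference between the speech and noise levels,
# matching the abstract's example: 69.08 dB(A) speech in 72 dB(A) noise.
def snr_db(speech_level_db, noise_level_db):
    return speech_level_db - noise_level_db

print(round(snr_db(69.08, 72.0), 2))  # -2.92
```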

            'Oops!': performance correlates of everyday attentional failures in traumatic brain injured and normal subjects.

            Insufficient attention to tasks can result in slips of action as automatic, unintended action sequences are triggered inappropriately. Such slips arise in part from deficits in sustained attention, which are particularly likely to happen following frontal lobe and white matter damage in traumatic brain injury (TBI). We present a reliable laboratory paradigm that elicits such slips of action and demonstrates high correlations between the severity of brain damage and relative-reported everyday attention failures in a group of 34 TBI patients. We also demonstrate significant correlations between self- and informant-reported everyday attentional failures and performance on this paradigm in a group of 75 normal controls. The paradigm (the Sustained Attention to Response Task-SART) involves the withholding of key presses to rare (one in nine) targets. Performance on the SART correlates significantly with performance on tests of sustained attention, but not other types of attention, supporting the view that this is indeed a measure of sustained attention. We also show that errors (false presses) on the SART can be predicted by a significant shortening of reaction times in the immediately preceding responses, supporting the view that these errors are a result of 'drift' of controlled processing into automatic responding consequent on impaired sustained attention to task. We also report a highly significant correlation of -0.58 between SART performance and Glasgow Coma Scale Scores in the TBI group.

              Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners.

              This paper describes a shortened and improved version of the Speech in Noise (SIN) Test (Etymotic Research, 1993). In the first two of four experiments, the level of a female talker relative to that of four-talker babble was adjusted sentence by sentence to produce 50% correct scores for normal-hearing subjects. In the second two experiments, those sentences-in-babble that produced either lack of equivalence or high across-subject variability in scores were discarded. These experiments produced 12 equivalent lists, each containing six sentences, with one sentence at each adjusted signal-to-noise ratio of 25, 20, 15, 10, 5, and 0 dB. Six additional lists were also made equivalent when the scores of particular pairs were averaged. The final lists comprise the "QuickSIN" test that measures the SNR a listener requires to understand 50% of key words in sentences in a background of babble. The standard deviation of single-list scores is 1.4 dB SNR for hearing-impaired subjects, based on test-retest data. A single QuickSIN list takes approximately one minute to administer and provides an estimate of SNR loss accurate to +/-2.7 dB at the 95% confidence level.
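One generic way to estimate the SNR at which a listener scores 50% correct from a fixed-SNR list like the one described (25 to 0 dB in 5-dB steps) is linear interpolation across per-SNR scores. The sketch below is purely illustrative of that idea; it is not the published QuickSIN scoring rule, and the score data are invented:

```python
# Illustrative sketch (not the published QuickSIN scoring rule): estimate the
# SNR at which 50% of key words are correct by linearly interpolating the
# per-sentence proportion correct across the list's fixed SNRs.
def snr_50(snrs, prop_correct):
    """snrs: descending SNRs in dB; prop_correct: proportion correct at each."""
    for (s_hi, p_hi), (s_lo, p_lo) in zip(
        zip(snrs, prop_correct), zip(snrs[1:], prop_correct[1:])
    ):
        if p_hi >= 0.5 >= p_lo:  # the 50% point lies between these two SNRs
            if p_hi == p_lo:
                return s_hi
            return s_lo + (0.5 - p_lo) * (s_hi - s_lo) / (p_hi - p_lo)
    raise ValueError("50% point not bracketed by the measured scores")

snrs = [25, 20, 15, 10, 5, 0]            # one list's SNR steps, per the abstract
scores = [1.0, 1.0, 0.8, 0.6, 0.2, 0.0]  # hypothetical proportions correct
print(snr_50(snrs, scores))              # interpolated SNR for 50% correct
```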

                Author and article information

Journal
Frontiers in Psychology (Front. Psychol.)
Publisher: Frontiers Media S.A.
ISSN: 1664-1078
Published: 18 August 2020
Volume: 11, Article: 1927
                Affiliations
1. Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States
2. School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
3. Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, United States
                Author notes

                Edited by: Cunmei Jiang, Shanghai Normal University, China

                Reviewed by: Outi Tuomainen, University of Potsdam, Germany; Mireille Besson, Institut de Neurosciences Cognitives de la Méditerranée (INCM), France

*Correspondence: Gavin M. Bidelman, gmbdlman@memphis.edu

                This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Psychology

Article
DOI: 10.3389/fpsyg.2020.01927
PMCID: PMC7461890
PMID: 32973610
                Copyright © 2020 Bidelman and Yoo.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 10 April 2020
Accepted: 13 July 2020
                Page count
Figures: 4, Tables: 0, Equations: 0, References: 99, Pages: 11
                Funding
Funded by: National Institute on Deafness and Other Communication Disorders (DOI: 10.13039/100000055)
                Categories
                Psychology
                Brief Research Report

                Clinical Psychology & Psychiatry
acoustic scene analysis, stream segregation, experience-dependent plasticity, musical training, speech-in-noise perception
