
      Multi-modal cross-linguistic perception of Mandarin tones in clear speech


          Abstract

Clearly enunciated speech (relative to conversational, plain speech) involves articulatory and acoustic modifications that enhance auditory–visual (AV) segmental intelligibility. However, little research has explored clear-speech effects on the perception of suprasegmental properties such as lexical tone, particularly involving visual (facial) perception. Since tone production does not primarily rely on vocal tract configurations, tones may be less visually distinctive. Questions thus arise as to whether clear speech can enhance visual tone intelligibility, and if so, whether any intelligibility gain can be attributed to tone-specific category-enhancing (code-based) clear-speech cues or tone-general saliency-enhancing (signal-based) cues. The present study addresses these questions by examining the identification of clear and plain Mandarin tones with visual-only, auditory-only, and AV input modalities by native (Mandarin) and nonnative (English) perceivers. Results show that code-based visual and acoustic clear tone modifications, although limited, affect both native and nonnative intelligibility, with category-enhancing cues increasing intelligibility and category-blurring cues decreasing intelligibility. In contrast, signal-based cues, which are extensively available, do not benefit native intelligibility, although they contribute to nonnative intelligibility gain. These findings demonstrate that linguistically relevant visual tonal cues do exist. In clear speech, such tone category-enhancing cues are integrated with saliency-enhancing cues across AV modalities to improve intelligibility.


                Author and article information

                Journal
                Frontiers in Human Neuroscience (Front. Hum. Neurosci.)
                Publisher: Frontiers Media S.A.
                ISSN: 1662-5161
                Published: 27 September 2023
                Volume: 17
                Article number: 1247811
                Affiliations
                1. Department of Linguistics, University of Kansas, Lawrence, KS, United States
                2. Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
                Author notes

                Edited by: Kauyumari Sanchez, Cal Poly Humboldt, United States

                Reviewed by: Roel Jonkers, University of Groningen, Netherlands; Fei Chen, Hunan University, China

                *Correspondence: Allard Jongman, jongman@ku.edu
                Article
                DOI: 10.3389/fnhum.2023.1247811
                PMCID: 10565566
                PMID: 37829822
                Copyright © 2023 Zeng, Leung, Jongman, Sereno and Wang.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 26 June 2023
                Accepted: 08 September 2023
                Page count
                Figures: 4, Tables: 1, Equations: 0, References: 69, Pages: 13, Words: 11348
                Categories
                Human Neuroscience
                Original Research
                Custom metadata
                Speech and Language; Neurosciences
                Keywords: multi-modal, audio-visual, clear speech, Mandarin tone, intelligibility, cross-linguistic
