
      Finding Phrases: The Interplay of Word Frequency, Phrasal Prosody and Co-speech Visual Information in Chunking Speech by Monolingual and Bilingual Adults

Research article


          Abstract

The audiovisual speech signal contains multimodal cues to phrase boundaries. In three artificial language learning studies with 12 groups of adult participants, we investigated whether English monolinguals and bilingual speakers of English and a language with the opposite basic word order (i.e., in which objects precede verbs) can use word frequency, phrasal prosody and co-speech (facial) visual information, namely head nods, to parse unknown languages into phrase-like units. We showed that monolinguals and bilinguals used the auditory and visual sources of information to chunk “phrases” from the input. These results suggest that speech segmentation is a bimodal process, though the influence of co-speech facial gestures is rather limited and linked to the presence of auditory prosody. Importantly, a pragmatic factor, namely the language of the context, seems to determine the bilinguals’ segmentation, overriding the auditory and visual cues and revealing a factor that begs further exploration.


                Author and article information

                Contributors
Journal
Language and Speech (Lang Speech)
SAGE Publications (Sage UK: London, England)
ISSN: 0023-8309 (print); 1756-6053 (electronic)
Published online: 19 April 2019; issue date: June 2020
Volume 63, Issue 2, pp. 264-291
Affiliations
[1] Integrative Neuroscience and Cognition Center (INCC—UMR 8002), Université Paris Descartes (Sorbonne Paris Cité), France; Integrative Neuroscience and Cognition Center (INCC—UMR 8002), CNRS, France
[2] Department of Psychology, University of British Columbia, Canada
[3] Department of Linguistics, University of British Columbia, Canada
[4] Integrative Neuroscience and Cognition Center (INCC—UMR 8002), Université Paris Descartes (Sorbonne Paris Cité), France; Integrative Neuroscience and Cognition Center (INCC—UMR 8002), CNRS, France
Author notes
[*] Irene de la Cruz-Pavía, Integrative Neuroscience and Cognition Center (INCC—UMR 8002), Université Paris Descartes-CNRS, 45 rue des Saints-Pères, Paris, 75006, France. Email: idelacruzpavia@gmail.com
Author information
ORCID: https://orcid.org/0000-0003-3425-0596
Article
DOI: 10.1177/0023830919842353
PMC: 7254630
PMID: 31002280
                © The Author(s) 2019

This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Funding
Funded by: Social Sciences and Humanities Research Council of Canada (FundRef: https://doi.org/10.13039/501100000155); Award ID: 435-2014-0917
Funded by: Agence Nationale de la Recherche (FundRef: https://doi.org/10.13039/501100001665); Award ID: ANR-15-CE37-0009-01
Funded by: European Research Council (FundRef: https://doi.org/10.13039/501100000781); Award ID: 773202 ERC-2017-COG ‘BabyRhythm’
Funded by: FP7 People: Marie-Curie Actions (FundRef: https://doi.org/10.13039/100011264); Award ID: Marie Curie International Outgoing Fellowship unde
Funded by: French Investissements d’Avenir - Labex EFL Program; Award ID: ANR-10-LABX-0083
Funded by: Natural Sciences and Engineering Research Council of Canada (FundRef: https://doi.org/10.13039/501100000038); Award ID: Discovery Grant - 81103
Funded by: Natural Sciences and Engineering Research Council of Canada (FundRef: https://doi.org/10.13039/501100000038); Award ID: RGPIN-2015-03967
                Categories
                Articles

Keywords: phrase segmentation, co-speech visual information, artificial grammar learning, bilingualism, prosody, frequency-based information
