
      Imagined speech can be decoded from low- and cross-frequency intracranial EEG features

Research article


          Abstract

Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met with limited success, mainly because the associated neural signals are weak and variable compared to overt speech, and hence difficult for learning algorithms to decode. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their performance in discriminating speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency coupling contributed to imagined speech decoding, in particular in phonetic and vocalic, i.e. perceptual, spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding.
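As an illustration of the kinds of features the abstract describes (low-frequency power, high-frequency power, and local cross-frequency coupling), the following sketch computes them for one simulated channel. This is not the authors' pipeline: the frequency bands, filter settings, and the mean-vector-length coupling estimate are assumptions chosen for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter between lo and hi Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def band_power(x, lo, hi, fs):
    """Mean power of x in the [lo, hi] Hz band."""
    return np.mean(bandpass(x, lo, hi, fs) ** 2)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(70, 150)):
    """Mean-vector-length estimate of phase-amplitude coupling:
    low-frequency phase (Hilbert) against the high-gamma envelope.
    Bands are illustrative defaults, not the study's choices."""
    phase = np.angle(hilbert(bandpass(x, phase_band[0], phase_band[1], fs)))
    amp = np.abs(hilbert(bandpass(x, amp_band[0], amp_band[1], fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Simulated data: a theta rhythm whose phase modulates a high-gamma carrier.
fs = 1000
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
theta = np.sin(2 * np.pi * 6 * t)        # low-frequency rhythm
gamma = np.sin(2 * np.pi * 100 * t)      # high-gamma carrier
coupled = theta + 0.5 * (1 + theta) * gamma + 0.1 * rng.normal(size=t.size)
uncoupled = theta + 0.5 * gamma + 0.1 * rng.normal(size=t.size)

print(band_power(coupled, 4, 8, fs))               # theta-band power
print(pac_mvl(coupled, fs), pac_mvl(uncoupled, fs))
```

In the coupled signal the high-gamma envelope follows the theta phase, so the mean-vector-length is clearly larger than for the uncoupled control; in a decoding setting, such power and coupling values would be stacked into a feature vector per trial and channel.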

          Abstract

Reconstructing imagined speech from neural activity holds great promise for people with severe speech production deficits. Here, the authors demonstrate using human intracranial recordings that both low- and higher-frequency power and local cross-frequency coupling contribute to imagined speech decoding.


                Author and article information

                Contributors
                timothee.proix@unige.ch
Journal
Nature Communications (Nat Commun)
Publisher: Nature Publishing Group UK (London)
ISSN: 2041-1723
Published: 10 January 2022
Volume: 13
Article number: 48
Affiliations
[1] Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
[2] Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, USA
[3] Department of Psychology, University of California, Berkeley, Berkeley, USA
[4] Division of Arts and Sciences, New York University Shanghai, Shanghai, China
[5] Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
[6] NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
[7] Department of Psychology, New York University, New York, NY, USA
[8] Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
[9] Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA
[10] Institut de l’Audition, Institut Pasteur, INSERM, F-75012 Paris, France
[11] Division of Neurology, Geneva University Hospitals, Geneva, Switzerland
                Author information
                http://orcid.org/0000-0003-4750-9915
                http://orcid.org/0000-0003-1629-6304
                http://orcid.org/0000-0003-0044-4632
                http://orcid.org/0000-0002-2226-6497
                http://orcid.org/0000-0002-0427-547X
                http://orcid.org/0000-0002-1261-3555
Article
DOI: 10.1038/s41467-021-27725-3
PMCID: 8748882
PMID: 35013268
                © The Author(s) 2022

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

History
Received: 12 April 2021
Accepted: 3 December 2021
Funding
Funded by: Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (Swiss National Science Foundation), FundRef https://doi.org/10.13039/501100001711; Award IDs: 193542, 163040
Funded by: U.S. Department of Health & Human Services | NIH | National Institute of Neurological Disorders and Stroke (NINDS), FundRef https://doi.org/10.13039/100000065; Award ID: R3723115
Funded by: National Natural Science Foundation of China (National Science Foundation of China), FundRef https://doi.org/10.13039/501100001809; Award ID: 32071099
Funded by: Natural Science Foundation of Shanghai (Natural Science Foundation of Shanghai Municipality), FundRef https://doi.org/10.13039/100007219; Award ID: 20ZR1472100
Funded by: Program of Introducing Talents of Discipline to Universities, Base B16018; NYU Shanghai Boost Fund
Funded by: Fondation pour l'audition, FPA RD-2020-10
Funded by: EU FET-BrainCom project; NCCR Evolving Language, Swiss National Science Foundation Agreement #51NF40_180888
Categories
Article

Keywords
neuroscience, language
