
      Artificial Intelligence Meets Flexible Sensors: Emerging Smart Flexible Sensing Systems Driven by Machine Learning and Artificial Synapses

      Review article


          Highlights

          • The latest progress in emerging smart flexible sensing systems driven by brain-inspired artificial intelligence (AI) is reviewed at both the algorithm (machine learning) and the framework (artificial synapses) levels.

          • New enabling features resulting from the fusion of AI technology with flexible sensors, such as powerful data analysis and intelligent decision-making, are discussed.

          • Promising application prospects of AI-driven smart flexible sensing systems are demonstrated, including more intelligent monitoring of human activities, more humanlike perception through artificial sensory organs, and more autonomous action of soft robotics.

          Abstract

          The recent wave of the artificial intelligence (AI) revolution has sparked unprecedented interest in the intelligentization of human society. As an essential component bridging the physical world and digital signals, flexible sensors are evolving from single sensing elements into smarter systems capable of highly efficient acquisition, analysis, and even perception of vast, multifaceted data. Although challenging to achieve by manual means, the development of intelligent flexible sensing has been remarkably facilitated by rapid advances in brain-inspired AI innovations at both the algorithm (machine learning) and the framework (artificial synapses) levels. This review presents the recent progress of emerging AI-driven, intelligent flexible sensing systems. The basic concepts of machine learning and artificial synapses are introduced. The new enabling features induced by the fusion of AI and flexible sensing are comprehensively reviewed; these significantly advance applications such as flexible sensory systems, soft/humanoid robotics, and human activity monitoring. As two of the most profound innovations of the twenty-first century, the deep incorporation of flexible sensing and AI technology holds tremendous potential for creating a smarter world for human beings.
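
          As a concrete, minimal illustration of the kind of fusion the review surveys, the sketch below (not taken from the article; the signal shapes, feature choices, and parameters are illustrative assumptions) extracts simple features from synthetic stand-ins for flexible pressure-sensor traces and trains a classical machine-learning classifier for a toy human-activity-recognition task.

# Hypothetical sketch: machine learning applied to flexible-sensor signals.
# The "sensor traces" are synthetic stand-ins, not real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def fake_sensor_trace(activity, n=128):
    # "walk" -> periodic pressure signal; "sit" -> near-constant baseline.
    t = np.linspace(0, 4 * np.pi, n)
    if activity == "walk":
        return np.sin(t) + 0.1 * rng.standard_normal(n)
    return 0.2 + 0.05 * rng.standard_normal(n)

def features(trace):
    # Simple hand-crafted features: mean, standard deviation, peak-to-peak range.
    return [trace.mean(), trace.std(), trace.max() - trace.min()]

labels = ["walk", "sit"] * 100
X = [features(fake_sensor_trace(a)) for a in labels]
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))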


          Most cited references (210)


          Deep learning.

          Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
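
As a didactic illustration of the mechanism described above (not the cited paper's methodology; the architecture, learning rate, and iteration count are arbitrary choices), the sketch below trains a tiny two-layer network on XOR with hand-written backpropagation, showing how the gradient of the loss tells each layer how to adjust the parameters that compute its representation.

# Toy backpropagation example: a two-layer sigmoid network learns XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer computes a representation of the previous one.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error to get a gradient for every parameter.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]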

            Human-level control through deep reinforcement learning.

            The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
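
The learning rule at the heart of the approach above is the temporal-difference update; the sketch below applies it in tabular form to a toy chain environment. The cited work replaces the lookup table with a deep convolutional network over raw pixels; the environment, exploration rate, and hyperparameters here are illustrative assumptions only.

# Tabular Q-learning on a toy 5-state chain: move right to reach the goal.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.3
rng = np.random.default_rng(0)

def step(s, a):
    # Hypothetical dynamics: action 1 moves right, action 0 moves left;
    # reaching the last state pays reward 1 and ends the episode.
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Temporal-difference target: reward plus discounted best future value.
        target = r + (0.0 if done else gamma * np.max(Q[s2]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

print(np.round(Q, 2))  # action 1 (move right) ends up preferred in every state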

              Mastering the game of Go with deep neural networks and tree search.

              The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
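
To illustrate how a policy prior and backed-up value estimates can jointly steer a tree search, the sketch below implements a simplified, PUCT-style move-selection rule in the spirit of combining policy and value networks with Monte Carlo tree search. The constant, data structures, and numbers are invented for illustration and are not the cited program's implementation.

# Simplified PUCT-style selection: balance mean value against a prior-guided
# exploration bonus that shrinks as a move is visited more often.
import math

def select_move(prior, value_sum, visit_count, total_visits, c_puct=1.5):
    best, best_score = None, -float("inf")
    for move, p in prior.items():
        n = visit_count.get(move, 0)
        q = value_sum.get(move, 0.0) / n if n else 0.0        # mean backed-up value
        u = c_puct * p * math.sqrt(total_visits) / (1 + n)    # exploration bonus
        if q + u > best_score:
            best, best_score = move, q + u
    return best

# Hypothetical numbers: the policy prior favours move "a", but move "b" has
# backed up a higher average value, so the rule currently prefers "b".
prior = {"a": 0.6, "b": 0.3, "c": 0.1}
value_sum = {"a": 2.0, "b": 3.5}
visit_count = {"a": 10, "b": 5}
print(select_move(prior, value_sum, visit_count, total_visits=15))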

                Author and article information

                Contributors
                wangwenxian@tyut.edu.cn
                zougsh@tsinghua.edu.cn
                liulei@tsinghua.edu.cn
                Journal
                Nano-Micro Letters (Nanomicro Lett)
                Publisher: Springer Nature Singapore (Singapore)
                ISSN: 2311-6706, 2150-5551
                Published online: 13 November 2023
                Issue date: December 2024
                Volume 16, Article 14
                Affiliations
                [1] Department of Mechanical Engineering, State Key Laboratory of Tribology in Advanced Equipment, Key Laboratory for Advanced Manufacturing by Materials Processing Technology, Ministry of Education of PR China, Tsinghua University ( https://ror.org/03cve4549), Beijing 100084, People’s Republic of China
                [2] College of Materials Science and Engineering, Taiyuan University of Technology ( https://ror.org/03kv08d37), Taiyuan 030024, Shanxi Province, People’s Republic of China
                Article
                DOI: 10.1007/s40820-023-01235-x
                PMCID: 10643743
                PMID: 37955844
                © The Author(s) 2023

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 26 June 2023
                Accepted: 24 September 2023
                Funding
                Funded by: Shanghai Jiao Tong University
                Categories
                Review
                Custom metadata
                © Shanghai Jiao Tong University 2024

                Keywords
                flexible electronics, wearable electronics, neuromorphic, memristor, deep learning
