      Resilient Robot Teams: a Review Integrating Decentralised Control, Change-Detection, and Learning

      Current Robotics Reports
      Springer Science and Business Media LLC


          Abstract

          Purpose of Review

          This paper reviews opportunities and challenges for decentralised control, change-detection, and learning in the context of resilient robot teams.

          Recent Findings

Exogenous fault-detection methods can provide either generic fault detection or a specific diagnosis paired with a recovery solution. Robot teams can perform active and distributed sensing to detect changes in the environment, including identifying and tracking dynamic anomalies, as well as collaboratively mapping dynamic environments. Resilient methods for decentralised control have been developed in learning perception-action-communication loops, multi-agent reinforcement learning, embodied evolution, offline evolution with online adaptation, explicit task allocation, and stigmergy in swarm robotics.
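
As a concrete illustration of the change-detection ingredient, below is a minimal sketch of a two-sided CUSUM detector, the kind of statistic a single robot could run over a local sensor stream to flag an abrupt environmental shift. CUSUM is a standard change-detection technique used here purely for illustration, not a method prescribed by the review; the parameter names and values (mean_0, drift, threshold) are assumptions.

class CusumDetector:
    """Two-sided CUSUM: flags when a signal's mean drifts away from mean_0."""

    def __init__(self, mean_0: float, drift: float = 0.5, threshold: float = 5.0):
        self.mean_0 = mean_0        # nominal (pre-change) mean of the signal
        self.drift = drift          # slack term: ignore shifts smaller than this
        self.threshold = threshold  # raise an alarm when a cumulative sum exceeds this
        self.s_pos = 0.0            # accumulated evidence of an upward shift
        self.s_neg = 0.0            # accumulated evidence of a downward shift

    def update(self, x: float) -> bool:
        """Feed one sensor reading; return True if a change is detected."""
        self.s_pos = max(0.0, self.s_pos + (x - self.mean_0) - self.drift)
        self.s_neg = max(0.0, self.s_neg - (x - self.mean_0) - self.drift)
        if self.s_pos > self.threshold or self.s_neg > self.threshold:
            self.s_pos = self.s_neg = 0.0  # reset after raising the alarm
            return True
        return False

if __name__ == "__main__":
    import random
    random.seed(0)
    detector = CusumDetector(mean_0=0.0)
    # 100 nominal readings, then an abrupt upward shift in the mean (the "change").
    stream = [random.gauss(0.0, 1.0) for _ in range(100)]
    stream += [random.gauss(3.0, 1.0) for _ in range(20)]
    for t, x in enumerate(stream):
        if detector.update(x):
            print(f"change detected at step {t}")
            break

In a team setting, each robot would run such a detector locally and share alarms over its communication links, which is where the decentralised control and learning methods surveyed above come in.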

          Summary

Remaining challenges for resilient robot teams are integrating change-detection and trial-and-error learning methods, obtaining reliable performance evaluations under constrained evaluation time, improving the safety of resilient robot teams, establishing theoretical results that demonstrate rapid adaptation to given environmental perturbations, and designing realistic and compelling case studies.


                Author and article information

Journal: Current Robotics Reports (Curr Robot Rep)
Publisher: Springer Science and Business Media LLC
ISSN: 2662-4087
Issue date: September 2022 (published online June 13 2022)
Volume 3, Issue 3, Pages 85-95
DOI: 10.1007/s43154-022-00079-4
                © 2022

License: https://creativecommons.org/licenses/by/4.0 (CC BY 4.0)
