
      Evolving Neural Networks through Augmenting Topologies

Kenneth O. Stanley 1 , Risto Miikkulainen 1
      Evolutionary Computation
      MIT Press - Journals


          Abstract

          An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution.
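The "principled method of crossover of different topologies" the abstract mentions rests on aligning genes by historical markings (innovation numbers). A minimal sketch of that alignment, assuming a simplified connection-gene representation; the names and data layout here are illustrative, not NEAT's actual implementation:

```python
import random

def crossover(parent_a, parent_b):
    """Align connection genes by innovation number, NEAT-style.

    Each parent is a dict mapping innovation number -> gene (here, a
    (weight, enabled) tuple). Matching genes are inherited randomly
    from either parent; disjoint and excess genes are taken from
    parent_a, which is assumed to be the fitter parent.
    """
    child = {}
    for innov, gene_a in parent_a.items():
        if innov in parent_b:
            # Matching gene: inherit randomly from either parent.
            child[innov] = random.choice([gene_a, parent_b[innov]])
        else:
            # Disjoint/excess gene: inherit from the fitter parent.
            child[innov] = gene_a
    return child

# Example: two genomes sharing innovations 1 and 2.
a = {1: (0.5, True), 2: (-0.3, True), 4: (0.9, True)}
b = {1: (0.1, True), 2: (0.7, False), 3: (0.2, True)}
child = crossover(a, b)
```

Because innovation numbers give a global history of when each connection first appeared, genomes with very different topologies can still be lined up gene-by-gene without expensive graph matching.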

          Related collections

          Most cited references (15)


          Reinforcement Learning: A Survey

          This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
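The exploration/exploitation trade-off this survey highlights is often introduced through the epsilon-greedy rule. A minimal sketch of that rule, with hypothetical names; this is one simple strategy among many the survey covers, not its definitive treatment:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """Pick an action index from a list of estimated action values.

    With probability epsilon, explore by choosing a random action;
    otherwise exploit by choosing the action with the highest estimate.
    """
    if random.random() < epsilon:
        return random.randrange(len(q_values))  # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit

# Example: three actions with estimated values; epsilon=0 is purely greedy.
q = [0.1, 0.9, 0.3]
best = epsilon_greedy(q, epsilon=0.0)
```

Small epsilon keeps the agent mostly exploiting its current estimates while still occasionally sampling other actions, so value estimates for rarely-taken actions can improve.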

            Evolving artificial neural networks

            Xin Yao (1999)

              An evolutionary algorithm that constructs recurrent neural networks


                Author and article information

                Journal
                Evolutionary Computation
                MIT Press - Journals
                ISSN (print): 1063-6560
                ISSN (electronic): 1530-9304
                June 2002
                Volume 10, Issue 2: 99-127
                Affiliations
                [1 ]Department of Computer Sciences, The University of Texas at Austin, Austin, TX 78712, USA
                Article
                DOI: 10.1162/106365602320169811
                PMID: 12180173
                ID: d029b5bd-a81d-4d87-aef0-c2f5ae508b11
                © 2002
