
      Proximal Policy Optimization Algorithms

Article type: journal-article


          Abstract

          We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
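The objective the abstract alludes to is the clipped surrogate objective L^CLIP(theta) = E_t[ min( r_t(theta) * A_t, clip(r_t(theta), 1 - eps, 1 + eps) * A_t ) ], where r_t(theta) is the probability ratio pi_theta(a_t | s_t) / pi_theta_old(a_t | s_t) and A_t is an advantage estimate. The following is a minimal NumPy sketch of that objective; the function name, argument names, and the default clip range are illustrative choices, not a reference implementation from the paper.

```python
import numpy as np

def clipped_surrogate_objective(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Average clipped surrogate objective L^CLIP (to be maximized).

    new_log_probs, old_log_probs: log pi_theta(a_t|s_t) and log pi_theta_old(a_t|s_t)
    advantages: advantage estimates A_t for the sampled timesteps
    clip_eps: clipping range epsilon
    """
    ratio = np.exp(new_log_probs - old_log_probs)                    # r_t(theta)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # The elementwise minimum makes the objective a pessimistic (lower) bound,
    # so moving the policy far from the old policy is never rewarded.
    return np.mean(np.minimum(unclipped, clipped))
```

Because the objective is bounded in this way, it can be maximized with several epochs of minibatch stochastic gradient ascent on the same batch of sampled trajectories, which is the practical difference from standard policy gradient methods that the abstract highlights.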


          Author and article information

Journal: arXiv, 2017
Publication date: July 2017
Article DOI: 10.48550/ARXIV.1707.06347
License: arXiv.org perpetual, non-exclusive license (open access)
History: 20 July 2017; 21 July 2017; 28 August 2017; 29 August 2017
Subjects: Machine Learning (cs.LG); FOS: Computer and information sciences
