
      On the Linear Convergence of the ADMM in Decentralized Consensus Optimization

      Preprint
Wei Shi, Qing Ling, Kun Yuan, Gang Wu, Wotao Yin


          Abstract

          In decentralized consensus optimization, a connected network of agents collaboratively minimizes the sum of their local objective functions over a common decision variable, with information exchange restricted to neighboring agents. To this end, one can first reformulate the problem and then apply the alternating direction method of multipliers (ADMM). The method alternates between iterative computation at the individual agents and information exchange between neighbors. This approach has been observed to converge quickly and is deemed powerful. This paper establishes its linear convergence rate for the decentralized consensus optimization problem with strongly convex local objective functions. The theoretical convergence rate is given explicitly in terms of the network topology, the properties of the local objective functions, and the algorithm parameter. This result is not only a performance guarantee but also a guideline toward accelerating ADMM convergence.
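          As a concrete, editor-added illustration of the setup (a minimal sketch, not the paper's own pseudocode or notation), the following Python script runs one standard decentralized consensus ADMM recursion on a five-agent ring with assumed scalar quadratic local objectives f_i(x) = (a_i/2)(x - b_i)^2; each agent updates using only its own data and its neighbors' latest iterates:

              import numpy as np

              # Ring network of n agents; neighbors[i] lists the neighbors of agent i.
              n = 5
              neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
              deg = np.array([len(nb) for nb in neighbors], dtype=float)

              # Assumed strongly convex local objectives f_i(x) = (a_i / 2) * (x - b_i)**2.
              rng = np.random.default_rng(0)
              a = rng.uniform(1.0, 2.0, size=n)
              b = rng.uniform(-1.0, 1.0, size=n)
              x_star = np.sum(a * b) / np.sum(a)   # minimizer of sum_i f_i(x)

              c = 1.0                              # ADMM penalty, the "algorithm parameter"
              x = np.zeros(n)                      # primal variable held by each agent
              alpha = np.zeros(n)                  # dual variable held by each agent

              for k in range(300):
                  x_prev = x.copy()
                  for i in range(n):
                      # Local subproblem: grad f_i(x) + alpha_i + 2*c*deg_i*x
                      #                   = c*(deg_i*x_i^k + sum_{j in N_i} x_j^k).
                      s = deg[i] * x_prev[i] + sum(x_prev[j] for j in neighbors[i])
                      x[i] = (a[i] * b[i] - alpha[i] + c * s) / (a[i] + 2.0 * c * deg[i])
                  for i in range(n):
                      # Dual step, driven by disagreement with neighbors.
                      alpha[i] += c * (deg[i] * x[i] - sum(x[j] for j in neighbors[i]))

              print(np.max(np.abs(x - x_star)))    # near zero: consensus at x_star

          The penalty c, the network topology, and the strong convexity and smoothness of the f_i are exactly the quantities in which the paper's explicit linear rate is expressed, which is what makes the rate usable as a tuning guideline.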


          Most cited references (13)


          Distributed Subgradient Methods for Multi-Agent Optimization


            Consensus in Ad Hoc WSNs With Noisy Links—Part I: Distributed Estimation of Deterministic Signals


              Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling

              J. C. Duchi, A. Agarwal, and M. J. Wainwright (2011)
              The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale optimization in machine learning. We develop and analyze distributed algorithms based on dual averaging of subgradients, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our method of analysis allows for a clear separation between the convergence of the optimization algorithm itself and the effects of communication constraints arising from the network structure. In particular, we show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network. The sharpness of this prediction is confirmed both by theoretical lower bounds and simulations for various networks. Our approach includes both the cases of deterministic optimization and communication, as well as problems with stochastic optimization and/or communication.
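              Schematically (an editor's sketch, not the authors' code), distributed dual averaging keeps at each agent a dual variable z_i that is mixed with neighbors through a doubly stochastic matrix P and incremented by a local subgradient, then mapped back to the primal by a proximal step. The example below assumes a five-agent ring, scalar nonsmooth objectives f_i(x) = |x - b_i| (whose sum is minimized at the median of the b_i), and the canonical proximal function psi(x) = x^2/2; all names are illustrative:

                  import numpy as np

                  # Ring of n agents with a doubly stochastic mixing matrix P
                  # (weight 1/3 on self and on each of the two ring neighbors).
                  n = 5
                  P = np.zeros((n, n))
                  for i in range(n):
                      P[i, i] = 1.0 / 3.0
                      P[i, (i - 1) % n] = 1.0 / 3.0
                      P[i, (i + 1) % n] = 1.0 / 3.0

                  # Assumed local objectives f_i(x) = |x - b_i|; their sum is
                  # minimized at the median of the b_i.
                  b = np.array([0.0, 0.5, 1.0, 1.5, 3.0])
                  x_star = np.median(b)

                  z = np.zeros(n)        # dual variables: mixed, accumulated subgradients
                  x = np.zeros(n)        # primal iterates
                  x_avg = np.zeros(n)    # running averages, the quantity the analysis bounds

                  for t in range(1, 20001):
                      g = np.sign(x - b)           # a subgradient of |x - b_i| at x_i
                      z = P @ z + g                # mix with neighbors, add local subgradient
                      x = -z / np.sqrt(t)          # prox step with psi(x) = x**2/2, step 1/sqrt(t)
                      x_avg += (x - x_avg) / t     # online running average

                  print(np.max(np.abs(x_avg - x_star)))   # shrinks toward zero as t grows

              The mixing matrix P is where the network enters: its spectral gap governs how fast local subgradient information spreads, which is the quantity the abstract's iteration bound scales against.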

                Author and article information

                Journal
                History: 21 July 2013; 2014-11-29
                Article DOI: 10.1109/TSP.2014.2304432
                arXiv ID: 1307.5561
                Record ID: 7213644b-4191-40ed-b1f8-d3f5b79350d1
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Custom metadata: 11 figures, IEEE Transactions on Signal Processing, 2014
                Subject: math.OC
