
      Asymptotic Convergence Rate and Statistical Inference for Stochastic Sequential Quadratic Programming

      Preprint


          Abstract

We apply a stochastic sequential quadratic programming (StoSQP) algorithm to solve constrained nonlinear optimization problems, where the objective is stochastic and the constraints are deterministic. We study a fully stochastic setup, where only a single sample is available in each iteration for estimating the gradient and Hessian of the objective. We allow StoSQP to select a random stepsize \(\bar{\alpha}_t\) adaptively, such that \(\beta_t\leq \bar{\alpha}_t \leq \beta_t+\chi_t\), where \(\beta_t\) and \(\chi_t=o(\beta_t)\) are prespecified deterministic sequences. We also allow StoSQP to solve the Newton system inexactly via randomized iterative solvers, e.g., the sketch-and-project method, and we do not require the approximation error of the inexact Newton direction to vanish. For this general StoSQP framework, we establish the asymptotic convergence rate of its last iterate, with the worst-case iteration complexity as a byproduct, and we perform statistical inference. In particular, with properly decaying \(\beta_t,\chi_t\), we show that: (i) the StoSQP scheme takes at most \(O(1/\epsilon^4)\) iterations to achieve \(\epsilon\)-stationarity; (ii) asymptotically and almost surely, \(\|(x_t -x^\star, \lambda_t - \lambda^\star)\| = O(\sqrt{\beta_t\log(1/\beta_t)})+O(\chi_t/\beta_t)\), where \((x_t,\lambda_t)\) is the primal-dual StoSQP iterate; (iii) the sequence \(1/\sqrt{\beta_t}\cdot (x_t -x^\star, \lambda_t - \lambda^\star)\) converges in distribution to a mean-zero Gaussian with a nontrivial covariance matrix. Moreover, we establish a Berry-Esseen bound for \((x_t, \lambda_t)\) that quantifies the convergence of its distribution function. We also provide a practical estimator for the covariance matrix, from which confidence intervals for \((x^\star, \lambda^\star)\) can be constructed using the iterates \(\{(x_t,\lambda_t)\}_t\). Our theorems are validated on nonlinear problems from the CUTEst test set.
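
To make the scheme concrete, below is a minimal Python sketch of one StoSQP iteration for an equality-constrained problem \(\min_x \mathbb{E}[f(x;\xi)]\) subject to \(c(x)=0\). It is an illustration under assumptions, not the paper's exact algorithm: the callables grad_sample, hess_sample, c, and jac_c are hypothetical placeholders, the KKT system is solved exactly with a dense solver instead of the inexact randomized solvers the paper allows, and the random stepsize is simply drawn uniformly from \([\beta_t, \beta_t+\chi_t]\).

import numpy as np

def stosqp_step(x, lam, grad_sample, hess_sample, c, jac_c, beta_t, chi_t, rng):
    """One illustrative StoSQP iteration (sketch only, not the paper's exact scheme)."""
    g = grad_sample(x)   # single-sample estimate of the objective gradient
    H = hess_sample(x)   # single-sample estimate of the objective Hessian (regularize in practice)
    J = jac_c(x)         # Jacobian of the deterministic constraints
    cx = c(x)
    n, m = x.size, cx.size

    # Assemble the Newton/KKT system; the paper also permits solving it inexactly
    # with randomized iterative solvers such as sketch-and-project.
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = -np.concatenate([g + J.T @ lam, cx])
    step = np.linalg.solve(K, rhs)
    dx, dlam = step[:n], step[n:]

    # Adaptive random stepsize: any stepsize in [beta_t, beta_t + chi_t] is admissible;
    # drawing it uniformly here is purely for illustration.
    alpha = beta_t + chi_t * rng.random()
    return x + alpha * dx, lam + alpha * dlam

Given the asymptotic normality stated above and an estimate \(\hat{\Sigma}_t\) of the limiting covariance, an approximate 95% confidence interval for the \(i\)-th coordinate of \((x^\star,\lambda^\star)\) would take the form \((x_t,\lambda_t)_i \pm 1.96\sqrt{\beta_t[\hat{\Sigma}_t]_{ii}}\); the precise covariance estimator and scaling are those given in the paper.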

          Author and article information

Journal
26 May 2022
Article: 2205.13687
License: http://creativecommons.org/licenses/by/4.0/

History
Custom metadata: 59 pages, 6 figures
Categories: math.OC, cs.LG, stat.ML
Keywords: Numerical methods, Machine learning, Artificial intelligence
