Operations Research Transactions ›› 2023, Vol. 27 ›› Issue (4): 153-165.doi: 10.15960/j.cnki.issn.1007-6093.2023.04.008


Stochastic approximation methods for nonconvex constrained optimization

Xiao WANG1,*

  1. Peng Cheng Laboratory, Shenzhen 518066, Guangdong, China
  • Received:2023-04-30 Online:2023-12-15 Published:2023-12-07
  • Contact: Xiao WANG E-mail:wangx07@pcl.ac.cn

Abstract:

In many application fields such as artificial intelligence and scientific computing, application-driven mathematical optimization models often exhibit randomness due to their dependence on large and/or noisy data sets; moreover, such models are typically large-scale and involve complex nonconvex operator constraints. Exact function information for these models is therefore usually expensive to obtain, which poses great challenges to both algorithm design and theoretical analysis. In recent years, stochastic approximation algorithms for nonconvex constrained optimization have attracted much attention. These algorithms currently fall into three categories: penalty methods based on stochastic approximation, proximal point algorithms, and stochastic sequential quadratic programming algorithms. In this paper, we briefly survey the latest progress on these algorithms and present their algorithmic frameworks and theoretical properties, including asymptotic convergence and complexity theory.
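To make the setting concrete, the following is a minimal illustrative sketch (not taken from the paper) of the basic projected stochastic-approximation template that the surveyed penalty, proximal-point, and stochastic SQP methods all refine. The objective, noise model, and ball constraint here are hypothetical choices for demonstration only; only a noisy (sampled) gradient is available, mimicking the inexact function information discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x, batch=32):
    # Noisy gradient of the toy objective f(x) = 0.5*||x - 1||^2,
    # averaged over a mini-batch of zero-mean noise samples. This
    # mimics having access only to inexact stochastic first-order
    # information, as in the models described above.
    noise = rng.normal(scale=0.1, size=(batch, x.size)).mean(axis=0)
    return (x - 1.0) + noise

def project_ball(x, radius=1.0):
    # Euclidean projection onto the feasible set {x : ||x|| <= radius}.
    # (A convex set for simplicity; the surveyed methods target far
    # more general nonconvex operator constraints.)
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

x = np.zeros(5)
for k in range(500):
    step = 1.0 / np.sqrt(k + 1)  # diminishing step size, standard in SA
    x = project_ball(x - step * stochastic_grad(x))
```

Under this setup the iterates stay feasible by construction and drift toward the projection of the unconstrained minimizer (the all-ones vector) onto the unit ball; the diminishing step size averages out the gradient noise, which is the core idea of stochastic approximation.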

Key words: stochastic approximation, nonconvex constraints, critical point, constraint qualifications, asymptotic convergence, complexity
