Operations Research Transactions ›› 2021, Vol. 25 ›› Issue (1): 61-72. DOI: 10.15960/j.cnki.issn.1007-6093.2021.01.005


A proximal gradient method for nonsmooth convex optimization problems

Hongwu LI1,2, Min XIE1, Rong ZHANG3

  1. College of Applied Sciences, Beijing University of Technology, Beijing 100124, China
    2. School of Mathematics and Statistics, Nanyang Normal University, Nanyang 473061, Henan, China
    3. Hanergy Thin Film Power Group Headquarters, Beijing 100101, China
  • Received: 2019-04-01 Online: 2021-03-15 Published: 2021-03-05
  • Contact: Hongwu LI E-mail: xmin@emails.bjut.edu.cn

Abstract:

This paper studies a proximal gradient method based on line search (L-PGM) and its convergence for solving convex optimization problems whose objective function is the sum of a smooth loss function and a nonsmooth regularizer. Assuming the gradient of the loss function is locally Lipschitz continuous, we prove the R-linear convergence rate of the L-PGM method. Then, focusing on problems regularized by the sparse group Lasso function, we prove that an error bound holds around the optimal solution set, and thereby establish the linear convergence of the L-PGM method for such problems. Finally, preliminary numerical results support our theoretical analysis.
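To make the setting concrete, the following is a minimal Python sketch of a proximal gradient step with backtracking line search applied to a least-squares loss plus a sparse group Lasso penalty, in the spirit of the L-PGM method studied here. The loss, group structure, and all parameter values (lam1, lam2, beta, etc.) are illustrative assumptions, not the authors' exact algorithm or experimental setup.

import numpy as np

def prox_sparse_group_lasso(v, t, lam1, lam2, groups):
    """Closed-form prox of t*(lam1*||.||_1 + lam2*sum_g ||.||_2):
    soft-threshold coordinatewise, then shrink each group's norm."""
    u = np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0)
    x = np.zeros_like(u)
    for g in groups:  # groups partition the coordinates
        ng = np.linalg.norm(u[g])
        if ng > 0:
            x[g] = max(0.0, 1.0 - t * lam2 / ng) * u[g]
    return x

def l_pgm(A, b, groups, lam1=0.1, lam2=0.1, t0=1.0, beta=0.5,
          max_iter=500, tol=1e-8):
    """Proximal gradient method with backtracking line search for
    min_x 0.5*||Ax - b||^2 + lam1*||x||_1 + lam2*sum_g ||x_g||_2."""
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = A @ x - b
        grad = A.T @ r                # gradient of the smooth loss
        g_x = 0.5 * (r @ r)
        t = t0
        while True:                   # backtracking line search
            x_new = prox_sparse_group_lasso(x - t * grad, t, lam1, lam2, groups)
            d = x_new - x
            # sufficient-decrease test on the smooth part
            if 0.5 * np.linalg.norm(A @ x_new - b) ** 2 <= \
               g_x + grad @ d + (d @ d) / (2 * t):
                break
            t *= beta
        if np.linalg.norm(d) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# Example usage on random data with two coordinate groups
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
b = rng.standard_normal(40)
groups = [np.arange(0, 5), np.arange(5, 10)]
print(l_pgm(A, b, groups))

The prox of the sparse group Lasso penalty used above is the standard closed-form composition: coordinatewise soft-thresholding followed by groupwise norm shrinkage.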

Key words: nonsmooth convex optimization, proximal gradient method, locally Lipschitz continuous, error bound, linear convergence
