A proximal gradient method for nonsmooth convex optimization problems

  • 1. College of Applied Sciences, Beijing University of Technology, Beijing 100124, China
    2. School of Mathematics and Statistics, Nanyang Normal University, Nanyang 473061, Henan, China
    3. Hanergy Thin Film Power Group Headquarters, Beijing 100101, China

Received date: 2019-04-01

  Online published: 2021-03-05

Abstract

This paper studies a proximal gradient method with linesearch (L-PGM) and its convergence for solving convex optimization problems whose objective is the sum of a smooth loss function and a nonsmooth regularizer. Under the assumption that the gradient of the loss function is locally Lipschitz continuous, the R-linear convergence rate of the L-PGM method is proved. Then, for problems regularized by the sparse group Lasso function, we prove that an error bound holds around the optimal solution set, which yields linear convergence of the L-PGM method for such problems. Finally, preliminary experimental results support our theoretical analysis.
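To illustrate the kind of method the abstract describes, the following is a minimal sketch of a proximal gradient iteration with backtracking linesearch, applied to an ℓ1-regularized least-squares instance (the paper treats the more general sparse group Lasso regularizer; the step-size rule and parameter names here are illustrative assumptions, not the paper's exact L-PGM scheme).

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l_pgm(A, b, lam, x0, step0=1.0, shrink=0.5, max_iter=500, tol=1e-10):
    """Proximal gradient with backtracking linesearch for
       min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = x0.copy()
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)   # smooth part of the objective
    for _ in range(max_iter):
        g = A.T @ (A @ x - b)                      # gradient of the smooth part
        t = step0
        while True:
            # proximal step with trial step size t
            x_new = soft_threshold(x - t * g, t * lam)
            d = x_new - x
            # backtracking: shrink t until the sufficient-decrease condition holds
            if f(x_new) <= f(x) + g @ d + 0.5 / t * (d @ d):
                break
            t *= shrink
        if np.linalg.norm(x_new - x) <= tol:       # stop when the iterate stalls
            return x_new
        x = x_new
    return x
```

For example, with a large regularization weight the soft-thresholding step keeps the iterate at zero, reflecting the sparsity induced by ℓ1-type regularizers; the locally Lipschitz gradient assumption in the abstract is what guarantees the backtracking loop above terminates.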

Cite this article

Hongwu LI, Min XIE, Rong ZHANG. A proximal gradient method for nonsmooth convex optimization problems[J]. Operations Research Transactions, 2021, 25(1): 61-72. DOI: 10.15960/j.cnki.issn.1007-6093.2021.01.005
