A proximal gradient method for nonsmooth convex optimization problems

1. College of Applied Sciences, Beijing University of Technology, Beijing 100124, China
2. School of Mathematics and Statistics, Nanyang Normal University, Nanyang 473061, Henan, China
3. Hanergy Thin Film Power Group Headquarters, Beijing 100101, China

Li Hongwu  E-mail: xmin@emails.bjut.edu.cn

Received date: 2019-04-01

Online published: 2021-03-05

Fund: National Natural Science Foundation of China (11771003)


Cite this article

Li Hongwu, Xie Min, Zhang Rong. A proximal gradient method for nonsmooth convex optimization problems[J]. Operations Research Transactions, 2021, 25(1): 61-72. DOI: 10.15960/j.cnki.issn.1007-6093.2021.01.005

Abstract

A proximal gradient method based on line search (L-PGM) and its convergence for solving convex optimization problems whose objective is the sum of a smooth loss function and a nonsmooth regular function are studied in this paper. Assuming that the gradient of the loss function is locally Lipschitz continuous, the R-linear convergence rate of the L-PGM method is proved. Then, focusing on problems regularized by the sparse group Lasso function, we prove that an error bound holds around the optimal solution set, and thus obtain the linear convergence of the L-PGM method for such problems. Finally, preliminary numerical results support our theoretical analysis.
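The scheme described in the abstract can be sketched as follows. This is an illustrative implementation under assumed details, not the paper's exact algorithm: the backtracking test uses the standard quadratic upper-bound condition, and the sparse group Lasso proximal operator is computed by the known composition of elementwise soft-thresholding followed by groupwise shrinkage. All function names, parameter values, and the stopping rule are ours.

```python
import numpy as np

def prox_sparse_group_lasso(v, t, lam1, lam2, groups):
    """Prox of t*(lam1*||x||_1 + lam2*sum_g ||x_g||_2):
    soft-threshold elementwise, then shrink each group block."""
    x = np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm <= t * lam2:
            x[g] = 0.0
        else:
            x[g] *= 1.0 - t * lam2 / norm
    return x

def l_pgm(f, grad_f, prox, x0, t0=1.0, beta=0.5, max_iter=500, tol=1e-8):
    """Proximal gradient with backtracking line search:
    shrink the step t until the quadratic upper bound
    f(x+) <= f(x) + <grad f(x), x+ - x> + ||x+ - x||^2 / (2t) holds."""
    x = x0
    for _ in range(max_iter):
        g = grad_f(x)
        t = t0
        while True:
            x_new = prox(x - t * g, t)
            d = x_new - x
            if f(x_new) <= f(x) + g @ d + (d @ d) / (2.0 * t):
                break
            t *= beta  # backtrack
        if np.linalg.norm(d) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x
```

For example, with the quadratic loss f(x) = ||Ax - b||^2 / 2 one would pass `f`, its gradient `A.T @ (A @ x - b)`, and `lambda v, t: prox_sparse_group_lasso(v, t, lam1, lam2, groups)` to `l_pgm`; only a locally Lipschitz gradient is needed for the line search to terminate, which is the setting the paper considers.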
