Operations Research Transactions ›› 2021, Vol. 25 ›› Issue (1): 61-72. DOI: 10.15960/j.cnki.issn.1007-6093.2021.01.005


A proximal gradient method for nonsmooth convex optimization problems

Hongwu LI1,2, Min XIE1, Rong ZHANG3

  1. College of Applied Sciences, Beijing University of Technology, Beijing 100124, China
    2. School of Mathematics and Statistics, Nanyang Normal University, Nanyang 473061, Henan, China
    3. Hanergy Thin Film Power Group Headquarters, Beijing 100101, China
  • Received: 2019-04-01 Online: 2021-03-15 Published: 2021-03-05
  • Contact: Hongwu LI, E-mail: xmin@emails.bjut.edu.cn
  • Supported by: National Natural Science Foundation of China (No. 11771003)


Abstract:

This paper studies a proximal gradient method with line search (L-PGM), together with its convergence analysis, for convex optimization problems whose objective is the sum of a smooth loss function and a nonsmooth regular function. Assuming only that the gradient of the loss function is locally Lipschitz continuous, we prove the R-linear convergence rate of L-PGM. Focusing then on problems regularized by the sparse group Lasso function, we prove that an error bound condition holds around the optimal solution set, which yields the linear convergence of L-PGM for such problems. Finally, preliminary numerical results support our theoretical analysis.
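
To make the setting concrete: the problem has the standard composite form $\min_x F(x) = f(x) + g(x)$, where $f$ is the smooth loss and $g$ the nonsmooth regularizer; for the sparse group Lasso, $g(x) = \lambda_1 \|x\|_1 + \lambda_2 \sum_{J \in \mathcal{G}} \|x_J\|_2$. The following is a minimal Python sketch of a proximal gradient iteration with backtracking line search on a least-squares loss with this regularizer. It illustrates the general scheme under a standard sufficient-decrease rule, not the authors' exact L-PGM; the parameter choices, stopping test, and data below are our own assumptions.

```python
import numpy as np

def prox_sparse_group_lasso(v, t, lam1, lam2, groups):
    """Prox of g(x) = lam1*||x||_1 + lam2 * sum_J ||x_J||_2 at v with step t.
    Closed form: componentwise soft-thresholding, then blockwise shrinkage."""
    x = np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0)
    for J in groups:
        nJ = np.linalg.norm(x[J])
        if nJ > 0.0:
            x[J] *= max(0.0, 1.0 - t * lam2 / nJ)
    return x

def l_pgm(f, grad_f, prox_g, x0, t0=1.0, beta=0.5, max_iter=1000, tol=1e-8):
    """Proximal gradient method with backtracking line search.
    The step size t is shrunk until the sufficient-decrease condition
        f(x+) <= f(x) + <grad f(x), x+ - x> + ||x+ - x||^2 / (2 t)
    holds, which needs only local (not global) Lipschitz continuity of grad f."""
    x = x0.copy()
    for _ in range(max_iter):
        gx, fx = grad_f(x), f(x)
        t = t0
        while True:
            x_new = prox_g(x - t * gx, t)   # proximal (forward-backward) step
            d = x_new - x
            if f(x_new) <= fx + gx @ d + (d @ d) / (2.0 * t):
                break                        # sufficient decrease: accept step
            t *= beta                        # otherwise shrink and retry
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Illustrative use: least-squares loss with a sparse group Lasso regularizer.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 12)), rng.standard_normal(40)
groups = [range(0, 4), range(4, 8), range(8, 12)]
lam1 = lam2 = 0.1
x_opt = l_pgm(f=lambda x: 0.5 * np.sum((A @ x - b) ** 2),
              grad_f=lambda x: A.T @ (A @ x - b),
              prox_g=lambda v, t: prox_sparse_group_lasso(v, t, lam1, lam2, groups),
              x0=np.zeros(12))
```

The backtracking loop is what lets the analysis replace global Lipschitz continuity of the gradient with the weaker local condition: the accepted step size adapts to the gradient's behavior near the current iterate.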

Key words: nonsmooth convex optimization, proximal gradient method, locally Lipschitz continuous, error bound, linear convergence
