Operations Research Transactions ›› 2024, Vol. 28 ›› Issue (2): 131-142. doi: 10.15960/j.cnki.issn.1007-6093.2024.02.010


An accelerated proximal stochastic gradient method with variance reduction based on Polyak step size

Fusheng WANG1,*(), Luyu SHI1

  1. School of Mathematics and Statistics, Taiyuan Normal University, Jinzhong 030619, Shanxi, China
  • Received: 2023-07-01 Online: 2024-06-15 Published: 2024-06-07
  • Contact: Fusheng WANG E-mail: fswang2005@163.com
  • Funding:
    Shanxi Province Basic Research Program (Free Exploration Category) (202103021224303)



Abstract:

To solve stochastic composite optimization problems arising in large-scale machine learning, we propose a new accelerated proximal stochastic variance-reduced gradient algorithm, Acc-Prox-SVRG-Polyak, which combines the Acc-Prox-SVRG algorithm with the Polyak step size method. Compared with existing algorithms, the new algorithm fully exploits the advantages of acceleration techniques and the Polyak step size to improve accuracy. The convergence of the algorithm is established under standard assumptions, and its complexity is analyzed. Finally, numerical experiments on standard data sets verify the effectiveness of the new algorithm.

Key words: Polyak step size, variance reduction, machine learning, stochastic gradient
