Operations Research Transactions >
2021, Vol. 25 > Issue 2: 81-92
DOI: https://doi.org/10.15960/j.cnki.issn.1007-6093.2021.02.006
A scaled incremental gradient method
Received date: 2019-12-30
Online published: 2021-05-06
Funding
National Natural Science Foundation of China (11661019); Guizhou Provincial Science and Technology Program Project (20161039); Guizhou Provincial Innovation Team for Data-Driven Modeling, Learning and Optimization (Qiankehe Platform Talent [2020]5016)
QIAN Xiaohui, WANG Xiangmei. A scaled incremental gradient method[J]. Operations Research Transactions, 2021, 25(2): 81-92. DOI: 10.15960/j.cnki.issn.1007-6093.2021.02.006
A scaled incremental gradient algorithm for minimizing a sum of continuously differentiable functions is presented. At each iteration, the iterate is updated incrementally through a sequence of steps, each of which cyclically evaluates the normalized gradient of a single component function (or of several component functions). Under moderate assumptions, convergence of the algorithm with diminishing step sizes whose sum diverges is established. As applications, the new algorithm and the (unscaled) one proposed by Bertsekas D P and Tsitsiklis J N are applied to the robust estimation problem and the source localization problem, respectively. Numerical experiments show that the new algorithm is more effective and robust than the corresponding (unscaled) one.
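The incremental scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the function name `scaled_incremental_gradient`, its signature, and the toy least-squares data are our own assumptions. Each inner step moves along the normalized gradient of one component function, cycling through the components, with step sizes α_k → 0 whose sum diverges:

```python
import numpy as np

def scaled_incremental_gradient(component_grads, x0, step, n_cycles=100, eps=1e-12):
    """Incremental gradient method with normalized (scaled) component steps.

    component_grads : list of callables, grad f_i(x) for each component f_i
    step            : callable k -> alpha_k, diminishing step sizes with
                      sum_k alpha_k = +inf (e.g. alpha_k = 1/(k+1))
    """
    x = np.asarray(x0, dtype=float).copy()
    k = 0
    for _ in range(n_cycles):
        # one cycle = one incremental pass over all component functions
        for grad in component_grads:
            d = grad(x)
            nd = np.linalg.norm(d)
            if nd > eps:                   # skip (near-)stationary components
                x = x - step(k) * d / nd   # unit-length descent direction
            k += 1
    return x

# toy problem: f(x) = sum_i 0.5 * (x - c_i)^2 with c_i = 0, 1, 2
c = [0.0, 1.0, 2.0]
grads = [lambda x, ci=ci: x - ci for ci in c]
x_star = scaled_incremental_gradient(grads, [3.0], lambda k: 1.0 / (k + 1), n_cycles=300)
```

Because every component step has unit length, the scheme behaves like an incremental subgradient method on the sum of the component norms, so the iterates settle near the median of the c_i; in this symmetric toy example the median and the mean coincide at 1.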
| 1 | Bertsekas D P . Incremental least squares methods and the extended Kalman filter[J]. SIAM Journal on Optimization, 1996, 6 (3): 807- 822. |
| 2 | Bertsekas D P . A new class of incremental gradient methods for least squares problems[J]. SIAM Journal on Optimization, 1997, 7 (4): 913- 926. |
| 3 | Moriyama H , Yamashita N , Fukushima M . The incremental Gauss-Newton algorithm with adaptive stepsize rule[J]. Computational Optimization and Applications, 2003, 26 (2): 107- 141. |
| 4 | Gurbuzbalaban M , Ozdaglar A , Parrilo P . A globally convergent incremental Newton method[J]. Mathematical Programming, 2015, 151 (1): 283- 313. |
| 5 | Blatt D , Hero A , Gauchman H . A convergent incremental gradient method with a constant stepsize[J]. SIAM Journal on Optimization, 2007, 18 (1): 29- 51. |
| 6 | Bottou L , Le Cun Y . On-line learning for very large data sets[J]. Applied Stochastic Models in Business and Industry, 2005, 21 (2): 137- 151. |
| 7 | Roux N L, Schmidt M, Bach F R. A stochastic gradient method with an exponential convergence rate for finite training sets[C]//Advances in Neural Information Processing Systems 25 (NIPS 2012), 2012. |
| 8 | Grippo L . A class of unconstrained minimization methods for neural network training[J]. Optimization Methods and Software, 1994, 4 (2): 135- 150. |
| 9 | Mangasarian O L , Solodov M V . Serial and parallel backpropagation convergence via nonmonotone perturbed minimization[J]. Optimization Methods and Software, 1994, 4 (2): 103- 116. |
| 10 | Bertsekas D P , Tsitsiklis J N . Gradient convergence in gradient methods with errors[J]. SIAM Journal on Optimization, 2000, 10 (3): 627- 642. |
| 11 | Bertsekas D P. Nonlinear Programming[M]. 2nd ed. Beijing: Tsinghua University Press, 2013. |
| 12 | Wang Y J, Xiu N H. Theory and Methods for Nonlinear Optimization[M]. Beijing: Science Press, 2015. |
| 13 | Yuan Y X, Sun W Y. Optimization Theory and Methods[M]. Beijing: Science Press, 2007. |
| 14 | Chen B L. Optimization Theory and Algorithms[M]. Beijing: Tsinghua University Press, 2005. |
| 15 | Ma C F, Ke Y F, Xie Y J. Optimization Methods and Their MATLAB Implementations[M]. Beijing: National Defense Education Press, 2017. |
| 16 | Trefethen L N . Spectral Methods in MATLAB[M]. Philadelphia: SIAM, 2000. |
| 17 | Rabbat M G, Nowak R D. Decentralized source localization and tracking[C]//2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, New York: IEEE, 2004. |
| 18 | Sheng X , Hu Y H . Information Processing in Sensor Networks[M]. California: Springer, 2003. |
| 19 | Chen J C , Yao K , Hudson R E . Source localization and beamforming[J]. IEEE Signal Processing Magazine, 2002, 19 (2): 30- 39. |
| 20 | Huber P . Robust Statistics[M]. New York: John Wiley and Sons, 1981. |
| 21 | Polyak B T . Introduction to Optimization[M]. New York: Chapman & Hall, 1987. |
| 22 | Rey W J J . Introduction to Robust and Quasi-Robust Statistical Methods[M]. Berlin: Springer-Verlag, 1983. |
| 23 | Shor N Z . Minimization Methods for Non-differentiable Functions[M]. Kiev: Naukova Dumka, 1979. |