The quasi-Newton method is widely regarded as one of the most effective approaches to small-scale unconstrained optimization problems. Quasi-Newton methods avoid computing the Hessian matrix required by Newton's method; instead, they generate a symmetric matrix that approximates the Hessian at each iteration. Many quasi-Newton update formulas exist, such as the BFGS, SR1, and DFP updates. Compared with the other quasi-Newton updates, the SR1 update is simpler and requires less computation, but its convergence is usually established only under the strict condition that the step directions are uniformly linearly independent. This paper therefore proposes a new modified SR1 update formula and analyzes the convergence of the quasi-Newton algorithm based on it under two different hypotheses: with and without the uniform linear independence condition. By dropping uniform linear independence and instead assuming that the Hessian approximations remain positive definite and bounded, the algorithm based on the new modified SR1 update is proved to be ($n$+1)-step $q$-superlinearly convergent. We also validate the reasonableness of the assumptions and the effectiveness of the algorithm through experiments; the numerical results indicate that the update formula proposed in this paper achieves a better convergence rate under some conditions.
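For context, the classical (unmodified) SR1 update that the paper builds on can be sketched as follows. This is a minimal illustration, not the modified formula proposed in the paper; the function name `sr1_update` and the relative skip tolerance `r` are choices made here for the example. The safeguard of skipping the update when the denominator is small relative to $\|s\|\,\|y - Bs\|$ is the standard one from the optimization literature.

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Classical SR1 update: B_{k+1} = B + (y - Bs)(y - Bs)^T / ((y - Bs)^T s).

    s is the step x_{k+1} - x_k, y is the gradient difference
    g_{k+1} - g_k. The update is skipped when the denominator is too
    small relative to ||s|| * ||y - Bs|| (the usual safeguard).
    """
    v = y - B @ s
    denom = v @ s
    if abs(denom) <= r * np.linalg.norm(s) * np.linalg.norm(v):
        return B  # skip: rank-one correction undefined or unstable
    return B + np.outer(v, v) / denom

# On a quadratic f(x) = 0.5 x^T A x the exact secant pairs satisfy
# y = A s, and after n linearly independent steps SR1 recovers A
# exactly (its well-known finite-termination property).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    B = sr1_update(B, s, A @ s)
print(np.allclose(B, A))  # True
```

Note that, unlike BFGS, the SR1 update does not guarantee positive definiteness of the iterates, which is why the convergence analysis in this paper adds the assumption of positive-definite bounded Hessian approximations.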
[1] Dennis J E, Schnabel R B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations[M]. Englewood Cliffs: Prentice-Hall, 1983.
[2] Conn A R, Gould N I M, Toint P L. Convergence of quasi-Newton matrices generated by the symmetric rank one update[J]. Mathematical Programming, 1991, 50:177-195.
[3] Khalfan H F, Byrd R H, Schnabel R B. A theoretical and experimental study of the symmetric rank one update[J]. SIAM Journal on Optimization, 1993, 3:1-24.
[4] Osborne M R, Sun L P. A new approach to the symmetric rank-one updating algorithm[D]. Canberra: Australian National University.
[5] Nazareth J L, Smith B. Metric based symmetric rank-one updates[J]. Computational Optimization and Applications, 1997, 8:219-244.
[6] Spellucci P. A modified rank one update which converges Q-superlinearly[J]. Computational Optimization and Applications, 2001, 19:273-296.
[7] Zhang J Z, Deng N Y, Chen L H. New quasi-Newton equation and related methods for unconstrained optimization[J]. Journal of Optimization Theory and Applications, 1999, 102(1):147-167.
[8] Zhang J, Xu C. Properties and numerical performance of quasi-Newton methods with modified quasi-Newton equations[J]. Journal of Computational and Applied Mathematics, 2001, 137:269-278.
[9] Narushima Y. Symmetric rank-one method based on some modified secant conditions for unconstrained optimization[J]. SUT Journal of Mathematics, 2011, 47(1):25-43.
[10] Moré J J, Garbow B S, Hillstrom K E. Testing unconstrained optimization software[J]. ACM Transactions on Mathematical Software, 1981, 7:17-41.
[11] Al-Baali M, Khalfan H. A combined class of self-scaling and modified quasi-Newton methods[J]. Computational Optimization and Applications, 2012, 52:393-408.
[12] Huang W. Optimization algorithms on Riemannian manifolds with applications[D]. Florida:Florida State University, 2013.
[13] Khiyabani F M, Leong W J. Limited memory methods with improved symmetric rank-one updates and its applications on nonlinear image restoration[J]. Arabian Journal for Science and Engineering, 2014, 39(11):7823-7838.
[14] He J W, Wei H Y. A Riemannian subspace limited-memory SR1 trust region method[J]. Optimization Letters, 2015.
[15] Yuan Y X, Sun W Y. Optimization Theory and Methods[M]. Beijing: Science Press, 2005. (in Chinese)
[16] Ma C F. Optimization Methods and Their MATLAB Programming[M]. Beijing: Science Press, 2010. (in Chinese)
[17] Nocedal J, Wright S. Numerical Optimization[M]. New York:Springer-Verlag, 1999.