Operations Research Transactions ›› 2014, Vol. 18 ›› Issue (1): 69-92.
DAI Yuhong1,*, LIU Xinwei2
Online: 2014-03-15
Published: 2014-03-15
Corresponding author: DAI Yuhong
E-mail: dyh@lsec.cc.ac.cn
Supported by: the National Science Fund for Distinguished Young Scholars (No. 11125107) and the National Natural Science Foundation of China (Nos. 11331012, 81173633, 11271107)
Abstract: Linear programming and nonlinear programming are classical and important research directions in mathematical programming. This paper introduces the background of the field and surveys recent algorithms and theory for linear programming, unconstrained optimization, and constrained optimization, together with some frontier and hot topics. The alternating direction method of multipliers (ADMM), a class of methods for structured constrained optimization problems, has received much attention in recent years, and global optimization is of great importance to applied optimization; the paper therefore also attempts to describe some recent advances and open problems in these two areas.
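As a brief illustration of the kind of structured problem ADMM targets, consider minimizing $f(x)+g(z)$ subject to $Ax+Bz=c$ with $f,g$ convex. The following is a minimal sketch of the standard two-block iteration in scaled dual form (textbook material, given here only for orientation and not taken from this survey), where $\rho>0$ is the penalty parameter and $u$ the scaled multiplier:

\begin{align*}
x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\|Ax + Bz^{k} - c + u^{k}\|_{2}^{2},\\
z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - c + u^{k}\|_{2}^{2},\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{align*}

Each subproblem involves only one of $f$ and $g$, which is what makes the method attractive for the structured, separable problems mentioned in the abstract.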
CLC number:
DAI Yuhong, LIU Xinwei. Advances in linear and nonlinear programming[J]. Operations Research Transactions, 2014, 18(1): 69-92.