[1] Nesterov Y. Dual extrapolation and its applications to solving variational inequalities and related problems[J]. Mathematical Programming, 2007, 109(2/3):319-344.
[2] Monteiro R, Svaiter B. On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean[J]. SIAM Journal on Optimization, 2010, 20(6):2755-2787.
[3] Juditsky A, Nemirovski A. Solving variational inequalities with monotone operators on domains given by linear minimization oracles[J]. Mathematical Programming, 2016, 156(1/2):221-256.
[4] Facchinei F, Pang J. Finite-Dimensional Variational Inequalities and Complementarity Problems[M]. Berlin:Springer Science & Business Media, 2007.
[5] Von Neumann J, Morgenstern O. Theory of Games and Economic Behavior[M]. Princeton:Princeton University Press, 2007.
[6] Gidel G, Berard H, Vignoud G, et al. A variational inequality perspective on generative adversarial networks[C]//International Conference on Learning Representations, 2019.
[7] Chen X. Global and superlinear convergence of inexact Uzawa methods for saddle point problems with nondifferentiable mappings[J]. SIAM Journal on Numerical Analysis, 1998, 35(3):1130-1148.
[8] Yuan Y. Computational Methods for Nonlinear Optimization[M]. Beijing:Science Press, 2008 (in Chinese).
[9] He B, Yuan X. Convergence analysis of primal-dual algorithms for a saddle-point problem:From contraction perspective[J]. SIAM Journal on Imaging Sciences, 2012, 5(1):119-149.
[10] Chen Y, Lan G, Ouyang Y. Optimal primal-dual methods for a class of saddle point problems[J]. SIAM Journal on Optimization, 2014, 24(4):1779-1814.
[11] Chen Y, Lan G, Ouyang Y. Accelerated schemes for a class of variational inequalities[J]. Mathematical Programming, 2017, 165(1):113-149.
[12] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Advances in Neural Information Processing Systems, 2014:2672-2680.
[13] Qian Q, Zhu S, Tang J, et al. Robust optimization over multiple domains[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2019:4739-4746.
[14] Lu S, Tsaknakis I, Hong M, et al. Hybrid block successive approximation for one-sided nonconvex min-max problems:Algorithms and applications[J]. IEEE Transactions on Signal Processing, 2020, 68:3676-3691.
[15] Sinha A, Namkoong H, Duchi J. Certifiable distributional robustness with principled adversarial training[C]//International Conference on Learning Representations, 2018.
[16] Dai B, Shaw A, Li L, et al. SBEED:Convergent reinforcement learning with nonlinear function approximation[C]//International Conference on Machine Learning, PMLR, 2018:1125-1134.
[17] Shafieezadeh-Abadeh S, Esfahani P, Kuhn D. Distributionally robust logistic regression[C]//Proceedings of the 28th International Conference on Neural Information Processing Systems, 2015:1576-1584.
[18] Smale S. Mathematical problems for the next century[J]. Mathematical Intelligencer, 1998, 20:7-15.
[19] Hiriart-Urruty J. A new series of conjectures and open questions in optimization and matrix analysis[J]. ESAIM:Control, Optimisation and Calculus of Variations, 2009, 15:454-470.
[20] Mathematics Editorial Committee of 10000 Scientific Problems. 10000 Scientific Problems:Mathematics Volume[M]. Beijing:Science Press, 2008 (in Chinese).
[21] Hu X, Yuan Y, Zhang X. Retrospect and prospect of the development of operations research[J]. Bulletin of the Chinese Academy of Sciences, 2012, (2):145-160 (in Chinese).
[22] Wang Q, Wen Z, Lan G, et al. Complexity analysis of optimization algorithms[J]. Scientia Sinica Mathematica, 2020, (9):144-209 (in Chinese).
[23] Bubeck S. Convex optimization:Algorithms and complexity[J]. Foundations and Trends in Machine Learning, 2015, 8(3/4):231-357.
[24] Daskalakis C, Panageas I. The limit points of (optimistic) gradient descent in min-max optimization[C]//Advances in Neural Information Processing Systems, 2018:9256-9266.
[25] Mazumdar E, Jordan M, Sastry S. On finding local Nash equilibria (and only local Nash equilibria) in zero-sum games[J]. 2019, arXiv:1901.00838.
[26] Jin C, Netrapalli P, Jordan M. What is local optimality in nonconvex-nonconcave minimax optimization?[C]//International Conference on Machine Learning, PMLR, 2020:4880-4889.
[27] Dai Y, Zhang L. Optimality conditions for constrained minimax optimization[J]. CSIAM Transactions on Applied Mathematics, 2020, 1:296-315.
[28] Xu Z, Zhang H, Xu Y, et al. A unified single-loop alternating gradient projection algorithm for nonconvex-concave and convex-nonconcave minimax problems[J]. 2020, arXiv:2006.02032.
[29] Nouiehed M, Sanjabi M, Huang T, et al. Solving a class of nonconvex min-max games using iterative first order methods[C]//Advances in Neural Information Processing Systems, 2019:14905-14916.
[30] Nemirovski A. A prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems[J]. SIAM Journal on Optimization, 2004, 15(1):229-251.
[31] Auslender A, Teboulle M. Interior projection-like methods for monotone variational inequalities[J]. Mathematical Programming, 2005, 104(1):39-68.
[32] Monteiro R, Svaiter B. Complexity of variants of Tseng's modified FB splitting and Korpelevich's methods for generalized variational inequalities with applications to saddle point and convex optimization problems[J]. SIAM Journal on Optimization, 2010, 21(4):1688-1720.
[33] Tseng P. On accelerated proximal gradient methods for convex-concave optimization[J/OL]. SIAM Journal on Optimization, (2008-01-23)[2021-03-02]. https://www.csie.ntu.edu.tw/b97058/tseng/papers/apgm.pdf.
[34] Ouyang Y, Xu Y. Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems[J]. Mathematical Programming, 2019, 185:1-35.
[35] Mertikopoulos P, Zenati H, Lecouat B, et al. Mirror descent in saddle-point problems:Going the extra (gradient) mile[J]. 2018, arXiv:1807.02629.
[36] Rafique H, Liu M, Lin Q, et al. Non-convex min-max optimization:Provable algorithms and applications in machine learning[J]. 2018, arXiv:1810.02060.
[37] Sanjabi M, Ba J, Razaviyayn M, et al. On the convergence and robustness of training GANs with regularized optimal transport[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018:7091-7101.
[38] Sanjabi M, Razaviyayn M, Lee J. Solving non-convex non-concave min-max games under Polyak-Lojasiewicz condition[J]. 2018, arXiv:1812.02878.
[39] Thekumparampil K, Jain P, Netrapalli P, et al. Efficient algorithms for smooth minimax optimization[J]. 2019, arXiv:1907.01543.
[40] Kong W, Monteiro R. An accelerated inexact proximal point method for solving nonconvex-concave min-max problems[J]. 2019, arXiv:1905.13433.
[41] Lin T, Jin C, Jordan M. Near-optimal algorithms for minimax optimization[J]. 2020, arXiv:2002.02417.
[42] Letcher A, Balduzzi D, Racaniere S, et al. Differentiable game mechanics[J]. Journal of Machine Learning Research, 2019, 20(84):1-40.
[43] Lin T, Jin C, Jordan M. On gradient descent ascent for nonconvex-concave minimax problems[J]. 2019, arXiv:1906.00331.
[44] Jin C, Netrapalli P, Jordan M. Minmax optimization:Stable limit points of gradient descent ascent are locally optimal[J]. 2019, arXiv:1902.00618.
[45] Pan W, Shen J, Xu Z. An efficient algorithm for nonconvex-linear minimax optimization problem and its application in solving weighted maximin dispersion problem[J]. Computational Optimization and Applications, 2021, 78(1):287-306.
[46] Zhang H, Xu Y, Xu Z. Alternating proximal gradient algorithm for block convex-nonconcave minimax problems[J/OL]. Operations Research Transactions, (2021-03-15)[2021-02-26]. https://www.ort.shu.edu.cn/CN/abstract/abstract18248.shtml (in Chinese).
[47] Zhang J, Xiao P, Sun R, et al. A single-loop smoothed gradient descent-ascent algorithm for nonconvex-concave min-max problems[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020.
[48] Lin Q, Liu M, Rafique H, et al. Solving weakly-convex-weakly-concave saddle-point problems as weakly-monotone variational inequality[J]. 2018, arXiv:1810.10207.
[49] Dang C, Lan G. On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators[J]. Computational Optimization and Applications, 2015, 60(2):277-310.
[50] Yang J, Kiyavash N, He N. Global convergence and variance-reduced optimization for a class of nonconvex-nonconcave minimax problems[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020.
[51] Menickelly M, Wild S. Derivative-free robust optimization by outer approximations[J]. Mathematical Programming, 2018, 179(1):1-37.
[52] Picheny V, Binois M, Habbal A. A Bayesian optimization approach to find Nash equilibria[J]. Journal of Global Optimization, 2019, 73(1):171-192.
[53] Roy A, Chen Y, Balasubramanian K, et al. Online and bandit algorithms for nonstationary stochastic saddle-point optimization[J]. 2019, arXiv:1912.01698.
[54] Liu S, Lu S, Chen X, et al. Min-max optimization without gradients:Convergence and applications to adversarial ML[C]//International Conference on Machine Learning, PMLR, 2020:6282-6293.
[55] Wang Z, Balasubramanian K, Ma S, et al. Zeroth-order algorithms for nonconvex minimax problems with improved complexities[J]. 2020, arXiv:2001.07819.
[56] Xu T, Wang Z, Liang Y, et al. Enhanced first and zeroth order variance reduced algorithms for min-max optimization[J]. 2020, arXiv:2006.09361.
[57] Huang F, Gao S, Pei J, et al. Accelerated zeroth-order and first-order momentum methods from mini to minimax optimization[J]. 2020, arXiv:2008.08170.
[58] Luo L, Ye H, Zhang T. Stochastic recursive gradient descent ascent for stochastic nonconvex-strongly-concave minimax problems[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020.
[59] Carmon Y, Duchi J, Hinder O, et al. Lower bounds for finding stationary points II:First-order methods[J]. Mathematical Programming, 2021, 185:315-355.