With the continuous development of deep learning and neural network technologies, generative methods have made significant breakthroughs in the field of machine learning and have demonstrated immense potential across various application scenarios. This paper constructs a unified mathematical framework for artificial intelligence generative methods and systematically introduces its core technologies, including variational autoencoders (VAEs), generative adversarial networks (GANs), diffusion models, and flow-based models. Additionally, it provides an in-depth analysis of the advantages and limitations of the different methods across various tasks. Furthermore, this paper explores the application prospects of generative methods across fields such as mathematics, physics, life sciences, medicine, computer science, and engineering. Finally, the paper summarizes the key challenges faced by generative methods in artificial intelligence and discusses their future development directions in the fields of mathematics and intelligent optimization. This paper aims to provide valuable insights and references for researchers and practitioners in related fields.
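As one concrete ingredient behind the VAEs mentioned above, the reparameterization trick can be sketched in a few lines of plain Python. This is a minimal illustration only; function names and parameters are ours, not from any specific library, and the demo merely checks the sampling statistics rather than training a model.

```python
import math
import random

def reparameterize(mu, log_var, rng):
    """Reparameterization trick at the heart of VAE training:
    z = mu + sigma * eps with eps ~ N(0, 1), so z is a deterministic,
    differentiable function of (mu, log_var) given the noise eps."""
    sigma = math.exp(0.5 * log_var)
    eps = rng.gauss(0.0, 1.0)
    return mu + sigma * eps

rng = random.Random(0)
mu, log_var = 2.0, 2.0 * math.log(0.5)          # target N(2, 0.25)
samples = [reparameterize(mu, log_var, rng) for _ in range(50000)]
mean = sum(samples) / len(samples)
var = sum((z - mean) ** 2 for z in samples) / len(samples)
print(round(mean, 2), round(var, 2))            # close to 2.0 and 0.25
```

Because the noise enters additively, gradients with respect to `mu` and `log_var` can flow through the sample, which is what makes the VAE objective trainable by stochastic gradient descent.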
Uniqueness of tensor decomposition is a cornerstone for modeling optimization problems of low-rank tensor decomposition and low-rank tensor approximation in diverse areas of application, and it is a powerful theory for system parameter identification. This paper briefly summarizes the basic concepts of unique decomposition theory and classical conclusions such as the Kruskal theorem, necessary conditions for uniqueness, the Jennrich-Harshman theory and its extensions, partial uniqueness theory of decomposition, uniqueness of block decomposition, and uniqueness in the statistical sense. Understanding these fundamental properties provides a theoretical basis for further research on the modeling, analysis, solution, and verification of the corresponding low-rank tensor decomposition and low-rank tensor approximation optimization models.
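For context, Kruskal's theorem, the best known of the classical uniqueness results mentioned above, gives a sufficient condition for uniqueness of a rank-$R$ CP decomposition of a third-order tensor:

```latex
\mathcal{T} \;=\; \sum_{r=1}^{R} a_r \circ b_r \circ c_r ,
\qquad
k_A + k_B + k_C \;\ge\; 2R + 2 ,
```

where $k_A$ denotes the Kruskal rank of the factor matrix $A = [a_1, \dots, a_R]$ (the largest $k$ such that every $k$ columns of $A$ are linearly independent), and similarly for $k_B$ and $k_C$. Under this condition, the decomposition is essentially unique, i.e., unique up to permutation and scaling of the rank-one terms.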
One of the most important problems in managing service systems is reducing customer waiting time, and making appointments in advance to reduce the uncertainty of customer arrivals (the demand) is an important approach to reducing customer waiting time. Studies on appointment scheduling problems date back to the 1950s, and ever since, appointment scheduling optimization models and algorithms have received extensive attention in both academia and industry, particularly in the healthcare service industry. In this paper, we first describe appointment scheduling problems and their characteristics, and present a framework for classifying the problems; we then discuss the major results and progress of appointment scheduling research with our commentary; finally, we examine the current important appointment scheduling problems and discuss future research trends and directions.
Queueing-inventory systems (QIS) have emerged as a critical interdisciplinary framework that integrates queueing theory with inventory management to address dynamic service-inventory interactions in complex operational environments. This review aims to consolidate and evaluate the theoretical advancements and practical implementations of QIS, with particular emphasis on steady-state analysis techniques and their deployment across diverse application domains. Originating from the foundational contributions of Sigman and Simchi-Levi, and Melikov and Molchanov in 1992, and formally conceptualized by Schwarz et al. in 2006, QIS has evolved into a mature analytical framework. Three primary analytical approaches are examined in depth: product-form solutions, matrix-geometric methods, and approximate product-form solutions. Product-form solutions facilitate analytical tractability by decoupling the joint distribution of queue lengths and inventory levels, particularly effective in M/M/1 and related models. Matrix-geometric methods, based on quasi-birth-and-death (QBD) processes, leverage the rate matrix R to compute steady-state probabilities, with developments progressing from closed-form derivations to iterative numerical algorithms. Approximate product-form solutions are employed to handle more complex systems through state-space decomposition and bounding techniques, providing a balance between accuracy and computational efficiency. The review further explores the incorporation of game-theoretic models, particularly Stackelberg games, into QIS frameworks to capture strategic customer behavior and hierarchical decision-making in inventory control. Practical implementations of QIS span a wide range of sectors, including food manufacturing (e.g., 3D food printing), healthcare (e.g., medical waste disposal during the pandemic), blood supply chains, and urban transportation systems.
Recent modeling innovations, such as fluid inventory models and batch Markovian arrival processes, have significantly improved system responsiveness and resource optimization.
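The matrix-geometric machinery surveyed above can be illustrated in the simplest scalar case, where the QBD blocks A0, A1, A2 are 1x1 and the model degenerates to an M/M/1 queue. The rates below are illustrative, and this toy omits the inventory dimension of a full QIS model:

```python
# Fixed-point iteration for the QBD rate matrix R in the scalar case.
# R solves A0 + R*A1 + R^2*A2 = 0, and the minimal nonnegative solution
# yields the matrix-geometric steady state pi_{n+1} = pi_n * R.
lam, mu = 1.0, 2.0                  # arrival and service rates (illustrative)
a0, a1, a2 = lam, -(lam + mu), mu   # 1x1 QBD blocks

R = 0.0
for _ in range(200):                # R_{k+1} = -(A0 + R_k^2 A2) * A1^{-1}
    R = -(a0 + R * R * a2) / a1

print(R)                            # converges to lam/mu = 0.5
```

Starting from R = 0, the iteration converges monotonically to the minimal solution; here that is the utilization lam/mu, recovering the familiar geometric steady-state distribution of the M/M/1 queue. In the genuine matrix case the same recursion runs with matrix products and an inverse of A1.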
The Stackelberg prediction game (SPG) is a bilevel optimization framework for modeling strategic interactions between a learner and a follower. Existing methods for solving this problem with general loss functions are scarce and computationally expensive. We propose a novel hyper-gradient type method with a warm-start strategy to address this challenge. Specifically, we first use a Taylor expansion-based approach to obtain a good initial point. Then we apply a hyper-gradient descent method with an explicit approximate hyper-gradient. We establish the convergence results of our algorithm theoretically. Furthermore, when the follower employs the least squares loss function, our method is shown to reach an ε-stationary point by solving quadratic subproblems. Numerical experiments show our algorithms are empirically orders of magnitude faster than the state-of-the-art.
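The core idea of a hyper-gradient method can be shown on a toy bilevel problem where the lower level has a closed-form solution. Everything below is an illustration of hyper-gradient descent via implicit differentiation, not the paper's SPG algorithm or its warm start:

```python
# Toy bilevel problem.  Lower level: w*(x) = argmin_w (w - x)^2 + w^2,
# so w*(x) = x/2 and, by implicit differentiation of the optimality
# condition g_w(w, x) = 4w - 2x = 0, dw*/dx = -g_{wx}/g_{ww} = 1/2.
# Upper level: minimize F(x) = (w*(x) - 1)^2 + 0.1 * x^2.
def lower_solution(x):
    return x / 2.0

def hyper_gradient(x):
    """Total derivative dF/dx, chaining through the lower-level solution."""
    w = lower_solution(x)
    dw_dx = 0.5
    return 2.0 * (w - 1.0) * dw_dx + 0.2 * x

x = 0.0
for _ in range(300):                # plain hyper-gradient descent
    x -= 0.5 * hyper_gradient(x)

print(round(x, 4))                  # stationary point x* = 10/7
```

The hyper-gradient here is exact because the lower level is solved in closed form; the methods discussed in the abstract must instead work with an approximate lower-level solution and an approximate hyper-gradient, which is where the analysis becomes delicate.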
A nowhere-zero $k$-flow on a graph $G=(V(G), E(G))$ is a pair $(D, f)$, where $D$ is an orientation on $E(G)$ and $f\colon E(G)\to \{\pm1, \pm2, \cdots, \pm(k-1)\}$ is a function such that the total outflow equals the total inflow at each vertex. This concept was introduced by Tutte as an extension of face colorings, and Tutte in 1954 conjectured that every bridgeless graph admits a nowhere-zero 5-flow, known as the 5-Flow Conjecture. The conjecture has been verified for several graph classes but remains open in general. In this paper, we show that every bridgeless graph of Euler genus at most 20 admits a nowhere-zero 5-flow, which improves several known results.
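The definition above is easy to check mechanically. The sketch below verifies a hand-picked nowhere-zero 2-flow on the triangle $C_3$ (orient the edges as a directed cycle and set $f \equiv 1$); the graph and values are a toy example, not from the paper:

```python
# Verify a nowhere-zero 2-flow on the triangle C3.
edges = [((0, 1), 1), ((1, 2), 1), ((2, 0), 1)]  # ((tail, head), f(e))
k = 2

net = {v: 0 for v in range(3)}                   # net flow at each vertex
for (u, v), f in edges:
    net[u] -= f                                  # outflow at the tail
    net[v] += f                                  # inflow at the head

nowhere_zero = all(f != 0 and abs(f) <= k - 1 for _, f in edges)
conserved = all(flow == 0 for flow in net.values())
print(nowhere_zero and conserved)                # True: a valid 2-flow
```

Reversing an edge's orientation while negating its value leaves both checks intact, which is why the existence of a nowhere-zero $k$-flow does not depend on the chosen orientation.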
Risk management often plays an important role in decision making under uncertainty. In quantitative risk management, assessing and optimizing risk metrics requires efficient computing techniques and reliable theoretical guarantees. In this paper, we introduce several topics in quantitative risk management and review some of the recent studies and advances on these topics. We consider several risk metrics and study decision models that involve them, with a main focus on the related computing techniques and theoretical properties. We show that stochastic optimization, as a powerful tool, can be leveraged to effectively address these problems.
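As a small, concrete instance of computing a risk metric by optimization, the sketch below evaluates the empirical conditional value-at-risk (CVaR) of a loss sample two ways: by averaging the worst tail directly, and via the Rockafellar-Uryasev variational formula, which is the form that plugs into stochastic optimization. The sample and level are illustrative:

```python
losses = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
alpha = 0.8
n = len(losses)

# Direct estimate: average of the worst (1 - alpha) fraction of losses.
tail = sorted(losses)[int(alpha * n):]
cvar_sort = sum(tail) / len(tail)

# Rockafellar-Uryasev: CVaR = min_t { t + E[(X - t)^+] / (1 - alpha) }.
# For an empirical distribution the minimum is attained at a sample point,
# so scanning the sample values suffices.
def ru(t):
    return t + sum(max(x - t, 0.0) for x in losses) / ((1 - alpha) * n)

cvar_ru = min(ru(t) for t in losses)
print(cvar_sort, round(cvar_ru, 10))   # both equal 9.5
```

The variational form is what makes CVaR tractable in decision models: the inner `max(x - t, 0)` is convex in both the threshold and the decision variables, so minimizing CVaR reduces to an ordinary convex stochastic program.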
In this paper, we introduce real pairwise completely positive (RPCP) matrix pairs, which admit a real pairwise completely positive (RPCP) decomposition; in such a pair, one matrix is necessarily positive semidefinite while the other is necessarily entrywise nonnegative. We study the properties of RPCP matrix pairs and give necessary and sufficient conditions for a matrix pair to be RPCP. First, we give an equivalent decomposition for RPCP matrix pairs, different from the RPCP decomposition, and show that the matrix pair (X, X) is RPCP if and only if X is completely positive. We also prove that the problem of checking whether a matrix pair is RPCP is equivalent to the separable completion problem. A semidefinite algorithm is proposed for detecting whether or not a matrix pair is RPCP, and the asymptotic and finite convergence of the algorithm are discussed. If the pair is RPCP, the algorithm further produces an RPCP decomposition; if it is not, it returns a certificate of this fact.
In the era of the digital economy, the rapid development of cloud computing and artificial intelligence industries has made computing power an increasingly valuable strategic resource. However, the energy consumption and carbon emissions generated by computing applications are also rising sharply. In this context, the development of green computing has become an industry consensus and a necessity of our times. Optimizing the scheduling of computing resources has emerged as a critical approach to reducing energy consumption, lowering costs, and improving efficiency. This paper focuses on four specific optimization problems related to computing resource scheduling: peak-shifting scheduling of computational tasks, load balancing of containers, autoscaling of clusters, and uniform deployment of mixed services. For each of these optimization problems, we present corresponding mathematical models and optimization algorithms. Furthermore, we introduce an intelligent computing power scheduling system designed for industrial applications, along with the challenges faced during its implementation. This scheduling system has been successfully applied to various scenarios within Ant Group, including big data computing and database management, delivering significant benefits in terms of carbon neutrality and energy savings. The results demonstrate that the system not only reduces energy consumption but also enhances operational efficiency, offering a practical solution for enterprises to achieve sustainability goals. Finally, this paper discusses the challenges of computing power scheduling in the era of large AI models, where the growing complexity and scale of computations demand even more innovative approaches.
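Of the four scheduling problems listed above, load balancing is the easiest to illustrate. The sketch below uses the classic longest-processing-time (LPT) greedy rule as an illustrative stand-in; it is not the paper's model or algorithm, and the task sizes are made up:

```python
import heapq

def lpt_makespan(tasks, m):
    """LPT greedy for balancing tasks over m machines: sort tasks in
    decreasing size and always place the next task on the currently
    least-loaded machine.  A classic 4/3-approximation for makespan."""
    heap = [(0, i) for i in range(m)]        # (current load, machine id)
    heapq.heapify(heap)
    for t in sorted(tasks, reverse=True):
        load, i = heapq.heappop(heap)        # least-loaded machine
        heapq.heappush(heap, (load + t, i))
    return max(load for load, _ in heap)     # makespan

print(lpt_makespan([7, 5, 4, 3, 2, 1], 2))   # -> 11
```

Real container-balancing systems of the kind described in the paper must additionally handle multi-dimensional resources (CPU, memory), migration costs, and online arrivals, which is what motivates the dedicated models and algorithms surveyed here.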
We investigate fundamental combinatorial optimization problems, including the knapsack problem, the subset sum problem, and the convolution problem. We aim to explore algorithms whose running time is the best possible under standard complexity assumptions. In recent years, important algorithmic progress has been achieved on classic combinatorial optimization problems via tools from additive combinatorics. In particular, for several variants of the knapsack and subset sum problems, researchers have obtained pseudopolynomial time algorithms and polynomial time approximation schemes whose running times almost match their lower bounds. This paper surveys several important results along this line of research, showing the connection between additive combinatorics and discrete optimization. In particular, we present: (i) the finite addition theorem and its application to algorithm design for the knapsack problem and the subset sum problem; (ii) the Szemerédi-Vu sumset theorem and its application to algorithm design for the subset sum problem; and (iii) the Balog-Szemerédi-Gowers theorem and its application to algorithm design for the bounded monotone (min, +)-convolution problem.
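As a baseline for the pseudopolynomial algorithms surveyed here, the textbook subset sum dynamic program can be packed into machine words, which is the starting point that the additive-combinatorics techniques above improve upon. A minimal sketch (input values are illustrative):

```python
def subset_sums_up_to(nums, t):
    """Pseudopolynomial subset sum via a bitset: after processing all
    items, bit s of `reach` is set iff some subset of `nums` sums to s.
    Python big integers give the word-packed O(n*t/w) running time."""
    reach = 1                        # only the empty sum 0 is reachable
    mask = (1 << (t + 1)) - 1        # keep sums in the range [0, t]
    for x in nums:
        reach |= (reach << x) & mask # take or skip the current item
    return reach

r = subset_sums_up_to([3, 5, 7], 12)
print((r >> 12) & 1, (r >> 11) & 1)  # 12 = 5 + 7 is reachable; 11 is not
```

The additive-combinatorics results in the survey sharpen this t-dependence by exploiting the structure of the reachable sumset rather than enumerating all of [0, t].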
Social preference theory generalizes the standard assumption of self-regarding utility maximization in economics and allows decision-makers to take into account the effect of their own behavior on others. In the basic model, the objective function is a weighted average of one's own utility and that of others. "Confucian Game Theory" adopts this framework, associating an individual's level of empathy-based benevolence with the weight assigned to others' utility in their objective function. Within a supermodular game framework, it proves that benevolence (ren motive) enhances altruistic behavior (de behavior), which in turn improves overall social welfare (yi consequence). This "Confucian Game Theory" provides a behavioral-scientific foundation for Confucian philosophy. This paper systematically reviews the related behavioral game theory literature and compares it with "Confucian Game Theory". We find that although earlier models of social preferences are formulated generally, the existing equilibrium analyses usually focus on specific contexts such as public goods games, the prisoner's dilemma, and dictator games. Consequently, while previous studies have derived many conclusions in the same vein as "ren improves de and yi", those results are often context-dependent. In this sense, Confucian Game Theory unifies conclusions that are scattered across myriad contexts.
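The basic weighted-average model can be made concrete in the prisoner's dilemma, one of the contexts mentioned above. The sketch below checks that raising the weight on the other player's payoff turns mutual cooperation into an equilibrium; the payoff matrix and the resulting threshold are illustrative, not taken from any of the surveyed models:

```python
# Standard prisoner's dilemma payoffs: payoff[(my move, their move)]
# = (my payoff, their payoff).
payoff = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def u(a, mine, theirs):
    """Other-regarding utility: (1 - a) * own payoff + a * other's payoff,
    where a is the benevolence weight."""
    p_own, p_oth = payoff[(mine, theirs)]
    return (1 - a) * p_own + a * p_oth

def cc_is_equilibrium(a):
    # (C, C) is a symmetric equilibrium iff deviating to D does not pay.
    return u(a, 'C', 'C') >= u(a, 'D', 'C')

print(cc_is_equilibrium(0.0), cc_is_equilibrium(0.5))  # False True
```

With these payoffs, cooperating against a cooperator yields utility 3 while defecting yields 5(1 - a), so cooperation becomes a best response once a >= 0.4: a one-parameter version of "ren improves de and yi".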
This paper investigates numerical methods and theoretical analysis for the minimization problem of Landau free energy functionals, which are widely applied in physics and materials science to study phase transitions and the formation of ordered structures. Landau free energy functionals typically consist of high-order differential terms describing spatial interactions and nonlinear terms representing bulk energy. This characteristic leads to two major computational challenges: the stiffness problem arising from high-order differential operators and the lack of global Lipschitz continuity of gradients in the nonlinear terms. To address these difficulties, we first discretize the functional minimization problem into a finite-dimensional optimization problem, then design an efficient algorithmic framework based on Bregman divergence, and subsequently establish convergence analysis. Furthermore, we extend the algorithm to function spaces and systematically analyze its convergence properties for the original functional minimization problem. Additionally, this paper explores the intrinsic connection between Bregman iterations and gradient flow methods, providing new perspectives for understanding the dynamical mechanisms of optimization algorithms. The effectiveness of the proposed algorithms and the validity of the theoretical analysis are verified through a series of numerical experiments.
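The connection between Bregman iterations and gradient flows mentioned above can be made concrete in the simplest quadratic case. The Bregman minimization step for an energy $E$ reads

```latex
u^{k+1} = \operatorname*{arg\,min}_{u}\; E(u) + \frac{1}{\tau}\, D_{\varphi}(u, u^{k}),
\qquad
D_{\varphi}(u, v) = \varphi(u) - \varphi(v) - \langle \nabla\varphi(v),\, u - v \rangle,
```

and for the quadratic kernel $\varphi(u) = \tfrac{1}{2}\|u\|^2$ the first-order optimality condition gives $u^{k+1} = u^{k} - \tau \nabla E(u^{k+1})$, i.e., the implicit Euler discretization of the gradient flow $\dot{u} = -\nabla E(u)$. Choosing other kernels $\varphi$ (for instance, absorbing the stiff high-order part of the Landau energy into $\varphi$) yields different, better-conditioned discrete dynamics while retaining the same energy-dissipation structure.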