Low-rank and sparse matrix optimization problems form a class of non-convex, non-smooth optimization problems with combinatorial structure. Due to the importance and special structure of the $\ell_0$-norm and rank functions, models and algorithms for these NP-hard matrix optimization problems have been studied extensively, and with great success, over the past ten years. This paper surveys the progress from four aspects: sparse matrix optimization problems, low-rank matrix optimization problems, low-rank-plus-sparse matrix optimization problems, and low-rank tensor optimization problems. For sparse matrix optimization, we focus mainly on sparse inverse covariance matrix estimation and column-sparse matrix optimization as examples. For low-rank matrix optimization, we mainly consider convex relaxation methods and matrix decomposition methods. Finally, we summarize a few challenging issues in low-rank and sparse matrix optimization and discuss some interesting topics.

This paper applies a data-driven dynamic transmission rate in place of the basic reproduction number $R_0$ and studies the characteristics and trends of the development of COVID-19 at both the national and provincial levels. Firstly, based on the dynamic growth rate, an ordinary differential equation model for the infectious disease is established, from which the dynamic transmission rate model is derived. Secondly, this paper selects a power function as the fitting function for the dynamic transmission rate and uses a 3-day optimal sliding window to estimate the inflection points in different regions. Finally, using the dynamic model, this paper predicts the starting point of the end phase of the epidemic at different levels in various places, and then compares and analyzes 9 epidemic-related indicators across 13 provinces and cities. The results show that the dynamic transmission rates in all regions declined steadily after a brief fluctuation, which means the epidemic had been effectively controlled; the dates of the estimated inflection points are mainly concentrated in mid-February, and the predicted end phase arrives before the end of March; at the same time, there are some differences in the characteristics and trends of the epidemic as well as in the intensity and effectiveness of the prevention and control measures.

The outbreak of novel coronavirus pneumonia (COVID-19) has had a great impact on overall economic and social development. It is an important challenge for local governments to plan the resumption of production by enterprises without relaxing epidemic prevention and control. Based on the experience of Zhejiang Province in jointly planning epidemic prevention and control and economic development, in this paper we formulate a production resumption planning problem, which selects a subset of enterprises from a large number of candidates applying for production resumption and determines their order of resumption during the epidemic, so as to satisfy the social demand for industrial capacity as much as possible without violating constraints such as the epidemic spreading risk. To solve this problem efficiently, we propose an improved tabu search algorithm, which uses a greedy strategy to construct an initial solution and continually explores better solutions via variable neighborhood search. Computational results on enterprise production resumption planning in several regions demonstrate the efficiency of our method.

In this paper, we investigate a class of online over-list scheduling problems in MapReduce systems. We are given two uniform machines, and jobs arrive one by one, each consisting of one Map task and one Reduce task. A Map task can be arbitrarily split and processed on both machines simultaneously, while a Reduce task must be processed on a single machine. When a job arrives, we must assign one or more machines to process its Map and Reduce tasks, with the aim of minimizing the makespan. We show that the classical LSc algorithm has competitive ratio $1+1/s$ when $s\geq (1+\sqrt{5})/2$ and $1+s/(s+1)$ otherwise. If, for each job, the length of the Map task is greater than or equal to the length of the Reduce task, then the competitive ratio is at most $1+1/(2s)$ when $s\geq (1+\sqrt{3})/2$ and $1+s/(2s+1)$ otherwise.
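As a quick consistency check (not stated in the abstract itself), the two expressions in each competitive-ratio bound coincide at the stated threshold speed, so each ratio is continuous in $s$:

```latex
% At s^* = (1+\sqrt{5})/2 (the golden ratio) we have s^{*2} = s^* + 1, hence
\[
  1+\frac{s^*}{s^*+1} \;=\; 1+\frac{s^*}{s^{*2}} \;=\; 1+\frac{1}{s^*}.
\]
% Likewise, at s = (1+\sqrt{3})/2 we have 2s^2 = 2s+1, hence
\[
  1+\frac{s}{2s+1} \;=\; 1+\frac{s}{2s^2} \;=\; 1+\frac{1}{2s}.
\]
```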

This paper focuses on quay crane scheduling for a single vessel, observing that crane idle time affects the efficiency of the whole container terminal. Taking a single container as the task unit and accounting for constraints such as quay crane moving time and safety margins, we establish a multi-objective programming model that minimizes the vessel completion time and the crane idle time. Based on the lower bound of the completion time, the problem splits into two cases: the bound determined by the workload of key bays, and the bound determined by the average workload. For these we propose, respectively, a heuristic algorithm based on neighborhood search and a "bay segmentation" algorithm based on a greedy strategy, and we prove that when the lower bound is determined by the average workload, our algorithm leaves no crane idle. Numerical experiments with different scales and lower bounds show that the quay crane schedules obtained by our model and algorithms are better suited to actual production operations and effectively approach the lower bound of the completion time. Compared with existing research, our algorithm is also significantly faster.

In a dress rehearsal, the duration of a program, which is affected by internal and external factors, is uncertain. A robust optimization method is adopted to schedule the programs so as to minimize the total waiting cost of the actors. A deterministic dress rehearsal model is proposed first. Then, based on this deterministic model, a two-stage robust optimization model is built that accounts for the uncertainty of the program durations and the risk preference of the decision makers. Thirdly, the robust optimization model is converted into a 0-1 mixed-integer linear program. Finally, numerical experiments are carried out in Matlab, and the results show that the actors' waiting cost increases as the decision makers' risk preference decreases.

In this paper, we study the joint supply chain decision on sale price, product quality, and service level when the retailer has risk preferences. The retailer's risk preference is characterized by the mean-CVaR criterion, which covers risk aversion, risk neutrality, risk seeking, and loss aversion. Firstly, we obtain the optimal decisions and optimal profit (expected utility) under the centralized system and under two decentralized systems in which the service is provided by the manufacturer (model one) or by the retailer (model two). Secondly, we show that the supply chain can be coordinated under a cost-sharing contract when the retailer is risk averse. Thirdly, comparing the optimal profit (expected utility) after coordination in model one and model two, we find that providing the service has no effect on the manufacturer's profit, while the retailer obtains a higher expected utility if it offers the service. Finally, numerical examples illustrate the results.

In this paper, we consider a continuous-time dynamic optimal asset allocation problem under stochastic interest rates. We suppose that the market interest rate follows a mean-reverting stochastic process, and that the financial market consists of a zero-coupon bond and two dependent risky assets whose prices are subject to a common shock. Under the mean-variance criterion, using stochastic optimal control theory and the Lagrange duality principle, we obtain analytical expressions for the efficient investment strategies and the corresponding efficient frontier. Finally, through numerical examples, we analyze the sensitivity of the efficient strategies and the efficient frontier to the relevant parameters and verify the theoretical results.

To improve the performance of the proximal support vector machine, a new sparse proximal support vector machine is proposed in this paper, where $\ell_0$-norm regularization is used to improve the feature selection ability of the model. However, problems with $\ell_0$-norm regularization are usually NP-hard. To overcome this difficulty, a continuous nonconvex function is used to approximate the $\ell_0$-norm. With a proper DC decomposition, we transform the problem into a DC programming problem, which can be solved efficiently by the DC algorithm. We also discuss the convergence properties of our algorithm. Experimental results on both simulated and real datasets demonstrate the efficiency of the proposed algorithm.
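The abstract does not specify the approximating function or the DC decomposition used in the paper. As a generic illustration of the DC algorithm (DCA) itself, the following sketch, with an entirely hypothetical toy objective, minimizes $f(x)=x^4-x^2$ by writing it as a difference of the convex functions $g(x)=x^4$ and $h(x)=x^2$ and linearizing the subtracted part at each iterate:

```python
# DC algorithm (DCA) sketch: minimize f(x) = g(x) - h(x) with
# g(x) = x**4 and h(x) = x**2, both convex, so f(x) = x**4 - x**2.
# Each step linearizes h at the current point and solves the convex
# subproblem argmin_x g(x) - h'(x_k) * x, i.e. 4 x^3 = h'(x_k).
# This is a toy example, not the model or function from the paper.

def dca(x, iters=100):
    for _ in range(iters):
        y = 2.0 * x  # gradient of h at the current iterate: h'(x) = 2x
        # closed-form solution of the convex subproblem: 4 x^3 = y
        x = (y / 4.0) ** (1.0 / 3.0) if y >= 0 else -((-y / 4.0) ** (1.0 / 3.0))
    return x

x_star = dca(1.0)
print(round(x_star, 6))  # converges to the stationary point 1/sqrt(2) ~ 0.707107
```

Each iterate decreases $f$, and the sequence converges to a stationary point of the non-convex objective, which is the typical guarantee DCA provides.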

The viscosity approximation method plays an important role in the study of fixed point problems for nonexpansive mappings. In this paper, we propose a generalized viscosity approximation method and, under certain conditions, prove the convergence of the algorithm. As applications, we apply the convergence results to constrained convex optimization problems and bilevel optimization problems.

Quasi-Newton methods are widely regarded as among the most effective methods for small-scale optimization problems. They avoid the explicit computation of the Hessian matrix required by Newton's method: at each iteration, a quasi-Newton method produces a symmetric matrix that approximates the Hessian. There are many quasi-Newton update formulas in frequent use, such as the BFGS, SR1, and DFP updates. Compared with other quasi-Newton updates, the SR1 update is simpler and requires less computation, but most of its convergence results are proved under the strict condition that the step directions are uniformly linearly independent. This paper therefore proposes a new modified SR1 update formula and analyzes the convergence of the quasi-Newton algorithm based on it under two different hypotheses: with and without uniform linear independence. By dropping uniform linear independence and instead assuming positive-definite, bounded Hessian approximations, the new modified SR1 update is proved to be $(n+1)$-step $q$-superlinearly convergent. In addition, we validate the reasonableness of the assumptions and the effectiveness of the algorithms through experiments. The numerical results support that the new update formula proposed in this paper achieves a better convergence rate under certain conditions.
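For reference, the classical (unmodified) SR1 update that the abstract builds on has the standard form below; the paper's modified formula is not reproduced here.

```latex
% Classical SR1 update: with s_k = x_{k+1} - x_k and
% y_k = \nabla f(x_{k+1}) - \nabla f(x_k),
\[
  B_{k+1} \;=\; B_k \;+\;
  \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\mathsf{T}}}
       {(y_k - B_k s_k)^{\mathsf{T}} s_k},
\]
% a symmetric rank-one correction enforcing the secant equation B_{k+1} s_k = y_k.
```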

Graph coloring has interesting real-life applications in optimization, computer science, and network design, such as file transfer in computer networks and the computation of Hessian matrices. An important variant is the linear arboricity, an improper edge coloring: the linear arboricity of an undirected graph is the smallest number of linear forests into which its edges can be partitioned. In this paper, we study the linear arboricity of planar graphs with maximum degree $\Delta\geq 7$ and prove that, for two fixed integers $i, j\in \{5, 6, 7\}$, if $G$ has no adjacent chordal $i$- and $j$-cycles, then the linear arboricity of $G$ is $\lceil\frac{\Delta}{2}\rceil$.
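For context, a standard fact (not part of the paper's contribution): a linear forest is a union of vertex-disjoint paths, so it can use at most two edges at any vertex, which immediately gives a lower bound that the result above shows is attained for these planar graphs.

```latex
% Covering the \Delta edges at a maximum-degree vertex, two per linear forest,
% requires
\[
  \mathrm{la}(G) \;\ge\; \left\lceil \frac{\Delta}{2} \right\rceil ,
\]
% and the theorem establishes equality under the stated cycle conditions.
```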

A balanced bipartition of a graph $G(V,E)$ is a bipartition of $V(G)$ into $V_{1}$ and $V_{2}$ such that $||V_{1}|-|V_{2}||\leq 1$. The minimum balanced bipartition problem asks for a balanced bipartition minimizing $e(V_{1},V_{2})$, where $e(V_{1},V_{2})$ is the number of edges joining $V_{1}$ and $V_{2}$. In this paper, for Hamiltonian plane graphs, we study the relation between upper bounds on the minimum of $e(V_{1},V_{2})$ and $d$, the difference between the maximum and minimum edge function values of a perfect-inner triangle.
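As a small illustration of the objective (a hypothetical helper, not code from the paper), the cut size $e(V_1,V_2)$ of a bipartition can be computed directly from an edge list:

```python
# Count the edges crossing a bipartition (V1, V2) of a graph given as an
# edge list; V2 is implicitly the complement of V1.
def cut_size(edges, v1):
    v1 = set(v1)
    return sum(1 for u, v in edges if (u in v1) != (v in v1))

# On the 4-cycle 0-1-2-3-0, the balanced bipartition {0,1} vs {2,3} cuts
# 2 edges, while the balanced bipartition {0,2} vs {1,3} cuts all 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(cut_size(edges, [0, 1]))  # 2
print(cut_size(edges, [0, 2]))  # 4
```

Even on this tiny example, the choice among balanced bipartitions changes the cut size, which is exactly the quantity the minimum balanced bipartition problem minimizes.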

Quarterly, Founded in 1997
Supervised by: China Association for Science and Technology
Sponsored by: Operations Research Society of China
Organized by: Shanghai University
Editor-in-Chief: HU Xudong
ISSN 1007-6093
CN 31-1732/O1