This paper considers a two-echelon supply chain system consisting of one supplier and one retailer under carbon tax regulation. When the retailer's time-varying demand rate depends on the stock level and the selling price, the decentralized and centralized models are constructed and compared. The results show that cooperation between the supplier and the retailer leads to higher profit and more carbon emissions. A wholesale price contract and a two-part tariff contract are then used, respectively, to coordinate the decentralized supply chain, and several conditions under which the supply chain can be coordinated by the two-part tariff contract are obtained. Finally, numerical examples are provided to verify the theoretical results, and the impacts of the carbon tax price on supply chain coordination under the two-part tariff contract are analyzed.
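As an illustration of the coordination mechanism (the notation below is introduced here and is not taken from the paper), a two-part tariff contract charges the retailer a unit wholesale price plus a lump-sum fee, so the transfer payment for an order quantity $q$ is
\[ T(q) = wq + F, \]
where $w$ is the unit wholesale price and $F$ is the fixed fee; coordination amounts to choosing $(w, F)$ so that the retailer's optimal decisions coincide with those of the centralized system while both parties receive at least their decentralized profits.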
The main difficulties in optimizing a steady-state gas network are the complex structure and large scale of the network and the high nonlinearity of the objective and the constraints. In this paper, we formulate steady-state gas network optimization as a mixed integer nonlinear programming model. Then, based on network reduction and linearization techniques, we develop a new algorithm for the problem. Numerical results on an instance of the western natural gas network of China show that the proposed algorithm is promising.
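As one illustration of the nonlinearities involved (a standard steady-state pipe-flow relation, not necessarily the exact constraint used in the paper), the pressure drop along a pipe from node $i$ to node $j$ is often modeled by a Weymouth-type equation
\[ p_i^2 - p_j^2 = K_{ij}\, q_{ij}\, |q_{ij}|, \]
where $p_i$ and $p_j$ are nodal pressures, $q_{ij}$ is the flow rate, and $K_{ij}$ is a pipe-dependent resistance coefficient; piecewise linearization of such relations is a common ingredient of MINLP-based approaches.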
High-dimensional constrained matrix regression refers to non-convex constrained statistical regression with multivariate responses and multivariate predictors in the high-dimensional setting. Its mathematical model is a matrix optimization problem, which is generally NP-hard and has a wide range of applications in areas such as machine learning and artificial intelligence, medical imaging and diagnosis, gene expression analysis, neural networks, and risk management. This paper briefly reviews recent results on the optimization theory and algorithms for high-dimensional constrained matrix regression, and lists the corresponding important references.
In this paper, we consider two-machine flow-shop scheduling with deterioration and rejection, in which each job's processing time is a simple linear increasing function of its starting time. Each job is either accepted and processed on the two machines in the flow shop, or rejected, in which case a rejection penalty must be paid. The objective is to minimize the makespan of the accepted jobs plus the total penalty of the rejected jobs. We show that the problem is NP-hard and present a dynamic programming algorithm. Furthermore, we give an optimal schedule for a special case.
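For concreteness (notation introduced here for illustration), the simple linear deterioration model referred to above is usually written as
\[ p_j = \alpha_j t, \]
where $t$ is the starting time of job $J_j$ and $\alpha_j > 0$ is its deterioration rate; writing $A$ for the set of accepted jobs and $e_j$ for the rejection penalty of job $J_j$, the objective is to minimize $C_{\max}(A) + \sum_{j \notin A} e_j$.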
Graph theory and optimization theory are clearly very useful in the study of protein structures. Firstly, this paper reviews graph-theoretic models used in studies of protein structures. Secondly, we build a graph-theoretic model in which each side chain of a protein is taken as a node, and concepts such as clique, k-clique, community, hub, and cluster are used to define the edges. Thirdly, we solve the resulting graph-theoretic model using optimization theory and modern data mining algorithms and methods. Successful and new numerical results from data mining the large-scale buffalo prion protein database are presented.
A successive linearization method with a flexible penalty is presented for solving nonlinear semidefinite programming problems with nonlinear inequality constraints. The new method does not require the penalty function to be reduced and does not use a filter technique, so storing a filter set is avoided. The update of the penalty parameter is flexible and depends only on information at the current iterate, and the penalty parameter sequence corresponding to successful iterates need not increase monotonically. To decide whether a trial step is accepted, the new method requires either the measure of constraint violation to be improved or the objective function value to be improved within a measure of feasibility control. Under the usual assumptions, we prove that the algorithm is well defined and globally convergent. Finally, preliminary numerical results are reported.
In this paper, we study a special class of nonsmooth optimization problems that arises widely in compressed sensing and image processing. A smoothing gradient method is proposed and its global convergence is established. Finally, numerical results indicate the efficiency of the proposed method.
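As a sketch of the smoothing idea (an illustrative choice of smoothing function, not necessarily the one used in the paper), the nonsmooth $\ell_1$ term in a typical compressed sensing model $\min_x f(x) + \lambda \|x\|_1$ can be approximated by
\[ \|x\|_1 \approx \sum_i \sqrt{x_i^2 + \mu^2}, \qquad \mu > 0, \]
which is smooth in $x$, so an ordinary gradient method can be applied to the smoothed problem while the smoothing parameter $\mu$ is driven to zero.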
In this paper, we focus on optimality conditions for a class of stochastic optimization problems with probabilistic complementarity constraints. By using a nonlinear complementarity (NCP) function, we transform the probabilistic complementarity constraint into a chance constraint. Using the theory of chance constraints, we obtain an optimization problem with inequality constraints, and then optimality conditions for weak stationary points and for optimal solutions are given.
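To illustrate the reformulation (notation introduced here for illustration), an NCP function $\phi$ satisfies $\phi(a, b) = 0$ if and only if $a \ge 0$, $b \ge 0$ and $ab = 0$; a standard choice is the Fischer-Burmeister function
\[ \phi_{FB}(a, b) = \sqrt{a^2 + b^2} - a - b. \]
With such a function, a probabilistic complementarity constraint of the form $\Pr\{0 \le G(x,\xi) \perp H(x,\xi) \ge 0\} \ge 1 - \alpha$ can be rewritten as the chance constraint $\Pr\{\phi(G(x,\xi), H(x,\xi)) = 0\} \ge 1 - \alpha$.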
Complexity concepts originating from the theoretical Turing machine are widely accepted in the study of combinatorial optimization problems. The concepts of polynomial computability and NP-hardness are also frequently used in recent papers on continuous optimization problems. This paper presents a very brief introduction to their relationship and to the differences in how they are used in the two fields.
In this paper, we propose a nonlinear dynamical system for batch fermentation with continuous piecewise linear functions as parameters, and investigate the existence of solutions of this system. Based on a smooth curve fitted to the experimental data, a new identification model is established using continuous piecewise linear functions as the optimization parameters. According to the relationship between the state variables and the identification function, an efficient algorithm is developed to solve the identification problem, and the convergence of the optimization algorithm is also analysed. Finally, numerical results are discussed to illustrate the validity of the proposed model.
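As a reminder of the parameterization (stated in generic form with notation introduced here), a continuous piecewise linear parameter on a partition $t_0 < t_1 < \cdots < t_N$ of the fermentation horizon is determined by its nodal values $k_0, \ldots, k_N$ through
\[ k(t) = k_i + \frac{k_{i+1} - k_i}{t_{i+1} - t_i}\,(t - t_i), \qquad t \in [t_i, t_{i+1}], \]
so identifying the parameter reduces to optimizing finitely many nodal values subject to the dynamical system.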
This paper discusses methods for solving the l_1-norm problem of linear equations. First, the problem is translated into a split feasibility problem and a convex feasibility problem, respectively. Then, some relaxed projection algorithms are presented. Finally, the new algorithms are applied to some signal processing problems.
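As a sketch of the projection machinery (the classical CQ-type iteration, given for illustration rather than as the paper's exact algorithm), the split feasibility problem seeks $x \in C$ with $Ax \in Q$ and can be attacked by
\[ x^{k+1} = P_C\big(x^k - \gamma A^{T}(I - P_Q) A x^k\big), \qquad 0 < \gamma < 2/\|A\|^2, \]
where $P_C$ and $P_Q$ are metric projections; relaxed versions replace $C$ and $Q$ by half-spaces containing them so that the projections can be computed explicitly.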
A single valued neutrosophic set (SVNS) describes not only incomplete information but also the indeterminate and inconsistent information that commonly exist in belief systems. Existing decision making methods for SVNSs assume that the attributes are independent and cannot handle correlation among attributes. Based on the Choquet integral and the cosine similarity degree of single valued neutrosophic numbers, we propose an operator to aggregate single valued neutrosophic numbers (SVNNs) that can deal with single valued neutrosophic information with correlated attributes. Using the proposed single valued neutrosophic Choquet integral operator, an approach is given for multi-attribute group decision making problems with SVNNs. An example is given to illustrate the validity and applicability of the proposed method.
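For reference (the classical discrete Choquet integral underlying the proposed aggregation operator, stated with generic notation), given a fuzzy measure $\mu$ on the attribute set $\{x_1, \ldots, x_n\}$ and attribute values $f(x_i)$, reorder the attributes so that $f(x_{\sigma(1)}) \le \cdots \le f(x_{\sigma(n)})$; then
\[ C_\mu(f) = \sum_{i=1}^{n} \big[f(x_{\sigma(i)}) - f(x_{\sigma(i-1)})\big]\, \mu(A_{\sigma(i)}), \]
where $A_{\sigma(i)} = \{x_{\sigma(i)}, \ldots, x_{\sigma(n)}\}$ and $f(x_{\sigma(0)}) = 0$; it is the non-additive measure $\mu$ that allows correlations among attributes to be captured.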
We investigate a two-stage supply chain scheduling problem in which each job's holding time is bounded by a constant. A job completed on the machine must be dispatched in batches to the customer by a number of vehicles, and a job incurs a holding cost if its completion time is earlier than its dispatch date. Each job also incurs an earliness (tardiness) penalty if it is early (tardy) with respect to the common due window under a given schedule. The objective is to find the optimal size and location of the window, the optimal dispatch date for each job, and an optimal job sequence so as to minimize a cost function based on earliness, tardiness, holding time, window location, window size, and batch delivery. We prove that the problem is NP-hard. Then, we consider the case where the unit holding cost is not more than the unit tardiness cost and give a pseudo-polynomial time algorithm for this case.
The k-means problem is one of the most classical problems in both computer science and combinatorial optimization, and k-means clustering is popular as one of the most significant and simplest methods of cluster analysis in data mining. The k-means problem can be described as follows: given a set of n observations, each a d-dimensional real vector, partition the n observations into k sets so as to minimize the within-cluster sum of squares, where the center of a cluster is the mean of the observations in it. The k-means problem is NP-hard in theory; however, there are efficient heuristic algorithms that are widely employed in market segmentation, machine vision, geostatistics, astronomy, agriculture, and so on. As the problems encountered in practice become more complicated and the data grow larger, further research is needed. This article aims to list the classical algorithms for solving k-means as well as its variants and generalizations, and to summarize some open problems in k-means for readers.
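As a concrete reference point, the objective described above is $\min \sum_{i=1}^{k} \sum_{x \in S_i} \|x - \mu_i\|^2$ with $\mu_i$ the mean of cluster $S_i$, and the best-known classical heuristic is Lloyd's algorithm, which alternates an assignment step and a mean-update step. A minimal sketch (using NumPy; an illustrative implementation, not code from the article) is:

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    """Classical Lloyd's heuristic for the k-means problem.

    X: (n, d) array of observations; k: number of clusters.
    Returns the cluster centers and the label of each observation.
    """
    rng = np.random.default_rng(seed)
    # Initialize the centers with k distinct observations chosen at random.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each observation goes to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its cluster
        # (an empty cluster keeps its previous center).
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```

Each iteration can only decrease the within-cluster sum of squares, so the method converges to a local optimum; the NP-hardness mentioned above concerns finding the global optimum.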