
# Math Assignment Help | Efficiency Analysis of the Simplex Method | Operations Research Help

my-assignmentexpert is glad to provide strong support and help students complete their studies successfully. If you run into any problem with your coursework, please contact my-assignmentexpert™ — we are at your service at any time!

## Operations Research Assignment Help

Extensive experience with the simplex procedure applied to problems from various fields, and having various values of $n$ and $m$, has indicated that the method can be expected to converge to an optimum solution in about $m$, or perhaps $3m/2$, iterations. Thus, particularly if $m$ is much smaller than $n$, that is, if the matrix $\mathbf{A}$ has far fewer rows than columns, only a small fraction of the columns would enter the basis during the course of optimization. However, in a rare worst case (see Chap. 5) the simplex method does need $2^{m}$ iterations to reach the optimum.
To explain this phenomenon, this section provides an efficiency analysis based on a characteristic property of the basic feasible solutions of the constraints. We establish a worst-case iteration upper bound for the simplex method in terms of this property.
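The exponential worst case mentioned above is usually illustrated by the Klee-Minty cube. The sketch below generates the data of one common formulation of that LP (the exact form used in Chap. 5 may differ, so treat the constants here as illustrative):

```python
import numpy as np

def klee_minty(m):
    """Data for one common form of the Klee-Minty cube LP:
        maximize   sum_j 10^(m-1-j) * x_j
        subject to 2 * sum_{j<i} 10^(i-j) * x_j + x_i <= 100^i,  x >= 0,
    indices i, j = 0..m-1.  On this LP the textbook simplex rule
    (most-negative reduced cost, starting from the origin) visits
    all 2^m vertices of the feasible region."""
    A = np.zeros((m, m))
    b = np.array([100.0 ** i for i in range(m)])
    c = np.array([10.0 ** (m - 1 - j) for j in range(m)])
    for i in range(m):
        for j in range(i):
            A[i, j] = 2.0 * 10.0 ** (i - j)
        A[i, i] = 1.0
    return A, b, c

A, b, c = klee_minty(3)
print(A)  # lower-triangular constraint matrix with unit diagonal
print(b)  # rapidly growing right-hand sides 1, 100, 10000
```

The geometric picture is a deformed hypercube whose $2^{m}$ vertices can all be visited by a poorly chosen pivot rule, which is what forces the $2^{m}$ iteration count.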
We define a characteristic property of a basic feasible solution.

Definition (Basic Value Distribution) For a basic feasible solution $\mathbf{x}_{\mathbf{B}}$ of an LP problem, the sum of its basic variable values is bounded above by $\Delta$ (i.e., $\mathbf{1}^{T} \mathbf{x}_{\mathbf{B}} \leq \Delta$) and its smallest entry is bounded below by $\delta$ (i.e., $\min \left(\mathbf{x}_{\mathbf{B}}\right) \geq \delta$) for some positive constants $\Delta$ and $\delta$.

This property implies that the basic feasible solution is nondegenerate. Clearly, $\Delta / \delta \geq m$, and when $\Delta / \delta$ is smaller, the basic variable values are more evenly distributed. For the remainder of this section, we assume that every basic feasible solution of the linear program in standard form has this $(\Delta, \delta)$ property. We leave the following example as an exercise.

Example Consider the dual of Example 5, the Markov Decision Process in Sect. 3.1. Then every basic feasible solution has the basic value distribution $(\Delta, \delta)$ property with
$$\Delta=\frac{m}{1-\gamma} \quad \text { and } \quad \delta=1.$$

In addition, we abuse notation and also use $\mathbf{B}$ to denote the index set of the basic variables and $\mathbf{D}$ to denote the index set of the nonbasic variables. Similarly, $\mathbf{B}^{*}$ and $\mathbf{D}^{*}$ denote the index sets of the optimal basic and nonbasic variables, respectively. We first introduce a lemma indicating that the objective gap is reduced at a geometric rate depending on the ratio $\frac{\delta}{\Delta}$.
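Before turning to the lemma, the $(\Delta, \delta)$ property itself is easy to check numerically for a given basic feasible solution. A minimal sketch (the function name and the sample values are illustrative, not from the text):

```python
import numpy as np

def has_basic_value_distribution(x_B, Delta, delta):
    """Check the (Delta, delta) basic value distribution property:
    the basic variable values sum to at most Delta and each entry is
    at least delta (which in particular implies nondegeneracy)."""
    x_B = np.asarray(x_B, dtype=float)
    return bool(x_B.sum() <= Delta and x_B.min() >= delta)

# Hypothetical basic feasible solution with m = 3 basic variables.
x_B = [1.0, 2.5, 1.5]
print(has_basic_value_distribution(x_B, Delta=5.0, delta=1.0))  # True
print(has_basic_value_distribution(x_B, Delta=5.0, delta=2.0))  # False: min entry is 1.0 < 2.0
```

Note that the successful check above is consistent with $\Delta/\delta = 5 \geq m = 3$.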
Lemma 1 For a feasible linear program in the standard form, let every basic feasible solution (extreme point) generated by the simplex method have the basic value distribution $(\Delta, \delta)$ property. Then starting from any basic feasible solution $\mathbf{x}^{k}$ with basis $\mathbf{B}^{k}$, the next basic feasible solution, denoted by $\mathbf{x}^{k+1}$ with basis $\mathbf{B}^{k+1}$, has an objective value reduction
$$\frac{\mathbf{c}^{T} \mathbf{x}^{k+1}-z^{*}}{\mathbf{c}^{T} \mathbf{x}^{k}-z^{*}} \leq 1-\frac{\delta}{\Delta}$$
where $z^{*}$ represents the minimal objective value of the linear program.

Proof Let $\mathbf{r}^{k}$ and $\mathbf{r}^{*}$ be the reduced cost vectors corresponding to the current basic feasible solution $\mathbf{x}^{k}$ and the optimal solution $\mathbf{x}^{*}$, respectively. Note that both $\left(\mathbf{r}^{k}\right)^{T} \mathbf{x}^{k}=0$ and $\left(\mathbf{r}^{*}\right)^{T} \mathbf{x}^{*}=0$ by complementary slackness. Recall that the incoming variable $x_{e}$ is selected such that
$$r_{e}^{k}=\min_{j \in \mathbf{D}^{k}}\left\{r_{j}^{k}\right\}<0,$$

where $\left(\mathbf{r}^{k}\right)^{T}=\mathbf{c}^{T}-\left(\mathbf{y}^{k}\right)^{T} \mathbf{A}$ and $\left(\mathbf{y}^{k}\right)^{T}=\mathbf{c}_{\mathbf{B}^{k}}^{T}\left(\mathbf{B}^{k}\right)^{-1}$ is the dual solution vector at the current step. Thus,

$$\begin{aligned} \mathbf{c}^{T} \mathbf{x}^{k}-z^{*} &=\mathbf{c}^{T} \mathbf{x}^{k}-\mathbf{c}^{T} \mathbf{x}^{*} \\ &=\left(\mathbf{r}^{k}\right)^{T} \mathbf{x}^{k}-\left(\mathbf{r}^{k}\right)^{T} \mathbf{x}^{*} \\ &=-\left(\mathbf{r}^{k}\right)^{T} \mathbf{x}^{*} \leq-r_{e}^{k} \cdot \mathbf{1}^{T} \mathbf{x}^{*} \leq\left|r_{e}^{k}\right| \cdot \Delta . \end{aligned}$$
On the other hand, we have
$$\begin{aligned} \mathbf{c}^{T} \mathbf{x}^{k+1}-\mathbf{c}^{T} \mathbf{x}^{k} &=\left(\mathbf{r}^{k}\right)^{T} \mathbf{x}^{k+1}-\left(\mathbf{r}^{k}\right)^{T} \mathbf{x}^{k} \\ &=\left(\mathbf{r}^{k}\right)^{T} \mathbf{x}^{k+1}=\sum_{j=1}^{n} r_{j}^{k} \cdot x_{j}^{k+1}=r_{e}^{k} \cdot x_{e}^{k+1} \leq r_{e}^{k} \cdot \delta \end{aligned}$$
where we have used the fact that $\left(\mathbf{r}^{k}\right)^{T} \mathbf{x}^{k}=0$ and that only one term in the summation is nonzero; the final inequality holds because $r_{e}^{k}<0$ and $x_{e}^{k+1} \geq \delta$ by the $(\Delta, \delta)$ property. Thus
$$\left(\mathbf{c}^{T} \mathbf{x}^{k+1}-z^{*}\right)-\left(\mathbf{c}^{T} \mathbf{x}^{k}-z^{*}\right)=\mathbf{c}^{T} \mathbf{x}^{k+1}-\mathbf{c}^{T} \mathbf{x}^{k} \leq r_{e}^{k} \cdot \delta=-\left|r_{e}^{k}\right| \cdot \delta$$
or
$$\frac{\mathbf{c}^{T} \mathbf{x}^{k+1}-z^{*}}{\mathbf{c}^{T} \mathbf{x}^{k}-z^{*}} \leq 1-\frac{\left|r_{e}^{k}\right| \cdot \delta}{\mathbf{c}^{T} \mathbf{x}^{k}-z^{*}} \leq 1-\frac{\delta}{\Delta}.$$
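Applying the lemma repeatedly gives $\mathbf{c}^{T} \mathbf{x}^{k}-z^{*} \leq\left(1-\frac{\delta}{\Delta}\right)^{k}\left(\mathbf{c}^{T} \mathbf{x}^{0}-z^{*}\right)$, so, using $\ln (1-t) \leq-t$, at most about $\frac{\Delta}{\delta} \ln \frac{\mathrm{gap}_{0}}{\varepsilon}$ iterations drive the objective gap below any $\varepsilon>0$. The sketch below computes this bound and the worst-case gap trajectory, plugging in the MDP-dual example's $\Delta=m/(1-\gamma)$ and $\delta=1$ (the sample sizes are illustrative):

```python
import math

def iteration_bound(gap0, eps, Delta, delta):
    """Upper bound on simplex iterations needed to drive the objective
    gap from gap0 below eps, given the per-iteration contraction factor
    (1 - delta/Delta) from the lemma.  Uses ln(1 - t) <= -t."""
    return math.ceil((Delta / delta) * math.log(gap0 / eps))

def simulated_gaps(gap0, Delta, delta, k):
    """Worst-case gap trajectory: gap_i = gap0 * (1 - delta/Delta)^i."""
    rho = 1.0 - delta / Delta
    return [gap0 * rho ** i for i in range(k + 1)]

# MDP-dual example: Delta = m / (1 - gamma), delta = 1.
m, gamma = 10, 0.9
Delta = m / (1 - gamma)                     # = 100, so Delta/delta = 100
k = iteration_bound(gap0=1e3, eps=1e-6, Delta=Delta, delta=1.0)
gaps = simulated_gaps(1e3, Delta, 1.0, k)
print(k, gaps[-1] <= 1e-6)                  # → 2073 True
```

Since $\Delta/\delta$ controls the contraction rate, a more evenly distributed basic value profile (smaller $\Delta/\delta$) directly translates into a smaller iteration bound.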

## What Is Operations Research Assignment Help

• Identify the problem that needs to be solved.
• Build a model around the problem that mirrors the real world and its variables.
• Use the model to derive solutions to the problem.
• Test each solution on the model and analyze its success.
• Implement the solution to the actual problem.

## Three Features of Operations Research Assignment Help

• Optimization — the goal of operations research is to achieve the best performance of a given machine or model under the given conditions. Optimization also involves comparing different options and narrowing down the range of potentially best choices.
• Simulation — this involves building models so that solutions can be tried and tested on simple models before being applied to concrete, complex, large-scale problems.
• Probability and statistics — this includes using mathematical algorithms and data mining to uncover useful information and potential risks, make valid predictions, and test possible solutions.