
The Fundamental Theorem of Linear Programming



In this section, through the fundamental theorem of linear programming, we
establish the primary importance of basic feasible solutions in solving linear
programs. The method of proof of the theorem is in many respects as important as
the result itself, since it represents the beginning of the development of the simplex
method. The theorem (due to Carathéodory) itself shows that it is necessary only
to consider basic feasible solutions when seeking an optimal solution to a linear
program because the optimal value is always achieved at such a solution.
Corresponding to a linear program in standard form
minimize $\mathbf{c}^{T} \mathbf{x}$
subject to $\mathbf{A x}=\mathbf{b}, \mathbf{x} \geqslant \mathbf{0}$
(2.13)

A feasible solution to the constraints that achieves the minimum value of the objective function subject to those constraints is said to be an optimal feasible solution. If this solution is basic, it is an optimal basic feasible solution.
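As a concrete illustration of these definitions, the following NumPy snippet sets up a small standard-form LP and checks a basic feasible solution; the specific numbers are our own, chosen for illustration, and do not come from the text.

```python
import numpy as np

# A small linear program in standard form (2.13); these particular
# numbers are illustrative only.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])  # m = 2, n = 4, rank 2
b = np.array([4.0, 5.0])
c = np.array([-3.0, -2.0, 0.0, 0.0])

# A basic feasible solution: take the basis {a_3, a_4} (the identity
# columns), set the non-basic variables x_1, x_2 to zero, and read off
# the basic variables from A x = b.
x = np.array([0.0, 0.0, 4.0, 5.0])

assert np.allclose(A @ x, b) and np.all(x >= 0)  # Ax = b and x >= 0
print(c @ x)  # objective value at this basic feasible solution: 0.0
```

Here only the $m=2$ basic variables are positive; the theorem below says that when searching for an optimum it suffices to compare solutions of exactly this form.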
Fundamental Theorem of Linear Programming Given a linear program in standard
form (2.13) where A is an $m \times n$ matrix of rank $m$,
i) if there is a feasible solution, there is a basic feasible solution;
ii) if there is an optimal feasible solution, there is an optimal basic feasible solution.
Proof of (i). Suppose $\mathbf{x}=\left(x_{1}, x_{2}, \ldots, x_{n}\right)$ is a feasible solution. Then, in terms of the columns of $\mathbf{A}$, this solution satisfies
$$x_{1} \mathbf{a}_{1}+x_{2} \mathbf{a}_{2}+\cdots+x_{n} \mathbf{a}_{n}=\mathbf{b}. \quad (2.14)$$
Assume that exactly $p$ of the variables $x_{i}$ are greater than zero and, for convenience, that they are the first $p$ variables. Thus
$$x_{1} \mathbf{a}_{1}+x_{2} \mathbf{a}_{2}+\cdots+x_{p} \mathbf{a}_{p}=\mathbf{b}.$$
There are now two cases, corresponding to whether the set $\mathbf{a}_{1}, \mathbf{a}_{2}, \ldots, \mathbf{a}_{p}$ is linearly independent or linearly dependent.
CASE 1: Assume $\mathbf{a}_{1}, \mathbf{a}_{2}, \ldots, \mathbf{a}_{p}$ are linearly independent. Then clearly, $p \leqslant m$. If $p=m$, the solution is basic and the proof is complete. If $p<m$, then, since $\mathbf{A}$ has rank $m$, $m-p$ vectors can be found from the remaining $n-p$ columns so that the resulting set of $m$ vectors is linearly independent; assigning the value zero to the corresponding $m-p$ variables yields a (degenerate) basic feasible solution.

CASE 2: Assume $\mathbf{a}_{1}, \mathbf{a}_{2}, \ldots, \mathbf{a}_{p}$ are linearly dependent. Then there is a nontrivial linear combination of these vectors that is zero. Thus there are constants $y_{1}, y_{2}, \ldots, y_{p}$, at least one of which can be assumed to be positive, such that
$$y_{1} \mathbf{a}_{1}+y_{2} \mathbf{a}_{2}+\cdots+y_{p} \mathbf{a}_{p}=\mathbf{0}.$$
Multiplying this equation by a scalar $\varepsilon$ and subtracting it from (2.14), we obtain
$$\left(x_{1}-\varepsilon y_{1}\right) \mathbf{a}_{1}+\left(x_{2}-\varepsilon y_{2}\right) \mathbf{a}_{2}+\cdots+\left(x_{p}-\varepsilon y_{p}\right) \mathbf{a}_{p}=\mathbf{b}.$$
This equation holds for every $\varepsilon$, and for each $\varepsilon$ the components $x_{j}-\varepsilon y_{j}$ correspond to a solution of the linear equalities, although they may violate $x_{j}-\varepsilon y_{j} \geqslant 0$.
Denoting $\mathbf{y}=\left(y_{1}, y_{2}, \ldots, y_{p}, 0,0, \ldots, 0\right)$, we see that for any $\varepsilon$
$\mathbf{x}-\varepsilon \mathbf{y}$
(2.17)
is a solution to the equalities. For $\varepsilon=0$, this reduces to the original feasible solution.
As $\varepsilon$ is increased from zero, the various components increase, decrease, or remain constant, depending on whether the corresponding $y_{i}$ is negative, positive, or zero.
Since we assume at least one $y_{i}$ is positive, at least one component will decrease as $\varepsilon$
is increased. We increase $\varepsilon$ to the first point where one or more components become
zero. Specifically, we set
$$\varepsilon=\min \left\{x_{i} / y_{i}: y_{i}>0\right\}.$$
For this value of $\varepsilon$ the solution given by $(2.17)$ is feasible and has at most $p-1$
positive variables. Repeating this process if necessary, we can eliminate positive
variables until we have a feasible solution with corresponding columns that are
linearly independent. At that point Case 1 applies.
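The proof of (i) is constructive, and its recipe can be sketched numerically. In the NumPy sketch below, the function name `reduce_to_basic`, the SVD-based choice of the dependency vector $\mathbf{y}$, and the numerical tolerance are our own assumptions, not from the text; it simply repeats the Case 2 step until Case 1 applies.

```python
import numpy as np

def reduce_to_basic(A, b, x, tol=1e-9):
    """Follow the constructive proof: while the columns carrying the
    positive variables are linearly dependent, find a dependency y among
    them, step by eps = min{x_i / y_i : y_i > 0}, and thereby drive at
    least one positive variable to zero. Illustrative sketch only."""
    x = np.asarray(x, dtype=float).copy()
    while True:
        support = np.flatnonzero(x > tol)          # indices with x_i > 0
        Ap = A[:, support]
        if np.linalg.matrix_rank(Ap) == len(support):
            return x                               # Case 1: independent columns
        # Case 2: a nontrivial y with Ap @ y = 0 (smallest right singular vector).
        y = np.linalg.svd(Ap)[2][-1]
        if y.max() <= tol:                         # ensure at least one y_i > 0
            y = -y
        pos = y > tol
        eps = (x[support][pos] / y[pos]).min()     # first component to reach zero
        x[support] -= eps * y
        x[np.abs(x) <= tol] = 0.0                  # clamp numerical zeros

# Example: three identical columns, so the support is dependent.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([2.0])
x0 = np.array([1.0, 0.5, 0.5])                     # feasible but not basic
xb = reduce_to_basic(A, b, x0)
assert np.allclose(A @ xb, b) and np.all(xb >= 0)
assert np.count_nonzero(xb) <= 1                   # at most rank(A) positives
```

The SVD is just one convenient way to obtain a null-space vector of the support columns; any method of producing the constants $y_{1}, \ldots, y_{p}$ in the proof would serve.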
Proof of (ii) Let $\mathbf{x}=\left(x_{1}, x_{2}, \ldots, x_{n}\right)$ be an optimal feasible solution and, as in the
proof of (i) above, suppose there are exactly $p$ positive variables $x_{1}, x_{2}, \ldots, x_{p}$.
Again there are two cases, and Case 1, corresponding to linear independence, is exactly the same as before.
Case 2 also goes exactly the same as before, but it must be shown that for any $\varepsilon$ the solution (2.17) is optimal. To show this, note that the value of the solution $\mathbf{x}-\varepsilon \mathbf{y}$ is
$$\mathbf{c}^{T} \mathbf{x}-\varepsilon \mathbf{c}^{T} \mathbf{y}.$$
For $\varepsilon$ of sufficiently small magnitude, $\mathbf{x}-\varepsilon \mathbf{y}$ is feasible for positive or negative values of $\varepsilon$. Thus we conclude that $\mathbf{c}^{T} \mathbf{y}=0$, for if $\mathbf{c}^{T} \mathbf{y} \neq 0$, an $\varepsilon$ of small magnitude and proper sign would yield a feasible solution with a smaller objective value, contradicting the optimality of $\mathbf{x}$. It follows that the solution (2.17) is optimal for every $\varepsilon$, and hence the reduction of Case 2 preserves optimality.
