
Game Theory: Solving Constant-Sum Games with Linear Programming


Given that the method of equalizing expectation becomes more and more tedious as the size of the game grows, we seek an alternate method for solving constant-sum games. In this section, we show how to do so using linear programming. Since solving a game is a type of optimization problem, it is not a stretch to believe that linear programming can be applied. To motivate the process, we begin with a $2 \times 2$ example.
Suppose we have a constant-sum game with payoff matrix
$$A=\left[\begin{array}{ll} a & b \\ c & d \end{array}\right]$$
We are going to make the additional assumption that all the entries in $A$ are strictly positive. This is really no loss of generality, because if we start with an arbitrary payoff matrix, we can always translate the game by adding a positive constant $M$ to all the payoffs. As we know, the translated game has the same optimal strategies as the original, and its value is the value of the original plus $M$.
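For example (a hypothetical matrix, chosen only for illustration), a game with negative payoffs can be translated by $M=3$ to obtain strictly positive entries:

$$A=\left[\begin{array}{rr} -2 & 1 \\ 0 & -1 \end{array}\right], \qquad A+3=\left[\begin{array}{ll} 1 & 4 \\ 3 & 2 \end{array}\right],$$

and if $v$ is the value of the original game, the value of the translated game is $v+3$.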
Let’s consider the row player. We have been writing the unknown optimal strategy $\widehat{p}=(x, 1-x)$, but in this section, we prefer to write it as $\widehat{p}=\left(p_{1}, p_{2}\right)$, where
$$p_{1}+p_{2}=1$$
The key to converting this to a linear programming problem is to look at what Theorem $6.21$ says about $\widehat{p}$ – namely, that it provides a guaranteed minimum payoff no matter what the column player does. In symbols,
$$E(\widehat{p}, q) \geq v_{\text {row }}$$
We apply this in particular to the two pure strategies $q=U_{1}=\left[\begin{array}{l}1 \\ 0\end{array}\right]$ and $q=U_{2}=\left[\begin{array}{l}0 \\ 1\end{array}\right]$. Of course, we can handle both at once by making the computation $\widehat{p} A I=\widehat{p} A$, since the matrix with columns $U_{1}$ and $U_{2}$ is the identity matrix. Thus:
$$\left[\begin{array}{ll}p_{1} & p_{2}\end{array}\right]\left[\begin{array}{ll}a & b \\ c & d\end{array}\right]=\left[\begin{array}{ll}a p_{1}+c p_{2} & b p_{1}+d p_{2}\end{array}\right]$$
and each of these entries must be $\geq v_{\text{row}}$. Thus, we obtain a pair of inequalities:
$$\begin{aligned} a p_{1}+c p_{2} & \geq v_{\text{row}} \\ b p_{1}+d p_{2} & \geq v_{\text{row}}. \end{aligned}$$
Furthermore, since all the entries of $A$ are strictly positive, we have $v_{\text{row}}>0$, so we may set $w=\frac{1}{v_{\text{row}}}$ and take $t_{i}=p_{i} w=\frac{p_{i}}{v_{\text{row}}}$ as our variables instead of the $p_{i}$:
$$\begin{aligned} &t_{1}=p_{1} w \\ &t_{2}=p_{2} w \end{aligned}$$
Observe that
$$t_{1}+t_{2}=p_{1} w+p_{2} w=\left(p_{1}+p_{2}\right) w=w$$
since the probabilities sum to 1. Also, $w=\frac{1}{v_{\mathrm{row}}}$ is minimized exactly when $v_{\mathrm{row}}$ is maximized, and the previous equation expresses $w$ in terms of our unknowns $t_{i}$. Finally, note that, again since $w>0$, we have $t_{i} \geq 0$ because $p_{i} \geq 0$. So our transformation into a linear programming problem is complete. The row player must solve:
$$\begin{aligned} \text{minimize } & w=t_{1}+t_{2} \\ \text{subject to } & a t_{1}+c t_{2} \geq 1 \\ & b t_{1}+d t_{2} \geq 1 \\ & t_{1}, t_{2} \geq 0. \end{aligned}$$
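The inequalities above can be solved numerically with any linear programming solver. Here is a minimal sketch using `scipy.optimize.linprog`, with a hypothetical payoff matrix chosen only for illustration (its entries are strictly positive, as the derivation requires):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2x2 constant-sum game with strictly positive payoffs.
A = np.array([[3.0, 1.0],
              [2.0, 4.0]])

# Row player's LP: minimize w = t1 + t2
# subject to A^T t >= 1 componentwise, t >= 0.
c = np.ones(2)          # objective coefficients for t1 + t2
A_ub = -A.T             # linprog expects <=, so negate the >= constraints
b_ub = -np.ones(2)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)

v_row = 1.0 / res.fun   # value of the game: v_row = 1 / w
p_hat = res.x * v_row   # recover the optimal mixed strategy p_i = t_i * v_row

print(v_row)            # 2.5
print(p_hat)            # [0.5 0.5]
```

For this matrix, equalizing expectations gives the same answer ($\widehat{p}=(0.5, 0.5)$ with value $2.5$), which serves as a useful sanity check on the LP formulation.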

