Problem 1 (problem1). Let \(\mathcal{A}\) be a linear transformation on a finite-dimensional vector space \(V\). Show that \(\operatorname{dim}\left(\operatorname{Im} \mathcal{A}^{2}\right)=\operatorname{dim}(\operatorname{Im} \mathcal{A})\) if and only if
$$
V=\operatorname{Im} \mathcal{A} \oplus \operatorname{Ker} \mathcal{A}
$$
Specifically, if \(\mathcal{A}^{2}=\mathcal{A},\) then \(V=\operatorname{Im} \mathcal{A} \oplus \operatorname{Ker} \mathcal{A} ;\) is the converse true?
Problem 2 (problem2). Consider \(\mathbb{P}_{n}[x]\) over \(\mathbb{R}\). Define
$$
\mathcal{A}(p(x))=x p^{\prime}(x)-p(x), \quad p(x) \in \mathbb{P}_{n}[x]
$$
(a) Show that \(\mathcal{A}\) is a linear transformation on \(\mathbb{P}_{n}[x]\).
(b) Find \(\operatorname{Ker} \mathcal{A}\) and \(\operatorname{Im} \mathcal{A}\).
(c) Show that \(\mathbb{P}_{n}[x]=\operatorname{Ker} \mathcal{A} \oplus \operatorname{Im} \mathcal{A}\).
Problem 3 (problem3). Let \(A \in M_{n}(\mathbb{C})\). Define a linear transformation \(\mathcal{L}\) on \(M_{n}(\mathbb{C})\) by
$$
\mathcal{L}(X)=A X, \quad X \in M_{n}(\mathbb{C})
$$
Show that \(\mathcal{L}\) and \(A\) have the same set of eigenvalues. How are the characteristic polynomials of \(\mathcal{L}\) and \(A\) related?
Problem 4 (problem4). For the vector space \(\mathbb{P}[x]\) over \(\mathbb{R}\), define
$$
\mathcal{A} f(x)=f^{\prime}(x), \quad f(x) \in \mathbb{P}[x]
$$
and
$$
\mathcal{B} f(x)=x f(x), \quad f(x) \in \mathbb{P}[x]
$$
Show that
(a) \(\mathcal{A}\) and \(\mathcal{B}\) are linear transformations.
(b) \(\operatorname{Im} \mathcal{A}=\mathbb{P}[x]\) and \(\operatorname{Ker} \mathcal{A} \neq\{0\}\).
(c) \(\operatorname{Ker} \mathcal{B}=\{0\}\) and \(\mathcal{B}\) does not have an inverse.
(d) \(\mathcal{A B}-\mathcal{B A}=\mathcal{I}\).
(e) \(\mathcal{A}^{k} \mathcal{B}-\mathcal{B} \mathcal{A}^{k}=k \mathcal{A}^{k-1}\) for every positive integer \(k\).
Problem 5 (problem5). Let \(V\) be a finite-dimensional vector space and \(\mathcal{A}\) a linear transformation on \(V\). Show that there exists a positive integer \(k\) such that
$$
V=\operatorname{Im} \mathcal{A}^{k} \oplus \operatorname{Ker} \mathcal{A}^{k}
$$
Proof (problem1). Suppose first that \(V=\operatorname{Im} \mathcal{A} \oplus \operatorname{Ker} \mathcal{A}\); we show that \(\operatorname{Im} \mathcal{A}^{2}=\operatorname{Im} \mathcal{A}\). Obviously, \(\operatorname{Im} \mathcal{A}^{2} \subseteq \operatorname{Im} \mathcal{A}\). Let \(u \in \operatorname{Im} \mathcal{A}\). Then \(u=\mathcal{A} v\) for some \(v \in V\). Write \(v=w_{1}+w_{2}\), where \(w_{1} \in \operatorname{Im} \mathcal{A}\) and \(w_{2} \in \operatorname{Ker} \mathcal{A}\), and let \(w_{1}=\mathcal{A} z_{1}\). Then
$$
u=\mathcal{A} v=\mathcal{A}\left(w_{1}\right)+\mathcal{A}\left(w_{2}\right)=\mathcal{A}\left(w_{1}\right)=\mathcal{A}^{2}\left(z_{1}\right) \in \operatorname{Im} \mathcal{A}^{2}.
$$
Therefore \(\operatorname{Im} \mathcal{A}^{2}=\operatorname{Im} \mathcal{A}\) and \(r\left(\mathcal{A}^{2}\right)=\operatorname{dim}\left(\operatorname{Im} \mathcal{A}^{2}\right)=\operatorname{dim}(\operatorname{Im} \mathcal{A})=r(\mathcal{A})\).
Conversely, suppose \(\operatorname{dim}\left(\operatorname{Im} \mathcal{A}^{2}\right)=\operatorname{dim}(\operatorname{Im} \mathcal{A})\). Applying the rank-nullity theorem to the restriction of \(\mathcal{A}\) to \(\operatorname{Im} \mathcal{A}\), whose image is \(\operatorname{Im} \mathcal{A}^{2}\) and whose kernel is \(\operatorname{Im} \mathcal{A} \cap \operatorname{Ker} \mathcal{A}\), gives \(\operatorname{dim}(\operatorname{Im} \mathcal{A})=\operatorname{dim}\left(\operatorname{Im} \mathcal{A}^{2}\right)+\operatorname{dim}(\operatorname{Im} \mathcal{A} \cap \operatorname{Ker} \mathcal{A})\), so \(\operatorname{Im} \mathcal{A} \cap \operatorname{Ker} \mathcal{A}=\{0\}\). Since also \(\operatorname{dim}(\operatorname{Im} \mathcal{A})+\operatorname{dim}(\operatorname{Ker} \mathcal{A})=\operatorname{dim} V\), we conclude \(V=\operatorname{Im} \mathcal{A} \oplus \operatorname{Ker} \mathcal{A}\).
In particular, if \(\mathcal{A}^{2}=\mathcal{A}\), then \(\operatorname{Im} \mathcal{A}^{2}=\operatorname{Im} \mathcal{A}\) and the decomposition holds. The converse is false: \(\mathcal{A}=2 \mathcal{I}\) on any nonzero space satisfies \(V=\operatorname{Im} \mathcal{A} \oplus \operatorname{Ker} \mathcal{A}\) but \(\mathcal{A}^{2} \neq \mathcal{A}\).
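For a concrete illustration of the equivalence, take \(V=\mathbb{R}^{2}\) and the projection
$$
\mathcal{A}=\left(\begin{array}{ll}
1 & 0 \\
0 & 0
\end{array}\right)
$$
Then \(\operatorname{Im} \mathcal{A}=\operatorname{span}\left\{e_{1}\right\}\), \(\operatorname{Ker} \mathcal{A}=\operatorname{span}\left\{e_{2}\right\}\), and \(\mathcal{A}^{2}=\mathcal{A}\); indeed \(\operatorname{dim}\left(\operatorname{Im} \mathcal{A}^{2}\right)=\operatorname{dim}(\operatorname{Im} \mathcal{A})=1\) and \(\mathbb{R}^{2}=\operatorname{Im} \mathcal{A} \oplus \operatorname{Ker} \mathcal{A}\).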
Proof (problem2). (a) Let \(p, q \in \mathbb{P}_{n}[x]\) and \(k \in \mathbb{R}\). Then
$$
\begin{aligned}
\mathcal{A}((p+k q)(x)) &=\mathcal{A}(p(x)+k q(x)) \\
&=x(p(x)+k q(x))^{\prime}-(p(x)+k q(x)) \\
&=x p^{\prime}(x)+k x q^{\prime}(x)-p(x)-k q(x) \\
&=\mathcal{A}(p(x))+k \mathcal{A}(q(x)).
\end{aligned}
$$
So \(\mathcal{A}\) is a linear transformation on \(\mathbb{P}_{n}[x]\).
(b) \(\operatorname{Ker} \mathcal{A}=\{k x \mid k \in \mathbb{R}\}\) and \(\operatorname{Im} \mathcal{A}=\left\{a_{0}+a_{2} x^{2}+\cdots+a_{n-1} x^{n-1} \mid a_{0}, a_{2}, \ldots, a_{n-1} \in \mathbb{R}\right\}\); see the computation on the monomial basis below.
(c) By (b), \(\operatorname{Ker} \mathcal{A} \cap \operatorname{Im} \mathcal{A}=\{0\}\) and \(\operatorname{dim}(\operatorname{Ker} \mathcal{A})+\operatorname{dim}(\operatorname{Im} \mathcal{A})=\operatorname{dim} \mathbb{P}_{n}[x]\), so \(\mathbb{P}_{n}[x]=\operatorname{Ker} \mathcal{A} \oplus \operatorname{Im} \mathcal{A}\).
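The description in (b) can be read off from the action of \(\mathcal{A}\) on the monomial basis; matching the answer above, \(\mathbb{P}_{n}[x]\) is taken here with basis \(1, x, \ldots, x^{n-1}\). For \(j=0,1, \ldots, n-1\),
$$
\mathcal{A}\left(x^{j}\right)=x \cdot j x^{j-1}-x^{j}=(j-1) x^{j}
$$
so \(\mathcal{A}\left(\sum_{j} a_{j} x^{j}\right)=\sum_{j}(j-1) a_{j} x^{j}\) never contains an \(x\)-term. Hence \(\operatorname{Ker} \mathcal{A}\) is spanned by \(x\) (the case \(j=1\)), and \(\operatorname{Im} \mathcal{A}\) is exactly the span of the \(x^{j}\) with \(j \neq 1\), since each such \(x^{j}=\mathcal{A}\left(x^{j} /(j-1)\right)\).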
Proof (problem3). If \(\lambda_{0}\) is an eigenvalue of \(\mathcal{L}\), i.e.,
$$
\mathcal{L}\left(X_{0}\right)=\lambda_{0} X_{0}, \quad \text { for some } \quad X_{0} \neq 0
$$
then
$$
A X_{0}=\lambda_{0} X_{0}
$$
If \(x_{0}\) is a nonzero column of \(X_{0}\) (such a column exists since \(X_{0} \neq 0\)), then \(A x_{0}=\lambda_{0} x_{0}\), so \(\lambda_{0}\) is an eigenvalue of \(A\). Conversely, if \(x_{0}\) is an eigenvector of \(A\) belonging to \(\lambda_{0}\), let \(X_{0}\) be the nonzero matrix whose columns are all equal to \(x_{0}\). Then
$$
\mathcal{L}\left(X_{0}\right)=A X_{0}=\lambda_{0} X_{0}
$$
and \(\lambda_{0}\) is an eigenvalue of \(\mathcal{L}\).
The characteristic polynomial of \(\mathcal{L}\) is the \(n\)-th power of the characteristic polynomial of \(A\). To see this, consider the following basis of \(M_{n}(\mathbb{C})\):
$$
\left\{E_{11}, E_{21}, \ldots, E_{n 1}, E_{12}, E_{22}, \ldots, E_{n 2}, \ldots, E_{1 n}, E_{2 n}, \ldots, E_{n n}\right\}
$$
where \(E_{i j}\) is the \(n \times n\) matrix whose \((i, j)\)-entry is \(1\) and whose other entries are \(0\). It is easy to compute that
$$
\mathcal{L}\left(E_{i j}\right)=A E_{i j}=a_{1 i} E_{1 j}+a_{2 i} E_{2 j}+\cdots+a_{n i} E_{n j}
$$
Thus the matrix representation of \(\mathcal{L}\) with respect to this basis is
$$
\left(\begin{array}{cccc}
A & & & 0 \\
& A & & \\
& & \ddots & \\
0 & & & A
\end{array}\right)
$$
The desired result follows immediately.
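The block structure can also be checked numerically. The following is a minimal sketch in NumPy, for a random \(3 \times 3\) matrix \(A\), comparing the matrix of \(\mathcal{L}\) with \(I_{n} \otimes A\) and comparing the two characteristic polynomials.

```python
# Numerical sanity check (illustration only, not part of the proof): build the
# matrix of L(X) = A X in the basis E_11, E_21, ..., E_n1, E_12, ..., E_nn and
# verify that it is block diagonal with n copies of A, so char(L) = char(A)^n.
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))

# Basis matrices E_ij, ordered column by column as in the text.
basis = []
for j in range(n):
    for i in range(n):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        basis.append(E)

# Column k of M holds the coordinates of L(E) = A @ E in the same basis;
# in this ordering the coordinate vector is the column-major vectorization.
M = np.column_stack([(A @ E).flatten(order="F") for E in basis])

print(np.allclose(M, np.kron(np.eye(n), A)))  # block diagonal diag(A, ..., A)

# Characteristic polynomial of M equals the n-th power of that of A.
pA, pM = np.poly(A), np.poly(M)
pA_pow_n = pA
for _ in range(n - 1):
    pA_pow_n = np.polymul(pA_pow_n, pA)
print(np.allclose(pM, pA_pow_n))
```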
Proof (problem4). (a) and (b) can be verified directly; for (b), note that every polynomial is the derivative of one of its antiderivatives in \(\mathbb{P}[x]\), so \(\operatorname{Im} \mathcal{A}=\mathbb{P}[x]\), while the nonzero constants lie in \(\operatorname{Ker} \mathcal{A}\).
(c) \(\mathcal{B}\) is one-to-one, so \(\operatorname{Ker} \mathcal{B}=\{0\}\), but it is not onto (the constant polynomial \(1\) is not of the form \(x f(x)\)), so \(\mathcal{B}\) does not have an inverse.
(d) For every \(f(x)\), \((\mathcal{A B}-\mathcal{B A})(f)=(x f(x))^{\prime}-x f^{\prime}(x)=f(x)+x f^{\prime}(x)-x f^{\prime}(x)=f(x)\).
(e) By induction on \(k\); the induction step is written out below.
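With (d) as the base case \(k=1\), the induction step uses (d) together with the inductive hypothesis \(\mathcal{A}^{k} \mathcal{B}-\mathcal{B} \mathcal{A}^{k}=k \mathcal{A}^{k-1}\):
$$
\begin{aligned}
\mathcal{A}^{k+1} \mathcal{B}-\mathcal{B} \mathcal{A}^{k+1} &=\mathcal{A}\left(\mathcal{A}^{k} \mathcal{B}-\mathcal{B} \mathcal{A}^{k}\right)+(\mathcal{A} \mathcal{B}-\mathcal{B} \mathcal{A}) \mathcal{A}^{k} \\
&=\mathcal{A}\left(k \mathcal{A}^{k-1}\right)+\mathcal{I} \mathcal{A}^{k}=(k+1) \mathcal{A}^{k}
\end{aligned}
$$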
Proof (problem5). Obviously, \(\operatorname{Ker} \mathcal{A} \subseteq \operatorname{Ker} \mathcal{A}^{2} \subseteq \cdots \subseteq \operatorname{Ker} \mathcal{A}^{k} \subseteq \operatorname{Ker} \mathcal{A}^{k+1} \subseteq \cdots\).
This inclusion chain must stabilize because all the dimensions are bounded by \(\operatorname{dim} V\). Thus \(\operatorname{Ker} \mathcal{A}^{m}=\operatorname{Ker} \mathcal{A}^{m+k}\) for some \(m\) and for all positive integers \(k\). We claim that \(V=\operatorname{Ker} \mathcal{A}^{m} \oplus \operatorname{Im} \mathcal{A}^{m}\). Since \(\operatorname{dim}\left(\operatorname{Ker} \mathcal{A}^{m}\right)+\operatorname{dim}\left(\operatorname{Im} \mathcal{A}^{m}\right)=\operatorname{dim} V\), all we need to show is
\(\operatorname{Ker} \mathcal{A}^{m} \cap \operatorname{Im} \mathcal{A}^{m}=\{0\}\). Let \(u \in \operatorname{Ker} \mathcal{A}^{m} \cap \operatorname{Im} \mathcal{A}^{m}\). Then \(\mathcal{A}^{m}(u)=0\)
and \(u=\mathcal{A}^{m}(v)\) for some \(v\). So \(\mathcal{A}^{2 m}(v)=\mathcal{A}^{m}(u)=0\), which implies \(v \in \operatorname{Ker} \mathcal{A}^{2 m}=\operatorname{Ker} \mathcal{A}^{m}\). Thus \(\mathcal{A}^{m}(v)=0\); that is, \(u=\mathcal{A}^{m}(v)=0\).
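A small example showing that \(k=1\) need not work: on \(V=\mathbb{R}^{2}\) take
$$
\mathcal{A}=\left(\begin{array}{ll}
0 & 1 \\
0 & 0
\end{array}\right)
$$
Then \(\operatorname{Im} \mathcal{A}=\operatorname{Ker} \mathcal{A}=\operatorname{span}\left\{e_{1}\right\}\), so \(V \neq \operatorname{Im} \mathcal{A} \oplus \operatorname{Ker} \mathcal{A}\); but \(\mathcal{A}^{2}=0\), so \(\operatorname{Im} \mathcal{A}^{2}=\{0\}\), \(\operatorname{Ker} \mathcal{A}^{2}=V\), and \(V=\operatorname{Im} \mathcal{A}^{2} \oplus \operatorname{Ker} \mathcal{A}^{2}\); here \(k=2\) works.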