MY-ASSIGNMENTEXPERT™ can provide you with assignment writing, exam help, and tutoring services for the McCormick IEMS458 Convex Optimization course!
A successful case study of convex optimization coursework from the McCormick School of Engineering and Applied Science.
IEMS458 Course Introduction
The course will take an in-depth look at the main concepts and algorithms in convex optimization. The goal is to develop expert knowledge in duality and in the design and analysis of algorithms for convex optimization. Emphasis is on proof techniques and understanding the mechanisms that drive convergence of algorithms.
Prerequisites
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.
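To make the definition concrete, here is a minimal sketch (our own illustration, not part of the course materials) that minimizes a convex function over a convex set using the cvxpy modeling package; the problem data are chosen arbitrarily:

```python
# A minimal sketch, assuming the cvxpy package is installed:
# minimize a convex quadratic over a convex set (a Euclidean disk).
import cvxpy as cp
import numpy as np

x = cp.Variable(2)
objective = cp.Minimize(cp.sum_squares(x - np.array([3.0, 4.0])))  # convex objective
constraints = [cp.norm(x, 2) <= 2]                                 # convex feasible set
prob = cp.Problem(objective, constraints)
prob.solve()                                                       # handed off to a conic solver
print(x.value)                                                     # ~[1.2, 1.6]: projection of (3, 4) onto the disk
```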
IEMS458 Convex Optimization HELP (EXAM HELP, ONLINE TUTOR)
Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be continuously differentiable, and assume that $f(x)$ is concave on a convex set $\mathcal{X}$. Given these properties of $f(x)$, prove that for all $x_1, x_2 \in \mathcal{X}$, $f(x)$ satisfies:
$$
f\left(x_2\right) \leq f\left(x_1\right)+D f\left(x_1\right)\left(x_2-x_1\right) .
$$
Hint: Back to basics – what is the basic definition of a derivative?
Solutions:
By definition, a function $f(x)$ is concave if $g(x)=-f(x)$ is convex. By the definition of a convex function, we know that $\forall \beta \in[0,1], g(x)=-f(x)$ satisfies:
$$
g\left(\beta x_2+(1-\beta) x_1\right) \leq \beta g\left(x_2\right)+(1-\beta) g\left(x_1\right) .
$$
Since $\beta x_2+(1-\beta) x_1=x_1+\beta\left(x_2-x_1\right)$, subtracting $g\left(x_1\right)$ from both sides and dividing by $\beta>0$, the above inequality can be written as:
$$
\frac{g\left(x_1+\beta\left(x_2-x_1\right)\right)-g\left(x_1\right)}{\beta} \leq g\left(x_2\right)-g\left(x_1\right) .
$$
Letting $\beta \rightarrow 0^{+}$ and applying the basic definition of the derivative, the left-hand side becomes the directional derivative of $g$ at $x_1$ along $x_2-x_1$, so we obtain:
$$
\begin{gathered}
D g\left(x_1\right)\left(x_2-x_1\right) \leq g\left(x_2\right)-g\left(x_1\right), \quad \text{or} \\
g\left(x_2\right) \geq g\left(x_1\right)+D g\left(x_1\right)\left(x_2-x_1\right) .
\end{gathered}
$$
Since $g(x)=-f(x)$, substituting back and multiplying through by $-1$ (which reverses the inequality) yields:
$$
f\left(x_2\right) \leq f\left(x_1\right)+D f\left(x_1\right)\left(x_2-x_1\right) .
$$
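As a quick numerical sanity check of the inequality just proved, the sketch below (our own illustration) evaluates both sides for the concave function $f(x)=-\|x\|^2$, whose gradient is $-2x$:

```python
# Verify f(x2) <= f(x1) + Df(x1)(x2 - x1) for the concave f(x) = -||x||^2.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return -(x @ x)

def grad_f(x):
    return -2.0 * x

for _ in range(1000):
    x1, x2 = rng.normal(size=3), rng.normal(size=3)
    lhs = f(x2)
    rhs = f(x1) + grad_f(x1) @ (x2 - x1)
    assert lhs <= rhs + 1e-12  # the first-order concavity bound holds
print("inequality verified on 1000 random pairs")
```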
Show that the set $\Omega$ given by $\Omega=\left\{y \in \mathbb{R}^2 ; \|y\|^2 \leq 4\right\}$ is convex, where $\|y\|^2=y^{\top} y$.
Hint: Show that if $z=\beta x+(1-\beta) y$, then $\|z\|^2 \leq 4$. You might find the Cauchy–Schwarz inequality, $x^{\top} y \leq\|x\|\|y\|$, useful too.
Solutions:
The set $\Omega$ is convex if, for any $x, y \in \Omega$, every point $z=\beta x+(1-\beta) y$ on the line segment joining $x$ and $y$ also lies in $\Omega$, where $0 \leq \beta \leq 1$. Hence, the problem reduces to showing that $z=\beta x+(1-\beta) y \in \Omega$.
$$
\begin{aligned}
\|z\|^2 & =z^{\top} z \\
& =(\beta x+(1-\beta) y)^{\top}(\beta x+(1-\beta) y) \\
& =\beta^2\|x\|^2+2 \beta(1-\beta) x^{\top} y+(1-\beta)^2\|y\|^2 \\
& \leq \beta^2\|x\|^2+2 \beta(1-\beta)\|x\|\|y\|+(1-\beta)^2\|y\|^2 \\
& \leq 4 \beta^2+8 \beta(1-\beta)+4(1-\beta)^2 \\
& =4 \beta^2+8 \beta-8 \beta^2+4+4 \beta^2-8 \beta \\
& =4 .
\end{aligned}
$$
Hence $\|z\|^2 \leq 4$, which proves that $z \in \Omega$ and that $\Omega$ is a convex set.
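The same fact can be checked numerically; the sketch below (our own illustration) samples points from $\Omega$ and confirms that their convex combinations stay in $\Omega$:

```python
# Check that convex combinations of points in the disk ||y||^2 <= 4 stay inside it.
import numpy as np

rng = np.random.default_rng(1)

def sample_disk():
    # Rejection-sample a point with ||y||^2 <= 4.
    while True:
        y = rng.uniform(-2.0, 2.0, size=2)
        if y @ y <= 4.0:
            return y

for _ in range(1000):
    x, y = sample_disk(), sample_disk()
    beta = rng.uniform()          # beta in [0, 1]
    z = beta * x + (1.0 - beta) * y
    assert z @ z <= 4.0 + 1e-12   # z is still in Omega
print("all 1000 convex combinations stayed inside the disk")
```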
Given a multivariable function $f(x)$, many optimization solvers use the following algorithm to solve $\min _x f(x)$ :
Choose an initial guess, $x^{(0)}$
Choose an initial real, symmetric positive definite matrix $H^{(0)}$
Compute $d^{(k)}=-H^{(k)} \nabla_x f\left(x^{(k)}\right)$
Find $\beta^{(k)}=\arg \min _\beta f\left(x^{(k)}+\beta d^{(k)}\right), \beta \geq 0$
Compute $x^{(k+1)}=x^{(k)}+\beta^{(k)} d^{(k)}$
For this problem, we assume that the given function is a typical quadratic function $\left(x \in \mathbb{R}^n\right)$, as follows:
$$
f(x)=\frac{1}{2} x^{\top} Q x-x^{\top} b+c, \quad Q=Q^{\top} \succ 0 .
$$
Answer the following questions:
Find $f\left(x^{(k)}+\beta^{(k)} d^{(k)}\right)$ for the given quadratic function.
Obtain $\nabla_x f\left(x^{(k)}\right)$ for $f(x)$.
Using the chain rule, and given that $\beta^{(k)}=\arg \min _\beta f\left(x^{(k)}+\beta d^{(k)}\right)$, find a closed-form solution for $\beta^{(k)}$ in terms of the given matrices $\left(H^{(k)}, \nabla f\left(x^{(k)}\right), d^{(k)}, Q\right)$.
Since it is required that $\beta^{(k)} \geq 0$, provide a sufficient condition related to $H^{(k)}$ that guarantees the aforementioned condition on $\beta^{(k)}$.
Solutions:
$f\left(x^{(k)}+\beta^{(k)} d^{(k)}\right)=\frac{1}{2}\left(x^{(k)}+\beta^{(k)} d^{(k)}\right)^{\top} Q\left(x^{(k)}+\beta^{(k)} d^{(k)}\right)-\left(x^{(k)}+\beta^{(k)} d^{(k)}\right)^{\top} b+c$
$\nabla_x f\left(x^{(k)}\right)=Q x^{(k)}-b$
Using the chain rule, and since we’re minimizing with respect to $\beta$, we obtain:
$$
\begin{gathered}
\frac{d}{d \beta} f\left(x^{(k)}+\beta d^{(k)}\right)=\left(x^{(k)}+\beta d^{(k)}\right)^{\top} Q d^{(k)}-\left(d^{(k)}\right)^{\top} b=0 \\
\Rightarrow\left(\left(x^{(k)}\right)^{\top} Q-b^{\top}\right) d^{(k)}=-\beta\left(d^{(k)}\right)^{\top} Q d^{(k)} .
\end{gathered}
$$
Note that $d^{(k)}=-H^{(k)} \nabla_x f\left(x^{(k)}\right)$, and since $Q=Q^{\top} \succ 0$, we obtain:
$$
\beta^{(k)}=-\frac{\nabla f\left(x^{(k)}\right)^{\top} d^{(k)}}{\left(d^{(k)}\right)^{\top} Q d^{(k)}}=\frac{\nabla f\left(x^{(k)}\right)^{\top} H^{(k)} \nabla f\left(x^{(k)}\right)}{\left(d^{(k)}\right)^{\top} Q d^{(k)}} .
$$
A sufficient condition is $H^{(k)}=\left(H^{(k)}\right)^{\top} \succ 0$: the numerator $\nabla f\left(x^{(k)}\right)^{\top} H^{(k)} \nabla f\left(x^{(k)}\right)$ is then nonnegative, and since $Q=Q^{\top} \succ 0$, the denominator $\left(d^{(k)}\right)^{\top} Q d^{(k)}$ is positive whenever $d^{(k)} \neq 0$, so $\beta^{(k)} \geq 0$.
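Putting the pieces together, here is a minimal sketch of the algorithm applied to the quadratic, using the closed-form $\beta^{(k)}$ derived above; the matrices $Q$, $b$, and the choice $H^{(k)}=I$ are our own arbitrary illustration:

```python
# Descent loop on f(x) = 0.5 x^T Q x - x^T b + c with the closed-form
# exact line search beta = (g^T H g) / (d^T Q d); Q, b, H chosen arbitrarily.
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])

def grad(x):
    return Q @ x - b                    # gradient of the quadratic

x = np.zeros(2)                         # initial guess x^(0)
H = np.eye(2)                           # H^(k) = I: steepest descent
for k in range(100):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    d = -H @ g                          # descent direction d^(k)
    beta = (g @ H @ g) / (d @ Q @ d)    # closed-form beta^(k), nonnegative since H, Q > 0
    x = x + beta * d

print(x, np.linalg.solve(Q, b))         # both print the minimizer Q^{-1} b
```

With $H^{(k)}=Q^{-1}$ the direction becomes the Newton step, $\beta^{(k)}=1$, and the loop terminates after a single iteration.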