
Math Assignment Help | MATH609 Numerical Analysis

MY-ASSIGNMENTEXPERT™ can provide assignment, exam, and tutoring services for the math.tamu.edu MATH609 Numerical Analysis course!


MATH609 Course Introduction

Next week (the week starting Oct. 5) is the middle of the semester and, as promised in the syllabus, we shall have a midterm. This exam will be a take-home but with a fairly restrictive timeline: the exam will be sent to you on Wednesday, Oct. 7 at 9:00 AM and must be completed and EMAILED back to me ([email protected]) by 3:00 PM that day. The six-hour window should suffice, as I expect that most of you will be able to finish in under two hours. In light of this, there will be no new homework assigned the week before, and only one lecture during the week of the exam.

Prerequisites 

I am teaching two versions of MATH 609 this semester. Section 609-700 is for students in the math department's distance master's program, while this course, 609-600, is for students in residence at TAMU, College Station. The class homepage for this course is located at: http://www.math.tamu.edu/~pasciak/classes/609-Local

MATH609 Numerical Analysis HELP (EXAM HELP, ONLINE TUTOR)

Problem 1.

Let $A \in \mathbf{R}^{n \times n}$ be symmetric positive definite with the extreme eigenvalues $\lambda:=\lambda_{\min }(A)$, $\Lambda:=\lambda_{\max }(A)$, and let $\kappa:=\Lambda / \lambda$ be the spectral condition number of $A$. Further, let $b \in \mathbf{R}^n$ and $x^0 \in \mathbf{R}^n$.
(i) Consider the semi-iterative Richardson iteration
$$
y^{m+1}=y^m+\Theta_{m+1}\left(b-A y^m\right), m \geq 0 .
$$
Specify the values of $y^0 \in \mathbf{R}^n$ and $\Theta_{m+1}$ for which the sequence of iterates $\left(y^m\right)_{\mathbf{N}}$ corresponds to the sequence $\left(x^m\right)_{\mathbf{N}}$ obtained by the gradient method.

(i) For the sequence of iterates $\left(y^m\right)_{\mathbf{N}}$ obtained by the semi-iterative Richardson iteration to correspond to the sequence $\left(x^m\right)_{\mathbf{N}}$ obtained by the gradient method, we need to choose $y^0=x^0$ and $\Theta_{m+1}$ such that
$$
y^{m+1}=y^m+\Theta_{m+1}\left(b-A y^m\right)=x^{m+1},
$$
where $x^{m+1}=x^m-\alpha_m \nabla F\left(x^m\right)$ is the update step of the gradient method. Since $\nabla F(x)=A x-b$, this update can be rewritten as
$$
x^{m+1}=x^m+\alpha_m\left(b-A x^m\right) .
$$
Setting $y^0=x^0$, we have
$$
y^1=y^0+\Theta_1\left(b-A y^0\right)=x^1=x^0+\alpha_0\left(b-A x^0\right),
$$
which implies
$$
\Theta_1=\alpha_0 .
$$
Similarly, for $m \geq 1$, assuming inductively that $y^m=x^m$, we have
$$
\begin{aligned}
y^{m+1} &=y^m+\Theta_{m+1}\left(b-A y^m\right) \\
&=x^m+\Theta_{m+1}\left(b-A x^m\right) \\
&=x^{m+1},
\end{aligned}
$$
provided that
$$
\Theta_{m+1}=\alpha_m .
$$
Therefore, choosing $y^0=x^0$ and $\Theta_{m+1}=\alpha_m$, where $\alpha_m=\frac{\left\langle r^m, r^m\right\rangle}{\left\langle A r^m, r^m\right\rangle}$ with $r^m:=b-A x^m$ is the exact line-search step size of the gradient method, the semi-iterative Richardson iteration corresponds to the gradient method.
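As a quick sanity check of this correspondence, here is a minimal NumPy sketch (an illustration, not part of the exam solution; the matrix `A` and vector `b` are made-up test data) that runs the gradient method and the semi-iterative Richardson iteration with $\Theta_{m+1}=\alpha_m$ side by side; the two sequences of iterates should agree to machine precision.

```python
import numpy as np

# Made-up SPD test problem (illustration only, not from the course).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)      # symmetric positive definite by construction
b = rng.standard_normal(5)

x = np.zeros(5)                    # gradient-method iterate x^m
y = np.zeros(5)                    # Richardson iterate y^m, with y^0 = x^0

for m in range(10):
    # Gradient method: x^{m+1} = x^m - alpha_m * grad F(x^m), where
    # grad F(x) = A x - b and alpha_m = <r^m, r^m> / <A r^m, r^m>, r^m = b - A x^m.
    r = b - A @ x
    alpha_m = (r @ r) / (r @ (A @ r))
    x = x + alpha_m * r

    # Semi-iterative Richardson iteration with Theta_{m+1} = alpha_m.
    y = y + alpha_m * (b - A @ y)

    print(m, np.linalg.norm(x - y))  # ~1e-16: the two iterations coincide
```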

Problem 2.

(ii) Show that for any initial vector $x^0 \in \mathbf{R}^n$ the sequence $\left(x^m\right)_{\mathbf{N}}$, obtained by the gradient method, converges to $x^*:=A^{-1} b$. Moreover, verify the estimates
$$
\begin{aligned}
F\left(x^m\right)-F\left(x^*\right) & \leq\left(\frac{\kappa-1}{\kappa+1}\right)^{2 m}\left[F\left(x^0\right)-F\left(x^*\right)\right], \\
\left\|x^m-x^*\right\|_A & \leq\left(\frac{\kappa-1}{\kappa+1}\right)^m\left\|x^0-x^*\right\|_A,
\end{aligned}
$$
where $F(x):=\frac{1}{2}\langle A x, x\rangle-\langle b, x\rangle$.

[Hint: Utilize Theorem 1.9 as presented in class and take advantage of the fact that $x^m$ minimizes the error $\left\|x^m-x^*\right\|_A$.]

(ii) By Theorem 1.9, we know that the gradient method with constant step size $\alpha=\frac{2}{\lambda+\Lambda}$ converges to the unique minimizer $x^*$ of $F(x)$. Moreover, we have the following estimate for the error:
$$
F\left(x^m\right)-F\left(x^*\right) \leq\left(\frac{\kappa-1}{\kappa+1}\right)^{2 m}\left(F\left(x^0\right)-F\left(x^*\right)\right) .
$$
By part (i), the semi-iterative Richardson iterates $\left(y^m\right)_{\mathbf{N}}$ with the choices $y^0=x^0$ and $\Theta_{m+1}=\alpha_m$ coincide with $\left(x^m\right)_{\mathbf{N}}$, so they converge to $x^*$ as well.

Now we verify the corresponding estimate in the $A$-norm. Let $e^m:=x^m-x^*$. Since $\nabla F\left(x^m\right)=A x^m-b=A\left(x^m-x^*\right)=A e^m$, one step of the iteration gives
$$
e^{m+1}=x^{m+1}-x^*=x^m-\alpha \nabla F\left(x^m\right)-x^*=(I-\alpha A) e^m .
$$
Because $A$ is symmetric positive definite, $I-\alpha A$ is symmetric and commutes with $A$, so in the $A$-norm
$$
\left\|e^{m+1}\right\|_A \leq \max _{\mu \in[\lambda, \Lambda]}|1-\alpha \mu|\left\|e^m\right\|_A,
$$
and with $\alpha=\frac{2}{\lambda+\Lambda}$ the contraction factor is
$$
\max _{\mu \in[\lambda, \Lambda]}|1-\alpha \mu|=\frac{\Lambda-\lambda}{\Lambda+\lambda}=\frac{\kappa-1}{\kappa+1} .
$$
Iterating this bound yields
$$
\left\|x^m-x^*\right\|_A \leq\left(\frac{\kappa-1}{\kappa+1}\right)^m\left\|x^0-x^*\right\|_A,
$$
which is the desired estimate.
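The contraction estimate can be checked numerically as well. The following minimal sketch (again an illustration with made-up data, not part of the solution) runs the constant-step gradient method with $\alpha=2 /(\lambda+\Lambda)$ and prints the measured $A$-norm error next to the theoretical bound $\left(\frac{\kappa-1}{\kappa+1}\right)^m\left\|x^0-x^*\right\|_A$; the measured error should stay at or below the bound.

```python
import numpy as np

# Made-up SPD test problem (illustration only).
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = M @ M.T + np.eye(6)                 # symmetric positive definite
b = rng.standard_normal(6)
x_star = np.linalg.solve(A, b)          # exact solution x* = A^{-1} b

eigs = np.linalg.eigvalsh(A)            # eigenvalues in ascending order
lam, Lam = eigs[0], eigs[-1]            # lambda_min, lambda_max
kappa = Lam / lam                       # spectral condition number
alpha = 2.0 / (lam + Lam)               # optimal constant step size
rho = (kappa - 1.0) / (kappa + 1.0)     # predicted contraction factor

def a_norm(v):
    """A-norm: ||v||_A = sqrt(<A v, v>)."""
    return np.sqrt(v @ (A @ v))

x = np.zeros(6)
e0 = a_norm(x - x_star)                 # ||x^0 - x*||_A
for m in range(1, 21):
    x = x + alpha * (b - A @ x)         # x^{m+1} = x^m - alpha (A x^m - b)
    print(m, a_norm(x - x_star), rho**m * e0)  # measured error vs. bound
```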

