Math Assignment Help | MATLAB Assignment Help | Symmetric Matrix Eigenvalue Problems

If you run into problems like these while studying MATLAB, feel free to contact our 24/7 writing support via the link in the upper right corner. MATLAB is a proprietary multi-paradigm programming language and numeric computing environment developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages.

Although MATLAB is intended primarily for numeric computing, an optional toolbox uses the MuPAD symbolic engine, giving access to symbolic computing abilities. An additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems. As of 2020, MATLAB has more than 4 million users worldwide, coming from a wide range of backgrounds in engineering, science, and economics.

my-assignmentexpert™ MATLAB assignment help: submit your requirements for free, pay only when satisfied, and receive a full refund if your grade falls below 80%. Our professional team of master's and doctoral writers delivers every order reliably and on time, with 100% originality guaranteed. my-assignmentexpert™ provides the highest-quality MATLAB assignment help, serving North America, Europe, Australia, and other regions. On pricing, we take students' budgets into account and offer the most reasonable rates possible while maintaining quality. Because statistics assignments come in many varieties, and most carry no specific word-count requirement, the price of MATLAB assignment help is not fixed; an expert normally quotes a price after reviewing the assignment requirements. Difficulty and deadline also strongly affect the price.

Want to know the exact price for your assignment? Place a free order, and an expert in the relevant subject will quote a price within 1-3 hours of reviewing your specific requirements. The expert's quote can be several times lower than the listed prices.

my-assignmentexpert™ safeguards your study-abroad career. We have built a solid reputation in mathematics assignment help and guarantee reliable, high-quality, original MATLAB writing services. Our experts have extensive experience in mathematics, so MATLAB-related assignments of every kind are well within their reach.

Our MATLAB and related-subject writing services cover a wide range of topics, including but not limited to:

Math Assignment Help | MATLAB Assignment Help | Symmetric Matrix Eigenvalue Problems

Math Assignment Help | MATLAB Assignment Help | Introduction

The standard form of the matrix eigenvalue problem is
$$
\mathbf{A x}=\lambda \mathbf{x}
$$
where $\mathbf{A}$ is a given $n \times n$ matrix. The problem is to find the scalar $\lambda$ and the vector $\mathbf{x}$. Rewriting Eq. (9.1) in the form
$$
(\mathbf{A}-\lambda \mathbf{I}) \mathbf{x}=\mathbf{0}
$$
it becomes apparent that we are dealing with a system of $n$ homogeneous equations. An obvious solution is the trivial one $\mathbf{x}=\mathbf{0}$. A nontrivial solution can exist only if the determinant of the coefficient matrix vanishes; that is, if
$$
|\mathbf{A}-\lambda \mathbf{I}|=0
$$
Expansion of the determinant leads to the polynomial equation known as the characteristic equation
$$
a_{1} \lambda^{n}+a_{2} \lambda^{n-1}+\cdots+a_{n} \lambda+a_{n+1}=0
$$
which has the roots $\lambda_{i}, i=1,2, \ldots, n$, called the eigenvalues of the matrix $\mathbf{A}$. The solutions $\mathbf{x}_{i}$ of $\left(\mathbf{A}-\lambda_{i} \mathbf{I}\right) \mathbf{x}=\mathbf{0}$ are known as the eigenvectors.
As an example, consider the matrix
$$
\mathbf{A}=\left[\begin{array}{rrr}
1 & -1 & 0 \\
-1 & 2 & -1 \\
0 & -1 & 1
\end{array}\right]
$$
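
For this example, the hand computation can be checked numerically. The following Python sketch (an illustrative stand-in for the MATLAB workflow discussed here; the helper names `det3` and `char_poly` are our own) evaluates $|\mathbf{A}-\lambda \mathbf{I}|$ directly and confirms that $\lambda = 0, 1, 3$ are the roots of the characteristic equation:

```python
def det3(M):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def char_poly(A, lam):
    # |A - lambda*I|: vanishes exactly when lam is an eigenvalue.
    M = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    return det3(M)

A = [[1, -1, 0],
     [-1, 2, -1],
     [0, -1, 1]]

for lam in (0, 1, 3):
    print(lam, char_poly(A, lam))  # each determinant evaluates to 0
```

A value such as $\lambda = 2$, which is not an eigenvalue, gives a nonzero determinant, consistent with Eq. (9.3).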

Math Assignment Help | MATLAB Assignment Help | Jacobi Method

Consider the standard matrix eigenvalue problem
$$
\mathbf{A x}=\lambda \mathbf{x}
$$
where A is symmetric. Let us now apply the transformation
$$
\mathbf{x}=\mathbf{P} \mathbf{x}^{*}
$$
where $\mathrm{P}$ is a nonsingular matrix. Substituting Eq. (9.5) into Eq. (9.4) and premultiplying each side by $\mathbf{P}^{-1}$, we get
$$
\mathbf{P}^{-1} \mathbf{A} \mathbf{P} \mathbf{x}^{*}=\lambda \mathbf{P}^{-1} \mathbf{P} \mathbf{x}^{*}
$$
or
$$
\mathbf{A}^{*} \mathbf{x}^{*}=\lambda \mathbf{x}^{*}
$$

where $\mathbf{A}^{*}=\mathbf{P}^{-1} \mathbf{A P}$. Because $\lambda$ was untouched by the transformation, the eigenvalues of $\mathbf{A}$ are also the eigenvalues of $\mathbf{A}^{*}$. Matrices that have the same eigenvalues are deemed to be similar, and the transformation between them is called a similarity transformation.

Similarity transformations are frequently used to change an eigenvalue problem to a form that is easier to solve. Suppose that we managed by some means to find a $\mathbf{P}$ that diagonalizes $\mathbf{A}^{*}$, so that Eqs. (9.6) are
$$
\left[\begin{array}{cccc}
A_{11}^{*}-\lambda & 0 & \cdots & 0 \\
0 & A_{22}^{*}-\lambda & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & A_{nn}^{*}-\lambda
\end{array}\right]\left[\begin{array}{c}
x_{1}^{*} \\
x_{2}^{*} \\
\vdots \\
x_{n}^{*}
\end{array}\right]=\left[\begin{array}{c}
0 \\
0 \\
\vdots \\
0
\end{array}\right]
$$
The solution of these equations is
$$
\lambda_{1}=A_{11}^{*} \quad \lambda_{2}=A_{22}^{*} \quad \cdots \quad \lambda_{n}=A_{nn}^{*}
$$
$$
\mathbf{x}_{1}^{*}=\left[\begin{array}{c}
1 \\ 0 \\ \vdots \\ 0
\end{array}\right] \quad
\mathbf{x}_{2}^{*}=\left[\begin{array}{c}
0 \\ 1 \\ \vdots \\ 0
\end{array}\right] \quad \cdots \quad
\mathbf{x}_{n}^{*}=\left[\begin{array}{c}
0 \\ 0 \\ \vdots \\ 1
\end{array}\right]
$$
or
$$
\mathbf{X}^{*}=\left[\begin{array}{llll}
\mathbf{x}_{1}^{*} & \mathbf{x}_{2}^{*} & \cdots & \mathbf{x}_{n}^{*}
\end{array}\right]=\mathbf{I}
$$
According to Eq. (9.5) the eigenvector matrix of $\mathbf{A}$ is
$$
\mathbf{X}=\mathbf{P} \mathbf{X}^{*}=\mathbf{P I}=\mathbf{P}
$$
Hence the transformation matrix $\mathbf{P}$ is the eigenvector matrix of $\mathbf{A}$, and the eigenvalues of $\mathbf{A}$ are the diagonal terms of $\mathbf{A}^{*}$.
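
To make the diagonalization above concrete, here is a minimal Python sketch (a 2x2 illustration of our own devising; the matrices and helper names are not from the text) showing that when the columns of $\mathbf{P}$ are eigenvectors of a symmetric $\mathbf{A}$, the similarity transformation $\mathbf{P}^{-1} \mathbf{A P}$ produces a diagonal matrix with the eigenvalues on its diagonal:

```python
def matmul2(X, Y):
    # Product of two 2x2 matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # Inverse of a 2x2 matrix via the adjugate formula.
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d, M[0][0] / d]]

A = [[2, 1],
     [1, 2]]    # symmetric; its eigenvalues are 3 and 1
P = [[1, 1],
     [1, -1]]   # columns are the eigenvectors [1, 1] and [1, -1]

A_star = matmul2(inv2(P), matmul2(A, P))
print(A_star)  # [[3.0, 0.0], [0.0, 1.0]] -- diagonal, eigenvalues on the diagonal
```

Because $\mathbf{A}^{*}$ comes out diagonal, $\mathbf{P}$ is indeed the eigenvector matrix of $\mathbf{A}$, exactly as the derivation above concludes.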

Math Assignment Help | MATLAB Assignment Help | Inverse Power and Power Methods

The inverse power method is a simple iterative procedure for finding the smallest eigenvalue $\lambda_{1}$ and the corresponding eigenvector $\mathbf{x}_{1}$ of
$$
\mathbf{A x}=\lambda \mathbf{x}
$$
The method works like this:

  1. Let $\mathbf{v}$ be an approximation to $\mathbf{x}_{1}$ (a random vector of unit magnitude will do).
  2. Solve
    $\mathbf{A z}=\mathbf{v}$
    for the vector $\mathbf{z}$.
  3. Compute $|\mathbf{z}|$.
  4. Let $\mathbf{v}=\mathbf{z} /|\mathbf{z}|$ and repeat steps 2-4 until the change in $\mathbf{v}$ is negligible.
    At the conclusion of the procedure, $|\mathbf{z}|=\pm 1 / \lambda_{1}$ and $\mathbf{v}=\mathbf{x}_{1}$. The sign of $\lambda_{1}$ is determined as follows: if $\mathbf{z}$ changes sign between successive iterations, $\lambda_{1}$ is negative; otherwise, $\lambda_{1}$ is positive.
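
The four steps above can be sketched in a few lines. The following Python code (an illustrative translation of the procedure; the 3x3 solver, the test matrix, and the function names are our own) applies the inverse power method to a symmetric tridiagonal matrix whose smallest eigenvalue is $2-\sqrt{2}$. For simplicity the sketch assumes a positive $\lambda_{1}$ and runs a fixed number of iterations instead of testing the change in $\mathbf{v}$:

```python
import math

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system A z = b.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, 3):
            f = M[r][k] / M[k][k]
            for c in range(k, 4):
                M[r][c] -= f * M[k][c]
    z = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):  # back substitution
        z[k] = (M[k][3] - sum(M[k][c] * z[c] for c in range(k + 1, 3))) / M[k][k]
    return z

def inverse_power(A, v, iters=50):
    # Steps 2-4 of the text: solve A z = v, normalize z, repeat.
    for _ in range(iters):
        z = solve3(A, v)                            # step 2
        norm = math.sqrt(sum(zi * zi for zi in z))  # step 3: |z|
        v = [zi / norm for zi in z]                 # step 4
    return 1.0 / norm, v  # at convergence |z| = 1/lambda_1 (lambda_1 > 0 assumed)

A = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]
lam, v = inverse_power(A, [1.0, 0.0, 0.0])
print(lam)  # ~0.5858, i.e. 2 - sqrt(2), the smallest eigenvalue of A
```

Solving $\mathbf{A z}=\mathbf{v}$ rather than inverting $\mathbf{A}$ mirrors good MATLAB practice (backslash rather than `inv`), since a factorization of $\mathbf{A}$ can be reused in every iteration.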

Let us now investigate why the method works. Since the eigenvectors $\mathbf{x}_{i}$ of Eq. (9.27) are orthonormal, they can be used as the basis for any $n$-dimensional vector. Thus $\mathbf{v}$ and $\mathbf{z}$ admit the unique representations
$$
\mathbf{v}=\sum_{i=1}^{n} v_{i} \mathbf{x}_{i} \quad \mathbf{z}=\sum_{i=1}^{n} z_{i} \mathbf{x}_{i}
$$
Note that $v_{i}$ and $z_{i}$ are not the elements of $\mathbf{v}$ and $\mathbf{z}$, but the components with respect to the eigenvectors $\mathbf{x}_{i}$. Substitution into Eq. (9.28) yields
$$
\mathbf{A} \sum_{i=1}^{n} z_{i} \mathbf{x}_{i}-\sum_{i=1}^{n} v_{i} \mathbf{x}_{i}=\mathbf{0}
$$
But $\mathbf{A} \mathbf{x}_{i}=\lambda_{i} \mathbf{x}_{i}$, so that
$$
\sum_{i=1}^{n}\left(z_{i} \lambda_{i}-v_{i}\right) \mathbf{x}_{i}=\mathbf{0}
$$
Hence
$$
z_{i}=\frac{v_{i}}{\lambda_{i}}
$$
It follows from Eq. (a) that
$$
\begin{aligned}
\mathbf{z} &=\sum_{i=1}^{n} \frac{v_{i}}{\lambda_{i}} \mathbf{x}_{i}=\frac{1}{\lambda_{1}} \sum_{i=1}^{n} v_{i} \frac{\lambda_{1}}{\lambda_{i}} \mathbf{x}_{i} \\
&=\frac{1}{\lambda_{1}}\left(v_{1} \mathbf{x}_{1}+v_{2} \frac{\lambda_{1}}{\lambda_{2}} \mathbf{x}_{2}+v_{3} \frac{\lambda_{1}}{\lambda_{3}} \mathbf{x}_{3}+\cdots\right)
\end{aligned}
$$
Since $\left|\lambda_{1} / \lambda_{i}\right|<1 \ (i \neq 1)$, we observe that the coefficient of $\mathbf{x}_{1}$ has become more prominent in $\mathbf{z}$ than it was in $\mathbf{v}$; hence $\mathbf{z}$ is a better approximation to $\mathbf{x}_{1}$. This completes the first iterative cycle.
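
The companion power method, named in this section's heading, is the mirror image of the procedure above: multiplying by $\mathbf{A}$ instead of solving $\mathbf{A z}=\mathbf{v}$ amplifies the component of the eigenvector whose eigenvalue is largest in magnitude, so the iteration converges to that eigenvalue instead of the smallest one. A minimal Python sketch (our own illustrative code, reusing the tridiagonal test matrix whose largest eigenvalue is $2+\sqrt{2}$):

```python
import math

def matvec(A, v):
    # Matrix-vector product A v.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_method(A, v, iters=100):
    # Repeatedly multiply by A and normalize; |z| -> the eigenvalue of
    # largest magnitude (assumed positive in this sketch).
    for _ in range(iters):
        z = matvec(A, v)
        norm = math.sqrt(sum(zi * zi for zi in z))
        v = [zi / norm for zi in z]
    return norm, v

A = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]
lam, v = power_method(A, [1.0, 0.0, 0.0])
print(lam)  # ~3.4142, i.e. 2 + sqrt(2), the eigenvalue of largest magnitude
```

The convergence rate of both methods is governed by the ratio of the dominant eigenvalue to its nearest competitor, which is exactly the $\left|\lambda_{1}/\lambda_{i}\right|$ factor in the expansion above.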



Math Assignment Help | MATLAB Assignment Help: look for UprivateTA™. UprivateTA™ safeguards your study-abroad career.

Abstract Algebra / Galois Theory Assignment Help

Partial Differential Equations Assignment Help Success Stories

Algebraic Number Theory Exam Help

Probability Theory Exam Help

Discrete Mathematics Assignment Help

Set Theory and Mathematical Logic Assignment Examples

Time Series Analysis Assignment Help

Discrete Mathematics Online Course Help
