SPECTRAL METHODS FOR TWO-POINT BOUNDARY VALUE PROBLEMS
We begin by summarizing a number of results about Chebyshev polynomials from Chapter 4:
Definition:
$$
T_{n}(x)=\cos (n \arccos x)
$$
Orthogonality relation:
$$
\int_{-1}^{1} \frac{T_{i}(x) T_{j}(x)}{\sqrt{1-x^{2}}} d x=0, \quad i \neq j
$$
Explicit form of the first five polynomials:
$$
\begin{aligned}
&T_{0}(x)=1 \\
&T_{1}(x)=x \\
&T_{2}(x)=2 x^{2}-1 \\
&T_{3}(x)=4 x^{3}-3 x \\
&T_{4}(x)=8 x^{4}-8 x^{2}+1 .
\end{aligned}
$$
Three-term recurrence relation:
$$
T_{n+1}(x)=2 x T_{n}(x)-T_{n-1}(x) .
$$
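The recurrence is also the standard way to evaluate $T_n(x)$ in code. As a quick check, here is a short sketch (the function name `chebyshev_T` is ours, not from the text) that compares the recurrence against the defining formula $T_n(x)=\cos(n \arccos x)$:

```python
import numpy as np

def chebyshev_T(n, x):
    """Evaluate T_n(x) via the recurrence T_{n+1} = 2x T_n - T_{n-1}."""
    x = np.asarray(x, dtype=float)
    if n == 0:
        return np.ones_like(x)
    T_prev, T_curr = np.ones_like(x), x
    for _ in range(n - 1):
        T_prev, T_curr = T_curr, 2 * x * T_curr - T_prev
    return T_curr

# Agreement with the defining formula T_n(x) = cos(n arccos x)
x = np.linspace(-1.0, 1.0, 201)
max_err = max(np.max(np.abs(chebyshev_T(n, x) - np.cos(n * np.arccos(x))))
              for n in range(8))
```

Unlike $\cos(n \arccos x)$, the recurrence needs no trigonometric functions and extends to $|x|>1$, which is one reason it is preferred in practice.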
We consider almost the same type of problem that we studied in $\S\S 2.7$ and $6.10.3$, namely,
$$
\begin{aligned}
-u^{\prime \prime}+u &=f(x), \quad-1 \leq x \leq 1 \\
u(-1) &=0 \\
u(1) &=0 .
\end{aligned}
$$
The change from the interval $(0,1)$ to $(-1,1)$ is to accommodate the special properties of the Chebyshev polynomials, which are defined on $(-1,1)$.
We look for an approximate solution in the form
$$
u_{N}(x)=\sum_{i=1}^{N} v_{i} T_{i-1}(x)
$$
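The collocation idea behind this expansion can be sketched numerically: expand $u_N$ in $T_0,\dots,T_{N-1}$, enforce $-u_N''+u_N=f$ at $N-2$ interior Chebyshev points, and use the last two rows of the linear system for the boundary conditions. The right-hand side $f(x)=3-x^2$ is our choice (not from the text), made so that the exact solution $u(x)=1-x^2$ is known:

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

N = 16                                    # number of basis functions T_0..T_{N-1}
x = np.cos(np.arange(1, N - 1) * np.pi / (N - 1))  # N-2 interior Chebyshev points
f = lambda t: 3.0 - t**2                  # chosen so the exact solution is u = 1 - x^2

A = np.zeros((N, N))
b = np.zeros(N)
for i in range(N):
    c = np.zeros(N)
    c[i] = 1.0                            # coefficient vector of T_i
    # interior rows: (-T_i'' + T_i)(x_j)
    A[: N - 2, i] = -Cheb.chebval(x, Cheb.chebder(c, 2)) + Cheb.chebval(x, c)
    A[N - 2, i] = Cheb.chebval(-1.0, c)   # boundary row: u(-1) = 0
    A[N - 1, i] = Cheb.chebval(1.0, c)    # boundary row: u(+1) = 0
b[: N - 2] = f(x)

v = np.linalg.solve(A, b)                 # coefficients of u_N
xx = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(Cheb.chebval(xx, v) - (1.0 - xx**2)))
```

Since this exact solution is itself a polynomial, the collocation solution reproduces it to rounding error; for general smooth $f$ one expects spectral (faster than any fixed power of $1/N$) convergence instead.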
SPECTRAL METHODS FOR TIME-DEPENDENT PROBLEMS
The standard way to apply spectral methods to time-dependent problems is to use a spectral approximation for the spatial variables, which reduces the original problem to an ODE system. This “method of lines” approach allows us to then apply any ODE method to obtain the final approximation. We retain spectral accuracy in the spatial variables, but only the accuracy of the ODE method for the variation in time. This is considered acceptable because it is generally less costly to take a smaller time step than to refine the spatial approximation.
We will make all this more specific by considering the following example problem:
$$
\begin{aligned}
u_{t} &=u_{x x}, \quad-1 \leq x \leq 1 \\
u(-1, t) &=u(1, t)=0, \quad t>0 \\
u(x, 0) &=\cos (\pi x / 2)-\sin \pi x
\end{aligned}
$$
which has the exact solution (verify this) $u(x, t)=e^{-(\pi / 2)^{2} t} \cos (\pi x / 2)-e^{-\pi^{2} t} \sin \pi x$. (This is essentially the same example we looked at in $\S\S 9.1$ and $9.2$, adjusted for the interval $(-1,1)$.) We look for an approximation in the form
$$
u_{N}(x, t)=\sum_{k=1}^{N-1} v_{k}(t) C_{k+1}(x)
$$
where the $C_{i}$ are defined as in (10.15)-(10.16). If we substitute this into the PDE (10.17) and evaluate at the collocation points $\left\{\zeta_{j}^{(N)}\right\}_{j=1}^{N-2}$, we quickly get the equation
$$
\sum_{k=1}^{N-1} v_{k}^{\prime}(t) C_{k+1}\left(\zeta_{j}^{(N)}\right)=\sum_{k=1}^{N-1} v_{k}(t) C_{k+1}^{\prime \prime}\left(\zeta_{j}^{(N)}\right), \quad 1 \leq j \leq N-2 .
$$
The boundary conditions are automatically satisfied, and thus we have the following ODE system for the coefficients $v_{k}(t)$ :
$$
M v^{\prime}(t)=K v(t), \quad v(0)=v_{0},
$$
where the matrices $M$ and $K$ are defined by
$$
M=\left[\begin{array}{ccccc}
C_{2}\left(\zeta_{1}^{(N)}\right) & C_{3}\left(\zeta_{1}^{(N)}\right) & \cdots & \cdots & C_{N}\left(\zeta_{1}^{(N)}\right) \\
C_{2}\left(\zeta_{2}^{(N)}\right) & \vdots & \vdots & \vdots & C_{N}\left(\zeta_{2}^{(N)}\right) \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
C_{2}\left(\zeta_{N-2}^{(N)}\right) & \cdots & \cdots & \cdots & C_{N}\left(\zeta_{N-2}^{(N)}\right)
\end{array}\right]
$$
and
$$
K=\left[\begin{array}{ccccc}
C_{2}^{\prime \prime}\left(\zeta_{1}^{(N)}\right) & C_{3}^{\prime \prime}\left(\zeta_{1}^{(N)}\right) & \cdots & \cdots & C_{N}^{\prime \prime}\left(\zeta_{1}^{(N)}\right) \\
C_{2}^{\prime \prime}\left(\zeta_{2}^{(N)}\right) & \vdots & \vdots & \vdots & C_{N}^{\prime \prime}\left(\zeta_{2}^{(N)}\right) \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
C_{2}^{\prime \prime}\left(\zeta_{N-2}^{(N)}\right) & \cdots & \cdots & \cdots & C_{N}^{\prime \prime}\left(\zeta_{N-2}^{(N)}\right)
\end{array}\right]
$$
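The text's basis functions $C_i$ are defined in (10.15)-(10.16), which are not reproduced in this excerpt; a common choice with the required boundary behavior is $C_k(x) = T_k(x) - T_{k-2}(x)$, which vanishes at $x = \pm 1$, and we assume that form here purely for illustration (using equally many basis functions and collocation points so the system is square). The sketch below assembles $M$ and $K$ at interior Chebyshev points, projects the initial condition, and advances $M v' = K v$ with a matrix exponential, so the only error is spatial:

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb
from scipy.linalg import expm

n = 16                                                 # basis functions C_2..C_{n+1}
zeta = np.cos(np.arange(1, n + 1) * np.pi / (n + 1))   # interior collocation points

def basis(k):
    # Chebyshev coefficients of C_k = T_k - T_{k-2} (assumed form; vanishes at +-1)
    c = np.zeros(k + 1)
    c[k] = 1.0
    c[k - 2] = -1.0
    return c

M = np.zeros((n, n))                                   # M[j, col] = C_k(zeta_j)
K = np.zeros((n, n))                                   # K[j, col] = C_k''(zeta_j)
for col, k in enumerate(range(2, n + 2)):
    M[:, col] = Cheb.chebval(zeta, basis(k))
    K[:, col] = Cheb.chebval(zeta, Cheb.chebder(basis(k), 2))

# coefficients at t = 0 from u(x, 0) = cos(pi x / 2) - sin(pi x)
u0 = np.cos(np.pi * zeta / 2) - np.sin(np.pi * zeta)
v0 = np.linalg.solve(M, u0)

t = 0.1
v = expm(t * np.linalg.solve(M, K)) @ v0               # solves M v' = K v in time

u_num = M @ v                                          # u_N at the collocation points
u_exact = (np.exp(-(np.pi / 2) ** 2 * t) * np.cos(np.pi * zeta / 2)
           - np.exp(-np.pi ** 2 * t) * np.sin(np.pi * zeta))
err = np.max(np.abs(u_num - u_exact))
```

In practice one replaces `expm` with an ODE integrator (backward Euler, trapezoid rule, etc.), as the method-of-lines discussion above describes; a stiff solver is appropriate because the eigenvalues of $M^{-1}K$ grow rapidly with $n$.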
CLENSHAW-CURTIS QUADRATURE
A skeptical reader might wonder why a section on quadrature appears at the end of a chapter on spectral methods for ODEs and PDEs. The answer is very simple: Our spectral methods for differential equations are based on using Chebyshev expansions, and that is also the basis for Clenshaw-Curtis quadrature [4].
Consider the integration problem
$$
I(f)=\int_{-1}^{1} f(x) d x
$$
In Gaussian quadrature ($\S 5.6$) we found $N$ weights $w_{i}$ and $N$ abscissas $\xi_{i}$, so that the quadrature
$$
\int_{-1}^{1} f(x) d x \approx \sum_{i=1}^{N} w_{i} f\left(\xi_{i}\right)
$$
is exact for all polynomials of degree $\leq 2 N-1$. This produced some remarkably accurate quadrature rules, but the weights and abscissas are not easily computed. ${ }^{8}$ The Clenshaw-Curtis idea, which leads to simpler weights and abscissas, is simply to expand $f$ in a series of Chebyshev polynomials, and integrate this exactly. We have
$$
\int_{-1}^{1} f(x) d x \approx \sum_{k=1}^{N} w_{k}^{(N)} f\left(\xi_{k}^{(N)}\right)
$$
which is very similar in form to Gaussian quadrature. The abscissas are the same as our collocation points,
$$
\xi_{k}^{(N)}=\cos \left(\frac{k \pi}{N+1}\right) .
$$
The weights can be computed very easily, much more easily than for Gaussian quadrature. Following Boyd [1], we have
$$
w_{k}^{(N)}=\frac{2 \sin \left(\frac{k \pi}{N+1}\right)}{N+1} \sum_{j=1}^{N} \sin \left(\frac{j k \pi}{N+1}\right)\left(\frac{1-\cos j \pi}{j}\right)
$$
for $1 \leq k \leq N$.
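These two formulas translate directly into a few lines of code. Here is a sketch (the helper name `clenshaw_curtis` is ours), tested on $\int_{-1}^{1} e^{x}\,dx = e - e^{-1}$:

```python
import numpy as np

def clenshaw_curtis(N):
    """Abscissas xi_k = cos(k pi/(N+1)) and the weights above, 1 <= k <= N."""
    k = np.arange(1, N + 1)
    j = np.arange(1, N + 1)
    xi = np.cos(k * np.pi / (N + 1))
    S = np.sin(np.outer(j, k) * np.pi / (N + 1))       # S[j-1, k-1] = sin(jk pi/(N+1))
    g = (1.0 - np.cos(j * np.pi)) / j                  # (1 - cos j pi)/j
    w = 2.0 * np.sin(k * np.pi / (N + 1)) / (N + 1) * (g @ S)
    return xi, w

xi, w = clenshaw_curtis(20)
approx = w @ np.exp(xi)                                # quadrature for e^x on [-1, 1]
exact = np.e - 1.0 / np.e
```

Since the rule integrates constants exactly, the weights sum to $2$, which is a convenient sanity check; for an analytic integrand like $e^x$ the error with $N = 20$ points is already near rounding level.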