Linear Algebra Assignment Help | Gram-Schmidt Orthogonalization | QR Decomposition | Least Squares Regression

Finding linear algebra hard? Let an experienced tutor guide you! For students learning linear algebra, computational skill and a calculus background alone are neither sufficient nor strictly necessary. The most fundamental viewpoint is to combine the linear-transformation perspective with the matrix-computation perspective, letting the two reinforce each other and using whichever is more advantageous for the problem at hand. (This is loosely analogous to how a function's representations in physical space and frequency space complement each other, something the uncertainty principle partially captures, though the analogy is not exact.)
UpriviateTA works in the fields it knows best! Our experts are fluent in the classic papers and equation models and have read the complete standard book lists. For example, matrix theory, which is closely tied to linear algebra, and matrix inequalities such as Weyl's inequality and the min-max principle, the polynomial method, and the rank trick pose no difficulty, and computations involving singular value decomposition, linear least squares (LLS), Jordan canonical form, QR decomposition, optimization problems built on advanced LLS, and even the linear programming method (the key to the solutions of sphere packing in dimensions 8 and 24) are second nature. As some masters have observed, many of the methods that appear in linear algebra assignments and papers grew out of attempts to solve a few central problems of the subject, such as geometric invariant theory (GIT), studied since Klein's time, and optimization problems studied since Newton's time (the cycloid). We highly recommend our linear algebra service at UpriviateTA: since 2017 we have completed nearly a thousand similar assignments. Below is a worked solution from a typical case.
Problem 1. Let \(A\) be a real matrix. If there exists an orthogonal matrix \(Q\) such that \(A=Q\left[\begin{array}{l}R \\ 0\end{array}\right],\) show that $$ A^{T} A=R^{T} R $$
Proof. This simply involves multiplying the block matrices out, using \(Q^{T} Q=I\): $$ A^{T} A=\left[\begin{array}{ll} R^{T} & 0 \end{array}\right] Q^{T} Q\left[\begin{array}{l} R \\ 0 \end{array}\right]=\left[\begin{array}{ll} R^{T} & 0 \end{array}\right]\left[\begin{array}{l} R \\ 0 \end{array}\right]=R^{T} R $$
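The identity is easy to check numerically. Here is a minimal MATLAB sketch; the matrix size and the random test matrix are illustrative assumptions, not part of the problem.

% Check A'*A = R'*R for a random tall matrix A.
A = randn(8, 4);            % illustrative 8-by-4 test matrix
[Q, R] = qr(A);             % full QR: Q is 8x8 orthogonal, rows 5:8 of R are zero
R1 = R(1:4, 1:4);           % the square upper-triangular block
disp(norm(A'*A - R1'*R1))   % should be on the order of machine epsilon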
Problem 2. Let \(A\) be symmetric positive definite. Given an initial guess \(x_{0},\) the method of steepest descent for solving \(A x=b\) is defined as $$ x_{k+1}=x_{k}+\alpha_{k} r_{k} $$ where \(r_{k}=b-A x_{k}\) and \(\alpha_{k}\) is chosen to minimize \(f\left(x_{k+1}\right),\) where $$ f(x):=\frac{1}{2} x^{T} A x-x^{T} b $$ (a) Show that $$ \alpha_{k}=\frac{r_{k}^{T} r_{k}}{r_{k}^{T} A r_{k}} $$ (b) Show that $$ r_{i+1}^{T} r_{i}=0 $$ (c) If \(e_{i}:=x-x_{i} \neq 0,\) show that $$ e_{i+1}^{T} A e_{i+1}<e_{i}^{T} A e_{i} $$ Hint: This is equivalent to showing \(r_{i+1}^{T} A^{-1} r_{i+1}<r_{i}^{T} A^{-1} r_{i}\) (why?).
Proof. (a) To find out what \(\alpha_{k}\) is, apply the recurrence relation to the definition of \(f\left(x_{k+1}\right)\): \(\begin{aligned} f\left(x_{k+1}\right) &=\frac{1}{2}\left(x_{k}+\alpha_{k} r_{k}\right)^{T} A\left(x_{k}+\alpha_{k} r_{k}\right)-\left(x_{k}+\alpha_{k} r_{k}\right)^{T} b \\ &=\frac{1}{2}\left(x_{k}^{T} A x_{k}+\alpha_{k} r_{k}^{T} A x_{k}+\alpha_{k} x_{k}^{T} A r_{k}+\alpha_{k}^{2} r_{k}^{T} A r_{k}\right)-x_{k}^{T} b-\alpha_{k} r_{k}^{T} b \\ &=\frac{1}{2}\left(x_{k}^{T} A x_{k}+\alpha_{k} r_{k}^{T} A x_{k}+\alpha_{k} r_{k}^{T} A^{T} x_{k}+\alpha_{k}^{2} r_{k}^{T} A r_{k}\right)-x_{k}^{T} b-\alpha_{k} r_{k}^{T} b \\ &=\frac{1}{2}\left(x_{k}^{T} A x_{k}+\alpha_{k} r_{k}^{T} A x_{k}+\alpha_{k} r_{k}^{T} A x_{k}+\alpha_{k}^{2} r_{k}^{T} A r_{k}\right)-x_{k}^{T} b-\alpha_{k} r_{k}^{T} b \\ &=\frac{1}{2}\left(x_{k}^{T} A x_{k}+2 \alpha_{k} r_{k}^{T} A x_{k}+\alpha_{k}^{2} r_{k}^{T} A r_{k}\right)-x_{k}^{T} b-\alpha_{k} r_{k}^{T} b \end{aligned}\) Now differentiate with respect to \(\alpha_{k}\) and set to zero: $$ \begin{aligned} \frac{d}{d \alpha_{k}} f\left(x_{k+1}\left(\alpha_{k}\right)\right) &=r_{k}^{T} A x_{k}+\alpha_{k} r_{k}^{T} A r_{k}-r_{k}^{T} b=0 \\ \alpha_{k} r_{k}^{T} A r_{k} &=r_{k}^{T} b-r_{k}^{T} A x_{k}=r_{k}^{T} r_{k} \end{aligned} $$ Isolating \(\alpha_{k}\) yields the desired result. Note that \(\alpha_{k}>0\) whenever \(r_{k} \neq 0,\) since \(A\) is positive definite. (b) It helps to first establish a recurrence relation involving the residuals only: note that $$ r_{k+1}=b-A x_{k+1}=b-A\left(x_{k}+\alpha_{k} r_{k}\right)=r_{k}-\alpha_{k} A r_{k} $$ Then $$ \begin{aligned} r_{k+1}^{T} r_{k} &=\left(r_{k}-\alpha_{k} A r_{k}\right)^{T} r_{k} \\ &=r_{k}^{T} r_{k}-\left(\frac{r_{k}^{T} r_{k}}{r_{k}^{T} A r_{k}}\right) r_{k}^{T} A^{T} r_{k} \\ &=0 \end{aligned} $$ since \(A=A^{T}\). (c) Writing \(k\) in place of \(i\), first note that $$ r_{k}=b-A x_{k}=A x-A x_{k}=A\left(x-x_{k}\right)=A e_{k} $$ so that \(r_{k}^{T} A^{-1} r_{k}=e_{k}^{T} A^{T} A^{-1} A e_{k}=e_{k}^{T} A e_{k},\) and similarly for \(r_{k+1}\). Thus, the two inequalities are completely equivalent. Now we attack the inequality involving \(r\): $$ \begin{aligned} r_{k+1}^{T} A^{-1} r_{k+1} &=r_{k+1}^{T} A^{-1}\left(r_{k}-\alpha_{k} A r_{k}\right) \\ &=r_{k+1}^{T} A^{-1} r_{k}-\underbrace{\alpha_{k} r_{k+1}^{T} r_{k}}_{0 \text { by (b) }} \\ &=\left(r_{k}-\alpha_{k} A r_{k}\right)^{T} A^{-1} r_{k} \\ &=r_{k}^{T} A^{-1} r_{k}-\underbrace{\alpha_{k} r_{k}^{T} r_{k}}_{>0} \\ &<r_{k}^{T} A^{-1} r_{k} . \end{aligned} $$ Here \(\alpha_{k} r_{k}^{T} r_{k}>0\) because \(r_{k}=A e_{k} \neq 0\) and \(\alpha_{k}>0\) by part (a).
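For concreteness, the whole method is only a few lines of code. Below is a minimal MATLAB sketch using the step size derived in (a) and the residual recurrence from (b); the SPD test matrix, the tolerance, and the iteration cap are all illustrative assumptions.

% Steepest descent for Ax = b with A symmetric positive definite.
n = 50;
M = randn(n); A = M'*M + n*eye(n);   % a made-up SPD test matrix
b = randn(n, 1);
x = zeros(n, 1);                     % initial guess x_0
r = b - A*x;
for k = 1:1000                       % illustrative iteration cap
    Ar = A*r;
    alpha = (r'*r) / (r'*Ar);        % optimal step size from part (a)
    x = x + alpha*r;
    r = r - alpha*Ar;                % residual recurrence from part (b)
    if norm(r) < 1e-10, break, end   % illustrative stopping tolerance
end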
Problem 3. Let \(B\) be an \(n \times n\) matrix, and assume that \(B\) is both orthogonal and triangular. (a) Prove that \(B\) must be diagonal. (b) What are the diagonal entries of \(B ?\) (c) Let \(A\) be \(n \times n\) and non-singular. Use parts (a) and (b) to prove that the QR factorization of \(A\) is unique up to the signs of the diagonal entries of \(R\). In particular, show that there exist unique matrices \(Q\) and \(R\) such that \(Q\) is orthogonal, \(R\) is upper triangular with positive entries on its main diagonal, and \(A=Q R\).
Proof. (a) We need two facts: i. \(P, Q\) orthogonal \(\Longrightarrow P Q\) orthogonal; ii. \(P\) upper triangular \(\Longrightarrow P^{-1}\) upper triangular. (i) is easy: \((P Q)^{T}(P Q)=Q^{T} P^{T} P Q=Q^{T} I Q=Q^{T} Q=I\). (ii) can be argued by considering the \(k\)th column \(p_{k}\) of \(P^{-1}\). Then \(P^{-1} e_{k}=p_{k},\) so \(P p_{k}=e_{k}\). Since \(P\) is upper triangular, we can use backward substitution to solve for \(p_{k}\). Recall the formula for backward substitution for a general upper triangular system \(T x=b\): $$ x_{k}=\frac{1}{t_{k k}}\left(b_{k}-\sum_{j=k+1}^{n} t_{k j} x_{j}\right) $$ In our case, since \(\left(e_{k}\right)_{j}=0\) for \(j>k,\) backward substitution gives \(\left(p_{k}\right)_{j}=0\) for \(j>k\). In other words, the \(k\)th column of \(P^{-1}\) is zero below the \(k\)th row, which is to say \(P^{-1}\) is upper triangular.

Now we show that \(B\) both orthogonal and triangular implies \(B\) diagonal. Without loss of generality (i.e. replacing \(B\) by \(B^{T}\) if necessary), we can assume \(B\) is upper triangular. Then \(B^{-1}\) is also upper triangular by (ii). But \(B^{-1}=B^{T}\) by orthogonality, and \(B^{T}\) is lower triangular. So \(B^{T}\) (and hence \(B\)) is both upper and lower triangular, implying that \(B\) is in fact diagonal.

(b) We know \(B\) is diagonal, so \(B=B^{T}\). Let \(B=\operatorname{diag}\left(b_{11}, \ldots, b_{n n}\right)\). Then \(I=B B^{T}=\operatorname{diag}\left(b_{11}^{2}, \ldots, b_{n n}^{2}\right),\) so \(b_{i i}^{2}=1\) for all \(i\). Thus, the diagonal entries of \(B\) are \(\pm 1\).

(c) There is an existence part and a uniqueness part to this problem. In class we showed existence of a decomposition \(A=Q R\) where \(Q\) is orthogonal and \(R\) is upper triangular, but not necessarily with positive entries on the diagonal, so we have to fix it up. We know that \(r_{i i} \neq 0\) (otherwise \(R\) would be singular, contradicting the non-singularity of \(A\)), so we can define $$ D=\operatorname{diag}\left(\operatorname{sgn}\left(r_{11}\right), \ldots, \operatorname{sgn}\left(r_{n n}\right)\right) $$ where $$ \operatorname{sgn}(x)=\left\{\begin{array}{ll} 1, & x>0 \\ 0, & x=0 \\ -1, & x<0 \end{array}\right. $$ Note that \(D\) is orthogonal and \(\tilde{R}=D R\) has positive diagonal. So if we define \(\tilde{Q}=Q D\) (orthogonal), then \(A=\tilde{Q} \tilde{R}\) gives the required decomposition, which proves existence.

For uniqueness, suppose \(A=Q_{1} R_{1}=Q_{2} R_{2}\) are two such decompositions. Then \(Q_{2}^{T} Q_{1}=R_{2} R_{1}^{-1},\) so the left-hand side is orthogonal and the right-hand side is upper triangular. By parts (a) and (b), both sides are equal to a diagonal matrix with \(\pm 1\) as the only possible entries. But both \(R_{1}\) and \(R_{2}\) have positive diagonals, so \(R_{2} R_{1}^{-1}\) must have a positive diagonal (you should verify that for upper triangular matrices, the diagonal of the inverse is the inverse of the diagonal, and the diagonal of a product is the product of the diagonals). Thus, both sides are equal to the diagonal matrix with \(+1\) on the diagonal, i.e. the identity. So \(Q_{2}^{T} Q_{1}=I \Longrightarrow Q_{1}=Q_{2},\) and \(R_{2} R_{1}^{-1}=I \Longrightarrow R_{1}=R_{2}\). This shows uniqueness.
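The sign-fixing step in the existence argument is straightforward to carry out in code. A minimal MATLAB sketch follows; the random test matrix is an illustrative assumption (a random matrix is non-singular with probability 1).

% Normalize a QR factorization so that R has positive diagonal entries.
A = randn(5);              % illustrative non-singular test matrix
[Q, R] = qr(A);
D = diag(sign(diag(R)));   % D = diag(sgn(r_11), ..., sgn(r_nn)), so D*D = I
Qt = Q*D;                  % still orthogonal
Rt = D*R;                  % upper triangular with positive diagonal
disp(norm(A - Qt*Rt))      % should be on the order of machine epsilon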
Problem 4. We are given the following data for the total population of the United States, as determined by the U.S. Census, for the years 1900 to 2000. The units are millions of people.
$$
\begin{array}{cr}
t & y \\ \hline
1900 & 75.995 \\
1910 & 91.972 \\
1920 & 105.711 \\
1930 & 123.203 \\
1940 & 131.669 \\
1950 & 150.697 \\
1960 & 179.323 \\
1970 & 203.212 \\
1980 & 226.505 \\
1990 & 249.633 \\
2000 & 281.422 \\ \hline
\end{array}
$$
Suppose we model the population growth by $$ y \approx \beta_{1} t^{3}+\beta_{2} t^{2}+\beta_{3} t+\beta_{4} $$ (a) Use the normal equations to compute \(\beta\). Plot the resulting polynomial and the exact values \(\mathbf{y}\) in the same graph. (b) Use the QR factorization to obtain \(\beta\) for the same problem. Plot the resulting polynomial and the exact values, as well as the polynomial in part (a), in the same graph. Also compare your coefficients with those obtained in part (a). (c) Suppose we translate and scale the time variable \(t\) by $$ s=(t-1950) / 50 $$ and use the model $$ y \approx \beta_{1} s^{3}+\beta_{2} s^{2}+\beta_{3} s+\beta_{4} $$ Now solve for the coefficients \(\beta\) and plot the polynomial and the exact values in the same graph. Which of the polynomials in parts (a) through (c) gives the best fit to the data?
Solution. (a) The least squares problem is $$ \min_{\beta} \|y-A \beta\| $$ where $$ A=\left[\begin{array}{cccc} t_{1}^{3} & t_{1}^{2} & t_{1} & 1 \\ \vdots & & & \vdots \\ t_{n}^{3} & t_{n}^{2} & t_{n} & 1 \end{array}\right] $$ with \(n=11\) and \(t_{i}=1900+10(i-1)\). We form the normal equations \(A^{T} A \beta=A^{T} y\) and solve for \(\beta\) to obtain

beta =
$$ \begin{array}{r} 1.010415596011712 \mathrm{e}-005 \\ -4.961885780449666 \mathrm{e}-002 \\ 8.025770365215973 \mathrm{e}+001 \\ -4.259196447217581 \mathrm{e}+004 \end{array} $$

The plot is shown in Figure 4. Here we have \(\|y-A \beta\|_{2}=10.1\).

(b) Now we use the QR factorization to obtain beta2:

>> [Q, R] = qr(A);
>> b = Q'*y;
>> beta2 = R(1:4, 1:4) \ b(1:4)

beta2 =
$$ \begin{array}{r} 1.010353535409117 \mathrm{e}-005 \\ -4.961522727598729 \mathrm{e}-002 \\ 8.025062525889830 \mathrm{e}+001 \\ -4.258736497384948 \mathrm{e}+004 \end{array} $$

So \(\|\text{beta2}-\text{beta}\|_{2}=4.60\). The norm of the residual is essentially the same as in part (a), with a difference of \(1.08 \times 10^{-10}\). This shows the least-squares problem is very ill-conditioned: a tiny change in the residual yields a relatively large change in the solution vector. However, Figure 5 shows that the two fits are hardly distinguishable on a graph (mostly because both residuals are small).

(c) We now perform a change of the independent variable $$ s=(t-1950) / 50 $$ so that \(s \in[-1,1]\). We rebuild \(A\) from powers of \(s\), set up the least squares system, and solve:

>> [Q, R] = qr(A);
>> b = Q'*y;
>> beta3 = R(1:4, 1:4) \ b(1:4)

beta3 =
$$ \begin{array}{r} 1.262941919191900 \mathrm{e}+000 \\ 2.372613636363639 \mathrm{e}+001 \\ 1.003659217171717 \mathrm{e}+002 \\ 1.559042727272727 \mathrm{e}+002 \end{array} $$

Note that the coefficients are different because we are using a different basis. The residual is again essentially the same as in parts (a) and (b) (the difference is \(1.54 \times 10^{-11}\)). So even though the coefficients are quite different, the quality of the fit is essentially the same, with (c) more accurate by just a tiny bit.
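For reference, here is a self-contained MATLAB sketch covering parts (a) through (c). The qr and backslash steps mirror the transcript above; the variable names and the final residual comparison are illustrative choices, not the original assignment's code.

% Census data: decades 1900-2000, population in millions.
t = (1900:10:2000)';
y = [75.995; 91.972; 105.711; 123.203; 131.669; 150.697; ...
     179.323; 203.212; 226.505; 249.633; 281.422];

% (a) Cubic fit via the normal equations.
A = [t.^3, t.^2, t, ones(size(t))];
beta = (A'*A) \ (A'*y);

% (b) The same fit via the QR factorization.
[Q, R] = qr(A);
b = Q'*y;
beta2 = R(1:4, 1:4) \ b(1:4);

% (c) Translate and scale the time variable so s lies in [-1, 1].
s = (t - 1950) / 50;
As = [s.^3, s.^2, s, ones(size(s))];
[Qs, Rs] = qr(As);
bs = Qs'*y;
beta3 = Rs(1:4, 1:4) \ bs(1:4);

% Residual norms of the three fits: essentially identical.
disp([norm(y - A*beta), norm(y - A*beta2), norm(y - As*beta3)])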
For more linear algebra case studies, see here.

E-mail: [email protected]  WeChat: shuxuejun


uprivate™ is a professional writing service for Chinese students studying abroad, focused on providing reliable assignment help for North America, Australia, and the UK, specializing in mathematics, statistics, finance, economics, computer science, and physics.
