
# Physics Homework Help | Basis Vectors | Relativity Exam Help

## Physics Homework Help

### 3.2 Basis Vectors

First we will describe an $n$-dimensional flat space. Any point in this space can be specified by $n$ real numbers called its coordinates, denoted by $x^{i}, i=1,2, \ldots, n$, where we have used a superscript to label the coordinates. Note that the superscript is a label, not an exponent, and any such confusion should be avoided. Strictly speaking, a curvilinear coordinate system that is not Cartesian does not cover the entire manifold, but a Cartesian system does. At this point, we will call a space flat if it can be covered fully by the $x^{i}$ and the distance between adjacent points labelled by $x^{i}$ and $x^{i}+d x^{i}$ is given by:
$$d s^{2}=\eta_{i k} d x^{i} d x^{k}$$
where $\eta_{i k}$ is the metric tensor, which is symmetric, $\eta_{i k}=\eta_{k i}$, and whose components are constants. This, as we will see, is fully equivalent to the description in Chapter 4 of a flat space in terms of the Riemann tensor, in which the Riemann tensor vanishes identically throughout the space (see Exercise 4.3 in Chapter 4). By a principal axis transformation and rescaling, we can make $\eta_{i k}$ diagonal with each diagonal entry equal to $\pm 1$. The spacetime of SR is an example of a 4-dimensional flat space. If the diagonal entries are all $+1$, then $d s^{2}$ is a sum of squares, which is just the repeated application of the Pythagoras theorem. Such a space is called Euclidean and the metric is called positive definite. The SR spacetime has an indefinite metric because, in the convention we have adopted, the diagonal of $\eta_{i k}$ has one positive sign and three negative signs.

We will in general consider any curvilinear coordinate system, which may not necessarily span the full space. This does not hamper us in any way because we will only be doing local analysis. As remarked before, in flat space it is possible to specify a point $\mathbf{x}$ uniquely, namely the position vector, using $n$ coordinates $x^{i}, i=1, \ldots, n$. One can construct "imaginary grid lines" for mental visualisation of the space coordinates. For a given $k$, along the $x^{k}$ grid line only the coordinate $x^{k}$ varies, while the others are kept constant. One can also imagine $(n-1)$-dimensional hypersurfaces which slice the space, such that on each slice one coordinate is held constant. Surfaces on which a given coordinate is kept constant should not intersect, as that would make a point in space have two values of the same coordinate, and hence not yield a valid coordinate system.
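The line element $ds^{2}=\eta_{ik}\,dx^{i}dx^{k}$ is just a double sum over the indices. A minimal Python sketch (our own illustration, not from the text), using the SR convention of one positive and three negative diagonal entries:

```python
import numpy as np

# Minkowski metric of SR in the convention used here:
# diagonal entries (+1, -1, -1, -1).
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# A small coordinate displacement dx^i = (dt, dx, dy, dz)
# (illustrative values).
dx = np.array([2.0, 1.0, 0.0, 0.0])

# ds^2 = eta_{ik} dx^i dx^k, summed over both indices.
ds2 = np.einsum('ik,i,k->', eta, dx, dx)
print(ds2)  # 2^2 - 1^2 = 3.0
```

Replacing `eta` by `np.diag([1.0, 1.0, 1.0, 1.0])` gives the Euclidean, positive definite case, where `ds2` is a sum of squares.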

In order to make the discussion simple and familiar, we will consider a positive definite metric. The results are essentially the same and easily generalised to an indefinite metric. To represent a vector field $\mathbf{V}(\mathbf{x})$ in this space in terms of its components we
need to define $n$ basis vectors at each point in space. Let $\mathbf{x}$ be the position vector depending on $n$ curvilinear coordinates $x^{i}$, that is, $\mathbf{x}\left(x^{i}\right)$. The $x^{i}$ form a grid of coordinate curves covering the space. The basis vectors can be chosen in two natural ways:

1. Along the grid lines: if the $i^{\text{th}}$ coordinate is changed from $x^{i}$ to $x^{i}+\mathrm{d} x^{i}$, the direction in which the position vector $\mathbf{x}$ changes (which is along the grid lines) could be taken as the direction of the $i^{\text{th}}$ basis vector $\mathbf{e}_{i}$ at $\mathbf{x}$. That is, $$\mathbf{e}_{i}:=\frac{\partial \mathbf{x}}{\partial x^{i}}$$
Therefore geometrically, each $\mathbf{e}_{i}$ is a tangent to the coordinate curve $x^{i}$.
2. Normal to the slicing surfaces: the gradient of $x^{i}$ at $\mathbf{x}$ is normal to the surface of constant $x^{i}$. Here the basis vectors would be $$\mathbf{e}^{i}:=\nabla x^{i}$$
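As a concrete illustration (our own example, not from the text), consider plane polar coordinates $(r, \theta)$ with position vector $\mathbf{x}(r, \theta)=(r\cos\theta,\ r\sin\theta)$. The two constructions give
$$\mathbf{e}_{r}=\frac{\partial \mathbf{x}}{\partial r}=(\cos\theta,\ \sin\theta), \qquad \mathbf{e}_{\theta}=\frac{\partial \mathbf{x}}{\partial \theta}=(-r\sin\theta,\ r\cos\theta)$$
$$\mathbf{e}^{r}=\nabla r=(\cos\theta,\ \sin\theta), \qquad \mathbf{e}^{\theta}=\nabla \theta=\left(-\frac{\sin\theta}{r},\ \frac{\cos\theta}{r}\right)$$
Here the coordinates happen to be orthogonal, so the two sets are co-aligned: $\mathbf{e}^{r}=\mathbf{e}_{r}$ and $\mathbf{e}^{\theta}=\mathbf{e}_{\theta}/r^{2}$, i.e. $\mathbf{e}^{i} \propto \mathbf{e}_{i}$. Note also that $\mathbf{e}_{\theta}$ is not a unit vector: $\left|\mathbf{e}_{\theta}\right|=r$.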
A vector field can now be expanded in either basis:
$$\mathbf{V}=V^{i} \mathbf{e}_{i}=V_{i} \mathbf{e}^{i}$$
Since we are at a given point $\mathbf{x}_{0}$, we leave out the explicit position dependence. But the above equation is true at each point of the defined region. Note also the superscript and subscript on the components of the vector $\mathbf{V}$. We will discuss this point in the next section. In an orthogonal curvilinear coordinate system, the slicing surfaces meet at right angles and one can show that the above two sets of basis vectors become co-aligned $\left(\mathbf{e}_{i} \propto \mathbf{e}^{i}\right)$. In general, however, this is not true: in a general curvilinear coordinate system the basis vectors are not orthogonal. Nevertheless, the basis vectors $\mathbf{e}^{i}$ and $\mathbf{e}_{i}$ defined in the above manner form a reciprocal system of vectors, such that $$\mathbf{e}^{i} \cdot \mathbf{e}_{j}=\delta_{j}^{i}$$
where $$\delta_{j}^{i}:=\begin{cases}1 & \text{if } i=j \\ 0 & \text{otherwise}\end{cases}$$
is the Kronecker delta. The concept of a non-orthogonal curvilinear coordinate system, and how the basis vectors form a reciprocal system of vectors, is made clear with the specific example of a two-dimensional oblique coordinate system on a plane in Sect. 3.7. Note, however, that this example uses a coordinate system that covers the whole manifold, which is simpler to understand. In general, a single coordinate system may not suffice to cover the whole space.
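The reciprocal construction can be sketched numerically. Below is a minimal Python sketch (our own illustration, not the book's Sect. 3.7 example) for a two-dimensional oblique basis whose axes meet at 60 degrees; the reciprocal basis is obtained algebraically as the inverse transpose of the matrix of tangent basis vectors:

```python
import numpy as np

# Oblique 2D tangent basis (hypothetical example): two unit
# vectors e_1, e_2 at 60 degrees to each other.
a = np.pi / 3
e1 = np.array([1.0, 0.0])
e2 = np.array([np.cos(a), np.sin(a)])
E = np.stack([e1, e2])          # rows are the tangent basis e_i

# The reciprocal basis e^i satisfies e^i . e_j = delta^i_j,
# so its vectors are the rows of the inverse transpose of E.
E_rec = np.linalg.inv(E).T      # rows are e^1, e^2

# Check the reciprocity relation e^i . e_j = delta^i_j.
print(np.round(E_rec @ E.T, 12))     # identity matrix

# An arbitrary vector V is recovered as V = V^i e_i,
# with V^i = V . e^i.
V = np.array([2.0, 1.0])
V_contra = E_rec @ V
print(np.allclose(V_contra @ E, V))  # True
```

Note that for this non-orthogonal basis `E_rec` differs from `E`, whereas for an orthonormal basis the two would coincide.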
The above equations can be used to extract the components of a vector along a given basis vector. Note that we can talk of unit vectors and dot products because we are in Euclidean space, on which the usual metric is defined. The components of $\mathbf{V}$ along the basis vectors $\mathbf{e}_{i}$ and $\mathbf{e}^{i}$ can thus be obtained by projecting the vector on the corresponding reciprocal vector: $$V^{i}=\mathbf{V} \cdot \mathbf{e}^{i}, \qquad V_{i}=\mathbf{V} \cdot \mathbf{e}_{i}$$ Again, this is very different from the special case of orthogonal coordinates.

**Derivation 1** Show that the two sets of basis vectors $\mathbf{e}_{i}$ and $\mathbf{e}^{i}$ form a reciprocal system of vectors (Spiegel 1959).
Using the chain rule of partial differentiation one can write
$$\mathrm{d} \mathbf{x}=\sum_{i=1}^{n} \frac{\partial \mathbf{x}}{\partial x^{i}}\, \mathrm{d} x^{i}=\sum_{i=1}^{n} \mathbf{e}_{i}\, \mathrm{d} x^{i}$$
Taking the dot product of each side with $\mathbf{e}^{j}:=\nabla x^{j}$, one gets
$$\nabla x^{j} \cdot \mathrm{d} \mathbf{x}=\mathrm{d} x^{j}=\sum_{i=1}^{n}\left(\mathbf{e}_{i} \cdot \mathbf{e}^{j}\right) \mathrm{d} x^{i}$$
Since the above identity has to hold for arbitrary $\mathrm{d} \mathbf{x}$, comparing the coefficients of $\mathrm{d} x^{i}$ on both sides we obtain $\mathbf{e}_{i} \cdot \mathbf{e}^{j}=\delta_{i}^{j}$.
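The derivation can also be checked numerically along the route it uses, i.e. building $\mathbf{e}_{i}$ from partial derivatives of $\mathbf{x}$ and $\mathbf{e}^{i}$ from gradients of the coordinates. A minimal sketch in plane polar coordinates (our own example, assuming $\mathbf{x}=(r\cos\theta, r\sin\theta)$):

```python
import numpy as np

# Pick an arbitrary point (r, theta).
r, t = 1.5, 0.7

# Tangent basis: e_i = partial x / partial x^i.
e_r = np.array([np.cos(t), np.sin(t)])
e_t = np.array([-r * np.sin(t), r * np.cos(t)])

# Gradient basis: e^i = grad x^i, computed from
# r = sqrt(x^2 + y^2) and theta = atan2(y, x).
g_r = np.array([np.cos(t), np.sin(t)])
g_t = np.array([-np.sin(t) / r, np.cos(t) / r])

# Assemble the matrix of dot products e_i . e^j.
delta = np.array([[g_r @ e_r, g_r @ e_t],
                  [g_t @ e_r, g_t @ e_t]])
print(np.round(delta, 12))  # identity: the two sets are reciprocal
```

Changing `r` and `t` to any other values (with `r != 0`) leaves the result unchanged, reflecting that the reciprocity relation holds at each point of the defined region.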
