
# Markov property and transition matrix


## 11.1 Markov property and transition matrix
Markov chains “live” in both space and time: the set of possible values of the $X_{n}$ is called the state space, and the index $n$ represents the evolution of the process over time. The state space of a Markov chain can be either discrete or continuous, and time can also be either discrete or continuous (in the continuous-time setting, we would imagine a process $X_{t}$ defined for all real $t \geq 0$). In this chapter we will focus on the discrete-state, discrete-time setting. Specifically, we will assume that the $X_{n}$ take values in a finite set, which we usually take to be $\{1,2, \ldots, M\}$ or $\{0,1, \ldots, M\}$.
Definition 11.1.1 (Markov chain). A sequence of random variables $X_{0}, X_{1}, X_{2}, \ldots$ taking values in the state space $\{1,2, \ldots, M\}$ is called a Markov chain if for all $n \geq 0$,
$$P\left(X_{n+1}=j \mid X_{n}=i, X_{n-1}=i_{n-1}, \ldots, X_{0}=i_{0}\right)=P\left(X_{n+1}=j \mid X_{n}=i\right) .$$
The quantity $P\left(X_{n+1}=j \mid X_{n}=i\right)$ is called the transition probability from state $i$ to state $j$. In this book, when referring to a Markov chain we will implicitly assume that it is time-homogeneous, which means that the transition probability $P\left(X_{n+1}=j \mid X_{n}=i\right)$ is the same for all times $n$. But care is needed, since the literature is not consistent about whether to say “time-homogeneous Markov chain” or just “Markov chain”.

The above condition is called the Markov property, and it says that given the entire past history $X_{0}, X_{1}, X_{2}, \ldots, X_{n}$, only the most recent term, $X_{n}$, matters for predicting $X_{n+1}$. If we think of time $n$ as the present, times before $n$ as the past, and times after $n$ as the future, the Markov property says that given the present, the past and future are conditionally independent. The Markov property greatly simplifies computations of conditional probability: instead of having to condition on the entire past, we only need to condition on the most recent value.
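To make the Markov property concrete, here is a minimal simulation sketch (not from the text; the two-state chain and its transition probabilities are invented for illustration). It estimates $P(X_2 = 0 \mid X_1 = x_1, X_0 = x_0)$ for each combination of $x_0$ and $x_1$; by the Markov property, the estimates should depend on $x_1$ but not on $x_0$.

```python
import random

# Hypothetical two-state chain (states 0 and 1), used only for illustration.
# From state 0: stay with prob 0.7; from state 1: move to 0 with prob 0.6.
P = {0: [0.7, 0.3], 1: [0.6, 0.4]}

def step(state, rng):
    """Take one step of the chain from `state`."""
    return 0 if rng.random() < P[state][0] else 1

rng = random.Random(0)
# Count transitions X1 -> X2, stratified by the earlier value X0.
counts = {(x0, x1): [0, 0] for x0 in (0, 1) for x1 in (0, 1)}
for _ in range(200_000):
    x0 = rng.randint(0, 1)
    x1 = step(x0, rng)
    x2 = step(x1, rng)
    counts[(x0, x1)][x2] += 1

# For each x1, P(X2 = 0 | X1 = x1, X0 = x0) should not depend on x0.
for x1 in (0, 1):
    for x0 in (0, 1):
        n = sum(counts[(x0, x1)])
        print(f"P(X2=0 | X1={x1}, X0={x0}) ~ {counts[(x0, x1)][0] / n:.3f}")
```

For each fixed $x_1$, the two printed estimates (one per value of $x_0$) agree up to sampling noise, illustrating that the past drops out once the present is known.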

To describe the dynamics of a Markov chain, we need to know the probabilities of moving from any state to any other state, that is, the probabilities $P\left(X_{n+1}=j \mid X_{n}=i\right)$ on the right-hand side of the Markov property. This information can be encoded in a matrix, called the transition matrix, whose $(i, j)$ entry is the probability of going from state $i$ to state $j$ in one step of the chain.

Definition 11.1.2 (Transition matrix). Let $X_{0}, X_{1}, X_{2}, \ldots$ be a Markov chain with state space $\{1,2, \ldots, M\}$, and let $q_{i j}=P\left(X_{n+1}=j \mid X_{n}=i\right)$ be the transition probability from state $i$ to state $j$. The $M \times M$ matrix $Q=\left(q_{i j}\right)$ is called the transition matrix of the chain.
Note that $Q$ is a nonnegative matrix in which each row sums to 1. This is because, starting from any state $i$, the events “move to $1$”, “move to $2$”, …, “move to $M$” are disjoint, and their union has probability 1 because the chain has to go somewhere.
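This row-sum property is easy to check mechanically. The sketch below (not from the text) represents a transition matrix as a list of rows and validates that it is stochastic; the $2 \times 2$ matrix used is the rainy-sunny matrix from the example that follows.

```python
# Represent a transition matrix as a list of rows and validate that it is
# a proper stochastic matrix: nonnegative entries, each row summing to 1.
Q = [
    [1/3, 2/3],  # from R: back to R, or to S
    [1/2, 1/2],  # from S: to R, or stay in S
]

def is_stochastic(Q, tol=1e-9):
    """Check nonnegativity and that every row sums to 1 (up to tolerance)."""
    return all(
        all(q >= 0 for q in row) and abs(sum(row) - 1) < tol
        for row in Q
    )

print(is_stochastic(Q))  # → True
```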

Example 11.1.3 (Rainy-sunny Markov chain). Suppose that on any given day, the weather can either be rainy or sunny. If today is rainy, then tomorrow will be rainy with probability $1 / 3$ and sunny with probability $2 / 3$. If today is sunny, then tomorrow will be rainy with probability $1 / 2$ and sunny with probability $1 / 2$. Letting $X_{n}$ be the weather on day $n$, $X_{0}, X_{1}, X_{2}, \ldots$ is a Markov chain on the state space $\{R, S\}$, where $R$ stands for rainy and $S$ for sunny. We know that the Markov property is satisfied because, from the description of the process, only today’s weather matters for predicting tomorrow’s.
The transition matrix of the chain is
$$\begin{array}{cc} & \begin{array}{cc} R & S \end{array} \\ \begin{array}{c} R \\ S \end{array} & \left(\begin{array}{cc} 1/3 & 2/3 \\ 1/2 & 1/2 \end{array}\right) \end{array}$$
The first row says that starting from state $R$, we transition back to state $R$ with probability $1/3$ and transition to state $S$ with probability $2/3$. The second row says that starting from state $S$, we have a $1/2$ chance of moving to state $R$ and a $1/2$ chance of staying in state $S$. We could just as well have used
$$\begin{array}{cc} & \begin{array}{cc} S & R \end{array} \\ \begin{array}{c} S \\ R \end{array} & \left(\begin{array}{cc} 1/2 & 1/2 \\ 2/3 & 1/3 \end{array}\right) \end{array}$$
as our transition matrix instead. In general, if there isn’t an obvious ordering of the states of a Markov chain (as with the states $R$ and $S$ ), we just need to fix an ordering of the states and use it consistently.
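The dynamics of this example can be checked by simulation. The sketch below (not from the text) generates a long sample path of the rainy-sunny chain, using state $0$ for $R$ and state $1$ for $S$ as in the first ordering above, and then estimates the transition probabilities from observed transition frequencies; the estimates should approach the entries of $Q$.

```python
import random

# Rainy-sunny transition matrix; state 0 = R (rainy), 1 = S (sunny).
Q = [[1/3, 2/3],
     [1/2, 1/2]]

def simulate(Q, x0, n, rng):
    """Generate a path X_0, ..., X_n of the chain starting from x0."""
    path = [x0]
    for _ in range(n):
        i = path[-1]
        path.append(0 if rng.random() < Q[i][0] else 1)
    return path

rng = random.Random(42)
path = simulate(Q, 0, 100_000, rng)

# Empirical transition frequencies: q_ij ~ #(i -> j) / #(visits to i).
counts = [[0, 0], [0, 0]]
for i, j in zip(path, path[1:]):
    counts[i][j] += 1
for i in range(2):
    total = sum(counts[i])
    print([round(c / total, 3) for c in counts[i]])
```

The printed rows should be close to $(1/3, 2/3)$ and $(1/2, 1/2)$ respectively.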

The transition probabilities of a Markov chain can also be represented with a diagram. Each state is represented by a circle, and the arrows indicate the possible one-step transitions; we can imagine a particle wandering around from state to state, randomly choosing which arrow to follow. Next to the arrows we write the corresponding transition probabilities.

What if the weather tomorrow depended on the weather today and the weather yesterday? For example, suppose that the weather behaves as above, except that if there have been two consecutive days of rain, then tomorrow will definitely be sunny, and if there have been two consecutive days of sun, then tomorrow will definitely be rainy. Under these new weather dynamics, the $X_{n}$ no longer form a Markov chain, as the Markov property is violated: conditional on today’s weather, yesterday’s weather can still provide useful information for predicting tomorrow’s weather.

However, by enlarging the state space, we can create a new Markov chain: let $Y_{n}=\left(X_{n-1}, X_{n}\right)$ for $n \geq 1$. Then $Y_{1}, Y_{2}, \ldots$ is a Markov chain on the state space $\{(R, R),(R, S),(S, R),(S, S)\}$. You can verify that the new transition matrix is
$$\begin{array}{cc} & \begin{array}{cccc} (R,R) & (R,S) & (S,R) & (S,S) \end{array} \\ \begin{array}{c} (R,R) \\ (R,S) \\ (S,R) \\ (S,S) \end{array} & \left(\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1/2 & 1/2 \\ 1/3 & 2/3 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right) \end{array}$$
and that its corresponding graphical representation is given in the following figure.
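One way to carry out that verification is by simulation. The sketch below (not from the text) implements the modified second-order weather dynamics directly, then estimates the transition matrix of the pair process $Y_n = (X_{n-1}, X_n)$ from observed frequencies; the estimates should match the $4 \times 4$ matrix above.

```python
import random

def next_weather(yesterday, today, rng):
    """Tomorrow's weather under the modified dynamics."""
    if yesterday == today == 'R':
        return 'S'                       # two rainy days: definitely sunny
    if yesterday == today == 'S':
        return 'R'                       # two sunny days: definitely rainy
    if today == 'R':                     # otherwise, the original dynamics
        return 'R' if rng.random() < 1/3 else 'S'
    return 'R' if rng.random() < 1/2 else 'S'

rng = random.Random(0)
x = ['R', 'S']
for _ in range(200_000):
    x.append(next_weather(x[-2], x[-1], rng))

# Estimate transition frequencies of Y_n = (X_{n-1}, X_n).
states = [('R', 'R'), ('R', 'S'), ('S', 'R'), ('S', 'S')]
idx = {s: k for k, s in enumerate(states)}
counts = [[0] * 4 for _ in range(4)]
for a, b, c in zip(x, x[1:], x[2:]):
    counts[idx[(a, b)]][idx[(b, c)]] += 1

for row in counts:
    total = sum(row)
    print([round(c / total, 2) for c in row])
```

Note that the rows for $(R,R)$ and $(S,S)$ are deterministic (a single entry equal to 1), reflecting the "two days in a row" rule.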
