

## 11.2 Classification of states
In this section we introduce terminology for describing the various characteristics of a Markov chain. The states of a Markov chain can be classified as recurrent or transient, depending on whether they are visited over and over again in the long run or are eventually abandoned. States can also be classified according to their period, which is a positive integer summarizing the amount of time that can elapse between successive visits to a state. These characteristics are important because they determine the long-run behavior of the Markov chain, which we will study in Section 11.3.

The concepts of recurrence and transience are best illustrated with a concrete example. In the Markov chain shown on the left of Figure 11.2 (previously featured in Example 11.1.5), a particle moving around between states will continue to spend time in all 4 states in the long run, since it is possible to get from any state to any other state. In contrast, consider the chain on the right of Figure 11.2, and let the particle start at state 1. For a while, the chain may linger in the triangle formed by states 1, 2, and 3, but eventually it will reach state 4, and from there it can never return to states 1, 2, or 3. It will then wander around among states 4, 5, and 6 forever. States 1, 2, and 3 are transient and states 4, 5, and 6 are recurrent.
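The contrast between the two chains can be checked by simulation. The transition probabilities below are illustrative assumptions (the exact numbers in Figure 11.2 are not reproduced here): states 1, 2, and 3 form a triangle with a small leak into state 4, and states 4, 5, and 6 form a closed cycle that the chain can never leave.

```python
import random

# Hypothetical transition probabilities for the 6-state chain on the
# right of Figure 11.2. Each entry maps a state to (target, probability)
# pairs; state 3 holds the only exit from the transient triangle {1,2,3}.
P = {
    1: [(2, 0.5), (3, 0.5)],
    2: [(1, 0.5), (3, 0.5)],
    3: [(1, 0.45), (2, 0.45), (4, 0.1)],  # small leak into state 4
    4: [(5, 1.0)],
    5: [(6, 1.0)],
    6: [(4, 1.0)],
}

def step(state):
    """Take one step of the chain from the given state."""
    targets, probs = zip(*P[state])
    return random.choices(targets, probs)[0]

random.seed(0)
state = 1
path = [state]
for _ in range(10_000):
    state = step(state)
    path.append(state)

# Once the chain hits state 4, it is trapped in {4, 5, 6} forever.
first_hit = path.index(4)
print(all(s in {4, 5, 6} for s in path[first_hit:]))  # True
```

Any choice of positive probabilities with the same arrow structure would give the same qualitative behavior: the particle lingers in the triangle for a random amount of time, then is absorbed into the recurrent cycle.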
In general, these concepts are defined as follows.
FIGURE 11.2. Left: a 4-state Markov chain with all states recurrent. Right: a 6-state Markov chain with states 1, 2, and 3 transient.

Definition 11.2.1 (Recurrent and transient states). State $i$ of a Markov chain is recurrent if starting from $i$, the probability is 1 that the chain will eventually return to $i$. Otherwise, the state is transient, which means that if the chain starts from $i$, there is a positive probability of never returning to $i$.

In fact, although the definition of a transient state only requires that there be a positive probability of never returning to the state, we can say something stronger: as long as there is a positive probability of leaving $i$ forever, the chain eventually will leave $i$ forever. Moreover, we can find the distribution of the number of returns to the state.

Proposition 11.2.2 (Number of returns to transient state is Geometric). Let $i$ be a transient state of a Markov chain, and suppose the probability of never returning to $i$, starting from $i$, is $p > 0$. Then, starting from $i$, the number of times that the chain returns to $i$ before leaving forever is distributed $\operatorname{Geom}(p)$.
The proof is by the story of the Geometric distribution: each time that the chain is at $i$, we have a Bernoulli trial which results in “failure” if the chain eventually returns to $i$ and “success” if the chain leaves $i$ forever; these trials are independent by the Markov property. The number of returns to state $i$ is the number of failures before the first success, which is distributed $\operatorname{Geom}(p)$. Since a Geometric random variable always takes finite values, this proposition tells us that after some finite number of visits, the chain will leave state $i$ forever.
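The Geometric story above can be checked numerically. Here is a minimal sketch using an assumed two-state chain (not one from the text): from state 0 the chain returns to 0 with probability $1 - p$ and leaves forever with probability $p = 0.3$, so the number of returns should be $\operatorname{Geom}(0.3)$ with mean $(1-p)/p$.

```python
import random

p = 0.3  # assumed probability of leaving state 0 forever on each visit
random.seed(42)

def returns_to_zero():
    """Number of returns to state 0 before the chain leaves it forever."""
    count = 0
    while random.random() > p:  # "failure": the chain comes back to 0
        count += 1
    return count                # first "success" ends the trials

samples = [returns_to_zero() for _ in range(100_000)]
mean = sum(samples) / len(samples)

# A Geom(p) count of failures has mean (1 - p) / p = 7/3 ≈ 2.33.
print(abs(mean - (1 - p) / p) < 0.05)  # True
```

The simulated mean matches the Geometric mean, consistent with the proposition's claim that the return count is $\operatorname{Geom}(p)$.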

If the number of states is not too large, one way to classify states as recurrent or transient is to draw a diagram of the Markov chain and use the same kind of reasoning that we used when analyzing the chains in Figure 11.2. A special case where we can immediately conclude all states are recurrent is when the chain is irreducible, meaning that it is possible to get from any state to any other state.
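The irreducibility criterion above amounts to a reachability check on the chain's diagram, ignoring the exact transition probabilities. The sketch below assumes the chain is given as a hypothetical adjacency structure `succ`, where `succ[i]` lists the states reachable from $i$ in one step; the two example structures are illustrative guesses at the arrows in Figure 11.2.

```python
from collections import deque

def is_irreducible(succ):
    """Return True if every state can reach every other state."""
    states = set(succ)
    for start in succ:
        # Breadth-first search for all states reachable from `start`.
        seen = {start}
        queue = deque([start])
        while queue:
            for nxt in succ[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        if seen != states:
            return False
    return True

# Left chain of Figure 11.2 (assumed arrows): all states communicate.
left = {1: [2], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
# Right chain (assumed arrows): from state 4 the chain never returns to 1, 2, 3.
right = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [5], 5: [6], 6: [4]}

print(is_irreducible(left), is_irreducible(right))  # True False
```

For the right-hand chain the check fails because the set $\{4, 5, 6\}$ is closed: once entered, the triangle $\{1, 2, 3\}$ is unreachable, which is exactly why states 1, 2, and 3 are transient.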
