
# Statistics Assignment Help | Recap

## Statistics Exam Help

### 4.10 Recap
The expectation of a discrete r.v. $X$ is
$$E(X)=\sum_{x} x P(X=x) \text {. }$$
An equivalent “ungrouped” way of calculating expectation is
$$E(X)=\sum_{s} X(s) P(\{s\})$$
where the sum is taken over pebbles in the sample space. Expectation is a single number summarizing the center of mass of a distribution. A single-number summary of the spread of a distribution is the variance, defined by
$$\operatorname{Var}(X)=E(X-E X)^{2}=E\left(X^{2}\right)-(E X)^{2} .$$
The square root of the variance is called the standard deviation.
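The grouped formula for $E(X)$ and the shortcut $\operatorname{Var}(X)=E(X^2)-(EX)^2$ can be checked directly on any finite PMF. Below is a minimal sketch (the function names `expectation` and `variance` and the fair-die example are illustrative, not from the text), using exact fractions to avoid floating-point noise:

```python
from fractions import Fraction

def expectation(pmf):
    """E(X) = sum over x of x * P(X = x)."""
    return sum(x * p for x, p in pmf.items())

def variance(pmf):
    """Var(X) = E(X^2) - (E X)^2."""
    ex = expectation(pmf)
    ex2 = sum(x**2 * p for x, p in pmf.items())
    return ex2 - ex**2

# PMF of a fair six-sided die: P(X = x) = 1/6 for x = 1, ..., 6
die = {x: Fraction(1, 6) for x in range(1, 7)}
print(expectation(die))  # 7/2
print(variance(die))     # 35/12
```

The standard deviation would then be the square root of `variance(die)`.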
Expectation is linear:
$$E(c X)=c E(X) \text { and } E(X+Y)=E(X)+E(Y),$$
regardless of whether $X$ and $Y$ are independent or not. Variance is not linear:
$$\operatorname{Var}(c X)=c^{2} \operatorname{Var}(X)$$
and
$$\operatorname{Var}(X+Y) \neq \operatorname{Var}(X)+\operatorname{Var}(Y)$$
in general (an important exception is when $X$ and $Y$ are independent). A very important strategy for calculating the expectation of a discrete r.v. $X$ is to express $X$ as a sum of indicator r.v.s, and then apply linearity and the fundamental bridge. This technique is especially powerful because the indicator r.v.s need not be independent; linearity holds even for dependent r.v.s. The strategy can be summarized in the following three steps.

1. Represent the r.v. $X$ as a sum of indicator r.v.s. To decide how to define the indicators, think about what $X$ is counting. For example, if $X$ is the number of local maxima, as in the Putnam problem, then we should create an indicator for each local maximum that could occur.
2. Use the fundamental bridge to calculate the expected value of each indicator. When applicable, symmetry may be very helpful at this stage.
3. By linearity of expectation, $E(X)$ can be obtained by adding up the expectations of the indicators.
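The three steps above can be sketched on the local-maxima example mentioned in step 1. For a uniformly random permutation of $1,\dots,n$, each endpoint is a local maximum with probability $1/2$ and each interior position with probability $1/3$, so by the fundamental bridge and linearity, $E(X) = 2\cdot\tfrac12 + (n-2)\cdot\tfrac13 = \tfrac{n+1}{3}$. A minimal brute-force check (the helper `num_local_maxima` is illustrative):

```python
from itertools import permutations
from math import factorial
from fractions import Fraction

def num_local_maxima(perm):
    """Count positions strictly greater than all their neighbors."""
    n = len(perm)
    count = 0
    for i in range(n):
        left_ok = (i == 0) or (perm[i] > perm[i - 1])
        right_ok = (i == n - 1) or (perm[i] > perm[i + 1])
        if left_ok and right_ok:
            count += 1
    return count

n = 5
# E(X) by averaging over all n! equally likely permutations of 1..n
total = sum(num_local_maxima(p) for p in permutations(range(1, n + 1)))
exact = Fraction(total, factorial(n))
# Linearity + fundamental bridge predicts (n + 1)/3
assert exact == Fraction(n + 1, 3)
```

Note that neighboring indicators here are dependent (adjacent positions cannot both be local maxima), yet linearity still gives the exact answer.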

Another tool for computing expectations is LOTUS, which says we can calculate the expectation of $g(X)$ using only the PMF of $X$, via
$$E(g(X))=\sum_{x} g(x) P(X=x) .$$
If $g$ is non-linear, it is a grave mistake to attempt to calculate $E(g(X))$ by swapping the $E$ and the $g$.
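LOTUS can be spelled out in a few lines; the sketch below (the helper name `lotus` is illustrative) also shows why swapping $E$ and $g$ fails for the nonlinear $g(x)=x^2$ on a fair die:

```python
from fractions import Fraction

# PMF of a fair six-sided die
die = {x: Fraction(1, 6) for x in range(1, 7)}

def lotus(g, pmf):
    """E(g(X)) = sum over x of g(x) * P(X = x)."""
    return sum(g(x) * p for x, p in pmf.items())

e_x = lotus(lambda x: x, die)        # E(X) = 7/2
e_x2 = lotus(lambda x: x**2, die)    # E(X^2) = 91/6
# Swapping E and g is wrong for nonlinear g: E(X^2) != (E X)^2
assert e_x2 != e_x**2                # 91/6 vs. 49/4
```

The gap `e_x2 - e_x**2` is exactly the variance, $35/12$, which is another way to see that $E(X^2) = (EX)^2$ only when $\operatorname{Var}(X)=0$.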

Four new discrete distributions to add to our list are the Geometric, Negative Binomial, Negative Hypergeometric, and Poisson distributions. A Geom$(p)$ r.v. is the number of failures before the first success in a sequence of independent Bernoulli trials with probability $p$ of success, and an NBin$(r, p)$ r.v. is the number of failures before the $r$th success. The Negative Hypergeometric is similar to the Negative Binomial except that, in terms of drawing balls from an urn, the Negative Hypergeometric samples without replacement and the Negative Binomial samples with replacement. (We also introduced the First Success distribution, which is just a Geometric shifted so that the success is included.)
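The Geometric story is easy to simulate directly from its definition. A minimal sketch (function name and parameters are illustrative), checking that the sample mean of Geom$(p)$ is close to $q/p$:

```python
import random

def sample_geom(p, rng):
    """Geom(p): count failures before the first success."""
    failures = 0
    while rng.random() >= p:  # each trial succeeds with probability p
        failures += 1
    return failures

rng = random.Random(0)
p = 0.2
n = 200_000
mean = sum(sample_geom(p, rng) for _ in range(n)) / n
print(mean)  # close to q/p = 0.8/0.2 = 4
```

A First Success r.v. would be `sample_geom(p, rng) + 1`, with mean $1/p$, since the success itself is included in the count.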
A Poisson r.v. is often used as an approximation for the number of successes that occur when there are many independent or weakly dependent trials, where each trial has a small probability of success. In the Binomial story, all the trials have the same probability $p$ of success, but in the Poisson approximation, different trials can have different (but small) probabilities $p_{j}$ of success.
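In the equal-probability case the approximation says Bin$(n,p)\approx$ Pois$(np)$ when $n$ is large and $p$ is small. A minimal numerical check (helper names are illustrative), comparing the two PMFs term by term:

```python
from math import comb, exp, factorial

def binom_pmf(n, p, k):
    """P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    """P(X = k) for X ~ Pois(lam)."""
    return exp(-lam) * lam**k / factorial(k)

# Many trials, small success probability: Bin(n, p) is close to Pois(np)
n, p = 1000, 0.003
lam = n * p
for k in range(6):
    print(k, binom_pmf(n, p, k), poisson_pmf(lam, k))
```

With $n=1000$ and $p=0.003$, the two PMFs agree to about three decimal places at every $k$; shrinking $p$ (with $np$ held fixed) tightens the agreement further.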
The Poisson, Binomial, and Hypergeometric distributions are mutually connected via the operations of conditioning and taking limits, as illustrated in Figure $4.8$. In the rest of this book, we’ll continue to introduce new named distributions and add them to this family tree, until everything is connected!
