
# Math Assignment Help | Statistical Computing Homework Help | Resampling methods

my-assignmentexpert™ offers Statistical Computing homework help: submit your requirements for free and pay only once you are satisfied, with a full refund for grades below 80%. Our professional team of master's and doctoral writers delivers every order reliably and on time, with a 100% originality guarantee. my-assignmentexpert™ provides the highest-quality Statistical Computing homework help, serving North America, Europe, Australia and other regions. On pricing, we take students' financial circumstances into account and offer the most reasonable rates while maintaining quality. Because Statistics assignments come in many varieties and most have no fixed word count, the price for Statistical Computing homework help is not fixed; a quote is usually given after an expert has reviewed the requirements. The difficulty of the assignment and its deadline also strongly affect the price.

my-assignmentexpert™ supports you throughout your studies abroad and has built a solid reputation for Statistical Computing homework help, guaranteeing reliable, high-quality, and original Statistical Computing writing services. Our experts have extensive experience in Statistical Computing, so assignments in related areas pose no difficulty.

• Stochastic calculus
• Stochastic analysis
• Stochastic control theory
• Microeconomics
• Quantitative Economics
• Macroeconomics
• Economic Statistics
• Economic Theory
• Econometrics

## Math Assignment Help | Statistical Computing Homework Help | Bootstrap estimates

The basis of all resampling methods is to replace the distribution given by a model with the 'empirical distribution' of the given data, as described in the following definition.

Definition 5.7 Given a sequence $x=\left(x_{1}, x_{2}, \ldots, x_{M}\right)$, the distribution of $X^{*}=x_{K}$, where the index $K$ is random and uniformly distributed on the set $\{1,2, \ldots, M\}$, is called the empirical distribution of the $x_{i}$. In this chapter we denote the empirical distribution of $x$ by $P_{x}^{*}$.

In the definition, the vector $x$ is assumed to be fixed. The randomness in $X^{*}$ stems from the choice of a random element, with index $K \sim \mathcal{U}\{1,2, \ldots, M\}$, of this fixed sequence. Computational methods which are based on the idea of approximating an unknown 'true' distribution by an empirical distribution are called bootstrap methods. Assume that $X^{*}$ is distributed according to the empirical distribution $P_{x}^{*}$. Then we have $$P\left(X^{*}=a\right)=\frac{1}{M} \sum_{i=1}^{M} \mathbb{1}_{\{a\}}\left(x_{i}\right),$$
that is, under the empirical distribution the probability that $X^{*}$ equals $a$ is given by the relative frequency of occurrences of $a$ in the given data. Similarly, we have the relations $$P\left(X^{*} \in A\right)=\frac{1}{M} \sum_{i=1}^{M} \mathbb{1}_{A}\left(x_{i}\right)$$
and
$$\begin{aligned} \mathbb{E}\left(f\left(X^{*}\right)\right) &=\sum_{a \in\left\{x_{1}, \ldots, x_{M}\right\}} f(a) P\left(X^{*}=a\right) \\ &=\sum_{a \in\left\{x_{1}, \ldots, x_{M}\right\}} f(a) \frac{1}{M} \sum_{i=1}^{M} \mathbb{1}_{\{a\}}\left(x_{i}\right)=\frac{1}{M} \sum_{i=1}^{M} f\left(x_{i}\right). \end{aligned}$$
Some care is needed when verifying this relation: the sums where the index $a$ runs over the set $\left\{x_{1}, \ldots, x_{M}\right\}$ have only one term for each element of the set, even if the corresponding value occurs repeatedly in the given data.
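The identities above can be checked numerically. The following sketch (plain Python, with hypothetical helper names) draws from the empirical distribution $P_{x}^{*}$ by picking a uniform index $K$, and verifies that the Monte Carlo average of $f(X^{*})$ approaches the exact value $\frac{1}{M}\sum_{i} f(x_{i})$:

```python
import random

def empirical_sample(x):
    """Draw one sample X* from the empirical distribution P*_x:
    pick an index K uniformly from {1, ..., M} and return x_K."""
    return x[random.randrange(len(x))]

x = [1.0, 2.0, 2.0, 5.0]
M = len(x)

# Exact expectation under P*_x: E(f(X*)) = (1/M) * sum_i f(x_i).
f = lambda v: v * v
exact = sum(f(xi) for xi in x) / M

# Monte Carlo check: average f over many draws from P*_x.
random.seed(0)
N = 100_000
approx = sum(f(empirical_sample(x)) for _ in range(N)) / N
print(exact, approx)  # the two values should be close
```

Note that the repeated value 2.0 simply appears twice among the $x_i$ and therefore carries probability $2/M$ under $P_{x}^{*}$, matching the relative-frequency formula.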

## Math Assignment Help | Statistical Computing Homework Help | Applications to statistical inference

The main application of the bootstrap method in statistical inference is to quantify the accuracy of parameter estimates.

In this section, we will consider parameters as functions of the corresponding distribution: if $\theta$ is a parameter, for example the mean or the variance, then we write $\theta(P)$ for its value under the distribution $P$. In statistics, there are many ways of constructing estimators for a parameter $\theta$. One general method for constructing parameter estimators, the plug-in principle, is given in the following definition.

Definition 5.12 Consider an estimator $\hat{\theta}_{n}=\hat{\theta}_{n}\left(X_{1}, \ldots, X_{n}\right)$ for a parameter $\theta(P)$. The estimator $\hat{\theta}_{n}$ satisfies the plug-in principle, if it satisfies the relation $$\hat{\theta}_{n}\left(x_{1}, \ldots, x_{n}\right)=\theta\left(P_{x}^{*}\right),$$ for all $x=\left(x_{1}, \ldots, x_{n}\right)$, where $P_{x}^{*}$ is the empirical distribution of $x$. In this case, $\hat{\theta}_{n}$ is called the plug-in estimator for $\theta$.
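As an illustration of the definition, the sketch below (plain Python, hypothetical helper names) takes $\theta$ to be the variance functional and evaluates it at the empirical distribution $P_{x}^{*}$; the result coincides with the familiar formula $\frac{1}{n}\sum_{i}(x_{i}-\bar{x})^{2}$, which is therefore the plug-in estimator for the variance:

```python
def theta_var(dist):
    """Variance of a discrete distribution given as (value, prob) pairs."""
    mean = sum(p * v for v, p in dist)
    return sum(p * (v - mean) ** 2 for v, p in dist)

def empirical_distribution(x):
    """P*_x as (value, prob) pairs: each observation carries weight 1/M,
    so repeated values accumulate probability."""
    M = len(x)
    probs = {}
    for xi in x:
        probs[xi] = probs.get(xi, 0.0) + 1.0 / M
    return list(probs.items())

x = [2.0, 4.0, 4.0, 6.0]
plug_in = theta_var(empirical_distribution(x))

# Direct evaluation of (1/n) * sum_i (x_i - mean)^2 for comparison.
n = len(x)
mean = sum(x) / n
direct = sum((xi - mean) ** 2 for xi in x) / n
print(plug_in, direct)  # the two values agree
```

Note that the plug-in variance estimator divides by $n$ rather than $n-1$, so it differs from the usual unbiased sample variance.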

Since the idea of bootstrap methods is to approximate the distribution $P$ by the empirical distribution $P_{x}^{*}$, plug-in estimators are particularly useful in conjunction with bootstrap methods.
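A minimal sketch of this combination (plain Python, hypothetical function names) estimates the standard error of an estimator by repeatedly resampling from $P_{x}^{*}$, applying the estimator to each resample, and taking the standard deviation of the results:

```python
import random

def bootstrap_se(x, theta_hat, B=2000, seed=0):
    """Bootstrap standard error: draw B samples of size n from the
    empirical distribution P*_x, apply theta_hat to each resample,
    and return the standard deviation of the resulting estimates."""
    rng = random.Random(seed)
    n = len(x)
    estimates = []
    for _ in range(B):
        resample = [x[rng.randrange(n)] for _ in range(n)]
        estimates.append(theta_hat(resample))
    mean = sum(estimates) / B
    return (sum((t - mean) ** 2 for t in estimates) / (B - 1)) ** 0.5

# The plug-in estimator for the mean is the sample mean.
sample_mean = lambda xs: sum(xs) / len(xs)

x = [3.1, 0.2, 1.7, 4.4, 2.8, 0.9, 3.5, 2.2]
se = bootstrap_se(x, sample_mean)
print(se)  # roughly the sample standard deviation divided by sqrt(n)
```

The number of bootstrap replicates `B` trades computation time against Monte Carlo noise in the standard-error estimate; a few thousand replicates is a common choice for standard errors.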
