
Statistical Computing: Variance reduction methods


Importance sampling

The importance sampling method is based on the following argument. Assume that $X$ is a random variable with density $\varphi$, that $f$ is a real-valued function and that $\psi$ is another probability density with $\psi(x)>0$ whenever $f(x) \varphi(x)>0$. Then we have
$$\mathbb{E}(f(X))=\int f(x) \varphi(x) d x=\int f(x) \frac{\varphi(x)}{\psi(x)} \psi(x) d x$$
where we define the fraction to be 0 whenever the denominator (and thus the numerator) equals 0. Since $\psi$ is a probability density, the integral on the right can be written as an expectation again: if $Y$ has density $\psi$, we have
$$\mathbb{E}(f(X))=\mathbb{E}\left(f(Y) \frac{\varphi(Y)}{\psi(Y)}\right)$$
Now we can apply a basic Monte Carlo estimate to the expectation on the right-hand side to obtain the following estimate.
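As an illustrative sketch of this idea, the snippet below estimates a rare-event probability $\mathbb{E}(f(X)) = P(X > 3)$ for $X \sim N(0,1)$ by sampling from a shifted proposal $\psi = N(3,1)$ and reweighting by $\varphi(Y)/\psi(Y)$. The choice of target, proposal, and indicator function here is my own, made for illustration, and is not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the N(mu, sigma^2) distribution."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Target: E(f(X)) with X ~ N(0,1) and f(x) = 1{x > 3}, a rare event
# that basic Monte Carlo would almost never hit.
# Proposal psi = N(3, 1) puts its mass where f(x) * phi(x) > 0.
N = 100_000
y = rng.normal(loc=3.0, scale=1.0, size=N)       # samples from psi
weights = normal_pdf(y) / normal_pdf(y, mu=3.0)  # phi(Y) / psi(Y)
estimate = np.mean((y > 3) * weights)            # Monte Carlo average of f(Y) * phi(Y)/psi(Y)

# For reference, the true value is P(X > 3) ≈ 0.00135.
```

Because $\psi$ concentrates its samples in the region where $f \varphi$ is non-zero, the importance sampling estimate achieves a far smaller standard error than basic Monte Carlo with the same sample size would.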

Antithetic variables

The antithetic variables method (also called antithetic variates method) reduces the variance and thus the error of Monte Carlo estimates by using pairwise dependent samples $X_{j}$ instead of the independent samples used in basic Monte Carlo estimation.
For illustration, we first consider the case $N=2$: assume that $X$ and $X^{\prime}$ are identically distributed random variables which are not independent. As in the independent case we have
$$\mathbb{E}\left(\frac{f(X)+f\left(X^{\prime}\right)}{2}\right)=\frac{\mathbb{E}(f(X))+\mathbb{E}\left(f\left(X^{\prime}\right)\right)}{2}=\mathbb{E}(f(X))$$
but for the variance we get
$$\begin{aligned} \operatorname{Var}\left(\frac{f(X)+f\left(X^{\prime}\right)}{2}\right) &=\frac{\operatorname{Var}(f(X))+2 \operatorname{Cov}\left(f(X), f\left(X^{\prime}\right)\right)+\operatorname{Var}\left(f\left(X^{\prime}\right)\right)}{4} \\ &=\frac{1}{2} \operatorname{Var}(f(X))+\frac{1}{2} \operatorname{Cov}\left(f(X), f\left(X^{\prime}\right)\right) \end{aligned}$$
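The variance calculation above shows that a negative covariance between $f(X)$ and $f(X^{\prime})$ reduces the variance below the independent case. A minimal sketch, with my own choice of $f(x) = e^x$ and antithetic pairs $(U, 1-U)$ for $U \sim \mathrm{Uniform}(0,1)$ (a standard construction, not one specified in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.exp  # E(f(U)) = e - 1 for U ~ Uniform(0, 1)

N = 100_000
u = rng.random(N // 2)
# U and 1-U are identically distributed but negatively dependent;
# since f is monotone, Cov(f(U), f(1-U)) < 0.
pair_means = 0.5 * (f(u) + f(1.0 - u))
est_anti = pair_means.mean()

# Plain Monte Carlo with the same total number of function evaluations:
u2 = rng.random(N)
est_plain = f(u2).mean()
```

Both estimators target $e - 1 \approx 1.71828$, but the variance of each antithetic pair average is much smaller than that of an average of two independent samples, so `est_anti` is the more accurate of the two at equal cost.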

Control variates

The control variates method is another method to reduce the variance of Monte Carlo estimates for expectations of the form $\mathbb{E}(f(X))$. The method is based on the following idea: if we can find a ‘simpler’ function $g \approx f$ such that $\mathbb{E}(g(X))$ can be computed analytically, then we can use our knowledge of $\mathbb{E}(g(X))$ to assist with the estimation of $\mathbb{E}(f(X))$. In the control variates method, this is done by rewriting the expectation of interest as
$$\mathbb{E}(f(X))=\mathbb{E}(f(X)-g(X))+\mathbb{E}(g(X))$$
Since we know $\mathbb{E}(g(X))$, the Monte Carlo estimation can now be restricted to the term $\mathbb{E}(f(X)-g(X))$ and since $f(X) \approx g(X)$, the random quantity $f(X)-g(X)$ has smaller variance and thus smaller Monte Carlo error than $f(X)$ has on its own. In this context, the random variable $g(X)$ is called a control variate for $f(X)$.
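As an illustrative sketch of this decomposition (the choice of $f(x) = e^x$, the control $g(x) = 1 + x$, and the uniform distribution of $X$ are my own, picked so that $\mathbb{E}(g(X))$ is known exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.exp
g = lambda x: 1.0 + x  # first-order Taylor approximation of exp, so g ≈ f near 0
Eg = 1.5               # E(g(X)) = 1 + E(X) = 3/2 for X ~ Uniform(0, 1), known analytically

N = 100_000
x = rng.random(N)
# Monte Carlo only for the residual E(f(X) - g(X)); add the known E(g(X)).
est_cv = np.mean(f(x) - g(x)) + Eg

# Basic Monte Carlo on the same samples, for comparison:
est_plain = np.mean(f(x))
```

Both estimators are unbiased for $\mathbb{E}(e^X) = e - 1$, but because $f(X) - g(X)$ varies much less than $f(X)$ itself, the control variate estimate has a noticeably smaller Monte Carlo error for the same sample size.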