
# Statistics Assignment | Metropolis-Hastings

## 12.4 R

### Metropolis-Hastings
Here’s how to implement the Metropolis-Hastings algorithm for Example 12.1.8, the Normal-Normal model. First, we choose our observed value of $Y$ and decide on values for the constants $\sigma, \mu$, and $\tau$ :
```r
y <- 3
sigma <- 1
mu <- 0
tau <- 2  # the tau assignment is omitted in the original excerpt; 2 is an example choice
```
We also need to choose the standard deviation of the proposals for step 1 of the algorithm, as explained in Example 12.1.8; for this problem, we let $d=1$. We set the number of iterations to run, and we allocate a vector theta of length $10^{4}$ which we will fill with our simulated draws:
```r
d <- 1
niter <- 10^4
theta <- rep(0, niter)
```
Now for the main loop. We initialize $\theta$ to the observed value $y$, then run the algorithm described in Example 12.1.8:
```r
theta[1] <- y
for (i in 2:niter) {
  theta.p <- theta[i-1] + rnorm(1, 0, d)
  r <- dnorm(y, theta.p, sigma) * dnorm(theta.p, mu, tau) /
       (dnorm(y, theta[i-1], sigma) * dnorm(theta[i-1], mu, tau))
  flip <- rbinom(1, 1, min(r, 1))
  theta[i] <- if (flip == 1) theta.p else theta[i-1]
}
```
Let’s step through each line inside the loop. The proposed value of $\theta$ is theta.p, which equals the previous value of $\theta$ plus a Normal random variable with mean 0 and standard deviation d (recall that rnorm takes the standard deviation and not the variance as input). The ratio $r$ is
$$\frac{f_{\theta \mid Y}\left(x^{\prime} \mid y\right)}{f_{\theta \mid Y}(x \mid y)}=\frac{e^{-\frac{1}{2 \sigma^{2}}\left(y-x^{\prime}\right)^{2}} e^{-\frac{1}{2 \tau^{2}}\left(x^{\prime}-\mu\right)^{2}}}{e^{-\frac{1}{2 \sigma^{2}}(y-x)^{2}} e^{-\frac{1}{2 \tau^{2}}(x-\mu)^{2}}},$$
where theta.p is playing the role of $x^{\prime}$ and theta[i-1] is playing the role of $x$. The coin flip to determine whether to accept or reject the proposal is flip, which is a coin flip with probability $\min(r, 1)$ of Heads (encoding Heads as 1 and Tails as 0). Finally, we set theta[i] equal to the proposed value if the coin flip lands Heads, and keep it at the previous value otherwise.
The vector theta now contains all of our simulation draws. We typically discard some of the initial draws to give the chain some time to approach the stationary distribution. The following line of code discards the first half of the draws:
```r
theta <- theta[-(1:(niter/2))]
```
To see what the remaining draws look like, we can create a histogram using hist(theta). We can also compute summary statistics such as mean(theta) and var(theta), which give us the sample mean and sample variance.
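Because the Normal-Normal model is conjugate, the exact posterior is available in closed form, which gives a useful sanity check on the simulation. The snippet below is a sketch based on standard Normal-Normal conjugacy (the posterior precision is the sum of the data and prior precisions, and the posterior mean is the precision-weighted average of $y$ and $\mu$); it is not part of the original code:

```r
# Exact posterior of theta given Y = y in the Normal-Normal model
post.prec <- 1/sigma^2 + 1/tau^2
post.mean <- (y/sigma^2 + mu/tau^2) / post.prec
post.var  <- 1/post.prec

# Compare the exact values with the simulation draws (after burn-in)
c(post.mean, mean(theta))
c(post.var, var(theta))
```

If the chain has mixed well, the two entries of each pair should agree to within Monte Carlo error.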
### Gibbs
Now let’s implement Gibbs sampling for Example 12.2.6, the chicken-egg story with unknown hatching probability and invisible unhatched eggs. The first step is to decide on our observed value of $X$, as well as the constants $\lambda, a, b$ :
```r
x <- 7
lambda <- 10
a <- 1
b <- 1
```
Next we decide how many iterations to run, and we allocate space for our results, creating two vectors $\mathrm{p}$ and $\mathrm{N}$ of length $10^{4}$ which we will fill with our simulated draws:
```r
niter <- 10^4
p <- rep(0, niter)
N <- rep(0, niter)
```
Finally, we’re ready to run the Gibbs sampler. We initialize $p$ and $N$ to the values $0.5$ and $2x$, respectively, and then we run the algorithm as explained in Example 12.2.6:
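The excerpt cuts off before the loop itself, so the following is a sketch of the Gibbs updates, assuming the conditional distributions described in Example 12.2.6: given $N$ and $X = x$, $p$ is Beta$(x+a,\, N-x+b)$, and given $p$, the number of unhatched eggs $N - x$ is Poisson$(\lambda(1-p))$:

```r
p[1] <- 0.5
N[1] <- 2*x
for (i in 2:niter) {
  # Update p from its conditional distribution: Beta(x + a, N - x + b)
  p[i] <- rbeta(1, x + a, N[i-1] - x + b)
  # Update N: the observed x hatched eggs plus a Poisson(lambda*(1-p)) draw
  # for the unhatched eggs
  N[i] <- x + rpois(1, lambda * (1 - p[i]))
}
```

As with the Metropolis-Hastings example, the first half of the draws can then be discarded as burn-in, e.g. p <- p[-(1:(niter/2))], before looking at hist(p) or mean(p).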
