# Lab 3: Markov chain Monte Carlo

Nial Friel

1 November 2019

## Aim of this lab

In this lab we will focus on implementing MCMC for an example which we have explored in class. The objective will be to understand the role that the proposal variance plays in determining the efficiency of the Markov chain.

## A brief re-cap of MCMC

Recall that MCMC is a framework for simulating from a target distribution π(θ) by constructing a Markov chain whose stationary distribution is π(θ). Then, if we run the chain for long enough, simulated values from the chain can be treated as a (dependent) sample from the target distribution and used as a basis for summarising important features of π.

Under certain regularity conditions, the Markov chain sample path mimics a random sample from π. Given realisations {θ_t : t = 0, 1, ...} from such a chain, typical asymptotic results include the distributional convergence of the realisations, ie

$$\theta_t \xrightarrow{d} \pi(\theta), \quad \text{as } t \to \infty,$$

together with the consistency of the ergodic average, for any scalar functional φ:

$$\frac{1}{T} \sum_{t=1}^{T} \phi(\theta_t) \longrightarrow \mathbb{E}_{\pi}\left[\phi(\theta)\right], \quad \text{as } T \to \infty.$$

So this leads to the question of how one can generate such a Markov chain, in general. As we have seen in lectures, the Metropolis-Hastings algorithm provides a general means to do this.

## Metropolis-Hastings algorithm

Recall that at iteration t of the Metropolis-Hastings algorithm one simulates a proposed state, φ, from some proposal distribution which depends on the current state, θ_t. We denote this proposal density by q(θ_t, φ). This proposed state is then accepted with probability α(θ_t, φ) (as described below), in which case the next state of the Markov chain becomes θ_{t+1} = φ; otherwise, θ_{t+1} = θ_t.

The algorithm can be described as follows:

Step 1. Given the current state, θ_t, generate a proposed new state, φ, from the distribution q(θ_t, φ).

Step 2. Calculate

$$\alpha(\theta_t, \phi) = \min\left\{1, \frac{\pi(\phi)\, q(\phi, \theta_t)}{\pi(\theta_t)\, q(\theta_t, \phi)}\right\}.$$

Step 3. With probability α(θ_t, φ), set θ_{t+1} = φ, else set θ_{t+1} = θ_t.

Step 4. Return to step 1.

Here we write a generic function to implement the Metropolis-Hastings algorithm.

```r
metropolis.hastings <- function(f,          # the target distribution
                                g,          # the proposal distribution
                                random.g,   # a sample from the proposal distribution
                                x0,         # initial value for chain, in R it is x[1]
                                sigma,      # proposal st. dev.
                                chain.size = 1e5) {  # chain size
  x <- c(x0, rep(NA, chain.size - 1))
  for (i in 2:chain.size) {
    y <- random.g(x[i-1], sigma)  # propose a new state
    # acceptance probability: min(1, f(y) q(y, x) / (f(x) q(x, y)))
    alpha <- min(1, f(y) * g(y, x[i-1], sigma) /
                    (f(x[i-1]) * g(x[i-1], y, sigma)))
    if (runif(1) < alpha)
      x[i] <- y
    else
      x[i] <- x[i-1]
  }
  x
}
```
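One remark before applying this function: the examples below all use a normal random-walk proposal, which is symmetric, ie q(θ_t, φ) = q(φ, θ_t), so the proposal densities cancel in α and the acceptance ratio reduces to min{1, π(φ)/π(θ_t)}. The following is a minimal sketch of this special case; the name `metropolis.rw` is an illustrative choice and this function is not part of the lab code, which uses the generic version above throughout.

```r
# Random-walk Metropolis sketch: with a symmetric proposal,
# q(x, y) = q(y, x), the q terms cancel and the acceptance
# probability reduces to min(1, f(y) / f(x)).
metropolis.rw <- function(f, x0, sigma, chain.size = 1e5) {
  x <- c(x0, rep(NA, chain.size - 1))
  for (i in 2:chain.size) {
    y <- rnorm(1, x[i-1], sigma)              # symmetric normal proposal
    if (runif(1) < min(1, f(y) / f(x[i-1])))  # Metropolis acceptance step
      x[i] <- y
    else
      x[i] <- x[i-1]
  }
  x
}
```

For normal proposals this produces chains with the same distribution as the generic function above, at a slightly lower computational cost.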
## A first (simple) example

Here we will re-visit a simple example which we explored in lectures, where we wish to sample from a N(0, 1) distribution using another normal distribution as a proposal distribution. Recall that the objective was to explore the effect of the variance of the proposal distribution on the efficiency of the resulting Markov chain.

To implement this using our generic code we do the following, where we first use a poor choice of proposal variance (σ² = 20²):

```r
sigma <- 20
f        <- function(x) dnorm(x, 0, 1)                # target density: N(0, 1)
random.g <- function(x, sigma) rnorm(1, x, sigma)     # draw one proposal from N(x, sigma^2)
g        <- function(x, y, sigma) dnorm(y, x, sigma)  # proposal density q(x, y)
x0         <- 0    # initial value for the chain
chain.size <- 1e4  # chain length
x <- metropolis.hastings(f, g, random.g, x0, sigma, chain.size)
```

We can then explore the output of this chain as follows:

```r
par(mfrow = c(1, 2))
plot(x, xlab = "x", ylab = "f(x)",
     main = paste("Trace plot of x[t], sigma=", sigma),
     col = "red", type = "l")
xmin <- min(x[(0.2 * chain.size) : chain.size])
xmax <- max(x[(0.2 * chain.size) : chain.size])
xs.lower <- min(-4, xmin)
xs.upper <- max(4, xmax)
xs <- seq(xs.lower, xs.upper, 0.1)
hist(x[(0.2 * chain.size) : chain.size], 50, xlim = c(xs.lower, xs.upper),
     col = "blue", xlab = "x", main = "Metropolis-Hastings", freq = FALSE)
lines(xs, f(xs), col = "red", lwd = 1)
```

[Figure: trace plot of x[t] (sigma = 20), left; histogram of the chain after burn-in with the N(0, 1) density overlaid in red, right.]

We see that the chain appears not to have mixed very well (as evidenced from the trace plot). Also, the histogram does not yield a good summary of the target density (plotted in red on the right-hand plot).

Now we use a much smaller proposal variance (σ² = 0.2²) and explore how this has affected the Markov chain.

```r
sigma <- 0.2
f        <- function(x) dnorm(x, 0, 1)
random.g <- function(x, sigma) rnorm(1, x, sigma)
g        <- function(x, y, sigma) dnorm(y, x, sigma)
x0         <- 0
chain.size <- 1e4
x <- metropolis.hastings(f, g, random.g, x0, sigma, chain.size)
```

We can then explore the output of this chain as follows:

```r
par(mfrow = c(1, 2))
plot(x, xlab = "x", ylab = "f(x)",
     main = paste("Trace plot of x[t], sigma=", sigma),
     col = "red", type = "l")
xmin <- min(x[(0.2 * chain.size) : chain.size])
xmax <- max(x[(0.2 * chain.size) : chain.size])
xs.lower <- min(-4, xmin)
xs.upper <- max(4, xmax)
xs <- seq(xs.lower, xs.upper, 0.1)
hist(x[(0.2 * chain.size) : chain.size], 50, xlim = c(xs.lower, xs.upper),
     col = "blue", xlab = "x", main = "Metropolis-Hastings", freq = FALSE)
lines(xs, f(xs), col = "red", lwd = 1)
```

[Figure: trace plot of x[t] (sigma = 0.2), left; histogram of the chain after burn-in with the N(0, 1) density overlaid in red, right.]

Again, the trace plot indicates that the chain has not mixed very well, although the histogram of the Markov chain trace is in much better agreement with the target density.

Finally, we consider the case where the proposal variance is set to σ² = 3².

```r
sigma <- 3
f        <- function(x) dnorm(x, 0, 1)
random.g <- function(x, sigma) rnorm(1, x, sigma)
g        <- function(x, y, sigma) dnorm(y, x, sigma)
x0         <- 0
chain.size <- 1e4
x <- metropolis.hastings(f, g, random.g, x0, sigma, chain.size)
```

We can then explore the output of this chain as follows:

```r
par(mfrow = c(1, 2))
plot(x, xlab = "x", ylab = "f(x)",
     main = paste("Trace plot of x[t], sigma=", sigma),
     col = "red", type = "l")
xmin <- min(x[(0.2 * chain.size) : chain.size])
xmax <- max(x[(0.2 * chain.size) : chain.size])
xs.lower <- min(-4, xmin)
xs.upper <- max(4, xmax)
xs <- seq(xs.lower, xs.upper, 0.1)
hist(x[(0.2 * chain.size) : chain.size], 50, xlim = c(xs.lower, xs.upper),
     col = "blue", xlab = "x", main = "Metropolis-Hastings", freq = FALSE)
lines(xs, f(xs), col = "red", lwd = 1)
```

[Figure: trace plot of x[t] (sigma = 3), left; histogram of the chain after burn-in with the N(0, 1) density overlaid in red, right.]

Now the output, in terms of the trace plot and histogram, suggests that the algorithm is mixing well and that the output from the Metropolis-Hastings algorithm provides a good summary of the target distribution.
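The impression given by the trace plots can also be checked numerically: a chain that mixes poorly has strongly autocorrelated successive values. The snippet below is a minimal sketch of this check, not part of the lab code proper; it assumes the chain `x` and `chain.size` from the σ = 3 run above and uses R's built-in `acf` function.

```r
# Autocorrelation of the chain, discarding the first 20% as burn-in
# (the same convention used for the histograms above); a slowly
# decaying ACF indicates poor mixing.
x.post <- x[(0.2 * chain.size) : chain.size]
acf(x.post, lag.max = 50, main = "ACF of x[t], sigma = 3")
```

Comparing the ACF across the three choices of sigma provides a numerical counterpart to the trace-plot comparison above.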
## Exercises

1. Consider again the problem explored in the lab.

   - Modify the code to monitor the rate at which the proposed values are accepted, for each of the three scenarios above (σ = 20, 0.2 or 3). Explain how this analysis can help to decide how to tune the proposal variance. [15 marks]
   - For each scenario, estimate the probability that X > 2, where X ∼ N(0, 1), using the output from the Metropolis-Hastings algorithm. Provide an approximate standard error in each case. (Note that you can compare your estimates to the true value 0.02275.) [10 marks]

2. Consider sampling from a Beta(2.5, 4.5) distribution using an independence sampler with a uniform proposal distribution. Provide R code to do this. Produce output from your code to illustrate the performance of your algorithm. [25 marks]

Hand-in date: November 13th at 12pm