# Markov Chain / Monte Carlo Simulation

1. Jan 23, 2008

### tronter

Let $$\bold{X}$$ be a discrete random variable whose set of possible values is $$\bold{x}_j, \ j \geq 1$$. Let the probability mass function of $$\bold{X}$$ be given by $$P \{\bold{X} = \bold{x}_j \}, \ j \geq 1$$, and suppose we are interested in calculating $$\theta = E[h(\bold{X})] = \sum_{j=1}^{\infty} h(\bold{x}_j) P \{\bold{X} = \bold{x}_j \}$$.

In some cases, why are Markov chains better for estimating $$\theta$$ than plain Monte Carlo simulation? And if we wanted to calculate $$E[\bold{X}]$$ and the mass function were known explicitly, there would be no need for simulation at all, right?
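One common answer is that MCMC helps when the mass function is known only up to a normalizing constant, so you cannot sample from it directly. Here is a minimal random-walk Metropolis sketch under that assumption; the target weights $$w(j)$$, the state space $$\{0, \ldots, 20\}$$, and the function names are all hypothetical, chosen just for illustration:

```python
import random

# Hypothetical target: P{X = j} proportional to w(j) on {0, ..., 20};
# the normalizing constant is never needed by the algorithm.
def w(j):
    return 0.5 ** abs(j - 10)  # unnormalized weights (assumption)

def metropolis_sample(n_steps, seed=0):
    """Random-walk Metropolis chain on the integers 0..20."""
    rng = random.Random(seed)
    x = 10  # start at the mode (burn-in is ignored in this sketch)
    samples = []
    for _ in range(n_steps):
        y = x + rng.choice([-1, 1])  # propose a neighboring state
        # Accept with probability min(1, w(y)/w(x)); treat states
        # outside the range as having weight zero (always rejected).
        if 0 <= y <= 20 and rng.random() < min(1.0, w(y) / w(x)):
            x = y
        samples.append(x)
    return samples

# Estimate theta = E[h(X)] with h(x) = x by averaging along the chain.
h = lambda x: x
samples = metropolis_sample(200_000)
theta_hat = sum(h(x) for x in samples) / len(samples)
```

Because the acceptance ratio only involves $$w(y)/w(x)$$, the unknown normalizing constant cancels, which is exactly the situation where plain Monte Carlo (which needs the actual pmf to draw from) is not available.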

And is it true that $$\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} h(\bold{X}_i) = \theta$$, where $$\bold{X}_1, \bold{X}_2, \ldots$$ are the simulated values?
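For i.i.d. draws that is just the strong law of large numbers, and it can be checked numerically. A small sketch, using a made-up three-point pmf and $$h(x) = x^2$$ purely as an example:

```python
import random

# Hypothetical discrete pmf: P{X = j} for j in {1, 2, 3} (assumption).
pmf = {1: 0.2, 2: 0.5, 3: 0.3}
h = lambda x: x * x  # the function whose expectation we want

# Exact theta = E[h(X)] = sum_j h(x_j) P{X = x_j}, for comparison.
theta = sum(h(x) * p for x, p in pmf.items())

# Monte Carlo: draw n i.i.d. samples and take the sample mean of h.
rng = random.Random(1)
values = list(pmf)
weights = [pmf[v] for v in values]
n = 100_000
draws = rng.choices(values, weights=weights, k=n)
theta_hat = sum(h(x) for x in draws) / n  # approaches theta by the LLN
```

For a Markov chain that is irreducible with stationary distribution $$P\{\bold{X} = \bold{x}_j\}$$, the same sample-mean convergence holds by the ergodic theorem, even though the $$\bold{X}_i$$ are not independent.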

