## Markov Chain / Monte Carlo Simulation

Let $$\bold{X}$$ be a discrete random variable whose set of possible values is $$\bold{x}_j, \ j \geq 1$$. Let the probability mass function of $$\bold{X}$$ be given by $$P \{\bold{X} = \bold{x}_j \}, \ j \geq 1$$, and suppose we are interested in calculating $$\theta = E[h(\bold{X})] = \sum_{j=1}^{\infty} h(\bold{x}_j) P \{\bold{X} = \bold{x}_j \}$$.
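For a distribution with only finitely many values, $$\theta$$ can be computed directly from the sum above. A minimal sketch, where the pmf and the function $$h$$ are illustrative assumptions rather than anything from the problem:

```python
# Direct computation of theta = E[h(X)] for a small discrete distribution.
# The values, probabilities, and h below are made-up examples.

xs = [1, 2, 3]          # possible values x_j
ps = [0.2, 0.5, 0.3]    # P{X = x_j}, summing to 1

def h(x):
    return x ** 2       # example choice of h

theta = sum(h(x) * p for x, p in zip(xs, ps))
print(theta)  # 0.2*1 + 0.5*4 + 0.3*9 = 4.9
```

With the full pmf in hand like this, no simulation is needed; simulation becomes attractive when the sum is intractable or the pmf is known only up to a normalizing constant.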

In some cases, why are Markov chains better for estimating $$\theta$$ than ordinary Monte Carlo simulation? And if we wanted to calculate $$E[\bold{X}]$$, there would be no need to use simulation at all, right?

And is it the case that $$\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} h(\bold{X}_i) = \theta$$, where $$\bold{X}_1, \bold{X}_2, \ldots$$ are the simulated values?
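That limit is just the strong law of large numbers applied to the sample mean of $$h(\bold{X}_i)$$. A small sketch of the idea, reusing the same made-up pmf and $$h$$ as an assumption:

```python
# Monte Carlo estimate of theta = E[h(X)]: the sample mean of h over
# i.i.d. draws from the pmf approaches theta as n grows (law of large numbers).
import random

random.seed(0)          # fixed seed so the run is reproducible

xs = [1, 2, 3]          # possible values x_j (illustrative)
ps = [0.2, 0.5, 0.3]    # P{X = x_j} (illustrative)

def h(x):
    return x ** 2

theta = sum(h(x) * p for x, p in zip(xs, ps))  # exact value for comparison

n = 100_000
samples = random.choices(xs, weights=ps, k=n)  # i.i.d. draws from the pmf
estimate = sum(h(x) for x in samples) / n      # (1/n) * sum of h(X_i)

print(estimate, theta)  # estimate should be close to theta for large n
```

Here we can sample from the pmf directly, so plain Monte Carlo works; the Markov chain (MCMC) approach is typically used when direct sampling from the target distribution is not feasible.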