# Sampling from normalized and un-normalized posterior

1. Feb 15, 2015

### michaelmas

Help me understand something.

I get that the posterior $p(\theta|y) \propto p(y|\theta)p(\theta)$ should be normalized by $\frac{1}{p(y)}$ for the probability to sum to 1, but what about the mean and variance?

Am I right in understanding that sampling from the un-normalized posterior gives the same mean and variance as sampling from the normalized posterior?

Can I prove it mathematically?

Can't find it and can't figure it out.

2. Feb 15, 2015

### Stephen Tashi

How do you define "sampling" from a distribution that doesn't integrate to 1.0 ?

3. Feb 15, 2015

### michaelmas

I don't know.

What exactly is the point of MCMC, then, if the posterior isn't normalized?

Isn't that what the Bayesians are doing?

4. Feb 15, 2015

### Stephen Tashi

You should explain where in the Markov Chain Monte Carlo method you think sampling is done from a non-normalized distribution. As far as I know, sampling from a non-normalized distribution is never defined or used.

5. Feb 15, 2015

### michaelmas

Ok, let me rephrase the question.

If $p(\theta|y)$ is the distribution of interest, then what good is $p(y|\theta)p(\theta)$ if its mean and variance aren't the same?

6. Feb 15, 2015

### Stephen Tashi

You are speaking as if you've read that one distribution is of some "good" in answering questions about the other. But until you say exactly what that "good" is, it isn't possible to understand what you are asking.

In general, if $W$ is a random variable and $X = k\,W$ for some constant $k$, then you can figure out the mean and variance of $X$ if you know those of $W$: $E[X] = k\,E[W]$ and $\operatorname{Var}(X) = k^2 \operatorname{Var}(W)$. If we treat $y$ as a fixed event, then $k = \frac{1}{p(y)}$.
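The connection to MCMC can be checked numerically. In random-walk Metropolis the target density enters only through the ratio $\frac{q(\theta')}{q(\theta)}$, so any constant factor such as $\frac{1}{p(y)}$ cancels, and sampling with the un-normalized product $p(y|\theta)p(\theta)$ gives the same mean and variance as the normalized posterior. A minimal sketch, where the Normal target, seed, and step size are illustrative assumptions rather than anything from the thread:

```python
import math
import random

random.seed(0)

def unnormalized_target(x):
    # Proportional to a Normal(mean=2, sd=1) density; the normalizing
    # constant 1/sqrt(2*pi) is deliberately left out.
    return math.exp(-0.5 * (x - 2.0) ** 2)

def metropolis(target, n_samples, x0=0.0, step=1.0):
    # Random-walk Metropolis: the target appears only in the ratio
    # target(proposal) / target(x), so constant factors cancel.
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        if random.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(unnormalized_target, 50_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # close to 2 and 1, matching the normalized Normal(2, 1)
```

Note that the chain never needs $p(y)$: the acceptance step only compares density values at two points, which is exactly why Bayesians can run MCMC on $p(y|\theta)p(\theta)$ directly.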