Sampling from normalized and un-normalized posteriors

  • Thread starter: michaelmas
  • Tags: Sampling
AI Thread Summary
The discussion concerns the relationship between normalized and un-normalized posteriors in Bayesian statistics. Although the posterior must be normalized to be a proper probability distribution, sampling based on the un-normalized posterior yields the same mean and variance as sampling from the normalized version, because the normalizing constant cancels in the relevant expectations. Participants ask how Markov Chain Monte Carlo (MCMC) methods can sample without knowing the normalizing constant, and what role un-normalized distributions play in Bayesian inference.
michaelmas
Help me understand something.

I get that the posterior ##p(\theta|y) \propto p(y|\theta)p(\theta)## should be normalized by ##\frac{1}{p(y)}## for the probability to integrate to 1, but what about the mean and variance?

Am I right in understanding that sampling from the un-normalized posterior gives the same mean and variance as sampling from the normalized posterior?

Can I prove it mathematically?

I can't find a proof anywhere and can't work it out myself.
 
michaelmas said:
Am I right in understanding that sampling from the un-normalized posterior gives the same mean and variance as sampling from the normalized posterior?


How do you define "sampling" from a distribution that doesn't integrate to 1.0?
 
I don't know.

What exactly is the point of MCMC, then, if the distribution isn't normalized?

Isn't that what Bayesians are doing?
 
You should explain where in the Markov Chain Monte Carlo method you think sampling is done from a non-normalized distribution. As far as I know, that technique is never defined or used.
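
To be precise about what MCMC does use: in the Metropolis algorithm the target density enters only through a ratio, so the normalizing constant ##\frac{1}{p(y)}## cancels, and the chain samples from the properly normalized posterior even though only ##p(y|\theta)p(\theta)## is ever evaluated. Here is a minimal sketch in Python; the Gaussian target, the names, and the parameter values are illustrative stand-ins, not anything from this thread or a particular library.

Code:
import numpy as np

# Illustrative stand-in for p(y|theta) * p(theta): an un-normalized
# Gaussian with mean 2 and standard deviation 0.5. Multiplying it by
# any positive constant leaves the chain's behavior unchanged.
def unnormalized_posterior(theta):
    return np.exp(-0.5 * ((theta - 2.0) / 0.5) ** 2)

def metropolis(target, n_samples=50000, step=0.5, theta0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    theta = theta0
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal()
        # The acceptance test uses only a RATIO of target values, so
        # the unknown normalizing constant 1/p(y) cancels out here.
        if rng.random() < target(proposal) / target(theta):
            theta = proposal
        samples[i] = theta
    return samples

samples = metropolis(unnormalized_posterior)
print(samples.mean(), samples.std())  # approximately 2.0 and 0.5

This cancellation is why Bayesians can run MCMC without ever computing ##p(y)##.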
 
OK, let me rephrase the question.

If ##p(\theta|y)## is the distribution of interest, then what good is ##p(y|\theta)p(\theta)## if the mean and variance aren't the same?
 
michaelmas said:
If ##p(\theta|y)## is the distribution of interest, then what good is ##p(y|\theta)p(\theta)## if the mean and variance aren't the same?

You are speaking as if you've read that one distribution is of some "good" for answering questions about the other, but until you say exactly what that "good" is, it isn't possible to understand what you are asking.

In general, multiplying a density by a constant does not change the distribution it describes. If ##f(\theta) = p(y|\theta)\,p(\theta)## is the un-normalized posterior, the unique probability density proportional to it is ##k\,f(\theta)## with ##k = \frac{1}{p(y)}## (treating ##y## as a fixed event), so "sampling from ##f##" can only mean sampling from that normalized density, and the mean and variance do not depend on ##k##.
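
To spell out the proof asked for above (a short derivation in the thread's own notation), write ##p(y) = \int p(y|\theta)\,p(\theta)\,d\theta##; then

$$\operatorname{E}[\theta \mid y] = \int \theta\, p(\theta|y)\, d\theta = \frac{\int \theta\, p(y|\theta)\, p(\theta)\, d\theta}{\int p(y|\theta)\, p(\theta)\, d\theta}.$$

The same ratio holds with ##\theta^2## in place of ##\theta##, and hence for the variance. Any positive constant multiplying ##p(y|\theta)\,p(\theta)## appears in both numerator and denominator and cancels, which is why the normalized and un-normalized posteriors give the same mean and variance.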
 