Sampling from normalized and un-normalized posteriors

  • Context: Graduate
  • Thread starter: michaelmas
  • Tags: Sampling

Discussion Overview

The discussion revolves around the properties of sampling from normalized versus un-normalized posterior distributions in Bayesian statistics. Participants explore the implications of normalization on mean and variance, as well as the role of Markov Chain Monte Carlo (MCMC) methods in this context.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant questions whether sampling from the un-normalized posterior yields the same mean and variance as from the normalized posterior, expressing uncertainty about the mathematical proof of this claim.
  • Another participant challenges the concept of "sampling" from a distribution that does not integrate to 1.0, seeking clarification on the definition of sampling in this context.
  • A participant expresses confusion regarding the purpose of MCMC methods if they involve non-normalized distributions, questioning the practices of Bayesian statisticians.
  • There is a request for clarification on where in the MCMC method sampling from a non-normalized distribution occurs, with a claim that such a technique is not typically defined or used.
  • One participant reiterates the importance of understanding the relationship between the distributions, questioning the utility of the un-normalized posterior if it does not provide the same mean and variance as the normalized version.
  • Another participant emphasizes the need for clarity on what is meant by the "good" of one distribution in relation to the other, suggesting that understanding the transformation of random variables could be relevant to the discussion.

Areas of Agreement / Disagreement

Participants express differing views on the implications of sampling from normalized versus un-normalized posteriors, with no consensus reached on the utility or properties of these distributions.

Contextual Notes

Participants highlight the potential limitations in understanding the relationship between normalized and un-normalized distributions, particularly regarding the definitions and implications of sampling methods like MCMC.

michaelmas
Help me understand something.

I get that the posterior ##p(\theta|y) \propto p(y|\theta)p(\theta)## should be normalized by ##\frac{1}{p(y)}## for the distribution to integrate to 1, but what about the mean and variance?

Am I right in understanding that sampling from the un-normalized posterior gives the same mean and variance as sampling from the normalized posterior?

Can I prove this mathematically?

I can't find a proof and can't figure it out myself.
 
michaelmas said:
Am I right in understanding that sampling from the un-normalized posterior gives the same mean and variance as sampling from the normalized posterior?


How do you define "sampling" from a distribution that doesn't integrate to 1.0 ?
 
I don't know.

What exactly is the point of using MCMC without normalizing?

Isn't that what the Bayesians are doing?
 
You should explain where in the Markov Chain Monte Carlo method you think sampling is done from a non-normalized distribution. As far as I know, no such technique is defined or used.
 
Ok, let me rephrase the question.

If ##p(\theta|y)## is the distribution of interest, then what good is ##p(y|\theta)p(\theta)## if its mean and variance aren't the same?
 
michaelmas said:
If ##p(\theta|y)## is the distribution of interest, then what good is ##p(y|\theta)p(\theta)## if its mean and variance aren't the same?

You are speaking as if you've read that one distribution is of some "good" in answering questions about the other, but until you say exactly what the "good" is, it isn't possible to understand what you are asking.

In general, if [itex]W[/itex] is a random variable and [itex]X = k\ W[/itex] for some constant [itex]k[/itex], then you can figure out the mean and variance of [itex]X[/itex] if you know the mean and variance of [itex]W[/itex]. If we treat [itex]y[/itex] as a fixed event, then [itex]k = \frac{1}{p(y)}[/itex].
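As a side note on the MCMC question raised earlier in the thread: in the Metropolis-Hastings algorithm, the target density only ever appears as a ratio, so any constant factor (such as the unknown [itex]\frac{1}{p(y)}[/itex]) cancels in the acceptance probability. The resulting samples therefore follow the normalized posterior, and their mean and variance match it. A minimal illustrative sketch (not from this thread; the target here is an arbitrary un-normalized Gaussian, with an arbitrary constant 42 to emphasize the cancellation):

```python
import random
import math

def unnormalized_target(theta):
    # Un-normalized N(2, 1) density times an arbitrary constant:
    # the constant cancels in the acceptance ratio below.
    return 42.0 * math.exp(-0.5 * (theta - 2.0) ** 2)

def metropolis_hastings(density, n_samples, theta0=0.0, step=1.0, seed=0):
    rng = random.Random(seed)
    theta = theta0
    samples = []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, step)  # symmetric random-walk proposal
        # Acceptance ratio: only the ratio of densities matters, so any
        # normalizing constant in `density` drops out here.
        if rng.random() < density(proposal) / density(theta):
            theta = proposal
        samples.append(theta)
    return samples

samples = metropolis_hastings(unnormalized_target, 50_000)
burned = samples[5_000:]  # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
print(mean, var)  # close to 2.0 and 1.0, the moments of the normalized N(2, 1)
```

The sample mean and variance come out close to the moments of the normalized density, which is the sense in which "sampling from the un-normalized posterior" gives the same answers.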
 
