Bayes rule using higher order prior probability

AltCtrlDel
Hi

If I am trying to make an inference using Bayes' rule based on a prior probability that is itself a random variable, is it sufficient to use the expected value of that probability, or are there other details to consider?

Thanks in advance.
 
It depends on what analysis you want to perform on the resulting posterior probability distribution.

If you are only interested in its expectation (or some other linear function of the posterior), you can use the expected value of the prior.

If you also want to calculate some nonlinear measure of the posterior (e.g. its variance), you need to compute the posterior for each possible prior, calculate the measure for each of those posteriors, and then take the expectation of the resulting values.
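To make the distinction concrete, here is a small sketch (my own illustration, not from the thread) for a binary hypothesis H whose prior probability p takes one of a few discrete values under a hyperprior. The specific numbers (the candidate priors, their weights, and the likelihoods) are made up for the example.

```python
priors = [0.2, 0.5, 0.8]      # possible values of the prior P(H)
weights = [0.25, 0.5, 0.25]   # hyperprior over those values
like_H, like_notH = 0.9, 0.3  # likelihoods P(D|H) and P(D|not H)

def posterior(p):
    """Ordinary Bayes' rule for a fixed prior p."""
    num = p * like_H
    return num / (num + (1 - p) * like_notH)

# (1) Plug the expected prior into Bayes' rule:
p_mean = sum(w * p for w, p in zip(weights, priors))
post_mean_prior = posterior(p_mean)

# (2) Marginalise over the hyperprior instead. Numerator and denominator
# of Bayes' rule are each linear in p, so averaging them separately over
# the hyperprior gives the same answer as plugging in E[p]:
num = sum(w * p * like_H for w, p in zip(weights, priors))
den = sum(w * (p * like_H + (1 - p) * like_notH)
          for w, p in zip(weights, priors))
post_marginal = num / den

print(post_mean_prior, post_marginal)  # both ~= 0.75

# But a nonlinear summary, such as how much the posterior itself varies
# with the unknown prior, requires the per-prior posteriors:
posts = [posterior(p) for p in priors]
print(min(posts), max(posts))  # roughly 0.43 to 0.92
```

The agreement in step (2) is what justifies using the expected prior for the posterior itself; the spread of `posts` is the extra information you lose if you collapse the prior to its mean.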

Best wishes,

-Emanuel
 