MHB Prior probability distributions

AI Thread Summary
The discussion concerns estimating a parameter k lying in a finite interval (a, b) from a series of measurements X with known standard deviations. The original poster asks how the Jeffreys prior and Bernardo's reference prior are defined for this problem, and whether these priors guarantee that the posterior distribution lands close to the Maximum Likelihood Estimate. The thread highlights how the choice of prior, relative to the shape of the likelihood, influences the posterior in Bayesian parameter estimation.
lotharson
Hi folks.

I have a question.

Let k be a parameter which must be estimated. It lies in the interval (a, b), where a and b are finite real numbers.

Let us further assume we have a series of measurements X with known standard deviations.
X is a complicated function of k.

What are the Jeffreys prior and Bernardo's prior in this case?

Many thanks for your answer :-)
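
A sketch of the standard definitions, under the assumption (one reading of the post, not stated explicitly) that the measurements are independent Gaussians $X_i \sim N(f_i(k), \sigma_i^2)$ with known $\sigma_i$ and means $f_i(k)$ depending on k: the Jeffreys prior is proportional to the square root of the Fisher information,
$$\pi_J(k) \propto \sqrt{I(k)}, \qquad I(k) = \sum_i \frac{1}{\sigma_i^2} \left( \frac{\partial f_i(k)}{\partial k} \right)^2,$$
restricted to $(a, b)$ and normalized there; since the interval is bounded, this prior is proper whenever $\sqrt{I(k)}$ is integrable on $(a, b)$. For a one-dimensional parameter satisfying the usual regularity conditions, Bernardo's reference prior coincides with the Jeffreys prior, so under these assumptions the two questions have the same answer.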
 
Is there any guarantee that these priors follow the shape of the likelihood closely enough that the posterior distribution gives a result close to the Maximum Likelihood Estimate?
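
One relevant fact: under standard regularity conditions, the Bernstein-von Mises theorem says the posterior concentrates around the MLE as the number of measurements grows, for any prior that is continuous and strictly positive at the true value of k; the Jeffreys/reference prior carries no stronger guarantee in that respect, and for finite samples its $\sqrt{I(k)}$ factor can pull the posterior mode away from the MLE. Below is a minimal numerical sketch of that comparison; the cubic mean function, interval, noise level, and sample size are all illustrative assumptions, not taken from the thread.

```python
import numpy as np

# Toy version of the setup: n independent measurements
# X_i ~ N(f(k), sigma^2) with a nonlinear f, and k restricted to (a, b).
# All concrete choices below are illustrative assumptions.
rng = np.random.default_rng(0)
a, b = 0.5, 2.0
k_true, sigma, n = 1.3, 0.8, 25
f = lambda k: k**3            # stand-in for the "complicated function of k"
df = lambda k: 3 * k**2       # its derivative, needed for the Fisher information

x = rng.normal(f(k_true), sigma, size=n)

# Grid over the open interval (a, b); endpoints excluded.
ks = np.linspace(a, b, 2001)[1:-1]
loglik = -0.5 * np.sum((x[:, None] - f(ks)) ** 2, axis=0) / sigma**2

# Jeffreys prior: I(k) = n * f'(k)^2 / sigma^2, so on (a, b)
# log pi_J(k) = log|f'(k)| + const.
log_post = loglik + np.log(np.abs(df(ks)))

print("MLE            :", ks[np.argmax(loglik)])
print("posterior mode :", ks[np.argmax(log_post)])
# As n grows, the two estimates converge (Bernstein-von Mises).
```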
 