Bayesian method vs. maximum likelihood

Summary
The discussion centers on the comparison between Bayesian methods and Maximum Likelihood Estimation (MLE) in statistical inference. MLE is part of a broader framework for point and interval estimation, while Bayesian methods address probabilistic situations where parameters vary. The choice between Maximum a Posteriori (MAP) and MLE often depends on whether one adopts a Bayesian or frequentist perspective. Each method has its own terminologies and frameworks, which can influence their effectiveness in different contexts. Understanding the specific application and underlying assumptions is crucial for selecting the appropriate method.
Mark J.
Hi,
Wondering whether one method has any advantages over the other, and whether there are specific cases where one should be used vs. the other?

regards
 
Hey Mark J.

I'm not exactly sure what you mean specifically. MLE is part of a massive framework used in point and interval estimation for statistical inference, while the Bayesian approach is a framework for generalizing probabilistic situations where the parameters of distributions are not constant (which leads to all kinds of other results, both probabilistically and statistically).

Do you have a specific example of Bayesian Probability or Inference that you are referring to?

For example if you are talking about inference, are you talking about estimating parameters with a specific posterior and prior? Specific posterior and general prior? General posteriors and priors?
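As one concrete instance of the "specific posterior and prior" case, here is a minimal sketch (my own illustration, not from the thread) using the Beta-Bernoulli conjugate pair, where the posterior is available in closed form. The hyperparameters and data below are hypothetical.

```python
# Minimal sketch (assumption: Beta-Bernoulli conjugate pair as the
# "specific posterior and prior" case). With a Beta(a, b) prior on the
# success probability p and k successes in n Bernoulli trials, the
# posterior is Beta(a + k, b + n - k) in closed form.
a, b = 2.0, 2.0          # hypothetical prior hyperparameters
n, k = 10, 7             # hypothetical data: 7 successes in 10 trials

post_a, post_b = a + k, b + n - k        # conjugate update
posterior_mean = post_a / (post_a + post_b)
print(posterior_mean)    # (2 + 7) / (2 + 7 + 2 + 3) = 9/14
```

With a general (non-conjugate) prior the posterior usually has no closed form, which is where numerical methods such as MCMC come in.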
 
Mark J. said:
Wondering whether one method has any advantages over the other, and whether there are specific cases where one should be used vs. the other?
The priority of MAP vs ML depends largely on whether one is already of a Bayesian or frequentist mindset. Maximum a posteriori and maximum likelihood each have their own lingo, their own massive underlying frameworks, and their own heuristics for overcoming weaknesses in the method. I've seen a few papers that compare MAP vs ML. However, if you look at the publications of the authors of such a paper before reading it, you can form a pretty solid prior regarding which technique will come out on top.
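To make the MAP vs ML contrast concrete, here is a minimal sketch (my own illustration, assuming the same Beta-Bernoulli setup as a worked case) comparing the two point estimates. All parameter values are hypothetical.

```python
# Minimal sketch (assumption: Bernoulli data with a Beta(a, b) prior).
# For k successes in n trials:
#   MLE:               p_hat = k / n
#   MAP (Beta prior):  p_hat = (k + a - 1) / (n + a + b - 2)
# With a flat Beta(1, 1) prior the two coincide; an informative prior
# pulls the MAP estimate toward the prior mode.
def mle(k, n):
    return k / n

def map_estimate(k, n, a, b):
    return (k + a - 1) / (n + a + b - 2)

k, n = 7, 10
print(mle(k, n))                  # 0.7
print(map_estimate(k, n, 1, 1))   # flat prior: same as MLE, 0.7
print(map_estimate(k, n, 5, 5))   # informative prior: (7+4)/(10+8) = 11/18
```

The flat-prior case shows why the two camps often agree numerically while disagreeing philosophically: the MAP estimate under a uniform prior is exactly the ML estimate.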
 