Knee deep in Bayesian statistics and

  • Context: Graduate
  • Thread starter: QuantumDefect
  • Tags: Bayesian Statistics

Discussion Overview

The discussion revolves around the challenges of understanding Bayesian statistics compared to Frequentist approaches, particularly in the context of bias testing and the interpretation of statistical errors. Participants explore theoretical concepts, practical implications, and philosophical considerations related to Bayesian methods.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant expresses difficulty in maintaining a Bayesian perspective and questions how bias is tested within the Bayesian framework, contrasting it with Frequentist methods.
  • Another participant suggests that Bayesianism represents a paradigm shift and recommends a specific textbook for deeper understanding.
  • A third participant introduces the concept of exchangeability as a substitute for identical experiments in Bayesian statistics.
  • There is a query about the existence of statistical error in the Bayesian paradigm, with a request for clarification on the format of presenting estimators with systematic and statistical errors.
  • One participant elaborates on Bayesian probability as conditional probability, emphasizing the importance of understanding conditional probability for grasping Bayesian concepts and discussing the decomposition of random variables in models.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the existence of statistical error in the Bayesian paradigm or on the interpretation of estimators. Multiple competing views are presented regarding the foundational concepts of Bayesian statistics.

Contextual Notes

Limitations include potential misunderstandings of Bayesian concepts, the need for clarity on the definitions of statistical and systematic errors, and the dependence on specific interpretations of probability frameworks.

QuantumDefect
Hello all,

To begin: I am having a lot of trouble keeping my brain in the Bayesian view and not letting it revert to a Frequentist way of thinking, and I am also struggling with unbinned MLE estimation. If I say something wrong, please let me know. My questions are as follows:

1) In the Frequentist view, the mean of the distribution of fitted estimators should equal the true value. If it doesn't match the true value used to generate your pseudo-experiments, you have a bias and you need to correct for it when you analyze the real experimental data. In the Bayesian view, it doesn't make sense for there to be multiple data sets. How do you test for bias in the Bayesian paradigm?
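The pseudo-experiment bias test described above can be sketched as follows. This is a minimal, hypothetical example (the decay model, sample sizes, and seed are all my own assumptions): an exponential decay rate fitted by unbinned MLE, an estimator known to be biased upward at small sample sizes.

```python
import random
import statistics

# Hypothetical Frequentist bias check via pseudo-experiments.
# True parameter: the rate of an exponential decay. The unbinned MLE
# of the rate is 1 / sample_mean, which is biased upward for small n
# (E[rate_hat] = n/(n-1) * true_rate).
random.seed(42)
TRUE_RATE = 2.0
N_EVENTS = 10       # small sample, so the bias is visible
N_PSEUDO = 20000    # number of pseudo-experiments

estimates = []
for _ in range(N_PSEUDO):
    data = [random.expovariate(TRUE_RATE) for _ in range(N_EVENTS)]
    estimates.append(1.0 / statistics.mean(data))  # unbinned MLE

bias = statistics.mean(estimates) - TRUE_RATE
print(f"mean of fitted estimators: {statistics.mean(estimates):.3f}")
print(f"estimated bias: {bias:+.3f}")
```

The mean of the fitted estimators lands near n/(n-1) times the true rate rather than the true rate itself, which is exactly the kind of offset you would correct for before analyzing real data.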

2) The Bayesian paradigm comes with a built-in way of dealing with systematic uncertainty: you can let your unobservable parameter take all the values within its uncertainty via Monte Carlo and see the resultant spread in your fitted estimator. However, papers usually quote the estimator in a "value ± statistical error ± systematic error" format. Does the statistical error even exist in the Bayesian paradigm? The data is what it is, and identical experiments don't exist, so there is no way of determining a spread in your fitted estimator over many data sets that vary only due to Poisson statistics.

I'm sure I'll have more questions, but those are it for now. I would greatly appreciate some help in understanding this. Thanks again!
 
Bayesianism may be a paradigm change in the sense of Kuhn's The Structure of Scientific Revolutions. Don't hurry through or into it. I found Edwin Thompson Jaynes' Probability Theory: The Logic of Science (2003) a most valuable textbook. Maximum entropy (max-ent) is a good way of thinking.
 
QuantumDefect said:
However when reading papers, it usually gives the estimator in a value plus systematic error plus statistical error format. Does the statistical error even exist in the Bayesian paradigm?

I don't understand the format you are talking about. Can you give an example?
 
Bayesian probability is conditional probability, but with a focus on treating parameters as random variables and deriving results from that, even for general distributions, as in the Markov Chain Monte Carlo (MCMC) methods used in complicated settings where you have tonnes of random variables with all kinds of assumptions.
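As one concrete instance of the MCMC methods mentioned above, here is a minimal Metropolis sampler. The setup is entirely illustrative (my own assumptions, not from the thread): a flat prior on a coin's heads probability p, with 7 heads observed in 10 flips, so the analytic posterior mean is (K+1)/(N+2) ≈ 0.667.

```python
import math
import random

# Minimal Metropolis sampler: one simple instance of MCMC.
# Assumed setup: flat prior on p in (0, 1), binomial likelihood
# with K heads observed in N flips.
random.seed(0)
K, N = 7, 10

def log_post(p):
    """Log posterior up to an additive constant (flat prior)."""
    if not 0.0 < p < 1.0:
        return -math.inf
    return K * math.log(p) + (N - K) * math.log(1.0 - p)

samples, p = [], 0.5
for _ in range(50_000):
    prop = p + random.gauss(0.0, 0.1)       # symmetric random-walk proposal
    delta = log_post(prop) - log_post(p)
    if delta >= 0 or random.random() < math.exp(delta):
        p = prop                            # accept the proposal
    samples.append(p)

burned = samples[5_000:]                    # discard burn-in
post_mean = sum(burned) / len(burned)
print(f"posterior mean of p: {post_mean:.3f}")  # analytic value: (K+1)/(N+2)
```

The point is that the parameter p itself is the random variable being sampled, conditional on the fixed, observed data — which is the shift in viewpoint the paragraph above describes.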

If you understand conditional probability well enough (and I mean understand it in a deep way), then the Bayesian material is straightforward.

The interpretations of probabilities and the philosophy of Bayesian statistics are one thing entirely, but regardless of that, the probability framework that includes conditional probability will help you understand the derivations, the results, and basically what is actually going on.

In terms of statistical error, you need to understand how the random variable, or the models involving that random variable, are decomposed. For example, a Normal random variable can be decomposed as X = mu + sigma*Z, where Z ~ N(0,1). It's a pretty trivial decomposition, but it is one.
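That decomposition can be checked numerically with a quick sketch (the particular mu and sigma values are arbitrary choices of mine):

```python
import random
import statistics

# Check the decomposition X = mu + sigma * Z with Z ~ N(0, 1):
# samples built this way should have mean ~ mu and stdev ~ sigma.
random.seed(1)
MU, SIGMA = 5.0, 2.0
xs = [MU + SIGMA * random.gauss(0.0, 1.0) for _ in range(100_000)]
print(round(statistics.mean(xs), 2), round(statistics.stdev(xs), 2))
```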

The residual models will have random variables with a particular distribution, which may itself have parameters that are random variables. Whatever the case, you look at the definition and decide based on that.
 