Knee deep in Bayesian statistics and need help

  1. Jan 5, 2015 #1
    Hello all,

    To begin, I am having a lot of trouble keeping my brain in the Bayesian view and not letting it revert to a frequentist way of thinking, not to mention trouble with unbinned MLE estimation. If I say something wrong, please let me know. My questions are as follows:

    1) In the frequentist view, an estimator is unbiased if the mean of its distribution over repeated experiments equals the true value. If the mean of your fitted estimators over many pseudo-experiments doesn't match the true value you generated them with, you have a bias and need to correct for it when you analyze the real experimental data (a toy version of this check is sketched below, after question 2). In the Bayesian view it doesn't make sense to talk about multiple data sets, so how do you test for bias in the Bayesian paradigm?

    2) The Bayesian paradigm comes built in with a spiffy way of dealing with systematic uncertainty: you can let your unobservable (nuisance) parameter take all the values within its uncertainty via Monte Carlo and look at the resulting spread in your fitted estimator (second sketch below). However, papers usually quote the estimator as a value ± systematic error ± statistical error. Does the statistical error even exist in the Bayesian paradigm? The data are what they are and identical repeated experiments don't exist, so there is no way to determine a spread of your fitted estimator over many data sets that differ only through Poisson fluctuations.
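
    To make question 1 concrete, here is a minimal sketch of the pseudo-experiment bias check I have in mind. Everything is hypothetical: an exponential-decay model with a made-up true lifetime tau_true, chosen because its unbinned MLE happens to be just the sample mean.

    Code (Python):
    import numpy as np

    rng = np.random.default_rng(0)
    tau_true = 2.0      # made-up "true" lifetime used to generate pseudo-experiments
    n_events = 500      # events per pseudo-experiment
    n_pseudo = 2000     # number of pseudo-experiments

    # For an exponential, the unbinned MLE of the lifetime is just the sample mean,
    # so each "fit" is a one-liner here.
    estimates = np.array([rng.exponential(tau_true, n_events).mean()
                          for _ in range(n_pseudo)])

    bias = estimates.mean() - tau_true
    err_on_bias = estimates.std(ddof=1) / np.sqrt(n_pseudo)
    print(f"mean of fitted estimators = {estimates.mean():.4f}")
    print(f"bias = {bias:.4f} +/- {err_on_bias:.4f}")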
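
    And for question 2, a minimal sketch of the "let the nuisance parameter float over its uncertainty" idea, again with made-up numbers: a counting experiment with an efficiency known only as 0.80 ± 0.05.

    Code (Python):
    import numpy as np

    rng = np.random.default_rng(1)
    n_observed = 120          # made-up observed event count
    n_draws = 100_000

    # Draw the efficiency from its quoted uncertainty (the systematic).
    eps = np.clip(rng.normal(0.80, 0.05, n_draws), 1e-3, 1.0)

    # Efficiency-corrected rate estimate for each draw.
    rate = n_observed / eps

    lo, med, hi = np.percentile(rate, [16, 50, 84])
    print(f"corrected rate = {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f})  "
          "<- spread from the efficiency uncertainty alone")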


    I'm sure I'll have more questions, but that's it for now. I would greatly appreciate any help in understanding this. Thanks again!
     
  3. Jan 5, 2015 #2

    Doug Huffman

    Gold Member

    Bayesianism may be a paradigm change in the sense of Kuhn's The Structure of Scientific Revolutions. Don't hurry through or into it. I found Edwin Thompson Jaynes' Probability Theory: The Logic of Science (2003) a most valuable textbook. Maximum entropy (max-ent) is a good way of thinking.
     
  4. Jan 5, 2015 #3

    atyy

    Science Advisor

  5. Jan 10, 2015 #4

    Stephen Tashi

    Science Advisor

    I don't understand the format you are talking about. Can you give an example?
     
  6. Jan 13, 2015 #5

    chiro

    Science Advisor

    Bayesian probability is conditional probability, but with a focus on treating parameters as random variables and deriving lots of results from that, even for general distributions, such as the Markov chain Monte Carlo (MCMC) methods used in many complicated situations where you have tons of random variables with all kinds of assumptions.
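
    To make the MCMC remark concrete, here is a bare-bones Metropolis sampler for a hypothetical toy problem (flat prior on a Poisson mean lam, one observed count; all numbers made up):

    Code (Python):
    import numpy as np

    rng = np.random.default_rng(2)
    n_obs = 7                              # made-up single observed count

    def log_post(lam):
        # log of (Poisson likelihood x flat prior on lam > 0), up to a constant
        return -np.inf if lam <= 0 else n_obs * np.log(lam) - lam

    lam, chain = 5.0, []
    for _ in range(20_000):
        prop = lam + rng.normal(0.0, 1.0)  # symmetric random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
            lam = prop                     # accept, otherwise keep the old value
        chain.append(lam)

    chain = np.array(chain[5_000:])        # drop burn-in
    print(f"posterior mean ~ {chain.mean():.2f}, 68% interval ~ "
          f"[{np.percentile(chain, 16):.2f}, {np.percentile(chain, 84):.2f}]")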

    If you understand conditional probability well enough (and I mean understand it in a deep way), then the Bayesian stuff is straightforward.

    The interpretation of probabilities and the philosophy of Bayesian statistics are one thing entirely, but regardless of that, the probability framework that includes conditional probability will help you understand the derivations, the results, and basically what the hell is actually going on.

    In terms of statistical error, you need to understand how the random variable, or the models involving said random variables, are decomposed. For example, you can take a normally distributed variable and decompose it as X = mu + sigma*Z, where Z ~ N(0,1). It's a pretty useless decomposition, but it is one.
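
    A quick numerical check of that decomposition, with made-up values mu = 3 and sigma = 2:

    Code (Python):
    import numpy as np

    rng = np.random.default_rng(3)
    mu, sigma = 3.0, 2.0
    x = mu + sigma * rng.standard_normal(1_000_000)   # X = mu + sigma*Z, Z ~ N(0,1)
    print(x.mean(), x.std())                          # ~3.0 and ~2.0, as expected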

    The residual models will have random variables with a particular distribution, which may have its parameters as random variables as well. Whatever the case is, you look at the definition and decide based on that.
     