Bayesian statistics learning materials

In summary, the conversation is about learning materials for Bayesian statistics, specifically at the introductory level. Recommended sources include the book "Bayesian Data Analysis" by Gelman et al. and online resources such as Christian Robert's blog and Jim Albert's teaching blog. Bayesian methods are widely used across many fields, but the scientific community still shows a preference for frequentist methods; both approaches have their pros and cons. The conversation also touches on the history and controversy surrounding Bayesian theory.
  • #1
Cinitiator
Does anyone here know of any good learning materials (preferably online) for Bayesian statistics, Bayesian hypothesis testing, Bayesian inference, and so on?
 
  • #2
We can't say without knowing your background.
 
  • #3
Number Nine said:
We can't say without knowing your background.

I'm looking for introductory level learning materials. I only know the most essential basics, or even less than that.
 
  • #4
If you are learning Bayesian inference, I'd strongly suggest becoming comfortable with standard non-Bayesian inference first, rather than trying to learn both at the same time.

Some books cover both in a single volume, but I still stand by my recommendation.
 
  • #5
chiro said:
If you are learning Bayesian inference, I'd strongly suggest becoming comfortable with standard non-Bayesian inference first, rather than trying to learn both at the same time.

Some books cover both in a single volume, but I still stand by my recommendation.

I'm more or less familiar with frequentist inference.
 
  • #6
I know it's not online, but anyway, I strongly recommend this book (any edition, of course):
Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B., Bayesian Data Analysis, 2nd edn. (Chapman & Hall/CRC Texts in Statistical Science), Chapman and Hall/CRC, 2003.

Its main advantage over other well-known books (e.g. Robert's The Bayesian Choice or Bernardo & Smith's Bayesian Theory) is its straightforward approach. While the others first develop the decision-theoretic framework and then set Bayesian methods within it, Gelman goes straight to the "statistical core" of Bayesianism and introduces computational tools in the very first pages. For a newcomer I find this approach more digestible and a better way to get an idea of what Bayesian statistics is about.

Regarding online sources, I recommend working through Christian Robert's blog, where, among other things, you can find references to his teaching material. Jim Albert's teaching blog is great too.
 
  • #7
camillio said:
I know it's not online, but anyway, I strongly recommend this book (any edition, of course):
Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B., Bayesian Data Analysis, 2nd edn. (Chapman & Hall/CRC Texts in Statistical Science), Chapman and Hall/CRC, 2003.

Its main advantage over other well-known books (e.g. Robert's The Bayesian Choice or Bernardo & Smith's Bayesian Theory) is its straightforward approach. While the others first develop the decision-theoretic framework and then set Bayesian methods within it, Gelman goes straight to the "statistical core" of Bayesianism and introduces computational tools in the very first pages. For a newcomer I find this approach more digestible and a better way to get an idea of what Bayesian statistics is about.

Regarding online sources, I recommend working through Christian Robert's blog, where, among other things, you can find references to his teaching material. Jim Albert's teaching blog is great too.

Thanks a lot for your suggestions, I will order the book in question today.

By the way, is the Bayesian approach often used in science? Why doesn't it supersede the traditional "true or false" approach to confirming empirical hypotheses and judging a theory's credibility, and offer a likelihood-based framework instead, which could allow more rational decisions to be made?
 
  • #8
Well, the Bayesian approach is already a well-established counterpart to traditional frequentist statistics. The reason it took so long to gain acceptance (although it is far older than frequentism) was mainly the strong resistance from the traditional Fisherian and Neyman-Pearson schools. Their arguments, however, were more or less philosophical and rested on comparisons of hardly comparable ideas. It is worth noting that even the Fisherians and the Neyman-Pearsonians fought each other :-)) If you become interested in the tangled history of Bayesian theory, I suggest McGrayne's book "The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy"; it's really worth the $10 on Amazon. I also plan to read Gelman and Robert's paper dealing with the affair.

The use of Bayesian methods is now widespread; you will find them in computer science, biology, geology, genetics, economics, and so on. Unlike the traditional methods, they let you quantify the uncertainty attached to a statistical decision (e.g. a parameter estimate). This, in turn, makes it possible to base decisions on even very small samples and, if necessary, to express an informative initial belief. Not being a militant advocate of either camp, I believe both Bayesian and frequentist methods are worth knowing and applying when the circumstances suit them. Both have pros and cons :-)
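As a purely illustrative sketch (my own toy numbers, not taken from any of the books above), here is what quantifying that uncertainty looks like in a conjugate beta-binomial model with a tiny sample and an informative prior:

```python
# Toy sketch: posterior for a success probability theta from a very small
# sample, using an informative conjugate Beta prior.
from scipy import stats

successes, trials = 3, 5          # a deliberately tiny sample
a_prior, b_prior = 4, 4           # informative Beta(4, 4) prior: theta likely near 0.5

# Conjugacy: Beta prior + binomial data -> Beta posterior.
posterior = stats.beta(a_prior + successes, b_prior + (trials - successes))

print("Posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```

Even with only five observations you get a whole posterior distribution, not just a point estimate.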
 
  • #9
camillio said:
Well, the Bayesian approach is already a well-established counterpart to traditional frequentist statistics. The reason it took so long to gain acceptance (although it is far older than frequentism) was mainly the strong resistance from the traditional Fisherian and Neyman-Pearson schools. Their arguments, however, were more or less philosophical and rested on comparisons of hardly comparable ideas. It is worth noting that even the Fisherians and the Neyman-Pearsonians fought each other :-)) If you become interested in the tangled history of Bayesian theory, I suggest McGrayne's book "The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy"; it's really worth the $10 on Amazon. I also plan to read Gelman and Robert's paper dealing with the affair.

The use of Bayesian methods is now widespread; you will find them in computer science, biology, geology, genetics, economics, and so on. Unlike the traditional methods, they let you quantify the uncertainty attached to a statistical decision (e.g. a parameter estimate). This, in turn, makes it possible to base decisions on even very small samples and, if necessary, to express an informative initial belief. Not being a militant advocate of either camp, I believe both Bayesian and frequentist methods are worth knowing and applying when the circumstances suit them. Both have pros and cons :-)


Thanks for your book suggestion, I will order that one as well.
But aren't Bayesian methods far less widespread than frequentist ones? Most studies I read seem to use the frequentist approach and frequentist hypothesis testing. Why is there such a preference for frequentist methods in the scientific community, despite the Bayesian ones providing a far clearer and more informative picture of the real world?
 
  • #10
Cinitiator said:
Thanks for your book suggestion, I will order that one as well.
But aren't Bayesian methods far less widespread than frequentist ones? Most studies I read seem to use the frequentist approach and frequentist hypothesis testing. Why is there such a preference for frequentist methods in the scientific community, despite the Bayesian ones providing a far clearer and more informative picture of the real world?

It depends.

In areas where samples are small (as in medicine, and in trials in biostatistics and other health-related areas), the Bayesian approach is a good one, since adequate priors give you some extra flexibility in dealing with low sample sizes.

The Bayesian approach also comes with well-developed computational machinery, like MCMC techniques, that lets you simulate from really complex distributions with all kinds of awkward dependencies; this is a big plus when you need to simulate processes that you couldn't handle with standard techniques.
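To give a flavour of the idea (just a toy sketch with a made-up target density; for real work you'd reach for a tested package), a bare-bones random-walk Metropolis sampler can be written in a few lines:

```python
# Bare-bones random-walk Metropolis sampler for a made-up, unnormalised,
# bimodal log-density. Illustrative only; use a dedicated MCMC package in practice.
import numpy as np

def log_target(x):
    # Mixture of two normal bumps centred at -2 and +2 (unnormalised).
    return np.log(np.exp(-0.5 * (x - 2.0) ** 2) + np.exp(-0.5 * (x + 2.0) ** 2))

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(20000):
    proposal = x + rng.normal(scale=1.0)                      # symmetric proposal
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal                                          # accept; otherwise keep x
    samples.append(x)

samples = np.array(samples[2000:])                            # drop burn-in
print("Estimated mean:", samples.mean(), " std:", samples.std())
```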

If you have good reason to think that some non-standard prior will give more accurate results in the context of your work, that is a point in the Bayesian approach's favour. I've highlighted one area that has to do with small sample sizes and one with complex models; if these are big issues for you, the Bayesian approach will, at some point, get a look in.
 
  • #11
Cinitiator said:
Thanks for your book suggestion, I will order that one as well.
But aren't Bayesian methods far less widespread than frequentist ones? Most studies I read seem to use the frequentist approach and frequentist hypothesis testing. Why is there such a preference for frequentist methods in the scientific community, despite the Bayesian ones providing a far clearer and more informative picture of the real world?

In addition to chiro's reply, I'd add that the basic reasons why frequentist methods dominate are: (i) historical, in that Bayesian methods became generally accepted only quite recently; (ii) Bayesian methods are much harder to learn and understand for non-mathematicians (non-statisticians); and (iii) they do not provide a simple bundle of methods that are easy to use and (very frequently) misuse. One usually needs to think about what one is doing, not simply feed some (possibly spurious) data into software and click a button to get "a result" (whatever one takes a result to be). Also, as you mentioned, the user does not obtain a simple dichotomous "yes" or "no" answer as in, e.g., classical hypothesis testing (again, whatever that means).
 
  • #12
camillio said:
In addition to chiro's reply, I'd add that the basic reasons why frequentist methods dominate are: (i) historical, in that Bayesian methods became generally accepted only quite recently; (ii) Bayesian methods are much harder to learn and understand for non-mathematicians (non-statisticians); and (iii) they do not provide a simple bundle of methods that are easy to use and (very frequently) misuse. One usually needs to think about what one is doing, not simply feed some (possibly spurious) data into software and click a button to get "a result" (whatever one takes a result to be). Also, as you mentioned, the user does not obtain a simple dichotomous "yes" or "no" answer as in, e.g., classical hypothesis testing (again, whatever that means).

Thanks for your response. By "yes" or "no" I was making an analogy to traditional hypothesis testing, where the null hypothesis is either rejected ("no") or not rejected ("yes") depending on the chosen significance level. Instead of a yes/no answer at some fixed level, it would be more convenient to have the probability that the null hypothesis is true, given the data.
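To make concrete what I mean, here is a toy sketch (entirely made-up numbers and hypotheses) of getting a posterior probability for each hypothesis instead of a reject/don't-reject verdict:

```python
# Toy sketch: posterior probability of each of two simple hypotheses about
# a coin's bias, instead of a reject/don't-reject verdict. Made-up numbers.
from scipy.stats import binom

successes, trials = 9, 20
hypotheses = {"H0: p = 0.5": 0.5, "H1: p = 0.7": 0.7}
prior = {h: 0.5 for h in hypotheses}            # equal prior probabilities

likelihood = {h: binom.pmf(successes, trials, p) for h, p in hypotheses.items()}
evidence = sum(prior[h] * likelihood[h] for h in hypotheses)

for h in hypotheses:
    posterior = prior[h] * likelihood[h] / evidence
    print(f"P({h} | data) = {posterior:.3f}")
```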
 

1. What is Bayesian statistics?

Bayesian statistics is a method of statistical inference that uses probability theory to update beliefs about a hypothesis as more evidence or data is collected. It is based on Bayes' theorem and is often used in decision making and predictive modeling.
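Concretely, for a hypothesis H and data D, the update is given by Bayes' theorem:

$$ P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}, $$

where P(H) is the prior belief, P(D | H) the likelihood of the data under the hypothesis, and P(H | D) the updated (posterior) belief.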

2. How is Bayesian statistics different from traditional statistics?

Traditional (frequentist) statistics treats parameters as fixed but unknown quantities and assumes that the data are a random sample from a population. Bayesian statistics, on the other hand, allows prior beliefs to be incorporated and updates those beliefs as more data are collected. It also treats the parameters themselves as random variables, rather than fixed values.
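As a small sketch of the contrast (toy numbers, a normal mean with the noise level assumed known), the frequentist summary is a single point estimate while the Bayesian summary is a whole distribution:

```python
# Toy contrast: frequentist point estimate vs Bayesian posterior for the
# mean mu of some measurements, with the noise sd assumed known. Made-up data.
import numpy as np

data = np.array([4.8, 5.1, 5.4, 4.9])       # hypothetical measurements
sigma = 0.5                                   # assumed-known measurement sd

# Frequentist view: mu is a fixed unknown; report the MLE (the sample mean).
mu_mle = data.mean()

# Bayesian view: mu is a random variable with prior N(mu0, tau0^2);
# the conjugate normal-normal update gives a normal posterior.
mu0, tau0 = 5.0, 1.0
post_prec = 1.0 / tau0**2 + len(data) / sigma**2
post_var = 1.0 / post_prec
post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)

print(f"MLE of mu: {mu_mle:.3f}")
print(f"Posterior for mu: mean {post_mean:.3f}, sd {post_var**0.5:.3f}")
```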

3. What are some common applications of Bayesian statistics?

Bayesian statistics has a wide range of applications, including but not limited to decision making, predictive modeling, risk assessment, and data analysis in various fields such as medicine, engineering, and finance. It is also commonly used in machine learning and artificial intelligence.

4. How can I learn about Bayesian statistics?

There are various resources available for learning about Bayesian statistics, including textbooks, online courses, and tutorials. Some popular books on the subject include "Bayesian Data Analysis" by Andrew Gelman et al. and "Doing Bayesian Data Analysis" by John K. Kruschke. Online courses and tutorials can be found on platforms such as Coursera, Udemy, and YouTube.

5. Is prior knowledge necessary to understand Bayesian statistics?

Prior knowledge is not necessary to understand Bayesian statistics, as it can be learned independently. However, having some knowledge of probability theory and basic statistics can make it easier to grasp the concepts. It is also important to have a willingness to think in terms of probabilities and continuously update beliefs based on new evidence.
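A minimal sketch of what "continuously updating beliefs" looks like (illustrative numbers only, conjugate beta-binomial model), where each posterior becomes the prior for the next batch of data:

```python
# Minimal sketch of updating a belief as new evidence arrives: each posterior
# becomes the prior for the next batch (conjugate beta-binomial, toy numbers).
a, b = 1, 1                               # flat Beta(1, 1) prior on a success rate
batches = [(3, 5), (7, 10), (12, 20)]     # (successes, trials) arriving over time

for successes, trials in batches:
    a += successes                        # standard conjugate update
    b += trials - successes
    print(f"After {trials} more trials: Beta({a}, {b}), mean = {a / (a + b):.3f}")
```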
