Insights: How Does Bayesian Reasoning Affect Mental Disorders?

AI Thread Summary
The discussion centers on the challenges of assigning a priori probabilities (P(H)) to hypotheses in scientific inference, particularly in the context of predictive success and falsifiability. Participants express uncertainty about how to effectively assign these probabilities, noting that only a limited number of hypotheses are typically developed to the point of making testable predictions. The conversation highlights the distinction between P(H) and P(H | \mathcal{M}), where \mathcal{M} represents the underlying model that connects hypotheses to observations. There is a consensus that while prior probabilities should have a basis, this aspect is often not included in mathematical formulations. The role of the model in determining probabilities is emphasized, with discussions on how observations that do not fit the model indicate the need for adjustments to the model or the consideration of new hypotheses. The thread also touches on the implications of Bayesian reasoning in relation to mental health, referencing an external article on the subject.
bapowell
Science Advisor
Insights Author
bapowell submitted a new PF Insights post

Scientific Inference P3: Balancing predictive success with falsifiability



Continue reading the Original PF Insights Post.
 
I second Greg's comment.

But it occurred to me that we really have no basis at all for assigning P(H), the a priori probability of a hypothesis. I suppose that at any given time, there are only a handful of hypotheses that have actually been developed to the extent of making testable predictions, so maybe you can just weight them all equally?
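The "weight them all equally" idea can be sketched directly: start with a uniform prior over a handful of developed hypotheses and let Bayes' rule shift the weights as evidence comes in. The likelihood values below are invented purely for illustration.

```python
# Toy sketch: uniform prior over a handful of testable hypotheses,
# updated by Bayes' rule. Likelihood values are assumed, not from the thread.

def bayes_update(priors, likelihoods):
    """Return posteriors P(H_i | O) from priors P(H_i) and likelihoods P(O | H_i)."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Three competing hypotheses, no a priori reason to favor any of them.
priors = [1/3, 1/3, 1/3]
# How well each hypothesis predicted the observed outcome (assumed numbers).
likelihoods = [0.8, 0.3, 0.1]

posteriors = bayes_update(priors, likelihoods)
print(posteriors)  # the best-predicting hypothesis absorbs most of the weight
```

Even with a flat starting point, the data quickly break the symmetry; the equal weighting only matters before the first observation.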
 
stevendaryl said:
I second Greg's comment.

But it occurred to me that we really have no basis at all for assigning P(H), the a priori probability of a hypothesis. I suppose that at any given time, there are only a handful of hypotheses that have actually been developed to the extent of making testable predictions, so maybe you can just weight them all equally?

In the article, it's not P(H) but P(H | \mathcal{M}), but I'm not sure that I understand the role of \mathcal{M} here.
 
stevendaryl said:
I second Greg's comment.

But it occurred to me that we really have no basis at all for assigning P(H), the a priori probability of a hypothesis. I suppose that at any given time, there are only a handful of hypotheses that have actually been developed to the extent of making testable predictions, so maybe you can just weight them all equally?

There should be a basis for assigning the prior. It's just not part of the math.

You could collect data showing that 1% of the population has AIDS. That base rate would be your prior for a given individual having the condition.
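As a worked sketch of that base-rate prior: take P(condition) = 0.01 and update on a positive diagnostic test. The sensitivity and false-positive rate below are assumed numbers, not from the thread.

```python
# Hedged sketch: 1% base rate as the prior, updated on a positive test
# via Bayes' rule. Test characteristics are assumed for illustration.

prior = 0.01          # P(condition): the 1% base rate used as the prior
sensitivity = 0.99    # P(positive | condition), assumed
false_pos = 0.05      # P(positive | no condition), assumed

# Total probability of a positive result, then Bayes' rule.
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive
print(f"{posterior:.3f}")  # ≈ 0.167
```

The posterior of roughly 17% shows why the prior matters: with a 1% base rate, even a fairly accurate positive test leaves the condition more likely absent than present.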
 
Hornbein said:
There should be a basis for assigning the prior. It's just not part of the math.

You could collect data showing that 1% of the population has AIDS. That base rate would be your prior for a given individual having the condition.

Okay, I was thinking of a different type of "hypothesis": a law-like hypothesis such as Newton's law of gravity, or the hypothesis that AIDS is caused by HIV. I don't know how you would assign a prior to such things.
 
stevendaryl said:
In the article, it's not P(H) but P(H | \mathcal{M}), but I'm not sure that I understand the role of \mathcal{M} here.
You can think of the hypothesis as being the value of a certain parameter, like the curvature of the universe. The model is the underlying theory relating that parameter to the observation, and should include prior information like the range of the parameter.
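This picture can be made concrete with a toy grid computation: the hypothesis H is a value of a curvature-like parameter k, and the model M supplies both the likelihood relating k to an observation and the prior range of k. All numbers here are invented for illustration.

```python
# Minimal grid sketch: H = a parameter value, M = the theory supplying the
# likelihood and the prior range. Everything here is a toy assumption.
import math

ks = [i / 100 for i in range(-100, 101)]   # prior range of k from M: [-1, 1]
prior = 1 / len(ks)                        # flat P(H | M) over that range

def likelihood(obs, k, sigma=0.1):
    """P(O | H, M): M predicts the observable equals k, with Gaussian noise."""
    return math.exp(-0.5 * ((obs - k) / sigma) ** 2)

obs = 0.02                                 # a toy observation
unnorm = [prior * likelihood(obs, k) for k in ks]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

best = ks[posterior.index(max(posterior))]
print(best)  # the posterior peaks near the observed value, 0.02
```

Note that both pieces of the prior information come from M: the flat weighting and the permitted range [-1, 1] of the parameter.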
 
bapowell said:
You can think of the hypothesis as being the value of a certain parameter, like the curvature of the universe. The model is the underlying theory relating that parameter to the observation, and should include prior information like the range of the parameter.

Suppose we observe something that is totally unrelated to the underlying theory. What is M in that case? What is P(H|M) in that case?

Edit: I should have asked what is P(O|M) in that case?
 
M can be thought of as the underlying theory, which in practice is a set of equations relating the observable quantities to a set of parameters (together with constraints on those parameters, like the ranges of permitted values). If an observation is made that is not well accommodated by the model M, then we will find low posterior probabilities for the parameters of the model, P(H|O). This is a signal that we either need to consider additional parameters within M, or consider a new M altogether.
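The misfit signal described above can be sketched numerically: when an observation lies far outside what any permitted parameter value in M can accommodate, the marginal likelihood P(O|M), averaged over the prior range, comes out vanishingly small. All numbers here are toy assumptions.

```python
# Toy sketch of a model-misfit signal: compare the marginal likelihood
# P(O | M) for an observation the model can reach versus one it cannot.
import math

def evidence(obs, k_values, sigma=0.1):
    """Marginal likelihood P(O | M): average of P(O | H, M) over the prior range."""
    prior = 1 / len(k_values)
    return sum(prior * math.exp(-0.5 * ((obs - k) / sigma) ** 2) for k in k_values)

ks = [i / 100 for i in range(-100, 101)]   # M permits parameter values in [-1, 1]

well_fit = evidence(0.1, ks)   # observation inside the model's reach
misfit = evidence(5.0, ks)     # observation no k in [-1, 1] can explain

print(well_fit > 1000 * misfit)  # True
```

A collapse of P(O|M) like this is the quantitative cue to widen the parameter space within M, or to replace M entirely.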
 
