- #1
- Thread starter FallenApple

- #2

If we have an intuitive, satisfying explanation of why something happens, which is consistent with all our other accepted theories, but the predictions that follow from that explanation are not borne out by observation, there is probably something wrong with the explanation. Indeed, it is sometimes failures of prediction for well-understood and highly-trusted theories that lead to their replacement by more sophisticated theories with a wider scope of application.

But good prediction does not imply that we have a good explanation. It is much more satisfying when one has an explanation of a relationship, but there are many observed relationships that are widely used for which we have no explanations. For example, there are plenty of approved drugs that have been repeatedly shown to be effective in treating certain ailments, and which are widely prescribed, but for which no mechanism is known for how and why they work.

When building statistical models, the usual approach is to set the bar lower for inclusion of a factor that is 'intuitive', i.e. one for which we can imagine a reason why it would affect the output in the way it has been observed to do. We might, for instance, set a lower confidence level, or a lower required improvement in model score, as the threshold that must be crossed for an intuitive factor than for an unintuitive one. But sometimes the statistical evidence for inclusion of a factor is just too strong, even though we are unable to imagine a reason why it should affect the output in the way it has been observed to do.
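A two-tier inclusion rule of the kind described above can be sketched as follows. This is only an illustration, not a quoted procedure from the thread, and the alpha values are hypothetical placeholders; real choices would be domain-specific:

```python
# Hedged sketch of a two-tier inclusion rule: a factor with a plausible
# mechanism ("intuitive") faces a looser evidence threshold than one
# without. The alpha values below are illustrative, not recommendations.
def include_factor(p_value: float, intuitive: bool,
                   alpha_intuitive: float = 0.10,
                   alpha_unintuitive: float = 0.01) -> bool:
    threshold = alpha_intuitive if intuitive else alpha_unintuitive
    return p_value < threshold

# The same statistical evidence (p = 0.05) clears the bar only when
# we can imagine a mechanism behind the factor.
print(include_factor(0.05, intuitive=True))   # True
print(include_factor(0.05, intuitive=False))  # False
```

The point of the sketch is just that the evidence required is conditional on whether a mechanism is imaginable, which is exactly the asymmetry described above.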

One reason unintuitive factors make their way into models is the existence of interaction effects between factors. When a model has several factors there can be many levels of interaction, and the number of possible interactions explodes combinatorially, with many of them hard to conceptualise. But as long as we set a high enough evidential bar before including an unintuitive factor, it would be counter-productive to rule such factors out altogether.
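The combinatorial explosion is easy to quantify: with n main-effect factors there are C(n, k) possible k-way interactions, and 2^n − n − 1 interactions of all orders combined. A quick count:

```python
from math import comb

def interaction_counts(n_factors: int) -> tuple[int, int]:
    """Return (number of pairwise interactions, total interactions of all orders)."""
    pairwise = comb(n_factors, 2)
    total = 2 ** n_factors - n_factors - 1  # all subsets of size >= 2
    return pairwise, total

for n in (5, 10, 20):
    pairwise, total = interaction_counts(n)
    print(f"{n:2d} factors: {pairwise} pairwise interactions, {total} in total")
```

Even at 20 factors there are over a million candidate interaction terms, so it is unsurprising that some effects end up in a model without any ready intuition behind them.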

- #3

> If we have an intuitive, satisfying explanation of why something happens, which is consistent with all our other accepted theories, but the predictions that follow from that explanation are not borne out by observation, there is probably something wrong with the explanation. Indeed it is sometimes failures of prediction for well-understood and highly-trusted theories that leads to their replacement by more sophisticated theories with wider scope of application.

That makes sense. Another example in science is that Newtonian physics predicts well, but is wrong. But it is a limiting case of General Relativity, so the false theory of Newtonian physics is still at least directly related to the truth.

> But good prediction does not imply that we have a good explanation. It is much more satisfying when one has an explanation of a relationship, but there are many observed relationships that are widely used for which we have no explanations. For example there are plenty of approved drugs that have been repeatedly shown to be effective in treating certain ailments, and which are widely prescribed, but for which no mechanism is known about how and why they work.

If certain medications work well the vast majority of the time, then it likely isn't a coincidence.

> When building statistical models, the usual approach is to set the bar lower for inclusion of a factor that is called 'intuitive' - ie for which we can imagine a reason why it would affect the output in the way it has been observed to do. We might for instance set a lower confidence level or improvement in model score as the threshold that must be crossed for an intuitive factor than for an unintuitive factor. But sometimes the statistical evidence for inclusion of a factor is just too strong, even though we are unable to imagine a reason why it should affect the output in the way it has been observed to do.

Is the mathematical/statistical reason for setting the bar lower for "intuitive" variables that, even if a variable isn't significant by itself, it could become significant after other variables are included? Is it because the model's error term becomes correlated with omitted confounders, so that the error isn't irreducible and is absorbing some of their influence?

> One reason unintuitive factors make their way into models is the existence of interaction effects between factors. When a model has several factors there can be many levels of interaction and the number of possible interactions explodes combinatorically, with many of them hard to conceptualise. But as long as we set a high enough requirement of impact before including an unintuitive factor, it would be counter-productive to rule that out.

So the unintuitive factor has a higher bar because, given the theory is true, it is unlikely to be a confounder, and adding it will probably just complicate interpretation because of the combinatorial issue you noted. Also, I think from a predictive standpoint it would be pretty bad as well, right? Because it increases the dimensionality of the input space and will result in higher variance of outcomes under validation.

But wouldn't this result in a tradeoff? Sometimes there are many confounders, so including them will be necessary for getting a good explanation and a better model fit (lower RSS), but will increase the variance of the predicted outcome on a validation set if we were to obtain one.
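That tradeoff can be made concrete with a small simulation. The setup below is my own illustration, not from the thread: one real predictor plus up to 15 irrelevant "factors". Adding irrelevant columns can only lower the training RSS (the bigger model nests the smaller one), but it tends to inflate error on a held-out validation set:

```python
import numpy as np

# Illustrative simulation: true model is y = 3x + noise; the "junk" columns
# are candidate factors with no real effect. Averaged over several seeds.
def rss_train_val(seed: int, k_noise: int, n_train: int = 40, n_val: int = 40,
                  max_noise: int = 15) -> tuple[float, float]:
    rng = np.random.default_rng(seed)
    n = n_train + n_val
    x = rng.normal(size=(n, 1))              # the one real factor
    junk = rng.normal(size=(n, max_noise))   # irrelevant candidate factors
    y = 3 * x[:, 0] + rng.normal(size=n)     # true relationship plus noise
    X = np.hstack([np.ones((n, 1)), x, junk[:, :k_noise]])
    beta, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
    rss = lambda A, b: float(np.sum((A @ beta - b) ** 2))
    return rss(X[:n_train], y[:n_train]), rss(X[n_train:], y[n_train:])

seeds = range(20)
lean = np.mean([rss_train_val(s, 0) for s in seeds], axis=0)
bloated = np.mean([rss_train_val(s, 15) for s in seeds], axis=0)
print(f"1 real factor only    : train RSS {lean[0]:.1f}, validation RSS {lean[1]:.1f}")
print(f"+15 irrelevant factors: train RSS {bloated[0]:.1f}, validation RSS {bloated[1]:.1f}")
```

The in-sample fit "improves" with the junk factors while out-of-sample prediction gets worse, which is the variance side of the tradeoff described above.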

- #4

> If we can infer well using the coefficients, then we should be able to predict well also.

This is not true in general. Suppose you have five noisy regression points. You can explain them perfectly with a fourth-order polynomial, but the resulting predictions can be much worse than those of a linear fit which explains less. Prediction and explanation are substantially different things.
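A minimal numpy illustration of that point (my own example, under the stated setup): five points drawn from a linear signal plus noise. The degree-4 polynomial interpolates them exactly, yet extrapolates worse than the straight-line fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Five noisy points from an underlying linear relationship y = 2x + noise
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2 * x + rng.normal(scale=1.0, size=5)

# Degree-4 polynomial: 5 coefficients, 5 points -> fits them exactly
quartic = np.polynomial.Polynomial.fit(x, y, deg=4)
linear = np.polynomial.Polynomial.fit(x, y, deg=1)

# In-sample "explanation": the quartic has essentially zero residual
resid_quartic = np.max(np.abs(quartic(x) - y))
resid_linear = np.max(np.abs(linear(x) - y))

# Out-of-sample prediction at a sixth point, compared to the noiseless signal
x_new, y_true = 5.0, 2 * 5.0
err_quartic = abs(quartic(x_new) - y_true)
err_linear = abs(linear(x_new) - y_true)

print(f"quartic: in-sample max residual {resid_quartic:.1e}, error at x=5: {err_quartic:.2f}")
print(f"linear : in-sample max residual {resid_linear:.2f}, error at x=5: {err_linear:.2f}")
```

The perfect in-sample fit is exactly what makes the quartic fragile: it bends to accommodate the noise, and that bending is amplified outside the data.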

> Another example in science is that Newtonian physics predicts well, but is wrong. But it is a limiting case of General Relativity, so the false theory of Newtonian Physics is still at least directly related to the truth.

I disagree completely with this. Newtonian physics is verified in its domain of applicability, as centuries of experimental outcomes confirm. You should read Asimov's "The Relativity of Wrong" and the recent Insights article about classical mechanics.

- #5

> This is not true in general. Suppose you have five noisy regression points. You can explain them perfectly with a fourth order polynomial, but the resulting predictions can be much worse than a linear fit which explains less. Prediction and explanation are substantially different things.

That makes sense. A fourth-order polynomial has a lot of curvature and can curve away quickly before the x-location of a sixth point used to test prediction, depending on where that point appears.

> I disagree completely with this. Newtonian physics is verified in its domain of applicability, as centuries of experimental outcomes confirm. You should read Asimov's "The Relativity of Wrong" and the recent Insights article about classical mechanics.

I just read the Insights article. Yes, Newtonian mechanics being the limiting case of GR means that, within its domain of applicability, it agrees with the modern theory and is therefore as correct as GR itself in that domain.
