Scientific Inference and How We Come to Know Stuff - Comments

  • #1
bapowell
Science Advisor
Insights Author
bapowell submitted a new PF Insights post

Scientific Inference and How We Come to Know Stuff

Continue reading the Original PF Insights Post.
 
  • Like
Likes atyy, Samy_A, Buzz Bloom and 3 others
  • #2
This is great stuff, and a must-read for any scientist. Sadly, most ignore these kinds of questions.

Personally, I adhere to Kuhn's view of science, so I'm looking forward to what you'll say about him!
 
  • Like
Likes Greg Bernhardt and bapowell
  • #3
A minor nitpick: equation (1) does not evaluate to 1 for ##n = 0##, and it does evaluate to 11 for ##n = 5##. As you've written it, it should have ##2n - 1## at the end and be evaluated starting at ##n = 1##. Or, if you want to start with ##n = 0##, the final ##+1## in each of the five factors in parentheses should be a ##-1##.
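A quick numerical check makes this kind of off-by-one easy to spot. Assuming the corrected equation (1) is the usual induction-trap polynomial ##f(n) = (n-1)(n-2)(n-3)(n-4)(n-5) + 2n - 1## (my reconstruction from the description above, not the article's exact formula), a short script confirms it matches the odd numbers for ##n = 1## through ##5## and then breaks:

```python
# Hypothetical reconstruction of the corrected equation (1): five factors
# plus a linear term, chosen so f(n) matches the odd numbers for n = 1..5.
def f(n):
    return (n - 1) * (n - 2) * (n - 3) * (n - 4) * (n - 5) + 2 * n - 1

print([f(n) for n in range(1, 6)])  # [1, 3, 5, 7, 9] -- looks like 2n - 1...
print(f(6))                         # 131: the apparent pattern breaks
```

Which is, of course, exactly the point the article makes about generalizing from a few instances.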
 
  • Like
Likes bapowell
  • #5
I gave up worrying about whether or not a scientific theory is really true (in the mathematical sense) some time in college. I am only concerned with whether the predictions are accurate (and how accurate), and how well the boundaries of application are known.

No well-educated physicist pretends that Newton's laws are rigorously true or exact (in light of quantum mechanics and relativity). But they are certainly good enough for 99% of mechanical engineering tasks. Improving our lives does not require any scientific theory to be rigorously "true," only that it works "well enough."

As a practical matter "well enough" usually means being some measure more accurate than the theory or model being replaced.
 
  • Like
Likes Greg Bernhardt
  • #6
Dr. Courtney:

It's not about whether theories are "really true". They are not, for they are supported by inference and predictions (deductions only work within an inference theory framework). It's about knowing the limits of the knowledge itself. And by "It" I mean the work of the physicist. Phenomenological descriptions of the world are the goal of the physicist, regardless of implications; might as well know the basis of the work.

PS: Of course, a lot of physicists work on life-improving applications of this or that, but I meant to center on the "pure" side of things. Knowledge for knowledge's sake. That necessarily includes the methodology itself.
 
  • #7
PeterDonis said:
A minor nitpick: equation (1) does not evaluate to 1 for ##n = 0##, and it does evaluate to 11 for ##n = 5##. As you've written it, it should have ##2n - 1## at the end and be evaluated starting at ##n = 1##. Or, if you want to start with ##n = 0##, the final ##+1## in each of the five factors in parentheses should be a ##-1##.
Thanks and fixed!
 
  • #8
voila said:
Dr. Courtney:

It's not about whether theories are "really true". They are not, for they are supported by inference and predictions (deductions only work within an inference theory framework). It's about knowing the limits of the knowledge itself.

This viewpoint is abandoned in most cases where the scientific claims have political or policy implications. No one is threatened by the physicist who doesn't believe Newton's laws, quantum mechanics, or general relativity are "really true" (in nearly every sense in which physicists seem to make these claims).

But lots of folks are threatened when theories with greater political import or public policy implications are challenged (vaccinations, evolution, climate change, etc.). Very few scientists are willing to acknowledge that these "theories" may not really be true, but are only approximations.

This gives rise to Courtney's law: the accuracy of the error estimates is inversely proportional to the public policy implications.

And the corollary: the likelihood of a claim for a theory to be absolutely true is directly proportional to the public policy implications.

I tend to sense a different epistemology at work when the conversation shifts from how well a theory is supported by data and repeatable experiment to arguments from authority citing how many "experts" support a given view.

Scientific inference and how we come to know stuff should be the same for theories with relatively little policy import as it is for theories with broad and important policy implications. The way most science with policy implications is presented to the public (and often taught in schools) suggests the rules of inference and how we come to know stuff are rather flexible, when they really should not be.

When the success of education is judged by whether students agree with scientific consensus on most issues rather than by whether they really grasp how scientific inference and how we come to know stuff really work, education is bound to fail, because the metric for determining success is fundamentally flawed.
 
Last edited:
  • Like
Likes brainpushups and PeterDonis
  • #9
Hi Brian:

I much like your presentation.

It occurs to me that Hume's argument is based on an assumed definition about what knowledge is. The assumption seems to be
"Knowledge" is a belief for which there is certainty that the belief is correct/true.​
It is the kind of knowledge associated with the theorems of pure math (and perhaps also theology). Even if we ignore the erroneous proofs of theorems that have occasionally fooled mathematicians in the past, this definition is completely incompatible with the concept of scientific knowledge, in which absolute certainty is recognized as always impossible. All scientific knowledge (excluding pure math knowledge) comes with a level of confidence which is less than certainty.

Note: As an example of a false proof of a theorem that was widely accepted for a decade, here is a quote:
One alleged proof was given by Alfred Kempe in 1879, which was widely acclaimed... It was not until 1890 that Kempe's proof was shown incorrect by Percy Heawood.​

Regards,
Buzz
 
  • #10
Dr. Courtney:

Whether students agreeing with the consensus is a good measure depends on one's view of the purpose of education. Early 20th century education reformers like Dewey saw the purpose of education as the promotion of political unity. It's not clear to me that this has changed.

As Brian Powell points out, truth is a slippery concept. Political unity is much easier to understand.
 
  • #11
Buzz Bloom said:
Hi Brian:

I much like your presentation.

It occurs to me that Hume's argument is based on an assumed definition about what knowledge is. The assumption seems to be
"Knowledge" is a belief for which there is certainty that the belief is correct/true.​
Thank you Buzz. I'm not sure that this is the kind of knowledge that Hume has in mind. Though his work predates Kant's delineation between "analytic" and "synthetic" knowledge (that is, the distinction between logical and empirical truths), Hume understood that knowledge about the world must be ascertained by observing it. After all, he was one of the founders of empiricism. This is why he tangles with induction: it's the only way to acquire knowledge through experience. Accepting that such knowledge is never perfect, the problem of induction illustrates that we cannot, in general, know how "good" our knowledge is. So Hume accepts that knowledge is imperfect; the problem of induction shows that we cannot know how imperfect it is.
 
  • #12
bapowell said:
the problem of induction shows that we cannot know how imperfect it is.
Hi Brian:

I am not sure I understand what the above means. When science gives a calculated confidence level with a prediction/measurement, isn't that the same thing as knowing the degree of imperfection? I do not know Hume's history, so an explanation might be that he was unaware of probability theory.

Regards,
Buzz
 
  • #13
Buzz Bloom said:
I am not sure I understand what the above means. When science gives a calculated confidence level with a prediction/measurement, isn't that the same thing as knowing the degree of imperfection? I do not know Hume's history, so an explanation might be that he was unaware of probability theory.

Every measurement in science is an estimated value rather than a perfectly determined value, and every confidence level or set of error bars is also an estimate.

You usually have to read the fine print in a published paper to determine how error estimates are calculated, on what assumptions the confidence levels are based, etc. Common assumptions that may not always hold are that measurement errors are random and normally distributed.

So in the language of your question, "the degree of imperfection" is not "known" exactly, it is only estimated, and that estimation is based on assumptions that may not always be true.

Although most of my own published papers include error bars and confidence levels, in most cases we only use one technique (the most common in the field we're working in) to determine the error estimates we include in the paper. We do commonly estimate uncertainties with other available techniques which I take to provide some ballpark estimates on how accurate the error estimates themselves might be. It is nearly universal for different approaches to estimating uncertainties to yield slightly different results. One approach might suggest 1% uncertainty, while another approach might suggest 2%. I end up with a "gut feeling" that the actual uncertainty is likely between 1-2%.
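As a minimal sketch of what comparing two uncertainty techniques on the same data can look like (the sample and both methods here are purely illustrative, not taken from any of the papers mentioned):

```python
import random
import statistics

random.seed(0)
# Illustrative sample: 50 repeated measurements of the same quantity
data = [random.gauss(10.0, 0.5) for _ in range(50)]
mean = statistics.fmean(data)

# Technique 1: standard error of the mean
# (assumes independent, normally distributed errors)
sem = statistics.stdev(data) / len(data) ** 0.5

# Technique 2: bootstrap resampling
# (makes weaker distributional assumptions)
boot_means = []
for _ in range(2000):
    resample = [random.choice(data) for _ in data]
    boot_means.append(statistics.fmean(resample))
boot = statistics.stdev(boot_means)

print(f"mean = {mean:.3f}, SEM = {sem:.4f}, bootstrap = {boot:.4f}")
```

The two estimates typically agree only approximately, which is exactly the "error bar on the error bar" situation described above.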
 
  • Like
Likes Buzz Bloom
  • #14
Kant (1724-1804) argued that a priori knowledge exists. That is, some things can be known before [without] reference to our senses. Hume (1711-1776) argued that only a posteriori knowledge exists; knowledge is gained exclusively after perception through the senses.

Hume promoted a purer form of empiricism in which passion ruled, while Kant believed rational thought gave form to our observations.

So Hume would point out that any "gut feeling" is just that. Perhaps with more observation, the confidence limit could approach zero, but it couldn't ever reach zero.

Kant might argue that with proper math, the limit could be made arbitrarily small. But the key would be well reasoned math and a good model.

So there's an open question about the relationship between "the map and the territory". Can a model be made so precise that it becomes the territory? Or in more modern terms, do we live in The Matrix? In a digital universe, the two can be identical. In a chaotic universe (in the mathematical sense), they cannot be.

IMO, science today should assume the least restrictive option, that the map is not the territory. Models represent reality, but are not reality. Perhaps that is not true, but assuming the opposite could lead to errors where we assume a poor map is better than it is.

Still, this borders on matters of faith, and people's basic motivations in pursuing science. I would not want to discourage others from seeking their own truths.
 
  • #15
Buzz Bloom said:
Hi Brian:

I am not sure I understand what the above means. When science gives a calculated confidence level with a prediction/measurement, isn't that the same thing as knowing the degree of imperfection? I do not know Hume's history, so an explanation might be that he was unaware of probability theory.
The problem is that confidence limits, and in fact all statistical inference, are based on the premise that induction works: that we can generalize individual measurements here and now to universal law. The tools of statistical inference are used to quantify our uncertainty under the presumption that the thing being measured is projectable, but, as I discuss in the note, it's not always easy to know that you're projecting the right thing (that you're making the right inductive inference). As I hope to show over the course of the next few notes, we can still make progress in science despite these limitations; and yes, statistics has much to say on it (though not the sampling statistics we use to summarize our data...)
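A toy illustration of this underdetermination (my own example, not from the Insights post): two hypothetical "laws" can agree on every observation made so far and still disagree about the next case, so the data alone cannot decide which regularity to project:

```python
observed_x = [0, 1, 2, 3]  # all the data gathered so far

def law_a(x):
    # a simple linear regularity
    return 2 * x

def law_b(x):
    # agrees with law_a on every observed point, diverges beyond them
    return 2 * x + x * (x - 1) * (x - 2) * (x - 3)

# Indistinguishable on the available evidence...
assert all(law_a(x) == law_b(x) for x in observed_x)

# ...yet they project differently to the unobserved case
print(law_a(4), law_b(4))  # 8 32
```

Any statistics we compute from the observed points is equally consistent with both laws; the choice between them is the inductive step.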
 
  • Like
Likes Buzz Bloom
  • #16
Jeff Rosenbury said:
Kant (1724-1804) argued that a priori knowledge exists. That is, some things can be known before [without] reference to our senses. Hume (1711-1776) argued that only a posteriori knowledge exists; knowledge is gained exclusively after perception through the senses.

Hume promoted a purer form of empiricism in which passion ruled, while Kant believed rational thought gave form to our observations.

So Hume would point out that any "gut feeling" is just that. Perhaps with more observation, the confidence limit could approach zero, but it couldn't ever reach zero.

Kant might argue that with proper math, the limit could be made arbitrarily small. But the key would be well reasoned math and a good model.

So there's an open question about the relationship between "the map and the territory". Can a model be made so precise that it becomes the territory? Or in more modern terms, do we live in The Matrix? In a digital universe, the two can be identical. In a chaotic universe (in the mathematical sense), they cannot be.

IMO, science today should assume the least restrictive option, that the map is not the territory. Models represent reality, but are not reality. Perhaps that is not true, but assuming the opposite could lead to errors where we assume a poor map is better than it is.

Still, this borders on matters of faith, and people's basic motivations in pursuing science. I would not want to discourage others from seeking their own truths.

Well said, and well explained. Thanks.

I tend to view pure math like Kant: given Euclid's axioms, the whole of Euclidean geometry can be produced with certainty without regard for sensory perception.

But I view the natural sciences like Hume: repeatable experiment alone is the ultimate arbiter, knowledge without sensory perception (experiment) is impossible, and since all measurement has uncertainty, so does all theory.

So in pure math, I am comfortable with the map being the territory. But in natural science, the map is not the territory. To me, this is the fundamental distinction between pure math and natural science, and it tends to be underappreciated by students at most levels.
 
  • Like
Likes Buzz Bloom
  • #17
Dr. Courtney said:
But I view the natural sciences like Hume: repeatable experiment alone is the ultimate arbiter, knowledge without sensory perception (experiment) is impossible, and since all measurement has uncertainty, so does all theory.
Kant would agree with you. He held that knowledge was either analytic (true by logical necessity) or synthetic (knowable only through experience). Scientific knowledge is in the latter category.
 
Last edited:
  • Like
Likes Buzz Bloom and Jeff Rosenbury
  • #18
Dr. Courtney said:
So in pure math, I am comfortable with the map being the territory. But in natural science, the map is not the territory. To me, this is the fundamental distinction between pure math and natural science, and it tends to be underappreciated by students at most levels.

Even in pure math the answer isn't clear to me. Gödel proved a disturbing theorem (the incompleteness theorem) which approaches this point tangentially. It is possible that even some theorems in math depend on viewpoint, and thus on sense data.

But since the incompleteness theorem doesn't apply directly, it may not apply here at all, or, more disturbingly, may apply only from some points of view.

I wish I was smart enough to understand rather than guess.
 
  • #19
bapowell said:
The problem is that confidence limits, and in fact all statistical inference, are based on the premise that induction works:
What works for me is the concept that a distinction exists between TRUE knowledge (based on provable truth like pure math) and USEFUL knowledge (based only on practical usefulness like the rest of science). USEFUL knowledge comes with an estimate of how likely it is that the knowledge will continue to be useful for some useful period of time.

Regards,
Buzz
 
  • #20
Buzz Bloom said:
TRUE knowledge (based on provable truth like pure math)

But these provable truths are only provable in the context of a particular set of abstract axioms, which might or might not actually apply to the real world. That is why Bertrand Russell (IIRC) said that to the degree that the statements of mathematics refer to reality, they are not certain, and to the degree that they are certain, they do not refer to reality.
 
  • Like
Likes Jeff Rosenbury and Buzz Bloom
  • #21
PeterDonis said:
But these provable truths are only provable in the context of a particular set of abstract axioms, which might or might not actually apply to the real world.
Hi Peter:

Agreed. That is why TRUE knowledge is only a map, but USEFUL knowledge IS the SUBJECTIVE terrain. Both forms of knowledge are approximations of the OBJECTIVE terrain.

Regards,
Buzz
 
  • #22
Buzz Bloom said:
TRUE knowledge is only a map, but USEFUL knowledge IS the SUBJECTIVE terrain.

I don't see the difference between a map and the subjective terrain; a map is the "subjective terrain"--the representation of the terrain that you use as a guide.
 
  • #23
PeterDonis said:
I don't see the difference between a map and the subjective terrain; a map is the "subjective terrain"--the representation of the terrain that you use as a guide.
Hi Peter:

I am not confident that the following explanation will be generally helpful, but it works for me.

OBJECTIVE reality is not knowable. SUBJECTIVE reality is the only reality that is knowable, and it is therefore USEFUL to consider it to be THE REALITY. Alternatively, it is also useful to go with the (existential ?) concept that THE REALITY is the link between subjective and objective reality. TRUE knowledge is not REALITY at all; it is ONLY a MAP.

Regards,
Buzz
 
  • #24
Buzz Bloom said:
USEFUL knowledge comes with an estimate of how likely it is that the knowledge will continue to be useful for some useful period of time.
Exactly. And this is where the problem of induction comes in: how are you able to know whether your knowledge will continue to be useful? Any expectation that it will be useful for some period of time presumes certain regularities about the universe. The challenge is identifying what these regularities are in every instance so that we can bolster our inductive reasoning.
 
  • Like
Likes Buzz Bloom
  • #25
bapowell said:
Any expectation that it will be useful for some period of time presumes certain regularities about the universe. The challenge is identifying what these regularities are in every instance so that we can bolster our inductive reasoning.
Hi Brian:

Part of USEFUL knowledge is our estimates about the regularities continuing to be regular. Logically this creates circularity concerning the truth, but with respect to usefulness this is perfectly reasonable.

Regards,
Buzz
 
  • #26
bapowell said:
Exactly. And this is where the problem of induction comes in: how are you able to know whether your knowledge will continue to be useful? Any expectation that it will be useful for some period of time presumes certain regularities about the universe. The challenge is identifying what these regularities are in every instance so that we can bolster our inductive reasoning.
It is fascinating to me that my favorite definition of energy (Noetherian energy) relies on the assumption of such regularities in the universe. So much so that, to some extent, modern scientific models are built on this underlying Lagrangian framework. In other words, since the variation of the action along the real (as in true) path is (classically) defined to be zero, any changes would show up as energy changes in the model.

Thus, to some extent, irregularities do exist as energy changing form. Of course we observe these changes and account for them in our models. Whenever we find a new change, we account for it as a new form of energy. Thus we change our model to match our changing universe.
 
  • #27
Is the principle of equivalence a form of induction, ie. the local laws of physics are everywhere the same?
 
  • #28
Science cannot really prove the constancy of the laws of nature. Science assumes the constancy of natural law.

Stephen Jay Gould put it this way:

Begin Exact Quote (Gould 1984, p. 11):

METHODOLOGICAL PRESUPPOSITIONS ACCEPTED BY ALL SCIENTISTS

1) The Uniformity of law - Natural laws are invariant in space and time. John Stuart Mill (1881) argued that such a postulate of uniformity must be invoked if we are to have any confidence in the validity of inductive inference; for if laws change, then an hypothesis about cause and effect gains no support from repeated observations - the law may alter the next time and yield a different result. We cannot "prove" the assumption of invariant laws; we cannot even venture forth into the world to gather empirical evidence for it. It is an a priori methodological assumption made in order to practice science; it is a warrant for inductive inference (Gould, 1965).

End Exact Quote (Gould 1984, p. 11)

Gould, Stephen Jay. "Toward the vindication of punctuational change." Catastrophes and Earth History (1984): 9-16.
also see:
Gould, Stephen Jay. "Is uniformitarianism necessary?" American Journal of Science 263.3 (1965): 223-228.
Gould, Stephen Jay. Time's arrow, time's cycle: Myth and metaphor in the discovery of geological time. Harvard University Press, 1987.
 
  • Like
Likes bapowell
  • #29
atyy said:
Is the principle of equivalence a form of induction, ie. the local laws of physics are everywhere the same?
Any physical law that is a generalization through time or space (or both) of individual observed instances is an induction. So, yeah, the equivalence principle is a great example!
 
  • #30
atyy said:
Is the principle of equivalence a form of induction, ie. the local laws of physics are everywhere the same?

Someone more of an expert in GR may adjust my answer, but there are some specific assertions in the principle of equivalence beyond the local laws of physics being the same everywhere. These assertions could potentially be falsified:

1. Gravitational mass is equal to inertial mass.
2. There is no experiment that can possibly distinguish a uniform acceleration from a uniform gravitational field.
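As a rough sketch of why assertion 1 is empirically testable (the numbers here are illustrative, not real experimental values): Newton's second law gives ##a = (m_{grav}/m_{inert})\,g##, so any mismatch between the two masses would show up as a material-dependent free-fall acceleration:

```python
G_FIELD = 9.81  # illustrative uniform field strength, m/s^2

def fall_acceleration(m_grav, m_inert):
    # m_inert * a = m_grav * g  =>  a = (m_grav / m_inert) * g
    return (m_grav / m_inert) * G_FIELD

# If gravitational mass equals inertial mass, every body falls
# identically, regardless of how heavy it is:
assert fall_acceleration(2.0, 2.0) == fall_acceleration(5.0, 5.0)

# A hypothetical violation (m_grav != m_inert) would be directly
# observable as different accelerations for different materials,
# which is the kind of effect Eotvos-type experiments constrain:
print(fall_acceleration(1.0, 1.0), fall_acceleration(1.0, 1.001))
```

So while the equivalence principle is an inductive generalization, its component assertions make falsifiable predictions in any single experiment.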
 
  • Like
Likes Pepper Mint
  • #32
And part two is very good. Well worth a careful read.
 
  • Like
Likes bapowell
  • #33
I really thank the author for an excellent read.

Induction IMO degenerates to solipsism, and then you're just having a boring talk with yourself.

I am happy to just assume we all share a common external reality that we can study reliably. The rest, not so much.
 
  • Like
Likes bapowell and Greg Bernhardt

