
Insights: Scientific Inference and How We Come to Know Stuff - Comments

  1. Apr 21, 2016 #1

    bapowell

    Science Advisor

  3. Apr 21, 2016 #2

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    This is great stuff, and a must-read for any scientist. Sadly, most ignore these kinds of questions.

    Personally, I adhere to Kuhn's view of science, so I'm looking forward to what you'll say about him!
     
  4. Apr 21, 2016 #3

    PeterDonis

    2016 Award

    Staff: Mentor

    A minor nitpick: equation (1) does not evaluate to 1 for ##n = 0##, and it does evaluate to 11 for ##n = 5##. As you've written it, it should have ##2n - 1## at the end and be evaluated starting at ##n = 1##. Or, if you want to start with ##n = 0##, the final ##+1## in each of the five factors in parentheses should be a ##-1##.
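    A minimal sketch of the indexing point (this checks only the linear term, not the full equation (1)):

    Code (Python):
    # 2n - 1 gives 1, 3, 5, 7, 9 for n = 1..5, while 2n + 1 gives the same
    # values starting at n = 0 and reaches 11 at n = 5.
    print([2 * n - 1 for n in range(1, 6)])   # [1, 3, 5, 7, 9]
    print([2 * n + 1 for n in range(0, 5)])   # [1, 3, 5, 7, 9]
    print(2 * 5 + 1)                          # 11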
     
  5. Apr 21, 2016 #4
    Brilliant stuff Brian!
     
  6. Apr 22, 2016 #5
    I gave up worrying about whether or not a scientific theory is really true (in the mathematical sense) some time in college. I am only concerned with whether the predictions are accurate (and how accurate) and how well known the boundaries of application are.

    No well-educated physicist pretends that Newton's laws are rigorously true or exact (in light of quantum mechanics and relativity). But they are certainly good enough for 99% of mechanical engineering tasks. Improving our lives does not require any scientific theory to be rigorously "true," only that it work "well enough."

    As a practical matter, "well enough" usually means being measurably more accurate than the theory or model being replaced.
     
  7. Apr 22, 2016 #6
    Dr. Courtney:

    It's not about whether theories are "really true". They are not, for they are supported by inference and prediction (deduction only works within an inferential framework). It's about knowing the limits of the knowledge itself, and by "it" I mean the work of the physicist. Phenomenological descriptions of the world are the goal of the physicist, regardless of implications; one might as well know the basis of the work.

    PS: Of course, a lot of physicists work on life-improving applications of this or that, but I meant to center on the "pure" side of things. Knowledge for knowledge's sake. That necessarily includes the methodology itself.
     
  8. Apr 22, 2016 #7

    bapowell

    Science Advisor

    Thanks and fixed!
     
  9. Apr 22, 2016 #8
    This viewpoint is abandoned in most cases where the scientific claims have political or policy implications. No one is threatened by the physicist who doesn't believe Newton's laws, Quantum Mechanics, or General Relativity are "really true" (in nearly every sense in which physicists seem to make these claims).

    But lots of folks are threatened when theories with greater political import or public policy implications are challenged (vaccinations, evolution, climate change, etc.). Very few scientists are willing to acknowledge that these "theories" may not really be true, but are only approximations.

    This gives rise to Courtney's law: the accuracy of the error estimates is inversely proportional to the public policy implications.

    And the corollary: the likelihood of a claim for a theory to be absolutely true is directly proportional to the public policy implications.

    I tend to sense a different epistemology at work when the conversation shifts from how well a theory is supported by data and repeatable experiment to arguments from authority citing how many "experts" support a given view.

    Scientific inference and how we come to know stuff should be the same for theories with relatively little policy import as for theories with broad and important policy implications. The way most science with policy implications is presented to the public (and often taught in schools) suggests the rules of inference and how we come to know stuff are rather flexible, when they really should not be.

    When the success of education is judged by whether students agree with the scientific consensus on most issues, rather than by whether they really grasp how scientific inference works and how we come to know stuff, education is bound to fail, because the metric for determining success is fundamentally flawed.
     
    Last edited: Apr 22, 2016
  10. Apr 23, 2016 #9
    Hi Brian:

    I much like your presentation.

    It occurs to me that Hume's argument is based on an assumed definition of what knowledge is. The assumption seems to be
    "Knowledge" is a belief for which there is certainty that the belief is correct/true.
    It is the kind of knowledge associated with the theorems of pure math (and perhaps also theology). Even if we ignore the erroneous proofs of theorems that have occasionally fooled mathematicians in the past, this definition is completely incompatible with the concept of scientific knowledge, in which absolute certainty is recognized as always impossible. All scientific knowledge (excluding pure math knowledge) comes with a level of confidence which is less than certainty.

    Note: As an example of a false proof of a theorem that was widely accepted for a decade, see the history of the four color theorem. Here is a quote:
    One alleged proof was given by Alfred Kempe in 1879, which was widely acclaimed... It was not until 1890 that Kempe's proof was shown incorrect by Percy Heawood.​

    Regards,
    Buzz
     
  11. Apr 23, 2016 #10
    Dr. Courtney:

    Whether students agreeing with the consensus is a good measure depends on one's view of the purpose of education. Early 20th century education reformers like Dewey saw the purpose of education as the promotion of political unity. It's not clear to me that this has changed.

    As Brian Powell points out, truth is a slippery concept. Political unity is much easier to understand.
     
  12. Apr 24, 2016 #11

    bapowell

    Science Advisor

    Thank you Buzz. I'm not sure that this is the kind of knowledge that Hume has in mind. Though his work predates Kant's delineation between "analytic" and "synthetic" knowledge (that is, the distinction between logical and empirical truths), Hume understood that knowledge about the world must be ascertained by observing it. After all, he was one of the founders of empiricism. This is why he tangles with induction: it's the only way to acquire knowledge through experience. Even accepting that such knowledge is never perfect, the problem of induction illustrates that we cannot, in general, know how "good" our knowledge is. So Hume accepts that knowledge is imperfect; the problem of induction shows that we cannot know how imperfect it is.
     
  13. Apr 24, 2016 #12
    Hi Brian:

    I am not sure I understand what the above means. When science gives a calculated confidence level with a prediction/measurement, isn't that the same thing as knowing the degree of imperfection? I do not know Hume's history, so an explanation might be that he was unaware of probability theory.

    Regards,
    Buzz
     
  14. Apr 24, 2016 #13
    Every measurement in science is an estimated value rather than a perfectly determined value, and every confidence level or set of error bars is also an estimate.

    You usually have to read the fine print in a published paper to determine how error estimates are calculated, on what assumptions the confidence levels are based, etc. Common assumptions that may not always hold are that measurement errors are random and normally distributed.

    So in the language of your question, "the degree of imperfection" is not "known" exactly, it is only estimated, and that estimation is based on assumptions that may not always be true.

    Although most of my own published papers include error bars and confidence levels, in most cases we only use one technique (the most common in the field we're working in) to determine the error estimates we include in the paper. We do commonly estimate uncertainties with other available techniques as well, which I take to provide a ballpark sense of how accurate the error estimates themselves might be. It is nearly universal for different approaches to estimating uncertainties to yield slightly different results. One approach might suggest 1% uncertainty, while another approach might suggest 2%. I end up with a "gut feeling" that the actual uncertainty is likely between 1% and 2%.
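    To make that concrete, here is a rough sketch (hypothetical data, not from any of the papers mentioned) of two standard ways of estimating the uncertainty of a mean, one assuming normally distributed errors and one not; they typically agree only approximately.

    Code (Python):
    import random
    import statistics

    random.seed(0)
    # Hypothetical measurements of some quantity near 10.0
    data = [10.0 + random.gauss(0, 0.15) for _ in range(30)]

    # Technique 1: analytic standard error of the mean
    # (assumes random, roughly normal errors).
    sem = statistics.stdev(data) / len(data) ** 0.5

    # Technique 2: bootstrap resampling (drops the normality assumption).
    boot_means = [
        statistics.fmean(random.choices(data, k=len(data)))
        for _ in range(2000)
    ]
    boot_err = statistics.stdev(boot_means)

    print(f"analytic SEM:       {sem:.4f}")
    print(f"bootstrap estimate: {boot_err:.4f}")  # close to, but not equal to, the SEM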
     
  15. Apr 24, 2016 #14
    Kant (1724-1804) argued that a priori knowledge exists; that is, some things can be known before [without] reference to our senses. Hume (1711-1776) argued that only a posteriori knowledge exists; knowledge is gained exclusively after perception through the senses.

    Hume promoted a purer form of empiricism, in which passion ruled, while Kant believed rational thought gave form to our observations.

    So Hume would point out that any "gut feeling" is just that. Perhaps with more observation, the confidence limit could approach zero, but it couldn't ever reach zero.

    Kant might argue that with proper math, the limit could be made arbitrarily small. But the key would be well reasoned math and a good model.

    So there's an open question about the relationship between "the map and the territory". Can a model be made so precise that it becomes the territory? Or in more modern terms, do we live in The Matrix? In a digital universe, the two can be identical. In a chaotic universe (in the mathematical sense), they cannot be.

    IMO, science today should assume the least restrictive option, that the map is not the territory. Models represent reality, but are not reality. Perhaps that is not true, but assuming the opposite could lead to errors where we assume a poor map is better than it is.

    Still, this borders on matters of faith and people's basic motivations in pursuing science. I would not want to discourage others from seeking their own truths.
     
  16. Apr 24, 2016 #15

    bapowell

    Science Advisor

    The problem is that confidence limits, and in fact all statistical inference, are based on the premise that induction works: that we can generalize individual measurements here and now to universal law. The tools of statistical inference are used to quantify our uncertainty under the presumption that the thing being measured is projectable, but, as I discuss in the note, it's not always easy to know that you're projecting the right thing (that you're making the right inductive inference.) As I hope to show over the course of the next few notes, we can still make progress in science despite these limitations; and yes, statistics has much to say on it (though not the sampling statistics we use to summarize our data...)
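    As a toy illustration of that premise (hypothetical numbers, not from the article): a confidence interval quantifies uncertainty only on the assumption that future measurements behave like past ones; if the quantity being measured drifts, the interval built from past data says nothing about what comes next.

    Code (Python):
    import random
    import statistics

    random.seed(1)

    # Past observations, treated by the analyst as draws from a fixed distribution.
    past = [random.gauss(5.0, 0.2) for _ in range(50)]
    mean = statistics.fmean(past)
    sem = statistics.stdev(past) / len(past) ** 0.5
    print(f"95% CI from past data: {mean - 1.96 * sem:.2f} to {mean + 1.96 * sem:.2f}")

    # If the quantity is not projectable -- here it drifts upward over time --
    # the interval fails, and nothing in the past data warned us.
    future = [random.gauss(5.0 + 0.05 * t, 0.2) for t in range(50)]
    print(f"mean of later data:    {statistics.fmean(future):.2f}")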
     
  17. Apr 24, 2016 #16
    Well said, and well explained. Thanks.

    I tend to view pure math like Kant: given Euclid's axioms, the whole of Euclidean geometry can be produced with certainty without regard for sensory perception.

    But I view the natural sciences like Hume: repeatable experiment alone is the ultimate arbiter, knowledge without sensory perception (experiment) is impossible, and since all measurement has uncertainty, so does all theory.

    So in pure math, I am comfortable with the map being the territory. But in natural science, the map is not the territory. To me, this is the fundamental distinction between pure math and natural science, and it tends to be underappreciated by students at most levels.
     
  18. Apr 24, 2016 #17

    bapowell

    Science Advisor

    Kant would agree with you. He held that knowledge was either analytic (true by logical necessity) or synthetic (knowable only through experience). Scientific knowledge is in the latter category.
     
    Last edited: Apr 24, 2016
  19. Apr 24, 2016 #18
    Even in pure math the answer isn't clear to me. Gödel proved a disturbing theorem (the incompleteness theorem) which approaches this point tangentially. It is possible that even some theorems in math depend on viewpoint, and thus on sense data.

    But since the incompleteness theorem doesn't apply directly, there's the possibility it doesn't apply, or more disturbingly only applies from some points of view.

    I wish I was smart enough to understand rather than guess.
     
  20. Apr 25, 2016 #19
    What works for me is the concept that a distinction exists between TRUE knowledge (based on provable truth like pure math) and USEFUL knowledge (based only on practical usefulness like the rest of science). USEFUL knowledge comes with an estimate of how likely it is that the knowledge will continue to be useful for some useful period of time.

    Regards,
    Buzz
     
  21. Apr 25, 2016 #20

    PeterDonis

    2016 Award

    Staff: Mentor

    But these provable truths are only provable in the context of a particular set of abstract axioms, which might or might not actually apply to the real world. That is why Bertrand Russell (IIRC) said that to the degree that the statements of mathematics refer to reality, they are not certain, and to the degree that they are certain, they do not refer to reality.
     