
The Truth Wears Off

  1. Jul 30, 2015 #1

    Stephen Tashi

    Science Advisor

  3. Jul 30, 2015 #2
    People, and scientists are people, find some ideas more attractive than others.
    I know PhDs who "like" black holes, wormholes and many-worlds theories.
    In science you have to be particularly wary of concepts that you "like".
    Such emotional attachment easily generates myths and belief systems.
    In fact, I like the idea that attractive concepts are the stuff that all myths are made of.
    An example of bias at the highest level is the book title: "Under the spell of the gauge principle".
    Unless the title is ironic, which I doubt, it indicates 100% bias.
    The author will indeed never consider a theory that is not gauge invariant.
     
    Last edited: Jul 30, 2015
  4. Jul 30, 2015 #3

    ShayanJ

    Gold Member

    I don't think it's about bias (at least not entirely). It seems to me to be largely a statistical effect: it's not only a consequence of the methods used in a particular line of research, but an effect you can find in any general data-analysis scenario.
    When you first encounter statistics and data analysis, it doesn't seem like an exciting subject, but it actually is, and the decline effect (which I only just became aware of) is not the only reason I say this, although it is a very interesting effect.
     
  5. Jul 30, 2015 #4
    Fantastic article! Thanks for posting that!
     
  6. Jul 30, 2015 #5
    I agree that statistics plays a role in it, but isn't the failure to publish null results itself a product of bias, albeit on the part of the journals rather than the researchers?
     
    Last edited: Jul 30, 2015
  7. Jul 30, 2015 #6

    ShayanJ

    Gold Member

    You're right. But when I look at the whole picture, I see things that aren't completely explained by bias. Schooler, although aware of the effect, tried for many years to replicate his results and couldn't. Note that this was a solitary effort: it wasn't several research groups communicating via journals, but a single researcher, and he still observed the effect. Also, he ran the precognition experiment specifically to observe the decline effect. By that time he surely knew it could be due to bias, so it's natural to assume he was trying to reduce the influence of bias, yet he observed the effect again in the precognition experiment!

    Also, a question occurred to me: what about a decline effect in the data gathered to study the decline effect itself? People will surely try to reduce the decline effect, so it should be much smaller in the future; but what if they didn't try? Would we see a decline effect in such experiments? One might suggest that reducing bias to make experiments better, which naturally shrinks the decline effect over time, is exactly what is responsible for the decline effect in the experiments where we do observe it. But I'm not sure about this.

    I should confess that, now that I think about it more carefully, I agree with you that it's about bias (whether from individual research groups or from the scientific community and journals as a whole), but I still see some strange things about it!
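
    For what it's worth, here is a minimal sketch of how selective publication plus regression to the mean can produce a decline effect on their own. The numbers (a true effect of 0.2 standard deviations, small early studies, larger later replications, a p < 0.05 publication filter) are illustrative assumptions of mine, not values from the article:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    TRUE_EFFECT = 0.2  # assumed true effect, in standard deviations

    def mean_published_effect(n_per_group, n_studies, filter_significant):
        """Average effect size across the studies that get 'published'."""
        kept = []
        for _ in range(n_studies):
            treated = rng.normal(TRUE_EFFECT, 1.0, n_per_group)
            control = rng.normal(0.0, 1.0, n_per_group)
            t_stat, p_value = stats.ttest_ind(treated, control)
            effect = treated.mean() - control.mean()
            # Journals in this toy model only accept positive, significant results.
            if not filter_significant or (p_value < 0.05 and effect > 0):
                kept.append(effect)
        return float(np.mean(kept))

    # Early literature: small studies, only significant results published.
    print(mean_published_effect(20, 5000, filter_significant=True))   # inflated, well above 0.2
    # Later replications: larger studies, everything reported.
    print(mean_published_effect(200, 5000, filter_significant=False)) # close to the true 0.2

    In this toy model the "effect" hasn't changed at all; only the filter applied to the early studies has.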
     
  8. Jul 30, 2015 #7
    I read that article several years ago, and I too found it interesting. Millikan's oil-drop experiment has also been used as an example of this effect; the Wikipedia article on the experiment recounts the story as told by Richard Feynman.
     
  9. Aug 4, 2015 #8

    Drakkith


    Staff: Mentor

  10. Aug 4, 2015 #9

    BWV


    Here is the Ioannidis paper on the selection-bias problem that is mentioned in the article:

    Why Most Published Research Findings are False
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/

    One problem not mentioned in the New Yorker article is how this is exploited by pseudoscience. For example, there are a few fluky studies (other than the Wakefield one) showing a correlation between vaccines and autism that anti-vaxxers point to, but they invariably suffer from small sample sizes, and the results don't hold up across multiple studies.
     
  11. Aug 4, 2015 #10

    WWGD

    Science Advisor
    Gold Member

    Interestingly enough, the author of the article was accused of ethical misconduct:
    http://www.cuil.pt/r.php?cx=002825717068136152164:qf0jmwd8jku&cof=FORID:10&ie=UTF-8&q=jonah+lehrer,+ehics&sa=Search

    Which leads me to dismiss his findings ;).

    As someone said, failure does not sell papers. People want excitement, not failure or deep reflection.
    I think a true researcher must be a "glass half-empty" type of person, finding flaws everywhere (especially in his own experiments). The status quo does not reward this approach or this personality type.

    I suggest that people interested in this article also read the book "The Signal and the Noise" by Nate Silver (whose predictions in national and state elections have largely come true, by the way) on which personality traits and other factors contribute to accurate predictions.
     
    Last edited: Aug 4, 2015
  12. Aug 4, 2015 #11
    I have always believed that any experiment which relies heavily on statistics to validate its findings becomes suspect. Such experiments involve assumptions about how a particular statistical method applies to a particular study, assumptions that are either tacitly accepted as obviously true or adopted unconsciously. In the cocaine mouse experiment performed at three different labs, although pains were taken to eliminate every recognizable influence on the outcome, the difference in outcomes seems more than random. But that judgment assumes such large deviations are not random, so what counts as a reasonable deviation? Counts of low-probability events follow a Poisson distribution, which is distinctly asymmetric: its long tail lies in the direction of larger values, so when an extreme deviation from the mean does occur in a single observation, it is more likely to be on the high side.
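
    As a rough illustration of that asymmetry, here is a short sketch; the mean of 2 counts is a made-up number for illustration, not a value from the mouse study:

    from scipy import stats

    lam = 2.0                    # assumed Poisson mean (and variance) for a rare-event count
    sigma = lam ** 0.5
    hi = lam + 3 * sigma         # about 6.2 counts
    lo = lam - 3 * sigma         # about -2.2 counts, impossible for count data

    p_high = 1 - stats.poisson.cdf(hi, lam)  # P(X > mean + 3 sigma), roughly 0.004
    p_low = stats.poisson.cdf(lo, lam)       # P(X < mean - 3 sigma), exactly 0
    print(p_high, p_low)  # large deviations can only occur on the high side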

    In physics experiments one generally attempts to reduce statistical uncertainty to a low level of importance compared to other uncertainties, yet even so great disparities are found between carefully executed experiments; see, for example, "The Search for Newton's Constant" by Clive Speake and Terry Quinn, Physics Today, July 2014. Not everything is apparent. So how easy must it be to miss a relevant factor in even a simple living system?

    Recall the famous comparison of statistics to a bikini: what it reveals is enticing, but what it conceals is vital.
     
  13. Aug 4, 2015 #12
    So...science is invalid because some results turn out to be false?

    It sounds like what happened to Schooler is that as he became more experienced with research and was given access to larger sample sizes and better techniques and equipment, ie he was doing better experiments and collecting higher-quality data, it turned out that his preliminary results were mistaken either by statistical fluke or by the limitations that a researcher at the very beginning of his career would have to deal with.

    The general contention of this essay is that isolated unusual results indicate an underlying problem with science itself. There are always going to be flukes, anomalies, and outright mistakes and even impropriety on the part of researchers. But how can anyone honestly say that we are just as ignorant now as we were 500 years ago? A 2% discrepancy in the velocities of weights dropped into a borehole from what was predicted mathematically does not mean that science itself is fundamentally flawed, it means that there is a 2% discrepancy in the velocity of weights dropped into a borehole, and that this discrepancy needs to be accounted for. The only other option is to attribute it to the supernatural, and that's hardly going to reveal anything.

    I like Adam Savage's quote: "Science isn't about facts and absolute results, science is about ignorance. In particular it's about collective ignorance, and when you do science you're trying to eliminate little pockets of ignorance so that at the end of the day you're a little less wrong than you were the day before." True, f = ma is not the complete picture of mechanics. On the other hand, when you learn even the most basic of Newtonian physics, you know far more about the universe than you did before you learned it. And there still exist plenty of situations where Newtonian physics is enough to understand them.

    What this article (rightly) recognizes is the importance of being aware of one's own biases, taking preliminary results with a grain of salt until they have been replicated, and being wary of working alone or with small sample sizes. None of that implies that there is something "wrong" with science itself. I think the author of this article kind of tips his hand with the phrase "Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true." No, the very definition of "truthful" is "proven"; otherwise we're just running around believing whatever we want regardless of how it matches up with the evidence. When something previously believed to be true later turns out to be false, that doesn't mean that science is broken and can't explain everything and whatever other quasi-spiritual claptrap, it means that it was false to begin with, and that now we know why.
     
  14. Aug 5, 2015 #13
    You seem to be suggesting that a thing can be cleared up definitively, and that the things the article talks about are all errors that have since been corrected. One of the problems the article raises is that they don't get corrected: people cling to them despite contradictory evidence.
     
  15. Aug 5, 2015 #14
    Not that anything can be cleared up "definitively"; rather, science doesn't even claim to provide definitive answers in the first place. The article's premise is that scientists are human and sometimes make mistakes, that as a result science can't be said to provide any sort of absolute and universal truth, that promising and interesting new results sometimes turn out to be invalid, and that we should therefore be skeptical of anything that claims such certainty (all of which is true). But from there it extrapolates that all of the understanding of the world provided by the scientific method is invalid (which is ludicrous).
     
  16. Aug 5, 2015 #15
    Where does it make this claim? I didn't see any statement to this effect.
     
  17. Aug 5, 2015 #16
    The points of the article, as I read it, are these: a decline effect due to small sample sizes and publication bias, and an exaggeration of positive results due to perception biases that lead to selective publishing (reporting bias).
    It also makes the point that the scientific method is in many cases not applied to a rigorous standard.
    Here's a paper it mentions, from PLOS Medicine:
    Why Most Published Research Findings Are False
    John P. A. Ioannidis
    Abstract
    There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
    http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
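
    The core quantity in that framework fits in a few lines. Here is a sketch of the positive predictive value formula from the paper; the example numbers for R, alpha and beta are my own illustrative choices, not the paper's, and the paper's bias-adjusted version is not reproduced here:

    def ppv(R, alpha=0.05, beta=0.20):
        """Positive predictive value: probability that a 'significant' finding is true.
        R     = pre-study odds that a probed relationship is real
        alpha = type I error rate (significance threshold)
        beta  = type II error rate (1 - power)
        """
        return (1 - beta) * R / (R + alpha - beta * R)

    print(ppv(R=1.0))             # well-motivated hypothesis, good power -> ~0.94
    print(ppv(R=0.1, beta=0.5))   # exploratory field, modest power       -> ~0.50
    print(ppv(R=0.01, beta=0.8))  # data dredging, tiny prior odds        -> ~0.04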
     
  18. Aug 5, 2015 #17

    Stephen Tashi

    Science Advisor

    There are various kinds of "bias". The article (linked in the OP) deals directly with the kind of bias that is due to decisions about whether to submit certain studies and whether to publish them. It mentions another type of bias that may be due to "rigging" the population of people or data that are used in a study.

    An experiment cannot be "repeated" unless we impose human opinions about it. We must accept a theory that tells us what aspects of it must be duplicated and what aspects can be ignored. The theory may be a predictive model - for example, a model of mechanics that implies it doesn't matter what color shoes the lab technicians wear. The theory may merely be a set of requirements for a population of subjects (e.g. require that they be in a given age range, have a given range of daily internet usage, and have 20/20 corrected visual acuity, while not requiring that they have the same eye color or weight range). Avoiding bias doesn't mean avoiding all opinions about the phenomena being studied.
     
  19. Aug 5, 2015 #18

    PeroK

    Science Advisor
    Homework Helper
    Gold Member

    All may not be lost. If the "truth wears off" phenomenon does indeed exist, then perhaps the "decline effect" itself will decline over time.
     
  20. Aug 5, 2015 #19
    The decline of The Decline Effect would merely confirm The Decline Effect.
     
  21. Aug 5, 2015 #20
    Back up a bit, though. A theory doesn't merely describe a phenomenon, it attempts to explain it. So a lot of the things mentioned in the article can't even be called "theories." For example, the studies that showed certain drugs to be effective against psychiatric symptoms were simple statements of the "when you do this, it does that" type: "When a schizophrenic takes this chemical compound, his schizophrenic symptoms lessen." This isn't worked up into a theory (although it might conceivably be used as a postulate on which to base one); it is really more like an observation presented as a candidate to become an axiom or law: 'A large percentage of people with cirrhosis of the liver also drink more than amount x of alcohol per unit time (adjusted for body mass, etc.). Therefore, it may be axiomatic that more than amount x of alcohol per unit time (adjusted for body mass, etc.) causes cirrhosis.' The original paper describing verbal overshadowing was not proposing a theory; it was offering a candidate for axiom-hood: that verbal descriptions actually weaken rather than strengthen memory. The paper was primarily a description of an observed phenomenon, not an attempt to explain it. Papers that describe observed phenomena often include tentative explanations, but when explanation is not the primary purpose, I don't think you can call the result a "theory."
     