The Truth Wears Off: Is There Something Wrong With the Scientific Method?

  • Thread starter Stephen Tashi
In summary, the author discusses the bias scientists can have toward concepts they like, and how this bias, together with small sample sizes, can lead to incorrect conclusions.
  • #2
People, and scientists are people, find some ideas more attractive than others.
I know PhD's who "like" black holes, worm holes and many-world theories.
In science you have to be particularly wary of concepts that you "like".
Such emotional attachment easily generates myths and belief systems.
In fact, I like the idea that attractive concepts are the stuff that all myths are made of.
An example of bias at the highest level is the book title: "Under the spell of the gauge principle".
Unless the title is ironic, which I doubt, it indicates a 100% bias.
The author indeed will never consider a theory that is not gauge invariant.
 
  • #3
I don't think it's about bias (at least not completely). It seems to me that it's largely a statistical effect; I mean, it isn't entirely due to the methods used in the research, but is an effect you can find in any general data-analysis scenario.
When you first encounter statistics and data analysis it doesn't seem to be an exciting science, but it actually is very exciting, and the decline effect (which I only just became aware of) is not the only reason I say this, although it's a very interesting effect.
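To illustrate the purely statistical side of this, here is a minimal toy simulation (my own sketch; the effect size, noise level, and selection threshold are all arbitrary). If the most striking initial estimates are the ones that attract attention, their replications will on average come out smaller even though every individual study is unbiased, simply by regression to the mean:

```python
# Toy simulation (my own sketch; numbers are arbitrary): every study below is
# unbiased and measures the same true effect, yet the studies with the most
# striking initial estimates "decline" on replication, purely by regression
# to the mean.
import numpy as np

rng = np.random.default_rng(0)

n_studies = 10_000
true_effect = 0.2        # identical true effect in every study
noise_sd = 0.5           # sampling noise in each reported estimate

initial = true_effect + noise_sd * rng.standard_normal(n_studies)
replication = true_effect + noise_sd * rng.standard_normal(n_studies)

# Suppose only the most striking initial results get noticed and followed up.
noticed = initial > 1.0  # arbitrary "exciting result" threshold

print(f"mean initial estimate (noticed studies):  {initial[noticed].mean():.2f}")
print(f"mean replication estimate (same studies): {replication[noticed].mean():.2f}")
print(f"true effect:                              {true_effect:.2f}")
```

The "noticed" studies look impressive at first and then drift back toward the true effect on replication, with no misconduct anywhere in the picture.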
 
  • #4
Fantastic article! Thanks for posting that!
 
  • #5
Shyan said:
I don't think it's about bias (at least not completely).
I agree statistics plays a role in it, but isn't not publishing null results a product of bias itself, albeit on the part of the journals and not the researchers?
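As a hedged sketch of that journal-side bias (my own toy numbers, not from any real study): if only statistically significant results get published, the published effect sizes systematically overestimate the true effect, even when no individual researcher does anything wrong.

```python
# Toy "file drawer" sketch (my numbers, not from any real study): many small
# unbiased studies of a weak effect, of which only the statistically
# significant ones get "published".
import numpy as np

rng = np.random.default_rng(1)

true_effect = 0.1     # small real effect, in standard-deviation units
n_per_study = 20      # small samples
n_studies = 20_000

# Each study reports its sample mean; call it "significant" if it beats a
# rough 5% two-sided z-threshold against a null of zero effect.
estimates = true_effect + rng.standard_normal((n_studies, n_per_study)).mean(axis=1)
standard_error = 1 / np.sqrt(n_per_study)
published = estimates > 1.96 * standard_error

print(f"fraction of studies 'published': {published.mean():.3f}")
print(f"mean published effect estimate:  {estimates[published].mean():.2f}")
print(f"true effect:                     {true_effect:.2f}")
```

With a true effect of 0.1 and samples of 20, only the luckiest studies clear the significance bar, and their average reported effect is several times the truth; later, larger studies then look like a "decline".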
 
  • #6
Enigman said:
I agree statistics plays a role in it, but isn't not publishing null results a product of bias itself, albeit on the part of the journals and not the researchers?
You're right. But when I consider the whole picture, I see things that aren't completely explained by bias. Schooler, although aware of the effect, tried for many years to replicate his results, but he couldn't. Note that this was a solitary effort; it wasn't about several research groups communicating via journals, but a single researcher, and still he observed the effect. Also, he ran the precognition experiment precisely in order to observe the decline effect, and by that time he was surely aware that it could be due to bias, so it's natural to assume he was trying to reduce the effect of bias, yet he observed the effect again in the precognition experiment!

A question also occurred to me. What about the decline effect in the analysis of data gathered to observe the decline effect itself? Surely people will try to reduce the decline effect and it will be much smaller in the future, but what if people didn't try to reduce it? Would we see a decline effect in such experiments? One may suggest that the reduction of bias to make experiments better, which naturally reduces the decline effect in the future, is exactly the thing responsible for the decline effect in the experiments where we observe it. But I'm not sure of this.

I should confess, now that I think about it more carefully, I agree with you that it's about the effect of bias (either due to single research groups or the scientific community and journals as a whole), but I still see some strange things about it!
 
  • #7
I read that article several years ago, and I too found it interesting. Millikan's oil-drop experiment has also been used as an example of this effect. You can read about it in the Wikipedia article on the experiment, which recounts the story as told by Richard Feynman.
 
  • #9
Here is the Ioannidis paper on the selection-bias problem mentioned in the article:

Why Most Published Research Findings Are False
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/

One problem not mentioned in the New Yorker article is how this is exploited by pseudoscience. For example, there are a few flukey studies (other than the Wakefield one) showing a correlation between vaccines and autism which anti-vaxxers point to, but they inevitably suffer from small sample sizes, and the results don't hold up across multiple studies.
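The small-sample point is easy to demonstrate with a quick simulation (mine, not drawn from any vaccine study): even when two variables are completely independent, a predictable fraction of small studies will report a "significant" correlation, and those are exactly the flukes that fail to hold up.

```python
# Quick sketch (mine, not from any real study): two variables that are
# independent by construction, studied many times with small samples. A
# predictable fraction of studies still report a "significant" correlation.
import numpy as np

rng = np.random.default_rng(2)

n_studies, n_subjects = 5_000, 30
x = rng.standard_normal((n_studies, n_subjects))
y = rng.standard_normal((n_studies, n_subjects))   # independent of x

# Pearson correlation within each study.
xc = x - x.mean(axis=1, keepdims=True)
yc = y - y.mean(axis=1, keepdims=True)
r = (xc * yc).sum(axis=1) / np.sqrt((xc ** 2).sum(axis=1) * (yc ** 2).sum(axis=1))

# Rough 5% two-sided threshold for r at n = 30 (large-sample approximation).
critical = 1.96 / np.sqrt(n_subjects)
print(f"studies reporting a 'significant' correlation: {(np.abs(r) > critical).mean():.1%}")
```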
 
  • #10
Interestingly enough, the author of the article was accused of ethical misconduct:
http://www.cuil.pt/r.php?cx=002825717068136152164:qf0jmwd8jku&cof=FORID:10&ie=UTF-8&q=jonah+lehrer,+ehics&sa=Search

Which leads me to dismiss his findings ;).

Like someone said, failure does not sell papers. People want excitement, not failure or deep reflection.
I think a true researcher must be a "glass half-empty" type of person, finding flaws everywhere (especially in his own experiments). The status quo does not reward this approach or this personality type.

I suggest that people interested in this article also read the book "The Signal and the Noise" by Nate Silver (whose predictions of national and state elections largely came true, BTW) on what personality traits and other factors contribute to accurate predictions.
 
  • #11
I have always believed that any experiment that relies heavily on statistics to validate its findings is suspect. Such experiments involve assumptions about the applicability of a particular statistical method to a particular study, assumptions that are either tacitly accepted as obviously true or accepted unconsciously. In the cocaine mouse experiment performed at three different labs, although pains were taken to eliminate all recognizable influences on the outcome, the difference in outcomes seems more than random. But this assumes that such large deviations are not random; what is a reasonable deviation? Events of low probability can be described by a Poisson distribution, which is distinctly asymmetric, favoring large deviations in the direction of greater values. So a single observation of such a system gives you a greater chance of finding a value far above the mean than one equally far below it.
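For what it's worth, that asymmetry is easy to check numerically (a quick sketch of my own; lam = 3 is an arbitrary choice): the Poisson's left tail is cut off at zero while its right tail is unbounded, so a deviation of a given size is more likely to fall above the mean than equally far below it.

```python
# Numerical check of the asymmetry (my addition; lam = 3 is arbitrary): the
# Poisson's left tail is cut off at zero while its right tail is unbounded,
# so a deviation of a given size is more likely to fall above the mean than
# equally far below it.
from scipy.stats import poisson

lam = 3   # mean (and variance) of the Poisson distribution
k = 3     # size of the deviation from the mean

above = poisson.sf(lam + k - 1, lam)   # P(X >= lam + k)
below = poisson.cdf(lam - k, lam)      # P(X <= lam - k)

print(f"P(X >= {lam + k}) = {above:.4f}")
print(f"P(X <= {lam - k}) = {below:.4f}")
print(f"skewness = 1/sqrt(lam) = {lam ** -0.5:.3f}  (positive, i.e. right-skewed)")
```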

In physics experiments one generally attempts to reduce statistical uncertainty to a low level of importance compared to other uncertainties, but even so one finds great disparities between carefully executed experiments. See, for example, "The Search for Newton's Constant" by Clive Speake and Terry Quinn, Physics Today, July 2014. Not everything is apparent. So how easy must it be to miss a relevant factor in even a simple living system?

Recall the famous comparison of statistics to a bikini: what it reveals is enticing, but what it conceals is vital.
 
  • #12
So...science is invalid because some results turn out to be false?

It sounds like what happened to Schooler is that as he became more experienced with research and gained access to larger sample sizes, better techniques, and better equipment, i.e. as he did better experiments and collected higher-quality data, it turned out that his preliminary results were mistaken, either through statistical fluke or through the limitations a researcher at the very beginning of his career has to deal with.

The general contention of this essay is that isolated unusual results indicate an underlying problem with science itself. There are always going to be flukes, anomalies, and outright mistakes, and even impropriety on the part of researchers. But how can anyone honestly say that we are just as ignorant now as we were 500 years ago? A 2% discrepancy between the measured velocities of weights dropped into a borehole and what was predicted mathematically does not mean that science itself is fundamentally flawed; it means that there is a 2% discrepancy in the velocity of weights dropped into a borehole, and that this discrepancy needs to be accounted for. The only other option is to attribute it to the supernatural, and that's hardly going to reveal anything.

I like Adam Savage's quote: "Science isn't about facts and absolute results, science is about ignorance. In particular it's about collective ignorance, and when you do science you're trying to eliminate little pockets of ignorance so that at the end of the day you're a little less wrong than you were the day before." True, f = ma is not the complete picture of mechanics. On the other hand, when you learn even the most basic of Newtonian physics, you know far more about the universe than you did before you learned it. And there still exist plenty of situations where Newtonian physics is enough to understand them.

What this article (rightly) recognizes is the importance of being aware of one's own biases, taking preliminary results with a grain of salt until they have been replicated, and being wary of working alone or with small sample sizes. None of that implies that there is something "wrong" with science itself. I think the author of this article kind of tips his hand with the phrase "Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true." No, the very definition of "truthful" is "proven"; otherwise we're just running around believing whatever we want regardless of how it matches up with the evidence. When something previously believed to be true later turns out to be false, that doesn't mean that science is broken and can't explain everything and whatever other quasi-spiritual claptrap, it means that it was false to begin with, and that now we know why.
 
  • #13
jack476 said:
So...science is invalid because some results turn out to be false?...

...When something previously believed to be true later turns out to be false, that doesn't mean that science is broken and can't explain everything and whatever other quasi-spiritual claptrap, it means that it was false to begin with, and that now we know why.
You seem to be suggesting that a thing can be cleared up definitively, and that the things the article talks about are all errors that are now corrected. One of the problems the article raises is that they don't get corrected. People cling to them despite contradictory evidence:
Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything.
 
  • #14
zoobyshoe said:
You seem to be suggesting that a thing can be cleared up definitively, and that the things the article talks about are all errors that are now corrected. One of the problems the article raises is that they don't get corrected. People cling to them despite contradictory evidence:

Not that anything can be cleared up "definitively", just that science doesn't even claim to provide definitive answers in the first place. The article's premise is that scientists are human and have human failings, that they sometimes make mistakes, that promising and interesting new results sometimes turn out to be invalid, that as a result science can't be said to provide any sort of absolute and universal truth, and that we should therefore be skeptical of anything that claims to (all of which is true). But from there it extrapolates that all of the understanding of the world that has been provided by the scientific method is invalid (which is ludicrous).
 
  • #15
jack476 said:
But from there it extrapolates that all of the understanding of the world that has been provided by the scientific method is invalid (which is ludicrous).
Where does it make this claim? I didn't see any statement to this effect.
 
  • #16
As I read it, the points of the article are these:
the decline effect arises from small sample sizes and publication bias, and positive results get exaggerated through perception biases that lead to selective publishing (reporting bias).
It also makes the point that in many cases the scientific method is not applied to a rigorous standard.
Here's a paper it mentions, from PLOS Medicine.
Why Most Published Research Findings Are False
  • John P. A. Ioannidis
Abstract
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
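For anyone who wants to play with the framework, here is a small calculator based on my reading of the paper's model (the function and the example numbers are mine, not quoted from Ioannidis): the positive predictive value of a claimed finding depends on the prior odds R that a probed relationship is real, the type II error rate beta, the significance level alpha, and a bias term u.

```python
# Sketch of the framework described in the abstract above, based on my reading
# of the paper's model (the function and example numbers are mine, not quoted
# from Ioannidis). R is the prior odds that a probed relationship is real,
# beta the type II error rate, alpha the significance level, and u a bias term
# (the fraction of otherwise-negative analyses reported as positive anyway).
def positive_predictive_value(R: float, beta: float, alpha: float = 0.05,
                              u: float = 0.0) -> float:
    """Probability that a claimed (statistically significant) finding is true."""
    true_positives = R * (1 - beta + u * beta)
    false_positives = alpha + u * (1 - alpha)
    return true_positives / (true_positives + false_positives)

# An underpowered exploratory field: 1 true relationship per 10 probed,
# 20% power, the usual 5% significance level, and 20% bias.
print(f"PPV = {positive_predictive_value(R=0.1, beta=0.8, alpha=0.05, u=0.2):.2f}")
```

Plugging in an underpowered, modestly biased exploratory field gives a PPV of roughly 0.13, i.e. most claimed findings in that regime would be false, which is the paper's headline point.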
 
  • #17
There are various kinds of "bias". The article (linked in the OP) deals directly with the kind of bias that is due to decisions about whether to submit certain studies and whether to publish them. It mentions another type of bias that may be due to "rigging" the population of people or data that are used in a study.

An experiment cannot be "repeated" unless we impose human opinions about it. We must accept a theory that tells what aspects of it must be duplicated and what aspects can be ignored. The theory may be a predictive model - for example, a model of mechanics that implies it doesn't matter what color shoes the lab technicians wear. The theory may merely be a set of requirements for a population of subjects (e.g. require that they be in a given age range, have a given range of daily internet usage, and 20/20 corrected visual acuity, but not require that they have the same eye color or weight range). Avoiding bias doesn't mean avoiding all opinions about the phenomena being studied.
 
  • #18
All may not be lost. If the "truth wears off" phenomenon does indeed exist, then the "decline effect" itself will decline over time, perhaps.
 
  • #19
PeroK said:
All may not be lost. If the "truth wears off" phenomenon does indeed exist, then the "decline effect" itself will decline over time, perhaps.
The decline of The Decline Effect would merely confirm The Decline Effect.
 
  • #20
Stephen Tashi said:
We must accept a theory that tells what aspects of it must be duplicated and what aspects can be ignored. The theory may be a predictive model - for example, a model of mechanics that implies it doesn't matter what color shoes the lab technicians wear. The theory may merely be a set of requirements for a population of subjects (e.g. require that they be in a given age range, have a given range of daily internet usage, and 20/20 corrected visual acuity, but not require that they have the same eye color or weight range). Avoiding bias doesn't mean avoiding all opinions about the phenomena being studied.
Back up a bit, though. A theory doesn't merely describe a phenomenon, it attempts to explain it. So a lot of the things mentioned in the article can't even be called "theories." For example, the studies that showed certain drugs to be effective against psychiatric symptoms were simple statements of the "when you do this, it does that" type: "When a schizophrenic takes this chemical compound, his schizophrenic symptoms lessen." This isn't worked up into a theory (although it might conceivably be utilized as a postulate on which to base a theory); it is really more like an observation presented as a candidate to become an axiom or law: 'A large percentage of people with cirrhosis of the liver also drink more than amount x of alcohol per unit time (adjusted for body mass, etc.). Therefore, it may be axiomatic that more than amount x of alcohol per unit time (adjusted for body mass, etc.) causes cirrhosis.' The original paper describing Verbal Overshadowing was not proposing a theory; it was offering a candidate for axiom-hood: that verbal descriptions actually weaken rather than strengthen memory. The paper was primarily a description of an observed phenomenon, not primarily an attempt to explain it. Papers that describe observed phenomena often include tentative explanations, but when explanation is not the primary purpose, I don't think you can call the result a "theory."
 
  • #21
zoobyshoe said:
Back up a bit, though. A theory doesn't merely describe a phenomenon, it attempts to explain it.

Yes, the definition of "theory" can be debated. We could get into the difference between "predict" and "explain". At any rate, to repeat an experiment requires that something specify what aspects must be repeated. (I'm talking about repeating "the same" experiment, not doing a modified version of it.)
 
  • #22
Stephen Tashi said:
Yes, the definition of "theory" can be debated.
No, the definition of "theory" can be looked up.
We could get into the difference between "predict" and "explain".
The way you say this suggests you think there's some confusion about their meanings that would lead to discussion in and of itself. There isn't. Both words can be looked up.

This is the way it works: lexicographers scour any and all literature to determine what most people mean when they use a word. They publish that meaning in dictionaries, and other people can look it up. If two people happen to disagree over the meaning of a word, they agree to accept the dictionary definition.

If a word has a specialized meaning in a certain field (for example, physics), that, too, can be looked up.

If a person doesn't agree to this system, their every utterance becomes suspect; of indeterminate meaning.
At any rate, to repeat an experiment requires that something specify what aspects must be repeated. (I'm talking about repeating "the same" experiment, not doing a modified version of it.)
Yes. I'm not contesting this particular aspect of your point. Unfortunately, this point was mixed in with the erroneous notion that "theory," "observation," "axiom," "postulate," etc. are all pretty much synonymous and interchangeable. So, your post #17 about experiment requiring "human opinion" about the parameters is vague and mushy. For example, your statement, "We must accept a theory that tells what aspects of it must be duplicated and what aspects can be ignored," is a statement that is certainly not true. You are using the word "theory" when you probably (I'm surmising) mean "experiment." No one is obligated to accept a theory because its parameters are sharp. That's simply not evidence the theory holds water.

The point you seem to be making about "human opinion" governing the parameters is conceivably a good one, but I can't tell, because you seem to have a very impressionistic sense of word meanings.
 
  • #23
zoobyshoe said:
Both words can be looked up.
Did you have a particular dictionary in mind?

This is the way it works: lexicographers scour any and all literature to determine what most people mean when they use a word. They publish that meaning in dictionaries, and other people can look it up. If two people happen to disagree over the meaning of a word, they agree to accept the dictionary definition.
Usually, there isn't such a thing as "the" dictionary definition since a single word can have a variety of meanings.

If a word has a specialized meaning in a certain field (for example, physics), that, too, can be looked up.

If a person doesn't agree to this system, their every utterance becomes suspect; of indeterminate meaning.
What do you mean by "indeterminate meaning"?

Yes. I'm not contesting this particular aspect of your point. Unfortunately, this point was mixed in with the erroneous notion that "theory," "observation," "axiom," "postulate," etc. are all pretty much synonymous and interchangeable.
Did I say that a "theory" is an "observation"?

So you think "axiom" and "postulate" are not "assumptions"?
For example, your statement, "We must accept a theory that tells what aspects of it must be duplicated and what aspects can be ignored," is a statement that is certainly not true. You are using the word "theory" when you probably (I'm surmising) mean "experiment."
No, it is a "theory" that contains information that tells what things affect the outcome of an experiment. The experiment is a procedure.
 
  • #24
Stephen Tashi said:
Did you have a particular dictionary in mind?
Any good one should do. Personally, I favor Merriam-Webster's. When in doubt, check for agreement among many dictionaries.
Usually, there isn't such a thing as "the" dictionary definition since a single word can have a variety of meanings.
Only one of the variety will apply to the case in question. That's "the" dictionary definition you're looking for in a given situation.
What do you mean by "indeterminate meaning"?
in·de·ter·mi·nate/ˌindəˈtərmənət/
adjective
  1. not exactly known, established, or defined.
I'm saying, if someone doesn't agree to use words as defined in the dictionary, the reader can never be sure what they are saying. The meaning of their words will be not exactly known, established, or defined.
Did I say that a "theory" is an "observation"?
You are calling all things set forth in papers "theories." A lot of papers merely report observations with no serious attempt made to explain them.
So you think "axiom" and "postulate" are not "assumptions"?
Sure they're assumptions. What they aren't are theories. Newton's 3 Laws are axioms. They're not theories. Regardless, they require experimental verification.
No, it is a "theory" that contains information that tells what things affect the outcome of an experiment.
So might an effect or an observation. If, for example, you want to set up an experiment to test and see if the McGurk Effect actually exists, there are certain parameters that must be met. These are evident from the report of the McGurk Effect. But The McGurk Effect, by itself, is not a theory. A proposed explanation of the McGurk Effect that could be tested by experiment would be a theory.
 
  • #25
I lean more towards Tashi's position: words are used in everyday language in a very fluid way. Everyday language is necessarily ambiguous and fluid because it must serve to describe an infinite variety of situations. This vagueness effectively serves its purpose, but it's often very poor at dealing with situations where you need a high level of precision, where technical language is much more effective; there is a tradeoff between flexibility and accuracy (this is the reason why scientific papers are written in such terse language). At any rate, I don't think one can standardize the meaning of words used in everyday language, nor always clearly interpret what was meant by a given word. Once you want to discuss more abstract topics, you must clearly define what is meant by each term, or at least by each of the main words used in the discussion.
 

1. What is "The Truth Wears Off"?

"The Truth Wears Off" is a theory proposed by science journalist Jonah Lehrer, which suggests that many scientific studies and theories experience a decline in their initial findings over time. This phenomenon challenges the reliability and reproducibility of scientific research.

2. What evidence supports "The Truth Wears Off"?

Lehrer's article draws on multiple studies whose initially reported effects declined in size or statistical significance over time. This has been observed in various fields such as psychology, medicine, and biology.

3. Does this mean the scientific method is flawed?

Not necessarily. "The Truth Wears Off" does not question the fundamental principles of the scientific method, but rather highlights the need for more rigorous and transparent research practices to ensure the reliability of scientific findings.

4. How does "The Truth Wears Off" impact the credibility of scientific research?

The decline effect observed in "The Truth Wears Off" can lead to a lack of confidence in the reliability of scientific findings, especially in fields where replication studies are not readily available. This can also raise concerns about the potential influence of biases or errors in the scientific process.

5. What can be done to address "The Truth Wears Off"?

To address "The Truth Wears Off", scientists can adopt more rigorous research practices such as pre-registering studies, replicating results, and openly sharing data and methods. Additionally, funding agencies and journals can also play a role in promoting transparency and reproducibility in scientific research.
