Reproducibility of previous research reports

  • Thread starter: jim mcnamara
  • Tags: Research
In summary, this thread discusses the reliability of research in psychology. It cites a study which found that only about a third of the statistically significant results published in three major psychology journals in 2008 could be replicated, which is why individual findings in those journals should not be taken at face value until they have been repeated. The thread also touches on p-hacking, the practice of analyzing data in many different ways and reporting only the comparisons that come out statistically significant. These problems are not limited to psychology and are common in other fields as well.
  • #1
jim mcnamara (Mentor)
http://www.sciencemag.org/content/349/6251/aac4716
Science 28 August 2015:
Vol. 349 no. 6251
DOI: 10.1126/science.aac4716

Nosek, B., et al., Estimating the reproducibility of psychological science

35 of 97 reports of statistically significant results published in three major psychology journals from 2008
could be duplicated.
Reproducibility is not well understood because the incentives for individual scientists prioritize novelty over replication. Innovation is the engine of discovery and is vital for a productive, effective scientific enterprise.

In light of https://www.physicsforums.com/threads/peer-review-your-own-papers.829065/, which discusses reviewing your own paper for publication, this whole topic of the reliability of research needs discussion.

My point of view: We already have enough non-scientists fabricating garbage like 'autism is caused by vaccinations' without adding to the pile of stuff we have to refute.

For example: dealing with 'wrong' research comes across to the non-scientist with a strong subtext: science based on research may not be reliable, so why should I accept it? To them it can sound like just another political debate. We just went through the whole deal of removing cholesterol, the molecule, from the top of the list of causes of heart disease. Everyday people, and physicians trying to communicate with Joe Public, may still use the terms 'cholesterol' and 'bad cholesterol' to mean LDL cholesterol levels.

Old, bad research is really hard to get past sometimes, especially when Captain Kirk (Wm. Shatner) spent years on US TV pushing margarine made with trans fats and talking about cholesterol. People are sometimes floored when I tell them I should eat several eggs per week -- I have a slightly damaged heart.
 
  • #2
jim mcnamara said:
35 of 97 reports of statistically significant results published in three major psychology journals from 2008
could be duplicated.
In other words, roughly two-thirds of the "significant results" in those journals did not hold up when the studies were repeated. Don't trust a publication there unless it has been replicated, or you have very good evidence that the study was done better than average.

It is certainly an issue beyond psychology. "This psychology study was wrong, why should I trust this study about [other topic]?"
The much higher replication rate in other fields is not visible to the general public.
 
  • #4
Why not get graduate students to do replication work (while supervised) for their stats classes? An Innocence Project of sorts, where students give their effort for free.
 
  • #5
Perhaps this is why many don't consider psychology to be a "hard" science.

Of course, biology is subject to these issues as well. The fraction of clinical trials reporting positive outcomes (i.e. results strong enough to reject the null hypothesis) decreased dramatically once it became standard practice to pre-register trials, which ameliorates the problem of negative studies never being released. Still, the fact that only 8% of the trials report statistically significant results (where p < 0.05 is the usual threshold for significance) is concerning.

The data journalism site FiveThirtyEight had a nice feature last week on a related topic, the reproducibility and reliability of science. It includes a great interactive on the practice of "p-hacking" and discusses many related issues.
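To make the p-hacking idea concrete, here is a minimal simulation sketch (my own illustration, not taken from the FiveThirtyEight piece): two groups with no real difference between them are compared on many outcome measures, and only the best p-value is "reported". The nominal 5% false-positive rate then inflates to roughly 40%.

```python
import numpy as np
from scipy import stats

# Illustrative simulation of p-hacking: two groups with no true difference
# are compared on many outcome variables, and only the smallest p-value is
# reported. The false-positive rate then far exceeds the nominal 5%.
rng = np.random.default_rng(0)

def best_p_value(n=30, n_outcomes=10):
    group_a = rng.normal(size=(n, n_outcomes))
    group_b = rng.normal(size=(n, n_outcomes))
    pvals = [stats.ttest_ind(group_a[:, k], group_b[:, k]).pvalue
             for k in range(n_outcomes)]
    return min(pvals)  # "report" only the most significant outcome

n_studies = 2000
hits = sum(best_p_value() < 0.05 for _ in range(n_studies))
print(f"False-positive rate with 10 outcomes: {hits / n_studies:.2f}")
# Typically around 0.4 rather than the nominal 0.05.
```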
 
  • #6
Ygggdrasil said:
why many don't consider psychology to be a "hard" science.
Ygggdrasil said:
biology is subject to these issues as well
As is chemistry, in my experience, and geology, and chemical oceanography. "A good literature search should find all the retractions, letters to editors, and other contrary information." :D
 
  • #7
Nobody seems to be aware of Ioannidis, J. P. A. (2005), "Why Most Published Research Findings Are False," PLoS Medicine 2(8): e124, doi:10.1371/journal.pmed.0020124 (PMC 1182327, PMID 16060722), or of J. Ellenberg's book 'How Not to Be Wrong ...', or to cite the reasons given in those works for why there is a problem with some published results.

I am not saying these guys are definitive, but in my field, Biology, there definitely are published results that are not reproducible. The two references above attempt to explain.
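The core of the Ioannidis argument is a base-rate calculation: the probability that a "significant" finding is actually true depends on the prior plausibility of the hypothesis, the study's power, and the significance threshold. A rough sketch with illustrative numbers (not taken from the paper's own tables):

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a statistically significant finding is true,
    given the prior probability that the tested hypothesis is true.
    Bias and multiple testing are ignored for simplicity."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# Well-powered test of a plausible hypothesis:
print(f"prior = 0.5, power = 0.8 -> PPV = {ppv(0.5):.2f}")            # ~0.94
# Low-powered, exploratory test of a long-shot hypothesis:
print(f"prior = 0.1, power = 0.2 -> PPV = {ppv(0.1, power=0.2):.2f}") # ~0.31
```

With low prior plausibility and low power, a large share of significant findings are false positives even before bias or p-hacking enters the picture.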
 

Frequently asked questions

1. What is reproducibility in scientific research?

Reproducibility refers to the ability to obtain consistent results when repeating experiments or analyses using the same methods and data. It is a critical aspect of scientific research as it allows for the verification and validation of previous findings.

2. Why is reproducibility important in scientific research?

Reproducibility is important because it ensures the reliability and credibility of scientific findings. If results cannot be reproduced, it raises questions about the accuracy and validity of the original study. Reproducibility also allows for the building of new knowledge and the advancement of science.

3. What factors can affect the reproducibility of research?

There are several factors that can affect the reproducibility of research, including variability in experimental conditions, differences in data collection and analysis methods, and human error. Other factors such as sample size, statistical power, and publication bias can also impact reproducibility.
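One of the factors listed above, statistical power, lends itself to a quick illustration (the numbers are illustrative, not from any study in the thread): with a modest true effect, small samples detect it only a fraction of the time, so an original study and its replication can easily disagree even when both were carried out correctly.

```python
import numpy as np
from scipy import stats

# Illustrative power simulation: a modest true effect (about 0.3 standard
# deviations) studied with different sample sizes per group.
rng = np.random.default_rng(1)

def is_significant(n, effect=0.3, alpha=0.05):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect, 1.0, n)
    return stats.ttest_ind(a, b).pvalue < alpha

for n in (20, 80, 300):
    power = sum(is_significant(n) for _ in range(2000)) / 2000
    print(f"n = {n:3d} per group: estimated power ~ {power:.2f}")
# Low-powered (small-n) studies find the effect only occasionally, so an
# original result and its replication often disagree by chance alone.
```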

4. How can researchers improve the reproducibility of their studies?

Researchers can improve the reproducibility of their studies by following best practices such as using standardized methods, providing detailed and transparent documentation of their procedures and data, and making their data and code openly available for others to review and replicate. Additionally, conducting replication studies can also help to validate and confirm the findings of previous research.
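As a small illustration of the documentation practices described above (the file name and values are hypothetical, not a prescribed standard), an analysis script can fix its random seed and write a provenance record next to its output so that another researcher can rerun the identical pipeline:

```python
import json
import platform
import numpy as np

# Hypothetical sketch: fix the random seed and save a provenance record
# next to the analysis output so the run can be repeated exactly.
SEED = 20150828
rng = np.random.default_rng(SEED)

result = {"mean_estimate": float(rng.normal(size=1000).mean())}

provenance = {
    "seed": SEED,
    "python": platform.python_version(),
    "numpy": np.__version__,
    "platform": platform.platform(),
}

with open("analysis_output.json", "w") as f:  # hypothetical output file
    json.dump({"result": result, "provenance": provenance}, f, indent=2)
```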

5. What can the scientific community do to promote reproducibility?

The scientific community can promote reproducibility by encouraging open and transparent research practices, providing funding and support for replication studies, and promoting collaboration and data sharing among researchers. Journals can also play a role by requiring authors to provide detailed methods and data, and by implementing rigorous peer-review processes to ensure the validity and reproducibility of published research.
