Reproducibility of previous research reports

  • Thread starter: jim mcnamara
  • Tags: Research

Discussion Overview

The discussion centers on the reproducibility of research findings, particularly in psychology and other scientific fields. Participants explore the implications of non-reproducible studies on public trust in science and the reliability of published research. The conversation touches on the broader issues of research integrity, the impact of publication practices, and the role of education in addressing these challenges.

Discussion Character

  • Debate/contested
  • Exploratory
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Some participants highlight that only 35 out of 97 statistically significant results from major psychology journals could be replicated, suggesting that a large share of published findings may be unreliable.
  • Concerns are raised about the public perception of science when faced with non-reproducible studies, with some arguing that it undermines trust in scientific findings across various fields.
  • One participant proposes involving graduate students in supervised replication studies as a potential solution to address reproducibility issues.
  • Others note that the problem of reproducibility is not limited to psychology but extends to biology, chemistry, and geology, indicating a broader systemic issue in scientific research.
  • References to works by Ioannidis and Ellenberg are mentioned, which discuss the prevalence of non-reproducible research findings and the underlying reasons for these issues.
  • Some participants express skepticism about the classification of psychology as a "hard" science, pointing out similar challenges in other scientific disciplines.

Areas of Agreement / Disagreement

Participants express a range of views on the reproducibility crisis, with some agreeing on its significance across multiple fields while others emphasize specific issues within psychology. There is no consensus on solutions or the extent of the problem across disciplines.

Contextual Notes

Participants acknowledge limitations in the current understanding of reproducibility, including the influence of publication practices and the need for better educational approaches to address these challenges. The discussion reflects ongoing uncertainties and varying perspectives on the reliability of scientific research.

Who May Find This Useful

This discussion may be of interest to researchers, educators, and students in psychology, biology, and other scientific fields, as well as those concerned with the integrity and reliability of published research.

jim mcnamara (Mentor)
http://www.sciencemag.org/content/349/6251/aac4716
Science 28 August 2015:
Vol. 349 no. 6251
DOI: 10.1126/science.aac4716

Nosek B. , et al., Estimating the reproducibility of psychological science

35 of 97 reports of statistically significant results published in three major psychology journals from 2008 could be duplicated.
Reproducibility is not well understood because the incentives for individual scientists prioritize novelty over replication. Innovation is the engine of discovery and is vital for a productive, effective scientific enterprise.

In light of https://www.physicsforums.com/threads/peer-review-your-own-papers.829065/, which discusses reviewing your own paper for publication, this whole topic, the reliability of research, needs discussion.

My point of view: We already have enough non-scientists fabricating garbage like 'autism is caused by vaccinations' without adding to the pile of stuff we have to refute.

For example: dealing with 'wrong' research comes across to the non-scientist with a strong subtext: science based on research may not be reliable, so why should I accept it? To them it can sound like just another political debate. We just went through the whole process of removing cholesterol, the molecule itself, from the top of the list of causes of heart disease. Everyday people, and physicians trying to communicate with Joe Public, may still use the terms 'cholesterol' and 'bad cholesterol' to mean LDL levels.

Old bad research is sometimes really hard to get past, especially when Captain Kirk (Wm. Shatner) spent years on US TV pushing margarine made with trans fats and talking about cholesterol. People are sometimes floored when I tell them I can eat several eggs per week, even though I have a slightly damaged heart.
 
jim mcnamara said:
35 of 97 reports of statistically significant results published in three major psychology journals from 2008 could be duplicated.
In other words, nearly two-thirds of the "significant results" in those journals failed to replicate. Don't trust a publication there unless it has been repeated or you have very good evidence that the study was done better than average.

It is certainly an issue beyond psychology. "This psychology study was wrong, why should I trust this study about [other topic]?"
The much higher replication rate in other fields is not visible to the general public.
 
Why not get graduate students to do replication work (while supervised) for their stats classes? An Innocence Project of sorts, where students give their effort for free.
 
Perhaps this is why many don't consider psychology to be a "hard" science.

Of course, biology is subject to these issues as well. Studies reporting positive outcomes in clinical trials (i.e. results strong enough to reject the null hypothesis) decreased dramatically once it became standard practice to pre-register clinical trials (ameliorating the problem of negative studies never being released). However, the fact that only 8% of the trials report statistically significant results (where p < 0.05 is often the standard for statistical significance) is concerning.

The data journalism site FiveThirtyEight had a nice feature on a related topic last week addressing the issue of reproducibility and reliability in science. It has a great interactive on the practice of "p-hacking," and discusses many related issues.
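The p-hacking practice mentioned above is easy to demonstrate with a short simulation. The sketch below is illustrative, not from the FiveThirtyEight feature: it assumes a two-sided z-test on two null samples (no real effect, known unit variance), and compares an analyst who runs one pre-registered test against one who tests 10 subgroups and reports the best p-value. The trial counts and subgroup count are arbitrary choices.

```python
import math
import random

random.seed(42)

def two_sample_p(n: int) -> float:
    """p-value of a two-sided z-test comparing two samples drawn under the null."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    # Normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

TRIALS, N, SUBGROUPS = 2000, 30, 10

# Honest analyst: one pre-registered test per study.
honest = sum(two_sample_p(N) < 0.05 for _ in range(TRIALS)) / TRIALS

# p-hacker: tests 10 subgroups and keeps the smallest p-value.
hacked = sum(
    min(two_sample_p(N) for _ in range(SUBGROUPS)) < 0.05 for _ in range(TRIALS)
) / TRIALS

print(f"false-positive rate, one pre-registered test: {honest:.3f}")
print(f"false-positive rate, best of 10 subgroup tests: {hacked:.3f}")
```

With 10 independent looks at null data, the chance of at least one p < 0.05 is about 1 - 0.95^10, roughly 40%, eight times the nominal 5% error rate, which is the whole problem with undisclosed multiple comparisons.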
 
Ygggdrasil said:
why many don't consider psychology to be a "hard" science.
Ygggdrasil said:
biology is subject to these issues as well
As is chemistry, in my experience, and geology, and chemical oceanography. "A good literature search should find all the retractions, letters to editors, and other 'contrary' information."
 
Nobody seems to be aware of Ioannidis, J. P. A. (2005), "Why Most Published Research Findings Are False", PLoS Medicine 2(8): e124, doi:10.1371/journal.pmed.0020124, or of Jordan Ellenberg's book 'How Not to Be Wrong ...'. Nor do people cite the reasons given in those works for why there is a problem with some published results.

I am not saying these authors are definitive, but in my field, biology, there definitely are published results that are not reproducible. The two references above attempt to explain why.
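Ioannidis's core argument can be sketched as a positive-predictive-value calculation: the fraction of "significant" findings that reflect real effects depends on the prior probability that a tested hypothesis is true, not just on the p < 0.05 threshold. The parameter values below are illustrative assumptions, not figures from the paper:

```python
def ppv(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """P(effect is real | test came out significant), by Bayes' rule."""
    true_positives = power * prior          # real effects that get detected
    false_positives = alpha * (1 - prior)   # null effects that pass the threshold
    return true_positives / (true_positives + false_positives)

# As the prior falls (more speculative hypotheses), more "discoveries" are false.
for prior in (0.5, 0.1, 0.01):
    print(f"P(hypothesis true) = {prior:<4} -> PPV = {ppv(prior):.2f}")
```

With well-powered tests of plausible hypotheses the PPV is high, but in exploratory fields where only a small fraction of tested hypotheses are true, even a clean significant result is more likely false than true, which is the point of the paper's title.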
 
