I'm reposting this here because it raises a fascinating kind of vulnerability (new to me): any science that relies on software for analysis is exposed to a common-mode error. The thought that 40,000 scientific teams were fooled is shocking. It is a good post, and it links sources both supporting and opposing the conclusions.
I would think that this issue supports mandating that the raw data of all scientific studies be openly published and archived. That way, the data could be reprocessed in the future when improved (or corrected) tools become available, and published conclusions could be automatically updated or deprecated.
http://catless.ncl.ac.uk/Risks/29.60.html said: Faulty image analysis software may invalidate 40,000 fMRI studies
Bruce Horrocks <bruce@scorecrow.com>, Thu, 7 Jul 2016 21:14:15 +0100
[Please read this to the end. PGN]
A new paper [1] suggests that as many as 40,000 scientific studies that used
Functional Magnetic Resonance Imaging (fMRI) to analyse human brain activity
may be invalid because of a software fault common to all three of the most
popular image analysis packages.
... From the paper's significance statement:
"Functional MRI (fMRI) is 25 years old, yet surprisingly its most common
statistical methods have not been validated using real data. Here, we used
resting-state fMRI data from 499 healthy controls to conduct 3 million task
group analyses. Using this null data with different experimental designs, we
estimate the incidence of significant results. In theory, we should find 5%
false positives (for a significance threshold of 5%), but instead we found
that the most common software packages for fMRI analysis (SPM, FSL, AFNI)
can result in false-positive rates of up to 70%. These results question the
validity of some 40,000 fMRI studies and may have a large impact on the
interpretation of neuroimaging results."
Two of the software related risks:
a) It is common to assume that software that is widely used must be
reliable, yet 40,000 teams did not spot these flaws[2]. The authors
identified a bug in one package that had been present for 15 years.
b) Quoting from the paper: "It is not feasible to redo 40,000 fMRI studies,
and lamentable archiving and data-sharing practices mean most could not
be reanalyzed either."
[1] "Cluster failure: Why fMRI inferences for spatial extent have inflated
false-positive rates" by Anders Eklund, Thomas E. Nichols and Hans
Knutsson. <http://www.pnas.org/content/early/2016/06/27/1602413113.full>
[2] That's so many you begin to wonder if this paper might itself be wrong?
Expect to see a retraction in a future RISKS. ;-)
[Also noted by Lauren Weinstein in *The Register*:]
http://www.theregister.co.uk/2016/07/03/mri_software_bugs_could_upend_years_of_research/
[And then there is this counter-argument, noted by Mark Thorson:
http://blogs.discovermagazine.com/neuroskeptic/2016/07/07/false-positive-fmri-mainstream/
The author (Neuroskeptic) notes that Eklund et al. have discovered a
different kind of bug in AFNI, one that does not apply to FSL and SPM and
does not "invalidate 15 years of brain research." PGN]
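To make the paper's baseline concrete: at a 5% significance threshold, pure-noise data should come out "significant" about 5% of the time by definition, and the paper's finding is that common cluster-based fMRI inference can instead yield up to 70%. Below is a minimal sketch of the nominal case only (a two-sided z-test on Gaussian noise, using the standard library; this is not the cluster-extent inference the paper actually studies):

```python
import random

random.seed(42)

# Under the null hypothesis, a two-sided z-test at alpha = 0.05
# rejects when |z| > 1.96. On pure noise, every rejection is a
# false positive, so the empirical rate should land near 5%.
Z_CUTOFF = 1.96
N_TESTS = 100_000

false_positives = sum(
    1 for _ in range(N_TESTS) if abs(random.gauss(0.0, 1.0)) > Z_CUTOFF
)
rate = false_positives / N_TESTS
print(f"empirical false-positive rate: {rate:.3f}")
```

The paper's point is that the statistical assumptions behind cluster-extent thresholds break down on real resting-state data, so the observed rate can be far above this nominal 5%.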