Monique said:
Unfortunately, this is not the first time such a scenario has happened and come to light. It really is a shame that people have been able to get away with it.
With the publication pressure that researchers face today, I wouldn't be surprised if the number of 'false publications' is quite high. By 'false' I mean publications whose claims rest on less proof or support than the paper suggests.
On a related topic, I am saddened by the prevalence of 'publication bias' in academic journals. Often researchers will not "bother" to write up and submit results that show no significance, and even the rare researcher who does bother is most often rejected by the journal editors. The problem, of course, is even more acute when private companies commission large studies, quash the unfavorable results, and only allow the favorable ones to see the light of day. This latter example goes beyond simple apathy - it's downright legal brutishness, in that researchers' hands are tied by NDAs and contracts into holding their tongues about potentially important data on drug safety and efficacy, etc.
There are ways to gauge the burden of publication bias - certain meta-analytic techniques, such as funnel plots, give us clues, but they're not perfect. I believe this is a problem that must be tackled at its root: the view that a properly designed study showing "no rejection of the null hypothesis" is somehow worthless. And researchers must not be judged on publications (showing a significant effect) alone; they should be assessed on the merits of their research methodology across all the trials they designed, even the ones with a "negative" outcome.
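Just to make those "clues" concrete, here's a rough Python sketch of one such technique, Egger's regression test for funnel-plot asymmetry. The effect sizes and standard errors below are made-up numbers purely for illustration, not data from any real meta-analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study effect sizes (e.g. log odds ratios) and standard errors.
effects = np.array([0.42, 0.31, 0.55, 0.12, 0.48, 0.39, 0.60, 0.25])
ses     = np.array([0.10, 0.15, 0.22, 0.08, 0.25, 0.18, 0.30, 0.12])

# Egger's test: regress the standardized effect (effect / SE) on precision (1 / SE).
# An intercept far from zero hints at funnel-plot asymmetry, i.e. possible publication bias.
precision = 1.0 / ses
standardized = effects / ses
slope, intercept, r, p_slope, stderr = stats.linregress(precision, standardized)

# Test whether the intercept differs from zero (two-sided t-test, n - 2 degrees of freedom).
n = len(effects)
residuals = standardized - (intercept + slope * precision)
s2 = np.sum(residuals**2) / (n - 2)
se_intercept = np.sqrt(s2 * (1.0 / n + precision.mean()**2 / np.sum((precision - precision.mean())**2)))
t_stat = intercept / se_intercept
p_intercept = 2 * stats.t.sf(abs(t_stat), df=n - 2)

print(f"Egger intercept = {intercept:.3f}, p = {p_intercept:.3f}")
```

The catch, of course, is that a test like this can only hint that studies are missing; it can't recover them, which is exactly why the archive proposed below matters.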
To this end, I propose that we implement a system of free online journals that accept only studies showing "no significance/association" or "weak association". These studies must not also appear in the written literature, so that they are not counted twice in a meta-analysis. The submission format for such a service could be fairly succinct, instead of the often flowery prose and conclusions that authors are forced to eke out when submitting to a "reputed" journal: just a quick and dirty abstract, study inclusion/exclusion criteria in point form, raw data, indices of (in)significance, and a terse, to-the-point conclusion if so desired (see the sketch below). Most of the time, this stuff is grist for the data-mining mill anyway.
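To give a feel for how bare-bones such a submission could be, here's one possible shape for a record, again in Python. The field names are just my own guesses at what a minimal, machine-minable entry might hold; nothing here is prescribed by any existing service.

```python
from dataclasses import dataclass, field
from typing import List

# A hypothetical bare-bones submission record for a "null results" archive.
# Field names are illustrative only - one way the point-form format described
# above could be structured so that later data mining is easy.
@dataclass
class NullResultSubmission:
    abstract: str                     # quick and dirty summary
    inclusion_criteria: List[str]     # point-form inclusion criteria
    exclusion_criteria: List[str]     # point-form exclusion criteria
    raw_data_uri: str                 # link to the archived raw data
    test_used: str                    # e.g. "two-sample t-test"
    p_value: float                    # index of (in)significance
    effect_size: float                # estimated effect, however small
    conclusion: str = ""              # terse and to the point, optional

# Example entry (all values invented for illustration).
submission = NullResultSubmission(
    abstract="Compound X showed no detectable effect on outcome Y in a randomized trial.",
    inclusion_criteria=["adults 18-65", "no prior treatment"],
    exclusion_criteria=["pregnancy", "comorbid condition Z"],
    raw_data_uri="https://example.org/data/trial-x.csv",
    test_used="two-sample t-test",
    p_value=0.47,
    effect_size=0.03,
)
```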
There should still be some sort of editorial review, but it should focus on study design, the appropriateness of the statistical tests used, etc. Even if the wrong tests have been used to reach an insupportable conclusion, the raw data can still be archived if the design is adequate. Someone else will come along sooner or later to "rescue" the orphaned data.
The discipline of meta-analysis will be transformed. No longer will people have to guess at the effect of publication bias; it will be nearly eliminated, since data mining can now cover the whole spectrum of results. Researchers will also feel less bad about not producing "results", since this will be a real, freely accessible online resource, and they can cite their publications in it on their CVs. There should also be strict laws forbidding the non-disclosure of unfavorable results from privately commissioned research, so that this valuable data does not go to waste.
Good idea? Is such a thing already around? Or should I try to get one started?
EDIT: Holy moley, looks like someone has already beaten me to the punch. Check this out:
http://www.jnrbm.com/ That's a fantastic example of what I'm on about. Now if only we could extend it to other disciplines and link them all up.