Catching plagiarism has gotten much easier in the past few years thanks to automated detection software, and students are having a hard time fooling it. Most scientific publishers now run it routinely. But it only works if teachers use it.
Catching fake data is often straightforward, but it requires paying attention and running a few statistical (and sometimes other) tests. Back in 2008, my wife and I caught errors in a biomechanics paper and published a reply, because it was obvious from the graphs that the data violated the work-energy theorem. My wife and her colleagues recently published a comment pointing out data dishonesty in an important bone paper. Something smelled fishy, so she asked me to read it; I agreed and encouraged her to dig deeper. She dug up the Master’s thesis containing the original data and uncovered the sleight of hand. In 2010, I caught an atomic physics paper that had copied several paragraphs verbatim (without attribution) from one of my papers from the 1990s. Instead of a retraction, the editors let it slide with a corrigendum and an after-the-fact citation.
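A work-energy check of the kind that flagged that biomechanics paper can be sketched numerically. This is a minimal, hypothetical illustration, not the analysis from the actual reply; the function name, tolerance, and sample values are my own assumptions. Given sampled force-position data and start/end speeds, the trapezoid-rule estimate of work should match the change in kinetic energy:

```python
def work_energy_consistent(forces, positions, mass, v_start, v_end, rel_tol=0.1):
    """Compare a trapezoid-rule estimate of work from force-position samples
    with the kinetic-energy change it should equal (work-energy theorem)."""
    work = sum(0.5 * (forces[i] + forces[i + 1]) * (positions[i + 1] - positions[i])
               for i in range(len(forces) - 1))
    delta_ke = 0.5 * mass * (v_end ** 2 - v_start ** 2)
    scale = max(abs(work), abs(delta_ke), 1e-12)  # avoid division by zero
    return abs(work - delta_ke) / scale <= rel_tol
```

For a constant 10 N force over 1 m on a 2 kg body starting from rest, the check passes when the reported final speed is the physically required sqrt(10) m/s and fails when it is not. In the real case the violation was visible by eye in the published graphs; a check like this just makes the same comparison explicit.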
We also caught errors in the weight-length data at Fishbase.org and published a paper on it in 2010 or 2011. In this case, my wife alerted me that something was amiss, and some cadets at the Air Force Academy made a project out of it under my oversight. The database editors vilified us for pointing it out, but they have since gotten much better at error checking and correction. We later traced most of the errors to a single source: one of the most cited handbooks on freshwater fisheries biology.
Similarly, we caught a number of scientific and statistical errors in a 2011 Fishery Bulletin paper on magnetoreception in fish. The editor published an erratum correcting the statistical errors but declined to publish our comment pointing out the unsupported claims in the abstract and other scientific errors. There was no suggestion our comment was wrong; the journal simply has an editorial policy of not publishing comments that bring to light scientific errors in its papers. Refusing to publish corrections for clear scientific errors is a failure of scientific integrity that falls on scientific authors and editors rather than on government.
Not every correction needs to happen in the public arena. When erroneous or falsified data have been published, a public correction is appropriate and may be the only way to prevent propagation of the error. Sometimes, however, a correction can be made in time to avoid a public error. For example, my wife was reading a paper in her field of research that was available online in “pre-print” form prior to publication. She noticed an error in the results tables and contacted the primary author privately in case there was time to correct it before others in the field began evaluating and applying the results. Happily, in that situation the author thanked my wife and confirmed that there was time to correct the error prior to final publication. Within research groups, we can help each other by evaluating data critically, not to undermine any individual, but to maintain both scientific integrity and the reputations of all involved by sharing the goals of correct results and appropriate interpretation.
However, colleagues and I have also faced numerous situations where we pointed out scientific or academic error or misconduct and nothing was done. In addition to having letters to journal editors ignored in cases of clear published errors, there is also a battle for integrity in the schools. The absence of negative feedback trains students in poor behavior early on. We learned of a student texting answers to other students during a science test. The student admitted doing so but refused to name the recipients of the texts. The department at the North Carolina public school refused to investigate further or to try to find out who benefited from the cheating. Not even the admitted cheater faced any consequence. We’ve seen a pattern of failures in academic and scientific integrity in North Carolina (such as the UNC athlete cheating scandal).
When I taught at the Air Force Academy, things were handled better. Even if the process failed to bring a disciplinary consequence to the student, an academic consequence could be brought by the instructor and department head by meeting a more-likely-than-not standard of evidence. The Math department head always supported a teacher recommendation of a zero for cheating on any graded event.
When I ran a cadet research program, I terminated cadet participation in the research program immediately and permanently when it became clear that a student had faked data or otherwise committed academic dishonesty. Even when a superior (not in the math department) recommended a gentler approach to allow for a “learning experience,” I terminated participation in the program, because I thought a firmer response was needed to bring the lesson home and protect the integrity of the program.
I have a sharp eye for data, and I run a number of statistical and common-sense checks on student data and analysis. I may be the only professor I know of who repeats the student’s analysis at every step in most projects under my supervision. I have developed a good sense for what “too good to be true” looks like and for what kinds of uncertainties can be expected given the experimental conditions and sample sizes. In my mentoring of science projects, students know from the beginning that I have zero tolerance for violations of academic and scientific integrity, and that I am double-checking their data and analysis closely.
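A few checks of the kind described above can be sketched in a handful of lines. This is an illustrative example under my own assumptions, not my actual toolkit; the function names and the coefficient-of-variation threshold are hypothetical. The ideas are: re-derive reported summary statistics from the raw data, tally terminal digits (fabricated numbers often over-use certain digits, while genuine measurements are usually close to uniform), and flag data whose scatter is implausibly small for a noisy physical measurement:

```python
import statistics
from collections import Counter

def check_recomputed_mean(raw, reported_mean, tol=1e-6):
    """Re-derive the mean from raw data and compare with the reported value."""
    return abs(statistics.mean(raw) - reported_mean) <= tol

def terminal_digit_counts(values):
    """Tally last digits of reported values for a quick uniformity eyeball."""
    return Counter(str(v)[-1] for v in values)

def spread_is_plausible(raw, min_cv=0.01):
    """Flag 'too good to be true' data: a coefficient of variation near zero
    is suspicious for repeated noisy measurements. Threshold is illustrative."""
    m = statistics.mean(raw)
    if m == 0:
        return True
    return statistics.stdev(raw) / abs(m) >= min_cv
```

None of these proves fabrication on its own; they are screens that tell you where to look harder, the same way an implausible graph did in the cases above.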
It is interesting to note that the original article cites Ernst Haeckel but fails to note his well-known fraudulent embryo drawings. I recall stirring up controversy in a guest lecture to a biology class in the last decade by pointing out that their modern-day textbook was still using the errant Haeckel drawings. The drawings and the associated recapitulation theory have been considered in error for over 100 years, so it is something of a mystery how they can appear in modern textbooks without hordes of teachers and scientists objecting.
If you teach laboratories, what consideration have you given to making it harder for students to fake data? I mentor a number of students on ISEF-type research projects and undergraduate research, so I frequently hear their feedback on how their lab science classes are going. Some of their teachers are getting out in front of the problem by designing lab experiments with an auditable data path from the original execution of the experiment to the graded lab report. This approach is analogous to the requirements some journals and funding agencies impose that data be published in a repository. In some cases, lab instructors are even requiring students to take pictures while executing experiments. It’s much harder to fake data if there are time-stamped data files with the original data as well as time-stamped pictures of the experiment in progress. Sure, someone will be smart enough to fool any accountability system, but putting a good system in place keeps students from thinking they somehow have tacit approval to manufacture data, because they don’t just need to fake the data, they need to intentionally subvert the accountability system.
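The auditable-path idea can be as simple as fingerprinting raw data files at collection time. Here is a minimal sketch, assuming the instructor records the hash when the data are captured; the function names and record format are my own, not any particular course's system:

```python
import hashlib
import os

def record_provenance(path):
    """Hash a raw data file at collection time. Storing the (digest, mtime)
    pair lets a grader later verify that a submitted file matches what was
    actually captured in the lab."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"sha256": digest, "mtime": os.path.getmtime(path)}

def matches_record(path, record):
    """True if the file's current contents still match the recorded digest."""
    return record_provenance(path)["sha256"] == record["sha256"]
```

A student who edits the numbers after the fact now has to defeat the hash as well, which is exactly the point: the accountability system must be subverted deliberately, so no one can claim tacit approval.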
It’s too easy to blame the government, which has entrusted matters of academic and scientific integrity to the diligence of teachers and scientists. We should all be doing our part in our respective areas of work to maintain integrity. How many scientists and students have you busted in the past decade?