
Hwang: faked research

  1. Dec 25, 2005 #1
    http://www.nytimes.com/2005/12/24/science/24clone.html
    http://www.nytimes.com/2005/12/23/science/23stem.html
    :yuck:
     
  2. Dec 25, 2005 #2
    And let's not jump to conclusions, people; the whole stem cell affair is STILL under investigation, according to the Korean news.
     
    Last edited by a moderator: Dec 30, 2005
  3. Dec 30, 2005 #3
    So, returning to the TOPIC that Hwang FAKED HIS RESEARCH...

    Where's his scientific integrity and honour? If he were Klingon I'd challenge him to a... Klingon... death duel thingy.
     
  4. Dec 30, 2005 #4
    It's not nice that the thread was derailed at all, regardless of offensiveness.

    On topic:
    Panel Further Discredits South Korean Scientist
     
  5. Dec 30, 2005 #5

    Monique

    Staff Emeritus
    Science Advisor
    Gold Member

    Unfortunately, this is not the first time such a scenario has happened and come to light. It really is a shame that people have been able to get away with it.

    With the publication pressure that researchers face today, I wouldn't be surprised if the number of 'false publications' were quite high. By 'false' I mean publications that don't have as much proof or support as the paper claims.
     
  6. Dec 30, 2005 #6

    Math Is Hard

    Staff Emeritus
    Science Advisor
    Gold Member

    That's totally unfair stereotyping. Not all Klingons participate in death duels. :mad:

    Back on topic, who is responsible for checking the validity of the research? Is it the sponsor of the project? I guess I just never really thought about someone fudging or flagrantly lying about results.
     
  7. Dec 30, 2005 #7

    Moonbear

    Staff Emeritus
    Science Advisor
    Gold Member

    That's the tricky thing: nobody routinely checks unless something sounds highly suspicious. I don't know what system is in place outside the US, but in the US the funding agencies can do a site visit and demand to see all the records. The journals publishing the work and their reviewers also bear some of the burden for making sure things make sense. I once received a manuscript to review that contained gross examples of plagiarism, which I immediately reported to the editor, and someone else I know received one in which things appeared to have been hand-drawn onto the photographs provided; that too was immediately reported to the editor of that journal. The editors are then responsible for following up with the author's institution.

    The main way such a thing could be discovered would be if someone tried to replicate the results, couldn't, and so decided to ask more questions about the details. Not being able to replicate findings doesn't necessarily mean any misconduct occurred; it could just mean the conclusions drawn were premature and the outcome was due to some overlooked source of variation. But it is the sort of thing that leads one to ask questions, and if they aren't answered adequately, you might start getting suspicious.

    As for the publication pressure Monique mentions, I'm not sure it really creates pressure to fabricate data. I think the more common problem is that people report findings prematurely, without having replicated them a sufficient number of times (sort of a "we got a positive result, quick, publish it before we find out it's wrong" attitude), or break a bigger study into multiple smaller publications to pad the count, so that you have to hunt through several articles to get the full picture on something.

    In those cases, the onus does fall on the journal reviewers to look at the sample sizes, replicates, and statistical methods, and at how much was done to confirm the findings in terms of controls, replications, approaches to the question, etc. I believe there was an article a few years ago in Science or Nature where the authors went back through numerous published articles and checked the statistics used, and found that a fairly high percentage had used the wrong statistics. For many, the correct statistics didn't alter the final outcome or conclusions, but the concern was for those where bad statistics slipped through and the correct statistic would have changed the conclusions.
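    To make the wrong-statistics point concrete, here is a minimal sketch in Python (entirely made-up numbers, not from any paper discussed here) of how the choice of test can flip a conclusion: paired before/after measurements analysed with an unpaired t-test look null, while the correct paired test detects the effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired data: 10 subjects measured before and after treatment.
# Each subject improves slightly, but between-subject variation is large.
before = rng.normal(100.0, 15.0, size=10)       # wide baseline spread
after = before + rng.normal(2.0, 1.0, size=10)  # small, consistent shift

# Wrong test: an unpaired t-test ignores the pairing, so the baseline
# spread swamps the small treatment effect.
_, p_unpaired = stats.ttest_ind(before, after)

# Correct test: a paired t-test compares each subject with itself.
_, p_paired = stats.ttest_rel(before, after)

print(f"unpaired p = {p_unpaired:.3f}")  # typically well above 0.05
print(f"paired   p = {p_paired:.5f}")    # typically far below 0.05
```

    Same data, opposite conclusions; exactly the kind of thing a reviewer checking the statistical methods ought to catch.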
     
  8. Dec 30, 2005 #8
    It was MBC, the Korean broadcaster. There has been a large controversy over the last couple of weeks regarding Hwang's research, with interviews of members of Hwang's team who allegedly said Hwang's results were faked.

    I guess with the news out that he has resigned from his university, it is true. Still, it created a lot of public hatred towards MBC, and towards Hwang as well. A couple of news reports mentioned 'Hwang 8/9 likely to commit suicide'.

    Personally, I think the stem cell research took a turn for the worse when the Korean government started pushing Hwang, along with a public that wanted him to become a 'national treasure'. I don't believe a research scholar of Hwang's standing could handle that without some slips in his research; hence what happened.
     
  9. Dec 30, 2005 #9
    Do they still shoot people in Korea for this or is that only in the North?
     
  10. Dec 31, 2005 #10
    North Korea only.
     
  11. Dec 31, 2005 #11

    Curious3141

    Homework Helper

    On a related topic, I am saddened by the prevalence of 'publication bias' in academic journals. Often researchers will not "bother" to write up and submit results that show no significance, and even the rare researcher who does bother most often gets rejected by the journal editors. The problem, of course, is even more acute when private companies commission large studies, quash the results when they are unfavorable, and only allow the favorable ones to see the light of day. This latter example goes beyond simple apathy: it's downright legal brutishness, in that researchers' hands are tied by NDAs and contracts into holding their tongues about potentially important data on drug safety, efficacy, etc.

    There are ways to gauge the burden of publication bias: certain meta-analytic techniques give us clues, but they're not perfect. I believe this is a problem that must be tackled at its root, namely the view that a properly designed study showing "no rejection of the null hypothesis" is somehow worthless. Researchers must not be gauged on publications (showing a significant effect) alone; they should be assessed on the merits of their research methodology across all the trials they designed, even the ones with a "negative" outcome.
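    One concrete example of such a meta-analytic technique is a funnel-plot asymmetry check such as Egger's regression test: if small studies with null results never get published, the funnel becomes lopsided, and regressing each study's standardized effect on its precision exposes that. Below is a minimal sketch with entirely invented study numbers, purely to illustrate the mechanics.

```python
import numpy as np
from scipy import stats

# Invented per-study effect estimates (log odds ratios) and standard
# errors; the small studies here report suspiciously large effects.
effects = np.array([0.42, 0.35, 0.51, 0.60, 0.28, 0.75, 0.90])
se = np.array([0.10, 0.12, 0.15, 0.20, 0.11, 0.30, 0.35])

# Egger's test: regress standardized effect (z) on precision (1/SE).
# With no publication bias the intercept should be near zero.
x = 1.0 / se          # precision
y = effects / se      # standardized effect
n = len(x)

slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s2 = resid @ resid / (n - 2)  # residual variance
se_int = np.sqrt(s2 * (1.0 / n + x.mean()**2 / ((x - x.mean())**2).sum()))
t_stat = intercept / se_int
p_val = 2 * stats.t.sf(abs(t_stat), df=n - 2)  # two-sided p for intercept

print(f"Egger intercept = {intercept:.2f}, p = {p_val:.3f}")
```

    A clearly nonzero intercept hints that the small (imprecise) studies only made it into print when their effects were large; as said above, it's a clue, not proof.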

    To this end, I propose that we implement a system of free online journals that accept only studies showing "no significance/association" or "weak association". These studies must not also appear in the print literature, so that they are not counted twice in a meta-analysis. The submission format for such a service could be fairly succinct, in contrast to the flowery prose and conclusions that authors are forced to eke out when submitting to a "reputed" journal: just a quick and dirty abstract, study inclusion/exclusion criteria in point form, raw data, indices of (in)significance, and a terse, to-the-point conclusion if so desired. Most of the time, this material is destined for the data-mining mills anyway.

    There should still be some sort of editorial review, but it should focus on study design, the appropriateness of the statistical tests used, etc. Even if the wrong tests have been used to reach an insupportable conclusion, the raw data can still be archived if the design is adequate. Someone else will come along sooner or later to "rescue" the orphaned data.

    The discipline of meta-analysis will be transformed. No longer will people have to guess at the effect of publication bias; it will be nearly eliminated, since data mining can now cover the whole spectrum of results. Researchers will also feel less bad about not producing "results", since this will be a real, freely accessible online resource, and they can cite their publications in it on their CVs. There should also be strict laws forbidding non-disclosure of unfavorable results by private-sector researchers, so that this valuable data does not go to waste.

    Good idea? Is such a thing already around? Or should I try to get one going?

    EDIT: Holy moley, looks like someone has already beaten me to the punch. Check this out: http://www.jnrbm.com/ That's a fantastic example of what I'm on about. Now if only we could extend it to other disciplines and link them all up.
     
    Last edited: Dec 31, 2005
  12. Dec 31, 2005 #12

    Monique

    Staff Emeritus
    Science Advisor
    Gold Member

    To be clear: I did not say that publication pressure leads to fabrication of data (that is outright criminal behaviour), but to premature publishing, as you mention.
     
  13. Dec 31, 2005 #13

    Monique

    Staff Emeritus
    Science Advisor
    Gold Member

    Curious3141, a journal that publishes negative results, as a second option for when you can't get your data into a regular journal, would be valuable, provided the experiments were designed properly and are actually of value to the community (it is much easier to come up with negative data than positive).

    I once did a meta-analysis of several genes and their association with coronary artery disease. The number of publications on the same genes can be staggering (take ACE, for instance), and it is really impossible to draw firm conclusions, even when you compile all the different studies together or stratify them by conditions. It shows that negative results can be published; it also shows that we need more powerful experimental designs in order to get meaningful results.
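    For anyone curious how such compilations are done mechanically: the standard approach is inverse-variance pooling of the per-study log odds ratios. Here is a minimal fixed-effect sketch; the three studies below are invented, not real ACE numbers.

```python
import numpy as np

# Invented case-control studies of one gene variant:
# (odds ratio, lower 95% CI, upper 95% CI)
studies = [(1.30, 1.05, 1.61), (0.95, 0.70, 1.29), (1.10, 0.85, 1.42)]

log_or = np.array([np.log(o) for o, lo, hi in studies])
# Recover each SE from the CI width: (ln(hi) - ln(lo)) / (2 * 1.96)
se = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96)
               for o, lo, hi in studies])

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2.
w = 1.0 / se**2
pooled = (w * log_or).sum() / w.sum()
pooled_se = np.sqrt(1.0 / w.sum())

lo95 = pooled - 1.96 * pooled_se
hi95 = pooled + 1.96 * pooled_se
print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(lo95):.2f} to {np.exp(hi95):.2f})")
```

    When the individual studies disagree by more than their standard errors allow, a random-effects model is the usual next step, and a wide pooled interval is exactly the inconclusive situation described above.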
     
  14. Dec 31, 2005 #14

    Moonbear

    Staff Emeritus
    Science Advisor
    Gold Member

    I was going to direct you to that. Someone recently pointed it out to me when I was saying pretty much the same thing as you: negative findings are just as important as the results that support our hypotheses, if not more so! It's a lot more certain to disprove a hypothesis than to keep adding support to it. As long as the study is properly designed, I don't think we should have to jump through such hoops to get that data published. (I have a bit of negative data of my own sitting around, waiting for something else I can slip it in with... a review article or something related with a positive finding... but it's not going to be publishable as-is, despite being very conclusive. It's a shame such things have to be buried in with something else in order to be published.)
     