Precognition paper to be published in mainstream journal

In summary: HUGE for the field of parapsychology. It may finally gain the credibility it has long deserved. However, if the result is found to be false, it will also discredit the entire field.
  • #71
Ivan Seeking said:
Until we see a published rebuttal, all arguments are anecdotal or unsupported.

We don't use one standard for claims we like, and another for claims we don't like. See the S&D forum guidelines.

This is my understanding of "anecdote" as per the scientific method, and not just a PF-rules issue. Would that be correct?
 
  • #72
nismaratwork said:
This is my understanding of "anecdote" as per the scientific method, and not just a PF-rules issue. Would that be correct?

In science, an unpublished paper counts for nothing. Unpublished papers are only allowed for discussion here because they often constitute anecdotal evidence for the claim or argument.
 
  • #73
Ivan Seeking said:
Until we see a published rebuttal, all arguments are anecdotal or unsupported. Unpublished papers count at most as anecdotal evidence, which never trumps a published paper.

We don't use one standard for claims we like, and another for claims we don't like. See the S&D forum guidelines.
The rebuttal is going to be published in the same journal at the same time as the Bem paper, so they are on equal footing.

Dr. Wagenmakers is co-author of a rebuttal to the ESP paper that is scheduled to appear in the same issue of the journal.

http://www.nytimes.com/2011/01/06/science/06esp.html
 
  • #74
Technically, I am making a special exception to allow an unpublished rebuttal to a published paper. If the tables were turned, it would never be allowed. That would be considered crackpot or pseudoscience.
 
  • #75
Evo said:
The rebuttal is going to be published in the same journal at the same time as the Bem paper, so they are on equal footing.



http://www.nytimes.com/2011/01/06/science/06esp.html

Sorry, okay. I knew there were objections to be published, but not a formal paper.
 
  • #76
Ivan Seeking said:
In science, an unpublished paper counts for nothing. Unpublished papers are only allowed for discussion here because they often constitute anecdotal evidence for the claim or argument.

Actually, publication is simply a means of dissemination, and peer review is merely a noise filter for quality control (which both papers discussed here have already passed). Similar filters are used for quality control of Wikipedia and of discussion topics on this site.

Dissemination filters must not, however, be confused with science or the scientific method! What matters in science are facts and theories, and the verification or disproof of those theories.

Entries for further reading can be found in:
http://en.wikipedia.org/wiki/Scientific_method

Harald
 
  • #77
Evo said:
The rebuttal is going to be published in the same journal at the same time as the Bem paper, so they are on equal footing.

http://www.nytimes.com/2011/01/06/science/06esp.html

Thanks, I already wrote twice that they are on equal footing because they are both peer reviewed... but I didn't know that they were to be published in the same journal. :smile:

Perhaps it's done on purpose, in order to push for a change in statistical methods. :biggrin:
 
  • #78
Ivan Seeking said:
In science, an unpublished paper counts for nothing. Unpublished papers are only allowed for discussion here because they often constitute anecdotal evidence for the claim or argument.

I agree... just not in this situation, for reasons you have already accepted and that I don't need to restate.


harrylin: JUST a filter? You make that sound so small, but it's the primary mechanism that ensures what you linked to is being FOLLOWED.
 
  • #79
Ivan Seeking said:
In science, an unpublished paper counts for nothing. Unpublished papers are only allowed for discussion here because they often constitute anecdotal evidence for the claim or argument.

So on this forum, nobody is allowed to argue against a paper unless they themselves have that argument in a published paper? I don't follow. The Bem paper has some very basic flaws that I could have easily pointed out without referencing the paper that I did. However, that paper put it much more eloquently than I could.

Valid arguments don't become invalid just because they're not published any more than invalid arguments become valid just because they're published.

In any case, using the same set of data to both come up with AND test a hypothesis is a horrible methodological flaw that I hope anybody here could see, with or without a published or unpublished paper as a reference.
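To make that concrete, here is a toy sketch in Python (pure-noise data, every number assumed purely for illustration): if you scan a dataset for the most "significant" of many candidate effects and then report that effect's p-value as if it had been the only test planned, the nominal 5% false-positive rate balloons.

Code:
import random
from statistics import mean, stdev

# Toy illustration (assumed setup): 20 candidate "effects" measured on pure noise.
# Picking the strongest one and then testing it on the SAME data comes out
# "significant" far more often than the nominal 5%.
def best_z(n_samples=50, n_effects=20):
    best = 0.0
    for _ in range(n_effects):
        xs = [random.gauss(0, 1) for _ in range(n_samples)]
        z = mean(xs) / (stdev(xs) / n_samples ** 0.5)  # one-sample test statistic
        best = max(best, abs(z))
    return best

random.seed(0)
trials = 2000
hits = sum(best_z() > 1.96 for _ in range(trials))  # |z| > 1.96 is the nominal "p < 0.05" cut
print(hits / trials)  # roughly two-thirds of trials, not 5%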
 
  • #80
Ivan Seeking said:
Technically, I am making a special exception to allow an unpublished rebuttal to a published paper. If the tables were turned, it would never be allowed. That would be considered crackpot or pseudoscience.

It seems a bit severe to discount non-peer-reviewed rebuttals when the Bem paper has not actually appeared in print yet. If the precognition paper were 5 years old, I would support trying to limit the discussion to rebuttals appearing in the published literature, but given that the findings are very new, it seems prudent to consider unpublished responses from experts in the field. As very few researchers have had time to come up with experiments to address Bem's claims, let alone get them peer reviewed, limiting discussion to peer-reviewed findings in essence invalidates any criticism of the Bem paper.

Should these unpublished rebuttals be taken with a grain of salt? Yes, just as any research findings, peer-reviewed or not, should be met with skepticism.

Edit: Also, here is a peer-reviewed paper that discusses many of the flaws in study design and bias discussed in this thread:
Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124
Abstract

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
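The core of that framework fits in one line: if R is the pre-study odds that a probed relationship is real, 1 − β the study's power, and α the significance threshold, then the probability that a claimed "significant" finding is actually true is PPV = (1 − β)R / ((1 − β)R + α) in the bias-free case. A minimal sketch, with example values assumed purely for illustration:

Code:
def ppv(R, power, alpha=0.05):
    """Post-study probability that a 'significant' finding is true (no-bias case)."""
    return (power * R) / (power * R + alpha)

# Example values assumed for illustration, not taken from any specific study:
print(ppv(R=1.0, power=0.8))   # ~0.94: well-powered test of a plausible hypothesis
print(ppv(R=0.01, power=0.2))  # ~0.04: underpowered test where true effects are rare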
 
Last edited:
  • #81
Ygggdrasil, Jack... He already accepted the points you're making!

Ivan Seeking said:
Sorry, okay. I knew there were objections to be published, but not a formal paper.

Otherwise everyone seems to be arguing for the same rigor to be applied, so what's the problem?
 
  • #82
nismaratwork said:
Ygggdrasil, Jack... He already accepted the points you're making!



Otherwise everyone seems to be arguing for the same rigor to be applied, so what's the problem?

It would be absurd to apply the same rigor to comments on an internet forum as in a peer-reviewed journal. Ivan seemed to be implying that all comments made here had to be peer-reviewed before he'd consider them valid.
 
  • #83
Jack21222 said:
It would be absurd to apply the same rigor to comments on an internet forum as in a peer-reviewed journal. Ivan seemed to be implying that all comments made here had to be peer-reviewed before he'd consider them valid.

Jack, we both have been here long enough to KNOW that's not what he was saying. Was he wrong? Yeah. Was he being absurdist? No.
 
  • #84
nismaratwork said:
Jack, we both have been here long enough to KNOW that's not what he was saying. Was he wrong? Yeah. Was he being absurdist? No.

He wouldn't comment on the content of the post because it wasn't peer-reviewed. You tell me what that means.
 
  • #85
Honestly people, this is going in circles.

We've dealt with the 'finer points' of the documents, so how about the discussion gets back on topic?
 
  • #86
Jack21222 said:
He wouldn't comment on the content of the post because it wasn't peer-reviewed. You tell me what that means.

I admit, that goes beyond my ability to explain; I can only say that I don't believe that's what Ivan intended, but obviously he speaks for himself.
 
  • #87
jarednjames said:
Honestly people, this is going in circles.

We've dealt with the 'finer points' of the documents, so how about the discussion gets back on topic?

That would be nice!
 
  • #88
Ygggdrasil said:
[..]

Edit: Also, here is a peer-reviewed paper that discusses many of the flaws in study design and bias discussed in this thread:
Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124

Wow, that's an amazing paper! But yes, it looks like Bem's paper and the criticism of it are being published to provide a case example of just that problem...
 
  • #89
Ivan was right that a published, peer-reviewed paper has more credibility than a non-published, non-peer-reviewed one. He just didn't know that the criticism paper (which was linked in the NYT article that I posted) was also to be published. We can't just go "hey, someone criticised that scientific peer-reviewed paper that I don't like, that means it's false", especially not in a skepticism and debunking forum. It will take time for science to show whether Bem has actually found ESP or not.

harrylin said:
Wow, that's an amazing paper! But yes, it looks like Bem's paper and the criticism of it are being published to provide a case example of just that problem...
Where'd you get that from? That paper is 5 years old and it applies to the majority of published research, not just a single ESP paper. It specifically refers to the area of biomedical research.
 
Last edited:
  • #90
pftest said:
Ivan was right that a published, peer-reviewed paper has more credibility than a non-published, non-peer-reviewed one. He just didn't know that the criticism paper (which was linked in the NYT article that I posted) was also to be published. We can't just go "hey, someone criticised that scientific peer-reviewed paper that I don't like, that means it's false", especially not in a skepticism and debunking forum. It will take time for science to show whether Bem has actually found ESP or not.

Where'd you get that from? That paper is 5 years old and it applies to the majority of published research, not just a single ESP paper. It specifically refers to the area of biomedical research.

The paper applies to statistics generally, not just to biomedical research.

Hey... omg...

Einstein's paper on SR is what, over a century old just from the date of PUBLISHING?! Quick, everyone... is 'c' increasing?!? No? Hmmm, well, GR is old too; has anyone suddenly found a falsification of that?

The irony, of course, is that you could have made the same argument logically (if poorly) from the OPPOSITE perspective and been right: the longer a theory or paper has been peer-reviewed, attacked, and worked on, the more credible it is. Science seeks to tear something down in the hope that it CAN'T and that something valid will be left standing... it's the destructive element of the process (not in a bad way), and a means of quality control in results AND methodology!
 
  • #91
pftest said:
We can't just go "hey someone criticised that scientific peer reviewed paper that i don't like, that means its false", especially not in a skepticism and debunking forum. It will take time for science to show whether Bem has actually found ESP or not.

Nobody here ever said "hey, someone criticized that paper, that means it's false." I said "that paper tortures the data in an unacceptable way, using the same data to both form and test a hypothesis."

Using the same data to both form and test a hypothesis is never acceptable. Ever. I don't care if it's in a peer-reviewed journal or not. Doing that makes the paper false. Never once did I appeal to authority like you're claiming (by phrasing my argument as "hey, someone criticized it").

Bem used the Texas sharpshooter fallacy (http://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy). One doesn't need peer-reviewed research to point that out.
 
Last edited by a moderator:
  • #92
Jack21222 said:
Nobody here ever said "hey, someone criticized that paper, that means it's false." I said "that paper tortures the data in an unacceptable way, using the same data to both form and test a hypothesis."

Using the same data to both form and test a hypothesis is never acceptable. Ever. I don't care if it's in a peer-reviewed journal or not. Doing that makes the paper false. Never once did I appeal to authority like you're claiming (by phrasing my argument as "hey, someone criticized it").

Bem used the Texas sharpshooter fallacy (http://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy). One doesn't need peer-reviewed research to point that out.

Perfectly said.
 
Last edited by a moderator:
  • #93
nismaratwork said:
Perfectly said.

Perfectly said.
 
  • #94
So at what point does discussion get back to the OP? Or is everyone going to sit bickering about who said what?
 
  • #95
jarednjames said:
So at what point does discussion get back to the OP? Or is everyone going to sit bickering about who said what?

That was an amazingly ironic post.

I wish people would stop quoting other people's posts and writing about them! We need to get back to the OP! Everything else is a distraction. I hate it when people ramble on and on about nothing at all, like a leaky sink faucet! Just dripping water all night against the unwashed pan from the night before. The metronomic pinging of water against metal a constant reminder that, no matter how hard you try, you just can't prepare dinner to her satisfaction.

Ping, "this is undercooked."
Ping, "did you just put seasoned salt on this?"
Ping, "Jason knew how to cook haddock."

The incessant nagging still with you long after she's fallen asleep; a dead weight in the bed pulling you closer only through the deformation of the long, saggy mattress. And that's the moment you realized the love is gone.
 
  • #96
FlexGunship said:
That was an amazingly ironic post.

Yes it is (so is this one), and by extension so is every "can we get back to topic" post.

This thread is no longer discussing the OP (or related materials); it is arguing over silly little things and getting nowhere.

So back to the OP please.
 
  • #97
jarednjames said:
Yes it is (so is this one), and by extension so is every "can we get back to topic" post.

This thread is no longer discussing the OP (or related materials); it is arguing over silly little things and getting nowhere.

So back to the OP please.

Yeah... it's gone off topic because the OP got the answer, the argument, and everyone's opinion. What's left except minutiae?
 
  • #98
nismaratwork said:
Yeah... it's gone off topic because the OP got the answer, the argument, and everyone's opinion. What's left except minutiae?

Okay, fine. I'll bring it back to the OP. Not the specific paper, but the topic.

I think that, because of the nature of a discovery like precognition, a single peer-reviewed paper shouldn't be considered enough. This is the type of effect that should be reproducible on command, in many different locations, at a very small cost. Therefore, I don't think it's unreasonable to wait for additional confirmatory papers.

Does anyone disagree?
 
  • #99
FlexGunship said:
Okay, fine. I'll bring it back to the OP. Not the specific paper, but the topic.

I think that, because of the nature of a discovery like precognition, a single peer-reviewed paper shouldn't be considered enough. This is the type of effect that should be reproducible on command, in many different locations, at a very small cost. Therefore, I don't think it's unreasonable to wait for additional confirmatory papers.

Does anyone disagree?

I concur; much as would be the case with a SETI discovery, confirming such a thing would be a process. What I hate, and what the 'true believers' miss, is this: who wouldn't be thrilled to find out that the universe was so odd? I'd love to have a superpower!

I just don't see the evidence to start leaping from buildings to see if I'll fly, to throw out a colorful metaphor.
 
  • #100
I completely agree, flex. One paper doesn't constitute perfect evidence, but it is a good starting point.
 
  • #101
jarednjames said:
I completely agree, flex. One paper doesn't constitute perfect evidence, but it is a good starting point.

Well, one good paper would be a good starting point. This paper is no starting point at all, for the reasons I mentioned. If I take all sorts of data and start drawing lines around some of it, I'm sure I could "prove" all sorts of weird things.

"Oh look, dice throws come up as a five 3% more often on the third Tuesday of January, March, and November. We did over 1,000 dice throws every day, so the results are statistically significant."
 
  • #102
[I wrote: "Wow [Ionadis 2005] is an amazing paper! But yes, it looks like Bem's paper and the criticism on it is being published to provide a case example of just that problem... ]
pftest said:
[..]
Where'd you get that from? That paper is 5 years old and it applies to the majority of published research, not just a single ESP paper. It specifically refers to the area of biomedical research.

The criticism I referred to is the refutation by Wagenmakers et al., which apparently will be published in the same issue as Bem's paper.
 
Last edited:
  • #103
My post was in response to what flex put (like I said), which was about one paper not being enough.

I'm not saying this paper is the starting point. My comment was a general reply regarding any topic, where one paper (under the conditions outlined by flex) is a good starting point.

For me, this paper is not that starting point.
 
  • #104
Uh, right. I should've been more clear. I was NOT implying that the paper being discussed would have been "#1" in the list called "Evidence." I'm simply saying that the list called "Evidence" can't be one item long.
 
  • #105
FlexGunship said:
Uh, right. I should've been more clear. I was NOT implying that the paper being discussed would have been "#1" in the list called "Evidence." I'm simply saying that the list called "Evidence" can't be one item long.

A word I love... "Indication"!
 
