Precognition paper to be published in mainstream journal

  • Thread starter: pftest
  • Tags: Journal, Paper
Summary
Recent discussions highlight a groundbreaking paper suggesting that future events may influence current behavior, challenging traditional views on precognition. The study, led by Daryl Bem and set to be published in a prominent psychology journal, has garnered attention for its rigorous methodology, with even skeptics unable to identify significant flaws. Previous experiments, such as those on "presentiment," have shown physiological responses occurring before stimuli, hinting at a possible precognitive effect. While some participants express skepticism about the findings, the research opens the door for further scientific inquiry into phenomena previously deemed untestable. The implications of confirming precognition could revolutionize our understanding of time and perception.
  • #61
pftest said:
These are supposedly 3 failed replications of Bem's test results (I don't know if they are the same ones as mentioned in the NYT article):
https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=13quorf_DWEXBBvlDPngbUNFKm5-BjgXgehJJ7ndnxc_wx2BsXn84iPhLeVfX&hl=en / http://circee.org/Retro-priming-et-re-test.html / 3

There must be more replication efforts out there. [..].

Well, in view of Wagenmakers et al.'s response paper and their reinterpretation, those are actually successful replications! :-p
 
Last edited by a moderator:
  • #62
Ivan Seeking said:
It should be noted that so far, all objections are only opinions and anecdotes. The rebuttal paper can only be considered anecdotal evidence - it cannot be used as evidence that the original paper was flawed - unless/until it is published in a mainstream journal. It is fine to discuss the objections, but they cannot be declared valid at this time.

Likewise, one published paper proves nothing. We have experimental evidence for the claim that is subject to peer review and verification.

I don't think you know what "anecdote" means. Pointing out methodological flaws isn't an anecdote. You may argue that it isn't scientifically accepted evidence yet, but it's very convincing if you ask me, especially the part where they formed and tested the hypothesis with the same set of data.

That is a horrible abuse of data points.
 
  • #63
Jack21222 said:
I don't think you know what "anecdote" means. Pointing out methodological flaws isn't an anecdote. You may argue that it isn't scientifically accepted evidence yet, but it's very convincing if you ask me, especially the part where they formed and tested the hypothesis with the same set of data.

That is a horrible abuse of data points.

I agree with the spirit of what you're saying... do the rules allow for something published so openly, but not peer reviewed, to be considered more than anecdotal? It may be an issue of the rules of the site vs. the standard terminology... I hope.
 
  • #64
nismaratwork said:
I agree with the spirit of what you're saying... do the rules allow for something published so openly, but not peer reviewed, to be considered more than anecdotal? It may be an issue of the rules of the site vs. the standard terminology... I hope.

An anecdote is a story. What I linked is not a story. It's a criticism based on methodology.
 
  • #65
nismaratwork said:
For instance, would it be logical to assume the existence (i.e. truth of hypothesis) of something, then go about to prove your assumption? That's called... NOT SCIENCE...

I agree that it is not science.

Yet, it is exactly what disbelievers of ESP/paranormal do. They assume that it does not exist, then go about to prove it, finding errors in the procedures, statistical analysis, etc., of the ESP experiments.

So, it seems they are being as unscientific as the ones they criticise.
 
  • #66
Jack21222 said:
An anecdote is a story. What I linked is not a story. It's a criticism based on methodology.

Note that it is just as peer reviewed as the paper that it criticizes.

The main issue, I think, is that the original paper seems to have been a fishing expedition, without properly accounting for that fact. Anyway, I'm now becoming familiar with Bayesian statistics thanks to this. :smile:

Harald
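As an aside on the Bayesian reanalysis mentioned above: the intuition behind Wagenmakers et al.'s critique can be sketched with a much simpler model than the Bayesian t-tests they actually used. The sketch below compares a chance hypothesis against a vague alternative for binary guessing data; the hit counts are illustrative, not Bem's actual numbers.

```python
from math import comb

def bayes_factor_01(k, n):
    """Bayes factor for H0 (pure chance, p = 0.5) against H1 (hit rate
    unknown, Uniform(0,1) prior) given k hits in n binary trials.
    Values above 1 favour the chance hypothesis."""
    m0 = comb(n, k) * 0.5 ** n  # marginal likelihood under H0
    m1 = 1.0 / (n + 1)          # binomial integrated over a uniform prior
    return m0 / m1

# A "53% hit rate" of the kind reported: 53 hits in 100 trials.
# Under this Bayesian comparison the data actually favour chance,
# even though the raw rate sits above 50%.
print(bayes_factor_01(53, 100))
```

The point of the sketch is that a small deviation from chance, which can reach p < 0.05 in a large enough frequentist test, may still be *more* probable under the chance hypothesis than under a vague alternative.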
 
  • #67
coelho said:
I agree that it is not science.

Yet, it is exactly what disbelievers of ESP/paranormal do. They assume that it does not exist, then go about to prove it, finding errors in the procedures, statistical analysis, etc., of the ESP experiments.

So, it seems they are being as unscientific as the ones they criticise.

Finding errors in other peoples work is the ENTIRE BASIS OF SCIENCE. That's how we have so much confidence in what survives the scientific process, because it HAS been thoroughly attacked from every angle, and it came out the other end alive.

To use your example, if ESP was real, even after the disbelievers go about to disprove it, attempting to find errors in the procedure, statistical analysis, etc, the evidence would still hold up. If it doesn't hold up, that means it isn't accepted by science yet, come back when you have evidence that can survive the scientific process.

To say that those things you mentioned are "unscientific" is just about the most absurd thing you can possibly say. It's like saying giving live birth and having warm blood is "un-mammalian."
 
  • #68
coelho said:
Yet, it is exactly what disbelievers of ESP/paranormal do. They assume that it does not exist, then go about to prove it, finding errors in the procedures, statistical analysis, etc., of the ESP experiments.

So, it seems they are being as unscientific as the ones they criticise.

Firstly, if you claim ESP exists then it is up to you to prove it.

You give evidence of its existence, people then 'tear it apart'. That's science.

Every flaw, every error, every single thing you can find wrong with the evidence / procedure, whatever is there, is a mark against it. But, if after all of that the evidence still holds, then ESP would still be accepted.

The default assumption is that science has nothing to say on a subject without evidence. Until verifiable evidence comes to light, there is no reason to entertain the notion of it existing. Simple.

The fact is, the evidence for ESP / the paranormal doesn't hold up to even the simplest examination. And let's not get started on the test methods.

There is nothing unscientific about finding flaws in data and test methods (heck, you're encouraged to). There is nothing unscientific in requiring valid evidence for claims.
 
  • #69
Coelho: Jack and Jared have replied to your fundamental lack of understanding of science, better than I could.
 
  • #70
Jack21222 said:
I don't think you know what "anecdote" means. Pointing out methodological flaws isn't an anecdote. You may argue that it isn't scientifically accepted evidence yet, but it's very convincing if you ask me, especially the part where they formed and tested the hypothesis with the same set of data.

Until we see a published rebuttal, all arguments are anecdotal or unsupported. Unpublished papers count at most as anecdotal evidence, which never trumps a published paper.

We don't use one standard for claims we like, and another for claims we don't like. See the S&D forum guidelines.
 
Last edited:
  • #71
Ivan Seeking said:
Until we see a published rebuttal, all arguments are anecdotal or unsupported.

We don't use one standard for claims we like, and another for claims we don't like. See the S&D forum guidelines.

This is my understanding of "anecdote" as per the scientific method, and not just a PF-rules issue; would that be correct?
 
  • #72
nismaratwork said:
This is my understanding of "anecdote" as per the scientific method, and not just a PF-rules issue; would that be correct?

In science, an unpublished paper counts for nothing. They are only allowed for discussion here as they do often constitute anecdotal evidence for the claim or argument.
 
  • #73
Ivan Seeking said:
Until we see a published rebuttal, all arguments are anecdotal or unsupported. Unpublished papers count at most as anecdotal evidence, which never trumps a published paper.

We don't use one standard for claims we like, and another for claims we don't like. See the S&D forum guidelines.
The rebuttal is going to be published in the same journal at the same time as the Bem paper, so they are on equal footing.

Dr. Wagenmakers is co-author of a rebuttal to the ESP paper that is scheduled to appear in the same issue of the journal.

http://www.nytimes.com/2011/01/06/science/06esp.html
 
  • #74
Technically, I am making a special exception to allow an unpublished rebuttal to a published paper. If the tables were turned, it would never be allowed. That would be considered crackpot or pseudoscience.
 
  • #75
Evo said:
The rebuttal is going to be published in the same journal at the same time as the Bem paper, so they are on equal footing.



http://www.nytimes.com/2011/01/06/science/06esp.html

Sorry, okay. I knew there were objections to be published, but not a formal paper.
 
  • #76
Ivan Seeking said:
In science, an unpublished paper counts for nothing. They are only allowed for discussion here as they do often constitute anecdotal evidence for the claim or argument.

Actually, publication is simply a means for dissemination, and peer review is merely a noise filter for quality control (which both papers discussed here already passed). The same is also used for quality control of Wikipedia and discussion topics on this site.

Dissemination filters must however not be confused with science or the scientific method! What matters in science are facts and theories, and the verification or disproof of those theories.

Entries for further reading can be found in:
http://en.wikipedia.org/wiki/Scientific_method

Harald
 
  • #77
Evo said:
The rebuttal is going to be published in the same journal at the same time as the Bem paper, so they are on equal footing.

http://www.nytimes.com/2011/01/06/science/06esp.html

Thanks, I already wrote twice that they are on equal footing because they are both peer reviewed... but I didn't know that they were to be published in the same journal. :smile:

Perhaps it's done on purpose, in order to push for a change in statistical methods. :biggrin:
 
  • #78
Ivan Seeking said:
In science, an unpublished paper counts for nothing. They are only allowed for discussion here as they do often constitute anecdotal evidence for the claim or argument.

I agree... just not in this situation for reasons you already have accepted, and I don't need to restate.


harrylin: JUST a filter? You make that sound so small, but it's the primary mechanism that ensures what you linked to is being FOLLOWED.
 
  • #79
Ivan Seeking said:
In science, an unpublished paper counts for nothing. They are only allowed for discussion here as they do often constitute anecdotal evidence for the claim or argument.

So on this forum, nobody is allowed to argue against a paper unless they themselves have that argument in a published paper? I don't follow. The Bem paper has some very basic flaws that I could have easily pointed out without referencing the paper that I did. However, that paper put it much more eloquently than I could.

Valid arguments don't become invalid just because they're not published any more than invalid arguments become valid just because they're published.

In any case, using the same set of data to both come up with AND test a hypothesis is a horrible methodological flaw that I hope anybody here could see, with or without a published or unpublished paper as a reference.
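The flaw described above can be made concrete with a toy simulation (the cutoffs and condition counts here are my own illustration, taken from neither paper). If you run many conditions under a pure-chance null and then single out whichever one "looks significant" after the fact, the effective false positive rate is far above the nominal ~5% per test.

```python
import random

def looks_significant(heads):
    """Crude two-sided ~5% cutoff for 100 fair-coin flips."""
    return heads >= 60 or heads <= 40

def fishing_rate(n_hypotheses, n_sims=1000, n_trials=100, seed=42):
    """Fraction of simulated null experiments in which AT LEAST ONE of
    n_hypotheses conditions crosses the cutoff -- i.e. the false
    positive rate when the hypothesis is chosen after seeing the data."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_sims):
        for _ in range(n_hypotheses):
            heads = sum(rng.random() < 0.5 for _ in range(n_trials))
            if looks_significant(heads):
                false_positives += 1
                break  # one "hit" is enough to report a finding
    return false_positives / n_sims

print(fishing_rate(1))    # one pre-registered test: near the nominal ~5%
print(fishing_rate(20))   # twenty post-hoc comparisons: well over 50%
```

With twenty looks at null data, reporting the best-looking comparison produces a spurious "effect" in most experiments, which is exactly why forming and testing a hypothesis on the same data set inflates significance.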
 
  • #80
Ivan Seeking said:
Technically, I am making a special exception to allow an unpublished rebuttal to a published paper. If the tables were turned, it would never be allowed. That would be considered crackpot or pseudoscience.

It seems a bit severe to discount non-peer-reviewed rebuttals when the Bem paper has not actually appeared in print yet. If the precognition paper were 5 years old, I would support trying to limit the discussion to rebuttals appearing in the published literature, but given that the findings are very new, it seems prudent to consider unpublished responses from experts in the field. As very few researchers have had time to come up with experiments to address Bem's claims, let alone get them peer reviewed, limiting discussion to peer-reviewed findings in essence invalidates any criticism of the Bem paper.

Should these unpublished rebuttals be taken with a grain of salt? Yes, just as any research findings, peer-reviewed or not, should be met with skepticism.

Edit: Also, here is a peer-reviewed paper that discusses many of the flaws in study design and bias discussed in this thread:
Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124
Abstract

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
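The abstract's central claim can be sketched with Ioannidis's positive predictive value formula; the parameter values below are illustrative choices, not figures from the paper.

```python
def ppv(alpha, beta, R):
    """Ioannidis (2005): probability that a statistically significant
    finding is actually true, given significance level alpha, type II
    error rate beta, and pre-study odds R that a probed relationship
    in the field is real."""
    return (1 - beta) * R / (R - beta * R + alpha)

# Well-powered study in a field where half the tested effects are real:
print(ppv(0.05, 0.2, 1.0))   # ~0.94

# Same design in a field where true effects are rare (odds 1:100),
# as a skeptic might argue applies to precognition research:
print(ppv(0.05, 0.2, 0.01))  # ~0.14
```

The formula makes the relevance to this thread explicit: the lower the prior odds that effects in a field are real, the more likely any individual significant result is a false positive, bias and flexible analyses aside.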
 
Last edited:
  • #81
Ygggdrasil, Jack... He already accepted the points you're making!

Ivan Seeking said:
Sorry, okay. I knew there were objections to be published, but not a formal paper.

Otherwise everyone seems to be arguing for the same rigor to be applied, so what's the problem?
 
  • #82
nismaratwork said:
Ygggdrasil, Jack... He already accepted the points you're making!



Otherwise everyone seems to be arguing for the same rigor to be applied, so what's the problem?

It would be absurd to apply the same rigor to comments on an internet forum as in a peer-reviewed journal. Ivan seemed to be implying that all comments made here had to be peer-reviewed before he'd consider them valid.
 
  • #83
Jack21222 said:
It would be absurd to apply the same rigor to comments on an internet forum as in a peer-reviewed journal. Ivan seemed to be implying that all comments made here had to be peer-reviewed before he'd consider them valid.

Jack, we both have been here long enough to KNOW that's not what he was saying. Was he wrong? Yeah. Was he being absurdist? No.
 
  • #84
nismaratwork said:
Jack, we both have been here long enough to KNOW that's not what he was saying. Was he wrong? Yeah. Was he being absurdist? No.

He wouldn't comment on the content of the post because it wasn't peer-reviewed. You tell me what that means.
 
  • #85
Honestly people, this is going in circles.

We've dealt with the 'finer points' of the documents, how about discussion gets back on topic.
 
  • #86
Jack21222 said:
He wouldn't comment on the content of the post because it wasn't peer-reviewed. You tell me what that means.

I admit, that goes beyond my ability to explain; I can only say that I don't believe that's what Ivan intended, but obviously he speaks for himself.
 
  • #87
jarednjames said:
Honestly people, this is going in circles.

We've dealt with the 'finer points' of the documents, how about discussion gets back on topic.

That would be nice!
 
  • #88
Ygggdrasil said:
[..]

Edit: Also, here is a peer-reviewed paper that discusses many of the flaws in study design and bias discussed in this thread:
Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124

Wow that's an amazing paper! But yes, it looks like Bem's paper and the criticism on it is being published to provide a case example of just that problem...
 
  • #89
Ivan was right that a published, peer-reviewed paper has more credibility than an unpublished, non-peer-reviewed one. He just didn't know that the criticism paper (which was linked in the NYT article that I posted) was also to be published. We can't just go "hey, someone criticised that scientific peer-reviewed paper that I don't like, that means it's false", especially not in a skepticism and debunking forum. It will take time for science to show whether Bem has actually found ESP or not.

harrylin said:
Wow that's an amazing paper! But yes, it looks like Bem's paper and the criticism on it is being published to provide a case example of just that problem...
Where'd you get that from? That paper is 5 years old and it applies to the majority of published research, not just a single ESP paper. It specifically refers to the area of biomedical research.
 
Last edited:
  • #90
pftest said:
Ivan was right that a published, peer-reviewed paper has more credibility than an unpublished, non-peer-reviewed one. He just didn't know that the criticism paper (which was linked in the NYT article that I posted) was also to be published. We can't just go "hey, someone criticised that scientific peer-reviewed paper that I don't like, that means it's false", especially not in a skepticism and debunking forum. It will take time for science to show whether Bem has actually found ESP or not.

Where'd you get that from? That paper is 5 years old and it applies to the majority of published research, not just a single ESP paper. It specifically refers to the area of biomedical research.

The paper applies to statistics.

Hey... omg...

Einstein's paper on SR is what, 96 years old just from the date of PUBLISHING?! Quick, everyone... is 'c' increasing?!? No? Hmmm, well GR is old, anyone suddenly find falsification for that?

The irony of course, is that you could have made the same argument logically (if poorly) from the OPPOSITE perspective, and been right: the longer a theory or paper is being peer-reviewed, attacked, worked on... the more credible it is. Science seeks to tear something down, in the hopes that it CAN'T and will be left with something valid... it's the destructive element of the process (not in a bad way), and a means of quality control in results AND methodology!
 
