Precognition paper to be published in mainstream journal

  • Thread starter pftest
  • #51
Perhaps this falls into the category of "journalism" that seems so despised in this discussion, but Jonah Lehrer wrote a nice article for The New Yorker that touches on issues relevant to the debate (similar to points already raised in the thread: that subtle flaws in study design, analysis, and interpretation can introduce significant biases and lead to erroneous results). In particular, he talks about some work done by Jonathan Schooler:
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer

In essence, Schooler replicated the results of the Bem paper but, after performing many more tests, showed that the results were nothing but a statistical anomaly. I'm not aware whether Schooler published these results.

This, especially in light of other such examples detailed in Lehrer's piece, is why I'm hesitant to trust findings based primarily on statistical data without a plausible, empirically-tested mechanism explaining the results.

Nah, when you post journalism, it's OK... you're the world-tree after all :wink:. Plus, your article actually offers information rather than obscuring it when the original paper is available. Thank you.
 
  • #52
Oh, in that case I'll have Flex do the same, referring to ME as an "expert", and I'll call him a journalist. I can see that you really stretch the standards here when it comes to credulity.
The article I posted is about Bem's paper, as well as some of the replication efforts. It also has a "debate" section, or rather a criticism section, in which 9 different scientists give their opinions on it. The NYT does not invent its experts, sources, or the many scientists it mentions, if that's what you are suggesting. Google them if you don't believe they exist. I was the one who posted Bem's original paper, btw.

Perhaps you didn't read it because it now requires a login (it didn't when I posted it yesterday), but registration is free.
 
  • #53
The article I posted is about Bem's paper, as well as some of the replication efforts. It also has a "debate" section, or rather a criticism section, in which 9 different scientists give their opinions on it. The NYT does not invent its experts, sources, or the many scientists it mentions, if that's what you are suggesting. Google them if you don't believe they exist. I was the one who posted Bem's original paper, btw.

Perhaps you didn't read it because it now requires a login (it didn't when I posted it yesterday), but registration is free.

Oh lord... listen pftest... the NY Times isn't a peer-reviewed journal, so what you're talking about is the fallacy of appeal to authority. I am also NOT suggesting anything about the NY Times... I really know very little about them and don't use them for my news; I prefer more direct sources. I did read THIS, but the OPINIONS of 9 people are just that... and not scientific support. AGAIN, I don't believe you're familiar with standards like this, so you're running into trouble... again.
 
  • #54
Perhaps this falls into the category of "journalism" that seems so despised in this discussion, but Jonah Lehrer wrote a nice article for The New Yorker that touches on issues relevant to the debate (similar to points already raised in the thread: that subtle flaws in study design, analysis, and interpretation can introduce significant biases and lead to erroneous results). In particular, he talks about some work done by Jonathan Schooler:
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer

In essence, Schooler replicated the results of the Bem paper but, after performing many more tests, showed that the results were nothing but a statistical anomaly. I'm not aware whether Schooler published these results.

This, especially in light of other such examples detailed in Lehrer's piece, is why I'm hesitant to trust findings based primarily on statistical data without a plausible, empirically-tested mechanism explaining the results.

Very interesting, thanks! Although it states more or less the contrary of Bem, I would say that Schooler's findings are almost as mind-boggling as Bem's... Perhaps worth a topic fork?

PS as a personal anecdote: as a kid I once came across a "one-armed bandit" gambling machine with a group of guys around it. They had thrown a lot of false coins(!) into the machine, and one of them was about to throw in the last coin when he noticed me. After I confirmed to him that I had never gambled before, he asked me to throw it in, and I got the jackpot for them - most of it consisting of their own false coins. I left the scene with mixed feelings, as they had robbed me of the chance at beginner's luck for myself...
 
  • #55
Ivan Seeking
Staff Emeritus
Science Advisor
Gold Member
It should be noted that so far, all objections are only opinions and anecdotes. The rebuttal paper can only be considered anecdotal evidence - it cannot be used as evidence that the original paper was flawed - unless/until it is published in a mainstream journal. It is fine to discuss the objections, but they cannot be declared valid at this time.

Likewise, one published paper proves nothing. We have experimental evidence for the claim that is subject to peer review and verification.
 
  • #56
Personally, I still stand by my original thoughts, which were that 3% isn't that significant.

OK, it's above average (53% correct in an area with 50/50 odds). But given the way the test was performed it didn't prove anything as far as I'm concerned.

If you really want to do something like this, take 1000 people, sit them down and toss a coin for them (via some coin toss machine) and get them to predict the outcome.

No need for anything excessive given the subject.

After that trial, assuming (say) 1,000 guesses per person, a 53% hit rate would mean that 30,000 of the 1,000,000 guesses were correct when they shouldn't have been. Now that is significant.

Regardless, the biggest problem I see with tests like this is that I could sit calling heads every time, and the odds say I'll break even, so anything above that would count towards precognition. If this happens with a number of subjects, you could end up with a skewed result.
Although you would expect equal numbers of each, it is quite possible to get more heads than tails during the test, so that strategy would skew things.

Perhaps you could do the test as outlined above and use the continuous heads/tails method as a set of benchmarks.
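The sample-size point above can be sketched with a quick back-of-the-envelope calculation (the 1,000-guesses-per-person figure is an assumption chosen so the totals line up with the 30,000 excess hits mentioned earlier):

```python
import math

def z_score(hits: int, n: int) -> float:
    """Normal-approximation z-score for `hits` correct guesses out of
    `n` fair 50/50 trials (mean n/2, standard deviation sqrt(n)/2)."""
    return (hits - n / 2) / (math.sqrt(n) / 2)

def two_sided_p(z: float) -> float:
    """Two-sided p-value corresponding to a z-score."""
    return math.erfc(abs(z) / math.sqrt(2))

# 53% of 100 guesses: a 3% excess is unremarkable noise.
z_small = z_score(53, 100)             # 0.6, well under the 1.96 threshold

# 53% of 1,000,000 guesses (1,000 people x 1,000 guesses each,
# an assumed design): the same 3% excess is 30,000 extra hits.
z_large = z_score(530_000, 1_000_000)  # 60.0, overwhelming
```

The same 3% excess that is pure noise over 100 guesses becomes an overwhelming signal over a million: significance depends on the number of trials, not on the raw percentage.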
 
  • #57
It should be noted that so far, all objections are only opinions and anecdotes. The rebuttal paper can only be considered anecdotal evidence - it cannot be used as evidence that the original paper was flawed - unless/until it is published in a mainstream journal. It is fine to discuss the objections, but they cannot be declared valid at this time.

Likewise, one published paper proves nothing. We have experimental evidence for the claim that is subject to peer review and verification.

I had overlooked that there is a rebuttal paper - thanks, I'll read it now! But such a rebuttal as Wagenmakers et al.'s cannot be considered "anecdotal"; that is something very different. And the publication (or not) of a paper in a "mainstream journal" cannot be taken as evidence of the paper's correctness, just as an email that passed your spam filter isn't necessarily true, nor are all emails that have not yet been sent, or that land in your spam box, spam. What matters in physics are presented facts and their verification. Discussions on this forum may be limited to peer-reviewed material for exactly the same anti-spam purpose, but a forum discussion should not be confused with the scientific method.

Harald

Edit: I now see that the essence of Wagenmakers' paper has been accepted for publication: it's "a revised version of a previous draft that was accepted pending revision for Journal of Personality and Social Psychology."
 
  • #59
These are supposedly 3 failed replications of Bem's test results (I don't know if they are the same ones as mentioned in the NYT article):
1 / http://circee.org/Retro-priming-et-re-test.html [Broken] / 3

There must be more replication efforts out there.

nismaraatwork said:
Oh lord... listen pftest... the NY Times isn't a peer-reviewed journal, so what you're talking about is the fallacy of appeal to authority. I am also NOT suggesting anything about the NY Times... I really know very little about them and don't use them for my news; I prefer more direct sources. I did read THIS, but the OPINIONS of 9 people are just that... and not scientific support. AGAIN, I don't believe you're familiar with standards like this, so you're running into trouble... again.
:yuck:
Calm down, chap, I just posted an article with an abundance of relevant information. I didn't claim the NYT is a peer-reviewed scientific journal...
 
  • #60
These are supposedly 3 failed replications of Bem's test results (I don't know if they are the same ones as mentioned in the NYT article):
1 / http://circee.org/Retro-priming-et-re-test.html [Broken] / 3

There must be more replication efforts out there.

:yuck:
Calm down, chap, I just posted an article with an abundance of relevant information. I didn't claim the NYT is a peer-reviewed scientific journal...

Sorry, I've been jumping between threads and work too much. I don't agree with what you clearly believe, but nonetheless I was rude. I apologize.
 
  • #61
These are supposedly 3 failed replications of Bem's test results (I don't know if they are the same ones as mentioned in the NYT article):
1 / http://circee.org/Retro-priming-et-re-test.html [Broken] / 3

There must be more replication efforts out there. [..].

Well, in view of Wagenmakers et al.'s response paper and their reinterpretation, those are actually successful replications! :tongue2:
 
  • #62
It should be noted that so far, all objections are only opinions and anecdotes. The rebuttal paper can only be considered anecdotal evidence - it cannot be used as evidence that the original paper was flawed - unless/until it is published in a mainstream journal. It is fine to discuss the objections, but they cannot be declared valid at this time.

Likewise, one published paper proves nothing. We have experimental evidence for the claim that is subject to peer review and verification.

I don't think you know what "anecdote" means. Pointing out methodological flaws isn't an anecdote. You may argue that it isn't scientifically accepted evidence yet, but it's very convincing if you ask me, especially the part where they formed and tested the hypothesis with the same set of data.

That is a horrible abuse of data points.
 
  • #63
I don't think you know what "anecdote" means. Pointing out methodological flaws isn't an anecdote. You may argue that it isn't scientifically accepted evidence yet, but it's very convincing if you ask me, especially the part where they formed and tested the hypothesis with the same set of data.

That is a horrible abuse of data points.

I agree with the spirit of what you're saying... do the rules allow something published so openly, but not peer-reviewed, to be considered more than anecdotal? It may be an issue of the rules of the site vs. the standard terminology... I hope.
 
  • #64
I agree with the spirit of what you're saying... do the rules allow something published so openly, but not peer-reviewed, to be considered more than anecdotal? It may be an issue of the rules of the site vs. the standard terminology... I hope.

An anecdote is a story. What I linked is not a story. It's a criticism based on methodology.
 
  • #65
For instance, would it be logical to assume the existence (i.e. truth of hypothesis) of something, then go about to prove your assumption? That's called... NOT SCIENCE...

I agree that it is not science.

Yet, it is exactly what disbelievers in ESP/the paranormal do. They assume that it does not exist, then set about proving it, finding errors in the procedures, statistical analysis, etc., of the ESP experiments.

So, it seems they are being as unscientific as the ones they criticise.
 
  • #66
An anecdote is a story. What I linked is not a story. It's a criticism based on methodology.

Note that it is just as much peer reviewed as the paper that it criticizes.

The main issue, I think, is that the original paper seems to have been a fishing expedition without properly accounting for that fact. Anyway, I'm now becoming familiar with Bayesian statistics thanks to this. :smile:

Harald
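Since Bayesian statistics came up: here is a minimal sketch of the kind of Bayes-factor comparison Wagenmakers et al. advocate, applied to a binomial hit count. The numbers and the uniform prior on the hit rate under H1 are illustrative assumptions, not Bem's actual data or the paper's exact analysis:

```python
import math

def bayes_factor_01(hits: int, n: int) -> float:
    """Bayes factor BF01 for H0: p = 0.5 versus H1: p ~ Uniform(0, 1),
    given `hits` successes in `n` binary trials.

    Under H0 the likelihood is C(n, k) * 0.5**n; under the uniform prior
    the marginal likelihood is 1 / (n + 1) for every k, so
    BF01 = C(n, k) * 0.5**n * (n + 1). Values above 1 favour chance."""
    return math.comb(n, hits) * 0.5**n * (n + 1)

# An illustrative 53% hit rate in 100 trials (hypothetical data):
bf = bayes_factor_01(53, 100)
# BF01 comes out above 1: the data favour the chance-only hypothesis
# over this vague alternative, even though a small excess over 50%
# can look "significant" in a frequentist test with enough trials.
```

The choice of prior under H1 matters a great deal here, which is exactly why the frequentist and Bayesian readings of the same hit counts can disagree.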
 
  • #67
I agree that it is not science.

Yet, it is exactly what disbelievers in ESP/the paranormal do. They assume that it does not exist, then set about proving it, finding errors in the procedures, statistical analysis, etc., of the ESP experiments.

So, it seems they are being as unscientific as the ones they criticise.

Finding errors in other people's work is the ENTIRE BASIS OF SCIENCE. That's how we have so much confidence in what survives the scientific process: because it HAS been thoroughly attacked from every angle and come out the other end alive.

To use your example, if ESP were real, even after the disbelievers set about disproving it, attempting to find errors in the procedure, statistical analysis, etc., the evidence would still hold up. If it doesn't hold up, that means it isn't accepted by science yet; come back when you have evidence that can survive the scientific process.

To say that those things you mentioned are "unscientific" is just about the most absurd thing you can possibly say. It's like saying giving live birth and having warm blood is "un-mammalian."
 
  • #68
Yet, it is exactly what disbelievers in ESP/the paranormal do. They assume that it does not exist, then set about proving it, finding errors in the procedures, statistical analysis, etc., of the ESP experiments.

So, it seems they are being as unscientific as the ones they criticise.

Firstly, if you claim ESP exists then it is up to you to prove it.

You give evidence of its existence, people then 'tear it apart'. That's science.

Every flaw, every error, every single thing you can find wrong with the evidence / procedure, whatever is there, is a mark against it. But, if after all of that the evidence still holds, then ESP would still be accepted.

The default assumption is that science has nothing to say on a subject without evidence. Until verifiable evidence comes to light, there is no reason to entertain the notion of it existing. Simple.

The fact is, the evidence for ESP / the paranormal doesn't hold up to even the simplest examination. And let's not get started on the test methods.

There is nothing unscientific about finding flaws in data and test methods (heck, you're encouraged to). There is nothing unscientific in requiring valid evidence for claims.
 
  • #69
Coelho: Jack and Jared have replied to your fundamental lack of understanding of science better than I could.
 
  • #70
Ivan Seeking
Staff Emeritus
Science Advisor
Gold Member
I don't think you know what "anecdote" means. Pointing out methodological flaws isn't an anecdote. You may argue that it isn't scientifically accepted evidence yet, but it's very convincing if you ask me, especially the part where they formed and tested the hypothesis with the same set of data.

Until we see a published rebuttal, all arguments are anecdotal or unsupported. Unpublished papers count at most as anecdotal evidence, which never trumps a published paper.

We don't use one standard for claims we like, and another for claims we don't like. See the S&D forum guidelines.
 
  • #71
Until we see a published rebuttal, all arguments are anecdotal or unsupported.

We don't use one standard for claims we like, and another for claims we don't like. See the S&D forum guidelines.

This is my understanding of "anecdote" as per the scientific method, and not just a PF-rules issue; would that be correct?
 
  • #72
Ivan Seeking
Staff Emeritus
Science Advisor
Gold Member
This is my understanding of "anecdote" as per the scientific method, and not just a PF-rules issue; would that be correct?

In science, an unpublished paper counts for nothing. They are only allowed for discussion here as they do often constitute anecdotal evidence for the claim or argument.
 
  • #73
Evo
Mentor
Until we see a published rebuttal, all arguments are anecdotal or unsupported. Unpublished papers count at most as anecdotal evidence, which never trumps a published paper.

We don't use one standard for claims we like, and another for claims we don't like. See the S&D forum guidelines.
The rebuttal is going to be published in the same journal at the same time as the Bem paper, so they are on equal footing.

Dr. Wagenmakers is co-author of a rebuttal to the ESP paper that is scheduled to appear in the same issue of the journal.

http://www.nytimes.com/2011/01/06/science/06esp.html
 
  • #74
Ivan Seeking
Staff Emeritus
Science Advisor
Gold Member
Technically, I am making a special exception to allow an unpublished rebuttal to a published paper. If the tables were turned, it would never be allowed. That would be considered crackpot or pseudoscience.
 
