Precognition paper to be published in mainstream journal

  • Thread starter: pftest
  • Tags: Journal Paper
AI Thread Summary
Recent discussions highlight a groundbreaking paper suggesting that future events may influence current behavior, challenging traditional views on precognition. The study, led by Daryl Bem and set to be published in a prominent psychology journal, has garnered attention for its rigorous methodology, with even skeptics unable to identify significant flaws. Previous experiments, such as those on "presentiment," have shown physiological responses occurring before stimuli, hinting at a possible precognitive effect. While some participants express skepticism about the findings, the research opens the door for further scientific inquiry into phenomena previously deemed untestable. The implications of confirming precognition could revolutionize our understanding of time and perception.
  • #51
Ygggdrasil said:
Perhaps this falls into the category of "journalism" that seems so despised in this discussion, but Jonah Lehrer wrote a nice article for The New Yorker that touches on issues relevant to the debate (similar to the points already brought up in the thread: that subtle biases in study design, analysis, and interpretation can lead to erroneous results). In particular, he talks about some work done by Jonathan Schooler:
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer

In essence, Schooler replicated the results of the Bem paper but, after performing many more tests, showed that the results were nothing but a statistical anomaly. I'm not aware whether Schooler published these results.

This, especially in light of other such examples detailed in Lehrer's piece, is why I'm hesitant to trust findings based primarily on statistical data without a plausible, empirically-tested mechanism explaining the results.

Nah, when you post journalism, it's OK... you're the world-tree after all :wink:. Plus, your article actually offers information rather than obscuring it when the original paper is available. Thank you.
 
  • #52
nismaratwork said:
Oh, in that case I'll have Flex do the same referring to ME as an "expert", and I'll call him a journalist. I can see that you really press the standards here when it comes to credulity.
The article I posted is about Bem's paper, as well as some of the replication efforts. It also has a "debate" section, or rather a criticism section, in which 9 different scientists give their opinions on it. The NYT does not invent its experts, sources, or the many scientists it mentions, if that's what you are suggesting. Google them if you don't believe they exist. I was the one who posted Bem's original paper, btw.

Perhaps you didn't read it because it now requires a login (it didn't when I posted it yesterday), but registration is free.
 
  • #53
pftest said:
The article I posted is about Bem's paper, as well as some of the replication efforts. It also has a "debate" section, or rather a criticism section, in which 9 different scientists give their opinions on it. The NYT does not invent its experts, sources, or the many scientists it mentions, if that's what you are suggesting. Google them if you don't believe they exist. I was the one who posted Bem's original paper, btw.

Perhaps you didn't read it because it now requires a login (it didn't when I posted it yesterday), but registration is free.

Oh lord... listen pftest... the NYtimes isn't a peer reviewed journal, so what you're talking about is the fallacy of an appeal to authority. I am also NOT suggesting anything about the NYTimes... I really know very little about them and don't use it for my news; I prefer more direct sources. I did read THIS, but the OPINIONS of 9 people are just that... and not scientific support. AGAIN, I don't believe you're familiar with standards like this, so you're running into trouble... again.
 
  • #54
Ygggdrasil said:
Perhaps this falls into the category of "journalism" that seems so despised in this discussion, but Jonah Lehrer wrote a nice article for The New Yorker that touches on issues relevant to the debate (similar to the points already brought up in the thread: that subtle biases in study design, analysis, and interpretation can lead to erroneous results). In particular, he talks about some work done by Jonathan Schooler:
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer

In essence, Schooler replicated the results of the Bem paper but, after performing many more tests, showed that the results were nothing but a statistical anomaly. I'm not aware whether Schooler published these results.

This, especially in light of other such examples detailed in Lehrer's piece, is why I'm hesitant to trust findings based primarily on statistical data without a plausible, empirically-tested mechanism explaining the results.

Very interesting, thanks! Although it states roughly the opposite of Bem, I would say that Schooler's findings are almost as mind-boggling as Bem's... Perhaps worth a topic fork?

PS: as a personal anecdote, as a kid I once came across a "one-armed bandit" gambling machine with a group of guys around it. They had thrown a lot of counterfeit coins(!) into the machine, and one of them was about to throw in the last coin when he noticed me. After I confirmed that I had never gambled before, he asked me to throw it in, and I hit the jackpot for them - most of it consisting of their own counterfeit coins. I left the scene with mixed feelings, as they had robbed me of my chance at beginner's luck...
 
Last edited:
  • #55
It should be noted that so far, all objections are only opinions and anecdotes. The rebuttal paper can only be considered anecdotal evidence - it cannot be used as evidence that the original paper was flawed - unless/until it is published in a mainstream journal. It is fine to discuss the objections, but they cannot be declared valid at this time.

Likewise, one published paper proves nothing. We have experimental evidence for the claim that is subject to peer review and verification.
 
Last edited:
  • #56
Personally, I still stand by my original thought, which was that 3% isn't that significant.

OK, it's above average (53% correct in an area with 50/50 odds). But given the way the test was performed it didn't prove anything as far as I'm concerned.

If you really want to do something like this, take 1000 people, sit them down and toss a coin for them (via some coin toss machine) and get them to predict the outcome.

No need for anything excessive given the subject.

After that trial, if 1,000 people each make 1,000 guesses and 53% are correct, then about 30,000 guesses were correct beyond what chance predicts. Now that is significant.

Regardless, the biggest problem I see with tests like this is that I could sit there calling heads every time; the odds say I'll break even, so any surplus would count towards precognition. If this happens with a number of subjects, you could end up with a skewed result.
Although you would expect equal numbers of each, it is quite possible to get more heads than tails during the test, and so the above system would skew things.

Perhaps you could do the test as outlined above and use the continuous heads/tails method as a set of benchmarks.
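The arithmetic above can be sketched quickly. Assuming the 1,000-person scenario means roughly a million guesses in total, a 53% hit rate is wildly significant, while the same 53% in a small study is unremarkable. A minimal sketch using the normal approximation to the binomial (the function name and sample sizes here are mine, not from the thread):

```python
from math import erfc, sqrt

def p_at_least(n, k, p=0.5):
    """Normal approximation (with continuity correction) to the chance
    of seeing k or more correct guesses out of n at hit probability p."""
    mean, sd = n * p, sqrt(n * p * (1 - p))
    z = (k - 0.5 - mean) / sd
    return 0.5 * erfc(z / sqrt(2))  # upper-tail probability P(X >= k)

# 53 hits out of 100 guesses: happens roughly 31% of the time by chance
print(p_at_least(100, 53))

# 53% of a million guesses (1,000 people x 1,000 tosses each): z is
# around 60, so the probability is indistinguishable from zero
print(p_at_least(1_000_000, 530_000))
```

The point matches the post: an above-chance rate only becomes meaningful once the number of trials is large enough that chance can no longer plausibly produce it.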
 
  • #57
Ivan Seeking said:
It should be noted that so far, all objections are only opinions and anecdotes. The rebuttal paper can only be considered anecdotal evidence - it cannot be used as evidence that he original paper was flawed - unless/until it is published in a mainstream journal. It is fine to discuss the objections, but they cannot be declared valid at this time.

Likewise, one published paper proves nothing. We have experimental evidence for the claim that is subject to peer review and verification.

I had overlooked that there is a rebuttal paper - thanks, I'll read it now! But a rebuttal such as Wagenmakers et al's cannot be considered "anecdotal"; that is something very different. And publication (or not) in a "mainstream journal" cannot be taken as evidence of a paper's correctness, just as an email that passed your spam filter isn't necessarily true, nor is every email that has not yet been sent, or that falls into your spam box, spam. What matters in physics are presented facts and their verification. Discussions on this forum may be limited to peer-reviewed material for exactly the same anti-spam purpose, but a forum discussion should not be confused with the scientific method.

Harald

Edit: I now see that the essence of Wagenmakers' paper has been accepted for publication: it's "a revised version of a previous draft that was accepted pending revision for Journal of Personality and Social Psychology."
 
Last edited:
  • #58
Jack21222 said:
Here is a PDF of a response paper:

http://dl.dropbox.com/u/1018886/Bem6.pdf

[..]

Thanks a lot for that preview! I'll read it with interest, as it may be useful in general. :smile:
 
Last edited:
  • #59
These are supposedly 3 failed replications of Bem's test results (I don't know if they are the same ones as mentioned in the NYT article):
https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=13quorf_DWEXBBvlDPngbUNFKm5-BjgXgehJJ7ndnxc_wx2BsXn84iPhLeVfX&hl=en / http://circee.org/Retro-priming-et-re-test.html / 3

There must be more replication efforts out there.

nismaratwork said:
Oh lord... listen pftest... the NYtimes isn't a peer reviewed journal, so what you're talking about is the fallacy of an appeal to authority. I am also NOT suggesting anything about the NYTimes... I really know very little about them and don't use it for my news; I prefer more direct sources. I did read THIS, but the OPINIONS of 9 people are just that... and not scientific support. AGAIN, I don't believe you're familiar with standards like this, so you're running into trouble... again.
Calm down, chap, I just posted an article with an abundance of relevant information. I didn't claim the NYT is a peer-reviewed scientific journal...
 
Last edited by a moderator:
  • #60
pftest said:
These are supposedly 3 failed replications of Bem's test results (I don't know if they are the same ones as mentioned in the NYT article):
https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=13quorf_DWEXBBvlDPngbUNFKm5-BjgXgehJJ7ndnxc_wx2BsXn84iPhLeVfX&hl=en / http://circee.org/Retro-priming-et-re-test.html / 3

There must be more replication efforts out there.


Calm down, chap, I just posted an article with an abundance of relevant information. I didn't claim the NYT is a peer-reviewed scientific journal...

Sorry, I've been jumping between threads and work too much. I don't agree with what you clearly believe, but I was nonetheless rude. I apologize.
 
Last edited by a moderator:
  • #61
pftest said:
These are supposedly 3 failed replications of Bem's test results (I don't know if they are the same ones as mentioned in the NYT article):
https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=13quorf_DWEXBBvlDPngbUNFKm5-BjgXgehJJ7ndnxc_wx2BsXn84iPhLeVfX&hl=en / http://circee.org/Retro-priming-et-re-test.html / 3

There must be more replication efforts out there. [..].

Well, in view of Wagenmakers et al's response paper and their reinterpretation, those are actually successful replications! :-p
 
Last edited by a moderator:
  • #62
Ivan Seeking said:
It should be noted that so far, all objections are only opinions and anecdotes. The rebuttal paper can only be considered anecdotal evidence - it cannot be used as evidence that the original paper was flawed - unless/until it is published in a mainstream journal. It is fine to discuss the objections, but they cannot be declared valid at this time.

Likewise, one published paper proves nothing. We have experimental evidence for the claim that is subject to peer review and verification.

I don't think you know what "anecdote" means. Pointing out methodological flaws isn't an anecdote. You may argue that it isn't scientifically accepted evidence yet, but it's very convincing if you ask me, especially the part where they formed and tested the hypothesis with the same set of data.

That is a horrible abuse of data points.
 
  • #63
Jack21222 said:
I don't think you know what "anecdote" means. Pointing out methodological flaws isn't an anecdote. You may argue that it isn't scientifically accepted evidence yet, but it's very convincing if you ask me, especially the part where they formed and tested the hypothesis with the same set of data.

That is a horrible abuse of data points.

I agree with the spirit of what you're saying... do the rules allow for something published so openly, but not peer reviewed to be considered more than anecdotal? It may be an issue of the rules of the site vs. the standard terminology... I hope.
 
  • #64
nismaratwork said:
I agree with the spirit of what you're saying... do the rules allow for something published so openly, but not peer reviewed to be considered more than anecdotal? It may be an issue of the rules of the site vs. the standard terminology... I hope.

An anecdote is a story. What I linked is not a story. It's a criticism based on methodology.
 
  • #65
nismaratwork said:
For instance, would it be logical to assume the existence (i.e. truth of hypothesis) of something, then go about to prove your assumption? That's called... NOT SCIENCE...

I agree that it is not science.

Yet, it is exactly what disbelievers of ESP/paranormal do. They assume that it does not exist, then set about proving it, finding errors in the procedures, statistical analysis, etc., of the ESP experiments.

So, it seems they are being as unscientific as the ones they criticise.
 
  • #66
Jack21222 said:
An anecdote is a story. What I linked is not a story. It's a criticism based on methodology.

Note that it is just as much peer reviewed as the paper that it criticizes.

The main issue, I think, is that the original paper seems to have been a fishing expedition without properly accounting for that fact. Anyway, I'm now becoming familiar with Bayesian statistics thanks to this. :smile:

Harald
 
  • #67
coelho said:
I agree that it is not science.

Yet, it is exactly what disbelievers of ESP/paranormal do. They assume that it does not exist, then set about proving it, finding errors in the procedures, statistical analysis, etc., of the ESP experiments.

So, it seems they are being as unscientific as the ones they criticise.

Finding errors in other peoples work is the ENTIRE BASIS OF SCIENCE. That's how we have so much confidence in what survives the scientific process, because it HAS been thoroughly attacked from every angle, and it came out the other end alive.

To use your example, if ESP was real, even after the disbelievers go about to disprove it, attempting to find errors in the procedure, statistical analysis, etc, the evidence would still hold up. If it doesn't hold up, that means it isn't accepted by science yet, come back when you have evidence that can survive the scientific process.

To say that those things you mentioned are "unscientific" is just about the most absurd thing you can possibly say. It's like saying giving live birth and having warm blood is "un-mammalian."
 
  • #68
coelho said:
Yet, it is exactly what disbelievers of ESP/paranormal do. They assume that it does not exist, then set about proving it, finding errors in the procedures, statistical analysis, etc., of the ESP experiments.

So, it seems they are being as unscientific as the ones they criticise.

Firstly, if you claim ESP exists then it is up to you to prove it.

You give evidence of its existence, people then 'tear it apart'. That's science.

Every flaw, every error, every single thing you can find wrong with the evidence / procedure, whatever is there, is a mark against it. But, if after all of that the evidence still holds, then ESP would still be accepted.

The default assumption is that science has nothing to say on a subject without evidence. Until verifiable evidence comes to light, there is no reason to entertain the notion of it existing. Simple.

The fact is, the evidence for ESP / the paranormal doesn't hold up to even the simplest examination. And let's not get started on the test methods.

There is nothing unscientific about finding flaws in data and test methods (heck, you're encouraged to). There is nothing unscientific in requiring valid evidence for claims.
 
  • #69
Coelho: Jack and Jared have replied to your fundamental lack of understanding of science, better than I could.
 
  • #70
Jack21222 said:
I don't think you know what "anecdote" means. Pointing out methodological flaws isn't an anecdote. You may argue that it isn't scientifically accepted evidence yet, but it's very convincing if you ask me, especially the part where they formed and tested the hypothesis with the same set of data.

Until we see a published rebuttal, all arguments are anecdotal or unsupported. Unpublished papers count at most as anecdotal evidence, which never trumps a published paper.

We don't use one standard for claims we like, and another for claims we don't like. See the S&D forum guidelines.
 
Last edited:
  • #71
Ivan Seeking said:
Until we see a published rebuttal, all arguments are anecdotal or unsupported.

We don't use one standard for claims we like, and another for claims we don't like. See the S&D forum guidelines.

This is my understanding of "anecdote" as per the scientific method, and not just a PF-rules issue; would that be correct?
 
  • #72
nismaratwork said:
This is my understanding of "anecdote" as per the scientific method, and not just a PF-rules issue; would that be correct?

In science, an unpublished paper counts for nothing. They are only allowed for discussion here as they do often constitute anecdotal evidence for the claim or argument.
 
  • #73
Ivan Seeking said:
Until we see a published rebuttal, all arguments are anecdotal or unsupported. Unpublished papers count at most as anecdotal evidence, which never trumps a published paper.

We don't use one standard for claims we like, and another for claims we don't like. See the S&D forum guidelines.
The rebuttal is going to be published in the same journal at the same time as the Bem paper, so they are on equal footing.

Dr. Wagenmakers is co-author of a rebuttal to the ESP paper that is scheduled to appear in the same issue of the journal.

http://www.nytimes.com/2011/01/06/science/06esp.html
 
  • #74
Technically, I am making a special exception to allow an unpublished rebuttal to a published paper. If the tables were turned, it would never be allowed. That would be considered crackpot or pseudoscience.
 
  • #75
Evo said:
The rebuttal is going to be published in the same journal at the same time as the Bem paper, so they are on equal footing.



http://www.nytimes.com/2011/01/06/science/06esp.html

Sorry, okay. I knew there were objections to be published, but not a formal paper.
 
  • #76
Ivan Seeking said:
In science, an unpublished paper counts for nothing. They are only allowed for discussion here as they do often constitute anecdotal evidence for the claim or argument.

Actually, publication is simply a means for dissemination, and peer review is merely a noise filter for quality control (which both papers discussed here already passed). The same is also used for quality control of Wikipedia and discussion topics on this site.

Dissemination filters must however not be confused with science or the scientific method! What matters in science are facts and theories, and the verification or disproof of those theories.

Entries for further reading can be found in:
http://en.wikipedia.org/wiki/Scientific_method

Harald
 
  • #77
Evo said:
The rebuttal is going to be published in the same journal at the same time as the Bem paper, so they are on equal footing.

http://www.nytimes.com/2011/01/06/science/06esp.html

Thanks, I already wrote twice that they are on equal footing because they are both peer reviewed... but I didn't know that they were to be published in the same journal. :smile:

Perhaps it's done on purpose, in order to push for a change in statistical methods. :biggrin:
 
  • #78
Ivan Seeking said:
In science, an unpublished paper counts for nothing. They are only allowed for discussion here as they do often constitute anecdotal evidence for the claim or argument.

I agree... just not in this situation for reasons you already have accepted, and I don't need to restate.


harrylin: JUST a filter? You make that sound so small, but it's the primary mechanism that ensures what you linked to is being FOLLOWED.
 
  • #79
Ivan Seeking said:
In science, an unpublished paper counts for nothing. They are only allowed for discussion here as they do often constitute anecdotal evidence for the claim or argument.

So on this forum, nobody is allowed to argue against a paper unless they themselves have that argument in a published paper? I don't follow. The Bem paper has some very basic flaws that I could have easily pointed out without referencing the paper that I did. However, that paper put it much more eloquently than I could.

Valid arguments don't become invalid just because they're not published any more than invalid arguments become valid just because they're published.

In any case, using the same set of data to both come up with AND test a hypothesis is a horrible methodological flaw that I hope anybody here could see, with or without a published or unpublished paper as a reference.
 
  • #80
Ivan Seeking said:
Technically, I am making a special exception to allow an unpublished rebuttal to a published paper. If the tables were turned, it would never be allowed. That would be considered crackpot or pseudoscience.

It seems a bit severe to discount non-peer-reviewed rebuttals when the Bem paper has not actually appeared in print yet. If the precognition paper were 5 years old, I would support trying to limit the discussion to rebuttals appearing in the published literature, but given that the findings are very new, it seems prudent to consider unpublished responses from experts in the field. As very few researchers have had time to come up with experiments to address Bem's claims, let alone get them peer reviewed, limiting discussion to peer-reviewed findings in essence invalidates any criticism of the Bem paper.

Should these unpublished rebuttals be taken with a grain of salt? Yes, just as any research findings, peer-reviewed or not, should be met with skepticism.

Edit: Also, here is a peer-reviewed paper that discusses many of the flaws in study design and bias discussed in this thread:
Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124
Abstract

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
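The abstract's framework can be made concrete. Ioannidis defines the post-study probability that a claimed finding is true (the positive predictive value, PPV) in terms of the pre-study odds R, the power 1-β, and the significance threshold α; in the bias-free case PPV = (1-β)R / ((1-β)R + α). A minimal sketch (the parameter values below are illustrative, not taken from the paper or the thread):

```python
def ppv(R, power=0.8, alpha=0.05):
    """Ioannidis (2005), bias-free case: probability that a statistically
    significant finding is actually true, given pre-study odds R."""
    return power * R / (power * R + alpha)

# A well-motivated hypothesis (1:1 pre-study odds): most positives are real
print(ppv(1.0))    # ~0.94

# A long-shot hypothesis (1:1000 odds, as one might assign to ESP):
# even a "significant" result is almost certainly false
print(ppv(0.001))  # ~0.016
```

This is why the pre-study plausibility arguments in this thread matter: the same p-value carries very different evidential weight depending on the prior odds of the hypothesis being tested.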
 
Last edited:
  • #81
Ygggdrasil, Jack... He already accepted the points you're making!

Ivan Seeking said:
Sorry, okay. I knew there were objections to be published, but not a formal paper.

Otherwise everyone seems to be arguing for the same rigor to be applied, so what's the problem?
 
  • #82
nismaratwork said:
Ygggdrasil, Jack... He already accepted the points you're making!



Otherwise everyone seems to be arguing for the same rigor to be applied, so what's the problem?

It would be absurd to apply the same rigor to comments on an internet forum as in a peer-reviewed journal. Ivan seemed to be implying that all comments made here had to be peer-reviewed before he'd consider them valid.
 
  • #83
Jack21222 said:
It would be absurd to apply the same rigor to comments on an internet forum as in a peer-reviewed journal. Ivan seemed to be implying that all comments made here had to be peer-reviewed before he'd consider them valid.

Jack, we both have been here long enough to KNOW that's not what he was saying. Was he wrong? Yeah. Was he being absurdist? No.
 
  • #84
nismaratwork said:
Jack, we both have been here long enough to KNOW that's not what he was saying. Was he wrong? Yeah. Was he being absurdist? No.

He wouldn't comment on the content of the post because it wasn't peer-reviewed. You tell me what that means.
 
  • #85
Honestly people, this is going in circles.

We've dealt with the 'finer points' of the documents, how about discussion gets back on topic.
 
  • #86
Jack21222 said:
He wouldn't comment on the content of the post because it wasn't peer-reviewed. You tell me what that means.

I admit, that goes beyond my ability to explain; I can only say that I don't believe that's what Ivan intended, but obviously he speaks for himself.
 
  • #87
jarednjames said:
Honestly people, this is going in circles.

We've dealt with the 'finer points' of the documents, how about discussion gets back on topic.

That would be nice!
 
  • #88
Ygggdrasil said:
[..]

Edit: Also, here is a peer-reviewed paper that discusses many of the flaws in study design and bias discussed in this thread:
Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124

Wow that's an amazing paper! But yes, it looks like Bem's paper and the criticism on it is being published to provide a case example of just that problem...
 
  • #89
Ivan was right that a published, peer-reviewed paper has more credibility than an unpublished, non-peer-reviewed one. He just didn't know that the criticism paper (which was linked in the NYT article that I posted) was also to be published. We can't just go "hey, someone criticised that scientific peer-reviewed paper that I don't like, that means it's false", especially not in a skepticism and debunking forum. It will take time for science to show whether Bem has actually found ESP or not.

harrylin said:
Wow that's an amazing paper! But yes, it looks like Bem's paper and the criticism on it is being published to provide a case example of just that problem...
Where'd you get that from? That paper is 5 years old and it applies to the majority of published research, not just a single ESP paper. It specifically refers to the area of biomedical research.
 
Last edited:
  • #90
pftest said:
Ivan was right that a published, peer-reviewed paper has more credibility than an unpublished, non-peer-reviewed one. He just didn't know that the criticism paper (which was linked in the NYT article that I posted) was also to be published. We can't just go "hey, someone criticised that scientific peer-reviewed paper that I don't like, that means it's false", especially not in a skepticism and debunking forum. It will take time for science to show whether Bem has actually found ESP or not.

Where'd you get that from? That paper is 5 years old and it applies to the majority of published research, not just a single ESP paper. It specifically refers to the area of biomedical research.

The paper applies to statistics.

Hey... omg...

Einstein's paper on SR is what, 106 years old just from the date of PUBLISHING?! Quick, everyone... is 'c' increasing?!? No? Hmmm, well GR is old, anyone suddenly find a falsification of that?

The irony of course, is that you could have made the same argument logically (if poorly) from the OPPOSITE perspective, and been right: the longer a theory or paper is being peer-reviewed, attacked, worked on... the more credible it is. Science seeks to tear something down, in the hopes that it CAN'T and will be left with something valid... it's the destructive element of the process (not in a bad way), and a means of quality control in results AND methodology!
 
  • #91
pftest said:
We can't just go "hey, someone criticised that scientific peer-reviewed paper that I don't like, that means it's false", especially not in a skepticism and debunking forum. It will take time for science to show whether Bem has actually found ESP or not.

Nobody here ever said "hey, someone criticized that paper, that means it's false." I said "that paper tortures the data in an unacceptable way, using the same data to both form and test a hypothesis."

Using the same data to both form and test a hypothesis is never acceptable. Ever. I don't care if it's in a peer-reviewed journal or not. Doing that makes the paper false. Never once did I appeal to authority like you're claiming (by phrasing my argument as "hey, someone criticized it").

Bem used the Texas sharpshooter fallacy (http://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy). One doesn't need peer-reviewed research to point that out.
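The point about forming and testing a hypothesis on the same data can be shown with a toy simulation (everything below is my own illustration, not Bem's actual protocol): carve pure chance data into subgroups, "discover" the best-looking one, and then watch the effect vanish on fresh data.

```python
import random

def group_hit_rates(data, n_groups=20):
    """Split a list of 0/1 outcomes into equal subgroups and return
    each subgroup's hit rate."""
    size = len(data) // n_groups
    return [sum(data[i * size:(i + 1) * size]) / size
            for i in range(n_groups)]

random.seed(7)
coin = lambda: random.random() < 0.5           # pure chance, no ESP anywhere

flips = [coin() for _ in range(2000)]
rates = group_hit_rates(flips)
best = max(range(len(rates)), key=rates.__getitem__)  # hypothesis formed here

# "Testing" on the SAME data: the chosen subgroup beats 50% by construction,
# so pure noise looks like evidence.
print(rates[best])     # reliably above 0.5

# Testing the chosen subgroup on FRESH data: the "effect" evaporates.
fresh = group_hit_rates([coin() for _ in range(2000)])
print(fresh[best])     # back near 0.5
```

This is the sharpshooter pattern in miniature: with enough subgroups, one of them will always look impressive, which is why the hypothesis it suggests has to be tested on data it was not derived from.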
 
Last edited by a moderator:
  • #92
Jack21222 said:
Nobody here ever said "hey, someone criticized that paper, that means it's false." I said "that paper tortures the data in an unacceptable way, using the same data to both form and test a hypothesis."

Using the same data to both form and test a hypothesis is never acceptable. Ever. I don't care if it's in a peer-reviewed journal or not. Doing that makes the paper false. Never once did I appeal to authority like you're claiming (by phrasing my argument as "hey, someone criticized it").

Bem used the Texas sharpshooter fallacy (http://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy). One doesn't need peer-reviewed research to point that out.

Perfectly said.
 
Last edited by a moderator:
  • #93
nismaratwork said:
Perfectly said.

Perfectly said.
 
  • #94
So at what point does discussion get back to the OP? Or is everyone going to sit bickering about who said what?
 
  • #95
jarednjames said:
So at what point does discussion get back to the OP? Or is everyone going to sit bickering about who said what?

That was an amazingly ironic post.

I wish people would stop quoting other people's posts and writing about them! We need to get back to the OP! Everything else is a distraction. I hate it when people ramble on and on about nothing at all like a leaky sink faucet! Just dripping water all night against the unwashed pan from the night before. The metronomic pinging of water against metal a constant reminder that, no matter how hard you try, you just can't prepare dinner to her satisfaction.

Ping, "this is undercooked."
Ping, "did you just put seasoned salt on this?"
Ping, "Jason knew how to cook haddock."

The incessant nagging still with you long after she's fallen asleep; a dead weight in the bed pulling you closer only though the deformation of the long saggy mattress. And that's the moment you realized the love is gone.
 
  • #96
FlexGunship said:
That was an amazingly ironic post.

Yes it is (so is this one), and by extension so is every "can we get back to topic" post.

This thread is no longer discussing the OP (or related materials); it is arguing over silly little things and getting nowhere.

So back to the OP please.
 
  • #97
jarednjames said:
Yes it is (so is this one), and by extension so is every "can we get back to topic" post.

This thread is no longer discussing the OP (or related materials); it is arguing over silly little things and getting nowhere.

So back to the OP please.

Yeah... it's gone off topic because the OP got the answer, the argument, and everyone's opinion. What's left except minutiae?
 
  • #98
nismaratwork said:
Yeah... it's gone off topic because the OP got the answer, the argument, and everyone's opinion. What's left except minutiae?

Okay, fine. I'll bring it back to the OP. Not the specific paper, but the topic.

I think that, because of the nature of a discovery like precognition, a single peer-reviewed paper shouldn't be considered enough. This is the type of effect that should be reproducible on demand, in many different locations, at very small cost. Therefore, I don't think it's unreasonable to wait for additional confirmatory papers.

Does anyone disagree?
 
  • #99
FlexGunship said:
Okay, fine. I'll bring it back to the OP. Not the specific paper, but the topic.

I think that, because of the nature of a discovery like precognition, a single peer-reviewed paper shouldn't be considered enough. This is the type of effect that should be reproducible on demand, in many different locations, at very small cost. Therefore, I don't think it's unreasonable to wait for additional confirmatory papers.

Does anyone disagree?

I concur; much as with a SETI discovery, confirming such a thing would be a process. What the 'true believers' miss is this: who wouldn't be thrilled to find out that the universe was so odd? I'd go for a super-power!

I just don't see the evidence to start leaping from buildings to see if I'll fly, to throw out a colorful metaphor.
 
  • #100
I completely agree, Flex. One paper doesn't constitute perfect evidence, but it is a good starting point.
 
