
[Evidence for] Telekinesis ?

  1. Jun 2, 2004 #1



    I've read a few articles about telekinesis and all those mind sciences, and I'm just wondering whether you guys believe in it or not. The articles I've read are extremely convincing and said that it has to do with quantum theory. Me, being only 14, I don't know much about quantum theory. I mainly want to know your opinion on telekinesis and stuff like that.
  3. Jun 3, 2004 #2
    My opinion is that most of this mystical physics is pseudoscientific and should always be looked at with the most critical eye.

    However, even if quite useless in the short term, these studies of supernatural "phenomena" may be of importance in the long run. Just think of alchemy: those guys spent hundreds of years in their quest to make gold out of rubbish. Although the quest was doomed, the hunt led to numerous discoveries which ultimately made up the foundations of modern chemistry.

    So even if some maniac wastes his/her life trying to prove the connection between quantum mechanics and flying rocks, he/she still has the chance to stumble across something of importance. If not, talented people can focus on the important stuff without having to bother about the more doubtful areas of science.

    I haven't seen the articles you're referring to, but do keep in mind that most things can be made to look convincing in the media. I'm also not a master of quantum physics, but I strongly doubt that someone can move anything directly by thought, and even if so, I doubt that it will have anything to do with quantum mechanics.

    Remember that science can be used for many purposes. I've tried to use relativistic arguments when I was late for class... that didn't work.
    Last edited: Jun 3, 2004
  4. Jun 3, 2004 #3
    A couple of people in the circus do telekinesis; let me tell you, it's all an act.
  5. Jun 3, 2004 #4
    I will go with bozo... I think psychokinesis (why do people insist on calling it telekinesis?) is just a bunch of lies, just like a lot of other 'mind phenomena' are.

    Like fortune tellers and horrorscopers - want to have some fun? Find yourself a fortune teller in your local downtown mall and, when you go in, show absolutely no emotion. That's what they base everything on: your reactions. If you show no reactions (my psychology teacher told a story of doing exactly this), it really messes them up. They just don't know what to do!
  6. Jun 3, 2004 #5
    I think that if anybody could do telekinesis we'd probably know about it. Let's face it, telekinesis would be an incredibly powerful weapon; even without much control over it, it would still be quite easy to use it to cause heart attacks, fatal brain hemorrhages, etc. Now there's no reason to believe that telekinetic powers should be limited to people who intend to use them for nice purposes (the laws of physics have no concept of morality; for instance, Newton's law of gravitation doesn't suddenly stop being true when you try to drop a ten-ton block of lead on somebody's head). So sooner or later, we should expect that some psychopath with telekinetic powers would come along and use them to perpetrate the psychic equivalent of a shooting massacre. I think we'd notice an incident where everybody who comes near such a person suddenly dies of a fatal stroke.
    This is not to say that telekinesis is necessarily impossible - merely that, given our current knowledge of consciousness, it is highly unlikely that anybody can do it.
  7. Jun 3, 2004 #6
    Both terms are correct.

    As for the original post, it's perfectly possible that quantum theory could explain such phenomena and that such phenomena are real, and when I say perfectly possible, I mean I cannot definitively prove otherwise. However, there is no real evidence to suggest that this explanation is any better than any other explanation; they are all equally speculative, and equally unscientific, despite any reference to modern theories.
  8. Jun 10, 2004 #7

    Well, when you look at it, is there any REAL evidence to justify the heavily belaboured quantum theory? I am afraid that it, much like Einstein's relativity, has been overused and abused by dreamers and screamers, pseudoscientists and the real McCoy alike, to accommodate whatever latest theory has the scientific community's knickers in a knot.
  9. Jun 10, 2004 #8

    Ivan Seeking


    With the possible exception of some very, very slight statistical aberrations in laboratory results, there is no reliable evidence that anyone can do this. A few scientists claim that enough of these slight statistical anomalies have now been seen to qualify as scientific evidence, but I know of no major journals that have published this conclusion.

    This all relates to experiments in which the subject attempts to affect the roll of a die or some other random event. No dramatic examples of this sort of thing are known. For example, no one has ever demonstrated the ability to levitate a chair.

    Our mentor Hypnagogue knows about this, I think... I will try to dig up anything significant.
    Last edited: Jun 10, 2004
  10. Jun 10, 2004 #9
    Something else to mention is that seeing objects move by themselves isn't uncommon in hallucinations. I have read stories of people seeing things move in hallucinations caused by sleep deprivation, simple partial seizures, general psychotic conditions, and drugs. Stories from people in such states of mind add to the lore about telekinesis. I can remember that back in the 60s and 70s people who took drugs were very loyal to the belief that drugs allowed for a glimpse into an alternate "reality", and they felt these experiences were authentic. There were no end of stories from people who claimed the drugs allowed them to "do things with the mind" they couldn't otherwise do.
  11. Jun 12, 2004 #10
    RE: "The articles Ive read are extremely convincing and said that they have to do with quantum theory. Me, being only 14, dont know much about quantum theory."

    How could it be so convincing if it relied on science you don't understand?

    RE: "A few scientists are claiming that enough of these slight statistical anomalies are now seen to so as to qualify as scientific evidence, but I know of no major journals that have published this conclusion."

    This is the voodoo they call meta-analysis. When scientists trot out meta-analysis, head for the hills.
  12. Jun 12, 2004 #11

    Ivan Seeking


    Can you elaborate? How does "meta-analysis" differ from qualified statistical analysis?
  13. Jun 13, 2004 #12
    Hypnagogue described meta-analysis in detail in my thread "Eyes on the back of your head".
  14. Jun 13, 2004 #13

    Ivan Seeking


  15. Jun 13, 2004 #14

    Ivan Seeking


    Let me ask the question this way. What precisely is your objection to the analyses already done?
  16. Jun 13, 2004 #15
    Assuming we are talking about the same thing, a meta-analysis combines the results of numerous studies to try and achieve statistical significance. There are numerous problems with it:

    1. To be accurate, experiments that produced negative results would have to be published at the same frequency as those with positive results. This is almost never true; in fact, it isn't even close. Those who claim that a huge number of such "file drawer" experiments would have had to be performed to discount the results of a meta-analysis are simply wrong. I can generate a completely null result by having one file-drawer experiment for each published experiment. If one experiment showed heads appearing 52% of the time over 200 throws, then a file-drawer experiment of 200 throws showing heads appearing 48% of the time would produce a combined null result (see the sketch at the end of this post). If anything, there are probably far more file-drawer experiments than published experiments.

    2. The individual studies do not have the same designs. Sure, the experimenters reject those studies that have sufficiently dissimilar designs (as if you can really define "similar"). But all this does is give them one subjective means of throwing out experiments that they know will not help their cause.

    3. The studies used in a meta-analysis are hand-picked, thus losing objectivity. Because meta-analysis studies are done on experiments involving very faint (probably nonexistent) phenomena, any subjectivity mixed into the process overshadows the mathematical results.

    In other words, when performing experiments to discover phenomena that are extremely hard to detect, precise control conditions are vital. But meta-analysis incorporates even more subjectivity than the individual experiments.

    4. You lose accountability when the individual studies were performed by researchers who did not cooperate with each other. How do you know that 10% of the researchers didn't cheat?

    5. You cannot replicate the study. Suppose a meta-analysis produces evidence for an effect. How would you replicate the study?

    Here is my opinion: If you don't have statistical significance, you ain't got squat. Meta-analyses are performed by researchers who simply will not take "no" for an answer because their own ideology depends on a "yes." (And "yes" always seems to be what the meta-analysis produces.)

    Now, are there legitimate reasons to use a meta-analysis? Sure. It can sometimes point to phenomena that MIGHT exist and, therefore, could be a source for future experimentation. But taken alone it isn't evidence of anything. (In my opinion, of course. Others disagree. I will leave it up to the members to decide if they want to believe in its efficacy. If so, I suggest they leave their wallets at home.)
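    Sketch referenced in point 1 above (my own illustration; the 52%/48% figures are the hypothetical ones given there, not real data): pooling one published "positive" coin experiment with one unpublished file-drawer experiment of the same size gives an exactly null combined result.

```python
# Minimal sketch of the file-drawer point: one unpublished null experiment cancels
# one published positive experiment of the same size when the results are pooled.
from math import sqrt

def pooled(results):
    """results: list of (heads, throws) pairs; returns pooled heads rate and z vs. chance (0.5)."""
    heads = sum(h for h, n in results)
    throws = sum(n for _, n in results)
    rate = heads / throws
    z = (rate - 0.5) / sqrt(0.25 / throws)   # normal approximation to the binomial
    return rate, z

published = (104, 200)    # 52% heads in 200 throws (the published experiment)
file_drawer = (96, 200)   # 48% heads in 200 throws (the unpublished experiment)

print(pooled([published]))                # (0.52, ~0.57) -- a mild positive deviation
print(pooled([published, file_drawer]))   # (0.50, 0.0)   -- combined result is exactly chance
```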
  17. Jun 13, 2004 #16
    Thought-provoking stuff, JDY. My only problem with your analysis is that you seem to be suggesting that meta-analyses tend to be biased in favour of the phenomenon under scrutiny. This may be true - and my experience is limited - but I know of at least one meta-analysis that was fairly clearly weighted against the phenomenon being investigated...

    ...in fact I have just grabbed an example: Newell et al. (2002), in their investigation of psychological treatments for cancer (published in a mainstream journal), coded trial results as nonsignificant unless more than 50% of measures of the same variable were found to be significant. For example, if a psychometric measure of anxiety found patients to be significantly less anxious, but cortisol levels (stress hormones) had not significantly decreased, then the overall measure was counted as nonsignificant.
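    As a minimal sketch of that coding rule (my own, with made-up measure names; Newell et al.'s actual procedure may differ in detail), a trial's outcome for a variable counts as significant only when more than half of the individual measures of that variable reached significance:

```python
# Sketch of the coding rule described above (hypothetical measure names).
def code_outcome(measures):
    """measures: dict mapping measure name -> True if that measure was individually significant."""
    hits = sum(measures.values())
    return "significant" if hits > len(measures) / 2 else "nonsignificant"

# Hypothetical anxiety outcome: the psychometric scale reached significance, cortisol did not.
anxiety = {"psychometric_scale": True, "cortisol_level": False}
print(code_outcome(anxiety))   # -> "nonsignificant" (1 of 2 measures is not more than 50%)
```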

    In general the standard of proof is raised for non-conventional research, so I am surprised to hear you say that this is not generally the case in meta-analyses. Tell me it ain't so, JDY?
  18. Jun 13, 2004 #17
    When performing experiments, scientists are often out to discover a phenomenon. When the phenomenon is not discovered, the value of the research drops tremendously.

    It would be nice to live in a world where negative results count for as much as positive results. However, if you perform a study to find a link between (say) electromagnetic fields and cancer, your study has much greater chance of being published if you found a positive connection between the two.

    It isn't just the researchers at fault. The publications don't want to publish oodles of "Didn't Find It" articles.

    So scientific research is inherently biased in favor of positive results (which is why the list of unsafe foods increases constantly). Simply put, positive results are more newsworthy and, therefore, more "worthy" for publication.

    We can counteract this bias to a large extent by demanding replication before making assumptions. (But the media make the assumptions regardless.) But meta-analysis has no provision for replication.

    Meta-analysis simply takes the biased results from numerous experiments and gleans an outcome. The Emperor's Nose fallacy (see Feynman) now takes over. You don't get more meaningful results simply by increasing the amount of faulty data.

    My biggest complaint with meta-analysis is simple: it introduces bias, contrary to one of the main goals of statistics. What good is any statistical method that actually INTRODUCES bias?
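    To illustrate the bias point with a toy simulation (my own, not data from any real study): if many labs test a truly chance-level effect but only the studies that happen to come out significant and positive get published, pooling just the published studies manufactures an apparent effect out of nothing.

```python
# Toy simulation of publication bias feeding a pooled analysis: the true hit rate is
# exactly 0.5, yet pooling only the "significant" published studies shows an apparent effect.
import random
from math import sqrt

random.seed(0)
n_trials, n_studies = 200, 500
published = []
for _ in range(n_studies):
    heads = sum(random.random() < 0.5 for _ in range(n_trials))   # truly fair coin
    z = (heads / n_trials - 0.5) / sqrt(0.25 / n_trials)
    if z > 1.645:                 # only one-tailed "significant" positive results get published
        published.append(heads)

pooled_rate = sum(published) / (len(published) * n_trials)
print(f"{len(published)} of {n_studies} studies published; pooled hit rate = {pooled_rate:.3f}")
# The pooled rate comes out well above 0.5 even though nothing real is being detected.
```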

    RE: "In general the standard of proof is raised for non-conventional research,"

    To my knowledge, the measure of statistical significance is not changed for non-conventional research. They simply cannot meet the goal.

    One more thought: In order for a result to be convincing, the result has to be deduced from clean statistical methods free from serious objections. Since meta-analysis has so many problems associated with it, what good is it?
  19. Jun 13, 2004 #18
    People are now starting to catch on:


    "Heart Disease / Cardiology: Should calcium blockers be avoided in hypertension?

    By DrRich

    A recent meta-analysis strongly suggests that the use of calcium channel blockers in the treatment of high blood pressure significantly increases the risk of both heart attacks and heart failure. The report has engendered a firestorm of protest from the pharmaceutical industry and from several hypertension experts.

    The report was presented in August, 2000 at the Congress of the European Society of Cardiology by Dr. Curt Furberg, Professor of Public Health Services at the Wake Forest University School of Medicine, and well-known expert on the effectiveness of cardiovascular drugs. Furberg’s study found that patients taking calcium blockers had a 27% increase in heart attacks, and a 26% increase in heart failure, as compared to patients taking other kinds of drugs for hypertension. Furberg concluded that, when treating hypertension, doctors should avoid calcium blockers whenever possible, using diuretics, beta blockers, and ACE inhibitors instead.

    Responses to this report from hypertension experts and from the pharmaceutical industry was instantaneous. In press releases and in interviews with news services, various leading lights in the hypertension community described Furberg’s report as being “unbalanced,” “outrageous,” “unscientific,” “inflammatory,” and “inappropriate.” The terminology they used in private conversation, some say, was less complimentary.

    The International Society of Hypertension released their own meta-analysis (the WHO-ISH study) on August 24, and found no problem with long-acting calcium blockers. [How could a sound statistical method produce two completely different answers? Answer: It can't. Meta-analysis is not sound.]

    Pfizer, which sells amlodipine (the best selling calcium blocker), pointed out in a press release that only 190 of the 27,000 patients in Furberg’s study were taking their drug.

    All critics agreed that the big problem with Furberg’s data was the meta-analysis he used.

    What’s a meta analysis?

    A meta-analysis is a relatively new statistical technique which combines the data from several clinical trials that all address a particular clinical question, in an attempt to estimate an overall answer to that question.

    The reason meta-analyses are necessary is that similar clinical trials addressing a particular question frequently give different answers. The meta-analysis is a means of trying to come up with an overall best answer, given the available data from all the trials. When properly done, meta-analyses can give important insights to clinical questions, insights that can be gained in no other way.

    There are many inherent problems with meta-analyses, however. By selecting which trials to include in the meta-analysis and which to exclude, by weighing which outcomes are the most appropriate to measure, and by making many other decisions in choosing the methodology for performing the meta-analysis, often (it is said), the one who performs that analysis gets to choose the outcome. The difficulty in performing a legitimate meta-analysis has led some to remark that meta-analysis is to statistical analysis what meta-physics is to physics. [BINGO!!!]

    As a result, when the results of the meta-analysis do not agree with a particular expert’s preconceived notion, it is always extremely easy for that expert to zero in on several aspects of the meta-analysis he/she disagrees with, and that, he/she can pronounce, completely invalidates the entire study. Better yet, critics can do their own meta-analysis to counter the one they don’t like.

    Thus, if meta-analyses are to be used at all, they ought ideally to be (a) performed by individuals who do not have a particular prejudice as to the outcome, and (b) performed by experts who understand the field. Unfortunately, these two criteria are often mutually exclusive. [Especially in paranormal research.]

    The political-economic dynamics of the calcium blocker meta-analyses

    The world calcium blocker market is estimated to be about $6 billion. Drug companies potentially have a lot to lose if these drugs fall out of favor.

    The WHO-ISH meta-analysis, presented by the International Society of Hypertension and suggesting no problem with long-acting calcium blockers, was paid for by the pharmaceutical industry. Notably, the WHO-ISH study did not include heart failure as an outcome, despite the fact that calcium blockers are known to cause this problem. Furthermore, one would be hard pressed to identify a “hypertension expert” who wasn’t significantly compromised by professional and/or financial relationships with pharmaceutical companies. Indeed, it is close relationships with drug companies that often determine who is recognized as an expert, since drug companies sponsor much of the research and many of the speaking engagements that give experts their visibility. One thus ought to attach at least a few grains of salt to the WHO-ISH meta-analysis, and to the opinions expressed by many of the experts’ complaining about the Furberg study.

    In contrast, Furberg’s study had no external funding. Furberg himself is a highly regarded investigator, thought to be relatively independent of drug company money. However, Furberg has been at significant odds with prominent players in the hypertension community for at least 5 years for his attacks on calcium blockers, and, one might argue (and some have), has a reputation to protect. His latest study tends to vindicate his efforts for the past several years – another outcome, in other words, would not have vindicated those efforts. Hence the characterization of his study as “unbalanced,” “inflammatory,” etc."
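    For concreteness, here is a minimal sketch (my own illustration with made-up numbers, not taken from the article above) of the pooling arithmetic the article describes: a simple fixed-effect meta-analysis weights each trial's effect estimate by the inverse of its variance and combines them into one overall estimate.

```python
# Sketch of fixed-effect (inverse-variance weighted) pooling across trials.
def fixed_effect_pool(trials):
    """trials: list of (effect_estimate, standard_error) pairs from individual studies."""
    weights = [1.0 / se ** 2 for _, se in trials]
    pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical trials reporting a log relative risk and its standard error:
trials = [(0.25, 0.10), (0.10, 0.08), (0.30, 0.20)]
estimate, se = fixed_effect_pool(trials)
print(f"pooled log relative risk = {estimate:.3f} +/- {1.96 * se:.3f}")
```

    The choices the article complains about - which trials enter the list and which outcomes get counted - all happen before this arithmetic is ever run.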
    Last edited: Jun 13, 2004
  20. Jun 14, 2004 #19
    John, I salute you, and all who sail in you. A good point well made. I took your original point as 'meta-analyses are generally biased in favour of the topic analysed' (not a quote), whereas I was saying the bias was often in the opposite direction. The article you cite says that in fact you can make the figures say whatever you like, depending on your POV.

    Nobody has said truer words than Disraeli on statistics. :smile:

    To quote Coolican (1994) (an undergraduate psychology stats textbook): "If we are about to challenge a well-established theory or research finding by publishing results which contradict it, the convention is to achieve 1% significance before publication. A further reason for requiring 1% significance would be when the researcher only has a one-off chance to demonstrate an effect" (p.250).

    1% as opposed to the usual 5%: the tolerated risk of a false positive must be five times smaller when the result is non-replicable (e.g. most field and natural experiments) or lacks a sound theoretical basis (e.g. psi). The sketch at the end of this post shows what that stricter threshold means in practice.

    If you were to be blunt about it, it could easily be said that this convention is designed to ignore subtle phenomena in favour of endorsing only the blindingly obvious! Is that the great adventure that we call science?
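    As a minimal sketch of what the 5% versus 1% conventions demand (my own illustration; the 1,000-guess experiment is hypothetical), here is the minimum hit rate a chance-level guessing task would need in order to reach significance at each level:

```python
# Minimum hit rate needed in a 50/50 guessing experiment to reach significance at the
# conventional 5% level versus the stricter 1% level (one-tailed, normal approximation).
from math import sqrt
from scipy.stats import norm

n = 1000                                   # hypothetical number of guesses
for alpha in (0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)           # critical z: ~1.64 for 5%, ~2.33 for 1%
    min_rate = 0.5 + z_crit * sqrt(0.25 / n)
    print(f"alpha = {alpha:.2f}: critical z = {z_crit:.2f}, minimum hit rate = {min_rate:.3f}")
```

    The required deviation from chance rises only modestly (from roughly 52.6% to 53.7% of hits here), so the stricter convention raises the evidential bar rather than demanding a much larger effect.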
  21. Jun 14, 2004 #20
    Good catch on the last issue. I wasn't aware of the two different standards.