Eyes On The Back Of Your Head

  • Thread starter zoobyshoe
  • #26
russ_watters
Mentor
19,855
6,276
Originally posted by Zero
I think the methodology in both cases is flawed by interference from the testers. It was just as bad for the woman to tell the subjects that she was supporting the idea as it was for the man to say he was trying to disprove it.
I'm not sure why I didn't read this thread before. It seems to me that the one thing this study proved is that studies like this need to be double-blind. It almost appears it was set up for that specific purpose.

A few years ago a teenager did an experiment for a science fair in which she tested the ability of "touch therapists" (a misnomer - they don't actually touch their subjects) to sense when their hands were in close proximity to another person's. Since "touch therapy" was the profession of the subjects, all of them believed they had such an ability. The test was double-blind and was considered of sufficient scientific merit to be published in the Journal of the American Medical Association.
 
Last edited:
  • #27
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
2,244
2
Originally posted by russ_watters
I'm not sure why I didn't read this thread before. It seems to me that the one thing this study proved is that studies like this need to be double-blind. It almost appears it was set up for that specific purpose.
Granted, telling the subjects the intent of the experiments wasn't the most scientific thing to do. But based on the experimental methodology, it is still entirely unclear how Wiseman and Schlitz could have achieved their respective results. How could they have unfairly tipped their subjects toward their respectively favored outcomes when the subjects' skin conductances were the dependent variables and the remote observation periods were completely blind and random?

Under this design, the conventional view says that there should still be absolutely no correlation between EDA (electrodermal activity, i.e. skin conductance) and observation period, no matter what they told the subjects at the outset (so long as the observation periods continued to be blind and random).
 
  • #28
zoobyshoe
6,265
1,280
Hypnagogue,

Your explanation of statistics gave me a good boost in understanding what the numbers presented were meant to demonstrate. I am still going to have to read it several more times.

In reference to statistics and Chaos you said:
Yet on the macro scale of classic physics we observe regular, deterministic behavior, thanks to the macroscopic statistical tendencies of all those little and unpredictable quantum particles.
Chaos was spawned by the fact that the above is actually not true. Classical physics arrives at the concept of "regular, deterministic behaviour" by ignoring irregular-looking behaviour under various pretexts. The bulk of differential equations cannot be solved; only a very small percentage can. If you study differential equations you will be herded toward those that can be solved. That is the grossest example. It becomes more insidious the more negligible the thing being dismissed seems.

Chaos is the result of people starting to pay attention to, and come to grips with, the "irregularities" that have traditionally, of necessity, been dismissed in order to have a clear picture to look at. One way of defining Chaos might be to say that it is the study of the fact that systems will not settle into equilibrium: periods of apparent order arise and seem stable, but eventually they destabilize and reverse.

Chaos might predict something more along these lines if the two researchers were engaged in ongoing, non-stop testing of the "Staring" phenomenon: the results would remain stable for a while; then the woman would start to get less and less confirmation of her hypothesis, and the man more and more. A reversal would occur in which the woman couldn't prove her theory for the life of her, and the man couldn't disprove the ability for the life of him. Things would stay that way for a while, then reverse again.

There are many other kinds of "Chaos" dynamics that might be at work; that one is just the most famous, as far as I know. Chaos wants to uncover the patterns and dynamics behind the fuzzy edges and the hitherto dismissed swirls and random-seeming (but not random) bumps and crevices. Statistics, to the best of my knowledge, is an attempt to derive information from data by making those irregularities go away and sticking to the "big" picture.
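
To make the "periods of apparent order that destabilize" idea concrete, here is a minimal sketch (my own illustration, not something from Gleick's book) using the logistic map, the standard toy system of chaos theory; the parameter values are arbitrary choices:

```python
# Logistic map: x -> r * x * (1 - x). For some values of r the orbit
# settles into a stable cycle; for others it never settles at all.
def logistic_orbit(r, x0=0.2, n=60):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# At r = 3.5 the orbit locks into a repeating 4-cycle ("apparent order");
# at r = 3.9 it wanders chaotically and never reaches equilibrium.
for r in (3.5, 3.9):
    tail = logistic_orbit(r)[-8:]
    print(f"r={r}:", [round(x, 3) for x in tail])
```

The interesting point is that nothing random is happening in either case: the same deterministic rule produces both the stable and the never-settling behavior.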

It is clear to me that you know more about statistics than I know about Chaos. Since neither of us knows much about the science the other is basing his perspective on, we may not be able to get too far, but I find it interesting. (Easy reading on Chaos is Chaos by James Gleick: very clear expository prose, excellent illustrations.)
 
  • #29
russ_watters
Mentor
19,855
6,276
Originally posted by hypnagogue
Under this design, the conventional view says that there should still be absolutely no correlation between EDA (electrodermal activity, i.e. skin conductance) and observation period, no matter what they told the subjects at the outset (so long as the observation periods continued to be blind and random).
I agree, and from that I can only conclude there must be some unknown experimental error at work here. It seems like a relatively simple study - I'm wondering if it has been redone with slight variations in method (such as not using those two experimenters).
 
  • #30
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
2,244
2
Originally posted by zoobyshoe
Statistics, to the best of my knowledge, is an attempt to derive information from data by making those irregularities go away, and sticking to the "big" picture.
I'm not sure what you mean when you say statistics (to the best of your knowledge) makes irregularities 'go away.'

Statistics is a method for estimating probabilities. When we do not have complete information about a population of data, we analyze data from samples of that population and use rigorous mathematical methods for extrapolating what we can from the sample data. For instance, take TV ratings. It would be impractical to monitor the viewing habits of every household in the US with a TV; therefore, we take samples of data from a subset of all households with TVs and extrapolate what we can from that data to make educated guesses about the viewing habits of the entire population. Using this method, we might be able to say something like "We are 95% confident that between 5 million and 7 million households watched Gilligan's Island on Fox last night." The idea is that even with incomplete information about a population we can narrow our range of uncertainty as to certain claims about that population.
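
As a rough illustration of where a claim like that comes from, here is a minimal sketch of the usual normal-approximation confidence interval for a proportion; the sample size, hit count, and population figure are made-up numbers, not real ratings data:

```python
import math

# Hypothetical: 290 of 5,000 metered households watched the show,
# and there are 100 million TV households in total.
n, hits, population = 5000, 290, 100_000_000

p_hat = hits / n                          # sample proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of the proportion
z = 1.96                                  # ~95% two-sided normal quantile
low, high = p_hat - z * se, p_hat + z * se

print(f"95% CI: {low * population / 1e6:.1f}M to {high * population / 1e6:.1f}M households")
```

With these invented numbers the interval comes out to roughly 5.1 to 6.5 million households, which is the flavor of the "between 5 million and 7 million" statement above.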

Such claims rest on assumptions, typically assumptions as to the probabilistic distribution of data across a population. However, even these assumptions can be tested; for instance, once we draw a goodly amount of data, we can see how well it retro-fits into the assumed population probability distribution.

Irregularities aren't made to vanish by statistics; they vanish by themselves. But they only vanish by themselves if the data supports it. If the data is not indicative of any regularity, it's not as if one is artificially created.

One instance where regularities are almost automatically generated by statistics is encapsulated by the Central Limit Theorem. The theorem goes as follows: if you take a group of samples of data from a population, where each sample consists of n points of data, and you take the average value of each sample, then these sample averages will follow an approximately normal (bell-shaped) distribution, provided n is sufficiently large-- regardless of the probability distribution you are drawing from. An excellent Java demo of this effect is at http://www.ruf.rice.edu/~lane/stat_sim/sampling_dist/index.html .
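
If you'd rather not run the Java applet, the same effect is easy to see in a quick simulation; this sketch draws many samples from a decidedly non-normal (exponential) distribution and prints a crude histogram of the sample means, which come out bell-shaped. All parameters here are arbitrary:

```python
import random

# Means of samples of size n drawn from an exponential distribution
# (mean 1), which is strongly skewed -- yet the sample means are not.
def sample_means(n, num_samples=10_000):
    return [sum(random.expovariate(1.0) for _ in range(n)) / n
            for _ in range(num_samples)]

means = sample_means(n=30)

# Crude text histogram of the sample means
lo, hi, bins = 0.4, 1.6, 12
counts = [0] * bins
for m in means:
    if lo <= m < hi:
        counts[int((m - lo) / (hi - lo) * bins)] += 1
for i, c in enumerate(counts):
    print(f"{lo + i * (hi - lo) / bins:.2f} {'#' * (c // 100)}")
```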

How chaos ties into all this I can't say, but hopefully this will give you some more insight into how statistics works.
 
  • #31
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
2,244
2
Originally posted by russ_watters
I agree, and from that I can only conclude there must be some unknown experimental error at work here. It seems like a relatively simple study - I'm wondering if it has been redone with slight variations in method (such as not using those two experimenters).
You can only conclude unknown experimental error because you are not willing to entertain the idea that some unknown but actual phenomenon might be at work here.

Not using these two experimenters would seem to defeat the point of the results, which show a correlation between the data gathered and the experimenter: an 'experimenter effect.' If anything, I think a sort of meta-experiment is appropriate, using different configurations of experimenters-- using Schlitz and Wiseman as the designated 'observers' while letting a third party run the rest of the experiment; letting Schlitz and Wiseman talk to participants while using third parties as 'observers' (although this has already been done in each of their initial experiments, with results similar to their joint experiment); letting Wiseman and Schlitz instruct a third and fourth party in how they conducted the subject conversation and the observing, and letting those new parties take their respective places in the joint experiment; and so on.

But clearly we must pay special attention to what causal role, if any, these experimenters have been playing in the experiments, without automatically attributing the results to experimental error. Further research is needed to see what is truly responsible for the observed data.
 
  • #32
zoobyshoe
6,265
1,280
Originally posted by hypnagogue
How chaos ties into all this I can't say, but hopefully this will give you some more insight into how statistics works.
James Gleick talks about the bell curve and Chaos starting on page 84 of the book. It's a roughly four-page story about how an early pioneer in Chaos got an insight into what was really happening in a situation where the bell curve just wouldn't work. Too long to quote here, but it gives me the notion that, knowing all you do about statistics, you would appreciate Chaos far more than I am capable of doing.
 
  • #33
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
2,244
2
Thanks zooby, I'll definitely have to look into that.
 
  • #34
russ_watters
Mentor
19,855
6,276
Originally posted by hypnagogue
You can only conclude unknown experimental error because you are not willing to entertain the idea that some unknown but actual phenomenon might be at work here.
Quite right. I belong to the "extraordinary claims require extraordinary evidence" camp. To convince me that it's even POSSIBLE that there is some actual phenomenon at work here would require, at the very least, an experiment that is NOT fundamentally flawed. I don't think that's much to ask.
Not using these two experimenters would seem to defeat the point of the results, which show a correlation between data gathered and experimenter: an 'experimenter' effect.
That's true if the goal is analyzing flaws in research methodology. If the goal is exploring the phenomenon, then an experiment that is not flawed should be done. If people have such a power, an experiment that is not flawed will show it. This seems self-evident to me....

...unless you are suggesting that step has already been covered. If such a power has already been proven, then certainly you could do an experiment about outside effects on this power. But as with MANY other types of off-the-mainstream science, it appears to me that the first step is being purposefully overlooked.
Further research is needed to see what is truly responsible for the observed data.
Which is why I think an experiment should be done that does NOT include experimenter effects. If a non-flawed experiment is run, it will show whether the phenomenon exists and, by implication, what the actual effect of the experimenter is.
 
  • #35
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
2,244
2
Originally posted by russ_watters
Quite right. I belong to the "extraordinary claims require extraordinary evidence" camp. To convince me that it's even POSSIBLE that there is some actual phenomenon at work here would require, at the very least, an experiment that is NOT fundamentally flawed. I don't think that's much to ask.
What a catch-22 this is! You seem to be operating with the logic of the following two statements:

1) If and only if there is no experimental flaw, I will consider that there is a genuine psi phenomenon.
2) If an experiment produces positive results in favor of psi, there must be an experimental flaw.

Given the two conditionals above, it is impossible to get from "an experiment produces positive results in favor of psi" to "I will consider that there is a genuine psi phenomenon." Obviously the flaw is in the 2nd conditional.

Assume for a moment that psi exists. Just how are we ever to establish that an experiment that produces positive results in favor of psi is not experimentally flawed?

For that matter, what is the flaw in the Schlitz/Wiseman experiments? If you consider it to be the 'priming' done before their respective trial runs, I can only reiterate what I have said before:

1) Whether the data supports the existence of psi or not appears to be contingent upon who runs the experiment. So there seems to be an 'experimenter effect.' To replace the two experimenters with any two arbitrary experimenters, and thus to remove the attitudes they display to the subjects, ignores the fact that the critical variable in this experiment appears to be the effect of the experimenter him/herself on the subjects before they produce the experimental data.

2) We can control for this as indicated in my last post, by running a sort of 'meta-experiment' in which data is collected to concentrate on and map out any potential correlation between the experimenter and the experimental data. Such a meta-experiment would not necessarily be flawed in the way you seem to think the Schlitz/Wiseman experiment is, but the 'flaw' in that lower-level experiment by necessity cannot be eliminated, since it is the critical variable differentiating the two sets of data. If we are comparing the differing behaviors of two systems that are identical except for one parameter P, then obviously we will not learn anything about the nature of the differing behaviors by setting P to be equivalent in both systems. Rather, we try to see exactly what kind of effect P has in influencing the systems to behave in their different respective manners by deliberately varying P and observing the different systematic behaviors it elicits.
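
To make that concrete, here is a hedged sketch of how the data from such a meta-experiment might be analyzed -- a permutation test asking whether the gap between two experimenters' session scores is larger than random relabeling would produce. The numbers are pure invention, not the actual Schlitz/Wiseman data:

```python
import random

# Hypothetical per-session scores (e.g., mean EDA during 'stare' periods
# minus mean EDA during rest periods) under each experimenter.
schlitz = [0.8, 1.1, 0.5, 1.4, 0.9, 1.2, 0.7]
wiseman = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2, 0.1]

observed = sum(schlitz) / len(schlitz) - sum(wiseman) / len(wiseman)

# Shuffle the experimenter labels many times and see how often a gap
# at least this large arises by chance alone.
pooled = schlitz + wiseman
trials, count = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:len(schlitz)], pooled[len(schlitz):]
    if abs(sum(a) / len(a) - sum(b) / len(b)) >= abs(observed):
        count += 1

print(f"observed gap = {observed:.2f}, permutation p = {count / trials:.4f}")
```

A small p here would say only that the experimenter labels matter, which is exactly the 'experimenter effect' question; it would say nothing by itself about the mechanism.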
 
Last edited:
  • #36
Zero
hypnagogue, wouldn't it be better to change the experimental parameters, rather than juggle the data until you get the result you seek?
 
  • #37
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
2,244
2
Originally posted by Zero
hypnagogue, wouldn't it be better to change the experimental parameters, rather than juggle the data until you get the result you seek?
No one is juggling data. Have you read the Schlitz/Wiseman paper?

The experimental parameters were identical. The only parameter that differed between the two sets of data was the experimenter-- Schlitz or Wiseman. Therefore, the Schlitz/Wiseman experiment indicates that the difference in the data sets (supporting psi or not) lies not in the experimental parameters but in something to do with Schlitz or Wiseman themselves. (And before you say it, they took careful measures to prevent data tampering both by themselves and by outside parties.)

Thus, it only makes sense to carry out further experiments to try to discern such an 'experimenter effect.'
 
  • #38
Zero
Originally posted by hypnagogue
No one is juggling data. Have you read the Schlitz/Wiseman paper?

The experimental parameters were identical. The only parameter that differed between the two sets of data was the experimenter-- Schlitz or Wiseman. Therefore, the Schlitz/Wiseman experiment indicates that the difference in the data sets (supporting psi or not) lies not in the experimental parameters but in something to do with Schlitz or Wiseman themselves. (And before you say it, they took careful measures to prevent data tampering both by themselves and by outside parties.)

Thus, it only makes sense to carry out further experiments to try to discern such an 'experimenter effect.'
Haven't read the paper...is there a link I missed?

You know, this reminds me of something else...hearing tests. I know it is possible to pass those and be stone deaf at the same time. I wonder how people do it, but I know it can be done. I'm interested to see how the evaluation of the results is done. Do you subtract the misses from the hits and count a positive score as a positive result?
 
  • #39
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
2,244
2
Originally posted by Zero
Haven't read the paper...is there a link I missed?
There are a bunch of links in a post of mine on the 2nd page of this thread. You can read all about the methodology and such there. One of the compelling things about this experiment is that they didn't measure 'psychic correlation' (or whatever it should be called) via a deliberate-choice, hit-or-miss setup, but rather via unconscious fluctuations of nervous system activity as measured by skin conductance.
 
  • #40
russ_watters
Mentor
19,855
6,276
Originally posted by hypnagogue
What a catch-22 this is! You seem to be operating with the logic of the following two statements:

1) If and only if there is no experimental flaw, I will consider that there is a genuine psi phenomenon.
2) If an experiment produces positive results in favor of psi, there must be an experimental flaw.

Given the two conditionals above, it is impossible to get from "an experiment produces positive results in favor of psi" to "I will consider that there is a genuine psi phenomenon." Obviously the flaw is in the 2nd conditional.
#1 is fine and I don't see why there is anything wrong with it. #2 is not what I am saying. I'm saying it is possible to set up, ahead of time, an experiment without the known flaws of this experiment.

By implying #2, YOU seem to be suggesting that only a flawed experiment could produce positive results. Are you suggesting that this phenomenon can only be explored through flawed experiments?
Assume for a moment that psi exists. Just how are we ever to establish that an experiment that produces positive results in favor of psi is not experimentally flawed?
I really did already cover this. An experiment which by design has no experimenter impact: a simple (not just an extraneous word - a truly simple test eliminates ambiguity) and double-blind test. The test I described earlier about "touch therapy" was such a test. Unbiased tests really are a piece of cake to set up!
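
For what it's worth, a test of that kind reduces to comparing a hit count against chance. A minimal sketch of the arithmetic, with invented numbers (the real touch-therapy study's figures differ, though its hit rate was likewise at or below chance):

```python
from math import comb

# Hypothetical: 150 double-blind trials in which the practitioner guesses
# which of their two hands the experimenter's hand hovers over; chance is 50%.
n, hits = 150, 70

# Exact one-sided binomial tail: probability of doing at least this well
# by pure guessing.
p_at_least = sum(comb(n, k) for k in range(hits, n + 1)) / 2 ** n
print(f"P(>= {hits}/{n} hits by chance) = {p_at_least:.3f}")
```

Here 70 of 150 is actually below the 75 expected by guessing, so the tail probability is large and the claimed ability finds no support.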

The catch-22 cuts both ways, zooby, and given my bias toward not accepting extraordinary claims without extraordinary evidence, I will NOT assume psi exists. It must be PROVEN to exist through experiments. I'm sorry, but that's how science works and MUST work.
Whether the data supports the existence of psi or not appears to be contingent upon who runs the experiment. So there seems to be an 'experimenter effect.'
"Experimenter effect" is synonomous with "experimenter error" so to me (and frankly, most scientists) such a statement is tantamount to an acknowledgement that there is no psi phenomenon at work here. A test for "experimenter effect" would have both the experimenter and the test subjects only THINKING the stated experimenter was the experimenter. But then - to those who assume psi does exist, such a test would not be satisfactory. I'm afraid there is no way out of that cath-22. It is unreasonable to demand that tests on phenomenon depend on a fundamental flaw in the test.
I know it is possible to pass those and be stone deaf at the same time. I wonder how people do it, but I know it can be done.
I am quite certain I've done it. I had a ruptured eardrum in high school that almost disqualified me from military service. I've taken so many ear tests, I'm sure being "trigger-happy" gives false-positive results.
 
  • #41
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
2,244
2
Originally posted by russ_watters
#1 is fine and I don't see why there is anything wrong with it. #2 is not what I am saying. I'm saying it is possible to set up, ahead of time, an experiment without the known flaws of this experiment.

By implying #2, YOU seem to be suggesting that only a flawed experiment could produce positive results. Are you suggesting that this phenomenon can only be explored through flawed experiments? I really did already cover this. An experiment which by design has no experimenter impact: a simple (not just an extraneous word - a truly simple test eliminates ambiguity) and double-blind test. The test I described earlier about "touch therapy" was such a test. Unbiased tests really are a piece of cake to set up!

The catch-22 cuts both ways, zooby, and given my bias toward not accepting extraordinary claims without extraordinary evidence, I will NOT assume psi exists. It must be PROVEN to exist through experiments. I'm sorry, but that's how science works and MUST work. "Experimenter effect" is synonymous with "experimenter error," so to me (and frankly, to most scientists) such a statement is tantamount to an acknowledgement that there is no psi phenomenon at work here.
First of all, I'm not zooby.

Secondly, I think your insistence on thinking of the experiment as 'flawed' has no true basis. It is 'flawed' only insofar as it did not follow ideal scientific protocol exactly. However, you have even admitted that the 'flaw' in the Schlitz/Wiseman experiment, according to the currently accepted scientific worldview, cannot fully account for the results they got. How is the 'flaw' an 'experimental error' when it cannot even be explained how the flaw systematically produced the data that it did? You are positing an 'experimental error' that we cannot explain using the current scientific paradigm, which doesn't really win you any points but only contradicts your position.

The Schlitz/Wiseman experiment seems to show a meaningful effect arising from the interaction between experimenter and test subject that cannot be accounted for by the currently accepted scientific worldview. Of course, making the experiment double-blind could only hinder or even outright destroy this effect. But once again, the nature of the experimental data is such that the data cannot be explained even by the non-double-blind nature of this experiment if we adhere to the currently accepted paradigm. This is why I think it is perfectly acceptable that these tests have not been completely double-blind.

By not making them completely double-blind, Schlitz and Wiseman seem to have uncovered an interesting effect that cannot be explained by currently accepted scientific paradigms-- whether you think of it as an 'error' or not. The existence (and reproduction) of data that cannot be accounted for intelligibly should lead us to scrutinize the apparent 'experimenter effect' more closely in future experiments to see what (if anything) there really is to it, NOT to revert to experimental designs that necessarily prevent us from scrutinizing this effect altogether.
 
  • #42
russ_watters
Mentor
19,855
6,276
Originally posted by hypnagogue
First of all, I'm not zooby.
Oops. Sorry.
Secondly, I think your insistence on thinking of the experiment as 'flawed' has no true basis. It is 'flawed' only insofar as it did not follow ideal scientific protocol exactly.
That's by definition, of course. If an experiment doesn't follow scientific "protocol," it is, by definition, flawed.
However, you have even admitted that the 'flaw' in the Schlitz/Wiseman experiment, according to the currently accepted scientific worldview, cannot fully account for the results they got. How is the 'flaw' an 'experimental error' when it cannot even be explained how the flaw systematically produced the data that it did? You are positing an 'experimental error' that we cannot explain using the current scientific paradigm, which doesn't really win you any points but only contradicts your position.
I don't think so. There is a difference between finding that there is a flaw and figuring out what that flaw is. I see no reason to require that a flaw be explained in order to show that it exists. Indeed, that's again the way science works. Data is data. It doesn't require a theory to explain it in order to be valid. It would be nice, but either way, the data must always come first.
The existence (and reproduction) of data that cannot be accounted for intelligibly should lead us to scrutinize the apparent 'experimenter effect' more closely in future experiments to see what (if anything) there really is to it, NOT to revert to experimental designs that necessarily prevent us from scrutinizing this effect altogether.
I didn't say we shouldn't scrutinize this effect. Indeed, I suggest just the opposite: experiments be constructed in ways that could help pinpoint the source of the error.
 
Last edited:
  • #43
Zero
Originally posted by hypnagogue
There are a bunch of links in a post of mine on the 2nd page of this thread. You can read all about the methodology and such there. One of the compelling things about this experiment is that they didn't measure 'psychic correlation' (or whatever it should be called) via a deliberate-choice, hit-or-miss setup, but rather via unconscious fluctuations of nervous system activity as measured by skin conductance.
What is compelling is that the experimenter suggested that she believed in it, which could skew the skin conductance results, in the same way that nervousness at the doctor's office raises blood pressure.
 
  • #44
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
2,244
2
Originally posted by russ_watters
That's by definition, of course. If an experiment doesn't follow scientific "protocol," it is, by definition, flawed.
Right... but if this is a truly valid psi effect, then it is really scientific protocol itself that is flawed (in this instance, at least-- I'm not saying that science is suddenly invalid, but rather that there would be an existing phenomenon that it couldn't detect without considering the experiment flawed. Which is bad.)


I don't think so. There is a difference between finding that there is a flaw and figuring out what that flaw is. I see no reason to require that a flaw be explained in order to show that it exists. Indeed, that's again the way science works. Data is data. It doesn't require a theory to explain it in order to be valid. It would be nice, but either way, the data must always come first.


How do you know that there is a flaw without knowing what it is? Just because you defined it that way? That's not a good way to conduct our thinking.

The point is that if you believe that everything said in the paper is true, then you cannot come up with an explanation for how this error works WITHOUT assuming some kind of psi effect anyway. Or, if you think you can, I'd like to hear it.

I didn't say we shouldn't scrutinize this effect. Indeed, I suggest just the opposite: experiments be constructed in ways that could help pinpoint the source of the error.
Right, fine-- but these experiments must be 'meta-experiments' of the type I have outlined. If you destroy the unique interaction between experimenter and subject, then you have destroyed the hypothesized cause of the effect-- in other words, you're not really testing for it in the first place. This is why I proposed that the experimenter/subject interaction continue to take place, and that a meta-experimenter overseeing the entire thing be the truly objective one, just observing the effects of the interactions between the lower-level 'experimenter' and his/her subjects.
 
  • #45
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
2,244
2
Originally posted by Zero
What is compelling is that the experimenter suggested that she believed in it, which could skew the skin conductance results, in the same way that nervousness at the doctor's office raises blood pressure.
The compelling evidence is not that the average skin conductance of Schlitz's subjects was higher. The compelling evidence is that their skin conductances jumped precisely when they were being observed via the closed-circuit TV. That is really something entirely different from what you implied in your response.
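
To illustrate the kind of scoring this implies (a hedged sketch only; the actual protocol is described in the linked paper), imagine skin conductance averaged per one-minute epoch, with 'stare' epochs scheduled randomly and blindly. The conventional null prediction is that the two condition means should not differ. The numbers and the injected 'effect' below are pure invention for demonstration:

```python
import random

random.seed(1)
epochs = ['stare'] * 10 + ['rest'] * 10
random.shuffle(epochs)                      # blind, random schedule

def epoch_eda(kind):
    base = random.gauss(5.0, 0.3)           # baseline EDA, made-up scale
    bump = 0.4 if kind == 'stare' else 0.0  # injected effect, for illustration
    return base + bump

samples = {'stare': [], 'rest': []}
for kind in epochs:
    samples[kind].append(epoch_eda(kind))

for kind, vals in samples.items():
    print(f"{kind}: mean EDA = {sum(vals) / len(vals):.2f}")
```

Under the null, both means should agree up to noise; a consistent stare/rest gap across sessions is what the reported result amounts to.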
 
