Do our eyes have the ability to sense when someone is looking at us?

  • Thread starter: zoobyshoe
  • Tags: Eyes, Head

Discussion Overview

The discussion revolves around the ability of humans to sense when someone is looking at them, referencing two studies with contrasting outcomes. Participants explore the implications of these studies, anecdotal experiences, and potential explanations for the observed phenomena, including the influence of psychological factors on experimental results.

Discussion Character

  • Exploratory
  • Debate/contested
  • Conceptual clarification

Main Points Raised

  • Some participants recount two studies: one suggested a heightened ability to sense being watched, while the other found no such ability, raising questions about the psychological preparation of subjects.
  • There is speculation that the results of the studies could be influenced by the participants' beliefs and expectations regarding the phenomenon being tested.
  • One participant suggests that the poor results in the man's study might indicate that subjects unconsciously chose not to respond when they felt watched, potentially due to the study's framing.
  • Another participant introduces the idea of detecting electromagnetic fields as a possible explanation for sensing when someone is looking, drawing parallels to animal behavior before earthquakes.
  • There is a discussion about whether the phenomenon can be reproduced in controlled settings, with some expressing skepticism about the existence of the ability itself.

Areas of Agreement / Disagreement

Participants express a range of views, with no consensus on the existence of the ability to sense being watched. Some propose psychological explanations, while others consider more speculative ideas, leading to an unresolved debate.

Contextual Notes

Limitations include the lack of detailed information about the studies and their methodologies, as well as the dependence on subjective experiences and anecdotal evidence. The discussion also highlights the potential influence of belief on experimental outcomes.

Who May Find This Useful

This discussion may be of interest to those exploring human perception, psychological influences on behavior, and the intersection of anecdotal evidence with scientific inquiry.

  • #31
Originally posted by russ_watters
I agree, and from that I can only conclude there must be some unknown experimental error at work here. It seems like a relatively simple study - I'm wondering if it has been redone with slight variations in method (such as not using those two experimenters).

You can only conclude unknown experimental error because you are not willing to entertain the idea that some unknown but actual phenomenon might be at work here.

Not using these two experimenters would seem to defeat the point of the results, which show a correlation between data gathered and experimenter: an 'experimenter' effect. If anything, I think a sort of meta-experiment is appropriate using different variations of experimenters-- using Schlitz and Wiseman as the designated 'observers' while letting a third party run the rest of the experiment; letting Schlitz and Wiseman talk to participants while using third parties as 'observers' (although this has already been done in each of their initial experiments, with similar results to their joint experiment); letting Wiseman and Schlitz instruct a 3rd and 4th party in how they conducted the subject conversation and observing, and letting those new parties take their respective places in the joint experiment; and so on.

But clearly we must pay special attention to what, if any, causal roles these experimenters have been playing in the experiments, without automatically attributing it to experimental error. Further research is needed to see what is truly responsible for the observed data.
 
  • #32
Originally posted by hypnagogue
How chaos ties into all this I can't say, but hopefully this will give you some more insight into how statistics works.
James Gleick talks about the bell curve and chaos starting on page 84 of the book. It's a roughly four-page story about how an early pioneer of chaos theory got an insight into what was really happening in a situation where the bell curve just wouldn't work. Too long to quote here, but it gives me the notion that, knowing all you do about statistics, you would appreciate chaos far more than I am capable of doing.
 
  • #33
Thanks zooby, I'll definitely have to look into that.
 
  • #34
Originally posted by hypnagogue
You can only conclude unknown experimental error because you are not willing to entertain the idea that some unknown but actual phenomenon might be at work here.
Quite right. I belong to the "extraordinary claims require extraordinary evidence" camp. To convince me that it's even POSSIBLE that there is some actual phenomenon at work here would require, at the very least, an experiment that is NOT fundamentally flawed. I don't think that's much to ask.
Not using these two experimenters would seem to defeat the point of the results, which show a correlation between data gathered and experimenter: an 'experimenter' effect.
That's true if the goal is analyzing flaws in research methodology. If the goal is exploring the phenomenon, then an experiment that is not flawed should be done. If people have such a power, an experiment that is not flawed will show it. This seems self-evident to me...

...unless you are suggesting that step has already been covered. If such a power has already been proven, then certainly you could do an experiment about outside effects on this power. But as with MANY other types of off the mainstream science, it appears to me that the first step is being purposefully overlooked.
Further research is needed to see what is truly responsible for the observed data.
Which is why I think an experiment should be done that does NOT include experimenter effects. If a non-flawed experiment is run, it will show whether the phenomenon exists and, by implication, what the actual effect of the experimenter is.
 
  • #35
Originally posted by russ_watters
Quite right. I belong to the "extraordinary claims require extraordinary evidence" camp. To convince me that it's even POSSIBLE that there is some actual phenomenon at work here would require, at the very least, an experiment that is NOT fundamentally flawed. I don't think that's much to ask.

What a catch 22 this is! You seem to be operating with the logic of the following two statements:

1) If and only if there is no experimental flaw, I will consider that there is a genuine psi phenomenon.
2) If an experiment produces positive results in favor of psi, there must be an experimental flaw.

Given the two conditionals above, it is impossible to get from "an experiment produces positive results in favor of psi" to "I will consider that there is a genuine psi phenomenon." Obviously the flaw is in the 2nd conditional.

Assume for a moment that psi exists. Just how are we ever to establish that an experiment that produces positive results in favor of psi is not experimentally flawed?

For that matter, what is the flaw in the Schlitz/Wiseman experiments? If you consider it to be the 'priming' done before their respective trial runs, I can only reiterate what I have said before:

1) Whether the data supports the existence of psi or not appears to be contingent upon who runs the experiment. So there seems to be an 'experimenter effect.' To replace the two experimenters with any 2 arbitrary experimenters and thus to remove the attitudes they display to the subjects ignores that the critical variable in this experiment appears to be the effect of the experimenter him/herself on the subjects before they produce the experimental data.

2) We can control for this as indicated in my last post, by running a sort of 'meta-experiment' where data is collected to concentrate on and map out any potential correlation between the experimenter and the experimental data. Such a meta-experiment would not necessarily be flawed in the way you seem to think the Schlitz/Wiseman experiment is, but the 'flaw' in that lower level experiment by necessity cannot be eliminated, since it is the critical variable differentiating the two sets of data. If we are comparing the differing behaviors of two systems that are identical except for one parameter P, then obviously we will not learn anything about the nature of the differing behaviors by setting P to be equivalent in both systems. Rather, we try to see exactly what kind of effect P has in influencing the systems to behave in their different respective manners by deliberately varying P and observing the different systematic behaviors it elicits.
 
Last edited:
  • #36
hypnagogue, wouldn't it be better to change the experimental parameters, rather than juggle the data until you get the result you seek?
 
  • #37
Originally posted by Zero
hypnagogue, wouldn't it be better to change the experimental parameters, rather than juggle the data until you get the result you seek?

No one is juggling data. Have you read the Schlitz/Wiseman paper?

The experimental parameters were identical. The only differing parameter between the 2 sets of data was the experimenter-- Schlitz or Wiseman. Therefore, the Schlitz/Wiseman experiment indicates that the difference in the data sets (supporting psi or not) lies not in the experimental parameters but in something to do with Schlitz or Wiseman themselves. (And before you say it, they took careful measures to prevent data tampering both by themselves and by outside parties.)

Thus, it only makes sense to carry out further experiments to try to discern such an 'experimenter effect.'
 
  • #38
Originally posted by hypnagogue
No one is juggling data. Have you read the Schlitz/Wiseman paper?

The experimental parameters were identical. The only differing parameter between the 2 sets of data was the experimenter-- Schlitz or Wiseman. Therefore, the Schlitz/Wiseman experiment indicates that the difference in the data sets (supporting psi or not) lies not in the experimental parameters but in something to do with Schlitz or Wiseman themselves. (And before you say it, they took careful measures to prevent data tampering both by themselves and by outside parties.)

Thus, it only makes sense to carry out further experiments to try to discern such an 'experimenter effect.'
Haven't read the paper...is there a link I missed?

You know, this reminds me of something else...hearing tests. I know it is possible to pass those and be stone deaf at the same time. I wonder how people do it, but I know it can be done. I'm interested to see how the evaluation of the results is done. Do you subtract the misses from the hits, and count a positive score as a positive result?
 
  • #39
Originally posted by Zero
Haven't read the paper...is there a link I missed?

There are a bunch of links in a post of mine on the 2nd page of this thread. You can read all about the methodology and such there. One of the compelling things about this experiment is that they didn't measure 'psychic correlation' or whatever it should be called via a deliberate choice/hit or miss setup, but rather unconscious fluctuations of nervous system activity as measured by skin conductance.
 
  • #40
Originally posted by hypnagogue
What a catch 22 this is! You seem to be operating with the logic of the following two statements:

1) If and only if there is no experimental flaw, I will consider that there is a genuine psi phenomenon.
2) If an experiment produces positive results in favor of psi, there must be an experimental flaw.

Given the two conditionals above, it is impossible to get from "an experiment produces positive results in favor of psi" to "I will consider that there is a genuine psi phenomenon." Obviously the flaw is in the 2nd conditional.
#1 is fine and I don't see why there is anything wrong with it. #2 is not what I am saying. I'm saying it is possible to set up, ahead of time, an experiment without the known flaws of this experiment.

By implying #2, YOU seem to be suggesting that only a flawed experiment could produce positive results. Are you suggesting that this phenomenon can only be explored through flawed experiments?
Assume for a moment that psi exists. Just how are we ever to establish that an experiment that produces positive results in favor psi is not experimentally flawed?
I really did already cover this. An experiment which by design has no experimenter impact: a simple (not just an extraneous word - a truly simple test eliminates ambiguity) and double-blind test. The test I described earlier about "touch therapy" was such a test. Unbiased tests really are a piece of cake to set up!
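As an aside, the claim that such a test is simple to set up can be illustrated concretely. The sketch below generates a counterbalanced, randomized stare/no-stare trial schedule of the kind a double-blind version of this experiment would need; the trial count and labels are invented for the example, not taken from the actual study protocol.

```python
import random

def make_schedule(n_trials=32, seed=None):
    """Build a counterbalanced, shuffled stare/no-stare schedule.

    Half the trials are 'stare', half 'no-stare', in random order.
    Only the remote observer's computer would read this list; the
    subject and any experimenter in the room never see it, which is
    what makes the design double-blind.
    """
    rng = random.Random(seed)
    schedule = ["stare"] * (n_trials // 2) + ["no-stare"] * (n_trials // 2)
    rng.shuffle(schedule)
    return schedule

schedule = make_schedule(seed=42)
print(schedule[:8])  # the first few randomized trial conditions
```

Counterbalancing (equal counts of each condition) matters here: without it, a lopsided random draw could by itself produce a spurious difference between conditions.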

The catch-22 cuts both ways, zooby, and given my bias toward not accepting extraordinary claims without extraordinary evidence, I will NOT assume psi exists. It must be PROVEN to exist through experiments. I'm sorry, but that's how science works and MUST work.
Whether the data supports the existence of psi or not appears to be contingent upon who runs the experiment. So there seems to be an 'experimenter effect.'
"Experimenter effect" is synonymous with "experimenter error", so to me (and frankly, most scientists) such a statement is tantamount to an acknowledgment that there is no psi phenomenon at work here. A test for "experimenter effect" would have both the experimenter and the test subjects only THINKING the stated experimenter was the experimenter. But then - to those who assume psi does exist, such a test would not be satisfactory. I'm afraid there is no way out of that catch-22. It is unreasonable to demand that tests of a phenomenon depend on a fundamental flaw in the test.
I know it is possible to pass those and be stone deaf at the same time. I wonder how people do it, but I know it can be done.
I am quite certain I've done it. I had a ruptured eardrum in high school that almost disqualified me from military service. I've taken so many ear tests, I'm sure being "trigger-happy" gives false-positive results.
 
  • #41
Originally posted by russ_watters
#1 is fine and I don't see why there is anything wrong with it. #2 is not what I am saying. I'm saying it is possible to set up, ahead of time, an experiment without the known flaws of this experiment.

By implying #2, YOU seem to be suggesting that only a flawed experiment could produce positive results. Are you suggesting that this phenomenon can only be explored through flawed experiments? I really did already cover this. An experiment which by design has no experimenter impact: a simple (not just an extraneous word - a truly simple test eliminates ambiguity) and double-blind test. The test I described earlier about "touch therapy" was such a test. Unbiased tests really are a piece of cake to set up!

The catch-22 cuts both ways, zooby, and given my bias toward not accepting extraordinary claims without extraordinary evidence, I will NOT assume psi exists. It must be PROVEN to exist through experiments. I'm sorry, but that's how science works and MUST work. "Experimenter effect" is synonymous with "experimenter error", so to me (and frankly, most scientists) such a statement is tantamount to an acknowledgment that there is no psi phenomenon at work here.

First of all, I'm not zooby.

Secondly, I think your insistence on thinking of the experiment as 'flawed' has no true basis. It is 'flawed' only insofar as it did not follow ideal scientific protocol exactly. However, you have even admitted that the 'flaw' in the Schlitz/Wiseman experiment, according to the currently accepted scientific worldview, cannot fully account for the results they got. How is the 'flaw' an 'experimental error' when it cannot even be explained how the flaw systematically produced the data that it did? You are positing an 'experimental error' that we cannot explain using the current scientific paradigm, which doesn't really win you any points but only contradicts your position.

The Schlitz/Wiseman experiment seems to show a meaningful effect arising from the interaction between experimenter and test subject that cannot be accounted for by the currently accepted scientific worldview. Of course making the experiment double-blind could only hinder or even outright destroy this effect. But once again, the nature of the experimental data is such that the data cannot be explained even by the non-double-blind nature of this experiment if we adhere to the currently accepted paradigm. This is why I think it is perfectly acceptable that these tests have not been completely double-blind.

By not making them completely double-blind, Schlitz and Wiseman seem to have uncovered an interesting effect that cannot be explained by currently accepted scientific paradigms-- whether you think of it as an 'error' or not. The existence (and reproduction) of data that cannot be accounted for intelligibly should lead us to scrutinize more closely the apparent 'experimenter effect' in future experiments to see what (if anything) there really is to it, NOT to revert to experimental designs that necessarily prevent us from scrutinizing this effect altogether.
 
  • #42
Originally posted by hypnagogue
First of all, I'm not zooby.
Oops. Sorry.
Secondly, I think your insistence on thinking of the experiment as 'flawed' has no true basis. It is 'flawed' only insofar as it did not follow ideal scientific protocol exactly.
That's by definition, of course. If an experiment doesn't follow the scientific "protocol", it is, by definition, flawed.
However, you have even admitted that the 'flaw' in the Schlitz/Wiseman experiment, according to the currently accepted scientific worldview, cannot fully account for the results they got. How is the 'flaw' an 'experimental error' when it cannot even be explained how the flaw systematically produced the data that it did? You are positing an 'experimental error' that we cannot explain using the current scientific paradigm, which doesn't really win you any points but only contradicts your position.
I don't think so. There is a difference between finding that there is a flaw and figuring out what that flaw is. I see no reason to require that a flaw be explained in order to show it exists. Indeed, that's again the way science works. Data is data. It doesn't require a theory to explain it to be valid. It would be nice, but either way, the data must always come first.
The existence (and reproduction) of data that cannot be accounted for intelligibly should should lead us to scrutinize more closely the apparent 'experimenter effect' in future experiments to see what (if anything) there really is to it, NOT to revert to experimental designs that necessarily prevent us from scrutinizing this effect altogether.
I didn't say we shouldn't scrutinize this effect. Indeed, I suggest just the opposite: experiments be constructed in ways that could help pinpoint the source of the error.
 
Last edited:
  • #43
Originally posted by hypnagogue
There are a bunch of links in a post of mine on the 2nd page of this thread. You can read all about the methodology and such there. One of the compelling things about this experiment is that they didn't measure 'psychic correlation' or whatever it should be called via a deliberate choice/hit or miss setup, but rather unconscious fluctuations of nervous system activity as measured by skin conductance.
What is compelling is that the experimenter suggested that she believed in it, which could skew the skin conductance results, in the same way that nervousness at the doctor's office raises blood pressure.
 
  • #44
Originally posted by russ_watters
That's by definition, of course. If an experiment doesn't follow the scientific "protocol", it is, by definition, flawed.

Right... but if this is a truly valid psi effect, then it is really scientific protocol itself that is flawed (in this instance, at least-- not saying that science is suddenly invalid, but rather that there would be an existing phenomenon that science couldn't detect without deeming the experiment flawed, which is bad).


I don't think so. There is a difference between finding that there is a flaw and figuring out what that flaw is. I see no reason to require that a flaw be explained in order to show it exists. Indeed, that's again the way science works. Data is data. It doesn't require a theory to explain it to be valid. It would be nice, but either way, the data must always come first.


How do you know that there is a flaw without knowing what it is? Just because you defined it that way? That's not a good way to conduct our thinking.

The point is that if you believe that everything said in the paper is true, then you cannot come up with an explanation for how this error works WITHOUT assuming some kind of psi effect anyway. Or, if you think you can, I'd like to hear it.

I didn't say we shouldn't scrutinize this effect. Indeed, I suggest just the opposite: experiments be constructed in ways that could help pinpoint the source of the error.

Right, fine-- but these experiments must be 'meta-experiments' of the type I have outlined. If you destroy the unique interaction between experimenter and subject, then you have destroyed the hypothesized cause of the effect-- in other words, you're not really testing for it in the first place. This is why I proposed that the experimenter/subject interaction continue to take place, and to allow a meta-experimenter overseeing the entire thing to be the truly objective one, just observing the effects of the interactions between the lower level 'experimenter' and his/her subjects.
 
  • #45
Originally posted by Zero
What is compelling is that the experimenter suggested that she believed in it, which could skew the skin conductance results, in the same way that nervousness at the doctor's office raises blood pressure.

The compelling evidence is not that the average skin conductance for Schlitz's subjects was higher. The compelling evidence is that their skin conductance jumped precisely when they were being observed via the closed-circuit TV. That is really something entirely different from what you implied in your response.
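For what it's worth, the claim at issue here-- that conductance rose specifically during the 'observed' windows rather than being uniformly elevated-- is exactly the kind of claim a permutation test can check. Below is a minimal sketch of such a test; the conductance readings are fabricated for illustration and are not from the actual Schlitz/Wiseman data.

```python
import random

def permutation_test(stare, control, n_perm=10_000, seed=0):
    """One-sided two-sample permutation test on the difference of means.

    Repeatedly shuffles the pooled readings, splits them into two
    groups of the original sizes, and counts how often the shuffled
    mean difference is at least as large as the observed one. The
    returned fraction estimates the p-value under the null hypothesis
    that group labels ('observed' vs. not) don't matter.
    """
    rng = random.Random(seed)
    observed = sum(stare) / len(stare) - sum(control) / len(control)
    pooled = stare + control
    n = len(stare)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
        if diff >= observed:
            count += 1
    return count / n_perm

# Fabricated skin-conductance readings, one value per trial window:
stare_windows = [5.2, 5.8, 6.1, 5.9, 6.3, 5.7]    # subject being observed
control_windows = [5.0, 5.1, 4.9, 5.3, 5.2, 5.0]  # subject not observed

p = permutation_test(stare_windows, control_windows)
print(f"one-sided p-value: {p:.4f}")
```

A small p-value here would say only that the 'observed' windows differ from the rest more than chance shuffling explains-- it says nothing by itself about what caused the difference, which is precisely the point under dispute in this thread.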
 
