Is There a Standard Method for Judging Experiments in Meta-Analysis?

  • Thread starter: GFX
Summary:
The discussion revolves around the belief in telekinesis and its potential connections to quantum theory. Participants express skepticism about telekinesis, labeling it as pseudo-scientific and lacking credible evidence. They argue that while some studies may seem convincing, they often rely on flawed methodologies, such as meta-analysis, which can introduce bias and fail to account for negative results. The conversation highlights the historical context of scientific exploration, comparing telekinesis to alchemy, suggesting that even seemingly futile pursuits may lead to valuable discoveries. However, the consensus leans towards the view that telekinesis, as popularly conceived, is unlikely to be real, with no substantial evidence supporting its existence. Participants also discuss the psychological aspects of belief in such phenomena, including the influence of hallucinations and the role of media in shaping perceptions. Overall, the thread emphasizes a critical approach to claims of telekinesis and similar mind sciences, advocating for rigorous scientific standards and skepticism.
  • #31
RE: "I'm prepared to believe that these studies were actually carried out,"

I'm not. I think these studies were a giant fraud from the very get-go. We are presented with no description of the method, apparatus, results, goals... nothing. And terms are introduced with no clear definition (resonance?). It almost sounds like the type of writing that a student pulls out of his ass the night before a deadline.
 
  • #32
JohnDubYa said:
We are presented with no description of the method, apparatus, results, goals... nothing. And terms are introduced with no clear definition (resonance?)

You don't usually see the kind of detail you describe in the average magazine article. That's why I'd like to see the full article, if such a thing exists. I'll do a search if I get a chance later on.
 
  • #33
I have dug out an abstract that looks interesting, and some blurb on a book chapter. Not a lot to go on, but if anyone can get hold of the full articles, they might be worth a read.

Correlations of Continuous Random Data with Major World Events

Foundations of Physics Letters, December 2002, vol. 15, no. 6, pp. 537-550.

Nelson R.D.[1]; Radin D.I.[2]; Shoup R.[3]; Bancel P.A.[4]

[1] Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, New Jersey 08544. rdnelson@princeton.edu [2] Institute of Noetic Sciences, Petaluma, California 94952 [3] Boundary Institute, Los Altos, California 94024 [4] 108, rue St Maur, Paris, France F-75011

Abstract:
The interaction of consciousness and physical systems is most often discussed in theoretical terms, usually with reference to the epistemological and ontological challenges of quantum theory. Less well known is a growing literature reporting experiments that examine the mind-matter relationship empirically. Here we describe data from a global network of physical random number generators that shows unexpected structure apparently associated with major world events. Arbitrary samples from the continuous, four-year data archive meet rigorous criteria for randomness, but pre-specified samples corresponding to events of broad regional or global importance show significant departures of distribution parameters from expectation. These deviations also correlate with a quantitative index of daily news intensity. Focused analyses of data recorded on September 11, 2001, show departures from random expectation in several statistics. Contextual analyses indicate that these cannot be attributed to identifiable physical interactions and may be attributable to some unidentified interaction associated with human consciousness.

Keywords: physical random systems; correlations in random data; quantum randomness; consciousness; global events; statistical anomalies

Document Type: Research article. ISSN: 0894-9875
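For anyone curious how the abstract's claim of "significant departures of distribution parameters" would typically be tested, here is a minimal sketch in Python. It is my reconstruction, not the authors' code: the device count, window length, and the squared-Stouffer "network variance" statistic are assumptions based on the abstract's description; 200-bit trials are a common convention in this literature, but treat every parameter as illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_DEVICES = 60        # RNGs in the network (illustrative)
N_SECONDS = 3600      # a one-hour pre-specified "event window"
BITS = 200            # each trial sums 200 random bits: mean 100, sd ~7.07

# Simulate the archive: one trial sum per device per second.
trials = rng.binomial(BITS, 0.5, size=(N_SECONDS, N_DEVICES))

# Standardize each trial against the theoretical binomial expectation.
z = (trials - BITS * 0.5) / np.sqrt(BITS * 0.25)

# One Stouffer z per second across the network; under the null hypothesis
# the sum of its squares over the window is chi-square with N_SECONDS df.
stouffer = z.sum(axis=1) / np.sqrt(N_DEVICES)
chisq = float(np.sum(stouffer**2))
p = stats.chi2.sf(chisq, df=N_SECONDS)

print(f"chi-square = {chisq:.1f} on {N_SECONDS} df, p = {p:.3f}")
```

Run on simulated noise, as here, the p value is uniform on [0, 1]; the paper's claim is that pre-specified event windows produce small p values far more often than chance allows.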



Nelson & Radin (2001). Statistically robust anomalous effects: Replication in random event generator experiments. In Rao, Koneru Ramakrishna (Ed). (2001). Basic research in parapsychology (2nd ed.). (pp. 89-93). Jefferson, NC, US: McFarland & Co, Inc., Publishers.
ISBN: 0786410086

(from the chapter) Discusses studies and meta-analyses of random event generators (REGs). It is argued that this database of more than 800 independent studies contains both weak and strong replications, and that meta-analytic procedures allow for the combination of results from which it is possible to draw conclusions regarding this class of experiments. A bibliographic search located 152 reports, beginning in 1959, of experiments meeting the circumscription constraints; 235 control and 597 experimental studies were examined. Each study was represented by a z score reflecting the deviation of results in the direction of intention, and an effect size was computed. A weight was assigned to each study based on 16 quality criteria. It is concluded that the REG database contains unequivocal evidence for a replicable statistical effect in a variety of specific protocols, all designed to assess an anomalous correlation of distribution parameters with the intentions of human observers. It is argued that the effect is robust: it is not significantly diluted by adjustments for experiment quality and for inhomogeneity, nor is it eliminated by incorporating an estimated file drawer of unreported nonsignificant studies.
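For readers who want to see the machinery, here is a minimal sketch of the two steps the abstract names: combining per-study z scores (via a weighted Stouffer method, a standard choice; the chapter does not spell out its exact scheme) and Rosenthal's fail-safe N for the file-drawer estimate. All numbers are invented for illustration.

```python
import numpy as np

# Hypothetical per-study z scores and quality weights (the chapter derives
# weights from 16 quality criteria; these values are made up).
z = np.array([1.2, 0.4, 2.1, -0.3, 1.7, 0.9])
w = np.array([0.8, 0.5, 1.0, 0.6, 0.9, 0.7])

# Weighted Stouffer combination: each z is N(0,1) under the null, so the
# weighted sum divided by the root of the summed squared weights is again
# a standard normal deviate.
z_combined = np.sum(w * z) / np.sqrt(np.sum(w**2))

# Rosenthal's fail-safe N: how many unreported null studies (z = 0) would
# have to sit in the file drawer to pull the unweighted combined z below
# the one-tailed 0.05 criterion of 1.645?
k = len(z)
n_failsafe = (z.sum() / 1.645) ** 2 - k

print(f"combined z = {z_combined:.2f}")
print(f"fail-safe N ~ {max(0.0, n_failsafe):.0f} null studies")
```

Whether a fail-safe N is large enough to be reassuring is exactly the kind of judgment call the skeptics in this thread are worried about.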
 
  • #34
RE: "Discusses studies and meta-analyses of random event generators (REGs)."

Meta-analyses? In other words, voodoo.

Here is the actual paper:

http://www.boundaryinstitute.org/articles/FoPL_nelson-pp.pdf

Those teaching English can use this paper to demonstrate how the passive writing style produces muddy prose (which is perfect in situations where you want to obfuscate methods and results).

Figure 2 exemplifies my problems with such research papers. This figure shows a trend that apparently begins on September 11. Since the figure does not show data gathered before September 5 or after September 15, there is no way to know whether the trend on September 11 is unusual. Frankly, I see nothing profound in the figure whatsoever.
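To see why a ten-day window proves nothing, plot pure noise the same way the paper does. A minimal sketch (the trial size and window length here are my assumptions, not taken from the figure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten days of fair 200-bit trials, one per second: random by construction.
seconds = 10 * 24 * 3600
trials = rng.binomial(200, 0.5, size=seconds).astype(float)

# Cumulative deviation from expectation, the usual way such figures are
# drawn. Accumulated noise is a random walk, and random walks wander:
# long "trends" appear with no cause at all.
cumdev = np.cumsum(trials - 100.0)

# Largest 12-hour excursion anywhere in the pure-noise record.
window = 12 * 3600
drift = cumdev[window:] - cumdev[:-window]
print(f"largest 12-hour trend in pure noise: {np.abs(drift).max():.0f}")
```

Run it a few times and you will see impressive-looking trends in data that contains no signal whatsoever. Without a long baseline, or a window chosen before looking at the data, there is no way to call the September 11 excursion unusual.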

I could not find any mention of how the researchers bent over backwards to prevent bias, such as using double-blind methods.

I think the operative word here is "glean." The researchers obtained data and, knowing what they wanted to find, were able to glean statistical results that confirmed their beliefs. If they had handed the data to a disinterested observer with no clue as to the dates, I doubt he would have reached the same conclusion. We will never know, because they (apparently) never bothered to follow careful protocol.
 
  • #35
I checked into the people doing the studies and, as I figured, it's pretty much unsubstantiated: the data are skewed, etc.

Here's an excerpt from Skeptic Report - An evening with Dean Radin

O.J.: A global event?


Radin gave several examples of how GCP had detected "global consciousness". One was the day O.J. Simpson was acquitted of double-murder. We were shown a graph where - no doubt about that - the data formed a nice ascending curve in the minutes after the pre-show started, with cameras basically waiting for the verdict to be read. And yes, there was a nice, ascending curve in the minutes after the verdict was read.

However, about half an hour before the verdict, there was a similar curve ascending for no apparent reason. Radin's quick explanation before moving on to the next slide?

"I don't know what happened there."

It was not to be the last time we heard that answer.

September 11th: A study in wishful thinking.


It was obvious that the terror attacks of that day should make a pretty good case for Global Consciousness (GC). On the surface, it did. There seemed to be a very pronounced effect on that day and in the time right after.

There were, however, several problems. The most obvious was that the changes began at 6:40am ET, when the attacks hadn't started yet. It can of course be argued when the attacks "started", but if the theory is based on a lot of people "focusing" on the same thing, the theory falls flat - at 6:40am, only the attackers knew about the upcoming event. Not even the CIA knew. Hardly enough to justify a "global" consciousness.


"Another serious problem with the September 11 result was that during the days before the attacks, there were several instances of the eggs picking up data that showed the same fluctuation as on September 11th. When I asked Radin what had happened on those days, the answer was:

"I don't know."

I then asked him - and I'll admit that I was a bit flabbergasted - why on Earth he hadn't gone back to see if similar "global events" had happened there since he got the same fluctuations. He answered that it would be "shoe-horning" - fitting the data to the result.

Checking your hypothesis against seemingly contradictory data is "shoe-horning"?

For once, I was speechless.

http://www.skepticreport.com/psychics/radin2002.htm

And this, from Skepdic.com about "PEAR". Seems the results are seriously skewed.

After all, fraud, unconscious cheating, errors in calculation, software errors, and self-deception could all be considered "influence of human operators." So could the fact that "operator 10," believed to be a PEAR staff member, "has been involved in 15% of the 14 million trials, yet contributed to a full half of the total excess hits" (McCrone 1994). According to Dean Radin, the criticism that there "was any one person responsible for the overall results of the experiment...was tested and found to be groundless" (Radin 1997, 221). His source for this claim is a 1991 article by Jahn et al. in the Journal of Scientific Exploration, "Count population profiles in engineering anomalies experiments" (5:205-32). However, Jahn gives the data for his experiments in Margins of Reality: The Role of Consciousness in the Physical World (Harcourt Brace, 1988, pp. 352-353). John McCrone has done the calculations and found that "If [operator 10's] figures are taken out of the data pool, scoring in the 'low intention' condition falls to chance, while 'high intention' scoring drops close to the .05 boundary considered weakly significant in scientific results."

http://www.skepdic.com/pear.html
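The arithmetic behind McCrone's point is easy to reproduce. A minimal sketch with invented tallies (the real PEAR counts are in the sources above); the only inputs carried over from the quote are the 15% share of trials and the 50% share of excess hits:

```python
import numpy as np

BITS = 200                                  # bits per REG trial, p = 0.5
total_trials = 14_000_000
op10_trials = int(0.15 * total_trials)      # "involved in 15% of the trials"
excess_total = 100_000                      # hypothetical total excess hits
excess_op10 = excess_total // 2             # "half of the total excess hits"

def pooled_z(excess_hits: int, n_trials: int) -> float:
    """z score of the excess over binomial chance expectation."""
    n_bits = n_trials * BITS
    sd = np.sqrt(n_bits * 0.25)             # binomial sd for p = 0.5
    return excess_hits / sd

print(f"z with operator 10:    {pooled_z(excess_total, total_trials):.2f}")
print(f"z without operator 10: "
      f"{pooled_z(excess_total - excess_op10, total_trials - op10_trials):.2f}")
```

With these numbers the pooled significance drops from decisive to marginal once one prolific operator is excluded, which is exactly the fragility McCrone describes.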
 
  • #36
RE: "I don't know what happened there."

Are you sure he said that? I was thinking he would say something like "In the absence of definite knowledge, an answer to your query will not be forthcoming."
 
  • #37
Phew - you guys made quick work of Radin! I will read the paper when I get a chance, but it sounds like I'll be able to fly a UFO through the holes.

However, although it is tempting to dismiss the other work on the grounds that operator 10 may well have done a bit of 'automatic writing' when it came to recording data...

"operator 10," believed to be a PEAR staff member, "has been involved in 15% of the 14 million trials, yet contributed to a full half of the total excess hits" (McCrone 1994).

... let's not dismiss the possibility that operator 10 might have unusual abilities. The fact that we have evidence suggesting that the results are not what they seem doesn't prove that something paranormal was not at work. It just means that in this case operator 10 needs to stick to the role of participant, and not investigator or other staff member. We don't want to totally dismiss conclusions on the grounds of circumstantial evidence (e.g. that we can't rule out high jinks) any more than we want to totally accept things on the grounds of circumstantial evidence (e.g. strong correlations or hearsay). Poorly designed work and wishful thinking do not negate the possibility that there may be something there worth investigating properly.
 
  • #38
RE: "Phew - you guys made quick work of Radin! I will read the paper when I get a chance, but it sounds like I'll be able to fly a UFO through the holes."

If you find reading the paper tough-sledding, it won't be because they are smart. They simply can't write fer ****.
 
  • #39
I was wondering if we could backtrack a bit to meta-analysis. I’m curious about the statement made by JohnDubYa on page one:

“The individual studies do not have the same designs. Sure, the experimenters reject those studies that have sufficiently dissimilar designs (as if you can really define "similar"). But all this does is give them one subjective means of throwing out experiments that they know will not help their cause.”

I’d like to ask whether there is a standard technique for judging which experiments are allowed into a meta-analysis and which aren't: some kind of method that everybody can use to lessen the subjectivity of which data are included. Does such a thing exist?
 
