
Precognition paper to be published in mainstream journal

Evo
#19
Nov17-10, 02:51 PM
Mentor
P: 26,661
I loved the aspirin bit, that was so off the wall and of no significance to this, I'm still scratching my head on that one.
FlexGunship
#20
Nov17-10, 03:14 PM
PF Gold
P: 739
Quote Quote by FlexGunship View Post
I believe the goal was to illustrate that "although 53% might sound very close to 50%... aspirin is recommended because instead of helping 50% of people, it helps 53% of people." Therefore, we are to conclude that 53% is, indeed, a statistically significant number.
Edit by Evo: AAAARRGH, flex, I accidentally edited out your post. I need to stop answering the phone when I'm responding.
Evo
#21
Nov17-10, 03:42 PM
Mentor
P: 26,661
Still meaningless when the discussion is about guessing something. If I only performed my job correctly 53% of the time, I'd be fired. If a doctor killed 47% of his patients it would be unacceptable. Know what I mean?
FlexGunship
#22
Nov17-10, 04:10 PM
PF Gold
P: 739
Quote Quote by FlexGunship View Post
Edit by Evo: AAAARRGH, flex, I accidentally edited out your post. I need to stop answering the phone when I'm responding.
Wait... where IS my response? My well-reasoned, carefully thought out post seems to have gone decidedly AWOL.

You mean... you... edited it.. out.


Everyone keeps deleting my posts...
Evo
#23
Nov17-10, 05:02 PM
Mentor
P: 26,661
Quote Quote by FlexGunship View Post
Wait... where IS my response? My well-reasoned, carefully thought out post seems to have gone decidedly AWOL.

You mean... you... edited it.. out.


Everyone keeps deleting my posts...
I didn't just delete it, I sent it into oblivion.

And it was a truly great post.
FlexGunship
#24
Nov17-10, 06:22 PM
PF Gold
P: 739
Quote Quote by Evo View Post
I didn't just delete it, I sent it into oblivion.
That's nothing. I once got an entire thread deleted.



(Edited for increased cleverness)
FlexGunship
#25
Nov18-10, 08:26 AM
PF Gold
P: 739
Quote Quote by jarednjames
Spooky my a**. All they've said there is that a student has shown they remembered a word and then, when asked to type some words later, that was one of the ones they typed. Would you believe it.
Heh, good point. "Researchers have demonstrated that they can get students to type words that they have previously remembered when asked."

Quote Quote by jarednjames
Subliminal advertising comes to mind. Nothing new here.
Well, they're saying it works in reverse. Seeing the word "ugly" before seeing a kitten delays your response on the quality of the kitten. They are claiming that you first delay your response on the quality of a kitten, and the word "ugly" is shown afterwards.

Traditional:
Show word -> show cat -> delay -> judgement
Show cat -> judgement

Bem's version:
Show cat -> delay -> judgement -> show word
Show cat -> judgement

They are talking about moving the word but not moving the delay.

Again, I would like to see the testee's responses fed into a supercomputer and, if they show a delay, have the supercomputer NOT display the word "ugly" afterwards. Then what do they attribute the delay to? Or does the universe simply fall apart?

Quote Quote by Evo: Destroyer of Posts!
I loved the aspirin bit, that was so off the wall and of no significance to this, I'm still scratching my head on that one.
I believe the goal was to illustrate that "although 53% might sound very close to 50%... aspirin is recommended because instead of helping 50% of people, it helps 53% of people." Therefore, we are to conclude that 53% is, indeed, a statistically significant number.

I've found out the "Catch 22" here. Since scientific studies seek to establish causal relationships (i.e. this causes that), Bem will claim that such a metric is invalid since the very thing they are demonstrating is non-causal.

EDIT: Do you believe in miracles, Evo?
JaredJames
#26
Nov18-10, 12:06 PM
P: 3,387
A test for precognition should be simple, shouldn't it?

I propose the following:

The test subject must accurately* predict a future event. The event must be something that is otherwise considered unpredictable (or of such low odds that no other method could accurately determine its occurrence).

*Accuracy is defined here in relation to the complexity of the prediction. See following examples.

Example 1
Task - A person predicts the outcome of a number of rolls of a fair die.
Accuracy Required - Due to the nature of the task, the person must predict the exact result.
Additional Requirements - The die must be rolled a number of times to ensure the probability of simply guessing the outcome correctly each time is as low as possible. Recommendation is 20 rolls as a start.

Example 2
Task - A person predicts a seemingly random event, in this case we'll use a car crash.
Accuracy Required - The event must be described in enough detail so that a random person could match the description to the crash should it occur, without any details being left vague or open to interpretation. "A car will crash on the M4 tomorrow" is not a valid prediction. "A blue Ford will crash into a red Hyundai near junction 10 on the M4 tomorrow" is acceptable, but more detail would be preferred.
Additional Requirements - As above, the event must clearly match the description given in order to be considered an accurate prediction of said event.

As you can see, all you need to do is describe a future event in enough detail for us to clearly identify it when it occurs. Simple.
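For scale, here's a quick sketch (in Python, purely for illustration) of the odds of passing Example 1 by guesswork alone, using the 20 rolls suggested above:

```python
from fractions import Fraction

def chance_of_guessing(rolls, sides=6):
    """Probability of correctly guessing every roll of a fair die by pure chance."""
    return Fraction(1, sides) ** rolls

# With the recommended 20 rolls, a lucky-guess streak is essentially impossible.
print(float(chance_of_guessing(20)))  # about 2.7e-16
```

Anyone who passes that test has demonstrated something far beyond a statistical fluke, which is exactly the point of requiring many rolls.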
FlexGunship
#27
Nov18-10, 12:17 PM
PF Gold
P: 739
Quote Quote by jarednjames View Post
A test for precognition should be simple, shouldn't it?

I propose the following: [...] Simple.
<Devil's Advocate>
I think the idea is that this is an unconscious response. And that it is uncontrollable by the individual. Specifically, they are saying that psychological tests are functional even if causality is reversed.

Examples of standard tests:
  1. Show a scary picture -> heart rate increases
  2. Show a boring picture -> heart rate steady

Examples of precognition tests:
  1. Heart rate increases -> show a scary picture
  2. Heart rate steady -> show a boring picture

The important fact is that whether a scary picture or a boring picture is being shown is predetermined and NOT based on the heart rate. It's quite a claim!
</Devil's Advocate>
collinsmark
#28
Nov18-10, 10:58 PM
HW Helper
PF Gold
P: 1,971
Quote Quote by jarednjames View Post
In one experiment, students were shown a list of words and then asked to recall words from it, after which they were told to type words that were randomly selected from the same list. Spookily, the students were better at recalling words that they would later type.
Spooky my a**. All they've said there is that a student has shown they remembered a word and then, when asked to type some words later, that was one of the ones they typed. Would you believe it.
I think you might be misinterpreting the experiment. The article is ambiguous and not well written on this point, but here is how the experiment was apparently done (I'll try to summarize it):

The entire process for each participant was done in private on a computer. There were a total of 100 participants.
  1. A list of 48 common words is given to the participant to remember. The word list and word order are identical for all test subjects. I'll call this word list the "super-set."
  2. The test subject is then asked to recall as many words as they can from the super-set. I'll call this list of a test subject's recalled words the "recalled-set."
  3. The computer randomly generates a subset of 24 words from the super-set. This list of words is called the "practice-word-set" (the draft version of the paper calls them the 24 "practice words"). Participants then had to perform some exercises on each word, such as clicking on each word with the mouse, categorizing each word (all words from the super-set are either foods, animals, occupations, or clothes), and typing each practice word.
  4. I'll call the remaining 24 words from the super-set that are not in the practice-word-set the "control-word-set" (the paper calls them "control words").
  5. A measure is calculated called a "weighted differential recall (DR) score," ranging from -100% to 100%, which correlates the recalled-set to the practice-word-set and control-word-set. A positive DR% means the words from the recalled-set had a higher percentage of "practice words" than "control words." A negative DR% means the words from the recalled-set had a higher percentage of "control words" than "practice words." A 0 DR% means that the participant chose an equal number of words from both sets.
    The DR score was calculated as follows,
    P: number of words in both the recalled-set and practice-word set.
    C: number of words in both the recalled-set and control-word set.
    DR% = 100% x [(P - C)(P + C)]/576

    {Edit: Here's an example: 10 practice words recalled, 8 control words recalled. DR% = 100% x [(10-8)(10+8)]/576 = 6.25%}
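To double-check the arithmetic, the DR% formula can be written as a short function (Python here just for illustration; the simulation code later in the thread is C#):

```python
def dr_percent(practice_recalled, control_recalled):
    """Weighted differential recall score, per the formula above:
    DR% = 100% x [(P - C)(P + C)] / 576, where 576 = 24 squared."""
    p, c = practice_recalled, control_recalled
    return 100.0 * (p - c) * (p + c) / 576.0

# The worked example from the edit: 10 practice words, 8 control words recalled.
print(dr_percent(10, 8))  # 6.25
```

Note the weighting: the score scales with the total number of words recalled, so recalling 12 practice vs 10 control words scores higher than 2 vs 0, even though both differ by two words.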
There was also a 25-person control group. In this group, the procedure was the same except that the participants did not do any practice exercises and were not shown the randomly generated practice-word-set. However, a practice-word-set was still generated and used to calculate a DR% score for comparison.

Results:
Mean DR% score:
Main group: 2.27%
Control group: 0.26%

A variation of the experiment was performed with a slight change in how the super-set of words was originally given to the participants. In this version of the experiment the sample size was much smaller: only 50 participants. There was also a 25-participant control session.

Mean DR% score:
Main group: 4.21%
Control group: Not given in the paper, but only mentioned as, "DR% scores from the control sessions did not differ significantly from zero."

For details, here's a link to where I gathered this:
http://dbem.ws/FeelingFuture.pdf

I'd like to see the experiment reproduced with a larger sample size. And why does the paper not give the control group's mean DR% in the second experiment?!? Perhaps because none of the DR% scores in the whole experiment differ significantly from 0? For now, I'm not impressed.
Ivan Seeking
#29
Nov18-10, 11:17 PM
Emeritus
Sci Advisor
PF Gold
P: 12,501
Quote Quote by Evo View Post
Flex and Jared, you guys are discussing the wrong paper. You're discussing the crackpot Radin paper that Ivan posted. He was thinking of an older unrelated paper.
What are you talking about? This is what I linked.

2010 American Psychological Association
http://www.apa.org/pubs/journals/psp/index.aspx 0022-3514/10/$12.00 DOI: 10.1037/a0021524
This article may not exactly replicate the final version published in the APA journal. It is not the copy of record.
Feeling the Future: Experimental Evidence for
Anomalous Retroactive Influences on Cognition and Affect
Daryl J. Bem
Cornell University
Ivan Seeking
#30
Nov18-10, 11:23 PM
Emeritus
Sci Advisor
PF Gold
P: 12,501
Actually, I didn't even link it, I just quoted from the paper linked in the op.
JaredJames
#31
Nov19-10, 07:03 AM
P: 3,387
Quote Quote by collinsmark View Post
I think you might be misinterpreting the experiment
No misinterpretation about it, that is what the article said.


53% means you are only 3 percentage points over the expected 50/50 odds of guesswork. Without a much larger test group, that 3% doesn't mean anything. It could simply be a statistical anomaly.

Have any of you seen the Derren Brown episode where he flips a coin ten times in a row and it comes up heads each time?

The test group is too small and this 3% doesn't show anything. If I sat in a room and flipped a coin 100 times, calling heads each time, heads and tails are equally likely on each flip, so although you'd expect a roughly even spread of heads vs tails, there is a chance I'd get more heads than tails and so appear to be correct more than 50% of the time. But there's nothing precognitive about that.
Also, as per the Derren Brown experiment, I could flip a coin ten times, call heads each time, and have every toss come up heads. Again, nothing precognitive there, despite what it looks like.

As a note, DB spent 8 hours standing in front of a camera flipping the coin until it came up heads ten times in a row (they showed this at the end). In the show he made out that such a run was extremely likely, to help with what he was trying to get the audience to do, but the real purpose of showing the 8 hours of attempts at the end was to demonstrate that ten heads in a row, while very unlikely, is not impossible.
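The numbers behind those two examples are easy to check (a quick sketch in Python; these are standard binomial results, not figures from the show):

```python
from math import comb

# Chance of ten heads in ten flips of a fair coin: one in 2^10 = 1024.
p_ten_heads = 0.5 ** 10

# Chance of getting strictly more heads than tails in 100 fair flips,
# i.e. of "calling it right" more than 50% of the time by luck alone.
p_more_heads = sum(comb(100, k) for k in range(51, 101)) * 0.5 ** 100

print(p_ten_heads)             # 0.0009765625
print(round(p_more_heads, 2))  # about 0.46
```

So nearly half the time a pure guesser beats 50% in 100 flips, which is why a small edge over chance in a small sample proves nothing on its own.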
collinsmark
#32
Nov19-10, 07:24 AM
HW Helper
PF Gold
P: 1,971
Considering the experiment involving the word memorization followed by the "practice" typing of a random subset of words,

Now I am kinda' impressed. (But not jumping out of my seat or anything).

I just created a C# program to simulate Daryl J. Bem's experiment in order to analyze the statistics. Basically, the program simulates the experiment, except without any human interaction so we can rule out any human influences. This way one can compare the paper's reported DR% against simulated DR% values.

When simulating 100 participants in a given experiment, and repeating the experiment 5000 times, the mean DR% was very close to 0 as expected, but the standard deviation of the mean DR% was only 1.097%. The paper's reported DR% (for the first trial of 100 participants) was 2.27%. That's over two standard deviations better than expected. That could be significant.

For the second trial with 50 participants, repeating the experiment 5000 times, the simulated mean was (of course) almost 0, and the standard deviation of the mean DR% was 1.54%. The actual experiment apparently had a DR% of 4.21%. That's about 2.7 standard deviations away from what is expected.

So, the numbers in this experiment might be somewhat statistically significant. But I still would be curious to see how it turns out with a larger sample set.

I've attached the code below. Please forgive my poor coding, I wasn't putting a whole lot of time into this.

//Written by Collins Mark.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Precognition_tester
{
    class Program
    {
        static void Main(string[] args)
        {   
            int NumLoops = 5000;  // <== number of experiments
            int SampleSize = 100;  // <== number of participants in each experiment.

            double memoryMean = 18.4; // <== average number of words recalled.
            double memoryStDev = 5;   // <== standard deviation of number of words 
                                      //     recalled (I had to guess at this one)
            int ItemsPerCat = 12;
            int i;
            Random uniRand = new Random();

            // Load the category lists.
            List<string> foodList = new List<string>();
            foodList.Add("HotDogs");
            foodList.Add("Hamburgers");
            foodList.Add("Waffles");
            foodList.Add("IceCream");
            foodList.Add("Coffee");
            foodList.Add("Pizza");
            foodList.Add("Guinness");
            foodList.Add("SausageEggAndCheeseBiscuit");
            foodList.Add("Toast");
            foodList.Add("Salad");
            foodList.Add("Taco");
            foodList.Add("Steak");

            List<string> animalList = new List<string>();
            animalList.Add("Cat");
            animalList.Add("Dog");
            animalList.Add("Snake");
            animalList.Add("Whale");
            animalList.Add("Bee");
            animalList.Add("Spider");
            animalList.Add("Elephant");
            animalList.Add("Mongoose");
            animalList.Add("Wombat");
            animalList.Add("Bonobo");
            animalList.Add("Hamster");
            animalList.Add("Human");

            List<string> occupationsList = new List<string>();
            occupationsList.Add("Engineer");
            occupationsList.Add("Plumber");
            occupationsList.Add("TalkShowHost");
            occupationsList.Add("Doctor");
            occupationsList.Add("Janitor");
            occupationsList.Add("Prostitute");
            occupationsList.Add("Cook");
            occupationsList.Add("Thief");
            occupationsList.Add("Pilot");
            occupationsList.Add("Maid");
            occupationsList.Add("Nanny");
            occupationsList.Add("Bartender");

            List<string> clothesList = new List<string>();
            clothesList.Add("Shirt");
            clothesList.Add("Shoes");
            clothesList.Add("Jacket");
            clothesList.Add("Undershorts");
            clothesList.Add("Socks");
            clothesList.Add("Jeans");
            clothesList.Add("Wristwatch");
            clothesList.Add("Cap");
            clothesList.Add("Sunglasses");
            clothesList.Add("Overalls");
            clothesList.Add("LegWarmers");
            clothesList.Add("Bra");

            // Add elements to superset without clustering
            List<string> superset = new List<string>();
            for (i = 0; i < ItemsPerCat; i++)
            {
                superset.Add(foodList[i]);
                superset.Add(animalList[i]);
                superset.Add(occupationsList[i]);
                superset.Add(clothesList[i]);
            }

            mainLoop(
                NumLoops, 
                SampleSize, 
                ItemsPerCat, 
                memoryMean, 
                memoryStDev, 
                superset, 
                foodList, 
                animalList, 
                occupationsList, 
                clothesList, 
                uniRand);
        }

        // This is the big, main loop.
        static void mainLoop(
            int NumLoops, 
            int SampleSize, 
            int ItemsPerCat, 
            double memoryMean, 
            double memoryStDev, 
            List<string> superset,
            List<string> foodList,
            List<string> animalList,
            List<string> occupationsList,
            List<string> clothesList,
            Random uniRand)
        {
            // Report something to the screen,
            Console.WriteLine("Simulating {0} experiments of {1} participants each", NumLoops, SampleSize);
            Console.WriteLine("...Calculating...");

            // Create list of meanDR of separate experiments.
            List<double> meanDRlist = new List<double>();

            // Loop through main big loop
            for (int mainCntr = 0; mainCntr < NumLoops; mainCntr++)
            {
                // create Array of participant's DR's for a given experiment.
                List<double> DRarray = new List<double>();

                //Loop through each participant in one experiment.
                for (int participant = 0; participant < SampleSize; participant++)
                {
                    // Reset parameters.
                    int P = 0; // number of practice words recalled.
                    int C = 0; // number of control words recalled.
                    double DR = 0; // weighted differential recall (DR) score.

                    // Create recalled set.
                    List<string> recalledSet = new List<string>();
                    createRecalledSet(
                        recalledSet,
                        superset,
                        memoryMean,
                        memoryStDev,
                        uniRand);

                    // Create random practice set.
                    List<string> practiceSet = new List<string>();
                    createPracticeSet(
                        practiceSet,
                        foodList,
                        animalList,
                        occupationsList,
                        clothesList,
                        ItemsPerCat,
                        uniRand);

                    // Compare recalled count to practice set.
                    foreach (string strTemp in recalledSet)
                    {
                        if (practiceSet.Contains(strTemp))
                            P++;
                        else
                            C++;
                    }

                    // Compute weighted differential recall (DR) score
                    DR = 100.0 * (P - C) * (P + C) / 576.0;

                    // Record DR in list.
                    DRarray.Add(DR);

                    // Report output.
                    //Console.WriteLine("DR%:  {0}", DR);
                }
                // record mean DR.
                double meanDR = DRarray.Average();
                meanDRlist.Add(meanDR);

                // Report Average DR.
                //Console.WriteLine("Experiment {0}, Sample size: {1},  mean DR:  {2}", mainCntr, SampleSize, meanDR);

            }
            // Finished looping.

            // Calculate mean of meanDR
            double finalMean = meanDRlist.Average();

            // Calculate standard deviation of meanDR
            double finalStDev = 0;
            foreach (double dTemp in meanDRlist)
            {
                finalStDev += (dTemp - finalMean) * (dTemp - finalMean);
            }
            finalStDev = finalStDev / NumLoops;
            finalStDev = Math.Sqrt(finalStDev);
            
            // Report final results.

            Console.WriteLine(" ");
            Console.WriteLine("Participants per experiment: {0}", SampleSize);
            Console.WriteLine("Number of separate experiments: {0}", NumLoops);
            Console.WriteLine("mean of the mean DR% from all experiments: {0}",
                finalMean);
            Console.WriteLine("Standard deviation of the mean DR%: {0}", finalStDev);

            Console.ReadLine();
            
        }

        static double Gaussrand(double unirand1, double unirand2)
        {
            return (Math.Sqrt(-2 * Math.Log(unirand1)) * Math.Cos(2 * Math.PI * unirand2));
        }
        
        static void createRecalledSet(List<string> recalledSet, List<string> superSet, double mean, double stdev, Random unirand)
        {
            // Determine how many words were recalled. (random)
            double unirand1 = unirand.NextDouble();
            double unirand2 = unirand.NextDouble();
            while (unirand1 == 0.0) unirand1 = unirand.NextDouble();
            while (unirand2 == 0.0) unirand2 = unirand.NextDouble();

            double gaussrand = Gaussrand(unirand1, unirand2);
            gaussrand *= stdev;
            gaussrand += mean;
            int recalledCount = (int)gaussrand;
            if (recalledCount < 0) recalledCount = 0; // clamp: a low Gaussian draw can go negative
            if (recalledCount > superSet.Count) recalledCount = superSet.Count; 
            
            // Create temporary superset and copy elements over.
            List<string> tempSuperSet = new List<string>();
            foreach (string strTemp in superSet)
            {
                tempSuperSet.Add(strTemp);
            }

            // Randomize temporary superset.
            shuffleList(tempSuperSet, unirand);

            // Copy over first recalledCount items to recalledSet.
            for (int i = 0; i < recalledCount; i++)
            {
                recalledSet.Add(tempSuperSet[i]);
            }
        }

        static void createPracticeSet(
            List<string> practiceList, 
            List<string> foodList,
            List<string> animalList,
            List<string> occupationsList,
            List<string> clothesList,
            int itemsPerCat,
            Random uniRand)
        {
            List<string> tempFoodList = new List<string>();
            List<string> tempAnimalList = new List<string>();
            List<string> tempOccupationsList = new List<string>();
            List<string> tempClothesList = new List<string>();

            // load temporary lists.
            foreach (string strTemp in foodList)
                tempFoodList.Add(strTemp);
            foreach (string strTemp in animalList)
                tempAnimalList.Add(strTemp);
            foreach (string strTemp in occupationsList)
                tempOccupationsList.Add(strTemp);
            foreach (string strTemp in clothesList)
                tempClothesList.Add(strTemp);

            // Shuffle temporary lists
            shuffleList(tempFoodList, uniRand);
            shuffleList(tempAnimalList, uniRand);
            shuffleList(tempOccupationsList, uniRand);
            shuffleList(tempClothesList, uniRand);

            // Load practice list
            for (int i = 0; i < itemsPerCat / 2; i++)
            {
                practiceList.Add(tempFoodList[i]);
                practiceList.Add(tempAnimalList[i]);
                practiceList.Add(tempOccupationsList[i]);
                practiceList.Add(tempClothesList[i]);
            }

            // Shuffle practice list
            shuffleList(practiceList, uniRand);
        }

        // method to shuffle lists.
        static void shuffleList(List<string> list, Random unirand)
        {
            List<string> shuffledList = new List<string>();
            while (list.Count() > 0)
            {
                int indexTemp = unirand.Next(list.Count());
                shuffledList.Add(list[indexTemp]);
                list.RemoveAt(indexTemp);
            }
            foreach (string strTemp in shuffledList) list.Add(strTemp);
        }
    }
}
Evo
#33
Nov19-10, 11:34 AM
Mentor
P: 26,661
Quote Quote by Ivan Seeking View Post
What are you talking about? This is what I linked.

2010 American Psychological Association
http://www.apa.org/pubs/journals/psp/index.aspx 0022-3514/10/$12.00 DOI: 10.1037/a0021524
This article may not exactly replicate the final version published in the APA journal. It is not the copy of record.
Feeling the Future: Experimental Evidence for
Anomalous Retroactive Influences on Cognition and Affect
Daryl J. Bem
Cornell University
Quote Quote by Ivan Seeking View Post
Actually, I didn't even link it, I just quoted from the paper linked in the op.
This is what you posted http://www.physicsforums.com/showpos...04&postcount=3

Quote Quote by Ivan Seeking View Post
From the cited paper, this is what I saw quite some time ago [probably around 2002 or 2003]. I have mentioned it but was never able to find a valid reference for this work.
The trend is exemplified by several recent “presentiment” experiments, pioneered by Radin (1997), in which physiological indices of participants’ emotional arousal were monitored as participants viewed a series of pictures on a computer screen. Most of the pictures were emotionally neutral, but a highly arousing negative or erotic image was displayed on randomly selected trials. As expected, strong emotional arousal occurred when these images appeared on the screen, but the remarkable finding is that the increased arousal was observed to occur a few seconds before the picture appeared, before the computer has even selected the picture to be displayed. The presentiment effect has also been demonstrated in an fMRI experiment that monitored brain activity (Bierman & Scholte, 2002) and in experiments using bursts of noise rather than visual images as the arousing stimuli (Spottiswoode & May, 2003). A review of presentiment experiments prior to 2006 can be found in Radin (2006, pp. 161–180). Although there has not yet been a formal meta-analysis of presentiment studies, there have been 24 studies with human participants through 2009, of which 19 were in the predicted direction and about half were statistically significant. Two studies with animals are both positive, one marginally and the other substantially so (D. I. Radin, personal communication, December 20, 2009)...
FlexGunship
#34
Nov19-10, 12:14 PM
PF Gold
P: 739
<whisper>Umm... so was I talking about the wrong thing or not? </whisper>
Ivan Seeking
#35
Nov19-10, 01:44 PM
Emeritus
Sci Advisor
PF Gold
P: 12,501
Quote Quote by Evo View Post
I quoted the paper linked. I didn't link to an unpublished paper by Radin.
collinsmark
#36
Nov19-10, 06:27 PM
HW Helper
PF Gold
P: 1,971
Quote Quote by jarednjames View Post
No misinterpretation about it, that is what the article said.


53% means you are only 3 percentage points over the expected 50/50 odds of guesswork. Without a much larger test group, that 3% doesn't mean anything. It could simply be a statistical anomaly.

Have any of you seen the Derren Brown episode where he flips a coin ten times in a row and it comes up heads each time?

The test group is too small and this 3% doesn't show anything. If I sat in a room and flipped a coin 100 times, calling heads each time, heads and tails are equally likely on each flip, so although you'd expect a roughly even spread of heads vs tails, there is a chance I'd get more heads than tails and so appear to be correct more than 50% of the time. But there's nothing precognitive about that.
Also, as per the Derren Brown experiment, I could flip a coin ten times, call heads each time, and have every toss come up heads. Again, nothing precognitive there, despite what it looks like.
Yes, if you were to flip a fair coin ten times in a single experiment, the likelihood of it coming up all heads is 1/2^10, or about 1 chance in 1024. If that happened on the first experimental attempt, it would be a statistical fluke: not at all impossible, but very unlikely. And if an experimenter did not know whether the coin was fair, he might take that as evidence against the coin being fair, and as meriting further trials. But I'm not sure how the analogy applies to this set of experiments. Are you suspecting that the author of the study repeated the experiment perhaps hundreds of times, each with 50 or 100 people (many thousands or tens of thousands of people in total), and then cherry-picked the best results? If so, that would be unethical manipulation of the data (and very costly too). [Edit: And besides, there are easier ways to manipulate the data.]

And forgive me for my confusion, but I'm not certain where you are getting the 53% from. In my earlier reply, I was talking about the specific set of experiments described in the study as "Experiment 8: Retroactive Facilitation of Recall I" and "Experiment 9: Retroactive Facilitation of Recall II." These are the experiments where participants are asked to memorize a list of words and try to recall them. Then later, a computer-generated random subset of half the total words is given to the subjects to perform "practice exercises" on, such as typing each word. The study seems to show that the words recalled are correlated with the random subset of "practice" words that was generated after the fact. Those are the only experiments I was previously discussing in this thread. I haven't really looked at any of the other experiments in the study.

To demonstrate the statistical relevance further, I've modified my C# program a little to add some more information; I've attached it below. It now shows how many of the simulated experiments produce a DR% that is greater than or equal to the DR% reported in the study. My results show a 1 in 56 chance and a 1 in 300 chance of achieving a DR% greater than or equal to the mean DR% reported in the study, for the first and second experiment respectively (the paper calls them experiment 8 and experiment 9). The program simulated 10000 experiments in both cases -- the first with 100 participants per experiment, the second with 50, as per the paper.

Here are the possible interpretations, as I see them:
(I) The author of the paper might really be on to something. This study may be worth further investigation and attempted reproduction.

(II) The data obtained in the experiments were a statistical fluke. For the record, though, if the experiment were repeated many times, the statistics show that the chances of achieving a mean DR% at or above what is given in the paper, merely by chance with equal odds, are roughly 1 out of 56 for the first experiment (100 participants, mean DR% of 2.27%) and roughly 1 out of 333 for the second experiment (50 participants, mean DR% of 4.21%).

(III) The experiments were somehow biased in ways not evident from the paper, or the data were manipulated or corrupted somehow.

In my own personal, biased opinion [edit: being the skeptic that I am], I suspect that either (II) or (III) is what really happened. But all I am saying in this post is that the statistics quoted in the paper are actually relevant. Granted, a larger sample size would have been better, but even with the sample size given in the paper, the results are statistically significant. If we're going to poke holes in the study, we won't get very far by poking holes in its statistics.
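
To connect the simulator's output to the "1 out of 56" and "1 out of 333" figures: the program counts how many simulated experiments meet or exceed the study's mean DR%, and the empirical odds are just the total number of simulations divided by that count. The counts in this sketch are hypothetical, chosen only to show the arithmetic:

```csharp
using System;

class OddsFromCounts
{
    static void Main()
    {
        int numLoops = 10000;       // simulated experiments
        int numAboveThresh = 180;   // hypothetical count at/above the study's mean DR%
        // Empirical odds of matching the study's result by chance: 1 in (N / k)
        double oneInN = (double)numLoops / numAboveThresh;
        Console.WriteLine("Roughly 1 in {0:F0}", oneInN); // prints: Roughly 1 in 56
    }
}
```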

Below is the revised C# code. It was written as a console program in Microsoft Visual C# 2008, if you'd like to try it out. You can modify the parameters near the top and recompile to test different experimental parameters and numbers of simulated experiments.
(Again, pardon my inefficient coding. I wasn't putting a lot of effort into this.)
//Written by Collins Mark.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Precognition_tester
{
    class Program
    {
        static void Main(string[] args)
        {
            int NumLoops = 10000;  // <== number of experiments
            int SampleSize = 50;  // <== number of participants in each experiment.

            // This represents the paper's mean DR% threshold. Used for
            // comparison of simulated mean DR% values. Should be 2.27
            // for SampleSize of 100, and 4.21% for SampleSize of 50,
            // to compare directly with paper's results.
            double DRcomparisonThreshold = 4.21;

            double memoryMean = 18.4; // <== average number of words recalled.
            double memoryStDev = 5;   // <== standard deviation of number of words 
                                      //     recalled (I had to guess at this one)

            int ItemsPerCat = 12;
            int i;
            Random uniRand = new Random();

            // Load the category lists.
            List<string> foodList = new List<string>();
            foodList.Add("HotDogs");
            foodList.Add("Hamburgers");
            foodList.Add("Waffles");
            foodList.Add("IceCream");
            foodList.Add("Coffee");
            foodList.Add("Pizza");
            foodList.Add("Guinness");
            foodList.Add("SausageEggAndCheeseBiscuit");
            foodList.Add("Toast");
            foodList.Add("Salad");
            foodList.Add("Taco");
            foodList.Add("Steak");

            List<string> animalList = new List<string>();
            animalList.Add("Cat");
            animalList.Add("Dog");
            animalList.Add("Snake");
            animalList.Add("Whale");
            animalList.Add("Bee");
            animalList.Add("Spider");
            animalList.Add("Elephant");
            animalList.Add("Mongoose");
            animalList.Add("Wombat");
            animalList.Add("Bonobo");
            animalList.Add("Hamster");
            animalList.Add("Human");

            List<string> occupationsList = new List<string>();
            occupationsList.Add("Engineer");
            occupationsList.Add("Plumber");
            occupationsList.Add("TalkShowHost");
            occupationsList.Add("Doctor");
            occupationsList.Add("Janitor");
            occupationsList.Add("Prostitute");
            occupationsList.Add("Cook");
            occupationsList.Add("Thief");
            occupationsList.Add("Pilot");
            occupationsList.Add("Maid");
            occupationsList.Add("Nanny");
            occupationsList.Add("Bartender");

            List<string> clothesList = new List<string>();
            clothesList.Add("Shirt");
            clothesList.Add("Shoes");
            clothesList.Add("Jacket");
            clothesList.Add("Undershorts");
            clothesList.Add("Socks");
            clothesList.Add("Jeans");
            clothesList.Add("Wristwatch");
            clothesList.Add("Cap");
            clothesList.Add("Sunglasses");
            clothesList.Add("Overalls");
            clothesList.Add("LegWarmers");
            clothesList.Add("Bra");

            // Add elements to superset without clustering
            List<string> superset = new List<string>();
            for (i = 0; i < ItemsPerCat; i++)
            {
                superset.Add(foodList[i]);
                superset.Add(animalList[i]);
                superset.Add(occupationsList[i]);
                superset.Add(clothesList[i]);
            }

            mainLoop(
                NumLoops,
                SampleSize, 
                DRcomparisonThreshold,
                ItemsPerCat,
                memoryMean,
                memoryStDev,
                superset,
                foodList,
                animalList,
                occupationsList,
                clothesList,
                uniRand);
        }

        // This is the big, main loop.
        static void mainLoop(
            int NumLoops,
            int SampleSize,
            double DRcomparisonThreshold,
            int ItemsPerCat,
            double memoryMean,
            double memoryStDev,
            List<string> superset,
            List<string> foodList,
            List<string> animalList,
            List<string> occupationsList,
            List<string> clothesList,
            Random uniRand)
        {
            // Report something to the screen,
            Console.WriteLine("Simulating {0} experiments of {1} participants each", NumLoops, SampleSize);
            Console.WriteLine("...Calculating...");

            // Create list of meanDR of separate experiments.
            List<double> meanDRlist = new List<double>();

            // Initialize DR comparison counter.
            int NumDRaboveThresh = 0; // Number of mean DR% values above comparison threshold.

            // Loop through main big loop
            for (int mainCntr = 0; mainCntr < NumLoops; mainCntr++)
            {
                // create Array of participant's DR's for a given experiment.
                List<double> DRarray = new List<double>();

                //Loop through each participant in one experiment.
                for (int participant = 0; participant < SampleSize; participant++)
                {
                    // Reset parameters.
                    int P = 0; // number of practice words recalled.
                    int C = 0; // number of control words recalled.
                    double DR = 0; // weighted differential recall (DR) score.

                    // Create recalled set.
                    List<string> recalledSet = new List<string>();
                    createRecalledSet(
                        recalledSet,
                        superset,
                        memoryMean,
                        memoryStDev,
                        uniRand);

                    // Create random practice set.
                    List<string> practiceSet = new List<string>();
                    createPracticeSet(
                        practiceSet,
                        foodList,
                        animalList,
                        occupationsList,
                        clothesList,
                        ItemsPerCat,
                        uniRand);

                    // Compare recalled count to practice set.
                    foreach (string strTemp in recalledSet)
                    {
                        if (practiceSet.Contains(strTemp))
                            P++;
                        else
                            C++;
                    }

                    // Compute weighted differential recall (DR) score
                    DR = 100.0 * (P - C) * (P + C) / 576.0;

                    // Record DR in list.
                    DRarray.Add(DR);

                    // Report output.
                    //Console.WriteLine("DR%:  {0}", DR);
                }
                // record mean DR.
                double meanDR = DRarray.Average();
                meanDRlist.Add(meanDR);

                // Update comparison counter
                if (meanDR >= DRcomparisonThreshold) NumDRaboveThresh++;

                // Report Average DR.
                //Console.WriteLine("Experiment {0}, Sample size: {1},  mean DR:  {2}", mainCntr, SampleSize, meanDR);

            }
            // Finished looping.

            // Calculate mean of meanDR
            double finalMean = meanDRlist.Average();

            // Calculate standard deviation of meanDR
            double finalStDev = 0;
            foreach (double dTemp in meanDRlist)
            {
                finalStDev += (dTemp - finalMean) * (dTemp - finalMean);
            }
            finalStDev = finalStDev / NumLoops;
            finalStDev = Math.Sqrt(finalStDev);

            // Report final results.

            Console.WriteLine(" ");
            Console.WriteLine("Participants per experiment: {0}", SampleSize);
            Console.WriteLine("Number of separate experiments: {0}", NumLoops);
            Console.WriteLine("mean of the mean DR% from all experiments: {0}",
                finalMean);
            Console.WriteLine("Standard deviation of the mean DR%: {0}", finalStDev);
            Console.WriteLine("");
            Console.WriteLine("Comparison threshold (from study): {0}", DRcomparisonThreshold);
            Console.WriteLine("Total number of meanDR above comparison threshold: {0}", NumDRaboveThresh);
            Console.WriteLine("% of meanDR above comparison threshold: {0}%", 100.0*((double)NumDRaboveThresh)/((double)NumLoops));
            Console.ReadLine();

        }

        static double Gaussrand(double unirand1, double unirand2)
        {
            return (Math.Sqrt(-2 * Math.Log(unirand1)) * Math.Cos(2 * Math.PI * unirand2));
        }

        static void createRecalledSet(List<string> recalledSet, List<string> superSet, double mean, double stdev, Random unirand)
        {
            // Determine how many words were recalled. (random)
            double unirand1 = unirand.NextDouble();
            double unirand2 = unirand.NextDouble();
            while (unirand1 == 0.0) unirand1 = unirand.NextDouble();
            while (unirand2 == 0.0) unirand2 = unirand.NextDouble();

            double gaussrand = Gaussrand(unirand1, unirand2);
            gaussrand *= stdev;
            gaussrand += mean;
            int recalledCount = (int)gaussrand;
            if (recalledCount < 0) recalledCount = 0; // guard against rare negative Gaussian draws
            if (recalledCount > superSet.Count) recalledCount = superSet.Count;

            // Create temporary superset and copy elements over.
            List<string> tempSuperSet = new List<string>();
            foreach (string strTemp in superSet)
            {
                tempSuperSet.Add(strTemp);
            }

            // Randomize temporary superset.
            shuffleList(tempSuperSet, unirand);

            // Copy over first recalledCount items to recalledSet.
            for (int i = 0; i < recalledCount; i++)
            {
                recalledSet.Add(tempSuperSet[i]);
            }
        }

        static void createPracticeSet(
            List<string> practiceList,
            List<string> foodList,
            List<string> animalList,
            List<string> occupationsList,
            List<string> clothesList,
            int itemsPerCat,
            Random uniRand)
        {
            List<string> tempFoodList = new List<string>();
            List<string> tempAnimalList = new List<string>();
            List<string> tempOccupationsList = new List<string>();
            List<string> tempClothesList = new List<string>();

            // load temporary lists.
            foreach (string strTemp in foodList)
                tempFoodList.Add(strTemp);
            foreach (string strTemp in animalList)
                tempAnimalList.Add(strTemp);
            foreach (string strTemp in occupationsList)
                tempOccupationsList.Add(strTemp);
            foreach (string strTemp in clothesList)
                tempClothesList.Add(strTemp);

            // Shuffle temporary lists
            shuffleList(tempFoodList, uniRand);
            shuffleList(tempAnimalList, uniRand);
            shuffleList(tempOccupationsList, uniRand);
            shuffleList(tempClothesList, uniRand);

            // Load practice list
            for (int i = 0; i < itemsPerCat / 2; i++)
            {
                practiceList.Add(tempFoodList[i]);
                practiceList.Add(tempAnimalList[i]);
                practiceList.Add(tempOccupationsList[i]);
                practiceList.Add(tempClothesList[i]);
            }

            // Shuffle practice list
            shuffleList(practiceList, uniRand);
        }

        // method to shuffle lists.
        static void shuffleList(List<string> list, Random unirand)
        {
            List<string> shuffledList = new List<string>();
            while (list.Count() > 0)
            {
                int indexTemp = unirand.Next(list.Count());
                shuffledList.Add(list[indexTemp]);
                list.RemoveAt(indexTemp);
            }
            foreach (string strTemp in shuffledList) list.Add(strTemp);
        }
    }
}

