At what probability is it rational to look for paranormal explanations?

AI Thread Summary
In a scenario where a friend correctly guesses the outcome of 250 coin flips, the probability of this happening by chance is astronomically low, at 1 in 1.8*10^75. This raises questions about the underlying mechanics of the guessing process, suggesting that statistical analysis alone cannot explain the phenomenon. Participants discuss the need for a model that balances complexity and usability to understand the mechanics behind such improbable outcomes. Additionally, the conversation touches on the potential for misleading predictions in random events, highlighting how one could exploit this for personal gain. Ultimately, the discussion emphasizes the importance of examining both statistical data and the processes that generate it.
Firefight
You're about to flip a quarter while your friend guesses which side will come up. You agree to switch turns after one incorrect guess each.

He gets 2 in a row right before guessing wrong, and now it's your turn.

You incorrectly guess the outcome of the first flip he does, and now it's his turn again.

He guesses 250 flips correctly before you give up (132 heads, 118 tails). You're absolutely dumbfounded and check the coin again just to reassure yourself that everything is in order.

What do you conclude has happened (What's the most likely explanation)?

How many correct guesses would your friend have to make in a row in order for you to stop believing it was chance based?

The chance of correctly guessing 250 coin flips in a row on one try is 1 in 2^250, or about 1 in 1.8*10^75.

Assumptions:
You start triple-checking everything and flipping at different speeds past flip 15 to ensure 'randomness' and minimal human error.
Your friend is not manipulating the coin in any predictable way below a success ratio of 1:1.85.
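The "1 in 1.8*10^75" figure above can be checked directly; a minimal sketch, assuming 250 independent fair flips:

```python
from fractions import Fraction

# Probability of guessing 250 fair, independent coin flips in a row
p = Fraction(1, 2) ** 250

# The reciprocal is the "1 in N" odds quoted above
odds = float(1 / p)
print(odds)  # about 1.8e75
```

The exact value is 2^250 ≈ 1.809 × 10^75, which rounds to the 1.8 × 10^75 quoted in the post.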
 
Firefight said:
You're about to flip a quarter while your friend guesses which side will come up. [...]

This is an interesting scenario.

In terms of likelihood estimation, if you used an MLE approach, the estimated proportion parameter would come out close to 1 rather than 0.5.

If you wanted to test the hypothesis that the true underlying population parameter is 0.5, this sample would be judged extremely unlikely under that hypothesis. (It's not impossible, just wildly implausible.)
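The two points above can be made concrete. A minimal sketch (names are my own, not from the thread) of the MLE and the exact one-sided probability of the observed data under the fair-coin hypothesis:

```python
n, k = 250, 250      # flips guessed, flips guessed correctly

# Maximum likelihood estimate of the success probability:
# for a binomial sample this is just k/n, which here is 1.0, not 0.5
p_hat = k / n

# Exact probability of seeing 250 or more correct out of 250
# if the true success probability were 0.5:
# P(X >= 250) = P(X = 250) = 0.5^250 for X ~ Binomial(250, 0.5)
p_value = 0.5 ** n
print(p_hat, p_value)
```

With a p-value on the order of 10^-76, any conventional significance test rejects the fair-guessing hypothesis, which is the sense in which the outcome is "implausible" rather than impossible.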

But you need to remember that this information tells you nothing about the mechanics of the actual underlying process, and I think that is what you are trying to get at.

In terms of the mechanics of the process, you need to focus not on the data (which is what probability/statistics deals with), but on the process itself.

In doing this, there could be a number of avenues to take. You might think about a physical explanation by using mathematical modelling to generate results using various parameters along with initial conditions.

You might also present something more 'esoteric'. It could be based on something that is independent of the physical parameters of the coin, like the 'paranormal' element. It might even be a combination of the two.

Statistics doesn't really shed light on the actual process in the context you are describing (the real mechanics of the process). It can only make inferences about things that are probabilistic.

The big thing I see is choosing a model that strikes a perfect (or good-enough) balance between complexity and accuracy. You can have a model that is highly accurate (millions of variables or more) but so complex it is practically useless, or a model with very few variables (practical to use) that is useless in a different way, in that it doesn't replicate the mechanics of the system in any meaningful sense.

Applied mathematics is, in one way, centered around finding a good balance between complexity and usability when replicating the mechanics of a process in a meaningful way.
 
Reminds me of a surefire way to make some money. You get yourself the names and email addresses of 1024 well-heeled stock market investors. You send half of them an email saying the market will go up tomorrow, and you tell the other half the market will go down.

You'll be right with 512 of the investors. Forget the other ones. You send half of the remaining 512 an email saying the market will go up tomorrow, etc.

Eventually you'll have made ten accurate market predictions in a row to one investor. You send that investor an email saying, "Ok I proved myself to you. I just sent you ten accurate predictions in a row. Would you like to subscribe to my private mailing list? Only cost you $100k."

He sends you the money and from now on you send him predictions that are right by chance only half the time.

It's really not difficult to make correct predictions of random events ... if you can send simultaneous strings of predictions to many different people.
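The halving scheme described above is easy to simulate; a quick sketch (the variable names are mine, not from the post):

```python
import random

# Start with 1024 recipients; each round, tell half "up" and half "down",
# then keep only the half whose prediction matched the market.
recipients = 1024
rounds = 0
while recipients > 1:
    outcome = random.choice(["up", "down"])  # market moves at random
    # Whatever the outcome, exactly the half told the right direction survives
    recipients //= 2
    rounds += 1

print(rounds, recipients)  # 10 rounds leave exactly 1 recipient
```

After 10 halvings (2^10 = 1024), exactly one person has seen ten correct "predictions" in a row, even though every email was a coin flip from that person's point of view.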
 
SteveL27 said:
Reminds me of a surefire way to make some money. You get yourself the names and email addresses of 1024 well-heeled stock market investors. [...]

Not exactly relevant, but still entertaining to read :p Thanks for the idea. I'm going to try that on a forum in the near future.

Also thanks for the input Chiro.
 