Probability basics - Heads or Tails thought experiment.


Discussion Overview

The discussion revolves around a thought experiment involving a random walk based on coin tosses, exploring the implications of probability theory and its relationship to quantum theory. Participants examine the behavior of a walker taking steps left or right based on the outcome of coin flips, questioning the nature of probability and expected outcomes over time.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant describes a random walk experiment where steps are determined by coin tosses, expressing confusion about the implications of probability over many trials.
  • Another participant identifies the scenario as the 1D Drunkard's Walk problem, noting that over time, the walker is more likely to wander away from the starting point, countering the intuitive expectation of remaining close to the origin.
  • It is mentioned that in one or two dimensions, the probability of eventually returning to the origin is one, while this probability decreases in higher dimensions.
  • A participant questions how a random walk can return to the origin if it started from a different point, pondering the nature of the walk's memory of its starting position.
  • Another participant discusses the concept of "closeness" to the origin, suggesting that while the expected value of a random walk is the origin, the variance can lead to significant distances from it over many steps.
  • It is stated that every point in a random walk will be hit infinitely often, raising further questions about the implications of this behavior.

Areas of Agreement / Disagreement

Participants express differing views on the implications of random walks, with some agreeing on the mathematical properties while others challenge the intuitive understanding of probability and distance from the origin. The discussion remains unresolved regarding the nature of "closeness" and the behavior of the random walk over time.

Contextual Notes

Participants highlight the need to redefine concepts such as "close" and the implications of variance in random walks, indicating that assumptions about distance and probability may not align with intuitive expectations.

Izhaki
Right,

This is very confusing to me, and I'm pretty sure it will be for everyone else too.

My question is really related to quantum theory, but I find it most appropriate to spell it out here as it's more of a maths question on the basics of probability.

Here's my experiment:

I stand at a point and throw a coin.
If it's heads, I take a step to the right; if it's tails, I take a step to the left.
The coin toss is completely random, and my steps are exactly the same distance to the left and to the right.
My time axis is 'throws', so after 10 throws t = 10.
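For concreteness, here's a small Python sketch of the setup (the `random_walk` helper name is just my own):

```python
import random

def random_walk(n_steps, seed=None):
    """Simulate a 1D walk: heads = +1 step, tails = -1 step."""
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(n_steps):
        position += 1 if rng.random() < 0.5 else -1
        path.append(position)
    return path

path = random_walk(10, seed=42)
print(path)  # positions after each of the 10 throws, starting from 0
```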

Now it doesn't take a genius to know that probability has it that I'll be close to my origin point most of the time.

But also, if I run this experiment indefinitely, there will come a point where I will be a million steps to the left (the same way monkeys will eventually type Shakespeare). Say the time when this happened was M.

Can I not tell that if I run the experiment M more times (so at time 2M), I'll be back around my origin point again?

But as my chances of going left or right are equal, that makes no sense whatsoever. When I'm a million steps to the left of my origin, I still have the same chance of ending up a million steps further to the right or to the left. Yet given that I started from my original origin point, at any time my mean should be that origin point.

In other words: when I get heads I 'owe' probability a tails. But that's nonsense, as the chances of heads and tails are equal on every throw.

So what am I getting wrong?

Thanks in advance.
 
You have described the 1D Drunkard's Walk problem - go look it up. Over time you will more likely wander away from the starting point. The intuitive idea that the equal probabilities will keep you close by is simply incorrect.

Basically - once you have taken a step to the right (say) then, by the argument you used, the remaining tosses should keep you close to that point rather than the origin. See the problem?

Very balanced outcomes like HTTHTHHTHTTHHT that keep you close to the start are quite unlikely, and long runs are more common than you'd guess. In fact, it is possible to tell the difference between a sheet of actual coin tosses and a sheet of made-up coin tosses this way: the actual one will have more long runs of heads or tails.
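You can check this yourself by measuring the longest run in a simulated sequence; a minimal sketch (the `longest_run` helper is made up for illustration):

```python
import random

def longest_run(tosses):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for prev, nxt in zip(tosses, tosses[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

rng = random.Random(0)
tosses = [rng.choice("HT") for _ in range(200)]
print(longest_run(tosses))  # a genuinely random sheet of 200 tosses usually contains runs of 6 or more
```

By contrast, the "balanced" sequence above never has a run longer than 2, which is exactly what gives made-up data away.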
 
One interesting fact about random walks: in one or two dimensions, the probability that you will eventually return to the origin is one. In higher dimensions it is less than one.
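This recurrence can be checked numerically by estimating the fraction of walks that revisit the origin within a growing number of steps; a rough sketch (the function name is illustrative):

```python
import random

def return_fraction(n_steps, n_trials, seed=0):
    """Fraction of simulated 1D walks that revisit the origin within n_steps."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(n_trials):
        pos = 0
        for _ in range(n_steps):
            pos += 1 if rng.random() < 0.5 else -1
            if pos == 0:
                returned += 1
                break
    return returned / n_trials

print(return_fraction(1000, 2000))  # creeps toward 1 as n_steps grows
```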
 
That is interesting - so if a 1D random walk starts at position x, then, over a long time, it returns to position x? But if it got to x via a random walk from the origin, then it will end up at the origin and not x ... how does it know where it started out?

Or is it more that long runs of heads are as likely as long runs of tails, so, given sufficient time, the walk will eventually return to its starting point? But that does not sound so interesting... after all, given sufficient time, surely the walker will end up at any nominated point?

These things can be good fun to simulate.
I first ran into this problem when I was 13, trying to program a TRS-80 to run a snail race... which, I now know, I had modeled as a 2D random walk. I spent ages trying to work out how long it would take for the "snails" to exit the circle with different parameters.
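That exit-time experiment is easy to reproduce today; a minimal sketch of the 2D version (the `steps_to_exit` name is made up):

```python
import random

def steps_to_exit(radius, seed=None):
    """Steps for a 2D lattice walk to leave a circle of the given radius."""
    rng = random.Random(seed)
    x = y = steps = 0
    while x * x + y * y <= radius * radius:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        steps += 1
    return steps

print(steps_to_exit(10, seed=1))  # mean exit time grows roughly like radius**2
```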
 
Izhaki said:
Now it doesn't takes a genius to know that probability has it that I'll be close to my origin point most of the time.

Well, here you have to define "close", but the fact is that the more steps you take, the more likely you are to be "further" and "further" away from the origin.

Izhaki said:
But also, if I'm to run this experiment indefinitely, there will come a point where I will be million steps to the left (same way monkeys will type Shakespeare). Say the time when this happened was M.

Can I not tell that if I run the experiment M more times (so at time 2M), I'll back around my origin point again?

What you can tell is that in the long run #tails/#heads ~ 1. But, for example, a trillion tails divided by a trillion minus a hundred thousand heads is also close to one, and yet you would find yourself a hundred thousand steps away from the origin. The thing is that a hundred thousand steps is "close" compared to the trillion steps taken.

The expected value of a random walk is its origin, but its variance grows without bound (after n steps it is n, so the typical distance is √n). The further you go, the more every point looks alike on that scale: "closeness" can eventually mean billions of steps away from the origin, which is still very close compared to the total number of steps taken.

So I think if you redefine your concept of "close" you will get it. :smile:
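The scaling claim (mean at the origin, spread growing like √n) is easy to check numerically; a rough sketch:

```python
import random
import statistics

def endpoints(n_steps, n_trials, seed=0):
    """Final positions of many independent n_steps-long walks."""
    rng = random.Random(seed)
    return [sum(1 if rng.random() < 0.5 else -1 for _ in range(n_steps))
            for _ in range(n_trials)]

ends = endpoints(2500, 1000)
print(statistics.mean(ends))    # close to 0 (the origin)
print(statistics.pstdev(ends))  # close to sqrt(2500) = 50
```

So a walk that is 50 steps out after 2500 throws is entirely typical, even though the expected position is exactly the origin.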
 
Simon Bridge said:
That is interesting - so if a 1d random walk starts at position x, then, over a long time, it returns to position x? But if it got to x via a random walk from the origin then it will end up at the origin and not x ... how does it know where it started out?

Every point will be hit infinitely often.
 
