This thread is for us to discuss the text The Drunkard's Walk: How Randomness Rules Our Lives by Leonard Mlodinow, and by that I mean for me to ask questions of you, those of you who will suffer me, my experts in probability, the PhysicsForums readers, on things I'm interested in from the text. I did ask the forum moderators whether or not I could post short exact quotes from the text without violating copyright law, and they gave me their blessing. If anybody has interest, here's a link to the book. Don't be shy; post whatever you want us to analyze in here and we'll do our best.
https://www.amazon.com/dp/0307275175/?tag=pfamazon01-20
<< Moderator's Note -- link updated >>
I hope I'm not the only one directing conversations in this thread by the end of it; 'twas a good book, and I think it ought to be discussed, and I feel that discussion should require pencil and paper. I have marked this thread with the Basic (High School) prefix because the book was written for such an audience, but I've no way of knowing whether my questions will venture off into Intermediate at some point. As for me, since you might need to know this to answer my questions in a way I can understand: I was a math major, and I studied mostly analysis, though I did take the standard undergrad probability and statistics course.
There was a particular passage that I had in mind when I decided to create this thread: pg. 174, "[...]The point was rather starkly illustrated by mathematician George Spencer-Brown, who wrote that in a random series of 10^1,000,007 zeroes and ones, you should expect at least 10 nonoverlapping subsequences of 1 million consecutive zeroes." And the citation is George Spencer-Brown, Probability and Scientific Inference (London: Longmans, Green, 1957), pp. 55-56. Actually, 10 is a gross underestimate.
I am aware that the surrounding text to the above quote makes a very different point not related to my line of questioning so I have omitted the point mentioned in the quote since it doesn't serve my purpose.
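To show why I say 10 is a gross underestimate, here's my own rough back-of-the-envelope attempt (so please check it): chop the series into disjoint blocks of 10^6 bits each. There are 10^1,000,007 / 10^6 = 10^1,000,001 such blocks, and each is all zeroes with probability 2^(-10^6), so the expected number of all-zero blocks is roughly 10^1,000,001 × 2^(-10^6), which works out to about 10^698,971 — and that only counts runs aligned to block boundaries. Since the numbers are too large to handle directly, I did the arithmetic in base-10 logarithms:

```python
import math

# Number of disjoint million-bit blocks: 10^1,000,007 / 10^6 = 10^1,000,001
log10_blocks = 1_000_007 - 6

# Probability that one block is all zeroes: 2^(-10^6)
log10_p_block = -1_000_000 * math.log10(2)  # about -301,030

# Expected number of all-zero blocks (in log10)
log10_expected = log10_blocks + log10_p_block
print(f"expected all-zero blocks ~ 10^{log10_expected:.0f}")
```

So even this crude lower bound gives around 10^698,971 expected runs of a million zeroes, which is why 10 is such an understatement.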
My questions concerning this passage are: (1) how does one compute such a thing (in brief)? (2) For random sequences of events that are much shorter, say fewer than 100 events, can we use a similar calculation to grasp the likely streaks of consecutive events when the underlying probability of each event is known? (3) Specifically, suppose we had three possible events A, B, and C, with P(A)=0.4, P(B)=0.4, and P(C)=0.2; in a sequence of 30 events, what should we expect for streaks of A's, B's, or C's? This relates to a game I play in which you buy chests containing one of three colored coins, and you need to collect sets of the colored coins to make an item you can use in the game. It has often been a player's experience to need only one purple coin and then roll 7 orange coins in a row. I wish to quantify this experience and verify that the good ol' RNG actually does approximate randomness and isn't just stacking the deck against us.
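While waiting for a proper analytical answer, I tried a quick Monte Carlo sketch of question (3). The labels and probabilities below are just the ones from my question (0.4/0.4/0.2); whether the game's actual RNG matches them is exactly what I'd like to test:

```python
import random

def longest_streak(seq):
    """Length of the longest run of identical consecutive symbols."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def simulate(n_events=30, trials=100_000, seed=0):
    """Average longest streak, and fraction of trials with a streak >= 7."""
    rng = random.Random(seed)
    symbols, weights = "ABC", [0.4, 0.4, 0.2]
    total = hits = 0
    for _ in range(trials):
        seq = rng.choices(symbols, weights=weights, k=n_events)
        s = longest_streak(seq)
        total += s
        hits += (s >= 7)
    return total / trials, hits / trials

avg, frac_ge7 = simulate()
print(f"average longest streak in 30 events: {avg:.2f}")
print(f"fraction of sequences with a streak of 7+: {frac_ge7:.3f}")
```

Running this suggests the typical longest streak in 30 draws is around 3 to 4, and a streak of 7 or more of the same coin shows up in a small but non-negligible fraction of sequences, so seeing it occasionally is consistent with a fair RNG. I'd still welcome the exact calculation.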
Well, that's all for today, hope you aren't disappointed that I'm not the one providing the thoughtful analysis this time, just questions for now.
Thanks for your time Probability Forum Reader,
-Ben Orin