B Probability; what is "the long run"?

  • Thread starter: Cliff Hanley
  • Tags: Probability
Cliff Hanley
On a roulette wheel with a single green zero, the probability of the ball landing in a red pocket is 18/37, or odds of 19 to 18 against (approx 49%), with the probability of it landing in a non-red pocket (black or green) being 19/37 (approx 51%).

Probability theory tells us that although the ball will, at times, land in a non-red pocket several times in succession, and, at times, many times in succession, in the LONG RUN it will land there approx 51% of the time (assuming an unbiased wheel etc).

But what is the LONG RUN?

A gambler can bet on red only to see the ball land in black (or green) say, 10 times in a row. Another gambler may see the ball land in black (or green) 15 or 20 times in a row (a freakish occurrence for some, but unremarkable for the mathematician – or experienced croupier).

Q. What is the ‘record’ for successive non-reds in actual play over the few centuries that roulette has been around?

Q. As a thought experiment, if we had monitored an unbiased wheel (with all other factors not causing any bias either) for the last two or three centuries what could the ‘record’ be in this case for successive non-reds?

Q. Does probability theory suggest that if we played for a long enough period of time we would see a hundred non-reds in succession? A thousand? Million? Billion, trillion etc?

Q. How can we predict when the LONG RUN (whatever that may be) will show us the true odds realized, ie, when we see there has been approximately 49% reds, 51% non-reds?

Q. If it’s the case that in theory we could see black come up say, a million times in a row (or more), is it true that (given sufficient spins) it would be the case in practice?
 
The "long run" means a great many trials. If you are trying to measure the occurrences of an event with close to a 50% probability, you do not need as large a number of trials as when trying to measure the occurrences of events with much smaller probabilities. The math is easier with coin flips.

Obtaining heads n times in a row has a probability of (1/2)^n. So the probability of 10 heads in a row is 1/1024.
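This calculation is easy to check exactly (a minimal sketch using exact rational arithmetic; the function name is mine):

```python
from fractions import Fraction

def prob_run(p, n):
    """Probability of one outcome occurring n times in a row,
    when each independent trial has probability p of that outcome."""
    return p ** n

# 10 heads in a row with a fair coin: (1/2)^10
print(prob_run(Fraction(1, 2), 10))  # -> 1/1024
```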
 
  • Like
Likes Cliff Hanley
Cliff Hanley said:
Q. Does probability theory suggest that if we played for a long enough period of time we would see a hundred non-reds in succession? A thousand? Million? Billion, trillion etc?
?
Yes. Why would it not?
 
  • Like
Likes Cliff Hanley
Cliff Hanley said:
Probability theory tells us that although the ball will, at times, land in a non-red pocket several times in succession, and, at times, many times in succession, in the LONG RUN it will land there approx 51% of the time (assuming an unbiased wheel etc).

Technically, probability theory gives you no guarantees about any event (or series of events) actually happening. Probability theory merely uses the given probabilities to assign probabilities to other events and series of events.

When people assert that some event will happen in the long run, this is an assertion about the physics or other applied science involved in a problem, not a theorem of mathematical probability theory. The best mathematical probability theory can do in such situations is to say the limit of the probability of an event approaches 1 as the "length" of the "long run" approaches infinity.
 
Last edited:
  • Like
Likes Cliff Hanley
Cliff Hanley said:
Q. What is the ‘record’ for successive non-reds in actual play over the few centuries that roulette has been around?
You can google this, although I'm not sure how reliable the answers would be.

Cliff Hanley said:
Q. As a thought experiment, if we had monitored an unbiased wheel (with all other factors not causing any bias either) for the last two or three centuries what could the ‘record’ be in this case for successive non-reds?
A wheel spun once a minute for 300 years will spin about 130 million times. The chance of 28 successive non-reds is about 1 in 127 million. However this doesn't mean that a run of 28 will happen, or that a run of more than 28 will not happen.
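These figures are easy to check (a quick sketch; note the spin count comes out nearer 158 million than 130 million, a point raised again later in the thread):

```python
# One spin per minute for 300 years, and the chance of 28 non-reds in a row.
spins_in_300_years = 300 * 365 * 24 * 60      # ~158 million spins
p_28_nonreds = (19 / 37) ** 28                # 28 successive non-reds

print(f"{spins_in_300_years:,} spins")
print(f"1 in {1 / p_28_nonreds:,.0f}")        # roughly 1 in 127 million
```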

Cliff Hanley said:
Q. Does probability theory suggest that if we played for a long enough period of time we would see a hundred non-reds in succession? A thousand? Million? Billion, trillion etc?
Yes: if everyone on Earth spent their whole lives playing roulette until the Earth's atmosphere is burned off by the Sun they are likely to see a hundred non-reds, but a thousand are unlikely before the universe reaches heat death (caution - I did these calculations rather carelessly).

Cliff Hanley said:
Q. How can we predict when the LONG RUN (whatever that may be) will show us the true odds realized, ie, when we see there has been approximately 49% reds, 51% non-reds?
We can't, but we can say that the more trials we do, the more likely the observed proportion is to approximate the theoretical proportion closely.

Cliff Hanley said:
Q. If it’s the case that in theory we could see black come up say, a million times in a row (or more), is it true that (given sufficient spins) it would be the case in practice?
See the above comment on heat death.

You would gain more understanding by learning about this section of probability (binomial probability/Bernoulli trials) and doing the calculations yourself.
 
  • Like
Likes Cliff Hanley and HallsofIvy
"The long run" depends on how close you want to get to 18/37. Even then, there is only a probability that it will get as close as you specify. So you have to frame the question this way: "How large of a sample size would it take so that the probability of the sample result being within xxx of its theoretical value is yyy?". The answer to that question would give you the sample size that you could call "the long run" for that case.

Suppose you want to say that there is a probability of 95% that it is within 0.01 of 18/37. Then there is an equation that tells you how many trials that would take. So it tells you what "the long run" would mean for that case.
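One standard version of that equation (an assumption on my part that this is the one meant; it uses the normal approximation to the binomial, with z ≈ 1.96 for 95% confidence) is n = z²·p(1−p)/ε²:

```python
import math

def long_run_sample_size(p, eps, z=1.96):
    """Trials needed so the observed proportion lies within eps of p
    with ~95% confidence (z = 1.96), via the normal approximation."""
    return math.ceil(z ** 2 * p * (1 - p) / eps ** 2)

# Red on a single-zero wheel: p = 18/37, accuracy 0.01, 95% confidence
print(long_run_sample_size(18 / 37, 0.01))  # roughly 9,600 spins
```

So for this case, "the long run" means on the order of ten thousand spins.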
 
  • Like
Likes Cliff Hanley
I think the long run here would be described by the LLN --Law of Large Numbers.
 
  • Like
Likes Cliff Hanley
Probability theory tells us that if we play infinitely often we will certainly get to see, *infinitely* many times, a hundred non-reds in succession. And a thousand. And a million. And a billion, and a trillion.

You name it, you will get it ... with probability 1, infinitely many times.

The strong law of large numbers.
 
  • Like
Likes Cliff Hanley
WWGD said:
I think the long run here would be described by the LLN --Law of Large Numbers.
That is just replacing the vague term "long run" with the equally vague term "large numbers". So it begs the question "What is large?". In many cases, like the roulette table of the OP, there are actual numbers that can be calculated if the question is asked correctly:
Given a confidence level, say 95%, and a desired accuracy, say 0.1, what is the sample size, N, that would give a sample accuracy of 0.1 with 95% confidence?
 
  • Like
Likes Cliff Hanley
  • #10
gill1109 said:
Probability theory tells us that if we play infinitely often we will certainly get to see, *infinitely* many times, a hundred non-reds in succession. And a thousand. And a million. And a billion, and a trillion.

You name it, you will get it ... with probability 1, infinitely many times.

The strong law of large numbers.
For answering the OP, I think it's important to add that this is not a contradiction of the Law of Large Numbers. The probability of an unusual sequence, say 1000 reds in succession, is so small that there are almost certainly a huge number of more normal results before that happens. So when the 1000 reds eventually occurs, it almost certainly does not affect the sample average very much.
 
  • Like
Likes Cliff Hanley
  • #11
gill1109 said:
Probability theory tells us that if we play infinitely often we will certainly get to see, *infinitely* many times, a hundred non-reds in succession. And a thousand. And a million. And a billion, and a trillion.
Actually "certainly" is a stretch isn't it. Yes, the probability of getting any string you can name approaches 1 as the number of trials approaches infinity, but since we can't actually do an infinite number of trials, we can't ever get an absolute certainty (probability = 1.0)
 
  • Like
Likes Cliff Hanley
  • #12
phinds said:
Actually "certainly" is a stretch isn't it. Yes, the probability of getting any string you can name approaches 1 as the number of trials approaches infinity, but since we can't actually do an infinite number of trials, we can't ever get an absolute certainty (probability = 1.0)

There is a further distinction between "actually" and "certainly". If we "actually" took a sample from a normal distribution and the value was 1.23, then an event with probability 1 (namely the event "the value of the sample will not be 1.23") failed to "actually" happen.
 
  • #13
Stephen Tashi said:
There is a further distinction between "actually" and "certainly". If we "actually" took a sample from a normal distribution and the value was 1.23, then an event with probability 1 (namely the event "the value of the sample will not be 1.23") failed to "actually" happen.
I have no idea what you just said / what it means.
 
  • Like
Likes Cliff Hanley
  • #14
phinds said:
I have no idea what you just said / what it means.

I'm making the distinction between the statement "Event E occurs" (Or "Event E will occur") versus the statement "Event E has probability 1".

For a normally distributed random variable X, let D be the event "X = 1.23". Let E be the event "X is not equal to 1.23". The event E has probability 1.

A similar statement holds true for any particular numerical value v of X. The probability that "X is not equal to v" is 1.

As another example, we have to distinguish between the truth of a statement A and the event "A is true with probability 1" when doing mathematical proofs.

For example, in logic we have the pattern of reasoning:
Given:
If A then B
A is true
----
Conclude B is true.

However it is not a valid form of logical argument to say:
Given:
If A then B
A is true with probability 1
----
Conclude:
B is true
 
  • Like
Likes Cliff Hanley
  • #15
This still makes no sense to me but I'll take your word for it.
 
  • Like
Likes Cliff Hanley
  • #16
phinds said:
Actually "certainly" is a stretch isn't it. Yes, the probability of getting any string you can name approaches 1 as the number of trials approaches infinity, but since we can't actually do an infinite number of trials, we can't ever get an absolute certainty (probability = 1.0)
I have to disagree. For any number, N, we can always continue long enough for N+1 occurrences. The probability of infinitely many occurrences is 1 because the probability of only finitely many occurrences is 0.
 
  • Like
Likes Cliff Hanley
  • #17
FactChecker said:
That is just replacing the vague term 'long run" with the equally vague term "large numbers". So it begs the question "What is large?". In many cases like the roulette table of the OP, there are actual numbers that can be calculated if the question is asked correctly:
Given a confidence level, say 95%, and a desired accuracy, say 0.1, what is the sample size, N. that would give a sample accuracy of 0.1 with 95% confidence?

Of course, this is the best we can do, but at least LLN gives you a theoretical backing, and, given a level of approximation wanted, then one can compute.
 
  • Like
Likes Cliff Hanley
  • #18
@Stephen Tashi I do have one question and perhaps the answer to it will enlighten me as to the rest of your comments.

How is it that the statement "A is true with probability 1" is anything other than simply an excessively redundant way of saying "A is true" ?
 
  • Like
Likes Cliff Hanley
  • #19
As examples for the 49%/51% question:
After 1000 rolls, the chance to be within 1% of this result (so somewhere from 48/52 to 50/50) is roughly 50%.
After 10000 rolls, the chance to be within 1% is about 95%.
After 100,000 rolls, the chance to be within 1% is larger than 99.9999999%.

After 1 million rolls, the chance to be within 0.1% (between 48.9/51.1 and 49.1/50.9) is about 95% and the chance to be more than 1% away is completely negligible.
After 100 million rolls, the chance to be within 0.01% (between 48.99/51.01 and 49.01/50.99) is about 95%.
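These example numbers can be reproduced with the normal approximation to the binomial (a sketch; the exact binomial values differ slightly in the last digits):

```python
import math

def prob_within(p, eps, n):
    """P(|observed proportion - p| < eps) after n trials,
    using the normal approximation to the binomial."""
    sigma = math.sqrt(p * (1 - p) / n)
    return math.erf(eps / (sigma * math.sqrt(2)))

p = 19 / 37  # probability of non-red
for n in (1_000, 10_000, 100_000):
    print(n, prob_within(p, 0.01, n))  # ~0.47, ~0.95, ~1 - 2e-10
```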
phinds said:
How is it that the statement "A is true with probability 1" is anything other than simply an excessively redundant way of saying "A is true" ?
Draw a random number from a uniform distribution over the real numbers in the interval [0,1]. "The number is not 0.5" has probability 1, but it is not certain. It is almost certain.
 
  • Like
Likes Cliff Hanley and FactChecker
  • #20
phinds said:
@Stephen Tashi I do have one question and perhaps the answer to it will enlighten me as to the rest of your comments.

How is it that the statement "A is true with probability 1" is anything other than simply an excessively redundant way of saying "A is true" ?

As mfb's example illustrates, the mathematical definition of probability is very technical. The mathematical definition of probability describes a situation where events are assigned numbers called "probabilities". It doesn't specify anything about whether events actually happen and it doesn't provide any guarantees about the frequency with which they happen.

A good way to understand the situation in an intuitive and philosophical manner is to consider the general problem of formulating a theory about "uncertainty". What we desire from theories is for them to make definite statements and predictions. We don't want the theory itself to be "uncertain". So how can you say something "certain" about "uncertainty"?

Probability models uncertainty by assigning numbers to events. The assignment is something definite - e.g. "The probability that a fair coin lands heads is 1/2". The conclusions of the theory are definite - e.g. "The probability of two heads in two independent tosses of the fair coin is 1/4". We can make definite statements because we are talking about "the probability of" an event instead of asserting something about the event happening without the clause "the probability of" attached to it. So the general pattern of results in probability theory is: "If the probability of ... is such-and-such then the probability of ... is so-and-so".

Because the conclusions of probability speak of the "probability of" events, the conclusions of probability theory are not conclusions about the events without the modifying phrase "probability of" attached to the event. People who apply probability theory to practical problems may assert that "E has probability 1" amounts to the same thing as "E happens", and this is a valid claim in many practical situations. However, this claim is not a consequence of the mathematical theory of probability. The claim must be supported by some additional facts or assumptions about the practical situation being considered.
 
  • Like
Likes jim mcnamara and Cliff Hanley
  • #21
phinds said:
How is it that the statement "A is true with probability 1" is anything other than simply an excessively redundant way of saying "A is true" ?

It is a very different statement, and one whose difference is not usually taught well to students.
Let me change the wording to "A has probability 0" and "A is false". It's the same thing, but it's easier to explain it this way.
Here is the critical example. Pick any number between ##0## and ##1## at random (assuming every number has the same probability = uniform). What is the probability you picked ##1/2##? Math says the probability is ##0##. But it's not impossible you picked ##1/2##. In fact, every number has probability ##0## of being picked. But you have to pick some number.

This is why something with probability ##0## is said in mathematics to be true "almost always" or "almost surely". The "almost" is very important!
 
  • Like
Likes Cliff Hanley and FactChecker
  • #22
micromass said:
Here is the critical example. Pick any number between ##0## and ##1## at random (assuming every number has the same possibility = uniform).

It's worth pointing out that the formal theory of probability does not assert that we can take random samples. It only asserts that given a distribution, we can determine the probability of certain events.

Of course, in applying probability theory we deal with situations where random samples are actually taken. However, in such a situation if we consider statements of the form "If we take a random sample of ... then...(some disagreeable conclusion)" the disagreeable conclusion might occur because the premise "we take a random sample" is false. For example, in a given practical situation, one can debate whether it is possible to take a random sample from a uniform distribution on [0,1]. For example, if the sample is a reading from digital display on a voltmeter then it has limited precision.
 
  • Like
Likes Cliff Hanley
  • #23
micromass said:
It is a very different statement, and one whose difference is not usually taught well to students.
Let me change the wording to "A has probability 0" and "A is false". It's the same thing, but it's easier to explain it this way.
Here is the critical example. Pick any number between ##0## and ##1## at random (assuming every number has the same probability = uniform). What is the probability you picked ##1/2##? Math says the probability is ##0##. But it's not impossible you picked ##1/2##. In fact, every number has probability ##0## of being picked. But you have to pick some number.

This is why something with probability ##0## is said in mathematics to be true "almost always" or "almost surely". The "almost" is very important!
OK, I get it, assuming that in the last sentence, you have a typo and meant probability 1 (or, alternatively, "almost always false"), yes?
 
  • Like
Likes Cliff Hanley
  • #24
phinds said:
Actually "certainly" is a stretch isn't it. Yes, the probability of getting any string you can name approaches 1 as the number of trials approaches infinity, but since we can't actually do an infinite number of trials, we can't ever get an absolute certainty (probability = 1.0)
You can make a mathematical model for an infinite number of trials and in that mathematical model there is probability 1 that any particular string will be repeated infinitely many times. As a consequence of the strong law of large numbers. To be sure, the mathematical result is obtained by showing that the probability any particular string is repeated at least some particular number of times in N trials converges to 1 as N tends to infinity.
 
  • #25
gill1109 said:
You can make a mathematical model for an infinite number of trials and in that mathematical model there is probability 1 that any particular string will be repeated infinitely many times. As a consequence of the strong law of large numbers. To be sure, the mathematical result is obtained by showing that the probability any particular string is repeated at least some particular number of times in N trials converges to 1 as N tends to infinity.
Yes, the "converges to 1" I get but since we can't run an infinite number of trials in reality, I don't like the "= 1". I realize that math cares not at all whether I like it or not.
 
  • #26
Dr. Courtney, you said,

“The ‘long run’ means a great many trials.”

The terms ‘a great many’ and ‘large numbers’ (as in the Law of Large Numbers) seem to me to be vague, given that mathematics is supposed to be a very precise discipline.

“If you are trying to measure the occurrences of an event with close to a 50% probability, you do not need as large a number of trials as when trying to measure the occurrences of events with much smaller probabilities.”

So we are likely to see the expected value regards reds/non-reds (48.6% v 51.4%) sooner than we might see the expected value for a single number, eg, red 36 (2.7...% v 97.2...%)?

“The math is easier with coin flips. Obtaining heads n times in a row has a probability of (1/2)^n. So the probability of 10 heads in a row is 1/1024.”

Q. Is the probability of 10 non-reds in a row 1.2750252 / 1000 [(19/37)^10]?

Q. Is another way of saying 1/1024 (re 10 heads in a row) 0.9765625/1000?

Q. If so, is the probability of 10 non-reds in a row 0.2984627 greater than the probability of 10 heads in a row (1.27... minus 0.97...)?
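The three quantities asked about here can be checked directly (a minimal sketch; all the printed values are per thousand):

```python
p_nonred_run = (19 / 37) ** 10  # ten non-reds in a row
p_heads_run = (1 / 2) ** 10     # ten heads in a row

print(p_nonred_run * 1000)                   # ~1.2750 per thousand
print(p_heads_run * 1000)                    # ~0.9766 per thousand
print((p_nonred_run - p_heads_run) * 1000)   # ~0.2985 per thousand
```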
 
  • #27
phinds, you said,

“Yes. Why would it not? [re me asking; ‘Q. Does probability theory suggest that if we played for a long enough period of time we would see a hundred non-reds in succession? A thousand? Million? Billion, trillion etc?].”

I find it hard to imagine a hundred non-reds in a row (never mind a thousand, million or billion). I find it very hard to imagine a trillion in a row. I’m beginning to grasp the basics of probability (I think) but wondering whether the theory is borne out (or would be borne out) by the practice.

Q. Are there computer programmes that could simulate trillions (and more) spins to see if we would actually get runs of non-reds into (and beyond) the trillions?
 
  • #28
Cliff Hanley said:
I find it hard to imagine a hundred non-reds in a row (never mind a thousand, million or billion). I find it very hard to imagine a trillion in a row.
Sure, but the math doesn't care what we can imagine.

Cliff Hanley said:
I’m beginning to grasp the basics of probability (I think) but wondering whether the theory is borne out (or would be borne out) by the practice.
Yes, it would.
Cliff Hanley said:
Q. Are there computer programmes that could simulate trillions (and more) spins to see if we would actually get runs of non-reds into (and beyond) the trillions?
Any program will do it if you run it for long enough. It might take more than your lifetime, but it would happen.
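A toy simulation along these lines (a sketch; the 19/37 non-red probability and Python's default generator are assumptions, and as a later reply notes that generator is not suitable for trillions of draws). For modest spin counts the longest observed run grows only logarithmically, which is why very long runs take so absurdly long to appear:

```python
import random

def longest_nonred_run(spins, p_nonred=19 / 37, seed=1):
    """Simulate `spins` spins and return the longest run of non-reds."""
    rng = random.Random(seed)
    best = current = 0
    for _ in range(spins):
        if rng.random() < p_nonred:
            current += 1
            best = max(best, current)
        else:
            current = 0
    return best

# The longest run in a million spins is typically only around 20.
print(longest_nonred_run(1_000_000))
```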
 
  • #29
phinds said:
any program will do it if you run it for long enough. It might take more than your lifetime, but it would happen.
If the algorithm to generate random numbers is good enough. For trillions of random numbers, most algorithms are not.

@cliff: I posted example numbers as orientation in a previous post.
Cliff Hanley said:
Q. If so, is the probability of 10 non-reds in a row 0.2984627 greater than the probability of 10 heads in a row (1.27... minus 0.97...)?
Don't forget the factor of 1000. Yes.
Cliff Hanley said:
I find it hard to imagine a hundred non-reds in a row (never mind a thousand, million or billion).
It is unlikely to happen within the lifetime of Earth, even if you run a computer continuously for the next 5 billion years. If you run it for 10^30 years, it is very likely to happen (with 100 non-red). If you run it for 10^109 years, it is very likely to see a billion non-red in a row somewhere in this incredibly long experiment.
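For a sense of scale, the mean waiting time for a run of 100 non-reds can be computed with the standard waiting-time formula for success runs (a rough sketch assuming a single wheel spun once per second; estimates in this thread differ by orders of magnitude because they assume different spin rates and numbers of simultaneous players):

```python
import math

p = 19 / 37     # probability of non-red on each spin
run = 100       # target run length

# Mean number of spins until a run of `run` non-reds first occurs:
# E = (1 - p^run) / ((1 - p) * p^run)   (standard success-run result)
mean_spins = (1 - p ** run) / ((1 - p) * p ** run)

spins_per_year = 60 * 60 * 24 * 365   # one spin per second, nonstop
print(f"about 10^{math.log10(mean_spins / spins_per_year):.0f} years")
```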
 
  • Like
Likes Cliff Hanley
  • #30
mfb said:
If the algorithm to generate random numbers is good enough. For trillions of random numbers, most algorithms are not.
Good point. I was assuming a more ideal situation. Well, OK, actually I wasn't assuming anything, I just didn't think of that :smile:
 
  • #31
Stephen Tashi, you said,

“Technically, probability theory gives you no guarantees about any event (or series of events) actually happening. Probability theory merely uses the given probabilities to assign probabilities to other events and series of events.”

Q. So those who say, for example, that we will see the expected value regards non-reds at roulette (19/37) in the long run, or given sufficient spins, are in error? Can they only correctly say that we will probably see it?

You also said,

“When people assert that some event will happen in the long run, this is an assertion about the physics or other applied science involved in a problem, not a theorem of mathematical probability theory.”

Q. Would I be correct in presuming that the physics of a large number of spins of a roulette wheel in advance of said spins would be impossible to know?

“The best mathematical probability theory can do in such situations is to say the limit of the probability of an event approaches 1 as the "length" of the "long run" approaches infinity.”

Q. What does the ‘limit of the probability’ mean?

Q. By infinity do you mean an infinite number of spins?

Q. If so, how can an infinite number be approached? Isn’t a single spin just as close to an infinite number of spins as, say, a centillion (10^303) spins - in that after the former there are just as many spins ahead of us as in the case of the latter?
 
  • #32
Cliff Hanley said:
Stephen Tashi, you said,

“Technically, probability theory gives you no guarantees about any event (or series of events) actually happening. Probability theory merely uses the given probabilities to assign probabilities to other events and series of events.”

Q. So those who say, for example, that we will see the expected value regards non-reds at roulette (19/37) in the long run, or, given sufficient spins, are in error? Can they only correctly say that we will probably see...’
I think you have a good point there. When the probability can be shown to be 1 to within some large number of decimal places (and in this case it can be a VERY large number) we tend to talk in practical terms as though the outcome is a certainty, but I believe you are right that we should say will probably see since it is not mathematically guaranteed.
 
  • Like
Likes Cliff Hanley
  • #33
That's why I used "very likely". As in, a probability of more than 99.99999999999999999999999999999999999999999999999999999 % in the previous post (and I could add as many 9s as the forum would allow).
 
  • #34
Cliff Hanley said:
Q. So those who say, for example, that we will see the expected value regards non-reds at roulette (19/37) in the long run, or, given sufficient spins, are in error? Can they only correctly say that we will probably see...’

Yes, if we're talking about mathematical probability, we can only say "we will probably see...".

You also said,

“When people assert that some event will happen in the long run, this is an assertion about the physics or other applied science involved in a problem, not a theorem of mathematical probability theory.”

Q. Would I be correct in presuming that the physics of a large number of spins of a roulette wheel in advance of said spins would be impossible to know?

It would take someone familiar with the construction of roulette wheels to answer that question.

“The best mathematical probability theory can do in such situations is to say the limit of the probability of an event approaches 1 as the "length" of the "long run" approaches infinity.”

Q. What does the ‘limit of the probability’ mean?

Q. By infinity do you mean an infinite number of spins?
Are you familiar with the mathematical definition of a "limit of a sequence"? The limit of a sequence of probabilities is just a special case of that concept.

Q. If so, how can an infinite number be approached? Isn’t a single spin just as close to an infinite number of spins as, say, a centillion (10^303) spins - in that after the former there are just as many spins ahead of us as in the case of the latter?

The mathematical definition of a "limit of a sequence" is specific and technical. That's what we need to look at.
 
  • #35
One fundamental fact about "the long run" is this:

Suppose we're interested in some event E with a positive probability Prob(E) = p > 0 of its occurring on any one of many identical, independent trials.

And suppose we would like to conduct enough trials so that the probability of E occurring at least once is very high, say greater than Q = 1 - ε for some arbitrarily small quantity ε > 0.

Then, there always exists some positive integer N such that if N trials are conducted, the probability of E occurring at least once is greater than the preassigned probability Q.

(((
Proof: The probability of E not occurring on one trial is 1-p. So the probability of E not occurring at all in N independent trials is (1-p)^N.

This means that the complementary probability, of E occurring at least once in N trials, is 1 - (1-p)^N.

If we try to solve for N in the equation

1 - (1-p)^N = Q

we get:

(1-p)^N = 1-Q

and so

N = log(1-Q) / log(1-p)

where the log is to any fixed base, so we may as well take it to be the natural log (base e).

But there is no reason this expression for N need be an integer. Hence we must increase N to the next integer to have a sensible inequality, while still ensuring that our ultimate probability of E occurring at least once is greater than Q, as desired:

N = floor[1 + log(1-Q) / log(1-p)]

It follows that

(1-p)^N < 1-Q

and so the probability 1 - (1-p)^N of E occurring at least once in N trials satisfies

1 - (1-p)^N > Q

as desired.
)))
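The derivation above can be turned into a short calculation (a sketch; the example event and probability are illustrative only):

```python
import math

def trials_needed(p, Q):
    """Smallest integer N with 1 - (1-p)^N > Q, per the derivation above."""
    return math.floor(1 + math.log(1 - Q) / math.log(1 - p))

# Illustrative example: an event with p = 1/1024 per trial,
# wanted at least once with probability greater than Q = 0.99
print(trials_needed(1 / 1024, 0.99))
```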
 
  • #37
MrAnchovy, you said,

“You can google this [the record for successive non-reds in actual play ], although I'm not sure how reliable the answers would be.”

Thanks. I got this from allaboutbetting.co.uk; ‘In Monte Carlo in 1913 black came up 26 times in a row and in New York in 1943 red came up 32 times in a row.’

The odds for those, if I’ve got this correct, are (18/37)^26 and (18/37)^32 respectively, which is 7.3087029 x 10^-9 (approx a 7 in a billion chance) and 9.6886885 x 10^-11 (approx a 1 in a trillion chance).

You also said,

“A wheel spun once a minute for 300 years will spin about 130 million times.”

I think 300 years at once a minute would be nearer 160 million spins, but we’ll go with 130 million for the sake of your example.

“The chance of 28 successive non-reds is about 1 in 127 million. However this doesn't mean that a run of 28 will happen, or that a run of more than 28 will not happen.”

Q. It’s just the mathematical probability, yes?

“...if everyone on Earth spent their whole lives playing roulette until the Earth's atmosphere is burned off by the Sun they are likely to see a hundred non-reds, but a thousand are unlikely before the universe reaches heat death (caution - I did these calculations rather carelessly).”

Q. I’ll heed your caution and not do any numbers on this one. Perhaps another poster can answer this one more accurately?

“We can't [know when the long run will give us the expected value] but we can say that the more trials we do, the more likely the observed proportion is to approximate the theoretical proportion closely.”

Q. Only ‘more likely’?

Q. If it’s the case that in theory we could see black come up say, a million times in a row (or more), is it true that (given sufficient spins) it would be the case in practice?

“See the above comment on heat death [re the practice matching the theory re a million (or more) non-reds in a row].”

Again, I’ll heed your caution re you saying that you did your calculations rather carelessly.

“You would gain more understanding by learning about this section of probability (binomial probability/Bernouilli trials) and doing the calculations yourself.”

Yes, thanks, I’ve been checking out Bernoulli trials, and attempting many calculations myself, but I find that a bit of both (studying and asking questions) helps me to understand this better.

Thanks for the reply.

FactChecker, you said,

"The long run" depends on how close you want to get to 18/37. Even then, there is only a probability that it will get as close as you specify. So you have to frame the question this way: "How large of a sample size would it take so that the probability of the sample result being within xxx of its theoretical value is yyy?". The answer to that question would give you the sample size that you could call "the long run" for that case.”

Q. Is the sample size simply the number of trials (in this case, the number of spins)?

Q. And even if we carried out a centillion (10^303) trials (spins) we would still only probably see the expected value (or very close the expected value)? And that we might see a centillion successive non-reds instead (and if we did sufficient trials - spins - we would probably see a centillion successive non-reds)?

Q. And are you saying that there is no long run as such, that there’s only the long run for case x, the long run for case y, etc etc etc? If so, it seems to me that many people use this term inaccurately; do you find this also?

You also said,

“Suppose you want to say that there is a probability of 95% that it is within 0.01 of 18/37.”

Q. Does ‘within 0.01 [1%] of 18/37’ mean between 18.17982/37 and 17.82018/37?

[I got those numbers by the following method;

18/37 x 0.01 = 0.00486...

To add 18/37 + 0.00486... we need to find a common denominator, which is 37, which gives us;

18/37 + 0.17982/37 which equals 18.17982/37

And;

18/37 – 0.17982/37 which equals 17.82018/37]

Q. Are the above calculations correct?

You added,

“Then there is an equation that tells you how many trials that would take. So it tells you what "the long run" would mean for that case.”

Q. What is the equation?

Q. Does this mean that if we carry out the number of trials that the equation tells us to there will be a 95% chance that we will get between 18.17982/37 reds and 17.82018/37 reds?
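One common version of that equation uses the normal approximation to the binomial: to have probability roughly 95% that the observed proportion lies within eps of p, you need about n = 1.96² · p(1 − p) / eps² trials. A hedged sketch (my own code, reading “within 0.01” as an absolute deviation of the proportion, which may not be exactly what was meant):

```python
p = 18 / 37   # probability of red
eps = 0.01    # desired half-width around p
z = 1.96      # standard-normal quantile for a 95% two-sided interval

# Normal-approximation sample size: n >= z**2 * p * (1 - p) / eps**2
n = z**2 * p * (1 - p) / eps**2
print(round(n))  # about 9,600 spins
```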

WWGD, you said,

“I think the long run here would be described by the LLN --Law of Large Numbers.”

Q. Do you think that term (LLN) is rather vague for mathematics, given that ‘large’ is a relative term?
 
  • #38
Cliff Hanley said:
Q. Do you think that term (LLN) is rather vague for mathematics, given that ‘large’ is a relative term?

No, of course not, since the law of large numbers doesn't just say "if we generate a lot of data, we will get closer to the true value". It has an actual mathematical meaning which is very precise.
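To make that precise: the weak law of large numbers says that for any tolerance ε > 0, the probability that the sample proportion of non-reds differs from 19/37 by more than ε goes to 0 as the number of spins grows. A minimal simulation illustrating the convergence (my own sketch, with a fixed seed for reproducibility):

```python
import random

random.seed(1)
q = 19 / 37  # P(non-red)

def sample_proportion(n):
    # Fraction of non-reds in n simulated independent spins.
    hits = sum(random.random() < q for _ in range(n))
    return hits / n

# The printed proportions tend to settle near 19/37 ~ 0.5135 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, round(sample_proportion(n), 4))
```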
 
  • #39
9.6886885 x 10^-11 is about 97 in a trillion, or roughly 1 in 10 billion.

I get the impression the questions repeat.
 
  • #40
For the OP:

I think the best way to understand this is through statistical inference and the Cramér–Rao lower bound, which relates the information matrix (known as the Fisher information) to the statistical variance of an estimator.

The idea is simple - increase information content and reduce statistical variance.

If you understand how the information content grows then you understand how the estimate converges to the population value and how it does so consistently with variance also shrinking to zero.

Note that the information matrix makes no assumption about things like whether samples have completely independent data points, partially correlated ones and even completely correlated values. If you have collinearly correlated sample points then information density will not increase.

If you want to understand the nature of how things converge in more detail then you need to understand how the information density of that sample increases as you take more data points on to what already exists. This includes dealing with situations of correlated data and using some sort of inequality with your assumptions of how correlated your data points will be to others to estimate probabilistically when your variance will be in some interval range.

The above statistical inference theorem is general - but as others have mentioned, you will need to add further information constructs like distribution models to help make things more specific and also measurable.
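To tie that to the roulette example: a single spin is a Bernoulli(p) trial, whose Fisher information is 1/(p(1 − p)); n independent spins carry n times that, and the Cramér–Rao bound then says any unbiased estimator of p has variance at least p(1 − p)/n. A small sketch of how that bound shrinks (function name is mine):

```python
p = 18 / 37  # probability of red

def crlb_std(n, p):
    # Cramér-Rao lower bound on the standard deviation of an
    # unbiased estimator of p from n independent spins:
    #   Var >= p * (1 - p) / n
    return (p * (1 - p) / n) ** 0.5

# The bound shrinks like 1/sqrt(n).
for n in (100, 10_000, 1_000_000):
    print(n, crlb_std(n, p))
```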
 
  • #41
gill1109, you said,

“Probability theory tells us that if we play infinitely often we will certainly get to see, *infinitely* many times, a hundred non-reds in succession. And a thousand. And a million. And a billion, and a trillion.”

Q. I’ve heard that the concept of infinity is a complex one (and that there are different ideas about it in maths from those in physics); do you mean here simply a number of trials without end – a purely hypothetical, and impossible, situation?

You also said,

“You name it, you will get it ... with probability 1, infinitely many times.”

Q. So we would be guaranteed to see a centillion successive non-reds (10^303)? And see it an infinite number of times?

Q. What about an infinite number of non-reds in succession; what is the probability of that given an infinite number of trials?

“The strong law of large numbers.”

Thanks. I looked it up. But the maths is too advanced for me at the moment. I will go back to it (many times I expect) as I learn to deal with more and more complex maths.
 
  • #42
Cliff Hanley said:
gill1109, you said,

“Probability theory tells us that if we play infinitely often we will certainly get to see, *infinitely* many times, a hundred non-reds in succession. And a thousand. And a million. And a billion, and a trillion.”

Q. I’ve heard that the concept of infinity is a complex one (and that there are different ideas about it in maths from those in physics); do you mean here simply a number of trials without end – a purely hypothetical, and impossible, situation?

You also said,

“You name it, you will get it ... with probability 1, infinitely many times.”

Q. So we would be guaranteed to see a centillion successive non-reds (10^303)? And see it an infinite number of times?

Q. What about an infinite number of non-reds in succession; what is the probability of that given an infinite number of trials?

“The strong law of large numbers.”

Thanks. I looked it up. But the maths is too advanced for me at the moment. I will go back to it (many times I expect) as I learn to deal with more and more complex maths.
Yes you are guaranteed to see a centillion successive non-reds (10^303), an infinite number of times. But not an infinite number of non-reds in succession. That has probability zero.
 
  • #43
gill1109 said:
Yes you are guaranteed to see a centillion successive non-reds (10^303), an infinite number of times. But not an infinite number of non-reds in succession. That has probability zero.

Any particular infinite sequence of outcomes has probability zero. So are we guaranteed not to see any particular infinite sequence? Mathematically, probability theory doesn't define the "actual" occurrence of events, so it doesn't express any guarantees about that topic. If we are discussing a physical experiment, we can discuss how probability theory is interpreted in that experiment.
 
  • #44
phinds, you said,

“Yes, the probability of getting any string you can name approaches 1 as the number of trials approaches infinity, but since we can't actually do an infinite number of trials, we can't ever get an absolute certainty (probability = 1.0)”

But even if, as a thought experiment, we imagine doing an infinite number of trials, every single spin could be non-red (P[non-red] = 19/37). Or every spin could be red (P [red] = 18/37). Or every spin could be green (P [green] = 1/37). So even with an infinite number of trials there are no certainties. No?
 
  • #45
Stephen Tashi, you said,

“There is a further distinction between "actually" and "certainly". If we "actually" took a sample from a normal distribution and the value was 1.23 then an event with probability 1 (namely the event "the value of the sample will not be 1.23") failed to "actually" happen.”

Q. Would you explain what ‘normal distribution’ means in language suitable for a maths novice please?

Q. Likewise for ‘the value was 1.23’?

(I Googled it but the explanation was in language too advanced for me at the moment).
 
  • #46
FactChecker, you said,

“For any number, N, we can always continue long enough for N+1 occurrences.”

Q. Do you mean that if we let N = say, 1000, and carry out sufficient trials, we will (probably) see 1000 successive non-reds (or reds, or whatever); and if we continued with the trials for long enough we would see 1001 successive non-reds/whatever, and 1002 non-reds, and 1003, etc etc etc?
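The chance of actually seeing such a run within a finite session can be computed exactly with a small dynamic program. A sketch (my own code; `prob_run` is a hypothetical helper name) for the probability that m spins contain at least one run of n consecutive non-reds:

```python
def prob_run(m, n, q):
    # P(at least one run of n consecutive non-reds in m spins),
    # with q = P(non-red) per spin. no_run[i] is the probability
    # that the first i spins contain no such run; we condition on
    # the number k of non-reds before the first red (k < n).
    no_run = [1.0] * n  # no run is possible in fewer than n spins
    for i in range(n, m + 1):
        no_run.append(sum(q**k * (1 - q) * no_run[i - k - 1] for k in range(n)))
    return 1 - no_run[m]

print(prob_run(10_000, 10, 19 / 37))  # ten non-reds in a row is very likely
```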

You added,

“The probability of infinitely many occurrences is 1 because the probability of finite occurrences is 0.”

Q. Do you mean the probability of infinitely many occurrences generally? Or the p of infinitely many occurrences of a specified sequence / sequences? I’m guessing the former given that you’ve also said that the p of finite occurrences is 0?

Q. If we did do an infinite number of trials (assuming that we can somehow survive the ‘death’ of the Sun and whatever other life-threatening cosmological - or other - events will take place in the future, ie, assuming that we - us now and whatever we evolve to become - can survive eternally) what would we be likely to see in terms of sequences of non-reds / reds / etc etc?
 
  • #47
mfb, you said,

“As examples for the 49%/51% question:
After 1000 rolls, the chance to be within 1% of this result (so somewhere from 48/52 to 50/50) is roughly 50%.
After 10000 rolls, the chance to be within 1% is about 95%.
After 100,000 rolls, the chance to be within 1% is larger than 99.9999999%.”

Q. How do we work this out?

“After 1 million rolls, the chance to be within 0.1% (between 48.9/51.1 and 49.1/50.9) is about 95% and the chance to be more than 1% away is completely negligible.
After 100 million rolls, the chance to be within 0.01% (between 48.99/51.01 and 49.01/50.99) is about 95%.”

Q. And this?
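Those percentages can be reproduced, approximately, with the normal approximation to the binomial; `prob_within` below is my own helper name. It returns the approximate chance that the observed red proportion is within eps of p after n spins:

```python
from math import sqrt, erf

def prob_within(n, eps, p=18 / 37):
    # Normal approximation: the sample proportion has standard
    # deviation sqrt(p*(1-p)/n), so
    #   P(|phat - p| <= eps) ~ erf(eps / (sd * sqrt(2)))
    sd = sqrt(p * (1 - p) / n)
    return erf(eps / (sd * sqrt(2)))

for n in (1_000, 10_000, 100_000):
    print(n, prob_within(n, 0.01))
```

The three printed values come out near 0.47, 0.95, and 1 minus a few parts in 10^10, matching the quoted “roughly 50%”, “about 95%”, and “larger than 99.9999999%”.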

You also said,

“Draw a random number from a uniform distribution over the real numbers in the interval [0,1].”

Q. I Googled ‘real numbers’ to discover that they are any number that we can find on a number line (including integers, fractions, decimals, irrational numbers such as pi, etc); but I was left wondering if these are real numbers what are non-real numbers; so, what are non-real numbers? And why is the distinction ‘real’ important when referring to real numbers?

You added,

"The number is not 0.5" has probability 1...”

Q. Is this because ‘uniform distribution’ (in this example) means the numbers 1,2,3 etc (with no fractions/decimals in between)?

“...but it is not certain.”

Q. Why is it not certain? If the pool of numbers from which we draw does not include 0.5 why is it not certain that the number will not be 0.5?

“It is almost certain.”

Q. Are you saying that ‘probability of 1’ ≠ ‘certainly will happen’?
 
  • #48
Thread closed temporarily for Moderation...
 
  • #49
@Cliff Hanley, the question you started with in this thread, "what is the long run?" has been asked and answered, so I am closing this thread.

Cliff Hanley said:
But what is the LONG RUN?
You have asked a number of other questions as well, some of which can be answered by a web search, but others of which will require a fair amount of time studying the relevant mathematics subjects. This forum is not meant to take the place of academic studies.
Cliff Hanley said:
Q. What does the ‘limit of the probability’ mean?
Based on other threads of yours that I have seen, your mathematical expertise is not yet at the stage where an answer would be meaningful to you.
Cliff Hanley said:
Q. Would you explain what ‘normal distribution’ means in language suitable for a maths novice please?
Did you do a web search for this term? It is not the purpose of Physics Forums to be a tutorial for large swaths of probability theory.
Cliff Hanley said:
Q. I Googled ‘real numbers’ to discover that they are any number that we can find on a number line (including integers, fractions, decimals, irrational numbers such as pi, etc); but I was left wondering if these are real numbers what are non-real numbers; so, what are non-real numbers? And why is the distinction ‘real’ important when referring to real numbers?
These are very basic questions. You should put in the effort at researching these questions rather than rely on PF as a tutorial service.

Thread closed.
 