MHB Success runs in Bernoulli trials

  • Thread starter: Mathick
  • Tags: Bernoulli
AI Thread Summary
The discussion centers on understanding the recurrence relation for the probability of observing a run of r successes in a sequence of Bernoulli trials. The left-hand side of the equation sums the probabilities of the mutually exclusive ways in which the last r trials can all be successes, classified by when the most recent renewal occurred, while the right-hand side gives that probability directly as p^r. Each term on the left accounts for a renewal at one of the last r trials, followed by the required number of additional successes. The confusion arises from the worry that successes occurring before the run are being counted as well, but the decomposition concerns only the last r trials. Ultimately, the analysis illustrates two equivalent ways of calculating the same probability, which yields the recurrence used to derive the generating function.
Mathick
I tried to understand the following problem:

Consider a sequence of Bernoulli trials with success probability $p$. Fix a positive integer $r$ and let $\mathcal{E}$ denote the event that a run of $r$ successes is observed; recall that we do not allow overlapping runs. We use a recurrence relation for $u_n$, the probability that $\mathcal{E}$ occurs on the $n$th trial, to derive the generating function $U(s)$. Consider $n \ge r$ and the event that trials $n, n-1, \ldots, n-r+1$ all result in success. This event has probability $p^r$.

On the other hand, if this event occurs then the event $\mathcal{E}$ must occur on trial $n-k$ for some $k$ with $0 \le k \le r-1$, and the subsequent $k$ trials must then all result in success. Thus we derive

$u_n + u_{n-1}p + \cdots + u_{n-r+1}p^{r-1} = p^r$, for $n \ge r$.

Everything is fine until 'On the other hand...'. How do we know that the event $\mathcal{E}$ must occur on trial $n-k$ for some $0 \le k \le r-1$? And why does the LHS of the above equation equal the RHS? I understand the RHS - it is the probability of $r$ successes in a row - but I'm struggling with the LHS. When we take, for example, $u_{n-1}$, don't we also take into account successes that happened outside the run in question? I'm clearly wrong, but I don't understand why.

Please, could anyone try to explain it to me?
 
This passage comes directly from Feller, volume 1. The idea is to take one event and compute its probability in two different but equivalent ways.
- - - - -
So let the coin-tossing process run for a little while (at least $r$ tosses, but consider $2r$ or more to avoid technicalities with boundary conditions / delayed-renewal arguments).

The event that the last $r$ tosses all come up heads has probability $p^r$ -- this is the first way of calculating it, and it is easy.

Now for the second way of calculating it: given that the last $r$ tosses are all heads, look back over that window of $r$ tosses; exactly one renewal occurred during this window (why?). Hence we had a renewal either just now, or one toss prior, or two tosses prior, ..., or $r-2$ tosses prior, or $r-1$ tosses prior.

Each renewal ends on a run of heads. So if the renewal occurred $r-1$ tosses ago, at trial $n-(r-1)$, the window event requires $r-1$ 'new' heads on the subsequent tosses (the head at trial $n-(r-1)$ itself is 'reused' from the run that caused that renewal), so this case has probability $u_{n-(r-1)} \cdot p^{r-1}$. For a renewal $r-2$ tosses ago the probability is $u_{n-(r-2)} \cdot p^{r-2}$, ..., for a renewal one toss ago it is $u_{n-1} \cdot p$, and for a renewal zero tosses ago it is $u_n \cdot 1$. These events are mutually exclusive (recall the earlier "why"), so their probabilities add. Hence we have the equality

$ p^r = u_n + u_{n-1}p + \cdots + u_{n-r+1}p^{r-1} $
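
To address the 'successes outside the run' worry directly, here is the smallest non-trivial case worked out (my own example, not from Feller): take $r = 2$ and $n = 3$. Then

$ u_1 = 0, \qquad u_2 = p^2, \qquad u_3 = (1-p)p^2, $

since a renewal at trial 3 forces trial 1 to be a tail (a head there would already complete a run at trial 2 and reset the counter). The identity for $n = 3$ then reads

$ u_3 + u_2\,p = (1-p)p^2 + p^2 \cdot p = p^2, $

which is exactly the probability that trials 2 and 3 are both heads. Whatever happened at trial 1 is already folded into $u_2$ and $u_3$ themselves; the extra factor of $p$ only pays for toss 3, so nothing outside the window is counted twice.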
 

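If it helps, here is a quick numerical sanity check (a minimal sketch in Python; the function name u_bruteforce and the parameter choices p = 0.3, r = 3, N = 10 are my own, not from the thread or from Feller). It builds $u_n$ from the recurrence $u_n = p^r - \sum_{k=1}^{r-1} u_{n-k}p^k$, recomputes $u_n$ by brute-force enumeration of all outcome sequences with non-overlapping run counting, and checks that the window identity holds for every $n \ge r$:

```python
from itertools import product

# Sanity check for the renewal identity
#   u_n + u_{n-1} p + ... + u_{n-r+1} p^{r-1} = p^r,  n >= r,
# where u_n = P(a non-overlapping run of r successes is completed at trial n).
# Parameter values below are arbitrary choices for the check.

p, r, N = 0.3, 3, 10   # success probability, run length, largest n checked

def u_bruteforce(n):
    """u_n by summing the probabilities of all 2^n outcome sequences
    in which a (non-overlapping) run of r successes ends exactly at trial n."""
    total = 0.0
    for seq in product((0, 1), repeat=n):
        run = 0
        renewal_at_n = False
        for i, x in enumerate(seq, start=1):
            run = run + 1 if x else 0
            if run == r:               # run completed -> renewal, counter resets
                if i == n:
                    renewal_at_n = True
                run = 0
        if renewal_at_n:
            heads = sum(seq)
            total += p**heads * (1 - p)**(n - heads)
    return total

# u_n from the recurrence, i.e. u_n = p^r - sum_{k=1}^{r-1} u_{n-k} p^k,
# with u_n = 0 for n < r.
u = [0.0] * (N + 1)
for n in range(r, N + 1):
    u[n] = p**r - sum(u[n - k] * p**k for k in range(1, r))

for n in range(r, N + 1):
    lhs = sum(u[n - k] * p**k for k in range(r))   # u_n + u_{n-1} p + ...
    assert abs(lhs - p**r) < 1e-12                 # the identity itself
    assert abs(u[n] - u_bruteforce(n)) < 1e-12     # recurrence vs. enumeration
    print(f"n={n}: u_n = {u[n]:.6f}, window sum = {lhs:.6f}, p^r = {p**r:.6f}")
```

The second assert is the meaningful one: it confirms that the recurrence really does reproduce the renewal probabilities defined by counting non-overlapping runs.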