
What is the probability of getting r heads before s tails?

  • #1

Homework Statement


"Toss a coin repeatedly. Denote the event of getting heads or tails on the ##i##-th try by ##H_i## and ##T_i## respectively, where ##P(H_i)=p## and ##P(T_i)=1-p##, for some ##0\leq p \leq 1##. Now denote by ##E## the event of getting ##r## consecutive heads before ##s## consecutive tails. Find ##P(E|H_1)##, ##P(E|T_1)##, and ##P(E)##."

Homework Equations


Independence: If ##A## and ##B## are independent events, then ##P(A\cap B)=P(A)P(B)##.

The Attempt at a Solution


So far, I have denoted ##E## by ##E=(H_{j+1} \cap H_{j+2} \cap ...\cap H_{j+r} \cap T_{k+1} \cap ... \cap T_{k+s})## for some ##j\in ℕ## and for some ##k\in ℕ,\space k>j+r##. The major thing that concerns me at the moment is the set of ##H_i,T_i## for ##1\leq i \leq j## and the set of ##H_m,T_m## for ##j+r+1\leq m \leq k##. That's really all I could come up with myself.
 

Answers and Replies

  • #2
StoneTemplePython
with respect to calculating P(E):

hint: it is enough to consider the case where exactly ##n:=(s+r-1)## tosses are made
What kind of distribution is involved with summing ##n## iid coin tosses? And what portion of this distribution are you interested in?
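For concreteness, here's a minimal numerical sketch of this hint (the function name is illustrative; this is the non-consecutive reading of the problem): with ##n = r+s-1## tosses, the event of interest is "at least ##r## heads", i.e. an upper-tail sum of a binomial distribution.

Python:
from math import comb

def heads_before_tails_nonconsecutive(p, r, s):
    # Non-consecutive reading: with n = r + s - 1 tosses, r heads occur
    # before s tails iff at least r of the n tosses come up heads.
    n = r + s - 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))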
 
  • #3
I think it was the binomial distribution?
 
  • #5
Wait a minute...

I've forgotten a very important detail from the opening post: they have to be ##r## consecutive heads and ##s## consecutive tails. So sorry!
 
  • #6
verty
Wait a minute...

I've forgotten a very important detail from the opening post: they have to be ##r## consecutive heads and ##s## consecutive tails. So sorry!
What is the probability of the next two tosses being two heads?
 
  • #7
what portion of the distribution are you interested in and why?
I don't think I understand what you mean by "what portion of the distribution".

What is the probability of the next two tosses being two heads?
##P(H_j\cap H_{j+1})=P(H_j)P(H_{j+1})=p^2##.
 
  • #8
PeroK

Homework Statement


"Toss a coin repeatedly. Denote the event of getting heads or tails on the ##i##-th try by ##H_i## and ##T_i## respectively, where ##P(H_i)=p## and ##P(T_i)=1-p##, for some ##0\leq p \leq 1##. Now denote by ##E## the event of getting ##r## consecutive heads before ##s## consecutive tails. Find ##P(E|H_1)##, ##P(E|T_1)##, and ##P(E)##."
It might be worth checking that we are all agreed on what this question asks. It looks a bit odd to me, but I think it means this:

Event E is: you toss a coin ##r+s## times. The first ##r## tosses are Heads, the next ##s## tosses are Tails.

Calculate ##P(E)##, ##P(E|H_1)## and ##P(E|T_1)##.

However, if the first toss is a tail, then clearly and trivially the first ##r## tosses can't be heads. So, that seems an odd thing to ask. It's a bit like asking: what is the probability that a team wins all its games, given that it loses its first game?

PS I don't think it's asking: if you toss a coin an indefinite number of times, what is the probability that you eventually get ##r## consecutive heads followed by ##s## consecutive tails.
 
  • #9
StoneTemplePython
Wait a minute...

I've forgotten a very important detail from the opening post: they have to be ##r## consecutive heads and ##s## consecutive tails. So sorry!
This is a radically different problem now. It would also explain why your problem is prodding you to calculate conditionals based on a first-step analysis of ##P(E|T_1)##, ##P(E|H_1)##, in addition to just calculating ##P(E)##.

This now seems to be a theory-of-runs problem. In particular: what is the probability of having a run of ##r## heads before a run of ##s## tails if you toss a coin indefinitely (or, if you prefer, ##n## tosses, then pass to the limit)? To Be Confirmed: is this the correct interpretation?

There's a very simple and pleasant way to do this with renewal theory, but that is extremely far from what you've listed as your relevant equations.

- - - -
What your question is prodding you to do is to condition on the first step/trial. If the first trial is heads, you work through the conditional events to get an equation for ##P(E|H_1)##. Now condition on the first step being tails -- repeat the process and get an equation for ##P(E|T_1)##.

You should now have two equations with two unknowns. Solve.

The big idea is that if you get heads on your first trial, you either 'win' or you end up in the same place as ##T_1##, but I don't want to give this all away. I'd suggest going back to the drawing board on what sorts of Relevant Equations could be of help here, as I think you need to work through the conditioning yourself. Drawing a picture may be of some help.
- - - -

I don't think I understand what you mean by "what portion of the distribution".
I was prodding you to sum over a certain chunk of the binomial distribution. As the problem description has changed, the point I was making is not relevant.
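If it helps to have numbers to check against along the way, here is a rough Monte Carlo sketch of the runs interpretation (the function and its structure are my own illustration, not part of the problem):

Python:
import random

def simulate_runs(p, r, s, trials=200_000, seed=0):
    # Estimate P(E), P(E|H1), P(E|T1), where E = a run of r heads
    # occurs before a run of s tails in an indefinite sequence of tosses.
    rng = random.Random(seed)
    count = {True: 0, False: 0}   # keyed by 'first toss was heads'
    wins = {True: 0, False: 0}
    for _ in range(trials):
        run_h = run_t = 0
        first = None
        while run_h < r and run_t < s:
            heads = rng.random() < p
            run_h, run_t = (run_h + 1, 0) if heads else (0, run_t + 1)
            if first is None:
                first = heads
        count[first] += 1
        wins[first] += (run_h == r)   # True iff the heads run won
    return ((wins[True] + wins[False]) / trials,   # ~ P(E)
            wins[True] / count[True],              # ~ P(E|H1)
            wins[False] / count[False])            # ~ P(E|T1)

For ##0 < p < 1## one run or the other eventually occurs, so the loop terminates with probability 1.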
 
  • #10
PeroK
This now seems to be a theory-of-runs problem. In particular: what is the probability of having a run of ##r## heads before a run of ##s## tails if you toss a coin indefinitely (or, if you prefer, ##n## tosses, then pass to the limit)? To Be Confirmed: is this the correct interpretation?
Ah! So it's the probability of what happens first: ##r## consecutive heads or ##s## consecutive tails.

That makes sense now.
 
  • #11
What your question is prodding you to do is to condition on the first step/trial. If the first trial is heads, you work through the conditional events to get an equation for ##P(E|H_1)##. Now condition on the first step being tails -- repeat the process and get an equation for ##P(E|T_1)##.

You should now have two equations with two unknowns. Solve.

The big idea is that if you get heads on your first trial, you either 'win' or you end up in the same place as ##T_1##, but I don't want to give this all away. I'd suggest going back to the drawing board on what sorts of Relevant Equations could be of help here, as I think you need to work through the conditioning yourself. Drawing a picture may be of some help.
Okay, so what I came up with was: ##P(E|H_1)=P(\bigcap_{i=2}^{r} H_i) + P(E|T_1) - P((\bigcap_{i=2}^{r} H_i)\cap (E|T_1))##. Basically, I followed the bolded hint and used the equation ##P(A\cup B) = P(A)+P(B) - P(A\cap B)##. What I think is going on is that, given that the first coin toss returned a head, one of two things can happen for ##E## to be possible:

##A## - you get ##r-1## more heads
##B## - you get a tail and have to start over as if your initial coin flip had been a tail

Hence, the probability of ##E## given that ##H_1## has occurred is the probability that either you get ##r-1## more successive heads, or the probability of ##E## given that you get a tail. I'm not sure how this would apply if someone had to restart the coin flips multiple times... I guess that ##E|H_1## is equivalent to the event that, given that any coin flip is a head, you get ##r-1## more heads, or the event that you mess up on the trial, get a tail, and start over as if the initial coin flip were a tail?
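Written as an equation (a compact restatement of cases ##A## and ##B## above, using the fact that after the first tail you are in the same situation as after ##T_1##): given ##H_1##, either the next ##r-1## tosses are all heads, or a tail appears first, so
$$P(E\mid H_1) = p^{r-1} + \left(1-p^{r-1}\right)P(E\mid T_1).$$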
 
  • #12
Okay, now I need to find out what ##E|T_1## is. I have a strong suspicion that ##E|T_1## is ##(\bigcap_{i=2}^{s} T_i)^c## unioned with something involving ##E|H_1##, I suppose.

I'm going to guess: ##E|T_1=(\bigcap_{i=2}^{s} T_i)^c\cup E|H_1##.
 
  • #13
StoneTemplePython
Okay, now I need to find out what ##E|T_1## is. I have a strong suspicion that ##E|T_1## is ##(\bigcap_{i=2}^{s} T_i)^c## unioned with something involving ##E|H_1##, I suppose.

I'm going to guess: ##E|T_1=(\bigcap_{i=2}^{s} T_i)^c\cup E|H_1##.
If it were me, at this stage I'd drop the set intersection notation and just write out the math formula in terms of multiplying, adding, dividing and exponentiation of ##p## and ##(1-p)##. You then should get two equations that you can solve...

(We could also write these out as states, as in a finite-state Markov chain, which will give numeric answers you could use to check your solution against, as well as other insights like exponential decay in the probability of not being absorbed, but I don't think they're the right way to get the closed-form solution you're after.)
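For example, here is a minimal sketch of that numeric check (the state encoding is my own, and it assumes ##r, s \geq 2##): transient states track the current run, and runs of length ##r## and ##s## are absorbing.

Python:
import numpy as np

def run_race_prob(p, r, s, n_steps=10_000):
    # Transient states: a current run of i heads (i=1..r-1) or j tails
    # (j=1..s-1); WIN/LOSE absorb runs of length r and s respectively.
    q = 1 - p
    def H(i): return i - 1               # head-run states: columns 0..r-2
    def T(j): return (r - 1) + (j - 1)   # tail-run states: columns r-1..r+s-3
    WIN, LOSE = r + s - 2, r + s - 1
    P = np.zeros((r + s, r + s))
    for i in range(1, r):                # from a run of i heads
        P[H(i), H(i + 1) if i + 1 < r else WIN] = p
        P[H(i), T(1)] = q
    for j in range(1, s):                # from a run of j tails
        P[T(j), T(j + 1) if j + 1 < s else LOSE] = q
        P[T(j), H(1)] = p
    P[WIN, WIN] = P[LOSE, LOSE] = 1.0
    start = np.zeros(r + s)
    start[H(1)], start[T(1)] = p, q      # distribution after the first toss
    dist = start @ np.linalg.matrix_power(P, n_steps)
    return dist[WIN]                     # ~ P(E); absorption is near-certain

Starting the chain in state H(1) or T(1) instead gives numeric values for ##P(E|H_1)## and ##P(E|T_1)##.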
 
  • #14
Okay, I got my answer for ##P(E)##, ##P(E|H_1)##, and ##P(E|T_1)##. It's not perfect, but I hope it'll get me something.

Thank you all for your help, and such.
 
  • #15
StoneTemplePython
I'd suggest sharing it... as is often the case in probability, the problem is straightforward once you've figured out the conditioning.

There's really no reason you can't get full credit here.
- - - -
Alternative approaches that I know of would be to use renewal theory or martingales. Drawing the picture for the finite-state Markov chain can help with intuition, I think, as long as you don't get too lost in the weeds.
 
  • #16
Okay, here's what I have:

##P(E|H_1)+ (p^{r-1}-1)P(E|T_1)=p^{r-1}##
##-(1-p)^{s-1}P(E|H_1) + P(E|T_1)=1-(1-p)^{s-1}##

##A = \begin{pmatrix}
1 & (p^{r-1}-1) \\
-(1-p)^{s-1} & 1 \\
\end{pmatrix}##

##\det(A)=1+(1-p)^{s-1}(p^{r-1}-1)##

##A^{-1} = \frac{1}{\det(A)}\begin{pmatrix}
1 & (1-p^{r-1}) \\
(1-p)^{s-1} & 1 \\
\end{pmatrix}##

##\begin{pmatrix}
P(E|H_1) \\
P(E|T_1) \\
\end{pmatrix} = A^{-1}
\begin{pmatrix}
p^{r-1} \\
1-(1-p)^{s-1} \\
\end{pmatrix}
=\frac{1}{1+(1-p)^{s-1}(p^{r-1}-1)} \begin{pmatrix}
p^{r-1}+(1-p^{r-1})(1-(1-p)^{s-1}) \\
1-(1-p)^{s-1}+p^{r-1}(1-p)^{s-1} \\
\end{pmatrix}##

##P(E)=\frac{1}{1+(1-p)^{s-1}(p^{r-1}-1)}[p^{r}+p(1-p^{r-1})(1-(1-p)^{s-1})+(1-p)(1-(1-p)^{s-1})+p^{r-1}(1-p)^{s}]##
 
  • #17
StoneTemplePython
##P(E)=\frac{1}{1+(1-p)^{s-1}(p^{r-1}-1)}[p^{r}+p(1-p^{r-1})(1-(1-p)^{s-1})+(1-p)(1-(1-p)^{s-1})+p^{r-1}(1-p)^{s}]##
Did you try testing this out numerically? I think I've transcribed this right, but for p=0.5 and s=10, r=10, I get a probability of 1, which can't be right. I also get a probability of 1 for some other cases I tested.

Python:
def the_OP_func_version1(p, s, r):
    # Direct transcription of the P(E) formula quoted above.
    denominator = 1 + (1-p)**(s-1) * (p**(r-1) - 1)
    numerator_0 = p**r
    numerator_1 = p * (1 - p**(r-1)) * (1 - (1-p)**(s-1))
    numerator_2 = (1-p) * (1 - (1-p)**(s-1))
    numerator_3 = p**(r-1) * (1-p)**s
    result = (numerator_0 + numerator_1 + numerator_2 + numerator_3) / denominator
    return result
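For what it's worth, running this transcription at the values mentioned above, the_OP_func_version1(0.5, 10, 10) does return exactly 1.0.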
- - - -
Note: you'd probably save yourself some work by just doing elimination rather than computing the matrix inverse explicitly, but it's your call.

Also, if you use ##q:= (1-p)## I think that can clean things up quite a bit.

Okay, here's what I have:

##P(E|H_1)+ (p^{r-1}-1)P(E|T_1)=p^{r-1}##
##-(1-p)^{s-1}P(E|H_1) + P(E|T_1)=1-(1-p)^{s-1}##
Can you explain, via enumeration, how you got these two equations?

Incidentally, your first equation looks right, but the second one looks wrong.
 
  • #18
second one looks wrong
This is what I did:

##P(E|T_1)=P\big[\big(\bigcap_{i=2}^{s} T_i\big)^c\cup (E|H_1)\big]##
##=P\big(\big(\bigcap_{i=2}^{s} T_i\big)^c\big) + P(E|H_1) - P\big(\big(\bigcap_{i=2}^{s} T_i\big)^c\big)P(E|H_1)##
##=\big[1-P\big(\bigcap_{i=2}^{s} T_i\big)\big] + P(E|H_1)-\big[1-P\big(\bigcap_{i=2}^{s} T_i\big)\big]P(E|H_1)##
##=\big[1-(1-p)^{s-1}\big]+P(E|H_1)-\big[1-(1-p)^{s-1}\big]P(E|H_1)##
##=\big[1-(1-p)^{s-1}\big]+(1-p)^{s-1}P(E|H_1)##

Of course, this is based on the assumption that ##E|T_1=(\bigcap_{i=2}^{s} T_i)^c \cup (E|H_1)##, so it's likely this assumption is incorrect.
 
  • #19
StoneTemplePython
Of course, this is based on the assumption that ##E|T_1=(\bigcap_{i=2}^{s} T_i)^c \cup (E|H_1)##, so it's likely this assumption is incorrect.
Why would you need an assumption that is likely incorrect?

As I've suggested before, all you need to do is enumerate at most ##s## tosses, where the first one is tails. I'll define ##q := 1-p##. I find it convenient to interpret this in terms of getting a reward of 1 when the run of ##r## heads occurs before the run of ##s## tails, and a reward of 0 otherwise. (I'd then calculate the expected reward, but the expectation operator looks awkwardly like the event ##E##, so I'm leaving that out.)

In the case where you lose outright (tails on all of the first ##s## tosses), the probability is ##q^s##, and there is no reward for losing. But we could also have ##\{qp, q^2p, q^3p, ..., q^{s-1}p\}##, each of which puts us in a state that is identical to having had a heads on the first toss.

We should set commutativity aside for a minute: ##q^k p## should literally be read as 'tails on the first ##k## tosses, then one heads'. Each one of these corresponds to an event, and each one is mutually exclusive of the others, hence additive. If you want, you can label each event by the number of tails ##k## until the first heads.

So the expected reward when the first toss is tails is ##\sum_{k=1}^{s-1} p q^k P(E\big \vert H_1)##... why is this true? Note: I wrote "when the first toss is" here...

If you prefer, we can condition on the first toss being tails, and hence in this conditional world you'd have

the expected reward conditioned the first toss as tails is ##\sum_{k=1}^{s-1} p q^{k-1} P(E\big \vert H_1)##

It's just a small book-keeping difference.
- - - -
Another way to view the above is in terms of the geometric distribution for tosses until the first heads. For completeness, you could think of it as if you always toss a coin until seeing heads, given that the first toss was tails; it's just that there's no reward if your toss number happens to be ##\geq s##:

##= \sum_{k=1}^{\infty} p q^{k-1}\cdot \text{Expected Reward at toss k} =\big( \sum_{k=1}^{s-1} p q^{k-1} \cdot P(E\big \vert H_1)\big) + \big(\sum_{k=s}^{\infty} p q^{k-1}\cdot 0\big)##

where you have an expected reward of ##P(E \big \vert H_1)## for that first slice of the summation, and you get zero reward for all the 'later arrivals'.

- - - -

Of course, there's some other work to be done like simplifying this sum into a finite geometric series, and combining with the other equation and solving.
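For reference, that finite geometric series step works out as follows (a worked version, using ##q = 1-p## so that ##1-q = p##):

$$\sum_{k=1}^{s-1} p\, q^{k-1}\, P(E \mid H_1) = p\,\frac{1-q^{s-1}}{1-q}\, P(E \mid H_1) = \left(1-q^{s-1}\right) P(E \mid H_1)$$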
 
  • #20
Ray Vickson
Okay, here's what I have:

##P(E|H_1)+ (p^{r-1}-1)P(E|T_1)=p^{r-1}##
##-(1-p)^{s-1}P(E|H_1) + P(E|T_1)=1-(1-p)^{s-1}##

##A = \begin{pmatrix}
1 & (p^{r-1}-1) \\
-(1-p)^{s-1} & 1 \\
\end{pmatrix}##

##\det(A)=1+(1-p)^{s-1}(p^{r-1}-1)##

##A^{-1} = \frac{1}{\det(A)}\begin{pmatrix}
1 & (1-p^{r-1}) \\
(1-p)^{s-1} & 1 \\
\end{pmatrix}##

##\begin{pmatrix}
P(E|H_1) \\
P(E|T_1) \\
\end{pmatrix} = A^{-1}
\begin{pmatrix}
p^{r-1} \\
1-(1-p)^{s-1} \\
\end{pmatrix}
=\frac{1}{1+(1-p)^{s-1}(p^{r-1}-1)} \begin{pmatrix}
p^{r-1}+(1-p^{r-1})(1-(1-p)^{s-1}) \\
1-(1-p)^{s-1}+p^{r-1}(1-p)^{s-1} \\
\end{pmatrix}##

##P(E)=\frac{1}{1+(1-p)^{s-1}(p^{r-1}-1)}[p^{r}+p(1-p^{r-1})(1-(1-p)^{s-1})+(1-p)(1-(1-p)^{s-1})+p^{r-1}(1-p)^{s}]##
I am not sure if your answer is correct. I get a slightly different-looking answer, obtained in a different way.

We can model the process as a discrete-time stationary Markov chain with states H1, H2, ..., Hr and T1, T2, ..., Ts. The process at the completion of some toss is in state Hj if the toss has just resulted in the jth 'head' in a run of 'heads' (so in state H1 we have just gotten our first 'head' in a potential run of 'heads'). The states Tk have the same type of meaning, but for 'tails'. The two states Hr and Ts are "absorbing", meaning that the game ends when one of them is first reached.

The transitions are simple: when in state H1, we go to state H2 if the toss is a 'head' and to state T1 if a 'tail'. For ##1 \leq i \leq r-1## we go from state Hi to H(i+1) if 'heads' and to state T1 if 'tails'. Similarly, we go from state Tk to T(k+1) if 'tails' and to H1 if 'heads'.

So, now all we need to do is compute the first-passage probabilities for H1->Hr and T1->Hr. The process starts in state H1 with probability p and in state T1 with probability q = 1-p.

If we let ##x_i## be the probability of eventually reaching Hr, starting from state Hi, and ##y_j## be the probability of reaching Hr, starting from state Tj, then, we have:
$$\begin{array}{l}x_i = p x_{i+1} + q y_1, \; i=1,2, \ldots, r-1\\
x_r = 1\\
y_j = q y_{j+1} + p x_1, \; j=1,2, \ldots, s-1 \\
y_s=0
\end{array}$$
These equations lend themselves to a nice solution for ##x_{r-1}##, then ##x_{r-2}##, ##\ldots,## then ##x_1## in terms of ##p, q## and ##y_1##. So, we have a fairly simple equation for ##x_1## in terms of ##y_1##. Similarly, we obtain a reasonably simple solution for ##y_1## in terms of ##x_1##. Now we have two equations that can be solved to find ##x_1## and ##y_1##. The answer is ##p x_1 + q y_1##.
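Spelling out the telescoping (a worked version of that step; both lines use the finite geometric sum ##\sum_{k=0}^{m} t^k = \frac{1-t^{m+1}}{1-t}## and ##q = 1-p##):
$$\begin{array}{l}
x_1 = p^{r-1}\, x_r + q\, y_1 \sum_{k=0}^{r-2} p^k = p^{r-1} + \left(1-p^{r-1}\right) y_1\\
y_1 = q^{s-1}\, y_s + p\, x_1 \sum_{k=0}^{s-2} q^k = \left(1-q^{s-1}\right) x_1
\end{array}$$
Substituting one line into the other then gives ##x_1## and ##y_1## in closed form.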
 
  • #21
Everyone, thanks for your help and perspectives on how to solve this problem, but I really need to find an expression for ##P(E|T_1)## in addition to ##P(E)##.
 
  • #22
StoneTemplePython
Everyone, thanks for your help and perspectives on how to solve this problem, but I really need to find an expression for ##P(E|T_1)##...
I basically gave you this in post #19... I actually felt like I gave away too much there. What questions do you have?

- - - -
edit:

in addition to ##P(E)##.
Also, for avoidance of doubt here:

I signed off on your ##P(E|H_1)##... If you somehow solve for ##P(E)## using whatever method, then you already know ##P(E|T_1)##. I.e. via Total Probability you have ##P(E) = p \cdot P(E|H_1) + (1-p) \cdot P(E|T_1)##


which I'd actually write using indicator random variables and call Total Expectation:
##P(E) = \mathbb{E}\big[\mathbb I_E\big] = \mathbb{E}\Big[\mathbb{E}\big[\mathbb I_E \big \vert \mathbb I_{\text{Toss 1 is Heads}}\big]\Big] = p \cdot \mathbb{E}\big[\mathbb I_E \big \vert \mathbb I_{\text{Toss 1 is Heads}} = 1\big] + (1-p) \cdot \mathbb{E}\big[\mathbb I_E \big \vert \mathbb I_{\text{Toss 1 is Heads}} = 0\big]##
##= p \cdot P(E|H_1) + (1-p) \cdot P(E|T_1)##
 
  • #23
I basically gave you this in post #19... I actually felt like I gave away too much there. What questions do you have?
You seemed to be giving the expected values of ##E|T_1##, but I can't seem to see the connection between that and ##P(E|T_1)##.
 
  • #24
StoneTemplePython
You seemed to be giving the expected values of ##E|T_1##, but I can't seem to see the connection between that and ##P(E|T_1)##.
I find it preferable to think in terms of indicators sometimes, but they are ultimately Bernoulli random variables, so you have

##\mathbb{E}\big[\mathbb I_E \big \vert \mathbb I_{\text{Toss 1 = Tails}} = 1\big] = P(E|T_1)##
- - - -
edit:
Note that with a little contemplation you'll see that the reward function I set up in post #19 has a payoff of either 0 or 1... it too is a Bernoulli, so that tells you something...
 
  • #25
So... ##P(E|T_1)=p(1-p)^{s-1}P(E|H_1)##...?
 
