# If duelist A shoots first, what is the probability A wins?

## Homework Statement

"Mr. A and Mr. B fire at each other until one of them is hit. Each duelist's probability of hitting is the same every time he fires: Mr. A has probability $a$ of hitting, and Mr. B has probability $b$ of hitting. Given that Mr. A shoots first, calculate the probability that he wins.

Also calculate the distribution of $X$, the number of shots until the duel ends, and $E(X)$."

## The Attempt at a Solution

Denote the event of A winning by $\alpha$ and let $A$ denote the event of A shooting first. So $P(\alpha|A)=a$ if he hits on his first try. If he misses and B then hits him, $P(\alpha|A)=0$ since he loses. But if B misses and A hits on his second try, then $P(\alpha|A)=(1-a)(1-b)a$. In general, for the $n$-th round, $P(\alpha|A)=(1-a)^n(1-b)^na$. So as I see it, the event of A winning is the union of all these events, and its probability follows a geometric series:

$P(\alpha|A)=\lim_{n\rightarrow \infty}\big[a+(1-a)(1-b)a+\dots+(1-a)^n(1-b)^na\big]$
So $P(\alpha|A)=\frac{a}{1-(1-a)(1-b)}$
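A quick Monte Carlo sanity check of this closed form (a sketch; the function and parameter names are mine, not from the problem):

```python
import random

def duel_sim(a, b, trials=200_000, seed=1):
    """Monte Carlo estimate of P(A wins | A shoots first)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        while True:
            if rng.random() < a:      # A fires and hits: A wins
                wins += 1
                break
            if rng.random() < b:      # B fires and hits: A loses
                break
    return wins / trials

a, b = 0.3, 0.4
closed_form = a / (1 - (1 - a) * (1 - b))   # = a / (a + b - a*b)
estimate = duel_sim(a, b)
```

With 200,000 trials the estimate lands well within one percentage point of the closed form.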

I'm not sure if this is correct, since my instructor has informed me that the probability should depend on the parity of the round. As for the second part of the problem, I reason that $X$ is geometrically distributed with some parameter, perhaps $a+b$; but I know that fails for $a+b>1$, since the parameter of a geometric r.v. has to lie between 0 and 1.

Orodruin
Staff Emeritus
Homework Helper
Gold Member
First of all, you cannot use the notation $P(\alpha|A)$ the way you are using it. $P(\alpha|A)$ is one fixed probability; it cannot equal $a$ in one scenario, $0$ in another, and $(1-a)(1-b)a$ in a third. $P(\alpha|A)$ is the probability of $A$ winning (in the end) under the assumption that it is $A$'s turn, and this is the quantity you are looking to compute.

However, your reasoning seems correct, even if your notation is not, but if this was an exam question I would not be surprised to see points deducted for your abuse of notation.

A faster way of reasoning is to create a system of equations for $P(\alpha|A)$ (the probability that $A$ wins given that it is his turn) and $P(\alpha|B)$ (the probability that $A$ wins given that it is $B$'s turn). The probability of $A$ winning given that it is his turn is given by
$$P(\alpha|A) = P(A_h) P(\alpha|A_h) + P(\bar A_h) P(\alpha|\bar A_h) = a + (1-a) P(\alpha|B),$$
where $A_h$ is the event that $A$ hits a given shot. It is clear that $P(A_h) = a$ so $P(\bar A_h) = 1-a$. $P(\alpha|A_h)$ is the probability that $A$ wins if he hits, i.e., one, and $P(\alpha|\bar A_h)$ is the probability that $A$ wins if he misses, which is $P(\alpha|B)$, since it will then be $B$'s turn. At $B$'s turn, we have in the same way
$$P(\alpha|B) = P(B_h) P(\alpha|B_h) + P(\bar B_h) P(\alpha|\bar B_h) = 0 + (1-b) P(\alpha|A),$$
since the probability of $A$ winning is zero if $B$ hits and if not it will be $A$'s turn again. Inserting this equation into the first gives you
$$P(\alpha|A) = a + (1-a)(1-b) P(\alpha|A),$$
from which you can solve for $P(\alpha|A)$ to get your result. Insertion into the second equation gives
$$P(\alpha|B) = \frac{a(1-b)}{1-(1-a)(1-b)}$$
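These two equations can be solved mechanically; a small sketch (names are mine) that encodes the substitution:

```python
def win_probs(a, b):
    """Solve the pair of equations by substitution:
       P(alpha|A) = a + (1-a) * P(alpha|B)
       P(alpha|B) = (1-b) * P(alpha|A)
    """
    r = (1 - a) * (1 - b)        # probability a full A-B round passes with no hit
    p_given_A = a / (1 - r)
    p_given_B = (1 - b) * p_given_A
    return p_given_A, p_given_B
```

Edge cases behave as expected: with $a=1$ it returns $P(\alpha|A)=1$, and with $b=1$ it returns $P(\alpha|B)=0$.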

$X$ will not be geometrically distributed unless $a = b$ (but the number of rounds will be because there is a fixed probability for each round to be the last assuming that you get to it).

if this was an exam question I would not be surprised to see points deducted for your abuse of notation.
Thanks for the heads-up. I'll instead denote by $\alpha_i$ the event of winning on Mr. A's $i$-th shot, so that $P(\alpha_1|A)=a$, $P(\alpha_2|A)=(1-a)(1-b)a$, and in general $P(\alpha_n|A)=(1-a)^{n-1}(1-b)^{n-1}a$, and let $\alpha=\bigcup_{i=1}^\infty \alpha_i$. This way, $P(\alpha|A)=\lim_{n\rightarrow \infty} [a+(1-a)(1-b)a+\dots+(1-a)^{n-1}(1-b)^{n-1}a]$.

$X$ will not be geometrically distributed unless $a = b$ (but the number of rounds will be because there is a fixed probability for each round to be the last assuming that you get to it).
I see.

Ray Vickson
Homework Helper
Dearly Missed

## Homework Statement

"Mr. A and Mr. B fire at each other until one of them is hit. Each duelist's probability of hitting is the same every time he fires: Mr. A has probability $a$ of hitting, and Mr. B has probability $b$ of hitting. Given that Mr. A shoots first, calculate the probability that he wins.

Also calculate the distribution of $X$, the number of shots until the duel ends, and $E(X)$."

## The Attempt at a Solution

Denote the event of A winning by $\alpha$ and let $A$ denote the event of A shooting first. So $P(\alpha|A)=a$ if he hits on his first try. If he misses and B then hits him, $P(\alpha|A)=0$ since he loses. But if B misses and A hits on his second try, then $P(\alpha|A)=(1-a)(1-b)a$. In general, for the $n$-th round, $P(\alpha|A)=(1-a)^n(1-b)^na$. So as I see it, the event of A winning is the union of all these events, and its probability follows a geometric series:

$P(\alpha|A)=\lim_{n\rightarrow \infty}\big[a+(1-a)(1-b)a+\dots+(1-a)^n(1-b)^na\big]$
So $P(\alpha|A)=\frac{a}{1-(1-a)(1-b)}$

I'm not sure if this is correct, since my instructor has informed me that the probability should depend on the parity of the round. As for the second part of the problem, I reason that $X$ is geometrically distributed with some parameter, perhaps $a+b$; but I know that fails for $a+b>1$, since the parameter of a geometric r.v. has to lie between 0 and 1.
The distribution of $X$ is not geometric. Assuming that $X$ is the number of rounds until somebody wins, the probabilities $P_n = P(X = n| A \; \text{ starts})$ are: $$\begin{array}{rcl} P_1 &=& a \\ P_2 &=& (1-a) b \\ P_3 &=& (1-a)(1-b) a \\ P_4 &=& (1-a)^2 (1-b) b \\ P_5 &=& (1-a)^2 (1-b)^2 a \\ \vdots &\vdots& \vdots \end{array}$$ In the above, $A$ wins when $n$ is odd and $B$ wins when $n$ is even.

Your challenge is to find a general formula for $P_n$. Note, however, that getting the mean $EX$ is quite straightforward, while getting the full distribution is less so. (A Markov chain representation is helpful.)
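The pattern in the table can be generated by tracking the probability that the duel survives to each shot; a short sketch (my own naming):

```python
def pmf(a, b, n_max):
    """P(X = n | A starts), n = 1..n_max: walk shot by shot,
    tracking the probability that the duel is still going."""
    probs = []
    alive = 1.0                         # P(duel reaches the current shot)
    for n in range(1, n_max + 1):
        hit = a if n % 2 == 1 else b    # A shoots on odd n, B on even n
        probs.append(alive * hit)
        alive *= 1 - hit
    return probs

a, b = 0.3, 0.4
p = pmf(a, b, 200)
```

The first few entries reproduce $P_1 = a$, $P_2 = (1-a)b$, $P_3 = (1-a)(1-b)a$, and so on, and the probabilities sum to one.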

StoneTemplePython
Gold Member
2019 Award
Similar to what's been said above, I'd call this a transient Markov chain (with 2 absorbing states) or a defective renewal process.

In the latter jargon, the game starts and you can partition this into events $A_1$, $A_2$, and $A_3$

where $A_1$ is the event that player $A$ wins on an odd move (really move 1), $A_2$ is the event that player $B$ wins on an even move (really move 2), and $A_3$ is the event that the process renews. You know the probabilities for each and can solve for the probabilities of winning and the expected number of shots pretty easily.
- - - -
A different approach that I like is to make the Markov chain recurrent, and then use results from your earlier problems about e.g. expected number of coin tosses before a run of heads vs tails and probability of a run of heads before a run of tails. Since these are all variations of the same argument I think this would be good for synthesis...

Your challenge is to find a general formula for $P_n$.
Is it not: $P_n=\begin{cases}a(1-a)^{\frac{1}{2}(n-1)}(1-b)^{\frac{1}{2}(n-1)} & \text{ for odd n }\\ b(1-a)(1-a)^{\frac{1}{2}n-1}(1-b)^{\frac{1}{2}n-1}& \text{ for even n}\end{cases}$?

A different approach that I like is to make the Markov chain recurrent, and then use results from your earlier problems about e.g. expected number of coin tosses before a run of heads vs tails and probability of a run of heads before a run of tails. Since these are all variations of the same argument I think this would be good for synthesis...
Okay, so I'm gonna relabel some of these events and the r.v.

$\alpha=\text{A wins on odd move}\\ \beta = \text{B wins on even move}\\ U_n=\text{n shots are fired}\\ X=\text{Number of shots until duel ends}$

$E(X|\alpha)=E(X|\beta)=0$
$E(X|U_n)=a+b+(1-a)(1+E(X|U_{n+1}))+(1-b)(1+E(X|U_{n+1}))=a+b+(2-(a+b))(1+E(X|U_{n+1}))$

I'm not even sure the last expression is correct. In fact, I am confident that it is not.

StoneTemplePython
Gold Member
2019 Award
Okay, so I'm gonna relabel some of these events and r.v.'s

$\alpha=\text{A wins on odd move}\\ \beta = \text{B wins on even move}\\ U_n=\text{n shots are fired}\\ X=\text{Number of shots until duel ends}$

$E(X|\alpha)=E(X|\beta)=0$
$E(X|U_n)=a+b+(1-a)(1+E(X|U_{n+1}))+(1-b)(1+E(X|U_{n+1}))=a+b+(2-(a+b))(1+E(X|U_{n+1}))$

I'm not even sure the last expression is correct. In fact, I am confident that it is not.
The concern I have here is that there is some memory in this process and $X$ means different things depending on where we are -- some subscript or superscript would seem to be needed; on its own it's confusing, which is especially problematic in probability. It's also an awkward use of conditioning, because your expected value of $X$ conditioned on $U_n$ inherently depends on whether $n$ is even or odd, but there is no way to tell whether $U_n$ is even and $U_{n+1}$ is odd, or vice versa....
- - - - -

With a cold start, i.e. at time zero, we want to model $X$.

One way of interpreting what I was suggesting was to calculate $E\big[X\big] = \bar{X}$. This should be easy. Again there are 3 events of interest -- $A$ kills $B$ on the first shot $(u=1)$ with a certain probability, or $B$ kills $A$ on the second shot $(u = 2)$, and the final mutually exclusive event is $u\gt 2$ (if you want you can call it $u=3$); this too has a certain probability and crucially you get a fresh start here -- i.e. the process probabilistically starts over at $u=3$. This certainly reminds me of the coin tossing runs problems -- but it is in some ways simpler.

- - - -
note the below may be going a touch far afield:
I was hoping you'd have $E\big[X\big]$ in hand, so you could then consider a modified process (call it the Westworld process) where Players A and B take turns as before but $A$ can never die -- what is the expected time until $A$ has killed $B$? Call this $\bar{N}$. I'll give you

$\bar{N} =E\big[N\big] = \sum_{k=0}^\infty (2k+1)a(1-a)^k = 1a + 3a(1-a) + 5a(1-a)^2 + 7a(1-a)^3+...$

This is a standard extension of a geometric series and should be easy to get in closed form.
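As a check (a sketch; names are mine): partial sums of the series match the closed form $\bar{N} = (2-a)/a$, and the ratio $\lambda_N/\lambda_X$ does recover $a/(a+b-ab)$, using the value $\bar{X} = (2-a)/(a+b-ab)$ that is derived later in the thread.

```python
def N_bar_series(a, terms=5_000):
    """Partial sum of  sum_{k>=0} (2k+1) * a * (1-a)**k."""
    return sum((2 * k + 1) * a * (1 - a) ** k for k in range(terms))

a, b = 0.3, 0.4
N_bar = (2 - a) / a                      # closed form of the series above
X_bar = (2 - a) / (a + b - a * b)        # E[X], derived later in the thread
p_alpha = (1 / N_bar) / (1 / X_bar)      # lambda_N / lambda_X
```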

What you end up recovering is

$p(\alpha) = \frac{\frac{1}{\bar{N}}}{\frac{1}{\bar{X}}}=\frac{\lambda_N}{\lambda_X}$
This may seem rather odd but if you think about the idealized renewal process -- a Poisson-- then you have $\lambda_N = \frac{1}{\bar{N}}$ and $\lambda_X = \frac{1}{\bar{X}}$ so given that an arrival (renewal) occurs in the merged process, what is the probability it came from stream (player) $A$? -- it's given by
$\frac{\lambda_N}{\lambda_X}$

if you were to calculate the case where $B$ is the "westworld player" you'd find $\bar{J}$ and in turn you'd see that the renewal rates add, just like in Poissons--- i.e. we have
$\lambda_N + \lambda_J= \frac{1}{\bar{N}} + \frac{1}{\bar{J}} = \frac{1}{\bar{X}} = \lambda_X$

In fact an awful lot of tricky renewal problems can be tackled (at least approximately) if you ask yourself -- what would it look like if I idealized things and assumed it was Poisson? Figuring out what the appropriate expected time until renewal is -- or its inverse: the appropriate renewal rate-- takes some thought.
- - - -
n.b. there's a classic movie called The Duelists -- but for some reason I started thinking about Westworld partway through your thread.

I appreciate all the help and answer-giving you're doing, but I need to derive a solution that I am capable of understanding. I haven't learned renewal theory yet. My professor has also stated that he will probably not accept any answers that look like something beyond the ken of an undergraduate student. What I want to know is: is it possible to derive a solution to this problem using the methods described in my previous topics, or have I not done it correctly?

Ray Vickson
Homework Helper
Dearly Missed
I appreciate all the help and answer-giving you're doing, but I need to derive a solution that I am capable of understanding. I haven't learned renewal theory yet. My professor has also stated that he will probably not accept any answers that look like something beyond the ken of an undergraduate student. What I want to know is: is it possible to derive a solution to this problem using the methods described in my previous topics, or have I not done it correctly?
You already have a correct and complete solution to the $P(X=n)$ problem in your post #6. From that, you can perform elementary summations to get the probabilities that A wins or B wins, as well as the value of $EX.$

Basically, $P(X=1) = a$ and $P(X=2) = (1-a) b$ follow directly from the description of the duel. Then, with $r = (1-a)(1-b)$ (= "return probability"), we have that $P_n = P(X=n|\text{A starts})$ is given as
$$P_{2k+1} = a r^k, \; P_{2k+2} = (1-a) b r^k, \; k=0,1,2, \ldots$$ This holds because (for example) in order to have $X = 2k+1$ we need to return to A $k$ times (A-B-A-B - ... -A) then win in the next shot. That really is all there is to it!
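A numerical sanity check of these formulas (my own sketch): the probabilities sum to one, the odd terms sum to A's winning probability, and the mean comes out to $(2-a)/(a+b-ab)$.

```python
a, b = 0.3, 0.4
r = (1 - a) * (1 - b)          # "return probability"
K = 200                        # enough terms; r**K is negligible

p_odd  = [a * r**k for k in range(K)]             # P(X = 2k+1), A wins
p_even = [(1 - a) * b * r**k for k in range(K)]   # P(X = 2k+2), B wins

total    = sum(p_odd) + sum(p_even)
p_A_wins = sum(p_odd)
EX = sum((2*k + 1) * p_odd[k] + (2*k + 2) * p_even[k] for k in range(K))
```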

StoneTemplePython
Gold Member
2019 Award
I appreciate all the help and answer-giving you're doing, but I need to derive a solution that I am capable of understanding. I haven't learned renewal theory yet. My professor has also stated that he will probably not accept any answers that look like something beyond the ken of an undergraduate student.
To be honest I don't think I've given away any answers -- what I've given is classic probability-theory material that reads: if you know (x) and (y) you can easily get (z). But you still need to get two of the parts here on your own in order to figure out the third. I had figured the analogy with Poisson processes (commonly taught in intro to probability) was important and salient.
- - - - -

Ray's method for getting $E\big[X\big]$ should work more than fine and as he says you already have all the building blocks. What I've pushed is a simpler approach that exploits a special kind of fresh start property that occurs on the third turn. My understanding is you're in an "advanced probability course". The technique I'm mentioning is something commonly taught in an undergrad "introduction to probability" course (e.g. I know they teach this technique in MIT's introductory course 6.041 on OCW /edx) for instance when finding the expected value of a geometric distribution.

The embedded fresh start argument, which is awfully close to your coin runs problems, is really just

$E\Big[X\Big] = E\Big[E\big[X\big \vert T\big]\Big]$
and $E\big[X\big \vert T\big]$ is of course a random variable, that takes on value 1 if $T = 1$ with probability ___, takes on value 2 if $T = 2$ with probability ___ and takes on value ____ if $T = 3+$ with probability ___.

$T$ is, in short, a lot like $X$ but simpler -- it just counts whether the game involves 1, 2, or 3+ shots, whereas $X$ has a much more involved distribution, as you know. So introducing $T$ reduces finding the expected value of $X$ -- a complicated random variable with countably many values -- to working with the simpler random variable $E\big[X\big \vert T\big]$, which takes on only 3 values. Furthermore, $T$ was tailor-made here to exploit structural features of the problem.

If the conditional expectation format is too abstract then no worries -- just draw a tree with three branches here and apply total expectation. That's how Pascal would have solved your problem a few hundred years ago. Trees are underutilized and quite powerful in my view.

And of course an almost identical technique can be used to get the probability of $A$ winning
$P\big(A\big) = E\Big[\mathbb I_A\Big] = E\Big[ E\big[\mathbb I_A\big \vert T\big]\Big]$

What I want to know is, that is it possible to derive a solution to this problem using the methods described in my previous topics, or have I not done it correctly?
The issue in post 6 is that you have the right idea of making use of conditional expectations, but as I said the type of conditioning you're doing on $U_n$ is problematic. It runs into difficulties with the odd vs even structure in the exercise. Your goal should be to come up with a kind of conditioning that exploits the structure of the problem, not one that is hobbled by it.

My goal for this thread was to highlight, that while the problem is not a geometric distribution as others have said, there is a fresh start property at select times in here and that is exactly what you should exploit when conditioning. I believe you did something similar with the coin runs problems -- find a nice structural feature in the problem and condition on it. Again, you already have all the building blocks needed here to calculate $E\big[X\big]$ directly -- I just happen to think the conditional expectations approach helps connect this with other problems you've posted. Personally, I think it helps 'see the forest' and not get lost in the trees (no pun intended.)

Okay, I solved it. $E(X|A)=\frac{2-a}{a+b-ab}$

Your goal should be to come up with a kind of conditioning that exploits the structure of the problem, not one that is hobbled by it.
I was thinking of introducing two more events, $V_m$, $W_l$, where $m$ and $l$ are even integers and odd integers respectively, then writing $X=Y+Z$ where $Y$ is the number of odd-numbered shots fired and $Z$ is the number of even-numbered shots fired. So $E(X|U_n)=E(Y|V_m)+E(Z|W_l)$, where I solve for the two expected values using recursion. Is this what you meant?

StoneTemplePython
Gold Member
2019 Award
Okay, I solved it.
via direct use of the probability distribution you have, I take it? It'd be good to submit your solution here, in short, so we have free rein to give full solutions now that you've solved it. Your call though.

I was thinking of introducing two more events, $V_m$, $W_l$, where $m$ and $l$ are even integers and odd integers respectively, then writing $X=Y+Z$ where $Y$ is the number of odd-numbered shots fired and $Z$ is the number of even-numbered shots fired. So $E(X|U_n)=E(Y|V_m)+E(Z|W_l)$, where I solve for the two expected values using recursion. Is this what you meant?
It could work, and may be worth spending some time on -- if only to get more practice with conditional expectations.

I strongly feel that the best way to look at this is tri-secting it into (i) first shot kills, (ii) second shot kills, or (iii) 3+ shots are taken and the process probabilistically starts over / gets a fresh start, on shot 3. That doesn't mean other approaches can't work -- but the tri-section is just so quick and easy here.

It'd be good to submit your solution here, in short, so we have free rein to give full solutions now that you've solved it.
$E(X|A)=\sum_{k\in \mathbb{N}} kP(X=k)=\sum_{k=1}^\infty (2k-1)ar^{k-1}+\sum_{k=1}^\infty 2k(1-a)br^{k-1}\\ =2a\sum_{k=1}^\infty kr^{k-1} - a\sum_{k=1}^\infty r^{k-1}+2(1-a)b\sum_{k=1}^\infty kr^{k-1}\\ =2a(\frac{d}{dr}(\frac{1}{1-r}))-a\frac{1}{1-r}+2(1-a)b(\frac{d}{dr} (\frac{1}{1-r}))\\ =\frac{2a}{(1-r)^2}-\frac{a(1-r)}{(1-r)^2}+\frac{2(1-a)b}{(1-r)^2}=\frac{(2a+2b-2ab)-a(a+b-ab)}{(a+b-ab)^2}=\frac{2-a}{a+b-ab}$
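Both the series and the final simplification can be double-checked mechanically (a sketch; names mine) -- the algebraic identity exactly with rationals, the sum numerically:

```python
from fractions import Fraction

def EX_series(a, b, terms=80):
    """Truncation of  sum_k (2k-1) a r^{k-1} + 2k (1-a) b r^{k-1}."""
    r = (1 - a) * (1 - b)
    return sum((2*k - 1) * a * r**(k - 1) + 2*k * (1 - a) * b * r**(k - 1)
               for k in range(1, terms + 1))

# exact check of the last step:
# (2a + 2b - 2ab) - a(a + b - ab) == (2 - a)(a + b - ab)
a, b = Fraction(3, 10), Fraction(2, 5)
lhs = (2*a + 2*b - 2*a*b) - a * (a + b - a*b)
rhs = (2 - a) * (a + b - a*b)
```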

Thank you, everyone, for your help.

Ray Vickson
Homework Helper
Dearly Missed
Okay, I solved it. $E(X|A)=\frac{2-a}{a+b-ab}$

I was thinking of introducing two more events, $V_m$, $W_l$, where $m$ and $l$ are even integers and odd integers respectively, then writing $X=Y+Z$ where $Y$ is the number of odd-numbered shots fired and $Z$ is the number of even-numbered shots fired. So $E(X|U_n)=E(Y|V_m)+E(Z|W_l)$, where I solve for the two expected values using recursion. Is this what you meant?
There are (at least) two popular ways to get $E X$.
(1) As the sum $EX = \sum_{k=0}^{\infty}[ (2k+1) a r^k + (2k+2) (1-a) b r^k]$, all of which are standard summations; or
(2) Via a "conditioning" argument (for example, using a Markov chain model): If $N_A =EX$ denotes the expected game length, starting from A and $N_B = EY$ denotes the expected game length starting at B, then we have $N_A = 1+(1-a) N_B,$ and $N_B = 1 +(1-b) N_A$. Solving this simple system gives the answer for $EX = N_A$ (but as a bonus, also gives $N_B$, the expected game length if B starts the duel).
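The two-equation system in (2) solves by substitution; a sketch (names mine):

```python
def expected_lengths(a, b):
    """Solve  N_A = 1 + (1-a) N_B,  N_B = 1 + (1-b) N_A  by substitution."""
    r = (1 - a) * (1 - b)
    N_A = (2 - a) / (1 - r)      # substitute the second equation into the first
    N_B = 1 + (1 - b) * N_A
    return N_A, N_B
```

As a bonus, $N_B$ simplifies to $(2-b)/(a+b-ab)$: the same formula with the roles of $a$ and $b$ swapped, as one would expect for a duel started by B.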

Back when I was teaching this material I often did it both ways. Some students related better to "sample-space" arguments (leading to the expressions for $P(X=n)$ used in method 1), while others lapped up the "conditioning" approach like a cat at the cream dish. I figured it would not do them harm to see it both ways.

BTW: I would do conditioning methods in both the undergraduate and graduate versions of the course, but reserved renewal theory and methods to the graduate version only.

StoneTemplePython
Gold Member
2019 Award
Cool. So the approach I'm suggesting is:

$E\Big[X\Big] = E\Big[E\big[X\big \vert T\big]\Big]$
and $E\big[X\big \vert T\big]$ is of course a random variable, that takes on value 1 if $T = 1$ with probability $p_1 = a$, takes on value 2 if $T = 2$ with probability $p_2 = (1-a)b$ and takes on value $(2 + E\big[X\big])$ if $T = 3+$ with probability $p_{3+} := 1 - a - (1-a)b$.

Rationale:
The first two should be self explanatory.
For the $T = 3+$ case: we've partitioned the probability space into 1, 2 or 3+ shots and hence this sums to $1 = p_1 + p_2 + p_{3+}$. If you get to the third shot, you have a 'sunk cost' of 2 shots that have been taken plus a 'marginal cost' that is a random variable whose distribution is the same as the distribution of $X$ (this is the fresh start) -- and in expectation this value is $E\big[X\big]$. For avoidance of doubt this reads
$E\big[X\big \vert T=3_{+}\big] = 2 + E\big[X\big] = 2 + \mu$.

Hence we have a nicely overlapping subproblem
- - - -
edit:
a slightly more careful way of introducing $T$ is to get rid of the "3+" and say that it takes on real values of 1, 2, and 3 because it is the truncated form of $X$. I.e. $T$ is $X$ for values 1 and 2 but then we truncate the distribution and allocate all residual mass and sample points to the value 3.

It may seem odd that we can get additional insights about the expected value of $X$ by conditioning on a truncated form of $X$ but that is exactly the point here.

I've dropped this in to help clarify what $T$ is but honestly it seems like a bit of a distraction.
- - - -

$\mu = E\Big[X\Big] = E\Big[E\big[X\big \vert T\big]\Big] = 1\cdot p_1 + 2\cdot p_2 + \big(2 + \mu \big)\cdot p_3$
it simplifies to

$\mu = 1\cdot p_1 + 2\cdot p_2 + 2 \cdot p_3 + \mu \cdot p_3$
$(1-p_3)\mu =1\cdot p_1 + 2\cdot p_2 + 2 \cdot p_3$
$\mu =\frac{1\cdot p_1 + 2\cdot p_2 + 2 \cdot p_3}{1-p_3}$
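A quick check of this fixed point (sketch, names mine): the closed form matches $(2-a)/(a+b-ab)$, and iterating the recursion converges to the same value since $p_3 < 1$.

```python
a, b = 0.3, 0.4
p1 = a                    # duel ends on shot 1 (A hits)
p2 = (1 - a) * b          # duel ends on shot 2 (A misses, B hits)
p3 = 1 - p1 - p2          # duel reaches shot 3 and probabilistically restarts

# closed form from the total-expectation identity
mu = (1 * p1 + 2 * p2 + 2 * p3) / (1 - p3)

# the same fixed point, reached by iterating  mu <- p1 + 2 p2 + (2 + mu) p3
mu_iter = 0.0
for _ in range(200):
    mu_iter = p1 + 2 * p2 + (2 + mu_iter) * p3
```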

The only technical nit with this approach is that we know $0\lt \mu$ and indeed we know that $0\leq X$ (i.e. we're dealing with non-negative random variables here), but we are implicitly assuming $\mu \lt \infty$. It's a common issue for any recurrence setup that you require $E\big[\big \vert X\big \vert \big] \lt \infty$ (e.g. it's an issue too for martingales). You can argue that you know $E\big[\big \vert X \big \vert \big] = E\big[X\big] = \mu \lt \infty$ because this entire problem can be solved by embedding it in a finite state Markov chain (whether as an irreducible recurrent chain or an appropriate one with absorbing states), and expected times in these cases are always finite WP1. (Why?) A Markov chain embedding argument is quite powerful, even if you don't model it directly as such. This motivates a final question: