Probability of 1st Arrival From Poisson Process of Rate $\lambda$

rsq_a
I did this question, but I'm unsure of my reasons behind it. I was hoping someone here could go through the problem for me.

Consider the sum of two independent Poisson processes of rates \lambda and \mu. Find the probability that the first arrival of the combined (\lambda + \mu) process comes from the process of rate \lambda.

I got the answer 1/\lambda - 1/(\lambda + \mu). I did so by integrating,

\int_0^\infty P(\text{one event from } \lambda \text{ in }(0, t]) \times P(\text{zero event from } \mu \text{ in }(0, t]) \ dt

But I didn't have any good reason for integrating over everything, other than the idea that I want to add up all the probabilities. Is this the way it's supposed to be done?
 
If you look at your result and let \mu \to 0, it approaches 0, which is not correct; it should approach 1 in that case. And when \lambda \to 0 in your formula, the probability diverges to infinity...

You can use the first formula here for all probabilities below:
http://en.wikipedia.org/wiki/Poisson_process

I would calculate it like this:

P = \int_0^\infty dP(t)

where dP(t) = P1*P2*P3 is built from three probabilities:

P1 = prob. that the lambda-process increases by one in the interval [t, t+dt]
P2 = prob. that the lambda-process has had no arrivals in (0, t]
P3 = prob. that the mu-process has had no arrivals in (0, t+dt]

The three events are independent, so their probabilities can simply be multiplied. The contributions for different t represent disjoint events, so they can be summed.

The end result I get is different from yours: in the limit \lambda \to \infty or \mu \to 0 I get 1, and in the limit \lambda \to 0 (with \mu \neq 0) I get 0. These limits are quite sensible. Also, my result is always in [0, 1] independent of the values of \lambda and \mu, as long as they are both positive.

I first started to fool around with a stopping time, but it seems to be unnecessary.

Torquil
 
Torquil,

I believe I did it correctly this time. It would be very helpful if you could write out your solution method for me. I'm a bit new to probability, so small differences in notation and approach confuse me.

The problem with what I did before was that I was summing,

\sum_t P(N_\lambda(0,t+\Delta t] = 1, N_\mu(0,t+\Delta t] = 0)

where N denotes the number of arrivals from each distribution. This is incorrect because the different events (over all t) are not disjoint. The correct way to proceed is to make things conditional on no events occurring from 0 to t. Like so...

\sum_t P(N_\lambda(0,t] = 0, N_\lambda(t,t+\Delta t] = 1, N_\mu(0,t+\Delta t] = 0)

After this, taking \Delta t \to 0 and integrating gives me \lambda/(\lambda + \mu), which seems to make more sense.

Even if your solution is similar, it would help me a lot if you could post it. Thanks!
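As a quick numerical sanity check of \lambda/(\lambda + \mu), here is a sketch in Python (the function name, trial count, and rates are my own choices, not from the thread). It uses the fact that the first arrival time of a rate-\lambda Poisson process is exponential with rate \lambda, so the question becomes a race between two independent exponentials:

```python
import random

def estimate_first_arrival_prob(lam, mu, trials=200_000, seed=0):
    """Estimate the probability that the first arrival of the merged
    process comes from the rate-lam process, by simulating the first
    arrival time of each process (exponential with its own rate)."""
    rng = random.Random(seed)
    wins = sum(rng.expovariate(lam) < rng.expovariate(mu)
               for _ in range(trials))
    return wins / trials

# With lam = 2 and mu = 3 the exact answer is 2/(2+3) = 0.4
print(estimate_first_arrival_prob(2.0, 3.0))  # close to 0.4
```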
 
rsq_a said:
Torquil,

I believe I did it correctly this time. It would be very helpful if you could write out your solution method for me. I'm a bit new to probability, so small differences in notation and approach confuse me.

The problem with what I did before was that I was summing,

\sum_t P(N_\lambda(0,t+\Delta t] = 1, N_\mu(0,t+\Delta t] = 0)

where N denotes the number of arrivals from each distribution. This is incorrect because the different events (over all t) are not disjoint. The correct way to proceed is to make things conditional on no events occurring from 0 to t. Like so...

\sum_t P(N_\lambda(0,t] = 0, N_\lambda(t,t+\Delta t] = 1, N_\mu(0,t+\Delta t] = 0)

After this, taking \Delta t \to 0 and integrating gives me \lambda/(\lambda + \mu), which seems to make more sense.

That is the same result that I got.

Even if your solution is similar, it would help me a lot if you could post it. Thanks!

For my P1, P2, P3 above, I used (from the wikipedia formula):

P1 = lambda*dt*exp(-lambda*dt)
P2 = exp(-lambda*t)
P3 = exp(-mu*(t+dt))

Multiplying them together, expanding to first order in dt, then putting it into the integral, I get

\int_0^\infty e^{-(\lambda + \mu)t} \, \lambda \, dt = \frac{\lambda}{\lambda + \mu}

Torquil
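For what it's worth, the integral above can also be checked symbolically (a sketch that assumes SymPy is installed; the symbol names are arbitrary):

```python
import sympy as sp

lam, mu, t = sp.symbols('lam mu t', positive=True)
# lam * exp(-(lam + mu) * t) is the first-order integrand from P1*P2*P3
result = sp.integrate(lam * sp.exp(-(lam + mu) * t), (t, 0, sp.oo))
print(sp.simplify(result))  # lam/(lam + mu)
```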
 
torquil said:
That is the same result that I got.
For my P1, P2, P3 above, I used (from the wikipedia formula):

P1 = lambda*dt*exp(-lambda*dt)
P2 = exp(-lambda*t)
P3 = exp(-mu*(t+dt))

Multiplying them together, expanding to first order in dt, then putting it into the integral, I get

\int_0^\infty e^{-(\lambda + \mu)t} \, \lambda \, dt = \frac{\lambda}{\lambda + \mu}

Torquil

Yes, this was exactly what I had done.

Is there a way to do this without infinitesimals? That is, can the problem be solved using only the fact that we know,

P(N_\lambda(0,t] = k) = \frac{e^{-\lambda t} (\lambda t)^k}{k!}

I realize the two definitions of the Poisson Process are equivalent, but an alternative route using this definition would clear things up for me.
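One infinitesimal-free route, sketched from the counting formula alone: the k = 0 case gives P(N_\lambda(0,t] = 0) = e^{-\lambda t}, so the first arrival time T_\lambda satisfies P(T_\lambda > t) = e^{-\lambda t}, i.e. T_\lambda is exponential with rate \lambda and density \lambda e^{-\lambda t}; likewise T_\mu is exponential with rate \mu. Since the processes are independent,

P(T_\lambda < T_\mu) = \int_0^\infty \lambda e^{-\lambda t} \, P(T_\mu > t) \, dt = \int_0^\infty \lambda e^{-(\lambda + \mu)t} \, dt = \frac{\lambda}{\lambda + \mu}

No interval of width \Delta t ever appears; only the k = 0 count probabilities and one differentiation (to get the density of T_\lambda) are needed.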
 