How Does the PDF of Wait Time Vary for Pedestrians at a Traffic Signal?

Mehmood_Yasir
Homework Statement


Pedestrians arrive at a signal to cross the road, with an arrival rate of ##\lambda## arrivals per minute. When the first pedestrian arrives at the signal, he waits exactly time ##T##; we therefore say the first pedestrian arrives at time ##0##. When time reaches ##T##, the light flashes and all other pedestrians who have arrived within ##T## cross the road together with the first pedestrian. The same process then repeats.

What is the PDF of the wait time of each pedestrian, i.e., the ##1^{st}##, ##2^{nd}##, ##3^{rd}, \ldots## pedestrians?

Homework Equations

The arrival time of the ##k^{th}## arrival in a Poisson process is ##Erlang(k,\lambda)## distributed, with density function ##f_{X_k} (t)=\frac{\lambda^{k} t^{k-1} e^{-\lambda t}}{(k-1)!}=Erlang(k,\lambda; t)##.

The Attempt at a Solution


As said, the first pedestrian arrives at time 0 and waits exactly for time ##T##. After time ##T##, all pedestrians who have arrived at the crossing cross the street together with the first one.

Now, consider only the wait time of each pedestrian, in order to find the density function of the wait time.

As the first one waits exactly ##T## minutes, its wait-time density function is just an impulse shifted to ##T##, i.e., ##f_{W_1} (t)=\delta (t-T)##.

Since the probability density function of the arrival time of the second pedestrian in a Poisson process is ##Erlang(1,\lambda; t)=\lambda e^{-\lambda t}##, he waits for ##T-E[X_1|X_1<T]##, where ##E[X_1|X_1<T]## is the mean of the conditional arrival time of the second pedestrian given that it is less than ##T##. What is the pdf of the wait time of the second pedestrian? Can I say that, since the arrival time of the second pedestrian follows the ##Erlang(1,\lambda; t)=\lambda e^{-\lambda t}## distribution, the wait-time pdf should be ##\lambda e^{-\lambda (t-(T-E[X_1|X_1<T]))}##? Similarly, what about the third, fourth, ... pedestrians' wait-time density functions?
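As a sanity check on any candidate pdf, the process itself is easy to simulate. A minimal sketch in Python (the values ##\lambda=2##, ##T=1## and the index ##k=2## are illustrative assumptions, not part of the problem):

```python
import math
import random

def simulate_waits(lam, T, k, n_cycles, seed=0):
    """Monte Carlo wait times of the k-th pedestrian (k >= 2).

    The first pedestrian arrives at t = 0 and waits exactly T; later
    arrivals form a Poisson process of rate lam, so the k-th pedestrian
    arrives after k - 1 exponential inter-arrival gaps.  Cycles in which
    the k-th pedestrian arrives after T are discarded.
    """
    rng = random.Random(seed)
    waits = []
    for _ in range(n_cycles):
        t_k = sum(rng.expovariate(lam) for _ in range(k - 1))
        if t_k <= T:
            waits.append(T - t_k)   # waits from arrival until the flash at T
    return waits

lam, T, k, n = 2.0, 1.0, 2, 100_000
waits = simulate_waits(lam, T, k, n)
# Fraction of cycles in which pedestrian k arrives in time; for k = 2 this
# should be close to P(t_2 <= T) = 1 - exp(-lam * T).
print(len(waits) / n)
```

A histogram of `waits` can then be compared against whatever closed form is derived for ##f_{W_k}##.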
 
Mehmood_Yasir said:
As said, the first pedestrian arrives at time 0 and waits exactly for time ##T##. After time ##T##, all pedestrians who have arrived at the crossing cross the street together with the first one.

Now, consider only the wait time of each pedestrian, in order to find the density function of the wait time.

As the first one waits exactly ##T## minutes, its wait-time density function is just an impulse shifted to ##T##, i.e., ##f_{W_1} (t)=\delta (t-T)##.
I agree.

Mehmood_Yasir said:
Since the probability density function of the arrival time of the second pedestrian in a Poisson process is ##Erlang(1,\lambda; t)=\lambda e^{-\lambda t}##, he waits for ##T-E[X_1|X_1<T]##, where ##E[X_1|X_1<T]## is the mean of the conditional arrival time of the second pedestrian given that it is less than ##T##. What is the pdf of the wait time of the second pedestrian? Can I say that, since the arrival time of the second pedestrian follows the ##Erlang(1,\lambda; t)=\lambda e^{-\lambda t}## distribution, the wait-time pdf should be ##\lambda e^{-\lambda (t-(T-E[X_1|X_1<T]))}##? Similarly, what about the third, fourth, ... pedestrians' wait-time density functions?
I think the first question you need to answer is: are you finding the wait time of the ##k##th pedestrian, or of any pedestrian other than the first one?

For the wait time of the second pedestrian, your assumption that the second pedestrian will wait for time ##T-E[X_1|X_1\leq T]## is not correct. Start by defining ##W_k##, the wait time of the ##k##th pedestrian, in terms of the arrival time ##t_k##. Then find ##P(W_k \leq w)##.
 
tnich said:
I agree. I think the first question you need to answer is: are you finding the wait time of the ##k##th pedestrian, or of any pedestrian other than the first one?

For the wait time of the second pedestrian, your assumption that the second pedestrian will wait for time ##T-E[X_1|X_1\leq T]## is not correct. Start by defining ##W_k##, the wait time of the ##k##th pedestrian, in terms of the arrival time ##t_k##. Then find ##P(W_k \leq w)##.

Just to be clear: for some particular value of ##\lambda##, let's assume ##m=4## more pedestrians arrived within time ##T##, and all 5 (including the first) crossed the street after ##T##. The wait time of the first is clearly ##T##, so its wait-time density function is also known. My objective is to find the wait-time density functions of all subsequent pedestrians, i.e., the 2nd, 3rd, 4th and 5th in the above assumption. Generally speaking, this ##m## can take any value, as it is unconditional in my problem. But I am at least interested in finding the density functions of the wait times of the 2nd, 3rd, and 4th pedestrians after the first. As you mentioned the wait time of the ##k^{th}## pedestrian: here ##1<k\leq m##, referring to the wait times of all subsequent pedestrians except the first, which I already know.

What you asked
tnich said:
Start by defining ##W_k##, the wait time of the ##k##th pedestrian, in terms of the arrival time ##t_k##. Then find ##P(W_k \leq w)##.
I think ##P(W_k \leq w)=1-P(W_k > w)=1-e^{-\lambda w}##. Is it correct?
 
Mehmood_Yasir said:
Just to be clear: for some particular value of ##\lambda##, let's assume ##m=4## more pedestrians arrived within time ##T##, and all 5 (including the first) crossed the street after ##T##. The wait time of the first is clearly ##T##, so its wait-time density function is also known. My objective is to find the wait-time density functions of all subsequent pedestrians, i.e., the 2nd, 3rd, 4th and 5th in the above assumption. Generally speaking, this ##m## can take any value, as it is unconditional in my problem. But I am at least interested in finding the density functions of the wait times of the 2nd, 3rd, and 4th pedestrians after the first. As you mentioned the wait time of the ##k^{th}## pedestrian: here ##1<k\leq m##, referring to the wait times of all subsequent pedestrians except the first, which I already know.

What you asked

I think ##P(W_k \leq w)=1-P(W_k > w)=1-e^{-\lambda w}##. Is it correct?
What is the wait time of the kth pedestrian in terms of his arrival time?
 
tnich said:
What is the wait time of the kth pedestrian in terms of his arrival time?
If ##t_k## is the arrival time of the ##k^{th}## pedestrian, then the wait time is ##T-t_k##.
 
Mehmood_Yasir said:
If ##t_k## is the arrival time of the ##k^{th}## pedestrian, then the wait time is ##T-t_k##.
Right. And your next step is?
 
tnich said:
Right. And your next step is?
Wait time density function of ##k^{th}## pedestrian.
 
Mehmood_Yasir said:
Wait time density function of ##k^{th}## pedestrian.
Your next step is to substitute ##T - t_k## for ##W_k##.
 
tnich said:
Your next step is to substitute ##T - t_k## for ##W_k##.
Do you mean,
##P(W_k \leq w)=1-P(W_k > w)=1-e^{-\lambda w}## by
##P((T-t_k) \leq t)=1-P((T-t_k) > t)=1-e^{-\lambda t}##
 
  • #10
Mehmood_Yasir said:
Do you mean,
##P(W_k \leq w)=1-P(W_k > w)=1-e^{-\lambda w}## by
##P((T-t_k) \leq t)=1-P((T-t_k) > t)##
It's good up to here. Now you can use the distribution of ##t_k## to get the distribution of ##W_k##.
 
  • #11
tnich said:
It's good up to here. Now you can use the distribution of ##t_k## to get the distribution of ##W_k##.
Is the density function of ##t_k## distributed as ##Erlang(k,\lambda;t)= \frac{\lambda^{k} t^{k-1} e^{-\lambda t}}{(k-1)!}##?
 
  • #12
Mehmood_Yasir said:
Is the density function of ##t_k## distributed as ##Erlang(k,\lambda;t)= \frac{\lambda^{k} t^{k-1} e^{-\lambda t}}{(k-1)!}##?
Yes, but remember that is the Erlang pdf, not the cdf.
 
  • #13
tnich said:
Yes, but remember that is the Erlang pdf, not the cdf.
For the cdf, it will be ##F_{t_k}=1-\sum_{m=0}^{k-1} \frac{{(\lambda t)}^{m} e^{-\lambda t}} {m!}##,
but at the moment I am looking for the pdf of the wait time. So if the pdf of the arrival time ##t_k## is ##\frac{\lambda^k t^{k-1} e^{-\lambda t}}{(k-1)!}##, is the pdf of ##W_k=T-t_k## something like a shifted arrival pdf? I am actually not sure about this: ##f_{W_k}=\frac{\lambda^k {(T-t)}^{k-1} e^{-\lambda (T-t)}}{(k-1)!}##, is it correct?

and resultant cdf as ##F_{W_k}= 1-\sum_{m=0}^{k-1} \frac{{(\lambda (T-t))}^{m} e^{-\lambda (T-t)}} {m!}##
 
  • #14
Mehmood_Yasir said:
For the cdf, it will be ##F_{t_k}=1-\sum_{m=0}^{k-1} \frac{{(\lambda t)}^{m} e^{-\lambda t}} {m!}##,
but at the moment I am looking for the pdf of the wait time. So if the pdf of the arrival time ##t_k## is ##\frac{\lambda^k t^{k-1} e^{-\lambda t}}{(k-1)!}##, is the pdf of ##W_k=T-t_k## something like a shifted arrival pdf? I am actually not sure about this: ##f_{W_k}=\frac{\lambda^k {(T-t)}^{k-1} e^{-\lambda (T-t)}}{(k-1)!}##, is it correct?

and resultant cdf as ##F_{W_k}= 1-\sum_{m=0}^{k-1} \frac{{(\lambda (T-t))}^{m} e^{-\lambda (T-t)}} {m!}##
You keep making guesses, but guessing rarely gives the right answer. I suggest that you work the problem through to a solution and then check to make sure the solution makes sense. Then you will know you have the right answer.

So here is a method for getting the pdf of a random variable ##U## given the cdf of another random variable ##V##.
1) Express ##V## as a function of ##U##: ##V = g(U)##

2) Express the probability of ##U## in terms of the probability of ##V##:
##P(U\leq u)##
##=P(g^{-1}(V)\leq u)=\begin{cases}
P(V\leq g(u)) & \text{if } V=g(U) \text{ is an increasing function of } U \\
P(V\geq g(u)) & \text{if } V=g(U) \text{ is a decreasing function of } U
\end{cases}##
(Why do you need to consider whether g(U) is increasing or decreasing?)

3) Given that ##F(v)## is the cdf of ##V##, substitute ##F(g(u))## for ##P(V\leq g(u))## in your equation for ##P(U\leq u)##. (If ##g(U)## is decreasing, you may need to express ##P(V\geq g(u))## in terms of ##P(V\leq g(u))## first.)

4) Differentiate ##P(U\leq u)## with respect to ##u## to get a function h(u).

5) Integrate ##h(u)## over the range of ##u## to get a constant
##C=\int_{u_{min}}^{u_{max}}h(u)\,du##

If ##C = 1##, then ##h(u)## is your pdf. If not, then ##\frac 1 C h(u)## is your pdf. (Why?)
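The five steps can be walked through numerically for the simplest case, ##V## exponential with ##g(u) = T - u##. A sketch with assumed illustrative values ##\lambda = 2##, ##T = 1##, doing the derivative and integral numerically rather than by hand:

```python
import math

lam, T = 2.0, 1.0   # illustrative values

# Steps 1-3: with V = g(U) = T - U (a decreasing function of U) and
# V ~ Exponential(lam), P(U <= u) = P(V >= T - u) = exp(-lam * (T - u)).
def cdf_U(u):
    return math.exp(-lam * (T - u))

# Step 4: differentiate numerically to get h(u).
def h(u, eps=1e-6):
    return (cdf_U(u + eps) - cdf_U(u - eps)) / (2 * eps)

# Step 5: integrate h over the range 0 <= u <= T (midpoint rule).
n = 10_000
C = sum(h((i + 0.5) * T / n) * T / n for i in range(n))
print(C)   # close to 1 - exp(-lam * T), i.e. not 1

def pdf(u):
    # h alone is not a density; dividing by C renormalizes it.
    return h(u) / C
```

The printed constant comes out below 1, which is exactly why the final renormalization step is needed.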
 
  • #15
tnich said:
You keep making guesses, but guessing rarely gives the right answer. I suggest that you work the problem through to a solution and then check to make sure the solution makes sense. Then you will know you have the right answer.

So here is a method for getting the pdf of a random variable ##U## given the cdf of another random variable ##V##.
1) Express ##V## as a function of ##U##: ##V = g(U)##

2) Express the probability of ##U## in terms of the probability of ##V##:
##P(U\leq u)##
##=P(g^{-1}(V)\leq u)=\begin{cases}
P(V\leq g(u)) & \text{if } V=g(U) \text{ is an increasing function of } U \\
P(V\geq g(u)) & \text{if } V=g(U) \text{ is a decreasing function of } U
\end{cases}##
(Why do you need to consider whether g(U) is increasing or decreasing?)

3) Given that ##F(v)## is the cdf of ##V##, substitute ##F(g(u))## for ##P(V\leq g(u))## in your equation for ##P(U\leq u)##. (If ##g(U)## is decreasing, you may need to express ##P(V\geq g(u))## in terms of ##P(V\leq g(u))## first.)

4) Differentiate ##P(U\leq u)## with respect to ##u## to get a function h(u).

5) Integrate ##h(u)## over the range of ##u## to get a constant
##C=\int_{u_{min}}^{u_{max}}h(u)\,du##

If ##C = 1##, then ##h(u)## is your pdf. If not, then ##\frac 1 C h(u)## is your pdf. (Why?)

let me see again to get to the answer of pdf of ##W_k##
 
  • #16
You keep making guesses, but guessing rarely gives the right answer. I suggest that you work the problem through to a solution and then check to make sure the solution makes sense. Then you will know you have the right answer.

So here is a method for getting the pdf of a random variable ##U## given the cdf of another random variable ##V##.
say U ##\to## ##W_k##, V ##\to## ##t_k##, u ##\to## ##w##, v ##\to## ##t##
1) Express ##V## as a function of ##U##: ##V = g(U)##
##W_k=T-t_k##
##t_k=T-W_k##

2) Express the probability of ##U## in terms of the probability of ##V##:
##P(U\leq u)##
##=P(g^{-1}(V)\leq u)=\begin{cases}
P(V\leq g(u)) & \text{if } V=g(U) \text{ is an increasing function of } U \\
P(V\geq g(u)) & \text{if } V=g(U) \text{ is a decreasing function of } U
\end{cases}##
##P(W_k \leq w)=P(T-t_k \leq w)=1-P(T-t_k > w)##

2)
(Why do you need to consider whether g(U) is increasing or decreasing?)
It should be increasing, as a cdf is always an increasing function.

3) Given that ##F(v)## is the cdf of ##V##,
##F(v)=F(t)=1-\sum_{m=0}^{k-1} \frac{{(\lambda t)}^m e^{-\lambda t}}{m!}##,

substitute ##F(g(u))## for ##P(V\leq g(u))## in your equation for ##P(U\leq u)##. (If ##g(U)## is decreasing, you may need to express ##P(V\geq g(u))## in terms of ##P(V\leq g(u))## first.)
##F(g(u))=F(T-w)= 1-\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!} ##

4) Differentiate ##P(U\leq u)## with respect to ##u## to get a function h(u).
## \frac{d}{dw} P(W_k \leq w)= \frac{d}{dw} P(T-t_k \leq w)=\frac{d}{dw} (1-P(T-t_k > w))##
## =\frac{d}{dw} (1-e^{-\lambda w})##
## =\lambda e^{-\lambda w}##

5) Integrate ##h(u)## over the range of ##u## to get a constant
##C=\int_{u_{min}}^{u_{max}}h(u)\,du##
##C=\int_{0}^{T}\lambda e^{-\lambda w} dw##
##C=1- e^{-\lambda T}##

If ##C = 1##, then ##h(u)## is your pdf. If not, then ##\frac 1 C h(u)## is your pdf. (Why?)
##\frac 1 C## is a normalization such that the pdf integrates to 1.
so pdf of ##W## is ## f_W=\frac{ \lambda e^{-\lambda w} } {1- e^{-\lambda T}}##
## f_W=\frac{ \lambda e^{-\lambda (T-t)} } {1- e^{-\lambda T}}##
 
  • #17
Mehmood_Yasir said:
say U ##\to## ##W_k##, V ##\to## ##t_k##, u ##\to## ##w##, v ##\to## ##t##

##W_k=T-t_k##
##t_k=T-W_k##
##P(W_k \leq w)=P(T-t_k \leq w)=1-P(T-t_k > w)##
The point is to get an equation for cdf of ##W_k## in terms of the cdf of ##t_k##.
You are not quite there yet. You need to end up with an expression with a term like ##P(t_k < \text{ something})## in it.

Mehmood_Yasir said:
It should be increasing, as a cdf is always an increasing function.
##g(u)## is not a cdf. It is the relationship between ##u## and ##v##. The point of this question is: why do you need to say ##P(W_k \leq w)=1-## something?

Mehmood_Yasir said:
##F(v)=F(t)=1-\sum_{m=0}^{k-1} \frac{{(\lambda t)}^m e^{-\lambda t}}{m!}##,

##F(g(u))=F(T-w)= 1-\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!} ##
This will not be right until you get step 2) right.
Mehmood_Yasir said:
## \frac{d}{dw} P(W_k \leq w)= \frac{d}{dw} P(T-t_k \leq w)=\frac{d}{dw} (1-P(T-t_k > w))##
## =\frac{d}{dw} (1-e^{-\lambda w})##
## =\lambda e^{-\lambda w}##
Here, you need to differentiate the result of step 3) (when you get it figured out).
Mehmood_Yasir said:
##C=\int_{0}^{T}\lambda e^{-\lambda w} dw##
##C=1- e^{-\lambda T}##
##\frac 1 C## is a normalization such that the pdf integrates to 1.
so pdf of ##W## is ## f_W=\frac{ \lambda e^{-\lambda w} } {1- e^{-\lambda T}}##
## f_W=\frac{ \lambda e^{-\lambda (T-t)} } {1- e^{-\lambda T}}##
You have the right idea here but you are applying it to the wrong distribution.
 
  • #18
tnich said:
The point is to get an equation for cdf of ##W_k## in terms of the cdf of ##t_k##.
You are not quite there yet. You need to end up with an expression with a term like ##P(t_k < \text{ something})## in it.
##P(W_k \leq w)=P(T-t_k \leq w)##
##P(W_k \leq w)=P(t_k \geq T-w)##
##P(W_k \leq w)=1-P(t_k < T-w)##
The rest I will look at again.
 
  • #19
tnich said:
You keep making guesses, but guessing rarely gives the right answer. I suggest that you work the problem through to a solution and then check to make sure the solution makes sense. Then you will know you have the right answer.

So here is a method for getting the pdf of a random variable ##U## given the cdf of another random variable ##V##.
1) Express ##V## as a function of ##U##: ##V = g(U)##
##t_k=T-W_k##
2) Express the probability of ##U## in terms of the probability of ##V##:
##P(U\leq u)##
##=P(g^{-1}(V)\leq u)=\begin{cases}
P(V\leq g(u)) & \text{if } V=g(U) \text{ is an increasing function of } U \\
P(V\geq g(u)) & \text{if } V=g(U) \text{ is a decreasing function of } U
\end{cases}##
##P(W_k \leq w)=P(T-t_k \leq w)##
##P(W_k \leq w)=P(t_k \geq T-w)##
##t_k## is a decreasing function of ##W_k## on the interval ##0## to ##T##: as ##W_k## increases, ##t_k## decreases. So,
##P(W_k \leq w)=P(t_k \geq T-w)##

3) Given that ##F(v)## is the cdf of ##V##,
##F(v)=F(t)=1-\sum_{m=0}^{k-1} \frac{{(\lambda t)}^m e^{-\lambda t}}{m!}##,

substitute ##F(g(u))## for ##P(V\leq g(u))## in your equation for ##P(U\leq u)##. (If ##g(U)## is decreasing, you may need to express ##P(V\geq g(u))## in terms of ##P(V\leq g(u))## first.)
I am a bit stuck in this part,
##v=g(u)##
##t=T-w##
##F(g(u))=1-\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}##
now
##P(U\leq u)=P(W_k \leq w)##
##P(W_k \leq w)=P(t_k \geq T-w)##
since ##t_k## is decreasing
##P(W_k \leq w)=1-P(t_k < T-w)##
##P(W_k \leq w)=1-(1-\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!})##
##P(W_k \leq w)=\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}##
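This cdf can be checked against a direct simulation of the arrival time. A sketch with assumed example values ##\lambda = 2##, ##T = 1##, ##k = 3##, ##w = 0.4## (note the expression is the unconditional ##P(T - t_k \leq w)##, so it does not reach 0 at ##w = 0##, which is where the later normalization comes in):

```python
import math
import random

lam, T, k, w = 2.0, 1.0, 3, 0.4   # illustrative values

def cdf_W(w):
    """P(W_k <= w) = P(t_k >= T - w) = sum_{m=0}^{k-1} (lam(T-w))^m e^{-lam(T-w)} / m!"""
    x = lam * (T - w)
    return sum(x ** m * math.exp(-x) / math.factorial(m) for m in range(k))

# Monte Carlo: t_k is a sum of k exponential inter-arrival gaps, i.e. Erlang(k, lam).
rng = random.Random(1)
n = 200_000
hits = sum(1 for _ in range(n)
           if T - sum(rng.expovariate(lam) for _ in range(k)) <= w)

print(hits / n, cdf_W(w))   # the two numbers should agree to about 2 decimals
```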
 
  • #20
Mehmood_Yasir said:
##t_k=T-W_k##

##P(W_k \leq w)=P(T-t_k \leq w)##
##P(W_k \leq w)=P(t_k \geq T-w)##
##t_k## is a decreasing function of ##W_k## on the interval ##0## to ##T##: as ##W_k## increases, ##t_k## decreases. So,
##P(W_k \leq w)=P(t_k \geq T-w)##
##F(v)=F(t)=1-\sum_{m=0}^{k-1} \frac{{(\lambda t)}^m e^{-\lambda t}}{m!}##
I am a bit stuck in this part,
##v=g(u)##
##t=T-w##
##F(g(u))=1-\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}##
now
##P(U\leq u)=P(W_k \leq w)##
##P(W_k \leq w)=P(t_k \geq T-w)##
since ##t_k## is decreasing
##P(W_k \leq w)=1-P(t_k < T-w)##
##P(W_k \leq w)=1-(1-\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!})##
##P(W_k \leq w)=\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}##
That's what I got, too, so I think you are on the right track. Keep going. Differentiate your result.
 
  • #21
tnich said:
That's what I got, too, so I think you are on the right track. Keep going. Differentiate your result.
Taking the derivative with respect to ##w## to get ##h(u)## is getting complex:
## =\frac{d}{dw}P(W_k \leq w)##
##=\frac{d}{dw} \sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}##
##= \sum_{m=0}^{k-1} \frac{d}{dw} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}##
##= \sum_{m=0}^{k-1} \frac{1}{m!} \frac{d}{dw} {(\lambda (T-w))}^m e^{-\lambda (T-w)}##
using the product rule,
##= \sum_{m=0}^{k-1} \frac{1}{m!} \Big(-m \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)} + {(\lambda^m (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
##= \sum_{m=0}^{k-1} \frac{\lambda(T-w) - m}{m!} \Big(\lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)} \Big)##
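Before doing the cancellation by hand, the claim can be verified numerically: a finite-difference derivative of the sum should match the single term ##\frac{\lambda^k (T-w)^{k-1} e^{-\lambda(T-w)}}{(k-1)!}## that the telescoping eventually leaves. A sketch with assumed values ##\lambda = 2##, ##T = 1##, ##k = 4##:

```python
import math

lam, T, k = 2.0, 1.0, 4   # illustrative values

def S(w):
    """The sum being differentiated: sum_{m=0}^{k-1} (lam(T-w))^m e^{-lam(T-w)} / m!"""
    x = lam * (T - w)
    return sum(x ** m * math.exp(-x) / math.factorial(m) for m in range(k))

def collapsed(w):
    """Conjectured value of dS/dw after all but one term of the sum cancel."""
    return lam ** k * (T - w) ** (k - 1) * math.exp(-lam * (T - w)) / math.factorial(k - 1)

w, eps = 0.3, 1e-6
numeric = (S(w + eps) - S(w - eps)) / (2 * eps)   # central finite difference
print(numeric, collapsed(w))   # should agree to several decimals
```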
 
  • #22
Mehmood_Yasir said:
Taking the derivative with respect to ##w## to get ##h(u)## is getting complex:
## =\frac{d}{dw}P(W_k \leq w)##
##=\frac{d}{dw} \sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}##
##= \sum_{m=0}^{k-1} \frac{d}{dw} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}##
##= \sum_{m=0}^{k-1} \frac{1}{m!} \frac{d}{dw} {(\lambda (T-w))}^m e^{-\lambda (T-w)}##
using the product rule,
##= \sum_{m=0}^{k-1} \frac{1}{m!} \Big(-m \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)} + {(\lambda^m (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
Try stopping here and multiplying through by ##\frac 1 {m!}##. You should find that most of the terms of the summation cancel out.
 
  • #23
tnich said:
Try stopping here and multiplying through by ##\frac 1 {m!}##. You should find that most of the terms of the summation cancel out.
Also, think about the term for ##m=0##. What is ##\frac d {dw} \frac{{(\lambda (T-w))}^0 e^{-\lambda (T-w)}}{0!}##?
 
  • #24
tnich said:
Also, think about the term for ##m=0##. What is ##\frac d {dw} \frac{{(\lambda (T-w))}^0 e^{-\lambda (T-w)}}{0!}##?
##\lambda e^{-\lambda (T-w)}##
 
  • #25
tnich said:
Try stopping here and multiplying through by ##\frac 1 {m!}##. You should find that most of the terms of the summation cancel out.
##= \sum_{m=0}^{k-1} \frac{1}{m!} \Big(-m \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)} + {(\lambda^m (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
##= \sum_{m=0}^{k-1} \frac{1}{m!} \Big(-m \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)}\Big) + \frac{1}{m!} \Big( {(\lambda^m (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
I see that only the ##m## in the first term is cancelled:
##= \sum_{m=0}^{k-1} \frac{1}{(m-1)!} \Big(- \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)}\Big) + \frac{1}{m!} \Big( {(\lambda^m (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
 
  • #26
Mehmood_Yasir said:
##= \sum_{m=0}^{k-1} \frac{1}{m!} \Big(-m \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)} + {(\lambda^m (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
##= \sum_{m=0}^{k-1} \frac{1}{m!} \Big(-m \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)}\Big) + \frac{1}{m!} \Big( {(\lambda^m (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
I see that only the ##m## in the first term is cancelled:
##= \sum_{m=0}^{k-1} \frac{1}{(m-1)!} \Big(- \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)}\Big) + \frac{1}{m!} \Big( {(\lambda^m (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
What is ## \frac{1}{(m-1)!} \Big(- \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)}\Big)## when ##m = p##, and what is
##\frac{1}{m!} \Big( {(\lambda^m (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)## when ##m = p-1##?

Oh, and you have made a mistake on this part ##{(\lambda^m (T-w))}^m##.
 
  • #27
tnich said:
Also, think about the term for ##m=0##. What is ##\frac d {dw} \frac{{(\lambda (T-w))}^0 e^{-\lambda (T-w)}}{0!}##?
I think your term should have another ##\lambda## in the numerator for ##m=0##:
##\frac{d}{dw} \frac{{(\lambda (T-w))}^0 \lambda e^{-\lambda (T-w)}}{0!}##?
##=\lambda^2 e^{-\lambda (T-w)} ##
 
  • #28
Mehmood_Yasir said:
I think your term should have another ##\lambda## in the numerator for ##m=0##:
##\frac{d}{dw} \frac{{(\lambda (T-w))}^0 \lambda e^{-\lambda (T-w)}}{0!}##?
##=\lambda^2 e^{-\lambda (T-w)} ##
##x^0=##?
 
  • #29
Mehmood_Yasir said:
##= \sum_{m=0}^{k-1} \frac{1}{m!} \Big(-m \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)} + {(\lambda^m (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
##= \sum_{m=0}^{k-1} \frac{1}{m!} \Big(-m \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)}\Big) + \frac{1}{m!} \Big( {(\lambda^m (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
I see that only the ##m## in the first term is cancelled:
##= \sum_{m=0}^{k-1} \frac{1}{(m-1)!} \Big(- \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)}\Big) + \frac{1}{m!} \Big( {(\lambda^m (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
after correcting the typo,
##= \sum_{m=0}^{k-1} \frac{1}{(m-1)!} \Big(- \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)}\Big) + \frac{1}{m!} \Big( {(\lambda (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
You are right; if I expand the series,
##= \sum_{m=0}^{k-1} \frac{1}{(m-1)!} \Big(- \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)}\Big) + \sum_{m=0}^{k-1} \frac{1}{m!} \Big( {(\lambda (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##

##= - \Big(0+\lambda e^{-\lambda (T-w)} +\lambda^2 (T-w) e^{-\lambda (T-w)}+...+ {\lambda^{k-1}{(T-w)}^{k-2} e^{-\lambda (T-w)}} {(k-2)!} \Big) + \Big( \lambda e^{-\lambda (T-w)} +\lambda^2 (T-w) e^{-\lambda (T-w)}+...+ {\lambda^{k-1}{(T-w)}^{k-2} e^{-\lambda (T-w)}} {(k-2)!} + {\lambda^{k}{(T-w)}^{k-1} e^{-\lambda (T-w)}} {(k-1)!}\Big)##
so we are left only with this nicely,
##= {\lambda^{k}{(T-w)}^{k-1} e^{-\lambda (T-w)}} {(k-1)!}\Big)##
 
  • #30
tnich said:
##x^0=##?
Oh, I see what you mean. You think I left out a λ. But I am just taking the 0th term from ##\frac{d}{dw} \sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}##. There is no extra λ there.
Mehmood_Yasir said:
after correcting the typo,
##= \sum_{m=0}^{k-1} \frac{1}{(m-1)!} \Big(- \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)}\Big) + \frac{1}{m!} \Big( {(\lambda (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##
You are right; if I expand the series,
##= \sum_{m=0}^{k-1} \frac{1}{(m-1)!} \Big(- \lambda^m {(T-w)}^{m-1}e^{-\lambda (T-w)}\Big) + \sum_{m=0}^{k-1} \frac{1}{m!} \Big( {(\lambda (T-w))}^m \lambda e^{-\lambda (T-w)} \Big)##

##= - \Big(0+\lambda e^{-\lambda (T-w)} +\lambda^2 (T-w) e^{-\lambda (T-w)}+...+ {\lambda^{k-1}{(T-w)}^{k-2} e^{-\lambda (T-w)}} {(k-2)!} \Big) + \Big( \lambda e^{-\lambda (T-w)} +\lambda^2 (T-w) e^{-\lambda (T-w)}+...+ {\lambda^{k-1}{(T-w)}^{k-2} e^{-\lambda (T-w)}} {(k-2)!} + {\lambda^{k}{(T-w)}^{k-1} e^{-\lambda (T-w)}} {(k-1)!}\Big)##
so we are left only with this nicely,
##= {\lambda^{k}{(T-w)}^{k-1} e^{-\lambda (T-w)}} {(k-1)!}\Big)##
Yes, except you left off \frac. Now you need to normalize. You can also think of this as finding ##\frac d {dw} P(W_k \leq w | W_k \leq T)##. (Why?)
 
  • #31
tnich said:
Oh, I see what you mean. You think I left out a λ. But I am just taking the 0th term from ##\frac{d}{dw} \sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}##. There is no extra λ there.
That's correct.
Yes, except you left off \frac.
Oh, sorry. The actual result is ##\frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!}##.

Now you need to normalize. You can also think of this as finding ##\frac d {dw} P(W_k \leq w | W_k \leq T)##. (Why?)
Yes, ##= \frac{ \frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!} }{1-P(W_k>T)} ##
##= \frac{ \frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!} }{1-e^{-\lambda T}} ##

because ##W_k## can never be greater than ##T##; its maximum value is ##T##.
 
  • #32
Mehmood_Yasir said:
That's correct.

Oh, sorry. The actual result is ##\frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!}##.
Yes, ##= \frac{ \frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!} }{1-P(W_k>T)} ##
##= \frac{ \frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!} }{1-e^{-\lambda T}} ##
For plotting the pdf of ##t_k## and the pdf of ##W_k## against a time vector, e.g., ##t=0:0.01:T## (minutes), on the x-axis, I need to replace ##w## in the pdf of ##W_k## with ##t##, such that

##f_{W_k} (t)= \frac{ \frac{ \lambda^k {(T-t)}^{k-1} e^{-\lambda (T-t)} }{(k-1)!} }{1-e^{-\lambda T}} ##

because ##W_k## can never be greater than ##T##; its maximum value is ##T##.
Again, thank you very much @tnich
 
  • #33
Mehmood_Yasir said:
For plotting the pdf of ##t_k## and the pdf of ##W_k## against a time vector, e.g., ##t=0:0.01:T## (minutes), on the x-axis, I need to replace ##w## in the pdf of ##W_k## with ##t##, such that

##f_{W_k} (t)= \frac{ \frac{ \lambda^k {(T-t)}^{k-1} e^{-\lambda (T-t)} }{(k-1)!} }{1-e^{-\lambda T}} ##
That is not correct. From post #20, what is ##P(W_k\leq w)##?
 
  • #34
tnich said:
That is not correct. From post #20, what is ##P(W_k\leq w)##?

##P(W_k \leq w)=1-P(t_k < T-w)##
##P(W_k \leq w)=\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}##
 
  • #35
tnich said:
That is not correct. From post #20, what is ##P(W_k\leq w)##?
I am a bit confused, as I am plotting the pdfs of ##W_k## and ##t_k## in MATLAB for the above process. For the time vector ##t## on the x-axis, I plot the pdf of ##t_k## using its density function ##f_{t_k} (t)= \frac{\lambda^k t^{k-1} e^{-\lambda t}}{(k-1)!}## for ##0\leq t \leq T##. That is right, as it should be. Now, for plotting the pdf of ##W_k## using its density function ##f_{W_k} (w)##, given earlier in terms of ##w##: considering ##0\leq t\leq T##, what is ##w## in ##f_{W_k} (w)## in terms of ##t##? Should ##w## in ##f_{W_k} (w)## be replaced with ##T-t## to plot against ##t##?
 
  • #36
Mehmood_Yasir said:
##P(W_k \leq w)=1-P(t_k < T-w)##
##P(W_k \leq w)=\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}##
Right. That's what you are going to get when you integrate ##\frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!}##. So what are your limits of integration? (What limits do you have to evaluate ##\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}## at?)
 
  • #37
Mehmood_Yasir said:
I am a bit confused, as I am plotting the pdfs of ##W_k## and ##t_k## in MATLAB for the above process. For the time vector ##t## on the x-axis, I plot the pdf of ##t_k## using its density function ##f_{t_k} (t)= \frac{\lambda^k t^{k-1} e^{-\lambda t}}{(k-1)!}## for ##0\leq t \leq T##.
I am not sure what you mean here. In the context of this problem, ##t_k## can't be greater than ##T##, so it would not make much sense to plot ##f_{t_k} (t)= \frac{\lambda^k t^{k-1} e^{-\lambda t}}{(k-1)!}##. Should you be plotting ##f_{t_k|t_k \leq T} (t)## instead?
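Conditioning just rescales the Erlang density by ##P(t_k \leq T)##. A sketch of that normalization, with assumed values ##\lambda = 2##, ##T = 1##, ##k = 3##:

```python
import math

lam, T, k = 2.0, 1.0, 3   # illustrative values

def erlang_pdf(t):
    """Unconditional density of the arrival time t_k ~ Erlang(k, lam)."""
    return lam ** k * t ** (k - 1) * math.exp(-lam * t) / math.factorial(k - 1)

# P(t_k <= T): one minus the Erlang survival function at T.
p_in_time = 1 - sum((lam * T) ** m * math.exp(-lam * T) / math.factorial(m)
                    for m in range(k))

def conditional_pdf(t):
    """f_{t_k | t_k <= T}(t) on 0 <= t <= T."""
    return erlang_pdf(t) / p_in_time

# Midpoint-rule check that the conditional density integrates to 1 on [0, T].
n = 20_000
total = sum(conditional_pdf((i + 0.5) * T / n) * T / n for i in range(n))
print(total)
```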
 
  • #38
tnich said:
Right. That's what you are going to get when you integrate ##\frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!}##. So what are your limits of integration?
The limits should be from ##0## to ##T## for the integration.

(What limits do you have to evaluate ##\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}## at?)
 
  • #39
Mehmood_Yasir said:
The limits should be from ##0## to ##T## for the integration.

(What limits do you have to evaluate ##\sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!}## at?)
Then what do you get for the value of your integral?
 
  • #40
tnich said:
I am not sure what you mean here. In the context of this problem, ##t_k## can't be greater than ##T##, so it would not make much sense to plot ##f_{t_k} (t)= \frac{\lambda^k t^{k-1} e^{-\lambda t}}{(k-1)!}##. Should you be plotting ##f_{t_k|t_k \leq T} (t)## instead?
Right, yes: it is in fact ##f_{t_k|t_k \leq T} (t)## that I am plotting.
 
  • #41
Mehmood_Yasir said:
Right, yes: it is in fact ##f_{t_k|t_k \leq T} (t)## that I am plotting.
So you will need to normalize that pdf, too.
 
  • #42
tnich said:
Then what do you get for the value of your integral?
This is taking long. I tried, but it looks like a never-ending integral.
 
  • #43
Mehmood_Yasir said:
This is taking long. I tried, but it looks like a never-ending integral.
I think you are making it too hard. You already showed that
##\frac{d}{dw} \sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!} = \frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!}##

So ##\int_0^T {\frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!}dw}##
##=\left. \sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!} \right|_0^T##
 
  • #44
tnich said:
I think you are making it too hard. You already showed that
##\frac{d}{dw} \sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!} = \frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!}##

So ##\int_0^T {\frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!}dw}##
##=\left. \sum_{m=0}^{k-1} \frac{{(\lambda (T-w))}^m e^{-\lambda (T-w)}}{m!} \right|_0^T##
by inserting upper limit ##T## in the sum - lower limit ##0## in the sum gives ##\sum_{m=0}^{k-1} \frac{{(\lambda (T))}^m e^{-\lambda (T)}}{m!}##
 
  • #45
Mehmood_Yasir said:
by inserting upper limit ##T## in the sum - lower limit ##0## in the sum gives ##\sum_{m=0}^{k-1} \frac{{(\lambda (T))}^m e^{-\lambda (T)}}{m!}##
there is a minus sign as well, - ##\sum_{m=0}^{k-1} \frac{{(\lambda (T))}^m e^{-\lambda (T)}}{m!}##
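Putting the two limit evaluations together: at ##w=T## the sum equals 1 (only the ##m=0## term survives), and at ##w=0## it equals ##\sum_{m=0}^{k-1} \frac{(\lambda T)^m e^{-\lambda T}}{m!}##, so ##C = 1-\sum_{m=0}^{k-1} \frac{(\lambda T)^m e^{-\lambda T}}{m!}##, which is exactly ##P(t_k \leq T)##. A numerical cross-check of this value against direct integration of the density, with assumed values ##\lambda = 2##, ##T = 1##, ##k = 3##:

```python
import math

lam, T, k = 2.0, 1.0, 3   # illustrative values

def S(w):
    """The antiderivative: sum_{m=0}^{k-1} (lam(T-w))^m e^{-lam(T-w)} / m!"""
    x = lam * (T - w)
    return sum(x ** m * math.exp(-x) / math.factorial(m) for m in range(k))

C = S(T) - S(0)   # = 1 - sum_{m=0}^{k-1} (lam T)^m e^{-lam T} / m! = P(t_k <= T)

# Cross-check: integrate the unnormalized density over [0, T] (midpoint rule).
def f(w):
    return lam ** k * (T - w) ** (k - 1) * math.exp(-lam * (T - w)) / math.factorial(k - 1)

n = 20_000
C_num = sum(f((i + 0.5) * T / n) * T / n for i in range(n))
print(C, C_num)   # both equal P(t_k <= T), which is less than 1
```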
 
  • #46
Mehmood_Yasir said:
there is a minus sign as well, - ##\sum_{m=0}^{k-1} \frac{{(\lambda (T))}^m e^{-\lambda (T)}}{m!}##
It is ##C##, and it is not 1. Do I need to multiply ##\frac{1}{C}## with ##\frac d {dw} P(W_k \leq w | W_k \leq T)= \frac{ \frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!} }{1-e^{-\lambda T}} ##?
 
  • #47
Mehmood_Yasir said:
It is ##C##, and it is not 1. Do I need to multiply ##\frac{1}{C}## with ##\frac d {dw} P(W_k \leq w | W_k \leq T)= \frac{ \frac{ \lambda^k {(T-w)}^{k-1} e^{-\lambda (T-w)} }{(k-1)!} }{1-e^{-\lambda T}} ##?
When will this end, so that I finally get the pdf or cdf of ##W_k##?
 
  • #48
Mehmood_Yasir said:

Homework Statement


Pedestrians arrive at a signal to cross the road at a rate of ##\lambda## arrivals per minute. When the first pedestrian arrives at the signal, he waits exactly time ##T##; we say this first pedestrian arrives at time ##0##. When time reaches ##T##, the light flashes and all other pedestrians who have arrived within ##T## cross the road together with the first pedestrian. Then the process repeats.

What is the PDF of the wait time of the ##1^{st}##, ##2^{nd}##, ##3^{rd}, \ldots## pedestrians?

2. Homework Equations

Poisson arrival time density function is ##f_{X_k} (t)=\frac{{(\lambda t)}^{k} e^{-\lambda t}}{k!}=Erlang(k,\lambda; t)## distributed

The Attempt at a Solution


As stated, the first pedestrian arrives at time ##0## and waits exactly time ##T##. After time ##T##, all pedestrians who have arrived at the crossing cross the street together with the first one.

Now consider only the wait time of each pedestrian, in order to find its density function.

Since the first one waits exactly ##T## minutes, his wait-time density is just an impulse shifted to ##T##, i.e., ##f_{W_1} (t)=\delta (t-T)##.

Since the arrival time of the second pedestrian in a Poisson process has density ##Erlang(1,\lambda; t)=e^{-\lambda t}##, he waits ##T-E[X_1|X_1<T]##, where ##E[X_1|X_1<T]## is the mean of his arrival time conditioned on it being less than ##T##. What is the pdf of the wait time of the second pedestrian? Can I say that, since his arrival time follows an ##Erlang(1,\lambda; t)=e^{-\lambda t}## distribution, the wait-time pdf should be ##e^{-\lambda (t-(T-E[X_1|X_1<T]))}##? Similarly, what about the wait-time density functions of the third, fourth, ... pedestrians?

I think the problem is a lot harder than you realize. Let ##T_0=0, T_1, T_2, \ldots## be the arrival times of customers ##0,1,2, \ldots.## We have that ##T_k \sim \text{Erlang}(\lambda,k),## for ##k \geq 1##, so its density function is
$$ f_k(t) = \frac{\lambda^k t^{k-1} e^{-\lambda t}}{(k-1)!}, \; k=1,2,3, \ldots . $$
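
A quick side check of this density: the ##k##-th arrival time is a sum of ##k## i.i.d. Exponential(##\lambda##) interarrival times, so its sample mean should be close to ##k/\lambda##. A small simulation sketch (the values of ##k## and ##\lambda## are assumed for illustration):

```python
import random

# T_k is the sum of k independent Exponential(lam) interarrival times,
# i.e. Erlang(k, lam), with mean k/lam.
random.seed(1)
k, lam = 4, 2.0                  # arbitrary example values (assumption)
samples = [sum(random.expovariate(lam) for _ in range(k))
           for _ in range(200_000)]
mean = sum(samples) / len(samples)
print(mean, k / lam)             # sample mean close to theoretical mean
```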

Customer #1 waits
$$W_1 = \begin{cases}
T - T_1 & \text{if} \; T_1 < T \\
T & \text{if} \; T_1 > T
\end{cases}
$$
This is true because customer #1 either arrives in the first T-interval, or else starts a new one.
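
These two cases are easy to check by simulation: the probability of the second branch is ##P(T_1 > T) = e^{-\lambda T}##. A sketch with assumed illustrative values ##\lambda = T = 1##:

```python
import math
import random

# Monte Carlo check of customer #1's wait:
# if T_1 < T he waits T - T_1; otherwise he starts a new interval and waits T.
random.seed(2)
lam, T, n = 1.0, 1.0, 500_000    # assumed illustrative values
full_wait = 0
for _ in range(n):
    t1 = random.expovariate(lam)
    w1 = T - t1 if t1 < T else T  # customer #1's wait in this replication
    full_wait += (t1 >= T)
print(full_wait / n, math.exp(-lam * T))   # P(W_1 = T) = e^{-lam T}
```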

Customer # 2 waits
$$W_2 = \begin{cases}
T-T_2 & \text{if} \; T_2 < T \\
T & \text{if} \; T_1 < T, T_2 > T \\
T_1+T-T_2 & \text{if} \; T_1 > T, T_2 < T_1+T \\
T & \text{if} \; T_1 > T, T_2 > T_1+T
\end{cases}
$$

The cases are: (i) customer #2 comes in the first T-interval; (ii) customer #1 does not start a new T-interval but customer #2 does; (iii) customer #2 falls in a new T-interval started by customer #1; and (iv) customer #2 starts a third T-interval.

So, already by the time we get to customer #2 the problem becomes complicated.

I will let you play with the cases for customer #3, and worry about later customers.
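
The four cases above have simple probabilities, which makes them convenient to verify by simulation: ##P(\text{i}) = 1-e^{-\lambda T}(1+\lambda T)##, ##P(\text{ii}) = \lambda T e^{-\lambda T}## (exactly one arrival in ##(0,T)##), ##P(\text{iii}) = e^{-\lambda T}(1-e^{-\lambda T})## (by memorylessness of ##T_2 - T_1##), and ##P(\text{iv}) = e^{-2\lambda T}##. A sketch with assumed illustrative values ##\lambda = T = 1##:

```python
import math
import random

# Monte Carlo check of the four cases for customer #2's wait.
random.seed(3)
lam, T, n = 1.0, 1.0, 400_000    # assumed illustrative values
counts = [0, 0, 0, 0]
for _ in range(n):
    t1 = random.expovariate(lam)
    t2 = t1 + random.expovariate(lam)
    if t2 < T:
        counts[0] += 1           # (i)   both arrive in the first T-interval
    elif t1 < T:
        counts[1] += 1           # (ii)  customer #2 starts a new T-interval
    elif t2 < t1 + T:
        counts[2] += 1           # (iii) #2 lands in the interval #1 started
    else:
        counts[3] += 1           # (iv)  #2 starts a third T-interval
a = lam * T
exact = [1 - math.exp(-a) * (1 + a),         # P(T_2 < T)
         a * math.exp(-a),                   # exactly one arrival in (0, T)
         math.exp(-a) * (1 - math.exp(-a)),  # memorylessness of T_2 - T_1
         math.exp(-2 * a)]
print([c / n for c in counts], exact)        # empirical vs. exact
```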
 
  • #49
Ray Vickson said:
I think the problem is a lot harder than you realize. Let ##T_0=0, T_1, T_2, \ldots## be the arrival times of customers ##0,1,2, \ldots.## We have that ##T_k \sim \text{Erlang}(\lambda,k),## for ##k \geq 1##, so its density function is
$$ f_k(t) = \frac{\lambda^k t^{k-1} e^{-\lambda t}}{(k-1)!}, \; k=1,2,3, \ldots . $$

Hmm, I have been thinking about it overnight. What is your opinion if I say the following: consider only the time interval ##0## to ##T##; the first pedestrian pushes the button to start the process, and we now forget about him. The remaining pedestrians arrive following a Poisson process with rate ##\lambda##. If the arrival time of the ##k^{th}## pedestrian is Erlang(##k,\lambda; t##) distributed, then the remaining time, i.e., the time from his arrival epoch until ##T##, should also be Erlang distributed with parameters ##k## and ##\lambda## but in the time ##T-t##, i.e., Erlang(##k,\lambda; T-t##). This is just to illustrate the idea that for the remaining time I am looking from ##T## back towards ##0##.
 
  • #50
Mehmood_Yasir said:
Hmm, I have been thinking about it overnight. What is your opinion if I say the following: consider only the time interval ##0## to ##T##; the first pedestrian pushes the button to start the process, and we now forget about him. The remaining pedestrians arrive following a Poisson process with rate ##\lambda##. If the arrival time of the ##k^{th}## pedestrian is Erlang(##k,\lambda; t##) distributed, then the remaining time, i.e., the time from his arrival epoch until ##T##, should also be Erlang distributed with parameters ##k## and ##\lambda## but in the time ##T-t##, i.e., Erlang(##k,\lambda; T-t##). This is just to illustrate the idea that for the remaining time I am looking from ##T## back towards ##0##.

As I have indicated in post #48, the true situation is much more complicated than that, so your approach will give an approximation to the true answer. You would need to investigate further to assess the accuracy of the approximation.

On the other hand, you might be able to do something with an "equilibrium" analysis, where you look at the limiting distribution of wait times ##W_n## for customers with large ##n##. For such customers far in the future, all the little details of the entangled waiting times of customers ##1,2,3,\ldots## will likely become unimportant, and the problem can be looked at much more simply. For example, you could look at the conditional distribution of the waiting time of customer ##n##, given that customer ##n## is part of a group of ##k+1## customers who cross together. You could weight that by the probability of such a group, which you have found already.
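
The equilibrium view is easy to explore by simulating the whole renewal process for a long time and looking only at late customers. In the sketch below (assumed illustrative values ##\lambda = T = 1##), each cycle contains one "button pusher", who waits exactly ##T##, plus a Poisson(##\lambda T##) number of followers, so by renewal-reward reasoning the long-run fraction of customers waiting exactly ##T## should be ##1/(1+\lambda T)##:

```python
import random

# Long-run simulation of the full process: an arrival after the current
# flash starts a new T-interval (waits T); arrivals inside an interval
# wait until its end.
random.seed(4)
lam, T, n = 1.0, 1.0, 200_000    # assumed illustrative values
t, interval_end = 0.0, None
waits = []
for _ in range(n):
    t += random.expovariate(lam)
    if interval_end is None or t >= interval_end:
        interval_end = t + T      # this customer pushes the button
        waits.append(T)
    else:
        waits.append(interval_end - t)
late = waits[n // 2:]             # keep only customers far in the future
frac_full = sum(w == T for w in late) / len(late)
print(frac_full, 1 / (1 + lam * T))   # fraction of button pushers vs. 1/(1+lam*T)
```

The pushers' waits equal ##T## exactly, while followers wait strictly less than ##T## with probability one, so counting waits equal to ##T## isolates the pushers.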
 