Waiting time in a queue with Poisson arrivals

In summary, the conversation discusses the difficulty of calculating the waiting time of the first packet arriving in a buffer that uses aggregation. Packet arrivals follow a Poisson process with rate lambda, and aggregation completes either at a maximum of 'm' packets or on expiry of a timer T_maximum. The poster wants the expected waiting time of the first packet entering the buffer and is looking for a general formula that works for any arrival rate lambda. They have run simulations to check their results and are seeking help from the forum. They also provide additional details about their calculations and ask for the other person's email to send a document for further discussion; the other person suggests keeping the conversation on the forum and shares resources on Poisson processes.
  • #1
Mehmood_Yasir
Dear colleagues,
I am having difficulty calculating the waiting time of the first packet arriving in a buffer that uses aggregation. Arrivals follow a Poisson process with rate lambda, and aggregation completes either at a maximum of 'm' packets or on expiry of a timer T_maximum. Can anyone help me with how to handle the inter-arrival time if, e.g., T_maximum is given in milliseconds?
 
  • #2
Can you be a bit more explicit, perhaps with a simple example to work through your problem?

It sounds like you have a problem with ##P(k, \tau, \lambda) = \frac{(\lambda \tau)^k e^{-\lambda \tau}}{k!}## (though I think you are using m where I'd use k), and you're just looking to get either (a) the Erlang density, (b) the density of the time between two arrivals, (c) the expected inter-arrival time, or (d) something else. This is where writing up a simple example in LaTeX, to work through, would be quite helpful -- especially if you show your work on what you've done so far and where you get stuck.
 
  • #3
StoneTemplePython said:
Can you be a bit more explicit, perhaps with a simple example to work through your problem? […]
Dear StoneTemplePython, thanks for your feedback. Yes, I can explain a little more.
I am considering a buffer that aggregates packets based on two rules: either the aggregation timer, e.g., Tmax, or a full buffer (a maximum of 'm' packets can be aggregated). That is, if the buffer receives 'm' packets before Tmax expires, it sends the aggregated packet; if the buffer receives fewer than 'm' packets and Tmax expires, it also sends the aggregated packet.

I am interested in finding the expected waiting time of the FIRST packet entering the buffer. An arrival also triggers the timer Tmax: the first packet must, of course, arrive at the buffer to trigger the process, and the remaining 'm-1' packets then follow the Poisson process.

Now, we assume packet arrivals follow a Poisson process with mean inter-arrival time 1/lambda, and I want to model the expected waiting time of the first packet entering the buffer. If multiplexing happens because the buffer fills, i.e., the aggregated packet contains 'r' packets before the timer expires, the waiting time of the first packet follows an Erlang distribution and the delay is (r-1)/lambda. I wrote 'r-1' because, as I said earlier, we should not count the first packet, which triggers the aggregation process. On the other hand, if aggregation happens because Tmax expires, what is the delay? I suspect there should be some superposition giving a general formula that works automatically for any arrival rate lambda. Here is where I am stuck: when I keep lambda low, meaning multiplexing is done due to Tmax, I get the wrong answer, but for very large lambda the simulator results match (r-1)/lambda.
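The simulations I mention work roughly like this (a minimal sketch; the function name and defaults are mine, and it assumes the first packet arrives at time 0 and waits until the (r-1)-th further arrival or Tmax, whichever comes first):

```python
import random

def simulate_first_wait(lam, r, t_max, n_runs=100_000, seed=1):
    """Monte Carlo estimate of the first packet's mean waiting time.

    The first packet arrives at time 0 and triggers the timer; the
    buffer flushes at the (r-1)-th further Poisson(lam) arrival or at
    t_max, whichever comes first.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        t = 0.0
        for _ in range(r - 1):
            t += rng.expovariate(lam)  # exponential inter-arrival time
            if t >= t_max:             # timer expires before buffer fills
                t = t_max
                break
        total += t
    return total / n_runs
```

For large lambda this approaches (r-1)/lambda, and for very small lambda it approaches Tmax, which is exactly the mismatch regime I describe above.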

Also, can I get your email? I can send a document to discuss, if you like.
 
  • #4
I've been looking through the forums about Poisson processes (coincidentally over the last week or so) and it seems there are a handful of questions, but people would really benefit from more posts on them. Typically, though certainly not always, there is a simple and elegant way to solve (or, if that fails, bound) these problems. Accordingly, it really is best to keep things on the forum. Perhaps you can post an image of your document (via imgur or dropbox or whatever).

I think it's great that you're doing a simulation, btw, as simulations are great for checking your work and building intuition for what's going on.

A couple of preliminaries -- how familiar are you with probability theory? There is a great walkthrough of Poisson processes here:

https://ocw.mit.edu/courses/electri...-applied-probability-fall-2010/lecture-notes/ (see 13 - 15). There is also some very good stuff here: https://ocw.mit.edu/courses/electri...tochastic-processes-spring-2011/course-notes/ -- see chapter 2.
- - - - -
Mehmood_Yasir said:
I am interested in finding the expected waiting time of the FIRST packet entering the buffer. An arrival also triggers the timer Tmax: the first packet must, of course, arrive at the buffer to trigger the process, and the remaining 'm-1' packets then follow the Poisson process.

Now, we assume packet arrivals follow a Poisson process with mean inter-arrival time 1/lambda, and I want to model the expected waiting time of the first packet entering the buffer.

If I understood you right, you have a clock (in units of ##\tau##) that starts ticking after the first arrival. There is a drop-dead point where ##\tau = T_{max}##. The issue is that, by construction, a Poisson process is an aggregation of an infinite number of exponential inter-arrival distributions -- this is where memorylessness comes into play. If your expected time until the next arrival is ##\frac{1}{\lambda}## given that you have had one arrival, it is also ##\frac{1}{\lambda}## given that you've had 2 and are waiting on the third, and so on. That means the expected time until the next arrival is ##\frac{1}{\lambda}## given that you've had zero arrivals. These arrivals are memoryless, and their expected times don't change whether or not you have a clock sitting next to them. Further -- as long as you are careful with conditioning -- it doesn't matter whether you look at these processes going forward in time or backward in time.

Now it occurs to me that there is a lot of jargon here, and that may not be exactly what you wanted... Instead you may be interested in knowing the percent of the time that ##T_{max}## is reached, with the complement being the percent of the time that you reached the maximal aggregation size. Then the expected time of the whole process ##= ExpectedTimeUntilFirstArrival + p * T_{max} + (1-p)(ExpectedTimeUntilArrival_{r-1}| \tau < T_{max})##, via Linearity of Expectation and the Law of Total Expectation.

Evaluating ##p## and ##(ExpectedTimeUntilArrival_{r-1}| \tau < T_{max})## will take some care, but they should be doable. Btw, the simplest way to calculate ##p## is

##\Sigma_{k=0}^{r-2} P(k, \tau = T_{max}, \lambda)##

this quite literally means: set your time ##= T_{max}## and then add up all the probabilities where ##k \leq r-2##. This can be interpreted as adding up the first ##r-1## terms in the power series for ##\exp(\lambda T_{max})##, times the normalizing constant ##e^{-\lambda T_{max}}##, of course. (There are of course ways to approximate / bound that, though such a thing is really outside the scope.)

Perhaps you want to take a shot at evaluating ##(ExpectedTimeUntilArrival_{r-1}| \tau < T_{max})##?
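In code, that partial sum is just a few lines (a minimal sketch; the function names are mine, not standard):

```python
from math import exp, factorial

def poisson_pmf(k, lam, t):
    """P(k, t, lam): probability of exactly k Poisson(lam) arrivals in time t."""
    mu = lam * t
    return mu**k * exp(-mu) / factorial(k)

def p_timer_first(lam, r, t_max):
    """p = sum_{k=0}^{r-2} P(k, T_max, lam): the probability that fewer than
    r-1 further arrivals occur within T_max, i.e. the timer fires first."""
    return sum(poisson_pmf(k, lam, t_max) for k in range(r - 1))
```

As a sanity check, ##p## should approach 1 when ##\lambda T_{max}## is small and 0 when ##\lambda T_{max}## is large.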
 
  • #5
StoneTemplePython said:
Evaluating ##p## and ##(ExpectedTimeUntilArrival_{r-1}| \tau < T_{max})## will take some care, but they should be doable. […] Perhaps you want to take a shot at evaluating ##(ExpectedTimeUntilArrival_{r-1}| \tau < T_{max})##?
Dear friend! I am glad you understood it so well. Yes, I have also studied probability theory extensively over the last couple of months, and, interestingly, I have already studied the document at the second link you provided.

Regarding ##p## via ##\Sigma_{k=0}^{r-2} P(k, \tau = T_{max}, \lambda)##: the probabilities are calculated with the standard Poisson pmf by inserting ##t = T_{max}## and varying the index ##k##, correct? Here ##p_0## is the probability of no arrival (other than the first one, which triggers the timer), ##p_1## of one arrival in ##T_{max}##, and so on, up to ##p_{r-2}##, the probability of ##r-2## more arrivals in ##T_{max}##. Yes, I would be happy to see some help with ##(ExpectedTimeUntilArrival_{r-1}| \tau < T_{max})##.
 
  • #6
Let me share a few more things with you.

A maximum of 'r' arrivals can be aggregated, either on Tmax expiry or because sufficient arrivals occur before Tmax expires. So, for calculating the longest waiting time of the first arrival from your formula above, I tried to use it this way. The Poisson pmf says ##p_{i} = \frac{(\lambda T_{max})^i e^{-\lambda T_{max}}}{i!}##.
Now, since the clock is always triggered by the first arrival, that arrival happens at time = 0, and after it we have ##T_{max}## for the remaining 'r-1' arrivals. For the Poisson count we do not include the first arrival.
In the Poisson pmf, ##p_{0}## means zero arrivals in time ##T_{max}##, ##p_{1}## means 1 arrival in time ##T_{max}##, ..., and ##p_{i-1}## means 'i-1' arrivals in time ##T_{max}##.
Using your approach, I take ExpectedTimeUntilFirstArrival to always be 0.
Then, if fewer than 'r-1' arrivals occur, aggregation is done due to Tmax. On the other hand, if 'r-1' arrivals occur within Tmax, the aggregate does not wait for the clock and is sent immediately upon the 'r-1'th arrival.
So we have two cases for computing the longest waiting time of the first packet, as you also mentioned in your post. For the first case, i.e., Tmax, we sum all probabilities from i=0 to r-2: ##(p_{0} + p_{1} + \dots + p_{r-2}) \cdot T_{max}##. For the second case it should be ##\big(1-(p_{0} + p_{1} + \dots + p_{r-2})\big) \cdot (ExpectedTimeUntilArrival_{r-1}| \tau < T_{max})##. Now, since in the case of aggregation due to the maximum 'r' arrivals before Tmax expires the first packet waits through 'r-1' inter-arrival stages, I believe ##(ExpectedTimeUntilArrival_{r-1}| \tau < T_{max})## should follow an Erlang(r-1, ##\lambda##) distribution and equal ##(r-1)/\lambda##. So the expected waiting time of the first segment should be the sum of both portions mentioned above (as you also said in your post).

Do you also agree that ##(ExpectedTimeUntilArrival_{r-1}| \tau < T_{max})## can be replaced by ##(r-1)/\lambda##?
 
  • #7
I think the last piece you need to think about is the mechanics of *exactly* how an Erlang distribution of order k actually works. Here is a quick recap:
- - - -
The Erlang distribution of order k is given by:

##f_{Y{_k}}(y) = \frac{\lambda^k y ^{k-1} e^{-\lambda y}}{(k-1)!}##

The CDF for this distribution, up until some time ##\tau##, is given by

##F_{Y{_k}}(\tau) = \int_{0}^{\tau} \frac{\lambda^k y ^{k-1} e^{-\lambda y}}{(k-1)!} dy ##

and of course, since this is a valid CDF

##\lim_{\tau \to \infty}F_{Y{_k}}(\tau) = \lim_{\tau \to \infty} \int_{0}^{\tau} \frac{\lambda^k y ^{k-1} e^{-\lambda y}}{(k-1)!} dy = \int_{0}^{\infty} \frac{\lambda^k y ^{k-1} e^{-\lambda y}}{(k-1)!} dy = 1##

Now, the expected time until the kth arrival, over some interval ##[0, \tau]##, is given by

##E[k_{ArrivalTime} \big | y \leq \tau]= \int_{0}^{\tau} y \frac{\lambda^k y ^{k-1} e^{-\lambda y}}{(k-1)!} dy = \big( \frac{k}{\lambda} \frac{\lambda}{k} \big) \int_{0}^{\tau} y \frac{\lambda^k y ^{k-1} e^{-\lambda y}}{(k-1)!} dy = \frac{k}{\lambda}\int_{0}^{\tau} \frac{\lambda}{k} \, y \, \frac{\lambda^k y ^{k-1} e^{-\lambda y}}{(k-1)!} dy = \frac{k}{\lambda}\int_{0}^{\tau} \frac{\lambda^{k + 1} y ^{k} e^{-\lambda y}}{k!} dy = \frac{k}{\lambda} F_{Y_{k+1}}(\tau)##

and of course if we don't want to truncate this expectation at some finite value of ##\tau## we can take a limit:

##E[k_{ArrivalTime}] = \lim_{\tau \to \infty} E[k_{ArrivalTime} \big | y \leq \tau] = \lim_{\tau \to \infty} \frac{k}{\lambda} F_{Y_{k+1}}(\tau) = \frac{k}{\lambda} \big(1 \big) = \frac{k}{\lambda}##

So the expected time of the kth arrival is ##\frac{k}{\lambda}## times the CDF associated with arrival ##k+1##. The issue is that the Erlang CDF is, roughly speaking, the complement of your Poisson distribution, so you need to use a series again in order to calculate it. By construction, a CDF takes values in [0, 1], hence you can easily upper bound ##E[k_{ArrivalTime} \big | y \leq \tau]## by ##\frac{k}{\lambda}##. Alternatively, with large enough ##\lambda## you may be able to use a normal approximation to the Poisson (and then use the CDF of the associated normal distribution, which is a common analytic-ish maneuver).

- - - -
This should give you all the building blocks you need to solve or approximate or bound your problem. It would be interesting if you compared results here with your simulation. I suppose I broke my own rule here for probability -- I only solved it one way, when in fact solving / checking via another method like a simulation is vital to squeezing out bugs in your thinking, in my opinion.
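As a numerical sanity check on the identity above (my own sketch; the function names are not standard), one can compare a direct numerical integral of ##\int_0^\tau y\, f_{Y_k}(y)\, dy## against ##\frac{k}{\lambda} F_{Y_{k+1}}(\tau)##, with the Erlang CDF computed as the Poisson complement:

```python
from math import exp, factorial

def erlang_pdf(y, k, lam):
    """Erlang density of order k: distribution of the k-th Poisson arrival time."""
    return lam**k * y**(k - 1) * exp(-lam * y) / factorial(k - 1)

def erlang_cdf(tau, k, lam):
    """Erlang CDF via the Poisson complement:
    P(k-th arrival <= tau) = 1 - P(fewer than k arrivals in [0, tau])."""
    mu = lam * tau
    return 1.0 - sum(mu**j * exp(-mu) / factorial(j) for j in range(k))

def truncated_first_moment(tau, k, lam, n=100_000):
    """Trapezoid-rule estimate of the integral of y * erlang_pdf(y) over [0, tau]."""
    h = tau / n
    total = 0.5 * tau * erlang_pdf(tau, k, lam)  # the y = 0 endpoint contributes 0
    for i in range(1, n):
        y = i * h
        total += y * erlang_pdf(y, k, lam)
    return total * h
```

The two quantities agree to numerical precision, and both are bounded above by ##\frac{k}{\lambda}##, as claimed.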
 
  • #8
StoneTemplePython said:
So the expected time of the kth arrival is ##\frac{k}{\lambda}## times the CDF associated with arrival ##k+1##. The issue is that the Erlang CDF is, roughly speaking, the complement of your Poisson distribution, so you need to use a series again in order to calculate it. […]

I fully agree with this: "The expected time of the kth arrival is ##\frac{k}{\lambda}## times the CDF associated with arrival ##k+1##." Since in my case, instead of 'r', I am considering 'r-1' more arrivals following the Poisson process, I mentioned ##\frac{r-1}{\lambda}##. Exactly, we can represent the Erlang CDF as the complement of the Poisson CDF. So the expected waiting time of the first arrival would be ##T_{max}(p_0+p_1+\dots+p_{r-2}) + \big(1-(p_0+p_1+\dots+p_{r-2})\big)\frac{r-1}{\lambda}##. I just compared results from this with simulations. For very high arrival rates the simulation results fully match this formula, since ##1-(p_0+\dots+p_{r-2})## is almost 1 (aggregation is mainly done due to the arrival of 'r' segments before ##T_{max}## expires); consequently ##\frac{r-1}{\lambda}## is the delay, and the simulations agree with that as well. However, for low lambda (like 50, 100, 200 arrivals/sec), the delay from this expression first increases slightly with increasing lambda, then starts decreasing, and only for very high lambda does it follow the simulation results.
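For concreteness, the formula I compared against the simulations can be written as a small sketch (names are mine):

```python
from math import exp, factorial

def expected_first_wait(lam, r, t_max):
    """T_max*(p_0+...+p_{r-2}) + (1 - (p_0+...+p_{r-2}))*(r-1)/lam.

    This is the formula discussed above; note it only matches simulation
    for large lam, since (r-1)/lam is the *untruncated* Erlang mean.
    """
    mu = lam * t_max
    p = sum(mu**k * exp(-mu) / factorial(k) for k in range(r - 1))
    return p * t_max + (1 - p) * (r - 1) / lam
```

It reduces to ##(r-1)/\lambda## when ##\lambda T_{max}## is large and to ##T_{max}## when ##\lambda T_{max}## is small, which matches the regimes described above.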
 
  • #9
Mehmood_Yasir said:
[…] However, for low lambda (like 50, 100, 200 arrivals/sec), the delay from this expression first increases slightly with increasing lambda, then starts decreasing, and only for very high lambda does it follow the simulation results.
[Attachment: Res-01.jpg]
 
  • #10
We represented the expected waiting time of the first arrival as ##T_{max}(p_0+p_1+\dots+p_{r-2}) + \big(1-(p_0+p_1+\dots+p_{r-2})\big)\frac{r-1}{\lambda}##. The results I showed in the graph have some mismatch for lower lambda, i.e., in the ##T_{max}## part. That means the first portion of the formula, the one related to ##T_{max}##, with the sum over all probabilities, yields larger values than the actual simulation results. In the simulations, sometimes 1, 2, 3, or any number of arrivals < 'r-1' (excluding the first one) are multiplexed; the formula somehow gives an upper bound on the expected value (I feel).
Now, I removed ##p_{r-2}## and kept the sum from ##p_{0}## to ##p_{r-3}##, multiplied by ##T_{max}##, while the second portion of the formula is kept the same, i.e., it still considers ##p_{r-2}## when calculating ##\big(1-(p_{0}+p_{1}+p_{2}+\dots+p_{r-2})\big)\frac{r-1}{\lambda}##. Surprisingly, the results fully match for all cases, for any ##T_{max}## and any 'r'. But I could not figure out why deleting ##p_{r-2}## from the first part produces this.
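The two variants side by side, as a sketch (both are my own constructions from this thread, not established results; names are mine):

```python
from math import exp, factorial

def first_wait_two_variants(lam, r, t_max):
    """Return (original, modified) estimates of the first packet's wait.

    original: T_max*(p_0+...+p_{r-2}) + (1 - (p_0+...+p_{r-2}))*(r-1)/lam
    modified: T_max*(p_0+...+p_{r-3}) + (1 - (p_0+...+p_{r-2}))*(r-1)/lam
    (the modified version drops p_{r-2} from the T_max term only).
    """
    mu = lam * t_max
    pk = [mu**k * exp(-mu) / factorial(k) for k in range(r - 1)]
    p_full = sum(pk)          # p_0 + ... + p_{r-2}
    p_trim = p_full - pk[-1]  # p_0 + ... + p_{r-3}
    tail = (1 - p_full) * (r - 1) / lam
    return t_max * p_full + tail, t_max * p_trim + tail
```

The two differ by exactly ##T_{max}\,p_{r-2}##, and both converge to ##(r-1)/\lambda## for large ##\lambda##, which is why the difference only shows up at low arrival rates.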
 
  • #11
Mehmood_Yasir said:
[…] Now, I removed ##p_{r-2}## and kept the sum from ##p_{0}## to ##p_{r-3}##, multiplied by ##T_{max}##, while the second portion of the formula is kept the same. Surprisingly, the results fully match for all cases, for any ##T_{max}## and any 'r'. But I could not figure out why deleting ##p_{r-2}## from the first part produces this.
Here you can see the results.
[Attachment: Res-02.jpg]
 
  • #12
The pictures are a great way to think about this. In particular https://www.physicsforums.com/attachments/112341/ is a very good picture.

Keep in mind that the actual analytical result is
##E[k_{ArrivalTime} \big | y \leq \tau]= \frac{k}{\lambda} F_{Y_{k+1}}(\tau)##, but so far as I can tell, in the first picture you are just using ##\frac{k}{\lambda}##. Put differently, you are using the upper bound, not the exact analytical solution.

I just want to be sure that you realize that when I originally said:
## = ExpectedTimeUntilFirstArrival + p * T_{max} + (1-p)(ExpectedTimeUntilArrival_{r-1}| \tau < T_{max})##

we can now rewrite this as :
## = ExpectedTimeUntilFirstArrival + p * T_{max} + (1-p) \Big(\frac{k}{\lambda} F_{Y_{k+1}}(\tau)\Big)##

hence you'll see something that looks like the complement of a Poisson twice in the term ##(1-p) \Big(\frac{k}{\lambda} F_{Y_{k+1}}(\tau)\Big)##: first in the ##(1-p)## component, and second, something similar in ##F_{Y_{k+1}}(\tau)##

- - - -
edit: it looks like your second picture *might* agree with your simulation, as you seem to have done some additional cancellations / simplifications related to the different complements. Even then, the agreement almost looks too good ;-)
 
Last edited:
  • Like
Likes Mehmood_Yasir
  • #13
Congratulations on getting an analytical solution! I think you can see from this example that queuing problems quickly become too hard to solve analytically except in the most idealized cases. Computer simulation is the only way to do complicated work in this area.
 
  • #14
StoneTemplePython said:
The pictures are a great way to think about this. In particular https://www.physicsforums.com/attachments/112341/ is a very good picture. …

Hi. Good morning! A great help indeed by you. Yes, in the first picture, where there are some mismatches, I computed the analytical results using ## = ExpectedTimeUntilFirstArrival + p * T_{max} + (1-p)(ExpectedTimeUntilArrival_{r-1}| \tau < T_{max})##, with ##ExpectedTimeUntilFirstArrival = 0## and ##p = p_{0}+p_{1}+\dots+p_{r-2}##. So the overall formula was ##p\,T_{max} + (1-p)\frac{r-1}{\lambda}##. To be clear, I used ##\frac{r-1}{\lambda}## instead of ##\frac{r}{\lambda}## because the ##(r-1)##-th arrival is our last one once we neglect the FIRST. If I use ##\frac{r}{\lambda}##, the part of the first figure that matched at higher arrival rates also mismatches the simulation, by ##\frac{1}{\lambda}##.

You are right. If I consider your actual proposed expression
## = ExpectedTimeUntilFirstArrival + p * T_{max} + (1-p) \Big(\frac{k}{\lambda}\,F_{Y_{k+1}}(\tau)\Big)##

then, if I am not wrong, when at most ##r## arrivals can be multiplexed the formula becomes
## = ExpectedTimeUntilFirstArrival + p * T_{max} + (1-p) \Big(\frac{r-1}{\lambda}\,F_{Y_{r}}(\tau)\Big)##. Right?

I mean, I used ##(1-p) \Big(\frac{r-1}{\lambda}\,F_{Y_{r}}(\tau)\Big)##. Then yes, you are right that the probability factor appears twice in the second part of the equation. The expression becomes ## = p\,T_{max} + (1-p)\,\frac{r-1}{\lambda}\,(1-p)##, since ##ExpectedTimeUntilFirstArrival=0## and ##F_{Y_{r}}(\tau)## is actually also ##(1-p)##. I can check the results quickly; let me share them with you.
 
  • #15
Mehmood_Yasir said:
Hi. Good morning! A great help indeed by you. …

I have just computed the results using the expression ## = [ExpectedTimeUntilFirstArrival=0] + p * T_{max} + (1-p) \Big(\frac{r-1}{\lambda}\,F_{Y_{r}}(\tau)\Big)##, in which
##p= \Sigma_{k=0}^{r-2} p_{k}##, ##1-p = 1-\Sigma_{k=0}^{r-2} p_{k}##, and ##F_{Y_{r}}(\tau) = 1-\Sigma_{k=0}^{r-2} p_{k}##.
I got these results: a bit of mismatch again for lower lambda, although the analytical values decreased somewhat.
Res-03.jpg
 
  • #16
Mehmood_Yasir said:
I have just computed the results using this expressions … a bit mismatch again for lower lambda, however, the analytical values decreased some …

My gut tells me there is an off-by-one indexing error in one of your terms. That is, I think it should be ##F_{Y_{r}}(\tau) = 1-\Sigma_{k=0}^{r-1} p_{k}##

But I'll have to take a closer look after some sleep. In general, indexing errors are quite easy to come by in math and programming -- especially when you switch between something that zero-indexes like Python and something one-indexed like MATLAB!

One way or the other you're basically there.
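As a quick numerical check of the indexing (my own sketch, not from the thread; the names are mine), one can compare both candidate sums against the Erlang-##r## CDF obtained by directly integrating its density. The sum whose upper limit is ##r-1## is the one that matches:

```python
import math

def poisson_pmf(i, mu):
    # P(exactly i arrivals) for a Poisson with mean mu = lam * Tmax
    return mu**i * math.exp(-mu) / math.factorial(i)

def erlang_cdf_numeric(r, t, lam, n=100_000):
    # trapezoidal integration of the Erlang-r density over [0, t]
    f = lambda y: lam**r * y**(r - 1) * math.exp(-lam * y) / math.factorial(r - 1)
    h = t / n
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(t))

r, lam, tmax = 5, 2.0, 1.5
mu = lam * tmax
sum_to_rm2 = 1.0 - sum(poisson_pmf(k, mu) for k in range(r - 1))  # upper limit r-2
sum_to_rm1 = 1.0 - sum(poisson_pmf(k, mu) for k in range(r))      # upper limit r-1
print(sum_to_rm2, sum_to_rm1, erlang_cdf_numeric(r, tmax, lam))
```

Only `sum_to_rm1` agrees with the integrated CDF, i.e. ##F_{Y_r}(T_{max}) = 1-\sum_{k=0}^{r-1} p_k##, supporting the off-by-one suspicion.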
 
  • #17
StoneTemplePython said:
My gut tells me there is an indexing by one error in one of your terms. …

OK, many thanks. Yes, I will recheck my terms for an off-by-one error. Sure, get some sleep and I will update again. Thanks once again for this great help.
 
  • #18
I just tried to solve this, calculating the Erlang CDF for the ##r##-th arrival. The upper limit goes to ##r-1## instead of ##r-2##.
Here is the explanation. Unfortunately, the results don't match for lower lambda.

doc_mehmood.png
 
  • #19
FactChecker said:
Congratulations on getting an analytical solution! I think you can see from this example that queuing problems quickly get too hard to solve analytically except for the most idealistically simplified cases. Computer simulation is the only way to do complicated work in this area.
Hi. Yes, I fully agree with you. It's always quite complex to use queueing models to represent this kind of phenomenon. I have realized it.
 
  • Like
Likes FactChecker
  • #20
Mehmood_Yasir said:
I just tried to solve this.., for calculating Erlang CDF for rth arrival. …
try this instead:

##
= [ExpectedTimeUntilFirstArrival=0] + p * T_{max} + \Big(\frac{r-1}{\lambda}\,F_{Y_{r}}(\tau)\Big)
##

notice that the (1-p) is gone. There is a subtle conditioning issue here. Technically the (1-p) was ok to have in the model to begin with but after choosing the specific way to represent ##ExpectedTimeUntilArrival_{r-1}| \tau < T_{max}## that I did, I should have recognized that I was in effect double counting (read: double using) the 1-p term.

Basically it's already in the Erlang density. You can see this if you look at the definition of the Erlang as ##f_{Y_k} \approx \delta^{-1} * \big(TotalProbabilityAtLeastKArrivals(y + \delta) - TotalProbabilityAtLeastKArrivals(y) \big)## for some small ##\delta > 0##. Note the approximation becomes exact in ##\lim_{\delta \to 0}##.
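That limiting definition can be checked numerically. The following is my own sketch (not from the thread; the names are mine): the finite difference of the Poisson tail probability, divided by ##\delta##, should approach the Erlang density ##\frac{\lambda^k y^{k-1} e^{-\lambda y}}{(k-1)!}##:

```python
import math

def tail_at_least_k(k, t, lam):
    # TotalProbabilityAtLeastKArrivals(t) = 1 - P(fewer than k arrivals by t)
    mu = lam * t
    return 1.0 - sum(mu**i * math.exp(-mu) / math.factorial(i) for i in range(k))

def erlang_pdf(k, y, lam):
    # density of the k-th arrival time Y_k
    return lam**k * y**(k - 1) * math.exp(-lam * y) / math.factorial(k - 1)

lam, k, y, delta = 2.0, 3, 0.7, 1e-6
finite_diff = (tail_at_least_k(k, y + delta, lam) - tail_at_least_k(k, y, lam)) / delta
print(finite_diff, erlang_pdf(k, y, lam))
```

With ##\delta = 10^{-6}## the two numbers agree to several decimal places, as the limit definition predicts.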
 
  • Like
Likes Mehmood_Yasir
  • #21
Mehmood_Yasir said:
I just tried to solve this.., for calculating Erlang CDF for rth arrival. …

StoneTemplePython said:
try this instead:

##
= [ExpectedTimeUntilFirstArrival=0] + p * T_{max} + \Big(\frac{r-1}{\lambda}\,F_{Y_{r}}(\tau)\Big)
##

notice that the (1-p) is gone. …

Hi. Yes, that worked perfectly. Eliminating ##(1-p)## from the earlier formula gave a perfect match :) But honestly, I would ask you for a more detailed understanding of this. In the earlier form, ##p\,T_{max} + (1-p)\frac{r-1}{\lambda}F_{Y_r}(\tau)##, I could read ##p## as the probability of aggregation due to ##T_{max}## and its counterpart ##1-p## as the probability of aggregation due to reaching the maximal number of arrivals. But in the new one, when we remove ##(1-p)##, we use ##F_{Y_r}(\tau)##, which means we include one more probability in the summation: the upper limit of the ##1-\sum(\cdot)## goes to ##r-1## for ##F_{Y_r}(\tau)## instead of ##r-2## as in the ##(1-p)## term.

Additionally, is it not mandatory that the probabilities used to calculate an expectation value sum to 1? If yes, should I also divide the whole expression by the sum of the probabilities used, i.e., normalize? Or is that not needed?
Other than this, many, many thanks for this great cooperation. I would like to seek some further guidance from you as well.
 
Last edited:
  • #22
Mehmood_Yasir said:
For this, 'p' somehow gives the probability of aggregation due to Tmax, and its counterpart 1-p can be considered as aggregation due to maximal arrival

My concern is that probability can be quite mind bending, and when you say p 'somehow' gives the probability of aggregation... I get worried. I gave an MIT OCW link for 6.041 on the prior page. Let me suggest the 6.041x which is running right now on edx: https://www.edx.org/course/introduction-probability-science-mitx-6-041x-2 . It started 2 weeks ago but there is a week 0 where not much happens, so you likely are only one week behind. The lectures are lucid, problems challenging and there's great forum support from MIT staff. It is perhaps my all time favorite MOOC.
- - - -
Another interpretation here is that try as I might to use clear notation, I did take a shortcut or two. So to take another stab at this. Consider the erlang distribution shown below.

##f_{Y_k} = \lim_{\delta \to 0}\delta^{-1} * \big(TotalProbabilityAtLeastKArrivals(y + \delta) - TotalProbabilityAtLeastKArrivals(y) \big)##

This holds true in general. But what if I asked for this in a conditional world where you know that at least K arrivals occur by time ##y + \delta##?

Clearly such a probability must be higher than in the unconditional case. Indeed, some care is needed, but the basic idea is: if you roll a fair six-sided die, you have a 1/6 chance of getting a 2. But if I say you are in a conditional world where you know that a number ##\leq 3## was rolled, then your chance ##= \frac{\frac{1}{6}}{\frac{1}{2}} = \frac{1}{3}##

For the avoidance of doubt, you figure out and compute the exact Erlang density first (##f_{Y_k}##), and afterward you figure out and compute the conditioning.
So from here, consider how we'd do conditioning for the Erlang:

##f_{Y_k | K_{ArrivalsAtY+\delta}} = \frac{f_{Y_k}}{TotalProbabilityAtLeastKArrivals(y + \delta)}##

and then observe in the limit the above is equivalent to

##\lim_{\delta \to 0} f_{Y_k | K_{ArrivalsAtY+\delta}} = \lim_{\delta \to 0} \frac{f_{Y_k}}{TotalProbabilityAtLeastKArrivals(y + \delta)} = \frac{f_{Y_k}}{TotalProbabilityAtLeastKArrivals(y)}= f_{Y_k | K_{ArrivalsAtY}}##

From here just observe that ## (1-p) = TotalProbabilityAtLeastKArrivals(y=T_{max})##

so your very final formula could be written as

$$= [ExpectedTimeUntilFirstArrival=0] + p * T_{max} + (1-p)*\Big(\frac{ \frac{r-1}{\lambda } F_{Y_{r}}(\tau)}{1-p}\Big)$$
$$= [ExpectedTimeUntilFirstArrival=0] + p * T_{max} + \Big(\frac{r-1}{\lambda } F_{Y_{r}}(\tau)\Big)$$

Thus you can see the (1-p) terms cancel out.
- - - -
A huge part of probability comes down to partitioning arguments. The initial partitioning was between the cases where you time out, with probability p, and the cases where you don't, with probability (1-p). From there we just said the total expectation is (probability of having fewer than K (or r-1, if you like) arrivals) * T_max + Something. Note that T_max is the deterministic amount of time that kicks in given that you have fewer than K arrivals at T_max (i.e. the process terminates early). And here Something = (probability of having at least K arrivals by T_max) * (expected waiting time given that you have at least K arrivals at T_max).

Partitioning arguments are the underpinning of Law of Total Expectations-- the proof of which is pretty straightforward, at least for the countable case. I'd recommend spending some time with that proof.

If your questions are more fundamental than that, you really need to spend the time on something like 6.041x.

I don't think there is much more that I can say here --- I mean that with respect to the subject matter and original question, as well as time constraints.
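The conditioning argument above can be verified numerically. This is my own sketch (not part of the thread; the names are mine): the truncated mean ##\int_0^{T_{max}} y\, f_{Y_K}(y)\,dy## should equal ##\frac{K}{\lambda}F_{Y_{K+1}}(T_{max})##, and dividing by the tail probability (the ##1-p## step) gives the conditional expectation:

```python
import math

def tail(k, t, lam):
    # P(at least k arrivals by t), i.e. the Erlang CDF F_{Y_k}(t)
    mu = lam * t
    return 1.0 - sum(mu**i * math.exp(-mu) / math.factorial(i) for i in range(k))

def truncated_mean(k, tmax, lam, n=100_000):
    # trapezoidal integration of y * f_{Y_k}(y) over [0, tmax]
    f = lambda y: y * lam**k * y**(k - 1) * math.exp(-lam * y) / math.factorial(k - 1)
    h = tmax / n
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(tmax))

K, tmax, lam = 3, 1.0, 4.0
lhs = truncated_mean(K, tmax, lam)
rhs = (K / lam) * tail(K + 1, tmax, lam)
cond = rhs / tail(K, tmax, lam)  # E[Y_K | at least K arrivals by tmax]
print(lhs, rhs, cond)
```

The division by ##tail(K, t_{max})## is exactly the ##\frac{1}{1-p}## normalization in the post above, which is why the ##(1-p)## factors later cancel.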
 
Last edited:
  • Like
Likes Mehmood_Yasir
  • #23
StoneTemplePython said:
My concern is that probability can be quite mind bending, and when you say p 'somehow' gives the probability of aggregation... I get worried. …

Thank you so much! Yes, I got it; of course I will be happy to go through these lectures. Indeed it's a great help by you. I started learning several months ago, but to get full hands-on expertise I of course have to make some more effort. Anyway, thanks once again. I would also like to model the average delay caused by the ##r## arrivals and will share the outcome. Mehmood
 
  • #24
Mehmood_Yasir said:
Thank you so much! Yes, I got it, offcourse I will be happy to go through these lectures. …

Well I will certainly read the outcome -- and I look forward to some more nice graphs :-)

- - --
Btw, I hope the idea underlying my most recent post was very clear. I am still not totally happy with my notation. The rest of the post is done with the idea of tweaking that. (Feel free to disregard.)

Since we were really only interested in conditioning around the neighborhood of ##T_{max}## I probably should have inserted the CDF there instead:
##F_{Y_k}(T_{max})= \int_{0}^{T_{max}}\Big( \lim_{\delta \to 0}\delta^{-1} * \big(TotalProbabilityAtLeastKArrivals(y + \delta) - TotalProbabilityAtLeastKArrivals(y) \big) \Big)dy ##

which leads to

##F_{Y_k}(T_{max})\,|\, AtleastK_{ArrivalsAtT_{max}} = \frac{1}{TotalProbabilityAtLeastKArrivals(T_{max})} \int_{0}^{T_{max}}\Big( \lim_{\delta \to 0}\delta^{-1} * \big(TotalProbabilityAtLeastKArrivals(y + \delta) - TotalProbabilityAtLeastKArrivals(y) \big) \Big)dy ##

##F_{Y_k}(T_{max})\,|\, AtleastK_{ArrivalsAtT_{max}} = \frac{1}{TotalProbabilityAtLeastKArrivals(T_{max})} \int_{0}^{T_{max}} \frac{\lambda^k y ^{k-1} e^{-\lambda y}}{(k-1)!} dy##

then, given the earlier results we know about expected values and Erlang CDFs:

##E[y(T_{max})| AtleastK_{ArrivalsAtT_{max}} ] = \frac{1}{TotalProbabilityAtLeastKArrivals(T_{max})} \Big(\frac{k}{\lambda} F_{Y_{k+1}}(T_{max}) \Big) = \frac{1}{1-p} \Big(\frac{k}{\lambda} F_{Y_{k+1}}(T_{max}) \Big) ##

Alternatively, since the number of arrivals is countable, there is another way of tackling this: as a tree with an infinite number of branches, seeing that the Riemann sum needed with respect to arrival times that occur before timeout (using some very small time interval ##\delta##) is equivalent to the Erlang expected value -- very tough to show that succinctly online though. I am a big fan of trees both for modeling discrete probability and for use in computing.

In any case, I hope the idea is clear -- perhaps clearer than the notation. It's a subtle problem.
 
  • #25
Hmmm, I found a bit of time to write up a sketch of the alternative approach that I was thinking of, regarding the countably infinite tree I mentioned, though alas without a picture.

First, start by summing over the entire probability space. (Note I use K here instead of r - 1, which is a bit easier...)

##\Sigma_{i=0}^{\infty}P(i,T_{max},\lambda) = 1##
Because the Poisson is a valid PMF, it must sum to one. Now partition this into two components, those ##< K## and those ##\geq K##

##\Big(\Sigma_{i=0}^{K-1}P(i,T_{max},\lambda) \Big) + \Sigma_{i=K}^{\infty}P(i,T_{max},\lambda) = 1##

this is a clean partition and clearly the entire set of probabilities thus still sums to one. Now move from probabilities to expected values.

##EV = \Big(\Sigma_{i=0}^{K-1}P(i,T_{max},\lambda) * T_{max}\Big) + \Sigma_{i=K}^{\infty}\big(P(i,T_{max},\lambda) ThePayoff_i\big)##

The challenge now is to solve for each '##ThePayoff_i##' which represents waiting times in each case where we have ##i \geq K##. From here, we want the value of the time from 0 to ##T_{max}## multiplied by the associated probability, over a very small step size. So we now focus on the second component of our Expected Value: i.e. focus on ##\Sigma_{i=K}^{\infty}\big(P(i,T_{max},\lambda) ThePayoff_i\big)##

This can be closely approximated by:
##\Sigma_{i=K}^{\infty} \Sigma_{j=1}^{n} \Big(\frac{j*T_{max}}{n} \delta^{-1}\big(P(i,T_{max}*\frac{j}{n}+\delta,\lambda) - (P(i,T_{max}*\frac{j}{n},\lambda)\big)\Big)##

where n is some large natural number, and ##\delta## is some small value >0.
- - - -
It's worth spending some time thinking about this.

**The Payoff**
##\frac{j*T_{max}}{n}## represents the payoff/ time until kth arrival, expressed as some fraction of ##T_{max}##, because j takes on values from 1, 2, 3, ... n. (And of course ##\frac{n}{n} * T_{max}## is the largest value your payoff can take on.)

**The Probability**
As for ## \delta^{-1}\big(P(i,T_{max}*\frac{j}{n}+\delta,\lambda) - (P(i,T_{max}*\frac{j}{n},\lambda)\big) = \frac{(P(i,T_{max}*\frac{j}{n}+\delta,\lambda) - (P(i,T_{max}*\frac{j}{n},\lambda))}{\delta}##

This component relates to the portion of unique Kth arrivals that occur at some time ##T_{max}*\frac{j}{n}+\delta## that hadn't occurred at ##T_{max}*\frac{j}{n}##. We know from calculus that linear approximations work better and better over smaller time intervals, but you also *must* have ##\delta## in the denominator to make ##\frac{(P(i,T_{max}*\frac{j}{n}+\delta,\lambda) - (P(i,T_{max}*\frac{j}{n},\lambda))}{\delta}## meaningful for any size of ##\delta## that you choose.

Put differently, not having ##\delta## in the denominator would allow you to increase (decrease) the size of the probability estimates -- even holding linear-approximation quality constant -- simply by increasing (decreasing) ##\delta##. Thus by putting ##\delta## in the denominator, we hold our probability estimates constant, and allow only the quality of our linear approximation to vary.
- - - -
Now take another look at this equation:
##\Sigma_{i=K}^{\infty} \Sigma_{j=1}^{n} \Big(\frac{j*T_{max}}{n} \delta^{-1}\big(P(i,T_{max}*\frac{j}{n}+\delta,\lambda) - (P(i,T_{max}*\frac{j}{n},\lambda)\big)\Big)##

Because *all* values in here are non-negative, we can safely switch the summation, as shown below:

##\Sigma_{j=1}^{n} \Sigma_{i=K}^{\infty} \frac{j*T_{max}}{n} \delta^{-1}\big(P(i,T_{max}*\frac{j}{n}+\delta,\lambda) - (P(i,T_{max}*\frac{j}{n},\lambda)\big)##

from here we can move ##\frac{j*T_{max}}{n}## outside the nested sigma:

##\Sigma_{j=1}^{n} \frac{j*T_{max}}{n} \Sigma_{i=K}^{\infty}\delta^{-1} \big(P(i,T_{max}*\frac{j}{n}+\delta,\lambda) - (P(i,T_{max}*\frac{j}{n},\lambda)\big)##

now recognize that the nested sigma is an approximation of an Erlang density, so we'll take a limit to get that; and because expected values fundamentally depend on probabilities / densities (but not necessarily vice versa), we then take a limit for large n...

##\Sigma_{j=1}^{n} \frac{j*T_{max}}{n} \Big(\Sigma_{i=K}^{\infty} \delta^{-1}\big(P(i,T_{max}*\frac{j}{n}+\delta,\lambda) - (P(i,T_{max}*\frac{j}{n},\lambda)\big)\Big)##

##\Sigma_{j=1}^{n} \frac{j*T_{max}}{n} \Big( \lim_{\delta \to 0} \Sigma_{i=K}^{\infty} \delta^{-1}\big(P(i,T_{max}*\frac{j}{n}+\delta,\lambda) - (P(i,T_{max}*\frac{j}{n},\lambda)\big)\Big)##

If you wanted full rigour, people would do the most nitpicking at this stage, because some would request a more in depth / careful analysis of the exact sequencing of the limits, and an explanation of what seems to be a mixing of discrete numbers relevant to probabilities, with densities. My general view is that this is tedious and misses the point, especially since the solution has been shown using other methods. This is just a complementary sketch. For expedience I'll move ahead with:

##\lim_{n \to \infty} \Sigma_{j=1}^{n} \frac{j*T_{max}}{n} \Big( \lim_{\delta \to 0} \Sigma_{i=K}^{\infty} \delta^{-1}\big(P(i,T_{max}*\frac{j}{n}+\delta,\lambda) - (P(i,T_{max}*\frac{j}{n},\lambda)\big)\Big)##

##\int_{0}^{T_{max}} y * f_{Y_{K}}(y)\, dy##

which is our expected value, which we determined equals ##\frac{K}{\lambda} F_{Y_{K+1}}(T_{max})##

and plugging back into the original equation:

##EV = \Big(\Sigma_{i=0}^{K-1}P(i,T_{max},\lambda) * T_{max}\Big) + \Big(\Sigma_{i=K}^{\infty}\big(P(i,T_{max},\lambda) ThePayoff_i\big)\Big)##

##EV = \Big(\Sigma_{i=0}^{K-1}P(i,T_{max},\lambda) * T_{max}\Big) + \frac{K}{\lambda} F_{Y_{K+1}}(T_{max})##

##EV = p * T_{max} + \frac{K}{\lambda} F_{Y_{K+1}}(T_{max})##
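As a closing check of that formula (my own sketch, not part of the original derivation; the names are mine), a direct Monte Carlo simulation of the capped waiting time ##\min(Y_K, T_{max})## can be compared against ##EV = p\,T_{max} + \frac{K}{\lambda} F_{Y_{K+1}}(T_{max})##:

```python
import math
import random

def analytic_ev(K, tmax, lam):
    # EV = p*Tmax + (K/lam) * F_{Y_{K+1}}(Tmax),
    # with p = P(fewer than K arrivals by Tmax)
    mu = lam * tmax
    pmf = [mu**i * math.exp(-mu) / math.factorial(i) for i in range(K + 1)]
    p = sum(pmf[:K])                 # timer expires before the K-th arrival
    F_next = 1.0 - sum(pmf)          # Erlang CDF F_{Y_{K+1}}(Tmax) via Poisson complement
    return p * tmax + (K / lam) * F_next

def simulated_ev(K, tmax, lam, trials=200_000, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        y_k = sum(rng.expovariate(lam) for _ in range(K))  # time of the K-th arrival
        total += min(y_k, tmax)                            # timer caps the wait
    return total / trials

K, tmax, lam = 4, 2.0, 3.0
print(analytic_ev(K, tmax, lam), simulated_ev(K, tmax, lam))
```

The two values agree to within Monte Carlo noise, mirroring the match reported between the thread's formula and the simulations.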
 
  • #26
I understand this alternative approach you propose. However, the first approach, which you explained by finding the right Erlang density, is clearer to me, and technically I could not see any issue in it; it looks like quite a promising solution (to my understanding, though of course you have the deeper understanding). I agree about the notation being used; I can recheck it for a better understanding.
Thanks a lot for the support! You indeed spent a lot of time figuring out this issue, and it is a great support and motivation for me to keep learning and exploring this mechanism. Last night, I tried to model the expected delay of the subsequent arrivals as well as their average waiting time, and I came up with the following findings.

Based on our previous knowledge, the expected longest waiting time, that of the first arrival, is ##EV = p * T_{max} + \frac{r-1}{\lambda} F_{Y_{r}}(T_{max})##. We also know that arrivals in a Poisson process have mean inter-arrival time ##\frac{1}{\lambda}##. Therefore, the first packet arriving after the initial one (the 2nd arrival in total if we count the first arrival at time 0) has expected waiting time ##EV_{FirstArrival}-\frac{1}{\lambda}##, or in general ##EV_{i} = EV_{FirstArrival}-\frac{i-1}{\lambda}## for ##i=2,3,\dots,r##. However, I feel this again will not work well for low lambda, since the subtracted term becomes larger than ##EV_{FirstArrival}## there. For higher lambda it works perfectly, because ##EV_{FirstArrival}## then follows the Erlang distribution: the clock does not play a vital role and the probability of aggregation due to maximal arrivals is very high. I implemented the process in a simulator for some analysis. For even lower lambda values the expected time from this formula can be negative, which is not acceptable.
Res-04.jpg
Similarly, when ##r## arrivals are aggregated, the first arrival experiences the longest waiting time, whereas the last, i.e., the ##r##-th, experiences the shortest. If we assume the waiting time is uniformly distributed between 0 (shortest waiting time) and ##EV_{FirstArrival}## (longest waiting time), then the mean waiting time is half the longest waiting time, i.e., ##\frac{EV_{FirstArrival}}{2}##. Again, this fits only when the shortest waiting time is 0, which holds only when aggregation is triggered by reaching the maximal number of arrivals before the clock times out. That is not the case for low arrival rates, where aggregation is mainly done due to timeout, so the shortest waiting time for low lambda is greater than 0.
Res-05.jpg
 
Last edited:
  • #27
Hmmm, I like the pictures and convergence around 1000 or so. Can you restate what your question is? I didn't totally follow.

I am thinking your question may have a simple answer since you get a fresh start after each arrival -- including that first one that we've been ignoring except for the fact that it gets the clock going. Since you mention getting negative values at times, I'm thinking a 'simple' fix is in order. If not, at least be happy that you're asymptotically correct!
 
  • Like
Likes Mehmood_Yasir
  • #28
Hi! Yes, I can explain again. The first question is about the delays of the subsequent arrivals, i.e., the ##2^{nd}, 3^{rd}, \dots, r^{th}##.

As previously mentioned, I model e.g. the expected delay of the 2nd arrival as ##EV_{FirstArrival}- \frac{1}{\lambda}##, because we know the arrivals of a Poisson process have average inter-arrival time ##\frac{1}{\lambda}##. However, this formula does not give precise expected delay values for the 2nd arrival at low lambda; for higher lambda it works well. How can this discrepancy be removed? Any idea? Also, yes, the negative values for very low lambda can be fixed asymptotically, but the values still do not match the simulations. Is this the right approach to modeling the expected delays of the subsequent arrivals?

Secondly, the mean waiting time I calculated above has the same problem for low lambda. I took the mean waiting time over the ##r## arrivals to be ##\frac{EV_{FirstArrival}}{2}##, reasoning that the first arrival experiences an expected delay of ##EV_{FirstArrival}## whereas the last experiences 0 delay, so the mean is ##\frac{EV_{FirstArrival}}{2}##. In fact this holds for higher values of lambda, as you saw in the picture. For low lambda, however, the first arrival still of course experiences expected delay ##EV_{FirstArrival}##, but the last arrival's delay is not exactly 0; it is sometimes ##>0##, namely whenever aggregation happens due to timeout, which is the usual case when lambda is low. How can this case be handled, i.e., for calculating the average waiting time?
 
  • #29
The expected amount of time (given by the clock that starts after arrival one) associated with just arrival 2 is ##EV = p * T_{max} + \frac{2-1}{\lambda} F_{Y_{2}}(T_{max}) = p * T_{max} + \frac{1}{\lambda} F_{Y_{2}}(T_{max})## . For clarity, let's refer to the clock here as a grandfather clock.

Now suppose we also had a stopwatch monitoring this experiment from time zero, and you wanted to use it to measure the total expected time associated with the initial arrival (which starts the grandfather clock and nothing else) plus the arrival that comes after. That would be given by ##\frac{1}{\lambda}+ p * T_{max} + \frac{1}{\lambda} F_{Y_{2}}(T_{max})##, due to the fresh start that occurs after that initial arrival (and the linearity of expectations). Again, for this paragraph we are using the stopwatch, not the grandfather clock, for purposes of measuring total time.

Was this your question? I still can't really tell what you're trying to do on this most recent variation. Keep in mind that calculations and adjustments regarding what happens before the grandfather clock starts, should be easy.
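A quick Monte Carlo sketch of the stopwatch calculation, with assumed example values for lambda and ##T_{max}##: the closed form above should match the simulated mean of the initial inter-arrival time plus the next gap capped at ##T_{max}##.

```python
import math
import random

lam, T_max = 0.5, 3.0   # assumed example values
random.seed(0)

# Closed form from above: 1/lam + p*T_max + (1/lam)*F_{Y_2}(T_max),
# with p = e^{-lam*T_max} and Y_2 ~ Erlang(2, lam)
p = math.exp(-lam * T_max)
F_Y2 = 1.0 - math.exp(-lam * T_max) * (1.0 + lam * T_max)
ev_formula = 1.0 / lam + p * T_max + F_Y2 / lam

# Stopwatch Monte Carlo: time of the initial arrival, plus the time to the
# next arrival capped at T_max (the grandfather-clock contribution)
n = 200_000
total = 0.0
for _ in range(n):
    first = random.expovariate(lam)                  # initial arrival; fresh start after it
    second_gap = min(random.expovariate(lam), T_max)
    total += first + second_gap
ev_mc = total / n

print(ev_formula, ev_mc)   # the two should agree to within Monte Carlo noise
```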
 
  • #30
Thanks. OK, let me put my question in the form of a figure for better understanding. Just give me some time.
 
  • #31
Btw, I'll tell you that I think the original problem is much easier if we reformulate it, and solve it in stages.

Stage 1: you are offering a prize of \$1 multiplied by the arrival time of the K-th packet (where each inter-arrival time is i.i.d. and characterized by an exponential distribution). You have no restrictions at all, except only the first K packets count. How much do you expect to pay? ##\$1 \cdot \frac{K}{\lambda}##

Stage 2: you say you don't like waiting around too much, so you will stop your game and do payouts at time ##T_{max}##. If otherwise qualifying arrivals haven't shown up by then, too bad -- they get nothing. So your expected payout is ##\$1 \cdot \frac{K}{\lambda} F_{Y_{K+1}}(T_{max})##

Stage 3: you're feeling generous and decide that if otherwise qualifying arrivals show up after ##T_{max}##, they should still get something just for 'trying'... so in those cases they get paid ##T_{max}##. What is your expected payout? ##\$1 \cdot p \cdot T_{max} + \$1 \cdot \frac{K}{\lambda} F_{Y_{K+1}}(T_{max})##

- - - - -
Maybe you can try to re-formulate your new variation of the problem into a 3-stage problem like the above?
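The three stages can be checked numerically. A minimal Python sketch, with assumed example values for lambda, K and ##T_{max}##, comparing the Stage 3 closed form against a direct Monte Carlo of the payout ##\min(Y_K, T_{max})##:

```python
import math
import random

lam, K, T_max = 1.0, 4, 3.0   # assumed example values
random.seed(1)

def erlang_cdf(k, lam, t):
    # F_{Y_k}(t) for Y_k ~ Erlang(k, lam): 1 - e^{-lam*t} * sum_{n<k} (lam*t)^n / n!
    return 1.0 - math.exp(-lam * t) * sum((lam * t) ** n / math.factorial(n) for n in range(k))

# Stage 3 payout: p*T_max + (K/lam)*F_{Y_{K+1}}(T_max), with p = P(Y_K > T_max)
p = 1.0 - erlang_cdf(K, lam, T_max)
ev_formula = p * T_max + (K / lam) * erlang_cdf(K + 1, lam, T_max)

# Monte Carlo: each trial pays min(Y_K, T_max), where Y_K is a sum of K exponentials
n = 200_000
total = 0.0
for _ in range(n):
    y_k = sum(random.expovariate(lam) for _ in range(K))
    total += min(y_k, T_max)
ev_mc = total / n

print(ev_formula, ev_mc)   # the two estimates should agree closely
```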
 
  • #32
Hi. Let me check it. Before that, I forgot to upload this picture for better understanding.
We previously formulated the expected waiting time of the first arrival, which is the longest-waiting arrival in the group of aggregated arrivals.
Now, I am further interested:
1) to find the expected waiting times of the second, third, fourth, ... arrivals, i.e., how long each of them has waited. Surely their waiting times are less than the first arrival's. If we consider a Poisson process, arrivals have exponential inter-arrival times with mean 1/lambda, so I thought the expected waiting time of the second arrival should be (1/lambda) less than the first one's. But for low lambda this does not work, as I showed in my previous figures. Similarly, I want to find the expected waiting times of the third, fourth, fifth, ... arrivals.

2) to find the average waiting time for an aggregated package. Assume the aggregated segment contains 4 arrivals; then the average waiting time is the mean over all their individual waiting times. As an example, assume 'r' arrivals are aggregated and we want the average waiting time over these 'r' arrivals, each with a different waiting time. Of these 'r', the first arrival experiences the longest waiting time and the 'r'-th the shortest, in our case 0, since it does not wait for the clock and gets service immediately. So if we consider uniformly distributed waiting times, the average is (longest + shortest)/2; for 'r' aggregated arrivals the shortest is 0, because we received 'r' arrivals before the timeout and serve immediately, so the average would be (longest + 0)/2. You saw the results work very well, but at low lambda they do not match, because the aggregation is not always triggered by 'r' arrivals and may contain fewer arrivals.

Sorry, in the file it is not 1/lambda time; I mean 1/lambda less than the first arrival's waiting time.
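The aggregation in question 2 can be sketched as a small simulation (the values of lambda, the batch size and the timer below are assumed for illustration). At high lambda, where the timer almost never fires, the pooled per-packet average wait does come out as half the first packet's wait:

```python
import random

def simulate(lam, m, T_max, n_batches=100_000, seed=0):
    """Aggregate up to m packets, or dispatch at T_max after the first arrival.
    Returns (mean wait of the first packet, mean wait over all packets)."""
    rng = random.Random(seed)
    first_waits, all_waits = [], []
    for _ in range(n_batches):
        arrivals = [0.0]                        # clock starts at the first packet
        while len(arrivals) < m:
            t = arrivals[-1] + rng.expovariate(lam)
            if t > T_max:                       # next packet misses this batch
                break
            arrivals.append(t)
        depart = arrivals[-1] if len(arrivals) == m else T_max
        first_waits.append(depart - arrivals[0])
        all_waits.extend(depart - a for a in arrivals)
    return sum(first_waits) / len(first_waits), sum(all_waits) / len(all_waits)

ev_first, ev_mean = simulate(lam=2.0, m=5, T_max=10.0)
print(ev_first, ev_mean, ev_first / 2)
```

With these values the timer essentially never expires, so the first packet waits (m-1)/lambda on average and the pooled mean is half that; re-running with a low lambda shows the (longest + 0)/2 rule breaking down, as described above.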
 

Attachments

  • 1.pdf
  • #33
The graphic you attached helps me figure out where your confusion is.

I fundamentally am unhappy, though, that you aren't taking advantage of the fresh start property here. I think the initial arrival you label as 1, should be labeled as 0, or dummy or even ignored. A Poisson process is defined as a counting process, but the fact that there is no clock running until the first arrival in your picture makes me uncomfortable.

My suggestion in your picture is relabel the far left blue arrow as "dummy", then relabel the next arrivals as 1, 2, 3, ..., k.

I think your fundamental problem is that the expected arrival time between "dummy" and 1 (what you were calling 2) is not ##\frac{1}{\lambda}##. For any arrival process where W arrivals occur, if there is a constraint that it only counts when it occurs at a time ##\leq T_{max}##, then you get an expected waiting time of ##\frac{W}{\lambda} F_{Y_{W+1}}(T_{max})##. This is something I've stated and presented from many different viewpoints -- I really don't know what else to say about it. My concern is that you seem to have some formulas in mind but still don't really understand where they came from or when they are or are not appropriate to use. The fact that exponential distributions are memoryless is a very big deal...

From here, I'd simply point out that you have linearity of expectations and overlapping subproblems. You can solve this top down or bottom up. The idea is quite simply that you have a formula ##p*T_{max} + \frac{K}{\lambda} F_{Y_{K+1}}(T_{max})## for arrivals, for some K. Try it with say K := 5. Now try it again with K := 4. The first result gives you the expected waiting time when you want 5 arrivals. The second gives you the expected waiting time when you want 4 arrivals. The difference between these two numbers must be the expected waiting time associated with wanting a 5th arrival. (Note it is vital you hold T_max constant for all these calculations.) Put differently, if you know the expected waiting time associated with the first 4 arrivals, and the expected waiting time of the 5th arrival, the sum of those two items is the expected waiting time for K=5 arrivals.
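This difference-of-expectations idea can be written out directly. A short sketch with assumed example values for lambda and ##T_{max}## (held constant across the two calls, as stressed above):

```python
import math

lam, T_max = 1.0, 3.0   # assumed example values

def erlang_cdf(k, lam, t):
    # F_{Y_k}(t) for Y_k ~ Erlang(k, lam)
    return 1.0 - math.exp(-lam * t) * sum((lam * t) ** n / math.factorial(n) for n in range(k))

def expected_wait(K, lam, t_max):
    # p*t_max + (K/lam)*F_{Y_{K+1}}(t_max): expected waiting time for K arrivals, capped at t_max
    p = 1.0 - erlang_cdf(K, lam, t_max)
    return p * t_max + (K / lam) * erlang_cdf(K + 1, lam, t_max)

# Expected waiting time attributable to the 5th arrival alone
marginal_5th = expected_wait(5, lam, T_max) - expected_wait(4, lam, T_max)
print(marginal_5th)   # strictly between 0 and 1/lam, since T_max truncates the wait
```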

It still isn't clear to me what your second question is asking, but I don't think further assistance from me here is helping. Your focus should be on building a foundation in probability in order to understand this.
 
  • Like
Likes Mehmood_Yasir
  • #34
Thank you. I am sorry, indeed I took a lot of your time, but I learned a lot. I went through the lectures and revisited the way I was calculating the expected waiting time of the ##1^{st}## arrival, and also the way you proposed.

First, the reason why I used a ##1/\lambda## time difference between the ##0^{th}## and ##1^{st}## arrivals in the simulator: in the simulator, I generated a series of i.i.d. inter-arrival times using an exponential distribution. For a particular lambda, consecutive arrivals are separated by a time difference that is on average 1/lambda, the expected value of an exponentially distributed time. Up to this point I did not use any clock; it means that if we have a million packets, they are all on average separated by a distance of ##1/\lambda##. Now consider a timer/stopwatch, and we should also not forget the fresh start in our process, as you mentioned. From any position in the list of inter-arrival times, we start the clock whenever a ##0^{th}## arrival appears, but this has no impact on the average inter-arrival differences; it means the expected waiting time until the ##1^{st}## arrival is also ##1/\lambda## on average (I am only talking about the simulator), since I take no account of whether a clock is running or not. We can repeat the process anywhere in the whole trace: wherever an aggregated packet is sent and a fresh start is needed, i.e., a ##0^{th}## arrival has appeared, we restart the clock. That is how the simulator works. I expected the analytical model to give the same results, but your argument about the memoryless property is also correct. For the analytical model, I checked the way you mentioned.

First I used ##k=1##, which gives the expected waiting time until the ##1^{st}## arrival. Then I used ##k=6## with the same approach to calculate the expected waiting time until 6 arrivals, and since I knew the ##k=1## value from the previous result, I subtracted it from the ##k=6## value. This gives the expected waiting time for the remaining 6-1 arrivals. Comparing the results, they still do not fully agree at low lambda, but the good thing is that there is no longer any risk of negative values, as there was before when I just used ##1/\lambda## as the time difference between the ##0^{th}## and ##1^{st}## arrivals. I will investigate it further.
Res-06.jpg

For the second question, what I want to find is the average waiting time of an aggregated packet made of ##k## arrivals: simply, how much an arrival in the aggregated packet has waited on average. It is a bit different from the individual waiting times of the separate arrivals. As an example, let ##k=6## with a timer expiry of 10 ms; say the ##0^{th}## arrival has an expected waiting time of 10 ms, the ##1^{st}## 8.1 ms, the ##2^{nd}## 6 ms, ..., and the ##k^{th}## 0 ms. I would like to find the average of all these values, which is in fact the average waiting time over the k arrivals.
I am sorry once again for taking so much of your time. I am still learning and trying to model this, and hopefully with this kind of assistance I will pick up these details even more quickly. Anyhow, thanks a lot.
 
  • #35
I made a small mistake in the legends of the above graph; it is reversed. The blue curve depicts the analytical results and the black one the results from the simulator.
 