# Jackson network in steady state

1. Jan 25, 2014

### thomas49th

1. The problem statement, all variables and given/known data
Consider a network of n queues with a Poisson arrival process of
rate $$\lambda$$ from outside the network, and independent exponentially
distributed service times of parameters $$r_1$$ to $$r_n$$.

Customers arriving to the network from outside initially join queue i with
probability $$P_i$$ (obviously these probabilities sum to one).

A customer completing its service at queue i will head to queue j
with probability $$P_{ij}$$, or will leave the network with probability $$P_{i, n+1}$$.

Let K(t) denote the queue length vector for each of the queues at time
t, and let $$k = (k_1, \ldots, k_n)$$, with each $$k_i \geq 0$$,

be a particular value of this vector.

Obtain $$p(k) = \lim_{t \to \infty} p(k,t)$$, where $$p(k,t) = \text{Prob}[K(t) = k]$$, in
terms of the parameters previously defined, and prove your result in
full detail.

2. Relevant equations

3. The attempt at a solution
I can immediately tell that we are talking about the network settling into a steady state.

To begin solving it I considered a small time interval delta t, during which, at each node, one packet may arrive, none may arrive, or more than one may arrive. Each event has a probability. When we work through the maths it is possible to subtract from both sides of the equation.

Ultimately I express $$\frac{d}{dt}p(k,t) =$$ a product of sums over events
(i.e. probability of data arriving at node i × probability of no data arriving at node i).

In steady state the rate of change will be 0, therefore $$\frac{d}{dt}p(k,t) = 0$$.

This is where I am stuck. Now what do I do? I know from other reading that the result will be a product form, where the nodes behave independently of one another (a powerful result). But how do I get from $$\frac{d}{dt}p(k,t) = 0$$ to p(k)? I don't think I can just integrate, nor do I solve a differential equation. What should I do from here?

Thanks

2. Jan 26, 2014

### Ray Vickson

Solving the differential equations analytically is hopeless. The whole point of steady-state analysis is that you do not need to solve the differential equations---you just need to solve some linear equations. Surely your textbook or course notes tell you what you need to do.
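To see what "just linear equations" means in the simplest case, here is a minimal numerical sketch (my own toy example, not from the problem): a single M/M/1 queue truncated at K customers. The steady-state vector solves the linear system $\pi Q = 0$ with $\sum_k \pi_k = 1$, and it matches the known geometric form $(1-\rho)\rho^k$.

```python
import numpy as np

# Single M/M/1 queue: arrival rate lam, service rate mu, truncated at K customers.
lam, mu, K = 1.0, 2.0, 50

# Generator matrix Q of the birth-death chain on states 0..K.
Q = np.zeros((K + 1, K + 1))
for k in range(K):
    Q[k, k + 1] = lam      # arrival:  k -> k+1
    Q[k + 1, k] = mu       # service:  k+1 -> k
np.fill_diagonal(Q, -Q.sum(axis=1))

# Steady state: solve pi @ Q = 0 together with sum(pi) = 1
# (stack the normalization row onto Q^T and least-squares solve).
A = np.vstack([Q.T, np.ones(K + 1)])
b = np.zeros(K + 2)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Compare with the analytic geometric form p(k) = (1 - rho) * rho**k.
rho = lam / mu
geom = (1 - rho) * rho ** np.arange(K + 1)
print(np.max(np.abs(pi - geom)))  # tiny: only truncation/rounding error
```

The point is that no differential equation ever gets solved; setting the time derivative to zero turns the problem into linear algebra.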

3. Jan 26, 2014

### thomas49th

The only textbook I have doesn't cover this in detail. Can you show me what these linear equations look like? I have n equations (n is the number of queues), and I can't picture what I'm going to get out of this.

4. Jan 26, 2014

### Ray Vickson

Google is your friend. Searching on "Jackson network" produces lots of hits. A particularly detailed treatment (with several worked examples) is http://www.netlab.tkk.fi/opetus/s383143/kalvot/E_qnets.pdf . See also http://www.iitg.ernet.in/skbose/qbook/Slide_Set_14.PDF for more examples, etc. A somewhat less detailed treatment is in http://en.wikipedia.org/wiki/Jackson_network .

The necessary presentation is just a bit too long for writing out here in the Forum. Anyway, others have already written it all out, and you just need to go to the sources.

5. Jan 26, 2014

### thomas49th

I've already had a look at the iitg.ernet slide set, but I don't understand the part "The following relations hold for the product form trial". What are they saying there?
http://gyazo.com/6856a335119c3762369bfc5ef06b213f

6. Jan 26, 2014

### haruspex

It's a bit hard to tell without seeing the whole proof (or, at least, the statement of the theorem), but it looks to me as though all this is doing is taking what is claimed to be a solution (which is in product form) and substituting it as a trial solution into the equation.

7. Jan 26, 2014

### Ray Vickson

I'm not sure exactly what it is that you are not getting, so let me try to explain, without mathematical details.

(1) To get the steady state you need to solve some linear equations. These involve the variables $p(k_1,k_2,\ldots, k_n)$; if the queue sizes are not limited there are no limits on the $k_i$, so we have $\infty^n$ equations in $\infty^n$ unknowns. Even if we truncate to, say, 100 customers at most in each queue (so that $0 \leq k_i \leq 100 \; \forall i \, )$ we still could have a huge system: if $n = 10$ we have $101^{10}$ equations in $101^{10}$ unknowns. This is still pretty big: we would have more than $10^{20}$ equations in more than $10^{20}$ unknowns.

(2) So, we try to avoid doing that. What Jackson noticed was that under certain circumstances the solution can be factored into a product, with one factor for each queue separately. That is, we can assume the form
$$p(k_1,k_2,\ldots,k_n) = p_1(k_1) \cdot p_2(k_2)\cdot \: \cdots \: \cdot p_n(k_n).$$
This allows us to obtain a tractable system of equations to determine the individual functions $p_i(k_i)$. The articles are just dealing with the details of doing all that.
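As a concrete sketch of how the product form is used in practice (all parameter values here are invented for illustration): first solve the traffic equations $\lambda_i = \lambda P_i + \sum_j \lambda_j P_{ji}$ for the effective arrival rates, set $\rho_i = \lambda_i / r_i$, and then each queue's marginal is geometric.

```python
import numpy as np

# Invented 3-queue example. External Poisson rate lam; a customer enters
# queue i with probability p0[i]; routing matrix P[i, j] = P_ij; the
# leftover probability 1 - sum_j P[i, j] is the exit probability P_{i,n+1}.
lam = 2.0
p0 = np.array([0.6, 0.3, 0.1])
P = np.array([[0.0, 0.5, 0.2],
              [0.0, 0.0, 0.4],
              [0.1, 0.0, 0.0]])
r = np.array([5.0, 4.0, 3.0])          # service rates r_1..r_n

# Traffic equations: lam_i = lam * p0_i + sum_j lam_j * P_ji,
# i.e. lam_vec = lam * p0 + P.T @ lam_vec  =>  (I - P.T) lam_vec = lam * p0
lam_vec = np.linalg.solve(np.eye(3) - P.T, lam * p0)
rho = lam_vec / r                       # utilisation of each queue
assert np.all(rho < 1), "network must be stable"

# Product form: p(k) = prod_i (1 - rho_i) * rho_i**k_i
def p(k):
    return np.prod((1 - rho) * rho ** np.asarray(k))

print(rho)
print(p([0, 0, 0]), p([2, 1, 0]))
```

Note the linear system being solved is only $n \times n$ (here $3 \times 3$), not the $101^{10}$-sized monster: that is exactly the payoff of the product form.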

8. Feb 1, 2014

### thomas49th

thanks, I pretty much got it in the end, but there is one thing I'm still not happy about.

Each p(k) = chance of it appearing k times * (1 - chance of getting zero).
In normal trials of flipping a coin, if you have n trials and you want to find the probability of flipping k heads, then you would have
P(k heads) = (0.5)^k * (1 - 0.5)^(n-k)

But in the Jackson network there is no upper bound for the number of trials, and we just write p(k) = x^k(1-x). Why is the second multiplicand (the 1-x) not raised to anything?

Thanks

9. Feb 1, 2014

### Ray Vickson

Whoa...stop right there. You are confusing two totally different things.

The distribution of the number of heads in n tosses is, indeed, the binomial distribution you wrote (incorrectly). The distribution of N (= the number of customers in a queue) is of the form $\text{P}\{N = n\} = (1-p) p^n,\: n=0,1,2, \ldots$. This is called the geometric distribution (or one variant of it, at least); you could interpret this as the distribution of the number of trials before the first 'success' in a series of independent trials with success probability $1-p$ per trial. In coin-tossing it would be the number of tosses before getting your first head. In principle, you could toss the coin 100,000 times or more before getting a head; that is why there is no limit on 'n'.

There is another (more common) version of the geometric distribution, which is the number of trials Y until the first success; this has distribution $\text{P}\{Y = n \} = (1-p) p^{n-1}, \: n = 1,2, \ldots.$ It does not start at n=0 because there is no toss number 0. The random variable Y includes the success toss number in the count, so must be 1 or 2 or 3 or ... .

So, the essential difference between binomial and geometric is that binomial looks at the number of trials as fixed and counts the number of successes, while the geometric allows the number of trials to vary until a first success occurs.

BTW: the correct formula for the binomial is
$$\text{P}\{ k \text{ heads }\} = {n \choose k} p^k (1-p)^{n-k}, \: k = 0,1,2, \ldots, n$$
You need the binomial coefficient ${n \choose k}$ to account for the various orders in which the heads can occur in the string of n tosses. I hope you already knew this and were just being careless in your writing.
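The contrast is easy to check numerically. A small sketch (with p = 0.5 and n = 10 chosen arbitrarily): the binomial lives on the fixed range 0..n, while the geometric has unbounded support, and both sum to 1.

```python
from math import comb

p = 0.5  # per-trial probability (arbitrary choice)

# Binomial: number of heads in a FIXED number n of tosses,
# P(k heads) = C(n, k) * p**k * (1-p)**(n-k), k = 0..n.
n = 10
binom = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

# Geometric (starting at 0): tosses before the first head,
# P(N = k) = (1 - p) * p**k, with no upper bound on k.
geom = [(1 - p) * p**k for k in range(200)]

# Both are probability distributions: each sums to 1 (the geometric
# only up to the tail cut off by the truncation at 200 terms).
print(sum(binom))
print(sum(geom))
```

The missing exponent on the (1-x) factor in the geometric case is just the single success that ends the run of k "failures"; there is no fixed n to distribute the remaining trials over.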

10. Feb 6, 2014

### thomas49th

Ahhhhhh thank you thank you. That is making more sense now