# Poisson inflow, constant outflow question

1. Jul 5, 2011

### pmsrw3

The following problem came up in my work.

You have a tube, open at the top. Raindrops fall into the mouth of the tube at a mean rate i per second, 0 <= i < 1, in a Poisson process. There's a hole in the bottom of the tube. When there's water in the tube, it flows out at a constant rate of 1 drop/sec. (Ignore that this is not physically plausible. You can imagine that there's a peristaltic pump hooked up if it makes you feel better.) What is the mean amount of water in the tube at steady-state?

I came up with a complicated solution to this (which I will describe, if anyone wishes) in the form of a sum. But when I evaluated my solution numerically, I found that, within round-off error, computed values equaled the simple result <v> = i/(2(1 - i)). This is also plausible and has correct limiting behavior at 0 and 1. Since then I've been trying hard without success to come up with a proof or even fairly good rationale for that formula.
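A quick event-driven Monte Carlo (my own sketch; the function name and run length are arbitrary choices) supports the conjectured formula. Between arrivals the volume drains linearly, so the time integral of the volume can be accumulated exactly:

```python
import random

def mean_water(i, n_arrivals=200_000, seed=1):
    """Time-average volume in the tube: unit drops arrive in a Poisson
    process of rate i; water drains at rate 1 whenever the tube is non-empty."""
    rng = random.Random(seed)
    v = 0.0      # current volume, in drops
    t = 0.0      # current time, in seconds
    area = 0.0   # running integral of v(t) dt
    for _ in range(n_arrivals):
        dt = rng.expovariate(i)   # gap until the next drop
        drain = min(v, dt)        # the tube empties after v more seconds
        area += v * drain - drain * drain / 2.0   # area under the linear drain
        v -= drain
        t += dt
        v += 1.0                  # the arriving drop
    return area / t

print(mean_water(0.5))   # close to 0.5/(2*(1 - 0.5)) = 0.5
```

For i = 0.5 the estimate comes out near 0.5, matching i/(2(1 - i)).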

Thanks in advance for any ideas.

2. Jul 6, 2011

### Stephen Tashi

The problem resembles the queueing theory model with a single server, Poisson arrivals, and a constant service time. However, the result for "average number of customers in the queue" (for example, p. 8 of this PDF: http://www.google.com/url?sa=t&sour...LHloQ4&usg=AFQjCNFWSUKGueI5N_p_E4GxhKRThY14HQ ) doesn't look like your answer.

Are the output drops scheduled at even increments? For example, if the output drop rate is 1 per second, suppose there is no water in the tube at time t = 0 and a drop arrives at time t = 3/4. Does the output drop happen at t = 1 or t = (1 + 3/4)?

3. Jul 6, 2011

### bpet

Writing it as $N_{t+1}=\max(0,N_t+K_t-1)$ where $K_t$ is Poisson, we have the steady state equations for $q_n=P[N=n]$
$$q_{n+1}=\frac{1}{p_0}-\sum_{k=0}^n p_{n+1-k}q_k - \delta_n p_0q_0$$
where the $p_j$ are the Poisson probabilities. Taking the expectation of N leads to
$$q_0=(1-\lambda)e^{\lambda}$$
and taking the expectation of N^2 leads to
$$E[N]=\frac{\lambda^2}{2(1-\lambda)}$$
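Both closed forms can be checked numerically (my own sketch, taking $\lambda=1/2$; the truncation level and iteration count are arbitrary) by power-iterating the chain $N_{t+1}=\max(0,N_t+K_t-1)$ on a truncated state space:

```python
import math

def poisson_pmf(lam, kmax):
    # p[k] = e^{-lam} lam^k / k! for k = 0..kmax
    p = [math.exp(-lam)]
    for k in range(1, kmax + 1):
        p.append(p[-1] * lam / k)
    return p

def stationary(lam, nmax=60, iters=1500):
    """Power-iterate q <- qP for N' = max(0, N + K - 1), K ~ Poisson(lam),
    with the state space truncated to {0, ..., nmax-1}."""
    p = poisson_pmf(lam, nmax)
    q = [1.0 / nmax] * nmax
    for _ in range(iters):
        new = [0.0] * nmax
        for n, qn in enumerate(q):
            if n == 0:
                new[0] += qn * (p[0] + p[1])       # K = 0 or 1 keeps N at 0
                for j in range(1, nmax):
                    new[j] += qn * p[j + 1]        # K = j + 1 moves 0 -> j
            else:
                for k in range(nmax - n + 1):
                    new[n - 1 + k] += qn * p[k]    # N -> N + K - 1
        total = sum(new)
        q = [x / total for x in new]               # renormalize truncated mass
    return q

lam = 0.5
q = stationary(lam)
mean_N = sum(n * qn for n, qn in enumerate(q))
print(q[0], (1 - lam) * math.exp(lam))        # both ~0.8244
print(mean_N, lam**2 / (2 * (1 - lam)))       # both ~0.25
```

The stationary $q_0$ and $E[N]$ agree with $(1-\lambda)e^{\lambda}$ and $\lambda^2/(2(1-\lambda))$ to several decimal places.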

Last edited: Jul 6, 2011
4. Jul 7, 2011

### pmsrw3

No, it's in continuous time -- the drops arrive when they arrive, and the output flow is continuous. However, you can derive a discrete time Markov chain by having a clock that ticks when the first drop falls in (assume we start empty), at 1s intervals thereafter until the tube drains, then waits until the next drop for the next tick. So, for instance if drops arrived at 1.1, 1.3, 1.5, 4.6, 5.9, 6.4, ... ticks would occur at 1.1, 2.1, 3.1, 4.1, 4.6, 5.6, 5.9, 6.9, 7.9, ... At each tick the number of drops will be an integer (1, 2, 1, 0, 1, 0, 1, 1, 0, ...). You can solve for the steady-state occupancies of this chain. The mean will not be the mean of the continuous time system, because the 0's will be underweighted. (In the discrete time case, the 0 state always lasts 1 tick, whereas in the original problem it lasts on average 1/i sec.) But it is straightforward to correct for that. That's how I solved the system. (I also did a little Monte Carlo to check my results.)
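That tick construction is easy to mechanize. A small sketch (the function name is mine) that reproduces the example above:

```python
def ticks_and_counts(arrivals):
    """Ticks: at the first drop, then at 1 s intervals while the tube is
    non-empty; when it drains, the clock waits for the next drop."""
    ticks, counts = [], []
    t, i, n = arrivals[0], 1, 1    # first tick: first drop into the empty tube
    ticks.append(t); counts.append(n)
    while True:
        if n == 0:                 # tube drained: wait for the next drop
            if i == len(arrivals):
                break
            t, n = arrivals[i], 1
            i += 1
        else:                      # tube non-empty: next tick 1 s later
            t += 1.0
            while i < len(arrivals) and arrivals[i] <= t:
                n += 1             # drops that fell during this interval
                i += 1
            n -= 1                 # one drop's worth drained
        ticks.append(t); counts.append(n)
    return ticks, counts

ticks, counts = ticks_and_counts([1.1, 1.3, 1.5, 4.6, 5.9, 6.4])
print([round(x, 6) for x in ticks])  # [1.1, 2.1, 3.1, 4.1, 4.6, 5.6, 5.9, 6.9, 7.9]
print(counts)                        # [1, 2, 1, 0, 1, 0, 1, 1, 0]
```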

I'll have to look into queueing theory -- it's probably what I need, or very close. Unfortunately, that PDF you link to just gives formulas without derivations.

Could you explain this step, please? I mean, I know $E[N] = \sum{nq_n}$, but how do you get from there to $q_0=(1-\lambda)e^{\lambda}$?

5. Jul 7, 2011

### bpet

For that you just apply $E[N]=\sum_{n=0}^{\infty}(n+1)q_{n+1}$ to the recurrence relation and after some algebra the E[N] terms cancel.

In continuous time, where a new drop is added with probability $\lambda\,dt$, split the state $N=0$ into $\{N=0,\text{ no drop falling}\}$ with probability $q$ and $\{N=0,\text{ one drop falling}\}$ with probability $q_0$. The steady-state relations then give $q_0=\lambda q$, $q_1=\lambda q_0$, and $q_{n+2}=(1+\lambda)q_{n+1}-\lambda q_n$. Applying $E[N]$ to the recurrence relation gives $q=1-\lambda$, and $E[N^2]$ gives
$$E[N]=\frac{\lambda^2}{1-\lambda}.$$
The notes in Stephen Tashi's link imply the formula $E[M]=(1-q)+E[N]=\lambda+\tfrac{\lambda^2}{1-\lambda}$, which includes the drop that is falling.
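These relations are straightforward to verify numerically (a sketch assuming $\lambda=1/2$; the 200-term cutoff is arbitrary, since the $q_n$ decay geometrically):

```python
lam = 0.5
q_empty = 1 - lam                  # q = P[N=0, no drop falling]
qs = [lam * q_empty]               # q_0 = lam * q
qs.append(lam * qs[0])             # q_1 = lam * q_0
for _ in range(200):
    qs.append((1 + lam) * qs[-1] - lam * qs[-2])   # q_{n+2} = (1+lam) q_{n+1} - lam q_n
total = q_empty + sum(qs)
mean_N = sum(n * qn for n, qn in enumerate(qs))
print(total)                       # ~1.0, so the distribution is normalized
print(mean_N, lam**2 / (1 - lam))  # both ~0.5
```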

6. Jul 7, 2011

### pmsrw3

I am not getting this for the SS conditions. Here is my stochastic matrix:
$$P=\left( \begin{array}{cccc} p_0+p_1 & p_2 & p_3 & \ldots \\ p_0 & p_1 & p_2 & \ldots \\ 0 & p_0 & p_1 & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{array} \right)$$
From there (my lambda's are your qs), the SS condition is:
$\lambda _0=\left(p_0+p_1\right)\lambda _0+p_0\lambda _1$
$\lambda _1=p_2\lambda _0+p_1\lambda _1+p_0\lambda _2$
...
$\lambda_n=\delta_{0n}p_0\lambda_0+\sum_{k=0}^{n+1}p_{n+1-k}\lambda_k$

Solving:
$$\lambda_{n+1}=-\delta_{0n}\lambda_0+\frac{\lambda_n}{p_0}-\frac{1}{p_0}\sum_{k=0}^{n}p_{n+1-k}\lambda_k$$

Where am I going wrong?
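For what it's worth, a numeric run of this recurrence (my own sketch, taking $\lambda=1/2$ and starting from the unnormalized value $\lambda_0=1$) does produce a proper distribution, and normalizing recovers bpet's $q_0=(1-\lambda)e^{\lambda}$:

```python
import math

lam = 0.5
p = [math.exp(-lam)]                   # Poisson probabilities p_0, p_1, ...
for k in range(1, 20):
    p.append(p[-1] * lam / k)

v = [1.0]                              # unnormalized weights, v[0] = lambda_0 = 1
for n in range(18):                    # forward recurrence (unstable for large n)
    s = sum(p[n + 1 - k] * v[k] for k in range(n + 1))
    nxt = v[n] / p[0] - s / p[0]
    if n == 0:
        nxt -= v[0]                    # the delta_{0n} lambda_0 term
    v.append(nxt)

q0 = v[0] / sum(v)
print(q0, (1 - lam) * math.exp(lam))   # both ~0.8244
```

(The forward recurrence amplifies rounding error, so the sum is cut off early; the first 18 weights are more than enough here.)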

7. Jul 7, 2011

### bpet

No you're right, there was a typo in my formula (though the end result still holds)

8. Jul 8, 2011

### pmsrw3

OK, I've got it. I looked up queueing theory in Norris (*Markov Chains*) and roughly followed his treatment of an M/G/1 queue. (That's more general than my problem.) It turns out he begins by discretizing the problem in almost exactly the way I did -- the only difference is that he doesn't have a tick for a drop falling into an empty tube (which has to be corrected for later when computing mean volume). The update equation for this Markov chain is

$$X_{n+1}=X_n+Y_{n+1}-\mathbb{I}_{X_n>0}$$

$Y$ is Poisson with mean $i$, and $\mathbb{I}_{X_n>0}$ is an indicator that is 1 if there's anything in the tube and 0 otherwise. Define the probability generating function for $X_n$

$$G(z)=\mathbb{E}\left[z^{X_n}\right]=\sum_{k=0}^{\infty}\pi_k z^k$$

Then

$$zG(z)=\mathbb{E}\left[z^{X_{n+1}+1}\right]=\mathbb{E}\left[z^{X_n+Y_{n+1}+\mathbb{I}_{X_n=0}}\right]$$
$$zG(z)=e^{i(z-1)}\left(\pi_0z+G(z)-\pi_0\right)$$

($\pi_k$ here is the steady-state probability that $X_n=k$.) From here you eventually get to

$$G(z)=\frac{(i-1) (z-1) e^{i(z-1)}}{e^{i(z-1)}-z}$$

and of course you can differentiate that to get mean X. Since the ticks are not equally spaced in time (the interval after $X_n = 0$ is always longer), and since there's always an extra drop that falls in during that interval, it takes some more fiddling to get to mean volume. But it does, in fact, eventually reduce to $\frac{1}{2}\frac{i}{1-i}$.
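A quick numeric sanity check on $G$ (my own, with $i=1/2$): $G$ is $0/0$ at $z=1$, but evaluating just off $z=1$ shows $G\to 1$ there, as a pgf must, and a central difference recovers $G'(1)$, the mean of the embedded chain. For $i=1/2$ this comes out near $0.75$, consistent with the standard M/D/1 value $i+\tfrac{i^2}{2(1-i)}$:

```python
import math

def G(z, i):
    """pgf of the embedded chain: (i-1)(z-1)e^{i(z-1)} / (e^{i(z-1)} - z)."""
    a = math.exp(i * (z - 1))
    return (i - 1) * (z - 1) * a / (a - z)

i = 0.5
h = 1e-5
print(G(1 + h, i), G(1 - h, i))                  # both ~1
mean_X = (G(1 + h, i) - G(1 - h, i)) / (2 * h)   # central difference for G'(1)
print(mean_X)                                    # ~0.75 for i = 0.5
```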

I still suspect, when I see such a simple answer emerge from a painfully long calculation, that I am missing a more elegant way to get there. But, elegant or not, I have managed to prove what the numerical calculations said.

Thanks for putting me on the right track!

9. Jul 8, 2011

### bpet

That's interesting. I'm guessing you solved for $\pi_0$ by finding the derivative of

$$zG(z)=e^{i(z-1)}\left(\pi_0z+G(z)-\pi_0\right)$$

at z=1. If so, the mean could be found without explicitly solving for G by taking the second derivative at z=1.

10. Jul 9, 2011

### pmsrw3

There's a simpler way to get $\pi_0$. At every tick, X changes by $Y-\mathbb{I}_{X>0}$. At steady-state, this has to average to 0. $Y$ has mean $i$, and $\mathbb{I}_{X>0}$ has mean $\mathbb{P}\left[X>0 \right]=1-\pi_0$, so there you have it.

Norris doesn't do it this way, actually. He requires $G(1)=1$ (evident from the definition) and solves for $\pi_0$. But the formula for $G$ gives $0/0$ at $z=1$, so he has to use L'Hopital's Rule -- in essence the same as what you suggest.
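The drift argument is easy to confirm by simulation (a sketch; the sampler and step count are arbitrary choices of mine): the fraction of ticks with $X=0$ should settle near $1-i$.

```python
import math
import random

def poisson_sample(lam, rng):
    """Inverse-transform Poisson sampler (fine for small lam)."""
    u = rng.random()
    p = math.exp(-lam)
    k, cdf = 0, p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

def frac_empty(i, steps=200_000, seed=2):
    """Simulate X' = X + Y - 1{X>0}; return the fraction of steps with X = 0."""
    rng = random.Random(seed)
    x, zeros = 0, 0
    for _ in range(steps):
        if x == 0:
            zeros += 1
        x = x + poisson_sample(i, rng) - (1 if x > 0 else 0)
    return zeros / steps

print(frac_empty(0.5))   # close to 1 - 0.5 = 0.5
```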