tnich said:
But do they give the same result? I don't see that we have converged on the same result, yet.
They do give the same result -- I just need to not be lazy and allow for the plus-one adjustment.
Mehmood_Yasir said:
What is the probability that a randomly arriving pedestrian has crossed the crossing in a group of ##k+1## pedestrians...
The answer given is ##\frac{(K+1) (\lambda T)^k e^{-\lambda T} } {k! (1+\lambda T)}##
##= \frac{(K+1)}{(1+\lambda T)} \frac{(\lambda T)^k e^{-\lambda T} }{k! } = \frac{(k+1)}{(1+\lambda t)} \Big(\frac{(\lambda t)^k e^{-\lambda t} }{k! }\Big) = \frac{(k+1)}{(1+\lambda t)} p_{\lambda}(k=k, t)##
(with a slight notation change along the way there)
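(Side note: in case a numeric reading helps, here's a minimal sketch of that target formula; the helper names poisson_pmf and target_prob and the parameter values are mine, purely for illustration.)
```python
from math import exp, factorial

def poisson_pmf(k, mu):
    # (mu^k e^{-mu}) / k!
    return mu**k * exp(-mu) / factorial(k)

def target_prob(lam, t, k):
    # (k+1) * p_lambda(k=k, t) / (1 + lam*t), i.e. the quoted answer
    return (k + 1) * poisson_pmf(k, lam * t) / (1 + lam * t)

print(target_prob(lam=0.5, t=4.0, k=2))  # arbitrary example parameters
```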
- - - -
So the focus is on recovering ##\frac{(k+1)}{(1+\lambda t)} p_{\lambda}(k=k, t)## as the probability of a group size of ##k + 1##. Let's re-run my people-oriented approach, this time including the plus-one increment.
- - - -
##\text{prior} =
\begin{bmatrix}
\text{P group size is 1}\\
\text{P group size is 2}\\
\text{P group size is 3}\\
\vdots\\
\text{P group size is k+1}\\
\vdots\\
\end{bmatrix}
=\begin{bmatrix}
p_{\lambda}(k=0, t)\\
p_{\lambda}(k=1, t)\\
p_{\lambda}(k=2, t)\\
\vdots\\
p_{\lambda}(k=k, t)\\
\vdots\\
\end{bmatrix}##
The justification is that you have a Poisson process -- except it is shifted by +1 via the deterministic random variable ##Y_i##, which always contributes exactly 1 person (and, for the avoidance of doubt, if we're interested in time, ##Y_i## has a finite first moment with respect to time).
Notice in particular that the prior probability associated with a group size of ##(k+1)## is ##p_{\lambda}(k=k, t)##.
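(If it helps, here is a small sampling sketch of that prior, assuming the group size is literally ##1 + \text{Poisson}(\lambda t)##; the parameter values are made up for illustration.)
```python
from math import exp, factorial
import numpy as np

rng = np.random.default_rng(0)
lam, t, k = 0.5, 4.0, 2

# Each group: a Poisson(lam*t) count of arrivals plus the deterministic +1 person (the Y_i payoff).
group_sizes = 1 + rng.poisson(lam * t, size=200_000)

empirical = (group_sizes == k + 1).mean()                    # empirical P(group size = k+1)
prior_entry = (lam * t) ** k * exp(-lam * t) / factorial(k)  # p_lambda(k=k, t)
print(empirical, prior_entry)  # should agree up to sampling noise
```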
- - - -
Now the new likelihood function looks a lot like the old one:
##
\text{likelihood function} =
\begin{bmatrix}
1\\
2\\
3\\
\vdots\\
k+1\\
\vdots\\
\end{bmatrix}=
\begin{bmatrix}
0\\
1\\
2\\
\vdots\\
k\\
\vdots\\
\end{bmatrix} +
\begin{bmatrix}
1\\
1\\
1\\
\vdots\\
1\\
\vdots\\
\end{bmatrix}
##
and we now have
##\text{posterior} \propto \text{likelihood function} \circ \text{prior} = \begin{bmatrix}
(1)p_{\lambda}(k=0, t)\\
(2)p_{\lambda}(k=1, t)\\
(3)p_{\lambda}(k=2, t)\\
\vdots\\
(k+1)p_{\lambda}(k=k, t)\\
\vdots\\
\end{bmatrix}
= \begin{bmatrix}
(0)p_{\lambda}(k=0, t)\\
(1)p_{\lambda}(k=1, t)\\
(2)p_{\lambda}(k=2, t)\\
\vdots\\
(k)p_{\lambda}(k=k, t)\\
\vdots\\
\end{bmatrix} +
\begin{bmatrix}
(1)p_{\lambda}(k=0, t)\\
(1)p_{\lambda}(k=1, t)\\
(1)p_{\lambda}(k=2, t)\\
\vdots\\
(1)p_{\lambda}(k=k, t)\\
\vdots\\
\end{bmatrix}
##
Everything is real and non-negative, so we can split the summation when looking for our normalizing constant -- and we have the underlying power series of the exponential function, ensuring absolute convergence -- so we get a sum equal to
##\big(\lambda t \big) + \big(1\big) = \big(\lambda t + 1\big)##
i.e. when splitting the summation, the first piece is the expected value of the Poisson distribution, and the second is the fact that the probabilities of a Poisson distribution sum to one.
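(A truncated numerical check of that normalizing constant, with the same illustrative parameters as before; truncating the series at 200 terms is my own shortcut, since the tail is negligible there.)
```python
from math import exp, factorial

lam, t = 0.5, 4.0
mu = lam * t

pmf = [mu**j * exp(-mu) / factorial(j) for j in range(200)]  # prior entries p_lambda(k=j, t)
unnormalized = [(j + 1) * p for j, p in enumerate(pmf)]      # likelihood * prior, entrywise
print(sum(unnormalized), mu + 1.0)                           # both ~ lam*t + 1
```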
Putting this all together, we get:
##
\text{posterior} =
\begin{bmatrix}
\text{P group size is 1}\\
\text{P group size is 2}\\
\text{P group size is 3}\\
\vdots\\
\text{P group size is k+1}\\
\vdots\\
\end{bmatrix}
= \begin{bmatrix}
\frac{1}{\lambda t + 1} p_{\lambda}(k=0, t)\\
\frac{2}{\lambda t + 1}p_{\lambda}(k=1, t)\\
\frac{3}{\lambda t + 1}p_{\lambda}(k=2, t)\\
\vdots\\
\frac{k+1}{\lambda t+1}p_{\lambda}(k=k, t)\\
\vdots\\
\end{bmatrix}##
- - - -
In particular, notice
##\text{P group size is k+1} = \frac{k+1}{\lambda t+1}p_{\lambda}(k=k, t)##
which is the result we were targeting.
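(For what it's worth, a crude Monte Carlo sketch of the whole argument: simulate many cycles with group size ##1 + \text{Poisson}(\lambda t)##, pick a pedestrian uniformly at random over all simulated pedestrians, and look at that pedestrian's group size. Parameter values and names are just illustrative.)
```python
from math import exp, factorial
import numpy as np

rng = np.random.default_rng(1)
lam, t, k = 0.5, 4.0, 2
mu = lam * t

group_sizes = 1 + rng.poisson(mu, size=200_000)

# A uniformly random pedestrian lands in a group with probability proportional to its size,
# so list one entry per simulated person and read off that person's group size.
per_person_group_size = np.repeat(group_sizes, group_sizes)
empirical = (per_person_group_size == k + 1).mean()

formula = (k + 1) * mu**k * exp(-mu) / (factorial(k) * (1 + mu))
print(empirical, formula)  # should agree up to Monte Carlo error
```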
- - - - -
edit:
A much later thought that can streamline a lot of this and put things in standard language:
the problem can be addressed, succinctly, as a renewal-rewards process. Treating the number of people in an arrival epoch as a discrete random variable ##X##, we can see that ##X = W + 1##, where ##W## is Poisson with parameter ##\lambda## over time ##t##. The process probabilistically starts over immediately after each arrival epoch.
For rewards, we set up a reward of ##1## per person if that person is in an arrival epoch with ##k+1## people, and zero otherwise. Equivalently, we define the event ##A_n## that there are ##k+1## people in the ##n##th arrival epoch, with the associated indicator random variable ##\mathbb I_{A_n}##, and the reward for epoch ##n## is given by ##R_n := \mathbb I_{A_n}\cdot X_n##.
So we compute
##E\big[X_1\big] = E\big[W_1 \big] + 1 = \lambda t + 1##
##E\big[R_1\big] = E\big[\mathbb I_{A_1}\cdot X_1\big] = p_\lambda(k=k, t) \cdot (k+1) ##
but the basic renewal-rewards theorems tell us (writing ##s## for the running renewal clock, so as not to collide with the fixed ##t## above) that
##\lim_{s \to \infty } \frac{r(s)}{s} = \lim_{s \to \infty } \frac{E[r(s)]}{s} = \frac{E[R_1]}{E[X_1]} = \frac{(k+1) \cdot p_\lambda(k=k, t)}{\lambda t + 1}##
which is the answer.
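(And the same check phrased in the renewal-rewards language: accumulate the rewards ##\mathbb I_{A_n}\cdot X_n## over many epochs, divide by the cumulative ##X_n##, and compare with ##E[R_1]/E[X_1]##. Again only a sketch with made-up parameter values.)
```python
from math import exp, factorial
import numpy as np

rng = np.random.default_rng(2)
lam, t, k = 0.5, 4.0, 2
mu = lam * t

X = 1 + rng.poisson(mu, size=200_000)  # X_n: number of people in the nth arrival epoch
R = np.where(X == k + 1, X, 0)         # R_n = indicator(A_n) * X_n

print(R.sum() / X.sum())                                       # long-run reward per person
print((k + 1) * mu**k * exp(-mu) / (factorial(k) * (1 + mu)))  # E[R_1] / E[X_1]
```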