There are analytic and linguistic issues lurking in this problem, and in a manner Odysseus would appreciate, it seems that I cannot sail past both of them.
note: I edited it to say ##k## for the slightly simpler problem to be discussed below.
- - - -
The crux of the problem is: what does "randomly arriving" mean? Random with respect to what? I.e., what's the distribution? My read on
@tnich 's approach was that it is a person-averaged result. (Slight digression: the problem itself is actually quite close to computing the time-averaged age, slow truck effect, etc. for general renewal problems, though this problem is one 'moment' down from such a thing, and it wants a specific outcome rather than the summed total. What follows is basically how I think about this for general renewals -- the argument could be streamlined since the Poisson process is an idealized renewal.)
The technical issue is that the problem doesn't actually ask for a person-averaged result. It says that there is a "randomly arriving" person, and that implies a distribution. I believe the problem falls under the umbrella of problems called "random incidence". Depending on what you want, this can make the problem intuitive or mathematically awkward.
The link between the person-averaged result and a comparable distribution is a uniform distribution over a large enough number of iterations ##n##. Depending on needs, this can be awkward because for any given problem we always insist on large enough ##n## but can't actually pass to the limit. If we must pass to the limit, then the distribution, and in turn the question, collapses -- because the "randomly arriving" person doesn't have a well-defined distribution (or process) so far as I can tell.
For what it's worth, I think that if you want a uniform distribution, you just insist that ##n## is sufficiently large, and leave it as a parameter to optimize later on if needed -- in this case, if you are so inclined, you should be able to lazily do so via a Chebyshev bound, though it isn't needed.
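To spell out the kind of lazy bound I have in mind (the event and tolerance ##\epsilon## here are purely illustrative, not part of the original problem): if ##\hat{p}_k## is the fraction of the ##n## iterations that produce group size ##k## and ##p_k## is the true probability, then Chebyshev gives
##P\big(|\hat{p}_k - p_k| \geq \epsilon\big) \leq \dfrac{p_k(1-p_k)}{n\epsilon^2} \leq \dfrac{1}{4n\epsilon^2}##
so for any fixed tolerance you can choose ##n## large enough that the empirical frequencies are as close as you like to the idealized ones.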
- - - - -
Let me see if I can just directly state the result:
Supposing we have a large number of trials (i.e. sufficiently large ##n##), we can think of ourselves as selecting some person who has arrived, with the selection made "at random" -- meaning, I believe, uniformly at random.
My read on the question is that it wants to know the probability that the person we selected was in a group size of ##k##.
Before selecting a person "at random", we have a prior belief about the group size of whichever person we end up talking to, and that prior belief is given directly by the Poisson distribution, since a Poisson process generates these arrivals.
##\text{prior} =
\begin{bmatrix}
\text{probability(group size 0, time length of t) }\\
\text{probability(group size 1, time length of t) }\\
\text{probability(group size 2, time length of t) }\\
\vdots\\
\text{probability(group size k, time length of t) }\\
\vdots\\
\end{bmatrix}
= \begin{bmatrix}
p_{\lambda}(k=0, t)\\
p_{\lambda}(k=1, t)\\
p_{\lambda}(k=2, t)\\
\vdots\\
p_{\lambda}(k=k, t)\\
\vdots\\
\end{bmatrix}##
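For concreteness (this is just the standard Poisson pmf, stated to fix notation), each entry above is
##p_{\lambda}(k, t) = \dfrac{(\lambda t)^k e^{-\lambda t}}{k!}##
i.e. the probability that exactly ##k## people arrive in an interval of length ##t## when the arrival rate is ##\lambda##.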
But the 'paradoxes' that come into play with these sorts of problems arise because your sampling unit is people, so the likelihood of selecting the person you are talking to is directly proportional to the size of the group that person arrived in. This immediately gives the likelihood function.
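If I've set this up correctly, a minimal sketch of the normalization: with a likelihood proportional to ##k##, the posterior is the size-biased version of the prior,
##\text{posterior}(k) = \dfrac{k\, p_{\lambda}(k, t)}{\sum_{j \geq 1} j\, p_{\lambda}(j, t)} = \dfrac{k\, p_{\lambda}(k, t)}{\lambda t} = \dfrac{(\lambda t)^{k-1} e^{-\lambda t}}{(k-1)!}, \qquad k \geq 1##
so if ##K## is the group size of the person selected, then ##K - 1## is itself Poisson with mean ##\lambda t##.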
The result here should, to whatever desired precision, agree with the "person-averaged" result from @tnich. But
the whole question comes down to what a "randomly arriving" person means and what sort of distribution that implies. The question itself is silent on this.
- - - -
simpler example:
To simplify the thought experiment, consider a modified Bernoulli trial that always has 1 or 2 people in an arriving epoch, with each outcome having ##50\%## probability. Run the experiment many times, and put all those people in a "hat". Shuffle them. Reach into the hat and select someone "at random", i.e. off the top, and ask the person how big their group was.
You will have about twice as many people from groups of size 2 as from groups of size 1 in your 'hat', with the end result of a posterior distribution in which ##\frac{1}{3}## of the people in the hat came from a group of size 1 and ##\frac{2}{3}## came from a group of size 2. This is your (posterior) probability estimate for the person chosen.
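A minimal simulation sketch of this hat experiment, in case it helps (the trial count, seed, and function name are just illustrative choices, not part of the original problem):

```python
import random

def hat_experiment(n_trials=100_000, seed=0):
    """Simulate the modified Bernoulli 'hat' experiment: each arriving
    epoch has 1 or 2 people, each with probability 1/2. Returns the
    fraction of people in the hat who came from each group size."""
    rng = random.Random(seed)
    hat = []  # one entry per person, recording the size of their group
    for _ in range(n_trials):
        group_size = rng.choice([1, 2])        # 50/50 group size per epoch
        hat.extend([group_size] * group_size)  # every person goes in the hat
    total = len(hat)
    # Picking a person uniformly from the hat means the answer to
    # "how big was your group?" follows these empirical fractions.
    return {k: hat.count(k) / total for k in (1, 2)}

print(hat_experiment())  # roughly {1: 0.333..., 2: 0.666...}
```

The ##\frac{1}{3}## / ##\frac{2}{3}## split shows up because each size-2 epoch contributes two hat entries while each size-1 epoch contributes only one.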