I am interpreting the question exactly as stated in post #1, which asked for the waiting-time distributions of customers 1, 2, 3, 4, ... . A Markov process model works well for this purpose: we can find customer waiting times, but not customer arrival times or other details. The simple Markov process model keeps only the waiting-time information and dispenses with everything else. Again, the logic is very simple: customer 0 starts a new T-interval, so ##W_0 = T##. Customer #1 either arrives in the same T-interval, or else starts a new T-interval of his own. So, if ##X## is the inter-arrival time between customers 0 and 1, we have ##W_1 = W_0 - X## if ##X < W_0## (same T-interval, but less waiting), or else ##W_1 = T## if ##X > W_0## (starts a new T-interval). It works the same way for any two successive customers ##n## and ##n+1##: with ##X_n## the inter-arrival time between them, ##W_{n+1} = W_n - X_n## if ##X_n < W_n##, and ##W_{n+1} = T## otherwise.
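As a sanity check, the recursion above is easy to simulate. Here is a minimal Python sketch (the function name `simulate_waits` and the parameter values are my own choices, not anything from the thread) that generates ##W_0, W_1, W_2, \dots## with exponential inter-arrival times:

```python
import random

def simulate_waits(lam, T, n, seed=0):
    """Simulate waiting times W_0, ..., W_{n-1} for the Markov recursion:
    W_0 = T; W_{k+1} = W_k - X if X < W_k, else T, with X ~ Exp(lam)."""
    rng = random.Random(seed)
    w = T                      # customer 0 starts a new T-interval
    waits = [w]
    for _ in range(n - 1):
        x = rng.expovariate(lam)   # inter-arrival time to the next customer
        w = w - x if x < w else T  # same T-interval, or start a new one
        waits.append(w)
    return waits

waits = simulate_waits(lam=2.0, T=1.0, n=10)
```

Every simulated wait lands in ##(0, T]##, with the value ##W = T## occurring exactly for the customers who start a new T-interval.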
I have looked at a number of examples by discretizing the Markov process, so that the inter-arrival distribution is geometric instead of exponential. Of course, the first waiting-time distribution looks like a reversed exponential, but with a point mass at ##W = T## (NOT a re-normalized exponential on the interval!). Depending on the size of ##T## relative to the mean inter-arrival time ##1/\lambda##, the second waiting-time distribution resembles a reversed 2-Erlang, but with some differences caused by the "overspill" effects when customers start new T-intervals. By the time the customer number reaches approximately ##\lambda T## (the mean number of arrivals in an interval of length ##T##), the distributions start to deviate wildly from Erlangs or reversed Erlangs.
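For anyone who wants to reproduce these experiments, here is one way to set up the discretized version in Python (the names `discrete_waits`, `p`, `T_steps` and the parameter values are hypothetical, just for illustration: `p` is the per-step arrival probability, `T_steps` the interval length in lattice steps). It tabulates the distribution of a chosen customer's waiting time over many independent runs:

```python
import random
from collections import Counter

def discrete_waits(p, T_steps, n_customers, n_runs, seed=0):
    """Run the waiting-time recursion on a time lattice, with geometric
    inter-arrivals X (success probability p per step) and W_0 = T_steps.
    Returns a Counter over customer (n_customers - 1)'s waiting time."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_runs):
        w = T_steps
        for _ in range(n_customers - 1):
            x = 1                      # geometric inter-arrival, in steps
            while rng.random() > p:
                x += 1
            w = w - x if x < w else T_steps
        counts[w] += 1
    return counts

dist1 = discrete_waits(p=0.2, T_steps=10, n_customers=2, n_runs=20000)
```

With `n_customers=2` this tabulates the first waiting-time distribution: the reversed-geometric decay over ##1, \dots, T-1## plus the point mass at ##W = T##.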
For later and later customers the waiting-time distributions approach a mixture of a uniform distribution on ##(0,T)## and a finite probability mass at ##W = T##. This is in accordance with what was found in the other thread, where we finally settled on the fact that in the long run there is a probability of ##P_0 = 1/(1 + \lambda T)## for a customer to arrive when no others are waiting (and so start a new T-interval, hence wait the full time ##W = T##), or else a probability ##P_1 = \lambda T/(1+\lambda T)## to join other customers. One can argue intuitively that (again, in the long run) when a customer is part of a group that arrived in a common T-interval, the conditional arrival times within the group are actually uniform over the interval (another magic property of Poisson processes).
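That long-run split can also be checked numerically. Here is a sketch (again with made-up names, and a loose Monte Carlo tolerance in mind) that runs the chain for a long time and estimates the stationary probability of ##W = T##, to be compared against ##1/(1+\lambda T)##:

```python
import random

def longrun_fraction_at_T(lam, T, n, burn, seed=1):
    """Run the waiting-time chain W_{k+1} = W_k - X if X < W_k else T
    and estimate the long-run fraction of customers with W = T."""
    rng = random.Random(seed)
    w = T
    hits = 0
    for k in range(n):
        x = rng.expovariate(lam)
        w = w - x if x < w else T
        # the equality test is exact: w is assigned T, never computed as T
        if k >= burn and w == T:
            hits += 1
    return hits / (n - burn)

est = longrun_fraction_at_T(lam=2.0, T=1.0, n=200_000, burn=10_000)
```

With ##\lambda = 2## and ##T = 1## the estimate should sit near ##1/3##, matching ##P_0 = 1/(1+\lambda T)##; the burn-in discards the transient from starting the chain at ##W = T##.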