Waiting times - Observer arriving at random time

AI Thread Summary
The discussion centers on deriving the probability density function (PDF) for the waiting time of an observer arriving at a random time before the next car passes. The initial derivation presented a PDF that seemed unusual, with the mean aligning with theoretical expectations but raising concerns about its validity due to dependencies between variables. Participants debated the use of the convolution theorem, suggesting that the uniform distribution of arrival times should be treated with a prior based on the distribution of time intervals between cars. They emphasized the need to account for dependencies in the distributions, proposing methods like using characteristic functions and inverse Fourier transforms to derive the PDF. Additionally, there was a recommendation to simulate the process to validate the theoretical model against empirical data.
ENgez
For an observer arriving at a random time $t_1$, where $t=0$ is the time when the last car passed, I got the following PDF for $\Delta^*$, the time the observer waits until the next car:

$$\rho_{\Delta^*} = \frac{1}{\Delta^*}\left(e^{-\frac{\Delta^*}{\tau}} - e^{-\frac{2\Delta^*}{\tau}}\right).$$

The mean is $\tau$, like the book said, and it goes to $0$ for $\Delta^* \to 0$ and for $\Delta^* \to \infty$, but it still looks kind of weird for a probability distribution... is this correct?

A short summary of the derivation:

$\Delta^* = \Delta - t_1$, where $\Delta$ is the time between two consecutive cars (as was found in the previous posts).

$t_1$ has a uniform probability distribution between $0$ and $\Delta$, therefore

$$\rho_{\Delta^*} = \int \rho_\Delta(\Delta^* + \zeta)\,\rho_{t_1}(\zeta)\,d\zeta$$

for $0 < \zeta < \Delta^*$.
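For what it's worth, a quick numerical sanity check of that expression (a sketch in Python; $\tau$ set to 1, so the integral is a standard Frullani integral):

import numpy as np
from scipy.integrate import quad

tau = 1.0

# The proposed PDF, exactly as written above.
def pdf(w):
    return (np.exp(-w / tau) - np.exp(-2 * w / tau)) / w

total, _ = quad(pdf, 0, np.inf)
print(total)  # ~0.693 = ln 2, so the expression as written does not integrate to 1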
 
Hey ENgez.

The issue I have is with your derivation: it looks like you are using the convolution theorem to get the PDF of a sum of random variables, but your uniform distribution depends on the value of $\Delta$, so the two variables are not independent.

One suggestion I have is to put a prior on the uniform distribution, where the prior is the distribution of $\Delta$, and then derive the posterior of the "t" distribution you have.
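In symbols, one way to read that suggestion (a sketch, using the exponential density for $\Delta$ quoted later in the thread) is to treat the uniform as conditional on $\Delta$ and integrate $\Delta$ out:

$$\rho_{t_1 \mid \Delta}(t \mid \delta) = \frac{1}{\delta}, \quad 0 < t < \delta, \qquad \rho_{t_1}(t) = \int_t^{\infty} \rho_\Delta(\delta)\,\frac{1}{\delta}\,d\delta.$$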

Because the delta and the t distributions will be dependent, you can't use something like the convolution theorem directly.

What you will have to do is consider dependent moments of the distribution and use this in accordance with the characteristic function to get back your PDF.

Normally what happens with the characteristic function is that you get something like an MGF, and once you have it you can obtain the PDF by an inverse Fourier transform. But a lot of these results with MGFs are based on independence, especially when dealing with more complicated functions of random variables.
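For reference, the inversion step is the standard one: with the characteristic function $\varphi_X(s) = E[e^{isX}]$, the PDF (when $\varphi_X$ is integrable) is

$$\rho_X(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-isx}\,\varphi_X(s)\,ds.$$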

The first moment is simple, since E[X+Y] = E[X] + E[Y], but from then on we get results like VAR[X+Y] = VAR[X] + VAR[Y] + 2*COV[X,Y] (for a difference, VAR[X-Y] = VAR[X] + VAR[Y] - 2*COV[X,Y]). If we have independence, then COV[X,Y] is 0 and we get the usual result.

Essentially, the best way I can think of to get the distribution is to compute the moments that take the dependencies into account (like the above does) and then use the inverse Fourier transform to get the PDF.

It's not going to be easy, but the fact remains that you have two dependent distributions, and this is what makes it a little harder.
 
I think I understand your suggestion, but there must be a more straightforward way; haruspex said there was an easy way to do this, something to do with reversing time...

The probability of a car passing in an interval $dt$ is given as $dt/\tau$.

If it helps, the probability distribution of $\Delta$ is $\frac{dt}{\tau}e^{-\frac{\Delta}{\tau}}$. Thank you for your help.
 
Maybe you could give your own thoughts on how you would derive the PDF for $t$, or whether you wish to relax some of the assumptions to make it easier to handle analytically.

One suggestion, though, would be to simulate the process, say a million times, in a software package and see what the distribution looks like: at least this way you can compare your formula with what you get from the computer.

I'd suggest something like R.
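For what it's worth, a minimal Monte Carlo sketch of that check (in Python rather than R; the value of tau and the sample sizes are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
tau = 5.0            # mean time between cars (arbitrary choice)
n_cars = 1_000_000

# Car passage times: cumulative sums of exponential interarrival times.
arrivals = np.cumsum(rng.exponential(tau, size=n_cars))

# Observers arrive uniformly in time over the bulk of the record.
t_obs = rng.uniform(arrivals[0], arrivals[-2], size=100_000)

# Waiting time from each observer's arrival to the next car.
idx = np.searchsorted(arrivals, t_obs, side="right")
waits = arrivals[idx] - t_obs

print(waits.mean())  # compare with tau; histogram `waits` against the proposed PDF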
 
I guess I'm not following your (or the textbook's) model. A uniform discrete distribution assigns the same probability to every outcome over a finite interval. I don't see this model as being very realistic for the distribution of time intervals between random events such as observing cars passing a point on a highway. You're essentially saying that intervals of, say, $1, 2, \ldots, n$ minutes all have the same probability. Typically, the Poisson (or Erlang) model suffices for this problem, particularly for the low traffic density represented by 5 minutes between events.

The MLE for the mean is $\frac{1}{n}\sum_{k=1}^{n}\Delta_k$, the sample mean of the observed intervals. I don't see how you can determine the distribution and expectation without data.

Note that the point in time when the observer arrives is an independent random event. What distribution would you expect for the time between the observer's arrival and the next car if the distribution of times between cars is Poisson/Erlang?
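For reference, the Erlang-$k$ density with rate $\lambda$ is

$$\rho(t) = \frac{\lambda^k t^{k-1} e^{-\lambda t}}{(k-1)!},$$

which reduces to the exponential density for $k = 1$.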
 
ENgez said:
I think I understand your suggestion, but there must be a more straightforward way; haruspex said there was an easy way to do this, something to do with reversing time...
No, that was for this situation: the observer arrives at a random time and wants to guess the interval between the previous car and the next. The question you're posting now has no connection with any preceding car; it's just the time from the observer's arrival to the next car. A Poisson process has no memory; future events are completely independent of past ones.
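Spelled out: memorylessness means $P(W > s + t \mid W > s) = P(W > t)$ for the waiting time $W$, so the time from the observer's arrival to the next car has the same exponential distribution as a full interarrival interval, $\rho(w) = \frac{1}{\tau}e^{-\frac{w}{\tau}}$, with mean $\tau$.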
 
The reason this is more complicated is that $t$ depends on $\Delta$ and you have a difference of the two (kind of like having $y^2 + e^y$ where $y$ is a random variable).

The other suggestion I have is to construct a PDF where the limits are dependent: this might provide a much easier alternative than the suggestion I gave above. So you will have, for example, a bivariate PDF $f(\delta, t)$, where the limits for $t$ depend on $\delta$ (see the sketch below).
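As a sketch, that construction might look like

$$f(\delta, t) = \rho_\Delta(\delta)\,\frac{1}{\delta}, \qquad 0 < t < \delta,$$

and then, since $\Delta^* = \Delta - t_1$, integrating the joint density along the line $\delta - t = w$ gives

$$\rho_{\Delta^*}(w) = \int_w^{\infty} f(\delta,\, \delta - w)\,d\delta.$$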
 