Waiting times - Observer arriving at random time

  1. Jul 26, 2012 #1
    For an observer arriving at a random time [itex]t_1[/itex], where [itex]t=0[/itex] is the time when the last car passed, I got the following pdf for [itex]\Delta^*[/itex], the time the observer waits until the next car:

    [itex]\rho_{\Delta^*}=\frac{1}{\Delta^*}\left(e^{-\Delta^*/\tau}-e^{-2\Delta^*/\tau}\right)[/itex].

    The mean is [itex]\tau[/itex], as the book said, and it goes to 0 for [itex]\Delta^*\to 0[/itex] and [itex]\Delta^*\to\infty[/itex], but it still looks kind of odd for a probability distribution. Is this correct?



    A short summary of the derivation:

    [itex]\Delta^*=\Delta-t_1[/itex], where [itex]\Delta[/itex] is the time between two consecutive cars (as was found in the previous posts).

    [itex]t_1[/itex] has a uniform probability distribution between 0 and [itex]\Delta[/itex], therefore:

    [itex]\rho_{\Delta^*}=\int\rho_\Delta(\Delta^*+\zeta)\,\rho_{t_1}(\zeta)\,d\zeta[/itex]

    for [itex]0<\zeta<\Delta^*[/itex].
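
    One quick numerical sanity check of a candidate pdf is to integrate it and compute its mean (a minimal sketch with SciPy, taking [itex]\tau=1[/itex]; for a pdf the first integral should come out to 1 and the second to [itex]\tau[/itex]):

    [code]
    from math import exp
    from scipy.integrate import quad

    tau = 1.0

    def rho(x):
        # candidate pdf from above; at x = 0 use the x -> 0 limit of the expression
        if x == 0.0:
            return 1.0 / tau
        return (exp(-x / tau) - exp(-2.0 * x / tau)) / x

    total, _ = quad(rho, 0.0, float("inf"))                  # should be 1 for a pdf
    mean, _ = quad(lambda x: x * rho(x), 0.0, float("inf"))  # should come out to tau
    print(total, mean)
    [/code]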
     
  2. Jul 26, 2012 #2

    chiro

    Hey ENgez.

    The issue I have is with your derivation: it looks like you are using the convolution theorem to get the PDF of a sum of random variables, but your uniform distribution depends on delta, so the two variables are not independent.

    One suggestion I have is to put a prior on the uniform distribution, where the prior is the distribution of delta, and then derive the posterior of the "t" distribution you have.

    Because the delta and the t distributions will be dependent, you can't use something like the convolution theorem directly.

    What you will have to do is work out the moments of the distribution with the dependence included, and then use the characteristic function to get back your PDF.

    Normally with the characteristic function, what happens is that you get an MGF, and if you have an MGF you can obtain the PDF by an inverse Fourier transform. But a lot of these results with MGFs are based on independence, especially when dealing with more complicated functions of random variables.

    The first moment is simple, since E[X+Y] = E[X] + E[Y], but beyond that we get results like VAR[X+Y] = VAR[X] + VAR[Y] + 2*COV[X,Y] (and VAR[X-Y] = VAR[X] + VAR[Y] - 2*COV[X,Y]). If we have independence, then COV[X,Y] is 0 and we get the usual result.
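
    To see why that covariance term matters here, a quick numerical check (a minimal sketch; the exponential-plus-uniform setup is just a stand-in that mimics the dependence between [itex]\Delta[/itex] and [itex]t_1[/itex] in this thread):

    [code]
    import numpy as np

    rng = np.random.default_rng(0)

    # X exponential with mean 5, Y uniform on [0, X] -- so Y depends on X.
    x = rng.exponential(scale=5.0, size=1_000_000)
    y = rng.uniform(0.0, x)

    cov = np.cov(x, y)[0, 1]
    print(np.var(x - y))                      # empirical VAR[X - Y]
    print(np.var(x) + np.var(y) - 2 * cov)    # VAR[X] + VAR[Y] - 2*COV[X,Y]
    [/code]

    The two printed numbers agree, while VAR[X] + VAR[Y] alone (the independent case) does not.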

    Essentially, the best way I can think of to get the distribution is to compute the moments in a way that takes the dependence into account (like the above does) and then use the inverse Fourier transform to get the PDF.

    It's not going to be easy, but the fact remains that your two variables are dependent, and that is what makes it a little harder.
     
  3. Jul 26, 2012 #3
    I think I understand your suggestion, but there must be a more straightforward way: haruspex said there was an easy way to do this, something to do with reversing time.

    The probability of a car passing in an interval [itex]dt[/itex] is given as [itex]dt/\tau[/itex].

    If it helps, the probability distribution of [itex]\Delta[/itex] is [itex]\rho_\Delta(\Delta)=\frac{1}{\tau}e^{-\Delta/\tau}[/itex].
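
    For reference, a short sketch of where that comes from, assuming the cars form a Poisson process with rate [itex]1/\tau[/itex]: the probability of no car during a time [itex]\Delta[/itex] is

    [tex]P(\text{no car in } \Delta)=\lim_{dt\to 0}\left(1-\frac{dt}{\tau}\right)^{\Delta/dt}=e^{-\Delta/\tau},[/tex]

    so [itex]\rho_\Delta(\Delta)=-\frac{d}{d\Delta}e^{-\Delta/\tau}=\frac{1}{\tau}e^{-\Delta/\tau}[/itex].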


    Thank you for your help.
     
  4. Jul 26, 2012 #4

    chiro


    Maybe you could give your own thoughts on how you would derive the PDF for t, or say whether you wish to relax some of the assumptions to make it easier to handle analytically.

    One suggestion, though, would be to simulate the process, say a million times, in a software package and see what the distribution looks like: that way you can at least compare your formula against what the computer gives.

    I'd suggest something like R.
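
    A minimal sketch of that simulation in Python (NumPy here, but R would do just as well): draw Poisson arrivals with mean spacing [itex]\tau[/itex], drop observers at random times, and look at the waits to the next car.

    [code]
    import numpy as np

    rng = np.random.default_rng(1)
    tau = 5.0          # mean time between cars
    n_cars = 1_000_000

    # Car arrival times: cumulative sum of exponential gaps (a Poisson process).
    arrivals = np.cumsum(rng.exponential(scale=tau, size=n_cars))

    # Observers arrive at uniformly random times well inside the window.
    t_obs = rng.uniform(0.0, 0.9 * arrivals[-1], size=100_000)

    # For each observer, find the next car and record the wait.
    idx = np.searchsorted(arrivals, t_obs, side="right")
    waits = arrivals[idx] - t_obs

    print(waits.mean())   # comes out near tau
    [/code]

    A histogram of the waits can then be compared directly against the formula in post #1.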
     
  5. Jul 26, 2012 #5
    I guess I'm not following your (or the textbook's) model. A discrete uniform distribution assigns the same probability to every outcome over a finite interval. I don't see this model as being very realistic for the distribution of time intervals between random events such as cars passing a point on a highway. You're essentially saying that intervals of, say, 1, 2, ..., n minutes all have the same probability. Typically the Poisson (or Erlang) model suffices for this problem, particularly for the low traffic density represented by 5 minutes between events.

    The MLE for the mean is the sample mean [itex]\hat{\tau}=\frac{1}{n}\sum_{k=1}^{n}x_k[/itex], where the [itex]x_k[/itex] are the observed times between cars. I don't see how you can determine the distribution and expectation without data.

    Note that the point in time when the observer arrives is an independent random event. What distribution would you expect for the time between the observer's arrival and the next car if the distribution of times between cars is Poisson/Erlang?
     
  6. Jul 26, 2012 #6

    haruspex


    No, that was for this situation: an observer arrives at a random time and wants to guess the interval between the previous car and the next. The question you're posting now has no connection with any preceding car; it's just the time from the observer's arrival to the next car. A Poisson process has no memory; future events are completely independent of past ones.
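
    To spell out the memorylessness: if [itex]T[/itex] is the time from the observer's arrival to the next car, then for an exponential distribution with mean [itex]\tau[/itex]

    [tex]P(T>s+t\mid T>s)=\frac{e^{-(s+t)/\tau}}{e^{-s/\tau}}=e^{-t/\tau}=P(T>t),[/tex]

    so the wait measured from any instant is again exponential, [itex]\rho_{\Delta^*}(x)=\frac{1}{\tau}e^{-x/\tau}[/itex], with mean [itex]\tau[/itex].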
     
  7. Jul 26, 2012 #7

    chiro


    The reason this is more complicated is that t depends on delta and you have a difference of the two (kind of like having y^2 + e^y where y is a random variable).

    The other suggestion I have is to construct a PDF where the limits are dependent: this might provide a much easier alternative than the suggestion I gave above. So you will have, for example, a bivariate PDF f(delta, t), but obviously t will depend on delta through the limits.
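
    A sketch of that construction, assuming the size-biased gap density [itex]\rho_\Delta(\delta)=\frac{\delta}{\tau^2}e^{-\delta/\tau}[/itex] for the interval that contains the observer (which I believe is what the earlier posts found): with [itex]t[/itex] uniform on [itex](0,\delta)[/itex] given [itex]\delta[/itex],

    [tex]f(\delta,t)=\rho_\Delta(\delta)\cdot\frac{1}{\delta}=\frac{1}{\tau^2}e^{-\delta/\tau},\qquad 0<t<\delta,[/tex]

    and setting [itex]x=\delta-t[/itex] (the observer's wait) and integrating out [itex]\delta[/itex],

    [tex]\rho_{\Delta^*}(x)=\int_x^\infty f(\delta,\,\delta-x)\,d\delta=\int_x^\infty \frac{1}{\tau^2}e^{-\delta/\tau}\,d\delta=\frac{1}{\tau}e^{-x/\tau},[/tex]

    consistent with the memoryless argument above.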
     