Waiting times - Observer arriving at random time

In summary, the probability of a car passing in an interval [itex]dt[/itex] is [itex]dt/\tau[/itex]. If it helps, the probability that [itex]\Delta[/itex] falls in an interval [itex]dt[/itex] is [itex]\frac{dt}{\tau}e^{-\Delta/\tau}[/itex].
  • #1
ENgez
For an observer arriving at a random time [itex]t_1[/itex], where [itex]t=0[/itex] is the time when the last car passed, I got the following pdf for [itex]\Delta^*[/itex], the time the observer waits until the next car:

[itex]\rho_{\Delta^*}=\frac{1}{\Delta^*}\left(e^{-\Delta^*/\tau}-e^{-2\Delta^*/\tau}\right)[/itex].

The mean is [itex]\tau[/itex], like the book said, and it goes to 0 for [itex]\Delta^*\to 0[/itex] and [itex]\Delta^*\to\infty[/itex], but it still looks kind of weird for a probability distribution... is this correct?
A short summary of the derivation:

[itex]\Delta^*=\Delta-t_1[/itex], where [itex]\Delta[/itex] is the time between two consecutive cars (as was found in the previous posts).

[itex]t_1[/itex] has a uniform probability distribution between 0 and [itex]\Delta[/itex], therefore:

[itex]\rho_{\Delta^*}=\int\rho_\Delta(\Delta^*+\zeta)\,\rho_{t_1}(\zeta)\,d\zeta[/itex]

for [itex]0<\zeta<\Delta^*[/itex]
 
  • #2
Hey ENgez.

The issue I have is with your derivation: it looks like you are using the convolution theorem to get the PDF of a sum of random variables, but your uniform distribution depends on delta, so the two variables are not independent.
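
(For reference, the independence-based formula being alluded to, written for a difference since [itex]\Delta^*=\Delta-t_1[/itex]: for independent [itex]X[/itex] and [itex]Y[/itex],

[itex]f_{X-Y}(z)=\int f_X(z+y)\,f_Y(y)\,dy,[/itex]

which is the form of the integral in post #1; the catch is that here [itex]\Delta[/itex] and [itex]t_1[/itex] are not independent.)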

One suggestion I have is to put a prior on the uniform distribution, where the prior is the distribution of delta, and then derive the posterior of the "t" distribution you have.

Because the delta and the t distributions will be dependent, you can't use something like the convolution theorem directly.

What you will have to do is consider the moments of the dependent variables and use them, together with the characteristic function, to get back your PDF.

Normally you work with the characteristic function (the Fourier-transform analogue of the MGF); once you have it, you can recover the PDF with an inverse Fourier transform. But a lot of the standard results for MGFs and characteristic functions are based on independence, especially when dealing with more complicated functions of random variables.
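
(The inversion step referred to here, for a characteristic function [itex]\varphi_X(s)=E[e^{isX}][/itex]:

[itex]f_X(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-isx}\,\varphi_X(s)\,ds,[/itex]

valid when [itex]\varphi_X[/itex] is integrable.)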

The first moment is simple, since E[X+Y] = E[X] + E[Y], but from then on we get results like VAR[X+Y] = VAR[X] + VAR[Y] + 2*COV[X,Y]. If we have independence, then COV[X,Y] is 0 and we get the usual result.
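
(Written out for the variables in this problem, with [itex]\Delta^*=\Delta-t_1[/itex]:

[itex]E[\Delta^*]=E[\Delta]-E[t_1],\qquad \mathrm{Var}[\Delta^*]=\mathrm{Var}[\Delta]+\mathrm{Var}[t_1]-2\,\mathrm{Cov}(\Delta,t_1).[/itex])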

Essentially, the best way I can think of to get the distribution is to compute the moments that take the dependencies into account (like the above) and then use the inverse Fourier transform to get the PDF.

It's not going to be easy, but the fact remains that the two variables are dependent rather than independent, and this is what makes it a little harder.
 
  • #3
I think I understand your suggestion, but there must be a more straightforward way; haruspex said there was an easy way to do this, something to do with reversing time.

The probability of a car passing in an interval [itex]dt[/itex] is [itex]dt/\tau[/itex].

If it helps, the probability that [itex]\Delta[/itex] falls in an interval [itex]dt[/itex] is [itex]\frac{dt}{\tau}e^{-\Delta/\tau}[/itex], i.e. the density of [itex]\Delta[/itex] is [itex]\frac{1}{\tau}e^{-\Delta/\tau}[/itex]. Thank you for your help.
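
(For completeness, here is how that density follows from the [itex]dt/\tau[/itex] rule; this is just the standard exponential-waiting-time argument, nothing beyond what is stated above:

[itex]P(\text{no car in }[0,\Delta])=\lim_{dt\to 0}\left(1-\frac{dt}{\tau}\right)^{\Delta/dt}=e^{-\Delta/\tau},\qquad \rho_\Delta(\Delta)=\frac{1}{\tau}e^{-\Delta/\tau},\qquad E[\Delta]=\int_0^\infty \frac{\Delta}{\tau}e^{-\Delta/\tau}\,d\Delta=\tau.[/itex])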
 
  • #4
Maybe you could give your own thoughts on how you would derive the PDF for t, or say whether you wish to relax some of the assumptions to make it easier to handle analytically.

One suggestion, though, would be to simulate the process, say a million times, in a software package and see what the distribution looks like; at least this way you can compare your formula with what you get from the computer.

I'd suggest something like R.
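
Something along these lines would do it (a minimal sketch in Python with NumPy rather than R; it assumes cars pass as a Poisson process with mean gap [itex]\tau[/itex], as in the posts above):

[code]
import numpy as np

# Sketch of the suggested simulation: cars pass as a Poisson process with
# mean gap tau; an observer arrives at a uniformly chosen time and we record
# how long they wait for the next car.
rng = np.random.default_rng(0)
tau = 5.0  # mean time between cars (arbitrary units)

def one_wait(rng, tau, n_cars=2000):
    """Simulate one observer and return the wait until the next car."""
    gaps = rng.exponential(tau, n_cars)  # i.i.d. exponential inter-car gaps
    arrivals = np.cumsum(gaps)           # times at which cars pass
    # Arrive uniformly before the second-to-last car so a next car always exists.
    t_obs = rng.uniform(0.0, arrivals[-2])
    next_idx = np.searchsorted(arrivals, t_obs)  # first arrival >= t_obs
    return arrivals[next_idx] - t_obs

waits = np.array([one_wait(rng, tau) for _ in range(20000)])
print("mean wait    :", waits.mean())           # compare with tau
print("P(wait > tau):", (waits > tau).mean())   # compare with exp(-1) ~ 0.368
[/code]

The histogram of waits can then be compared with the candidate pdf from post #1.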
 
  • #5
I guess I'm not following your (or the textbook's) model. A uniform discrete distribution assigns the same probability to every outcome over a finite interval. I don't see this model as being very realistic for the distribution of time intervals between random events such as observing cars passing a point on a highway. You're essentially saying that intervals of, say, 1, 2, ..., n minutes all have the same probability. Typically, the Poisson (or Erlang) model suffices for this problem, particularly for the low traffic density represented by 5 minutes between events.

The MLE for the mean is the sample mean of the observed inter-arrival times, [itex]\hat\tau=\frac{1}{n}\sum_{k=1}^{n}x_k[/itex]. I don't see how you can determine the distribution and expectation without data.
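
(As a sanity check on that claim, a sketch of the standard maximum-likelihood computation for an exponential sample [itex]x_1,\dots,x_n[/itex]:

[itex]\ell(\tau)=\sum_{k=1}^{n}\ln\!\left(\frac{1}{\tau}e^{-x_k/\tau}\right)=-n\ln\tau-\frac{1}{\tau}\sum_{k=1}^{n}x_k,\qquad \frac{d\ell}{d\tau}=-\frac{n}{\tau}+\frac{1}{\tau^2}\sum_{k=1}^{n}x_k=0\;\Rightarrow\;\hat\tau=\frac{1}{n}\sum_{k=1}^{n}x_k.[/itex])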

Note that the point in time when the observer arrives is an independent random event. What distribution would you expect for the time between the observer's arrival and the next car if the distribution of times between cars is Poisson/Erlang?
 
  • #6
ENgez said:
I think I understand your suggestion, but there must be a more straightforward way; haruspex said there was an easy way to do this, something to do with reversing time.
No, that was for this situation: the observer arrives at a random time and wants to guess the interval between the previous car and the next. The question you're posting now has no connection with any preceding car; it's just the time from the observer's arrival to the next car. A Poisson process has no memory; future events are completely independent of past ones.
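
(In symbols, the memorylessness being invoked here, for an exponential gap with mean [itex]\tau[/itex]:

[itex]P(\Delta>s+t\mid\Delta>s)=\frac{e^{-(s+t)/\tau}}{e^{-s/\tau}}=e^{-t/\tau},[/itex]

so the time from the observer's arrival to the next car is again exponential with mean [itex]\tau[/itex].)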
 
  • #7
The reason this is more complicated is that t depends on delta and you have a difference of the two (kind of like having y^2 + e^y where y is a random variable).

The other suggestion I have is to construct a PDF where the limits are dependent: this might be a much easier alternative to the suggestion I gave above. So you would have, for example, a bivariate PDF f(delta, t), where the limits on t depend on delta.
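
(One way to write that down, under the setup from post #1 where [itex]0<t_1<\Delta[/itex]; this is just the change of variables [itex]\Delta^*=\Delta-t_1[/itex] applied to a joint density, not a solution:

[itex]\rho_{\Delta^*}(d)=\int_0^\infty f_{\Delta,t_1}(d+t,\,t)\,dt,\qquad f_{\Delta,t_1}(\delta,t)=\rho_\Delta(\delta)\,\rho_{t_1\mid\Delta}(t\mid\delta)\quad\text{for }0<t<\delta.[/itex])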
 

1. What is meant by "waiting times - observer arriving at random time"?

Waiting times - observer arriving at random time refers to a scenario in which an observer arrives at a location at a random, unpredictable time, and the quantity of interest is how long that observer must wait until the next event occurs (here, the next car passing).

2. Why is studying waiting times important in scientific research?

Studying waiting times can provide valuable insights into various phenomena, such as human behavior, response times, and efficiency of processes. It can also help in understanding patterns and trends in a particular system or environment.

3. How is waiting time measured in this type of research?

In this type of research, waiting time is typically measured as the time between the observer's arrival and the arrival of the next individual or event. It can also be measured as the time between the observer's arrival and the start of the event or activity being observed.

4. What factors can affect waiting times in this type of study?

Some factors that may influence waiting times in this type of study include the number of individuals or events being observed, the location or environment, and any external factors that may affect the arrival times of participants.

5. How can the results of studying waiting times be applied in real-world situations?

The findings of studying waiting times can have practical applications in various fields, such as transportation, healthcare, and customer service. For example, understanding waiting times can help improve the efficiency of public transportation systems or reduce wait times in healthcare facilities.
