Hitting time of two or three parallel random walks

SUMMARY

The discussion concerns the expected hitting time of parallel random walks on the one-dimensional integer lattice, specifically the first time at which at least one of several independent random walks reaches one of its boundaries. For a single walk, the expected time to first reach ±α is Θ(α²), and the expected time to reach either of two boundaries α < 0 < β is |α|·β. The initial assumption that the expected minimum hitting time of independent random walks can be obtained from the individual expected hitting times is shown to be incorrect by simulation. Adapting existing arguments, such as Lemma 0.2 from Cornell's course notes, to multiple walks and boundaries turns out to be nontrivial.

PREREQUISITES
  • Understanding of one-dimensional random walks and martingales
  • Familiarity with hitting times and first-passage times in probability theory
  • Knowledge of order statistics and their implications in probability
  • Basic comprehension of graph theory concepts
NEXT STEPS
  • Research the implications of the Bapat–Beg theorem on order statistics
  • Explore the derivation of hitting times for multiple random walks in probability theory
  • Study the adaptation of Lemma 0.2 for multi-dimensional random walks
  • Investigate simulation techniques for validating theoretical models in random walks
USEFUL FOR

Mathematicians, statisticians, and researchers in probability theory, particularly those interested in random walks, hitting times, and order statistics.

opapa
Consider a one-dimensional random walk on the integer lattice, starting at 0. Each step is +1 or -1 with equal probability 0.5. The expected time for the walk to first reach +α or -α (its hitting time, also called the first-passage time) is Θ(α²), which follows from an optional-stopping (martingale) argument. More generally, given two boundaries α < 0 < β, the expected time for the walk to reach either α or β is |α|·β.
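For reference, the two-boundary value follows from the standard optional-stopping argument; writing $S_n$ for the walk's position after $n$ steps and $T$ for the first time it hits $\alpha$ or $\beta$ (these symbols are just notation for this sketch):

$$0 = E[S_T] = \alpha\,P(S_T=\alpha) + \beta\,P(S_T=\beta) \quad\Rightarrow\quad P(S_T=\beta) = \frac{-\alpha}{\beta-\alpha},$$

$$E[T] = E[S_T^2] = \alpha^2\,\frac{\beta}{\beta-\alpha} + \beta^2\,\frac{-\alpha}{\beta-\alpha} = -\alpha\beta = |\alpha|\,\beta,$$

using that $S_n$ and $S_n^2 - n$ are martingales. Taking $\beta = -\alpha$ recovers the $\Theta(\alpha^2)$ expression for the symmetric case.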

In my case, I have two or more independent random walks r1, r2, …, that run in parallel, and I want to find the expected time until at least one of these random walks ri reaches either αi or βi (note that the boundaries αi and βi can be different for each random walk). My initial intuition was that, since these random walks are independent, I could find the expected hitting time for each random walk and take the minimum, but a computer simulation showed this to be wrong.
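A minimal Monte Carlo sketch of that check (the boundary values and function names here are just illustrative choices, and the walks are assumed to all take one step at every time tick): it estimates E[T1] for one walk and E[min(T1, T2)] for two walks run in parallel, and the minimum's expectation comes out well below min(E[T1], E[T2]).

```python
# Minimal sketch comparing E[T1] with E[min(T1, T2)] for independent +/-1 walks.
# Boundary values below are illustrative; all walks take one step per time tick.
import random

def exit_time(alpha, beta):
    """Time for a single walk started at 0 to first hit alpha (< 0) or beta (> 0)."""
    pos, t = 0, 0
    while alpha < pos < beta:
        pos += random.choice((-1, 1))
        t += 1
    return t

def min_exit_time(bounds):
    """First time at which ANY of several parallel walks hits one of its boundaries."""
    positions = [0] * len(bounds)
    t = 0
    while all(a < p < b for p, (a, b) in zip(positions, bounds)):
        positions = [p + random.choice((-1, 1)) for p in positions]
        t += 1
    return t

trials = 20000
bounds = [(-5, 5), (-5, 5)]   # hypothetical boundaries, same for both walks here
est_single = sum(exit_time(*bounds[0]) for _ in range(trials)) / trials
est_min = sum(min_exit_time(bounds) for _ in range(trials)) / trials
print(f"E[T1]          ~ {est_single:.2f}   (theory: |alpha|*beta = 25)")
print(f"E[min(T1, T2)] ~ {est_min:.2f}   (noticeably smaller than min E[Ti])")
```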

Any pointers?
Thanks
 
opapa said:
My initial intuition was that, since these random walks are independent, I could find the expected hitting time for each random walk and take the minimum

If we throw two fair dice, is it intuitive that the expected value of the minimum of the two dice equals the minimum of their two expected values? If it were, then the expected minimum from throwing two dice would be the same as the expected value from throwing one die, since we would be taking the minimum of two equal numbers.
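To put numbers on it: for two independent fair dice, P(min ≥ k) = ((7-k)/6)², so E[min] = Σ_{k=1}^{6} P(min ≥ k) = (36+25+16+9+4+1)/36 = 91/36 ≈ 2.53, well below the single-die expectation of 3.5.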
 
Thanks for the response. You are right, of course, that I cannot deduce the answer from the expected values of the individual walks; I need to compute the expected value of the minimum from scratch.

Trying to recompute the hitting time for two random walks RW1 and RW2 (along the lines of the proof of Lemma 0.2 from http://www.cs.cornell.edu/courses/cs4850/2010sp/Scribe%20Notes%5CLecture18.pdf ), I realized that it gets very complex. The main difference is that now I am not interested in reaching a single point for the first time, but in reaching a set of points, e.g., stopping as soon as RW1 reaches α1 or β1, or RW2 reaches α2 or β2. Furthermore, a state (RW1 = i, RW2 = j) can be reached along paths whose last step is not necessarily from (RW1 = i-1, RW2 = j-1); the previous state could be any of the four combinations (i±1, j±1). Therefore, it is not clear to me how the starting equation in Lemma 0.2 should be adapted to two dimensions.
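For concreteness, here is what one two-dimensional version of that recursion could look like, set up numerically rather than in closed form (a sketch only: the boundary values are illustrative, and both walks are assumed to take a step at every time tick):

```python
# Sketch of a 2D analogue of the Lemma 0.2 recursion (illustrative boundaries).
# h[(i, j)] = expected remaining time until RW1 hits a1 or b1, OR RW2 hits a2 or b2,
# given that currently RW1 = i and RW2 = j.  Both walks step at every time tick.
a1, b1 = -5, 5    # boundaries for RW1 (hypothetical choice)
a2, b2 = -5, 5    # boundaries for RW2 (hypothetical choice)

h = {(i, j): 0.0 for i in range(a1, b1 + 1) for j in range(a2, b2 + 1)}

def interior(i, j):
    # The process is still running only while BOTH walks are strictly inside.
    return a1 < i < b1 and a2 < j < b2

# Interior equation: h(i, j) = 1 + (1/4) * sum of h over the four joint moves
# (i +/- 1, j +/- 1); h = 0 as soon as either walk is on its boundary.
# Solve by repeated sweeps (Gauss-Seidel style) until the values stop changing.
for sweep in range(100000):
    change = 0.0
    for (i, j) in h:
        if not interior(i, j):
            continue
        new = 1.0 + 0.25 * sum(h[(i + di, j + dj)]
                               for di in (-1, 1) for dj in (-1, 1))
        change = max(change, abs(new - h[(i, j)]))
        h[(i, j)] = new
    if change < 1e-9:
        break

print(f"Expected time until either walk first exits, from (0, 0): {h[(0, 0)]:.3f}")
```

This at least gives a number that a direct simulation can be checked against; whether the resulting linear system has a clean closed-form solution for general αi, βi is exactly what is unclear to me.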

Is there any other direction that does not have these issues?
 
Finding the distribution of the minimum of two independent random variables falls under the heading of "order statistics". For independent variables, the distribution of the minimum depends only on the distributions of the individual variables: http://en.wikipedia.org/wiki/Bapat–Beg_theorem
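In the independent case this gives a concrete computational route: P(min_i T_i > t) = Π_i P(T_i > t), and since the T_i here are nonnegative integers, E[min_i T_i] = Σ_{t ≥ 0} P(min_i T_i > t). A minimal sketch along those lines (the boundaries and the truncation horizon T_MAX are illustrative assumptions), computing each walk's exit-time survival function exactly by propagating its state distribution with absorbing boundaries:

```python
# Sketch: E[min_i T_i] for independent walks from their exit-time survival functions,
# using P(min > t) = prod_i P(T_i > t) and E[X] = sum_t P(X > t) for integer X >= 0.

def survival(alpha, beta, t_max):
    """Return [P(T > t) for t = 0..t_max-1], where T is the first time a +/-1 walk
    started at 0 hits alpha (< 0) or beta (> 0).  Boundary states absorb."""
    probs = {x: 0.0 for x in range(alpha, beta + 1)}
    probs[0] = 1.0
    surv = []
    for _ in range(t_max):
        surv.append(sum(p for x, p in probs.items() if alpha < x < beta))
        nxt = {x: 0.0 for x in probs}
        for x, p in probs.items():
            if alpha < x < beta:       # interior mass splits equally left/right
                nxt[x - 1] += 0.5 * p
                nxt[x + 1] += 0.5 * p
            else:                      # absorbed mass stays where it is
                nxt[x] += p
        probs = nxt
    return surv

T_MAX = 2000                            # truncation horizon (illustrative)
s1 = survival(-5, 5, T_MAX)             # hypothetical boundaries for walk 1
s2 = survival(-3, 7, T_MAX)             # hypothetical boundaries for walk 2
e1, e2 = sum(s1), sum(s2)               # E[T_i] = sum_t P(T_i > t); expect 25 and 21
e_min = sum(a * b for a, b in zip(s1, s2))
print(f"E[T1] ~ {e1:.2f}, E[T2] ~ {e2:.2f}, E[min(T1, T2)] ~ {e_min:.2f}")
```

This yields the expectation numerically rather than in closed form, but it shows where the order-statistics identity enters and gives something to compare a simulation against.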

If you have a more complicated question that is particular to graph theory, I think you should post it in a new thread and put something indicating graph theory in the title.
 
Thanks! The theorem seems relevant. I will have a look at it, and, if needed, I will initiate a new thread as suggested.
 