
Homework Help: Probability: Random Walks

  1. Jun 3, 2010 #1
    1. The problem statement, all variables and given/known data

    I'm trying to revise for a probability exam in a few weeks and still getting really confused :( so I'd be really grateful for a push in the right direction:

    Consider a random walker on {0, 1, 2} who moves as follows:
    - if at 0, probability 1/2 of staying at 0 and 1/2 of moving to 1;
    - if at 1, probability 1/2 of moving to 0 and 1/2 of moving to 2;
    - if at 2, probability 1/2 of moving to 1 and 1/2 of staying at 2.

    Let the location of the random walker after k steps be Xk. Let X0 = 0.

    (a) Let N be the number of steps until the first visit to 1. Consider
    pk = P(N = k) and qk = P(Xk = 1).

    (i) Explain why pk [tex]\leq[/tex] qk for all k > 0.
    (ii) Calculate pk and qk for all k [tex]\geq[/tex] 0. (Hint: show qk+1 = 1/2(1 - qk).)

    2. Relevant equations

    3. The attempt at a solution

    i) It is asking me to explain why the probability that the walker takes k steps until the 1st visit to 1 is less than or equal to the probability that the kth location of the walker is 1.

    If the walker takes k steps until the 1st visit to 1, then his kth location is 1?

    ii) I think I need to condition on something, I think the 1st step?

    P(N=k | X0=0) = P(N=k| X0 = 0 and X1=0)P(X1=0) + P(N=k| X0 = 0 and X1=1)P(X1=1)
    gives me pk = 1/2 pk + 1/2 pk+1

    P(Xk=1| X0=0) = P(Xk=1 | X0=0 and X1=0)P(X1=0) + P(Xk=1 | X0=0 and X1=1)P(X1=1)
    gives qk = 1/2 qk+1 + 1/2, I think

    This is not what I had to show?! Once I have the recurrence relations I think I can solve them.

    Thanks :)
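    (As a quick numerical sanity check, not part of the assignment: the chain above can be simulated directly. A minimal Python sketch, where the trial count and seed are arbitrary choices. Note that every trial with N = k also has Xk = 1, so the simulated p_k can never exceed the simulated q_k, which is exactly the point of part (i).)

    ```python
    import random

    def simulate_walk(n_steps, n_trials=100_000, seed=0):
        """Estimate p_k = P(N = k) and q_k = P(X_k = 1) by simulation."""
        rng = random.Random(seed)
        first_visit = [0] * (n_steps + 1)   # counts of trials with N == k
        at_one = [0] * (n_steps + 1)        # counts of trials with X_k == 1
        for _ in range(n_trials):
            x, n = 0, None                  # X_0 = 0; n = first-visit time, unknown yet
            for k in range(1, n_steps + 1):
                r = rng.random()
                if x == 0:
                    x = 0 if r < 0.5 else 1
                elif x == 1:
                    x = 0 if r < 0.5 else 2
                else:
                    x = 1 if r < 0.5 else 2
                if x == 1:
                    at_one[k] += 1
                    if n is None:           # first visit to 1 happens at this step
                        n = k
                        first_visit[k] += 1
        p = [c / n_trials for c in first_visit]
        q = [c / n_trials for c in at_one]
        return p, q

    p, q = simulate_walk(6)
    # p[k] <= q[k] holds for every k, since {N = k} is a subset of {X_k = 1}
    ```
    
    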
  3. Jun 5, 2010 #2


    Homework Helper

    this looks like a simple Markov chain problem, as the probability of the next state depends only on the current state... are you working with Markov chains?

    to get started, first draw a diagram showing the 3 states & the transition probabilities between each

    now try turning that into a 3x3 matrix, P, whose entries are the conditional probabilities p_ij, with:
    [tex] p_{ij} = P[X_n = x_i |X_{n-1} = x_j] [/tex]
    i.e. p_ij = the probability that if you start in state x_j, at the next step you will be in state x_i
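    (A quick sketch of that matrix in Python/NumPy, using the column convention above so that columns index the "from" state and rows the "to" state; repeatedly applying P to the initial distribution gives the q_k values.)

    ```python
    import numpy as np

    # Columns index the "from" state, rows the "to" state, matching
    # the convention p_ij = P[X_n = x_i | X_{n-1} = x_j] above.
    P = np.array([[0.5, 0.5, 0.0],
                  [0.5, 0.0, 0.5],
                  [0.0, 0.5, 0.5]])

    v = np.array([1.0, 0.0, 0.0])  # X_0 = 0 with probability 1
    qs = []
    for k in range(1, 5):
        v = P @ v          # one step of the chain
        qs.append(v[1])    # q_k = P(X_k = 1)
    # qs -> [0.5, 0.25, 0.375, 0.3125]
    ```
    
    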
    Last edited: Jun 5, 2010
  4. Jun 5, 2010 #3



    though thinking about it, the transition matrix is useful & may be needed for part ii), but i think you can get through part i) with just a bit of thought... though if you do ii) first, i) falls out...

    First hints for p_k... the probability of being at position 1 for the first time at step k is just the product of the probabilities of staying at 0 for each of the first k-1 steps and then moving to 1, which should be relatively straightforward to write down...
    Last edited: Jun 5, 2010
  5. Jun 5, 2010 #4



    now for q_k... if you are at position 1 at step k-1, there is zero probability of being at position 1 at the next step, whilst if you are at either of the other spots (0 or 2), there is a probability of 1/2 of being at 1 at the next step...

    now compare that with the probability of jumping from 0 to 1 at each step for p_k...

    if it's still not making sense, try writing down the matrix i suggested...
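    (The observation above is exactly the hinted recurrence: q_{k+1} = 1/2 · P(X_k ≠ 1) = 1/2(1 - q_k). A minimal Python sketch iterating it from q_0 = 0:)

    ```python
    def q_values(n):
        """Iterate the hinted recurrence q_{k+1} = (1 - q_k)/2 from q_0 = 0,
        returning [q_0, q_1, ..., q_n]."""
        qs = [0.0]
        for _ in range(n):
            qs.append((1 - qs[-1]) / 2)
        return qs

    # q_values(4) -> [0.0, 0.5, 0.25, 0.375, 0.3125]
    ```

    (These match the values from iterating the transition matrix, and each is at least the corresponding p_k, as part (i) requires.)
    
    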
    Last edited: Jun 5, 2010