
Random walker, Markov chain

  1. Nov 4, 2012 #1

    fluidistic

    Gold Member

    1. The problem statement, all variables and given/known data
    Hi guys, I'm absolutely desperate on the following problem:
    Consider a random walker who can make steps only to neighbor sites in "D" dimensions, where D is an arbitrary natural number. Assume that the distance between two adjacent sites is the same for all sites and that the probability for the walker to choose any of the adjacent sites is the same. Let [itex]\vec r[/itex] be a particular site and N be the number of steps the walker has done.
    What is the probability to find the random walker at [itex]\vec r[/itex] after N steps if he starts initially at the origin?
    Answer to this question using:
    1) the assumption that the walk can be described as a Markov chain in D dimensions.
    2) the distribution function of the displacement of the walker after N independent steps, [itex]\vec r = \vec r _1 + ... + \vec r _N[/itex].


    2. Relevant equations
    Probably a lot! Not really sure.


    3. The attempt at a solution
    I tried part 1) so far but did not go far at all.
    I know that the probability to choose any adjacent site is [itex]\frac{1}{2D}[/itex] where D is the dimension of the space the walker walks in.
    So they are asking me [itex]P_N(\vec r)[/itex]. I'll use the notation [itex]\vec r=r_1 \hat x_1 +... + r_D \hat x_D[/itex]; that's the position of the walker after N steps.
    I have the initial condition [itex]P_0(\vec r )=\delta_{\vec r , \vec 0}[/itex], i.e. the walker starts at the origin with certainty. Since I'm given an initial condition on [itex]P_N(\vec r )[/itex] and I'm asked to find [itex]P_N (\vec r )[/itex], I can "smell" that I'll have to solve a differential equation or something like that, but I've no idea how to find it.
    Now I know that for a Markovian process, the probability that the walker is at, say, [itex]\vec r[/itex] after N steps does not depend on its whole past but only on its present, namely only on the "state" [itex]P_ {N-1} (\vec l )[/itex] where [itex]\vec l[/itex] is the site of the walker just before [itex]\vec r[/itex].
    If I'm not wrong, then, I guess [itex]P_N (\vec r)=\frac{1}{2D} P_{N-1} (\vec l )[/itex].
    But then if I take [itex]P_{N-1} (\vec l )[/itex] it will depend only on the previous state. In the end I get the wrong result that [itex]P_N ( \vec r ) = \left ( \frac{1}{2D} \right ) ^{N} P_0 (0)=\left ( \frac{1}{2D} \right ) ^{N}[/itex].
    I know this result is wrong but I don't know what I'm doing wrong. Any help will be appreciated.
     
  3. Nov 4, 2012 #2

    haruspex

    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    Your method didn't work because it only considered one possible prior state. There are 2D to sum over.
    You could try assessing the number of ways N can be partitioned into the D dimensions, then how many ways each of those partitions can lead to the target coordinates. Or (almost the same?) get a recurrence relation based on number of steps available, number of dimensions, and remaining vector for those dimensions. I.e. knock off one dimension at a time.
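
    (If it helps to make the second suggestion concrete, here is a rough Python sketch of the "knock off one dimension at a time" recurrence. It counts lattice paths rather than probabilities — dividing by (2D)^N gives the probability — and the function names ways_1d and count_paths are just made up for the illustration.)

[code]
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def ways_1d(n, x):
    """Number of 1-D walks of n unit steps ending at displacement x
    (recurrence: the last step arrived either from x-1 or from x+1)."""
    if n == 0:
        return 1 if x == 0 else 0
    return ways_1d(n - 1, x - 1) + ways_1d(n - 1, x + 1)

@lru_cache(maxsize=None)
def count_paths(n, r):
    """Number of n-step walks in len(r) dimensions ending at displacement r.
    'Knock off one dimension at a time': choose which k of the n steps are
    spent in the first dimension, count the 1-D walks made of those k steps,
    and recurse on the remaining dimensions with n - k steps."""
    if len(r) == 1:
        return ways_1d(n, r[0])
    return sum(comb(n, k) * ways_1d(k, r[0]) * count_paths(n - k, r[1:])
               for k in range(n + 1))

# Probability of being at r after N steps in D dimensions:
D, N, r = 2, 4, (2, 0)
print(count_paths(N, r) / (2 * D) ** N)   # 16 paths out of 4^4, i.e. 0.0625
[/code]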
     
  4. Nov 5, 2012 #3

    fluidistic

    Gold Member

    First of all, thanks for the reply.
    Hmm I don't really understand. Since it's a Markov process, the current state only depends on the last one, not on the previous ones.

    Hmm I see, but would that be for part 2)? I don't see what it has to do with Markov chains (I'm explicitly asked to solve the problem by considering the process as a Markov chain in part 1).

    I made a mistake in my first post: I reach [itex]P_N(\vec r ,N | 0, 0) = \left ( \frac{1}{2D} \right )[/itex] instead of [itex]\left ( \frac{1}{2D} \right ) ^{N}[/itex]. That's still wrong because there's no dependence on [itex]\vec r[/itex] while there should be.

    Edit: Apparently my result is wrong because I assumed that the Markov chain is homogeneous, while it seems it isn't.
    A homogeneous Markov chain has the property that [itex]P(X_k=i_k | X_{k-1} =i_{k-1} )[/itex] does not depend on k. Well, I'm confused. In the problem the Markov chain seems homogeneous... but then I reach a nonsensical answer.
     
    Last edited: Nov 5, 2012
  5. Nov 5, 2012 #4

    fluidistic

    Gold Member

    Attempt for 2):
    Let [itex]\vec r = \vec x_1 +... + \vec x_D[/itex], where [itex]x_i[/itex] is the component of [itex]\vec r[/itex] along the i-th dimension.
    For any dimension i, there are [itex]{x_ i \choose N}[/itex] ways for the walker to get to [itex]\vec x_i[/itex] in N steps. And the probability to choose one particular way is [itex]\left ( \frac{1}{2D} \right ) ^N[/itex].
    Since the position of the walker in any dimension has no influence whatsoever on its position along the other dimensions, I can safely write [itex]P _N ( \vec r ) = {x_1 \choose N} ... {x_D \choose N} \left ( \frac{1}{2D} \right ) ^N = \left ( \frac{1}{2D} \right ) ^N \prod _{i=1}^D \frac{N!}{(N-x_i )!x_i!}[/itex]. Does this look correct? I tried D=1 and I think it looked right, at least to me. What do you think? This would be my answer for the probability to find the walker at the position [itex]\vec r[/itex] after N steps.
     
  6. Nov 5, 2012 #5

    Ray Vickson

    Science Advisor
    Homework Helper

    This is incorrect: for N = x_1 there is just 1 way to get from 0 to x_1 in N steps; for N = x_1 + 1, there is no way to get from 0 to x_1 in N steps; for N = x_1 + 2, there are 2 ways to get from 0 to x_1 in N steps, etc.

    RGV
     
  7. Nov 5, 2012 #6

    fluidistic

    Gold Member

    Whoops, I've got some LaTeX typos. All my [itex]{n \choose k}[/itex] should be [itex]{k \choose n}[/itex]. Does this fix the problem?
     
  8. Nov 5, 2012 #7

    haruspex

    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    Quite so, but you seem to be confusing "the probability of being at a certain point after N steps given it was at a certain neighbour after N-1" with "the probability of being at a certain point after N steps (given only that it was at the origin at step 0)".
    Isn't it clear that [itex]P_{N}(\vec{r})=\sum_{\vec{l} \in A(\vec{r})}P_{N-1}(\vec{l})/2D[/itex], where [itex]A(\vec{r})[/itex] is the set of neighbours of r?
    In the OP, you don't say you have to "use" Markov chains, merely that you can assume it behaves like one. I don't know why they tell you that since it clearly does from the preceding description.
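
    (For what it's worth, that recurrence is also straightforward to iterate numerically if you want to check answers later — below is a rough Python sketch, with the distribution stored as a dictionary; the function names are made up for the illustration.)

[code]
from collections import defaultdict

def one_step(prev, D):
    """Push each site's probability equally to its 2D neighbours; this is the
    same as P_N(r) = sum over the neighbours l of r of P_{N-1}(l) / (2D)."""
    nxt = defaultdict(float)
    for pos, p in prev.items():
        for i in range(D):
            for delta in (1, -1):
                neighbour = pos[:i] + (pos[i] + delta,) + pos[i + 1:]
                nxt[neighbour] += p / (2 * D)
    return nxt

def distribution_after(N, D):
    """P_N(.) for a walk started at the origin, as {site tuple: probability}."""
    dist = {(0,) * D: 1.0}
    for _ in range(N):
        dist = one_step(dist, D)
    return dist

# e.g. the probability of being back at the origin after 4 steps in 2 dimensions
print(distribution_after(4, 2)[(0, 0)])
[/code]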

    Wrt your later post, I don't understand how you got [itex]\left(^{x_i}_N\right)[/itex]. Maybe you meant [itex]\left(_{x_i}^N\right)[/itex], but I still don't see it. To reach [itex]x_i[/itex] in the i-direction there must be s steps in the 'wrong' direction and [itex]s+x_i[/itex] in the 'right' direction, for some s < N. But the larger s is for one dimension, the less is left for the other dimensions, so it gets quite messy.

    Try this: start by considering the case where N is only just large enough to reach r, i.e. no steps in the 'wrong' direction in any dimension. That's fairly easy. Then figure out the multiplier that arises from having spare steps, i.e. a larger N for the same r. Note that an odd number of spares will give you zero!
     
  9. Nov 5, 2012 #8

    fluidistic

    Gold Member

    Ok thanks guys. I'll concentrate for now on the 2) part.
    I totally overlooked that taking more steps in one dimension leaves fewer for the others, since the total has to be N steps...
     
  10. Nov 5, 2012 #9

    Ray Vickson

    Science Advisor
    Homework Helper

    Sorry: what I said above is incorrect. For n = x+2 there are not just 2 ways to get from 0 to x in n steps; there are C(n,n-1) = C(n,1) = n ways. In general, the number of ways of getting from 0 to x in n steps is
    [tex] N_{n,x} = C(n, (n+x)/2) = {n \choose (n+x)/2}, [/tex]
    where it is understood that this equals 0 unless (n+x)/2 is an integer between 0 and n inclusive.
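
    (As a quick sanity check on that count, here is a short Python comparison of the formula against a brute-force enumeration of all step sequences; the helper names are made up for the example.)

[code]
from math import comb
from itertools import product

def n_ways_formula(n, x):
    """C(n, (n+x)/2), taken as 0 unless (n+x)/2 is an integer in [0, n]."""
    if (n + x) % 2 or not 0 <= (n + x) // 2 <= n:
        return 0
    return comb(n, (n + x) // 2)

def n_ways_brute(n, x):
    """Count all sequences of n steps of +1/-1 that end at x."""
    return sum(1 for steps in product((1, -1), repeat=n) if sum(steps) == x)

for n in range(7):
    for x in range(-n - 1, n + 2):
        assert n_ways_formula(n, x) == n_ways_brute(n, x)
print("formula matches brute force for n up to 6")
[/code]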

    All this is treated beautifully in the marvelous book by W. Feller "An Introduction to Probability Theory and its Applications", Vol. I (Wiley, 1968), pp. 73-75 (an oldie, but an unbeatable classic).

    RGV
     
    Last edited: Nov 5, 2012
  11. Nov 5, 2012 #10

    fluidistic

    Gold Member

    Ok thank you infinitely for the information. I am going to take a look at this reference and post back in 1 or a few days.
     