
Interesting Random Process

  1. Nov 27, 2015 #1
    • Moved from a technical forum, so homework template missing
    Hello,

    So I was asked this question the other day and don't really know how to go about it:

    "Define a random process by:

    ##X_{n+1} = X_n + 1## with probability 1/2
    ##X_{n+1} = 1/X_n## with probability 1/2

    Given that ##X_0 = 100##, find the expectation of ##X_{10}##."

    I've only ever really met random processes like this when you consider equal step random walks and can use characteristic functions. How would I go about solving this problem?

    Any help would be very much appreciated!
     
  3. Nov 27, 2015 #2
    Maybe we could compute the average of the next step instead of building a tree? We could also see the average as a fixed point of the averaged recurrence relation.
     
    Last edited: Nov 27, 2015
  4. Nov 27, 2015 #3
    In other words we write $$x_{n+1}=\frac{1}{2}\left(x_n+1+\frac{1}{x_n}\right)$$

    The fixed point is not exact for n = 10 (that would need a computer program), but this method gives the average at infinity as $$\phi=\frac{1+\sqrt{5}}{2}\approx 1.62$$

    The average at ten using a program gave me 1.59.
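    As a side check (my own sketch, not part of the original post): iterating the averaged recurrence from ##x_0 = 100## does settle at the claimed fixed point, since any fixed point satisfies ##x = \frac12(x + 1 + 1/x)##, i.e. ##x^2 - x - 1 = 0##, whose positive root is ##\phi##.

    ```c
    #include <stdio.h>
    #include <math.h>

    /* Iterate the averaged recurrence x -> (x + 1 + 1/x)/2.
       Its fixed point solves x^2 - x - 1 = 0, i.e. the golden ratio phi. */
    double averaged_limit(double x, int iterations)
    {
        for (int i = 0; i < iterations; i++)
            x = 0.5 * (x + 1.0 + 1.0 / x);
        return x;
    }

    int main(void)
    {
        double phi = (1.0 + sqrt(5.0)) / 2.0;
        printf("iterate: %.10f\n", averaged_limit(100.0, 200));
        printf("phi    : %.10f\n", phi);
        return 0;
    }
    ```

    The iteration is a contraction near ##\phi## (derivative ##\frac12(1 - 1/x^2) \approx 0.31## there), so convergence is fast even starting from 100.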
     
  5. Nov 27, 2015 #4

    Ray Vickson

    Science Advisor
    Homework Helper

    No, that first equation you wrote is false:
    [tex] E \left( \frac{1}{X_n} \right) \neq \frac{1}{E X_n} [/tex]
    For instance,
    [tex] X_1 = \begin{cases} 101 & \Pr = 1/2 \\
    \displaystyle \frac{1}{100} & \Pr = 1/2
    \end{cases}
    [/tex]
    Thus,
    [tex] E X_1 = \frac{1}{2} \left( 101 + \frac{1}{100} \right) = 50.50500000 [/tex]
    so
    [tex] \frac{1}{E X_1} \doteq 0.01980001980 [/tex]
    but
    [tex] E \left( \frac{1}{X_1} \right) = \frac{1}{2} \left( \frac{1}{101} + 100 \right) \doteq 50.00495050
    [/tex]
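    Those three numbers are easy to reproduce (a minimal check of the arithmetic above, not from the original post):

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* One step from X0 = 100: X1 = 101 or 1/100, each with probability 1/2. */
        double ex1      = 0.5 * (101.0 + 1.0 / 100.0);   /* E[X1]     = 50.505    */
        double inv_ex1  = 1.0 / ex1;                     /* 1/E[X1]  ~= 0.0198    */
        double e_inv_x1 = 0.5 * (1.0 / 101.0 + 100.0);   /* E[1/X1]  ~= 50.00495  */

        printf("E[X1]   = %.10f\n", ex1);
        printf("1/E[X1] = %.10f\n", inv_ex1);
        printf("E[1/X1] = %.10f\n", e_inv_x1);
        return 0;
    }
    ```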

    Just for interest's sake: what was the nature of the program you used?

    When I do it in Maple 11 I get the following ##E X_n## values:
    [tex] \begin{array}{rl}
    n & EX_n \\
    2 & 50.75497525 \\
    3 & 38.62872549 \\
    4 & 32.59731395 \\
    5 & 26.56027159 \\
    6 & 22.03256118 \\
    7 & 18.26083028 \\
    8 & 15.24344673 \\
    9 & 12.79228905 \\
    10 & 10.81267707 \\
    \end{array}
    [/tex]

    Basically, starting from a sequence ##V_n## of ##X_n## values (which may have repeated values, so each entry has probability ##2^{-n}##), I form two new sequences ##M_{n+1} = [V_n(1)+1, \ldots ,V_n(2^n)+1]## and ##N_{n+1} = [1/V_n(1) , \ldots, 1/V_n(2^n) ]##, then get a new sequence ##V_{n+1}## of ##X_{n+1}## values by concatenating ##M_{n+1}## and ##N_{n+1}##, then sorting the entries from smallest to largest. The sequence ##V_{n+1}## has ##2^{n+1}## entries, but many entries are duplicated, so each individual entry has probability ##2^{-n-1}##. Of course, the sorting step is unnecessary, but makes the results easier to read.
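    That array-doubling step can be sketched iteratively in C (my own sketch of the procedure described above, not the Maple code; it skips the optional sort and uses floating point throughout):

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Expectation of X_n by enumerating all 2^n equally likely branch values.
       Each step maps every value v to its two children v + 1 and 1/v. */
    double expectation(double x0, int n)
    {
        size_t size = 1;
        double *v = malloc(sizeof *v << n);  /* room for 2^n doubles */
        if (!v)
            return -1.0;
        v[0] = x0;

        for (int step = 0; step < n; step++) {
            for (size_t i = 0; i < size; i++) {
                v[size + i] = 1.0 / v[i];    /* "invert" child            */
                v[i]        = v[i] + 1.0;    /* "add one" child, in place */
            }
            size *= 2;
        }

        double sum = 0.0;
        for (size_t i = 0; i < size; i++)
            sum += v[i];
        free(v);
        return sum / (double)size;
    }

    int main(void)
    {
        for (int n = 1; n <= 10; n++)
            printf("E[X_%d] = %.8f\n", n, expectation(100.0, n));
        return 0;
    }
    ```

    Note the invert child is written from the old value of ##V_n(i)## before the add-one child overwrites it in place.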

    Maple did all the computations of the ##V_n## in exact rational arithmetic, so roundoff errors are not an issue. However, I let the ##V_n(i)## values be converted to floating-point numbers in the computation of ##EX_n## because the exact rational value of ##EX_n## becomes a ratio of two huge integers when ##n## gets close to 10. In fact, the exact value of ##EX_{10}## is
    [tex] \begin{array}{lcl} E X_{10} &= & \displaystyle\frac{I_{10}}{J_{10}}, \; \text{where}\\
    I_{10} &=& 1469579070445432944325748152877976451732541041634060024\\ &&653347052338236193574669483177095563513720147511715879 \\
    J_{10} &=& 1359126015084446126651435240624997445643066808400449966906136188164565\\
    &&14601326679253516313161435366438748160
    \end{array}
    [/tex]
     
    Last edited: Nov 27, 2015
  6. Nov 27, 2015 #5
    That recurrence relation is not the exact recurrence for the average; it is the "average recurrence relation" (each branch replaced by its mean), so it is not intended to be exact.

    I wrote a small C program (with an error, see the next post):

    Code (C):

    #include<stdio.h>

    double avg=0.0;

    void tree(double val, int depth)
    {
       if(depth<10)
       {
         val=val+1.0;
         tree(val, depth+1);

         val=1.0/val;   /* bug: val was already incremented, so this is 1/(v+1), not 1/v */
         tree(val, depth+1);
       }
       else
         avg+=val;
    }

    int main(void)
    {
       double val0=100.0;

       tree(val0,0);

       printf("Avg : %lf\n",avg/1024.0);
       return 0;
    }

     
     
    Last edited: Nov 27, 2015
  7. Nov 27, 2015 #6
    I discovered an error in the code, it gives avg near 10.8.
    Code (C):

    #include<stdio.h>

    double avg=0.0;

    void tree(double val, int depth)
    {
       if(depth<10)
       {
         tree(val+1.0, depth+1);
         tree(1.0/val, depth+1);
       }
       else
         avg+=val;
    }

    int main(void)
    {
       double val0=100.0;

       tree(val0,0);

       printf("Avg : %lf\n",avg/1024.0);
       return 0;
    }
     
    ------------------
    It's by making mistakes that I learn.
     
    Last edited: Nov 27, 2015
  8. Nov 27, 2015 #7
    We obtained different results, so I surely made a mistake somewhere. The program builds a tree starting from 100, applying both recurrence relations at each level down to depth 10, and then averages the leaf values.
     
  9. Nov 27, 2015 #8
    I obtain different values; roughly:

    ##EX_7 \approx 18##
    ##EX_8 \approx 15##
    ##EX_9 \approx 12##
     
  10. Nov 27, 2015 #9

    Ray Vickson

    Science Advisor
    Homework Helper

    Actually, we obtain the same results, or close to it. I had made an error in the recursions for ##n = 7## to ##n = 10##, so those results for ##EX_n## were erroneous. They have now been corrected in the post, and I get ##E X_{10} \approx 10.81##, as do you.
     
    Last edited: Nov 27, 2015
  11. Nov 27, 2015 #10
    The question now is: if we go to infinity, does it converge to ##\phi##?
     
  12. Nov 27, 2015 #11

    haruspex

    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    I can show that asymptotically the mean value lies between 2.3 and 3.
     
  13. Nov 28, 2015 #12
    I'm interested in the proof you have.
     
  14. Nov 28, 2015 #13

    haruspex

    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    Whenever the sequence goes below 1, the next step is above 1. So I considered a subsequence which skips the <1 steps. Sometimes that means we get two consecutive terms the same, since it might follow the invert rule twice, so I skip the repeats too. This gets me to a sequence
    ##y_{n+1}=y_n+1## with probability 2/3;
    ##y_{n+1}=1/y_n+1## with probability 1/3.
    Let the mean of this sequence be ##\mu## and that of its inverse be ##\nu##. We obtain ##\mu=3+\nu##.
    Since each term is >1 we have ##\nu < 1##, and by Jensen's inequality (##1/y## being convex) ##\nu \ge 1/\mu##; hence ##4 > \mu > 3.3##.
    Relating this back to the original sequence, its mean is ##\frac 23\mu+\frac 13\nu##, which equals ##\mu-1##.
     
    Last edited: Nov 29, 2015
  15. Nov 28, 2015 #14
    I don't understand: how do you get $$\mu=3+\nu$$ ?
     
  16. Nov 28, 2015 #15

    haruspex

    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    ##\mu=E(y_{n+1})=\frac 23 E(y_n+1)+\frac 13 E(1/y_n+1)=\frac 23(\mu+1)+\frac 13(\nu+1)## etc.
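    Spelling out the "etc." (my own completion of the algebra):

    ```latex
    \mu = \tfrac23(\mu+1) + \tfrac13(\nu+1)
        = \tfrac23\mu + \tfrac13\nu + 1
    \quad\Longrightarrow\quad
    \tfrac13\mu = \tfrac13\nu + 1
    \quad\Longrightarrow\quad
    \mu = \nu + 3 .
    ```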
     
  17. Nov 28, 2015 #16
    It is indeed a deep result.
     
  18. Nov 29, 2015 #17

    haruspex

    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    I wasn't happy with the upper bound, so I tried comparing the y sequence with the sequence
    ##z_{n+1}=z_n+1## with probability 2/3; ##z_{n+1}=1## with probability 1/3.
    In a sense which I believe can be made rigorous, this is everywhere less than the y sequence; at least in the sense that E(z) < E(y) and, more importantly, E(1/z) > E(1/y). Since E(1/z) = ln(3)/2 < 0.55, the average of the original sequence lies between 2.3 and 2.55.
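    The value ##E(1/z)=\ln(3)/2## can be checked numerically. The sketch below assumes the stationary distribution ##P(z=k)=\frac13\left(\frac23\right)^{k-1}## that the reset rule implies (my reading, not stated explicitly above):

    ```c
    #include <stdio.h>
    #include <math.h>

    /* In the stationary regime z = k with probability (1/3)(2/3)^(k-1),
       so E(1/z) = sum over k of (1/k)(1/3)(2/3)^(k-1) = ln(3)/2. */
    double e_inverse_z(int terms)
    {
        double sum = 0.0, p = 1.0 / 3.0;   /* p = (1/3)(2/3)^(k-1) */
        for (int k = 1; k <= terms; k++) {
            sum += p / k;
            p *= 2.0 / 3.0;
        }
        return sum;
    }

    int main(void)
    {
        printf("series : %.10f\n", e_inverse_z(200));
        printf("ln(3)/2: %.10f\n", log(3.0) / 2.0);
        return 0;
    }
    ```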

    Update: I generated some 2000-long y sequences starting at 3. The mean of a sequence rarely fell outside the range 3.3 to 3.55.
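    A simulation along those lines can be sketched as follows (my own sketch; the seeds and run length are arbitrary choices):

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Running mean of the y sequence:
       y -> y + 1   with probability 2/3,
       y -> 1/y + 1 with probability 1/3. */
    double simulate_mean(double y0, int steps, unsigned seed)
    {
        srand(seed);
        double y = y0, sum = 0.0;
        for (int i = 0; i < steps; i++) {
            if (rand() % 3 < 2)
                y = y + 1.0;
            else
                y = 1.0 / y + 1.0;
            sum += y;
        }
        return sum / steps;
    }

    int main(void)
    {
        for (unsigned seed = 1; seed <= 5; seed++)
            printf("seed %u: mean = %.4f\n", seed, simulate_mean(3.0, 2000, seed));
        return 0;
    }
    ```

    Each run's mean should land near the 3.3 to 3.55 range claimed above, consistent with the bound ##3.3 < \mu < 3.55##.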
     
    Last edited: Nov 30, 2015