
Probability branching process proof

  1. May 7, 2012 #1
    By conditioning on the value of X1, and then thinking of future generations as a particular
    generation of the separate branching processes spawned by these children, show that Fn(s),
    defined by Fn(s) = E(s^Xn), satisfies

    Fn(s) = F(Fn−1(s)) ∀n ≥ 2.


    I need to prove the above result and have somewhat of an idea how to, but I can't get the end result.

    Here is my working thus far.

    Fn(s) = E(s^Xn) = E(s^(X1 + X2 + ... + Xn)) = E(s^(j + X2 + ... + Xn)) = s^j E(s^(Xn−1))

    Then E(s^Xn) = Σ_j E(s^Xn | X1 = j) P(X1 = j) = Σ_j s^j E(s^(Xn−1)) P(X1 = j)?

    Is this anywhere near correct? Where am I going wrong?
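    One note on where this may be going astray: Xn is the size of generation n, not a running total, so E(s^Xn) is not E(s^(X1 + ... + Xn)). A sketch of the first step the problem statement suggests, conditioning on X1 via the law of total expectation (identifying the conditional expectation is the part that remains):

    F_n(s) = E(s^{X_n}) = \sum_{j \ge 0} P(X_1 = j) \, E(s^{X_n} \mid X_1 = j)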
     
  3. May 7, 2012 #2

    chiro (Science Advisor)

    Hey RVP91.

    For this process, can you assume only a Markovian property (first-order conditional independence) or general independence (zero-order conditional independence, i.e. independence for every observation)?
     
  4. May 8, 2012 #3
    Could you explain further? After reconsideration, I know for sure my original working was totally incorrect.

    Could anyone help me out? Possibly start me off?

    Thanks.
     
  5. May 8, 2012 #4
    In particular, could someone explain what is meant by "By conditioning on the value of X1, and then thinking of future generations as a particular generation of the separate branching processes spawned by these children"? I think this is essentially the key, but I don't understand what it means.
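    One way to unpack that sentence (a sketch; the Y_i notation is introduced here for illustration): given X1 = j, each of the j children starts an independent copy of the whole branching process, and generation n of the original process is generation n − 1 of those j copies combined, so

    (X_n \mid X_1 = j) \stackrel{d}{=} Y_1 + \cdots + Y_j, \qquad Y_1, \dots, Y_j \ \text{i.i.d. distributed as } X_{n-1},

    which, by independence of the copies, gives E(s^{X_n} \mid X_1 = j) = (F_{n-1}(s))^j.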
     
  6. May 8, 2012 #5

    chiro (Science Advisor)

    First-order conditional independence is what is known as the Markov property. What this means is that you have a distribution for P(A(n)|A(n-1)) (i.e., a distribution for the probability of getting a value of A(n) given a previous known realization A(n-1)), and this probability depends only on A(n-1) and on no other realizations before it (like A(n-2), A(n-3), and so on).

    Zero-order or absolute independence means that P(A|B) = P(A) for all events A and B: in other words, A does not depend on any other data and is completely independent.
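    Translated to the branching process in this thread (a restatement for illustration, not part of the original problem): the generation sizes Xn satisfy the Markov property but not zero-order independence, since the number of parents influences the number of children:

    P(X_n = k \mid X_{n-1} = m, X_{n-2}, \dots, X_1) = P(X_n = k \mid X_{n-1} = m) \ne P(X_n = k) \ \text{in general}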
     
  7. May 8, 2012 #6
    So normally it would be zero-order, as the offspring at each stage are independent of any offspring around them in the same generation?

    "By conditioning on the value of X1, and then thinking of future generations as a particular
    generation of the separate branching processes spawned by these children" does this statement change it and make it first order?
     
  8. May 8, 2012 #7

    chiro (Science Advisor)

    The very nature of conditioning will make it at least first-order if you are conditioning on a previous value.

    It seems that what you are saying is that the children create a new process, and this translates to a new distribution. This is exactly what Markovian systems do: a realization now determines the distribution of the next realization, and that distribution is given by the transition probabilities in your transition matrix for discrete systems. Non-discrete systems follow the same idea, but they use different formulations.
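    A minimal simulation sketch of that idea, assuming for concreteness a made-up offspring law in which each individual has 0, 1, or 2 children with probabilities 1/4, 1/2, 1/4 (the offspring law and all names here are illustrative, not from the problem):

    Code:
        import random

        # Hypothetical offspring law for illustration: 0, 1, or 2 children
        # with probabilities 1/4, 1/2, 1/4.
        OFFSPRING = [0, 1, 2]
        WEIGHTS = [1, 2, 1]

        def next_generation(x_prev):
            # Each of the x_prev individuals reproduces independently, so
            # the distribution of X_n depends only on the realized value of
            # X_{n-1}: the first-order Markov property described above.
            return sum(random.choices(OFFSPRING, weights=WEIGHTS)[0]
                       for _ in range(x_prev))

        def simulate(n_generations, x0=1):
            # Generation sizes X_0, X_1, ..., X_n of one realization.
            sizes = [x0]
            for _ in range(n_generations):
                sizes.append(next_generation(sizes[-1]))
            return sizes

        print(simulate(10))  # e.g. [1, 2, 2, 3, 1, 1, 2, 0, 0, 0, 0]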
     
  9. May 8, 2012 #8
    Oh right. I'm really confused now. Is there any chance you could perhaps give me the first few lines of the proof and then some hints on how to continue please?
     
  10. May 8, 2012 #9

    chiro (Science Advisor)

    To prove anything you need assumptions that you will use.

    To prove that zero-order conditional independence doesn't hold, it suffices to prove that P(A|B) ≠ P(A) as a general statement. To prove first-order, it suffices to prove that P(A|B,C,D,E,...) = P(A|B), or more appropriately that P(A(n)|A(n-1),A(n-2),...,A(1)) = P(A(n)|A(n-1)).

    With the P(A|B) we consider that A = A(n) and B = any combination of states before n. By showing the backward direction you can show the forward one as well.

    For your example though, it is not this complex.

    The way I would do this, under the assumption of independence between the X's, is to use an inductive argument: prove it for n = 1 and 2, and then prove it for n > 2. You can use the fact that if X and Y are independent, then E[s^(X+Y)] = E[s^X * s^Y] = E[s^X]E[s^Y].

    Hopefully this will help you.
    So to get the specifics, you need to formulate your assumptions and then prove what you need to prove.

    You can either prove something from assuming an underlying distribution (explicitly stating P(A|blah) for instance to define the distribution of A given blah) or you can use data as a model to estimate the distribution properties of the underlying process itself.
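    Along those lines, a Monte Carlo sketch that checks Fn(s) = F(Fn−1(s)) numerically for a made-up offspring law (the pgf F below and all parameters are assumptions for illustration, not given in the problem):

    Code:
        import random

        OFFSPRING = [0, 1, 2]
        WEIGHTS = [1, 2, 1]  # hypothetical law: probabilities 1/4, 1/2, 1/4

        def F(s):
            # Offspring pgf F(s) = E[s^X1] for the law above.
            return 0.25 + 0.5 * s + 0.25 * s ** 2

        def Fn_iterated(n, s):
            # F_n(s) via the identity F_n(s) = F(F_{n-1}(s)), i.e. n-fold iteration.
            for _ in range(n):
                s = F(s)
            return s

        def sample_Xn(n):
            # Draw X_n by simulating n generations from one initial individual.
            x = 1
            for _ in range(n):
                x = sum(random.choices(OFFSPRING, weights=WEIGHTS)[0]
                        for _ in range(x))
            return x

        def Fn_empirical(n, s, trials=100_000):
            # Estimate F_n(s) = E[s^X_n] directly by simulation.
            return sum(s ** sample_Xn(n) for _ in range(trials)) / trials

        n, s = 3, 0.7
        print(Fn_iterated(n, s), Fn_empirical(n, s))  # agree up to Monte Carlo error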
     