# Probability branching process proof

1. May 7, 2012

### RVP91

By conditioning on the value of X1, and then thinking of future generations as a particular
generation of the separate branching processes spawned by these children, show that Fn(s),
defined by Fn(s) = E(s^Xn), satisfies

Fn(s) = F(Fn−1(s)) ∀n ≥ 2.

I need to prove the above result. I have some idea of how to approach it, but I can't reach the end result.

Here is my working thus far.

Fn(s) = E(s^Xn) = E(s^(X1 + X2 + ... + Xn)) = E(s^(j + X2 + ... + Xn)) = s^j E(s^(Xn−1))

Then E(s^Xn) = Σj E(s^Xn | X1 = j) P(X1 = j) = Σj s^j E(s^(Xn−1)) P(X1 = j) ?

Is that anywhere near correct? Where am I going wrong?
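As a sanity check on the result itself, the recursion Fn(s) = F(Fn−1(s)) can be verified numerically for a concrete offspring distribution. The sketch below picks an arbitrary offspring pmf (P(offspring = 0) = 0.25, P(= 1) = 0.5, P(= 2) = 0.25, so F(s) = 0.25 + 0.5s + 0.25s^2) and compares E(s^Xn), computed from the exact pmf of Xn by convolution, against F composed with itself n times:

```python
# Numerical sanity check of Fn(s) = F(Fn-1(s)) for a branching process.
# The offspring distribution below is chosen arbitrarily for illustration.
OFFSPRING = [0.25, 0.5, 0.25]   # P(offspring = 0), P(= 1), P(= 2)

def F(s):
    """Offspring pgf F(s) = E[s^X1]."""
    return sum(p * s ** k for k, p in enumerate(OFFSPRING))

def convolve(p, q):
    """Pmf of the sum of two independent non-negative integer variables."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def next_generation(pmf):
    """Pmf of Xn given the pmf of Xn-1: condition on Xn-1 = j and use
    the j-fold convolution of the offspring pmf (individuals reproduce
    independently)."""
    result = [0.0]
    j_fold = [1.0]                      # sum of 0 offspring is 0
    for j, pj in enumerate(pmf):
        if len(j_fold) > len(result):
            result += [0.0] * (len(j_fold) - len(result))
        for k, q in enumerate(j_fold):
            result[k] += pj * q
        j_fold = convolve(j_fold, OFFSPRING)
    return result

def Fn_direct(s, n):
    """E[s^Xn] from the exact pmf of Xn -- no pgf composition used."""
    pmf = [0.0, 1.0]                    # X0 = 1 individual
    for _ in range(n):
        pmf = next_generation(pmf)
    return sum(p * s ** k for k, p in enumerate(pmf))

def Fn_iterated(s, n):
    """F composed with itself n times, i.e. F(F(...F(s)...))."""
    val = s
    for _ in range(n):
        val = F(val)
    return val
```

For example, `Fn_direct(0.7, 3)` and `Fn_iterated(0.7, 3)` agree to floating-point precision, which is the claimed identity unrolled.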

2. May 7, 2012

### chiro

Hey RVP91.

For this process, can you assume only a Markovian property (first-order conditional independence), or general independence (zero-order conditional independence, i.e. independence for every observation)?

3. May 8, 2012

### RVP91

Could you explain further? After reconsidering, I know for sure my original working was totally incorrect.

Could anyone help me out? Possibly start me off?

Thanks.

4. May 8, 2012

### RVP91

In particular, could someone explain what is meant by "By conditioning on the value of X1, and then thinking of future generations as a particular generation of the separate branching processes spawned by these children"? I think this is essentially the key, but I don't understand what it means.

5. May 8, 2012

### chiro

First-order conditional independence is what is known as the Markov property. It means that you have a distribution for P(A(n)|A(n-1)) (i.e. a distribution for the probability of getting a value of A(n) given a previously known realization A(n-1)), and this probability depends only on A(n-1) and on no other realizations before it (like A(n-2), A(n-3) and so on).

Zero order or absolute independence means that P(A|B) = P(A) for all events A and B: in other words A does not depend on any other data and is completely independent.

6. May 8, 2012

### RVP91

So normally would it be zero order, as the offspring at each stage are independent of any offspring around them in the same generation?

"By conditioning on the value of X1, and then thinking of future generations as a particular
generation of the separate branching processes spawned by these children" does this statement change it and make it first order?

7. May 8, 2012

### chiro

The very nature of conditioning will make it at least first order if you are conditioning on a previous value.

It seems that what you are saying is that the children create a new process, and this translates to a new distribution. That is exactly what Markovian systems do: a realization now determines the distribution of the next realization, and for discrete systems that distribution is given by the transition probabilities in your transition matrix. Non-discrete systems follow the same idea but use different formulations.
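A minimal sketch of that idea for a discrete system (the two states and transition probabilities below are made up purely for illustration):

```python
# Toy 2-state Markov chain: the current state alone determines the
# distribution of the next state, via the rows of a transition matrix.
# The numbers are arbitrary, for illustration only.
P = [[0.9, 0.1],   # from state 0: P(next = 0), P(next = 1)
     [0.5, 0.5]]   # from state 1: P(next = 0), P(next = 1)

def step(dist):
    """Push a distribution over states forward one step:
    new[j] = sum_i dist[i] * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(len(P)))
            for j in range(len(P))]

dist = [1.0, 0.0]        # start in state 0 with certainty
for _ in range(3):
    dist = step(dist)    # each step depends only on the current dist
```

The point is that `step` needs only the current distribution, never the earlier history — which is the first-order (Markov) property.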

8. May 8, 2012

### RVP91

Oh right. I'm really confused now. Is there any chance you could perhaps give me the first few lines of the proof and then some hints on how to continue please?

9. May 8, 2012

### chiro

To prove anything you need assumptions that you will use.

To prove that zero-order conditional independence doesn't hold, it suffices to show that P(A|B) ≠ P(A) as a general statement. To prove first order, it suffices to show that P(A|B,C,D,E,...) = P(A|B), or more appropriately that P(A(n)|A(n-1),A(n-2),...,A(1)) = P(A(n)|A(n-1)).

With the P(A|B) we consider that A = A(n) and B = any combination of states before n. By showing the backward direction you can show the forward one as well.

For your example though, it is not this complex.

The way I would do this, under the assumption of independence between the X's, is to use an inductive argument: prove it for n = 1 and n = 2, then prove it for n > 2. You can use the fact that if X and Y are independent, then E[s^(X+Y)] = E[s^X * s^Y] = E[s^X]E[s^Y].
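That factorization is easy to check numerically for two small pmfs (the distributions below are made up purely for illustration):

```python
# Check E[s^(X+Y)] = E[s^X] * E[s^Y] for independent X and Y.
px = [0.2, 0.5, 0.3]   # P(X=0), P(X=1), P(X=2) -- arbitrary example pmf
py = [0.6, 0.4]        # P(Y=0), P(Y=1)         -- arbitrary example pmf

def pgf(pmf, s):
    """Probability generating function E[s^Z] of a pmf on 0, 1, 2, ..."""
    return sum(p * s ** k for k, p in enumerate(pmf))

# Pmf of X + Y by convolution -- valid precisely because X, Y are independent.
pxy = [0.0] * (len(px) + len(py) - 1)
for i, a in enumerate(px):
    for j, b in enumerate(py):
        pxy[i + j] += a * b

s = 0.5
lhs = pgf(pxy, s)               # E[s^(X+Y)]
rhs = pgf(px, s) * pgf(py, s)   # E[s^X] * E[s^Y]
```

Both sides come out to 0.42 here (up to floating-point rounding). In the inductive step, this identity applied to the j independent subtrees spawned by the X1 = j children is what turns E(s^Xn | X1 = j) into (Fn−1(s))^j.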