Probability branching process proof


by RVP91
Tags: branching, probability, process, proof
RVP91
#1
May7-12, 04:37 PM
P: 50
By conditioning on the value of X1, and then thinking of future generations as a particular
generation of the separate branching processes spawned by these children, show that Fn(s),
defined by Fn(s) = E(s^Xn), satisfies

Fn(s) = F(Fn−1(s)) ∀n ≥ 2.


I need to prove the above result. I have somewhat of an idea how to, but I can't get the end result.

Here is my working thus far.

Fn(s) = E(s^Xn) = E(s^(X1 + X2 + ... + Xn)) = E(s^(j + X2 + ... + Xn)) = s^j E(s^Xn-1)

Then E(s^Xn) = Σ_j E(s^Xn | X1 = j) P(X1 = j) = Σ_j s^j E(s^Xn-1) P(X1 = j)?

Is this anywhere near correct? Where am I going wrong?
chiro
#2
May7-12, 11:53 PM
P: 4,570
Hey RVP91.

For this process, can you assume only a Markovian property (1st-order conditional independence) or general independence (zero-order conditional independence, i.e. independence for every observation)?
RVP91
#3
May8-12, 01:56 PM
P: 50
Could you explain further? After reconsidering, I know for sure my original working was totally incorrect.

Could anyone help me out? Possibly start me off?

Thanks.

RVP91
#4
May8-12, 07:22 PM
P: 50

In particular, could someone explain what is meant by "By conditioning on the value of X1, and then thinking of future generations as a particular generation of the separate branching processes spawned by these children"? I think this is essentially the key, but I don't understand what it means.
chiro
#5
May8-12, 07:22 PM
P: 4,570
Quote by RVP91:
Could you explain further? After reconsidering, I know for sure my original working was totally incorrect.

Could anyone help me out? Possibly start me off?

Thanks.
1st-order conditional independence is what is known as the Markov property. What this means is that you have a distribution for P(A(n)|A(n-1)) (i.e. a distribution for the probability of getting a value of A(n) given a previous known realization A(n-1)), and it says that this probability depends only on A(n-1) and on no earlier realizations (like A(n-2), A(n-3) and so on).

Zero order or absolute independence means that P(A|B) = P(A) for all events A and B: in other words A does not depend on any other data and is completely independent.
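
To make the Markov property concrete, here is a minimal simulation sketch; the two-state chain and its transition probabilities are made up purely for illustration. The next value is generated from the current state alone.

Code:
import random

# Illustrative two-state chain: P(A(n)=1 | A(n-1)=0) = 0.3 and
# P(A(n)=1 | A(n-1)=1) = 0.8. The next value is drawn using only
# the current state -- that is the Markov property.
p_one_given = {0: 0.3, 1: 0.8}

state = 0
path = [state]
for _ in range(10):
    state = 1 if random.random() < p_one_given[state] else 0
    path.append(state)
print(path)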
RVP91
#6
May8-12, 07:25 PM
P: 50
So normally it would be zero order, as the offspring at each stage are independent of any other offspring around them in the same generation.

"By conditioning on the value of X1, and then thinking of future generations as a particular
generation of the separate branching processes spawned by these children" does this statement change it and make it first order?
chiro
#7
May8-12, 07:31 PM
P: 4,570
Quote by RVP91:
So normally it would be zero order, as the offspring at each stage are independent of any other offspring around them in the same generation.

"By conditioning on the value of X1, and then thinking of future generations as a particular
generation of the separate branching processes spawned by these children" does this statement change it and make it first order?
The very nature of conditioning will make it at least first order if you are conditioning on a previous value.

It seems that what you are saying is that the children create a new process, and this translates to a new distribution. This is exactly what Markovian systems do: a realization now determines the distribution for the next realization, and the actual distribution is given by the transition probabilities in your transition matrix for discrete systems. Non-discrete systems follow the same idea, but they use different formulations.
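
In branching-process terms: if Xn is the size of generation n, then Xn is a sum of Xn-1 independent offspring counts, so its distribution depends on the past only through Xn-1. Here is a minimal simulation sketch; the offspring distribution is my own choice, purely for illustration.

Code:
import random

def offspring():
    # Assumed offspring distribution, chosen only for illustration:
    # 0, 1 or 2 children with probabilities 0.25, 0.5, 0.25.
    return random.choices([0, 1, 2], weights=[0.25, 0.5, 0.25])[0]

def next_generation(z_prev):
    # Generation n is the sum of independent offspring counts of the
    # z_prev individuals in generation n-1 -- nothing earlier matters.
    return sum(offspring() for _ in range(z_prev))

z = 1  # a single ancestor, X0 = 1
sizes = [z]
for _ in range(10):
    z = next_generation(z)
    sizes.append(z)
print(sizes)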
RVP91
#8
May8-12, 08:14 PM
P: 50
Oh right. I'm really confused now. Is there any chance you could perhaps give me the first few lines of the proof and then some hints on how to continue please?
chiro
#9
May8-12, 09:09 PM
P: 4,570
Quote by RVP91:
Oh right. I'm really confused now. Is there any chance you could perhaps give me the first few lines of the proof and then some hints on how to continue please?
To prove anything you need assumptions that you will use.

To prove that zero-order conditional independence doesn't hold, it suffices to prove that P(A|B) ≠ P(A) as a general statement. To prove first order, it suffices to prove that P(A|B,C,D,E,...) = P(A|B), or more appropriately that P(A(n)|A(n-1),A(n-2),...,A(1)) = P(A(n)|A(n-1)).

With the P(A|B) we consider that A = A(n) and B = any combination of states before n. By showing the backward direction you can show the forward one as well.

For your example though, it is not this complex.

The way I would do this, under the assumption of independence between the X's, is to use an inductive argument. Prove it for n = 1 and 2, and then prove it for n > 2. You can use the fact that for independent X and Y, E[s^(X+Y)] = E[s^X * s^Y] = E[s^X]E[s^Y].
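
As a quick numerical sanity check of that factorization (the distribution on {0, 1, 2} below is arbitrary, picked just for illustration):

Code:
import random

def draw():
    # Arbitrary illustrative distribution on {0, 1, 2}.
    return random.choices([0, 1, 2], weights=[0.2, 0.5, 0.3])[0]

s, n = 0.7, 200000
xs = [draw() for _ in range(n)]
ys = [draw() for _ in range(n)]

# Compare E[s^(X+Y)] with E[s^X] * E[s^Y] for independent X, Y;
# the two estimates should agree up to Monte Carlo error.
lhs = sum(s ** (x + y) for x, y in zip(xs, ys)) / n
rhs = (sum(s ** x for x in xs) / n) * (sum(s ** y for y in ys) / n)
print(lhs, rhs)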

Hopefully this will help you.

So to get the specifics, you need to formulate your assumptions and then prove what you need to prove.

You can either prove something from assuming an underlying distribution (explicitly stating P(A|blah) for instance to define the distribution of A given blah) or you can use data as a model to estimate the distribution properties of the underlying process itself.
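
For completeness, here is one standard way to carry out the conditioning step the problem describes; this is a sketch of the usual argument, not necessarily the exact one the question intends. Given X1 = j, each of the j children starts an independent copy of the branching process, and generation n of the original process is generation n-1 of those copies. Writing Y1, ..., Yj for the independent generation-(n-1) sizes of the j sub-processes,

E(s^Xn | X1 = j) = E(s^(Y1 + ... + Yj)) = E(s^Y1)...E(s^Yj) = [Fn-1(s)]^j.

Conditioning on X1 (the partition theorem) then gives

Fn(s) = E(s^Xn) = Σ_j E(s^Xn | X1 = j) P(X1 = j) = Σ_j [Fn-1(s)]^j P(X1 = j) = F(Fn-1(s)),

since F(t) = E(t^X1) = Σ_j t^j P(X1 = j).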

