Probability branching process proof

Discussion Overview

The discussion revolves around proving a result related to a probability branching process, specifically focusing on the function Fn(s) defined as Fn(s) = E(s^Xn). Participants are exploring the implications of conditioning on the value of X1 and how it affects the future generations in the branching process. The scope includes mathematical reasoning and conceptual clarification regarding independence properties in the context of branching processes.

Discussion Character

  • Mathematical reasoning
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant attempts to derive the relationship Fn(s) = F(Fn−1(s)) but expresses uncertainty about their working and seeks assistance.
  • Another participant questions whether to assume first-order conditional independence (Markov property) or general independence for the process.
  • Clarifications are sought regarding the meaning of conditioning on X1 and its implications for the independence of future generations.
  • There is a discussion on whether the offspring in the branching process are independent of each other, with some suggesting that conditioning on X1 implies a first-order dependence.
  • A participant suggests using an inductive argument to prove the result under the assumption of independence between the random variables involved.
  • There are requests for hints or initial steps in the proof, indicating confusion about the approach to take.

Areas of Agreement / Disagreement

Participants express differing views on the nature of independence in the branching process, with some advocating for zero-order independence and others suggesting that conditioning introduces first-order dependence. The discussion remains unresolved regarding the correct assumptions to use in the proof.

Contextual Notes

Participants highlight the need for clear assumptions when proving properties of the branching process, but there are unresolved questions about the implications of conditioning and the nature of independence among the random variables.

RVP91
By conditioning on the value of X1, and then thinking of future generations as a particular
generation of the separate branching processes spawned by these children, show that Fn(s),
defined by Fn(s) = E(s^Xn), satisfies

Fn(s) = F(Fn−1(s)) ∀n ≥ 2.


I need to prove the above result and have somewhat of an idea how to but I can't get the end result.

Here is my working thus far.

Fn(s) = E(s^Xn) = E(s^(X1 + X2 + ... + Xn)) = E(s^(j + X2 + ... + Xn)) = s^j E(s^(Xn−1))

Then E(s^Xn) = Σ_j E(s^Xn | X1 = j) P(X1 = j) = Σ_j s^j E(s^(Xn−1)) P(X1 = j)?

Is this anywhere near correct? Where am I going wrong?
 
Hey RVP91.

For this process, can you assume only a Markovian property (first-order conditional independence), or general independence (zero-order conditional independence, i.e. independence for every observation)?
 
Could you explain further? After reconsidering, I know for sure my original working was totally incorrect.

Could anyone help me out? Possibly start me off?

Thanks.
 
In particular, could someone explain what it means by "By conditioning on the value of X1, and then thinking of future generations as a particular generation of the separate branching processes spawned by these children"? I think this is essentially the key, but I don't understand what it means.
 
RVP91 said:
Could you explain further? After reconsidering, I know for sure my original working was totally incorrect.

Could anyone help me out? Possibly start me off?

Thanks.

First-order conditional independence is what is known as the Markov property. What this means is that you have a distribution for P(A(n)|A(n-1)) (i.e. a distribution for the probability of getting a value of A(n) given a previous known realization A(n-1)), and this probability depends only on A(n-1) and on no earlier realizations (like A(n-2), A(n-3) and so on).

Zero-order or absolute independence means that P(A|B) = P(A) for all events A and B: in other words, A does not depend on any other data and is completely independent.
 
So normally it would be zero order, as the offspring at each stage are independent of any offspring around them in the same generation.

"By conditioning on the value of X1, and then thinking of future generations as a particular
generation of the separate branching processes spawned by these children" does this statement change it and make it first order?
 
RVP91 said:
So normally it would be zero order, as the offspring at each stage are independent of any offspring around them in the same generation.

"By conditioning on the value of X1, and then thinking of future generations as a particular
generation of the separate branching processes spawned by these children" does this statement change it and make it first order?

The very nature of conditioning will make it at least first order if you are conditioning on a previous value.

It seems that what you are saying is that the children create a new process and this translates to a new distribution. This is exactly what markovian systems do: a realization now will determine the distribution for the next realization and the actual distribution is determined by the transition probabilities in your transition matrix for discrete systems. Non-discrete systems follow the same idea, but they use different formulations.
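The transition-matrix idea for a discrete system can be sketched in a few lines. The two-state matrix below is a made-up example for illustration, not part of the thread's problem:

```python
# One step of a discrete Markov chain: the state distribution at time n
# depends only on the distribution at time n-1, via the transition matrix.
P = [[0.9, 0.1],   # hypothetical example: row i holds P(next = j | now = i)
     [0.4, 0.6]]

def step(dist):
    # new_dist[j] = sum_i dist[i] * P[i][j]
    return [sum(dist[i] * P[i][j] for i in range(len(dist)))
            for j in range(len(P[0]))]

dist = [1.0, 0.0]  # start in state 0 with certainty
for _ in range(3):
    dist = step(dist)
# after three steps, dist is approximately [0.825, 0.175]
```

Each application of `step` uses only the current distribution, which is exactly the first-order (Markov) dependence described above.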
 
Oh right. I'm really confused now. Is there any chance you could perhaps give me the first few lines of the proof and then some hints on how to continue please?
 
RVP91 said:
Oh right. I'm really confused now. Is there any chance you could perhaps give me the first few lines of the proof and then some hints on how to continue please?

To prove anything you need assumptions that you will use.

To prove that zero-order conditional independence doesn't hold, it suffices to prove that P(A|B) ≠ P(A) as a general statement. To prove first order, it suffices to prove that P(A|B,C,D,E,...) = P(A|B), or more appropriately that P(A(n)|A(n-1),A(n-2),...,A(1)) = P(A(n)|A(n-1)).

With the P(A|B) we consider that A = A(n) and B = any combination of states before n. By showing the backward direction you can show the forward one as well.

For your example though, it is not this complex.

The way I would do this under the assumption of independence between the X's is to use an inductive argument. Prove it for n = 1 and 2, and then prove it for n > 2. You can use the fact that if X and Y are independent, then E[s^(X+Y)] = E[s^X * s^Y] = E[s^X]E[s^Y].
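The factorization for independent X and Y is easy to sanity-check numerically. The two small distributions below are arbitrary choices for illustration:

```python
# Check that E[s^(X+Y)] = E[s^X] * E[s^Y] for independent X and Y.
px = {0: 0.5, 1: 0.5}            # X: one fair coin flip
py = {0: 0.25, 1: 0.5, 2: 0.25}  # Y: heads in two fair flips

s = 0.7
E_sX = sum(p * s**k for k, p in px.items())
E_sY = sum(p * s**k for k, p in py.items())
# Independence: P(X=i, Y=j) = P(X=i) * P(Y=j)
E_sXY = sum(pi * pj * s**(i + j)
            for i, pi in px.items() for j, pj in py.items())
# E_sXY equals E_sX * E_sY up to rounding
```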

Hopefully this will help you.
So to get the specifics, you need to formulate your assumptions and then prove what you need to prove.

You can either prove something from assuming an underlying distribution (explicitly stating P(A|blah) for instance to define the distribution of A given blah) or you can use data as a model to estimate the distribution properties of the underlying process itself.
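As a sanity check on the identity Fn(s) = F(Fn−1(s)), one can compute the exact distribution of Xn for a small offspring distribution (starting from a single ancestor) and compare E(s^Xn) against the iterated composition. The offspring probabilities below are an arbitrary choice for illustration:

```python
# Offspring distribution: P(0)=0.3, P(1)=0.4, P(2)=0.3 (arbitrary example).
offspring = [0.3, 0.4, 0.3]

def F(s):
    # Probability generating function of the offspring distribution.
    return sum(p * s**k for k, p in enumerate(offspring))

def convolve(a, b):
    # Distribution of the sum of two independent counts.
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def generation_dist(n):
    # Exact distribution of X_n, starting from a single ancestor (X_0 = 1).
    dist = [0.0, 1.0]
    for _ in range(n):
        new = {}
        conv = [1.0]  # m-fold convolution of offspring dist, starting at m = 0
        for m, pm in enumerate(dist):
            for k, q in enumerate(conv):
                new[k] = new.get(k, 0.0) + pm * q
            conv = convolve(conv, offspring)
        dist = [new.get(k, 0.0) for k in range(max(new) + 1)]
    return dist

s, n = 0.5, 3
lhs = sum(p * s**k for k, p in enumerate(generation_dist(n)))  # E(s^Xn) exactly
rhs = s
for _ in range(n):
    rhs = F(rhs)  # F(F(F(s))) = Fn(s) by the identity to be proved
# lhs and rhs agree up to rounding
```

The `generation_dist` computation never uses the identity itself, only conditioning on the previous generation's size, so the agreement of `lhs` and `rhs` is an independent numerical check.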
 
