Probability branching process proof

  • #1
RVP91
By conditioning on the value of X1, and then thinking of future generations as a particular
generation of the separate branching processes spawned by these children, show that Fn(s),
defined by Fn(s) = E(s^Xn), satisfies

Fn(s) = F(Fn−1(s)) ∀n ≥ 2.


I need to prove the above result and have somewhat of an idea how to but I can't get the end result.

Here is my working thus far.

Fn(s) = E(s^Xn) = E(s^(X1 + X2 + ... + Xn)) = E(s^(j + X2 + ... + Xn)) = s^j E(s^(Xn−1))

Then E(s^Xn | X1=j) = Σ_j E(s^Xn | X1=j)P(X1=j) = Σ_j s^j E(s^(Xn−1)) P(X1=j) ?

Is this anywhere near correct? Where am I going wrong?
 
  • #2
Hey RVP91.

For this process, can you assume only a Markovian property (first-order conditional independence) or general independence (zero-order conditional independence, i.e. independence for every observation)?
 
  • #3
Could you explain further? After reconsidering, I know for sure my original working was totally incorrect.

Could anyone help me out? Possibly start me off?

Thanks.
 
  • #4
In particular, could someone explain what it means when it says "By conditioning on the value of X1, and then thinking of future generations as a particular generation of the separate branching processes spawned by these children"? I think this is essentially the key, but I don't understand what it means.
 
  • #5
RVP91 said:
Could you explain further? After reconsidering, I know for sure my original working was totally incorrect.

Could anyone help me out? Possibly start me off?

Thanks.

First-order conditional independence is what is known as the Markov property. What this means is that you have a distribution for P(A(n)|A(n-1)) (i.e. a distribution for the probability of getting a value of A(n) given a previous known realization A(n-1)), and it says that this probability depends only on A(n-1) and on no other realizations before it (like A(n-2), A(n-3), and so on).

Zero-order or absolute independence means that P(A|B) = P(A) for all events A and B: in other words, A does not depend on any other data and is completely independent.
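As a concrete illustration of the Markov property described above, here is a minimal sketch (my own hypothetical example, not anything from this thread) of a two-state chain whose next value is drawn using only the current one:

```python
import random

# Hypothetical two-state Markov chain: the next state depends only on
# the current state, through the transition matrix
# P[i][j] = P(next state = j | current state = i).
P = [[0.9, 0.1],   # from state 0: stay with prob 0.9, move with prob 0.1
     [0.5, 0.5]]   # from state 1: each next state with prob 0.5

random.seed(2)  # fixed seed so the sketch is reproducible

def step(state):
    # Sample the next state using only the current state's row of P.
    return 0 if random.random() < P[state][0] else 1

path = [0]
for _ in range(10):
    path.append(step(path[-1]))
print(path)  # one realization; each entry was drawn from the one before it
```

Nothing earlier than the current state enters `step`, which is exactly the first-order conditional independence being described.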
 
  • #6
So normally would it be zero order, as the offspring at each stage are independent of any offspring around them in the same generation?

"By conditioning on the value of X1, and then thinking of future generations as a particular generation of the separate branching processes spawned by these children": does this statement change it and make it first order?
 
  • #7
RVP91 said:
So normally would it be zero order, as the offspring at each stage are independent of any offspring around them in the same generation?

"By conditioning on the value of X1, and then thinking of future generations as a particular generation of the separate branching processes spawned by these children": does this statement change it and make it first order?

The very nature of conditioning will make it at least first order if you are conditioning on a previous value.

It seems that what you are saying is that the children create a new process, and this translates to a new distribution. This is exactly what Markovian systems do: a realization now will determine the distribution for the next realization, and the actual distribution is determined by the transition probabilities in your transition matrix for discrete systems. Non-discrete systems follow the same idea, but they use different formulations.
 
  • #8
Oh right. I'm really confused now. Is there any chance you could perhaps give me the first few lines of the proof and then some hints on how to continue please?
 
  • #9
RVP91 said:
Oh right. I'm really confused now. Is there any chance you could perhaps give me the first few lines of the proof and then some hints on how to continue please?

To prove anything you need assumptions that you will use.

To prove that zero-order conditional independence doesn't hold, it suffices to prove that P(A|B) ≠ P(A) as a general statement. To prove first order, it suffices to prove that P(A|B,C,D,E,...) = P(A|B), or more appropriately that P(A(n)|A(n-1),A(n-2),...,A(1)) = P(A(n)|A(n-1)).

With the P(A|B) we consider that A = A(n) and B = any combination of states before n. By showing the backward direction you can show the forward one as well.

For your example though, it is not this complex.

The way I would do this, under the assumption of independence between the X's, is to use an inductive argument. Prove the result for n = 1 and 2, and then prove it for n > 2. You can use the fact that for independent X and Y, E[s^(X+Y)] = E[s^X * s^Y] = E[s^X]E[s^Y].

Hopefully this will help you.
So to get the specifics, you need to formulate your assumptions and then prove what you need to prove.

You can either prove something from assuming an underlying distribution (explicitly stating P(A|blah) for instance to define the distribution of A given blah) or you can use data as a model to estimate the distribution properties of the underlying process itself.
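As a numerical sanity check of the identity Fn(s) = F(Fn−1(s)), here is a small Monte Carlo sketch. The Binomial(2, 1/2) offspring distribution is my own assumption for the example, not anything specified in the thread:

```python
import random

random.seed(0)  # fixed seed for reproducibility

def offspring():
    # One individual's offspring count: Binomial(2, 0.5).
    return sum(random.random() < 0.5 for _ in range(2))

def generation_size(n):
    # Size X_n of generation n, starting from a single ancestor X_0 = 1.
    x = 1
    for _ in range(n):
        x = sum(offspring() for _ in range(x))
    return x

def F(s):
    # Exact offspring pgf: F(s) = E[s^X1] = (0.5 + 0.5*s)**2.
    return (0.5 + 0.5 * s) ** 2

def Fn_exact(s, n):
    # Iterate the composition: F_n(s) = F(F_{n-1}(s)), with F_1 = F.
    for _ in range(n):
        s = F(s)
    return s

def Fn_mc(s, n, trials=20000):
    # Direct Monte Carlo estimate of E[s^X_n].
    return sum(s ** generation_size(n) for _ in range(trials)) / trials

s, n = 0.7, 3
print(Fn_exact(s, n), Fn_mc(s, n))  # the two values should agree closely
```

The simulated E[s^Xn] matches the n-fold composition of F, which is what the result to be proved asserts.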
 

1. What is a probability branching process?

A probability branching process is a mathematical model used to describe the growth and spread of a population over time. It involves a sequence of random variables that represent the number of offspring produced by each individual in a population.
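For illustration, a minimal simulation sketch of such a process (the offspring distribution here, uniform on {0, 1, 2}, is an assumption chosen only for the example):

```python
import random

random.seed(1)  # fixed seed for reproducibility

def next_generation(size):
    # Total offspring of the current generation: a sum of independent
    # offspring counts, one per individual, each uniform on {0, 1, 2}.
    return sum(random.choice([0, 1, 2]) for _ in range(size))

sizes = [1]  # X_0 = 1: start from a single ancestor
for _ in range(6):
    sizes.append(next_generation(sizes[-1]))
print(sizes)  # generation sizes X_0, ..., X_6; once 0 is hit, it stays 0
```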

2. How does a probability branching process proof work?

A probability branching process proof involves using mathematical techniques and formulas to show the convergence of the process to a certain probability distribution. This involves analyzing the probabilities of different outcomes and their relationships to each other.

3. What is the importance of probability branching process proofs?

Probability branching process proofs are important because they provide a rigorous mathematical foundation for understanding the growth and spread of populations. They also have applications in various fields, such as biology, economics, and computer science.

4. What are some common assumptions made in probability branching process proofs?

Some common assumptions made in probability branching process proofs include the independence of offspring, constant probability of producing offspring, and finite mean and variance of the number of offspring produced.

5. Can probability branching process proofs be applied to real-world situations?

Yes, probability branching process proofs can be applied to real-world situations. For example, they can be used to model the spread of diseases in a population, the growth of a company, or the branching of a family tree.
