Summing X_i with Binomial Distribution: What is the Problem?

  • Thread starter: autobot.d
  • Tags: Class
What kind of problem is this?
X_i \textrm{ are iid with known mean and variance, } \mu \textrm{ and } \sigma^2 \textrm{ respectively.}
m \sim \textrm{Binomial}(n,p), \textrm{ where } n \textrm{ is known.}

S = \sum^{m}_{i=1} X_i

How do I work with this? This is what I have thought of.

S = \sum^{m}_{i=1} X_i = mX_1 \textrm{(since iid)}
so for the mean of S
\bar{S} = \bar{mX_i} = np \mu ?
or to find mean of S use expected value
E(S) = E(mX_i) = E(mX_1) \textrm{ (since iid)}
but then what?

Any help would be appreciated. I am guessing this kind of problem has a name?

Thanks.
 
I am missing the problem statement - what are you supposed to do? Calculate mean and variance of S?

As the X_i are independent, their sum is not the same as m times X_1 (that would require 100% correlation).
You can use the mean of the X_i in your expression for the mean of S. The result for the mean is right, but you have to fix the calculation steps; otherwise you run into problems with the variance.
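(A setup like this, where the number of terms in the sum is itself random, is usually called a random sum or compound distribution.) Written out with the tower rule, conditioning on m, the mean calculation is:

```latex
E(S) = E\big(E(S \mid m)\big)
     = E\!\Big(\sum_{i=1}^{m} E(X_i)\Big)
     = E(m\,\mu)
     = \mu\,E(m)
     = np\,\mu .
```

Conditioning on m is what lets you pull the random upper limit out of the sum; replacing the sum by mX_1 skips that step and gives the wrong variance.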
 
But what about the fact that the upper limit m of the summation is binomially distributed? I do not understand what that does. Thanks.
 
You can express this as "f0 probability that 0 X_i are added, f1 probability that 1 X_i is added, ...", calculate the variance in each of those cases, and combine them afterwards.
For the expectation value, this does not matter, you can take the expectation value of both separately as all X_i have the same distribution.
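Combining the per-case variances with the weights is exactly what the law of total variance does; assuming that identity, the calculation collapses to:

```latex
\mathrm{Var}(S)
  = E\big(\mathrm{Var}(S \mid m)\big) + \mathrm{Var}\big(E(S \mid m)\big)
  = E(m)\,\sigma^2 + \mathrm{Var}(m)\,\mu^2
  = np\,\sigma^2 + np(1-p)\,\mu^2 .
```

The second term is the extra spread coming from the randomness of m itself; it vanishes only when m is a constant.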
 
so for "f0 probability that 0 X_i are added", and assuming f_0 = {n \choose 0}p^0(1-p)^{n}=(1-p)^n

and so on. Then what do I do with all the f0, f1, ...?
Is the average like a weighted average?

I am pretty lost, so any help is greatly appreciated. Thanks.
 
It is like a weighted average, indeed, and it follows the same rules for expectation values and variances.
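A quick way to check the weighted-average reasoning is a Monte Carlo simulation. The sketch below picks X_i ~ Normal(mu, sigma^2) purely for concreteness (any iid distribution with that mean and variance would do), and all parameter values are made up for illustration; the sample mean and variance of S should land near np*mu and np*sigma^2 + np(1-p)*mu^2.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 0.3          # binomial parameters for m
mu, sigma = 2.0, 1.5    # mean and sd of each X_i
trials = 200_000

# Number of terms m for each trial: m ~ Binomial(n, p).
m = rng.binomial(n, p, size=trials)

# Draw n iid X values per trial, then keep only the first m of them;
# since the X_i are iid and independent of m, this realizes S = sum_{i=1}^m X_i.
X = rng.normal(mu, sigma, size=(trials, n))
mask = np.arange(n) < m[:, None]
S = (X * mask).sum(axis=1)

print(S.mean())   # should be close to n*p*mu = 12.0
print(S.var())    # should be close to n*p*sigma**2 + n*p*(1-p)*mu**2 = 30.3
```

Grouping the trials by the value of m reproduces the "f0, f1, ..." weighting by hand: each group has its own conditional mean m*mu and variance m*sigma^2, and averaging those with the binomial weights gives the same totals.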
 