#26

StoneTemplePython said:
> Or the flip side -- I centered and normalized so that you always have zero mean and 1 std deviation on the normal random variable. Note: the process of converting it to zero mean changes the interpretation of a and b a little bit, so I went back and edited my post -- a belated thought on my end.

Thanks a lot, I think I get what you mean. You're basically transforming the distribution so that you can show that the standard deviation always stays within the boundaries of ##a## and ##b##, if I understand correctly.

I've got one question. I've tried to test this summation using summation calculators from several websites, but they all give answers that deviate from 1. This is what I'm trying to calculate:

$$\lim_{n \rightarrow \infty} \sum_{k=a \cdot n}^{b\cdot n} \frac{n!}{k!\cdot (n-k)!} \cdot \frac{1}{2^n}$$

Here, ##a## and ##b## are fractions of ##n## placed equally far from the mean (##0.5n##), with ##b > a##. Shouldn't this limit give the answer ##1## again, regardless of how wide or narrow the range ##[a, b]## is?

So long as ##a## is less than the mean and ##b## is greater than the mean, and you scale them both by ##n##, then in principle the sum should tend to one. The series you are evaluating doesn't have a direct closed form, though, so I have no idea how it's being evaluated by whatever computer algebra systems you tried. Also, the form you are using doesn't have zero mean, which could create some challenges.
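As a quick numeric sanity check, the partial sums can be evaluated exactly with Python's integer arithmetic and watched as ##n## grows. A minimal sketch (the helper name and the choice ##a = 0.4##, ##b = 0.6## are mine, not from the thread):

```python
from math import comb

def binom_mass_in_range(n, a, b):
    """P(a*n <= K <= b*n) for K ~ Binomial(n, 1/2), computed exactly.

    This is the finite sum  sum_{k=an}^{bn} C(n, k) / 2^n  before taking
    the limit; Python's arbitrary-precision ints keep it exact.
    """
    lo, hi = round(a * n), round(b * n)
    return sum(comb(n, k) for k in range(lo, hi + 1)) / 2 ** n

# The mass inside [0.4n, 0.6n] creeps toward 1 as n grows,
# since the width of the window in standard deviations is ~0.1*sqrt(n)/0.5.
for n in (100, 1000, 10000):
    print(n, binom_mass_in_range(n, 0.4, 0.6))
```

Each sum is strictly less than 1 for finite ##n##; only the limit equals 1, which is likely why the website calculators (which truncate at some finite ##n##) report values deviating from 1.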

There are basically two ways to evaluate it: 1.) think more abstractly and learn your limit laws in probability, or 2.) if you code: run a bunch of simulation trials, where each one has you tossing a coin a very large number ##n## of times -- the fraction of trials whose head count lands in that range should be awfully close to one.
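The simulation route can be sketched in a few lines. This is an assumption-laden illustration (the function name, trial count, and the choice ##a = 0.45##, ##b = 0.55## are mine), not a prescribed implementation:

```python
import random

def fraction_in_range(n, a, b, trials=2000):
    """Estimate P(a*n <= heads <= b*n) for n fair coin tosses by simulation."""
    lo, hi = round(a * n), round(b * n)
    hits = 0
    for _ in range(trials):
        # One trial: toss n fair coins and count heads.
        heads = sum(random.getrandbits(1) for _ in range(n))
        if lo <= heads <= hi:
            hits += 1
    return hits / trials

# For large n the window [0.45n, 0.55n] is many standard deviations wide
# (sigma = 0.5*sqrt(n)), so nearly every trial should land inside it.
print(fraction_in_range(2000, 0.45, 0.55))
```

As ##n## increases, the estimated fraction should climb toward 1 for any fixed window straddling the mean, which is exactly the limit in question.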