Is the Mean of a Sum of Randomly Chosen Numbers Always 1?

suyver
I choose a random number ##p_1 \in [0,1)## and a subsequent series of (increasingly smaller) random numbers ##p_i \in [0, p_{i-1})##. Then I can calculate the sum ##\sum_{i=1}^\infty p_i##. Naturally, this sum is dependent on the random numbers chosen, so its particular result is not very insightful. However, it appears that its mean is rather surprising:
$$\left< \sum_{i=1}^\infty p_i \right> = 1$$
Does anybody know a proof as to why this is the case?
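
A minimal Monte Carlo sketch of this experiment in Python (not the original poster's code; the function name sample_sum, the cap on the number of terms, and the 10^5-trial count are just illustrative):

```python
import random

def sample_sum(max_terms=10_000):
    """One sample of p_1 + p_2 + ..., with p_1 ~ U[0,1) and p_i ~ U[0, p_{i-1})."""
    x = random.random()              # p_1, uniform on [0, 1)
    total = 0.0
    for _ in range(max_terms):
        total += x
        x = random.uniform(0.0, x)   # next term, uniform on [0, previous term)
        if x == 0.0:                 # floating-point underflow: every later term is 0
            break
    return total

trials = 100_000
print(sum(sample_sum() for _ in range(trials)) / trials)   # typically close to 1
```

Stopping once the current term underflows to exactly 0.0 is safe, since every later term would also be 0.0.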
 
##\left< p_i \right> = (1/2)^i##. The sum is then 1/2 + 1/4 + ... = 1.
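
Spelling that step out (a sketch, using only the setup in the original post): since ##p_i## is uniform on ##[0, p_{i-1})##, its conditional mean is half of ##p_{i-1}##, so

$$\left< p_i \mid p_{i-1} \right> = \frac{p_{i-1}}{2}
\quad\Longrightarrow\quad
\left< p_i \right> = \frac{1}{2}\left< p_{i-1} \right> = \left(\frac{1}{2}\right)^i,
\qquad
\sum_{i=1}^{\infty} \left< p_i \right> = \sum_{i=1}^{\infty} 2^{-i} = 1.$$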
 
yes, I thought about that one too. But I got confused when I did a bunch of numerical simulations. Sometimes, the sum got very large (>1000). So I worry about the case where the sum may actually diverge: theoretically this is clearly possible (e.g.: sum of 1/n). Do we need to prove that this is not an issue?
 
suyver said:
yes, I thought about that one too. But I got confused when I did a bunch of numerical simulations. Sometimes, the sum got very large (>1000). So I worry about the case where the sum may actually diverge: theoretically this is clearly possible (e.g.: sum of 1/n). Do we need to prove that this is not an issue?

I think you need to take a look at your numerical simulation.

The probability that at least 30 of the first 100 numbers will be < (1/2) * (their predecessor) is roughly 1 - C(100,30)*2^(-100) = 1 - [100!/(70!30!)]*2^(-100) ≈ 1 - 2.3*10^(-5), since each term has an independent 1/2 chance of being less than half its predecessor.

The rest of the numbers would then be smaller than 2^(-30), and you would need at least 10^12 of them to get to a thousand; after another 100 numbers, your p(i) would likely be reduced by at least another factor of 2^(-30), etc.
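
A quick check of that figure (a sketch, not from the thread): because ##p_i## is uniform on ##[0, p_{i-1})##, each term independently has probability 1/2 of being less than half its predecessor, so the number of such halvings among the first 100 terms is Binomial(100, 1/2) and the lower tail can be summed exactly:

```python
from math import comb

# Probability that fewer than 30 of the first 100 terms are less than
# half their predecessor: a Binomial(100, 1/2) lower tail.
p_few = sum(comb(100, k) for k in range(30)) / 2**100
print(p_few)   # on the order of 1e-5, consistent with the single-term estimate above
```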
 
This should be in the probability forum. I bet you'd get some more answers there.

If you want to prove this "from scratch", then you have to show that the series ##\sum p_i## represents a valid random variable, and that the partial sums converge to it in a sense sufficiently strong to guarantee that the limit of the means is the mean of the limit.

Perhaps there is a theorem in probability that says that if the limit of means exists then all the rest of that is automatically true, but I don't know about that.
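
For what it's worth, the standard tool here is the monotone convergence theorem (a sketch, assuming the setup above): the partial sums ##S_n = \sum_{i=1}^n p_i## are nonnegative and nondecreasing, so

$$\left< \sum_{i=1}^{\infty} p_i \right> = \lim_{n\to\infty} \left< S_n \right> = \lim_{n\to\infty} \sum_{i=1}^{n} \left(\frac{1}{2}\right)^i = 1,$$

and because this expectation is finite, ##\sum_{i=1}^\infty p_i## converges with probability 1.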
 
For this result, you must restrict the sequences ##\{p_i\}## to those for which ##\sum_{i=1}^\infty p_i## converges, because the inclusion of sequences for which the sum does not converge will clearly prevent the existence of an expected value (mean) of your distribution. There may also be restrictions upon the distribution of ##p_i## as a random variable in ##[0, p_{i-1})##, but it has been a long time since I looked at the pertinent theorems by Khinchin et al.
 
willem2 said:
suyver said:
yes, I thought about that one too. But I got confused when I did a bunch of numerical simulations. Sometimes, the sum got very large (>1000). So I worry about the case where the sum may actually diverge: theoretically this is clearly possible (e.g.: sum of 1/n). Do we need to prove that this is not an issue?
I think you need to take a look at your numerical simulation.
He probably needs to look at his random number generator.

I used perl to simulate this, using $x=rand($x) to generate the terms in the sequence. I too started seeing large sums appear after running for several hundred or more iterations. The problem is that perl apparently interprets rand(0) as meaning "The user is an idiot. He must have meant rand(1)."
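
To see how that produces the large sums (a Python sketch rather than perl; buggy_rand is a hypothetical stand-in for the rand(0)-acts-like-rand(1) behaviour described above): once the running value underflows to 0.0, such a generator restarts the sequence with a fresh draw from [0, 1), so the total keeps climbing instead of settling near 1.

```python
import random

def buggy_rand(upper):
    """Stand-in for a rand() that treats an upper bound of 0 as 1."""
    return random.uniform(0.0, upper if upper != 0.0 else 1.0)

total, x = 0.0, 1.0
for _ in range(1_000_000):
    x = buggy_rand(x)   # once x underflows to 0.0, this restarts the sequence
    total += x
print(total)            # grows with the iteration count rather than staying near 1
```

Treating an upper bound of exactly 0.0 as the end of the sequence (instead of restarting) removes the effect.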
 