Is the Mean of a Sum of Randomly Chosen Numbers Always 1?


Discussion Overview

The discussion concerns the mean of a sum of nested random numbers, each drawn uniformly from the interval [0, previous value), and in particular whether the expected value of the resulting infinite series equals 1. Participants explore theoretical implications, numerical simulations, and the conditions under which the sum converges.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant proposes that the mean of the sum of a series of random numbers, each drawn uniformly from [0, previous number), equals 1, and asks for a proof of this assertion.
  • Another participant observes that the term-by-term means form the geometric series 1/2 + 1/4 + ..., which sums to 1, but does not address the general case.
  • Concerns are raised about the potential for divergence in the sum based on numerical simulations, with one participant noting instances where the sum exceeded 1000.
  • A participant emphasizes the need to ensure that the sequences chosen for the random numbers converge, as non-converging sequences would invalidate the existence of a mean.
  • There is a suggestion that the discussion might be more appropriate for a probability forum to attract more insights.
  • Another participant mentions the importance of the random number generator used in simulations, indicating that it may affect the results observed.

Areas of Agreement / Disagreement

Participants express uncertainty regarding the convergence of the sum and the implications for the mean. There is no consensus on whether the mean is always 1, and multiple competing views regarding the conditions for convergence are present.

Contextual Notes

Participants note that the results depend on the choice of sequences and the behavior of the random number generator, which may introduce variability in numerical simulations. The discussion highlights the need for rigorous proof regarding convergence and the conditions under which the mean can be determined.

Who May Find This Useful

This discussion may be of interest to those studying probability theory, random processes, or mathematical analysis, particularly in the context of convergence and expected values.

suyver
I choose a random number ##p_1 \in [0,1)## and a subsequent series of (increasingly smaller) random numbers ##p_i \in [0, p_{i-1})##. Then I can calculate the sum ##\sum_{i=1}^\infty p_i##. Naturally, this sum is dependent on the random numbers chosen, so its particular result is not very insightful. However, it appears that its mean is rather surprising:
$$\left\langle \sum_{i=1}^\infty p_i \right\rangle = 1$$
Does anybody know a proof as to why this is the case?
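A quick Monte Carlo check of this claim (a minimal Python sketch added for illustration, not from the thread; the truncation tolerance `tol` and the sample count are arbitrary choices):

```python
import random

def sample_sum(tol=1e-12):
    """One realization of p_1 + p_2 + ..., with p_1 uniform on [0, 1)
    and each p_i uniform on [0, p_{i-1}); terms below tol are dropped."""
    total = 0.0
    p = random.random()
    while p > tol:
        total += p
        p = random.uniform(0.0, p)
    return total

random.seed(0)
n = 100_000
print(sum(sample_sum() for _ in range(n)) / n)  # empirically close to 1
```

Since each term is at most half its predecessor on average, the truncation at `tol` discards only a negligible tail of each realization.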
 
##\langle p_i \rangle = (1/2)^i##. Sum is then 1/2 + 1/4 + ... = 1.
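One way to make that heuristic precise (a standard iterated-expectation argument; this step is my addition, not from the thread): since ##p_i## is uniform on ##[0, p_{i-1})##,
$$\mathbb{E}[p_i \mid p_{i-1}] = \frac{p_{i-1}}{2} \quad\Rightarrow\quad \mathbb{E}[p_i] = 2^{-i}, \qquad \mathbb{E}\left[\sum_{i=1}^\infty p_i\right] = \sum_{i=1}^\infty 2^{-i} = 1,$$
where the exchange of sum and expectation is justified by monotone convergence, since all terms are nonnegative.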
 
Yes, I thought about that one too. But I got confused when I did a bunch of numerical simulations: sometimes the sum got very large (>1000). So I worry about the case where the sum may actually diverge; theoretically this is clearly possible (e.g. ##\sum 1/n##). Do we need to prove that this is not an issue?
 
suyver said:
Yes, I thought about that one too. But I got confused when I did a bunch of numerical simulations: sometimes the sum got very large (>1000). So I worry about the case where the sum may actually diverge; theoretically this is clearly possible (e.g. ##\sum 1/n##). Do we need to prove that this is not an issue?

I think you need to take a look at your numerical simulation.

The probability would be 1 - C(100,30)/2^100 = 1 - [100!/(70!30!)]/2^100 ≈ 1 - 2.3*10^(-5) that at least 30 of the first 100 numbers will be < (1/2) * (their predecessor).

The rest of the numbers would then be smaller than 2^(-30), and you would need at least 10^12 of them to get to a thousand; after another 100 numbers, your p(i) would likely be reduced by at least another factor of 2^(-30), etc.
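The combinatorial estimate above can be checked with a short script (a sketch added for illustration; Python's math.comb stands in for C(n,k)):

```python
from math import comb

# At each step, P(p_i < p_{i-1}/2) = 1/2 for a uniform draw on [0, p_{i-1}).
# Single-term estimate quoted above:
single = comb(100, 30) / 2**100
# Full binomial tail: probability that fewer than 30 of the
# first 100 draws halve their predecessor:
tail = sum(comb(100, k) for k in range(30)) / 2**100
print(single)  # about 2.3e-5
print(tail)    # about 1.6e-5, the same order of magnitude
```

So a run whose first 100 terms fail to shrink geometrically is vanishingly rare, which is why divergent-looking sums point at the simulation rather than the mathematics.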
 
This should be in the probability forum. I bet you'd get some more answers there.

If you want to prove this "from scratch", then you have to show that the series ##\sum p_i## represents a valid random variable, and that the partial sums converge to it in a sense sufficiently strong to guarantee that the limit of the means is the mean of the limit.

Perhaps there is a theorem in probability that says that if the limit of means exists then all the rest of that is automatically true, but I don't know about that.
 
For this result, you must restrict the sequences {p_i} to those for which ##\sum_{i=1}^\infty p_i## converges, because the inclusion of sequences for which the sum does not converge would clearly prevent the existence of an expected value (mean) of your distribution. There may also be restrictions upon the distribution of ##p_i## as a random variable in ##[0, p_{i-1})##, but it has been a long time since I looked at the pertinent theorems by Khinchin et al.
 
willem2 said:
suyver said:
Yes, I thought about that one too. But I got confused when I did a bunch of numerical simulations: sometimes the sum got very large (>1000). So I worry about the case where the sum may actually diverge; theoretically this is clearly possible (e.g. ##\sum 1/n##). Do we need to prove that this is not an issue?
I think you need to take a look at your numerical simulation.
He probably needs to look at his random number generator.

I used perl to simulate this, using $x=rand($x) to generate the terms in the sequence. I too started seeing large sums appear after running for several hundred or more iterations. The problem is that perl apparently interprets rand(0) as meaning "The user is an idiot. He must have meant rand(1)."
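For reference, the same simulation with an explicit guard against that failure mode (a hedged Python sketch I've added; the guard, not the language, is the point):

```python
import random

def sample_sum(max_terms=100_000):
    """Sum p_1 + p_2 + ... with each p_i uniform on [0, p_{i-1}).
    The explicit p == 0.0 check stops the loop once the terms
    underflow, instead of ever asking the generator for rand(0),
    which perl silently reinterprets as rand(1) and restarts the
    sequence near 1."""
    total = 0.0
    p = random.random()
    for _ in range(max_terms):
        if p == 0.0:
            break
        total += p
        p = random.uniform(0.0, p)
    return total
```

With the guard in place, individual sums stay modest and the runaway values (>1000) disappear, consistent with the diagnosis above.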
 
