Probability Histogram and Central Limit Theorem

Hi,


I have trouble understanding the convergence of the empirical histogram to the probability histogram, and the convergence of the empirical histogram to the normal curve.

It was written in my lecture notes that as the number of repetitions grows large, the empirical histogram converges to the probability histogram, and as the number of draws grows large, the probability histogram converges to the normal curve (Central Limit Theorem). It was also said that if the number of repetitions and the number of draws are both large, the empirical histogram converges to the normal curve.
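If I write out what I think the second statement means (this is my own rephrasing, not a quote from the notes, and it assumes the draws are independent and identically distributed with finite variance): if each draw has mean ##\mu## and standard deviation ##\sigma##, and ##S_n## is the sum of ##n## draws, then the Central Limit Theorem says

$$\frac{S_n - n\mu}{\sigma\sqrt{n}} \;\longrightarrow\; \mathcal{N}(0,1) \quad \text{in distribution, as } n \to \infty.$$

For a fair coin, ##\mu = 1/2## and ##\sigma = 1/2##, so the number of heads in ##n## tosses is approximately normal with mean ##n/2## and standard deviation ##\sqrt{n}/2##.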

Sounds OK so far, but I still have doubts:

1. Suppose I toss a fair coin 25 times and count the number of heads. As the number of repetitions grows large, does the empirical histogram converge to the probability histogram and then the probability histogram converge to the normal curve, or does the empirical histogram only converge to the probability histogram? Also, here the number of draws = 25 and the number of repetitions is x (where x keeps increasing), right? (I still sometimes confuse the terms 'draws' and 'repetitions'.)

2. Suppose I do another experiment similar to (1), but this time I toss the coin 100 times. Same question.

3. Suppose I do the same experiment again, but toss the coin 1000 times. Same question.



Please help enlighten me. Thank you!
 
Tossing a coin 25 times counts as a single experiment. You might then repeat that experiment 10,000 times, recording the number of heads each time and plotting a histogram of the results. As you repeat more often, the plot will start to look like a normal distribution centered around 12.5. It will be rough because the data points are discrete, of course, so you have to interpolate what the curve should look like between them.
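Here is a rough simulation of exactly that (a minimal sketch; the seed, the numpy/matplotlib choices, and the variable names are mine, not anything standard):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

n_draws = 25      # tosses per experiment
n_reps = 10_000   # how many times the experiment is repeated

# Each row is one experiment; summing across tosses gives the head count.
heads = rng.integers(0, 2, size=(n_reps, n_draws)).sum(axis=1)

# Empirical histogram of head counts, one bar per integer outcome.
plt.hist(heads, bins=np.arange(-0.5, n_draws + 1.5), density=True,
         alpha=0.5, label="empirical histogram")

# Normal curve suggested by the CLT: mean n/2, sd sqrt(n)/2 for a fair coin.
x = np.linspace(0, n_draws, 400)
mu, sigma = n_draws / 2, np.sqrt(n_draws) / 2
plt.plot(x, np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi)),
         label="normal approximation")
plt.legend()
plt.show()
```

With n_reps this large, the bars settle onto the Binomial(25, 1/2) probability histogram; the smooth curve sits close to it but not exactly on it, because 25 draws is only moderately large.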

For the 100 tosses, you would again measure the number of heads in a 100-toss experiment; you expect around 50. If you then perform the experiment a large number of times (say 10,000), you get a collection of head counts ranging from 0 to 100, and as you plot more points the histogram resembles the normal distribution, see the standardized comparison sketched below.

Etc.
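To make the "etc." concrete for 100 and 1000 tosses: if you standardize each head count (subtract n/2, divide by √n/2), the histograms for increasing n settle onto the standard normal curve, which is questions 2 and 3 in one picture. Again just a sketch under the same assumptions as above:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_reps = 10_000

for n_draws in (25, 100, 1000):
    heads = rng.integers(0, 2, size=(n_reps, n_draws)).sum(axis=1)
    # Standardize: subtract the mean n/2, divide by the sd sqrt(n)/2.
    z = (heads - n_draws / 2) / (np.sqrt(n_draws) / 2)
    plt.hist(z, bins=40, density=True, histtype="step",
             label=f"n = {n_draws}")

# Standard normal density for comparison.
x = np.linspace(-4, 4, 400)
plt.plot(x, np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi), "k--", label="N(0, 1)")
plt.legend()
plt.show()
```

The n = 25 curve is visibly coarse because the head count only takes 26 distinct values, while the n = 1000 curve hugs the dashed normal density, which is the CLT in the number of draws.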
 