Standard Deviation in terms of Probability Function

In summary, the conversation discusses a problem involving random samples drawn from a discrete distribution. The main difficulty stems from inconsistent notation, leading to confusion about the mean and variance of the distribution. It is clarified that ##\mu## is the mean of the distribution itself, defined without reference to the measured values. The conversation also touches on the role of limits in recovering the mean and variance from large-N sample averages.
falranger

Homework Statement

I've yet to learn LaTeX since I'm pretty good with Word's equation editor, so here's the question typed out in words.

Homework Equations

I really don't know what to do here.

The Attempt at a Solution

Have you tried starting by expanding out the brackets in the sum?

I did:

There's a 1/N outside the sum.

But I can't really see how to turn ##\mu^2/n## into ##P(x_j)##.

$$\frac{1}{N}\sum_{i=1}^N \mu^2 = ?$$
... though you seem to be changing between n and N in the sums.

Yeah, N is the number of trials and n is the number of possible outcomes from each trial. Also, since ##\mu## is itself a summation, I don't really know how to calculate ##\mu^2##.
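One way to see what the hint is driving at (a sketch of the expansion, using the limiting expression for ##\sigma^2## from the question): expanding the brackets splits the sum into three pieces,

$$\frac{1}{N}\sum_{i=1}^N (x_i - \mu)^2 = \frac{1}{N}\sum_{i=1}^N x_i^2 \;-\; 2\mu\,\frac{1}{N}\sum_{i=1}^N x_i \;+\; \frac{1}{N}\sum_{i=1}^N \mu^2,$$

and since each term ##\mu^2## is the same constant, summing it N times and dividing by N just returns it:

$$\frac{1}{N}\sum_{i=1}^N \mu^2 = \frac{1}{N}\,N\mu^2 = \mu^2.$$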

falranger said:
I did:

The expression
$$\mu = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^N x_i$$
is correct, but the expression
$$\mu = \lim_{N \to \infty} \sum_{j=1}^n x_j P(x_j)$$ is false. Do you see why?

falranger said:
Yeah, N is the number of trials and n is the number of possible outcomes from each trial. Also, since ##\mu## is itself a summation, I don't really know how to calculate ##\mu^2##.
So what is ##\mu## the mean of?

falranger said:

Homework Statement

I've yet to learn LaTeX since I'm pretty good with Word's equation editor, so here's the question typed out in words.

Homework Equations

I really don't know what to do here.

The Attempt at a Solution

A lot of your difficulties are, I suspect, a result of using bad notation, so first, let's change it to something sensible. You have a random sample ##X_1, X_2, \ldots, X_N## drawn independently from a distribution ##\{ v_j, P(v_j), \: j = 1,2,\ldots, n \}##. That is, the possible values of each ##X_i## are ##v_1, v_2, \ldots, v_n## with probabilities ##P(v_1), P(v_2), \ldots, P(v_n).## Now by definition, ##\mu = \sum_{j=1}^n P(v_j) v_j## and ##\sigma^2 = \sum_{j=1}^n P(v_j)(v_j - \mu)^2.## Note that there are no N's or limits, or anything like that, in these two expressions!
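The two definitions above can be sketched in a few lines; the fair six-sided die used below is a hypothetical example, not part of the original problem:

```python
# Sketch of the definitions mu = sum_j P(v_j) v_j and
# sigma^2 = sum_j P(v_j) (v_j - mu)^2, for a hypothetical fair die.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6  # P(v_j) = 1/6 for each face

# The mean of the distribution: a probability-weighted sum, no N, no limit.
mu = sum(p * v for v, p in zip(values, probs))

# The variance of the distribution: probability-weighted squared deviations.
var = sum(p * (v - mu) ** 2 for v, p in zip(values, probs))

print(mu)   # 3.5
print(var)  # ~2.9167 (exactly 35/12)
```

Both quantities come straight out of the probability function; no sample of size N is involved.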

You are being asked to show that ##\mu## and ##\sigma^2## are also given by the limiting expressions written in the question. (To be technical, these are "almost-sure" equalities, but never mind that for now.)

Simon Bridge said:
So what is ##\mu## the mean of?

μ is supposed to be the average value of measured x.

And also, the question is given to us like that, but I understand what you mean since I do believe that's what it should be. And I've been wondering about the N and limit myself since at the end there are no N values in the expression.

falranger said:
μ is supposed to be the average value of measured x.

And also, the question is given to us like that, but I understand what you mean since I do believe that's what it should be. And I've been wondering about the N and limit myself since at the end there are no N values in the expression.

No, ##\mu## is not supposed to be the average of the measured x; it is supposed to be the mean of the distribution---given by the formula in my previous post.

Yes, there are no N values and limits in the end, because what you are being asked to show is that the large-N limit equals something, and that 'something' is just a number, not a function of N. For example, ##\lim_{N \to \infty} 1 + (1/N) = 1,## and at that point the '1' does not have any N's in it, or any 'lim', or anything like that: it is just the number '1'.
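A quick numerical sketch of the example limit (the N values below are arbitrary illustrations):

```python
# Sketch: 1 + 1/N approaches the plain number 1 as N grows.
for N in (10, 1_000, 100_000):
    print(N, 1 + 1 / N)
# The limiting value has no N and no 'lim' left in it: it is just 1.
```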

Ray Vickson said:
No, ##\mu## is not supposed to be the average of the measured x; it is supposed to be the mean of the distribution---given by the formula in my previous post.

Yes, there are no N values and limits in the end, because what you are being asked to show is that the large-N limit equals something, and that 'something' is just a number, not a function of N. For example, ##\lim_{N \to \infty} 1 + (1/N) = 1,## and at that point the '1' does not have any N's in it, or any 'lim', or anything like that: it is just the number '1'.

Ok I realize this. But I'm still not sure how to move on. For one thing,

falranger said:
Ok I realize this. But I'm still not sure how to move on. For one thing,

The number ##\mu^2## is just the square of the number ##\mu##---no, I am not kidding! It is a number, and has nothing at all to do with ##X_1, X_2, \ldots, X_N##.

This is where I'm currently stuck:

If ##\mu## is just a number, then I don't see a way to move on.
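Putting the hints above together, the remaining step can be sketched as follows (using the notation from the earlier posts). In the large-N limit, each value ##v_j## occurs in a fraction ##P(v_j)## of the trials, so sample averages turn into probability-weighted sums:

$$\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^N x_i^2 = \sum_{j=1}^n P(v_j)\, v_j^2,$$

and combining this with the expanded square (where the middle term contributes ##-2\mu \cdot \mu## and the constant term contributes ##\mu^2##),

$$\sigma^2 = \sum_{j=1}^n P(v_j)\, v_j^2 - 2\mu^2 + \mu^2 = \sum_{j=1}^n P(v_j)\, v_j^2 - \mu^2 = \sum_{j=1}^n P(v_j)(v_j - \mu)^2.$$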

What is the definition of Standard Deviation in terms of Probability Function?

Standard Deviation in terms of Probability Function is a measure of how spread out the values of a probability function are from the mean or expected value. It is calculated by finding the square root of the variance, which is the average of the squared differences from the mean. In simpler terms, it tells us how much the values of a probability function vary from their average value.

How is Standard Deviation in terms of Probability Function calculated?

Standard Deviation in terms of Probability Function is calculated by finding the square root of the variance. The variance is the probability-weighted average of the squared differences between each value and the mean. The formula for variance is:
Variance = sum of P(x) × (x − mean)²
And the formula for Standard Deviation is:
Standard Deviation = square root of Variance
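The two formulas can be sketched directly; the biased two-value distribution below is a hypothetical example:

```python
import math

# Sketch: standard deviation of a discrete probability function,
# using a hypothetical biased coin (0 with prob. 0.25, 1 with prob. 0.75).
values = [0, 1]
probs = [0.25, 0.75]

mean = sum(p * v for v, p in zip(values, probs))
# Variance = sum of P(x) * (x - mean)^2, weighted by the probabilities.
variance = sum(p * (v - mean) ** 2 for v, p in zip(values, probs))
# Standard Deviation = square root of Variance.
sd = math.sqrt(variance)

print(mean, variance, sd)  # 0.75, 0.1875, ~0.4330
```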

What does Standard Deviation in terms of Probability Function tell us about a data set?

Standard Deviation in terms of Probability Function tells us about the spread or variability of a data set. A higher standard deviation indicates that the values in the data set are more spread out, while a lower standard deviation indicates that the values are closer to the mean. It helps us understand the distribution of the data and can be used to make predictions about the likelihood of certain values occurring.

How is Standard Deviation in terms of Probability Function related to the Normal Distribution?

Standard Deviation in terms of Probability Function is closely related to the Normal Distribution, also known as the Gaussian Distribution. In a Normal Distribution, approximately 68% of the data falls within one standard deviation of the mean, 95% falls within two standard deviations, and 99.7% falls within three standard deviations. This is known as the 68-95-99.7 rule. The Normal Distribution is often used to model real-world data, and Standard Deviation is a key factor in understanding this distribution.
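The rule can be checked numerically, since the fraction of a Normal distribution lying within k standard deviations of the mean is erf(k/√2):

```python
import math

# Sketch: verify the 68-95-99.7 rule for the Normal distribution.
# P(|X - mu| < k*sigma) = erf(k / sqrt(2)) for a Normal distribution.
for k in (1, 2, 3):
    frac = math.erf(k / math.sqrt(2))
    print(k, round(frac, 4))  # 0.6827, 0.9545, 0.9973
```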

How is Standard Deviation in terms of Probability Function used in hypothesis testing?

Standard Deviation in terms of Probability Function is used in hypothesis testing to assess the likelihood that a certain result could have occurred by chance. In hypothesis testing, we compare a sample statistic to a known population value and use the standard deviation to measure how many standard errors separate the two. If the sample statistic lies many standard deviations from the population value, the result is unlikely to have arisen by chance, and we may reject the null hypothesis. This helps us determine the validity of our hypotheses.
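A minimal sketch of this idea as a one-sample z-test; the population and sample numbers below are hypothetical:

```python
import math

# Sketch: one-sample z-test with an assumed known population mean and
# standard deviation (hypothetical values, for illustration only).
pop_mean, pop_sd = 100.0, 15.0   # assumed population parameters
sample_mean, n = 106.0, 36       # hypothetical sample summary

# z counts how many standard errors the sample mean lies from the mean.
z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))

# Two-sided p-value from the Normal distribution via erf.
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(z, p)  # z = 2.4, p ~ 0.016
```

A large |z| (small p) suggests the sample is unlikely to have come from the assumed population by chance alone.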
