- #1

Can someone please show me why Var(X) = E[X^2] - (E[X])^2?

I just don't get it. Thanks in advance.


- Thread starter vptran84

In summary, the variance of a random variable X is equal to the expected value of X^2 minus the square of the expected value of X. This follows from the properties of expectation values and integrals.


- #2


Var(X)=E([X-E(X)]^2)

- #3


- #4

Staff Emeritus

Science Advisor

Gold Member


vptran84 said:

[tex](x-<x>)^2=x^2-2x<x>+<x>^2[/tex]

[tex]\sigma^2=<(x-<x>)^2>=<x^2-2x<x>+<x>^2>=<x^2>-2<x><x>+<x>^2[/tex]

[tex]\sigma^2=<x^2>-<x>^2[/tex]

This just follows from the properties of expectation values, which follow from the properties of integrals:

[tex]<x>=\frac{\int xf(x)dx}{\int f(x)dx}=\int xP(x)dx[/tex]
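As a numerical sanity check (my own illustration, not from the thread), the two ways of writing <x> above agree: dividing by [tex]\int f(x)dx[/tex] is the same as first normalizing f into a density P. The Gaussian-shaped f centered at 1.0 is a hypothetical choice.

```python
# Check that <x> = ∫ x f(x) dx / ∫ f(x) dx equals ∫ x P(x) dx,
# where P(x) = f(x) / ∫ f(x) dx is the normalized density.
import numpy as np

x = np.linspace(-10.0, 12.0, 200001)
dx = x[1] - x[0]
f = 3.0 * np.exp(-0.5 * (x - 1.0) ** 2)   # unnormalized distribution; mean is 1.0

norm = f.sum() * dx                        # ∫ f(x) dx (rectangle rule)
mean_from_f = (x * f).sum() * dx / norm    # <x> via the ratio of integrals
P = f / norm                               # normalized probability density
mean_from_P = (x * P).sum() * dx           # <x> = ∫ x P(x) dx

print(mean_from_f, mean_from_P)            # both ≈ 1.0
```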

- #5

- 520

- 0

SpaceTiger said:
[tex]<x>=\frac{\int xf(x)dx}{\int f(x)dx}=\int xP(x)dx[/tex]

So <x> = E(x). The denominator there, [tex]\int f(x)dx[/tex], always equals 1 because f is a probability density function. The definition of an expected value is just the numerator of that fraction.

- #6


BicycleTree said:
So <x> = E(x). The denominator there, [tex]\int f(x)dx[/tex], always equals 1 because f is a probability density function. The definition of an expected value is just the numerator of that fraction.

I'm not defining f(x) to be the probability density, just some distribution function. P(x) is the probability density. Sorry for not making that clear. But yes, <x>=E(x).

- #7

- 520

- 0

What's the difference between a distribution function and a density function? In my course they were used synonymously, except a distribution function could also be a probability mass function.

- #8


BicycleTree said:
So then what is <x>?

You were right, it's the expectation value.

What's the difference between a distribution function and a density function?

I suppose I was using the wrong terminology. What my advisors sometimes call simply a "distribution function" is sometimes actually a "frequency distribution". This is what I meant by f(x). The idea is that the integral over its domain is not equal to one, but is instead equal to the number of objects in the sample (for example). The "distribution function" is actually something entirely different; that is, the cumulative probability of the value being less than x. Check Mathworld for the definitions of these things if you want more precision. If you don't want to bother (I wouldn't blame you), then disregard the middle part of my last equation (with f(x)) and just consider the part with P(x), the probability density.

- #9

- 520

- 0

It's just a different notation. In the course I took, P(...) means the probability of the stuff in the parentheses (which would be a logical formula). So you might say P(X=x). Also, distribution functions were denoted by capital letters and density/mass functions were denoted by the corresponding lowercase letters, so even if P didn't mean "the probability of," it would be a distribution function, not a density function.

- #10


vptran84 said:

Can someone please show me why Var(X) = E[X^2] - (E[X])^2?

I just don't get it. Thanks in advance.

For a random variable X, the variance is defined as: Var(X) = E[(X-E[X])^2].

Thus, Var(X) = E[X^2 - 2XE[X] + (E[X])^2]. Remember that the expected value of a constant, say a, is that constant: E[a] = a. And since E[X] is itself a constant, E[E[X]] = E[X].

Then, we have that:

Var(X) = E[X^2] - E[2XE[X]] + E[(E[X])^2] = E[X^2] - 2E[X]E[X] + (E[X])^2

Var(X) = E[X^2] - 2(E[X])^2 + (E[X])^2 = E[X^2] - (E[X])^2

Var(X) = E[X^2] - (E[X])^2, which is what you are trying to prove.
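One can also confirm the identity numerically (my own check, not part of the proof): on simulated data, the definitional formula and the identity give the same number up to floating-point error.

```python
# Compare Var(X) = E[(X - E[X])^2] against E[X^2] - (E[X])^2
# on a sample of standard normal draws.
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

n = len(xs)
mean = sum(xs) / n
var_def = sum((v - mean) ** 2 for v in xs) / n          # E[(X - E[X])^2]
var_identity = sum(v * v for v in xs) / n - mean ** 2   # E[X^2] - (E[X])^2

print(var_def, var_identity)  # identical up to rounding
```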

"Var(x)" represents the variance of a random variable "x". It measures the spread or variability of the data points around the mean. "E[x^2]" represents the expected value of "x" squared, while "(E[X])^2" represents the square of the expected value of "x". Substracting the square of the mean from the expected value of the squared variable gives us the variance.

Calculating the variance allows us to quantify the variability of the data and understand how spread out the data points are from the mean. It also helps in comparing different datasets and identifying patterns or trends in the data.

The standard deviation is the square root of the variance. It measures the average distance of the data points from the mean. A higher variance indicates a larger spread of data points, resulting in a higher standard deviation.
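A small worked example of that relationship (the data values are chosen purely for illustration): compute the variance via E[X^2] - (E[X])^2, then take its square root for the standard deviation.

```python
import math

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)

mean = sum(data) / n                             # E[X] = 5.0
var = sum(x * x for x in data) / n - mean ** 2   # E[X^2] - (E[X])^2 = 29.0 - 25.0 = 4.0
std = math.sqrt(var)                             # standard deviation = 2.0

print(mean, var, std)
```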

No, the variance cannot be negative. It is always a non-negative value, as it is an average of squared differences between the data points and the mean, and squares are never negative.

The variance is a critical concept in statistics and is used in various fields of research, such as economics, biology, psychology, and physics. It is used to analyze data, make predictions and in hypothesis testing. It also helps in understanding the accuracy and reliability of data and identifying outliers or unusual data points.
