Calculating Variance of Independent Random Variables: A Simple Guide

  • Thread starter ericm1234
  • Tags
    Stats
In summary, the thread discusses the formula var(X) = E(X^2) - E(X)^2 and whether it can be applied repeatedly to calculate var(X^2) without resorting to integration. The meaning of "independent" variables is also raised, with a request to explain why E(X^2) is not equal to E(X)*E(X) in this context. The responders suggest writing out the math and revisiting the definition of independence to better understand the concept.
  • #1
ericm1234
1. I know var(X) = E(X^2) - E(X)^2; is there a way to apply this repeatedly to obtain var(X^2)? Or, in general, how can I calculate it without resorting to integration?

2. We typically deal with "i.i.d. random variables X_i" and do things like find var(X) given E(X^2), etc. It never occurred to me until now, but if the X's are "independent", then why is E(X^2) not equal to E(X)*E(X)? (The answer I'm awaiting is probably obvious, but I can't figure it out right now.)
 
  • #2
1. Put Y = X^2 and find var(Y).
2. Do the math and see.
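As a sketch of suggestion 1, the identity var(Y) = E[Y^2] - E[Y]^2 with Y = X^2 can be checked numerically. The Uniform(0, 1) distribution is an arbitrary choice for illustration, since its moments E[X^k] = 1/(k+1) are known in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1_000_000)  # X ~ Uniform(0, 1)

# var(X^2) = E[Y^2] - E[Y]^2 = E[X^4] - (E[X^2])^2
var_y = np.mean(x**4) - np.mean(x**2) ** 2

# Exact value: E[X^k] = 1/(k+1), so var(X^2) = 1/5 - (1/3)^2 = 4/45
print(var_y, 4 / 45)
```

Note this only sidesteps integration by estimating the moments with samples; for an exact answer E[X^4] still has to come from somewhere.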
 
  • #3
Doesn't help, because I then need to find E(X^4); I'm dealing with a continuous distribution, hence my question about trying to avoid integrating.

"Do the math" doesn't help my understanding. The term 'independent' is used in regard to a series of random variables X from the same distribution, and yet the definition of independence doesn't seem to apply here, since E(X^2) = E(X)*E(X) does not hold.
 
  • #4
I can make up an example to show it's obviously not true; I'm asking for an explanation of the use of the word independent in this context.
 
  • #5
Doesn't help because I then need to find E(x^4); I'm dealing with a continuous function, hence my question about trying to avoid integrating.
I suspect you have misunderstood - you asked a question - I pointed you to the path where you are most likely to be able to find the answer. This objection/protest suggests to me that you found the answer - thus: it did help ;)

But it is possible that I misunderstood - perhaps you could rephrase your question?

i'm asking for an explanation of the use of the word independent in this context.
I'm afraid I can only answer the questions you write down. You wrote:
if the X's are "independent" then why is E(x^2) not equal to E(x)*E(X)?
... and that was the question I answered.
You are correct that a specific example will not suffice - have to work harder than that.
Try rewriting the question as a mathematical statement you have to prove/disprove, i.e. ##E[X^2] = (E[X])^2## ... but expand it to the definitions.

Now to your new question:
The word "independent" is a label for a set of mathematical properties that you can best understand by doing the math. Turn the thought around: what is it about the mathematical property of "independent", in this context, that leads you to think E(x^2) should have that form?
 
  • #6
It is by the definition of "independence" in statistics that E(X*Y) = E(X)*E(Y). If two X's are independent, shouldn't E(X^2) = E(X)*E(X) follow from this?
A bunch of i.i.d. X's from, say, a normal distribution are independent of each other and yet do not fit the above definition of independence. Please explain/point out where my simplistic reasoning has failed.
It was a while ago that I dealt with my stats material; I am looking for a straightforward explanation of why these two uses of the word "independent" do not mesh; I am not looking for an exercise.
 
  • #7
The reason E(X^2) = E(X)*E(X) does not work is that X here refers not to two different random variables, identically distributed but independent, but to a single random variable.
You don't need to carry out the integrations to see mathematically why the two expressions E(X^2) and E(X)*E(X) are not the same, just do this:

Step 1: Write out the integral that gives E(X^2).

Step 2: Write out the product of the two integrals that give E(X) * E(X).

Then think about why the two expressions are not the same.
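Spelling out the two steps for a continuous X with density f(x):

##E[X^2] = \int_{-\infty}^{\infty} x^2 \, f(x) \, dx##

##E[X] \, E[X] = \left( \int_{-\infty}^{\infty} x \, f(x) \, dx \right)^2##

The first is a single integral against one density; the second is the square of a different integral. By contrast, the product rule E(XY) = E(X)E(Y) for independent X and Y comes from a joint density that factors, ##f(x,y) = f_X(x) f_Y(y)##; setting Y = X concentrates all the probability mass on the line y = x, so the joint density no longer factors, and the product rule does not apply.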
 
  • #8
Thanks statdad - that's pretty much what I've been trying to get ericm1234 to do ;)
Spelling it out is the next step.

@ericm1234:
Since you resist writing down any actual math... try thinking about it this way:
if you have two random variables Y and Z, but both of them depend on a third random variable X, then are Y and Z independent of each other? i.e., is X independent of itself?

But seriously, you must get used to thinking in terms of the actual math.
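The "is X independent of itself?" question can also be checked numerically: for genuinely independent variables, E(XZ) - E(X)E(Z) is near zero, but the same quantity computed for X against itself is var(X), which is not zero. A minimal sketch (N(0, 1) is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=500_000)  # X ~ N(0, 1)
z = rng.normal(0.0, 1.0, size=500_000)  # Z drawn independently of X

# E(XZ) - E(X)E(Z): ~0 because X and Z are independent
cov_xz = np.mean(x * z) - np.mean(x) * np.mean(z)

# E(X*X) - E(X)E(X): equals var(X) = 1, so X is NOT independent of itself
cov_xx = np.mean(x * x) - np.mean(x) * np.mean(x)

print(cov_xz, cov_xx)
```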
 

FAQ: Calculating Variance of Independent Random Variables: A Simple Guide

What are the 2 simple stats questions?

The 2 simple stats questions refer to two basic statistical quantities that are commonly encountered in research and data analysis: the mean and the standard deviation of a data set.

What is the mean in statistics?

The mean, also known as the average, is a measure of central tendency in statistics. It is calculated by adding up all the values in a data set and dividing it by the total number of values.

What is the standard deviation in statistics?

The standard deviation is a measure of variability in a data set. It tells us how spread out the data is from the mean. A smaller standard deviation indicates that the data points are closer to the mean, while a larger standard deviation indicates a wider spread of data.

How do you calculate the mean?

To calculate the mean, add up all the values in the data set and divide it by the total number of values. For example, if the data set is 3, 5, 7, the mean would be (3+5+7)/3 = 5.
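The same calculation in a few lines of Python:

```python
data = [3, 5, 7]
mean = sum(data) / len(data)  # (3 + 5 + 7) / 3
print(mean)  # 5.0
```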

How do you calculate the standard deviation?

To calculate the standard deviation, first find the mean of the data set. Then, subtract the mean from each data point and square the result. Next, add up all the squared values and divide it by the total number of values. Finally, take the square root of this value to get the standard deviation. It can also be calculated using statistical software or a calculator.
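The recipe above, sketched in Python. Note that dividing by the number of values n, as described, gives the population standard deviation; many statistical packages instead default to dividing by n - 1 (the sample standard deviation), so results may differ slightly:

```python
import math

data = [3, 5, 7]
mean = sum(data) / len(data)                # 5.0
sq_devs = [(x - mean) ** 2 for x in data]   # squared deviations: [4.0, 0.0, 4.0]
std = math.sqrt(sum(sq_devs) / len(data))   # population SD: sqrt(8/3)
print(round(std, 3))
```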
