# Calculation of total standard deviation over samples of different size

1. Oct 31, 2008

### Simon666

Hello,

The standard deviation is calculated as:

$$s = \sqrt{\frac{\sum (x - \bar x)^2}{n}}$$

(originally linked as an image at http://www.mathsrevision.net/gcse/sdeviation2.gif, now broken)

Now the problem I have is: how do you calculate the standard deviation (more accurately?) over both samples if you have two samples of different sizes, n1 and n2, in which the means µ1 and µ2 can differ, but the distribution otherwise remains the same?

Does it make mathematical sense to calculate an "overall" standard deviation using both samples, supposing that more data means better accuracy? And how would this overall standard deviation then be calculated?

Anyway, Wikipedia seems to suggest that the variances can be pooled as ( (n1-1)s²(X1) + (n2-1)s²(X2) + (n3-1)s²(X3) ) / (n1 + n2 + n3 - 3) ?

2. Oct 31, 2008

Are you asking about the notion of "pooling" standard deviations? This is the idea that comes up in the following setting:

1. You have two or more different samples, from populations for which the means may not all be the same.
2. You are willing to believe, or have evidence from somewhere, that the different populations have the same standard deviation.

In other words, there may be differences in location among the populations but the variability is essentially the same.

If you perform a classical (normal distribution based) inference, it turns out that combining the individual data to calculate a single measure of variability results in tests and intervals that are preferable over those using the individual standard deviations or variances. The process is known as "pooling".

For two samples, the "pooled" variance is

$$s^2_p = \frac{(n_1-1)s_1^2 + (n_2 -1) s_2^2}{n_1 + n_2 - 2}$$

Note that the two-sample confidence interval for the difference of two means in this case is

$$\left(\overline x_1 -\overline x_2 \right) \pm t_{\frac{\alpha}2} \sqrt{\,s_p^2 \left(\frac 1 {n_1} + \frac 1 {n_2}\right)}$$

Also as you note - similar formulae exist for more than two samples.
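The two-sample pooled variance above is easy to sanity-check numerically. This is a minimal sketch with made-up illustrative data (the sample values and sizes are my own, not from the thread), using only Python's standard library:

```python
import statistics

# Two hypothetical samples of different sizes (illustrative data)
x1 = [4.1, 5.0, 6.2, 5.5, 4.8]
x2 = [7.3, 8.1, 6.9, 7.7, 8.4, 7.0, 7.8]

n1, n2 = len(x1), len(x2)
s1_sq = statistics.variance(x1)  # sample variance, divisor n - 1
s2_sq = statistics.variance(x2)

# Pooled variance: a weighted average of the two sample variances,
# weighted by their degrees of freedom (n_i - 1)
sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)

print(s1_sq, s2_sq, sp_sq)
```

Because it is a weighted average, the pooled variance always lands between the two individual sample variances.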

3. Nov 1, 2008

### ssd

$$s^2_p = \frac{n_1 s_1^2 + n_2 s_2^2 + n_1(\mu_1-\mu)^2 + n_2(\mu_2-\mu)^2}{n_1 + n_2}$$
where $\mu_1$ and $\mu_2$ are the respective sample means and $\mu$ is the pooled mean.

The previous post is not generally true. It is the particular case for testing the difference of means of two univariate normal distributions, when the two samples are independent and the population standard deviations are unknown but assumed to be equal. The test statistic is Fisher's t.
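The divisor-n formula quoted above can be checked numerically: with each $s_i^2$ computed with divisor $n_i$, it reproduces exactly the (population-style) variance of the two groups merged into one. A minimal sketch, with illustrative data of my own choosing:

```python
import statistics

# Illustrative data for two groups (not from the thread)
x1 = [2.0, 3.0, 5.0, 4.0]
x2 = [8.0, 9.0, 7.0, 10.0, 6.0]
n1, n2 = len(x1), len(x2)

mu1, mu2 = statistics.fmean(x1), statistics.fmean(x2)
mu = statistics.fmean(x1 + x2)        # pooled (grand) mean

# s_i^2 here uses the divisor n_i, as in the formula above
s1_sq = statistics.pvariance(x1)
s2_sq = statistics.pvariance(x2)

lhs = (n1 * s1_sq + n2 * s2_sq
       + n1 * (mu1 - mu) ** 2 + n2 * (mu2 - mu) ** 2) / (n1 + n2)
rhs = statistics.pvariance(x1 + x2)   # treat all the data as one group

print(lhs, rhs)                       # the two values agree
```

The extra $n_i(\mu_i - \mu)^2$ terms account for the spread of the group means around the grand mean, which is why they appear here but not in the degrees-of-freedom-weighted formula used for the t-test.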

4. Nov 1, 2008

The comment about my CI is correct, and the particular instance to which it applies is the only situation where two scalar standard deviations are pooled with that formula (namely, comparing two means with the standard deviations assumed equal). But this

$$s^2_p = \frac{n_1 s_1^2 + n_2 s_2^2 + n_1(\mu_1-\mu)^2 + n_2(\mu_2-\mu)^2}{n_1 + n_2}$$

???

What is it?

5. Nov 5, 2008

### ssd

This is nothing but the square of the pooled s.d. of two groups of observations, where, for the i-th group (i = 1, 2):
the mean is $\mu_i$,
the s.d. is $s_i$,
the number of observations is $n_i$,
and $\mu$ is the pooled mean.

I don't understand why the question of a "situation" arose in which the samples are independent.

NO, NOT IN GENERAL. Consider the case when the two population means are known to be "not equal".

6. Nov 7, 2008

The standard deviations being equal was the situation being discussed.
I still don't know where your formula comes from: why the need for the squared differences of the means?

7. Nov 8, 2008

### ssd

Are you a person of statistics?
The unequal population s.d. case: is that what you were talking about for the t-test statistic and/or CI? Then you are wrong. The formula you gave holds only under the assumption of EQUAL population s.d.'s.

Simply, from the definition of standard deviation. It is worked out in many textbooks of descriptive statistics at the high-school level.

8. Nov 8, 2008

I have a feeling I know far more statistics than you, but I could be wrong. I would be more than willing to compare backgrounds, but that is not what this forum is for. I still say your formula is not the formula for pooled variance in the t-test/confidence interval being discussed. Nor is it the formula for the variance of a single sample. Where did you find it? Reference?

9. Nov 9, 2008

### ssd

I am sorry if my words hurt you. I simply thought you were from another subject.
The reference you are looking for: any standard textbook at the 10+2 level.

I NEVER said the expression I gave has anything to do with the t-test. Please don't superimpose your wrong thoughts over my words.

I don't understand where the t statistic is coming from. Please have a look at the OP's question (and the thread title as well). This is a situation of descriptive statistics (as I mentioned earlier). The basic question was about the expression for the pooled s.d. of two groups of observations. In search of its answer, he found and posted a link about the t statistic, which is quite irrelevant to the answer he is looking for.

Where I got the expression: I knew it and worked it out for the first time 25 years ago, and I have been regularly proving and teaching it ever since. (And no one has disproved it up to 9th Nov, 2008.)

If you cannot work it out from the definition of the s.d. (or s.d.²), then for your information I am attaching the elementary calculations worked out in a standard text. I hope you will be able to understand them.

Please note: the symbols for the means and the pooled s.d. in the print are different from what I wrote (they used x_bar where I used mu; for the pooled s.d. they wrote s, I used s_p). But I think you will understand that this makes no difference to the mathematics.

PS:
About the non-equality of the population s.d.'s:
You claimed that you were talking about the case of unequal s.d.'s!
Have you heard of the "Fisher-Behrens problem"? Under what condition is it applicable? Does your CI expression hold then?

10. Aug 12, 2009

### mcdevoy

Hi ssd,

I have been searching various forums, posts and references trying to find a way of combining standard deviations. I think the method you quote is what I'm looking for, so could you please tell me who the author of the book (Fundamentals of Statistics) you quoted earlier is?

Thanks

11. Aug 16, 2009

### ssd

The authors are A.M. Goon, M.K. Gupta, and B. Dasgupta (all from India).
Publisher: The World Press Private Ltd.
Info: More than 7 editions were published before 1990.

12. Oct 20, 2011

### fanfan

Hi ssd,

Thank you for scanning the page. I have just one question, and I hope you can enlighten me. :)

The equation in the middle of the page gives:

$$\sum_j (x_{1j} - \bar x)^2 = \sum_j \left( (x_{1j} - \bar x_1) + (\bar x_1 - \bar x) \right)^2$$

why is it equal to

$$\sum_j (x_{1j} - \bar x_1)^2 + n_1 (\bar x_1 - \bar x)^2$$

isn't there a part missing:

$$\sum_j 2 (x_{1j} - \bar x_1)(\bar x_1 - \bar x)\ ?$$

Thanks

13. Oct 21, 2011

### Stephen Tashi

$$\sum_j (x_{1j} - \bar x )^2 = \sum_j ( ( x_{1j} - \bar x_1 ) + (\bar x_1 - \bar x ))^2$$
$$= \sum_j ( x_{1j} - \bar x_1) ^2 + \sum_j (\bar x_1 - \bar x )^2 + 2 \sum_j (( x_{1j} - \bar x_1)(\bar x_1 - \bar x ))$$
$$= \sum_j ( x_{1j} - \bar x_1 )^2 + \sum_j (\bar x_1 - \bar x )^2 + 2 (\bar x_1 - \bar x ) \sum_j( x_{1j} - \bar x_1 )$$
$$= \sum_j ( x_{1j} - \bar x_1 )^2 + \sum_j (\bar x_1 - \bar x )^2 + 2 (\bar x_1 - \bar x )( \sum_j x_{1j} - \sum_j \bar x_1 )$$
$$= \sum_j ( x_{1j} - \bar x_1 )^2 + \sum_j (\bar x_1 - \bar x )^2 + 2 (\bar x_1 - \bar x ) (n_1 \bar x_1 - n_1 \bar x_1 )$$
$$= \sum_j ( x_{1j} - \bar x_1 )^2 + \sum_j (\bar x_1 - \bar x )^2 + 2 (\bar x_1 - \bar x ) (0)$$
$$= \sum_j ( x_{1j} - \bar x_1 )^2 + \sum_j (\bar x_1 - \bar x )^2$$
$$= \sum_j ( x_{1j} - \bar x_1 )^2 + n_1 (\bar x_1 - \bar x )^2$$

As I understand the scanned text, if you want to compute the pooled sample variance of two samples, you could simply treat all the data as one sample and compute the variance that way. In this age of computers, that might be the simplest thing to do. The formula for the pooled variance gives exactly the same result as doing that.

As pointed out in previous posts, there is a difference between the variance of a sample (pooled or otherwise) and a computation using sample values that is proposed as an estimator of the population variance (or of the common variance of two different populations). The formula for the sample variance may or may not be the best estimator for the population variance. Whether it is, depends on how you define "best" and what information is known about the distribution of the population.
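The key identity in the derivation above, $\sum_j (x_{1j} - \bar x)^2 = \sum_j (x_{1j} - \bar x_1)^2 + n_1 (\bar x_1 - \bar x)^2$, can be verified numerically on a small example. The group data and grand-mean value below are illustrative choices of mine:

```python
import statistics

x1 = [1.0, 4.0, 2.0, 7.0]          # illustrative group-1 data
xbar1 = statistics.fmean(x1)        # group mean (3.5 here)
xbar = 5.0                          # any fixed "grand mean" value works

# Left side: sum of squares about the grand mean
lhs = sum((x - xbar) ** 2 for x in x1)

# Right side: within-group sum of squares plus the shift term
rhs = sum((x - xbar1) ** 2 for x in x1) + len(x1) * (xbar1 - xbar) ** 2

print(lhs, rhs)   # both are 30.0
```

The cross term vanishes for exactly the reason shown in the derivation: the deviations of a sample from its own mean sum to zero.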

14. Oct 27, 2011

### fanfan

Hi Stephen Tashi,

Thank you very much for the detailed explanation. Now it is clear to me why the term disappeared. :)

For your text comments, I want to make sure I understand them correctly, and I am trying to describe the problem in my words and please tell me whether it makes sense.

Here I have, for example, 5 groups of data, each with ni data points (i = 1..5). For each group I have already calculated the mean and std using:

xi_bar = Ʃxi/ni
σ^2 = Ʃ(xi - xi_bar)^2/(ni-1), where ni is the number of data points in group i.

So if I want to pool the 5 groups together and calculate the total mean and std, I can either use the equations above on all Ʃni data points, or I can use the equation for the pooled std, and the mean will be

x_bar = Ʃ(xi_bar*ni)/(Ʃni-1), right?

Do the two methods give the same results?

I do not quite understand what 'the best estimator' you mean from your last paragraph.
Can you again spend some time to explain this to me, thanks.

15. Oct 27, 2011

### Stephen Tashi

The mean and variance of a sample are defined by standard formulas; the field of study that states these definitions is "descriptive statistics". If you give numbers for the mean and variance of a sample, people will assume you used these formulas, or formulas that give exactly the same numerical answers. It's merely a matter of obeying standard conventions.

When you want to use the numbers in a sample to estimate the mean and variance of a population (or a "random variable") there are no set rules for what formula you can use. What you do will depend on what you know about the distribution of the population.

There are three different concepts involved:
1) The properties of the sample ( such as its mean and variance)
2) The properties of the population ( such its mean and variance)
3) The formulae or procedures that you apply to the data in the sample to estimate the properties of the population.

For example, suppose the population is defined by a random variable X that has a discrete distribution with two unknown parameters M and A. Suppose we know that X has only 3 values with non-zero probabilities and that these are given by:

probability that X = M + A is 1/3
probability that X = M - A is 1/3
probability that X = M is 1/3

Suppose we take a sample of 4 random draws from this distribution and the results are:
{ -3, 1, 5, 5 }. Then we know "by inspection" that M = 1 and A = 4. The mean of the population is therefore 1. (There is a standard definition for the mean of a distribution and if you apply it to the above list of probabilities, using M = 1 and A = 4, you get that the mean is 1.)

However, if you state that you have computed the mean of the sample, this tells people that you are stating the number (-3 + 1 + 5 + 5)/4 = 2. You aren't supposed to say that the mean of the sample is 1, even though you know that the sample implies that the mean of the population is 1.
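The distinction between the descriptive sample mean and the inferred population mean in this example can be made concrete in a few lines:

```python
sample = [-3, 1, 5, 5]
sample_mean = sum(sample) / len(sample)   # the descriptive sample mean

# From the three-point structure {M - A, M, M + A} we can read off
# M = 1, A = 4, so the *population* mean is a different quantity:
M, A = 1, 4
population_mean = ((M + A) + (M - A) + M) / 3

print(sample_mean)       # 2.0
print(population_mean)   # 1.0
```

Both numbers are legitimate; they just answer different questions, which is the point of the three-concept list above.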

Suppose you have a sample of N values of the random variable X and let the sample mean be $\bar x$. I'm not an expert in descriptive statistics, but I think that if you state a number for the sample variance, it is always supposed to be the number:

$$\frac {\sum (x - \bar x)^2}{N}$$

and not the number:

$$\frac {\sum (x - \bar x)^2}{N-1}$$

If you are estimating the variance of the population, you are free to use the latter formula and people advocate doing this when N is "small". To understand why, you have to study the statistical theory of "estimators".
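The bias that motivates the N-1 divisor is easy to see by simulation. This sketch (my own illustrative setup: normal draws with known variance 4) averages both estimators over many small samples:

```python
import random
import statistics

random.seed(0)
true_var = 4.0                     # variance of N(0, sigma=2)
N, trials = 5, 20000

biased_sum = unbiased_sum = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 2.0) for _ in range(N)]
    m = statistics.fmean(xs)
    ss = sum((x - m) ** 2 for x in xs)
    biased_sum += ss / N           # descriptive "sample variance"
    unbiased_sum += ss / (N - 1)   # usual estimator of population variance

print(biased_sum / trials, unbiased_sum / trials)
# the divisor-N average sits below 4.0 by roughly the factor (N-1)/N
```

With N = 5 the downward bias of the divisor-N estimate is about 20 percent, which is why the distinction matters most for small samples.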

--------------------

No. You wouldn't divide by $\sum n_i - 1.$ Divide by $\sum n_i$.
