Normalizing standard deviation between two data sets 
#1
Oct 7, 2008, 09:43 AM

P: 174

I have a baseline set of data with a standard deviation of X, and then I collected another set of data under a different condition (a different temperature), which has a different standard deviation Y. How do I cancel out the standard deviations, so that I only see the difference in the actual data as it varies with the condition change (temperature)? I'm trying to stay general, but if this doesn't make clear what I'm looking for, I'll give my application example. Even just a general topic to point me to would help.



#2
Oct 7, 2008, 09:56 AM

Math
Emeritus
Sci Advisor
Thanks
PF Gold
P: 39,323

Seems to me that you could convert both to a "standard" normal distribution by taking [itex]x'= (x - \mu_x)/\sigma_x[/itex] and [itex]y'= (y - \mu_y)/\sigma_y[/itex]. If you don't want to worry about the means, just dividing by the standard deviation of each should give you a distribution with standard deviation 1.
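A minimal sketch of that standardization (z-scoring) in plain Python; the data values here are invented purely for illustration:

```python
from statistics import mean, stdev

def standardize(data):
    """Convert a sample to z-scores: subtract the sample mean and divide
    by the sample standard deviation, giving mean 0 and std dev 1."""
    m, s = mean(data), stdev(data)
    return [(x - m) / s for x in data]

baseline = [10.1, 10.4, 9.8, 10.0, 10.2]   # hypothetical baseline readings
heated = [12.3, 12.9, 11.8, 12.5, 12.1]    # hypothetical higher-temperature readings

z_base = standardize(baseline)
z_heat = standardize(heated)
```

After this, both samples are on the same scale (standard deviation 1), so differences in shape or location are easier to compare by eye.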



#3
Oct 7, 2008, 01:36 PM

Sci Advisor
P: 8,339

The change in temperature may have affected both the mean and the standard deviation of the "true" probability distribution. If the two sample standard deviations look about the same by eye, just use an ordinary t-test to see whether the means are different. If the standard deviations look very different, or if you suspect on theoretical grounds that they are different, then use a t-test variant in which the standard deviations are not assumed equal.
See for example section 4.3.3, "Unequal sample sizes, unequal variance": http://en.wikipedia.org/wiki/Student%27s_t-test
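That unequal-variance variant is Welch's t-test; a sketch in plain Python, using the Welch–Satterthwaite approximation for the degrees of freedom (the sample numbers are invented for illustration):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom for two
    samples whose variances are not assumed equal."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)      # sample variances
    se2 = va / na + vb / nb                # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

baseline = [10.1, 10.4, 9.8, 10.0, 10.2]   # hypothetical room-temperature data
heated = [12.3, 12.9, 11.8, 12.5, 12.1]    # hypothetical high-temperature data
t, df = welch_t(baseline, heated)
```

To get a p-value, compare t against a Student's t distribution with df degrees of freedom; if SciPy is available, `scipy.stats.ttest_ind(a, b, equal_var=False)` performs the same test directly.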


#4
Oct 7, 2008, 03:23 PM

HW Helper
P: 1,361

One more comment on this. Graph your data first: a simple dotplot, stemplot, or boxplot, or a histogram if the samples are large. Look for evidence of outliers and/or skewness. Both of these can cause problems for the classical procedures, which are not robust to departures from normality. If you see skewness (or even several outliers with overall symmetry), you should also do a nonparametric test (Wilcoxon or equivalent). (I would suggest always doing this, but my training is in nonparametrics.) Intuitively, if the two results are in agreement, the t-test results may be good enough. If the two results are in great disagreement, you should suspect the t-test results.
(DO NOT be tempted to throw away outliers in order to obtain a specific result: unless the outliers are due to recording error, that is not valid.)
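For the nonparametric route, the Mann–Whitney U statistic (equivalent to the Wilcoxon rank-sum test) can be sketched in plain Python; tied values receive the average of their ranks:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a: rank the pooled data
    (ties get the average rank), sum the ranks of a, and subtract
    the minimum possible rank sum."""
    pooled = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                      # extend over a group of tied values
        avg = (i + j) / 2 + 1           # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg
        i = j + 1
    r_a = sum(ranks[: len(a)])          # rank sum of sample a
    return r_a - len(a) * (len(a) + 1) / 2
```

U ranges from 0 (every value in a is below every value in b) to len(a)*len(b) (the reverse); values near the middle suggest no location shift. In practice `scipy.stats.mannwhitneyu`, if available, also returns a p-value.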


#5
Oct 7, 2008, 03:45 PM

Sci Advisor
P: 8,339




#6
Oct 7, 2008, 07:39 PM

HW Helper
P: 1,361

No data are truly normal (Gaussian), although the 'middle' can quite closely resemble normally distributed data. If your initial graphs indicate severe non-normality, it's usually best to avoid normal-based inferences altogether.
If you do both a t-test and a nonparametric test for comparison, as a simple check, there isn't any real need to adjust significance levels at all.

