Normalizing standard deviation between two data sets

by TheAnalogKid83
Tags: data, deviation, normalizing, sets, standard
TheAnalogKid83
#1
Oct7-08, 09:43 AM
P: 174
I have a baseline set of data with a standard deviation of X, and then I collected another set of data under a different condition (a different temperature), which has a different standard deviation Y. How do I cancel out the standard deviations so that I see only the difference in the actual data as it varies with the condition change (temperature)? I'm trying to stay vague, but if this doesn't make clear what I'm looking for, I'll give my application example. Even just a general topic to point me to would help.
HallsofIvy
#2
Oct7-08, 09:56 AM
Math
Emeritus
Sci Advisor
Thanks
PF Gold
P: 39,490
Seems to me that you could convert both to a "standard" normal distribution by taking [itex]x'= (x- \mu_x)/\sigma_x[/itex] and [itex]y'= (y- \mu_y)/\sigma_y[/itex]. If you don't want to worry about the means, just dividing by the standard deviation of each should give you a distribution with standard deviation 1.
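
A minimal sketch of that standardization in Python with NumPy (the data here is a synthetic placeholder and the variable names are my own assumptions):

[code]
import numpy as np

# Synthetic placeholder samples -- substitute your own measurements
rng = np.random.default_rng(0)
baseline = rng.normal(loc=5.0, scale=0.4, size=50)  # baseline-temperature data
changed  = rng.normal(loc=5.3, scale=0.9, size=50)  # changed-temperature data

def standardize(x):
    # Subtract the sample mean and divide by the sample standard deviation
    return (x - x.mean()) / x.std(ddof=1)

baseline_z = standardize(baseline)
changed_z  = standardize(changed)

# Both standardized samples now have mean ~0 and standard deviation 1,
# so the spread itself no longer differs between the two conditions.
print(baseline_z.std(ddof=1), changed_z.std(ddof=1))
[/code]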
atyy
#3
Oct7-08, 01:36 PM
Sci Advisor
P: 8,518
The change in temperature may have affected both the mean and standard deviation of the "true" probability distribution. If by eye the two sample standard deviations look the same, just use a normal t-test to see if the means are different. If the standard deviations look very different, or if you suspect on theoretical grounds that the standard deviations are different, then use a t-test variant in which the standard deviations are not assumed equal.

See for example:
4.3.3 Unequal sample sizes, unequal variance
http://en.wikipedia.org/wiki/Student%27s_t-test
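
As a rough sketch of both options using SciPy (equal_var=False selects the Welch variant that does not assume equal standard deviations; the data below is a synthetic placeholder):

[code]
import numpy as np
from scipy import stats

# Synthetic placeholder samples -- substitute your two measurement sets
rng = np.random.default_rng(1)
baseline = rng.normal(5.0, 0.4, size=40)
changed  = rng.normal(5.3, 0.9, size=40)

# Ordinary two-sample t-test (assumes equal variances)
t_pooled, p_pooled = stats.ttest_ind(baseline, changed, equal_var=True)

# Welch's t-test (variances not assumed equal)
t_welch, p_welch = stats.ttest_ind(baseline, changed, equal_var=False)

print(f"pooled t-test:  t = {t_pooled:.3f}, p = {p_pooled:.4f}")
print(f"Welch's t-test: t = {t_welch:.3f}, p = {p_welch:.4f}")
[/code]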

statdad
#4
Oct7-08, 03:23 PM
HW Helper
P: 1,371
One more comment on this. Graph your data first - a simple dotplot, stemplot, or boxplot, or a histogram if the samples are large. Look for evidence of outliers and/or skewness. Both of these can cause problems for the classical procedures, which are not robust in the face of departures from normality. If you see skewness (or even several outliers with overall symmetry), you should also run a non-parametric test (Wilcoxon or equivalent). (I would suggest always doing this, but my training is in non-parametrics.) Intuitively, if the two results agree, the t-test results may be good enough; if they are in great disagreement, you should be suspicious of the t-test results.
(DO NOT be tempted to throw away outliers in order to obtain a specific result: unless the outliers are due to recording error, that is not valid.)
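
One possible way to do that first look at the data, assuming Matplotlib and SciPy are available (the sample data is again a synthetic placeholder):

[code]
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Synthetic placeholder samples -- substitute your own measurements
rng = np.random.default_rng(2)
baseline = rng.normal(5.0, 0.4, size=40)
changed  = rng.normal(5.3, 0.9, size=40)

# Side-by-side boxplots make outliers and asymmetry easy to spot
plt.boxplot([baseline, changed])
plt.xticks([1, 2], ["baseline", "changed condition"])
plt.ylabel("measured value")
plt.show()

# Rough numerical check for skewness in each sample
print("skewness:", stats.skew(baseline), stats.skew(changed))
[/code]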
atyy
#5
Oct7-08, 03:45 PM
Sci Advisor
P: 8,518
Quote by statdad (post #4 above)
Yes, I agree with that. One thing I've never understood properly: if you do two independent tests, say a t-test and a Wilcoxon, should you change the p-value you accept as "significant" (i.e., analogous to Bonferroni and its ilk)? In which case, maybe do only the Wilcoxon if non-Gaussianity is suspected? I usually just distrust statistics and collect more data, unless I need the paper published immediately.
statdad
#6
Oct7-08, 07:39 PM
HW Helper
P: 1,371
No data is truly normal (Gaussian), although the 'middle' can quite closely resemble normally distributed data. If your initial graphs indicate severe non-normality, it's usually best to avoid normal-based inferences altogether.
If you do both a t-test and a non-parametric test for comparison, as a simple check, there isn't any real need to adjust significance levels at all.
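
A sketch of that side-by-side check with SciPy (Welch's t-test plus the Wilcoxon rank-sum test; the data is a synthetic placeholder):

[code]
import numpy as np
from scipy import stats

# Synthetic placeholder samples -- substitute your own measurements
rng = np.random.default_rng(3)
baseline = rng.normal(5.0, 0.4, size=40)
changed  = rng.normal(5.3, 0.9, size=40)

# Parametric: Welch's t-test (no equal-variance assumption)
_, p_t = stats.ttest_ind(baseline, changed, equal_var=False)

# Non-parametric: Wilcoxon rank-sum test
_, p_w = stats.ranksums(baseline, changed)

print(f"Welch t-test p = {p_t:.4f}, rank-sum p = {p_w:.4f}")
# If both lead to the same conclusion, the t-test result is probably fine;
# if they disagree sharply, lean on the non-parametric result.
[/code]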

