Normalizing standard deviation between two data sets

In summary: to compare two data sets collected under different conditions, standardize each set with ##x' = (x - \mu_x)/\sigma_x## and ##y' = (y - \mu_y)/\sigma_y##, which gives each a standard deviation of 1. If the two sample standard deviations look very different, use a t-test variant that does not assume equal variances. Graph the data first to spot outliers and skewness, since these undermine the classical tests; under clear non-normality, use a non-parametric test such as the Wilcoxon. When a t-test and a non-parametric test are run side by side as a robustness check, there is generally no need to adjust significance levels.
TheAnalogKid83
"Normalizing" standard deviation between two data sets

I have a baseline data set with a standard deviation of X, and a second set collected under a different condition (a different temperature) with a different standard deviation Y. How do I normalize out the two standard deviations, so I can see only the change in the actual data as it varies with the condition change (temperature)? I'm trying to stay general, but if this doesn't make clear what I'm looking for, I'll give my application example. Even just a general topic to point me to would help.
 


Seems to me that you could convert both to a "standard" normal distribution by taking ##x' = (x - \mu_x)/\sigma_x## and ##y' = (y - \mu_y)/\sigma_y##. If you don't want to worry about the means, just dividing each set by its own standard deviation should give you a distribution with standard deviation 1.
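
A minimal sketch of that standardization in Python (numpy assumed; the data arrays are made up for illustration):

```python
import numpy as np

# Hypothetical example data: baseline and elevated-temperature measurements
x = np.array([10.1, 9.8, 10.4, 10.0, 9.9, 10.2])
y = np.array([12.3, 11.7, 12.9, 12.1, 12.6, 11.9])

# Standardize each set: subtract its own mean, divide by its own sample std dev
x_std = (x - x.mean()) / x.std(ddof=1)
y_std = (y - y.mean()) / y.std(ddof=1)

# Both standardized sets now have standard deviation 1
print(x_std.std(ddof=1), y_std.std(ddof=1))
```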
 


The change in temperature may have affected both the mean and the standard deviation of the "true" probability distribution. If by eye the two sample standard deviations look about the same, just use an ordinary (pooled-variance) t-test to see whether the means differ. If the standard deviations look very different, or if you suspect on theoretical grounds that they are different, then use a t-test variant in which the variances are not assumed equal (Welch's t-test).

See for example:
4.3.3 Unequal sample sizes, unequal variance
http://en.wikipedia.org/wiki/Student's_t-test
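
A hedged sketch of that unequal-variance (Welch) t-test using scipy (the arrays reuse the illustrative data above):

```python
import numpy as np
from scipy import stats

# Hypothetical baseline and changed-condition samples
x = np.array([10.1, 9.8, 10.4, 10.0, 9.9, 10.2])
y = np.array([12.3, 11.7, 12.9, 12.1, 12.6, 11.9])

# equal_var=False selects Welch's t-test, which does not pool the variances
t_stat, p_value = stats.ttest_ind(x, y, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.4f}")
```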
 


One more comment on this. Graph your data first - whether a simple dotplot, stemplot, or boxplot, or a histogram if the samples are large. Look for evidence of outliers and/or skewness. Both of these can cause problems with the classical procedures, as they are not robust in the face of departures from normality. If you see skewness (or even several outliers with overall symmetry) you should also do a non-parametric test (Wilcoxon or equivalent) as well. (I would suggest always doing this, but my training is in non-parametrics.) Intuitively, if the two results are in agreement, the t-test results may be good enough. If the two results are in great disagreement, you should suspect the t-test results.
(DO NOT be tempted to throw away outliers in order to obtain a specific result: unless the outliers are due to recording error, that is not valid.)
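
One quick way to do that first look in Python (matplotlib assumed; same illustrative arrays as above):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.array([10.1, 9.8, 10.4, 10.0, 9.9, 10.2])
y = np.array([12.3, 11.7, 12.9, 12.1, 12.6, 11.9])

# Side-by-side boxplots make outliers and skewness easy to spot
plt.boxplot([x, y])
plt.xticks([1, 2], ["baseline", "changed condition"])
plt.ylabel("measurement")
plt.title("Check for outliers and skewness before testing")
plt.show()
```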
 


statdad said:
Graph your data first ... If the two results are in great disagreement, you should suspect the t-test results.

Yes, I agree with that. One thing I've never understood properly: if you do two independent tests, say a t-test and a Wilcoxon, should you change the p-value you accept as "significant" (i.e., analogous to Bonferroni and its ilk)? In which case, maybe do only the Wilcoxon if non-Gaussianity is suspected? I usually just distrust statistics and collect more data, unless I need the paper published immediately. :rolleyes:
 


No real data are truly normal (Gaussian), although the 'middle' of a distribution can quite closely resemble normally distributed data. If your initial graphs indicate severe non-normality, it's usually best to avoid normal-based inference altogether.
If you do both a t-test and a non-parametric test for comparison, as a simple check, there isn't any real need to adjust significance levels: the two tests serve as a robustness check on the same hypothesis, not as tests of multiple hypotheses, so Bonferroni-style corrections don't apply.
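
As a sketch of that robustness check (scipy assumed; same illustrative arrays as above), run both tests on the same data and compare conclusions rather than correcting either p-value:

```python
import numpy as np
from scipy import stats

x = np.array([10.1, 9.8, 10.4, 10.0, 9.9, 10.2])
y = np.array([12.3, 11.7, 12.9, 12.1, 12.6, 11.9])

# Parametric check: Welch's t-test (no equal-variance assumption)
t_stat, t_p = stats.ttest_ind(x, y, equal_var=False)

# Non-parametric check: Wilcoxon rank-sum / Mann-Whitney U test
u_stat, u_p = stats.mannwhitneyu(x, y, alternative="two-sided")

print(f"Welch t-test:      p = {t_p:.4f}")
print(f"Wilcoxon rank-sum: p = {u_p:.4f}")
# If both agree, the t-test result is probably fine;
# if they disagree sharply, prefer the rank-based test.
```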
 
