Normalizing standard deviation between two data sets

  1. "Normalizing" standard deviation between two data sets

    I have a baseline set of data with a standard deviation of X, and I then collected another set of data under a different condition (a different temperature), which has a different standard deviation Y. How do I cancel out the standard deviations so that I only see the difference in the actual data as it varies with the condition change (temperature)? I'm trying to stay vague, but if this doesn't make clear what I'm looking for, I'll give my application example. Even just a general topic to point me to would help.
     
  2. HallsofIvy

    HallsofIvy 40,678
    Staff Emeritus
    Science Advisor

    Re: "Normalizing" standard deviation between two data sets

    Seems to me that you could convert both to a "standard" normal distribution by taking [itex]x'= (x- \mu_x)/\sigma_x[/itex] and [itex]y'= (y- \mu_y)/\sigma_y[/itex]. If you don't want to worry about the means, just dividing by the standard deviation of each should give you a distribution with standard deviation 1.
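    A quick illustration of this standardization (a minimal sketch in Python/NumPy; the array names and numbers are made up for illustration, not taken from the thread):

    [code]
    import numpy as np

    baseline = np.array([10.1, 9.8, 10.4, 10.0, 9.9])   # hypothetical baseline measurements
    heated   = np.array([12.3, 12.9, 11.8, 12.6, 12.1])  # hypothetical measurements at the new temperature

    # x' = (x - mean_x) / sd_x : each transformed set has mean 0 and standard deviation 1.
    baseline_z = (baseline - baseline.mean()) / baseline.std(ddof=1)
    heated_z   = (heated - heated.mean()) / heated.std(ddof=1)

    # If you only want to equalize the spread and keep the means, divide by the
    # standard deviation alone; each set then has standard deviation 1.
    baseline_scaled = baseline / baseline.std(ddof=1)
    heated_scaled   = heated / heated.std(ddof=1)
    [/code]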
     
  3. atyy

    atyy 10,642
    Science Advisor

    Re: "Normalizing" standard deviation between two data sets

    The change in temperature may have affected both the mean and the standard deviation of the "true" probability distribution. If by eye the two sample standard deviations look the same, just use an ordinary t-test to see whether the means are different. If the standard deviations look very different, or if you suspect on theoretical grounds that they are different, then use a t-test variant in which the standard deviations are not assumed equal (Welch's t-test).

    See for example:
    4.3.3 Unequal sample sizes, unequal variance
    http://en.wikipedia.org/wiki/Student's_t-test
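    For instance, a minimal sketch with SciPy (the data are hypothetical; equal_var=False selects the Welch variant that does not assume equal standard deviations):

    [code]
    import numpy as np
    from scipy import stats

    baseline = np.array([10.1, 9.8, 10.4, 10.0, 9.9, 10.2])
    heated   = np.array([12.3, 12.9, 11.8, 12.6, 12.1])

    # Ordinary (pooled-variance) t-test, appropriate when the spreads look similar:
    t_pooled, p_pooled = stats.ttest_ind(baseline, heated, equal_var=True)

    # Welch's t-test, for when the standard deviations may differ:
    t_welch, p_welch = stats.ttest_ind(baseline, heated, equal_var=False)

    print(f"pooled: t={t_pooled:.3f}, p={p_pooled:.3g}")
    print(f"Welch:  t={t_welch:.3f}, p={p_welch:.3g}")
    [/code]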
     
  4. statdad

    statdad 1,478
    Homework Helper

    Re: "Normalizing" standard deviation between two data sets

    One more comment on this. Graph your data first: a simple dot plot, stem plot, box plot, or a histogram if the samples are large. Look for evidence of outliers and/or skewness. Both of these can cause problems with the classical procedures, as they are not robust in the face of departures from normality. If you see skewness (or even several outliers with overall symmetry) you should also do a non-parametric test (Wilcoxon or equivalent). (I would suggest always doing this, but my training is in non-parametrics.) Intuitively, if the two results are in agreement, the t-test results may be good enough. If the two results are in great disagreement, you should suspect the t-test results.
    (DO NOT be tempted to throw away outliers in order to obtain a specific result: unless the outliers are due to recording error, that is not valid)
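    As a rough sketch of the "graph it first" step (hypothetical data; the last baseline value is an artificial outlier added for illustration):

    [code]
    import numpy as np
    import matplotlib.pyplot as plt

    baseline = np.array([10.1, 9.8, 10.4, 10.0, 9.9, 10.2, 14.5])  # 14.5 is a suspicious point
    heated   = np.array([12.3, 12.9, 11.8, 12.6, 12.1, 12.4])

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

    # Side-by-side box plots make outliers and asymmetry easy to spot.
    ax1.boxplot([baseline, heated], labels=["baseline", "heated"])
    ax1.set_ylabel("measurement")

    # Overlaid histograms show skewness when the samples are large enough.
    ax2.hist(baseline, bins=10, alpha=0.5, label="baseline")
    ax2.hist(heated, bins=10, alpha=0.5, label="heated")
    ax2.legend()

    plt.tight_layout()
    plt.show()
    [/code]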
     
  5. atyy

    atyy 10,642
    Science Advisor

    Re: "Normalizing" standard deviation between two data sets

    Yes, I agree with that. One thing I've never understood properly: if you do two independent tests, say a t-test and a Wilcoxon, should you change the p-value you accept as "significant" (i.e. something analogous to Bonferroni and its ilk)? In which case, maybe do only the Wilcoxon if non-Gaussianity is suspected? I usually just distrust statistics and collect more data, unless I need the paper published immediately. :rolleyes:
     
  6. statdad

    statdad 1,478
    Homework Helper

    Re: "Normalizing" standard deviation between two data sets

    No data is truly normal (Gaussian), although the 'middle' can quite closely resemble normally distributed data. If your initial graphs indicate severe non-normality, it's usually best to avoid normal-based inferences altogether.
    If you do both a t-test and a non-parametric test for comparison, as a simple check, there isn't any real need to adjust significance levels at all.
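    A minimal sketch of that check with SciPy (hypothetical data; SciPy's mannwhitneyu is the rank-sum test equivalent to the Wilcoxon test for two independent samples):

    [code]
    import numpy as np
    from scipy import stats

    baseline = np.array([10.1, 9.8, 10.4, 10.0, 9.9, 10.2, 14.5])
    heated   = np.array([12.3, 12.9, 11.8, 12.6, 12.1, 12.4])

    t_stat, p_t = stats.ttest_ind(baseline, heated, equal_var=False)            # Welch t-test
    u_stat, p_w = stats.mannwhitneyu(baseline, heated, alternative="two-sided") # rank-based test

    print(f"Welch t-test p = {p_t:.3g}")
    print(f"Rank-sum (Mann-Whitney) p = {p_w:.3g}")
    # If the two p-values lead to the same conclusion, the t-test result is
    # probably fine; if they disagree badly, prefer the rank-based result.
    [/code]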
     