How can I test for a significant difference from arithmetic means and SDs?

leonidnk
The raw data are lost; I have only the final outputs (arithmetic means and standard deviations) for equally long measurement series. How can I compare the results in order to see whether there is a statistically significant difference (p < 0.05)?

Thank you in advance.
 
I should explain. I have two measurement series (1 and 2) of N measurements each. Series 1 has arithmetic mean x1 and standard deviation s1; series 2 has arithmetic mean x2 and standard deviation s2. How can I find out whether series 1 and 2 are statistically significantly different?
 
leonidnk said:
I should explain. I have two measurement series (1 and 2) of N measurements each. Series 1 has arithmetic mean x1 and standard deviation s1; series 2 has arithmetic mean x2 and standard deviation s2. How can I find out whether series 1 and 2 are statistically significantly different?

Do you know the value of N? If not, you can't do it. If you do, then I don't understand your question. It's stats 101.
 
Let s = (s1² + s2²)^(1/2). If |x1 − x2| << s, they are statistically the same. If |x1 − x2| >> s, they are statistically different. In between it is a judgment call.
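mathman's rule of thumb is easy to write out directly. The numbers below are made-up placeholders, not the OP's data, which never appear in the thread:

```python
from math import sqrt

# Made-up example values; substitute the real means and SDs.
x1, s1 = 10.2, 1.5
x2, s2 = 9.4, 1.8

# Combined spread of single measurements (not of the means)
s = sqrt(s1**2 + s2**2)
ratio = abs(x1 - x2) / s
print(ratio)  # << 1: statistically the same; >> 1: different; near 1: judgment call
```

Note that s here combines the spreads of individual measurements rather than the standard errors of the means, so this is only a coarse screen, not a significance test.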
 
SW VandeCarr said:
Do you know the value of N? If not, you can't do it. If you do, then I don't understand your question. It's stats 101.

n=45
 
mathman said:
Let s = (s1² + s2²)^(1/2). If |x1 − x2| << s, they are statistically the same. If |x1 − x2| >> s, they are statistically different. In between it is a judgment call.

Thank you. What about the p-value? Is it possible to calculate it?
 
mathman said:
Let s = (s1² + s2²)^(1/2). If |x1 − x2| << s, they are statistically the same. If |x1 − x2| >> s, they are statistically different. In between it is a judgment call.

That doesn't allow you to evaluate statistical significance, which is what the OP asked. You need to know the standard error of the mean.

Edit: OK, if n = 45 for each of the two samples, you can do it. You don't need the actual data. Look up the standard error and how to construct confidence intervals. If the 95% confidence intervals overlap, the two samples are not statistically different by the usual standard. You can also use the two-sample t-test to get a p-value.

http://ccnmtl.columbia.edu/projects/qmss/the_ttest/twosample_ttest.html
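Both of the steps SW VandeCarr describes — confidence intervals from the standard error, and a two-sample t statistic — can be computed from the summary statistics alone. A minimal stdlib-only sketch, using made-up placeholder values in place of the OP's unstated means and SDs:

```python
from math import sqrt
from statistics import NormalDist

# Made-up placeholder statistics; substitute the real means and SDs.
n = 45
x1, s1 = 10.2, 1.5
x2, s2 = 9.4, 1.8

# Standard error of each mean, and the 95% confidence intervals
z95 = NormalDist().inv_cdf(0.975)   # approximately 1.96
se1, se2 = s1 / sqrt(n), s2 / sqrt(n)
ci1 = (x1 - z95 * se1, x1 + z95 * se1)
ci2 = (x2 - z95 * se2, x2 + z95 * se2)

# Two-sample t statistic from summary data (equal n per series)
t = (x1 - x2) / sqrt(s1**2 / n + s2**2 / n)
print(ci1, ci2, t)
```

With these particular made-up numbers the two intervals overlap slightly even though the t statistic exceeds 1.96, which illustrates that the interval-overlap check is more conservative than the t-test itself. If SciPy is available, `scipy.stats.ttest_ind_from_stats` performs this whole calculation, including the p-value, directly from means, SDs, and sample sizes.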
 
SW VandeCarr said:
That doesn't allow you to evaluate statistical significance, which is what the OP asked. You need to know the standard error of the mean.

Edit: OK, if n = 45 for each of the two samples, you can do it. You don't need the actual data. Look up the standard error and how to construct confidence intervals. If the 95% confidence intervals overlap, the two samples are not statistically different by the usual standard. You can also use the two-sample t-test to get a p-value.

http://ccnmtl.columbia.edu/projects/qmss/the_ttest/twosample_ttest.html

Thanks. OK, I can get a t-value that way, but how can I get a p-value? Looking at http://graphpad.com/quickcalcs/PValue1.cfm, I see I need a DF value. How can I get it?
 
leonidnk said:
Thanks. OK, I can get a t-value that way, but how can I get a p-value? Looking at http://graphpad.com/quickcalcs/PValue1.cfm, I see I need a DF value. How can I get it?

Just put in 45. The t values for dfs above 30 don't change much. This test is used for small samples.

Your samples are large enough for the usual z-score test comparing two means as well.

http://www.cliffsnotes.com/study_guide/TwoSample-zTest-Comparing-Two-Means.topicArticleId-25951,articleId-25938.html . Let me know how they compare for your data.
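With n = 45 per series the normal approximation is indeed close to the t result. A stdlib-only sketch of the z calculation, again with made-up placeholder statistics (note that the df entered in calculators like the one linked above is conventionally n1 + n2 − 2 = 88 for a pooled two-sample test):

```python
from math import sqrt
from statistics import NormalDist

# Made-up placeholder statistics; substitute the real means and SDs.
n = 45
x1, s1 = 10.2, 1.5
x2, s2 = 9.4, 1.8

# z statistic for the difference of two means (treats s1, s2 as known)
z = (x1 - x2) / sqrt(s1**2 / n + s2**2 / n)

# Two-sided p-value from the standard normal; at df = 2*n - 2 = 88
# the t distribution gives a very similar value.
p = 2 * (1 - NormalDist().cdf(abs(z)))
print(z, p)
```

The z and t statistics are computed from the same formula here; only the reference distribution used for the p-value differs.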
 
SW VandeCarr said:
Just put in 45. The t values for dfs above 30 don't change much. This test is used for small samples.

Your samples are large enough for the usual z-score test comparing two means as well.

http://www.cliffsnotes.com/study_guide/TwoSample-zTest-Comparing-Two-Means.topicArticleId-25951,articleId-25938.html . Let me know how they compare for your data.

Thank you. But I don't see any big difference between this z-test and the t-test at http://ccnmtl.columbia.edu/projects/qmss/the_ttest/twosample_ttest.html.
 
SW VandeCarr said:
Good. Which gives the higher p value?

If these methods were the same, they would give the same p-value.
 
leonidnk said:
If these methods were the same, they would give the same p-value.

They are not the same. There are three common tests for statistical significance, giving z-scores, t-scores, or chi-square scores for three distributions: the Gaussian (normal), the t, and the chi-square. The latter two are more precise for small samples and are sensitive to the number of comparisons, or degrees of freedom (means and SDs are summary statistics). Both the t and the (central) chi-square distributions converge to the Gaussian for larger sample sizes or numbers of comparisons. Generally, samples of about 30 or more are adequate for z-score testing when the distribution of values within each sample is assumed to be well behaved (few or no outliers).

The p-values of each of the three are calculated with respect to their own distributions, but they converge to the z-score calculation for sufficiently large samples. Without knowing the shape of your distributions, I thought the t-statistic was probably a bit better.
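The convergence SW VandeCarr describes can be seen numerically. A rough stdlib-only sketch that integrates the Student's t density to get a two-sided p-value for t = 2.0 at increasing degrees of freedom, then compares with the normal result:

```python
from math import gamma, pi, sqrt
from statistics import NormalDist

def t_pdf(x, df):
    # Density of Student's t distribution with df degrees of freedom
    # (plain gamma() is fine for moderate df; use lgamma for very large df)
    c = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_two_sided_p(t, df, hi=60.0, steps=20000):
    # Crude trapezoidal integral of the upper tail on [|t|, hi], doubled
    a = abs(t)
    h = (hi - a) / steps
    area = 0.5 * (t_pdf(a, df) + t_pdf(hi, df))
    for i in range(1, steps):
        area += t_pdf(a + i * h, df)
    return 2 * area * h

ps = {df: t_two_sided_p(2.0, df) for df in (5, 30, 88)}
z_p = 2 * (1 - NormalDist().cdf(2.0))
print(ps, z_p)
```

As df grows from 5 toward 88 (the pooled df for two samples of 45), the t-based p-value shrinks toward the normal one, which is why the two tests gave the OP nearly identical answers.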
 
SW VandeCarr said:
They are not the same. There are three common tests for statistical significance, giving z-scores, t-scores, or chi-square scores for three distributions: the Gaussian (normal), the t, and the chi-square. The latter two are more precise for small samples and are sensitive to the number of comparisons, or degrees of freedom (means and SDs are summary statistics). Both the t and the (central) chi-square distributions converge to the Gaussian for larger sample sizes or numbers of comparisons. Generally, samples of about 30 or more are adequate for z-score testing when the distribution of values within each sample is assumed to be well behaved (few or no outliers).

The p-values of each of the three are calculated with respect to their own distributions, but they converge to the z-score calculation for sufficiently large samples. Without knowing the shape of your distributions, I thought the t-statistic was probably a bit better.

Thank you very much!
 