How to test for a significant difference using only arithmetic means and SDs?

In summary, the original poster wants to compare two measurement series (1 and 2) of N measurements each, for which only the arithmetic means and standard deviations survive, and to decide whether the series differ significantly (p < 0.05). The thread's advice: with N = 45 per series, the means can be compared with a two-sample t test or z test computed from the summary statistics alone; for samples this large the two give nearly the same p value, and the choice between them depends mainly on the shape of the distributions.
  • #1
leonidnk
The raw data are lost; I have only the final outputs (arithmetic means and standard deviations) for two equally long measurement series. How can I compare the results to see whether there is a statistically significant difference (p < 0.05)?

Thank you in advance.
 
  • #2
Let me explain. I have two measurement series (1 and 2) of N measurements each. One series has arithmetic mean x1 and standard deviation s1, the other has arithmetic mean x2 and standard deviation s2. How can I find out whether series 1 and 2 differ significantly or not? :confused:
 
  • #3
leonidnk said:
Let me explain. I have two measurement series (1 and 2) of N measurements each. One series has arithmetic mean x1 and standard deviation s1, the other has arithmetic mean x2 and standard deviation s2. How can I find out whether series 1 and 2 differ significantly or not? :confused:

Do you know the value of N? If not, you can't do it. If you do, then I don't understand your question. It's stats 101.
 
  • #4
Let s = (s1² + s2²)^(1/2). If |x1 − x2| << s, they are statistically the same. If |x1 − x2| >> s, they are statistically different. In between it is a judgment call.
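A minimal Python sketch of this rule of thumb (the means and SDs are made-up placeholders, not values from the thread):

from math import sqrt

# placeholder summary statistics for the two measurement series
x1, s1 = 10.2, 1.5
x2, s2 = 11.0, 1.8

s = sqrt(s1**2 + s2**2)       # combined spread, s = (s1^2 + s2^2)^(1/2)
ratio = abs(x1 - x2) / s      # |x1 - x2| measured against s

# ratio << 1: statistically the same; ratio >> 1: statistically different;
# ratio near 1: a judgment call, as described above
print(s, ratio)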
 
  • #5
SW VandeCarr said:
Do you know the value of N? If not, you can't do it. If you do, then I don't understand your question. It's stats 101.

n=45
 
  • #6
mathman said:
Let s = (s1² + s2²)^(1/2). If |x1 − x2| << s, they are statistically the same. If |x1 − x2| >> s, they are statistically different. In between it is a judgment call.

Thank you. What about p-value? Is it possible to calculate it?
 
  • #7
mathman said:
Let s = (s1² + s2²)^(1/2). If |x1 − x2| << s, they are statistically the same. If |x1 − x2| >> s, they are statistically different. In between it is a judgment call.

That doesn't allow you to evaluate statistical significance, which is what the OP asked. You need to know the standard error of the mean.

Edit: OK, if n = 45 for both samples, you can do it. You don't need the actual data. Look up the standard error and how to construct confidence intervals. If the 95% confidence intervals overlap, the two samples are not statistically different by the usual standard. You can also use the two-sample t test to get a p value.

http://ccnmtl.columbia.edu/projects/qmss/the_ttest/twosample_ttest.html
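A minimal sketch of this procedure in Python, assuming SciPy is available; the means and SDs below are placeholders, and n = 45 is the sample size given earlier in the thread:

from scipy import stats

n = 45                       # measurements per series
x1, s1 = 10.2, 1.5           # placeholder mean and SD of series 1
x2, s2 = 11.0, 1.8           # placeholder mean and SD of series 2

# pooled two-sample t test computed from the summary statistics alone
t_stat, p_value = stats.ttest_ind_from_stats(x1, s1, n, x2, s2, n, equal_var=True)

# 95% confidence interval for each mean, using the standard error s / sqrt(n)
t_crit = stats.t.ppf(0.975, n - 1)
ci1 = (x1 - t_crit * s1 / n**0.5, x1 + t_crit * s1 / n**0.5)
ci2 = (x2 - t_crit * s2 / n**0.5, x2 + t_crit * s2 / n**0.5)

print(t_stat, p_value, ci1, ci2)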
 
  • #8
SW VandeCarr said:
That doesn't allow you to evaluate statistical significance, which is what the OP asked. You need to know the standard error of the mean.

Edit: OK, if n = 45 for both samples, you can do it. You don't need the actual data. Look up the standard error and how to construct confidence intervals. If the 95% confidence intervals overlap, the two samples are not statistically different by the usual standard. You can also use the two-sample t test to get a p value.

http://ccnmtl.columbia.edu/projects/qmss/the_ttest/twosample_ttest.html

Thanks. OK, I can get a t value that way, but how can I get a p value? If we look at http://graphpad.com/quickcalcs/PValue1.cfm, we see that I need a df value. How can I get it?
 
  • #9
leonidnk said:
Thanks. OK, I can get a t value that way, but how can I get a p value? If we look at http://graphpad.com/quickcalcs/PValue1.cfm, we see that I need a df value. How can I get it?

Just put in 45; strictly, the pooled two-sample test has df = n1 + n2 − 2 = 88, but the t values for dfs above 30 don't change much. This test is used for small samples.

Your samples are also large enough for the usual z-score test comparing two means.

http://www.cliffsnotes.com/study_guide/TwoSample-zTest-Comparing-Two-Means.topicArticleId-25951,articleId-25938.html . Let me know how they compare for your data.
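A minimal sketch of both routes in Python, using the same placeholder summary statistics as above; the strict df of 88 and the "45" shortcut give nearly identical p values:

from math import sqrt
from scipy import stats

n = 45
x1, s1 = 10.2, 1.5            # placeholder summary statistics
x2, s2 = 11.0, 1.8

se = sqrt(s1**2 / n + s2**2 / n)        # standard error of the difference in means
t_stat = (x1 - x2) / se

p_t_45 = 2 * stats.t.sf(abs(t_stat), 45)       # p value using df = 45 as suggested
p_t_88 = 2 * stats.t.sf(abs(t_stat), 2*n - 2)  # p value using the strict df = 88
p_z    = 2 * stats.norm.sf(abs(t_stat))        # z test: same statistic, normal distribution

print(p_t_45, p_t_88, p_z)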
 
  • #10
SW VandeCarr said:
Just put in 45; strictly, the pooled two-sample test has df = n1 + n2 − 2 = 88, but the t values for dfs above 30 don't change much. This test is used for small samples.

Your samples are also large enough for the usual z-score test comparing two means.

http://www.cliffsnotes.com/study_guide/TwoSample-zTest-Comparing-Two-Means.topicArticleId-25951,articleId-25938.html . Let me know how they compare for your data.

Thank you. But I cannot see any big difference between this z test and the t test at http://ccnmtl.columbia.edu/projects/qmss/the_ttest/twosample_ttest.html.
 
  • #12
SW VandeCarr said:
Good. Which gives the higher p value?

If these methods are the same, they should give the same p value.
 
  • #13
leonidnk said:
If these methods are the same, they should give the same p value.

They are not the same. There are three common tests for statistical significance, which give z scores, t scores, or chi-square scores for three distributions: the Gaussian (normal), the t, and the chi-square. The latter two are more precise for small samples and are sensitive to the number of comparisons, or degrees of freedom (means and SDs are summary statistics). Both the t and the (central) chi-square distributions converge to the Gaussian for larger sample sizes or numbers of comparisons. Generally, samples of about 30 or more are adequate for z-score testing when the distribution of values within the samples is assumed to be well behaved (few or no outliers).

The p values for each of the three are calculated with respect to their own distributions, but they converge to the z-score calculation with sufficiently large samples. Without knowing the shape of your distributions, I thought the t statistic was probably a bit better.
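A quick numerical sketch of that convergence in Python (the test statistic of 2.0 is arbitrary, chosen only for illustration):

from scipy import stats

t_value = 2.0                           # arbitrary test statistic
for df in (5, 10, 30, 88, 1000):
    p_t = 2 * stats.t.sf(t_value, df)   # two-sided p under the t distribution
    print(df, round(p_t, 4))

# the normal (z) tail probability these p values converge to
print("normal:", round(2 * stats.norm.sf(t_value), 4))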
 
  • #14
SW VandeCarr said:
They are not the same. There are three common tests for statistical significance, which give z scores, t scores, or chi-square scores for three distributions: the Gaussian (normal), the t, and the chi-square. The latter two are more precise for small samples and are sensitive to the number of comparisons, or degrees of freedom (means and SDs are summary statistics). Both the t and the (central) chi-square distributions converge to the Gaussian for larger sample sizes or numbers of comparisons. Generally, samples of about 30 or more are adequate for z-score testing when the distribution of values within the samples is assumed to be well behaved (few or no outliers).

The p values for each of the three are calculated with respect to their own distributions, but they converge to the z-score calculation with sufficiently large samples. Without knowing the shape of your distributions, I thought the t statistic was probably a bit better.

Thank you very much!:smile:
 

What does it mean to compare two arithmetic means relative to their standard deviations?

Comparing the difference between two arithmetic means against the standard deviations of the data is a basic way to judge whether two sets of measurements genuinely differ or merely reflect random spread. The standard deviation measures how much individual data points deviate from their mean, so it provides the natural yardstick for deciding whether a gap between two means is large or small.

How do you calculate the standardized difference between two arithmetic means?

Subtract the mean of one data set from the mean of the other, then divide the result by a combined measure of spread built from the two standard deviations. The result expresses the difference between the two data sets in standard-deviation units.

What does a significant difference between two arithmetic means indicate?

A statistically significant difference between two arithmetic means indicates that the gap between them is unlikely to be due to random variation alone. The underlying cause could be a genuine effect, but it could also reflect differences in sample sizes, populations, or measurement methods, so it is important to investigate further before drawing conclusions.

How are means and standard deviations used in hypothesis testing?

In hypothesis testing, the difference between two means is scaled by its standard error (computed from the standard deviations and sample sizes) to form a test statistic such as a t statistic or z score. This statistic is compared to a critical value, or converted to a p value; if it exceeds the critical value (equivalently, if p is below the chosen threshold), the difference between the data sets is considered significant.
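A minimal Python sketch of that comparison, assuming a two-sided test at the 5% level and a hypothetical t statistic:

from scipy import stats

t_stat = 2.3         # hypothetical test statistic
df = 88              # degrees of freedom for two samples of 45 (n1 + n2 - 2)
alpha = 0.05

t_crit = stats.t.ppf(1 - alpha / 2, df)    # two-sided critical value
p_value = 2 * stats.t.sf(abs(t_stat), df)  # equivalent p value
significant = abs(t_stat) > t_crit         # reject the null if the statistic exceeds t_crit

print(t_crit, p_value, significant)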

What are some limitations of comparing means using only summary statistics?

Means and standard deviations are only summary statistics. They may not capture the full shape of the distributions (skewness, outliers, multimodality) or provide a complete picture of how two data sets differ. The context of the data and any potential biases should also be considered when interpreting the result of such a comparison.
