Error on the mean

  • Thread starter kelly0303
Hello! I have some measurements with errors associated with them: ##x_i \pm \delta x_i## and I want to cite the value of the mean with its error. I see online that the error on the mean is defined as ##\sigma/\sqrt N##, where ##\sigma## is the standard deviation of my measurements and ##N## is the number of measurements. However, this formula seems to ignore the error on the individual data points. How should I compute the error in this case? Thank you!
 
In your case, the σ/√N equation is not very useful.
A better method may be to define the band of y-axis values that contains, for example, 95% of the error bars; the mean value would then be the center of the narrowest such band.
 
Or you could use the standard propagation of errors formula.
 
kelly0303
Or you could use the standard propagation of errors formula.
I was thinking of doing that, but that doesn't take into account the standard deviation of the data itself, just the errors on the individual points. Would that be correct?
 
kelly0303
Or you could use the standard propagation of errors formula.
Just to make sure I am clear about my question: say I have 2 measurements of a physical quantity, ##10 \pm 3## and ##12 \pm 2##. The mean would be ##11##; what error should I put on this mean value?
 
kelly0303
Whatever the propagation of errors formula says.
But I am still confused. If my results are ##10 \pm 3## and ##12 \pm 2## or ##1000 \pm 3## and ##1200 \pm 2## (say from 2 different experiments), the propagation of errors formula would give the same value in both cases. But obviously the two situations are widely different. The propagation of errors gives a measure of the precision of the measurement, but it completely ignores the accuracy (which would be the standard deviation of the 2 measurements?). My question is how I can reflect both of these in my final error on the mean.
 

FactChecker

With no other information, you should consider each measurement as a random result from a random variable. Apply the statistical equations and theorems to the measured values.
 
kelly0303
With no other information, you should consider each measurement as a random result from a random variable. Apply the statistical equations and theorems to the measured values.
So what would be the error in my initial example?
 
kelly0303
The variance of the mean is not widely different.
I am not sure what you mean. In the first case the sigma is 100, so the error on the mean, given by ##\sigma/\sqrt N##, is ##100/\sqrt 2 = 70.71##; in the other case we have ##1/\sqrt 2 = 0.71##. The results are quite different. Am I misunderstanding something here?
 

FactChecker

I would calculate the sample mean and sample standard deviation of the measurements using the usual formulas.
 
kelly0303
I would calculate the sample mean and sample standard deviation of the measurements using the usual formulas.
But wouldn't that completely ignore the error on the individual data points?
 

RPinPA

But wouldn't that completely ignore the error on the individual data points?
The variance of ##\sum_i x_i## is the sum of the variances of the individual ##x_i##, under the assumption the ##x_i## are independent. Your individual errors are presumably something like ##90\%## confidence limits, and so proportional to the standard deviation of each ##x_i##. Thus you can calculate the individual variances and they very much depend on the error of the individual data points.

The variance of the sample mean ##\bar x = (1/N) \sum_i x_i## is equal to ##(1/N)^2## times the variance of ##\sum_i x_i##.

Note that when the data points all have the same variance ##\sigma^2##, this reduces to the standard result ##\sigma_{\bar x}^2 = (1/N^2)\, N\sigma^2 = \sigma^2 / N##, or ##\sigma_{\bar x} = \sigma/\sqrt N##.
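
For concreteness, here is a minimal numerical sketch of that propagation (an illustrative Python snippet, assuming the quoted ##\pm## values are one-sigma uncertainties and the measurements are independent), applied to the two-measurement examples from this thread:

Code:
import math

def mean_with_propagated_error(values, sigmas):
    # Mean of N independent measurements and its propagated error:
    # sigma_mean = sqrt(sum of sigma_i^2) / N
    n = len(values)
    mean = sum(values) / n
    sigma_mean = math.sqrt(sum(s ** 2 for s in sigmas)) / n
    return mean, sigma_mean

# 10 +/- 3 and 12 +/- 2:     mean 11.0,   propagated error sqrt(13)/2 ~ 1.80
print(mean_with_propagated_error([10, 12], [3, 2]))
# 1000 +/- 3 and 1200 +/- 2: mean 1100.0, same propagated error ~ 1.80
print(mean_with_propagated_error([1000, 1200], [3, 2]))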
 
kelly0303
So assuming we have (for simplicity) ##10 \pm 1## and ##12 \pm 1##, the final result would be ##11 \pm 1/\sqrt 2##. But if we have ##1000 \pm 1## and ##1200 \pm 1##, the result would be ##1100 \pm 1/\sqrt 2##. So the error would be the same in both cases? Shouldn't I account somehow for the fact that in the second case my measurements are so far apart? I was thinking of doing something like this: first get the standard deviation of the samples, ##\sigma_1## (which doesn't care about the individual errors on the data points), then get the error obtained by error propagation when calculating the mean, ##\sigma_2## (which doesn't care about the distribution of the data points), and then add them in quadrature. This way I would take both effects into account. I am just not sure if this is right, or how I should adjust it to my problem.
 

RPinPA

OK, I sort of see what you're saying, but the sample variance is the wrong statistic to answer the question you're asking implicitly.

When you take a sample mean, you are assuming that all your measurements are of the same quantity. You are assuming there is some unchanging "real" value, and that the measurements are disturbed from it by some process which is random and symmetric unless there's some reason to think otherwise (a normal distribution being the usual assumption). The sample variance estimates the experimental error that is causing the measured values to differ from the true value, above or below.

So if you had a measurement ##1000 \pm 1##, meaning that you are ##90\%## confident that the actual value is between ##999## and ##1001##, and then you take another measurement and you are ##90\%## confident that the actual value is between ##1199## and ##1201##, then you have a problem. The question you are asking implicitly is "are those measuring the same thing" and that gets you into hypothesis testing, which would give you the result that those are not the same measurement. And then you have to ask if ##\bar x## means anything.
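
(As a rough worked version of that hypothesis test, using the numbers above and treating the ##\pm 1## as one-sigma uncertainties for simplicity: the difference between the two measurements is ##1200 - 1000 = 200## with an uncertainty of ##\sqrt{1^2 + 1^2} = \sqrt 2 \approx 1.4##, i.e. a discrepancy of roughly ##140## standard deviations, so the hypothesis that both measure the same fixed value would be rejected overwhelmingly.)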

My response was an answer to the narrow question of "what is the variance of ##\bar x## in terms of the variances of the individual sample points?" If you know the variance of the individual data points, my analysis is valid. The sample variance is what you use as an estimator of the true variance when you don't know it, and with measurements like that you'd have to ask whether the ##\pm 1## was really a good estimate of the individual variances.
 
I am not sure what you mean. In the first case the sigma is 100, so the error on the mean, given by ##\sigma/\sqrt N##, is ##100/\sqrt 2 = 70.71##; in the other case we have ##1/\sqrt 2 = 0.71##. The results are quite different. Am I misunderstanding something here?
I have no idea where you are getting 100 from. If your results are ##10 \pm 3## and ##12 \pm 2## or ##1000 \pm 3## and ##1200 \pm 2## then you can do a simple Monte Carlo simulation (assume normally distributed) to see what the distribution of your mean would be. For 10000 draws of (X+Y)/2 where X~N(10,3) and Y~N(12,2) I got a mean of 11.0 and a standard deviation of 1.79. For 10000 draws of (X+Y)/2 where X~N(1000,3) and Y~N(1200,2) I got a mean of 1100 and a standard deviation of 1.81. So the variance is the same in both cases as close as you can expect with Monte Carlo.

Please run a Monte Carlo simulation yourself. The propagation of errors formula correctly identified that the variance is the same in both cases. You are incorrect in thinking that the variance is widely different in the two cases.
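
A minimal sketch of such a simulation (an illustrative script, assuming NumPy is available and treating the quoted errors as one-sigma widths of normal distributions):

Code:
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Case 1: X ~ N(10, 3), Y ~ N(12, 2); distribution of the mean (X + Y) / 2
m1 = (rng.normal(10, 3, n) + rng.normal(12, 2, n)) / 2
# Case 2: X ~ N(1000, 3), Y ~ N(1200, 2)
m2 = (rng.normal(1000, 3, n) + rng.normal(1200, 2, n)) / 2

print(m1.mean(), m1.std())  # roughly 11.0 and 1.80
print(m2.mean(), m2.std())  # roughly 1100 and 1.80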
 

Stephen Tashi

Hello! I have some measurements with errors associated with them: ##x_i \pm \delta x_i## and I want to cite the value of the mean with its error.
You aren't specific about the meaning of "error". (What is meant by "the mean" is also ambiguous.)

On one hand, a measuring instrument can have a calibration guarantee such as ##\pm##2%.

On the other hand, a particular person can have a weight that differs from the mean weight of the population of people in his country. The difference between the person's weight and the mean weight of the population isn't necessarily an "error" on the person's part. His weight might be the optimal weight for his health. Calling this difference a "deviation" is clearer terminology.

And yet, on a third hand, a number can be computed from sample data and published as an estimate of the mean weight of the population. The difference between the estimate and the actual mean weight can be considered an "error".

I want to cite the value of the mean with its error.
I know that some people use terminology that is ambiguous or completely screwed-up and yet manage to function well. However, for many people, increasing the precision of terminology helps increase understanding. In talking about statistics, it is a struggle to avoid ambiguity because terms like "the mean", "the standard deviation", etc. are themselves ambiguous.

The two major branches of statistics are 1) Hypothesis testing and 2) Estimation. I think you want to publish an estimate of the population mean (for whatever population you are considering). You also want to publish an estimate of the standard deviation of something. What that thing is, isn't simple to sort out! Statistics is subjective and different fields of study have different traditions. You should ask people in your field of study about traditional ways of computing the numbers you want to publish.

If we ignore tradition, we have to face the fact that problems studied in statistics are conceptually sophisticated and complicated - even when their computations are simple arithmetic. A number such as 1036.8 does not have a standard deviation. It doesn't vary. It is only when we model a random process that generates the number that we can associate a standard deviation with the number. What is the model for the random process that generates the estimate that you wish to publish?

Picking a probability model for a situation is subjective, but unless you have a specific model in mind, there is no objectively correct way to associate a standard deviation with one numerical value generated by that process. Rather than mastering the art of creating probability models, you may find it simpler to investigate traditions!
 
kelly0303
I have no idea where you are getting 100 from. If your results are ##10 \pm 3## and ##12 \pm 2## or ##1000 \pm 3## and ##1200 \pm 2## then you can do a simple Monte Carlo simulation (assume normally distributed) to see what the distribution of your mean would be. For 10000 draws of (X+Y)/2 where X~N(10,3) and Y~N(12,2) I got a mean of 11.0 and a standard deviation of 1.79. For 10000 draws of (X+Y)/2 where X~N(1000,3) and Y~N(1200,2) I got a mean of 1100 and a standard deviation of 1.81. So the variance is the same in both cases as close as you can expect with Monte Carlo.

Please run a Monte Carlo simulation yourself. The propagation of errors formula correctly identified that the variance is the same in both cases. You are incorrect in thinking that the variance is widely different in the two cases.
Why would I assume X~N(1000,3)? What I mean is that I make 2 measurements. For the first one I get a value of 1000 with an empirical error (say from equipment systematics) of 3. For the second one I get 1200 with an empirical error of 2. Both numbers come from the same distribution (which I don't know), not from 2 different distributions. The empirical value of the mean is 1100 and the empirical value of the sigma (ignoring the error on each individual measurement) is $$\sqrt{(1200-1100)^2+(1100-1000)^2}=100\sqrt 2$$ In the other case the empirical value of the sigma is ##\sqrt 2##. They are 2 orders of magnitude apart, which is quite a lot. Is my logic flawed? So what I am trying to figure out is how to take into account, in the end, both this sigma, given by how far apart my 2 measurements are, and the error on each individual measurement.
 
kelly0303
(Replying to Stephen Tashi's post above.)
Thank you for your reply, and I am sorry if I was not clear. What I mean by error is any uncertainty associated with one single measurement. As a concrete example, say I measure the length of an object with a ruler that can't go lower than 1 (in some arbitrary units). So for any measurement my error is 1. Now let's say that I do 2 measurements, getting ##100 \pm 1## and ##110 \pm 1##. The error that I mentioned above is this "1" obtained from each measurement (the error associated with an individual data point). What I am trying to figure out is, assuming that I say that the length of the object is the mean, i.e. 105, what error should I associate with this value? Sorry again if I was not clear before.
 
Why would I assume X~N(1000,3)? What I mean is that I make 2 measurements. For the first one I get a value of 1000 with an empirical error (say from equipment systematics) of 3. For the second one I get 1200 with an empirical error of 2. Both numbers come from the same distribution (which I don't know), not from 2 different distributions.
It is completely meaningless to say that they come from the same distribution and then give different variances for the two measurements. If they come from the same distribution then they must have the same variance.
 
kelly0303
It is completely meaningless to say that they come from the same distribution and then give different variances for the two measurements.
Why would that be? Continuing the example I gave above, say that I measure the length with a ruler that can't go below 1 and I get ##100 \pm 1##; then I measure the same object with a ruler that can't go below 5, and I get ##110 \pm 5##. The measurements of the length will form a Gaussian distribution, but the error I get on each individual measurement has to do with my measuring instrument, not with the original distribution.
 
Why would that be? Continuing the example I gave above, say that I measure the length with a ruler that can't go below 1 and I get ##100 \pm 1##; then I measure the same object with a ruler that can't go below 5, and I get ##110 \pm 5##. The measurements of the length will form a Gaussian distribution, but the error I get has to do with my measuring instrument, not with the original distribution.
The measurements with those two rulers would clearly not have the same distribution.
 
kelly0303
The measurements with those two rulers would clearly not have the same distribution.
OK... but still, how do I combine them into one result?
 

Stephen Tashi

OK... but still, how do I combine them into one result?
You have to be specific about the interpretation of the number you wish to publish. Your example of the "uncertainty" in measuring with a ruler doesn't involve probability. If the ruler can be read to ##\pm## a tenth of an inch, then a measurement with the ruler is guaranteed to be within that distance of the actual value. There is no probability associated with that guarantee - except that it is 100% probable that it's true.

"Uncertainty" is an ambiguous term. How is it interpreted in your field of study? Is it supposed the be the standard deviation of a random variable? Or is it supposed to give an absolute guarantee about something?

To repeat, there is no objective answer to your question unless the question is converted to an unambiguous form. To make that conversion, you must appreciate the conceptual sophistication of statistics.
 
