# True value of exp

1. May 5, 2008

### dazhuo

Question: we take repeated measurements in an experiment and compute the average x. Unfortunately, that is not the true value of the measured quantity. How can we define the true value?

Suppose we eliminate all systematic errors, take an infinite number of measurements, and then take the average (mathematically, that is the population mean rather than the sample mean, and it is also not practical). Is that average the true value? Can anybody clarify this?

2. May 5, 2008

### CompuChip

Welcome to PF.
Do you mean: why can the expectation value of a die throw be 3.5 when each throw can only give an integer, and is there a way to define a number which we will get most of the time?
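A quick illustration of the point about the die (a sketch, not from the original thread): no single roll can equal 3.5, but the average of many rolls approaches the expectation value.

```python
import random
import statistics

# No single die roll can be 3.5, but the average of many
# independent rolls converges to the expectation value 3.5.
random.seed(3)
rolls = [random.randint(1, 6) for _ in range(100_000)]
avg = statistics.mean(rolls)
print(avg)
```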

3. May 5, 2008

### Shimo

From the Central Limit Theorem (correct me if that's wrong), the average of an infinite number of measurements will converge to the true value, assuming the measurements are independent.

4. May 5, 2008

### Staff: Mentor

If there is error in the measurements (there is always error in the measurements), you can never get the "true" value through experiment alone.

5. May 5, 2008

### f95toli

Not to mention the fact that the result of most experiments is a real number (e.g. 1.33634), and there is no such thing as an "exact" real number.

There are of course exceptions, an experiment where we know before we start that the answer will be either "2 apples" or "3 oranges" can of course result in an "exact" answer.

6. May 6, 2008

### Andy Resnick

In experimental science, 'error' does not mean mistake. An essential source of error is the discrete nature of measurement- a ruler does not have infinitely fine gradations, for example. When reporting results, one does not say (or presume to say) what the 'true value of X is'. In fact, one can argue that the concept of "true value" is meaningless.

Repeated measurements, if independent, tend to form a Gaussian distribution, so experimental results are presented in terms of the mean and standard deviation. Careful papers (think stuff coming out of a National Standards Lab) give an accounting for the various sources of measurement error and the magnitudes of the error.
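The reporting convention described above can be sketched as follows (the numbers are invented for illustration: a simulated quantity of 10.0 units measured with Gaussian noise):

```python
import random
import statistics

# Hypothetical example: 1000 independent measurements of a quantity
# whose underlying value is 10.0, with Gaussian measurement noise.
random.seed(42)
measurements = [random.gauss(10.0, 0.5) for _ in range(1000)]

mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)  # sample standard deviation

# Results are conventionally reported as mean ± standard deviation.
print(f"{mean:.3f} ± {stdev:.3f}")
```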

7. May 6, 2008

### dazhuo

I am reading up on data analysis. The mean and standard deviation are just for the sample, not for the population. And what we really want is the error of the mean relative to the real (I do not want to use 'true') value. From the material I have, the error of the mean relative to the real value is s/sqrt(N), where s is the standard deviation and N is the number of measurements. Is my understanding correct?
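The distinction drawn in this post can be sketched numerically (invented parameters; a sample with spread s, whose mean has the smaller estimated uncertainty s/sqrt(N)):

```python
import math
import random
import statistics

# s describes the spread of individual measurements; s/sqrt(N)
# estimates how far the sample mean sits from the population mean.
random.seed(1)
N = 100
sample = [random.gauss(5.0, 0.2) for _ in range(N)]

s = statistics.stdev(sample)   # spread of single measurements
sem = s / math.sqrt(N)         # estimated error of the mean
print(f"mean = {statistics.mean(sample):.4f}, s = {s:.4f}, s/sqrt(N) = {sem:.4f}")
```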

8. May 6, 2008

### Crosson

The example you give is a rational number, 133634/100000.

There are many exact real numbers that are not rational e.g. $\sqrt{2} , \pi$.

9. May 6, 2008

### f95toli

Sorry, I forgot the ....

10. May 7, 2008

### Andy Resnick

I'm not sure- I don't understand what you are saying. Are you referring to a t-test? That's the probability that your measurement is indistinguishable from another measurement.

11. May 9, 2008

### dazhuo

Maybe I should clarify my point. We take the average of our repeated measured values, and also calculate the standard deviation s. But the standard deviation s is just the spread of all our measured values; what we really want is how close our average X is to the real value (which we can never reach). Therefore I am a little worried about how we can know the magnitude of the deviation of X from the real value. Can anybody answer? I just started to study data analysis, so maybe my question is just nonsense.

Are most members of this forum physics undergraduates, or graduates? I hope I can discuss problems with some people interested in physics.

12. May 9, 2008

### Andy Resnick

Ah- you are interested in the difference between "accuracy" and "precision". A low standard deviation means the precision is high. It's possible to have a highly precise measurement with poor accuracy. It's also possible to have a highly accurate measurement suffering from poor precision.

There's a lot of information about this out there; you may be interested in the notion of 'accepted values' of quantities.
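The accuracy/precision distinction above can be sketched with simulated instruments (all numbers invented; the "accepted value" here is just an illustrative constant): one instrument is precise but biased, the other accurate but noisy.

```python
import random
import statistics

# Two simulated instruments measuring a quantity with accepted value 9.81:
# one precise but inaccurate (small spread, systematic offset),
# one accurate but imprecise (no offset, large spread).
random.seed(0)
true_value = 9.81

precise_but_biased = [random.gauss(10.05, 0.01) for _ in range(500)]
accurate_but_noisy = [random.gauss(9.81, 0.30) for _ in range(500)]

for name, data in [("precise/biased", precise_but_biased),
                   ("accurate/noisy", accurate_but_noisy)]:
    mean = statistics.mean(data)
    spread = statistics.stdev(data)
    print(f"{name}: mean={mean:.3f} (offset {mean - true_value:+.3f}), s={spread:.3f}")
```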

13. May 12, 2008

### Lojzek

The reason the average has a different standard deviation than the variable itself is the definition of the average, which involves a summation:
It is easy to show that the squared standard deviation of a sum (of independent variables) equals the sum of the squared standard deviations. So if a variable has standard deviation s, then the sum of N independent measurements of this variable has standard deviation sqrt(N)*s. The average is defined as the sum of N values divided by N, so it has standard deviation sqrt(N)*s/N = s/sqrt(N).
This is what we would expect, since summing multiple measurements with errors of different signs causes partial cancellation of the errors, which results in a smaller relative error.
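The s/sqrt(N) scaling derived above can be checked numerically (a sketch with invented parameters): compute many independent averages of N noisy values and compare their spread to the prediction.

```python
import math
import random
import statistics

# Numerical check: averaging N independent measurements shrinks the
# standard deviation of the result from s to roughly s/sqrt(N).
random.seed(7)
s, N, trials = 1.0, 25, 2000

# Spread of many independently computed averages of N values each.
averages = [statistics.mean(random.gauss(0.0, s) for _ in range(N))
            for _ in range(trials)]
observed = statistics.stdev(averages)
predicted = s / math.sqrt(N)
print(f"observed {observed:.3f} vs predicted {predicted:.3f}")
```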