
## Summary:

- How can the uncertainty of even a few measured values be "honored" by a factor of ##1/\sqrt{N}##?

## Main Question or Discussion Point

I have a question about statistics or measurement technology.

Let ##K## and ##N## be natural numbers. We measure a variable ##x_{kn}##, where ##k\in\{1,\dots,K\}## and ##n\in\{1,\dots,N\}##.

Let us say that you have ##K## measurement series, each measuring ##N## values ##x_{kn}##, no matter how big ##N## is.

The measured value is the mean value of the ##x_{kn}##, plus or minus the uncertainty ##u_k##.

The uncertainty is ##s_{xk}##, and we can correct the result with a Student factor ##t_k## to be more precise for ##N \leq 200## (a rule of thumb; sorry, it is engineering I am speaking about).
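To make the Student correction concrete, here is a small sketch (my own illustration, not from the original question) of how ##t_k## depends on ##N## for an assumed 95% two-sided confidence level, using `scipy.stats`:

```python
# Sketch: the Student factor t_k for N measurements (95% two-sided confidence,
# an assumed example level). It shrinks toward the normal quantile as N grows.
from scipy.stats import t, norm

for N in (3, 10, 30, 200):
    t_k = t.ppf(0.975, df=N - 1)      # Student quantile with N-1 degrees of freedom
    print(f"N = {N:4d}: t_k = {t_k:.3f}")

# For large N the Student factor approaches the normal quantile ~1.960,
# which is why the correction matters mostly for small N:
print(f"normal:     z = {norm.ppf(0.975):.3f}")
```

The point of the rule of thumb is visible in the numbers: for small ##N## the factor is noticeably larger than the normal quantile, and by ##N \approx 200## the difference is negligible.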

Now the miracle begins (honestly, I just don't understand it). The saying goes: the value we measured is much more accurate than we may believe. If we measure ##K \rightarrow \infty## series of ##N## measurements each, then we can estimate the "true value" much more accurately - which is obviously true. We basically calculate the standard deviation of the mean value and get a factor ##1/\sqrt{N}##, which makes the uncertainty much smaller.
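The "standard deviation of the mean" claim can be checked numerically. The following sketch (my own, with assumed example numbers) simulates ##K## measurement series of ##N## values each and compares the scatter of the ##K## series means with ##\sigma/\sqrt{N}##:

```python
# Sketch: simulate K measurement series of N values each and compare the
# scatter of the series means with sigma/sqrt(N). K, N, sigma are assumed
# example values, not from the original question.
import numpy as np

rng = np.random.default_rng(0)
K, N, sigma = 100_000, 10, 1.0

x = rng.normal(loc=5.0, scale=sigma, size=(K, N))  # K series, N values each
means = x.mean(axis=1)                             # one mean per series

print(f"std of the K series means: {means.std(ddof=1):.4f}")
print(f"sigma / sqrt(N):           {sigma / np.sqrt(N):.4f}")
```

The two printed numbers agree closely: the means of ##N##-value series really do scatter about ##\sqrt{N}## times less than the individual values.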

And here I need your help: this is done even for ##K=1##. The story is "imagine that we had done it several times, very often"... but we didn't! In measurement theory we divide by ##\sqrt{10}## even for ##10## single values.

How does this work?

Thanks,

Jens
