Measurements - How can it be that precise?

  • Thread starter: Omega0
  • Tags: Measurements
Summary
The discussion centers on the precision of measurements and the role of uncertainty in statistical analysis. It highlights that averaging multiple measurements can reduce uncertainty, as indicated by the factor of 1/√N, which suggests that more measurements lead to a more accurate estimate of the true value. However, confusion arises regarding the interpretation of uncertainty versus error, and the implications of having only one measurement. Participants emphasize the importance of understanding the statistical properties of measurements and the assumptions behind them, particularly regarding independent random variables. Ultimately, the conversation underscores the complexity of measurement theory and the need for clarity in discussing uncertainty and accuracy.
  • #31
Dale said:
What is a “measurement campaign” and why do you need one at all?
Well, what do you call it when you measure things? In US English? A measurement? We have different words in German, I guess. A "measurement" would be a single value, and the "campaign" would be a bunch of values. So please don't ask me about the correct English terms of measurement technology.
For me, a measurement campaign means measuring several values in a row (under certain conditions, etc.).
 
  • #32
Omega0 said:
It is - tell me if I am wrong - as if you expect the measurement campaign to be N times identical, correct?
Yes, the formula only works if the ##N## measurements are independent and identically distributed. That is usually an explicit assumption in the derivation, often phrased in terms of random sampling of a fixed large population.
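A minimal sketch of that standard derivation (using population variances ##\sigma^2## rather than the sample estimates, and using the i.i.d. assumption so that the variances of independent terms add):
$$\operatorname{Var}(\bar{x})=\operatorname{Var}\left(\frac{1}{N}\sum_{i=1}^{N}x_i\right)=\frac{1}{N^2}\sum_{i=1}^{N}\operatorname{Var}(x_i)=\frac{\sigma^2}{N}$$
Taking square roots gives the ##1/\sqrt{N}## factor: ##\sigma(\bar{x})=\sigma/\sqrt{N}##.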
 
  • #33
Omega0 said:
Let us get to the last step, which is the one that made me wonder a lot. We take all ##s^2(y_k)## to be literally the same, as they are made of the same input values. It is - tell me if I am wrong - as if you expect the measurement campaign to be ##N## times identical, correct?

This is why you suddenly have such a simple formula. This is why suddenly $$s(\bar{x})=\frac{s(x_i)}{\sqrt{N}}$$
That equation does not require that the values of the sample are identical. It requires that they are all independent samples from the same distribution. Aren't you talking about independent samples? If they are from distributions that are correlated, then the formula must include the correlation coefficients.
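A quick numerical sketch of this point (illustrative Python, not from the thread; the equicorrelation ##\rho## between measurements is an assumed toy model):

```python
import numpy as np

# Compare the spread of the sample mean for independent vs. equicorrelated
# measurements. All numbers here are illustrative choices.
rng = np.random.default_rng(0)
N, trials, sigma, rho = 25, 100_000, 1.0, 0.5

# Independent case: the standard deviation of the mean is sigma / sqrt(N).
indep = rng.normal(0.0, sigma, size=(trials, N))
print("independent:", indep.mean(axis=1).std(),
      " theory:", sigma / np.sqrt(N))

# Equicorrelated case: every pair of measurements shares correlation rho, so
# Var(mean) = sigma^2 * (1 + (N - 1) * rho) / N.
cov = sigma**2 * ((1 - rho) * np.eye(N) + rho * np.ones((N, N)))
corr = rng.multivariate_normal(np.zeros(N), cov, size=trials)
print("correlated: ", corr.mean(axis=1).std(),
      " theory:", sigma * np.sqrt((1 + (N - 1) * rho) / N))
```

As ##N \to \infty## the correlated case levels off at ##\sigma\sqrt{\rho}## instead of shrinking like ##1/\sqrt{N}##, which is why the independence assumption matters.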
 
  • #34
Omega0 said:
Now let us speak about the standard deviation of the mean value of arbitrary functions which have ##N## measured values each.
Let us get to the last step, which is the one that made me wonder a lot. We take all ##s^2(y_k)## to be literally the same, as they are made of the same input values.

You must distinguish between the mean value and standard deviation of a random variable and the mean value and standard deviation of a particular set of samples of that random variable.

Suppose the mean of random variable ##X## is ##\mu_X## and its variance is ##\sigma^2_X##. The mean ##M## of ##N## independent samples of that random variable is also a random variable. The mean of ##M## is ##\mu_X## and the variance of ##M## is ##\sigma^2_X/N##.

However, for a particular set of ##N## samples of ##X##, there is no guarantee that the mean of those ##N## values will be ##\mu_X## and no guarantee that the variance of those ##N## values will be ##\sigma^2_X##. In fact, for typical random variables, it's unlikely that sample statistics will exactly match population parameters.

So you can't formulate a proof that the variance of ##M## is ##\sigma_X^2/N## by imagining that you are working with ##N## particular values of ##X##.
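A small Monte Carlo sketch of this distinction (illustrative Python; the parameter values are assumed):

```python
import numpy as np

# The sample mean M, viewed as a random variable, has mean mu and variance
# sigma^2 / N; any one particular sample of N values merely scatters around
# the population parameters. All numbers are illustrative.
rng = np.random.default_rng(1)
mu, sigma, N, trials = 10.0, 2.0, 16, 50_000

samples = rng.normal(mu, sigma, size=(trials, N))
means = samples.mean(axis=1)

print("mean of M:", means.mean(), " theory:", mu)
print("Var(M):   ", means.var(), " theory:", sigma**2 / N)

# One particular set of N values need not reproduce the parameters:
one = samples[0]
print("one sample's mean:    ", one.mean(), " vs mu =", mu)
print("one sample's variance:", one.var(ddof=1), " vs sigma^2 =", sigma**2)
```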
 
  • #35
Look at the Central Limit Theorem.
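A brief illustration of what the theorem contributes here (Python sketch; the exponential distribution is an arbitrary skewed choice): means of ##N## i.i.d. draws look increasingly normal as ##N## grows.

```python
import numpy as np

# Skewness of the sample mean of N i.i.d. exponential draws. The exponential
# distribution has skewness 2; for means of N draws it decays like 2/sqrt(N).
rng = np.random.default_rng(2)
trials = 200_000
for N in (1, 4, 16, 64):
    means = rng.exponential(scale=1.0, size=(trials, N)).mean(axis=1)
    z = (means - means.mean()) / means.std()
    print(f"N = {N:2d}  skewness of the sample means: {np.mean(z**3):+.3f}")
```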
 
  • #36
Guys, could you give me an online source where the factor $$1/\sqrt{N}$$ for the standard deviation of the mean of one data sample is derived correctly?
Thanks.
 
  • #39
Dale, that's it. Now I got it. Thanks to you and the other guys, Stephen, FactChecker, etc.

To justify my stupid mistake a bit: in the (very good) book it says, see the derivation above, my translation: "Here the ##s^2\left(y_i\right)## are all the same because they are each built from the same input values."
What I absolutely didn't understand is that, naturally, it is not the input values that need to be the same, but the ##s^2##.
This was my big mistake.
Having said this, I would have found it fair if the professors had remarked on this. It seems they expected knowledge of statistics above my level.

Thanks and sorry for my ignorance.
 
  • #40
Which formulae for population parameters also work for sample statistics that estimate them?

If we have random samples ##S_x##, ##S_y## of ##N## values taken from each of two random variables ##X## and ##Y##, we can imagine the values in ##S_x## and ##S_y## to define an empirical joint probability distribution. So the sample mean of the pairwise sums of values in ##S_x## and ##S_y## should be the sum of their sample means - analogous (but not identical) to the fact that ##E(X+Y) = E(X) + E(Y)##.

However, if ##X## and ##Y## are independent random variables, there is no guarantee that the empirical distribution of ##N## pairs of numbers of the form ##(x,y), x \in S_x, y\in S_y## will factor as a distribution of x-values times an (independent) distribution of y-values. So we can't conclude that the sample variance of ##x+y## is the sample variance of the x-values plus the sample variance of the y-values.

If, instead of ##N## pairs of values, we looked at all the possible ##N^2## pairs ##(x,y)##, we would get an empirical distribution where the x and y values are independent. Then something analogous to ##Var(X+Y) = Var(X) + Var(Y)## should work out. We would have to be specific about what definition of "sample variance" is used. For example, can we use the unbiased estimators of the population variances?
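A numerical sketch of the two constructions (illustrative Python; the ##1/N##-normalized, biased variance is used deliberately, since the exact additivity holds for that definition over the ##N^2## product pairs):

```python
import numpy as np

# N paired sums vs. all N^2 sums x_i + y_j. All values are illustrative.
rng = np.random.default_rng(3)
N = 50
x = rng.normal(0.0, 1.0, N)
y = rng.normal(0.0, 3.0, N)

# N paired sums: the 1/N-normalized sample variances need not add up,
# because the N pairs carry an accidental sample correlation.
print("paired sums:", (x + y).var(), " vs ", x.var() + y.var())

# All N^2 sums: under the product empirical distribution the x- and y-values
# are exactly independent, and additivity holds up to floating point.
all_sums = np.add.outer(x, y)
print("all pairs:  ", all_sums.var(), " vs ", x.var() + y.var())
```

With the unbiased ##1/(N-1)## definition the ##N^2##-pair identity is no longer exact, which is the definitional caveat in the last sentence above.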
 
  • #41
I prefer to think of it this way (using zero-mean variables to keep the equations simple):
$$\sigma^2_{X+Y} = E\left( (X+Y)^2 \right) = E\left( X^2 + 2XY + Y^2\right) = E(X^2) + 2E(XY) + E(Y^2)$$ $$ = \sigma^2_X + 2\operatorname{cov}(X,Y) + \sigma^2_Y$$
So the independence of ##X## and ##Y## implies the desired result.
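A quick numerical check of this identity (illustrative Python; the covariance value is an arbitrary choice):

```python
import numpy as np

# Verify Var(X + Y) = Var(X) + 2*cov(X, Y) + Var(Y) for correlated X, Y.
rng = np.random.default_rng(4)
cov = np.array([[1.0, 0.7],
                [0.7, 2.0]])  # Var(X) = 1, Var(Y) = 2, cov(X, Y) = 0.7
X, Y = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000).T

lhs = (X + Y).var()
rhs = X.var() + 2 * np.cov(X, Y, ddof=0)[0, 1] + Y.var()
print(lhs, rhs)  # both close to 1.0 + 2 * 0.7 + 2.0 = 4.4
```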
 
  • #42
FactChecker said:
So the independence of ##X## and ##Y## implies the desired result.

##E(X^2) = \sigma_X^2 ## in the case of a random variable with zero mean. For a set of data ##x_i## realized from such a random variable, we don't necessarily have a zero sample mean.
 
  • #43
Stephen Tashi said:
##E(X^2) = \sigma_X^2 ## in the case of a random variable with zero mean. For a set of data ##x_i## realized from such a random variable, we don't necessarily have a zero sample mean.
Good point. But in terms of expected values, it is correct.
 
