Bevington and uncertainty of a weighted mean

pkennedy
I'm trying to decide on a method for calculating a weighted mean for my data.

In Bevington's Data Reduction and Error Analysis for the Physical Sciences (2nd ed.), Equation 4.19 gives the variance of the weighted mean as:
$$\sigma_\mu^2 = \frac{1}{\sum_i \left(1/\sigma_i^2\right)}$$


However, Bevington also suggests the use of Equation 4.22 substituted into 4.23 to calculate the variance of the weighted mean:
$$\sigma_\mu^2 \approx \frac{1}{N-1}\,\frac{\sum_i w_i\,(x_i - \bar{x})^2}{\sum_i w_i}$$


These two formulae are not equivalent (even when the weights are defined as the inverse squares of the standard deviations, $w_i = 1/\sigma_i^2$).

To muddy the waters even further, a coworker has suggested the following variance calculation:
[Attachment "weighting_formulas.png": the coworker's proposed variance formula; the expression itself is not recoverable from the thread text.]


Could someone explain why these formulae are different when they are all used to calculate the standard deviation of the weighted mean? Do they serve different purposes?
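To make the comparison concrete, here is a minimal Python sketch of the first two formulae as written above (the data values and sigmas are made-up toy numbers, purely for illustration):

```python
import numpy as np

# Toy example: five measurements x_i, each with its own known
# uncertainty sigma_i (all numbers are made up for illustration).
x     = np.array([10.1, 9.8, 10.4, 10.0, 9.6])
sigma = np.array([0.2, 0.3, 0.2, 0.4, 0.3])

w = 1.0 / sigma**2                        # weights w_i = 1/sigma_i^2
xbar = np.sum(w * x) / np.sum(w)          # weighted mean

# Equation 4.19: variance of the weighted mean from the sigma_i alone.
var_eq419 = 1.0 / np.sum(w)

# Equation 4.22 substituted into 4.23: the same variance estimated
# from the scatter of the data about the weighted mean.
N = len(x)
var_eq422_423 = np.sum(w * (x - xbar)**2) / ((N - 1) * np.sum(w))

print(f"weighted mean          = {xbar:.4f}")
print(f"sigma_mu via Eq 4.19   = {np.sqrt(var_eq419):.4f}")
print(f"sigma_mu via 4.22/4.23 = {np.sqrt(var_eq422_423):.4f}")
```

Running this gives two different values for $\sigma_\mu$, which is exactly the discrepancy I am asking about.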
 

Further information:

Equation 4.19 was derived by applying the error propagation equation:

$$\sigma_\mu^2 = \sum_i \left[\sigma_i^2 \left(\frac{\partial \bar{x}}{\partial x_i}\right)^2\right]$$


to the equation for the weighted mean:

$$\bar{x} = \frac{\sum_i \left(x_i/\sigma_i^2\right)}{\sum_i \left(1/\sigma_i^2\right)}$$
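
Carrying the substitution through explicitly (a sketch of the standard calculation, not quoted from Bevington):

$$\frac{\partial \bar{x}}{\partial x_i} = \frac{1/\sigma_i^2}{\sum_k \left(1/\sigma_k^2\right)}
\qquad\Longrightarrow\qquad
\sigma_\mu^2 = \sum_i \sigma_i^2 \left(\frac{1/\sigma_i^2}{\sum_k \left(1/\sigma_k^2\right)}\right)^2
= \frac{\sum_i \left(1/\sigma_i^2\right)}{\left[\sum_k \left(1/\sigma_k^2\right)\right]^2}
= \frac{1}{\sum_i \left(1/\sigma_i^2\right)},$$

which is Equation 4.19.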

On the other hand, the substitution of Equation 4.22 into Equation 4.23 appears very much like the definition of standard deviation (tweaked for weighted values).

Shouldn't the uncertainty on the weighted mean calculated in Equation 4.19 be identical to the uncertainty calculated by the substitution of Equation 4.22 into 4.23?
 

I can only offer a lay explanation of which one to use, not the mathematical derivation, sorry.

The first equation arises when each observation comes from a different probability distribution with known variance $\sigma_i^2$, and we assign the weights from those known variances as $w_i = 1/\sigma_i^2$. That is why no $w_i$ term remains in the variance formula: the weights cancel.


The second equation arises when you do not know the variances of the observations beforehand. Without the known variances, you are merely computing an *estimate* of the variance of the mean, not the variance itself. It would be more appropriate to write $\widehat{\sigma}_\mu^2 = \ldots$ instead.

The tangible difference between them (if you do not follow the estimation theory and just need this for your work; I cannot explain it very well either) is that the first formula can be evaluated before any data are collected, since we are working with presumably known information, whereas the second formula can only be evaluated from the data and their mean.

The third equation... well, I cannot really verify it at the moment, but I think $N-1$ should be used instead of $N$ to correct for bias. The question is whether the second or the third formula is the better estimator of the variance of the mean; I suspect the additional terms are included to correct for bias.
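
If it helps, here is a quick Monte Carlo sketch (my own, not from Bevington; it assumes Gaussian data whose true per-point sigmas are known, with $w_i = 1/\sigma_i^2$) checking whether the 4.22-into-4.23 estimate is biased relative to the Eq 4.19 value. The third formula is left out because its exact form is not shown above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: true mean 10, and per-point sigmas that are known
# exactly, so Eq 4.19 gives the "true" variance of the weighted mean.
mu_true = 10.0
sigma = np.array([0.2, 0.3, 0.2, 0.4, 0.3])
w = 1.0 / sigma**2
N = len(sigma)

trials = 50_000
est = np.empty(trials)
for t in range(trials):
    x = rng.normal(mu_true, sigma)              # one synthetic data set
    xbar = np.sum(w * x) / np.sum(w)            # weighted mean
    # Eq 4.22 into 4.23, as reconstructed above:
    est[t] = np.sum(w * (x - xbar)**2) / ((N - 1) * np.sum(w))

print("Eq 4.19 value             :", 1.0 / np.sum(w))
print("average 4.22/4.23 estimate:", est.mean())
```

With the $N-1$ in the denominator, the average of the estimates lands on the Eq 4.19 value, i.e. the estimator is unbiased under these assumptions; replacing $N-1$ with $N$ would bias it low.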

 
Thank you very much for your reply.

The second equation arises when you do not know the variances of the observations beforehand. Without the known variances, you are merely computing an estimate of the variance of the mean, not the variance itself. It would be more appropriate to write $\widehat{\sigma}_\mu^2 = \ldots$ instead.

So the difference between the first equation (4.19) and the second (4.22 into 4.23) is that the first, given precisely known variances, produces the exact variance of the weighted mean, while the second produces an estimate of that variance?

[Edit]:
This raises a practical question: if the weights used to calculate the weighted mean are the inverses of *estimated* variances (obtained from the usual unweighted variance calculation on the measured data), is the first or the second equation more appropriate?
 
The first one, then. But usually, when we substitute a sample variance directly into an equation that requires a known population variance, some correction has to be applied, depending on the distribution of the data.

For example, the t-test and the z-test are both tests on a sample mean; which one applies depends on whether the population variance is assumed known or unknown.
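
As a concrete illustration of that correction (a sketch using scipy.stats; the numbers are arbitrary):

```python
import numpy as np
from scipy import stats

x = np.array([10.1, 9.8, 10.4, 10.0, 9.6])   # arbitrary sample
n = len(x)
se = x.std(ddof=1) / np.sqrt(n)              # estimated standard error of the mean

# 95% confidence half-widths: the z value assumes a known variance,
# while the t value corrects for having estimated it from n points.
z_half = stats.norm.ppf(0.975) * se
t_half = stats.t.ppf(0.975, df=n - 1) * se

print(f"z-based half-width: {z_half:.3f}")
print(f"t-based half-width: {t_half:.3f}")   # wider, reflecting extra uncertainty
```

The t-based interval is wider, which is the kind of correction I mean: it accounts for the variance itself having been estimated from the data.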
 
I vaguely recall using those tests before, but not for the purpose of correcting data. I'll need to find a resource that explains how these corrections should be done (any suggestions?).

Thanks again.
 