JMF2
Hi, not sure which subforum I should post this in, because it’s a cross-cutting topic and I’d like to hear opinions from physicists and engineers with different backgrounds, as well as students and teachers in those disciplines.
In the academic setting, I have seen two methods or criteria being taught for calculating the margin of error or uncertainty of a variable that is computed from other measurements. The first is simpler and is sometimes taught as "error theory", while the other (GUM guidelines, etc.) is based on statistical theory. Both are summarized in the attached image (although the formulas shown for the second method correspond to the particular case of uncorrelated variables):
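Since the image may not render for everyone, here is a minimal sketch of the two methods as I understand them, for a hypothetical function f = a·b with made-up values a = 10.0 ± 0.1 and b = 5.0 ± 0.2 (the numbers and the function are just an illustration, not from any real measurement):

```python
import math

# Hypothetical measurements: a = 10.0 +/- 0.1, b = 5.0 +/- 0.2, and f = a * b
a, da = 10.0, 0.1
b, db = 5.0, 0.2

# Partial derivatives of f = a * b
df_da = b   # df/da
df_db = a   # df/db

# Method 1 ("error theory"): worst-case sum of absolute contributions
err_linear = abs(df_da) * da + abs(df_db) * db
# = 5.0 * 0.1 + 10.0 * 0.2 = 2.5

# Method 2 (GUM, uncorrelated inputs): root-sum-of-squares (quadrature)
err_gum = math.sqrt((df_da * da) ** 2 + (df_db * db) ** 2)
# = sqrt(0.25 + 4.0) ~ 2.06

print(err_linear, err_gum)
```

As expected, the linear sum (2.5) is wider than the quadrature result (about 2.06), which is the usual argument that the first method is "more conservative".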
Which one is currently taught most commonly in your undergraduate courses? If the second one is taught, are the formulas derived from the underlying statistical theory, or are they just presented as a recipe?
It is often said that the first method is more conservative and results in wider margins, while the second one gives more "realistic" estimates. However, in practice, the statistical uncertainty calculated by the GUM method is often multiplied by a coverage factor (usually k = 2) to obtain the "expanded uncertainty", which ends up being more conservative than the error calculated with the first method. So, bearing in mind that both methods only provide estimates, I am still not very sure the added complexity is worthwhile in most cases.
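To make the point concrete with the same kind of toy numbers (again purely hypothetical: f = a·b with a = 10.0 ± 0.1, b = 5.0 ± 0.2), applying a coverage factor of k = 2 to the combined standard uncertainty gives:

```python
import math

# Hypothetical example: f = a * b with a = 10.0 +/- 0.1, b = 5.0 +/- 0.2
# Combined standard uncertainty (GUM, uncorrelated inputs)
u_c = math.sqrt((5.0 * 0.1) ** 2 + (10.0 * 0.2) ** 2)   # ~ 2.06

# Expanded uncertainty with coverage factor k = 2
k = 2
U = k * u_c                                              # ~ 4.12

# Worst-case linear error for the same inputs
err_linear = 5.0 * 0.1 + 10.0 * 0.2                      # = 2.5

print(U, err_linear)
```

In this toy case U ≈ 4.12 exceeds the worst-case linear error of 2.5, which is exactly the inversion I was describing: the "more realistic" method ends up quoting the wider interval once the coverage factor is applied.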
I was taught the easier method during my bachelor’s degree, but later I’ve seen that in professional practice only the GUM method is really used (at least in fields where metrology is important). Are there any scientific or technical fields where the first method is used professionally for real-world stuff?