Understanding Error Propagation in Repeated Measurements for Accurate Results


Homework Help Overview

The discussion revolves around understanding error propagation in repeated measurements, specifically in the context of measuring height with a defined error margin. The original poster expresses confusion about how to calculate the mean of multiple measurements and the associated error, questioning why the error appears to increase when averaging multiple results.

Discussion Character

  • Exploratory, Assumption checking, Conceptual clarification

Approaches and Questions Raised

  • Participants discuss the method of calculating mean and error from repeated measurements, with some suggesting that errors may not simply add up as initially thought. Others explore the concept of standard deviation of the mean (SDOM) as a more appropriate measure of error for repeated trials.

Discussion Status

Participants are actively engaging with the concepts of measurement error and its implications for results. Some have suggested resources for further reading, while others are clarifying the distinction between measurement uncertainty and the uncertainty of the quantity being measured. There is an ongoing exploration of different methods for calculating error, with no explicit consensus yet reached.

Contextual Notes

There is mention of constraints related to the precision of measurements (e.g., a minimum reading error of ±1 cm) and the need to consider both types of error in the project. The original poster is also seeking clarification on specific formulas for calculating error in the context of their project requirements.

finerty
Sorry, wrong forum I think.

Delete please.






Right, I am doing a physics project and I get a set of results.

Say I am measuring height and the error is ±1 cm (easy number).

Now say I repeat this and get 10 results. To get the mean I have to add them all together, which according to the error rules means I have to add the errors together, so I have an error of ±10 cm. Then I divide by 10 to give the mean, and because 10 doesn't have an error, nothing happens to the error.

This means that using the mean of 10 results gives me a greater error than using just one.
This seems to go against the whole point of using repeats.

Is there a different rule I should use when working out a mean?

I've looked in my textbook and scanned the internet but can't find anything. Thank you for any help.
 
Usually when you add all of the ±1 errors together you don't get 10 or −10; rather, some of the +1s and −1s cancel out, so you get a number smaller than 10 and bigger than −10. Divide that number by 10 and you get a number less than 1 and bigger than −1. The more repeats you use, the better the chance that the total will be very close to 0, so the error will be very small.
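This cancellation can be checked with a quick simulation. The sketch below uses made-up numbers (a hypothetical true height of 150 cm and reading errors drawn uniformly within ±1 cm), not anything from the thread:

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

TRUE_HEIGHT = 150.0  # hypothetical true height in cm


def measure(n):
    # each reading is off by a random amount within the +/-1 cm reading error
    return [TRUE_HEIGHT + random.uniform(-1.0, 1.0) for _ in range(n)]


errors = {}
for n in (1, 10, 100, 10000):
    errors[n] = abs(statistics.mean(measure(n)) - TRUE_HEIGHT)
    print(f"{n:>6} readings -> mean off by {errors[n]:.3f} cm")
```

Any single reading can be off by up to 1 cm, but the error of the mean shrinks as more readings are averaged, because the individual over- and under-estimates tend to cancel.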
 
All of the errors are the same because they're rounding errors (say it's impossible to read off closer than 1 cm).

What you said makes perfect sense, but is there a mathematical way to calculate this depending on the number of samples?

I found this but I am not sure if it applies, any ideas? http://www.ruf.rice.edu/~lane/hyperstat/A103735.html
 
There are many ways to generate an estimation of error. First, a plug.

If you have an interest in experimental physics, or are an aspiring theoretician who wants to understand the voodoo that goes on in a laboratory, you should pick up a copy of "An Introduction to Error Analysis" by Taylor. It's one of the friendliest, best-written, most concise, easily accessible books I've ever seen. It's written in the style of Div Grad Curl, or a David Griffiths textbook.

Let's agree on some terminology. When you report a measurement, you should always report it as:

[tex] x = x_{\hbox{\small best}} \pm \Delta x[/tex]

where [itex]x_{\hbox{\small best}}[/itex] is your best estimate for the value of x (it would almost always be the mean of your measurements), and [itex]\Delta x[/itex] gives an estimate of the error in your best guess. There are many, many ways of calculating error. However, for repeated independent trials, you should be using the standard deviation of the mean (SDOM) for [itex]\Delta x[/itex].

What you'll find is that by using SDOM, the more measurements you make, the smaller the SDOM will be. If it doesn't get smaller, you're most likely doing something very wrong.
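As a sketch of how the SDOM works out in practice, here is a minimal computation; the ten readings are made-up numbers for illustration:

```python
import statistics

# ten hypothetical height readings in cm
readings = [152.0, 151.0, 153.0, 152.0, 150.0,
            152.0, 151.0, 153.0, 152.0, 151.0]

best = statistics.mean(readings)    # x_best: best estimate of the value
sd = statistics.stdev(readings)     # sample standard deviation (N-1 denominator)
sdom = sd / len(readings) ** 0.5    # standard deviation of the mean

print(f"x = {best:.1f} +/- {sdom:.1f} cm")
```

Because the SDOM divides the sample standard deviation by the square root of the number of readings, taking more measurements shrinks it roughly as 1/sqrt(N), matching the claim above.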

You should be aware that there are two very different things here:

1. generating an uncertainty for a single quantity due to repeated measurements.

For example, you take 10 length measurements of the length of a table. That's what you're doing. One good way of generating an error estimate would be the SDOM.

2. generating an uncertainty for a function of several measurements.

For example, the table is much longer than your ruler, and so you need to take 3 sets of measurements. Each set consists of 10 "trials". Hmmm... lame example, but you get the idea. A better example would be measure the length, width, height of a box to calculate a volume. The rule you stated in your question is appropriate for generating uncertainty for THIS kind of measurement. But you should also know, it's a very bad rule. To see why, consider:

[tex] f(x,y) = x_{\hbox{\small best}} + y_{\hbox{\small best}} \pm (\Delta x + \Delta y)[/tex]

This assumes the WORST CASE SCENARIO. It assumes that you made a measurement of x, and got the worst possible measurement. But even worse than that, you made a measurement of y and also got the worst possible measurement. But even worse still, your errors in x and y were in "the same direction". There's a good chance that x might have been an overestimate, and y an underestimate, producing cancelling errors. But by adding [itex]\Delta x + \Delta y[/itex], you've assumed that both were complete overestimates. Or both were complete underestimates, with no cancellation of error.

For this reason, when computing the sum or difference of measured quantities, you should add the error estimates in quadrature, not direct summation.
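A tiny numeric sketch of the difference between the two rules, using made-up measurements (30 ± 1 cm and 20 ± 1 cm):

```python
import math

# hypothetical measurements: value and absolute uncertainty, in cm
x, dx = 30.0, 1.0
y, dy = 20.0, 1.0

worst_case = dx + dy             # direct sum: assumes both errors conspire
quadrature = math.hypot(dx, dy)  # independent errors add in quadrature

print(f"x + y = {x + y} +/- {worst_case} cm   (worst case)")
print(f"x + y = {x + y} +/- {quadrature:.2f} cm  (quadrature)")
```

The quadrature estimate (about 1.41 cm here) is always smaller than the direct sum (2 cm), because it doesn't assume both errors landed at their extremes in the same direction.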

I've given you more than you've asked for. Hope it didn't confuse. Do look into the Taylor book. It's really, really good.
 
Let's call the ith measurement [itex]x_i[/itex]. Then you're calculating

[tex]\bar{x} = \frac{\sum _{i=1} ^N x_i}{N}[/tex]

The uncertainty, adding the individual uncertainties in quadrature, is then

[tex]u = \frac{\sqrt{\sum _{i=1} ^N u_i^2}}{N}[/tex]

where [itex]u_i[/itex] is the uncertainty in the ith measurement. This is actually lower than your original uncertainty.
 
Right, I think I understand.

Rather than saying I think there will be an error of ±1,

I use this: http://www.ruf.rice.edu/~lane/hyperstat/A103735.html
I know it's not the standard deviation of the mean, but it seems to be what I am after,

which will give me an estimate of the error of a mean from my 3 measures of the length of the table,

and then I can use the very 'bad' rule to find the worst-case scenario for things calculated from these means.
 
You got it! :)

Maybe a very nice way of thinking of this is that you have two things here (just thinking out loud in case I ever have to teach this stuff):

1. The uncertainty in your measurement.

2. The uncertainty in the thing you're trying to measure.

The uncertainty in your measurement is ALWAYS +/- 1cm. That's why if you take a million billion measurements, the uncertainty will be +/- 1cm. If you can only read up to half-tick marks on the first measurement, you can only read up to half-tick marks on your billionth measurement. Repeated measurements won't change that.

The uncertainty in what you're trying to measure, however, is different. It's related to, but different from, the uncertainty in the measurement. Your estimate becomes better as more and more trials cluster around the mean.

The difference between error in a measurement and error in the quantity you're trying to measure is subtle, but important.
 
Right.

Um (and I know I am starting to sound a little stupid here, but maths isn't my strong point, thinking is :) )

I have to include in my project that I have thought about error and followed it through the formulas I applied.

Now I believe this is talking about the first type of error, because it's just assumed that the error in the second case is reduced by the repeats (you just say "I will repeat blah blah blah to reduce error"),

whereas what I think it's mainly after is how far out your final result could be because of the inherent errors of the measuring,

i.e. "I know there's an error measuring time with a camera, because there are only 24 frames a second, so I know there is an error of 1/24th of a second."

You say that I should just say that the mean time also has an error of 1/24th of a second, but my teacher mentioned that there is a formula for working out the mean's error, and I don't think he meant the one we have already talked about.

So you see, I repeat the test to reduce the second type of error, but this might increase the first, i.e. I read 1 cm too high on all of them. But it is likely they will cancel out, so the more I do, the more will cancel out and the lower the error will get, and it's a formula for this I can't find.

Sorry to keep going on.
 
The sample standard deviation estimate s is given by

[tex]s^2 = \frac{\sum _{i=1} ^N (x_i - \bar{x})^2}{N-1}[/tex]

N−1 is used, not N, because [itex]\bar{x}[/itex] is itself estimated from the same data. The error in the sample average is then [itex]s/\sqrt{N}[/itex]. Statistically, the true mean will be within one standard error of [itex]\bar{x}[/itex] about 68% of the time, and within two standard errors about 95% of the time.
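The N−1 denominator (Bessel's correction) can be checked by hand against Python's statistics module, where stdev uses N−1 and pstdev uses N. The three data points below are made up:

```python
import math
import statistics

data = [10.0, 12.0, 14.0]
mean = statistics.mean(data)  # 12.0

# by hand, with the N-1 denominator (Bessel's correction)
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (len(data) - 1))

# statistics.stdev uses N-1; statistics.pstdev uses N
print(s, statistics.stdev(data), statistics.pstdev(data))
```

The by-hand value matches stdev exactly, while pstdev comes out smaller because it divides by N instead of N−1.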
 
