What Is the Uncertainty in the Gradient of My Linear Graph?


Discussion Overview

The discussion revolves around calculating the uncertainty in the gradient of a linear graph represented by the equation y=mx+c, particularly when y-values have a specified uncertainty and x-values do not. Participants explore various methods for determining this uncertainty, including the use of Excel's linear regression functions, graphical representations, and concepts of error propagation and experimental uncertainty.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant inquires about the gradient's uncertainty given that y-values have an uncertainty of ±1, while x-values do not.
  • Another suggests using graphical error bars to visualize the variation in the gradient that fits within the data's envelope.
  • A participant explains that the uncertainty in the gradient can be characterized by its standard error, which can be obtained from Excel's LINEST function, but emphasizes the need for understanding the limitations of linear regression.
  • Concerns are raised about the relationship between systematic and random errors, particularly in the context of x-values measured with certainty.
  • Discussion includes the distinction between experimental uncertainty and error propagation, with references to how these concepts apply to the uncertainty in the gradient.
  • Some participants express confusion about whether the gradient's uncertainty varies with different x-values and how to interpret this in the context of human judgment errors.
  • There is mention of confidence limits and the subjective nature of determining certain measurements, such as angles in a polarizer experiment.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the best method for calculating the gradient's uncertainty, with multiple competing views and interpretations of error propagation and uncertainty remaining unresolved.

Contextual Notes

Participants highlight the complexity of distinguishing between different types of uncertainties, including precision uncertainty, experimental uncertainty, and population uncertainty, without resolving how these concepts specifically apply to the gradient calculation.

quietrain
hi, just a simple question

i have a linear graph y=mx+c

let's say my y values have an uncertainty of ±1; my x values don't have uncertainties.

so what will my gradient's uncertainty be?

PS: can i just use the linear least squares fit function from microsoft excel? does it calculate the same uncertainty, i.e. is the standard deviation from that method the same as the uncertainty that i am going to calculate as above?

thanks!
 
Can you simply display your values graphically, along with the error bars in the Y values and then see the variation in M that will fit within the 'envelope'?

Edit: is M held to a single value throughout the range? Or is it allowed to vary with X?
 
The uncertainty in the gradient is characterised by its standard error which Excel's LINEST function can provide using INDEX(LINEST(known_y's, known_x's, const, TRUE), 2,1). Given the standard error you can calculate confidence limits for the gradient in the same way as you presumably have for the data points in order to make the (statistically meaningless) statement that the 'y values have an uncertainty of ±1'. Note that the standard errors in m and c are in general more closely linked to the range in the x values than the errors in the y values.

To correctly interpret the regression statistics I think you should do some reading around the subject: Anscombe's quartet is IMHO a good start to understand some of the limitations of linear regression and the importance of graphical analysis.
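As a rough illustration (with hypothetical data, and scipy's `linregress` standing in for Excel's LINEST), the standard error of the gradient drops straight out of an ordinary least-squares fit:

```python
# Sketch with made-up data: scipy.stats.linregress reports the same
# standard error for the gradient that LINEST returns at index (2,1).
from scipy.stats import linregress

x = [10, 20, 30, 40, 50]            # x-values assumed exact
y = [12.0, 19.5, 31.0, 40.5, 49.0]  # hypothetical y-values, quoted as ±1

fit = linregress(x, y)
print(f"gradient m = {fit.slope:.4f}")    # 0.9500
print(f"std error  = {fit.stderr:.4f}")   # 0.0327
```

Confidence limits for the gradient then follow as `fit.slope ± t * fit.stderr`, with t taken from Student's t distribution with n − 2 degrees of freedom.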
 
Thanks for that link, MrAnchovy! I have not seen that before. An unequivocal demonstration of the power of graphics and the human eye-brain.
 
wow, i didn't expect it to be this complicated! i don't think i am at your level :(

i mean

let's say my y-values are measurements of the length of an object.

the uncertainty is say ±1cm.

so if i do a plot of length against anything (no uncertainty in x), then what will my gradient's uncertainty be?

for grad = y/x

so (σgrad / grad)² = (σy / y)² + (σx / x)² ? is it just like that?

is this uncertainty propagation? my notes say it's just like that?

so since i don't have x uncertainties, i end up with

σgrad / grad = σy / y ?

but that's weird: it says every σgrad is different, because the value (σy / y) × grad is different for every y value...
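For what it's worth, the single-point propagation formula above can be tabulated (with made-up numbers) to show exactly the behaviour that seems weird: with σx = 0, σgrad = grad × (σy / y) = σy / x, so the propagated uncertainty really is different at every point, shrinking as x grows.

```python
# Single-point propagation for grad = y/x with sigma_x = 0:
# sigma_grad = grad * (sigma_y / y), which reduces to sigma_y / x.
sigma_y = 1.0                                  # the quoted ±1 uncertainty in y
points = [(10, 9.5), (20, 19.0), (50, 48.5)]   # hypothetical (x, y) pairs

for x, y in points:
    grad = y / x
    sigma_grad = grad * (sigma_y / y)          # = sigma_y / x here
    print(f"x={x:>2}: grad={grad:.3f}, sigma_grad={sigma_grad:.3f}")
```

This per-point number is the uncertainty of a slope drawn through the origin and a single point; it is not the standard error of a fitted gradient, which uses all the points at once.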
 
(Actually, now that I look at it, I'm not sure this answers your question at all. You've just been given a linear equation and an uncertainty... Usually linear equations are accompanied by a correlation coefficient, not an uncertainty.)

I was asking some questions along this line some months ago

https://www.physicsforums.com/showthread.php?t=525091

The main thing that ought to be distinguished is systematic errors and random errors. You might not have any "random error" in your x-values, because you are measuring the same x-value in the same way, and getting the same result.

However, there may be a systematic error in your measurement of your x-value.

A quick google search gave me this article which gives some more detail:

http://www.ece.rochester.edu/courses/ECE111/error_uncertainty.pdf

I'm not entirely certain how to account for possible systematic errors... You can really only account for systematic errors you KNOW are present.
 
Ah, I think you are talking about two different things here.

Experimental uncertainty refers to data that are subject to inaccuracy due to one or many random variables affecting the measurement of a quantity. Using certain assumptions, regression analysis can be used to determine the parameters of a 'line of best fit', including the standard errors in these parameters. This analysis is not simple and I have given some hints as to where to start.

Error propagation refers to data that are imprecise due to some limitation of the method used to measure (or record) a quantity. For instance an error of ±1cm would be produced by a ruler marked at 2cm intervals if you were unable to interpolate between markings.

In this case the analysis is fairly simple. The errors in y can be characterised by y = y' + ε where y is the true value of y, y' is the observed value and ε is the error (in this case, -1cm ≤ ε ≤ 1cm).

Taking y = mx + c, we have y' + ε = mx + c and y' = m'x + c where y' are the observed values of y and m' is the observed gradient. These give m' = m + ε / x. So the error in the gradient is 1cm divided by x.

Note that the formula σy / y implies that the precision is proportional to y rather than constant; a similar analysis can be performed.

For imprecise measurements the simplest way to determine the minimum and maximum gradients that fit the data is as gmax137 says to draw a graph with error bars and plot the steepest/shallowest lines that pass through each bar.
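A minimal sketch of that graphical envelope method, under the usual assumption for roughly linear data that the extreme gradients come from tilting the line within the first and last error bars (the data here are hypothetical):

```python
def gradient_range(x, y, err):
    """Shallowest and steepest slopes that still pass through the
    end error bars (a common shortcut for the envelope method)."""
    dx = x[-1] - x[0]
    steepest = ((y[-1] + err) - (y[0] - err)) / dx
    shallowest = ((y[-1] - err) - (y[0] + err)) / dx
    return shallowest, steepest

x = [10, 20, 30, 40, 50]
y = [11.0, 20.0, 31.0, 40.0, 51.0]  # hypothetical y-values, err = ±1
lo, hi = gradient_range(x, y, 1.0)
print(lo, hi)  # the best-fit gradient lies between these extremes
```

With intermediate points that scatter badly, the steepest/shallowest lines should be checked against every bar, not just the end ones; drawing the graph, as suggested above, makes that easy to see.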
 
but if the grad error is m' = m + ε / x , then wouldn't it mean that for every x, i have a new m'?

let's say i am given the x values 10, 20, 30, 40, 50

then wouldn't my m' keep changing for different x? isn't that weird?

i didn't measure x by the way, those values were given.

but now that you guys talk about error propagation vs uncertainty, i realize that what i am talking about isn't exactly error propagation?

i think i am talking about confidence limits? it's not about systematic or maybe random errors.

i think it's more about human error.

let's say i want to determine the angle of a polarizer which produces the brightest emitted light.

but my eyes tell me the brightest light is over a range of angles. i can't pinpoint the exact angle...

so over a range of angles, say 1 degree. so that is my uncertainty in y.

so with x fixed/given, how will the gradient's error/uncertainty work out?

for that matter, is this called uncertainty or error propagation? or is it a human judgement error?
 
quietrain said:
but if the grad error is m' = m + ε / x , then wouldn't it mean that for every x, i have a new m'?

let's say i am given the x values 10, 20, 30, 40, 50

then wouldn't my m' keep changing for different x? isn't that weird?

i didn't measure x by the way, those values were given.

Did they give you any y-values?

Let's say you have (10,y(10)), (20,y(20)), (30,y(30)), (40,y(40))

You know your x-values with perfect certainty, but your y-values are only known to an uncertainty of +/- 1. So you could calculate one slope by adding 1 to the y-value on the right and subtracting 1 from the y-value on the left.

Δy/Δx = ((y(40) + 1) − (y(10) − 1)) / (40 − 10)

Then calculate another slope, similarly, by subtracting 1 from your y-value on the right, and adding 1 to your y-value on the left.

but now that you guys talk about error propagation vs uncertainty, i realize that what i am talking about isn't exactly error propagation?

i think i am talking about confidence limits? it's not about systematic or maybe random errors.

i think it's more about human error.

Well, I'm troubled whenever there seem to be several ideas which are not carefully distinguished. As near as I can tell, we are dealing with many different concepts here:

Precision uncertainty: If you have a meter stick marked in millimeters, you can make a guess down to the nearest tenth of a millimeter, but you should make a note that your scale is not as precise as that.

Experimental uncertainty: If you perform several trials measuring the same quantity, getting slightly different values each time, you can use statistics to estimate the uncertainty.

Population uncertainty: If you perform several trials measuring different quantities which you expect to be near each other but are not necessarily exactly the same, the spread reflects the population as well as the measurement.

quietrain said:
let's say i want to determine the angle of a polarizer which produces the brightest emitted light.

but my eyes tell me the brightest light is over a range of angles. i can't pinpoint the exact angle...

so over a range of angles, say 1 degree. so that is my uncertainty in y.

so with x fixed/given, how will the gradient's error/uncertainty work out?

for that matter, is this called uncertainty or error propagation? or is it a human judgement error?

I'm not sure, but I think what you are looking at is a population uncertainty. Each pinpoint of light represents its own trial. Each trial interacted with a different point on the screen. And on average, those pinpoints land at the center of the light.

No, I don't think it is human judgement error, because the light beam really did not land at a point, but in a spread.
 
quietrain said:
but if the grad error is m' = m + ε / x , then wouldn't it mean that for every x, i have a new m'?

let's say i am given the x values 10, 20, 30, 40, 50

then wouldn't my m' keep changing for different x? isn't that weird?

Yes it would, and no, it isn't weird, because to get the value of y you multiply the gradient by the value of x: if the maximum error in the gradient were constant, then the maximum error in y would increase in proportion to x, whereas you said that the maximum error was a constant 1cm.

But it's probably not very relevant either - this is probably more helpful:

Estimate the true slope m by drawing a line through two points (x1, y'1) and (x2, y'2).

We have m' = (y'2 - y'1) / (x2 - x1). Inserting the precision errors, we get

m' = (y2 + ε2 - y1 - ε1) / (x2 - x1) = (y2 - y1) / (x2 - x1) + (ε2 - ε1) / (x2 - x1) = m + (ε2 - ε1) / (x2 - x1), which in the worst case (ε2 = -ε1 = ε) is m + 2ε / (x2 - x1).

So with your example and using the two extremes of x, the precision error in the gradient is m' - m = 2 × 1 cm / (50 xunits - 10 xunits) = 0.05 cm·xunit⁻¹.
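That arithmetic is easy to check (ε and the x-values as in the example):

```python
# Worst-case precision error in the gradient: 2*eps / (x2 - x1).
eps = 1.0        # the ±1 cm precision error in y
x1, x2 = 10, 50  # the two extreme x-values
error_m = 2 * eps / (x2 - x1)
print(error_m)   # 0.05 (cm per x-unit)
```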

quietrain said:
i think i am talking about confidence limits

Well in that case you can't make a statement like 'the error in y is ±1cm', and what's more you don't need to, but I am afraid the answer is as I said in my first post, not simple.

Fitting a straight line to a data set analytically is called linear regression and the most common form is least squares fitting. If you scroll to the bottom of that page you will see expressions for the standard errors (equivalent to the standard deviation from which you can derive confidence limits) in the intercept a and the gradient b.

I don't know of any simpler way to show this I am afraid.
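As a sketch (with made-up data), the textbook least-squares expressions for the gradient and its standard error can be coded directly; this is the same quantity that LINEST reports:

```python
import math

def slope_with_stderr(x, y):
    """Least-squares gradient b and its standard error (n - 2 dof)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar                     # intercept
    ssr = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se_b = math.sqrt(ssr / (n - 2) / sxx)   # standard error of b
    return b, se_b

x = [10, 20, 30, 40, 50]
y = [12.0, 19.5, 31.0, 40.5, 49.0]  # hypothetical measurements
b, se = slope_with_stderr(x, y)
print(b, se)
```

Confidence limits then follow as b ± t·se, with t taken from Student's t distribution with n − 2 degrees of freedom.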
 
wow... ok thanks guys!
 
