Finding the Uncertainty of the Slope Parameter of a Linear Regression

richardc

Suppose I have measurements x_i \pm \sigma_{xi} and y_i \pm \sigma_{yi} where \sigma is the uncertainty in the measurement. If I use a linear regression to estimate the value of b in y=a+bx, I'm struggling to find a straightforward way to compute the uncertainty of b that arises from the measurement uncertainties. This seems like it should be a very common problem, so I'm not sure why I can't find a simple algorithm or formula.

Thank you for any advice.
 
Are you using "uncertainty" to mean "standard deviation"?

It's a common problem, but it's not simple. After all, your data gives only one value for b, so how can you estimate the standard deviation of b from a sample of size 1?

The common way to get an answer is to oversimplify matters and compute a "linearized asymptotic" estimate. The value of b is some function F of the (x_i,y_i). Let L be the linear approximation to F near the observed sample values, and assume L approximates the random variable b well there, so that b is (approximately) a linear combination of the x_i and y_i. Once a random variable is expressed as a linear combination of other random variables, its standard deviation can be written in terms of the standard deviations of those variables.

That's the general picture. If it's what you want to do then we can try to look up the specifics. I don't know them from memory.
 
Thank you for clarifying the problem.

With N observation pairs I believe I can write b=\frac{N \sum x_i y_i - \sum x_i \sum y_i}{N \sum x_i^2 - (\sum x_i)^2}.

I suppose the propagation of error formula \sigma_f^2=\sum (\frac{\partial f}{\partial x_i} \sigma_{x_i} )^2 is then applied to a linear approximation of b?
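That is the idea. As a concrete check, here is a minimal sketch (my own, not from any textbook) that applies that propagation formula to the slope expression above, propagating only the y uncertainties. Since b is exactly linear in the y_i, the partial derivatives are \partial b/\partial y_i = (N x_i - \sum x_j)/(N \sum x_j^2 - (\sum x_j)^2) and no linear approximation is needed for this part. The function name `ols_slope_uncertainty` is just an illustrative label:

```python
import numpy as np

def ols_slope_uncertainty(x, y, sigma_y):
    """OLS slope b and its uncertainty, propagating only the y errors.

    Uses b = (N*sum(x*y) - sum(x)*sum(y)) / (N*sum(x^2) - sum(x)^2)
    and sigma_b^2 = sum_i (db/dy_i)^2 * sigma_{y_i}^2.
    """
    x, y, sigma_y = (np.asarray(v, float) for v in (x, y, sigma_y))
    N = len(x)
    D = N * np.sum(x**2) - np.sum(x)**2
    b = (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / D
    # b is linear in each y_i, so db/dy_i is exact, not an approximation:
    db_dy = (N * x - np.sum(x)) / D
    sigma_b = np.sqrt(np.sum((db_dy * sigma_y)**2))
    return b, sigma_b
```

For equal sigma_y this reduces to the familiar sigma_b = sigma_y / sqrt(sum((x - mean(x))^2)).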
 
You state a problem where there is an error in measurement for the x_i as well as for the y_i. In such a problem, people often use "total least squares" regression. The computation of the slope in total least squares regression is different from that in ordinary least squares regression, which assumes no error in the measurement of the x_i. I think the formula you gave for b is the one for ordinary least squares regression.

Of course, one may ask the question: if I fit a straight line to data using the slope estimator from ordinary least squares regression, and my data also has errors in the x_i, then what is the standard deviation of that estimator? If that's the question, the propagation sum needs terms of both forms \left(\frac{\partial f}{\partial y_i}\right)^2 \sigma^2_{y_i} and \left(\frac{\partial f}{\partial x_i}\right)^2 \sigma^2_{x_i}.
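To make that concrete, here is a sketch (my own, with illustrative names) of the linearized propagation including both kinds of terms. Writing b = P/D with P = N\sum x_i y_i - \sum x_i \sum y_i and D = N\sum x_i^2 - (\sum x_i)^2, the quotient rule gives \partial b/\partial x_i = (N y_i - \sum y_j)/D - b\,(2N x_i - 2\sum x_j)/D, while \partial b/\partial y_i = (N x_i - \sum x_j)/D as before:

```python
import numpy as np

def slope_sigma_both(x, y, sigma_x, sigma_y):
    """Linearized propagation of both x and y errors through the
    OLS slope b = P/D. Exact in y (b is linear in y), first-order in x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    N = len(x)
    Sx, Sy = x.sum(), y.sum()
    D = N * np.sum(x**2) - Sx**2
    b = (N * np.sum(x * y) - Sx * Sy) / D
    db_dy = (N * x - Sx) / D                              # exact
    db_dx = (N * y - Sy) / D - b * (2 * N * x - 2 * Sx) / D  # quotient rule
    var_b = np.sum((db_dy * sigma_y)**2) + np.sum((db_dx * sigma_x)**2)
    return b, np.sqrt(var_b)
```

With sigma_x = 0 this collapses to the y-only propagation, which is one easy sanity check.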

I don't know if the estimator for slope in ordinary least squares regression is an unbiased estimator if there are errors in the x_i.
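One way to probe both questions (the spread of the estimator and any bias) without trusting the linearization is a small Monte Carlo experiment: jitter the data by its stated uncertainties many times, refit, and look at the distribution of slopes. A minimal sketch, assuming independent Gaussian errors (function names are illustrative):

```python
import numpy as np

def slope(x, y):
    """OLS slope from the closed-form formula in the thread."""
    N = len(x)
    D = N * np.sum(x**2) - np.sum(x)**2
    return (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / D

def mc_slope_sigma(x, y, sigma_x, sigma_y, n_trials=10000, seed=0):
    """Monte Carlo spread of the OLS slope when both x_i and y_i
    carry independent Gaussian measurement errors."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = [
        slope(x + rng.normal(0.0, sigma_x, x.shape),
              y + rng.normal(0.0, sigma_y, y.shape))
        for _ in range(n_trials)
    ]
    return np.std(slopes)
```

Comparing the Monte Carlo spread against the linearized formula also reveals when the linear approximation starts to break down (typically when the x errors are large relative to the spread of the x_i).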
 