# Linear fit, force intercept with X-uncertainties

I have some experimental data; in this case, we performed a study of the Zeeman effect in cadmium using a Fabry-Perot interferometer. The data should fit a straight line, but I would like to force the intercept through the origin, since the relation between the wavenumber difference and the magnetic field is one of proportionality.

My problem is that both the x and y experimental values have significant uncertainties, and I am not sure how to calculate the estimated uncertainty of the fit parameter, which is the relevant result of the experiment.

The data are the following:

| B (mT) | Δν (1/m) |
|--------|----------|
| 756 | 61.25 |
| 621 | 47.06 |
| 518 | 45.30 |
| 414 | 33.54 |

## The Attempt at a Solution

I've been reading a chapter in Press et al.'s *Numerical Recipes* on how to perform the same analysis without forcing the intercept. It seems rather difficult; I wonder if it gets any easier when the intercept is forced, since there is then only one fit parameter.

## Answers and Replies

Buzz Bloom
Gold Member
> estimated uncertainty of the fit parameter
Hi marksman95:

Although I know nothing about the Zeeman effect nor about cadmium, I am familiar with data fitting.
> The data should fit a straight line
For a linear fit, ##y = ax + b##. You have four data points ##(x_i, y_i),\ i = 1, \dots, 4##. Each data point then has the following error with respect to the linear fit:
##E_i = a x_i + b - y_i, \qquad i = 1, \dots, 4.##
You want to calculate the values of ##a## and ##b## that minimize the sum of the squares of the ##E_i##:
##E = \sum_{i=1}^{4} (a x_i + b - y_i)^2.##
Do you know how to proceed from this? If you are certain the intercept ##b## must be zero, you may use that in your calculations.
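To make this concrete: with the intercept forced to zero there is a closed form for the single parameter, obtained by setting ##dE/da = 0##. A minimal sketch in plain Python, using the data from the thread (this minimizes only the vertical residuals; the x-uncertainties are not yet accounted for):

```python
# Forced-origin least squares: minimize E(a) = sum_i (a*x_i - y_i)^2.
# Setting dE/da = 0 gives the closed form  a = sum(x*y) / sum(x*x).

x = [756.0, 621.0, 518.0, 414.0]   # B (mT)
y = [61.25, 47.06, 45.30, 33.54]   # Δν (1/m)

a = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
```

For these data the slope comes out near 0.081 (1/m)/mT.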

Regards,
Buzz

Thanks for your reply, Buzz. I am familiar with data fitting up to the point you have made; I know how the least-squares parameters are obtained from the data and all that. (I'm already a physics graduate. I don't know much, but this I do know.)

My question was much more specific, and I have had trouble finding answers in the usual numerical-methods bibliography, as I pointed out.

As I said, thank you anyway!

Buzz Bloom
Gold Member
> My question was much more specific
Hi marksman95:

I am sorry I misunderstood what you were asking. Would you rephrase the specific question to make it clearer?

Regards,
Buzz

tnich
Homework Helper
I know that least squares methods are often presented as sets of formulas. If you understand the theory behind the formulas, you can derive them yourself for any special case. Then you can use the results to estimate parameters like standard error. So, first a little bit of theory.

Given ##m## (row) vectors of ##n## independent variables ##X_i \equiv [x_{i1}, x_{i2}, \dots, x_{in}],\ 1 \leq i \leq m##, ##m## dependent variables (measurements) ##y_i,\ 1 \leq i \leq m##, and a vector of linear fit parameters ##A = [a_1~a_2~\dots~a_n]^T## that we want to estimate, we can write the sum of the squared differences between the measurements and their estimates as
##E = \sum_{i=1}^{m} (y_i - X_i A)^2 = (Y - XA)^T (Y - XA),##
where ##X## is the ##m \times n## matrix whose rows are the ##X_i## and ##Y = [y_1~y_2~\dots~y_m]^T##.
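In the special case in this thread (##n = 1##, a single-column design matrix, zero intercept), the normal equations reduce to a one-line solution, and the usual estimate of the slope's standard error follows from the residual variance with ##m - n## degrees of freedom. A sketch with NumPy, under the standard assumption of independent errors in ##y## only (it does not yet account for the x-uncertainties):

```python
import numpy as np

B = np.array([756.0, 621.0, 518.0, 414.0])    # x: field (mT)
dnu = np.array([61.25, 47.06, 45.30, 33.54])  # y: Δν (1/m)

# Design matrix with a single column -> intercept forced to zero.
X = B[:, None]
A, *_ = np.linalg.lstsq(X, dnu, rcond=None)
a = A[0]                                 # same as (B @ dnu) / (B @ B)

# Standard error of the slope from the residual variance,
# with m - n = 4 - 1 = 3 degrees of freedom.
resid = dnu - a * B
sigma2 = resid @ resid / (len(B) - 1)
se_a = float(np.sqrt(sigma2 / (B @ B)))
```

This is the ordinary-least-squares answer; the total-least-squares route below replaces it once the x-errors matter.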
It sounds as though you are looking for a total least squares (TLS) solution. There is a Wikipedia article that describes the technique; perhaps you have already found it in your research. That article uses your problem, a straight-line fit with errors in both variables, as a specific example.
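One concrete way to carry this out is orthogonal distance regression, which SciPy provides in `scipy.odr` (it generalizes the total-least-squares idea to per-point error bars in both variables). A sketch with the thread's data; the error bars `sB` and `sdnu` are placeholders I made up, since no uncertainty values were given in the thread, so substitute the measured ones:

```python
import numpy as np
from scipy import odr

B = np.array([756.0, 621.0, 518.0, 414.0])    # x: field (mT)
dnu = np.array([61.25, 47.06, 45.30, 33.54])  # y: Δν (1/m)

# Placeholder uncertainties -- replace with the real experimental ones.
sB = np.full_like(B, 5.0)       # assumed x-errors (mT)
sdnu = np.full_like(dnu, 1.5)   # assumed y-errors (1/m)

def proportional(beta, x):
    # One-parameter model through the origin: y = a * x
    return beta[0] * x

data = odr.RealData(B, dnu, sx=sB, sy=sdnu)
model = odr.Model(proportional)
out = odr.ODR(data, model, beta0=[0.08]).run()

a, sd_a = out.beta[0], out.sd_beta[0]
```

`out.sd_beta` folds both error sources into the parameter's standard error, which is exactly the quantity asked for; `out.res_var` gives the reduced chi-square and is worth checking as a goodness-of-fit diagnostic.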

Of course!

The question is: how to make a linear fit, force the intercept to 0, and calculate the standard error for the slope if both X and Y have significant uncertainties.

I think it is clear enough this way.

Thank you in advance!


Oh, nice! I will check that tomorrow. Thanks for the effort.