Calculating error in fit parameter

  • Context: Graduate
  • Thread starter: NanakiXIII
  • Tags: Error, Fit Parameter

Discussion Overview

The discussion revolves around calculating the error in a fit parameter derived from experimental data. Participants explore methods for estimating the value of a property A, which is defined as the ratio of measured data points (y/x), and the associated uncertainties in this parameter. The conversation includes aspects of data fitting, error analysis, and the implications of different averaging methods.

Discussion Character

  • Exploratory, Technical explanation, Debate/contested, Mathematical reasoning

Main Points Raised

  • One participant describes their approach of calculating A as A = y/x and expresses confusion about the best method for determining the error in A.
  • Another participant suggests that a common method for calculating error involves summing the squares of the differences between actual and fitted points, though this is not universally accepted.
  • A third participant emphasizes the need for an error representation of the form A = a ± b, questioning the appropriateness of simply averaging the squares of the residuals due to unit discrepancies.
  • A later post presents a derivation of A using least squares fitting, leading to two different expressions for A: one from the least squares fit and another from averaging the ratios y/x, prompting a question about which method is superior.

Areas of Agreement / Disagreement

Participants express differing views on the best method for calculating the error in the parameter A, with no consensus reached on which approach is preferable. There is also uncertainty regarding the implications of using different averaging methods.

Contextual Notes

Participants note limitations in their understanding of error calculation methods and the implications of different approaches on the results. There is mention of unit discrepancies when averaging residuals, which may affect the interpretation of the error.

NanakiXIII
I'm working on an experiment and trying to do some data analysis which seems to me should be fairly trivial, but I can't for the life of me figure it out.

I want to know the value of some property [tex]A[/tex] and to do so I'm measuring data points [tex](x,y)[/tex], which are related to the value I want by

[tex]A = \frac{y}{x}[/tex].

So what I could do, is just calculate all my [tex]\frac{y_i}{x_i}[/tex] and find the mean and standard deviation and whatever I want, but I remember once being told that it's a good idea to do the following. I plotted my [tex](x,y)[/tex] as datapoints, then fitted with [tex]y = A x[/tex].

I don't remember why this was a good idea (or, why it was a better idea than just averaging), but it makes for nice graphs, anyway. The fitting went fine, but what I'm stuck on is how to calculate the error in [tex]A[/tex]. I've never been very interested in data analysis, and it's all a little hazy. Is what I'm doing possible and useful? How do I calculate an error in my parameter? I'm sure it's something to do with the sum of the squares of the residuals, but there has to be more to it than that, because that doesn't even have the right units.

If anyone could help untangle my messy knowledge on this subject, I'd be very grateful.


Postscript: I'm not using data analysis software. I'm pretty sure Origin would just do this for me, but I'm just using Mathematica to plot and fit. Since this problem is pretty simple, I want to try to do it by hand, rather than having Origin calculate these things for me without knowing what it actually does.
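The by-hand calculation asked about above can be sketched in Python. The data, true slope, and noise level below are invented purely for illustration, and the error formula assumes independent errors of equal variance in y (an assumption not stated in the thread):

```python
import numpy as np

# Invented data for illustration: y = A*x with true A = 2.5 plus noise.
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 20)
y = 2.5 * x + rng.normal(0.0, 0.3, size=x.size)

# Least-squares slope for a line through the origin, y = A x:
# minimizing sum((y_i - A x_i)^2) gives A = sum(x_i y_i) / sum(x_i^2).
A_hat = np.sum(x * y) / np.sum(x * x)

# Sample variance of the residuals, with N - 1 degrees of freedom
# because one parameter was fitted.
residuals = y - A_hat * x
sigma2 = np.sum(residuals**2) / (x.size - 1)

# Standard error of the slope: sigma_A^2 = sigma^2 / sum(x_i^2).
# Note this has the units of y/x, as an error on A should.
sigma_A = np.sqrt(sigma2 / np.sum(x * x))

print(f"A = {A_hat:.3f} +/- {sigma_A:.3f}")
```

The division by the extra factor of sum(x_i^2) is what fixes the units problem raised in the post: the raw residual sum of squares carries units of y squared.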
 
A common way to quantify the fit error is to sum the squares of the differences between the actual and fitted points. The sum of the absolute values of the differences could also be used, but that's less common. The main purpose of such a number is to compare the total error between two different fit algorithms.
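The comparison described in this reply can be sketched as follows; the data points and the two candidate slopes are made up for illustration:

```python
import numpy as np

# Made-up data and two candidate slopes to compare (illustrative only).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

def sse(A, x, y):
    """Sum of squared differences between actual and fitted points."""
    return np.sum((y - A * x) ** 2)

# Of two candidate fits, the one with the smaller total squared error
# is the better of the pair.
for A in (1.9, 2.0):
    print(f"A = {A}: SSE = {sse(A, x, y):.3f}")
```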
 
I'm not interested in comparing fit algorithms, I just want an error of the type [tex]A = a \pm b[/tex]. Just averaging the squares of the residuals would give me something with the units of [tex]y^2[/tex], and a much too big number (since this number would depend on where in the range of the function your datapoints are).
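For reference, one standard result that yields exactly this [tex]A = a \pm b[/tex] form (it is not spelled out in the thread, and assumes independent errors of equal variance in [tex]y[/tex]): for the through-origin fit [tex]y = A x[/tex], the fitted slope and its standard error are

[tex] \hat{A} = \frac{\sum_i x_i y_i}{\sum_i x_i^2}, \qquad \sigma_{\hat{A}} = \sqrt{\frac{\sum_i (y_i - \hat{A} x_i)^2}{(N-1) \sum_i x_i^2}},[/tex]

which, unlike the raw sum of squared residuals, has the units of [tex]y/x[/tex].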
 
I've worked it out a little more. If I make a least squares fit, setting the derivative of the sum of squared residuals to zero gives

[tex] \frac{\partial}{\partial A} \sum_{i} (y_i - A x_i)^2 = -2 \sum_{i} x_i (y_i - A x_i) = 0 \;\longrightarrow\; \sum_{i} x_i y_i = A \sum_{i} x_i^2 \;\longrightarrow\; A = \frac{\sum_{i} x_i y_i}{\sum_{i} x_i^2}.[/tex]

If I instead just calculate the mean of [tex]A[/tex] over the individual points, I get

[tex] \bar{A} = \frac{1}{N} \sum_{i} \frac{y_i}{x_i} = \textnormal{mean}\left(\frac{y}{x}\right).[/tex]

These aren't the same thing, though numerically they're very similar. Which is a better method?
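The difference between the two estimators can be checked numerically; the data below are invented for illustration:

```python
import numpy as np

# Invented noisy data with true slope 2.0 (illustrative only).
rng = np.random.default_rng(1)
x = np.linspace(1.0, 5.0, 10)
y = 2.0 * x + rng.normal(0.0, 0.1, size=x.size)

# Least-squares estimate: effectively weights each point by x_i^2,
# so points at large x dominate.
A_ls = np.sum(x * y) / np.sum(x * x)

# Mean of the individual ratios: weights every point equally, so the
# noisy ratios at small x (where y/x is most sensitive to noise in y)
# count just as much.
A_ratio = np.mean(y / x)

print(f"least squares: {A_ls:.4f}")
print(f"mean of y/x:   {A_ratio:.4f}")
```

Under the assumption of independent errors of equal variance in y, the least-squares estimate has the smaller variance of the two (by Cauchy-Schwarz, [tex]\sum x_i^2 \sum x_i^{-2} \geq N^2[/tex]), which is one standard argument for preferring the fit over the ratio average.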
 
