Error propagation when dividing by fitted model

In summary, the thread discusses how to estimate the uncertainty on the relative intensity values in a second dataset (I_result) obtained by dividing it by a model fitted to the first dataset (I_mod). The suggested method uses the standard deviations of the likelihood distributions for the fitted model parameters (m and b) together with the estimated errors on the data points in the second dataset (I_2). The reply advises first deciding exactly what comparison between the two datasets is wanted, and then considering how the errors propagate in each case.
  • #1
Astr0fiend

Homework Statement



I am taking a dataset of intensity vs. frequency (which I'll call dataset I_1) and fitting it with a linear model (I_mod). I want to divide another intensity vs. wavelength dataset (I_2) by this fitted model to get the fractional changes in the second dataset relative to the model of the first (I_result).

The problem is that I cannot for the life of me work out the correct way to propagate/combine the uncertainty on the fitted parameters from I_mod with the uncertainty on the intensity measurements in dataset I_2 to get the final uncertainty estimate on the relative intensity values in I_result.

Homework Equations



The fit equation is just the standard form for a straight line: I = m*nu + b, with I the model intensity at frequency nu, m the gradient, and b the y-offset. The fitting is done via a Markov chain Monte Carlo (MCMC) method, which leaves me with Gaussian likelihood functions for the parameters m and b, from which I can estimate their standard deviations - i.e. the errors on the slope and the y-offset.
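
For context, the fit looks something along these lines (a minimal sketch with made-up numbers; I'm using emcee here purely as an example MCMC sampler and assuming flat priors, so this isn't necessarily exactly what I ran):

```python
import numpy as np
import emcee  # example MCMC sampler; any sampler that returns parameter chains works

def log_prob(theta, nu, I, d_I):
    # Gaussian likelihood for the straight-line model I = m*nu + b (flat priors assumed)
    m, b = theta
    return -0.5 * np.sum(((I - (m * nu + b)) / d_I) ** 2)

# Made-up example data standing in for dataset I_1
nu   = np.array([1.0, 2.0, 3.0, 4.0])
I_1  = np.array([3.1, 5.0, 7.2, 8.9])
d_I1 = np.full_like(I_1, 0.1)

ndim, nwalkers = 2, 32
p0 = np.array([2.0, 1.0]) + 0.1 * np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(nu, I_1, d_I1))
sampler.run_mcmc(p0, 2000)

# Flattened posterior samples for (m, b); their standard deviations are d.m and d.b
samples = sampler.get_chain(discard=500, flat=True)
d_m, d_b = samples.std(axis=0)
```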

The data set I_2 has errors associated with each of the data points, estimated during the measurement process.

The Attempt at a Solution



No idea really, and I've been looking around for ages.

I was thinking that I could take the uncertainty in the I_mod slope as estimated from the standard deviation of its likelihood distribution obtained from the fitting process (call it d.m, say), then calculate d.m*nu to obtain the uncertainty on the model data points due to the slope uncertainty. Then take the uncertainty in the y-offset as estimated from the standard deviation of its likelihood distribution (d.b), and add these in quadrature to get the total error on each of the model data points. I.e.:

d.I_mod = sqrt( (d.m*nu)^2 + (d.b)^2 )

This gives me errors on the model data points - d.I_mod - and I already have estimates for the errors on the data points in my second dataset, which I'll call d.I_2. Because I'm dividing the data points in I_2 by I_mod to get I_result, I was then thinking I would add the fractional errors of each pair of points in quadrature - i.e.:

total fractional error on I_result datapoint = sqrt( (d.I_2 / I_2)^2 + (d.I_mod / I_mod)^2 )

I have a feeling that this is a terrible way to do things, but it's the best I've come up with :(
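
To make this concrete, here's a rough numpy sketch of the procedure I just described (the numbers are made up, and it treats m and b as independent, i.e. it ignores any covariance between them):

```python
import numpy as np

# Made-up placeholder values; in practice d_m and d_b come from the MCMC
# likelihood distributions and d_I2 from the measurement process.
m, d_m = 2.0, 0.05          # fitted slope and its uncertainty
b, d_b = 1.0, 0.10          # fitted y-offset and its uncertainty

nu   = np.array([1.0, 2.0, 3.0])     # frequencies of the I_2 points
I_2  = np.array([3.2, 5.1, 6.9])     # second dataset
d_I2 = np.array([0.1, 0.1, 0.2])     # errors on the I_2 points

I_mod   = m * nu + b                          # model at the I_2 frequencies
d_I_mod = np.sqrt((d_m * nu)**2 + d_b**2)     # error on the model points

I_result   = I_2 / I_mod                      # fractional change w.r.t. the model
frac_err   = np.sqrt((d_I2 / I_2)**2 + (d_I_mod / I_mod)**2)
d_I_result = I_result * frac_err              # absolute error on I_result
```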

Any help much appreciated, even if it is just pointing me to a website with some basic stats covering this problem.
 
  • #2
Welcome to PF;
The secret is to be careful about describing what you want to know rather than what you are trying to do.

i.e. if you want to know how close your new dataset is to the model, you would find the difference between the data and the model.

You could compare the two datasets by getting a new model for the second dataset and seeing how the fitted parameters are different...

Dividing a dataset by a model would, graphically, involve plotting the intensity data against that predicted by the model. The resulting curve will tell you about the relationship between the new data and the model.

In each case you should be able to see how the errors propagate.
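
For instance, a minimal sketch of how the errors would propagate in the difference and ratio cases (illustrative numbers only, assuming the data and model errors are independent):

```python
import numpy as np

# Illustrative placeholder values: model evaluated at the measurement points,
# the second dataset, and their uncertainties.
I_mod, d_I_mod = np.array([3.0, 5.0, 7.0]), np.array([0.10, 0.15, 0.20])
I_2,   d_I2    = np.array([3.2, 5.1, 6.9]), np.array([0.10, 0.10, 0.20])

# "How close is the data to the model?" -> the difference.
# Absolute uncertainties add in quadrature.
diff   = I_2 - I_mod
d_diff = np.sqrt(d_I2**2 + d_I_mod**2)

# "What is the fractional change relative to the model?" -> the ratio.
# Fractional uncertainties add in quadrature.
ratio   = I_2 / I_mod
d_ratio = ratio * np.sqrt((d_I2 / I_2)**2 + (d_I_mod / I_mod)**2)
```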

But what I am saying is that you need to think more about what you hope to find out.
 

1. What is error propagation when dividing by a fitted model?

Error propagation when dividing by a fitted model is the statistical procedure for estimating the uncertainty of a value obtained by dividing one quantity by another, where one of the quantities comes from a fitted model. It combines the uncertainties in the fitted model with the uncertainties in the input data to give the overall uncertainty in the calculated value.

2. Why is error propagation important when dividing by a fitted model?

Error propagation is important here because it quantifies the uncertainty in the calculated values. This is especially important in scientific research, as it tells us how reliable the results are and allows us to make informed decisions based on them.

3. How is error propagation calculated when dividing by a fitted model?

For a ratio R = A/B, where A is a measured value with uncertainty σ(A) and B is the fitted-model value with uncertainty σ(B), and the two are independent, the fractional uncertainties add in quadrature:
σ(R) / R = sqrt( (σ(A)/A)^2 + (σ(B)/B)^2 )
The model uncertainty σ(B) is itself obtained by propagating the uncertainties in the fitted parameters through the model, using σ(f) = |∂f/∂p| * σ(p) for each parameter p and combining the contributions in quadrature.
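
As a quick sanity check, the analytic formula can be compared against a simple Monte Carlo propagation (a sketch with assumed values: draw many samples of the data point and of the model value, take the ratio, and look at its spread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example values: one data point and the model value at the same
# point, each with an independent Gaussian uncertainty.
A, sigma_A = 5.1, 0.10     # measured intensity
B, sigma_B = 5.0, 0.15     # fitted-model intensity

# Analytic propagation for the ratio R = A / B
R = A / B
sigma_R = R * np.sqrt((sigma_A / A)**2 + (sigma_B / B)**2)

# Monte Carlo propagation of the same ratio
samples = rng.normal(A, sigma_A, 100_000) / rng.normal(B, sigma_B, 100_000)
print(sigma_R, samples.std())   # the two estimates should agree closely
```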

4. Can error propagation when dividing by a fitted model be applied to any type of model?

Yes, this approach can be applied to any type of fitted model, as long as the model is differentiable with respect to its parameters. That is, the model must have well-defined derivatives at the point where the calculation is being made.

5. How can we minimize error propagation when dividing by a fitted model?

To minimize the propagated error, reduce the uncertainties in the input data and use a well-constrained fit whose parameters have small uncertainties. Using more precise and accurate measurement techniques also helps.
