Astr0fiend
Homework Statement
I am taking a dataset of intensity vs. frequency (which I'll call dataset I_1) and fitting it with a linear model (I_mod). I want to divide another intensity vs. wavelength dataset (I_2) by this fitted model to get the fractional changes in the second dataset relative to the model of the first (I_result).
The problem is that I cannot for the life of me work out the correct way to propagate/combine the uncertainty on the fitted parameters from I_mod with the uncertainty on the intensity measurements in dataset I_2 to get the final uncertainty estimate on the relative intensity values in I_result.
Homework Equations
The fit equation is just the standard form for a straight line: I = m*nu + b, with I the model intensity at frequency nu, m the gradient, and b the y-offset. The fitting is done via a Markov chain Monte Carlo (MCMC) method, which leaves me with Gaussian likelihood functions for the parameters m and b from which I can estimate their standard deviations - i.e. the errors on the slope and the y-offset.
The data set I_2 has errors associated with each of the data points, estimated during the measurement process.
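To be concrete about the first step, here is a rough NumPy sketch of how I'm pulling d.m and d.b out of the fit (the simulated arrays below just stand in for the real MCMC chains, and all the numbers are made up for illustration):

```python
import numpy as np

# Stand-in for the MCMC output: posterior samples for slope m and offset b.
# In practice these arrays would come from the fitting code's chains.
rng = np.random.default_rng(0)
m_samples = rng.normal(2.0, 0.05, size=10_000)  # chain for the slope
b_samples = rng.normal(1.0, 0.10, size=10_000)  # chain for the y-offset

# Point estimates and 1-sigma uncertainties, treating each
# (assumed Gaussian) distribution's standard deviation as the error
m, d_m = m_samples.mean(), m_samples.std(ddof=1)
b, d_b = b_samples.mean(), b_samples.std(ddof=1)
```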
The Attempt at a Solution
No idea really, and I've been looking around for ages.
I was thinking that I could take the uncertainty in the I_mod slope, estimated from the standard deviation of its likelihood distribution from the fitting process (call it d.m), and calculate d.m*nu to obtain the uncertainty on each model data point due to the slope uncertainty. Then take the uncertainty in the y-offset, estimated the same way from its likelihood distribution (d.b), and add the two in quadrature to get the total error on each model data point. I.e.:
d.I_mod = sqrt( (d.m*nu)^2 + (d.b)^2 )
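As a sketch with made-up numbers, that formula evaluated over the model's frequency grid would look like this (note it treats m and b as uncorrelated, which I'm not sure is safe - a cross term involving cov(m, b) would otherwise appear):

```python
import numpy as np

# Illustrative values only; in practice m, b, d_m, d_b come from the MCMC fit
nu = np.array([1.0, 2.0, 3.0])   # frequencies of the model points
m, b = 2.0, 1.0                  # fitted slope and y-offset
d_m, d_b = 0.05, 0.10            # their 1-sigma uncertainties

I_mod = m * nu + b
# Quadrature sum of the two contributions, per model point,
# assuming no correlation between m and b
d_I_mod = np.sqrt((d_m * nu)**2 + d_b**2)
```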
This gives me errors on the model data points - d.I_mod - and I already have estimates for the errors on the data points in my second dataset, which I'll call d.I_2. Because I'm dividing the data points in I_2 by I_mod to get I_result, I was then thinking I would add the fractional errors of each in quadrature - i.e.:
total fractional error on I_result datapoint = sqrt( (d.I_2 / I_2)^2 + (d.I_mod / I_mod)^2 )
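Putting the division step together (again with made-up numbers standing in for my real data and for the d.I_mod values from the previous step):

```python
import numpy as np

# Illustrative numbers; d_I_2 would be the measured errors on I_2
I_2     = np.array([3.3, 5.1, 7.2])
d_I_2   = np.array([0.2, 0.2, 0.3])
I_mod   = np.array([3.0, 5.0, 7.0])
d_I_mod = np.array([0.11, 0.14, 0.18])

I_result = I_2 / I_mod
# Fractional errors of a quotient added in quadrature, per point
frac_err = np.sqrt((d_I_2 / I_2)**2 + (d_I_mod / I_mod)**2)
d_I_result = frac_err * I_result  # back to an absolute error on each ratio
```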
I have a feeling that this is a terrible way to do things, but it's the best I've come up with :(
Any help much appreciated, even if it is just pointing me to a website with some basic stats covering this problem.