Some additional thoughts which you may find helpful:
You are trying to measure uncertainty in your parameters. This cries out for a Bayesian analysis.
In more detail:
What you **have** is a **likelihood**: $P(\text{data} | \text{params})$. (If you knew the true values of $\left\{A, B, t_c, \alpha\right\}$, you could compute the probability of seeing any particular data values, including the ones you actually saw; this is where your individual error bars come in.)
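To make that concrete, here is a minimal sketch of the likelihood in Python. The model form $y = A + B\,(t_c - t)^\alpha$ is only an illustrative guess (substitute your actual model), and the errors are assumed to be independent Gaussians with known per-point standard deviations $\sigma_i$:

```python
import numpy as np

def log_likelihood(params, t, y, sigma):
    """log P(data | params), assuming independent Gaussian errors.

    The model y = A + B*(t_c - t)**alpha is a hypothetical stand-in;
    swap in whatever your actual model is.
    """
    A, B, t_c, alpha = params
    if t_c <= t.max():                 # model undefined once t reaches t_c
        return -np.inf
    mu = A + B * (t_c - t) ** alpha    # model prediction at each t_i
    # Sum of Gaussian log-densities; sigma holds your individual error bars.
    return -0.5 * np.sum(((y - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))
```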
What you **want** is a **posterior**: $P(\text{params} | \text{data})$. (You wish you could compute the probability (density) for any arbitrary values of $\left\{A, B, t_c, \alpha\right\}$, given the data which you actually saw. If you had $P(\text{params} | \text{data})$, you could compute any desired quantity from it: the "best" parameter values, uncertainty ranges for each parameter, correlations between parameters, etc.)
What you **need** is **Bayes' rule**:
$$P(\text{params} | \text{data}) \propto P(\text{data} | \text{params}) \, P(\text{params}).$$
This is a fundamental formula from probability theory. It means you need one more quantity, the **prior**: $P(\text{params})$. For any given set of parameter values, how plausible are they *a priori*? You will need to explain and defend your choice here (some choices more than others).
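In log form, Bayes' rule turns that product into a sum. Continuing the sketch above, with a deliberately simple prior that is only an example (flat on $A$ and $B$, a hard bound $0 < \alpha < 1$), not a recommendation:

```python
def log_prior(params):
    """log P(params): an illustrative choice, not a recommendation.

    Flat (improper) on A, B, and t_c; alpha confined to (0, 1).
    Whatever you pick, be ready to explain and defend it.
    """
    A, B, t_c, alpha = params
    if not (0.0 < alpha < 1.0):        # hard bound = a prior assumption
        return -np.inf
    return 0.0                         # constant inside the allowed region

def log_posterior(params, t, y, sigma):
    # Bayes' rule in log form: log posterior = log likelihood + log prior + const.
    return log_likelihood(params, t, y, sigma) + log_prior(params)
```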
Once you have all these ingredients, here is how you apply them:
- Find the parameters $\hat{p} \equiv \left\{\hat{A}, \hat{B}, \hat{t_c}, \hat{\alpha}\right\}$ which maximize the posterior, $P(\text{params} | \text{data})$. In practice you maximize its logarithm; with Gaussian error bars, the likelihood term is a weighted least-squares objective, so least-squares machinery works well here (see the sketches after this list).
  - Your prior, $P(\text{params})$, will show up as penalty terms (which might be zero, depending on the choice of prior).
- Find the Hessian (the matrix of second derivatives) of $\log P(\text{params} | \text{data})$ in the neighbourhood of $\hat{p}$. Its negative inverse is the covariance matrix of a Gaussian approximation to the posterior.
  - Note that all the first derivatives are 0, since $\hat{p}$ is a local optimum!
  - If you quit here, you're already doing pretty well! But if you want to continue...
- Take independent Monte Carlo draws from the Gaussian approximation to your posterior, and weight each draw by a factor of $\frac{P(\text{params} | \text{data})}{P_\text{Gauss}(\text{params})}$; this is importance sampling, sketched in code below. Now you can take weighted averages over your Monte Carlo draws to compute whatever statistical properties your heart desires.
  - This is probably a small correction to the Gaussian approximation from the previous step. However, you might find that your posterior is highly non-Gaussian in that neighbourhood, and then you'd be glad you checked.
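Here is how the optimize-then-Hessian part might look, continuing the sketches above. The synthetic data, starting guess, and finite-difference step size are all illustrative assumptions:

```python
from scipy.optimize import minimize

# Synthetic data standing in for yours (true params: A=1, B=2, t_c=10, alpha=0.5).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 50)
sigma = np.full_like(t, 0.3)
y = 1.0 + 2.0 * (10.0 - t) ** 0.5 + rng.normal(0.0, sigma)

def neg_log_post(p):
    return -log_posterior(p, t, y, sigma)

# Maximize the posterior: Nelder-Mead needs no gradients, so it tolerates
# the hard prior bounds without complaint.
res = minimize(neg_log_post, x0=np.array([1.0, 2.0, 10.5, 0.5]),
               method="Nelder-Mead")
p_hat = res.x

# Hessian of -log posterior at p_hat, by central finite differences; its
# inverse is the covariance of the Gaussian (Laplace) approximation.
def hessian(f, x, eps=1e-3):
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n)
            ei[i] = eps
            ej = np.zeros(n)
            ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps ** 2)
    return H

cov = np.linalg.inv(hessian(neg_log_post, p_hat))
cov = 0.5 * (cov + cov.T)              # symmetrize against round-off
```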
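And finally the importance-sampling correction, reusing `p_hat` and `cov` from the sketch above (10,000 draws is an arbitrary choice):

```python
from scipy.stats import multivariate_normal

# Draw from the Gaussian approximation, then weight each draw by the ratio
# of the (unnormalized) posterior to the Gaussian density, in log space.
gauss = multivariate_normal(mean=p_hat, cov=cov)
draws = gauss.rvs(size=10_000, random_state=rng)
log_w = np.array([log_posterior(p, t, y, sigma) for p in draws]) \
        - gauss.logpdf(draws)
w = np.exp(log_w - log_w.max())        # subtract the max for numerical stability
w /= w.sum()

# Weighted averages over the draws give whatever summaries you want.
post_mean = w @ draws
resid = draws - post_mean
post_cov = resid.T @ (resid * w[:, None])
print("MAP estimate:      ", p_hat)
print("posterior means:   ", post_mean)
print("posterior std devs:", np.sqrt(np.diag(post_cov)))
```

One quick diagnostic: if the effective sample size $1 / \sum_i w_i^2$ comes out tiny, the Gaussian approximation is a poor proposal, which is exactly the non-Gaussianity warning above.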
In case you're interested, I wrote a paper doing basically this kind of analysis a few years ago. I explained my priors and likelihood; ran Monte Carlo; and presented the resulting uncertainty.
(I *didn't* take a Gaussian approximation to the posterior. That might have been easier than the Markov chain I *did* use, but my stats knowledge wasn't the best at the time.)
Here is the issue it appeared in (together with discussion of the paper), and here is a direct link to the PDF. The model building happens in Section 3.