# Curve-fitting y=x^p+C

Howdy,

So I have some data that I suspect to follow a

$$y=x^p+C$$

relationship, where p and C are unknown real numbers. The y values contain some uncertainty, so I want to use a least squares (or similar) method to fit a curve and quantify the goodness of fit. I usually have a value for y(0), so what I have been doing is using that as my value for C, and plotting

$\ln(y-y(0))$ versus $\ln x$

from which I get the gradient p using standard linear regression. This works fine for the most part, but it involves the erroneous assumption that there is no uncertainty in the value for C. It also exaggerates the deviations from this model near x=0, which is problematic when the choice of C is not that great. I thought I could maybe vary C and minimise the residuals of the linearised plot and obtain a best value for C that way, but I'm thinking there's probably a more elegant way.
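That vary-C idea can be sketched directly. The following is a hypothetical illustration in Python (made-up synthetic data, and a simple grid search over C, which is just one of several ways to do the minimisation):

```python
import numpy as np
from scipy import stats

# Synthetic data following y = x^p + C (made-up parameters, for illustration)
rng = np.random.default_rng(0)
x = np.linspace(0.5, 5.0, 50)
p_true, C_true = 1.7, 2.0
y = x**p_true + C_true + rng.normal(0.0, 0.01, x.size)

def linearised_fit(C):
    """ln(y - C) = p*ln(x) + const: fit a line, return (slope p, SSR)."""
    slope, intercept, *_ = stats.linregress(np.log(x), np.log(y - C))
    resid = np.log(y - C) - (slope * np.log(x) + intercept)
    return slope, float(np.sum(resid**2))

# Vary C over a grid (kept below min(y) so the log is defined) and keep
# the value that minimises the residuals of the linearised plot
Cs = np.linspace(1.5, y.min() - 0.05, 201)
fits = [(C, *linearised_fit(C)) for C in Cs]
C_best, p_best, _ = min(fits, key=lambda t: t[2])
print(C_best, p_best)
```

Note that this still minimises residuals in log space, so it inherits the distortion near small y - C; it is only a refinement of the linearisation, not a replacement for it.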

This seems like it must be a really common issue, but I can't find anything about it anywhere. Any ideas? Cheers :)


Just fit a non-linear regression model $y = B_0 + x^{B_1}$. Most statistical packages (and certainly R) will have that capability. The least squares estimates will still work fine under the usual assumptions.
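As a sketch of this suggestion (synthetic, made-up data; shown in Python with SciPy's `curve_fit` rather than R, but the idea is the same):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, B0, B1):
    """The suggested two-parameter model: y = B0 + x**B1."""
    return B0 + x**B1

# Made-up data roughly following the model
rng = np.random.default_rng(1)
x = np.linspace(0.5, 5.0, 40)
y = model(x, 2.0, 1.7) + rng.normal(0.0, 0.01, x.size)

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])  # p0: starting guesses
perr = np.sqrt(np.diag(pcov))  # approximate 1-sigma parameter errors
print(popt, perr)
```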

If you need a linear regression you may try

$\int\limits_{x_0}^x y \, dx = \frac{x \cdot y}{p + 1} + \frac{p \cdot C \cdot x}{p + 1} - \frac{x_0 \cdot \left( y_0 - C \right)}{p + 1} - C \cdot x_0$

AlephZero
Homework Helper
It's your data, so you can try to fit any type of curve you like, but think about what your equation means physically.

You are saying that $x$ and $y$ are measured in units such that $x^p$ and $y$ have the same units. That is a very strange coincidence, especially if $p$ is an arbitrary real number.

It would make a lot more physical sense to fit a curve like $y = Ax^p + C$, or writing the same thing a different way, $y = e^{ax} + C,$ where the constants $A$ or $a$ are units-dependent.

Thanks for the advice everyone. You are right AlephZero, I forgot to include A, though units aren't an issue because x and y are both dimensionless. So I really have 3 unknowns that I need to work out.


One more unknown isn't an issue, since your model is a pretty standard non-linear regression model. Most statistics packages will let you fit it, but you'll need to provide reasonable starting values (non-linear least squares estimation uses iterative methods, like Newton's method, so you need starting values for the parameters; they don't have to be super accurate, just reasonable). Remember to do the usual diagnostics afterwards (is the error variance constant? are the residuals approximately normal? etc.).
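A minimal sketch of such a three-parameter fit with explicit starting values, using Python/SciPy on made-up data (MATLAB's `nlinfit` takes the same ingredients: a model function, data, and starting values):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, A, p, C):
    """Three-parameter model y = A*x**p + C."""
    return A * x**p + C

# Synthetic data with made-up parameters
rng = np.random.default_rng(2)
x = np.linspace(0.5, 5.0, 40)
y = model(x, 0.8, 1.7, 2.0) + rng.normal(0.0, 0.01, x.size)

# Starting values: C from the smallest y, A and p simply set to 1;
# they only need to be in the right ballpark for the iteration to converge
p0 = [1.0, 1.0, float(y.min())]
popt, pcov = curve_fit(model, x, y, p0=p0)

# Usual diagnostics: residuals should look like mean-zero, constant-spread noise
residuals = y - model(x, *popt)
print(popt, residuals.std())
```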

Yep, I'm just checking out the nlinfit function in MATLAB. It seems to be the way to go. The 'guesses' shouldn't be a problem because I can use my 'bad' estimates from the attempted linear regression as a starting point.

Thanks heaps for your help. :)

Hello ballzac,

Fitting y=a+b(x^c) to experimental data (x,y) can be carried out with various non-linear regression methods, which require iterative processes.
Nevertheless, a non-iterative process was published in the paper:
"Régressions et équations intégrales" ("Regressions and integral equations"; in French, but the method itself is short and legible, page 17)
http://www.scribd.com/JJacquelin/documents

Redbelly98
Staff Emeritus
Homework Helper
It would make a lot more physical sense to fit a curve like $y = Ax^p + C$, or writing the same thing a different way, $y = e^{ax} + C,$ where the constants $A$ or $a$ are units-dependent.
I'd just like to correct this. $Ax^p$ is a power function, while $e^{ax}$ is an exponential. They are not the same thing written in a different way -- I believe you'll agree if you think it over.

Thanks for posting.

I've managed to fit a really nice curve to my data. Plotting the residuals, I see that there is structure that's unaccounted for in my model, but the residuals are very small, so it does provide an accurate 'rule of thumb'.

The curve fits better than what I obtained using a log-log plot with linear regression, but when I subtract the intercept obtained using nonlinear regression and then plot log-log, the data does not look linear, and the fitted line completely misses the first few points. This is because the log-log plot exaggerates errors in the small values and suppresses them in the large values.

I find this interesting because it brings into question some of the previous work I've done, where the data went through the origin and I quite happily used linear regression on a log-log plot. Still, this is the standard way these sorts of problems are expected to be tackled, and all data is generally presented in a linearised form if possible. I just don't think it's that good if it causes the RMS error in the fitted curve to be higher than it would be if the data weren't linearised in the first place and non-linear regression were used instead of linear.


What do the residuals look like? Be very wary of non-normality or changing variance; it can completely destroy the credibility of your estimators. When in doubt, run a robust regression model and see if the estimates are consistent.

The two data sets are shown as red squares in the upper two plots, with the fitted curves shown in blue. The residuals are shown for each data point under each respective plot. Note the $10^{-3}$ scale on the y-axis for the residuals.

For actually finding a model that explains the data, this might not be the best fit, but that might not matter. The purpose of these curves is to motivate the choice of certain experimental parameters in order to minimise error in a particular experiment. There is no real reason to try to extrapolate these data, and given a value between 0 and $8\times10^{-3}$, one can quite reliably choose a value for these two parameters.

#### Attachments

If you're not doing any inference and are just trying to "see what the curve looks like", then you should be fine. It looks to me like the error variance is increasing quadratically-ish; if you wanted to be rigorous, you might try fitting a weighted least squares model.
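A weighted fit along those lines can be sketched via `curve_fit`'s `sigma` argument (made-up data and weights, purely illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, A, p, C):
    return A * x**p + C

# Made-up data whose noise grows roughly quadratically with x
rng = np.random.default_rng(3)
x = np.linspace(0.5, 5.0, 60)
sigma = 0.005 * (1.0 + x**2)
y = model(x, 0.8, 1.7, 2.0) + rng.normal(0.0, sigma)

# Passing sigma turns this into weighted least squares: each residual is
# divided by its sigma, so the noisier points count for less in the fit
popt, _ = curve_fit(model, x, y, p0=[1.0, 1.0, 2.0], sigma=sigma)
print(popt)
```

In practice the sigmas would come from the residual pattern or from known measurement uncertainties, not from the true noise model as in this toy example.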

I might have to do some reading as I don't know much about this subject, but yes, for my current purposes I think what I've done will suffice.

It actually amazes me how little emphasis there was on data analysis in my undergrad physics studies.


If you want to familiarize yourself with the basics, I recommend Montgomery's Introduction to Linear Regression Analysis.

Thanks for the recommendation, and thanks for all the help :)

Mute
Homework Helper
If at some point you are actually interested in getting precise estimates for your fit and you want to use a power-law model, you need to be careful how you do things.

For instance, your original attempt, a least squares regression on the logarithm of the data, is very subject to systematic errors in the measurements. I am not sure how the nonlinear curve fit works, but it may have similar problems too.

This paper discusses proper fitting of power laws to experimental data using maximum likelihood estimators, which is a better method of fitting power laws to data. It is formulated for fitting power-law distributions (which tend to have negative exponents), so some modifications may be necessary for your specific case. It also discusses, to some extent, other distributions that may fit power-law-looking data, but I don't think it gives maximum likelihood estimators for those.

The thing about power-law fits is that there's lots of experimental data out there that looks like it might follow a power law, so people love to go around haphazardly fitting power laws to things that may or may not actually be power laws. If you get to the stage where you want good estimates for your parameters, this paper is a must-read.

Edit: The paper also has an associated website with downloadable code (mostly MATLAB, with some written for R and other languages) that performs maximum-likelihood estimates on some example data.
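For the distribution-fitting case such papers address, the continuous maximum-likelihood estimator has a closed form, $\hat\alpha = 1 + n\big/\sum_i \ln(x_i/x_{\min})$. A minimal sketch on synthetic data (with $x_{\min}$ assumed known, which in practice it usually isn't):

```python
import numpy as np

def powerlaw_alpha_mle(x, xmin):
    """Closed-form MLE for a continuous power law p(x) ~ x**(-alpha), x >= xmin."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    n = x.size
    alpha = 1.0 + n / np.sum(np.log(x / xmin))
    stderr = (alpha - 1.0) / np.sqrt(n)  # asymptotic standard error
    return alpha, stderr

# Sanity check on synthetic samples: inverse-CDF sampling with alpha = 2.5
rng = np.random.default_rng(4)
u = rng.random(100_000)
samples = (1.0 - u) ** (-1.0 / (2.5 - 1.0))  # xmin = 1
alpha_hat, se = powerlaw_alpha_mle(samples, xmin=1.0)
print(alpha_hat, se)
```

Note this estimates the exponent of a probability *distribution* over observed samples, which is a different problem from fitting a curve $y = Ax^p + C$ through (x, y) pairs; the modifications mentioned above would be needed for the latter.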

Thanks for the info and links. Some of my data I have very good reason to believe follows a power law, and I'm planning on deriving the relationship analytically at some stage, but some of it I am just guessing. At this stage, I wouldn't even know where to begin working out what the relationship should be analytically for these.

I will read the paper you suggest and try applying it to the case where I have good reason to believe it follows a power law.

Cheers

Hello ballzac,
I have tested the power law y=a+b(x^c) against your data and observed that the fit is rather good.
I used data read off your graph (post #12), which is not accurate enough. It would be better if you published the data in numerical form instead of as a graph.

I'm confused. The plots I posted in #12 included curves fitted using nonlinear regression, as well as the associated constants A, p, and C (b, c, and a, respectively, in your convention).


I understand your convention A, p and C perfectly well.
My results are close to your results.
What I mean is that I cannot compare your results and mine, because my input data was taken from your graph, which is not accurate enough. For example, on your graph the four points on the left have the same y-value, due to discretisation (pixel positions). So the data measured from the graph is not accurate.

I'm just curious what results you're actually hoping to compare? If you use nonlinear regression, your results should be exactly the same as mine given the same data. You haven't actually stated what method you're using.

Here's my data. The first column is the x values and the other two columns are the corresponding y values for the two data sets. Thanks for your input :)

0.0008 0.7276 -3.9885
0.0016 0.7276 -3.9885
0.0023 0.7276 -3.9883
0.0031 0.7277 -3.9877
0.0039 0.7281 -3.9862
0.0047 0.7298 -3.9827
0.0055 0.7344 -3.9754
0.0063 0.7446 -3.9621
0.0070 0.7625 -3.9407
0.0078 0.7897 -3.9110


Thank you for publishing the input data.
I agree that the results should be exactly the same if the criteria for optimisation were the same, which is far from being the case.
I compared the results for the two cases, (A) and (B), as shown in the attached figure.
Case (B): with the values A4, C4 and p4 given in your post Nov7-11 10:17 PM.
Presumably the optimisation criterion was minimal mean-square deviation. Since the equation is non-linear in the parameters to be optimised, the process is probably iterative, with guessed starting values.
Case (A): the method referenced in post Nov7-11 06:37 PM. As explained there, the criterion is completely different, since the goal is to proceed without iteration and without starting guesses. Of course, the mean-square deviation is expected to be not as good as with method (B). On the other hand, the computer program is very simple to write and, more importantly, the number of operations in the computation is much lower, which is an advantage for large input data (no advantage in the present case of only 10 points).

I was interested to see whether methods (A) and (B) lead to close results for the given example, even though the small number of points is not favourable for method (A).

#### Attachments

Hey, thanks for all the effort you've gone to.

For the purposes of having a heuristic model, I think both of them are 'pretty good' fits. In fact, the uncertainty in those figures is roughly plus or minus one in the last decimal place (except for the x values, which are exactly 1, 2, 3, etc. divided by 1280), and it's possible, though I haven't checked yet, that both models would pass through every point within the uncertainties. As far as determining the 'real' formula goes, I really need to work out what it should be analytically, and then decide whether these models are consistent with the analytical result, within the uncertainties.