# Homework Help: Exponential Least Squares Method

1. Nov 26, 2015

### RicardoMP

1. The problem statement, all variables and given/known data
Hi! I've been interpolating a data set using Legendre polynomials and Newton's method, and now I've been asked to approximate a function from a given data set using the Least Squares Method with the following fitting function: $g(r)=a+be^{cr}$, where $a$, $b$ and $c$ are parameters to be found. My objective is to find the set of equations of the form $F(r)=0$ that gives me these parameters.

2. The attempt at a solution
I am familiar with the method for finding the parameters of polynomial and exponential fitting functions. In the case of an exponential fitting function of the form $ae^{bx}$, I would take the logarithm, $\ln(ae^{bx})=\ln a+bx$, and find the parameters from that linear form. However, I'm getting a little frustrated at not being able to find a way to do the same with $g(r)=a+be^{cr}$. Is there any trick that can be suggested?
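A minimal sketch of that log-linearization for $ae^{bx}$, in Python with NumPy and made-up data (the thread's actual data set is not shown here), assuming noise-free positive values:

```python
import numpy as np

# Illustrative data generated from made-up parameters a=2, b=0.3
# (not the thread's data set).
x = np.linspace(0.0, 2.0, 10)
y = 2.0 * np.exp(0.3 * x)

# ln(a e^{bx}) = ln(a) + b x, so fit a straight line to (x, ln y).
b_hat, log_a_hat = np.polyfit(x, np.log(y), deg=1)
a_hat = np.exp(log_a_hat)
print(a_hat, b_hat)  # recovers 2 and 0.3 on this noise-free data
```

This only works because $\ln y$ is linear in the parameters; the extra additive constant in $a+be^{cr}$ is exactly what breaks the trick.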

2. Nov 26, 2015

### Dr. Courtney

Problems like this can be addressed with non-linear least squares.

A lot of graphing programs have the functionality built in, and the user can define new non-linear functions.

3. Nov 26, 2015

### RicardoMP

Thank you for your suggestion, but I'm trying to find a way of doing it analytically, by building a set of non-linear equations and solving it. I'm aware that I must use the non-linear Least Squares Method in this case, but the parameter $a$ stops me from using the logarithm to separate the parameter $c$ from the exponential, and I'm having some trouble getting around that. In the end, I want to transform $a+be^{cr}$ into something that gives me a linear combination of the functions to be used in the Least Squares Method.

4. Nov 26, 2015

### Dr. Courtney

It cannot be done in a single step; it has to be done iteratively.

5. Nov 26, 2015

### Ray Vickson

I don't think it can be done "analytically"; that is why people have developed numerical methods---to handle problems that matter, but cannot be dealt with using closed-form expressions.

Basically, you can express the sum of squared errors as a function S(a,b,c), which you minimize by setting its partial derivatives to zero. That gives you three coupled nonlinear equations in the three unknowns (a,b,c). If you write them out in detail in a couple of small examples you will soon see why the system is analytically unsolvable.
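As a concrete illustration of this setup, here are $S$ and its gradient in NumPy, with a finite-difference sanity check. This is a sketch only: the parameters and data are made up, not the thread's.

```python
import numpy as np

# Synthetic data from made-up parameters a=2, b=-5, c=-0.5 (the thread's
# actual data set is not reproduced here), with a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 20)
y = 2.0 - 5.0 * np.exp(-0.5 * x) + 0.01 * rng.standard_normal(x.size)

def S(p):
    """Sum of squared errors S(a, b, c)."""
    a, b, c = p
    return np.sum((a + b * np.exp(c * x) - y) ** 2)

def grad_S(p):
    """The three partial derivatives of S with respect to a, b, c."""
    a, b, c = p
    E = np.exp(c * x)
    e = a + b * E - y
    return 2.0 * np.array([np.sum(e), np.sum(E * e), b * np.sum(x * E * e)])

# Sanity check: the analytic gradient matches a central finite difference.
p0 = np.array([1.5, -4.0, -0.4])
h = 1e-6
fd = np.array([(S(p0 + h * np.eye(3)[i]) - S(p0 - h * np.eye(3)[i])) / (2 * h)
               for i in range(3)])
print(np.allclose(grad_S(p0), fd, rtol=1e-6, atol=1e-6))  # True
```

Setting those three components to zero gives the coupled nonlinear system described above.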

6. Nov 26, 2015

### RicardoMP

I see. So I have to build my set of normal equations and solve it using, for example, Newton's Method (which is actually the method I'm being asked to use in later problems). My problem is coming up with a set of linearly independent functions that I can use to build the normal equations. For example, if my fitting function were $f(x)=a+bx^2$ (where $a$ and $b$ are unknown parameters), my set of functions would be $\phi_0 = 1$ and $\phi_1 = x^2$. In my case, the fitting function is $a+be^{cr}$, which I haven't been able to transform into something that gives me such a set of functions. That is what I'm having a hard time with.

7. Nov 26, 2015

### Ray Vickson

The linearly-independent functions you need are $f_0(x) = 1$ and $f_1(x) = e^{cx}$. The trouble is that you do not know what value of $c$ to use, so determining its value is an important part of the problem. So, all you can do is deal with the nonlinear problem
$$\min_{a,b,c} S(a,b,c) = \sum_{i=1}^n \left(a +b e^{c x_i}- y_i \right)^2,$$
by trying to solve the system
$$\begin{array}{rcll} \displaystyle \frac{\partial S}{\partial a} &= & 2\sum_{i=1}^n (a + b e^{c x_i} - y_i) & = 0 \\ \\ \displaystyle \frac{\partial S}{\partial b} &= & 2\sum_{i=1}^n e^{c x_i} (a + b e^{c x_i} - y_i) & = 0 \\ \\ \displaystyle \frac{\partial S}{\partial c} &= & 2\sum_{i=1}^n b x_i e^{c x_i} (a + b e^{c x_i} - y_i) & = 0 \end{array}$$.
It can be important to start the procedure from a decent approximation. One way to get such a starting point might be to use the approximation $e^{cx} \approx 1 + cx + c^2 x^2/2$, giving the rough fit $y \approx A + Bx + Cx^2$, where $A = a+b$, $B = bc$ and $C = b c^2/2$. If you use ordinary least-squares to find $A,B,C$, you can solve for your $a,b,c$ to get a starting point for the more precise procedure.
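This initialization trick can be sketched in NumPy as follows, again on made-up data generated from assumed parameters $a=2$, $b=-5$, $c=-0.5$ (not the thread's data set). The cubic truncation of the Taylor series biases the recovered values noticeably, which is acceptable for a starting point:

```python
import numpy as np

# Illustrative data from made-up parameters a=2, b=-5, c=-0.5 (the
# thread's actual data are in the attached file and not shown here).
x = np.linspace(0.0, 1.0, 15)
y = 2.0 - 5.0 * np.exp(-0.5 * x)

# Ordinary least squares on the quadratic model y ≈ A + B x + C x^2;
# np.polyfit returns coefficients from the highest degree down.
C, B, A = np.polyfit(x, y, deg=2)

# Invert A = a + b, B = b c, C = b c^2 / 2 to get the starting point.
c0 = 2.0 * C / B
b0 = B / c0
a0 = A - b0
print(a0, b0, c0)  # rough values only: the Taylor truncation biases them
```

The recovered $(a_0, b_0, c_0)$ land in the right neighborhood, which is all an iterative refinement needs.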

The nonlinear least-squares problem has been studied for decades by hundreds of researchers, and numerous reasonably good methods are widely available. If you want to re-invent the wheel, then OK, go for it; but if all you want to do is solve some specific problems, then why not use one of the many existing algorithms? For more details, see, e.g., https://en.wikipedia.org/wiki/Non-linear_least_squares
or https://en.wikipedia.org/wiki/Levenberg–Marquardt_algorithm
or http://www.seas.ucla.edu/~vandenbe/103/lectures/nlls.pdf [Broken] .

Last edited by a moderator: May 7, 2017
8. Nov 27, 2015

### RicardoMP

I see! I was too focused on trying to solve the problem through the examples I mentioned earlier and kept avoiding the derivatives of the sum of squared errors. I will try to work on it now, and maybe later post the results I arrive at. Thank you very much for the help! :)

Last edited by a moderator: May 7, 2017
9. Nov 27, 2015

### Ray Vickson

Note that there was a typo in my expression for the starting point: it should have said $C = b c^2/2$. (This has been corrected in post #7, but I cannot correct it in your response above because I am not permitted to edit someone else's post, even the part where it quotes my own post.)

10. Dec 3, 2015

### RicardoMP

I've been meaning to post my results here, but I forgot to do so these last few days. So, I minimized the sum of squared errors using the partial derivatives and found the parameters using a variety of methods (Newton's, fixed-point and Broyden's) to solve the system, coded by myself in Wolfram Mathematica. The data set I had to work with is in the .png file I uploaded, and the parameters I got using, for example, Newton's Method were: a=1.98215, b=-4.98174, c=-0.503306. I sure hope they are right, hehe! Thank you very much for the help!

#### Attached Files:

• values.PNG
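The Newton's-method route described in this post can be sketched as follows. This is a hedged illustration on synthetic data generated from assumed parameters (the attached data set is not reproduced in the thread, so the numbers a=1.98215 etc. cannot be checked here); the Hessian entries come from differentiating the gradient in post #7 once more, and the step-halving safeguard is an addition for robustness, not part of the original assignment.

```python
import numpy as np

# Synthetic stand-in data from made-up parameters a=2, b=-5, c=-0.5.
x = np.linspace(0.0, 2.0, 20)
y = 2.0 - 5.0 * np.exp(-0.5 * x)

def S(p):
    a, b, c = p
    return np.sum((a + b * np.exp(c * x) - y) ** 2)

def grad_and_hess(p):
    """Gradient of S and its Jacobian (the Hessian of S), obtained by
    differentiating the three stationarity equations once more."""
    a, b, c = p
    E = np.exp(c * x)
    e = a + b * E - y
    g = 2.0 * np.array([np.sum(e),
                        np.sum(E * e),
                        b * np.sum(x * E * e)])
    m = e + b * E                      # recurring factor in the c-derivatives
    H = 2.0 * np.array([
        [x.size,            np.sum(E),          b * np.sum(x * E)],
        [np.sum(E),         np.sum(E * E),      np.sum(x * E * m)],
        [b * np.sum(x * E), np.sum(x * E * m),  b * np.sum(x**2 * E * m)],
    ])
    return g, H

# Newton iteration on grad S = 0, with simple step halving as a safeguard.
p = np.array([1.5, -4.0, -0.4])        # rough starting guess
for _ in range(50):
    g, H = grad_and_hess(p)
    step = np.linalg.solve(H, g)
    t = 1.0
    while S(p - t * step) > S(p) and t > 1e-8:
        t *= 0.5                       # back off if the full step overshoots
    p = p - t * step
    if np.linalg.norm(t * step) < 1e-13:
        break

print(p)  # converges to (2, -5, -0.5) on this synthetic data
```

Broyden's method would replace the analytic Hessian with a rank-one update, and fixed-point iteration would rearrange the same three equations; the system being solved is identical.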
11. Dec 3, 2015

### Ray Vickson

Assuming you mean $T = a + b e^{cr}$, I tackled the problem using Maple's "Optimization" package. The result is
[a = 1.98216100005468632, b = -4.98174954071660458, c = -.503303358459761707]
The Maple instruction was very simple: it was just "Minimize(S)" (after loading the Optimization package and giving a formula for S). Sometimes Maple needs a bit of a nudge, via an approximate starting point for example, but in this case it does fine all on its own. Maple's numerical optimization packages use the well-developed and thoroughly researched algorithms from NAG (the Numerical Algorithms Group).
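For comparison, the same one-call convenience exists in Python via `scipy.optimize.curve_fit` (assuming SciPy is available), which wraps a Levenberg-Marquardt-style least-squares solver. Again the data here are a synthetic stand-in, since the thread's data set is not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in data from made-up parameters a=2, b=-5, c=-0.5.
r = np.linspace(0.0, 2.0, 20)
T = 2.0 - 5.0 * np.exp(-0.5 * r)

def model(r, a, b, c):
    return a + b * np.exp(c * r)

# curve_fit minimizes the sum of squared residuals; p0 is a rough start.
popt, pcov = curve_fit(model, r, T, p0=[1.5, -4.0, -0.4])
print(popt)  # close to [2, -5, -0.5]
```

Like Maple's Minimize, it benefits from a reasonable starting point for strongly nonlinear models.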

I would be willing to bet that full-fledged Mathematica (not Wolfram Alpha) has similar optimization capabilities, but that is a guess on my part; I have no access to Mathematica, so cannot say for sure what its packages include.

Last edited: Dec 3, 2015
12. Dec 4, 2015

### RicardoMP

Phew, I'm reassured to know you got the same parameters as me! I actually used Mathematica and had to code the algorithms myself, since that was one of the requirements of the problem given to me. It's the first time I've heard of Maple. Is it a Mathematica-like software? Is it widely used? Once again, thank you! :)

13. Dec 4, 2015

### Ray Vickson

Yes, Maple is essentially a Mathematica alternative, with very similar capabilities. In surveys of computer algebra systems, Mathematica typically comes out a bit ahead for certain types of tasks, while Maple comes out a bit ahead for others.

Maple is widely used throughout the world, but maybe not quite as widely as Mathematica, due at least in part to the latter's more effective marketing.