# Fit to (orthogonal?) polynomial function

1. May 14, 2013

### Tojur

Hi all. I need some advice on a project I'm working on.

I have some experimental (simulation) data and I need to find a function that fits it. The behaviour of the data changes when I modify some parameters I have. My goal is, from that single function, to be able to predict how the experimental data will change according to the parameters: I mean, to find an analytical expression that represents all the information I have.

Given the characteristics of my problem, I've decided to try a polynomial function. To see how the coefficients vary, I've done some fits with Mathematica. However, every time I change the degree of the polynomial, not only do the coefficients change value, but their signs can also change (for example, the coefficient of the quadratic term is positive in a degree-4 polynomial fit and negative in a degree-5 fit). I'm aware that there is not a unique function that can represent given data. However, this behaviour is a big problem for my goal, since it breaks down any real attempt to generalize my solution.
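For illustration, here is a small NumPy sketch that reproduces this behaviour on synthetic data (an arbitrary toy curve, not my actual simulation output):

```python
# Illustration with synthetic data: fit ordinary polynomials of degree 4
# and 5 to the same points and compare the coefficient of the x^2 term.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 50)
y = np.exp(-x) + 0.01 * rng.standard_normal(x.size)  # arbitrary smooth curve + noise

# np.polyfit returns coefficients ordered from highest degree to lowest
c4 = np.polyfit(x, y, 4)
c5 = np.polyfit(x, y, 5)

print("x^2 coefficient, degree-4 fit:", c4[-3])
print("x^2 coefficient, degree-5 fit:", c5[-3])

# The total squared error can only decrease as the degree grows,
# even though the individual coefficients jump around.
r4 = np.sum((np.polyval(c4, x) - y) ** 2)
r5 = np.sum((np.polyval(c5, x) - y) ** 2)
```

The quadratic coefficient printed by the two fits differs, which is exactly the instability I'm describing.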

As far as I can tell, a possible solution to this problem is to make the fit in a set of orthogonal functions: in particular, orthogonal polynomials. However, I would like to know your opinion on this. Two aspects of this approach worry me. The first is whether fitting with orthogonal polynomials (or orthogonal functions in general) would really solve my problem of coefficients changing with the degree of the fitting function. The second is that some of these sets (the few I know), like the Legendre polynomials, contain a quadratic term even in their high-degree members: I wonder whether this would be a problem, since I believe the data have a strong dependence on this term (this is more an observational hunch, nothing rigorous).

I hope I've made myself clear. Any suggestion or advice would be really appreciated. Also, any bibliography for developing the orthogonal fit would be nice (if it is still a good idea, of course).

2. May 15, 2013

### Stephen Tashi

Are you using single-variable polynomials? If you expect to predict some result when you vary parameters (plural), how would you expect to do this with a polynomial in one variable? Are you hoping to find how the coefficients of a single-variable polynomial depend on the several parameters?

3. May 15, 2013

### Tojur

No, of course not. The function I'm looking for is a polynomial with respect to a single principal variable "x". The dependence on the other variables (which I call parameters) is something that will be inserted into the coefficients of the polynomial expansion (or this is what I hope to be able to do later). But before doing that, I need to make a fit with respect to the variable "x" that does not depend on the degree of the polynomial; once I have those degree-invariant coefficients, I can analyze how they depend on the parameters.

I'm sorry if I wasn't very clear the first time (I hope I have been clear enough this time).

Sorry, I misread. To "Are you hoping to find how the coefficients of a single variable polynomial depend on the several parameters?" the answer is yes :D

Last edited: May 15, 2013
4. May 15, 2013

### Stephen Tashi

Let me restate your question, as I understand it:

There is a simulation $S(P)$ whose output is a set of data points $(x_i, y_i)$ and whose input is the vector of parameters $P = (p_1, p_2, \dots, p_n)$.

Your goal is to find an approximation of the output of the simulation that has the form:
$y = \sum_{i=1}^M c_i(P) f_i(x)$ where the $f_i$ are a set of functions that do not depend on the parameter vector $P$ and the $c_i$ are a set of functions that may depend on the parameter vector $P$ but do not depend on the variable $x$.

You will consider such an approximation to be successful if small changes in the $P$ vector produce only small changes in the functions $c_i$.

-----

I'm sure most mathematicians would first try a set of functions $f_i$ that are orthogonal, since a representation using a non-orthogonal set of functions can be rewritten as a representation using orthogonal basis functions.
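As a quick sketch of that point (the polynomial $1 + 2x + 3x^2$ is made up for illustration), NumPy can convert a monomial-basis polynomial into the Legendre basis exactly:

```python
# Sketch: re-expressing a monomial-basis polynomial in the Legendre basis.
import numpy as np
from numpy.polynomial import Polynomial, Legendre

p = Polynomial([1.0, 2.0, 3.0])   # 1 + 2x + 3x^2 in the monomial basis
q = p.convert(kind=Legendre)      # same polynomial on P_0, P_1, P_2

xs = np.linspace(-1.0, 1.0, 7)
print(q.coef)                     # → [2. 2. 2.], i.e. 2*P0 + 2*P1 + 2*P2
print(np.allclose(p(xs), q(xs)))  # → True: both forms agree everywhere
```

Since $P_2(x) = (3x^2 - 1)/2$, the identity $1 + 2x + 3x^2 = 2P_0 + 2P_1 + 2P_2$ checks out by hand.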

Is the simulation you mentioned deterministic or stochastic? If the simulation is stochastic then I think fitting a polynomial to the output curve is not a good approach.

Representing a function as a sum of other functions (such as orthogonal polynomials or trigonometric functions) is appropriate when the function represents something that is (or can be conceptualized as) literally a "sum", i.e. an arithmetic addition of other functions. (This is more specific than saying the function is the "combined effect" of other functions.)

Often a function representing a real-life phenomenon cannot be exactly represented by a finite sum of orthogonal polynomials. In such a case, a useful representation will have coefficients that are large on a few of the component polynomials and small on the rest of them.

Without knowing what you are simulating, I can't say whether it is plausible that your output curve is an arithmetic sum of functions. I also don't know whether representing it with orthogonal polynomials or orthogonal trigonometric functions (i.e. Fourier analysis) would be useful.

Can the changes you want to perform on the parameters $P$ be visualized as some transformation that couples the change in one component of $P$ to changes in the others? For example, if $p_1$ is the pressure in a pipe and $p_2$ is the rate of flow of some chemical through the same pipe, then plausible changes in the $P$ vector may not vary $p_1$ and $p_2$ in a completely independent manner. The change in $p_2$ may not be a function only of the change in $p_1$, but physical constraints (i.e. adjusting various valves) may imply that a change in $p_2$ is also not entirely independent of a change in $p_1$.

5. May 15, 2013

### Tojur

First of all, thanks for taking the time to answer.

Your explanation of my problem is correct. The simulation I'm using is in fact deterministic. The data I obtain from the simulation are (as far as I can tell from the information I have) some kind of hybrid of known functions like the Gaussian function, the Bessel function (order 0 of the first kind), and, from what I see, a linear function and probably others; put another way, I would say that my data in some manner represent transition functions between those, or at least between functions with similar behaviour. From the series expansions of these functions, I thought that my answer could be written as a polynomial expansion. My idea is that, depending on the parameters, it would transform into one function or another, and could predict the middle states, which is what my data represent. So, do you think it would be useful to try an orthogonal polynomial expansion?

Now, going a step further, once I have the fits, I will try to find how the expansion coefficients "c" depend on the parameters P. These parameters depend on each other: they are not independent, but this is expected from the characteristics of my problem. In fact, these parameters represent measurable quantities.

6. May 16, 2013

### Stephen Tashi

Without knowing the details of your investigation, I can't guess what you should try first.

I agree that the approach you describe may work. However, it won't be simple to see the results of that approach as a blending of known functions unless those functions themselves appear as members of the basis functions $f_i$. For example, if the $f_i$ are polynomials and the "natural" output of the simulation is a blend of Gaussians and Bessel functions, it won't be simple to recognize how a sum of polynomials can be represented as a sum of a Gaussian plus a Bessel function.

If you are convinced that the output of the simulation is a blend of certain functions, you can seek to represent it by letting the $f_i$ be those functions. If you literally want it to be a "mixture" of those functions, you can add the constraint $\sum_{i=1}^M c_i(P) = 1$. Suppose you choose functions $f_i$ that are not orthogonal. You must find a way to compute the coefficients $c_i(P)$ when given a particular set of data $(x_j, y_j)$. One approach is to define the fit as a least-squares problem:

We are given a data set of $N$ points $(x_j, y_j)$. We are given $M$ known functions $f_i$. Find the constants $c_i$ that minimize the squared error $\sum_{j=1}^N \left( y_j - \sum_{i=1}^M c_i f_i(x_j) \right)^2$.

With orthogonal functions we expect to find a unique solution for the $c_i$, and a solution to the minimization can be estimated by "taking the inner product of the data with each basis function", so to speak. With non-orthogonal functions, there might be more than one solution for the $c_i$ and it may be hard to solve the minimization problem. How hard would it be to solve the minimization problem with your data?
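A minimal sketch of that least-squares computation, with an illustrative (non-orthogonal) basis and synthetic data; a Bessel term such as scipy.special.j0 could be substituted for one of the basis functions:

```python
# Least-squares fit of data to a chosen set of basis functions f_i:
# minimize sum_j ( y_j - sum_i c_i f_i(x_j) )^2 via the design matrix.
import numpy as np

def fit_coefficients(x, y, basis):
    """Solve min_c ||y - A c||^2 where A[j, i] = basis[i](x[j])."""
    A = np.column_stack([f(x) for f in basis])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c

# Illustrative basis: a Gaussian, a cosine, and a line (all made up here).
basis = [lambda t: np.exp(-t**2), np.cos, lambda t: t]

# Synthetic check: data built from known coefficients should be recovered.
x = np.linspace(0.0, 4.0, 40)
true_c = np.array([1.5, -0.7, 0.2])
y = sum(c * f(x) for c, f in zip(true_c, basis))

c = fit_coefficients(x, y, basis)
print(c)  # close to [1.5, -0.7, 0.2]
```

The same design-matrix approach works for any choice of $f_i$; orthogonality only makes the matrix better conditioned and the solution easier to interpret.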

7. May 23, 2013

### Tojur

Hi, thanks for your suggestion. I didn't answer before because I've been reading about the subject, in particular about fitting with orthogonal polynomials, and trying to carry it out. My idea is to change the usual range of orthogonality of the polynomials (which is from -1 to 1 for Chebyshev polynomials, for example) so that they span the interval containing my data. This can be done with a simple change of variable. However, there is a problem: I want to find the Chebyshev polynomial expansion of my discrete set of points

$f(x) \approx \sum\limits_{j=1}^{N} c_j T_{j-1}(x)$

On the other hand, the discrete orthogonality condition for Chebyshev polynomials is

$\sum\limits_{k=1}^N T_i(x_k)T_j(x_k)= \left\{ \begin{array}{c l} 0 & i \neq j\\ N/2 & i=j \neq 0\\ N & i=j=0 \end{array}\right.$

which holds only if the sample points are the Chebyshev nodes

$x_k=\cos\left( \dfrac{\pi (k-1/2)}{N} \right), \qquad k=1,2,\dots,N$

From this condition the coefficients can be found:

$c_j=\dfrac{2}{N}\sum\limits_{k=1}^N f(x_k)T_{j-1}(x_k)$

This seems easy to calculate. The problem is that to do this, I need to know the value of $f(x)$ at the points $x_k$, and it could happen that only a few of my data points fall exactly on these points: most of them could lie in between. I know that if I use a really high degree in the expansion I will find some $x_k$ that correspond to my data, but my goal is to make the expansion without having to use an extremely large one. I have searched for an example of fitting discrete data with orthogonal polynomials, but I couldn't find anything (only for continuous functions).

Any idea, advice or useful reference (ideally with an example) would help me a lot.
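One route I'm considering, sketched here with synthetic data standing in for my simulation output: do an ordinary least-squares fit in the Chebyshev basis at whatever sample points the data happen to have (NumPy's chebfit works this way), which would avoid needing data exactly at the nodes:

```python
# Sketch: least-squares fit in the Chebyshev basis at arbitrary sample
# points, so the data need not lie on the special Chebyshev nodes.
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(1)
a, b = 0.0, 5.0                       # interval containing the data
x = np.sort(rng.uniform(a, b, 60))    # scattered sample points
y = np.exp(-0.5 * x) * np.cos(2 * x)  # stand-in for the simulation output

# Map the data interval onto [-1, 1], the natural Chebyshev domain
t = 2.0 * (x - a) / (b - a) - 1.0

coef = C.chebfit(t, y, 10)            # coefficients on T_0 ... T_10
rms = np.sqrt(np.mean((C.chebval(t, coef) - y) ** 2))
print(rms)                            # small: the smooth curve is captured
```

This gives Chebyshev-basis coefficients from scattered points directly, at the cost of losing the exact discrete orthogonality relation above (the fit becomes a plain least-squares problem).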

8. May 24, 2013

### Stephen Tashi
