# Can a polynomial model any continuous function?

CraigH

Number Nine
You might look at the Stone-Weierstrass theorem, which says that any continuous function on a closed interval can be approximated by a polynomial function. I'm not sure about the general case offhand, but I strongly suspect that a bump function would provide a counterexample.

If I could use any polynomial up to degree ∞, then can I get a close fit to any continuous function?

There is no such thing as "polynomial up to degree ∞". Polynomials must have finite degree. What you are thinking of is Taylor series. And the answer to that question is no: there are smooth functions which are not analytic.

Edit: I just noticed you said "close fit" and not equal. Disregard what I said above.
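For the record, the standard example of a smooth-but-not-analytic function is ##f(x) = e^{-1/x^2}## (with ##f(0) = 0##): every derivative at 0 vanishes, so its Taylor series at 0 is identically zero even though ##f## isn't. A quick numerical sketch:

```python
import math

def f(x):
    """The classic smooth-but-not-analytic function: exp(-1/x^2), with f(0) = 0."""
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

# Every derivative of f at 0 is 0, so the Taylor series of f at 0 is the
# zero function -- yet f is nonzero everywhere else. Numerically, f is
# extremely flat near 0 but far from zero away from it:
print(f(0.1))   # ~ 3.7e-44: essentially indistinguishable from 0
print(f(1.0))   # ~ 0.37: clearly not the zero function
```

So no Taylor polynomial centered at 0, of any degree, gets anywhere near this function on a whole interval around 0.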

So is it also true that you can fit a polynomial to any function if you use enough exponents?

The Stone-Weierstrass theorem states that given a continuous function ##f : [a,b] \rightarrow \mathbb{R}## and an ##\epsilon > 0##, there exists a polynomial ##p(x)## with the property
##|f(x)-p(x)|<\epsilon## for all ##x\in [a,b]##.

That is, provided you are willing to accept a fixed error, there is always a polynomial that is close enough. Note that the theorem says nothing about how high the degree must be.
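To see the theorem in action, here is a small sketch (the target function, grid, and degrees are arbitrary choices) that fits least-squares polynomials of increasing degree to ##f(x) = |x|##, which is continuous but not differentiable at 0, and prints the worst-case error on the grid:

```python
import numpy as np

# Sample f(x) = |x| on [-1, 1] and fit least-squares polynomials of
# increasing degree. The worst-case error on the grid shrinks, just as
# Stone-Weierstrass promises: for any epsilon there is a close-enough polynomial.
x = np.linspace(-1.0, 1.0, 2001)
f = np.abs(x)

errors = []
for degree in (2, 8, 32):
    p = np.polynomial.Chebyshev.fit(x, f, degree)  # numerically stable fit
    errors.append(np.max(np.abs(f - p(x))))
    print(f"degree {degree:2d}: max error {errors[-1]:.4f}")
```

The error keeps shrinking as the degree grows, but for a given ##\epsilon## the degree required can be large.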

CraigH
Thank you both for your answers, they have been very helpful. The reason I needed to know this is that I am programming a neural network, which will take an input vector in ##\mathbb{R}^6## (or a higher dimension) and find a function that maps this input to a vector in ##\mathbb{R}^3##.
I just wanted to make sure that this function can be a polynomial, as I have to predefine the form of this function and its degree. The neural network will use a learning algorithm and hopefully find coefficients that create a function that correctly maps the input to the output.
Thanks again, this website never fails to provide help!

CraigH
Do you know a general "rule of thumb" for how many exponents I will need? I've searched all over the web and can't find any papers or resources, apart from this how-to-choose-the-degree...

Each of the elements in the input vector is related to the output vector by the inverse square law. Each element in the input vector represents the reading of a sensor. These sensors will be placed around a radioactive source. The closer the source is to a sensor, the larger its value will be. This value is inversely proportional to the square of the distance from the source.
With an array of these sensors there will be a function that maps the input from all of the sensors to a specific location in 3D space, hence a 6+ dimensional input vector and a 3-dimensional output vector.
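As a sketch of that setup (the sensor positions, source location, and `readings` helper below are all hypothetical, just to make the model concrete), the ideal inverse-square readings are easy to simulate:

```python
import numpy as np

# Hypothetical layout: 6 sensors on the coordinate axes around the origin.
sensors = np.array([[ 1.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
                    [ 0.0, 1.0, 0.0], [ 0.0,-1.0, 0.0],
                    [ 0.0, 0.0, 1.0], [ 0.0, 0.0,-1.0]])

def readings(source_pos, strength=1.0):
    """Ideal inverse-square readings: strength / distance^2 for each sensor."""
    d2 = np.sum((sensors - source_pos) ** 2, axis=1)
    return strength / d2

r = readings(np.array([0.3, -0.2, 0.5]))
print(r)  # 6-vector; the sensor nearest the source reads highest
```

The learning problem is then to invert this map: recover the 3D source position from the 6-vector of readings.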

There is probably an analytical solution to this problem that I could calculate; however, a machine learning approach will be better, as the readings on the sensors won't actually be exactly proportional, and a machine learning approach allows for other insights into the source.

So my question: for this problem, what is an approximate number of exponents I might need? I know this is a very specific question and it's a bit off topic for this forum, but I really have no clue where to start. Do these things usually have a range of 4 to 10 exponents? 10 to 100? I'm going to use trial and error to find the best function, but I just don't know the range these things usually fall in.
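There is no universal rule, but one rough sense of scale comes from counting coefficients: a full polynomial in ##n## variables of total degree at most ##d## has ##\binom{n+d}{d}## monomial terms, so for a 6-dimensional input the number of coefficients grows quickly with degree:

```python
from math import comb

# A full polynomial in n variables of total degree <= d has C(n + d, d)
# monomial terms, i.e. that many coefficients to learn.
n = 6  # dimension of the input vector
for d in range(1, 6):
    print(f"degree {d}: {comb(n + d, d)} terms")
```

Even degree 3 or 4 already means a few hundred coefficients, which also sets a lower bound on how much training data is needed.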
