This isn't really a question (other than to ask if this has been considered before) but rather an interesting observation.

Lest anyone misunderstand the purpose of this post, I'm not trying to argue that maths is 'broken' or anything like that - maybe just that sometimes the 'obvious' solution isn't necessarily the right one.

If one takes the Taylor expansion of a function, and the Taylor expansion of its inverse, and substitutes one into the other, one would expect to get a lot of complicated sums converging to zero (or one for the coefficient of x).

For example: let y = sin(x), and so x = arcsin(y).

Take the Taylor expansion of Arcsin(y) and substitute for y the Taylor expansion of Sin(x).

The LHS is arcsin(sin(x)) = x, and the RHS is an infinite series in odd powers of x; as you would expect, the coefficient of each power works out to 0, except for x itself (where it's 1), leaving x = x.
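This cancellation is easy to verify with exact rational arithmetic. A minimal sketch in Python (the coefficient formulas for the sin and arcsin Maclaurin series are standard; the truncation order N = 9 is just an illustrative choice):

```python
from fractions import Fraction as F
from math import comb, factorial

N = 9  # work with polynomials modulo x^N

def mul(p, q):
    # polynomial product, truncated modulo x^N
    r = [F(0)] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if a and i + j < N:
                r[i + j] += a * b
    return r

def compose(outer, inner):
    # evaluate outer at the polynomial inner (Horner scheme), modulo x^N
    r = [F(0)] * N
    for c in reversed(outer):
        r = mul(r, inner)
        r[0] += c
    return r

# Taylor coefficients of sin(x): (-1)^m / (2m+1)! on odd powers
sin_c = [F((-1) ** (k // 2), factorial(k)) if k % 2 else F(0) for k in range(N)]
# Taylor coefficients of arcsin(y): C(2m, m) / (4^m (2m+1)) on odd powers
asin_c = [F(comb(k - 1, k // 2), 4 ** (k // 2) * k) if k % 2 else F(0) for k in range(N)]

# every coefficient below x^9 cancels, except the coefficient of x
composed = compose(asin_c, sin_c)
print([int(c) for c in composed])  # → [0, 1, 0, 0, 0, 0, 0, 0, 0]
```

The truncation errors of both series only affect degree 9 and above, so modulo x^9 the composition comes out as exactly x.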

Now, as above, but let y = 1 - x^2, so that the inverse is x = (1 - y)^(1/2).

LHS = x as before

The Taylor expansion of (1 - y)^(1/2) is an infinite sum of powers of y.

When you substitute (1-x^2) for y, you can see that you will only get even powers of x on the RHS.

So we have an equation of the form x = (infinite series in powers of x^2), i.e. no odd powers of x on the RHS at all, so nothing there can cancel the x on the left.

Subtracting x from both sides, we have an infinite polynomial with non-zero (in fact divergent) coefficients which nevertheless equals zero everywhere.

I appreciate that there are logical issues in rearranging and summing divergent series, but if one takes the first n terms of (1 - y)^(1/2), substitutes y = 1 - x^2 and calculates the resulting terms, you can see what's happening.

As you take larger values of n, you get an expanding interval where the values are all very close to zero.

For example, when n = 60, between x = 0.5 and x = 1.3 the absolute value of the polynomial doesn't exceed 10^-6.
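This is easy to reproduce numerically; a small Python sketch (pure floating point, with the binomial-series coefficients of (1 - y)^(1/2) generated by the recurrence c_{k+1} = c_k (k - 1/2)/(k + 1)):

```python
def partial_sum(x, n):
    # first n+1 terms of the binomial series (1-y)^(1/2) = sum c_k y^k,
    # evaluated at y = 1 - x^2; where the series converges, the limit is |x|
    y = 1.0 - x * x
    c, yk, total = 1.0, 1.0, 0.0
    for k in range(n + 1):
        total += c * yk
        yk *= y
        c *= (k - 0.5) / (k + 1)  # c_{k+1} = c_k (k - 1/2) / (k + 1)
    return total

# maximum deviation of the n=60 (degree-120) polynomial from x on [0.5, 1.3]
grid = [0.5 + 0.01 * i for i in range(81)]
worst = max(abs(partial_sum(x, 60) - x) for x in grid)
print(worst)  # stays well below 10^-6 across the interval
```

The interval makes sense: the binomial series converges for |y| < 1, i.e. |1 - x^2| < 1, which is 0 < x < sqrt(2), and [0.5, 1.3] sits inside that range.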

You'll also see that the coefficients are rapidly diverging. However, if you plot the coefficient of x^k against k (k = 0, ..., n) you'll see that they become an awfully close fit to a damped exponential cosine function. So whilst they are tending to infinity, they are doing so in a controlled fashion.
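The divergence of individual coefficients can be checked exactly: the coefficient of x^(2j) in the degree-n partial sum is sum over k from j to n of c_k (-1)^j C(k, j). A sketch with exact rationals (the choice j = 5, i.e. the x^10 coefficient, is just for illustration):

```python
from fractions import Fraction as F
from math import comb

def series_coeff(j, n):
    # exact coefficient of x^(2j) in sum_{k=0}^{n} c_k (1 - x^2)^k,
    # where c_k are the binomial-series coefficients of (1-y)^(1/2)
    c, total = F(1), F(0)
    for k in range(n + 1):
        if k >= j:
            total += c * (-1) ** j * comb(k, j)
        c *= F(2 * k - 1, 2 * (k + 1))  # c_{k+1} = c_k (2k-1) / (2k+2)
    return total

# the x^10 coefficient keeps growing as more terms are included
for n in (15, 30, 60):
    print(n, float(series_coeff(5, n)))
```

Since c_k < 0 for all k >= 1, every extra term adds a positive amount to this coefficient, so it increases without bound as n grows.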

Taking these together, I believe it is meaningful to talk about an infinite polynomial with divergent coefficients that equals zero everywhere, and that it's not just some mathematical trick.

Is this interesting? At the very least, it suggests that the argument "because a Taylor series equals zero everywhere, its coefficients must all equal zero" isn't necessarily true for series of this kind. Additionally, this function (if it can be called a function, given the divergent coefficients) can't itself be expanded as a Taylor series: if the function is zero everywhere, then it and all its derivatives equal 0, giving a null expansion.

Over to you.

Gareth
