An infinite zero polynomial with non-zero coefficients

  • #1
taylog1
This isn't really a question (other than to ask if this has been considered before) but rather an interesting observation.

Lest anyone misunderstand the purpose of this post, I'm not trying to argue that maths is 'broken' or anything like that - maybe just that sometimes the 'obvious' solution isn't necessarily always the right one.


If one takes the Taylor expansion of a function, and the Taylor expansion of its inverse, and substitutes one into the other, one would expect to get a lot of complicated sums converging to zero (or one for the coefficient of x).

For example: Let y = Sin(x), and so x = Arcsin(y).

Take the Taylor expansion of Arcsin(y) and substitute for y the Taylor expansion of Sin(x).

The LHS is Arcsin(sin(x)) = x, and the RHS is an infinite series with terms in odd powers of x and, as you would expect, the coefficients of each term equate to 0, other than for x (when it's 1) leaving x=x.
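This cancellation can be checked directly with a short sketch in exact rational arithmetic (Python here; the helper name mul_trunc and the hardcoded series coefficients up to degree 5 are my own illustration, not from the thread):

```python
from fractions import Fraction as F

# Standard Maclaurin coefficients up to x^5:
# sin(x)    = x - x^3/6 + x^5/120 + ...
# arcsin(y) = y + y^3/6 + 3y^5/40 + ...
sin_c    = [F(0), F(1), F(0), F(-1, 6), F(0), F(1, 120)]
arcsin_c = [F(0), F(1), F(0), F(1, 6),  F(0), F(3, 40)]

def mul_trunc(p, q, deg):
    # product of two coefficient lists, truncated past x^deg
    r = [F(0)] * (deg + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= deg:
                r[i + j] += a * b
    return r

# Compose: substitute the sin series for y in the arcsin series
deg = 5
result = [F(0)] * (deg + 1)
power = [F(1)]                      # sin(x)^0
for k in range(deg + 1):
    for i, c in enumerate(power):
        result[i] += arcsin_c[k] * c
    power = mul_trunc(power, sin_c, deg)

print(result)   # coefficient of x is 1; everything else cancels to 0
```

Every coefficient except that of x comes out exactly zero, as described.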


Now, as above, but let y = (1-x^2), and so the inverse is x = (1-y)^(1/2)

LHS = x as before

Taylor expansion of (1-y)^(1/2) = infinite sum of powers of y

When you substitute (1-x^2) for y, you can see that you will only get even powers of x on the RHS.


So, we have an equation x = infinite series of powers of x^2, ie no x^1 term on the RHS, and so the x on the left can't cancel out.

Subtract x from both sides and we have an infinite polynomial with non-zero (actually infinite) coefficients, but which equals zero everywhere.

I appreciate that there are logical issues in summing divergent sums, but if one looks at the results for the first n terms of (1-y)^1/2, substituting y = (1-x^2) and calculating the terms, you can see what's happening.


As you take larger values of n, you get an expanding interval on which the values of the polynomial are very close to zero.

For example, when n=60, between x=0.5 and x=1.3, the value of the polynomial doesn't exceed 10^-6.
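A minimal sketch reproducing this numerically (Python; sqrt_partial is a name I've made up for illustration):

```python
def sqrt_partial(y, n):
    # Partial sum (up to y^n) of the binomial series for (1 - y)^(1/2)
    # = 1 - y/2 - y^2/8 - y^3/16 - ...
    term, total = 1.0, 1.0
    for k in range(n):
        term *= (0.5 - k) / (k + 1) * (-y)
        total += term
    return total

# Substitute y = 1 - x^2 and subtract x: the result should be
# very close to zero on the interval reported above
for x in [0.5, 0.8, 1.0, 1.3]:
    print(x, sqrt_partial(1 - x * x, 60) - x)
```

For these x values the printed differences come out well below 10^-6, consistent with the observation above.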

You'll also see that the coefficients rapidly diverge. However, if you plot the coefficient of x^k against k (k = 0..n) you'll see that they become an awfully close fit to a damped exponential cosine function. So whilst they tend to infinity, they do so in a controlled fashion.
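The coefficient growth can be seen directly by expanding the substituted partial sum into powers of x (again a sketch; composed_coeffs is an illustrative name):

```python
def composed_coeffs(n):
    # Coefficients of x^j in the degree-n partial sum of the
    # (1 - y)^(1/2) binomial series after substituting y = 1 - x^2
    coeffs = [0.0] * (2 * n + 1)
    power = [1.0]          # (1 - x^2)^0
    c = 1.0                # current series coefficient of y^k
    for k in range(n + 1):
        for j, p in enumerate(power):
            coeffs[j] += c * p
        c *= (0.5 - k) / (k + 1) * (-1.0)
        # multiply power by (1 - x^2)
        nxt = [0.0] * (len(power) + 2)
        for j, p in enumerate(power):
            nxt[j] += p
            nxt[j + 2] -= p
        power = nxt
    return coeffs

# Odd coefficients are identically zero; even ones grow with n
print(composed_coeffs(10)[2], composed_coeffs(60)[2])
```

The coefficient of x^2, for instance, keeps growing as n increases, while every odd coefficient stays exactly zero.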

Taking these together I believe you can see that it is meaningful to talk about an infinite polynomial with infinite coefficients that always equals zero, and it's not just some mathematical trick.


Is this interesting? At the very least, it shows that an argument concluding "because a Taylor series equals zero everywhere, the coefficients must all equal zero" isn't necessarily true. Additionally, this function (if it can be called a function, having infinite coefficients) can't itself be expanded as a Taylor series: if the function is zero everywhere, then it and all its derivatives equal 0, giving a null expansion.

Over to you.

Gareth
 
  • #2
I haven't worked through your examples but I do have a couple of comments. You will have radius of convergence issues in your examples. That aside, it is of course well known that there are infinitely differentiable functions that don't equal their Taylor series expansions. For example,
[tex]f(x) = e^{-\frac 1 {x^2}},\ x \neq 0[/tex]
with f(0) = 0. This has an identically 0 Taylor series which only equals the function at x=0.
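A quick numerical illustration of just how flat this function is at the origin (the underflow to exactly 0.0 in floating point makes the point vividly):

```python
import math

def f(x):
    # the classic flat function: infinitely differentiable, all
    # derivatives vanish at 0, yet nonzero everywhere else
    return math.exp(-1.0 / (x * x)) if x != 0 else 0.0

# Visibly nonzero away from 0...
print(f(0.5))                        # e^-4, roughly 0.018
# ...yet so flat at 0 that even a finite-difference derivative
# comes out as exactly 0.0 in floating point
h = 0.01
print((f(h) - f(-h)) / (2 * h))
```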

Also, perhaps not directly relevant, you can easily construct nonzero Fourier Series which sum to zero on a given finite interval. Still, assuming the convergence details are worked out, you might have an interesting example. I haven't run across an example of a non-zero Taylor series which sums to zero.
 
  • #3
Thanks - I agree with all your points. My comment re it not equalling its Taylor expansion only occurred to me when I was typing this up, so it's not well thought out.
Re the examples, the first does as one would expect, it's the second that's the interesting one.


I've got a spreadsheet if anyone wants to play with this - unfortunately it's 161k so have posted it here.

http://www.scribd.com/doc/21966457

Gareth
 
  • #4
When you say "Taylor series", I assume you mean "Taylor series about 0"?


It doesn't make sense to talk about a power series whose coefficients are infinite -- by definition, the coefficients of a power series are real numbers. (Or complex numbers, or whatever field or ring you're defining them over)

Now, it might make sense to generalize the idea of a power series -- but you can't be careless about it. I've used a similar method to produce the "power series":
[tex]f(x) = \sum_{1 \leq k \leq +\infty} (+\infty) x^k[/tex]​
Now, can you tell me if f should be the zero function or not? Not really -- your method is heavily dependent on knowing how the coefficients were generated.


If I reorganize what you're doing... you have a double sum... let me draw it in a grid:
Code:
c00 c01 c02 c03 c04 ...
c10 c11 c12 c13 c14 ...
c20 c21 c22 c23 c24 ...
c30 c31 c32 c33 c34 ...
c40 c41 c42 c43 c44 ...
 :   :   :   :   :  
 :   :   :   :   :
Let cn_ denote the sum of row n. Let c_n denote the sum of column n.

Your double sum is defined so that not only does each of the cn_ exist, but the sum of the cn_ exists as well. However, none of the c_n exist.

Here's a less complicated example:
Code:
 1 -1  0  0  0 ...
 1  1 -2  0  0 ...
 1  1  1 -3  0 ...
 1  1  1  1 -4 ...
 :  :  :  :  : 
 :  :  :  :  :
The sum along any row is zero, and the sum along every column is infinite.
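A sketch of a truncated version of this array makes the asymmetry concrete (entry is an illustrative helper encoding row n as n+1 ones followed by -(n+1)):

```python
def entry(i, j):
    # row i: ones in columns 0..i, then -(i+1) in column i+1
    if j <= i:
        return 1
    if j == i + 1:
        return -(i + 1)
    return 0

# Every row sums to zero exactly (finitely many nonzero entries)
print([sum(entry(i, j) for j in range(i + 2)) for i in range(5)])
# Column sums, by contrast, only grow as more rows are included
N = 100
print(sum(entry(i, 0) for i in range(N)))   # column 0 after N rows
```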

Here's another example that is quite pathological:
Code:
 1  0  0  0  0 ...
-1  1  0  0  0 ...
 0 -1  1  0  0 ...
 0  0 -1  1  0 ...
 0  0  0 -1  1 ...
 :  :  :  :  :
 :  :  :  :  :
This time, the first row sum is 1, and the remaining row sums are zero. Adding up all the row sums gives 1.

However, note that all of the column sums exist, and are zero! Adding up all of the column sums gives zero!
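A finite truncation is enough to exhibit the 1-versus-0 discrepancy here, since every row and column of this array has at most two nonzero entries:

```python
def entry(i, j):
    # the 'pathological' array: 1 on the diagonal, -1 just below it
    if i == j:
        return 1
    if i == j + 1:
        return -1
    return 0

N = 50  # beyond index k + 1 every entry in row/column k is zero
row_sums = [sum(entry(i, j) for j in range(N)) for i in range(N)]
col_sums = [sum(entry(i, j) for i in range(N + 1)) for j in range(N)]
print(sum(row_sums), sum(col_sums))   # 1 versus 0
```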


All of this is really just a kind of conditionally convergent series. As you know, the value of a conditionally convergent series depends heavily on the order you add its terms -- if you rearrange the terms you can get a different sum.

When you decide to add this collection of numbers by adding up the rows, then adding those results, that's one particular order for summing them.

If you add up the columns and add those, that's another order.
 
  • #5
Yes, Taylor series about 0.
When I wrote this up I was torn between trying to get my general idea across reasonably succinctly, or being very precise and boring the pants off everyone (or more likely losing myself in the detail).
I take your point, and this is what I was alluding to when I said 'logical issues in summing divergent sums'. To try and answer this:

When I talk about the value of the infinite series I was thinking of this.
Take the terms of the Taylor expansion of (1-y)^(1/2) up to y^n, and then substitute y = 1-x^2.
This will give a finite polynomial in x of power 2n.
Pick a value for x and calculate the value of the finite polynomial at x.
As n tends to infinity, the value of the finite polynomials at this particular x will converge to zero.
This will happen whatever x one initially takes.

I think that in this instance this gives a well defined limit, but I take your point that you need to be very careful when doing this. For example, if I took the full Taylor expansion of (1-y)^(1/2), substituted 1-x^2 and tried to sum the terms for any single power of x, I would get an infinite value.

Applying this to the point you make in your examples: whereas the limit of the finite polynomials at each x is unique and well defined, 'unpeeling' the polynomials and summing the terms for each power of x is what gives you the infinities. One could argue that this is where the problem lies and refuse to go there, but that won't get away from the fact that the coefficient of x (ie power 1) in the polynomial (finite or otherwise) can never be 0, so even if we got rid of the higher powers it wouldn't vanish entirely [I'm sure this can be explained better, I'll sleep on it].



Apologies if I didn't make myself clear the first time around. I do appreciate all the comments, and apologies in advance if I inadvertently give offence to anyone.

Gareth
 
  • #6
After a good night's sleep, I think this is a better articulation of the problem:

[I've gone for the Maclaurin series rather than Taylor series to avoid ambiguities re what number the Taylor series is being expanded around. Clearly one could generalise this.]


Let P(y) be the Maclaurin series of a function F(y) that is convergent within a given radius.

Let y = G(x) and let R(x) be its Maclaurin series. Substituting R(x) for y in P(y) gives us another infinite polynomial S(x).

I can think of 5 outcomes / ways of viewing the outcome of this substitution:


1) S(x) is the well behaved Maclaurin series of the function F(G(x))


2) If F(G(x)) = x then S(x) can mostly cancel out, leaving just x, and the identity x=x (such as occurs for Sin and Arcsin). (ie a special case of 1)).


In the example with F(y) = (1-y)^(1/2) and G(x) = 1-x^2, we can see that 1) isn't the correct outcome, because in P(R(x)) the coefficients of the powers of x are infinite.

Neither is 2) correct, because the expansion after substitution only has even powers of x, so x = x isn't a possibility.


3) The radius of convergence of P(y) is such that G(x) is incompatible. For example, if P(y) only converged for abs(y) < 1, then substituting y = 1 + x^2 is asking for trouble.

In this case, the radius of convergence of the Maclaurin series of (1-y)^(1/2) is abs(y) <= 1, so substituting y = (1-x^2) requires abs(1-x^2) <= 1, ie abs(x) <= sqrt(2). Moreover, the series sums to sqrt(x^2) = abs(x), not x, so the identity can only hold for non-negative x.

So, my earlier assertion that the infinite polynomial was 0 for all x was incorrect - it's only 0 for x in the range [0, sqrt(2)].
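A quick sketch of what goes wrong outside that range: at x = 1.5 we have y = -1.25, outside the radius of convergence, and the partial sums (built with an illustrative helper sqrt_partial) oscillate with growing amplitude instead of settling down:

```python
def sqrt_partial(y, n):
    # Partial sum (up to y^n) of the binomial series for (1 - y)^(1/2)
    term, total = 1.0, 1.0
    for k in range(n):
        term *= (0.5 - k) / (k + 1) * (-y)
        total += term
    return total

# Inside the radius (abs(1 - x^2) <= 1) the partial sums converge;
# at x = 1.5 (y = -1.25) they blow up as n grows
for n in (50, 100, 200):
    print(n, sqrt_partial(1 - 1.5 ** 2, n))
```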

This probably ties back to LCKurtz's observation re Fourier Series. It's possible that this infinite polynomial is the polynomial expansion of a Fourier Series that is zero over this range.


4) There is some other limitation on what is an acceptable substitution. The substitution used fails this test, giving a meaningless S(x)


5) If a polynomial has infinite coefficients, but they are related in a certain way (eg they are the limits of a sequence of coefficients that diverge, but whilst diverging are still related through a damped exponential cosine function), it may still be meaningful to describe an infinite sum of infinite numbers as being finite. This is less a solution than a suggestion of a way to sum a conditionally convergent / divergent series that may be meaningful.



Re option 4 (probably most people's preferred solution?).

We are substituting what looks like a well behaved function [y=1-x^2] into a convergent series in y, but even within the range where it 'should' converge the coefficients of the powers of x are infinite.

This surprises me - it suggests that if, while investigating a problem, one comes across a power series whose coefficients tend to infinity, there may still be a substitution one could make that would render the series 'sensible' / convergent. It also means one has to be very careful when performing such substitutions - can anyone tell me when such a substitution is and isn't valid? [An explanation of why it doesn't work in this instance would help.]

Gareth
 