Can Taylor series be used to get the roots of a polynomial?


Discussion Overview

The discussion revolves around the use of Taylor series to find the roots of a polynomial. Participants explore the methodology of expressing a polynomial as a function and deriving its inverse through Taylor series expansion, examining both the theoretical underpinnings and practical limitations of this approach.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant proposes writing a polynomial in the form $$a_nx^n+a_{n-1}x^{n-1}+\dots+a_2x^2+a_1x=c$$ and using the Taylor series of its inverse to find roots.
  • Another participant questions the feasibility of finding $$f^{-1}(x)$$ without knowing its explicit formula, suggesting that this may limit the method's effectiveness.
  • Concerns are raised about the existence of $$f^{-1}(x)$$ as a function, noting that a polynomial can have multiple roots.
  • Some participants discuss the relationship between their method and the Lagrange inversion theorem, questioning whether the coefficients obtained are the same.
  • One participant expresses uncertainty about the concept of radius of convergence and its implications for the Taylor series, while another clarifies that not all series converge for all values of $$x$$.
  • There is a suggestion to test the method on a quadratic equation to see if it yields a terminating series or a closed-form expression.
  • A participant reflects on their findings, noting that their coefficients for a specific polynomial matched those from the Lagrange inversion theorem, suggesting a potential simplification of their approach.

Areas of Agreement / Disagreement

Participants express a range of views on the validity and limitations of using Taylor series for finding polynomial roots. There is no consensus on the effectiveness of the method, and multiple competing perspectives on its feasibility and theoretical foundations remain present.

Contextual Notes

Participants highlight the need for further exploration regarding the evaluation of derivatives at specific points and the implications of the radius of convergence for the Taylor series. The discussion also touches on the potential for divergent series in certain cases.

Who May Find This Useful

This discussion may be of interest to those studying polynomial equations, Taylor series, inverse functions, and mathematical methods for root-finding in theoretical and applied contexts.

Kumar8434
I'm using this method:

First, write the polynomial in this form:
$$a_nx^n+a_{n-1}x^{n-1}+\dots+a_2x^2+a_1x=c$$
Let the LHS of this expression be the function ##f(x)##. I'm going to write the Taylor series of ##f^{-1}(x)## around ##x=0## and then put ##x=c## in it to get ##f^{-1}(c)## which will be the value of ##x##.

Since ##f^{-1}(0)=0## here, the first term of our Taylor series is 0.

Now, the only thing that remains is calculating the derivatives of ##f^{-1}(x)## at ##x=0##.

I'm using the fact that $$\frac{d(f^{-1}(x))}{dx}=\frac{1}{f'(f^{-1}(x))}$$

By differentiating this equation, we can get the second derivative of ##f^{-1}(x)## as:
$$\frac{d^2(f^{-1}(x))}{dx^2}=-\frac{f''(f^{-1}(x))}{\left(f'(f^{-1}(x))\right)^2}\cdot (f^{-1})'(x)$$

Similarly, we can get the other derivatives by further differentiation of this equation. Then we can evaluate all the derivatives at ##x=0## to get the Taylor series of ##f^{-1}(x)## and evaluate it at ##x=c## to get the value of ##x##.
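The step "continue this process to get more derivatives" can be mechanized without symbolic differentiation by solving ##f(f^{-1}(x))=x## for one series coefficient at a time. Below is a minimal pure-Python sketch of that idea (the polynomial ##5x+2x^2+x^3## is an illustrative choice, not one from the thread); the returned ##b_n## are the Taylor coefficients ##(f^{-1})^{(n)}(0)/n!##:

```python
from fractions import Fraction

def poly_mul(p, q, order):
    """Multiply two polynomials (coefficient lists), truncating past x^order."""
    r = [Fraction(0)] * (order + 1)
    for i, pi in enumerate(p):
        if pi == 0 or i > order:
            continue
        for j, qj in enumerate(q):
            if i + j > order:
                break
            r[i + j] += pi * qj
    return r

def invert_series(a, order):
    """a = [a1, a2, ...] are the coefficients of f(x) = a1*x + a2*x^2 + ...
    (constant term 0, a1 != 0).  Returns [b1, ..., b_order] such that
    g(x) = b1*x + b2*x^2 + ... satisfies f(g(x)) = x + O(x^(order+1)),
    determining one coefficient per pass."""
    a = [Fraction(v) for v in a]
    b = []
    for n in range(1, order + 1):
        g = [Fraction(0)] + b + [Fraction(0)]   # partial inverse; x^n term still unknown
        comp = Fraction(0)                      # coefficient of x^n in f(g(x)) so far
        power = [Fraction(1)]                   # g^0
        for ak in a:
            power = poly_mul(power, g, n)       # g^1, g^2, ... truncated at x^n
            if n < len(power):
                comp += ak * power[n]
        target = Fraction(1) if n == 1 else Fraction(0)
        b.append((target - comp) / a[0])        # choose b_n so the x^n term matches
    return b

# Example (assumed, not from the thread): f(x) = 5x + 2x^2 + x^3
b = invert_series([5, 2, 1], 4)
# b[0] = 1/a1 = 1/5 and b[1] = -a2/a1^3 = -2/125, matching the
# derivative values 1/a1 and -2a2/a1^3 above (divided by n!).
```

Exact rational arithmetic makes it easy to compare the output against hand-derived coefficients.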

1. Is this method correct?

2. Can something be done to make it better and remove the limitations?
 
Last edited:
Kumar8434 said:
I'm using the fact that $$\frac{d(f^{-1}(x))}{dx}=\frac{1}{f'(f(x))}$$
##\frac{d(f^{-1}(x))}{dx} = \frac{1}{f '(f^{-1}(x))}##
 
Stephen Tashi said:
##\frac{d(f^{-1}(x))}{dx} = \frac{1}{f '(f^{-1}(x))}##
Oh, I got it wrong. I'm going to edit it. Is there still some hope in this approach?
 
Kumar8434 said:
Is there still some hope in this approach?

It appears that in order to find ##f^{-1}(x)## by that method, you'd need to know ##f^{-1}(x)## to begin with.

So, if there is any hope for that method, you need to discover a way to evaluate expressions like ##f'(f^{-1}(x))## at ##x = 0## without knowing the explicit formula for ##f^{-1}(x)##.

There is also the technicality that ##f^{-1}(x)## doesn't necessarily exist (as a function). A polynomial of degree n can have up to n distinct roots. Of course, if you could develop an algorithm that found just one of the roots, that would be progress.
 
Stephen Tashi said:
It appears that in order to find ##f^{-1}(x)## by that method, you'd need to know ##f^{-1}(x)## to begin with.

So, if there is any hope for that method, you need to discover a way to evaluate expressions like ##f'(f^{-1}(x))## at ##x = 0## without knowing the explicit formula for ##f^{-1}(x)##.

There is also the technicality that ##f^{-1}(x)## doesn't necessarily exist (as a function). A polynomial of degree n can have up to n distinct roots. Of course, if you could develop an algorithm that found just one of the roots, that would be progress.
But I know the value of ##f^{-1}(x)## at ##x=0##. After working through all the formulas, all I have to do at the end is evaluate the resulting expression at ##x=0##, and I do have the value there.
For example, $$(f^{-1})'(x)=\frac{d(f^{-1}(x))}{dx}=\frac{1}{f'(f^{-1}(x))}$$
$$=\frac{1}{n a_n (f^{-1}(x))^{n-1}+(n-1)a_{n-1}(f^{-1}(x))^{n-2}+\dots+2a_2 f^{-1}(x)+a_1}$$
which gives $$\frac{1}{a_1}$$ at ##x=0##, since ##f^{-1}(0)=0##.
Similarly,
$$\frac{d^2(f^{-1}(x))}{dx^2}=-\frac{f''(f^{-1}(x))}{\left(f'(f^{-1}(x))\right)^2}\cdot (f^{-1})'(x)$$
We already have the value of the first derivative at ##x=0## so we can substitute that here, so
$$\frac{d^2(f^{-1}(x))}{dx^2}\bigg|_{x=0}=-\frac{1}{a_1^2}\cdot 2a_2\cdot\frac{1}{a_1}$$
$$=-\frac{2a_2}{a_1^3}$$
I think this process can be continued to get more derivatives by the product rule.
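The two values derived above, ##(f^{-1})'(0)=1/a_1## and ##(f^{-1})''(0)=-2a_2/a_1^3##, can be sanity-checked numerically for a quadratic, where the inverse branch through the origin has a closed form. This is a rough sketch with arbitrary test values ##a_1=3,\ a_2=2## (an assumption for illustration, not from the thread):

```python
import math

# f(x) = a1*x + a2*x^2 with arbitrary test values
a1, a2 = 3.0, 2.0

def g(c):
    """The branch of f^{-1} with g(0) = 0, from the quadratic formula."""
    return (-a1 + math.sqrt(a1 * a1 + 4 * a2 * c)) / (2 * a2)

# central finite differences for the first two derivatives of g at 0
h = 1e-5
d1 = (g(h) - g(-h)) / (2 * h)                 # approximates (f^{-1})'(0)
d2 = (g(h) - 2 * g(0.0) + g(-h)) / (h * h)    # approximates (f^{-1})''(0)

# compare against the hand-derived values 1/a1 and -2*a2/a1^3
print(d1, 1 / a1)
print(d2, -2 * a2 / a1 ** 3)
```

The agreement (to finite-difference accuracy) supports the chain-rule bookkeeping above without needing any series machinery.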
 
Last edited:
I think what you are doing is equivalent to: https://en.wikipedia.org/wiki/Lagrange_inversion_theorem.

The resulting infinite series has some radius of convergence. So an important question is whether x = c lies within that radius of convergence.

Try the method on a quadratic equation and see if you get a series that terminates - or perhaps a series whose sum has a closed-form expression.
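Following that suggestion: for ##f(x)=qx+px^2## the reverted series can be written in closed form, and its coefficients turn out to involve Catalan numbers. That closed form is this sketch's own claim (not from the thread), validated here only by comparison with the quadratic formula; the series does not terminate, but inside its radius of convergence it sums to the root nearer the origin:

```python
import math

def root_series(p, q, c, terms):
    """Partial sum of the inverse series of f(x) = q*x + p*x^2 at x = c.
    Assumed coefficient of c^n: (-1)^(n-1) * Catalan(n-1) * p^(n-1) / q^(2n-1)."""
    s = 0.0
    for n in range(1, terms + 1):
        catalan = math.comb(2 * n - 2, n - 1) // n   # Catalan number C_{n-1}
        s += (-1) ** (n - 1) * catalan * p ** (n - 1) * c ** n / q ** (2 * n - 1)
    return s

p, q, c = 1.0, 3.0, 1.0
approx = root_series(p, q, c, 40)
# closed-form root nearer 0, from the quadratic formula:
exact = (-q + math.sqrt(q * q + 4 * p * c)) / (2 * p)
# the series converges here because |4*p*c / q^2| = 4/9 < 1
```

So the outcome for a quadratic is Stephen's second alternative: a non-terminating series whose sum has a closed-form expression.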
 
Stephen Tashi said:
I think what you are doing is equivalent to: https://en.wikipedia.org/wiki/Lagrange_inversion_theorem.

The resulting infinite series has some radius of convergence. So an important question is whether x = c lies within that radius of convergence.

Try the method on a quadratic equation and see if you get a series that terminates - or perhaps a series whose sum has a closed-form expression.
I looked up the Lagrange inversion theorem. I didn't understand much, but it was also about getting Taylor series of inverse functions. Are the coefficients obtained from the Lagrange inversion theorem the same as those from my method?
And I'm new to Taylor series. I've not yet studied them in school; I've just read about them on the internet. So, I still don't understand what this radius of convergence means. I mean, we have regularly increasing factorials in the denominators of the Taylor series, so I think the series will always converge. Even if ##(x-c)## is large, then after a large number of terms, I think factorials will become dominating and the denominators will be so large that even the large exponent of ##(x-c)## won't contribute much to the answer. I think the Taylor series will always converge even if ##c## is not in some required range.
 
Last edited:
Kumar8434 said:
So, I still don't understand what this radius of convergence means.
A series expressed as a sum of terms that are functions of a variable ##x## may not converge for all possible values of ##x##. If it converges for only those values of ##x## in an interval ##(a,b)##, it is customary to write that interval as ##(c - r, c + r)## where ##r## is the "radius of convergence" and ##c## is the center of the interval. The term "radius" makes more sense when we are dealing with functions of a complex variable. Then there can be a circle of convergence in the complex plane. That circle has a radius.

Taylor series are an example of a "power series" since the terms involve sums of powers of the variable ##x##. Power series have some interval or circle of convergence. (If they converge for all values of ##x##, we count that as an "infinite" radius of convergence.) A non-power series might not have an interval or circle of convergence. The values at which it converges might form some irregular shape.

I mean, we have regularly increasing factorials in the denominators of the Taylor series, so I think the series will always converge.
No, it won't. I suggest you make a post on the topic of "Are there smooth functions whose Taylor series diverges?". That is a specific mathematical question - and maybe someone will answer your post besides me!

There are also examples of functions ##f## whose Taylor series converges at a value ##x=a## but does not converge to ##f(a)##.
Even if ##(x-c)## is large, then after a large number of terms, I think factorials will become dominating and the denominators will be so large that even the large exponent of ##(x-c)## won't contribute much to the answer.

More is involved than the factorials and the powers of ##(x-c)##; you also have to consider the other factors in the terms.
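A concrete instance of that point (a standard textbook example, not from the thread): for ##f(x)=\frac{1}{1-x}##, the n-th derivative at 0 is ##n!##, so the factorial in the denominator of each Taylor coefficient cancels exactly and the series is just ##\sum x^n##, which diverges for ##|x|>1##:

```python
def taylor_partial_sums(x, terms):
    """Partial sums of the Taylor series of 1/(1-x) about 0.
    f^(n)(0) = n!, so the coefficient f^(n)(0)/n! is exactly 1 and the
    n-th term is x^n: the factorials cancel instead of dominating."""
    s, sums = 0.0, []
    for n in range(terms):
        s += x ** n
        sums.append(s)
    return sums

inside = taylor_partial_sums(0.5, 30)    # |x| < 1: approaches 1/(1 - 0.5) = 2
outside = taylor_partial_sums(2.0, 30)   # |x| > 1: partial sums grow without bound
```

The radius of convergence here is 1: the factorials in "the denominators of the Taylor series" give no protection, because the derivatives in the numerators grow just as fast.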
 
Are the coefficients obtained from the Lagrange inversion theorem the same as those from my method?
That's a good question. I don't know the answer. You like examples. Try the quadratic equation both ways.
 
  • #10
Stephen Tashi said:
That's a good question. I don't know the answer. You like examples. Try the quadratic equation both ways.
I evaluated the first and second coefficients for ##f(x)=px^2+qx##. They were the same, so I think both the methods give the same series.
Except the Lagrange formula for the coefficients seems to be a better version of my formula. I think this formula of mine can be simplified to get that formula. So, I re-discovered this Lagrange inversion on my own!
Thanks. I'll stop asking these types of questions now if they resemble crackpotism.
 
  • #11
Kumar8434 said:
So, I re-discovered this Lagrange inversion on my own!
Yes, the real variable version.

Is there a way the series expansion can reveal two unequal roots of a quadratic?

Thanks. I'll stop asking these types of questions now if they resemble crackpotism.

They only resemble crackpotism if you assert things are true just because of numerical examples.

When people first learn some concepts of calculus, this can result in a kind of mental intoxication. The possibilities for doing algebraic manipulations suddenly explode. One day it seems hard just to solve quadratic equations and then suddenly the sky's the limit. You should enjoy the intoxication while it lasts.

What you will eventually encounter is that algebraic manipulations are not completely reliable - especially when the subject matter deals with "infinitely" small or large things. In particular, you need to be wary of thinking of a function as an algorithm that can be expressed by a concise formula. The definition of a function is permitted to have all sorts of complicated "if...then..." conditions.

There is a branch of mathematics called "The Calculus of Finite Differences" where algebra is less inhibited. (e.g.
https://www.physicsforums.com/threa...sequences-recursive-reps.892720/#post-5619687
https://www.physicsforums.com/threa...interpolation-polynomial.903217/#post-5688309 )
 
  • #12
Stephen Tashi said:
So an important question is whether x = c lies within that radius of convergence.
Can we first use Newton's method to get an approximate root, then apply Lagrange inversion around that point? I don't know much about Newton's method, but I've heard that it's also a method for finding polynomial roots. So, what would be better: to use Newton's method all along, or to use Newton's method first to get an approximate root and then apply Lagrange inversion?
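For reference, Newton's method itself is only a few lines: it refines a single starting guess by repeatedly following the tangent line, rather than producing a series. A generic sketch (the example polynomial ##x^2+3x-1## is an arbitrary choice for illustration):

```python
def newton(f, fprime, x0, tol=1e-12, max_steps=50):
    """Newton's method: repeatedly replace x by x - f(x)/f'(x)
    until the step size falls below tol."""
    x = x0
    for _ in range(max_steps):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# example: solve x^2 + 3x = 1, i.e. find a root of x^2 + 3x - 1, starting from 0
root = newton(lambda x: x * x + 3 * x - 1, lambda x: 2 * x + 3, 0.0)
```

Which root it finds depends on the starting guess ##x_0##, which is exactly the limitation raised in the reply below.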
 
  • #13
Stephen Tashi said:
Of course, if you could develop an algorithm that found just one of the roots, that would be progress.

I think that in general there can be no such algorithm; that is, you cannot find just one root unless there is something special about the polynomial, or you have been given some extra fact about it, like knowing for some reason that one root is twice another, which is tantamount to having two equations to eliminate between. Obviously, if you could find one root by a general algorithm, you could find them all. But in general, trying to pick out one root is like trying to pin down a globule of mercury. Every expression you come up with involves all the roots in an indistinguishable, i.e. symmetrical, fashion.
 
  • #14
Kumar8434 said:
Can we first use Newton's method to get an approximate root, then apply Lagrange inversion around that point?

Newton's method requires an initial estimate (which can be a guess) for a root. So Newton's method does not locate all the roots of an equation.

Are you interested in computer programming? Solving problems with numerical methods utilizes many approximations and you seem to be interested in numerical approximations. Programming also makes it easier to study methods like Newton's method because putting the steps of the algorithm into computer code makes the steps of the algorithm memorable.
 
  • #15
Stephen Tashi said:
Newton's method requires an initial estimate (which can be a guess) for a root. So Newton's method does not locate all the roots of an equation.

Are you interested in computer programming? Solving problems with numerical methods utilizes many approximations and you seem to be interested in numerical approximations. Programming also makes it easier to study methods like Newton's method because putting the steps of the algorithm into computer code makes the steps of the algorithm memorable.
Yeah, I've been learning it for a month now. I can only do something like prime factorization in C++ until now. I'd try to develop my skills.
 
  • #16
Kumar8434 said:
Yeah, I've been learning it for a month now. I can only do something like prime factorization in C++ until now. I'd try to develop my skills.

If you started with C++ without first learning C, you began your swimming lessons by jumping into deep water!
 
  • #17
Stephen Tashi said:
If you started with C++ without first learning C, you began your swimming lessons by jumping into deep water!
I searched on the internet before I started learning, and many people said that you need not learn C first and that they're much the same thing. I'm learning from an eBook and I've had no problem learning C++ so far; I've gotten up to writing functions.
 
