Proving the Existence of Roots in Complex Polynomials

  • #1
Delta2
TL;DR Summary
Every polynomial in C of degree n has exactly n roots in C.
How do we prove that every polynomial (with coefficients from C) of degree n has exactly n roots in C?

This is not homework (I wish I were young enough to have homework). I guess this is covered in every typical undergraduate introductory algebra course, but for the time being I can't find my algebra book from my undergraduate studies (25 years ago).
 
  • Like
Likes Hall
  • #2
Fundamental theorem of algebra, it is called. You will be able to find a good proof on the web.
 
  • Like
Likes WWGD and Delta2
  • #3
It is only true if you count multiplicities. So a complex polynomial of degree ##n## has at most ##n## distinct roots. There are quite a lot of very different proofs. Hence the question is: what are we allowed to use?

The shortest answer is: ##\mathbb{C}## is algebraically closed.
 
  • #4
fresh_42 said:
The shortest answer is: ##\mathbb{C}## is algebraically closed.
That's circular.
 
  • Like
Likes valenumr, swampwiz, graphking and 2 others
  • #5
anuttarasammyak said:
Fundamental theorem of algebra, it is called.
I had an algebra professor who liked to say that it is no longer a fundamental theorem in algebra, and it has never been an algebra theorem (every proof requires analysis).
 
  • #6
martinbn said:
That's circular.
No, it is not. You still have a formal induction to perform. It is short.
 
  • #7
fresh_42 said:
No, it is not. You still have a formal induction to perform. It is short.
How do you prove that ##\mathbb C## is algebraically closed?
 
  • #8
martinbn said:
How do you prove that ##\mathbb C## is algebraically closed?
I haven't said the two statements weren't equivalent.
 
  • #9
fresh_42 said:
I haven't said the two statements weren't equivalent.
How is that a proof then?
 
  • #10
martinbn said:
How is that a proof then?
It stretches my argument: what counts as a proof depends on what is acceptable to use! Admittedly in its extreme form, but anyway.

Here is another one:
All maximal ideals in ##\mathbb{C}[x]## are generated by a single linear polynomial. The assertion now follows from Hilbert's Nullstellensatz.

And another one:
Irreducible polynomials over the reals have degree at most ##2## by the intermediate value theorem. Now all quadratic polynomials can be split over ##\mathbb{C}## by solving Vieta's formulas.

You never start from scratch, pardon, set theory and ZF. So any demand to prove something requires a list of known facts, which hasn't been given. Algebraic closedness is simply the shortest fact from which to deduce the FTA.
 
  • #11
fresh_42 said:
It stretches my argument: what counts as a proof depends on what is acceptable to use! Admittedly in its extreme form, but anyway.

Here is another one:
All maximal ideals in ##\mathbb{C}[x]## are generated by a single linear polynomial. The assertion now follows from Hilbert's Nullstellensatz.

You never start from scratch, pardon, set theory and ZF. So any demand to prove something requires a list of known facts, which hasn't been given. Algebraic closedness is simply the shortest fact from which to deduce the FTA.
But Hilbert's theorem assumes algebraically closed fields.

Of course you don't have to go to ZF, but showing it is equivalent to something else is not always a proof. They could both be wrong, for example. Surely you would agree that showing that the Riemann hypothesis is equivalent to some other statement is not a proof, and you don't get to collect your million bucks.
 
  • Like
Likes S.G. Janssens
  • #12
martinbn said:
But Hilbert's theorem assumes algebraically closed fields.

Of course you don't have to go to ZF, but showing it is equivalent to something else is not always a proof. They could both be wrong, for example. Surely you would agree that showing that the Riemann hypothesis is equivalent to some other statement is not a proof, and you don't get to collect your million bucks.
And my second solution uses the topological completeness of ##\mathbb{R}##. You always use something; the only question is what this something may be. If you are willing to accept the truth of an equivalent statement, then we're done. Otherwise, you keep asking all the way down to ZF.
 
  • #13
The shortest proof I know uses Liouville's theorem, if ##p(z)## has no roots, then ##\frac1{p(z)}## is entire and bounded, thus a constant.
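
In more detail (a standard filling-in of the boundedness step, not in the original post): since ##|p(z)|\to\infty## as ##|z|\to\infty##, there is an ##R>0## with
$$|p(z)|\ge 1\quad\text{for } |z|\ge R,$$
so ##|1/p(z)|\le 1## there, while on the compact disk ##|z|\le R## the continuous function ##1/p## is bounded anyway. Hence ##1/p## is a bounded entire function, constant by Liouville, which contradicts ##\deg p\ge 1##.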
 
  • Like
Likes FactChecker, S.G. Janssens and wrobel
  • #14
I like the variational argument. This proof is not the shortest one, but it is still quite elementary and uses some fundamental ideas that reach far beyond complex analysis.

Let $$f(z)=\sum_{k=0}^n a_kz^k,\quad a_n\ne 0,\quad n\in\mathbb{N}$$ be a polynomial.

It is easy to see that ##|f(z)|\to\infty## as ##|z|\to\infty##. This property is referred to as coercivity. It implies that the function ##z\mapsto|f(z)|## attains its minimum in ##\mathbb{C}:## $$ \exists z':\quad |f(z')|=\inf_{z\in\mathbb{C}}|f(z)|.$$ Then it is not hard to show that this minimum is a root: ##f(z')=0##.
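
One standard way to make the coercivity claim explicit (my own filling-in): for ##|z|\ge 1##,
$$|f(z)|\ge |a_n||z|^n-\sum_{k=0}^{n-1}|a_k||z|^k\ge |z|^{n-1}\left(|a_n||z|-\sum_{k=0}^{n-1}|a_k|\right)\longrightarrow\infty\quad\text{as } |z|\to\infty.$$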
 
  • Like
  • Informative
Likes swampwiz, S.G. Janssens, martinbn and 1 other person
  • #15
fresh_42 said:
No, it is not. You still have a formal induction to perform. It is short.
What exactly do you mean by ##\mathbb{C}## being algebraically closed (do you mean that every polynomial has at least one root?), and what is this induction you speak of?
 
  • #16
Delta2 said:
What exactly do you mean by ##\mathbb{C}## being algebraically closed (do you mean that every polynomial has at least one root?), and what is this induction you speak of?
You can see from the discussion above that the two are closely related. What is left to prove depends on the definition of algebraically closed and on the phrasing of the fundamental theorem of algebra.

I had in mind:

FTA = every non-constant polynomial has a zero
Closure = every non-constant polynomial splits into linear factors

Closure ##\Longrightarrow ## FTA: obvious
FTA ##\Longrightarrow ## Closure:
##\deg p =1:\quad p=x-z_1## (taking ##p## monic, w.l.o.g.). Induction basis.
##\deg p=n:\quad \exists z_1\, : \,p(z_1)=0 \Longrightarrow p(x)=(x-z_1)p_1(x) \wedge \deg p_1=n-1##
Induction hypothesis: ##p_1(x)=(x-z_2)\cdot\ldots\cdot (x-z_{n}) ##
Induction step: ##p(x)=(x-z_1)p_1(x)=(x-z_1)(x-z_2)\cdot\ldots\cdot (x-z_{n}) ##

This is an extreme example, as the two statements are equivalent, or even the same if you define and phrase them accordingly. It was only meant to demonstrate that it is important to describe where to start, i.e. what can be taken for granted.

You can also see from the discussion above that there are dozens of very different proofs. Each one uses a different technique from a different mathematical field. This is why it is so important, in this case more than in any other, to describe the starting point of a possible proof.
 
  • Informative
Likes Delta2
  • #17
martinbn said:
I had an algebra professor who liked to say that it is no longer a fundamental theorem in algebra, and it has never been an algebra theorem (every proof requires analysis).
There is a fairly recent paper that claims a purely algebraic proof.

https://www.researchgate.net/publication/275364031_A_Purely_Algebraic_Proof_of_the_Fundamental_Theorem_of_Algebra
 
  • Like
Likes Delta2
  • #18
wrobel said:
I like the variational argument. This proof is not the shortest one, but it is still quite elementary and uses some fundamental ideas that reach far beyond complex analysis.

Let $$f(z)=\sum_{k=0}^n a_kz^k,\quad a_n\ne 0,\quad n\in\mathbb{N}$$ be a polynomial.

It is easy to see that ##|f(z)|\to\infty## as ##|z|\to\infty##. This property is referred to as coercivity. It implies that the function ##z\mapsto|f(z)|## attains its minimum in ##\mathbb{C}:## $$ \exists z':\quad |f(z')|=\inf_{z\in\mathbb{C}}|f(z)|.$$ Then it is not hard to show that this minimum is a root: ##f(z')=0##.
What if we restrict to a ball about the origin? Then we can use the fact that continuous functions on compact sets achieve their extrema, and show the minimum is 0. Edit: Maybe we can use analyticity, so that f(z) has finitely many lacunary values, and go from there?
 
  • #19
WWGD said:
What if we restrict to a ball about the origin? Then we can use the fact that continuous functions on compact sets achieve their extrema, and show the minimum is 0. Edit: Maybe we can use analyticity, so that f(z) has finitely many lacunary values, and go from there?
martinbn said:
The shortest proof I know uses Liouville's theorem, if ##p(z)## has no roots, then ##\frac1{p(z)}## is entire and bounded, thus a constant.
 
  • #20
WWGD said:
What if we restrict to a ball about the origin? Then we can use the fact that continuous functions on compact sets achieve their extrema, and show the minimum is 0. Edit: Maybe we can use analyticity, so that f(z) has finitely many lacunary values, and go from there?
Let's give an elementary argument. Let ##z_k## be a minimizing sequence:
$$|f(z_k)|\to\inf_{z\in\mathbb{C}}|f(z)|,\quad |f(z_1)|\ge |f(z_2)|\ge\ldots$$
Such a sequence exists by the definition of the infimum.

By coercivity this sequence is bounded: ##\sup_k|z_k|<\infty.## Thus it contains a convergent subsequence ##z_{k_j}\to z'## (Bolzano-Weierstrass).

By continuity, ##|f(z')|=\inf_{z\in\mathbb{C}}|f(z)|##. Now assume that ##f(z')\ne 0## and consider the expansion
$$f(z)=f(z')+c(z-z')^p+(z-z')^{p+1}Q(z-z'),\quad c\ne 0.$$
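
For the record, the classical way to conclude from here (the d'Alembert step, my own hedged filling-in): since ##e^{ip\theta}## runs over the whole unit circle, pick ##\theta## with
$$c\,e^{ip\theta}=-|c|\,\frac{f(z')}{|f(z')|},$$
and set ##z=z'+re^{i\theta}##. Then, with ##C## a bound for ##|Q|## near ##0##,
$$|f(z)|\le\left(1-\frac{|c|\,r^p}{|f(z')|}\right)|f(z')|+C\,r^{p+1}=|f(z')|-|c|\,r^p+C\,r^{p+1}<|f(z')|$$
for all small ##r>0##, contradicting the minimality of ##|f(z')|##. Hence ##f(z')=0##.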
 
  • Like
Likes WWGD and S.G. Janssens
  • #21
A polynomial map extends to a holomorphic map from the Riemann sphere to itself, sending infinity to infinity. This map sends every closed, hence compact, set to a compact, hence closed, set. Moreover, if the polynomial is non-constant, it is locally equivalent to a map of the form ##x\mapsto x^n##, ##n\ge 1## (the proof uses the inverse function theorem), and is thus an open map. Thus the image of the Riemann sphere is a subset of the Riemann sphere which is both open and closed, hence either empty or the entire sphere. Since it is not empty, it is all of the Riemann sphere. Hence the image of the complex plane is the entire complex plane; in particular, zero is in the image, so every non-constant polynomial has a zero.
 
  • #22
It seems that historically this was the first pure existence theorem: one does not solve the equation explicitly, and does not even provide a constructive procedure to obtain a solution, but merely asserts that a solution exists.
 
  • Informative
Likes Delta2
  • #23
I read somewhere that when David Hilbert explained to students what pure existence theorems are, he said: "There exists a person in this auditorium who has the least number of hairs on his/her body."
 
  • #24
I have spoken to mathematicians who talk about irreducible complexity in mathematics. The idea is that theorems all require a certain amount of inescapable work. I am not sure if this has ever been rigorously measured or not.

A short proof, as @fresh_42 has emphasized, does not mean that it requires less work. It just takes a lot of work as a given - e.g. the use of Liouville's Theorem to prove that every complex polynomial has a root depends on foundational work in complex analysis. As he points out, the shortest proof is "##\mathbb{C}## is algebraically closed; therefore all polynomials have a root." This proof assumes all of the necessary arguments as given.

All of the proofs of the FTA seem to depend on the topology of the real line and of the complex plane, and this is a lot of foundational work. The paper that claimed a purely algebraic proof uses the hyperreals, so I wonder if this is just a long end run around the usual approach. I haven't read the paper.

For myself, the fascinating question about the Fundamental Theorem of Algebra is how it depends upon notions of calculus in the plane and becomes straightforward once one recognizes that polynomials are analytic functions. One wonders about the unity of mathematics when algebraic questions require analysis to be solved. There are many questions that seem to become tractable through analysis - e.g. questions in number theory. Why is this?

A related question is how much structure is ultimately needed to prove a theorem. Complex polynomials need not be viewed as entire functions but merely as smooth functions. In this case one needs to count singularities and show that a polynomial can be extended to a smooth map of the 2-sphere into itself - as @mathwonk explained in post #21. This is also the idea underlying the "coercivity" proof that @wrobel gives in post #14.
 
  • Like
Likes fresh_42
  • #26
martinbn said:
By Gauss-Bonnet
The following generalized Rouché theorem belongs to the same circle of ideas. The Fundamental Theorem of Algebra follows from the Rouché theorem as well.
Let ##M\subset\mathbb{R}^m## be a bounded domain with a smooth boundary ##\partial M,\quad \overline M=M\cup\partial M##.
Let ##H,K:\overline M\to \mathbb{R}^m## be continuous mappings such that for some point ##a\in\mathbb{R}^m,\quad a\notin (H+K)(\partial M),\quad a\notin H(\partial M)## one has
$$\|K(x)\|\le \|H(x)-a\|,\quad \forall x\in\partial M.$$
Then ##\mathrm{deg}\,(H+K,a)=\mathrm{deg}\,(H,a).##
The famous Brouwer fixed-point theorem follows from it as well.
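
For what it's worth, a hedged sketch (my own, not from the post) of how the FTA follows from this version of the theorem: identify ##\mathbb{C}\cong\mathbb{R}^2##, take ##M=\{|z|<R\}##, ##H(z)=a_nz^n##, ##K(z)=\sum_{k=0}^{n-1}a_kz^k## and ##a=0##. For ##R## large enough,
$$\|K(z)\|\le\sum_{k=0}^{n-1}|a_k|R^k<|a_n|R^n=\|H(z)\|\quad\text{on } |z|=R,$$
so ##0\notin (H+K)(\partial M)##, ##0\notin H(\partial M)##, and ##\mathrm{deg}\,(H+K,0)=\mathrm{deg}\,(H,0)=n\ne 0##; a map whose degree at ##0## is non-zero must take the value ##0## somewhere in ##M##.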
 
  • #27
Delta2 said:
Summary:: Every polynomial in C of degree n has exactly n roots in C.

How do we prove that every polynomial (with coefficients from C) of degree n has exactly n roots in C?

This is not homework (I wish I were young enough to have homework). I guess this is covered in every typical undergraduate introductory algebra course, but for the time being I can't find my algebra book from my undergraduate studies (25 years ago).
@Delta2 Looking over the replies I see that your question has not been answered directly. The Fundamental Theorem of Algebra tells you that every non-constant polynomial has at least one root. But this does not say that every polynomial of degree n has exactly n roots. To get this you also need to convince yourself that if a polynomial has a root, then it factors as (Z - the root) times a polynomial of degree n-1. But then this new polynomial of degree n-1 also has a root by the Fundamental Theorem of Algebra, so one gets a second factor (Z - second root). This process ends after n steps, and since the polynomial has degree n it cannot have any further roots, because then its degree would be more than n.

So over the complex numbers a polynomial of degree n factors as (Z-root_1)×...×(Z-root_n). Note that some of the roots may be repeated, and after counting repeats there are exactly n roots.
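
The factoring step can be made explicit without long division (a standard identity, added here for concreteness): if ##p(Z)=\sum_{k=0}^n a_kZ^k## and ##p(z_1)=0##, then
$$p(Z)=p(Z)-p(z_1)=\sum_{k=1}^{n}a_k\left(Z^k-z_1^k\right)=(Z-z_1)\sum_{k=1}^{n}a_k\sum_{j=0}^{k-1}Z^{j}z_1^{\,k-1-j},$$
and the right-hand factor is visibly a polynomial of degree ##n-1## with leading coefficient ##a_n##.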

One might ask why use the Fundamental Theorem of Algebra at all when there are algebraic formulas to solve polynomial equations. There is the quadratic formula, which we learn in high school, and there are similar formulas for cubic and fourth degree polynomials. So... what's the big deal?

I imagine that for many years mathematicians searched for formulas for fifth degree equations. But they failed, and it was finally proved (the Abel-Ruffini theorem) that no such formula exists for the general fifth degree polynomial, or for that matter for polynomials of any degree higher than four.

Interestingly, formulas such as the quadratic formula are purely algebraic. They do not use calculus or any other form of analysis. The Fundamental Theorem of Algebra, though, was (at least at first) proved using analysis. This gets around the problem that there is no general formula.

The quadratic, cubic, and quartic formulas do more than verify that polynomials of degrees 2, 3, and 4 have roots. They actually find the roots. The Fundamental Theorem of Algebra does not find the roots; it merely asserts their existence. So the problem of finding roots is not solved by the FTA.
 
  • Informative
Likes Delta2
  • #28
@Delta2 I imagine nowadays, with computers, there are search methods for finding roots of complex polynomials. Of course a computer can only approximate a zero, but still one might imagine an ideal computer whose search algorithm converges to an exact value. Just as one might ask whether there is a general formula to find the roots of a polynomial, one may ask whether there is a universal algorithm that converges to a root.

While I know little about search algorithms, the arguments given above by @mathwonk and @wrobel are suggestive. The first thing they tell us is that a complex polynomial is an open mapping on the entire plane. This means that if, on some closed disk, it attains its maximum absolute value somewhere in the interior of the disk, then it must be constant. Similarly, if its minimum absolute value on the disk is not zero, then it must attain that minimum on the boundary, and if the polynomial is not constant this boundary minimum must be smaller than any of its values in the interior of the disk. So for any value P(z) that is not zero, there is some other, arbitrarily nearby value with a smaller absolute value.

This suggests that there might be a computer algorithm to find a direction of descent at any point in the plane that is not a zero of the polynomial.

The second thing that posts 21 and 14 tell us is that a polynomial does in fact attain a minimum absolute value somewhere in the complex plane (not at infinity). This gives hope that there is some algorithm that will converge. It also clinches the Fundamental Theorem of Algebra, since no point of positive absolute value can be the minimum.
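
To make this concrete, here is a minimal sketch of such a descent search in Python. This is my own illustration, not an established algorithm: poly_eval and descent_root are made-up names, and the fixed 8-direction step rule is only a naive heuristic.

Code:
import cmath

def poly_eval(coeffs, z):
    # Evaluate the polynomial with coefficients [a_0, a_1, ..., a_n] at z (Horner's rule).
    result = 0j
    for a in reversed(coeffs):
        result = result * z + a
    return result

def descent_root(coeffs, z=0j, step=1.0, tol=1e-12, max_iter=100000):
    # Naive descent: move z to whichever of 8 fixed trial directions lowers |P(z)|,
    # and shrink the step whenever no direction helps. The open-mapping discussion
    # above says *some* descent direction exists whenever P(z) != 0; trying finitely
    # many directions with a shrinking step is only a heuristic stand-in for finding it.
    directions = [cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]
    value = abs(poly_eval(coeffs, z))
    for _ in range(max_iter):
        if value < tol:
            break  # |P(z)| is as small as requested: accept z as a root
        for d in directions:
            trial = z + step * d
            trial_value = abs(poly_eval(coeffs, trial))
            if trial_value < value:
                z, value = trial, trial_value
                break
        else:
            step /= 2  # no direction improved: refine the search scale
    return z

# Hypothetical usage: search for a root of z^2 + 1 (the roots are +i and -i).
root = descent_root([1, 0, 1])
print(root, abs(poly_eval([1, 0, 1], root)))

The guarantee from post #14 that the infimum of ##|P|## is actually attained is what makes such a search plausible in the first place; real root-finders (e.g. companion-matrix eigenvalue methods) are of course far more sophisticated.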
 
  • Like
Likes Delta2
  • #29
Here is an idea for a proof of the FTA that uses the Poincare-Hopf index theorem for the sum of the indices of a vector field. I am not sure if this works.

The reciprocal of a complex polynomial, ##1/P(Z)##, when viewed as a vector field on the complex plane, can be extended to have a zero at infinity (since every polynomial tends to infinity smoothly), and so defines a vector field on the entire sphere except at the zeros of the polynomial, if it has any.

If ##P## has no zeros, the vector field is defined on the entire sphere and has a single zero at infinity. In this case, the index must be 2 by the Poincare-Hopf index theorem, since the Euler characteristic of the sphere is 2.

If it is not 2 then there must be a zero somewhere in the complex plane.

For a polynomial of degree ##n##, I would think, without proof, that the index of the zero at infinity is ##2+n##, so every polynomial has a zero except the non-zero constant polynomials of degree 0. A proof might start with ##Z^{-n}## and then show that, for large enough modulus, an arbitrary polynomial's reciprocal is arbitrarily close to it, so it must have the same index at infinity. In any case, one would only need that the index at infinity is not equal to 2.
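
A hedged check of that guess, computed in the holomorphic setting (my own verification): in the chart ##w=1/Z## at infinity, the field ##V(Z)=Z^{-n}## pushes forward to
$$\widetilde V(w)=\frac{dw}{dZ}\,Z^{-n}=-\frac{1}{Z^{2}}\,Z^{-n}=-w^{\,n+2},$$
whose zero at ##w=0## has index ##n+2##. For a polynomial with ##n## simple zeros, each contributes index ##-1## to ##1/P##, and the total is ##n+2-n=2=\chi(S^2)##, as Poincare-Hopf requires.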

Example: ##P(Z)=Z##
##1/P(Z) = |Z|^{-2}\bar Z##. The index at infinity of ##\bar Z## is three. The index of its zero at 0 is -1. Consistently, ##-1+3=2##, the Euler characteristic of the sphere.

Using the inverse of Stereographic projection, this is easy to visualize.

Afterthought:

Another way to compute the index might be to use the vector field ##V=1/P(Z)## to pull back the connection 1-form ##ω## on the tangent circle bundle of the sphere, to get a 1-form on the sphere minus the singularities of ##V##. This form is defined in a disk around the North Pole with the North Pole removed. One can then integrate this form over circles centered at the pole and take the limit as their radii go to zero.

This would likely be a messy computation, since ##V## would first need to be sent to the tangent space of the sphere under the inverse of stereographic projection, then normalized to have length 1 away from its singularities. One then needs to define the connection 1-form on the tangent circle bundle for the standard metric on the sphere, pull it back via this normalized vector field, and finally compute the limit of these integrals.
 
  • Like
Likes mathwonk
  • #31
The easiest proof uses topology, which has a nice intuitive feel, allowing folks to use aspects of it without going into the nitty-gritty.

Consider the path traced by the polynomial's output as x traverses a circle of very large radius in ℂ. This output path also lies in ℂ and takes the form of a large, wavy, circle-ish shape - i.e., the leading term traces a large circle, with the lesser terms generating the waviness - and the waviness is too small for the path to miss winding around the origin (this is quantified below).
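
To quantify "the waviness is too small" (my own estimate, in the spirit of the dog-on-a-leash lemma): if ##p(x)=\sum_{k=0}^n a_kx^k## with ##a_n\ne 0##, and the radius ##R## is large enough that
$$|a_n|R^n>\sum_{k=0}^{n-1}|a_k|R^k,$$
then on ##|x|=R## the output path stays closer to ##a_nx^n## than ##|a_nx^n|##, so it winds around the origin exactly ##n## times, just as ##a_nx^n## does.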

Now decrease the radius of the circle traversed by x, which causes the output path to become a smaller and relatively wavier shape; at some point the waves become relatively strong enough that the path touches the origin. Since the origin represents the value 0, this means that some value on the circle of x is a solution of the polynomial, and so the polynomial can be factored into a pair of factors: a linear term involving the negative of this just-found solution, and a residual factor whose degree is one less than the original.

Now simply repeat the exercise with the residual factor; each iteration produces another solution, leaving as the residual a polynomial of correspondingly lower degree. At some point the residual polynomial is of degree 1, and its solution can be read off trivially from the free term. The net effect of all this is that the number of solutions found corresponds to the degree of the original polynomial.

QED
 
  • #32
Just a question, re post #27. I don't see why the quadratic formula is purely algebraic. I.e., how does one prove that a square root exists, even for every positive real number, without analysis? E.g., how does one prove sqrt(2) exists without taking a least upper bound?
 
  • #33
mathwonk said:
how does one prove sqrt(2) exists without taking a least upper bound?

I know two methods: Dedekind cuts, or Cauchy sequences. Are there others?

##\sqrt{2}## might be a bad example since it is the length of a diagonal, hence exists. But what about ##\sqrt[7]{\pi^e}##?
 
  • Like
Likes swampwiz
  • #34
Good points. To be more precise, I wondered how one proves it without analysis, i.e. limits. Dedekind cuts and Cauchy sequences seem like analysis, and the existence of "diagonals" does not seem very algebraic either. So my curiosity was about how one proves the existence of roots of quadratic polynomials purely within algebra, and I was looking for an algebraic proof of the existence of a square root. I agree that, if you take enough geometry axioms for granted to establish a theory of area, Euclid does it in his Elements, Book 1; I still do not see any algebraic proof in that.

In algebra, I assume we have the integers and rationals, but I don't see how to even define the real numbers algebraically. Maybe one can consider algebraic field extensions of the rationals and say one has square roots in a field of the form Q[X]/(f(X)), where f is a quadratic polynomial. But if one starts from, say, the axioms for the reals, it takes a limiting type of proof to establish that even square roots exist. And even if one takes the algebraic field extension approach, it seems not so elementary to define a big enough field extension F to contain square roots of all elements of F, without some kind of Zorn's lemma - not analysis maybe, but a pretty modern use of infinities.

I was originally assuming we start from a definition of the reals in some form, usually axiomatically via the existence of least upper bounds (or even of the complexes), and then try to prove the existence of square roots just by algebra. Even in geometry you seem to have to assume that certain constructions with compass and straightedge actually work, i.e. that certain circles do meet certain lines, which is a kind of completeness axiom, as used in analysis. Still just rambling...
 
  • #35
I think when it comes to solvability in radicals, the ability to take roots is a given. For example, given that you can solve any equation of the form ##x^2=a##, can you solve the general quadratic equation? And the answer is yes. Here I assume characteristic different from ##2##; otherwise one has to change the equation appropriately.
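
For concreteness, the reduction in the quadratic case (completing the square; characteristic ##\ne 2## is what allows dividing by ##2##):
$$x^2+bx+c=0\iff\left(x+\frac{b}{2}\right)^2=\frac{b^2}{4}-c,$$
so solving the general quadratic reduces exactly to one equation of the form ##y^2=a##, followed by the substitution ##x=y-\frac{b}{2}##.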
 
  • Like
Likes lavinia
