If a polynomial is identically zero, then all its coefficients are 0

AI Thread Summary
If a polynomial is identically zero, then all its coefficients must be zero. The proof begins by evaluating the polynomial p(x) = 0 at x = 0, which forces the constant term a_0 to be zero. Factoring out x and repeating the argument then handles each remaining coefficient in turn, showing that all coefficients a_i must equal zero. The discussion emphasizes the importance of continuity and the proper handling of limits in justifying this repetition. Overall, the proof can also be compressed into a single contradiction by assuming some coefficient is non-zero.
Eclair_de_XII
Homework Statement
Let ##p(x)=a_0+\ldots+a_nx^n## be a polynomial of degree at most ##n##. Prove that if ##p(x)=0## for all ##x##, then all coefficients ##a_i=0##.
Relevant Equations
Induction: if ##S\subseteq\mathbb{N}## contains the least element ##1## and whenever ##n\in S##, also ##n+1\in S##, then ##S=\mathbb{N}##. Note: I did not know how to incorporate this into the proof attempt.

Also, the zero-product property of the real numbers: for ##a,b\in\mathbb{R}##, if ##ab=0##, then ##a=0## or ##b=0##. I invoke this property on the second line of the display in the second part of this proof attempt.
Suppose

##a_0+a_1x+\ldots+a_nx^n=0##

and restrict the domain of ##p## to the set of real numbers excluding the roots of ##p##. Note that:

if ##a_0=0##, then ##x=0## is a root of ##p##;
otherwise, ##x=0## is not a root of ##p##.

Assume the latter. Subtract ##a_0## from both sides of the equation.

##x(a_1+\ldots+a_nx^{n-1})=-a_0##

Set ##x=0##, which lies in the restricted domain since it is not a root of ##p##, and we have a contradiction: ##0=-a_0\neq 0##. Hence, ##a_0=0##.

%%%

Now, using ##a_0=0##, we proceed as follows:

\begin{align}
a_0+a_1x+a_2x^2+\ldots+a_nx^n&=a_1x+a_2x^2+\ldots+a_nx^n\\
&=x(a_1+a_2x+\ldots+a_nx^{n-1})\\
&=0
\end{align}

Note that, once again, ##x=0## is a root of ##p##. Assume ##x\neq 0##. By the zero-product property, this means:

##a_1+a_2x+\ldots+a_nx^{n-1}=0##

And here we repeat the argument above (the one before the triple percent signs) for powers of ##x## from ##1## to ##n##, until we conclude that ##a_i=0## for all ##i\in\{1,\ldots,n\}##.
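
One way to package this repetition as the induction stated in the relevant equations (a sketch; this is one possible formulation, not the only one): let ##S=\{k\in\{0,\ldots,n\}:a_0=a_1=\ldots=a_k=0\}##. The argument before the %%% marker shows ##0\in S##, and the argument after it is exactly the step ##k\in S\Rightarrow k+1\in S##, since ##k\in S## reduces ##p(x)=0## to ##x^{k+1}(a_{k+1}+\ldots+a_nx^{n-k-1})=0##. By induction, ##S=\{0,\ldots,n\}##, i.e. every coefficient vanishes.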
 
I think you can simplify and strengthen your proof significantly using only the last part of your proof. What does ##p(x)\equiv 0## say about ##a_0## when ##x=0##? Then what does ##x(a_1+a_2x+\ldots+a_nx^{n-1})\equiv 0## say when ##x \ne 0##? Then use continuity to look at ##a_1+a_2x+\ldots+a_nx^{n-1}## where ##x=0##.

This may really be the same proof that you have, but I think you can state the proof more clearly.
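
For concreteness, the continuity step might be written out like this (a sketch, granting that the polynomial below is continuous, which is discussed later in the thread): let ##q(x)=a_1+a_2x+\ldots+a_nx^{n-1}##. For every ##x\neq 0## we have ##xq(x)=p(x)=0##, so ##q(x)=0## for all ##x\neq 0##. By continuity, ##a_1=q(0)=\lim_{x\to 0}q(x)=0##.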
 
If ##a_0\neq 0##, then we get a contradiction for ##x=0##. So the polynomial must be of the form ##a_nx^n + \ldots + a_1x##. Since we are working over a field, there are no zero divisors. Therefore, after factoring we conclude ##a_1=0##. Similarly, the rest follows.
 
FactChecker said:
you can state the proof more clearly

Suppose ##p(x)=0##. Then ##a_0+\ldots+a_nx^n=0##. Set ##x=0##. Then ##a_0=0##. Now ##x(a_1+\ldots+a_nx^{n-1})=0##. If ##x\neq 0##, then this means that ##a_1+\ldots+a_nx^{n-1}=0##. Repeat the process for ##a_1## and keep doing this until you reach the conclusion that ##a_n=0##.
 
I feel like the last step elides the issue a bit. You said that ##a_1+\ldots+a_nx^{n-1}=0## when ##x\neq 0##. You then say to repeat the previous step, which only involved looking at what happens when ##x=0##. How does that work?
 
You should start using the LaTeX "\equiv" to get ##\equiv## where appropriate. Also, use continuity to address the issue that @Office_Shredder points out.
 
FactChecker said:
use continuity to address the issue that @Office_Shredder points out.

Denote a polynomial ##p(x)=a_0+\ldots+a_nx^n##.

Suppose ##p(x)\equiv 0##. Setting ##x=0## gives ##p(0)=a_0=0##. Now we have ##p(x)=a_1x+\ldots+a_nx^n=x(a_1+\ldots+a_nx^{n-1})##.

Let ##\epsilon>0##.
Set ##M=\max\{1,|a_2|,\ldots,|a_n|\}## (taking the maximum with ##1## ensures ##M>0##).
Set ##\delta=\min\left\{1,\frac{\epsilon}{2M(n-1)}\right\}##.

Then if ##0<|x|<\delta##:

\begin{align}
|(a_1+\ldots+a_nx^{n-1})-a_1|&\leq|a_2|\delta+\ldots+|a_n|\delta^{n-1}\\
&\leq|a_2|\delta+\ldots+|a_n|\delta\\
&\leq|a_2|\frac{\epsilon}{2M(n-1)}+\ldots+|a_n|\frac{\epsilon}{2M(n-1)}\\
&\leq\frac{\epsilon}{2(n-1)}+\ldots+\frac{\epsilon}{2(n-1)}\\
&=\frac{\epsilon}{2}\\
&<\epsilon
\end{align}

Hence, for ##x\approx 0##, ##a_1+\ldots+a_nx^{n-1}\approx a_1##.

##p(x)=x(a_1+\ldots+a_nx^{n-1})##
##\frac{p(x)}{x}=a_1+\ldots+a_nx^{n-1}##
##\frac{p(x)}{x}\approx a_1##
##\frac{p(x)}{x}\approx \frac{a_0}{x}##
##0=\frac{a_0}{x}\approx a_1##

Frankly, I'm a bit dubious about the fourth relation I wrote above.
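
(For what it's worth, the estimate above may already be enough without that fourth relation: since ##p(x)\equiv 0##, we have ##\frac{p(x)}{x}=0## for every ##x\neq 0##, while the ##\epsilon##-##\delta## bound says ##\frac{p(x)}{x}=a_1+\ldots+a_nx^{n-1}\to a_1## as ##x\to 0##. A function that is constantly ##0## can only have limit ##0##, so ##a_1=0##.)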
 
Have you proven that the sums and products of continuous functions are continuous? If so, you can easily state that any polynomial is continuous.
 
Not yet in the book I'm using, though I definitely read a proof of it in real analysis three years ago.

Is there still something off with my attempt at a solution? I mean, I'm pretty sure I could argue that:

##p(x)\approx xa_1##
##p(x)\approx a_0##
##a_0\approx xa_1##

because ##x\approx 0##, and not necessarily because ##a_1\approx 0##.
 
  • #10
I decided that it would be better to just show that you cannot express ##x## raised to some power ##k## as a linear combination of other powers of ##x##.

Suppose that you could express ##x^k## as a linear combination of powers ##x^i## with ##i\neq k##.

##x^k=\sum_{i\neq k} a_ix^i##

The left-hand side has the single root ##x=0##.
The right-hand side has possibly many roots, possibly including zero. If the latter holds, then you can factor a power of ##x## (say ##x^l##) out of both sides.

##x^{k-l}=\sum_{i\neq k} a_ix^{i-l}##

If ##l\geq k##, then the left-hand side has no roots, and the right-hand side has roots.
If ##l < k##, then the left-hand side has the single root ##x=0##, and the set of roots of the right-hand side excludes zero.

This is a contradiction.
 
  • #11
Eclair_de_XII said:
Not yet in the book I'm using, though I definitely read a proof of it in real analysis three years ago.

Is there still something off with my attempt at a solution?
I'm not sure about all the details. I am sure that something like that could be made to work. It just seems almost easier to prove the general principles about sums and products of continuous functions and then say that any polynomial is a sum of products of the continuous functions ##x## and ##a_i##.
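
A sketch of that route, assuming the sum and product rules for continuity are available: the identity map ##x\mapsto x## and each constant map ##x\mapsto a_i## are continuous; by the product rule and induction on ##i##, each monomial ##x\mapsto a_ix^i## is continuous; and by the sum rule, ##p## is continuous as a finite sum of continuous functions.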
 
  • #12
A similar approach, focusing on the highest power instead of evaluating at ##0## (which is apparently tricky).

If ##p(x)\equiv 0## then ##p(x)/x^n \equiv 0## for ##x\neq 0##. This is ##a_n## plus terms with ##x## in the denominator. If you let ##x## go to infinity, you can get ##a_n = 0##. You can then proceed downward from there.
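
Written out (a sketch of this step): for ##x\neq 0##,
##\frac{p(x)}{x^n}=a_n+\frac{a_{n-1}}{x}+\ldots+\frac{a_0}{x^n},##
and every term after ##a_n## tends to ##0## as ##x\to\infty##. Since the left-hand side is identically ##0## for ##x\neq 0##, taking the limit gives ##a_n=0##.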

I also agree it's probably easier to just prove that sums and products are continuous than to prove directly that your polynomial is continuous.
 
  • #13
Office_Shredder said:
A similar approach, focusing on the highest power instead of evaluating at ##0## (which is apparently tricky).

If ##p(x)\equiv 0## then ##p(x)/x^n \equiv 0## for ##x\neq 0##. This is ##a_n## plus terms with ##x## in the denominator. If you let ##x## go to infinity, you can get ##a_n = 0##. You can then proceed downward from there.
Good idea. That gets my vote. In fact, if you assume that ##a_n## is the first non-zero coefficient, it's a simple proof by contradiction. I don't think that you even have to "proceed downward from there".
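
Spelled out, that contradiction might run as follows (a sketch, reading "first non-zero coefficient" as the non-zero coefficient of highest index): suppose not every coefficient vanishes, and let ##a_m## be the non-zero coefficient of highest index, so ##p(x)=a_0+\ldots+a_mx^m##. Then for ##x\neq 0##, ##\frac{p(x)}{x^m}=a_m+\frac{a_{m-1}}{x}+\ldots+\frac{a_0}{x^m}\to a_m\neq 0## as ##x\to\infty##, contradicting ##\frac{p(x)}{x^m}\equiv 0##.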
 
  • #14
Is the proof attempt in post #10 not correct?
 
  • #15
Eclair_de_XII said:
Is the proof attempt in post #10 not correct?
It certainly does not look like a proof to me. You are asserting a lot about the roots of polynomials without referencing a theorem or lemma. I suspect that you are using facts that have not been established in your class yet. They may not even be true.
 
