Questions regarding polynomial divisions and their roots

  • #1
Adgorn
131
16
Hello everyone,
While going through my calculus studies, I came across a point regarding polynomials that I'd like to make clear.

Say there's a polynomial ##f## with a root at ##a## with multiplicity ##2##, i.e. ##f(x)=(x-a)^2g(x)## where ##g## is some other polynomial. I define ##h(x)=\frac {f(x)} {x-a}##. If I understand correctly, ##h(x)=(x-a)g(x)## for all ##x≠a## and is undefined at ##x=a##.
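For concreteness (the specific numbers below are just an arbitrary example): take ##g(x)=x+2## and ##a=1##, so ##f(x)=(x-1)^2(x+2)##. Then ##h(x)=\frac{f(x)}{x-1}## equals ##(x-1)(x+2)## for every ##x\neq 1##, while plugging ##x=1## directly into ##\frac{f(x)}{x-1}## gives the undefined expression ##\frac{0}{0}##.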

Now, although ##h(a)## is undefined, ##h## is quite clearly divisible by ##x-a##. The book I'm using seems to treat it as a polynomial, despite the fact that it has a point where it is undefined. I'm guessing it does that by assuming that ##h(a)=0## or something of the sort, but I'm not sure, which leads me to my questions:

1. For a polynomial ##f##, are the statements that ##f(a)=0##, that ##f## has a root at ##a##, and that ##f## is divisible by ##x-a## equivalent?
2. Does dividing 2 polynomials (without a remainder) result in a polynomial? If so, can that polynomial have roots at points where the denominator of the division was 0?

Thanks in advance to all the helpers
 

Answers and Replies

  • #2
.Scott
Science Advisor
Homework Helper
3,048
1,264
Without multiplicities, a polynomial ##f(x)## that is zero at ##a_1, a_2, a_3, \ldots, a_n## can be expressed as
##C(x-a_1)(x-a_2)(x-a_3)\cdots(x-a_n)##, where ##C## is the leading coefficient (the coefficient of the ##x^n## term).

So your ##h(x)## has an ##x-a## in both the numerator and denominator, making it undefined at ##x=a##. But it is otherwise identical to the polynomial ##h_2(x)## obtained by removing one ##(x-a)## factor from ##f(x)## by synthetic division.

In your example, ##h_2(a)## will be zero only because there were two ##(x-a)## factors originally.

To answer your questions:
1) Yes.
2) Yes, as long as the method of division does not leave an ##(x-a)/(x-a)## term in the result.
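For example (with an arbitrary concrete polynomial): if ##f(x)=(x-1)^2(x+2)=x^3-3x+2##, synthetic division by ##(x-1)## gives ##h_2(x)=x^2+x-2=(x-1)(x+2)## with remainder ##0##, and ##h_2(1)=0## precisely because the root at ##1## had multiplicity two.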


  • #3
Without multiplicities, a polynomial ##f(x)## that is zero at ##a_1, a_2, a_3, \ldots, a_n## can be expressed as
##C(x-a_1)(x-a_2)(x-a_3)\cdots(x-a_n)##, where ##C## is the leading coefficient (the coefficient of the ##x^n## term).

Who said we are working over an algebraically closed field?

I.e. what you wrote is not true for ##x^2+1## over the real numbers.
 
  • #4
.Scott
Science Advisor
Homework Helper
3,048
1,264
That particular statement of mine requires an algebraically closed field.
But what it illustrates applies whether or not the OP is working with complex numbers.
 
  • #5
That particular statement of mine requires an algebraically closed field.
But what it illustrates applies whether or not the OP is working with complex numbers.

Nobody mentioned complex numbers, but point taken.
 
  • #6
suremarc
147
64
1. For a polynomial ##f##, are the statements that ##f(a)=0##, that ##f## has a root at ##a##, and that ##f## is divisible by ##x-a## equivalent?
Yes; to see this, one can show that for any polynomial ##P##, ##x-a## divides ##P(x)-P(a)##. From this the equivalence of the three statements should follow.

2. Does dividing 2 polynomials (without a remainder) result in a polynomial? If so, can that polynomial have roots at points where the denominator of the division was 0?
If you’re referring to the quotient ##P(x)/Q(x)## of functions P and Q, then in general it doesn’t make sense to evaluate the quotient at zeroes of Q. However if P and Q are polynomials, then there is a more natural interpretation using polynomial division. If ##Q(x)=x-a##, then we have a unique polynomial ##R(x)## such that ##P(x)=(x-a)R(x)+P(a)##, and we can write ##P(x)/(x-a)## as ##R(x)+P(a)/(x-a)##, which agrees with the former.
If ##P(a)=0##, then we can extend the quotient of functions ##P(x)/(x-a): \mathbb{R}-\{a\}\to\mathbb{R}## to a function defined on all of ##\mathbb{R}## by computing ##R(x)##, which can be done with polynomial long division.
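A small worked case (the polynomial is chosen only for illustration): take ##P(x)=x^2+1## and ##a=2##. Then ##P(x)-P(2)=x^2-4=(x-2)(x+2)## is divisible by ##x-2##, and the division above reads ##P(x)=(x-2)(x+2)+5##, i.e. ##R(x)=x+2## with remainder ##P(2)=5##.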
 
  • #7
WWGD
Science Advisor
Gold Member
6,044
7,360
Just to add a point that may be helpful: one may have a polynomial _split_, i.e., ##h(x)=f(x)g(x)##. Unless one of the two factors is linear, there is no guarantee that ##h## has a root if the underlying field is not algebraically closed. For that guarantee, you need one of the polynomials to be of degree ##1##.
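For instance, over the real numbers ##h(x)=(x^2+1)(x^2+2)## splits into two quadratic factors yet has no real root, whereas ##h(x)=(x-3)(x^2+1)## does have a root because one of the factors is linear.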
 
  • #8
FactChecker
Science Advisor
Gold Member
7,296
3,136
Say there's a polynomial ##f## with a root at ##a## with multiplicity ##2##, i.e. ##f(x)=(x-a)^2g(x)## where ##g## is some other polynomial. I define ##h(x)=\frac {f(x)} {x-a}##. If I understand correctly, ##h(x)=(x-a)g(x)## for all ##x≠a## and is undefined at ##x=a##.
You are correct, and it is very good that you are keeping track of the exception points of your calculations. This is an example where everyone just gets used to defining the resulting function at that point to be the value that extends it to equal the polynomial there. They are a little careless in not stating that fact at least once somewhere.
 
  • #9
WWGD
Science Advisor
Gold Member
6,044
7,360
Hello everyone,
While going through my calculus studies, I came across a point regarding polynomials that I'd like to make clear.

Say there's a polynomial ##f## with a root at ##a## with multiplicity ##2##, i.e. ##f(x)=(x-a)^2g(x)## where ##g## is some other polynomial. I define ##h(x)=\frac {f(x)} {x-a}##. If I understand correctly, ##h(x)=(x-a)g(x)## for all ##x≠a## and is undefined at ##x=a##.

A point that may be helpful: note that, while what you say is correct, strictly speaking, it is possible to define ##h(x)## at ##x=a##, albeit not always in such a way as to make it continuous. While ##h(x)##, as written, is not defined at ##x=a##, it is possible to define it there. Defining ##h(a)=0## is one possibility.
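A concrete way to see the distinction (using the quoted setup above): since ##f## has a double root at ##a##, setting ##h(a)=0## makes ##h## agree with the polynomial ##(x-a)g(x)## everywhere, so that choice happens to be continuous. Had the root been simple, say ##f(x)=(x-a)g(x)## with ##g(a)\neq 0##, defining the quotient ##\frac{f(x)}{x-a}## to be ##0## at ##x=a## would still be possible but not continuous, since the continuous extension there is ##g(a)##.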
 
  • #10
Adgorn
131
16
Without multiplicities, a polynomial ##f(x)## that is zero at ##a_1, a_2, a_3, \ldots, a_n## can be expressed as
##C(x-a_1)(x-a_2)(x-a_3)\cdots(x-a_n)##, where ##C## is the leading coefficient (the coefficient of the ##x^n## term).

So your ##h(x)## has an ##x-a## in both the numerator and denominator, making it undefined at ##x=a##. But it is otherwise identical to the polynomial ##h_2(x)## obtained by removing one ##(x-a)## factor from ##f(x)## by synthetic division.

In your example, ##h_2(a)## will be zero only because there were two ##(x-a)## factors originally.

To answer your questions:
1) Yes.
2) Yes, as long as the method of division does not leave an ##(x-a)/(x-a)## term in the result.
Yes; to see this, one can show that for any polynomial ##P##, ##x-a## divides ##P(x)-P(a)##. From this the equivalence of the three statements should follow.


If you’re referring to the quotient ##P(x)/Q(x)## of functions P and Q, then in general it doesn’t make sense to evaluate the quotient at zeroes of Q. However if P and Q are polynomials, then there is a more natural interpretation using polynomial division. If ##Q(x)=x-a##, then we have a unique polynomial ##R(x)## such that ##P(x)=(x-a)R(x)+P(a)##, and we can write ##P(x)/(x-a)## as ##R(x)+P(a)/(x-a)##, which agrees with the former.
If ##P(a)=0##, then we can extend the quotient of functions ##P(x)/(x-a): \mathbb{R}-\{a\}\to\mathbb{R}## to a function defined on all of ##\mathbb{R}## by computing ##R(x)##, which can be done with polynomial long division.
You are correct, and it is very good that you are keeping track of the exception points of your calculations. This is an example where everyone just gets used to defining the resulting function at that point to be the value that extends it to equal the polynomial there. They are a little careless in not stating that fact at least once somewhere.
A point that may be helpful: note that, while what you say is correct, strictly speaking, it is possible to define ##h(x)## at ##x=a##, albeit not always in such a way as to make it continuous. While ##h(x)##, as written, is not defined at ##x=a##, it is possible to define it there. Defining ##h(a)=0## is one possibility.

Thanks for all the helpful comments!

So, if I understand correctly (I'm currently working with the real numbers), when we have a polynomial ##f(x)=g(x)h(x)## where ##g## has roots and we divide to obtain ##\frac {f} {g}##, the initial result is not actually ##h##, but rather a function that is equal to ##h## at all points except at the roots of ##g##. At the points of those roots it is undefined, because the function ##\frac f g## essentially takes the value of ##h## at those points, multiplies it by ##0## then divides it by ##0##, which makes it undefined.

In order to make the quotient ##\frac {f(x)} {g(x)}## equal to ##h(x)## for all ##x##, we can either cancel out the factors of ##g(x)## in the quotient itself, or we can define the function ##\frac {f(x)} {g(x)}## to be equal to ##h(x)## at those points.

The important thing to keep in mind (again, if I understand correctly) is that unless either of those steps is performed, the quotient ##\frac f g## is not actually a polynomial (assuming ##g## has roots), since it is undefined at some points. Only after one of those steps is performed (for example, by polynomial long division, which is equivalent to canceling out the factors) is the function ##h(x)## obtained.
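As a quick sanity check (sympy is used here only as an example tool, and the polynomial is arbitrary), polynomial long division really does return ##h## with zero remainder:

from sympy import symbols, div, expand

x = symbols('x')
f = expand((x - 2)**2 * (x + 1))   # f has a double root at x = 2
q, r = div(f, x - 2, x)            # long division of f by (x - 2)
print(q, r)                        # x**2 - x - 2, 0  (the division is exact)
print(q.subs(x, 2))                # 0: the quotient still vanishes at x = 2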

Hopefully I got that right,
 
  • #11
Thanks for all the helpful comments!

So, if I understand correctly (I'm currently working with the real numbers), when we have a polynomial ##f(x)=g(x)h(x)## where ##g## has roots and we divide to obtain ##\frac {f} {g}##, the initial result is not actually ##h##, but rather a function that is equal to ##h## at all points except at the roots of ##g##. At the points of those roots it is undefined, because the function ##\frac f g## essentially takes the value of ##h## at those points, multiplies it by ##0## then divides it by ##0##, which makes it undefined.

In order to make the quotient ##\frac {f(x)} {g(x)}## equal to ##h(x)## for all ##x##, we can either cancel out the factors of ##g(x)## in the quotient itself, or we can define the function ##\frac {f(x)} {g(x)}## to be equal to ##h(x)## at those points.

The important thing to keep in mind (again, if I understand correctly) is that unless either of those steps is performed, the quotient ##\frac f g## is not actually a polynomial (assuming ##g## has roots), since it is undefined at some points. Only after one of those steps is performed (for example, by polynomial long division, which is equivalent to canceling out the factors) is the function ##h(x)## obtained.

Hopefully I got that right,

The main idea is certainly correct, but you must watch out for situations like

##f(x) = 1, g(x) = x##

Here the quotient ##f/g## is neither a polynomial, nor can it be defined in a "continuous" way at ##0##.
 
  • #12
Adgorn
131
16
The main idea is certainly correct, but you must watch out for situations like

##f(x) = 1, g(x) = x##

Here the quotient ##f/g## is neither a polynomial, nor can it be defined in a "continuous" way at ##0##.

Of course, all these deductions are based on the assumption that ##f(x)=g(x)h(x)##; if that isn't the case, the statements would be false, for example when the denominator has a higher degree than the numerator, as in your example.

Thanks for the help everyone
 
  • #13
Stephen Tashi
Science Advisor
7,769
1,534
So, if I understand correctly (I'm currently working with the real numbers), when we have a polynomial ##f(x)=g(x)h(x)## where ##g## has roots and we divide to obtain ##\frac {f} {g}##, the initial result is not actually ##h##, but rather a function that is equal to ##h## at all points except at the roots of ##g##. At the points of those roots it is undefined, because the function ##\frac f g## essentially takes the value of ##h## at those points, multiplies it by ##0## then divides it by ##0##, which makes it undefined.

You are correct. However, you will find mathematical writing that uses the convention that a function defined by an algebraic expression ##w(x)## that fails to define a value at an isolated point ##x = a## is "understood" to take the value ##\lim_{x \rightarrow a} w(x)## when that limit exists. For example, people writing in that style don't distinguish between the constant function ##w(x)## defined as ##w(x) = 1## and the function ##w(x)## defined as ##w(x) = x/x##.
 
