How is the Partial Fraction Series Used in Reducing Polynomials?


Discussion Overview

The discussion centers on the use of partial fraction decomposition in reducing polynomials, particularly focusing on the conditions under which polynomials can be expressed in terms of distinct linear or quadratic factors. Participants explore theoretical aspects, proofs, and the formulation of partial fractions.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Some participants question whether every polynomial of degree greater than 2 can be reduced to distinct linear or quadratic factors, seeking proof for this assertion.
  • One participant references the fundamental theorem of algebra, stating that the only irreducible polynomials over the complex numbers are linear, and that the complex roots of a real-coefficient polynomial occur in conjugate pairs, which combine into irreducible real quadratic factors.
  • There is a discussion about the necessity of including higher powers in the denominators of partial fractions, with some participants expressing confusion about why the general form requires this structure.
  • Participants note that simply using a single term for each factor does not account for all cases, as shown through examples comparing different forms of rational functions.
  • Clarifications are made regarding the notation used for coefficients in the partial fraction decomposition, with some participants asserting that different letters (e.g., A and A') are merely notational and not related.
  • One participant emphasizes that the structure of partial fractions aids in integration, suggesting that breaking down rational functions into simpler components facilitates finding antiderivatives.
  • Examples are provided to illustrate why certain terms cannot be omitted in the decomposition, highlighting the relationship between coefficients in different terms.

Areas of Agreement / Disagreement

Participants express differing views on the necessity and formulation of partial fraction decomposition, with no consensus reached on the proofs or the generalization of the theorem. Some agree on the fundamental theorem's implications, while others remain uncertain about specific aspects of the decomposition process.

Contextual Notes

Participants acknowledge limitations in their understanding of the proofs related to partial fraction decomposition and the conditions under which certain forms are valid. There is a recognition that examples used may not always be the best representations of the concepts discussed.

MathewsMD
http://en.wikipedia.org/wiki/Partial_fraction_decomposition

In general, if you have a proper rational function, then:

if ## R(x) = \frac {P(x)}{Q(x)} ## and ## Q(x) = (mx + b)^n \cdots (ax^2 + bx + c)^p ##, where ##Q(x)## is a product of powers of distinct linear factors and/or distinct irreducible quadratic factors, then ##R(x)## can be written as a sum of partial fractions.

I had two questions:

1) Can we actually reduce every polynomial of degree greater than 2 to a product of distinct linear or quadratic factors? I may not be thinking correctly, but does anyone know of a proof they could link me to? For an expression like ##x^{12} - πx^3 - 542.43x + 21##, I am having trouble factoring this into irreducible quadratic or linear factors.

2) Why can we show that ## R(x) = \frac {A}{mx+b} + \frac {B}{(mx+b)^2}+...+\frac {C}{(mx+b)^n} + \frac {Lx + D}{ax^2 + bx + c} + \frac {Mx + E}{(ax^2 + bx + c)^2}+...+\frac {Nx + F}{(ax^2 + bx + c)^p} ##

For each distinct linear and irreducible quadratic power, why can we not just show this as:

## R(x) = \frac {A}{(mx+b)^n} + \frac {Lx + D}{(ax^2 + bx + c)^p} ##

I know the above is incorrect, but I'm wondering how it was proved that we must add a power to the denominator for each subsequent fraction in the series.

Also, I've seen it written as ## A_1 + A_2 +...+ A_n## instead of ## A + B +...+ C## and I was wondering if there's any relationship between ##A_1## and ##A_n## in this case.

I've looked online but have not found the proof(s) that I'm looking for.

Any clarification would be great!
 
MathewsMD said:
1) Can we actually reduce every polynomial of degree greater than 2 to a product of distinct linear or quadratic factors? I may not be thinking correctly, but does anyone know of a proof they could link me to? For an expression like ##x^{12} - πx^3 - 542.43x + 21##, I am having trouble factoring this into irreducible quadratic or linear factors.

Very easy. The fundamental theorem of algebra shows that the only irreducible polynomials over ##\mathbb{C}## are linear. But if the polynomial has real coefficients, then its complex roots must occur in conjugate pairs, and each such pair multiplies out to an irreducible real quadratic.
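To see this concretely, here is a small sketch using Python's sympy library (my own illustration, not from the thread): a real polynomial factors into linear and irreducible quadratic pieces, and the quadratic's roots form a conjugate pair.

```python
# Illustrative sketch (using sympy, not part of the original discussion):
# factor a real polynomial and inspect the irreducible quadratic's roots.
import sympy as sp

x = sp.symbols('x')

# x^3 - 1 factors over the reals into one linear factor and one
# irreducible quadratic factor.
p = x**3 - 1
factored = sp.factor(p)          # (x - 1)*(x**2 + x + 1)

# The quadratic's roots are a complex-conjugate pair, as the
# fundamental theorem of algebra argument predicts.
roots = sp.solve(x**2 + x + 1, x)
print(factored)
print(roots)
```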

2) Why can we show that ## R(x) = \frac {A}{mx+b} + \frac {B}{(mx+b)^2}+...+\frac {C}{(mx+b)^n} + \frac {Lx + D}{ax^2 + bx + c} + \frac {Mx + E}{(ax^2 + bx + c)^2}+...+\frac {Nx + F}{(ax^2 + bx + c)^p} ##

For each distinct linear and irreducible quadratic power, why can we not just show this as:

## R(x) = \frac {A}{(mx+b)^n} + \frac {Lx + D}{(ax^2 + bx + c)^p} ##

I know the above is incorrect, but I'm wondering how it was proved that we must add a power to the denominator for each subsequent fraction in the series.

Notice that ##\frac {A}{(mx+b)^n} + \frac {Lx + D}{(ax^2 + bx + c)^p} \neq \frac {A}{(mx+b)^n} + \frac {A'}{(mx+b)^{n-1}} + \frac {Lx + D}{(ax^2 + bx + c)^p}## in general, and it's easy to see that the two sides give different rational functions. In other words, the left-hand side doesn't cover all cases.
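As a concrete check of this point (a sympy sketch of my own, using a simple repeated linear factor), a numerator as mild as ##3x+5## over ##(x+1)^2## already forces both powers to appear:

```python
import sympy as sp

x = sp.symbols('x')

# (3x + 5)/(x + 1)^2 cannot be written with the (x+1)^2 term alone:
# its decomposition needs both 1/(x+1) and 1/(x+1)^2.
expr = (3*x + 5) / (x + 1)**2
decomp = sp.apart(expr, x)
print(decomp)   # equals 3/(x + 1) + 2/(x + 1)**2
```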
 
pwsnafu said:
Very easy. The fundamental theorem of algebra shows that the only irreducible polynomials over ##\mathbb{C}## are linear. But if the polynomial has real coefficients, then its complex roots must occur in conjugate pairs, and each such pair multiplies out to an irreducible real quadratic.
Notice that ##\frac {A}{(mx+b)^n} + \frac {Lx + D}{(ax^2 + bx + c)^p} \neq \frac {A}{(mx+b)^n} + \frac {A'}{(mx+b)^{n-1}} + \frac {Lx + D}{(ax^2 + bx + c)^p}## in general, and it's easy to see that the two sides give different rational functions. In other words, the left-hand side doesn't cover all cases.

Okay, I just went to check the theorem again and that question's been answered.

Regarding my second question, I am still confused. I realize my statement is not correct; I was just trying to ask why exactly the partial fractions can be written in the general form I stated above, with successively increasing powers. Also, are A and A' related in any way?

If we have ## \frac {1}{x^2} ## why is it written as ## \frac {A}{x} + \frac {B}{x^2} ## instead of ## \frac {mx + b}{x^2}##? Having a proof or an explanation why the partial fraction series was formulated as stated above would be helpful. Thank you for referring me to the fundamental theorem of algebra.
 
MathewsMD said:
Also, are A and A' related in any way?

They are not. Just notation.

If we have ## \frac {1}{x^2} ## why is it written as ## \frac {A}{x} + \frac {B}{x^2} ## instead of ## \frac {mx + b}{x^2}##?

It's neither. ##\frac{1}{x^2}## is the partial fraction decomposition of itself.

As to your question, it's easier to work with when integrating. The point of partial fractions is a divide-and-conquer approach to finding antiderivatives of rational functions. So instead of trying to integrate ##\frac{x}{(x+1)^2}## in one step, you consider ##\frac{1}{x+1}-\frac{1}{(x+1)^2}## in two steps. ##\frac{1}{x^2}## is a bad example.

Edit: I just realized that ##\frac{x}{(x+1)^2}## is also a poor example because you can integrate it directly.
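The divide-and-conquer point can still be illustrated mechanically; the following is a sympy sketch of my own, not from the thread: decompose, integrate the decomposed form, and confirm it matches direct integration.

```python
import sympy as sp

x = sp.symbols('x')
expr = x / (x + 1)**2

# Split into simpler pieces, then integrate the decomposed form.
decomp = sp.apart(expr, x)            # 1/(x + 1) - 1/(x + 1)**2
piecewise = sp.integrate(decomp, x)

# Direct integration gives the same antiderivative (up to a constant),
# so the two results differ by something whose derivative is zero.
direct = sp.integrate(expr, x)
print(piecewise)
print(sp.simplify(sp.diff(piecewise - direct, x)))
```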

Having a proof or an explanation why the partial fraction series was formulated as stated above would be helpful.

What specifically are you asking?

It should be obvious that if ##\frac{x}{(x+1)^2} = \frac{Ax+B}{(x+1)^2}## then A=1 and B=0, hence you haven't made the problem easier, so you need to consider lower powers. That was the question in the first post.

If you are asking for existence then a proof sketch is on the Wikipedia page.
 
pwsnafu said:
They are not. Just notation.
It's neither. ##\frac{1}{x^2}## is the partial fraction decomposition of itself.

As to your question, it's easier to work with when integrating. The point of partial fractions is a divide-and-conquer approach to finding antiderivatives of rational functions. So instead of trying to integrate ##\frac{x}{(x+1)^2}## in one step, you consider ##\frac{1}{x+1}-\frac{1}{(x+1)^2}## in two steps. ##\frac{1}{x^2}## is a bad example.

Edit: I just realized that ##\frac{x}{(x+1)^2}## is also a poor example because you can integrate it directly.
What specifically are you asking?

It should be obvious that if ##\frac{x}{(x+1)^2} = \frac{Ax+B}{(x+1)^2}## then A=1 and B=0, hence you haven't made the problem easier, so you need to consider lower powers. That was the question in the first post.

If you are asking for existence then a proof sketch is on the Wikipedia page.

My question is why we generalize the theorem as:

##\frac{x}{(x+1)^2} = \frac{Ax+B}{x+1}+ \frac{Mx+C}{(x+1)^2}## instead of just ##\frac{x}{(x+1)^2} = \frac{Ax+B}{(x+1)^2}##. I understand in this case that it is easy to just let A = 1 and B = 0, but in the general case of ## f(x) = \frac {P(x)}{(Q(x))^n} = \frac {Ax + B}{Q(x)} + \frac {Mx + C}{(Q(x))^2} + ... + \frac {Ex + D}{(Q(x))^n} ##

I just don't understand why it's necessary when writing the general form of a partial fraction to include a series where the denominators begin from i = 1 to i = n depending on the power (n) of the irreducible quadratic or linear function.
 
This is probably best illustrated by an example. I'm just going to assume ##Q(x) = x+1## and we'll consider successive n. Changing Q doesn't change the argument; it only makes it less clear.

Let n=2. The numerator has to have degree less than the denominator's, so it is at most linear. We have
$$\frac{Ax+B}{(x+1)^2} = \frac{a}{x+1}+\frac{b}{(x+1)^2} = \frac{ax + (a+b)}{(x+1)^2}$$
Now, your question was "why can't we just toss the first term?" Well, equating coefficients we see that ##a## is completely determined by ##A##, i.e. ##a = 0 \iff A = 0##. So we can toss the first term exactly when we are presented a problem where the numerator is a constant.

Let n=3. The numerator is at most quadratic. We have:
$$\frac{Ax^2+Bx+C}{(x+1)^3} = \frac{a}{x+1}+\frac{b}{(x+1)^2}+\frac{c}{(x+1)^3} = \frac{ax^2 + (2a+b)x + (a+b+c)}{(x+1)^3}$$
As before, we can toss the first term if and only if A=0. The middle term is more interesting. Because we know ##a = A##, we have ##b = 0 \iff B = 2A##. So we can toss the middle term only for problems when B is twice A.

And this argument works for n=4 and so on. In general we have an expression
$$\frac{A_{n-1}x^{n-1}+A_{n-2}x^{n-2}+\ldots+A_0}{(x+1)^n} = \frac{ax^{n-1}+((n-1)a+b)x^{n-2}+\ldots}{(x+1)^n}$$
and you can see again ##a=0 \iff A_{n-1}=0## and ##b=0 \iff A_{n-2}=(n-1)A_{n-1}##, et cetera.

Anyway, we write the expression in our form because the P(x) we get at the start is arbitrary. We don't know if ##A=0## or if ##B=2A## when we start.
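The n = 3 case above can be verified numerically; the following is my own sympy sketch using the concrete coefficients A = 2, C = 5: with B = 2A the middle term vanishes, and with any other B it does not.

```python
import sympy as sp

x = sp.symbols('x')

# B = 4 = 2A: the (x+1)^{-2} term drops out, exactly as argued above.
special = sp.apart((2*x**2 + 4*x + 5) / (x + 1)**3, x)
print(special)    # equals 2/(x + 1) + 3/(x + 1)**3

# B = 5 != 2A: all three powers are needed.
generic = sp.apart((2*x**2 + 5*x + 5) / (x + 1)**3, x)
print(generic)    # equals 2/(x + 1) + 1/(x + 1)**2 + 2/(x + 1)**3
```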
 
pwsnafu said:
This is probably best illustrated by an example. I'm just going to assume ##Q(x) = x+1## and we'll consider successive n. Changing Q doesn't change the argument; it only makes it less clear.

Let n=2. The numerator has to have degree less than the denominator's, so it is at most linear. We have
$$\frac{Ax+B}{(x+1)^2} = \frac{a}{x+1}+\frac{b}{(x+1)^2} = \frac{ax + (a+b)}{(x+1)^2}$$
Now, your question was "why can't we just toss the first term?" Well, equating coefficients we see that ##a## is completely determined by ##A##, i.e. ##a = 0 \iff A = 0##. So we can toss the first term exactly when we are presented a problem where the numerator is a constant.

Let n=3. The numerator is at most quadratic. We have:
$$\frac{Ax^2+Bx+C}{(x+1)^3} = \frac{a}{x+1}+\frac{b}{(x+1)^2}+\frac{c}{(x+1)^3} = \frac{ax^2 + (2a+b)x + (a+b+c)}{(x+1)^3}$$
As before, we can toss the first term if and only if A=0. The middle term is more interesting. Because we know ##a = A##, we have ##b = 0 \iff B = 2A##. So we can toss the middle term only for problems when B is twice A.

And this argument works for n=4 and so on. In general we have an expression
$$\frac{A_{n-1}x^{n-1}+A_{n-2}x^{n-2}+\ldots+A_0}{(x+1)^n} = \frac{ax^{n-1}+((n-1)a+b)x^{n-2}+\ldots}{(x+1)^n}$$
and you can see again ##a=0 \iff A_{n-1}=0## and ##b=0 \iff A_{n-2}=(n-1)A_{n-1}##, et cetera.

Anyway, we write the expression in our form because the P(x) we get at the start is arbitrary. We don't know if ##A=0## or if ##B=2A## when we start.

Thank you!
 

Similar threads

  • · Replies 3 ·
Replies
3
Views
3K
  • · Replies 8 ·
Replies
8
Views
2K
  • · Replies 2 ·
Replies
2
Views
2K
  • · Replies 5 ·
Replies
5
Views
3K
  • · Replies 3 ·
Replies
3
Views
2K
  • · Replies 5 ·
Replies
5
Views
3K
  • · Replies 1 ·
Replies
1
Views
3K
  • · Replies 8 ·
Replies
8
Views
2K
  • · Replies 14 ·
Replies
14
Views
2K
Replies
6
Views
2K