Invertibility of a matrix of polynomial coefficient columns

In summary, partial fraction decomposition breaks a ratio of polynomials into a sum of simpler fractions. The decomposition can be written as a matrix equation relating the coefficients of the original numerator to the coefficients of the partial fraction numerators. Each "enhanced certain polynomial" - a product of the denominator's pairwise-coprime factors, with an offset folded in as a power of x - supplies one column of the matrix. Because the factors are pairwise coprime, the columns are linearly independent and the matrix is invertible. A full proof that the standard algorithm works is tedious and rests on Bézout's Identity, but it does show that the rows of the matrix are independent, and there may be potential for further study of the relation between such decomposition problems and invertible matrices.
  • #1
swampwiz
I am reviewing the method of partial fraction decomposition, and I get to the point where I have a matrix equation that relates the coefficients of the original numerator to the coefficients of the numerators of the partial fractions, with each column corresponding to a certain polynomial at an offset equal to the power of the term in the numerator of a particular partial fraction.

Anyway, the certain polynomial is the product of all the distinct, non-unity factors - i.e., factors that are pairwise coprime, each pair having a Greatest Common Factor (GCF) of 1 - at varying multiplicities; the offset can be folded into the certain polynomial by multiplying in the corresponding power of x, giving an enhanced certain polynomial (for the case of no offset, the enhanced polynomial is the same as the regular one).

E.g.:

original fraction: R(x) / D(x)

R(x) = r0 + r1 x + r2 x^2 + r3 x^3

D(x) = (x - 1)^2 (x^2 + x + 1)

f(x) = (x - 1)(x^2 + x + 1) = x^3 - 1

g(x) = x^2 + x + 1

h(x) = (x - 1)^2 = x^2 - 2x + 1

R(x) / D(x) = A / (x - 1) + B / (x - 1)^2 + (C + D x) / (x^2 + x + 1)

R(x) = A f(x) + B g(x) + (C + D x) h(x)

[M] = [ {f(x)} {g(x)} {h(x)} {x h(x)} ] ← the columns are coefficient vectors, not the functions themselves

{r} = [M] {A B C D}^T

[M] (rows are the coefficients of 1, x, x^2, x^3):

[ -1  1  1  0 ]
[  0  1 -2  1 ]
[  0  1  1 -2 ]
[  1  0  0  1 ]
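
To make the example concrete, here is a minimal numeric sketch in Python/NumPy (the sample numerator R(x) = 1 + x + x^2 + x^3 is an arbitrary choice of mine): it builds [M] from the coefficient columns, checks that the determinant is non-zero, and solves for {A B C D}.

```python
import numpy as np

# Columns of [M] are the coefficient vectors of f, g, h, x*h,
# listed in powers 1, x, x^2, x^3 (top to bottom).
f  = [-1,  0,  0,  1]   # f(x)   = x^3 - 1
g  = [ 1,  1,  1,  0]   # g(x)   = x^2 + x + 1
h  = [ 1, -2,  1,  0]   # h(x)   = x^2 - 2x + 1
xh = [ 0,  1, -2,  1]   # x h(x) = x^3 - 2x^2 + x
M  = np.array([f, g, h, xh], dtype=float).T

print(np.linalg.det(M))          # -9.0: non-zero, so [M] is invertible

# Solve {r} = [M] {A B C D}^T for the sample numerator
r = np.array([1.0, 1.0, 1.0, 1.0])
A, B, C, D = np.linalg.solve(M, r)
print(A, B, C, D)                # 0.667, 1.333, 0.333, 0.333
```

Plugging back in gives R(x)/D(x) = (2/3)/(x - 1) + (4/3)/(x - 1)^2 + (1/3 + x/3)/(x^2 + x + 1), which can be spot-checked at, say, x = 0 or x = 2.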

Now, it seems that because each of these enhanced certain polynomials is a distinct product of factors that are pairwise coprime (i.e., each pair of factors has a GCF of 1), the coefficient vector of one enhanced certain polynomial cannot be linearly dependent on the others, and thus the matrix whose columns are these coefficient vectors is invertible (i.e., its determinant is not 0). However, it seems that there should be some theorem or lemma that proves this; what is that theorem/lemma?
 
  • #2
swampwiz said:
Now, it seems that because each of these enhanced certain polynomials is a distinct product of factors that are pairwise coprime (i.e., each pair of factors has a GCF of 1), the coefficient vector of one enhanced certain polynomial cannot be linearly dependent on the others, and thus the matrix whose columns are these coefficient vectors is invertible (i.e., its determinant is not 0). However, it seems that there should be some theorem or lemma that proves this; what is that theorem/lemma?

One way to interpret your question is: "What is the proof that the standard algorithm for decomposing a ratio of polynomials into a sum of fractions works?" Proofs that algorithms work tend to be tedious and boring, so there may be a shortage of volunteers to present such a proof. I've seen people say that the key is Bézout's Lemma and that the rest is left to the reader.

If you take for granted that the standard algorithm works, then it can't lead to a system of equations whose associated coefficient matrix is not invertible - so there is a proof by contradiction that the rows of such a system are independent (which is a stronger statement than saying the rows are pairwise independent).

However, my guess is that you are actually asking whether there is any interesting mathematics that studies in more detail the relation between invertible matrices and problems of writing a ratio of polynomials as a sum of fractions. If that's what you're curious about, try to formulate some questions that deal with the possible relations.

I haven't thought about the subject. Just off the top of my head, we could ask questions like:

Can any invertible matrix be associated with a matrix that arises in writing some ratio of polynomials as a sum of fractions?

Can the relation between such problems and invertible matrices be used to define equivalence classes of invertible matrices that differ from "exact" equality? Can we write matrices whose elements are functions of parameters, such that they define a family of invertible matrices?
 
  • #3
Stephen Tashi said:
One way to interpret your question is: "What is the proof that the standard algorithm for decomposing a ratio of polynomials into a sum of fractions works?" ... I've seen people say that the key is Bézout's Lemma and that the rest is left to the reader.

I have found a good explanation here that I am working through, and yes, it seems that Bézout's Identity says that there must exist polynomials such that the sum of the products of these polynomials with the coprime factors of the original denominator is 1, and from that there must exist partial fraction numerators such that it all works.

http://math.stackexchange.com/questions/692103/why-does-partial-fraction-decomposition-always-work
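
For instance, here is a small sketch using SymPy (my own illustration, not taken from the linked answer): gcdex produces the Bézout polynomials s and t with s*p + t*q = 1 for the two coprime factors of D(x) above, and dividing that identity by D(x) = p*q exhibits the split R/D = (R*t)/p + (R*s)/q that the existence argument relies on.

```python
from sympy import symbols, expand, gcdex, simplify, apart

x = symbols('x')
p = expand((x - 1)**2)     # x^2 - 2x + 1
q = x**2 + x + 1

# Bezout's Identity: since gcd(p, q) = 1, there exist s, t with s*p + t*q = 1
s, t, one = gcdex(p, q, x)
print(s, t, one)                 # x/3 + 1/3, 2/3 - x/3, 1
assert simplify(s*p + t*q) == 1

# Dividing s*p + t*q = 1 by D = p*q gives 1/D = s/q + t/p, so
# R/D = (R*t)/p + (R*s)/q; reducing each term modulo its denominator
# yields the partial fraction numerators.
R = 1 + x + x**2 + x**3
print(apart(R / (p*q), x))       # SymPy's own partial fraction decomposition
```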
 

1. What does it mean for a matrix of polynomial coefficient columns to be invertible?

A matrix whose columns are the coefficient vectors of polynomials is invertible when it has an inverse matrix. This concept is important in linear algebra and is useful in solving systems of polynomial equations, such as the one arising in partial fraction decomposition.

2. How is the invertibility of such a matrix determined?

By calculating the determinant of the matrix. If the determinant is non-zero, the matrix is invertible; if the determinant is zero, it is not.
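
As a tiny illustration (a toy case of my own, not from the thread): the coefficient columns of x + 1 and 2x + 2 are proportional, so their matrix is singular, while x + 1 and x - 1 give an invertible matrix.

```python
import numpy as np

# Each column holds a polynomial's coefficients (constant term first).
M_dependent   = np.array([[1, 2], [1, 2]])    # x + 1 and 2x + 2 = 2(x + 1)
M_independent = np.array([[1, -1], [1, 1]])   # x + 1 and x - 1

print(np.linalg.det(M_dependent))    # 0.0: not invertible
print(np.linalg.det(M_independent))  # 2.0: invertible
```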

3. What is the significance of such a matrix being invertible?

An invertible matrix means the system of polynomial equations it represents can be solved uniquely. This is useful in many areas of science and engineering, including computer graphics, physics, and chemistry.

4. Can such a matrix be invertible if it has complex coefficients?

Yes. Invertibility depends only on the determinant, which can be computed with complex numbers just as well, so a matrix with complex coefficients is invertible as long as its determinant is non-zero.

5. Are there applications of this concept?

Yes, there are many. Examples include solving systems of polynomial equations in computer graphics, signal processing, and cryptography; it is also useful in solving differential equations and in numerical analysis.
