Invertibility of a matrix whose columns are polynomial coefficients

swampwiz
I am reviewing the method of partial fraction decomposition, and I get to the point where I have a matrix equation that relates the coefficients of the original numerator to the coefficients of the numerators of the partial fractions. Each column corresponds to a certain polynomial, shifted by an offset equal to the power of the term in the numerator of a particular partial fraction.

Anyway, the certain polynomial is the product of the unique, non-unity factors - i.e., factors whose Greatest Common Factor (GCF) with each of the others is 1 - at varying levels of multiplicity; the offset can be folded into the certain polynomial by multiplying in the appropriate power of x, giving an enhanced certain polynomial (for the case of no offset, the enhanced polynomial is the same as the regular one).
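In symbols (my own gloss on this setup, anticipating the example below): if the denominator factors as

$$D(x) = \prod_i p_i(x)^{m_i},$$

then the partial fraction with denominator ##p_i(x)^k## contributes the columns of coefficients of ##x^j\,D(x)/p_i(x)^k## for ##j = 0, \dots, \deg p_i - 1##; these are the "enhanced certain polynomials."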

E.g.:

original fraction: ##R(x)/D(x)##

##R(x) = r_0 + r_1 x + r_2 x^2 + r_3 x^3##

##D(x) = (x - 1)^2 (x^2 + x + 1)##

##f(x) = (x - 1)(x^2 + x + 1) = x^3 - 1##

##g(x) = x^2 + x + 1##

##h(x) = (x - 1)^2 = x^2 - 2x + 1##

$$\frac{R(x)}{D(x)} = \frac{A}{x - 1} + \frac{B}{(x - 1)^2} + \frac{Cx + D}{x^2 + x + 1}$$

##R(x) = A\,f(x) + B\,g(x) + (Cx + D)\,h(x)##

##[M] = [\,\{f(x)\}\ \{g(x)\}\ \{x\,h(x)\}\ \{h(x)\}\,]## ← each ##\{\cdot\}## denotes the column of coefficients of that polynomial, not the function itself

##\{r\} = [M]\,\{A\ B\ C\ D\}^T##

##[M]## (rows are the coefficients of ##x^0## through ##x^3##; columns are ##\{f\}, \{g\}, \{x\,h\}, \{h\}##):

$$[M] = \begin{bmatrix} -1 & 1 & 0 & 1 \\ 0 & 1 & 1 & -2 \\ 0 & 1 & -2 & 1 \\ 1 & 0 & 1 & 0 \end{bmatrix}$$
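As a quick sanity check (my own sketch, not part of the original post), one can build ##[M]## with sympy and confirm that its determinant is nonzero, so ##\{r\} = [M]\{A\ B\ C\ D\}^T## has a unique solution for every ##R(x)##:

[CODE=python]
# Sketch: verify that the coefficient matrix [M] above is invertible.
import sympy as sp

x = sp.symbols('x')
f = sp.expand((x - 1) * (x**2 + x + 1))   # x**3 - 1
g = x**2 + x + 1
h = sp.expand((x - 1)**2)                 # x**2 - 2*x + 1

def coeffs(p):
    """Coefficient vector of p: constant term first, padded to degree 3."""
    c = sp.Poly(p, x).all_coeffs()[::-1]
    return c + [0] * (4 - len(c))

# Columns correspond to the unknowns (A, B, C, D): f, g, x*h, h.
M = sp.Matrix([coeffs(f), coeffs(g), coeffs(sp.expand(x * h)), coeffs(h)]).T
print(M.det())  # 9 -- nonzero, so [M] is invertible
[/CODE]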

Now, it seems that because each of these enhanced certain polynomials is a unique product of factors that all have a GCF of 1 with respect to each other, the coefficients of one enhanced certain polynomial cannot have any linear dependency on those of any other, and thus the matrix of these columns of enhanced certain polynomials is invertible (i.e., determinant not 0). However, it seems that there should be some theorem/lemma that proves this; what is this theorem/lemma?
 
swampwiz said:
Now, it seems that because each of these enhanced certain polynomials is a unique product of factors that all have a GCF of 1 with respect to each other, the coefficients of one enhanced certain polynomial cannot have any linear dependency on those of any other, and thus the matrix of these columns of enhanced certain polynomials is invertible (i.e., determinant not 0). However, it seems that there should be some theorem/lemma that proves this; what is this theorem/lemma?

One way to interpret your question is: "What is the proof that the standard algorithm for decomposing a ratio of polynomials as a sum of fractions works?" Proofs that algorithms work tend to be tedious and boring, so there may be a shortage of volunteers for presenting such a proof. I've seen people say that the key is Bézout's Lemma and the rest is left to the reader.

If you take for granted that the standard algorithm works, then it can't lead to a problem of solving a system of equations whose associated coefficient matrix is not invertible - so there is a proof by contradiction that the rows of such a system are independent (which is a stronger statement than saying the rows are pairwise independent).
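To make that precise for the example above (my own gloss, not part of the post): the map

$$T(A, B, C, D) = A\,f(x) + B\,g(x) + (Cx + D)\,h(x)$$

is linear from ##\mathbb{R}^4## into the 4-dimensional space of polynomials of degree at most 3. If a decomposition exists for every ##R(x)##, then ##T## is surjective, and a surjective linear map between spaces of the same finite dimension is also injective; hence its matrix ##[M]## is invertible.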

However, my guess is that you are actually inquiring if there is any interesting mathematics that studies in more detail the relations between invertible matrices and problems of writing a ratio of polynomials as a sum of fractions. If that's what you're curious about, try to formulate some questions that deal with possible relations.

I haven't thought about the subject. Just off the top of my head, we could ask questions like:

Can any invertible matrix be associated with a matrix that arises in writing some fraction of polynomials as a sum of fractions?

Can the relation between such problems and invertible matrices be used to define equivalence classes of invertible matrices coarser than "exact" equality? Can we write matrices whose elements are functions of parameters such that they define a family of invertible matrices?
 
Stephen Tashi said:
One way to interpret your question is: "What is the proof that the standard algorithm for decomposing a ratio of polynomials as a sum of fractions works?" Proofs that algorithms work tend to be tedious and boring, so there may be a shortage of volunteers for presenting such a proof. I've seen people say that the key is Bézout's Lemma and the rest is left to the reader.

If you take for granted that the standard algorithm works, then it can't lead to a problem of solving a system of equations whose associated coefficient matrix is not invertible - so there is a proof by contradiction that the rows of such a system are independent (which is a stronger statement than saying the rows are pairwise independent).

However, my guess is that you are actually inquiring if there is any interesting mathematics that studies in more detail the relations between invertible matrices and problems of writing a ratio of polynomials as a sum of fractions. If that's what you're curious about, try to formulate some questions that deal with possible relations.

I haven't thought about the subject. Just off the top of my head, we could ask questions like:

Can any invertible matrix be associated with a matrix that arises in writing some fraction of polynomials as a sum of fractions?

Can the relation between such problems and invertible matrices be used to define equivalence classes of invertible matrices coarser than "exact" equality? Can we write matrices whose elements are functions of parameters such that they define a family of invertible matrices?

I have found a good explanation here that I am working through, and yes, it seems that Bézout's Identity says that there must exist polynomials such that the sum of the products of these polynomials with the factors of the original denominator is 1, and from that there must exist partial fraction numerators such that it all works.

http://math.stackexchange.com/questions/692103/why-does-partial-fraction-decomposition-always-work
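For the denominator in the running example, sympy's extended Euclidean algorithm produces those Bézout polynomials directly (a small illustration of my own; the numerator ##R(x)## in the last line is an arbitrary choice):

[CODE=python]
# Sketch: Bezout's identity for the two coprime factors of D(x).
import sympy as sp

x = sp.symbols('x')
p = (x - 1)**2      # repeated linear factor of D(x)
q = x**2 + x + 1    # irreducible quadratic factor of D(x)

u, v, g = sp.gcdex(p, q, x)    # u*p + v*q == g == gcd(p, q)
print(g)                       # 1, since the factors are coprime
print(sp.simplify(u*p + v*q))  # confirms u*p + v*q == 1

# Dividing u*p + v*q = 1 by D = p*q gives 1/D = u/q + v/p, which is
# where partial-fraction numerators come from; apart() does the rest.
R = 1 + 2*x + 3*x**2 + x**3    # arbitrary numerator for illustration
print(sp.apart(R / (p * q), x))
[/CODE]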
 