
I Matrix of columns of polynomial coefficients invertibility

  1. Jun 29, 2016 #1
    I am reviewing the method of partial fraction decomposition, and I get to the point where I have a matrix equation relating the coefficients of the original numerator to the coefficients of the numerators of the partial fractions. Each column of the matrix corresponds to a certain polynomial, shifted by an offset equal to the power of the term it multiplies in the numerator of a particular partial fraction.

    Anyway, the certain polynomial for each partial fraction is the product of the remaining factors of the denominator - factors that are pairwise coprime, i.e., any two of them have a Greatest Common Factor (GCF) of 1 - raised to their remaining multiplicities. The offset can be folded into the certain polynomial by multiplying by the identity function x (to the power of the offset), giving an "enhanced" certain polynomial (for the case of no offset, the enhanced polynomial is the same as the regular one).

    E.G.

    original fraction: [ R( x ) / D( x ) ]

    R( x ) = r0 + r1 x + r2 x^2 + r3 x^3

    D( x ) = ( x - 1 )^2 ( x^2 + x + 1 )

    f( x ) = ( x - 1 ) ( x^2 + x + 1 ) = x^3 - 1

    g( x ) = ( x^2 + x + 1 )

    h( x ) = ( x - 1 )^2 = x^2 - 2 x + 1

    [ R( x ) / D( x ) ] = [ A / ( x - 1 ) ] + [ B / ( x - 1 )^2 ] + [ ( C + D x ) / ( x^2 + x + 1 ) ]

    R( x ) = A f( x ) + B g( x ) + ( C + D x ) h( x ) = A f( x ) + B g( x ) + C h( x ) + D x h( x )

    [ M ] = [ { f( x ) } { g( x ) } { h( x ) } { x h( x ) } ] ← each column is the coefficient vector of that polynomial, not the function itself

    { r } = [ M ] { A B C D }^T

    [ M ] (rows are the coefficients of x^0, x^1, x^2, x^3; columns are { f } { g } { h } { x h }):

    [ -1  1  1  0 ]
    [  0  1 -2  1 ]
    [  0  1  1 -2 ]
    [  1  0  0  1 ]

    Now, it seems that because each of these enhanced certain polynomials is a unique product of factors that are pairwise coprime (GCF of 1 with respect to each other), no column of coefficients can be a linear combination of the others, and thus the matrix of these columns of enhanced certain polynomials is invertible (i.e., its determinant is not 0). However, it seems that there should be some theorem/lemma that proves this; what is that theorem/lemma?
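
    As a sanity check on the example above, here is a minimal sketch in Python (assuming sympy is available; the sample R(x) at the end is just a made-up illustration) that builds [ M ] from the coefficient columns and confirms its determinant is nonzero:

```python
# Minimal sketch (assumes sympy): build the coefficient matrix [M] for the
# example above and check that it is invertible.
from sympy import symbols, expand, Poly, Matrix

x = symbols('x')

f = expand((x - 1) * (x**2 + x + 1))   # x^3 - 1, multiplies A
g = x**2 + x + 1                       # multiplies B
h = expand((x - 1)**2)                 # x^2 - 2x + 1, multiplies C; x*h multiplies D

def coeff_column(p, degree=3):
    """Coefficients of p in powers x^0 .. x^degree (low power first)."""
    c = Poly(p, x).all_coeffs()[::-1]
    return c + [0] * (degree + 1 - len(c))

# Columns of [M] are the coefficient vectors { f } { g } { h } { x h }.
M = Matrix([coeff_column(p) for p in (f, g, h, expand(x * h))]).T

print(M)        # the 4x4 matrix of columns { f } { g } { h } { x h }
print(M.det())  # -9, nonzero, so { r } = [M] { A B C D }^T has a unique solution

# Hypothetical example numerator R(x) = 1 + 2x + 3x^2 + 4x^3:
r = Matrix(coeff_column(1 + 2*x + 3*x**2 + 4*x**3))
print(M.solve(r))  # the column { A B C D }^T for this R(x)
```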
     
  3. Jul 1, 2016 #2

    Stephen Tashi

    Science Advisor

    One way to interpret your question is "What is the proof that the standard algorithm for decomposing a ratio of polynomials into a sum of fractions works?". Proofs that algorithms work tend to be tedious and boring, so there may be a shortage of volunteers for presenting such a proof. I've seen people say that the key is "Bézout's Lemma" and the rest is left to the reader.

    If you take for granted that the standard algorithm works, then it can't lead to a system of equations whose associated coefficient matrix is not invertible - so there is a proof by contradiction that the rows of such a system are linearly independent (which is a stronger statement than saying the rows are pairwise independent).

    However, my guess is that you are actually inquiring if there is any interesting mathematics that studies in more detail the relations between invertible matrices and problems of writing a ratio of polynomials as a sum of fractions. If that's what you're curious about, try to formulate some questions that deal with possible relations.

    I haven't thought about the subject. Just off the top of my head, we could ask questions like:

    Can any invertible matrix be associated with a matrix that arises in writing some ratio of polynomials as a sum of fractions?

    Can the relation between such problems and invertible matrices be used to define equivalence classes of invertible matrices under a relation weaker than "exact" equality? Can we write matrices whose elements are functions of parameters such that they define a family of invertible matrices?
     
  4. Jul 1, 2016 #3
    I have found a good explanation here that I am working through, and yes, it seems that Bézout's Identity says that there must exist polynomials such that the sum of their products with the coprime factors of the original denominator is 1, and from that there must exist partial fraction numerators such that it all works (a quick check of the identity is sketched below the link).

    http://math.stackexchange.com/questions/692103/why-does-partial-fraction-decomposition-always-work
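
    For a concrete check of that identity, here is a small sketch (assuming sympy; gcdex is its extended Euclidean algorithm for polynomials) applied to the two coprime factors of D(x) from the example above:

```python
# Small sketch (assumes sympy): Bezout's identity for the coprime factors
# (x - 1)^2 and (x^2 + x + 1) of D(x) from the example above.
from sympy import symbols, expand, simplify, gcdex

x = symbols('x')
p = expand((x - 1)**2)
q = x**2 + x + 1

u, v, d = gcdex(p, q, x)       # u*p + v*q = d, where d = gcd(p, q)
print(d)                       # 1, since the factors are coprime
print(simplify(u*p + v*q))     # 1, verifying the identity

# Dividing through by p*q: 1/(p*q) = u/q + v/p, so
# R/(p*q) = R*u/q + R*v/p, which is the first step of the decomposition.
```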
     