
Some questions about eigenvector computation

  1. Apr 12, 2016 #1
    NOTE: For the answers to all these questions, I'd like an explanation (or a reference to a book or internet page) of how the answer has been derived.

    This question can be presumed to be for the general eigenproblem in which [ K ] and [ M ] are Hermitian matrices, with [ M ] also being positive definite, or [ K ] is a normal matrix and [ M ] is the identity matrix. My Question #0 is whether these conditions must be met for there to be a complete eigensolution, or whether the actual conditions are narrower or broader. (I understand that there is finagling that can be done on [ K ] and [ M ] to make [ M ] positive definite; such a pair is called a positive definite pencil.)

    [ K ] { x } = λ [ M ] { x }
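
    To make this concrete for myself, here is a minimal sketch (assuming Python with NumPy/SciPy, and a small [ K ] and [ M ] that I made up for illustration). The Hermitian-[ K ], positive-definite-[ M ] case is exactly the one handled by scipy.linalg.eigh:

    Code (Python):
    import numpy as np
    from scipy.linalg import eigh

    # Made-up 3x3 example: K Hermitian (real symmetric here), M symmetric positive definite
    K = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    M = np.array([[2.0, 0.5, 0.0],
                  [0.5, 2.0, 0.5],
                  [0.0, 0.5, 1.0]])

    # Generalized eigenproblem K x = lambda * M x
    lams, X = eigh(K, M)
    print("eigenvalues:", lams)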

    There is the characteristic matrix which is a function of λ

    [ G( λ ) ] = [ K ] - λ[ M ]

    and the eigenproblem matrix EQ

    [ G( λ ) ] { φ } = { 0 }

    from which the set of λ is solved by setting the determinant of [ G( λ ) ] to 0. So far so good.

    The next step is to solve for the eigenvectors corresponding to each eigenvalue (presume here that the eigenvalues are distinct; what to do about repeated values is a question for another thread). Now I get that [ G( λ ) ] has some degree of linear dependency among its rows/columns, since its determinant is 0 by construction. My Question #1 is whether that dependency is always of rank 1 (i.e., nullity 1), or whether it can be higher.
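
    For Question #1, here is how I would at least check it numerically in a small case (again just a sketch, reusing the made-up [ K ] and [ M ] from above): compare the rank of [ G( λ ) ] with its size at each eigenvalue. The difference (the nullity) is the number of independent eigenvectors for that λ, and my expectation is that it comes out as 1 whenever the eigenvalue is not repeated:

    Code (Python):
    import numpy as np
    from scipy.linalg import eigh

    # Same made-up K and M as in the earlier sketch
    K = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    M = np.array([[2.0, 0.5, 0.0],
                  [0.5, 2.0, 0.5],
                  [0.0, 0.5, 1.0]])
    n = K.shape[0]

    lams, _ = eigh(K, M)
    for lam in lams:
        G = K - lam * M                          # characteristic matrix G(lambda)
        det = np.linalg.det(G)                   # should be ~0 at an eigenvalue
        nullity = n - np.linalg.matrix_rank(G)   # number of independent eigenvectors
        print(f"lambda = {lam:.6f}  det = {det:.1e}  nullity = {nullity}")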

    OK, so the eigenproblem matrix EQ is partitioned into a single boundary coordinate b and an internal section i for the rest of the coordinates

    Gbb( λ ) φb + { Gbi( λ ) }^T { φi } = 0

    { Gib( λ ) } φb + [ Gii( λ ) ] { φi } = { 0 }

    The value for φb is then assigned some dummy value (typically 1), so that the latter partition EQ becomes

    { φi } = -[ Gii( λ ) ]^-1 { Gib( λ ) } φb

    So obviously [ Gii( λ ) ] must be invertible, and thus b must be chosen to make it so. My Question #2 is whether it is guaranteed that there is always some choice of b such that the resulting [ Gii( λ ) ] is invertible. My Question #3 is: if it turns out that [ Gii( λ ) ] is not invertible for a given choice of b, does that imply that the corresponding element of { φ } will eventually be calculated to be 0, and if so, does the converse implication hold as well?

    And as for Gbb( λ ), there doesn't seem to be a condition that it not be 0, since nothing is being solved for in the b partition, although it sure seems like there should be one. My Question #4 is whether there is such a condition; if so, is it somehow always met, and if not, does that mean that that coordinate cannot be chosen as b?
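
    Here is the whole partitioning procedure as I understand it, written out as a numerical sketch (assuming Python/NumPy, the same made-up [ K ] and [ M ] as above, and taking the first coordinate as b): fix φb = 1, solve the internal block for { φi }, assemble { φ }, and then check the full equation [ G( λ ) ] { φ } = { 0 }; the b-row of that residual is essentially the check my Question #4 is asking about:

    Code (Python):
    import numpy as np
    from scipy.linalg import eigh

    # Same made-up K and M as in the earlier sketches
    K = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    M = np.array([[2.0, 0.5, 0.0],
                  [0.5, 2.0, 0.5],
                  [0.0, 0.5, 1.0]])

    lams, _ = eigh(K, M)
    lam = lams[0]                 # pick one (distinct) eigenvalue
    G = K - lam * M               # characteristic matrix G(lambda)

    b = 0                         # chosen boundary coordinate (must make G_ii invertible)
    i = [1, 2]                    # internal coordinates

    G_ii = G[np.ix_(i, i)]        # internal-internal block
    G_ib = G[np.ix_(i, [b])]      # internal-boundary column

    phi_b = 1.0                                   # dummy value for the b component
    phi_i = -np.linalg.solve(G_ii, G_ib) * phi_b  # phi_i = -G_ii^-1 G_ib phi_b

    phi = np.zeros(K.shape[0])
    phi[b] = phi_b
    phi[i] = phi_i.ravel()

    # Residual of the full eigenproblem EQ, b-row included (Question #4's check):
    print("G(lambda) phi =", G @ phi)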

    Thanks



  3. Apr 17, 2016 #2
    Thanks for the post! This is an automated courtesy bump. Sorry you aren't generating responses at the moment. Do you have any further information, come to any new conclusions or is it possible to reword the post?
     
  4. Apr 18, 2016 #3
    Well, I have been reading up, and I am getting a better idea about the theory which would answer my questions, but at present, I still am confused.
     
  5. Apr 19, 2016 #4

    chiro

    Science Advisor

    Hey swapwiz.

    Do you understand how a zero determinant leads to dependence between vectors (within a system) in some way?

    That is the most crucial aspect of getting towards the characteristic polynomial and from there on-wards it is using the answer to satisfy this property.
     
  6. Apr 20, 2016 #5
    Yes, I understand that the homogeneous matrix EQ yields the trivial result (i.e., all 0's), but that non-trivial results are possible when the determinant of the coefficient matrix is 0. I guess where I am confused is how to determine how many linear dependency constraints there are in such a zero-determinant matrix, since more than one dependency yields the same zero determinant. I have a hunch that for the eigenproblem there is always one linear dependency, so that the eigenvector components are all fixed relative to each other up to a scale factor, and that each repeat of an eigenvalue introduces another dependency. Is there any way, short of doing a full eigendecomposition, to determine how many linear dependencies there are? Also, is the number of linear dependency constraints always equal to the difference between the matrix size and its rank?
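
    Here is the kind of small check I have in mind (just a sketch, assuming Python/NumPy and a deliberately degenerate example I made up): with [ M ] = I and a [ K ] that has a repeated eigenvalue, the nullity of [ G( λ ) ] at that eigenvalue should equal the multiplicity, and in every case it should equal the matrix size minus the rank:

    Code (Python):
    import numpy as np

    # Made-up degenerate example: M = I, K = diag(2, 2, 5),
    # so lambda = 2 has multiplicity 2 and lambda = 5 has multiplicity 1.
    K = np.diag([2.0, 2.0, 5.0])
    M = np.eye(3)
    n = 3

    for lam in (2.0, 5.0):
        G = K - lam * M
        rank = np.linalg.matrix_rank(G)
        print(f"lambda = {lam}: rank = {rank}, nullity = n - rank = {n - rank}")

    # Expected: lambda = 2 -> nullity 2 (two independent eigenvectors),
    #           lambda = 5 -> nullity 1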
     
  7. Apr 21, 2016 #6

    chiro

    Science Advisor

    Yes - that is the point.

    Think about the situation where you either have a zero vector or a non-zero vector.

    That will help you resolve the problem.
     