Eigenvalues of the product of two matrices

  1. Hello everyone,

    Before I ask my question, be informed that I haven't had any formal course in linear algebra, so please forgive me if the question has a well-known answer.

    I have two symmetric matrices, A and B. We know the eigenvalues and eigenvectors of A and B. Now I need to calculate the eigenvalues of the product of the two matrices, AB. The questions:

    1. Is there a relation between the eigenvalues of AB and those of A and B?

    2. If the answer to the above question is no, can they be of any help in finding the eigenvalues of AB?

    Thanks
     
  3. HallsofIvy


    In general, no, unless they happen to have the same eigenvectors. If [itex]\lambda[/itex] is an eigenvalue of A and [itex]\mu[/itex] is an eigenvalue of B, both corresponding to eigenvector v, then we can say
    [tex](AB)v= A(Bv)= A(\mu v)= \mu (Av)= \mu(\lambda v)=(\mu\lambda)v[/tex]
    That is, the eigenvalues of AB (and BA) are the products of the corresponding eigenvalues of A and B. But that is only true if A and B have the same eigenvectors. If not, there is no simple relationship in general.
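    For illustration, here is a small numpy sketch (my own, with made-up matrices) of that fact: A and B are built from the same orthonormal eigenvector basis, and the eigenvalues of AB come out as the products of the corresponding eigenvalues.
    [code]
    import numpy as np

    # Build A and B with the SAME orthonormal eigenvectors Q but different eigenvalues.
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # a random orthonormal basis
    lam = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # eigenvalues of A (made up)
    mu = np.array([0.5, 1.5, 2.5, 3.5, 4.5])           # eigenvalues of B (made up)
    A = Q @ np.diag(lam) @ Q.T
    B = Q @ np.diag(mu) @ Q.T

    # The eigenvalues of AB are the products lam[i] * mu[i].
    print(np.sort(np.linalg.eigvals(A @ B).real))
    print(np.sort(lam * mu))
    [/code]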
     
  4. Thanks, HallsofIvy.

    In fact I need to solve the following equation for the scalar [itex]\omega^{2}[/itex]:

    [itex]\det(A-\omega^{2}B)=0[/itex]

    This is equivalent to finding the eigenvalues of [itex]B^{-1}A[/itex]. I was hoping to find the eigenvalues without multiplying the matrices, because my matrices are very large and very sparse.
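    For small dense matrices, scipy can solve this as the generalized eigenvalue problem [itex]Ax=\omega^{2}Bx[/itex] without ever forming [itex]B^{-1}A[/itex]; a minimal sketch (my own, assuming B is symmetric positive definite):
    [code]
    import numpy as np
    from scipy.linalg import eigh

    # Small made-up symmetric matrices; B is assumed symmetric positive definite.
    rng = np.random.default_rng(1)
    M = rng.standard_normal((6, 6))
    A = (M + M.T) / 2
    N = rng.standard_normal((6, 6))
    B = N @ N.T + 6 * np.eye(6)

    # Solves A x = w2 * B x, i.e. det(A - w2*B) = 0, without forming inv(B) @ A.
    w2, X = eigh(A, B)
    print(w2)
    [/code]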
     
  5. Not an expert on linear algebra, but anyway:
    I think you can get bounds on the modulus of the eigenvalues of the product. There are very short (one- or two-line) proofs, based on considering scalars x'Ay (where x and y are column vectors and prime denotes transpose), that real symmetric matrices have real eigenvalues and that eigenspaces corresponding to distinct eigenvalues are orthogonal. I believe it can be shown further that every real symmetric matrix has an orthonormal basis of eigenvectors (the spectral theorem). This enables any eigenvector of AB to be expanded as a linear combination of orthonormal eigenvectors of B. Arguing along such lines you can (I think?) show that the modulus of any eigenvalue of AB cannot exceed the product of the largest eigenvalue moduli of A and of B. There is a whole theory of "matrix norms", which I do not know well, that can be used to make such arguments precise.

    If I am recalling correctly (that A and B each have an orthonormal basis of eigenvectors), then there is an orthogonal transformation mapping one basis onto the other, which may have consequences relevant to your question.
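    As a quick numerical check of that bound (my own sketch, not a proof): for symmetric A and B, every eigenvalue of AB satisfies [itex]|\lambda| \le \|A\|_{2}\,\|B\|_{2}[/itex], the product of the two spectral radii.
    [code]
    import numpy as np

    rng = np.random.default_rng(2)

    def random_symmetric(n):
        M = rng.standard_normal((n, n))
        return (M + M.T) / 2

    A = random_symmetric(8)
    B = random_symmetric(8)

    # For a symmetric matrix the 2-norm equals the largest eigenvalue modulus.
    rho_A = np.max(np.abs(np.linalg.eigvalsh(A)))
    rho_B = np.max(np.abs(np.linalg.eigvalsh(B)))
    eigs_AB = np.linalg.eigvals(A @ B)

    print(np.max(np.abs(eigs_AB)), "<=", rho_A * rho_B)
    [/code]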
     
    Last edited: Apr 2, 2012
  6. AlephZero


    Solving this by forming ##B^{-1}A## is not a good idea even for small matrices, because ##B^{-1}A## is not even symmetric when ##A## and ##B## are symmetric.

    If one of ##A## or ##B## is positive definite, a slightly better idea for small matrices is to do a Cholesky factorization, say ## A = LL^T##, find the eigenpairs of ##L^{-1}BL^{-T}## and then transform the results back to the original problem.
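    A sketch of that reduction (my own illustration, factoring ##B = LL^T## so that the reduced symmetric problem ##L^{-1}AL^{-T}y = \omega^2 y## gives the ##\omega^2## directly; factoring ##A## instead, as described above, gives their reciprocals):
    [code]
    import numpy as np
    from scipy.linalg import cholesky, eigh, solve_triangular

    # Small made-up test matrices: A symmetric, B symmetric positive definite.
    rng = np.random.default_rng(3)
    n = 6
    M = rng.standard_normal((n, n)); A = (M + M.T) / 2
    N = rng.standard_normal((n, n)); B = N @ N.T + n * np.eye(n)

    L = cholesky(B, lower=True)                        # B = L L^T
    # Form C = L^{-1} A L^{-T} with triangular solves (no explicit inverses).
    C = solve_triangular(L, A, lower=True)
    C = solve_triangular(L, C.T, lower=True).T
    C = (C + C.T) / 2                                  # clean up round-off asymmetry

    w2, Y = eigh(C)                                    # C y = w2 y
    X = solve_triangular(L, Y, lower=True, trans='T')  # transform back: x = L^{-T} y

    # Check against solving the generalized problem directly.
    print(np.allclose(np.sort(w2), eigh(A, B, eigvals_only=True)))
    [/code]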

    If A and B are large sparse matrices, what you really want is a good implementation of the Lanczos method, for example ARPACK, which AFAIK is included in Matlab (or Google will find the Fortran code). But don't try to implement Lanczos yourself - the math description of the algorithm looks simple enough, but there are some practical issues involved in getting it to work reliably which you probably don't want to know about!
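    For what it's worth, scipy exposes ARPACK as scipy.sparse.linalg.eigsh, which handles the generalized symmetric problem in shift-invert mode; a rough sketch with made-up sparse matrices (sizes and parameters are placeholders):
    [code]
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 10000
    # Made-up sparse symmetric stand-ins for the real A and B.
    A = sp.diags([np.full(n - 1, -1.0), np.full(n, 2.0), np.full(n - 1, -1.0)],
                 [-1, 0, 1], format='csc')
    B = sp.identity(n, format='csc')

    # Smallest 10 eigenvalues of A x = w2 B x, via ARPACK in shift-invert mode about sigma=0.
    w2, X = eigsh(A, k=10, M=B, sigma=0, which='LM')
    print(w2)
    [/code]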
     
  7. Yes, I have recently learned that the eigenvectors corresponding to distinct eigenvalues of a symmetric matrix are orthogonal. The proof is easy:

    [itex]Ax_{1}=\lambda_{1} x_{1} → x^{T}_{2}Ax_{1}=\lambda_{1} x^{T}_{2} x_{1}\:\:\: (1)[/itex]

    [itex]Ax_{2}=\lambda_{2} x_{2} → x^{T}_{1}Ax_{2}=\lambda_{2} x^{T}_{1} x_{2}[/itex]

    Transposing the latter, we have

    [itex]x^{T}_{2}A^{T}x_{1}=\lambda_{2} x^{T}_{2} x_{1}\:\:\: (2)[/itex]

    If A is symmetric, then (1) and (2) yield [itex] x^{T}_{2} x_{1} =0[/itex] or [itex] \lambda_{1} = \lambda_{2} [/itex].

    Some methods use this important property, but I don't remember which ones exactly. About the bounds: I tried a little, but my conclusion was different from yours. According to my rough calculations (about which I'm not sure), the upper bound is the ratio of the largest eigenvalue of A to the smallest of B, and the lower bound is the ratio of the smallest eigenvalue of A to the largest of B. This is not so important, because it's quite easy to get the largest and smallest eigenvalues of a standard or generalized eigenvalue problem using the power method and inverse iteration.
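    A bare-bones power iteration, for reference (my own sketch; inverse iteration is the same loop applied to a factorization of A, or of a shifted A, to reach the eigenvalue nearest the shift):
    [code]
    import numpy as np

    def power_method(A, iters=200, tol=1e-10, seed=0):
        """Estimate the eigenvalue of largest modulus of a symmetric matrix A."""
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(A.shape[0])
        x /= np.linalg.norm(x)
        lam = 0.0
        for _ in range(iters):
            y = A @ x
            lam_new = x @ y                  # Rayleigh quotient, since ||x|| = 1
            x = y / np.linalg.norm(y)
            if abs(lam_new - lam) < tol * abs(lam_new):
                break
            lam = lam_new
        return lam_new

    # Example: compare magnitudes for a random symmetric matrix.
    rng = np.random.default_rng(4)
    M = rng.standard_normal((50, 50)); A = (M + M.T) / 2
    print(abs(power_method(A)), np.max(np.abs(np.linalg.eigvalsh(A))))
    [/code]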
     
  8. Thanks,

    Since the time I posted my question, I have learned a little more about the problem. My original post was about the generalized eigenvalue problem, and there are vector iteration methods to solve such problems. However, for large matrices it's very expensive to calculate all the eigenvalues; we usually need only a number of the smallest ones. I found the "Subspace Iteration Method" a good one.

    As for the Lanczos method, I'm still confused despite reading dozens of papers and websites: I don't see the relation between the eigenvalues of the small tridiagonal matrix generated by the Lanczos method and the eigenvalues of the original matrix. For example, if the original matrix is 10000 by 10000 and we use the method with m=50, are the 50 eigenvalues approximations to the 50 smallest eigenvalues of the original matrix? The results I get are very different. Could you please explain how to use the Lanczos method?

    Thanks for the help.
     
  9. AlephZero


    You are right, subspace iteration is a good method. It was pretty much the standard method before the implementation details of Lanczos were sorted out (in the 1980s).

    The way normal people "use" Lanczos is to get some code that works, and figure out how to input the matrices to it. You don't need to understand exactly how it works, any more than you need to know how your computer calculates square roots in order to use the sqrt() function in a programming language.

    Lanczos is similar to subspace iteration, in the sense that the subspace spanned by the Lanczos vectors converges to a good approximation of the lowest modes of the eigenproblem. Using the Lanczos vectors as basis vectors, the matrices are tridiagonal (unlike subspace, where they start out as full matrices and become almost diagonal as the iterations converge).

    With subspace, you choose a fixed number of vectors and change all of them at each iteration, but with Lanczos you start with one (random) vector and successively add vectors one at a time. This gives a big speed improvement over subspace, because instead of N units of time to process all N vectors in each iteration, you only need 1 unit of time to create one new vector.

    If you want to find, say, the smallest 50 eigenvalues of a typical FE model with say 50,000 degrees of freedom, subspace should be perfectly adequate for the job, even though Lanczos might run 20 times faster. But if you want 500 eigenvalues of a model with 500,000 DOF, subspace will probably struggle to work at all.
     
  10. It seems I have misunderstood the Lanczos method. To me it looks more like a tridiagonalization algorithm; I don't see the iteration I expected for finding the eigenvalues. For an NxN matrix A, you construct an mxm matrix T (non-iteratively) and find its eigenvalues. If m=N, yes, you get the same eigenvalues as A, but for m<<N, what do we get? And how should we iterate to improve the eigenvalues? If we increase m, what is the relation between the variables of step m+1 and those of step m?
     
  11. AlephZero


    In Lanczos, each iteration adds another Lanczos vector and increases the size of m by 1.

    The eigenvalues of the 1x1, 2x2, 3x3, .... tridiagonal matrices converge to the eigenvalues of the full matrix. IIRC the convergence criterion is based on the eigenvectors of the tridiagonal matrix. You check whether an eigenvector of the size m+1 eigenproblem is (nearly) the same as a vector from the size m eigenproblem, with a zero term appended to it, which means the new Lanczos vector is orthogonal to the eigenvector of the NxN matrix. Because of the way the Lanczos vectors are constructed, all the later Lanczos vectors will also be orthogonal to it.
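    As a rough illustration of that behaviour (my own sketch, with full reorthogonalization to keep the toy example stable; production codes such as ARPACK handle this far more carefully), the extreme eigenvalues of the m x m tridiagonal matrix converge to the extreme eigenvalues of A as m grows:
    [code]
    import numpy as np

    def lanczos_tridiag(A, m, seed=0):
        """Build the m x m Lanczos tridiagonal matrix T for a symmetric matrix A."""
        n = A.shape[0]
        rng = np.random.default_rng(seed)
        Q = np.zeros((n, m))
        alpha = np.zeros(m)
        beta = np.zeros(m)
        q = rng.standard_normal(n)
        q /= np.linalg.norm(q)
        for j in range(m):
            Q[:, j] = q
            w = A @ q
            alpha[j] = q @ w
            # Full reorthogonalization against all previous Lanczos vectors.
            w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:
                m = j + 1
                break
            q = w / beta[j]
        return np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)

    rng = np.random.default_rng(5)
    M = rng.standard_normal((400, 400)); A = (M + M.T) / 2
    for m in (10, 20, 40):
        ritz = np.linalg.eigvalsh(lanczos_tridiag(A, m))
        print(m, ritz[0], ritz[-1])              # extreme Ritz values
    print("exact:", np.linalg.eigvalsh(A)[[0, -1]])
    [/code]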
     
  12. If I directly construct the largest tridiagonal matrix that I can handle, do I lose anything by not constructing the smaller ones?
    When I test the method on a matrix with known eigenvalues, m needs to be quite large to get good approximations of the eigenvalues of the original matrix. By large enough, I mean m > N/2, which is not possible in practice. Subspace iteration takes a small number of vectors but compensates by iterating to get good approximations of the lowest eigenvalues. I still can't recognize the expected iteration in the Lanczos method; the iteration I see is in fact Gram-Schmidt orthogonalization.
     
  13. I think I understand it now. The iteration is in fact in the dimension of T. For convergence, what matters is m itself rather than the ratio m/N. I had tested it with a smaller N and expected to get good approximations with m = N/10.

    Thanks for helping me understand.
     
  14. Let A = P'EP
    Let B = Q'DQ
    where P'P = Q'Q = I

    Now:
    AB= P'EPQ'DQ

    What we would need for the eigenvalues to be a direct product is PQ' = I and P'Q = I. But (PQ')' = QP' = I = P'Q
    or Q(P'P) = P'QP = Q
    or P = I
    which is a different finding than previously indicated.
     
  15. AlephZero,

    I have a question about the subspace iteration method, specifically about the projection of the matrices onto the approximate eigenvectors at iteration j. One can iterate without projecting and still obtain the eigenvectors after some number of iterations. The projection is an important step of the algorithm and is expected to reduce the number of required iterations. However, the updated vectors are not orthogonal (except in the last iteration(s)), so the projection is not an orthogonal projection, while the Rayleigh-Ritz approximation used in the algorithm is based on an orthogonal projection. Perhaps for this reason, when I bypass the approximation and instead only orthogonalize the updated vectors, convergence is achieved in a smaller number of iterations. What do you think I am doing wrong here?

    Thanks


    Edit: The convergence problem was solved. I found (by myself) that we need to reorder the Ritz eigenvectors to make sure each vector is matched with the right one for the next iteration. This seems very important, yet none of the papers mention it.
     
    Last edited: Apr 9, 2012
  16. AlephZero


    I'm glad you solved it, because I didn't entirely understand your terminology!

    The take-home lesson from this is: there is a reason why eigenvalues and eigenvectors are sometimes called eigenpairs. You just discovered why. :smile:
     
  17. Sorry. For your reference and perhaps my future questions: for the generalized eigenvalue problem [itex]KX=MX\Omega^{2}[/itex], the subspace iteration algorithm is as follows:

    1. For the p smallest eigenpairs, start from a set of q = min(p+8, 2p) vectors, [itex]X_{1}[/itex].

    For k = 1, 2, 3, ...

    2. Solve [itex]K\overline{X}_{k+1}=M X_{k}[/itex] for [itex]\overline{X}_{k+1}[/itex]

    3. Find the projection of K and M onto the vectors [itex]\overline{X}_{k+1}[/itex]:

    [itex]\overline{K}_{k+1}= \overline{X}^{T}_{k+1} K \overline{X}_{k+1}[/itex]
    [itex]\overline{M}_{k+1}= \overline{X}^{T}_{k+1} M \overline{X}_{k+1}[/itex]

    4. Solve the eigensystem [itex]\overline{K}_{k+1}Q_{k+1}=\overline{M}_{k+1}Q_{k+1}\Omega^{2}_{k+1}[/itex].
    This eigensystem is much smaller than the original one and can be handled directly.

    5. Find the improved approximation of the eigenvectors of the original eigensystem:
    [itex]X_{k+1}= \overline{X}_{k+1}Q_{k+1}[/itex]

    6. Repeat until convergence is achieved.

    If we jump from step 2 to step 6, convergence is still achieved; steps 3 to 5 are there to improve the algorithm, and my question was about those steps. My previous comment about the ordering was not correct. The ordering becomes important when, after convergence is achieved, we want to pick the p lowest eigenpairs. Also, the convergence criterion checks the p-th eigenvalue, so it is important to order them before the error check at each iteration.
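    A compact sketch of those steps (my own illustration with dense, made-up K and M; a real implementation would keep K and M sparse and reuse a factorization of K in step 2):
    [code]
    import numpy as np
    from scipy.linalg import eigh, lu_factor, lu_solve

    def subspace_iteration(K, M, p, iters=200, tol=1e-10, seed=0):
        """Approximate the p smallest eigenpairs of K X = M X Omega^2 by subspace iteration."""
        n = K.shape[0]
        q = min(p + 8, 2 * p)                  # step 1: subspace size
        rng = np.random.default_rng(seed)
        X = rng.standard_normal((n, q))        # starting vectors X_1
        lu = lu_factor(K)                      # factor K once, reuse every iteration
        w_old = np.zeros(q)
        for _ in range(iters):
            Xbar = lu_solve(lu, M @ X)         # step 2: solve K Xbar = M X
            Kbar = Xbar.T @ K @ Xbar           # step 3: project K and M
            Mbar = Xbar.T @ M @ Xbar
            w, Q = eigh(Kbar, Mbar)            # step 4: small eigenproblem (sorted ascending)
            X = Xbar @ Q                       # step 5: improved eigenvector approximations
            if np.all(np.abs(w[:p] - w_old[:p]) <= tol * np.abs(w[:p])):
                break                          # step 6: converged on the p lowest eigenvalues
            w_old = w
        return w[:p], X[:, :p]

    # Tiny test with made-up symmetric positive definite K and M.
    rng = np.random.default_rng(6)
    n = 200
    A = rng.standard_normal((n, n)); K = A @ A.T + n * np.eye(n)
    B = rng.standard_normal((n, n)); M = B @ B.T + n * np.eye(n)
    w2, X = subspace_iteration(K, M, p=5)
    print(w2)
    print(eigh(K, M, eigvals_only=True)[:5])
    [/code]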
     
  18. AlephZero


    You made a typo in your TeX code in step 1, and I think you also made one in step 2 (the ##K## on the left-hand side seems to be missing).

    If you leave out steps 3 to 5, isn't this just the inverse power iteration method? If you don't orthogonalize the vectors at step 4, won't they all converge to mode 1?
     
  19. Yes, you are right. In fact I replaced those steps with an orthogonalization step, which is quite fast. It then also converges to the smallest eigenvalues but requires a larger number of iterations. The projection is NOT an orthogonal projection, yet it is very effective.
     
  20. CORRECTION:
    It is true that for each corresponding eigenvalue to be a direct product, one must have PQ' = I, implying PQ'Q = Q, or P = Q (as Q'Q = I), which is the prior statement (namely, the eigenvectors must be the same) for each eigenvalue k. However, it is clear that:

    Det(AB) = Det(A)*Det(B) = Det(P'EP)*Det(Q'DQ) = Det(E)*Det(D), since Det(P')Det(P) = Det(Q')Det(Q) = 1.

    So, in total, the product of all the eigenvalues of AB equals the product of all the eigenvalues of A and B together, from which one may be able to establish search bounds for individual eigenvalues.

    Note, if the Kth eigenvalue is negative, I claim that by multiplying all entries in the Kth (or possibly some other) row by minus one, the sign of that negative eigenvalue becomes positive (or it remains negative and another eigenvalue becomes negative), since this row modification is known to change only the sign of the determinant (the absolute values of all the eigenvalues can, in general, change under the row negation, but the magnitude of their product stays the same). If the matrix can be diagonalized, this sign change can occur only through a change of sign in one (or an odd number) of the eigenvalues. Interestingly, in one matrix product instance, even without any sign change operations and with both A and B having positive eigenvalues, the product matrix AB had an even number of negative eigenvalues! This showed up without direct knowledge of the eigenvalues, through the characteristic polynomial, where one coefficient gives the negative of the trace of the eigenvalue matrix, which noticeably declined in absolute value.

    Note, if all the eigenvalues are positive and no smaller than one, the log of the determinant serves as an upper bound on the log of the largest eigenvalue. If we know the largest eigenvalue (via the power method), then (log Det - log of the largest eigenvalue) is an upper bound on the log of the second largest eigenvalue.
     
    Last edited: Apr 16, 2012
  21. Is there a relation between the eigenvalues/eigenvectors of [itex]L^{-1}AL^{-T}[/itex] and those of [itex]A[/itex]?
     